Corante


Many-to-Many

February 28, 2008

My book. Let me Amazon show you it.

Posted by Clay Shirky

I’m delighted to say that online bookstores are shipping copies of Here Comes Everybody today, and that it has gotten several terrific notices in the blogosphere:

Cory Doctorow:
Clay’s book makes sense of the way that groups are using the Internet. Really good sense. In a treatise that spans all manner of social activity from vigilantism to terrorism, from Flickr to Howard Dean, from blogs to newspapers, Clay unpicks what has made some “social” Internet media into something utterly transformative, while other attempts have fizzled or fallen to griefers and vandals. Clay picks perfect anecdotes to vividly illustrate his points, then shows the larger truth behind them.
Russell Davies:
Here Comes Everybody goes beyond wild-eyed webby boosterism and points out what seems to be different about web-based communities and organisation and why it’s different; the good and the bad. With useful and interesting examples, good stories and sticky theories. Very good stuff.
Eric Nehrlich:
These newly possible activities are moving us towards the collapse of social structures created by technology limitations. Shirky compares this process to how the invention of the printing press impacted scribes. Suddenly, their expertise in reading and writing went from essential to meaningless. Shirky suggests that those associated with controlling the means to media production are headed for a similar fall.
Philip Young:
Shirky has a piercingly sharp eye for spotting the illuminating case studies - some familiar, some new - and using them to energise wider themes. His basic thesis is simple: “Everywhere you look groups of people are coming together to share with one another, work together, take some kind of public action.” The difference is that today, unlike even ten years ago, technological change means such groups can form and act in new and powerful ways. Drawing on a wide range of examples Shirky teases out remarkable contrasts with what has been the expected logic, and shows quite how quickly the dynamics of reputation and relationships have changed.

Comments (30) + TrackBacks (0) | Category: social software

February 7, 2008

My book. Let me show you it.

Posted by Clay Shirky

I’ve written a book, called Here Comes Everybody: The Power of Organizing Without Organizations, which is coming out in a month. It’s coming out first in the US and UK (and in translation later this year in Holland, Portugal and Brazil, Korea, and China.)

Here Comes Everybody is about why new social tools matter for society. It is a non-techie book for the general reader (the letters TCP IP appear nowhere in that order). It is also post-utopian (I assume that the coming changes are both good and bad) and written from the point of view I have adopted from my students, namely that the internet is now boring, and the key question is what we are going to do with it.

One of the great frustrations of writing a book as opposed to blogging is seeing a new story that would have been a perfect illustration, or deepened an argument, and not being able to add it. To remedy that, I’ve just launched a new blog, at HereComesEverybody.org, to continue writing about the effects of social tools.

Update: Wow. What a great response — we’ve given out all the copies we can, but many thanks for all the interest.

I’ve convinced the good folks at Penguin Press to let me give a few review copies away to people in the kinds of communities the book is about. I’ve got half a dozen copies to give to anyone reading this, with the only quid pro quo being that you blog your reactions to it, good bad or indifferent, some time in the next month or so. Drop me a line if you would like a review copy — clay@shirky.com.

Comments (117) + TrackBacks (0) | Category: social software

August 3, 2007

User-generated neologism: "Indigenous content"

Posted by Clay Shirky

My class in the fall is called “User-generated”, and it looks, among other things, at the tension surrounding that phrase, and in particular its existence as an external and anxiety-ridden label, by traditional media companies, for the way that advertising can be put next to material not created by Trained Professionals™.

All right-thinking individuals (by which I basically mean Anil Dash and Heather Champ) hate that phrase. Now my friend Kio Stark* has come up with what seems like a nice, and more anthropologically correct version: Indigenous Content (which is to say “Created by the natives for themselves.”)

* ObKio: Best. Tagset. Evar.

Comments (12) + TrackBacks (0) | Category:

August 1, 2007

New Freedom Destroys Old Culture: A response to Nick Carr

Posted by Clay Shirky

I have never understood Nick Carr’s objections to the cultural effects of the internet. He’s much too smart to lump in with nay-sayers like Keen, and when he talks about the effects of the net on business, he sounds more optimistic, even factoring in the wrenching transition, so why aren’t the cultural effects similar cause for optimism, even accepting the wrenching transition in those domains as well?

I think I finally understood the dichotomy between his reading of business and culture after reading Long Player, his piece on metadata and what he calls “the myth of liberation”, a post spurred in turn by David Weinberger’s Everything Is Miscellaneous.

Carr discusses the ways in which the long-playing album was both conceived of and executed as an aesthetic unit, its length determined by a desire to hold most of the classical canon on a single record, and its possibilities exploited by musicians who created for the form — who created albums, in other words, rather than mere bags of songs. He illustrates this with an exegesis of the Rolling Stones’ Exile on Main Street, showing how the overall construction makes that album itself a work of art.

Carr uses this point to take on what he calls the myth of liberation: “This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world - a world of profound physical and economic constraints - from which the web is now liberating us.” Carr observes, correctly, that the LP was what it was in part for aesthetic reasons, and the album, as a unit, became what it became in the hands of people who knew how to use it.

That is not, however, the neat story Carr wants it to be, and the messiness of the rest of the story is key, I think, to the anxiety about the effects on culture, his and others’.

The LP was an aesthetic unit, but one designed within strong technical constraints. When Edward Wallerstein of Columbia Records was trying to figure out how long the long-playing format should be, he settled on 17 minutes a side as something that would “…enable about 90% of all classical music to be put on two sides of a record.” But why only 90%? Because 100% would be impossible — the rest of the canon was too long for the technology of the day. And why should you have to flip the record in the middle? Why not have it play straight through? Impossible again.

Contra Carr, in other words, the pre-digital world was a world of profound physical and economic constraints. The LP could hold 34 minutes of music, which was a bigger number of minutes than some possibilities (33 possibilities, to be precise), but smaller than an infinite number of others. The album as a form provided modest freedom embedded in serious constraints, and the people who worked well with the form accepted those constraints as a way of getting at those freedoms. And now the constraints are gone; there is no necessary link between an amount of music and its playback vehicle.

And what Carr dislikes, I think, is evidence that the freedoms of the album were only as valuable as they were in the context of the constraints. If Exile on Main Street was as good an idea as he thinks it was, it would survive the removal of those constraints.

And it hasn’t.

Here is the iTunes snapshot of Exile, sorted by popularity:

While we can’t get absolute numbers from this, we can get relative ones — many more people want to listen to Tumbling Dice or Happy than Ventilator Blues or Turd on the Run, even though iTunes makes it cheaper per song to buy the whole album. Even with a financial inducement to preserve the album form, the users still say no thanks.

The only way to support the view that Exile is best listened to as an album, in other words, is to dismiss the actual preferences of most of the people who like the Rolling Stones. Carr sets about this task with gusto:
Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night - or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.
Only a fool. If you are one of those people who has, say, Happy on your iPod (as I do), then you are a fool (though you have lots of company). And of course this foolishness extends to the recording industry, and to the Stones themselves, who went and put Tumbling Dice on a Greatest Hits collection. (One can only imagine how Carr feels about Greatest Hits collections.)

I think Weinberger’s got it right about liberation, even taking at face value the cartoonish version Carr offers. Prior to unlimited perfect copyability, media was defined by profound physical and economic constraints, and now it’s not. Fewer constraints and better matching of supply and demand are good for business, because business is not concerned with historical continuity. Fewer constraints and better matching of supply and demand are bad for current culture, because culture continually mistakes current exigencies for eternal verities.

This isn’t just Carr of course. As people come to realize that freedom destroys old forms just as surely as it creates new ones, the lament for the long-lost present is going up everywhere. As another example, Sven Birkerts, the literary critic, has a post in the Boston Globe, Lost in the blogosphere, that is almost indescribably self-involved. His two complaints are that newspapers are reducing the space allotted to literary criticism, and that too many people on the Web are writing about books. In other words, literary criticism, as practiced during Birkerts’ lifetime, was just right, and having either fewer or more writers is equally lamentable.

In order that the “Life was better when I was younger” flavor of his complaint not become too obvious, Birkerts frames the changing landscape not as a personal annoyance but as A Threat To Culture Itself. As he puts it “…what we have been calling “culture” at least since the Enlightenment — is the emergent maturity that constrains unbounded freedom in the interest of mattering.”

This is silly. The constraints of print were not a product of “emergent maturity.” They were accidents of physical production. Newspapers published book reviews because their customers read books and because publishers took out ads, the same reason they published pieces about cars or food or vacations. Some newspapers hired critics because they could afford to, others didn’t because they couldn’t. Ordinary citizens didn’t write about books in a global medium because no such medium existed. None of this was an attempt to “constrain unbounded freedom” because there was no such freedom to constrain; it was just how things were back then.

Genres are always created in part by limitations. Albums are as long as they are because Wallerstein picked a length his engineers could deliver. Novels are as long as they are because Aldus Manutius’s italic letters and octavo bookbinding could hold about that many words. The album is already a marginal form, and the novel will probably become one in the next fifty years, but that also happened to the sonnet and the madrigal.

I’m old enough to remember the dwindling world, but it never meant enough to me to make me a nostalgist. In my students’ work I see hints of a culture that takes both the new freedoms and the new constraints for granted, but the fullest expression of that world will probably come after I’m dead. But despite living in transitional times, I’m not willing to pretend that the erosion of my worldview is a crisis for culture itself. It’s just how things are right now.

Carr fails to note that the LP was created for classical music, but used by rock and roll bands. Creators work within whatever constraints exist at the time they are creating, and when the old constraints give way, new forms arise while old ones dwindle. Some work from the older forms will survive — Shakespeare’s 116th sonnet remains a masterwork — while other work will wane — Exile as an album-length experience is a fading memory. This kind of transition isn’t a threat to Culture Itself, or even much of a tragedy, and we should resist attempts to preserve old constraints in order to defend old forms.

Comments (49) + TrackBacks (0) | Category: social software

July 20, 2007

Spolsky on Blog Comments: Scale matters

Posted by Clay Shirky

Joel Spolsky approvingly quotes Dave Winer on the subject of blog-comments:

The cool thing about blogs is that while they may be quiet, and it may be hard to find what you’re looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed. And if you know history, the most important ideas often are the unpopular ones…. That’s what’s important about blogs, not that people can comment on your ideas. As long as they can start their own blog, there will be no shortage of places to comment.

Joel then adds his own observations:

When a blog allows comments right below the writer’s post, what you get is a bunch of interesting ideas, carefully constructed, followed by a long spew of noise, filth, and anonymous rubbish that nobody … nobody … would say out loud if they had to take ownership of their words.

This can be true, all true, as any casual read of blog comments can attest. BoingBoing turned off their comments years ago, because they’d long since passed the scale where polite conversation was possible. The Tragedy of the Conversational Commons becomes too persistently tempting when an audience grows large. At BoingBoing scale, John Gabriel’s Greater Internet Fuckwad Theory cannot be repealed.

But the uselessness of comments is not the universal truth that Dave or (fixed, per Dave’s comment below) Joel makes it out to be, for two reasons. First, posting and conversation are different kinds of things — same keyboard, same text box, same web page, different modes of expression. Second, the sites that suffer most from anonymous postings and drivel are the ones operating at large scale.

If you are operating below that scale, comments can be quite good, in a way not replicable in any “everyone post to their own blog” scenario. To take but three recent examples, take a look at the comments on my post on Michael Gorman, on danah’s post at Apophenia on fame, narcissism and MySpace and on Kieran Healy’s biological thought experiment on Crooked Timber.

Those three threads contain a hundred or so comments, including some distinctly low-signal bouquets and brickbats. But there is also spirited disputation and emendation, alternate points of view, linky goodness, and a conversational sharpening of the argument on all sides, in a way that doesn’t happen blog to blog. This, I think, is the missing element in Dave and Joel’s points — two blog posts do not make a conversation. The conversation that can be attached to a post is different in style and content, and in intent and effect, than the post itself.

I have long thought that the ‘freedom of speech means no filtering’ argument is dumb where blogs are concerned — it is the blogger’s space, and he or she should feel free to delete, disemvowel, or otherwise dispose of material, for any reason, or no reason. But we’ve long since passed the point where what happens on a blog is mainly influenced by what the software does — the question to ask about comments is not whether they are available, but how a community uses them. The value in blogs as communities of practice is considerable, and it’s a mistake to write off comment threads on those kinds of blogs just because, in other environments, comments are lame.

Comments (20) + TrackBacks (0) | Category: social software

July 10, 2007

"The internet's output is data, but its product is freedom"

Posted by Clay Shirky

I said that in Andrew Keen: Rescuing ‘Luddite’ from the Luddites, to which Phil, one of the commenters, replied

There are assertions of verifiable fact and then there are invocations of shared values. Don’t mix them up.

I meant this as an assertion of fact, but re-reading it after Tom’s feedback, it comes off as simple flag-waving, since I’d compressed the technical part of the argument out of existence. So here it is again, in slightly longer form:

The internet’s essential operation is to encode and transmit data from sender to receiver. In 1969, this was not a new capability; we’d had networks that did this since the telegraph. On the day of the internet’s launch, we had a phone network that was nearly a hundred years old, alongside more specialized networks for things like telexes and wire-services for photographs.

Thus the basics of what the internet did (and does) aren’t enough to explain its spread; what it is for has to be accounted for by looking at the difference between it and the other data-transfer networks of the day.

The principal difference between older networks and the internet (ARPAnet, at its birth) is the end-to-end principle, which says, roughly, “The best way to design a network is to allow the sender and receiver to decide what the data means, without asking the intervening network to interpret the data.” The original expression of this idea is from the Saltzer, Reed, and Clark paper End-to-End Arguments in System Design; the same argument is explained in other terms in Isenberg’s Stupid Network and Searls and Weinberger’s World of Ends.

What the internet is for, in other words, what made it worth adopting in a world already well provisioned with other networks, was that the sender and receiver didn’t have to ask for either help or permission before inventing a new kind of message. The core virtue of the internet was a huge increase in the technical freedom of all of its participating nodes, a freedom that has been translated into productive and intellectual freedoms for its users.
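The end-to-end idea is easier to see in a toy sketch than in prose. Here is a deliberately minimal illustration (my own, not anything from the papers above): the “network” function only moves opaque bytes, while the endpoints alone decide what those bytes mean, so inventing a new kind of message requires no change to the network at all.

```python
import json

def network_deliver(payload: bytes) -> bytes:
    """The 'network': moves opaque bytes from sender to receiver.
    It neither inspects nor interprets the payload."""
    return payload

# The sender and receiver agree on a message format between themselves;
# nothing in network_deliver changes when they invent a new message type.
def send(message: dict) -> bytes:
    return network_deliver(json.dumps(message).encode("utf-8"))

def receive(payload: bytes) -> dict:
    return json.loads(payload.decode("utf-8"))

# A brand-new kind of message, deployed without asking the network's permission.
received = receive(send({"kind": "photo-comment", "text": "nice shot"}))
print(received["kind"])
```

In a phone-style “smart network,” the equivalent of `network_deliver` would contain logic about what messages are allowed to mean, and every new use would require convincing the network’s operator first.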

As Scott Bradner put it, the Internet means you don’t have to convince anyone else that something is a good idea before trying it. The upshot is that the internet’s output is data, but its product is freedom.

Comments (7) + TrackBacks (0) | Category: social software

July 9, 2007

Andrew Keen: Rescuing 'Luddite' from the Luddites

Posted by Clay Shirky

Last week, while in a conversation with Andrew Keen on the radio show To The Point, he suggested that he was not opposed to the technology of the internet, but rather to how it was being used.

This reminded me of Michael Gorman’s insistence that digital tools are fine, so long as they are shaped to replicate the social (and particularly academic) institutions that have grown up around paper.

There is a similar strand in these two arguments, namely that technology is one thing, but the way it is used is another, and that the two can and should be separated. I think this view is in the main wrong, even Luddite, but to make such an accusation requires a definition of Luddite considerably more grounded than ‘anti-technology’ (a vacuous notion — no one who wears shoes can reasonably be called ‘anti-technology.’) Both Keen and Gorman have said they are not opposed to digital technology. I believe them when they say this, but I still think their views are Luddite, by historical analogy with the real Luddite movement of the early 1800s.

What follows is a long detour into the Luddite rebellion, followed by a reply to Keen about the inseparability of the internet from its basic effects.

Infernal Machines

The historical record is relatively clear. In March of 1811, a group of weavers in Nottinghamshire began destroying mechanical looms. This was not the first such riot — in the late 1700s, when Parliament refused to guarantee the weavers’ control of the supply of woven goods, workers in Nottingham destroyed looms as well. The Luddite rebellion, though, was unusual for several reasons: its breadth and sustained character, taking place in many industrializing towns at once; its having a nominal leader, going by the name Ned Ludd, General Ludd, or King Ludd (the pseudonym itself a reference to an apocryphal figure from an earlier loom-breaking riot in the late 1700s); and its written documentation of grievances and rationale. The rebellion, which lasted two years, was ultimately put down by force, and was over in 1813.

Over the last two decades, several historians have re-examined the record of the Luddite movement, and have attempted to replace the simplistic view of Luddites as being opposed to technological change with a more nuanced accounting of their motivations and actions. The common thread of the analysis is that the Luddites didn’t object to mechanized wide-frame looms per se, they objected to the price collapse of woven goods caused by the way industrialists were using the looms. Though the targets of the Luddite attacks were the looms themselves, their concerns and goals were not about technology but about economics.

I believe that the nuanced view is wrong, and that the simpler view of Luddites as counter-revolutionaries is in fact the correct one. The romantic view of Luddites as industrial-age Robin Hoods, concerned not to halt progress but to embrace justice, runs aground on both the written record, in which the Luddites outline a program that is against any technology that increases productivity, and on their actions, which were not anti-capitalist but anti-consumer. It also assumes that there was some coherent distinction between technological and economic effects of the looms; there was none.

A Technology is For Whatever Happens When You Use It

The idea that the Luddites were targeting economic rather than technological change is a category fallacy, where the use of two discrete labels (technology and economics, in this case) is wrongly thought to demonstrate two discrete aspects of the thing labeled (here wide-frame looms). This separation does not exist in this case; the technological effects of the looms were economic. This is because, at the moment of its arrival, what a technology does and what it is for are different.

What any given technology does is fairly obvious: rifles fire bullets, pencils make marks, looms weave cloth, and so on. What a technology is for, on the other hand, what leads people to adopt it, is whatever new thing becomes possible on the day of its arrival. The Winchester repeating rifle was not for firing bullets — that capability already existed. It was for decreasing the wait between bullets. Similarly, pencils were not for writing but for portability, and so on.

And the wide-frame looms, target of the Luddites’ destructive forays? What were they for? They weren’t for making cloth — humankind was making cloth long before looms arrived. They weren’t for making better cloth — in 1811, industrial cloth was inferior to cloth spun by the weavers. Mechanical looms were for making cheap cloth, lots and lots of cheap cloth. The output of a mechanical loom was cloth, but the product of such a loom was savings.

The wide-frame loom was a cost-lowering machine, and as such, it threatened the old inefficiencies on which the Luddites’ revenues depended. Their revolt had the goal of preventing those savings from being passed along to the customer. One of their demands was that Parliament outlaw “all Machinery hurtful to Commonality” — all machines that worked efficiently enough to lower prices.

Perhaps more tellingly, and against recent fables of Luddism as a principled anti-capitalist movement, they refrained from breaking the looms of industrial weavers who didn’t lower their prices. What the Luddites were rioting in favor of was price gouging; they didn’t care how much a wide-frame loom might save in production costs, so long as none of those savings were passed on to their fellow citizens.

Their common cause was not with citizens and against industrialists, it was against citizens and with those industrialists who joined them in a cartel. The effect of their campaign, had it succeeded, would have been to raise, rather than lower, the profits of the wide-frame operators, while producing no benefit for those consumers who used cloth in their daily lives, which is to say the entire population of England. (Tellingly, none of the “Robin Hood” versions of Luddite history make any mention of the effect of high prices on the buyers of cloth, just on the sellers.)

Back to Keen

A Luddite argument is one in which some broadly useful technology is opposed on the grounds that it will discomfit the people who benefit from the inefficiency the technology destroys. An argument is especially Luddite if the discomfort of the newly challenged professionals is presented as a general social crisis, rather than as trouble for a special interest. (“How will we know what to listen to without record store clerks!”) When the music industry suggests that the prices of music should continue to be inflated, to preserve the industry as we have known it, that is a Luddite argument, as are the suggestion that Google pay reparations to newspapers and the phone companies’ opposition to VoIP, which undermines their ability to profit from older ways of making phone calls.

This is what makes Keen’s argument a Luddite one — he doesn’t oppose all uses of technology, just ones that destroy older ways of doing things. In his view, the internet does not need to undermine the primacy of the copy as the anchor for both filtering and profitability.

But Keen is wrong. What the internet does is move data from point A to B, but what it is for is empowerment. Using the internet without putting new capabilities into the hands of its users (who are, by definition, amateurs in most things they can now do) would be like using a mechanical loom and not lowering the cost of buying a coat — possible, but utterly beside the point.

The internet’s output is data, but its product is freedom, lots and lots of freedom. Freedom of speech, freedom of the press, freedom of association, the freedom of an unprecedented number of people to say absolutely anything they like at any time, with the reasonable expectation that those utterances will be globally available, broadly discoverable at no cost, and preserved for far longer than most utterances are, and possibly forever.

Keen is right in understanding that this massive supply-side shock to freedom will destabilize and in some cases destroy a number of older social institutions. He is wrong in believing that there is some third way — let’s deploy the internet, but not use it to increase the freedom of amateurs to do as they like.

It is possible to want a society in which new technology doesn’t demolish traditional ways of doing things. It is not possible to hold this view without being a Luddite, however. That view — incumbents should wield veto-power over adoption of tools they dislike, no matter the positive effects for the citizenry — is the core of Luddism, then and now.

Comments (26) + TrackBacks (0) | Category: social software

June 20, 2007

Gorman, redux: The Siren Song of the Internet

Posted by Clay Shirky

Michael Gorman has his next post up at the Britannica blog: The Siren Song of the Internet. My reply is also up, and posted below. The themes of the historical lessons of Luddism are also being discussed in the comments to last week’s Gorman response, Old Revolutions Good, New Revolutions Bad.

Siren Song of the Internet contains a curious omission and a basic misunderstanding. The omission is part of his defense of the Luddites; the misunderstanding is about the value of paper and the nature of e-books.

The omission comes early: Gorman cavils at being called a Luddite, though he then embraces the label, suggesting that they “…had legitimate grievances and that their lives were adversely affected by the mechanization that led to the Industrial Revolution.” No one using the term Luddite disputes the effects on pre-industrial weavers. This is the general case — any technology that fixes a problem (in this case the high cost of homespun goods) threatens the people who profit from the previous inefficiency. However, Gorman omits mentioning the Luddite response: an attempt to halt the spread of mechanical looms which, though beneficial to the general populace, threatened the livelihoods of King Ludd’s band.

By labeling the Luddite program legitimate, Gorman seems to be suggesting that incumbents are right to expect veto power over technological change. Here his stand in favor of printed matter is inconsistent, since printing was itself enormously disruptive, and many people wanted veto power over its spread as well. Indeed, one of the great Luddites of history (if we can apply the label anachronistically) was Johannes Trithemius, who argued in the late 1400s that the printing revolution be contained, in order to shield scribes from adverse effects. This is the same argument Gorman is making, in defense of the very tools Trithemius opposed. His attempt to rescue Luddism looks less like a principled stand than special pleading: the printing press was good, no matter what happened to the scribes, but let’s not let that sort of thing happen to my tribe.

Gorman then defends traditional publishing methods, and ends up conflating several separate concepts into one false conclusion, saying “To think that digitization is the answer to all that ails the world is to ignore the uncomfortable fact that most people, young and old, prefer to interact with recorded knowledge and literature in the form of print on paper.”

Dispensing with the obvious straw man of “all that ails the world”, a claim no one has made, we are presented with a fact that is supposed to be uncomfortable — it’s good to read on paper. Well duh, as the kids say; there’s nothing uncomfortable about that. Paper is obviously superior to the screen for both contrast and resolution; Hewlett-Packard would be about half the size it is today if that were not true. But how did we get to talking about paper when we were talking about knowledge a moment ago?

Gorman is relying on metonymy. When he notes a preference for reading on paper he means a preference for traditional printed forms such as books and journals, but this is simply wrong. The uncomfortable fact is that the advantages of paper have become decoupled from the advantages of publishing; a big part of preference for reading on paper is expressed by hitting the print button. As we know from Lyman and Varian’s “How Much Information” study, “…the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.”

We see these effects everywhere: well over 90% of new information produced in any year is stored electronically. Use of the physical holdings of libraries is falling, while the use of electronic resources is rising. Scholarly monographs, contra Gorman, are increasingly distributed electronically. Even the physical form of newspapers is shrinking in response to shrinking demand, and so on.

The belief that a preference for paper leads to a preference for traditional publishing is a simple misunderstanding, demonstrated by his introduction of the failed e-book program as evidence that the current revolution is limited to “hobbyists and premature adopters.” The problem with e-books is that they are not radical enough: they dispense with the best aspect of books (paper as a display medium) while simultaneously aiming to disable the best aspects of electronic data (sharability, copyability, searchability, editability). The failure of e-books is in fact bad news for Gorman’s thesis, as it demonstrates yet again that users have an overwhelming preference for the full range of digital advantages, and are not content with digital tools that are designed to be inefficient in the ways that printed matter is inefficient.

If we gathered every bit of output from traditional publishers, we could line them up in order of vulnerability to digital evanescence. Reference works were the first to go — phone books, dictionaries, and thesauri have largely gone digital; the encyclopedia is going, as are scholarly journals. Last to go will be novels — it will be some time before anyone reads One Hundred Years of Solitude in any format other than a traditionally printed book. Some time, however, is not forever. The old institutions, and especially publishers and libraries, have been forced to use paper not just for display, for which it is well suited, but also for storage, transport, and categorization, things for which paper is completely terrible. We are now able to recover from those disadvantages, though only by transforming the institutions organized around the older assumptions.

The ideal situation, which we are groping our way towards, will be to have all written material, wherever it lies on the ‘information to knowledge’ continuum, in digital form, right up to the moment a reader wants it. At that point, the advantages of paper can be made manifest, either by printing on demand, or by using a display that matches paper’s superior readability. Many of the traditional managers of books and journals will suffer from this change, though it will benefit society as a whole. The question Gorman pointedly asks, by invoking Ned Ludd and his company, is whether we want that change to be in the hands of people who would be happy to discomfit society as a whole in order to preserve the inefficiencies that have defined their world.

Comments (6) + TrackBacks (0) | Category: social software

June 18, 2007

Mis-understanding Fred Wilson's 'Age and Entrepreneurship' argumentEmail This EntryPrint This Article

Posted by Clay Shirky

Technorati reports approximately one buptillion disputatious replies to Fred Wilson’s observations about age and tech entrepreneurship. These come in two basic forms: examples from industry (“Youth doesn’t matter because Steve Jobs is still going strong”) and examples from personal experience (“Youth doesn’t matter because my grandmother invented DoS attacks when she was 87!”)

These arguments, not to put too fine a point on it, are stupid.

Fred is not talking about intelligence or even tech chops. He is talking about a specific kind of tech entrepreneurialism: the likelihood of coming up with an idea that is so powerful it will shift the tech landscape. He is then asserting that, statistically, young people have an edge in their ability to come up with these kinds of ideas.

The first counter-argument, whose commonest explanandum is Steve Jobs’ current success, not only fails, it actually supports Fred’s point. Back In The Day, Jobs’ best decision was to work with Wozniak, and together they brought out usable versions of the GUI and the mouse. These changes were so radical that they didn’t catch on until they were copied in more pedestrian and backwards-compatible forms. And now? What is Apple doing with a seasoned Jobs at the helm? They are polishing the GUI to a fare-thee-well. They are making Diamond’s idea of an MP3 player work better than anyone imagined it could. They are making (brainstorm alert!) a phone! Woz was a mere tinkerer in light of such revolutionary moves, no?

As a Mac user, I love what Jobs is doing for the company, but no way am I willing to confuse the Polecat release of the current OS with what Lisa tried and the Mac achieved in the early 80s.

Then there is the personal attestation to brilliant ideas, coming from the outraged older set. I guess I should feel some sort of pride that my fellow proto-geriatrics are still in there fighting, but instead, I think they kind of prove Fred’s point by demonstrating that they’ve either forgotten how to read, or that they can’t do math so good anymore.

Fred’s basic observation is statistical: In the domain E, with the actors divided into two groups Y and O, there are more Y in E than you’d expect from a random distribution, and many more if you thought there should be an advantage to being a member of O.

This observation cannot be falsified by a single counter-example. Given that Fred’s argument is about the odds of success (he is a VC, after all), the fact that you remember the words to the Pina Colada song and you recently did something useful is meaningless. Fred’s question is about how many grizzled veterans are founding world-changing tech firms, not whether any are.

There are lots of possible counter-arguments to what Fred is saying (and I am echoing): Maybe so many young people start companies that the observation suffers from denominator bias. Or: young people raise money from VCs in disproportionate numbers because they don’t have the contacts to raise money in other ways. Or: the conservatism of the old is social, not mental, and concerns for family and quality of life turn otherwise undiminished imaginations to lower-risk goals. And so on.

It would be good if someone made those arguments — the thesis is provocative and it matters, so it should be scrutinized and, if false, destroyed. But Fred has said something important, something with both internal evidence (the list of successful recent entrepreneurs) and external existence proofs (mathematicians’ careers are also statistically front-weighted, so the pattern isn’t obviously absurd). Given this, the argument cannot simply be whined away, robbing many of the current respondents of the weapon with which they are evidently most adept.
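Fred’s claim, restated above, is about distributions, not individuals, and a quick simulation makes the logic concrete. The success rates and group sizes below are invented purely for illustration — they are not Fred’s numbers or anyone’s real data:

```python
import random

random.seed(42)

# Hypothetical, made-up rates purely for illustration: suppose
# 'young' founders hit big at 2% and 'old' founders at 0.5%.
def simulate(n_young=10_000, n_old=10_000, p_young=0.02, p_old=0.005):
    young_hits = sum(random.random() < p_young for _ in range(n_young))
    old_hits = sum(random.random() < p_old for _ in range(n_old))
    return young_hits, old_hits

young_hits, old_hits = simulate()
print(young_hits, old_hits)

# Older founders still produce successes, so counter-examples are
# guaranteed to exist -- yet the distribution still skews young,
# which is the actual claim. One anecdote falsifies nothing.
assert old_hits > 0
assert young_hits > old_hits
```

The point of the sketch is exactly the point above: “my grandmother invented DoS attacks at 87” shows up as a nonzero `old_hits`, and says nothing about the ratio.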

Comments (10) + TrackBacks (0) | Category:

June 16, 2007

The Future Belongs to Those Who Take The Present For Granted: A return to Fred Wilson's "age question"Email This EntryPrint This Article

Posted by Clay Shirky

My friend Fred Wilson had a pair of posts a few weeks back, the first arguing that youth was, in and of itself, an advantage for tech entrepreneurs, and the second waffling on that question with the idea that age is a mindset.

I think Fred got it right the first time, and I said so at the time, in The (Bayesian) Advantages of Youth:

I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that newspapers are where you get your political news and how you look for a job. I know that if you need to take a trip, you visit a travel agent. In the last 15 years or so, I’ve had to unlearn those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.
Today, Fred seems to have returned to his original (and in my view correct) idea in The Age Question (continued):
It is incredibly hard to think of new paradigms when you’ve grown up reading the newspaper every morning. When you turn to TV for your entertainment. When you read magazines on the train home from work. But we have a generation coming of age right now that has never relied on newspapers, TV, and magazines for their information and entertainment.[…] The Internet is their medium and they are showing us how it needs to be used.

This is exactly right.

I think the real issue, of which age is a predictor, is this: the future belongs to those who take the present for granted. I had this thought while talking to Robert Cook of Metaweb, who are making Freebase. They need structured metadata, lots of structured metadata, and one of the places they are getting it is from Wikipedia, by spidering the bio boxes (among other things) for things like birthplace and age of people listed in Freebase. While Andrew Keen is trying to get a conversation going on whether Wikipedia is a good idea, Metaweb takes it for granted as a stable part of the environment, which lets them see past this hurdle to the next one.

This is not to handicap the success of Freebase itself — it takes a lot more than taking the present for granted to make a successful tool. But one easy way to fail is to assume that the past is more solid than it is, and the present more contingent. And the people least likely to make this mistake — the people best able to take the present for granted — are young people, for whom knowing what the world is really like is as easy as waking up in the morning, since this is the only world they’ve ever known.
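The bio-box spidering described above can be sketched in a few lines. This is a toy, not Metaweb’s actual pipeline, and real infoboxes are far messier (nested templates, references, date templates); but the basic shape — wikitext stores fields as `| key = value` lines that a pattern can pull out — looks something like this:

```python
import re

# Toy wikitext in the shape of a Wikipedia 'bio box' (infobox).
SAMPLE_WIKITEXT = """{{Infobox person
| name        = Ada Lovelace
| birth_place = London, England
| birth_date  = 10 December 1815
}}"""

def parse_infobox(wikitext):
    """Return a dict of infobox fields found in raw wikitext."""
    fields = {}
    # Each field sits on its own line: '| key = value'
    for key, value in re.findall(r"^\|\s*(\w+)\s*=\s*(.+)$", wikitext, re.M):
        fields[key] = value.strip()
    return fields

info = parse_infobox(SAMPLE_WIKITEXT)
print(info["birth_place"])  # → London, England
```

The interesting part isn’t the regex; it’s that treating Wikipedia as a stable, machine-readable substrate is only possible once you take its existence for granted.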

Some things improve with age — I wouldn’t re-live my 20s if you paid me — but high-leverage ignorance isn’t one of them.

Comments (20) + TrackBacks (0) | Category: social software

June 13, 2007

Old Revolutions Good, New Revolutions Bad: A Response to GormanEmail This EntryPrint This Article

Posted by Clay Shirky

Encyclopedia Britannica has started a Web 2.0 Forum, where they are hosting a conversation around a set of posts by Michael Gorman. The first post, in two parts, is titled Web 2.0: The Sleep of Reason Brings Forth Monsters, and is a defense of the print culture against alteration by digital technologies. This is my response, which will be going up on the Britannica site later this week.

Web 2.0: The Sleep of Reason Brings Forth Monsters starts with a broad list of complaints against the current culture, from biblical literalism to interest in alternatives to Western medicine.

The life of the mind in the age of Web 2.0 suffers, in many ways, from an increase in credulity and an associated flight from expertise. Bloggers are called “citizen journalists”; alternatives to Western medicine are increasingly popular, though we can thank our stars there is no discernable “citizen surgeon” movement; millions of Americans are believers in Biblical inerrancy—the belief that every word in the Bible is both true and the literal word of God, something that, among other things, pits faith against carbon dating; and, scientific truths on such matters as medical research, accepted by all mainstream scientists, are rejected by substantial numbers of citizens and many in politics. Cartoonist Garry Trudeau’s Dr. Nathan Null, “a White House Situational Science Adviser,” tells us that: “Situational science is about respecting both sides of a scientific argument, not just the one supported by facts.”

This is meant to set the argument against a big canvas of social change, but the list is so at odds with the historical record as to be self-defeating.

The percentage of the US population believing in the literal truth of the Bible has remained relatively constant since the 1980s, while the percentage listing themselves as having “no religion” has grown. Interest in alternative medicine dates to at least the patent medicines of the 19th century; the biggest recent boost for that movement came under Reagan, when health supplements, soi-disant, were exempted from FDA scrutiny. Trudeau’s welcome critique of the White House’s assault on reason targets a political minority, not the internet-using population, and so on. If you didn’t know that this litany appeared under the heading Web 2.0, you might suspect Gorman’s target was anti-intellectualism during Republican administrations.

Even the part of the list specific to new technology gets it wrong. Bloggers aren’t called citizen-journalists; bloggers are called bloggers. Citizen-journalist describes people like Alisara Chirapongse, the Thai student who posted photos and observations of the recent coup during a press blackout. If Gorman can think of a better label for times when citizens operate as journalists, he hasn’t shared it with us.

Similarly, lumping Biblical literalism with Web 2.0 misses the mark. Many of the most active social media sites — Slashdot, Digg, Reddit — are rallying points for those committed to scientific truth. Wikipedia users have so successfully defended articles on Evolution, Creationism and so on from the introduction of counter-factual beliefs that frustrated literalists helped found Conservapedia, whose entry on Evolution is a farrago of anti-scientific nonsense.

But wait — if use of social media is bad, and attacks on the scientific method are bad, what are we to make of social media sites that defend the scientific method? Surely Wikipedia is better than Conservapedia on that score, no? Well, it all gets confusing when you start looking at the details, but Gorman is not interested in the details. His grand theory, of the hell-in-a-handbasket variety, avoids any look at specific instantiations of these tools — how do the social models of Digg and Wikipedia differ? does Huffington Post do better or worse than Instapundit on factual accuracy? — in favor of one sweeping theme: defense of incumbent stewards of knowledge against attenuation of their erstwhile roles.

There are two alternate theories of technology on display in Sleep of Reason. The first is that technology is an empty vessel, into which social norms may be poured. This is the theory behind statements like “The difference is not, emphatically not, in the communication technology involved.” (Emphasis his.) The second theory is that intellectual revolutions are shaped in part by the tools that sustain them. This is the theory behind his observation that the virtues of print were “…often absent in the manuscript age that preceded print.”

These two theories cannot both be true, so it’s odd to find them side by side, but Gorman does not seem to be comfortable with either of them as a general case. This leads to a certain schizophrenic quality to the writing. We’re told that print does not necessarily bestow authenticity and that some digital material does, but we’re also told that he consulted “authoritative printed sources” on Goya. If authenticity is an option for both printed and digital material, why does printedness matter? Would the same words on the screen be less scholarly somehow?

Gorman is adopting a historically contingent view: Revolution then was good, revolution now is bad. As a result, according to Gorman, the shift to digital and networked reproduction of information will fail unless it recapitulates the institutions and habits that have grown up around print.

Gorman’s theory about print — its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.

Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect — the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.

This improved ability to find both content and people is one of the core virtues of our age. Gorman insists that he was able to find “…the recorded knowledge and information I wanted [about Goya] in seconds.” This is obviously an impossibility for most of the population; if you wanted detailed printed information on Goya and worked in any environment other than a library, it would take you hours at least. This scholar’s-eye view is the key to Gorman’s lament: so long as scholars are content with their culture, the inability of most people to enjoy similar access is not even a consideration.

Wikipedia is the best known example of improved findability of knowledge. Gorman is correct that an encyclopedia is not the product of a collective mind; this is as true of Wikipedia as of Britannica. Gorman’s unfamiliarity with, and even distaste for, Wikipedia leads him to mistake the dumbest utterances of its most credulous observers for an authentic accounting of its mechanisms; people pushing arguments about digital collectivism, pro or con, know nothing about how Wikipedia actually works. Wikipedia is the product not of collectivism but of unending argumentation; the corpus grows not from harmonious thought but from constant scrutiny and emendation.

The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise ten-fold or a hundred-fold.

Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.

If Gorman were looking at Web 2.0 and wondering how print culture could aspire to that level of accessibility, he would be doing something to bridge the gap he laments. Instead, he insists that the historical mediators of access “…promote intellectual development by exercising judgment and expertise to make the task of the seeker of knowledge easier.” This is the argument Catholic priests made to the operators of printing presses against publishing translations of the Bible — the laity shouldn’t have direct access to the source material, because they won’t understand it properly without us. Gorman offers no hint as to why direct access was an improvement when created by the printing press then but a degradation when created by the computer. Despite the high-minded tone, Gorman’s ultimate sentiment is no different from that of everyone from music executives to newspaper publishers: Old revolutions good, new revolutions bad.

Comments (48) + TrackBacks (0) | Category: social software

May 24, 2007

What are we going to say about "Cult of the Amateur"?Email This EntryPrint This Article

Posted by Clay Shirky

A month or so ago, Micah Sifry offered me a chance to respond to Andrew Keen, author of the forthcoming Cult of the Amateur, at a panel at last week’s Personal Democracy Forum (PdF). The book is a polemic against the current expansion of freedom of speech, freedom of the press, and freedom of association. Also on the panel were Craig Newmark and Robert Scoble, so I was in good company; my role would, I thought, be easy — be pro-amateur production, pro-distributed creation, pro-collective action, and so on, things that come naturally to me.

What I did not expect was what happened — I ended up defending Keen, and key points from Cult of the Amateur, against a panel of my peers.

I won’t review CotA here, except to say that the book is going to get a harsh reception from the blogosphere. It is, as Keen himself says, largely anecdotal, which makes it more a list of ‘bad things that have happened where the internet is somewhere in the story’ than an account of cause and effect; as a result, internet gambling and click fraud are lumped together with the problems with DRM and epistemological questions about peer-produced material. In addition to this structural weakness, it is both aggressive enough and reckless enough to make people spitting mad. Dan Gillmor was furious about the inaccuracies, including his erroneous (and since corrected) description in the book, Yochai Benkler asked me why I was even deigning to engage Andrew in conversation, and so on. I don’t think I talked to anyone who wasn’t dismissive of the work.

But even if we stipulate that the book doesn’t do much to separate cause from effect, and has the problems of presentation that often accompany polemic, the core point remains: Keen’s sub-title, “How today’s internet is destroying our culture”, has more than a grain of truth to it, and the only thing those of us who care about the network could do wrong would be to dismiss Keen out of hand.

Which is exactly what people were gearing up to do last week. Because Keen is a master of the dismissive phrase — bloggers are monkeys, only people who get paid do good work, and so on — he will engender a reaction from our side that assumes that everything he says in the book is therefore wrong. This is a bad (but probably inevitable) reaction, but I want to do my bit to try to stave it off, both because fairness dictates it — Keen is at least in part right, and we need to admit that — and because a book-burning accompanied by a hanging-in-effigy will be fun for us, but will weaken the pro-freedom position, not strengthen it.

The Panel

The panel at PdF started with Andrew speaking, in some generality, about ways in which amateurs were discomfiting people who actually know what they are doing, while producing sub-standard work on their own.

My response started by acknowledging that many of the negative effects Keen talked about were real, but that the source of these effects was an increase in the freedom of people to say what they want, when they want to, on a global stage; that the advantages of this freedom outweigh the disadvantages; that many of the disadvantages are localized to professions based on pre-internet inefficiencies; and that the effort required to take expressive power away from citizens was not compatible with a free society.

This was, I thought, a pretty harsh critique of the book. I was wrong; I didn’t know from harsh.

Scoble was simply contemptuous. He had circled offending passages which he would read, and then offer an aphoristic riposte that was more scorn than critique. For instance, in taking on Andrew’s point that talent is unevenly distributed, Scoble’s only comment was, roughly, “Yeah, Britney must be talented…”

Now you know and I know what Scoble meant — traditional media gives outsize rewards to people on characteristics other than pure talent. This is true, but because he was so dismissive of Keen, it’s not the point that Scoble actually got across. Instead, he seemed to be denying either that talent is unevenly distributed, or that Britney is talented.

But Britney is talented. She’s not Yo-Yo Ma, and you don’t have to like her music (back when she made music rather than just headlines), but what she does is hard, and she does it well. Furthermore, deriding the music business’s concern with looks isn’t much of a criticism. It escaped no one’s notice that Amanda Congdon and lonelygirl15 were easy on the eyes, and that that was part of their appeal. So cheap shots at mainstream talent or presumptions of the internet’s high-mindedness are both non-starters.

More importantly, talent is unevenly distributed, and everyone knows it. Indeed, one of the many great things about the net is that talent can now express itself outside traditional frameworks; this extends to blogging, of course, but also to music, as Clive Thompson described in his great NY Times piece, or to software, as with Linus’ talent as an OS developer, and so on. The price of this, however, is that the amount of poorly written or produced material has expanded a million-fold. Increased failure is an inevitable byproduct of increased experimentation, and finding new filtering methods for dealing with an astonishingly adverse signal-to-noise ratio is the great engineering challenge of our age (c.f. Google.) Whatever we think of Keen or CotA, it would be insane to deny that.

Similarly, Scoble scoffed at the idea that there is a war on copyright, but there is a war on copyright, at least as it is currently practiced. As new capabilities go, infinite perfect copyability is a lulu, and it breaks a lot of previously stable systems. In the transition from encoding on atoms to encoding with bits, information goes from having the characteristics of chattel to those of a public good. For the pro-freedom camp to deny that there is a war on copyright puts Keen in the position of truth-teller, and makes us look like employees of the Ministry of Doublespeak.

It will be objected that engaging Keen and discussing a flawed book will give him attention he neither needs nor deserves. This is fantasy. CotA will get an enthusiastic reception no matter what, and whatever we think of it or him, we will be called to account for the issues he raises. This is not right, fair, or just, but it is inevitable, and if we dismiss the book based on its errors or a-causal attributions, we will not be regarded as people who have high standards, but rather as defensive cult members who don’t like to explain ourselves to outsiders.

What We Should Say

Here’s my response to the core of Keen’s argument.

Keen is correct in seeing that the internet is not an improvement to modern society; it is a challenge to it. New technology makes new things possible, or, put another way, when new technology appears, previously impossible things start occurring. If enough of those impossible things are significantly important, and happen in a bundle, quickly, the change becomes a revolution.

The hallmark of revolution is that the goals of the revolutionaries cannot be contained by the institutional structure of the society they live in. As a result, either the revolutionaries are put down, or some of those institutions are transmogrified, replaced, or simply destroyed. We are plainly witnessing a restructuring of the music and newspaper businesses, but their suffering isn’t unique, it’s prophetic. All businesses are media businesses, because whatever else they do, all businesses rely on the managing of information for two audiences — employees and the world. The increase in the power of both individuals and groups, outside traditional organizational structures, is epochal. Many institutions we rely on today will not survive this change without radical alteration.

This change will create three kinds of loss.

First, people whose jobs relied on solving a hard problem will lose those jobs when the hard problems disappear. Creating is hard, filtering is hard, but the basic fact of making acceptable copies of information, previously the basis of the aforementioned music and newspaper industries, is a solved problem, and we should regard with suspicion anyone who tries to return copying to its previously difficult state.

Similarly, Andrew describes a firm running a $50K campaign soliciting user-generated ads, and notes that some professional advertising agency therefore missed out on something like $300,000 in fees. It’s possible to regard this as a hardship for the ad guys, but it’s also possible to wonder whether they were really worth the $300K in the first place if an amateur, working in their spare time with consumer-grade equipment, can create something the client is satisfied with. This loss is real, but it is not general. Video tools are sad for ad guys in the same way movable type was sad for scribes, but as they say in show biz, the world doesn’t owe you a living.

The second kind of loss will come from institutional structures that we like as a society, but which are becoming unsupportable. Online ads offer better value for money, but as a result, they are not going to generate enough cash to stand up the equivalent of the NY Times’ 15-person Baghdad bureau. Josh Wolf has argued that journalistic privilege should be extended to bloggers, but the irony is that Wolf’s very position as a videoblogger makes that view untenable — journalistic privilege is a special exemption to the general requirement that citizens aid the police, and an exemption everyone can claim is no exemption at all.

The old model of defining a journalist by tying their professional identity to employment by people who own a media outlet is broken. Wolf himself has helped transform journalism from a profession to an activity; now we need a litmus test for when to offer source confidentiality for acts of journalism. This will in some ways be a worse compromise than the one we have now, not least because it will take a long time to unfold, but we can’t have mass amateurization of journalism and keep the social mechanisms that regard journalists as a special minority.

The third kind of loss is the serious kind. Some of these Andrew mentions in his book: the rise of spam, the dramatically enlarged market for identity theft. Other examples he doesn’t: terrorist organizations being more resilient as a result of better communications tools, pro-anorexic girls forming self-help groups to help them remain anorexic. These things are not side-effects of the current increase in freedom, they are effects of that increase. Spam is not just a plague in open, low-entry-cost systems; it is a result of those systems. We can no longer limit things like who gets to form self-help groups through social controls (the church will rent its basement to AA but not to the pro-ana kids), because no one needs help or permission to form such a group anymore.

The hard question contained in Cult of the Amateur is “What are we going to do about the negative effects of freedom?” Our side has generally advocated having as few limits as possible (when we even admit that there are downsides), but we’ve been short on particular cases. It’s easy to tell the newspaper people to quit whining, because the writing has been on the wall since Brad Templeton founded Clarinet. It’s harder to say what we should be doing about the pro-ana kids, or the newly robust terror networks.

Those cases are going to shift us from prevention to reaction (a shift that parallels the current model of publishing first, then filtering later), but so much of the conversation about the social effects of the internet has been so upbeat that even when there is an obvious catastrophe (as with the essjay crisis on Wikipedia), we talk about it amongst ourselves, but not in public.

What Wikipedia (and Digg and eBay and craigslist) have shown us is that mature systems have more controls than immature ones, because bad cases get identified and dealt with over time. As these systems become more critical and more populous, the number of bad cases (and therefore the granularity and sophistication of the controls) will continue to increase.

We are creating a governance model for the world that will coalesce after the pre-internet institutions suffer whatever damage or decay they are going to suffer. The conversation about those governance models, what they look like and why we need them, is going to move out into the general public with CotA, and we should be ready for it. My fear, though, is that we will instead get a game of “Did not!”, “Did so!”, and miss the opportunity to say something much more important.

Comments (19) + TrackBacks (0) | Category: social software

May 19, 2007

The (Bayesian) Advantage of YouthEmail This EntryPrint This Article

Posted by Clay Shirky

A couple of weeks ago, Fred Wilson wrote, in The Mid Life Entrepreneur Crisis, “…prime time entrepreneurship is 30s. And its possibly getting younger as web technology meets youth culture.” After some follow-up from Valleywag, he addressed the question at greater length in The Age Question (continued), saying “I don’t totally buy that age matters. I think, as I said in my original post, that age is a mind set.”

This is a relief for people like me — you’re as young as you feel, and all that — or rather it would be a relief but for one little problem: Fred was right before, and he’s wrong now. Young entrepreneurs have an advantage over older ones (and by older I mean over 30), and contra Fred’s second post, age isn’t in fact a mindset. Young people have an advantage that older people don’t have and can’t fake, and it isn’t about vigor or hunger — it’s a mental advantage. The principal asset a young tech entrepreneur has is that they don’t know a lot of things.

In almost every other circumstance, this would be a disadvantage, but not here, and not now. The reason this is so (and the reason smart old people can’t fake their way into this asset) has everything to do with our innate ability to cement past experience into knowledge.

Probability and the Crisis of Novelty

The classic illustration for learning outcomes based on probability uses a bag of colored balls. Imagine that you can take out one ball, record its color, put it back, and draw again. How long does it take you to form an opinion about the contents of the bag, and how correct is that opinion?

Imagine a bag of black and white balls, with a slight majority of white. Drawing out a single ball would provide little information beyond “There is at least one white (or black) ball in this bag.” If you drew out ten balls in a row, you might guess that there are a similar number of black and white balls. A hundred would make you relatively certain of that, and might give you an inkling that white slightly outnumbers black. By a thousand draws, you could put a rough percentage on that imbalance, and by ten thousand draws, you could say something like “53% white to 47% black” with some confidence.

This is the world most of us live in, most of the time; the people with the most experience know the most.
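The thought experiment above is easy to check with a few lines of simulation. This is a toy sketch, not part of the original argument; the 53/47 split and the draw counts are just the numbers used above:

```python
import random

def estimate_white_fraction(true_fraction, n_draws, seed=0):
    """Draw n_draws balls (with replacement) from a bag where
    true_fraction of the balls are white; return the observed fraction."""
    rng = random.Random(seed)
    white = sum(1 for _ in range(n_draws) if rng.random() < true_fraction)
    return white / n_draws

# With a 53/47 white/black bag, a handful of draws tells you little,
# while thousands of draws pin the ratio down.
for n in (10, 100, 10_000):
    print(n, estimate_white_fraction(0.53, n))
```

Run it a few times with different seeds and the pattern holds: the ten-draw estimate bounces all over, the ten-thousand-draw estimate barely moves.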

But what would happen if the contents of the bag changed overnight? What if the bag suddenly started yielding balls of all colors and patterns — black and white but also green and blue, striped and spotted? The next day, when the expert draws a striped ball, he might well regard it as a mere anomaly. After all, his considerable experience has revealed a predictable and stable distribution over tens of thousands of draws, so there’s no need to throw out the old theory because of a single oddity. (To put it in Bayesian terms, the expert’s prior beliefs are valuable precisely because they have been strengthened through repetition, and that same repetition keeps him confident in them even in the face of a small number of challenging cases.)
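Here is a toy version of that Bayesian point, with all the counts invented for illustration. Under a simple Beta-Binomial model, a prior built from 10,000 uneventful draws barely budges after a morning of surprises, while a flat prior flips immediately:

```python
def posterior_mean_odd(prior_odd, prior_plain, odd_seen, plain_seen):
    """Posterior mean of P(next ball is an odd color) under a Beta-Binomial
    model: the prior counts encode past draws, the new counts today's draws."""
    return (prior_odd + odd_seen) / (prior_odd + prior_plain + odd_seen + plain_seen)

# Expert: 10,000 prior draws, none of them odd (plus a token pseudo-count).
expert = posterior_mean_odd(1, 10_001, odd_seen=10, plain_seen=0)
# Novice: no history at all, just today's 10 odd draws.
novice = posterior_mean_odd(1, 1, odd_seen=10, plain_seen=0)
print(f"expert believes P(odd) = {expert:.4f}")  # still near zero
print(f"novice believes P(odd) = {novice:.4f}")  # near one
```

Ten striped balls in a row leave the expert essentially unmoved, and leave the novice essentially correct. That asymmetry is the whole argument in miniature.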

But the expert keeps drawing odd colors, and so after a while, he is forced to throw out the ‘this is an anomaly, and the bag is otherwise as it was’ theory, and start on a new one, which is that some novel variability has indeed entered the system. Now, the expert thinks, we have a world of mostly black and white, but with some new colors as well.

But the expert is still wrong. The bag changed overnight, and the new degree of variation is huge compared to the older black-and-white world. Critically, any attempt to rescue the older theory will cause the expert to misunderstand the world, and the more carefully the expert relies on the very knowledge that constitutes his expertise, the worse his misunderstanding will be.

Meanwhile, on the morning after the contents of the bag turn technicolor, someone who just showed up five minutes ago would say “Hey, this bag has lots of colors and patterns in it.” While the expert is still trying to explain away or minimize the change as a fluke, or as a slight adjustment to an otherwise stable situation, the novice, who has no prior theory to throw out, understands exactly what’s going on.

What our expert should have done, the minute he saw the first odd ball, is to say “I must abandon everything I have ever thought about how this bag works, and start from scratch.” He should, in other words, start behaving like a novice.

Which is exactly the thing he — we — cannot do. We are wired to learn from experience. This is, in almost all cases, absolutely the right strategy, because most things in life benefit from mental continuity. Again, today, gravity pulls things downwards. Again, today, I get hungry and need to eat something in the middle of the day. Again, today, my wife will be happier if I put my socks in the hamper than on the floor. We don’t need to re-learn things like this; once we get the pattern, we can internalize it and move on.

A Lot of Knowledge Is A Dangerous Thing

This is where Fred’s earlier argument comes in. In 999,999 cases out of a million, learning from experience is a good idea, but what entrepreneurs do is look for the one-in-a-million shot. When the world really has changed overnight, when wild new things are possible if you don’t have any sense of how things used to be, then it is the people who got here five minutes ago who understand that new possibility, and they understand it precisely because, to them, it isn’t new.

These cases, let it be said, are rare. The mistakes novices make come from a lack of experience. They overestimate mere fads, seeing revolution everywhere, and they make this kind of mistake a thousand times before they learn better. But the experts make the opposite mistake, so that when a real once-in-a-lifetime change comes along, they are at risk of regarding it as a fad. As a result of this asymmetry, the novice makes their one good call during an actual revolution, at exactly the same time the expert makes their one big mistake, but at that moment, that’s all that is needed to give the newcomer a considerable edge.

Here’s a tech history question: Which went mainstream first, the PC or the VCR?

People over 35 have a hard time understanding why you’d even ask — VCRs obviously pre-date PCs in general adoption.

Here’s another: Which went mainstream first, the radio or the telephone?

The same people often have to think about this question, even though the practical demonstration of radio came almost two decades after the practical demonstration of the telephone. We have to think about that second question because, to us, radio and the telephone arrived at the same time, which is to say the day we were born. And for college students today, that is true of the VCR and the PC.

People who think of the VCR as old and stable, and the PC as a newer invention, are not the kind of people who think up Tivo. It’s people who are presented with two storage choices, tape or disk, without historical bias making tape seem more normal and disk more provisional, who do that kind of work, and those people are, overwhelmingly, young.

This is sad for a lot of us, but it’s also true, and Fred’s kind lies about age being a mind set won’t reverse it.

The Uses of Experience

I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that you have to try on pants before you buy them. I know that newspapers are where you get your political news and how you look for a job. I know that if you want to have a conversation with someone, you call them on the phone. I know that the library is the most important building on a college campus. I know that if you need to take a trip, you visit a travel agent.

In the last 15 years or so, I’ve had to unlearn every one of those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.

Ten years ago, I was the CTO of a web company we built and sold in what seemed at the time like an eon but was, in retrospect, an eyeblink. Looking back, I’m embarrassed at how little I knew, but I was a better entrepreneur because of it.

I can take some comfort in the fact that people much more successful than I succumb to the same fate. IBM learned, from decades of experience, that competitive advantage lay in the hardware; Bill Gates had never had those experiences, and didn’t have to unlearn them. Jerry and David at Yahoo learned, after a few short years, that search was a commodity. Sergey and Larry never knew that. Mark Cuban learned that the infrastructure required for online video made the economics of web video look a lot like TV. That memo was never circulated at YouTube.

So what can you do when you get kicked out of the club? My answer has been to do the things older and wiser people do. I teach, I write, I consult, and when I work with startups, it’s as an advisor, not as a founder.

And the hardest discipline, whether talking to my students or the companies I work with, is to hold back from offering too much advice, too definitively. When I see students or startups thinking up something crazy, and I want to explain why that won’t work, couldn’t possibly work, why this recapitulates the very argument that led to RFC 939 back in the day, I have to remind myself to shut up for a minute and just watch, because it may be me who will be surprised when I see what color comes out of the bag next.

Comments (42) + TrackBacks (0) | Category: social software

April 25, 2007

Sorry, Wrong Number: McCloud Abandons MicropaymentsEmail This EntryPrint This Article

Posted by Clay Shirky

Four years ago, I wrote a piece called Fame vs Fortune: Micropayments and Free Content. The piece was sparked by the founding of a company called BitPass and its adoption by the comic artist Scott McCloud (author of the seminal Understanding Comics, among other things.) McCloud created a graphic work called “The Right Number”, which you had to buy using BitPass.

It didn’t work. BitPass went out of business in January of this year. I didn’t write about it at the time because its failure was a foregone conclusion. This isn’t just retrospective certainty, either; here’s what I said about BitPass in 2003:
BitPass will fail, as FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, and many others have in the decade since Digital Silk Road, the paper that helped launch interest in micropayments. These systems didn’t fail because of poor implementation; they failed because the trend towards freely offered content is an epochal change, to which micropayments are a pointless response.

I’d love to take credit for having made a brave prediction there, but in fact Nick Szabo wrote a dispositive critique of micropayments back in 1996. The BitPass model never made a lick of sense, so predicting its demise was mere throat-clearing on the way to the bigger argument. The conclusion I drew in 2003 (and which I still believe) was that the vanishingly low cost of making unlimited perfect copies would put creators in the position of having to decide between going for audience size (fame) or restricting and charging for access (fortune), and that the desire for fame, no longer tempered by reproduction costs, would generally win out.

Creators are not publishers, and putting the power to publish directly into their hands does not make them publishers. It makes them artists with printing presses. This matters because creative people crave attention in a way publishers do not. […] with the power to publish directly in their hands, many creative people face a dilemma they’ve never had before: fame vs fortune.

Scott McCloud, who was also an advisor to BitPass, took strong issue with this idea in Misunderstanding Micropayments, a reply to the Fame vs. Fortune argument:

In many cases, it’s no longer a choice between getting it for a price or getting it for free. It’s the choice between getting it for a price or not getting it at all. Fortunately, the price doesn’t have to be high.

McCloud was arguing that the creator’s natural monopoly — only Scott McCloud can produce another Scott McCloud work — would provide the artist the leverage needed to insist on micropayments (true), and that this leverage would create throngs of two-bit users (false).

What’s really interesting is that, after the failure of BitPass, McCloud has now released The Right Number absolutely free of charge. Nothing. Nada. Kein Preis. After the micropayment barrier had proved too high for his potential audience (as predicted), McCloud had to choose between keeping his work obscure, in order to preserve the possibility of charging for it, or going for attention. His actual choice in 2007 upends his argument of four years ago: he went for the fame, at the expense of the fortune. (This recapitulates Tim O’Reilly’s formulation: “Obscurity is a far greater threat to authors and creative artists than piracy.” [ thanks, Cory, for the pointer ])

Everyone who imagines a working micropayment system either misunderstands user preferences, or imagines preventing users from expressing those preferences. The working micropayments systems that people hold up as existence proofs — ringtones, iTunes — are businesses that have escaped from market dynamics through a monopoly or cartel (music labels, carriers, etc.) Indeed, the very appeal of micropayments to content producers (the only people who like them — they offer no feature a user has ever requested) is to re-establish the leverage of the creator over the users. This isn’t going to happen, because the leverage wasn’t based on the valuing of content, but of packaging and distribution.

I’ll let my 2003 self finish the argument:
People want to believe in things like micropayments because without a magic bullet to believe in, they would be left with the uncomfortable conclusion that what seems to be happening — free content is growing in both amount and quality — is what’s actually happening.

The economics of content creation are in fact fairly simple. The two critical questions are “Does the support come from the reader, or from an advertiser, patron, or the creator?” and “Is the support mandatory or voluntary?”

The internet adds no new possibilities. Instead, it simply shifts both answers strongly toward the second options: it makes all user-supported schemes harder and all subsidized schemes easier, and it likewise makes collecting fees harder and soliciting donations easier. And these effects are multiplicative: the internet makes collecting mandatory user fees much harder, and makes voluntary subsidy much easier.

The only interesting footnote, in 2007, is that these forces have now reversed even McCloud’s behavior.

Comments (11) + TrackBacks (0) | Category: social software

March 7, 2007

Spam that knows you: anyone else getting this?Email This EntryPrint This Article

Posted by Clay Shirky

So a few weeks ago, I started getting spam referencing O’Reilly books in the subject line. At first I assumed the spammers had just gotten lucky, and that the universe of possible offensive measures now included generating so many different subject lines that at least some of them got through to my inbox. Recently, though, I’ve started to get more of this kind of spam, as with:

  • Subject: definition of what “free software” means. Outgrowing its
  • Subject: What makes it particularly interesting to private users is that there has been much activity to bring free UNIXoid operating systems to the PC,
  • Subject: and so have been long-haul links using public telephone lines. A rapidly growing conglomerate of world-wide networks has, however, made joining the global

(All are phrases drawn from http://tldp.org/LDP/nag/node2.html.)

Can it be that spammers are starting to associate context with individual email addresses, in an effort to evade Bayesian filters? (If you wanted to make sure a message got to my inbox, references to free software, open source, and telecom networks would be a pretty good way to do it. I mean, what are the chances?) Some of this stuff is so close to my interests that I thought I’d written some of the subject lines and was receiving this as a reply. Or is this just general Bayes-busting that happens to overlap with my interests?
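For what it’s worth, the mechanism being gamed looks roughly like this. A toy naive Bayes scorer, with per-word odds I’ve invented for illustration (a real filter learns these from your actual mail), shows why on-topic words drag a message’s spam score down:

```python
import math

# Invented per-word likelihood ratios: how much more often each word
# appears in spam than in legitimate mail, for one hypothetical inbox.
word_spam_odds = {
    "viagra": 50.0, "winner": 20.0, "free": 5.0,
    "software": 0.3, "unix": 0.1, "networks": 0.2,  # ham-like words here
}

def spam_log_odds(subject, prior_odds=1.0):
    """Sum of log odds over known words: the core of a naive Bayes filter.
    Positive means spam-like, negative means ham-like."""
    score = math.log(prior_odds)
    for word in subject.lower().split():
        if word in word_spam_odds:
            score += math.log(word_spam_odds[word])
    return score

print(spam_log_odds("free viagra winner"))           # strongly positive
print(spam_log_odds("free software unix networks"))  # negative
```

If a spammer can pack the subject line with words your filter has learned to treat as ham — free software, UNIX, networks — the spammy words get outvoted, and the message lands in the inbox.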

If it’s the former, then Teilhard de Chardin is laughing it up in some odd corner of the noosphere, as our public expressions are being reflected back to us as a come-on. History repeats itself, first as self-expression, then as ad copy…

Comments (15) + TrackBacks (0) | Category: social software

February 6, 2007

Second Life: A response to Henry JenkinsEmail This EntryPrint This Article

Posted by Clay Shirky

Introduction: Last week, Henry Jenkins, Beth Coleman and I all published pieces on Second Life and virtual worlds. (Those pieces are here: Henry, Beth, Clay.)

We also agreed we would each post reaction pieces this week. Henry’s second week post is here, Beth’s is here. (I have not read either yet, so the reactions to those pieces will come next week.) My reaction to Henry’s first piece is below; my reaction to Beth’s first piece will appear later in the week.

Henry,

I hope you’re a betting man, because at the end of this post, I’m going to propose a bet (or rather, as befits a conversation between academics, a framework for a bet.)

Before I do, though, I want to react to your earlier post on Second Life. Reading it, it struck me that we agree about many of the basic facts, and that most of our variance is about their relative importance. So as to prevent the softness of false consensus from settling over some sharp but interesting disagreements, let me start with a list of assertions I think we could both agree with. If I succeed, we can concentrate on our smaller but more interesting set of differences.

I think you and I agree that:

1. Linden has embraced participatory culture, including, inter alia, providing user tools, using CC licenses, and open sourcing the client.
2. Users of Second Life have created interesting effects by taking advantage of those opportunities.

and also

3. Most people who try Second Life do not like it. As a result, SL is not going to be a mass movement in any meaningful sense of the term, to use your phrase.
4. Reporters and marketers ought not discuss Second Life using phony numbers.

The core difference between our respective views of the current situation is that you place more emphasis on the first two items on that list, and I on the second two.

With that having been said (and assuming you roughly agree with that analysis), I’ll respond to three points in your post that I either don’t understand or do understand but disagree with. First, I want to push back on one of your historical comparisons. Next, I want to try to convince you that giving bad actors a pass when they embrace participatory culture is short-sighted. Finally, and most importantly, I want to propose a bet on the future utility or inutility of virtual worlds.

One: Second Life != The Renaissance

You compare Second Life with the Renaissance and the Age of Reason. This is approximately insane, and your disclaimer that Second Life may not reach this rarefied plateau doesn’t do much to make it less insane. Using the Renaissance as a reference point links the two in the reader’s mind, even in the face of subsequent denial. (We are all Lakoffians now.)

I disagree with this comparison, not because I think it’s wrong, but because I think it’s a category error. Your example assumes I was writing about Second Life as shorthand for broader cultural change. I wasn’t. I was writing about Second Life as a client/server program that creates a visual rendering of 3-dimensional space. Not only is it not the Renaissance, it isn’t even the same kind of thing as the Renaissance. Your comparison elides the differences between general movements and particular tools.

Participatory culture is one of the essential movements of our age. It creates many different kinds of artifacts, however, and it is possible to be skeptical about Second Life as an artifact without being skeptical of participatory culture generally. Let me re-write the sentiment you reacted to, to make that distinction clear: Second Life, a piece of software developed by Linden Lab, is unlikely to become widely adopted in the future, because it is not now and has never been widely adopted, measured either in retention of new users or in the number of current return users.

If you believe, separate from the fortunes of Linden Lab’s current offering, that virtual worlds will someday become popular, we have a different, and far more interesting disagreement, one I’ll write about in the last section below.

Two: Participatoriness Should Not Be A ‘Get Out of Jail Free’ Card

You say:
I care only a little bit about the future of virtual worlds. I care a great deal about the future of participatory culture. And for the moment, the debate about and the hype surrounding SL is keeping alive the idea that we might design and inhabit our own worlds and construct our own culture. That’s something worth defending.

Reading this, I get the feeling that if I opened Clay’s Kitten Mulching Hut™, so long as it involved participatory kitten mulching, it would get the Jenkins seal of approval.

Of everything you wrote, I think this is likeliest to turn out to be a tactical mistake, for two reasons. First, for every Second Life user, there are at least 5 former users who quickly bailed. (And that’s in the short term; more have bailed over the long term. We don’t know how many current return users there are, except that the number is smaller than any published figure.) Giving a pass to laudatory Second Life stories that use false numbers, simply because they are “keeping alive” an idea you like, risks bootstrapping Second Life’s failure to retain users into unwarranted skepticism about peer production generally.

More importantly, though, lowering your scrutiny of people using bogus Linden numbers, just because they are on your team, is a bad idea. To put this in Brooklynese, everyone touting the popularity of Second Life is either a schlemiel or a schuyster. The schlemiels are simply people who were fooled by Linden. The schuysters, though, are people who know the numbers are junk but use them anyway; making participatory culture a Get Out of Jail Free card for that kind of deceit is an invitation to corruption.

And I mean corruption in the most literal sense of the word:

- “As I write this, there are just over 1.4 million residents in Second Life, with over $704,000 spent in the last 24 hours.” Ian Shafer, at Clikz Experts

- “…one million users and growing by hundreds of thousands of users each month, that may well care about that virtual fashion crisis. They’re members of the virtual world Second Life, and when you consider all the places you need to reach consumers, this is by far one of the strangest.” David Berkowitz of 360i.

- “A new market emerging from virtual reality into the real-world global economy is growing by leaps and bounds […] such as the hugely popular Second Life - where the lines between online and offline commerce are rapidly blurring.” Rodney Nelsestuen of TowerGroup.

- “At the rate Second Life is growing — from 100,000 registered users a year ago to one million in October, and now all the way up to two million — it may be over thirty million a year from now. At thirty million users Second Life is no longer a sideshow, but is something everyone has heard of and many people are experiencing for themselves.” Catherine Winters at SocialSignal

Shafer, Berkowitz, Nelsestuen and Winters all know that the figures they are touting are inaccurate, if inaccurate is a strong enough word for claims that would make a stock spammer blush. (30 million users at the end of 2007? Please. Linden will be lucky to have a million return users in any month of this year.)

And why are these people saying these things? Because they are selling their services, and they are likelier to get clients if people wrongly believe that Second Life is hugely popular, or that millions of people currently use it, or that tens of millions will use it in a year. If, on the other hand, those potential clients were to understand that both the size and growth of the active population are considerably more modest than the Hamburger Helper figures Linden publishes, well, that wouldn’t be as good for business.

Perhaps, so long as it is about user-generated experiences, this sort of lying doesn’t bug you. It bugs me. More to the point, I think it should bug you, if only for reasons of self-interest; accepting Second Life hyperbole now, on the grounds that people will later remember the participatory nature but not the falsified popularity, is whistling past a pretty big graveyard.

Three: ‘Virtual Worlds’ is a failed composite

Aside from historical comparisons and concerns about abuse of population numbers, there is one area where I think we understand one another, and come to opposite conclusions. Even after we’ve agreed that Second Life will not become a mass movement, and that 3D worlds will not replace existing forms of communication, there’s still this:

Most of us will find uses for virtual worlds one of these days; most of us will not “live” there nor will we conduct most of our business there.

(When you say ‘most of us will find uses for virtual worlds…one of these days’, I assume you mean that something like ‘half of users will use virtual worlds at some future date amenable to prediction’, something like half a dozen years, say.)

We both seem to have written off any short-term realization of a Snow Crash-ish general purpose, visually immersive social world (though probably for different reasons.) Here’s my read of your theory of virtual worlds — you tell me where I’m wrong:

1. The shared feature of immersive 3D representation creates enough similarity among various platforms that they can reasonably be analyzed as a group.
2. Though we’re not headed to a wholesale replacement of current tools, or a general purpose “we live there” cyberspace, the number of special cases for virtual worlds will continue to grow, until most of us will use them someday.

If I have this right, or right enough, then our disagreement seems potentially crisp enough to make a bet on. Let me lay out where I think we disagree most clearly:

First, I think the label ‘virtual worlds’ makes no sense as a category. (I’ve already covered why elsewhere, so I won’t repeat that here.)

Second, I think that 3D game worlds will keep going gangbusters, and 3D non-game worlds will keep not. (An argument covered in the same place.)

Third, even with games driving almost all uses of 3D immersive worlds, I do not believe the portmanteau category of virtual worlds will reach anything like half of internet users (or even half of broadband users) in any predictable time.

#1 is just a thesis statement, but #2 and #3 are potential bets. So, if you’d care to make a prediction about either the percentage of game to non-game uses of virtual worlds, or overall population figures in virtual worlds, expressed either as an absolute number or as a percentage, I think we could take opposite sides of a bet for, say, dinner anyplace in either New York or Boston, set for the year of your prediction. (If the date is too far out, we can also pro-rate it via compound average growth, to make the term of the bet shorter.)

Game? Lemme know.

Comments (21) + TrackBacks (0) | Category:

January 29, 2007

Second Life, Games, and Virtual WorldsEmail This EntryPrint This Article

Posted by Clay Shirky

Introduction: This post is an experiment in synchronization. Since Henry Jenkins, Beth Coleman, and I are all writing about Second Life and because we like each other’s work, even when (or especially when) we disagree, we’ve decided to all post something on Second Life today. Beth’s post will appear at http://www.projectgoodluck.com/blog/, and Henry’s is at http://www.henryjenkins.org/.

Let me start with some background. Because of the number of themes involved in discussions of Second Life, it’s easy to end up talking at different levels of abstraction, so let me start with two core assertions, things that I take as background to my part of the larger conversation:

  • First, Linden’s Residents figures are methodologically worthless. Any claim about Second Life derived from a count of Residents is not to be taken seriously, and anyone making claims about Second Life based on those figures is to be regarded with skepticism. (Explanation here and here.)
  • Second, there are many interesting things going on in Second Life. As I have said in other forums, and will repeat here, passionate users are a law unto themselves, and rightly so. Nothing I could say about their experience in Second Life, pro or con, would matter to those users. My concerns are demographic.

With those assertions covered, I am asking myself two things: will Second Life become a platform for a significant online population? And, second, what can Second Life tell us about the future of virtual worlds generally?

Concerning popularity, I predict that Second Life will remain a niche application, which is to say an application that will be of considerable interest to a small percentage of the people who try it. Such niches can be profitable (an argument I made in the Meganiche article), but they won’t, by definition, appeal to a broad cross-section of users.

The logic behind this belief is simple: most people who try Second Life don’t like it. Something like five out of six new users abandon it before a month is up. The three month abandonment figure seems to be closer to nine out of ten. (This figure is less firm, as it has only been reported colloquially, with no absolute numbers behind it.)

More importantly, the current active population is still an unknown. (Call this metric something like “How many users in the last 30 days have accounts more than 30 days old?”) We know the highest that figure could be is in the low hundreds of thousands, but no one other than the Lindens (and, presumably, their bigger marketing clients) knows how much lower it is than this theoretical maximum.
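In code, the metric I have in mind is simple. The records here are hypothetical, since only Linden has the real ones:

```python
from datetime import date, timedelta

def active_returning_users(accounts, today):
    """Count accounts active in the last 30 days that are themselves
    more than 30 days old -- one sketch of a 'return user' metric."""
    cutoff = today - timedelta(days=30)
    return sum(
        1 for created, last_seen in accounts
        if last_seen >= cutoff and created < cutoff
    )

# Hypothetical records: (account created, last login)
accounts = [
    (date(2006, 1, 10), date(2007, 1, 20)),  # old account, recently active
    (date(2007, 1, 15), date(2007, 1, 20)),  # brand-new account: excluded
    (date(2006, 3, 1),  date(2006, 6, 1)),   # old account, long gone
]
print(active_returning_users(accounts, date(2007, 1, 29)))  # → 1
```

Note how different this is from a raw Residents count: the new signup and the abandoned account both drop out, which is exactly why the two numbers can diverge so widely.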

The poor adoption rate is a form of aggregate judgment. Anything bruited for wide adoption would have trouble with 85%+ abandonment, whether software or toothpaste. One possible explanation for this considerable user defection might be a technological gap. I do not doubt that improvements to the client and server would decrease the abandonment rate. I do doubt the improvement would be anything other than incremental, given 5 years and tens of millions in effort already.

Note too that abandonment is not a problem that all visually traversable spaces suffer from. Both Doom and Cyworld serve as counter-examples; in those cases, the rendering is cartoonish, yet both platforms achieved huge popularity in a short period. If the non-visual experience is good, the rendering does not need to be, but the converse does not seem to be true, on present evidence.

Two Objections

There have been two broad responses to skepticism occasioned by the Linden population numbers. (Three, if you count ad hominem, but Chris Lott has already covered that.)

The first response is not specific to Second Life. Many people have recalled earlier instances of misguided skepticism about new technologies, but the logical end-case of that thought is that skepticism about technology is never appropriate. (Disconfirmation of this thesis is left as an exercise for the reader.) Given that most new technologies fail, the challenge is to figure out which ones won’t. No one has noted examples of software with 85% abandonment rates, after five years of development, that went on to become widespread. Such examples may exist, but I can’t think of any.

The second objection is a conviction that demographics are irrelevant, and that the interesting goings-on in Second Life are what matters, no matter how few users are engaged in those activities.

I have never doubted (and have explicitly noted above) that there are interesting things happening in Second Life. The mistake, from my point of view, is in mixing two different questions. Whether some people like Second Life a lot is a completely separate issue from whether a lot of people like it. It is possible for the first assertion to be true and the second one false, and this is the only reading I believe is supported by the low absolute numbers and high abandonment rates. Nor is this an unusual case. We have several examples of platforms with fascinating in-world effects (Alphaworld, Black Sun/Blaxxun, The Palace, Dreamscape, LambdaMOO and environments on the SuperMOO List, etc.), all of which also failed to achieve wide use.

It is here that assertions about Second Life have most often been inconsistent. Before the uselessness of Linden’s population numbers was widely understood, the illusion of a large and rapidly growing community was touted as evidence of Second Life’s success. When both the absolute numbers and growth turned out to be more modest, population was downgraded and other metrics have been introduced as predictive of Second Life’s inevitable success.

A hypothesis which is strengthened by evidence of popularity, but not weakened by evidence of unpopularity, isn’t really a hypothesis, it’s a religious assertion. And a core tenet of the faithful seems to be that claims about Second Life are buttressed by the certain and proximate arrival of virtual worlds generally.

If we had but worlds enough and time…

It is worth pausing at this juncture. Many people writing about Second Life make little distinction between ‘Second Life as a particular platform’ and ‘Second Life as an exemplar of the coming metaverse’. I would like to buck this trend by explicitly noting the difference between those two conversations. I am basing my prediction of continued niche status for Second Life on the current evidence that most people who try it don’t like it. My beliefs about virtual worlds, on the other hand, are more conjectural. Everything below should be read with this caveat in mind.

With that said, I don’t believe that “virtual worlds” describes a coherent category, or, put another way, I believe that the group of things lumped together as virtual worlds have such variable implementations and user adoption rates that they are not well described as a single conceptual group.

I alluded to Pointcast in an earlier article; one of the ways the comparison is apt is in the abuse of categorization as a PR tool. Pointcast’s management claimed that email, the Web, and Pointcast all were about delivering content, and that the future looked bright for content delivery platforms. And indeed it did, except for Pointcast.

The successes of email and of the Web were better explained by their particular utilities than by their membership in a broad class of “content delivery.” Pointcast tried to shift attention from those particularities to a generic label in order to create a club in which it would automatically be included.

I believe a similar thing happens whenever Second Life is lumped with Everquest, World of Warcraft, et al., into a category called virtual worlds. If we accept the validity of this category, then multi-player games provide an existence proof of millions-strong virtual worlds, and the only remaining question is simply when we arrive at wider adoption of more general-purpose versions.

If, on the other hand, we don’t start off by lumping Second Life with Warcraft as virtual worlds, a very different question emerges: why do virtual game worlds outperform non-game worlds in their adoption? This pattern is quite stable over time and well predates Second Life and World of Warcraft: first Ultima Online (1997) and then Everquest (1999) each quickly dwarfed the combined populations of Alphaworld and Black Sun (later Blaxxun), despite the significant head starts of those virtual worlds. What is it about games that would make them a better fit for virtual environments than non-games?

Games have at least three advantages other virtual worlds don’t. First, many games, and most social games, involve an entrance into what theorists call the magic circle, an environment whose characteristics include simplified and knowable rules. The magic circle saves the game from having to live up to expectations carried over from the real world.

Second, games are intentionally difficult. If all you knew about golf was that you had to get this ball in that hole, your first thought would be to hop in your cart and drive it over there. But no, you have to knock the ball in, with special sticks. This is just about the stupidest possible way to complete the task, and also the only thing that makes golf interesting. Games create an environment conducive to the acceptance of artificial difficulties.

Finally, and most relevant to visual environments, our ability to ignore information from the visual field when in pursuit of an immediate goal is nothing short of astonishing (viz. the gorilla experiment). The fact that we could clearly understand spatial layout even in early and poorly rendered 3D environments like Quake has much to do with our willingness to switch from an observational Architectural Digest mode of seeing (Why has this hallway been accessorized with lava?) to a task-oriented Guns and Ammo mode (Ogre! Quad rocket for you!).

In this telling, games are not just special, they are special in a way that relieves designers of the pursuit of maximal realism. There is still a premium on good design and playability, but the magic circle, acceptance of arbitrary difficulties, and goal-directed visual filtering give designers ways to contextualize or bury at least some platform limitations. These are not options available to designers of non-game environments; asking users to accept such worlds as even passable simulacra subjects those environments to withering scrutiny.

Hubba Hubba

We can also reverse this observation. One question we might ask about successful non-game uses of virtual worlds is whether they too are special cases. One obvious example is erotic imagery. The zaftig avatar has been a trope of 3D rendering since designers have been able to scrape together enough polygons to model a torso, but examples start far earlier than virtual worlds. In fact, visual representation of voluptuous womanhood predates the invention of agriculture by the same historical interval as agriculture predates the present. This is a deep pattern.

It is also a pattern that, like games and unlike ordinary life, has a special relation to visual cues (though this effect is somewhat unbalanced by gender.) If someone is shown a virtual hamburger, it can arouse real hunger. However, to satisfy this hunger, he must then walk away from the image and get his hands on an actual hamburger. This is not the case, to put the matter delicately, with erotic imagery; a fetching avatar can arouse desire, but that desire can then be satiated without recourse to the real.

This pair of characteristics — a human (and particularly male) fixation on even poorly rendered erotic images, plus an ability to achieve a kind of gratification in the presence of those images — means that a sexualized rendering can create both attraction and satisfaction in a way that a rendering of, say, a mountain or an office cannot. As with games, visual worlds work in the context of eros not because the images themselves are so convincing, but because they reach a part of the brain that so desperately wants to be convinced.

More generally, I suspect that the cases where 3D immersion works are, and will continue to be, those uses that most invite the mind to fill in or simply do without missing detail, whether because of a triggering of sexual desire, the fight or flight reflex (many games), avarice (gambling), or other areas where we are willing and even eager to make rapid inferences based on a paucity of data. I also assume that these special cases are not simply adding up to a general acceptance of visual immersion, and that finding another avatar beguiling in a virtual bar is not in fact a predictor of being able to read someone’s face or body language in a virtual meeting as if you were with them. That, I believe, is a neurological problem of a different order.

Jaron Lanier is the Charles Babbage of Our Generation

Here we arrive at the furthest shores of speculation. One of the basic promises of virtual reality, at least in its Snow Crash-inflected version, is that we will be able to re-create the full sense of being in someone’s presence in a mediated environment. This desire, present at least since Shamash appeared to Gilgamesh in a dream, can be re-stated in technological terms as a hope that communications will finally become an adequate substitute for travel. We have been promised that this will come to pass with current technology since AT&T demoed a video phone at the 1964 World’s Fair.

I believe this version of virtual reality will in fact be achieved, someday. I do not, however, believe that it will involve a screen. Trying to trick the brain by tricking the eyes is a mug’s game. The brain is richly arrayed with tools to detect and unmask visual trickery — if the eyes are misreporting, the brain falls back on other externally focused senses like touch and smell, or internally focused ones like balance and proprioception.

Though the conception of virtual reality is clear, the technologies we have today are inadequate to the task. In the same way that the theory of computation arose in the mechanical age, but had to wait first for electrics and then electronics to be fully realized, general purpose virtual reality is an idea waiting on a technology, and specifically on neural interface, which will allow us to trick the brain by tricking the brain. (The neural interface in turn waits on trifling details like an explanation of consciousness.)

In the meantime, the 3D worlds program in the next decade is likely to resemble the AI program in the last century, where early optimism about rapid progress on general frameworks gave way to disconnected research topics (machine vision, natural language processing) and ‘toy worlds’ environments. We will continue to see valuable but specific uses for immersive environments, from flight training and architectural flythroughs to pain relief for burn victims and treatment for acrophobia. These are all indisputably good things, but they are not themselves general, and more importantly don’t suggest rapid progress on generality. As a result, games will continue to dominate the list of well-populated environments for the foreseeable future, rendering ineffectual the category of virtual worlds, and, critically, many of the predictions being attached thereunto.

[We’ve been experiencing continuing problems with our MT-powered commenting system. We’re working on a fix but for now send you to a temporary page where the discussion can continue.]

Comments (0) + TrackBacks (0) | Category: social software

Against Well-designed Reputation Systems (An Argument for Community Patent)

Posted by Clay Shirky

Intro: I was part of a group of people asked by Beth Noveck to advise the Community Patent review project about the design of a reputation and ranking system, to allow the widest possible input while keeping system gaming to a minimum. This was my reply, edited slightly for posting here.

We’ve all gone to school on the moderation and reputation systems of Slashdot and eBay. In both cases, growing popularity in the period after launch led to a tragedy of the commons, where open access plus incentives meant nearly constant attack by people wanting to game the system, whether to gain attention for themselves or their point of view, in the case of Slashdot, or to defraud other users, as with eBay.

The traditional response to these problems would have been to hire editors or other functionaries to police the system for abuse, in order to stem the damage and to assure ordinary users you were working on their behalf. That strategy, however, would fail at the scale and degree of openness at which those services function. The Slashdot FAQ tells the story of trying to police the comments with moderators chosen from among the userbase, first 25 of them and later 400. Like the Charge of the Light Brigade, however, even hundreds of committed individuals were just cannon fodder, given the size of the problem. The very presence of effective moderators made the problem worse over time. In a process analogous to more roads creating more traffic, the improved moderation saved the site from drowning in noise, so more users joined, but this increase actually made policing the site harder, eventually breaking the very system that made the growth possible in the first place.

EBay faced similar, ugly feedback loops; any linear expenditure of energy required for policing, however small the increment, would ultimately make the service unsustainable. As a result, the only opportunity for low-cost policing of such systems is to make them largely self-policing. From these examples and others we can surmise that large social systems will need ways to highlight good behavior or suppress negative behavior or both. If the guardians are to guard themselves, oversight must be largely replaced by something we might call intrasight, designed in such a way that imbalances become self-correcting.

The obvious conclusion to draw is that, when contemplating a new service with these characteristics, the need for some user-harnessed reputation or ranking system can be regarded as a foregone conclusion, and that these systems should be carefully planned so that tragedy of the commons problems can be avoided from launch. I believe that this conclusion is wrong, and that where it is acted on, its effects are likely to be at least harmful, if not fatal, to the service adopting them.

There is an alternate reading of the Slashdot and eBay stories, one that I believe better describes those successes, and better places Community Patent to take advantage of similar processes. That reading concentrates not on outcome but process; the history of Slashdot’s reputation system should teach us not “End as they began — build your reputation system in advance” but rather “Begin as they began — ship with a simple set of features, watch and learn, and implement reputation and ranking only after you understand the problems you are taking on.” In this telling, constituting users’ relations as a set of bargains developed incrementally and post hoc is more predictive of eventual success than simply adopting any residue from previous successes.

As David Weinberger noted in his talk The Unspoken of Groups, clarity is violence in social settings. You don’t get 1789 without living through 1788; successful constitutions, which necessarily create clarity, are typically ratified only after a group has come to a degree of informal cohesion, and is thus able to absorb some of the violence of clarity, in order to get its benefits. The desire to participate in a system that constrains freedom of action in support of group goals typically requires that the participants have at least seen, and possibly lived through, the difficulties of unfettered systems, while at the same time building up their sense of membership or shared goals in the group as a whole. Otherwise, adoption of a system whose goal is precisely to constrain its participants can seem too onerous to be worthwhile. (Again, contrast the US Constitution with the Articles of Confederation.)

Most current reputation systems have been fit to their situation only after that situation has moved from theoretical to actual; both eBay and Slashdot moved from a high degree of uncertainty to largely stable systems after a period of early experimentation. Perhaps surprisingly, this has not committed them to continual redesign. In those cases, systems designed after launch, but early in the process of user adoption, have survived to this day with only relatively minor subsequent adjustments.

Digg is the important counter-example, the most successful service to date to design a reputation system in advance. Digg differs from the community patent review process in that the designers of Digg had an enormous amount of prior art directly in its domain (Slashdot, Kuro5hin, Metafilter, et al), and still ended up with serious re-design issues. More speculatively, Digg seems to have suffered more from both system gaming and public concern over its methods, possibly because the lack of organic growth of its methods prevented it from becoming legitimized over time in the eyes of its users. Instead, they were asked to take it or leave it (never a choice users have been known to relish.)

Though more reputation design work may become Digg-like over time, in that designers can launch with systems more complete than eBay or Slashdot did, the ability to survey significantly similar prior art, and the ability to adopt a fairly high-handed attitude towards users who dislike the service, are not luxuries the community patent review process currently enjoys.

The Argument in Two Pictures

The argument I’m advancing can be illustrated with two imaginary graphs. The first concerns plasticity, the ease with which any piece of software can be modified.

Plasticity generally decays with time. It is highest in the early parts of the design phase, when a project is in its most formative stages. It is easier to change a list of potential features than a set of partially implemented features, and it is easier to change partially implemented features than fully implemented features. Especially significant is the drop in plasticity at launch; even for web-based services, which exist only in a single instantiation and can be updated frequently and for all users at once, the addition of users creates both inertia, in the direction of not breaking their mental model of the service, and caution in upgrading, so as not to introduce bugs or create downtime in a working service. As the userbase grows, the expectations of the early adopters harden still further, while the expectations of new users follow the norms set up by those adopters; this is particularly true of any service with a social component.

An obvious concern with reputation systems is that, as with any feature, they are easier to implement when plasticity is high. Other things being equal, one would prefer to design the system as early as possible, and certainly before launch. In the current case, however, other things are not equal. In particular, the specificity of information the designers have about the service and how it behaves in the hands of real users moves counter to plasticity over time.

When you are working to understand the ideal design for a particular piece of software, the specificity of your knowledge increases with time. During the design phase, the increasing concreteness of the work provides concomitant gains in specificity, but nothing like the jump that comes at launch. No software, however perfect, survives first contact with the users unscathed, and given the unparalleled opportunities with web-based services to observe user behavior — individually and in bulk, in the moment and over time — the period after launch increases specificity enormously, after which it continues to rise, albeit at a less torrid pace.

There is a tension between knowing and doing; in the absence of the ideal scenario where you know just what needs to be done while enjoying complete freedom to do it (and a pony), the essential tradeoff is in understanding which features benefit most from increased specificity of knowledge. Two characteristics that will tend to push the ideal implementation window to post-launch are when a set of possible features is very large, but the set of those features that will ultimately be required is small; and when culling the small number of required features from the set of all possible features can only be done by observing actual users. I believe that both conditions apply a fortiori to reputation and ranking.

Costs of Acting In Advance of Knowing

Consider the costs of designing a reputation system in advance. In addition to the well-known problems of feature-creep (“Let’s make it possible to rank reputation rankings!”) and Theory of Everything technologies (“Let’s make it Semantic Web-compliant!”), reputation systems create an astonishing perimeter defense problem. The number of possible threats you can imagine in advance is typically much larger than the number that manifest themselves in functioning communities. Even worse, however large the list of imagined threats, it will not be complete. Social systems are degenerate, which is to say that there are multiple alternate paths to similar goals — someone who wants to act out and is thwarted along one path can readily find others.

As you will not know which of these ills you will face, the perimeter you will end up defending will be very large and, critically, hard to maintain. The likeliest outcome from such an a priori design effort is inertness; a system designed in advance to prevent all negative behavior will typically have as a side effect deflecting almost all behavior, period, as users simply turn away from adoption.

Working social systems are both complex and homeostatic; as a result, any given strategy for mediating social relations can only be analyzed in the context of the other strategies in use, including strategies adopted by the users themselves. Since the user strategies cannot, by definition, be perfectly predicted in advance, and since the only ungameable social system is the one that doesn’t ship, every social system will have some weakness. A system designed in advance is likely to be overdefended while still having a serious weakness unknown to the designer, because the discovery and exploitation of that class of weakness can only occur in working, which is to say user-populated, systems. (As with many observations about the design of social systems, these are precedents first illustrated in Lessons from Lucasfilm’s Habitat, in the sections “Don’t Trust Anybody” and “Detailed Central Planning Is Impossible, Don’t Even Try”.)

The worst outcome of such a system would be collapse (the Communitree scenario), but even the best outcome would still require post hoc design to fix the system with regard to observed user behavior. You could save effort while improving the possibility of success by letting yourself not know what you don’t know, and then learning as you go.

In Favor of Instrumentation Plus Attention

The N-squared problem is only a problem when N is large; in most social systems the users are the most important N, and the userbase only grows large gradually, even for successful systems. (Indeed, this scaling up only over time typically provides the ability for a core group, once they have self-identified, to inculcate new users a bit at a time, using moral suasion as their principal tool.) As a result, in the early days of a system, the designers occupy a valuable point of transition, after user behavior is observable, but before scale and culture defeat significant intervention.

To take advantage of this designable moment, I believe that what Community Patent needs, at launch, is only this: metadata, instrumentation, and attention.

Metadata: There are, I believe, three primitive types of metadata required for Community Patent — people, patents, and interjections. Each of these will need some namespace to exist in — identity for the people, and named data for the patents themselves and for various forms of interjection, from simple annotation to complex conversation. In addition, two abstract types are needed — links and labels. A link is any unique pair of primitives — this user made that comment, this comment is attached to that conversation, this conversation is about those patents. All links should be readily observable and extractable from the system, even if they are not exposed in the interface the user sees. Finally, following Schachter’s intuition from del.icio.us, all links should be labelable. (Another way to view the same problem is to see labels as another type of interjection, attached to links.) I believe that this will be enough, at launch, to maximize the specificity of observation while minimizing the loss of plasticity.
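A minimal sketch of this data model in Python may make the shape concrete. All class, field, and example names here are assumed for illustration, not drawn from the actual Community Patent design:

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)  # one shared id namespace for all primitives

@dataclass(frozen=True)
class Node:
    """A primitive: a person, a patent, or an interjection."""
    kind: str    # "person" | "patent" | "interjection"
    name: str
    id: int = field(default_factory=lambda: next(_ids))

@dataclass
class Link:
    """A unique pair of primitives. Links are labelable, following
    the del.icio.us intuition mentioned in the text."""
    source: Node
    target: Node
    labels: set = field(default_factory=set)

# "This user made that comment, this comment is attached to that patent."
alice = Node("person", "alice")                  # hypothetical user
patent = Node("patent", "application-0001")      # hypothetical patent
note = Node("interjection", "prior-art annotation")

made = Link(alice, note, labels={"authored"})
attached = Link(note, patent, labels={"annotates"})
```

Keeping every link extractable, as the text recommends, means any reputation metric added later can be computed from this raw graph without changing the schema.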

Instrumentation: As we know from collaborative filtering algorithms from Ringo to PageRank, it is not necessary to ask users to rank things in order to derive their rankings. The second necessary element will be the automated delivery of as many possible reports to the system designers as can be productively imagined, and, at least as essential, a good system for quickly running ad hoc queries, and automating their production should they prove fruitful. This will help identify both the kinds of productive interactions on the site that need to be defended and the kinds of unproductive interactions they need to be defended from.
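The point that rankings can be derived rather than requested can be shown with the crudest possible implicit metric. This sketch (data and function name invented) simply counts inbound links per item:

```python
from collections import Counter

def implicit_ranking(links):
    """Rank targets by inbound-link count: a ranking derived from
    observed behavior, with no one ever asked to vote."""
    return Counter(target for _source, target in links).most_common()

# Illustrative link data: (source_id, target_id) pairs, e.g.
# "user 1 commented on patent 7".
links = [(1, 7), (2, 7), (3, 7), (4, 9), (5, 9), (6, 2)]
print(implicit_ranking(links))  # [(7, 3), (9, 2), (2, 1)]
```

PageRank refines this by weighting links from well-linked sources more heavily, but the underlying move is the same: the ranking falls out of the instrumented behavior.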

Designer Attention: This is the key — it will be far better to invest in smart people watching the social aspects of the system at launch than in smart algorithms guiding those aspects. If we imagine the moment when the system has grown to an average of 10 unique examiners per patent and 10 comments per examiner, then a system with even a thousand patents will be relatively observable without complex ranking or reputation systems, as both the users and the comments will almost certainly exhibit power-law distributions. In a system with as few as ten thousand users and a hundred thousand comments, it will still be fairly apparent where the action is, allowing you the time between Patent #1 and Patent #1000 to work out what sorts of reputation and ranking systems need to be put in place.
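Why a power-law distribution keeps such a system humanly observable is easy to model. Under an illustrative Zipf distribution (an assumption, not real Community Patent data), a tiny head of users accounts for a large share of all activity:

```python
def head_share(n_users, head, exponent=1.0):
    """Fraction of total activity produced by the `head` most active
    users under a Zipf-like model. Illustrative assumption only."""
    weights = [1 / rank ** exponent for rank in range(1, n_users + 1)]
    return sum(weights[:head]) / sum(weights)

# With ten thousand users, watching the top 100 covers roughly half
# of all comments under this model.
print(f"{head_share(10_000, 100):.0%}")
```

If the real distribution is anywhere near this shape, a designer watching the most active fraction of users sees most of what matters.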

This is a simplification, of course, as each of the categories listed above presents its own challenges — how should people record their identity? What’s the right balance between closed and open lists of labels? And so on. I do not mean to minimize those challenges. I do however mean to say that the central design challenge of user governance — self-correcting systems that do not raise crushing participation burdens on the users or crushing policing barriers on the hosts — is so hard to design in advance that, provided you have the system primitives right, the Boyd Strategy of OODA — Observe, Orient, Decide, Act — will be superior to any amount of advance design work.


Comments (0) + TrackBacks (0) | Category: social software

January 4, 2007

Real Second Life numbers, thanks to David Kirkpatrick

Posted by Clay Shirky

I’ve been complaining about bad reporting of Second Life population for some time now. David Kirkpatrick at Fortune has finally gotten some signal out of Linden Lab. Kirkpatrick’s report is here, in the comments. (CNN.com comments don’t have permalinks, so scroll down.)

Here are the numbers Philip Rosedale of Linden gave him. These are, I presume, as of Jan 3:

  • 1,525,670 unique people have logged into SL at least once (so now we know: the Residents figure runs a bit over 50% higher than actual users.)
  • Of that number, 252,284 people have logged in more than 30 days after their account creation date.
  • Monthly growth in that figure, calculated as the change between last September and last October, was 23%.

Those of us who wanted the conversation to be grounded in real numbers owe Kirkpatrick our thanks for helping us get there.

These numbers should have two good effects. First, now that Linden has reported, and Kirkpatrick has published, the real figures, maybe we’ll see the press shift to reporting users and active users, instead of Residents.

Second, we’re no longer going to be asked to stomach absurd claims of size and growth. The ‘2.3 million user/77% growth in two months’ figures would have meant 70 million Second Life users this time next year. 250 thousand and 23% growth will mean 3 million in a year’s time, a healthy number, but not hyperbolic growth.
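The compounding behind those two projections is worth making explicit; a quick sketch, using only the figures already reported:

```python
def project(population, monthly_rate, months=12):
    """Project a population forward under constant compound growth."""
    return population * (1 + monthly_rate) ** months

# The abandoned claim: 77% growth over two months implies a monthly
# rate of sqrt(1.77) - 1, about 33%, which compounds absurdly.
hype = project(2_300_000, 1.77 ** 0.5 - 1)  # roughly 70 million
# The real figures: ~250,000 active users growing 23% per month.
real = project(250_000, 0.23)               # roughly 3 million

print(f"{hype:,.0f} vs {real:,.0f}")
```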

We can start asking more sophisticated questions now, like the use pattern of active users, or the change in monthly growth rates, or whether the Residents-users inflation rate is stable, but those questions are for later. Right now, we’ve got enough real numbers to think about for a while.

Comments (9) + TrackBacks (0) | Category: social software

January 3, 2007

The future of television and the media triathlon

Posted by Clay Shirky

Mark Cuban doesn’t understand television. He holds a belief, common to connoisseurs the world over, that quality trumps everything else. The current object of his faith in Qualität Über Alles is HDTV. Says Cuban:
HDTV is the Internet video killer. Deal with it. Internet bandwidth to the home places a cap on the quality and simplicity of video delivery to the home, and to HDTVs in particular. Not only does internet capacity create an issue, but the complexity of moving HDTV streams around the home and to the HDTV is pretty much a deal killer itself.

“HDTV is the Internet video killer.” The appeal of this argument — whoever provides the highest quality controls the market — is obvious. So obvious, in fact, that it’s been used before. By audiophiles.

As January 1, 2000 approaches, and the MP3 whirlpool continues to swirl, one simple fact has made me feel as if I’m stuck at the starting line of the entire download controversy: The sound quality of MP3 has yet to improve above that of the average radio broadcast. Until that changes, I’m merely curious—as opposed to being in the I-want-to-know-it-all-now frenzy that is my usual m.o. when it comes to anything that promises music you can’t get anywhere else. (Robert Baird, October 1999)

MP3s won’t catch on, because they are lower quality than CDs. And this was true, wasn’t it? People cared about audio quality so much that despite other advantages of MP3s (price, shareability, better integration with PCs), they’ve stayed true to the CD all these years. The commercial firms that make CDs, and therefore continue to control the music market, thank these customers daily for their loyalty.

Meanwhile, back in the real world of the recording business, the news isn’t so rosy.

    Cuban doesn’t understand that television has been cut in half. The idea that there should be a formal link between the tele- part and the vision part has ended. Now, and from now on, the form of a video can be handled separately from its method of delivery. And since they can be handled separately, they will be, because users prefer it that way.

    But Cuban goes further. He doesn’t just believe that, other things being equal, quality will win; he believes quality is so important to consumers that they will accept enormous inconvenience to get that higher-quality playback. When Cuban’s list of advantages of HDTV includes an inability to watch your own video on it (“the complexity of moving HDTV streams around the home and to the HDTV”), you have to wonder what he thinks a disadvantage would look like.

    This is the season of the HDTV gotcha. After Christmas, people are starting to understand that they didn’t buy a nicer TV, they bought only one part of a Total Controlled Content Delivery Package. Got an HDTV monitor and a new computer for Christmas? You might as well have gotten a Fabergé Egg and a framing hammer for all the useful ways you can combine the two presents.

    Media is a triathlon event. People like to watch, but they also like to create, and to share. Doubling down on the watching part while making it harder for users to play their own stuff or share with their friends makes a medium worse in the users’ eyes. By contrast, the last 50 years have been terrible for user creativity and for sharing, so even moderate improvements in either of those abilities make the public go wild.

    When it comes to media quality, people don’t optimize, they satisfice. Once the medium, whether audio or video or whatever, crosses a minimum threshold, users accept it and move on to caring about other attributes. The change in internet video quality from 1996 to 2006 was the big jump, and YouTube is the proof. After this, firms that offer higher social value for video will have an edge over firms that offer higher production values while reducing social value.

    And because the audience for internet video will grow much faster than the audience for HDTV (and will be less pissed, because YouTube doesn’t rely on a ‘bait and switch’ walled garden play) the premium for making internet video better will grow with it. As Richard Gabriel said of programming languages years ago, “[E]ven though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.” That’s where video is today. HDTV provides a better viewing experience than internet video, but many more people care about making internet video better than making HDTV better.

    YouTube is the HDTV killer. Deal with it.

    Comments (15) + TrackBacks (0) | Category: social software

    December 26, 2006

    Linden's Second Life numbers and the press's desire to believe

    Posted by Clay Shirky

    “Here at KingsRUs.com, we call our website our Kingdom, and any time our webservers serve up a copy of the home page, we record that as a Loyal Subject. We’re very pleased to announce that in the last two months, we have added over 1 million Loyal Subjects to our Kingdom.”

    Put that baldly, you wouldn’t fall for this bit of re-direction, and yet that is exactly what Linden Labs has pulled off with its Residents™ label. By adopting a term that seems like a simple re-branding of “users”, but which is actually unconnected to head count or adoption, they’ve managed to report what the press wants to hear, while providing no actual information.

    If you like your magic tricks to stay mysterious, leave now, but if you want to understand how Linden has managed to disable the fact-checking apparatus of much of the US business press, turning them into a zombie army of unpaid flacks, read on. (And, as with the earlier piece on Linden, this piece has also been published on Valleywag.)

    The basic trick is to make it hard to remember that Linden’s definition of Resident has nothing to do with the plain meaning of the word resident. My dictionary says a resident is a person who lives somewhere permanently or on a long term basis. Linden’s definition of Residents, however, has nothing to do with users at all — it measures signups for an avatar. (Get it? The avatar, not the user, is the resident of Second Life.)

    The obvious costume-party assumption is that there is one avatar per person, but that’s wrong. There can be more than one avatar per account, and more than one account per person, and there’s no public explanation of which of those units Residents measures, and thus no way to tell anything about how many actual people use Second Life. (An embarrassingly First Life concern, I know.)

    Confused yet? Wait, there’s less! Linden’s numbers also suggest that the Residents figure includes even failed attempts to use the service. They reported adding their second million Residents between mid-October and December 14th, but they also reported just shy of 810 thousand logins for the same period. One million new Residents but only 810K logins leaves nearly 200K new Residents unaccounted for. Linden may be counting as Residents people who signed up and downloaded the client software, but who never logged in, or there may be some other reason for the mismatched figures, but whatever the case, Residents is remarkably inflated with regards to the published measure of use.
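    The arithmetic behind that gap is worth spelling out. A minimal sketch, using the rounded figures quoted above (the exact reported numbers may differ slightly):

```python
# Back-of-the-envelope check of the Residents-vs-logins gap described
# above, using the post's rounded figures for mid-October through
# December 14th. Numbers are illustrative, taken from the text.

new_residents = 1_000_000  # "their second million Residents"
logins = 810_000           # "just shy of 810 thousand logins"

# New Residents with no recorded login at all in the same period.
unaccounted = new_residents - logins
print(unaccounted)  # 190000, the "nearly 200K" left unaccounted for
```

    However Linden counts, that residue cannot represent people who actually used the service during the period.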

    (If there are any actual reporters reading this and doing a big cover story on Linden, you might ask about how many real people use Second Life regularly, as opposed to Residents or signups or avatars. As I write those words, though, I realize I might as well be asking Business Week to send me a pony for my birthday.)

    Like a push-up bra, Linden’s trick is as effective as it is because the press really, really wants to believe:

  • “It has a population of a million.” — Richard Siklos, New York Times
  • “In the Internet-based virtual world known as Second Life, for instance, more than 1 million citizens have created representations of themselves known as avatars…” — Michael Yessis, USA TODAY
  • “Since it started about three years ago, the population of Second Life has grown to 1.2 million users.” — Peter Valdes-Dapena, CNN
  • “So far, it’s signed up 1.3 million members.” — David Kirkpatrick, Fortune

    Professional journalists wrote those sentences. They work for newspapers and magazines that employ (or used to employ) fact-checkers. Yet here they are, supplementing Linden’s meager PR budget by telling their readers that Residents measures something it actually doesn’t.

    This credulity appears even in the smallest items. I discovered the “Residents vs Logins” gap when I came across a Business 2.0 post by Erick Schonfeld, where he included the mismatched numbers while congratulating Linden on a job well done. When I asked the obvious question in the comments — How come there are fewer logins than new Residents in the same period? — I got a nice email from Mr. Schonfeld, complimenting me on a good catch.

    Now I’m generally pretty enthusiastic about taking credit where it isn’t due, but this bit of praise failed to meet even my debased standards. The post was a hundred words long, and it had only two numbers in it. I didn’t have to use forensic accounting to find the discrepancy, I just used subtraction (an oft-overlooked tool in the journalistic toolkit, but surprisingly effective when dealing with numbers.)

    This is the state of business reporting in an age when even the pros want to roll with the cool blogger kids. Got a paragraph that contains only two numbers, and they don’t match? No problem! Post it anyway, and on to the next thing.

    The prize bit of PReporting so far, though, has to be Elizabeth Corcoran’s piece for Forbes called A Walk on the Virtual Side, where she claimed that Second Life had recently passed “a million unique customers.”

    This is three lies in four words. There isn’t one million of anything human inhabiting Second Life. There is no one-to-one correlation between Residents and users. And whatever Residents does measure, it has nothing to do with paying customers. The number of paid accounts is in the tens of thousands, not the millions (and remember, if you’re playing along at home, there can be more than one account per person. Kits, cats, sacks, and wives, how many logged into St. Ives?)

    Despite the credulity of the Fourth Estate (Classic Edition), there are enough questions being asked in the weblogs covering Second Life that the usefulness is going to drain out of the ‘Resident™ doesn’t mean resident’ trick over the next few months. We’re going to see three things happen as a result.

    The first thing that’s going to happen, or rather not happen, is that the regular press isn’t going to go back over this story looking for real figures. As much as they’ve written about the virtual economy and the next net, the press hasn’t really covered Second Life as a business story or a tech story so much as a trend story. The sine qua non of trend stories is that a trend is fast-growing. The Residents figure was never really part of the story; it just provided permission to write about how crazy it is that all the kids these days are getting avatars. By the time any given writer was pitching that story to their editors, any skepticism about the basic proposition had already been smothered.

    No journalist wants to have to write “When we told you that Second Life had 1.3 million members, we in no way meant to suggest that figure referred to individual people. Fortune regrets any misunderstanding.” And since no one wants to write that, no one will. They’ll shift their coverage without pointing out the shift to their readers.

    The second thing that is going to happen is an increase in arguments of the form “We mustn’t let Linden’s numbers blind us to the inevitability of the coming metaverse.” That’s the way it is with things we’re asked to take on faith — when A works, it’s evidence of B, but if A isn’t working as well as everyone thought, it’s suddenly unrelated to B.

    Finally, there is going to be a spike in the number of posts claiming that the two million number was never important anyway, that the press’s misreporting was all an innocent mistake, and that Linden was planning to call those reporters first thing Monday morning and explain everything. Tateru Nino has already kicked off this genre with a post entitled The Value of One. The flow of her argument is hard to synopsize, but you can get a sense of it from this paragraph:

    So, a hundred thousand, one million, two million. Those numbers mean something to us, but not because they have intrinsic, direct meaning. They have meaning because they’re filtered through the media, disseminated out into the world, believed by people, who then act based on that belief, and that is where the meaning lies.

    Expect more, much more, of this kind of thing in 2007.

  • Comments (10) + TrackBacks (0) | Category: social software

    December 12, 2006

    Second Life: What are the real numbers?

    Posted by Clay Shirky

    Second Life is heading towards two million users. Except it isn’t, really. We all know how this game works, and how it has worked since the earliest days of the web:

    Member of the Business Press: “How many users do you have?”
    CEO of Startup: (covers phone) “Hey guys, how many rows in the ‘users’ table?”
    [Sound F/X: Typing]
    Offstage Sysadmin: “One million nine hundred and one thousand one hundred and seventy-three.”
    CEO: (Into phone) “We have one point nine million users.”

    Someone who tries a social service once and bails isn’t really a user any more than someone who gets a sample spoon of ice cream and walks out is a customer.

    So here’s my question — how many return users are there? We know from the startup screen that the advertised churn of Second Life is over 60% (as I write this, it’s 690,800 recent users to 1,901,173 signups, or 63%.) That’s not stellar but it’s not terrible either. However, their definition of “recently logged in” includes everyone in the last 60 days, even though the industry standard for reporting unique users is 30 days, so we don’t actually know what the apples to apples churn rate is.
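    The churn arithmetic above can be checked in a couple of lines (figures are the startup-screen snapshot quoted in the text; "recent" is Linden's 60-day window, twice the 30-day industry standard):

```python
# Churn as advertised on the Second Life startup screen, per the
# snapshot quoted above: signups vs. users who logged in "recently".

signups = 1_901_173
recent_logins = 690_800

churn = 1 - recent_logins / signups
print(f"{churn:.1%}")  # 63.7%, which the text rounds to 63%
```

    Because the 60-day window inflates the returning-user count, this is a floor on the true 30-day churn, not an estimate of it.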

    At a guess, Second Life churn measured in the ordinary way is in excess of 85%, with a surge of new users being driven in by the amount of press the service is getting. The wider the Recently Logged In reporting window is, the bigger the bulge of recently-arrived-but-never-to-return users that gets counted in the overall numbers.

    I suspect Second Life is largely a “Try Me” virus, where reports of a strange and wonderful new thing draw the masses to log in and try it, but whose ability to retain anything but a fraction of those users is limited. The pattern of a Try Me virus is a rapid spread of first time users, most of whom drop out quickly, with most of the dropouts becoming immune to later use. Pointcast was a Try Me virus, as was LambdaMOO, the experiment that Second Life most closely resembles.

    Press Pass

    I have been watching the press reaction to Second Life with increasing confusion. Breathless reports of an Immanent Shift in the Way We Live® do not seem to be accompanied by much skepticism. I may have been made immune to the current mania by ODing on an earlier belief in virtual worlds:

    Similar to the way previous media dissolved social boundaries related to time and space, the latest computer-mediated communications media seem to dissolve boundaries of identity as well. […] I know a respectable computer scientist who spends hours as an imaginary ensign aboard a virtual starship full of other real people around the world who pretend they are characters in a Star Trek adventure. I have three or four personae myself, in different virtual communities around the Net. I know a person who spends hours of his day as a fantasy character who resembles “a cross between Thorin Oakenshield and the Little Prince,” and is an architect and educator and bit of a magician aboard an imaginary space colony: By day, David is an energy economist in Boulder, Colorado, father of three; at night, he’s Spark of Cyberion City—a place where I’m known only as Pollenator.

    This wasn’t written about Second Life or any other 3D space, it was Howard Rheingold writing about MUDs in 1993. This was a sentiment I believed and publicly echoed at the time. Per Howard, “MUDs are living laboratories for studying the first-level impacts of virtual communities.” Except, of course, they weren’t. If, in 1993, you’d studied mailing lists, or usenet, or irc, you’d have a better grasp of online community today than if you’d spent a lot of time in LambdaMOO or Cyberion City. Où sont les TinyMUCKs d’antan?

    You can find similar articles touting 3D spaces shortly after the MUD frenzy. Ready for a blast from the past? “August 1996 may well go down in the annals of the Internet as the turning point when the Web was released from the 2D flatland of HTML pages.” Oops.

    So what accounts for the current press interest in Second Life? I have a few ideas, though none is concrete enough to call an answer yet.

    First, the tech beat is an intake valve for the young. Most reporters don’t remember that anyone has ever wrongly predicted a bright future for immersive worlds or flythrough 3D spaces in the past, so they have no skepticism triggered by the historical failure of things like LambdaMOO or VRML. Instead, they hear of a marvelous thing — A virtual world! Where you have an avatar that travels around! And talks to other avatars! — which they then see with their very own eyes. How cool is that? You’d have to be a pretty crotchety old skeptic not to want to believe. I bet few of those reporters ever go back, but I’m sure they’re sure that other people do (something we know to be false, to a first approximation, from the aforementioned churn.) Second Life is a story that’s too good to check.

    Second, virtual reality is conceptually simple. Unlike ordinary network communications tools, which require a degree of subtlety in thinking about them — as danah notes, there is no perfect metaphor for a weblog, or indeed most social software — Second Life’s metaphor is simplicity itself: you are a person, in a space. It’s like real life. (Only, you know, more second.) As Philip Rosedale explained it to Business Week “[I]nstead of using your mouse to move an arrow or cursor, you could walk your avatar up to an Amazon.com (AMZN) shop, browse the shelves, buy books, and chat with any of the thousands of other people visiting the site at any given time about your favorite author over a virtual cuppa joe.”

    Never mind that the cursor is a terrific way to navigate information; never mind that Amazon works precisely because it dispenses with rather than embraces the cyberspace metaphor; never mind that all the “Now you can shop in 3D” efforts like the San Francisco Yellow Pages tanked because 3D is a crappy way to search. The invitation here is to reason about Second Life by analogy, which is simpler than reasoning about it from experience. (Indeed, most of the reporters writing about Second Life seem to have approached it as tourists, getting stories about it from natives.)

    Third, the press has a congenital weakness for the Content Is King story. Second Life has made it acceptable to root for the DRM provider, because of their enlightened user agreements concerning ownership. This obscures the fact that an enlightened attempt to make digital objects behave like real world objects suffers from exactly the same problems as an unenlightened attempt, a la the RIAA and MPAA. All the good intentions in the world won’t confer atomicity on binary data. Second Life is pushing against the ability to create zero-cost perfect copies, whereas Copybot relied on that most salient of digital capabilities, which is how Copybot was able to cause so much agida with so little effort — it was working with the actual, as opposed to metaphorical, substrate of Second Life.

    Finally, the current mania is largely push-driven. Many of the articles concern “The first person/group/organization in Second Life to do X”, where X is something like have a meeting or open a store — it’s the kind of stuff you could read off a press release. Unlike Warcraft, where the story is user adoption, here most of the stories are about provider adoption, as with the Reuters office or the IBM meeting or the resident creative agencies. These are things that can be created unilaterally and top-down, catnip to the press, who are generally in the business of covering the world’s deciders.

    The question about American Apparel, say, is not “Did they spend money to set up stores in Second Life?” Of course they did. The question is “Did it pay off?” We don’t know. Even the recent Second Life millionaire story involved eliding the difference between actual and potential wealth, a mistake you’d have thought 2001 would have chased from the press forever. In illiquid markets, extrapolating that a hundred of X are worth the last sale price of X times 100 is a fairly serious error.
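    The valuation error in illiquid markets can be made concrete. A hedged sketch with entirely hypothetical numbers: marking 100 units at the last sale price implicitly assumes unlimited demand at that price, while actually selling walks down a thin book of bids:

```python
# Hypothetical illustration (all numbers invented) of why "last sale
# price x holdings" overstates wealth in an illiquid market: a real
# liquidation walks down a thin book of bids at ever-lower prices.

def liquidate(quantity, bids):
    """Sell `quantity` units into a list of (price, depth) bids, best first."""
    proceeds = 0.0
    remaining = quantity
    for price, depth in bids:
        fill = min(remaining, depth)
        proceeds += fill * price
        remaining -= fill
        if remaining == 0:
            break
    return proceeds

last_sale = 10.0                                      # most recent small trade
bids = [(10.0, 5), (8.0, 10), (5.0, 20), (2.0, 100)]  # thin standing bids
holdings = 100

paper_value = last_sale * holdings   # the extrapolation the press makes
realized = liquidate(holdings, bids)

print(paper_value, realized)  # 1000.0 vs 360.0: paper wealth, real proceeds
```

    The deeper the sale cuts into the book, the larger the gap between paper value and realized proceeds, which is why multiplying the last trade by total holdings overstates virtual fortunes.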

    Artifacts vs. Avatars

    Like video phones, which have been just one technological revolution away from mass adoption since 1964, virtual reality is so appealingly simple that its persistent failure to be a good idea, as measured by user adoption, has done little to dampen enthusiasm for the coming day of Keanu Reeves interfaces and Snow Crash interactions.

    I was talking to Irving Wladawsky-Berger of IBM about Second Life a few weeks ago, and his interest in the systems/construction aspect of 3D seems promising, in the same way video phones have been used by engineers who train the camera not on their faces but on the artifacts they are talking about. There is something to environments for modeling or constructing visible things in communal fashion, but as with the video phone, they will probably involve shared perceptions of artifacts, rather than perceptions of avatars.

    This use, however, is specific to classes of problems that benefit from shared visual awareness, and that class is much smaller than the current excitement about visualization would suggest. More to the point, it is at odds with the “Son of MUD+thePalace” story currently being written about Second Life. If we think of a user as someone who has returned to a site after trying it once, I doubt that the number of simultaneous Second Life users breaks 10,000 regularly. If we raise the bar to people who come back for a second month, I wonder if the site breaks 10,000 simultaneous return visitors outside highly promoted events.

    Second Life may be wrought by its more active users into something good, but right now the deck is stacked against it, because the perceptions of great user growth and great value from scarcity are mutually reinforcing but built on sand. Were the press to shift to reporting Recently Logged In as their best approximation of the population, the number of reported users would shrink by an order of magnitude; were they to adopt industry-standard unique users reporting (assuming they could get those numbers), the reported population would probably drop by two orders. If the growth isn’t as currently advertised (and it isn’t), then the value from scarcity is overstated, and if the value of scarcity is overstated, at least one of the engines of growth will cool down.

    There’s nothing wrong with a service that appeals to tens of thousands of people, but in a billion-person internet, that population is also a rounding error. If most of the people who try Second Life bail (and they do), we should adopt a considerably more skeptical attitude about proclamations that the oft-delayed Virtual Worlds revolution has now arrived.

    Comments (85) + TrackBacks (0) | Category: social software

    November 20, 2006

    Social Facts, Expertise, Citizendium, and Carr

    Posted by Clay Shirky

    I want to offer a less telegraphic account of the relationship between expertise, credentials, and authority than I did in Larry Sanger, Citizendium, and the Problem of Expertise, and then say why I think the cost of coordination in the age of social software favors Wikipedia over Citizendium, and over traditionally authoritative efforts such as Britannica.

    Make a pot of coffee; this is going to be long, and boring.

    Those of us who write about Wikipedia, both pro and con, often mix two different views: descriptive — Wikipedia is/is not succeeding — and judgmental — Wikipedia is/is not good. (For the record, my view is that Wikipedia is a success, and that society is better off with Wikipedia than it would be without it.) What I love about the Citizendium proposal is that, by proposing a fusion of collaborative construction and expert authority, it presses people who dislike or mistrust Wikipedia to say whether they think that the wiki form of communal production can be improved, or is per se bad.

    Nicholas Carr, in What will kill Citizendium, came out in the latter camp. Explaining why he thinks Citizendium is a bad idea, he offers his prescription for the right way to do things: “[…] you keep the crowd out of it and, in essence, create a traditional encyclopedia.” No need for that ‘in essence’ there. The presence of the crowd is what distinguishes wiki production; this is a defense of the current construction of authority, suggesting that the traditional mechanism for creating encyclopedias is the correct one, and alternate forms of construction are not.

    This is certainly a coherent point of view, but one that I believe will fail in practical terms, because it is uneconomical. (Carr, in his darker moments, seems to believe something similar, but laments what the economics of peer production mean. This is a “Wikipedia is succeeding/is not good” argument.) In particular, I believe that the costs of nominating and then deferring to experts will make Citizendium underperform its competition, relative to the costs of merely involving experts as ordinary participants, as Wikipedia does.

    Expertise, Credentials, and Authority

    First, let me say that I am a realist, which is to say that I believe in a reality that is not socially constructed. The materials that make up my apartment, wood and stone and so on, actually exist, and are independent of any observer. A real tree that falls in a real forest displaces real air, even if no one is there to interpret that as sound.

    I also believe in social facts, things that are true because everyone agrees they are true. My apartment itself is made of real stuff, but its my-ness is built on agreements: my landlady leases it to me, that lease is predicated on her ownership, that ownership is recognized by the city of New York, and so on. Social facts are no less real than non-social facts — my apartment is actually my apartment, my wife is my wife, my job is my job — they are just real for different reasons.

    If everyone stopped agreeing that my job was my job (I quit or was fired, say), I could still walk down to NYU and draw network diagrams on a whiteboard at 1pm on a Tuesday, but no one would come to listen, because my ramblings wouldn’t be part of a class anymore. I wouldn’t be faculty; I’d be an interloper. Same physical facts — same elevator and room and white board and even the same person — but different social facts.

    Some facts are social, some are not. I believe that Sanger, Carr and I all agree that expertise is not a social fact. As Carr says ‘An architect does not achieve expertise through some arbitrary social process of “credentialing.” He gains expertise through a program of study and apprenticeship in which he masters an array of facts and techniques drawn from such domains as mathematics, physics, and engineering.’ I agree with that, and amended my earlier sloppiness in distinguishing between having expertise and being an expert, after being properly called on it by Eric Finchley in the comments.

    However, though Carr’s description is accurate, it is incomplete: an architect does not achieve expertise through credentialing, but neither does an architect become an architect through expertise alone. An architect is someone with expertise who has also been granted an architect’s credentials. These credentials are ideally granted on proof of the kinds of antecedents that indicate expertise — in the case of architects, relevant study (itself certified with the social fact of a degree) and significant professional work.

    Consider the following case: a young designer with an architect’s degree designs a building, and a credentialed architect working at the same firm then affixes her stamp to the drawings. The presence of the stamp means that a contractor can use the drawings to do certain kinds of work; without it the drawings shouldn’t be used for such things. Both the expertise and the credentials are necessary to make a set of drawings usable, but in this fairly common scenario, the expertise and the credentials are held by different people.

    This system is designed to produce enough liability for architects that they will supervise the uncredentialed; if they fail to, their own credentials will be taken away. Now consider a disbarred architect (or lawyer or doctor). There has been no change in their expertise, but a great change in their credentials. Most of the time, we can take the link between authority, credentials, and expertise for granted (it’s why we have credentials, in fact), but in edge cases, we can see them as separate things.

    The clarity to be gotten from all this definition is a bit of a damp squib: Carr and I are in large agreement about the Citizendium proposal. He thinks that conferring authority is the hard challenge for Citizendium; I think that conferring authority is the hard challenge for Citizendium. He thinks that the openness of a wiki is incompatible with Citizendium’s proposed form of conferring authority, as do I. And we both believe this weakness will be fatal.

    Where we disagree is in what this means for society.

    The Cost of Credentials

    Lying on a bed in an emergency room, you think “Oh good, here comes the doctor.” Your relief comes in part because the doctor has the expertise necessary to diagnose and treat you, and in part because the doctor has the authority to do things like schedule you for surgery if you need it. Whatever your anxieties at that moment, they don’t include the possibility that the nurses will ignore the doctor’s diagnosis, or refuse to treat you in the manner the doctor suggests.

    You don’t worry that expertise and authority are different kinds of things, in other words, because they line up perfectly from your point of view. You simply ascribe to the visible doctor many things that are actually true of the invisible system the doctor works in. The expertise resides in the doctor, but the authority is granted by the hospital, with credentials helping bridge the gap.

    So here’s the thing: it’s incredibly expensive to create and maintain such systems, including especially the cost of creating and policing credentials and authority. We have to make and enforce myriad refined distinctions — not just physician and soldier and chairman but ‘admitting physician’ and ‘second lieutenant’ and ‘acting chairman.’ We don’t let people get married or divorced without the presence of official oversight. Lots of people can drive the bus; only bus drivers may drive the bus. We make it illegal to impersonate an officer. And so on, through innumerable tiny, self-reinforcing choices, all required to keep the links between expertise, credentials and authority functional.

    These systems are beneficial for society. However, they are not absolutely beneficial, they are only beneficial when their benefits outweigh their costs. And we live in an era where all kinds of costs — social costs, coordination costs, Coasean costs — are undergoing a revolution.

    Cost Changes Everything

    Earlier, writing about folksonomies, I said “We need a phrase for the class of comparisons that assumes that the status quo is cost-free.” We still need that; I propose “Cost-free Present” — when people believe we live in a cost-free present, they also believe that any value they see in the world is absolute, not relative. A related assumption is that any new system that has disadvantages relative to the present one is therefore inferior; if the current system creates no costs, then any proposed change that creates new bad outcomes, whatever the potential new good outcomes, is worse than maintaining the status quo.

    Meanwhile, out here in the real world, cost matters. As a result, when the cost structure for creating, say, an encyclopedia changes, our existing assumptions about encyclopedic value have to be re-examined, because current encyclopedic values are relative, not absolute. It is possible for low-cost, low-value systems to be better than high-cost, high-value systems in the view of the society adopting them. If the low-cost system can increase in value over time while remaining low cost, even better.

    Pick your Innovator’s Dilemma: the Gutenberg bible was considerably less beautiful than scribal copies, the Model T was less well constructed than the Curved Dash Olds, floppy disks were considerably less reliable than hard drives, et cetera. So with Wikipedia and Encyclopedia Britannica: Wikipedia began life as a low-cost, low-value alternative, but it was accessible, shareable, and improvable. Britannica, by contrast, has always been high-value, but it is both difficult and expensive for readers to get to, and worse, they can’t use what they see — a Britannica reader can’t copy and post an article, can’t email the contents to their friends, can’t even email those friends the link with any confidence that they will be able to see it.

    Barriers to both access and re-use are built into the Britannica cost structure, and without those barriers, it will collapse. Nothing about the institution of Britannica has changed in the five years of Wikipedia’s existence, but in the current ecosystem, the 1768 model of creation — you pay us and we make an Encyclopedia — has been transformed from a valuable service to a set of self-perpetuating, use-crippling barriers.

    This is what’s wrong with Cost-free Present arguments: the principal competitive advantages of Wikipedia over Britannica, such as shareability or rapid refactoring (as of the Planet entry after Pluto’s recent demotion), are things which were simply not possible in 1768. Wikipedia is not a better Britannica than Britannica; it is a better fit for the current environment than Britannica is. The possible virtues of an encyclopedia now include free universal access and unlimited re-use. As a result, maintaining Britannica costs more in a world with Wikipedia than it did in a world without it, in the same way scribal production became more expensive after the invention of movable type than before, without the scribes themselves doing anything different.

    If we do what we always did, we’ll get the result we always got

    Citizendium seems predicated on several related ideas about cost and value: that having expertise and being an expert are roughly the same thing; that the costs of certifying experts will be relatively low; that the costs of building and running software that confers on experts a higher degree of authority than on non-expert users will be similarly low; and that the appeal to non-experts of participating in such a system will be high. If these things are true, then a hybrid of voluntary participation and expert authority will be more valuable than either extreme.

    I am betting that those things aren’t true, because the costs of certifying experts and ensuring deference to them — the costs of creating and sustaining the necessary social facts — will sandbag the system, making it too annoying to use.

    The first-order costs will come from the certification and deference itself. By proposing to recognize external credentialing mechanisms, Citizendium sets itself up to take on the expenses of determining thresholds and overlaps of expertise. A master’s student in psychology doing work on human motivation may know more about behavioral economics than a Ph.D. in neo-classical economics does. It would be easy to label them both experts, but on what grounds should their disputes be adjudicated?

    On Wikipedia, the answer is simple — deference is to contributions, not to contributors, and is always provisional. (As the Pluto example shows, even things as seemingly uncontentious as planethood turn out to be provisional.) Wikipedia certainly has management costs (all social systems do), but it has the advantage that those costs are internal, and much of the required oversight is enforced by moral suasion. It doesn’t take on the costs of forcing deference to experts because it doesn’t recognize the category of ‘expert’ as primitive in the system. Experts contribute to Wikipedia, but without receiving any special consideration.

    Citizendium’s second-order costs will come from policing the system as a whole. If the process of certification and enforcement of deference becomes even slightly annoying to the users, they will quickly become non-users. The same thing will happen if the projection of force needed to manage Citizendium delegitimizes the system in the eyes of the contributors.

    The biggest risk with Wikipedia is ongoing: lousy or malicious edits, which happen countless times a day. The biggest risk with Citizendium, on the other hand, is mainly up front, in the form of user inaction. The Citizendium project assumes that the desire of ordinary users to work alongside and be guided by experts is high, but everything in the proposal seems to raise the costs of contribution, relative to Wikipedia. If users do not want to participate in a system where the costs of participating are high, Citizendium will simply fail to grow.

    Comments (12) + TrackBacks (0) | Category: social software

    September 22, 2006

    What is the problem with deference to experts on Wikipedia?

    Posted by Clay Shirky

    Interesting pair of comments in Larry Sanger, Citizendium, and the Problem of Expertise, on the nature and seriousness of experts not contributing to Wikipedia:
    22. David Gerard on September 22, 2006 07:08 AM writes…

    Plenty of people complain of Wikipedia’s alleged “anti-expert bias”. I’ve yet to see solid evidence of it. Unless “expert-neutral” is conflated to mean “anti-expert.” Wikipedia is expert-neutral - experts don’t get a free ride. Which is annoying when you know something but are required to show your working, but is giving us a much better-referenced work.

    One thing the claims of “anti-expert bias” fail to explain is: there’s lots of experts who do edit Wikipedia. If Wikipedia is so very hostile to experts, you need to explain their presence.

    23. engineer_scotty on September 22, 2006 01:19 PM writes…

    I’ve been studying the so-called “expert problem” on Wikipedia—and I’m becoming more and more convinced that it isn’t an expert problem per se; it is a jackass problem. As in some Wikipedians are utter jackasses—in this context, “jackass” is an umbrella category for a wide variety of problem behaviors which are contrary to Wikipedia policy—POV pushing, advocacy of dubious theories, vandalism, abusive behavior, etc. Wikipedia policy is reasonably good at dealing with vandalism, abusive behavior and incivility (too good, some think, as WP:NPA occasionally results in good editors getting blocked for wielding the occasional cluestick ‘gainst idiots who sorely need it). It isn’t currently good at dealing with POV-pushers and crackpots whose edits are civil but unscholarly, and who repeatedly insert dubious material into the encyclopedia. Recent policy proposals are designed to address this.

    Many experts who have left, or otherwise have expressed dissatisfaction with Wikipedia, fall into two categories: Those who have had repeated bad experiences dealing with jackasses, and are frustrated by Wikipedia’s inability to restrain said jackasses; and those who themselves are jackasses. Wikipedia has seen several recent incidents, including one this month, where notable scientists have joined the project and engaged in patterns of edits which demonstrated utter contempt for other editors of the encyclopedia (many of whom were also PhD-holding scientists, though lesser known), attempted to “own” pages, attempted to portray conjecture or unpublished research as fact, or have exaggerated the importance or quality of their own work. When challenged, said editors have engaged in (predictable) tirades accusing the encyclopedia of anti-intellectualism and anti-expert bias—charges we’ve all heard before.

    The former sort of expert the project should try to keep. The latter, I think the project is probably better off without; and I suspect they would wear out their welcomes quickly on Citizendium as well.
    I would love to see a few case studies, linked to the History and Talk pages of a few articles— “Here was the expert contribution, here was the jackass edit, this is what was lost”, etc. Reading Engineer Scotty’s comment, and given the general sense of outraged privilege that seems to run through much of the “Experts have their work edited without permission!” literature, I am guessing that the problem is not so much experts contributing and then being driven away as it is non-contributions by people unwilling to work in an environment where their contributions aren’t sacrosanct.

    Comments (5) + TrackBacks (0) | Category: social software

    September 20, 2006

    Larry Sanger on me on Citizendium

    Posted by Clay Shirky

    A response from Larry Sanger, posted here in its entirety:

    Thanks to Clay Shirky for the opportunity to reply here on Many2Many
    to his “Larry Sanger, Citizendium, and the Problem of Expertise.” First, two points about Clay’s style of argumentation, which I simply cannot let go without comment. Then some replies to his actual arguments.

    1. Allow me to identify my own core animating beliefs, thank you very much.

    Clay’s piece has an annoying tendency to characterize my assumptions uncharitably and without evidence, and to psychologize about me. Thus, Clay says things like: “Sanger’s published opinions seem based on three beliefs”; “Sanger wants to believe that expertise can survive just fine outside institutional frameworks”; “Sanger’s core animating belief seems to be a faith in experts”; “Sanger’s view seems to be that expertise is a quality like height”; and “Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us.”

    I find myself strongly disagreeing with Clay’s straw Sanger. However, I am not that Sanger! Last time I checked, I was made of flesh and blood, not straw.

    2. May I borrow that crystal ball when you’re done with it?

    Repeatedly, Clay makes dire predictions for the Citizendium. “Structural issues…will probably prove quickly fatal”; “institutional overhead…will stifle Citizendium”; “policing certification will be a common case, and a huge time-sink” so “the editor-in-chief will then have to spend considerable time monitoring that process”; “Citizendium will re-create the core failure of Nupedia”; “Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.”

    I think Clay lacks any good reason to think the Citizendium will fail; but clearly he badly wants it to fail, and his comments are animated by wishful thinking. That, anyway, seems the most parsimonious explanation. To borrow one of Clay’s phrases, and return him the favor: it is interesting “how consistent Clay has been about his beliefs” on the low value of officially-recognized expertise in online communities. “His published opinions seem based on” the belief in the supreme value and efficacy of completely flat self-organizing communities. The notion of experts being given special authority, even very circumscribed authority, does extreme violence to this “core animating belief” (to borrow another of Clay’s phrases). It must, therefore, be impossible.

    Less flippantly now. I do make a point of being properly skeptical about all of my projects—that’s another thing I’ve been consistent about. You can probably still find writings from 2000 and 2001 in which I said I didn’t know whether Nupedia or Wikipedia would work. I have no idea if the Citizendium will work. What I do know is that it is worth a try, and we’ll do our best to solve problems that we can anticipate and as they arise.

    By the way, there’s a certain irony in the situation, isn’t there? Clay Shirky, respected expert about online communities, holds forth about a new proposed online community, and does what so many experts love to do: make bold predictions about the prospects of items in their purview. Meanwhile, I, the alleged expert-lover, cast aspersions on his abilities to make such predictions. If my “core animating belief” were “a faith in experts,” why would I lack faith in this particular expert?

    3. I want to be a social fact, too!

    Let’s move on to Clay’s actual arguments. He begins his first argument with something perfectly true, that expertise (in the relevant sense, an operational concept of expertise) is a social fact, that this social fact is conferred (not always formally, but often) by institutions, and that, therefore, one cannot have expertise without (in some sense) “institutional overhead.” So far, so good. The current proposal—which is open to debate, at this early stage, even from Clay himself—addresses this situation by proposing to avoid editor application review committees in favor of self-designation of editorial status. The details are relevant, so let me quote them from the FAQ:

    We do not want editors to be selected by a committee, which process is too open to abuse and politics in a radically open and global project like this one is. Instead, we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot [most] false claims to editorship.
    What then is Clay’s criticism? “The problem” at the beginning of the argument was that “experts are social facts.” Yeah, so? So, says Clay,
    Sanger expects that decertification will only take place in unusual cases. This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.

    My initial reaction to this was: how on Earth could Shirky know all that? Furthermore, isn’t it quite obvious that, far from being a static proposal, this project is going to be able to move nimbly (I usually propose radical changes and refinements to my projects) in order to solve just such problems, should they arise?

    In any event, based on my own experience, I counter-predict that Clay will probably be wrong in his prediction. There will probably be a lot of people who humorously, out of cluelessness, or whatever, claim to be editors.

    For the easy cases, which will probably be most of them, constables will be able to rein people in, nearly as easily as they can rein in vandalism. No doubt we will have a standard procedure for achieving this. As to the borderline (“edge”) cases (e.g., some grad students and independent scholars), Clay gives us no reason to think that the editor-in-chief will have to spend large amounts of time fighting about them. Unlike Wikipedia, and like many OSS projects, there will be a group of people authorized to select the “release managers” (so to speak). This policy will be written into the project charter, support of which will be a requirement of participation in the project.

    The review process for editor declarations, therefore, will be clear and well-accepted enough—that, after all, is the whole point of establishing a charter and “rule of law” in the online community—that the process can be expected to work smoothly. Mind you, it will be needed because of course there will be borderline cases, and disgruntled people, but Clay has given no reason whatsoever to think it will dominate the entire proceedings.

    Besides, this is a responsibility I propose to delegate to a workgroup; I will probably be too busy to be closely involved in it.

    Far from being persuasive, it is actually ironic that Clay cites primordial fights I had with trolls on Wikipedia as evidence of his points. It was precisely due to a lack of clearly-circumscribed authority and widely-accepted rules that I had to engage in such fights. Consequently, the Citizendium is setting up a charter, editors, and constables precisely to prevent such problems.

    4. Warm and fuzzy yes, a hierarchy no.

    Clay nicely sums up his next argument this way:

    Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II,) take to distributed protest (q.v.Hank the Angry Drunken Dwarf), or simply opt-out (Nupedia in a nutshell.)

    (By the way, Clay is completely wrong about citizen participation in Nupedia. They made up the bulk of authors in the pipeline. Our first article was by a grad student. An undergrad wrote several biology articles. So many myths have been made about Nupedia, so completely divorced from reality, that it has become a fascinating and completely fact-free Rorschach test for everything bad that anyone wants to say about expert authority in open collaboration.)

    The Citizendium is, by Clay’s lights, a radical experiment that does violence to his cherished notions of what online communities should be like. Persons inclined to “debauch self-certification” as on IRC chatrooms will be removed from the project; and others will not protest at such perfectly appropriate treatment, because we will have already announced this as a policy.

    Through self-selection the community can be expected to be in favor of such policies; those who dislike them will always have Wikipedia.

    That’s part of the beauty of a world with both a Citizendium and a Wikipedia in it. Those who (like you, Clay) instinctively hate the Citizendium—we’ve seen a little of this in blogs lately, calling the very idea “Wikipedia for stick-in-the-muds,” “Wikipedia for control freaks,” a “horror,” etc.—will always have Wikipedia. I strongly encourage you to stick with Wikipedia if you dislike the idea of the Citizendium that much. That will make matters easier for everyone. If other people want to organize themselves in a different way—a way you’d never dream of doing—then please give them room to do so. As a result we’ll have one project for people who agree with you, Clay, and one for people who agree with me, and the world will be richer.

    Clay does give some more support for thinking that an editor-guided wiki is unworkable. He says that the viability of a community resembles a “U curve” with one end being a total hierarchy and the other end being “a functioning community with a core group.” Apparently, projects that are neither hierarchies nor communities, which Clay implies is where the Citizendium would fit, would incur too many “costs of being an institution” and “significant overhead of process.” What I find particularly puzzling about this is how he describes the ends of the U curve. I would have expected him to say hierarchy on one end and a totally flat, leaderless community on the other end. But instead, opposite the hierarchy is “a functioning community with a core group.” How is it, then, that the Citizendium as proposed would not constitute “a functioning community with a core group”?

    Let me put this more plainly, setting aside Clay’s puzzling theoretical apparatus. What the world has yet to test is the notion of experts and ordinary folks (and remember: experts working outside their areas of expertise are then “ordinary folks”) working together, shoulder-to-shoulder, on a single project according to open, open source principles. That is the radical experiment I propose. This actually hearkens back to the way OSS projects essentially work. So far, to my knowledge, experts have not been invited in to “gently guide” open content projects in a way roughly analogous to the way that senior developers gently guide OSS projects, deciding which changes are in the next release and which aren’t. You might say that the analogy does not work because senior developers of OSS projects are chosen based on the merits of their contributions within the project. But what if we regard an encyclopedia as continuous with the larger world of scholarship, so that scholarly work outside of the narrow province of a single project becomes relevant for determining a senior content developer? For an encyclopedia, that’s simply a sane variant on the model.

    Whereas OSS projects have special, idiosyncratic requirements, encyclopedias frankly do not. There’s no point to creating an insular community, an “in group” of people who have mastered the particular system, because it’s not about the system—it’s about something any good scholar can contribute to, an encyclopedia. Then, if the larger, self-selecting community invites and welcomes such people to join them as “senior content developers,” why not think the analogy with OSS is adequately preserved?

    (For more of the latter argument please see a new essay I am going to try to circulate among academics.)

    Comments (9) + TrackBacks (0) | Category: social software

    September 18, 2006

    Larry Sanger, Citizendium, and the Problem of Expertise

    Posted by Clay Shirky

    The interesting thing about Citizendium, Larry Sanger’s proposed fork of Wikipedia designed to add expert review, is how consistent Sanger has been about his beliefs over the last 5 years. I’ve been reviewing the literature from the dawn of Wikipedia, born from the failure of the process-laden and expert-driven Nupedia, and from then to now, Sanger’s published opinions seem based on three beliefs:

    1. Experts are a special category of people, who can be readily recognized within their domains of expertise.
    2. A process of open creation in which experts are deferred to as of right will be superior to one in which they are given no special treatment.
    3. Once experts are identified, that deference will mainly be a product of moral suasion, and the only place authority will need to intrude are edge cases.

    All three beliefs are false.

    There are a number of structural issues with Citizendium, many related to the question of motivation on the part of the putative editors; these will probably prove quickly fatal. More interesting to me, though, is the worldview behind Sanger’s attitude towards expertise, and why it is a bad fit for this kind of work. Reading the Citizendium manifesto, two things jump out: his faith in experts as a robust and largely context-free category of people, and his belief that authority can exist largely free of expensive enforcement. Sanger wants to believe that expertise can survive just fine outside institutional frameworks, and that Wikipedia is the anomaly. It can’t, and it isn’t.

    Experts Don’t Exist Independent of Institutions

    Sanger’s core animating belief seems to be a faith in experts. He took great care to invite experts to the Nupedia Advisory Board, and he has consistently lamented that Wikipedia offers no special prerogatives for expert review, and no special defenses against subsequent editing of material written by experts. Much of his writing, and the core of Citizendium, is based on assumptions about how experts should be involved in a project like this.

    The problem Citizendium faces is that experts are social facts — society typically recognizes experts through some process of credentialling, such as the granting of degrees, professional certifications, or institutional engagement. We have a sense of what it means that someone is a doctor, a judge, an architect, or a priest, but these facts are only facts because we agree they are. If I say “I sentence you to 45 days in jail”, nothing happens. If a judge says “I sentence you to 45 days in jail”, in a court of law, dozens of people will make it their business to act on that imperative, from the bailiff to the warden to the prison guards. My words are the same as the judge’s, but the judge occupies a position of authority that gives his words an effect mine lack, an authority that exists only because enough people agree that it does.

    Sanger’s view seems to be that expertise is a quality like height — some people are obviously taller than others, and the rest of us have no problem recognizing who the tall people are. But expertise isn’t like that at all; it is in fact highly subject to shifts in context. A lawyer from New York can’t practice in California without passing the bar there. A surgeon from India can’t operate on a patient in the US without further certification. The UN representative from Yugoslavia went away when Yugoslavia did, and so on.

    As a result, you cannot have expertise without institutional overhead, and institutional overhead is what stifled Nupedia, and what will stifle Citizendium. Sanger is aware of this challenge, and offers mollifying details:

    […]we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot false claims to editorship.


    We will also no doubt need a process where people who do not have the credentials are allowed to become editors, and where (in unusual cases) people who have the credentials are removed as editors.

    Sanger et al. set the bar for editorship, editors self-certify, then, in order to get around the problems this will create, there will be an additional certification and de-certification process internal to the site. On Citizendium, if you are competent but uncredentialed, you will have to be vetted before you are allowed to ascend to the editor’s chair, and if you are credentialed but incompetent, you’re in until decertification. And, critically, Sanger expects that decertification will only take place in unusual cases.

    This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.

    Sanger himself experienced this in his fight with Cunctator at the dawn of Wikipedia; Cunc questioned Sanger’s authority, leading Sanger to defend it with increasing vigor. As Sanger said at the time “…in order to preserve my time and sanity, I have to act like an autocrat. In a way, I am being trained to act like an autocrat.” Sanger’s authority at Wikipedia required his demonstrating it, yet this very demonstration made his job harder, and ultimately untenable. This is the common case; as any parent can tell you, the exercise of presumptive authority creates the conditions under which it is tested. As a result, Citizendium will re-create the core failure of Nupedia, namely putting at the center of the effort a process whose maintenance takes more energy than can be mustered by a volunteer project.

    “We’re a Warm And Fuzzy Hierarchy”: The Costs of Enforcement

    In addition to his misplaced faith in the rugged condition of expertise, Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us. Curiously, this underestimation seems to be borne of a belief that most of the world shares his views on the appropriate deference to expertise:

    Can you really expect headstrong Wikipedia types to work under the guidance of expert types in this way?

    Probably not. But then, the Citizendium will not be Wikipedia. We do expect people who have proper respect for expertise, for knowledge hard gained, to love the opportunity to work alongside editors. Imagine yourself as a college student who had the opportunity to work alongside, and under the loose and gentle direction of, your professors. This isn’t going to be a top-down, command-and-control system. It is merely a sensible community: one where the people who have made it their life’s work to study certain areas are given a certain appropriate authority—without thereby converting the community into a traditional top-down academic editorial scheme.

    Well, can you expect the experts to want to work “shoulder-to-shoulder” with nonexperts?

    Yes, because some already do on Wikipedia. Furthermore, they will have an incentive to work in this project, because when it comes to content—i.e., what the experts really care about—they will be in charge.

    These passages evince a wounded sense of purpose: Experts are real, and it is only sensible and proper that they be given an appropriate amount of authority. The totality of the normative view on display here is made more striking because Sanger never reveals the source of these judgments. “Sensible” according to whom? How much authority is “appropriate”? How much control is implied by being “in charge”, and what happens when that control is abused?

    These responses are also mutually contradictory. Citizendium, the manifesto claims, will not be a traditional top-down academic scheme, but experts will be in charge of the content. The only way experts can be in charge without top-down imposition is if every participant internalizes respect for authority to the point that it is never challenged in the first place. One need allude only lightly to the history of social software since at least Communitree to note that this condition is vanishingly rare.

    Citizendium is based less on a system of supportable governance than on the belief that such governance will not be necessary, except in rare cases. Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II,) take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt-out (Nupedia in a nutshell.)

    The “U”-Curve of Organization and the Mechanisms of Deference

    Sanger is an incrementalist, and assumes that the current institutional framework for credentialling experts and giving them authority can largely be preserved in a process that is open and communally supported. The problem with incrementalism is that the very costs of being an institution, with the significant overhead of process, creates a U curve — it’s good to be a functioning hierarchy, and it’s good to be a functioning community with a core group, but most of the hybrids are less fit than either of the end points.

    The philosophical issue here is one of deference. Citizendium is intended to improve on Wikipedia by adding a mechanism for deference, but Wikipedia already has a mechanism for deference — survival of edits. I recently re-wrote the conceptual recipe for a Menger Sponge, and my edits have survived, so far. The community has deferred not to me, but to my contribution, and that deference is both negative (not edited so far) and provisional (can always be edited.)

    Deference, on Citizendium, will be for people, not contributions, and will rely on external credentials, a priori certification, and institutional enforcement. Deference, on Wikipedia, is for contributions, not people, and relies on behavior on Wikipedia itself, post hoc examination, and peer review. Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.

    Comments (34) + TrackBacks (0) | Category: social software

    June 7, 2006

    Reactions to Digital Maoism

    Posted by Clay Shirky

    Last week, Edge.org published Jaron Lanier’s Digital Maoism piece, warning of the dangers of new collectivism, and singling out Wikipedia for criticism. Today, Edge has several reactions to Lanier’s warnings, from people who think about social software from a variety of perspectives, including Douglas Rushkoff, Quentin Hardy, Yochai Benkler, me (and I re-print my piece here), Cory Doctorow, Kevin Kelly, Esther Dyson, Larry Sanger, Fernanda Viegas & Martin Wattenberg, Jimmy Wales, and George Dyson.

    I include my reaction here as well, as Edge doesn’t have comments.


    Jaron Lanier is certainly right to look at the downsides of collective action. It’s not a revolution if nobody loses, and in this case, expertise and iconoclasm are both relegated by some forms of group activity. However, “Digital Maoism” mischaracterizes the present situation in two ways. The first is that the target of the piece, the hive mind, is just a catchphrase, used by people who don’t understand how things like Wikipedia really work. As a result, criticism of the hive mind becomes similarly vague. Second, the initial premise of the piece — there are downsides to collective production of intellectual work — gets spread so widely that it comes to cover RSS aggregators, American Idol, and the editorial judgment of the NY Times. These are errors of overgeneralization; it would be good to have a conversation about Wikipedia’s methods and governance, say, but that conversation can’t happen without talking about its actual workings, nor can it happen if it is casually lumped together with other, dissimilar kinds of group action.

    The bigger of those two mistakes appears early:

    “The problem I am concerned with here is not the Wikipedia in itself. It’s been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. […] No, the problem is in the way the Wikipedia has come to be regarded and used; how it’s been elevated to such importance so quickly.”

    Curiously, the ability of the real Wikipedia to adapt to new challenges is taken at face value. The criticism is then directed instead at people proclaiming Wikipedia as an avatar of a golden era of collective consciousness. Let us stipulate that people who use terms like hive mind to discuss Wikipedia and other social software are credulous at best, and that their pronouncements tend towards caricature. What “Digital Maoism” misses is that Wikipedia doesn’t work the way those people say it does.

    Neither proponents nor detractors of hive mind rhetoric have much interesting to say about Wikipedia itself, because both groups ignore the details. As Fernanda Viegas’s work shows, Wikipedia isn’t an experiment in anonymous collectivist creation; it is a specific form of production, with its own bureaucratic logic and processes for maintaining editorial control. Indeed, though the public discussions of Wikipedia often focus on the ‘everyone can edit’ notion, the truth of the matter is that a small group of participants design and enforce editorial policy through mechanisms like the Talk pages, lock protection, article inclusion voting, mailing lists, and so on. Furthermore, proposed edits are highly dependent on individual reputation — anonymous additions or alterations are subjected to a higher degree of both scrutiny and control, while the reputation of known contributors is publicly discussed on the Talk pages.

    Wikipedia is best viewed as an engaged community that uses a large and growing number of regulatory mechanisms to manage a huge set of proposed edits. “Digital Maoism” specifically rejects that point of view, setting up a false contrast with open source projects like Linux, when in fact the motivations of contributors are much the same. With both systems, there are a huge number of casual contributors and a small number of dedicated maintainers, and in both systems part of the motivation comes from appreciation of knowledgeable peers rather than the general public. Contra Lanier, individual motivations in Wikipedia are not only alive and well, it would collapse without them.

    The “Digital Maoism” argument is further muddied by the other systems dragged in for collectivist criticism. There’s the inclusion of American Idol, in which a popularity contest is faulted for privileging popularity. Well, yes, it would, wouldn’t it, but the negative effects here don’t come from some new form of collectivity, they come from voting, a tool of fairly ancient provenance. Decrying Idol’s centrality is similarly misdirected. This season’s final episode was viewed by roughly a fifth of the country. By way of contrast, the final episode of M*A*S*H was watched by three fifths of the country. The centrality of TV, and indeed of any particular medium, has been in decline for three decades. If the pernicious new collectivism is relying on growing media concentration, we’re safe.

    Popurls.com is similarly and oddly added to the argument, but there is in fact no meta-collectivity algorithm at work here — Popurls is just an aggregation of RSS feeds. You might as well go after my.yahoo if that’s the kind of thing that winds you up. And the ranking systems that are aggregated all display different content, suggesting real subtleties in the interplay of algorithm and audience, rather than a homogenizing hive mind at work. You wouldn’t know it, though, to read the broad-brush criticism of Popurls here. And that is the missed opportunity of “Digital Maoism”: there are things wrong with RSS aggregators, ranking algorithms, group editing tools, and voting, things we should identify and try to fix. But the things wrong with voting aren’t wrong with editing tools, and the things wrong with ranking algorithms aren’t wrong with aggregators. To take the specific case of Wikipedia, the Seigenthaler/Kennedy debacle catalyzed both soul-searching and new controls to address the problems exposed, and the controls included, inter alia, a greater focus on individual responsibility, the very factor “Digital Maoism” denies is at work.

    The changes we are discussing here are fundamental. The personal computer produced an incredible increase in the creative autonomy of the individual. The internet has made group forming ridiculously easy. Since social life involves a tension between individual freedom and group participation, the changes wrought by computers and networks are therefore in tension. To have a discussion about the plusses and minuses of various forms of group action, though, is going to require discussing the current tools and services as they exist, rather than discussing their caricatures or simply wishing that they would disappear.

    Comments (15) + TrackBacks (0) | Category:

    May 25, 2006

    News of Wikipedia's Death Greatly Exaggerated

    Posted by Clay Shirky

    Nicholas Carr has an odd piece up, reacting to the ongoing question of Wikipedia governance as if it were the death of Wikipedia. In Carr’s view:
    Where once we had a commitment to open democracy, we now have a commitment to “making sure things are not excessively semi-protected.” Where once we had a commune, we now have a gated community, “policed” by “good editors.” So let’s pause and shed a tear for the old Wikipedia, the true Wikipedia. Rest in peace, dear child. You are now beyond the reach of vandals.
    Now this is odd, because Carr has in the past cast entirely appropriate aspersions on pure openness as a goal, noting, among other things, that “The open source model is not a democratic model. It is the combination of community and hierarchy that makes it work. Community without hierarchy means mediocrity.”

    Carr was right earlier, and he is wrong now. Carr would like Wikipedia to have committed itself to openness at all costs, so that changes in the model are failure conditions. That isn’t the case, however; Wikipedia is committed to effectiveness, and one of the things it has found to be effective is openness, but where openness fails to provide the necessary defenses on its own, they’ll make changes to remain effective. The changes in Wikipedia do not represent the death of Wikipedia but adaptation, and more importantly, adaptation in exactly the direction Carr suggests will work.

    We’ve said it here before: Openness allows for innovation. Innovation creates value. Value creates incentive. If that were all there was, it would be a virtuous circle, because the incentive would be to create more value. But incentive is value-neutral, so it also creates distortions — free riders, attempts to protect value by stifling competition, and so on. And distortions threaten openness.

    As a result, successful open systems create the very conditions that threaten openness. Systems that handle this pressure effectively continue (Slashdot comments.) Systems that can’t or don’t find ways to balance openness and closedness — to become semi-protected — fail (Usenet.)

    A huge number of our current systems are hanging in the balance, because the more valuable a system, the greater the incentive for free-riding. Our largest and most spontaneous sources of conversation and collaboration are busily being retrofitted with filters and logins and distributed ID systems, in an attempt to save some of what is good about openness while defending against wiki spam, email spam, comment spam, splogs, and other attempts at free-riding. Wikipedia falls into that category.

    And this is the possibility that Carr doesn’t entertain, but is implicit in his earlier work — this isn’t happening because the Wikipedia model is a failure, it is happening because it is a success. Carr attempts to deflect this line of thought by using a lot of scare quotes around words like vandal, as if there were no distinction between contribution and vandalism, but this line of reasoning runs aground on the evidence of Wikipedia’s increasing utility. If no one cared about Wikipedia, semi-protection would be pointless, but with Wikipedia being used as reference material in the Economist and the NY Times, the incentive for distortion is huge, and behavior that can be sensibly described as vandalism, outside scare quotes, is obvious to anyone watching Wikipedia. The rise of governance models is a reaction to the success that creates incentives to vandalism and other forms of attack or distortion.

    We’ve also noted before that governance is a certified Hard Problem. At the extremes, co-creation, openness, and scale are incompatible. Wikipedia’s principal advantage over other methods of putting together a body of knowledge is openness, and from the outside, it looks like Wikipedia’s guiding principle is “Be as open as you can be; close down only where there is evidence that openness causes more harm than good; when this happens, reduce openness in the smallest increment possible, and see if that fixes the problem.”

    People who build or manage large-scale social software form the experimental wing of political philosophy — in the same way that the US Constitution is harder to change than local parking regulations, Wikipedia is moving towards a system where evidence of abuse generates antibodies, and those antibodies vary in form and rigidity depending on the nature and site of the threat. By responding to the threats caused by its growth, Wikipedia is moving toward the hierarchy+community model that Carr favored earlier. His current stance — that this change is killing the model of pure openness he loved — is simply crocodile tears.

    Comments (21) + TrackBacks (0) | Category: social software

    February 14, 2006

    Powerlaws: 2006 Dance Re-mix

    Posted by Clay Shirky

    The question of inequality and unfairness has come up again, from Seth’s Gatekeepers posts and the subsequent conversation, to pointers to Clive Thompson’s A-Listers article in New York magazine, which discusses the themes from Powerlaws, Weblogs, and Inequality (though without mentioning that essay or noting that the original power law work was done in 2003.)

    The most interesting thing I’ve read on the subject was in Doc Searls post:

    I’ve always thought the most important thesis in Cluetrain was not the first, but the seventh: Hyperlinks subvert hierarchies.
    What I’ve tried to say, in my posts responding to Tristan’s, Scott’s and others making the same point, is nothing more than what David Weinberger said in those three words.

    I thought I was giving subversion advice in the post that so offended Seth. But maybe I was wrong. Maybe being widely perceived as a high brick in the blogosphere’s pyramid gives my words an unavoidable hauteur — even if I’m busy insisting that all the ‘sphere’s pyramids are just dunes moving across wide open spaces.
    […]
    I’ll just add that, if ya’ll want to subvert some hierarchies, including the one you see me in now, I’d like to help.

    The interesting thing to me here is the tension between two facts: a) Doc is smart and b) that line of thinking is unsupportable, even in theory. The thing he wants to do — subvert the hierarchy of the weblog world as reflected in lists ranked by popularity — is simply impossible to do as a participant.

    Part of the problem here is language. Hierarchy has multiple definitions; the sort of hierarchy-subverting that networks do well is routing around or upending nested structures, whether org charts or ontologies. This is the Cluetrain idea that hyperlinks subvert hierarchies.

    The list of weblogs ranked by popularity is not a hierarchy in that sense, however. It is instead a ranking by status. The difference is critical, since what’s being measured when we measure links or traffic is not structure but judgment. When I’m not the CEO, I’m not the CEO because there’s an org chart, and I’m not at the top of it. There is an actual structure holding the hierarchy in place; if you want to change the hierarchy, you change the structure.

    When I’m not the #1 blogger, however, there are no such structural forces making that so. Ranking systems don’t work that way; they are just lists ordered by some measured characteristic. To say you want to subvert that sort of hierarchy makes little sense, because there are only two sorts of attack: you can say that what’s being measured isn’t important (and if it isn’t, why try to subvert it in the first place?), or you can claim that lists are irrelevant (which is tough if the list is measuring something real and valuable.)

    Lists are different from org charts. The way to subvert a list is to opt out; were Doc to stop writing, he would cede his place in the rankings to others. At the other extreme, for him to continue to champion the good over the mediocre, as he sees it, sharpens the very hierarchy he wants to subvert. Huis clos.

    The basic truth of such ranking systems is unchanged: for you to win, someone else must lose, because rank is a differential. Furthermore, in this particular system, the larger the blogsphere grows, the greater the inequality will be between being the most- and median-trafficked weblog.

    All of that is the same as it was in 2003. The power law is always there, any time anyone wants to worry about it. Why the worrying happens in spasms instead of steadily is one of the mysteries of the weblog world.
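    The arithmetic behind the growing top-versus-median gap can be sketched directly. This is a toy model in Python, not measured data: the assumption that traffic falls off as a simple power law in rank, and the exponent of 1, are mine, chosen only to show the shape of the claim.

```python
# Toy model: assume the blog at rank r gets traffic proportional to 1/r**a.
# Nothing here is measured; it only illustrates why the gap between the
# top and median weblog widens as the blogosphere grows.

def traffic(rank, exponent=1.0):
    """Relative traffic of the blog at a given rank."""
    return 1.0 / rank ** exponent

def top_to_median_ratio(n_blogs, exponent=1.0):
    """How many times more traffic the #1 blog gets than the median blog."""
    median_rank = (n_blogs + 1) // 2
    return traffic(1, exponent) / traffic(median_rank, exponent)

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} blogs: top/median ratio = {top_to_median_ratio(n):,.0f}")
```

With these assumptions, a thousand-blog system shows a 500-to-1 gap between top and median, and a ten-million-blog system a five-million-to-1 gap: the rankings don't have to change for the inequality to grow, the system only has to get bigger.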

    The only things that are different in 2006 are the rise of groups and of commercial interests. Of the top 10 Technorati-measured blogs (disclosure: I am an advisor to Technorati), all but one are either run by more than one poster or generate revenue from ads or subscriptions. (The exception is PostSecret, whose revenue comes from book sales, not directly from running the site.) Four of the top five and five of the ten are both group and commercial efforts — BoingBoing, Engadget, Kos, Huffington Post, and Gizmodo.

    Groups have wider inputs and outputs than individuals — the staff of BoingBoing or Engadget can review more potential material, from a wider range of possibilities, and post more frequently, than can any individual. Indeed, the only two of those ten blogs operating in the classic “Individual Outlet” mode are at #9 and 10 — Michelle Malkin and Glenn Reynolds, respectively.

    And blogs with business models create financial incentives to maximize audience size, both because that increases potential subscriber and advertisee pools, and because a high ranking is attractive to advertisers even outside per capita calculations of dollars per thousand viewers.

    (As an aside, there’s a pair of interesting technical questions here: First, how big is the A-list ad-rate premium over pure per-capita calculations? Second, if such a premium exists, is it simply a left-over bias from broadcast media, or does popularity actually create measurable value over mere audience count for the advertiser? Only someone with access to ad rate cards from a large sample could answer those questions, however.)

    In his post Shirky’s Law, Hugh Macleod quotes me saying:

    Once a power law distribution exists, it can take on a certain amount of homeostasis, the tendency of a system to retain its form even against external pressures. Is the weblog world such a system? Are there people who are as talented or deserving as the current stars, but who are not getting anything like the traffic? Doubtless. Will this problem get worse in the future? Yes.

    I still think that analysis is correct. From the perspective of 2003, it’s the future already, and attaining the upper reaches of traffic, for even very committed bloggers, is much harder. That trend will continue. In February of 2009, I expect far more than the Top 10 to be dominated by professional, group efforts. The most popular blogs are no longer quirky or idiosyncratic individual voices; hard work by committed groups beats individuals working in their spare time for generating and keeping an audience.

    Comments (7) + TrackBacks (3) | Category: social software

    December 7, 2005

    Sanger on Seigenthaler’s criticism of Wikipedia

    Posted by Clay Shirky

    Larry Sanger, in regards to John Seigenthaler’s criticism of Wikipedia:

    I have long worried that something like this would happen—from the very start of Wikipedia, in fact. Last year I wrote a paper, “Why Collaborative Free Works Should Be Protected by the Law” (here’s another copy). When Seigenthaler interviewed me for his column, I sent him a copy of the paper and he agreed that it was prophetic. It is directly relevant to the part of Seigenthaler’s column that says: “And so we live in a universe of new media with phenomenal opportunities for worldwide communications and research—but populated by volunteer vandals with poison-pen intellects. Congress has enabled them and protects them.” That was a part of Seigenthaler’s column that bothered me: what exactly does Seigenthaler want Congress to do?

    Comments (82) + TrackBacks (0) | Category: social software

    June 13, 2005

    Wikipedia and slashdot: I was wrong

    Posted by Clay Shirky

    In Wikipedia, Authority, and Astroturf, I made a guess about the relation between EliasAlucard, who created the Wikipedia entry on SymphonyOS, and esavard, who created the slashdot post about SymphonyOS that Rob Malda added to the front page of slashdot on June 8th.

    Michael Snow has followed up on the issue, and I was wrong. Esavard did not know Elias, and was not acting in concert with him. I owe an apology to both Esavard and Ryan Quinn, the technical lead for Symphony. I apologize to you both.

    The Wikipedia entry itself is more complicated. Snow notes that there is a vote as to whether to delete the SymphonyOS entry from Wikipedia, and it’s running strongly toward leaving it. This, in my view, is the right answer; the fact of a Wikipedia entry on a software project should be tied to its existence, rather than being a referendum on other aspects of the project.

    Furthermore, the entry has now been edited to a much more neutral point of view, including, in particular, the deletion of the Trivia section, which was created with a single piece of trivia — that the site had been slashdotted on June 8. There were, in my view, two things wrong with that section: first, if the section really was trivial, it should not, by definition, have been included. If it was not trivial, it should have had another name, but there’s no obvious alternative section for it, since the fact of the slashdotting is unrelated to the technical merit of the effort.

    Second, and more importantly, though the entry mentioned slashdot, it didn’t link to the actual slashdot thread on SymphonyOS, surely far more important than the effect slashdot traffic had on its servers. By mentioning the slashdot effect without pointing to slashdot itself, the Trivia section had the look of an advertisement.

    There’s a long thread on this issue on the Talk page, which is interesting both for Elias’ declarations of autonomy w/r/t to an article he clearly feels he owns (my favorite quote: “So what if this is an advertisement campaign? What are you going to do about it? Nothing.”) and for the view it offers about how the Wikipedia community works generally, with a kind of measured deliberativeness that is quite rare in online communities.

    Comments (1) + TrackBacks (0) | Category: social software

    June 9, 2005

    Wikipedia, Authority, and Astroturf

    Posted by Clay Shirky

    Slashdot, one of my few ‘must scan three times a day’ sites, has notoriously poorly coordinated and unskeptical editors. As a result, they often run stories that are different from ads only in that /. doesn’t charge for the service.

    Yesterday, though, I saw a new wrinkle: a post sent in by an esavard, using the already pointless sound and fury around the Apple/Intel matchup, to flog a new! improved! YALD (Yet Another Linux for the Desktop) with the goals — who could imagine such audacious goals! — of making Linux easier to use, making applications simpler to create, and just generally making sure everyone has a pony.

    So, to add a little foam to what was pretty small beer, esavard pointed to the Wikipedia entry about their YALD, saying “If you want to know more about Symphony OS, a good starting point is a Wikipedia article describing the innovations proposed by this new desktop OS.”

    Now at that point the Wikipedia entry was around three weeks old, had been edited 29 times, and 20 of those edits were by the same user, EliasAlucard. The first edit to that page after being picked up by slashdot (from an IP address with no associated username and with no other history of edits) added a note under the header Trivia: “On 8 June 2005, the Symphony OS website was a victim of the Slashdot effect.” (I deleted this bit of self-aggrandizement just now, though we’ll see how long Elias lets it go.)

    Then, today, when someone pointed out on the related Talk page that our pal EliasAlucard had created a Wikipedia advertisement, he replied “Guess what? No one cares about your opinion of what it looks like. Give it a rest already.”

    This is an interesting kind of spam, or maybe we could call it a reputation hack. I have no way of knowing who esavard is in relation to EliasAlucard, but I am betting they are pretty closely related. They create a Wikipedia page, point to it as if to demonstrate independent interest for the project in their potential slashdot post, then point to the slashdot effect on the Wikipedia page as proof of said independent interest. Voila, an instant trend.

    This is the downside of the mass amateurization of publishing. Since the threshold for inclusion in Wikipedia is so low, there is almost no value in thinking “Hey, it’s got a Wikipedia article — must be serious.” We have the sense-memory of that way of thinking from the days when it cost money to publish something, and this class of reputation hack relies on that memory to seed the network with highly targeted ads.

    And it’s a hard hack to stop, since it isn’t exactly vandalism. Most articles have only a few editors in the early days, so it’s an attack that doesn’t have an obvious signature either. It’s relatively easy to see how to defend against vandalism of high-stakes pages, but it’s hard to see how to defend against the creation of pages where so little is at stake for anyone but the advertiser.

    Comments (11) + TrackBacks (0) | Category: social software

    June 8, 2005

    Uncyclopedia and Categories

    Posted by Clay Shirky

    Uncyclopedia, a Wikipedia parody. Hadda happen, and as an added flavor bonus it includes categorization jokes:
    People and Animals
    Writers - Celebrities - Kings of Iceland - Living People - Dead People - Persons of indeterminate mortal status - Wankers - Deities

    Handy Categories
    * Coherent
    * Incoherent
    * Years
    * Everything

    Useless Categories
    * Beans
    * Island of L’aard
    * Morality
    * Typographical Symbols

    Comments (1) + TrackBacks (0) | Category: social software

    June 6, 2005

    Ebay Neg Tool: Cat-Mouse Reputation Problems

    Posted by Clay Shirky

    Two years ago, the economist Paul Resnick wrote about his work on eBay:

    I think there are two problems with the official and community encouragement to resolve disputes before leaving negative feedback. First, patterns of mild dissatisfaction are not recorded, so lots of useful information is lost. Second, sellers have become overly sensitive to any negative or even neutral feedback because it is so rare. If negative feedback were given 5% or 10% of the time, on average, then sellers would worry about keeping their percentage down, but wouldn’t be as concerned about any particular feedback.

    Negative feedback is rare because it is powerful, a kind of nuclear option, but as a result, there is a huge information asymmetry, where frequently but mildly poor sellers are less likely to be spotted.
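    Resnick’s point about lost information can be sketched with a toy simulation in Python. The dissatisfaction rates and reporting threshold below are invented for illustration, not drawn from eBay data: when buyers only record the nuclear-option negatives, mild dissatisfaction never enters the record, and a seller who disappoints buyers ten times as often still looks nearly spotless.

```python
# Toy model of the feedback asymmetry: an unhappy buyer only leaves a
# negative when dissatisfaction is severe enough to cross a threshold.
# All numbers are invented for illustration.
import random

def observed_negative_rate(true_dissatisfaction, report_threshold=0.9,
                           n_sales=10_000, seed=42):
    """Fraction of sales producing a *recorded* negative, when only
    dissatisfaction severe enough to cross the threshold gets reported."""
    rng = random.Random(seed)
    negatives = 0
    for _ in range(n_sales):
        if rng.random() < true_dissatisfaction:   # buyer is unhappy...
            if rng.random() > report_threshold:   # ...and angry enough to report it
                negatives += 1
    return negatives / n_sales

good_seller = observed_negative_rate(0.02)   # 2% of buyers dissatisfied
poor_seller = observed_negative_rate(0.20)   # 20% of buyers dissatisfied
print(good_seller, poor_seller)  # both recorded rates come out tiny
```

The tenfold difference in underlying quality survives only as the difference between two numbers that are both close to zero, which is exactly why a tool that surfaces every individual negative is worth a buyer’s time.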

    Earlier this year, Toolhaus launched Ebay Negs!, which is the next phase of that cat/mouse game.

    Ebay Negs! lets you view all the negative feedback an eBay user has received. To use it, first highlight the eBay username you want to check with your mouse, then right-click and select “Ebay Negs!” You will then be transferred to a page at http://www.toolhaus.org where all the negative feedback remarks that user has received will be displayed.

    This assumes the very imbalance that Resnick was talking about in ’03 — indeed, the comments posted on the tool page all call it a time saver, indicating how little value is placed on even an overwhelming preponderance of positive comments.

    This is analogous to stocks falling when a company exactly meets its earnings target. Since the target was announced by the company itself, and since the accounting tricks that can be used to massage earnings are many, a company that can’t beat a hurdle it sets for itself is assumed to be in trouble. In the same way, if a negative rating on eBay means that all communal norms and attempts at dispute resolution have failed, then tools for ferreting out even single examples of negative comments are worth the user’s time.

    It’s interesting that as transparent a market as eBay has grown an information asymmetry problem all its own, and tools like Ebay Negs!, while helpful to individual buyers in the short run, are just going to ratchet up the overall pressure more.

    Comments (1) + TrackBacks (0) | Category: social software

    WSJ.com: The day the email died

    Posted by Clay Shirky

    WSJ.com has a brief summary of what happened to the workplace during an email outage:

    So how’d we fare this time around? Well, we’re glad to report that the removal of cold, impersonal email from our workplace reminded us of the value of getting up and talking with each other, reforging lasting connections that will do far more for us than any fancy software system could ever do. Yeah right. And then we went out and planted a tree.

    No, what really happened was a day of false starts, fluttering hands and embarrassed shrugs, vaguely agonizing and occasionally amusing. […] Those with email also became lifelines for meeting organizers — because our calendars are all tied into our email, most of our schedules were instantly erased, leaving harried-looking meeting organizers trying to find people with working email who could peek at the organizers’ schedules, or who’d been invited to a meeting and could reply-all to the invite as a method of reconstructing the list of attendees.

    The key losses to the workplace from the lack of email included not just the data stored in the mail itself, but a critical — and now irreplaceable — social lubricant.

    Comments (0) + TrackBacks (0) | Category: social software

    May 16, 2005

    Ontology Is Overrated: Social advantages in tagging

    Posted by Clay Shirky

    This spring, I gave a pair of talks on opposite coasts on the subject of categorization and tagging. The first was entitled Ontology Is Overrated, given at the O’Reilly ETech conference in March. Then, in April I gave a talk at IMCExpo called Folksonomies & Tags: The rise of user-developed classification.

    I’ve just put up an edited concatenation of those two talks, coupled with invaluable editorial suggestions from Alicia Cervini. It’s called Ontology is Overrated — Categories, Links, and Tags. Though much of it is not about social software per se, I try to extend the argument that the ‘people infrastructure’ hidden in traditional classification systems is an Achilles’ heel for systems that have to operate at internet scale, and that the logic of tagging overcomes that weakness:

    DSM-IV, the 4th version of the psychiatrists’ Diagnostic and Statistical Manual, is a classic example of a classification scheme that works because of these characteristics [of the user base]. DSM-IV allows psychiatrists all over the US, in theory, to make the same judgment about a mental illness, when presented with the same list of symptoms. There is an authoritative source for DSM-IV, the American Psychiatric Association. The APA gets to say what symptoms add up to psychosis. They have both expert cataloguers and expert users. The amount of ‘people infrastructure’ that’s hidden in a working system like DSM-IV is a big part of what makes this sort of categorization work.

    This ‘people infrastructure’ is very expensive, though. One of the problem users have with categories is that when we do head-to-head tests — we describe something and then we ask users to guess how we described it — there’s a very poor match. Users have a terrifically hard time guessing how something they want will have been categorized in advance, unless they have been educated about those categories in advance as well, and the bigger the user base, the more work that user education is.
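    The head-to-head failure can be put in deliberately oversimplified terms. This Python sketch assumes (my assumption, not a claim from the talks) that an untrained user picks uniformly among the plausible labels for an item, while trained users share one canonical label:

```python
# Toy model of the "guess how it was categorized" test. Assumes an
# untrained user picks uniformly among plausible labels, while trained
# users share one canonical label. Both assumptions are illustrative only.

def match_probability(vocabulary_size, trained=False):
    """Chance that searcher and cataloguer choose the same label."""
    if trained:
        return 1.0  # shared scheme: everyone uses the canonical label
    return 1.0 / vocabulary_size

print(match_probability(5))                   # small shared vocabulary: 0.2
print(match_probability(500))                 # internet-scale vocabulary: 0.002
print(match_probability(500, trained=True))   # training fixes it, expensively
```

The match rate collapses as the vocabulary grows, and training everyone out of that collapse is precisely the expensive ‘people infrastructure’ that a system like DSM-IV can afford and an internet-scale system cannot.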

    More at Ontology is Overrated — Categories, Links, and Tags.

    Comments (15) + TrackBacks (0) | Category: social software

    May 11, 2005

    Google Acquires Dodgeball

    Posted by Clay Shirky

    Google, the publicly held Mountain View, CA firm best known for its search engine, has acquired dodgeball, a social networking tool for mobile urbanites and one of the earliest examples of mobile social software.

    The next paragraph contains one hundred w00ts.

    w00t w00t w00t!!! w00t!!!! w00t w00t w00t!! w00t! w00t!!!! w00t w00t!!! w00t!!! w00t! w00t!!! w00t!!!! w00t!!! w00t!!!! w00t!!! w00t!!! w00t w00t!!! w00t!! w00t!!! w00t!! w00t!! w00t!! w00t!!!! w00t!!! w00t!! w00t w00t!!! w00t! w00t w00t!! w00t!!! w00t!! w00t!! w00t! w00t w00t w00t w00t! w00t!! w00t! w00t!! w00t!!! w00t!! w00t!! w00t!!!! w00t!!!! w00t!!!! w00t!!! w00t!!! w00t!!!! w00t!! w00t!! w00t w00t!! w00t!!!! w00t!!! w00t! w00t!!!! w00t w00t w00t!!!! w00t! w00t!! w00t! w00t w00t!!! w00t!!!! w00t!!! w00t!! w00t!!!! w00t!!! w00t!! w00t w00t! w00t!! w00t!! w00t! w00t!!! w00t!!! w00t w00t!!! w00t! w00t!!!! w00t w00t!!! w00t w00t!! w00t! w00t!! w00t!!!! w00t!!! w00t w00t!!! w00t! w00t!! w00t!!!

    Dennis Crowley and Alex Rainert were students of mine at ITP. I’ve watched them build Dodgeball over the last few years, which was both inspiring and instructional. Given the level of thought and effort they’ve put into it, this is really good news, for them and for Google.

    More to say later, but the important thing now is that Dodgeball adds to a really interesting set of ‘sand in the oyster’ issues for Google. Google has historically been information-centric. The content and character of social relations don’t fit well into that view of the world, but matter, a lot, to users. (As we’ve often said around here, community != content.)

    Gmail, Orkut, and now Dodgeball all touch this issue. Dodgeball in particular is built on a mix of three different kinds of maps: maps of location (118 rivington St), maps of place (a bar called The Magician), and maps of social environment (“I’m here. Where are my friends?”) By mixing them, Dodgeball mingles informational and social aspects of a user’s life into something more valuable than either of those things in isolation.
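    That mixing of maps can be sketched in a few lines. This is purely illustrative — the data model, names, and check-in mechanics here are invented for the sketch, not Dodgeball’s actual system: location and place are informational lookups, and the friend graph is what makes the combined query social.

    ```python
    # Illustrative sketch of combining Dodgeball's three "maps".
    # All names and structures here are hypothetical, for illustration only.

    places = {"The Magician": "118 Rivington St"}         # map of place -> location
    checkins = {"dens": "The Magician", "alex": "Lotus"}  # person -> current place
    friends = {"clay": {"dens", "alex"}}                  # map of social environment

    def friends_nearby(user: str, place: str) -> list[str]:
        """Which of user's friends are checked in at this place right now?"""
        return sorted(f for f in friends.get(user, ()) if checkins.get(f) == place)

    print(friends_nearby("clay", "The Magician"))
    ```

    The informational maps alone answer “where is this bar?”; only the social map turns that into “I’m here. Where are my friends?”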

    As Brewster Kahle says, “If you want to solve hard problems, have hard problems.” The integration of information-centric and social-centric views of the world will be awfully valuable, if Google gets them right.

    So congrats to Dodgeball and to Google!

    Comments (11) + TrackBacks (0) | Category: social software

    May 2, 2005

    Tagsonomy.com, and an answer to Tim BrayEmail This EntryPrint This Article

    Posted by Clay Shirky

    Some of us talking about tagging have launched a group weblog called “You’re It: A blog on tagging,” at tagsonomy.com. (Authors are Christian Crumlish, David Weinberger, Don Turnbull, Jon Lebkowsky, Kaliya Hamlin, Mary Hodder, Timo Hannay, and me.)

    My introductory post there pointed to my earlier tagging articles at M2M. My first real post is a response to Tim Bray’s question: “Are there any questions you want to ask, or jobs you want to do, where tags are part of the solution, and clearly work better than old-fashioned search?” I think the answer is Yes, and try to delineate some of the reasons why.

    (And, because tagging straddles social and organizational concerns, I’ll have to figure out when to post here vs there, but I’m planning to x-post pointers generally.)

    Comments (2) + TrackBacks (0) | Category: social software

    April 19, 2005

    Sanger, Part IIEmail This EntryPrint This Article

    Posted by Clay Shirky

    The second half of Larry Sanger’s piece on Wikipedia and Nupedia is up. I haven’t even read the whole thing yet, but it’s fascinating, especially as it goes considerably deeper into the governance issues.

    It is one thing to lack any equivalent to “police” and “courts” that can quickly and effectively eliminate abuse; such enforcement systems were rarely entertained in Wikipedia’s early years, because according to the wiki ideal, users can effectively police each other. It is another thing altogether to lack a community ethos that is unified in its commitment to its basic ideals, so that the community’s champions could claim a moral high ground. So why was there no such unified community ethos and no uncontroversial “moral high ground”? I think it was a simple consequence of the fact that the community was to be largely self-organizing and to set its own policy by consensus. Any loud minority, even a persistent minority of one person, can remove the appearance of consensus.

    Read it.

    Comments (2) + TrackBacks (0) | Category: social software

    April 18, 2005

    Sanger on WikipediaEmail This EntryPrint This Article

    Posted by Clay Shirky

    Over on slashdot, Larry Sanger has published the first in an N-part series (N>1) on the early history of the Wikipedia (and the failed Nupedia) projects.

    It has all of the benefits and disadvantages of being written by someone present at the creation: the details of early choices are fascinating, while the score-settling is a bit tedious. (He takes Daniel Pink to task for misquoting the tiny number of finished Nupedia articles, even though the gap between Wikipedia and Nupedia covers orders of magnitude.)

    What’s most fascinating, though, is not the historical element, but Sanger’s own position. He understands why Wikipedia works and Nupedia didn’t, and yet is constantly maintaining that the Wikipedia would benefit from being more like the planned Nupedia:

    This point bears some emphasis: Wikipedia became what it is today because, having been seeded with great people with a fairly clear idea of what they wanted to achieve, we proceeded to make a series of free decisions that determined the policy of the project and culture of its supporting community. Wikipedia’s system is neither the only way to run a wiki, nor the only way to run an open content encyclopedia. Its particular conjunction of policies is in no way natural, “organic,” or necessary. It is instead artificial, a result of a series of free choices, and we could have chosen differently in many cases; and choosing differently on some issues might have led to a project better than the one that exists today.

    I have a hard time understanding how a loosely bound community, choosing among available options, isn’t an organic process, but Sanger has always been convinced that setting and enforcing a Nupedian-style respect for authority was a) possible for Wikipedia and b) desirable for Wikipedia. (I’ve disagreed with Sanger on both points in the past, but based on a less complete re-telling than this looks to be.)

    In any case, since the whole piece isn’t yet published, it’s too soon to see how the various themes will develop, but for anyone following Wikipedia, this will be a key piece of writing.

    Comments (6) + TrackBacks (0) | Category: social software

    CFP: Wikimania 2005Email This EntryPrint This Article

    Posted by Clay Shirky

    Wikimania 2005, the First International Wikimedia Conference, will be held in Frankfurt from August 4 to August 8, 2005.

    Two key upcoming dates are:
    - May 10 - Abstract deadline for panels, papers, posters and presentations [Notification: by May 25]
    - May 30 - Submission deadline for research paper drafts

    Says the submission page:
    Original research is welcome, but not required. Be bold in your submissions! Wikimania is meant to be both a scientific conference and a social event. Relevant topics include:


    * Wiki research: How do wikis, and the Wikimedia wikis in particular, operate? Which processes scale and which ones don’t? What kinds of people or social structures are well-suited to wikis? How does introducing a wiki into existing project groups change group dynamics?
    * Wiki sociology: What motivates Wikimedians and what drives them away? Who are they, anyway? And where do they come from?
    * Wiki critics: Critical positions are welcome: why Wikipedia will never be an encyclopedia, why Wikinews can never substitute newspapers, why amateurs shouldn’t be allowed to edit, and so forth.
    * Wiki technology ideas: What can we do to address perceived and real problems, for example, peer review? How can we provide better-nuanced or more immediate user feedback?
    * Wiki software ideas […]
    * Wiki community ideas […]
    * Wiki project ideas […]
    * Wiki content ideas […]
    * Multimedia […]
    * Free knowledge […]
    * Collaborative writing […]
    * Multilingualism […]

    Comments (0) + TrackBacks (0) | Category: social software

    April 5, 2005

    Banning blogging, 'Toothing, and YozEmail This EntryPrint This Article

    Posted by Clay Shirky

    Phil Gyford, in With great audiences…, wonders what it takes to get a story propagated in the weblog world, and is afraid that the answer is merely ‘attention grabbing headline + a patina of Old Media validity.’ He writes about a “banning blogging” story picked up from the traditional press, where the weblog…

    …got carried away with the newspaper’s headline, repeating it in theirs even though a cursory read of the newspaper article reveals that no one “banned blogging.” The newspaper claims the principal doesn’t think blogging is educational, and Cory could certainly have criticised him for this alone, although it would make for a less dramatic post. The repetition of the lie about the principal banning blogging, rather than his apparent opinion, is possibly also what prompted a reader to suggest people should email the principal to complain.

    Phil posts about BoingBoing, but the pattern is quite general — you can see misleading posts like San Francisco Attempts to Regulate Blogging almost daily on slashdot.

    The pressure to give things a dramatic headline, online or off, is tremendous, because if you don’t get readers with the headline, you won’t get them at all. This leads, in the weblog world, to a curious moral hazard, where fact-checking can be left to the furthest upstream source. “Well, if the Osceola Star-Ledger, with their enormous resources, can’t fact-check the article, how can I be expected to???” And so we get ourselves in high dudgeon at injustices that may never have happened, because they are the kind of thing we would hate if they had happened.

    This contrasts with the magnificent distributed fact-checking done elsewhere, as with the Trent Lott or Dan Rather investigations. The choice to fact-check vigorously, even when a story is reported by well-funded news outlets, seems only to happen when the writers in question disagree with the story, while the decision to accept the fact-checking of any traditional media outlet, in order to be able to fast-forward to the aforementioned high dudgeon, seems to come when the weblogger likes repeating or even amplifying the claims made further upstream.

    Which brings me to ‘toothing.

    ‘Toothing was the craze for arranging on-the-spot trysts among users of Bluetooth-capable cellphones, as reported by Wired last March. Except ‘toothing was a hoax, as the perpetrator revealed after seeing this slashdot thread.

    It seems harmless, except that many of the subsequent references weren’t about ‘toothing per se (understandable, as there was nothing to study), but rather referenced ‘toothing as one member of a set of activities mobile technologies enabled. ‘Toothing went from being a thing to being a touchstone for reasoning about mobile technologies generally.

    A couple years ago, I spent some time on the trail of the urban legend that half the world had never made a phone call. While ‘toothing was never likely to achieve that degree of saturation, it was, like the ‘half the world’ phrase, a distortion not only in itself but as the avatar of larger social patterns.

    I checked the M2M archives, and to my relief, we didn’t write about ‘toothing, though probably not out of any native skepticism, or we would have written to de-bunk it. Yoz Grahame is the only person I know of who got this right, in the voluptuously titled Sex-Crazed Brits Just Doing It Everywhere, Like, Everywhere Man, You Can’t Stop Them, They’re Like Dogs In Heat Or Something, And Dude, I Gotta Get Me Some Of That:

    Pausing only to spill some famous London ale down the front of his XXL-sized rugby shirt, Barry outlined some key points in the rapidly-evolving lexicon of British desire. “So what you do, right, is you spot a nice tart over by the bar and you think, lovely, I’ll have a bit of that. And you tip her the wink, you know? And then, if she looks back at you, she’s gagging for it.”

    “Just like Bluetooth signalling,” I commented as I tapped hurried notes into my Zaurus. “Ingenious!”

    One lesson we could all take from this is “Pay more attention to Yoz”, which couldn’t hurt, but a better motto is ‘WWYD?’ Note that he didn’t fact-check the ‘toothing story, he sense-checked it. The thing wrong with the ‘toothing story isn’t that the participants of the ‘toothing scene aren’t IDed, it’s that the story itself doesn’t make any sense. Most of us will not be able to afford the calling and re-calling of sources to double-check a quote, but all of us can ask ourselves, just before we hit Submit, ‘Is this true?’

    And the time we should be most careful to do that is if we feel really satisfied with what we’ve written — “How dare the House of Representatives propose a mandatory bar code tattooed on the foreheads of liberal bloggers!!! Must. Denounce. Now.”

    All the phrases we use to separate the weblog world from other media outlets weaken with elapsed time — old media, new media, traditional media, all of it suggests that newcomers join the club when they’ve been around long enough to be familiar. As weblogs continue their symbiosis with the forms of media that went before, we will make ourselves targets of truly malevolent hoaxes if we simply decide to repeat what we agree with. The echo chamber is of far less danger overall than unchecked amplification.

    Comments (16) + TrackBacks (1) | Category: social software

    March 31, 2005

    Stuff that gets spammed, part NEmail This EntryPrint This Article

    Posted by Clay Shirky

    I hardly know what to make of this — Waxy.org has discovered that WordPress, the great open source blogging platform, has been pimping out its highly rated home page to an SEO (Search Engine “Optimization”) firm, effectively selling the community capital it built up to spammers by “publishing” articles that are hidden to users but visible to spiders.

    There’s also a bizarre defense of this practice on Planet Wordpress, on the grounds that WordPress needed money to grow, and wasn’t getting it from donations.

    This is such an interesting and uncharted area — as the net gets bigger and karma, previously bottled up in human relations, becomes convertible for real currency, in everything from ZeroDegrees/SMS.ac style spam to real sales of virtual characters to this, we are going to have to find ways to defend against this sort of karmic hijacking.

    Comments (6) + TrackBacks (0) | Category: social software

    March 30, 2005

    Hedlund Reads Yahoo 360Email This EntryPrint This Article

    Posted by Clay Shirky

    Marc Hedlund examines Yahoo 360 using Lessons from Lucasfilm’s Habitat (Best. Essay. EVAR.) as his guide, since one of the authors of LLH was Randall Farmer, one of the creators of Y360.

    Hedlund comes away skeptical, noting that the lack of interoperable standards and widely available APIs violate some of the LLH tenets, as with the LLH assertion “Data communications standards are vital.”

    Those who do not learn the lessons of Habitat are doomed to repeat them, indeed. In 360, we see this problem, the lack of communication standards, expressed most acutely in the IM sidebar, which lists the online status of all of your buddies — excuse me, your Yahoo buddies. You can IM them and send them messages in the system (messages which are like email but not email, so that you have yet a third voice with which to speak to a subset of your friends). Why do I need a web view on my IM buddy list when I have that list on my computer already? If 360 becomes your home, perhaps that would be useful.

    The fault here is easy to see with a thought experiment. Let’s say Yahoo 360 were implemented today by a startup, a company without ties or loyalty to an existing body of users. Would they make the same decision? Is it in the best interest of new users to 360 to have their Yahoo buddies be the only ones available for sharing, or is that more in the interest of Yahoo?

    Data communication standards are vital, and the lack of them has kept IM from becoming a platform for innovation as email and the web have become. 360 suffers from the lack of a standard just as would any startup, but it hasn’t sought out a solution, as would a company that needed new users to survive.

    I’m less convinced than Marc that this is fatal, starting from the premise that much human congress happens within essentially arbitrary divisions like this one — you know your co-workers on the 5th floor or your neighbors on your street better than you know the people on the 6th floor, or on the next block over.

    However, I am, like Marc, convinced that this ‘proprietary standards and messaging’ weakness will prevent 360 from becoming a complete digital hub. It may simply be a good fusion of Orkut and fotolog.

    Comments (3) + TrackBacks (0) | Category: social software

    March 28, 2005

    Business data pointEmail This EntryPrint This Article

    Posted by Clay Shirky

    Just got email from a headhunter looking for leads for a ‘VP of Social Computing,’ whose job will include building and managing a staff of 75-80 (!) people.

    No word on what the company is (though it’s obviously large) and the work doubtless includes a number of more broadcast-oriented efforts as well (e.g. weblogs and RSS as publishing tools as well as conversational ones,) but it was interesting to a) see a VP level hire in this area and b) to see how large a staff is being imagined.

    Comments (4) + TrackBacks (0) | Category: social software

    March 22, 2005

    Allen on altruism and group sizeEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting speculation over on Life With Alacrity about Dunbar, Altruistic Punishment, and Meta-Moderation — Allen discusses work on an agent-based simulation that suggests a phase transition from cooperating groups to Tragedy of the Commons scenarios at ~15 people, a much lower threshold than the one at which many of us assumed commons-based problems arose. (My assumption had been closer to 25.)

    This is a very interesting result. To explain it in different terms, if you have a system that depends on sharing some commons and there are no process or trust metrics, a group as small as 16 may find themselves not cooperating very effectively.
    The idea of commons can be as simple as how much speaking time participants in a meeting share. The time that each participant uses during the meeting can be considered the shared “commons”. If there are no enforced rules, with a group size of 16 there will inevitably be someone who will abuse the time and speak more than their share.

    The one big caveat is that this is based on studies of agents, not actual humans, making the results fairly provisional. However, the study at least points to some experimental designs that could be tried with real live groups.
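    The “with a group of 16 there will inevitably be someone who abuses the time” intuition can be made concrete with a back-of-envelope calculation. This is my illustration, not the study’s model, and the per-person defection rate is a made-up value: if each participant independently over-uses the commons with probability p, the chance that a group of n contains at least one over-user is 1 − (1 − p)^n, which climbs toward certainty quickly as groups grow.

    ```python
    # Back-of-envelope sketch: probability that a group of n people
    # contains at least one habitual over-user of a shared commons,
    # assuming each person defects independently with probability p.
    # (p = 0.2 is an invented illustrative value, not from the study.)

    def p_at_least_one_defector(n: int, p: float = 0.2) -> float:
        return 1 - (1 - p) ** n

    for n in (2, 8, 16, 25):
        print(f"group of {n:2d}: {p_at_least_one_defector(n):.2f}")
    ```

    Even with a modest individual defection rate, a group of 16 is near-certain to contain a defector, which is one simple way a threshold effect could appear without any change in individual behavior.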

    Comments (0) + TrackBacks (0) | Category: social software

    March 9, 2005

    One World, Two Maps (thoughts on the Wikipedia debate)Email This EntryPrint This Article

    Posted by Clay Shirky

    When thinking about technological change, there are two kinds of people, or rather, people with two kinds of maps of the world — radial, and Cartesian. Radial maps are circular, and express position in relative coordinates — angle and distance — from the center. Cartesian maps are grids, and express position in absolute coordinates. Each view has its own good and bad points, but reading danah on Wikipedia has made me contemplate the tendency of the two groups to talk past each other.

    Radial people assume that any technological change starts from where we are now — reality is at the center of the map, and every possible change is viewed as a vector, a change from reality with both a direction and a distance. Radial people want to know, of any change, how big a change is it from current practice, in what direction, and at what cost.

    Cartesian people assume that any technological change lands you somewhere — reality is just one point of many on the map, and is not especially privileged over other states you could be in. Cartesian people want to know, for any change, where you end up, and what the characteristics of the new landscape are. They are less interested in the cost of getting there.

    Radial people tend to think more about change than end state, and more about local maxima (are things getting better?) than about a global maximum (are things as good as they could be?) Cartesian people think more about end state than change, and more about global than local maxima.

    I am a radial person; danah is a Cartesian person. Cory Doctorow is a radial person; Nicholas Negroponte is a Cartesian person. Richard Gabriel is radial; Alan Kay is Cartesian. This is not a question of technology but outlook. Extreme Programming is a radial method; the Capability Maturity Model is Cartesian. Open Source groups tend towards radial methods, closed source groups tend towards Cartesian methods. It’s incrementalism vs. planned jumps, evolution vs. directed labor.

    When we make mistakes, radial people tend to overestimate the value of incrementalism, and to underestimate the gap between local and global maxima. When they make mistakes, Cartesian people tend to underestimate the cost in moving from reality to some imagined alternate state, and to overestimate their ability to predict what a global maximum would look like.

    This is, plainly, an overstatement of the Everyone is a Pirate or a Ninja sort, but I think there is a grain of truth to it — when Negroponte rails against incrementalism, there’s an interesting discussion to be had about how big he thinks a change has to be before it no longer counts as an increment, but there’s no denying that he is advancing a different idea about technological improvement than Gabriel is in his Worse Is Better argument. There’s a similar difference in the way danah or Matt Locke talk about Wikipedia vs. the way Cory or I do. There are lots of blended cases, but the basic impulse is different.

    This has been an era of radial triumphs, because radial maps tend to be better guides to large, homeostatic systems. When thinking about change on the internet, the tools that have been driven by a thousand tiny adoptions and alterations have tended to be more important than the tools designed in advance to change the landscape. However, radial vision requires that someone, somewhere, have pushed through a large, destabilizing change, in order for the radial people to be playing in new terrain with lots of unexplored local maxima. Shawn Fanning could only change the world in 1999 because Vint Cerf changed the world in 1969.

    Bob Spinrad, who used to run PARC (an echt Cartesian organization) said “The only institutions that fund pure research are either monopolies or think they are.” Cartesian development is economically draining, and never pays for itself in the short term, so it’s no accident that R&D happens outside traditional profit maximizing institutions, whether governmental, academic, or monopolists.

    You can see the differences in the two worldviews most clearly when we argue across that gap. I literally cannot understand danah’s complaints; I read “The problem that i’m having with the Wikipedia hype is the assumption that it is the panacea for it too has its problems”, and I wonder who she’s talking about. The radialists praising the Wikipedia are not saying it’s perfect, or even good in any absolute sense — we don’t ever talk about absolute quality.

    Wikipedia interests us because it’s better, and sustainably better, than what went before — it’s a move from a simple product (“Pay us and we’ll write an encyclopedia”) to a complex system, where a million differing, internal motivations of the users and contributors are causing an encyclopedia to coalesce. How cool is that? (The radialist motto…)

    But danah and Matt cannot understand our enthusiasm. From the Cartesian point of view, the thing that would excite you would be dramatic change to a new state. Radialists never say things like ‘panacea’ or ‘utopia’, but the Cartesians hear us saying those things, or think they do, because otherwise what would the fuss be about? Mere incrementalism is nothing more than a Panglossian fetishization of reality, and excitement about a technological change that doesn’t create a dramatic new equilibrium is simply hype, from the Cartesian point of view.

    And so, when they see us high-fiving over Wikipedia, the Cartesians think we’ve taken leave of our senses, and, more to the point, they think we’ve misunderstood what is happening. They then launch a corrective set of arguments, pointing out, for example, that Wikipedia still leaves unanswered questions about social exclusion. But this, from a radialist point of view, is no more meaningful than pointing out that Wikipedia doesn’t cure skin cancer — no one ever said it would. Anything that was bad at Point A and is still bad at Point B gets factored out of the radialist critique. Any change where most of the bad things are still bad but a few of the bad things are somewhat less bad seems like a good thing to us, and if it can happen in a way that requires less energy, or better harnesses individual motivation, that seems like a great thing.

    And so we go, back and forth, tastes great, less filling. We want to ask them why they aren’t excited about Wikipedia, since it is, to us, so obviously progress, but they want to know “Progress towards what?” They can’t even read their map without a posited end state. And they want to ask us why we’re not concerned about where all this is going, but we don’t have an answer to that question, because our maps only show us the way up the next hill, not what we’ll see when we get there.

    There’s no answer to any of this — as Grandma used to say, “Both your maps are nice.” But after months of cognitive dissonance — I both admire and love danah; what she’s saying about Wikipedia simply confuses me — I think I now have a way of understanding why the current conversation seems so unmoored.

    Comments (7) + TrackBacks (0) | Category: social software

    March 1, 2005

    Matt Locke on folksonomiesEmail This EntryPrint This Article

    Posted by Clay Shirky

    Wonderful Matt Locke piece on folksonomies, which introduces not one but two substantial ideas to the debate:

    Perhaps this illustrates the limit of folksonomies - they are only useful in a context in which nothing is at stake. [Emphasis his] Folksonomies are, in essence, just vernacular vocabularies; the ad-hoc languages of intimate networks. They have existed as long as language itself, but have been limited to the intimate networks that created them. At the point in which something is at stake, either within that network or due to its engagement with other networks (legal, financial, political, etc) vernacular communication will harden into formal taxonomy, and in this process some of its slipperiness and playfulness will be lost.

    He relates this to the idea of play from finite and infinite games. (I’m more optimistic about the shift here than he is, for reasons I’ll discuss below, but I think he’s spot on about the gap between playful and serious categorization.)

    The other idea, from Bowker and Star’s marvelous Sorting Things Out, is about the inherent tension in classification generally:

    Bowker and Star identify three values that are in competition within classification structures: comparability, visibility and control. Folksonomies have elevated visibility, but at the expense of comparability (being able to translate classifications across taxonomies or contexts) and control (the ability of the classification to limit interpretation, rather than interpret ‘emergent’ behaviour). Whilst nothing is at stake, and there is little lost by not being able to transfer taxonomies from one context to the other, or users are not disadvantaged by the need to independently assess and contextualise meaning, folksonomies will provide a useful service.

    Just a fantastic post.

    The only place I vary from Matt (it’s not even a disagreement, really, just a prediction about the future) is in the eventual value of folksonomy. He likens folksonomies to vernacular vocabularies, but this doesn’t describe their first-order importance, at least not where systems like del.icio.us are concerned.

    Here’s what’s radical about what del.icio.us portends: My vocabulary in the del.icio.us folksonomy is personal, not vernacular — no one knows or needs to know which class I’m talking about when I tag something ‘class’, or that I use LOC to mean Library of Congress. This isn’t the same as, say, the dictionary of thieves’ slang from the mid-18th c., because no one else needs to know my bookmark system, and I don’t need to know anyone else’s, or, to quote Adam Smith: “It is not from the benevolence of the butcher, the brewer, or the baker, that we can expect our dinner, but from their regard to their own interest.”

    This is really, truly different, because it uses the intuition of markets — aggregate self-interest creates shared value. Locke points to the loss of control as one of the downsides of folksonomic classification (at least in its del-style form), but there are significant upsides as well. The LOC has no top-level category for queer issues, but del.icio.us does, because its users want it to.

    By forcing a less onerous choice between personal and shared vocabularies, del.icio.us shows us a way to get categorization that is low-cost enough to be able to operate at internet scale, while ensuring that the emergent consensus view does not have to be pushed onto any given participant.
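    The “aggregate self-interest creates shared value” mechanism can be sketched in a few lines (the users, URL, and tags here are invented for illustration): each person tags purely for their own retrieval, with no shared vocabulary imposed, yet tallying tags across users yields a consensus view that no one had to negotiate in advance.

    ```python
    from collections import Counter

    # Hypothetical personal tags applied to one bookmarked URL.
    # Each user tags for themselves -- note the idiosyncratic vocabularies.
    personal_tags = {
        "alice": ["loc", "libraries", "reference"],
        "bob":   ["libraryofcongress", "reference"],
        "carol": ["reference", "research", "class"],
    }

    # The emergent, shared view is just the tally of private choices;
    # nobody is forced to adopt it, and nobody's own tags are overwritten.
    consensus = Counter(tag for tags in personal_tags.values() for tag in tags)
    print(consensus.most_common(1))  # the most widely shared label surfaces
    ```

    The aggregation is one-way: the consensus is derived from personal vocabularies, never pushed back onto them, which is exactly the low-cost, non-coercive property described above.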

    Which is why it mystifies me that both Matt and danah are so concerned with exclusion — who’s excluded here, who isn’t also excluded from using the internet generally? Put another way, is anyone excluded from using del.icio.us who has better representation in other classification schemes?

    The del.icio.us answer is “If you don’t like the way something is tagged, tag it yourself. No one can tell you not to.” Prior to del.icio.us, controlled vocabularies were almost inevitably vocabularies that pushed the politics of the creators onto the users; that is upended here.

    Comments (9) + TrackBacks (0) | Category: social software

    February 28, 2005

    Who's afraid of Wikipedia?Email This EntryPrint This Article

    Posted by Clay Shirky

    danah said, in Academia and Wikipedia, “All the same, i roll my eyes whenever students submit papers with Wikipedia as a citation.”

    I didn’t comment on this at the time, but grading papers over the weekend, I had a student cite the Wikipedia for the first time, referencing its entry on the OSI Reference Model. Seeing it in the footnotes, I wondered what the fuss was about. The Wikipedia article is a perfectly good overview of the Reference Model, and students should document, to the extent they are able, the sources of their research. When they have learned something from the Wikipedia, in it goes; to exclude it would in fact be dishonest.

    Curiously, the Wikipedia reference came in the same week that another student was referring to Walter Benjamin’s The Work of Art in the Age of Mechanical Reproduction, an essay that is tremendously influential and, in a bunch of non-trivial ways, wrong about the inherent politicization of reproducible art, and especially of film. I’m much more worried about students overestimating the value of the Benjamin essay, because of its patina of authority, than I am about them overestimating the value of the Wikipedia as a source for explaining the 7-layer networking model.

    And I assume I am hardly alone in the academy. Hundreds, if not thousands of us must be getting papers this year with Wikipedia URLs in the footnotes, and despite the moral panic, the Wikipedia is a fine resource on a large number of subjects, and can and should be cited in those cases. There are articles, as danah has pointed out, where it would be far better to go to the primary sources, but that would be as true were a student to cite any encyclopedia. If someone cited the Wikipedia to discuss Benjamin’s work, I’d send them back to the trenches, but I would also do that if they cited Encyclopedia Britannica.

    To borrow some Hemingway, this is how the academy will get used to Wikipedia — slowly, then all at once.

    Comments (10) + TrackBacks (0) | Category: social software

    February 21, 2005

    Social Physics (.org)

    Posted by Clay Shirky

    Fascinating new effort called Social Physics, affiliated with Berkman, with two large goals:

    - Create a robust, multi-disciplinary, multi-constituency community for addressing, vetting and conducting experiments in such issues as privacy, authentication, reputation, transparency, trust building and information exchange.


    - Develop a reusable, open source software framework based on the Eclipse Rich Client Platform that provides core services including: identity management, social network data models, authentication management, encryption, and privacy controls. On top of this framework we are also developing a demo app that provides identity management and social networking functions, tools to create peer-to-peer identity sharing and facilities to support communities of interest around emerging topics.

    I’m generally skeptical of identity management — it has the same hollow ring as knowledge management — but since the focus here is on trust building, rather than simple transactions that treat trust as a binary condition or simple threshold, this will be worth watching.

    Comments (0) + TrackBacks (0) | Category: social software

    February 16, 2005

    Social Software: Stuff that gets you laid...

    Posted by Clay Shirky

    JWZ has a great rant on the brokenated nature of groupware, written after a conversation with a friend building an open-source groupware project:

    If you want to do something that’s going to change the world, build software that people want to use instead of software that managers want to buy.

    When words like “groupware” and “enterprise” start getting tossed around, you’re doing the latter. You start adding features to satisfy line-items on some checklist that was constructed by interminable committee meetings among bureaucrats, and you’re coding toward an externally-dictated product specification that maybe some company will want to buy a hundred “seats” of, but that nobody will ever love. With that kind of motivation, nobody will ever find it sexy. It won’t make anyone happy.

    He then offered a more upbeat definition of social software than ‘stuff that gets spammed’:

    But with a groupware product, nobody would ever work on it unless they were getting paid to, because it’s just fundamentally not interesting to individuals.

    So I said, narrow the focus. Your “use case” should be, there’s a 22 year old college student living in the dorms. How will this software get him laid?

    That got me a look like I had just sprouted a third head, but bear with me, because I think that it’s not only crude but insightful. “How will this software get my users laid” should be on the minds of anyone writing social software (and these days, almost all software is social software).

    “Social software” is about making it easy for people to do other things that make them happy: meeting, communicating, and hooking up.

    Comments (8) + TrackBacks (1) | Category: social software

    February 3, 2005

    Tagging's power law

    Posted by Clay Shirky

    Ben Hyde looks at four popular bookmarks at del.icio.us and plots how many times each is tagged with the same word. E.g., BoingBoing is tagged as “blog” 200 times and as “news” 90 times. The curve is that of a classic power law: the most frequently used tags are used waaaay more frequently than lesser-used tags.

    Ben stresses that four bookmarks don’t constitute a significant sample, but wouldn’t we expect a folksonomy to assume the shape of a power law distribution?
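The ranked-frequency count behind such a plot is simple to sketch. Here is a toy version in Python; the tag counts are hypothetical, loosely echoing the BoingBoing figures above, not Ben's actual data.

```python
from collections import Counter

# Hypothetical tagging events for one bookmark (counts are made up,
# loosely echoing the BoingBoing figures cited above).
tag_events = (["blog"] * 200 + ["news"] * 90 + ["daily"] * 40 +
              ["culture"] * 15 + ["tech"] * 6 + ["fun"] * 2)

counts = Counter(tag_events)
ranked = counts.most_common()

# In a power-law-shaped distribution, frequency drops off steeply
# with rank: the head tag dwarfs the tail.
for rank, (tag, n) in enumerate(ranked, start=1):
    print(rank, tag, n)
```

Run against real del.icio.us data, the same ranked count would show whether a folksonomy takes on the familiar head-and-long-tail shape.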

    Comments (0) + TrackBacks (0) | Category: social software

    February 1, 2005

    Tags run amok!

    Posted by Clay Shirky

    Back In The Day, when I was trying to explain what I meant when I was talking about social software, but before Coates pulled my fat out of the fire by doing the work for me, I had all these wicked abstruse definitions that made everyone’s eyes glaze over.

    The only definition I ever found that created the lightbulb moment I was feeling was “Social software is stuff that gets spammed.” Not a perfect definition, but serviceable in its way.

    Comes now del.icio.us tag spam from user DaFox, as if to illustrate the principle — a single link, whose extended description is a variation on the form “Best site EVAR!” and who has tagged the site (for his or her own retrieval doubtless) with the following tags:
    .imported .net 10placesofmycity 2005 3d academic accessibility activism advertising ai amazon amusing animation anime apache api app apple apps architecture art article articles astronomy audio backup bands bittorrent blog blogging blogs book bookmark books browser business c canada career china christian clothing cms code coding collaboration color comic comics community computer computers computing cooking cool creativity css culture daily database deals …

    The list includes another couple hundred items — that must be some site, containing as it does not just the above listed items but info relevant to Ruby programming, New York City, typography, economics, and porn. DaFox is the Canter and Siegel for the social software generation.

    Comments (6) + TrackBacks (1) | Category: social software

    Folksonomy: The Soylent Green of the 21st Century

    Posted by Clay Shirky

    In What Do Tags Mean, Tim Bray says “There is no cheap metadata” (quoting himself from the earlier On Search.) He’s right, of course, in both the mathematical sense (metadata, like all entropy-fighting moves, requires energy) and in the human sense — in On Search, he talks about the difficulties of getting users to enter metadata.

    And yet I keep having this feeling that folksonomy, and particularly amateur tagging, is profound in a way that the ‘no cheap metadata’ dictum doesn’t cover.

    Imagine a world where there was really no cheap metadata. In that world, let’s say you head on down to the local Winn-Dixie to do your weekly grocery accrual. In that world, once you pilot your cart abreast of the checkout clerk, the bargaining begins.

    You tell her what you think a 28 oz of Heinz ketchup should cost. She tells you there’s a premium for the squeezable bottle, and if you’re penny-pinching, you should get the Del Monte. You counter by saying you could shop elsewhere. And so on, until you arrive at a price for the ketchup. Next out of your cart, the Mrs. Paul’s fish sticks…

    Meanwhile, back in the real world, you don’t have to do anything of the kind. When you get to the store, you find that, mirabile dictu, the metadata you need is already there, attached to the shelves in advance of your arrival!

    Consider what goes into pricing a bottle of Heinz: the profit margin of the tomato grower, the price of a barrel of oil, local commercial rents, average disposable incomes in your area, and the cost of providing soap in the employee bathrooms. Yet all those inputs have already been calculated, and the resulting price then listed on handy little stickers right there on the shelves. And you didn’t have to do any work to produce that metadata.

    Except, of course, you did. Every time you pick between the Heinz and the Del Monte, it’s like clicking a link, the simplest possible informative transaction. Your choice says “The Heinz, at $2.25 per 28 oz., is a better buy than the Del Monte at $1.89.” This is so simple it doesn’t seem like you’re producing metadata at all — you’re just getting ketchup for your fish sticks. But in aggregate, those choices tell Del Monte and Heinz how to capture the business of the price-sensitive and premium-tropic, respectively.

    That looks like cheap metadata to me. And the secret is that that metadata is created through aggregate interaction. We know how much more Heinz ketchup should cost than Del Monte because Heinz Inc. has watched what customers do when they raise or lower their prices, and those millions of tiny, self-interested transactions have created the metadata that you take for granted. And when you buy ketchup, you add your little bit of preference data to the mix.

    So this is my Get Out of Jail Free card to Tim’s conundrum. Cheap metadata is metadata made by someone else, or rather by many someone elses. Or, put another way, the most important ingredient in folksonomy is people.

    I think cheap metadata has (at least) these characteristics:

    1. It’s made by someone else
    2. Its creation requires very few learned rules
    3. It’s produced out of self-interest (Corollary: it is guilt-free)
    4. Its value grows with aggregation
    5. It does not break when there is incomplete or degenerate data

    And this is what’s special about tagging. Lots of people tag links on del.icio.us, so I get lots of other people’s metadata for free. There is no long list of rules for tagging things ‘well,’ so there are few deflecting effects from transaction cost. People tag things for themselves, so there are no motivation issues. The more tags the better, because with more tags, I can better see both communal judgment and the full range of opinion. And no one cares, for example, that when I tag things ‘loc’ I mean the Library of Congress — the system doesn’t break with tags that are opaque to other users.

    This is what’s missing in the “Users don’t tag their own blog posts!” hand wringing — they’re not supposed to. Tagging is done by other people. As Cory has pointed out, people are not good at producing metadata about their own stuff, for a variety of reasons.

    But other people will tag your posts if they need to group them, find them later, or classify them for any other reason. And out of this welter of tiny transactions comes something useful for someone else. And because the added value from the aggregate tags is simply the product of self-interest + ease of use + processor time, the resulting metadata is cheap. It’s not free, of course, but it is cheap.

    Comments (5) + TrackBacks (0) | Category: social software

    January 30, 2005

    del.icio.us Tag Stemming

    Posted by Clay Shirky

    Matt Biddulph has put up a del.icio.us tag stemmer, which will take your username (or indeed any username) and point out the possible inconsistencies based on word stemming (tag/tags/tagging, etc.) It will also take a URL, scan all users who tagged it, and look for the same thing.

    What it will not (yet) do is return the full list of tags sorted by frequency, listing both tags with alternate stems and those without, but I assume this is simply a matter of time.
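A minimal version of the idea can be sketched in a few lines of Python. The suffix list here is a crude stand-in for whatever real stemming algorithm Biddulph's tool uses; the sample tags are made up.

```python
from collections import defaultdict

def crude_stem(tag):
    # Crude suffix stripping -- a stand-in for a real stemmer
    # (Porter or similar). "tagging", "tags", and "tag" all map to "tag".
    for suffix in ("ging", "ing", "s"):
        if tag.endswith(suffix) and len(tag) > len(suffix) + 2:
            return tag[:-len(suffix)]
    return tag

def find_inconsistencies(tags):
    # Group a user's tags by stem and flag any stem that has
    # more than one surface form.
    groups = defaultdict(set)
    for t in tags:
        groups[crude_stem(t)].add(t)
    return {stem: forms for stem, forms in groups.items() if len(forms) > 1}

print(find_inconsistencies(["tag", "tags", "tagging", "blog", "python"]))
# Only the tag/tags/tagging cluster is flagged.
```

The same grouping, run over every user who tagged a given URL, gives the second behavior described above.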

    This is part of why I think tags are such a big deal — they are annotations for the only native unit of accounting the Web has, namely the URL; the annotations are themselves URLs that can be further annotated; and they are simple enough in both concept and technical design that third-party services like ‘stemtags’ can easily be built on top of the system.

    Comments (4) + TrackBacks (0) | Category: social software

    Looking for avid Wikipedia users

    Posted by Clay Shirky

    Press request by proxy — for someone working on a story about Wikipedia, is there anyone out there who would consider themselves either an avid user of Wikipedia or an avid contributor to it? If so, drop me a line.

    Comments (0) + TrackBacks (0) | Category: social software

    January 29, 2005

    Folksonomy is better for cultural values: A response to danah

    Posted by Clay Shirky

    danah’s great piece on cultural issues in folksonomy gets to a key piece of the debate, namely that we can’t talk about categorization issues like accuracy without also talking about the culture that created the categories. However, I feel a curious disconnect between her exposition of the issues and her tone. There seems to be skepticism about folksonomic tagging in her post (though possibly it is just the reflexive skepticism of the academy.)

    In any case, I want to point out that, for almost all the issues she raises, those characteristics are worse, much worse, in formal classification schemes than in folksonomies in general, and folksonomic tagging in particular.

    At the risk of running good writing through the sausage grinder (and re-ordering it to boot) her list of issues is broadly:

    1. There are perspective problems (e.g. the tag ‘me’ on Flickr.)
    2. Tags can be gamed (in the manner of the MLK/Technorati tags)
    3. Classification schemes are always culturally dependent
    4. Many terms are contested
    5. Some words cannot be simply translated literally
    6. Some words have multiple or conflicting meanings

    All the items on that list are true, and items 1 and 2 on the list are genuine design issues. System gaming is an issue, and can be fought with, inter alia, opacity of ranking method (the Google way), reputation markets (the Ebay way), continual post-hoc edits (the Wiki way), and so on. Each of these solutions may be tried in different places where folksonomy takes hold.

    And for the relative tag problem, there may be a small enough number of those kinds of tags — me, toread, unfiled, etc. — that we can make a dictionary filter. But the relativity can also be interesting when cross-tabbed with the identity of the tagger; I don’t want ‘toread’ or ‘funny’ generally, but I do want Liz’s ‘toread’ tags, and Matt Webb’s ‘funny’ links.

    Items 3-6 on that list are different because while they are problematic in folksonomies, they are more — much more — problematic in top-down classification systems. Folksonomies represent progress in those areas, in other words.

    You want cultural dependence? The Library of Congress, in its top level categories for geographic regions, lists “The Balkan Peninsula” as one main entity, and “Asia” as another. Contested terms? Try finding queer literature in any library classification scheme. And so on. Folksonomic tagging improves on this by exposing cultural dependence and contestedness, rather than denying its existence, or hiding it by fiat.

    (As an aside, the signal loss from the pressures brought to bear on official categorizations is a common theme in classification generally. The entire alt. hierarchy in usenet came into being because there was a proposal to create rec.drugs, and there was concern that usenet, running in part over an NSF-funded network, would be shut down. The alt.* hierarchy was a compromise, to allow some face saving in suggesting that the *.drugs group was not ‘official’. And of course, alt. (an early folksonomy, albeit highly compromised by usenet’s hierarchical design) ballooned to many times the size of the ‘official’ usenet.)

    The aggregate good of tags is not that they create consensus or accuracy; they observably don’t, and this very observability is much of their value. Pick any popular del.icio.us link, click on the “and X other people” link under the URL, and you’ll see how that page is tagged by dozens or hundreds of people. There is broad alignment around a few terms, but there is also a long tail of other views, which you don’t get in formal systems.

    To take but one example, of the 114 people who tagged the Buffyology database, 66 tagged it buffy, 58 tagged it tv, and only 12 tagged it database, the third most popular tag. But 4 people tagged it sf or scifi, 2 tagged it fantasy, and 2 tagged it vampire. So the ambiguity between the literature of fantasy and of science fiction is exposed in the tagging, and the possibility of viewing Buffy as a thing related not mainly to TV but to vampires is also preserved.
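Those numbers make the head-versus-tail split easy to compute. A Python sketch, using the counts cited above; note that splitting the 4 combined sf/scifi votes evenly is an assumption, and that the counts sum to more than 114 because each tagger can apply several tags.

```python
# Tag counts for the Buffyology page as cited above; the even
# sf/scifi split of the 4 combined votes is an assumption.
tag_counts = {"buffy": 66, "tv": 58, "database": 12,
              "sf": 2, "scifi": 2, "fantasy": 2, "vampire": 2}

ranked = sorted(tag_counts.items(), key=lambda kv: kv[1], reverse=True)
head, tail = ranked[:3], ranked[3:]

total = sum(tag_counts.values())
print("head:", head)                     # broad communal alignment
print("tail votes:", sum(n for _, n in tail), "of", total)
```

The head shows the communal judgment; the tail preserves the minority readings (vampire, fantasy) that a formal scheme would collapse or discard.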

    So this is what I don’t get: I can’t imagine that anyone concerned about hegemony and marginalization would prefer professionally structured categories over folksonomy. If you care about contested terms and the risks of marginalization, del.icio.us, Flickr, et al do more to improve our access to, and understanding of, marginalization and contestation than any current alternative.

    Comments (4) + TrackBacks (0) | Category: social software

    January 28, 2005

    Ross and danah in an article on Friendster in the NYT

    Posted by Clay Shirky

    You can tell how behind I am on my reading — just got to the Monday NY Times article on Friendster, to find our own Ross and danah quoted:

    “Social networking is at this very interesting point,” said Ross Mayfield, a pioneer in the social networking field and the chief executive of Socialtext, which sells software for collaborative writing and editing via the Internet. “These companies are at the stage where they need to demonstrate real results in terms of revenues and their business model. That voyeuristic fascination of seeing who has the most friends has worn off for a lot of people.”

    Comments (0) + TrackBacks (0) | Category: social software

    January 26, 2005

    Britannica not so great in the fact-checking department after all

    Posted by Clay Shirky

    It’s so good, I don’t even want to comment:

    A SCHOOLBOY with a fascination for Poland and wildlife has uncovered several significant errors in the latest — the fifteenth — edition of the Encyclopaedia Britannica. Lucian George, 12, a pupil at Highgate Junior School in North London, was delving into the volumes on Poland and wildlife in Central Europe when he noted the mistakes.

    More, much more, here.

    Comments (4) + TrackBacks (0) | Category: social software

    When usenet was the world...

    Posted by Clay Shirky

    AOL is no longer offering usenet access.

    This feels to me like they’re tearing down an old diner in a neighborhood I used to live in. I never go there anymore, but I spent 5 years of my life on usenet, and 2 of those years in a fever I can’t characterise as anything other than addiction. I learned to write there, and it’s one of only two places where I had people I’d call real friends who I never met IRL. (The other was Old Man Murray, RIP.)

    The moment AOL started offering newsgroup access, it created the ‘long September’, where the flood of clueless newbies became more or less permanent, in contrast with previous years, when every September saw an influx of freshmen at colleges with usenet access, who were flamed to high heaven until they acculturated, and the whole thing settled back down by October.

    It was also the first sign that the logic of value through interconnection was stronger than the logic of value through exclusivity. (AOL’s genius, unrivalled, I believe, in the modern era, was to say one thing to investors and another to users, telling Wall Street it was a media company while selling communications tools and the attendant access to community to its users, functions that always dwarfed use of its media properties.)

    There’s not much to eulogize here — the era when you had to explain to the press that the internet was more than just usenet is long over, and AOL’s vital place in the ecosystem wanes each day with the fortunes of dial-up access generally. Over at MeFi, they’re trying to revel in the idea that the long September is ending, but that’s not what’s happening here — AOL’s decision to disconnect its NNTP servers cedes usenet to spam and usenet access to Google, both of which were done deals some time ago.

    Still, it feels kind of funny. Usenet was such a spectacular experiment in the annals of human communication that the idea that its value isn’t worth the cost of keeping the servers running comes as a marker of things I already knew, but which still feel different when they become facts in the world.

    Comments (2) + TrackBacks (0) | Category:

    January 25, 2005

    Ontology as a term of art

    Posted by Clay Shirky

    After my little tirade yesterday, my friend Kio pointed out that ontology is a term of art for almost every group that uses it, and that it has very different meanings in those various groups.

    For the metaphysicians, ontology is inquiry into the nature and relations of being, with a particular focus on fundamental categories. That’s not what I mean.

    The definition of ontology I’m referring to is derived more from AI than philosophy: a formal, explicit specification of a shared conceptualization. (Other glosses of this AI-flavored view can be found using Google for define:ontology.) It is that view that I am objecting to.

    Comments (3) + TrackBacks (0) | Category: social software

    January 24, 2005

    Good post on folksonomy; another on tagging

    Posted by Clay Shirky

    Great post on folksonomy from Bokardo.com:
    Folksonomy Notes: Considering the Downsides, Behavioral Trends, and Adaptation

    One thing that I mentioned in response to Liz’s post was that I feel we should keep in mind how adaptive we humans are. It is a fundamental talent we have. Too often, I think, we ignore this quality, pushing for consistency over everything else, when all we need is a little explanation of how things work. Once we know how they work, we’re fine. I’m not advocating a willy-nilly approach to designing architectures for humans: I’m advocating a willy-nilly approach to designing architectures by humans, who use a willy-nilly approach when reading and writing and speaking words.

    and another on tagging by Tim Bray: What Do Tags Mean?

    I think that it would be nice if a huge number of web pages converged on using a simple, flat, shared set of tags with entries like vancouver and mac os x and tsunami relief, which the current setup works well for.

    But I think it would also be nice if, once we have Atom, there are feeds about Petroleum Geology with their own tags, and feeds about Military Training too, and they each have their own drill tag. Which Atom would support nicely.

    Of course, the only people who would need to know about the Petroleum or Military tags would be people specifically looking for that kind of stuff; someone looking for a drill tag generically would probably get both and maybe that would be fine.

    Bottom line: I suspect Technorati, and anyone else who takes this up, should offer an (optional) “scheme” field in their tag search capability, which would be handy for those who care and invisible for those who don’t.

    Comments (2) + TrackBacks (0) | Category: social software

    Tags != folksonomies && Tags != Flat name spaces

    Posted by Clay Shirky

    Grrrr - I hate not having the time to write the post I want to write on this, but here goes…

    Tags are labels attached to things. This procedure is absolutely orthogonal to whether professionals or amateurs are doing the tagging.

    Professionals often think tags are covalent with folksonomies because their minds have been poisoned by the false dream of ontology, but also because tagging looks too easy (in the same way the Web looked too easy to theoreticians of hypertext.) Not only are tags amenable to being used as controlled vocabularies, it’s happening today, where groups are agreeing about how to tag things so as to produce streams of e.g. business research.

    More importantly, tags are not the same as flat name spaces. The LiveJournal interests list, the first large-scale folksonomy I became aware of (though before the label existed), is flat. The interest list has one meaning: Person X has Interest Y, included in List L. All of L is attached to X, and all Y’s are equivalent in L.

    Tags don’t work that way at all. Tags are multi-dimensional, and only look flat, in the way Venn diagrams look flat. When I tag something ‘socialsoftware drupal’, I enable searches of the form “socialsoftware & drupal”, “socialsoftware &! (and not) drupal”, “drupal &! socialsoftware”, and so on.

    Hierarchy is a degenerate case of tags. If hierarchy floats your boat, by all means tag hierarchically. If I tag so that A &! B returns no results, and a search on A alone returns the same items as A & B, then A is a subset of B at the moment.
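The set-theoretic reading of tags is easy to make concrete. Here is a Python sketch over a few hypothetical bookmarks (the URLs and tag assignments are made up for illustration).

```python
# Hypothetical tagged bookmarks.
tagged = {
    "http://example.org/a": {"socialsoftware", "drupal"},
    "http://example.org/b": {"socialsoftware"},
    "http://example.org/c": {"drupal", "php"},
}

def search(include, exclude=frozenset()):
    # "A & B" is include={A, B}; "A &! B" is include={A}, exclude={B}.
    return {url for url, tags in tagged.items()
            if set(include) <= tags and not (set(exclude) & tags)}

print(search({"socialsoftware", "drupal"}))    # just /a
print(search({"socialsoftware"}, {"drupal"}))  # just /b
```

The hierarchy test falls out of the same function: A is (currently) a subset of B exactly when `search({"A"}, {"B"})` is empty and `search({"A"})` equals `search({"A", "B"})`.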

    This last point is key — the number one fucked up thing about ontology (in its AI-flavored form - don’t get me started, the suckiness of ontology is going to be my ETech talk this year…), but, as I say, the number one thing, out of a rich list of such things, is the need to declare today what contains what as a prediction about the future. Let’s say I have a bunch of books on art and creativity, and no other books on creativity. Books about creativity are, for the moment, a subset of art books, which are a subset of all books.

    Then I get a book about creativity in engineering. Ruh roh. I either break my ontology, or I have to separate the books on creativity, because when I did the earlier nesting, I didn’t know there would be books on creativity in engineering. A system that requires you to predict the future up front is guaranteed to get worse over time.

    And the reason ontology has been even a moderately good idea for the last few hundred years is that the physical fact of books forces you to predict the future. You have to put a book somewhere when you get it, and as you get more books, you can neither reshelve constantly, nor buy enough copies of any given book to file it on all dimensions you might want to search for it on later.

    Ontology is a good way to organize objects, in other words, but it is a terrible way to organize ideas, and in the period between the invention of the printing press and the invention of the symlink, we were forced to optimize for the storage and retrieval of objects, not ideas. Now, though, we can scrap the stupid hack of modeling our worldview on the dictates of shelf space. One day the concept of creativity can be a subset of a larger category, and the next day it can become a slice that cuts across several categories. In hierarchy land, this is a crisis; in tag land, it’s an operation so simple it hardly merits comment.

    The move here is from graph theory (arrange everything in a tree graph, so that graph traversal becomes the organizing principle) to set theory (sets have members, and the overlap or non-overlap of those memberships becomes the organizing principle.) This is analogous to the change in how we handle digital data. The file system started out as a tree graph. Then we added symlinks (aliases, shortcuts), which said “You can organize things differently than you store them, and you can provide more than one mode of access.”

    The URI goes all the way in that direction. The URI says “Not only does it not matter where something is stored, it doesn’t matter whether it’s stored. A URI that generates the results on the fly is as valid as one that points to a disk.” And once something is no longer dependent on tree graph traversals to find it, you can dispense with hierarchical assumptions about categorizing it too.

    Comments (13) + TrackBacks (1) | Category: social software

    January 23, 2005

    The Innovator's Lemma

    Posted by Clay Shirky

    To respond to David’s question about folksonomies (“Aren’t we going to innovate our way out of this?”), my answer is yes, but only for small values of “out.” A big part of what’s coming is accepting and adapting to the mess, instead of exiting it.

    Seeing people defend professional classification as a viable option for large systems is giving me horrible flashbacks to the arguments in 1993 about why gopher was superior to the Web. Gopher was categorized by professionals, and it was hierarchical. Gotta love hierarchy for forcing organization. You can’t just stick things any old place — to be able to add something to a hierarchy, you have to say where it goes, and what it goes with, next to, under, and above. And if you do it right, you can even call it an ontology, which means you get to charge extra. (I loves me some ontologies.)

    The Web, meanwhile, was chaos. Chaos! You could link anything to anything else! Melvil Dewey would plotz if he saw such a tuml. How on earth could you organize the Web? The task is plainly impossible.

    And you know what? The gopher people were right. The Web is chaos, and instead of getting the well-groomed world of gopher, we’ve adapted to the Web by meeting it half way.

    ...continue reading.

    Comments (7) + TrackBacks (0) | Category: social software

    January 22, 2005

    Folksonomies are a forced move: A response to Liz

    Posted by Clay Shirky

    Liz’s fantastic posts on folksonomy (one, two) detail the new issues we’re facing or will face around folksonomic organization. In the first post, though, she takes on my earlier argument about the economic value of folksonomy, saying

    Clay argues that detractors from wikipedia and folksonomy are ignoring the compelling economic argument in favor of their widespread use and adoption. Perhaps. But I’m arguing that it’s just as problematic to ignore the compelling social, cultural, and academic arguments against lowest-common-denominator classification. I don’t want to toss out folksonomies. But I also don’t want to toss out controlled vocabularies, or expert assignment of categories. I just don’t believe that all expertise can be replicated through repeated and amplified non-expert input.

    I don’t believe that either, so I want to re-state my views on the subject.

    I believe that folksonomies will largely displace professionally produced meta-data, and that this will not take very long to happen. However, I do not think that folksonomy is better than controlled vocabularies or expert judgment, except for completely tautological definitions of ‘better’, where the rise of folksonomy is viewed as prima facie evidence of superiority. This is not the position I take.

    If I had to craft a statement I thought both Liz and I could agree with, it would be that technology always involves tradeoffs among various characteristics in a particular environment. She goes on to list some of those characteristics, including especially the risks from lowest-common-denominator classifications. So far, so simpatico.

    Here’s where I think we disagree. She thinks economic value is another of the characteristics to be traded off. I think economic value is the environment.

    Put another way, I don’t think it matters what is lost by not having professionally produced metadata in any environment where that is not an option anyway, by virtue of being priced out of the realm of possibility.

    So when she says I am urging an uncritical acceptance of folksonomies, she is half right. I am not in favor of uncriticality; indeed, in the post she references, I note that well-designed metadata is better than folksonomies on traditional axes of comparison.

    But she’s right about the ‘acceptance’ half. It doesn’t matter whether we “accept” folksonomies, because we’re not going to be given that choice. The mass amateurization of publishing means the mass amateurization of cataloging is a forced move. I think Liz’s examination of the ways that folksonomies are inferior to other cataloging methods is vital, not because we’ll get to choose whether folksonomies spread, but because we might be able to affect how they spread, by identifying ways of improving them as we go.

    To put this metaphorically, we are not driving a car, with gas, brakes, reverse and a lot of choice as to route. We are steering a kayak, pushed rapidly and monotonically down a route determined by the environment. We have a (very small) degree of control over our course in this particular stretch of river, and that control does not extend to being able to reverse, stop, or even significantly alter the direction we’re moving in.

    Comments (4) + TrackBacks (1) | Category: social software

    More on Social Software as a term

    Posted by Clay Shirky

    I got an email from Alex Pang at the Institute for the Future, asking about the future of social software; I began my response by writing about the value of the term in the recent past. (I’ll post my predictions for the future under separate cover.) This is, in a way, a continuation of the conversation about social software as a term, kicked off last fall by Christopher Allen’s history and definition of the word.

    Alex’s questions were: Where do you think social software will be in ten years? Will it be the foundation of a discrete category of applications or services? Will social software-like capabilities be built into other software? Will the whole concept be as outdated as a KC and the Sunshine Band album?

    Yes, in 10 years, the phrase will be outdated. We won’t need it anymore because the value of social interaction will be folded into a large number of applications, sometimes as built-in features, sometimes as external services that get integrated in the manner of web services.

    Looking back, the phrase ‘social software’ has served three functions. First, it called attention to an explosion of new work that was otherwise seemingly unrelated: at first glance, del.icio.us isn’t like Meetup isn’t like Socialtext. The label made it both possible and fruitful to examine those similarities, and to imagine how applications like those might be combined or extended.

    ...continue reading.

    Comments (3) + TrackBacks (0) | Category: social software

    January 7, 2005

    folksonomies + controlled vocabularies

    Posted by Clay Shirky

    There’s a post by Louis Rosenfeld on the downsides of folksonomies, and speculation about what might happen if they are paired with controlled vocabularies.

    …it’s easy to say that the social networkers have figured out what the librarians haven’t: a way to make metadata work in widely distributed and heretofore disconnected content collections.

    Easy, but wrong: folksonomies are clearly compelling, supporting a serendipitous form of browsing that can be quite useful. But they don’t support searching and other types of browsing nearly as well as tags from controlled vocabularies applied by professionals. Folksonomies aren’t likely to organically arrive at preferred terms for concepts, or even evolve synonymous clusters. They’re highly unlikely to develop beyond flat lists and accrue the broader and narrower term relationships that we see in thesauri.

    I also wonder how well Flickr, del.icio.us, and other folksonomy-dependent sites will scale as content volume gets out of hand.

    This is another one of those Wikipedia cases — the only thing Rosenfeld says that’s actually wrong is the ‘lack of development’ bit. Del.icio.us is less than a year old and spawning novel work like crazy, so declaring that the thing has run out of steam while people are still freaking out about Flickr seems fatally premature.

    The bigger problem with Rosenfeld’s analysis is its TOTAL LACK OF ECONOMIC SENSE. We need a word for the class of comparisons that assumes the status quo is cost-free, so that any new work, once it can be shown to have disadvantages relative to the status quo, is also assumed to be inferior to it.

    The advantage of folksonomies isn’t that they’re better than controlled vocabularies, it’s that they’re better than nothing, because controlled vocabularies are not extensible to the majority of cases where tagging is needed. Building, maintaining, and enforcing a controlled vocabulary is, relative to folksonomies, enormously expensive, both in development time and in the cost to the user, especially the amateur user, of using the system.

    Furthermore, users pollute controlled vocabularies, either because they misapply the words, or stretch them to uses the designers never imagined, or because the designers say “Oh, let’s throw in an ‘Other’ category, as a fail-safe” which then balloons so far out of control that most of what gets filed gets filed in the junk drawer. Usenet blew up in exactly this fashion, where the 7 top-level controlled categories were extended to include an 8th, the ‘alt.’ hierarchy, which exploded and came to dwarf the entire, sanctioned corpus of groups.

    The cost of finding your way through 60K photos tagged ‘summer’, when you can use other latent characteristics like ‘who posted it?’ and ‘when did they post it?’, is nothing compared to the cost of trying to design a controlled vocabulary and then force users to apply it evenly and universally.
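That tradeoff is easy to see in code. Here is a minimal sketch, in Python, of narrowing a free-for-all tag pool by latent characteristics like uploader and date; the photo records and field names are hypothetical, not Flickr’s actual data model:

```python
from datetime import date

# Hypothetical photo records: free-form tags, plus latent metadata
# (uploader, post date) the system captures for free.
photos = [
    {"id": 1, "tags": {"summer", "beach"}, "user": "alice", "posted": date(2004, 7, 4)},
    {"id": 2, "tags": {"summer"},          "user": "bob",   "posted": date(2004, 1, 2)},
    {"id": 3, "tags": {"winter", "ski"},   "user": "alice", "posted": date(2004, 12, 20)},
]

def find(photos, tag, user=None, after=None):
    """Narrow a tag search using latent characteristics."""
    hits = [p for p in photos if tag in p["tags"]]
    if user is not None:
        hits = [p for p in hits if p["user"] == user]
    if after is not None:
        hits = [p for p in hits if p["posted"] > after]
    return hits

print([p["id"] for p in find(photos, "summer")])                # -> [1, 2]
print([p["id"] for p in find(photos, "summer", user="alice")])  # -> [1]
```

Nothing here asks the taggers to agree on a vocabulary in advance; all of the narrowing comes from metadata the system records anyway.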

    This is something the ‘well-designed metadata’ crowd has never understood — just because it’s better to have well-designed metadata along one axis does not mean that it is better along all axes, and the axis of cost, in particular, will trump any other advantage as it grows larger. And the cost of tagging large systems rigorously is crippling, so fantasies of using controlled metadata in environments like Flickr are really fantasies of users suddenly deciding to become disciples of information architecture.

    This is exactly, eerily, as stupid as graphic designers thinking in the late 90s that all users would want professional but personalized designs for their websites, a fallacy I was calling “Self-actualization by font.” Then the weblog came along and showed us that most design questions agonized over by the pros are moot for most users.

    Any comparison of the advantages of folksonomies vs. other, more rigorous forms of categorization that doesn’t consider the cost to create, maintain, use and enforce the added rigor will miss the actual factors affecting the spread of folksonomies. Where the internet is concerned, betting against ease of use, conceptual simplicity, and maximal user participation, has always been a bad idea.

    Comments (15) + TrackBacks (0) | Category: social software

    Jake responds to my post on Wikipedia and authority

    Posted by Clay Shirky

    [Ed. Note: Jake wrote such a long and thoughtful comment responding to my post of yesterday on the Wikipedia and authority that I wanted to re-post it as an entry, with comments and trackbacks of its own.

    I have re-formatted it to use box-style quotes and expanded some acronyms, but changed none of the substance. -clay ]

    Clay writes:

    Picking up on yesterday’s theme of authority, the authority of, say, Coleridge’s encyclopedia was the original one: authority derived from the identity of the author. This is like trusting Mom’s Diner, or the neighborhood tailor — personal reputation is worth preserving, and helps assure quality. The authority of Britannica, by contrast, is the authority of a commercial brand. Their sales are intimately tied into their reputation for quality, so we trust them to maintain those standards, in order to preserve an income stream. This is like trusting Levis or McDonald’s — you don’t know the individuals who made your jeans or your french fries, but the commercial incentive the company has in preserving its brand makes the level of quality predictable and stable.

    Jake comments:

    Yes, but a brand is some sort of an ethereal thing. [I think a ‘not’ was dropped, as in “… not some sort of ethereal thing.” — ed] It is a symbolic representation of a product or the underlying institution that created it. Trademark rights are common law in nature and they attach through use.

    So while it is true that at some level this sort of structure is a reification, in practical terms its role in building trust must be considered. To me this is really the core of the concerns about Wikipedia: is the structure adequate to the task of establishing something authoritative enough to be useful?

    ...continue reading.

    Comments (3) + TrackBacks (0) | Category: social software

    January 6, 2005

    Coates' new shorthand definition of social software

    Posted by Clay Shirky

    Tom glosses himself, coming up with a pithier and more example-driven definition of social software:

    Social Software can be loosely defined as software which supports, extends, or derives added value from, human social behaviour - message-boards, musical taste-sharing, photo-sharing, instant messaging, mailing lists, social networking.

    Comments (6) + TrackBacks (0) | Category: social software

    Wikipedia: The nature of authority, and a LazyWeb request...

    Posted by Clay Shirky

    There was another point of danah’s I wanted to respond to, but yesterday’s post had gotten quite long enough, and in any case, I had a slightly different take in mind for this, including a LazyWeb request at the end, relating to this image:

    danah says “Wikipedia appears to be a legitimate authority on a vast array of topics for which only one individual has contributed material. This is not the utopian collection of mass intelligence that Clay values.” This misconstrues a dynamic system as a static one. The appropriate phrase is “…for which only one individual has contributed material so far.”

    Wikipedia is not a product, it is a system. The collection of mass intelligence that I value unfolds over time, necessarily. Like democracy, it is messier than planned systems at any given point in time, but it is not just self-healing, it is self-improving. Any given version of Britannica gets worse over time, as it gets stale. The Wikipedia, by contrast, whose version is always the Wiki Now, gets better over time as it gets refreshed. This improvement is not monotonic, but it is steady.

    ...continue reading.

    Comments (8) + TrackBacks (0) | Category: social software

    January 5, 2005

    Wikipedia: Me on boyd on Sanger on Wales

    Posted by Clay Shirky

    My response to danah’s response about the Wikipedia/anti-elitism debate:

    First, some background. I have the same “yes, but” reaction over and over to Wikipedia detractors — much of what both Sanger and boyd say is wrong with the Wikipedia is wrong with it, but then there’s this incredible leap from “The site as it stands has faults” to “…and so it must be ignored or radically altered.”

    Reading pieces like Sanger’s, I feel like I’m being told that bi-planes fly better than F-16s because F-16s are so heavy. You cannot understand how well things fly without understanding both weight and thrust.

    It’s a similar leap to assume that, since the Wikipedia has disadvantages relative to the Encyclopedia Britannica, Britannica must therefore be better. The real question is “Weighing the advantages and disadvantages of the Wikipedia against the advantages and disadvantages of Britannica, under what conditions is Britannica better, and under what conditions is Wikipedia better?”

    ...continue reading.

    Comments (11) + TrackBacks (0) | Category: social software

    January 4, 2005

    Taggle: A proposed Google for Folksonomies

    Posted by Clay Shirky

    Post at brianstorms about federated folksonomies a la del.icio.us and flickr, with an image-as-thought-experiment that sums up the idea neatly:

    Comments (2) + TrackBacks (0) | Category:

    Reagle on the Wikipedia

    Posted by Clay Shirky

    As if on cue, I got a pointer from Joseph Reagle to his recent paper on the Wikipedia. It’s a fascinating piece, largely concerned with disagreement and dispute resolution among participants, but relative to the current debate about the Sanger piece, this bit of history jumped out at me:

    Wikipedia is the populist offshoot of the Nupedia project started in March of 2000 by Jimbo Wales and Larry Sanger. Nupedia’s mission was to create a free encyclopedia via rigorous expert review under a free documentation license. Unfortunately, this process moved rather slowly and having recently been introduced to Wiki, Sanger persuaded Wales to set up a scratch-pad for potential Nupedia content where anyone could contribute. However, “There was considerable resistance on the part of Nupedia’s editors and reviewers, however, to making Nupedia closely associated with a website in the wiki format. Therefore, the new project was given the name ‘Wikipedia’ and launched on its own address, Wikipedia.com, on January 15 [2001]”


    Wikipedia proved to be so successful that when the server hosting Nupedia crashed in September of 2003 (with little more than 23 “complete” articles and 68 more in progress) it was never restored.

    The idea of a Wikipedia, but vetted, runs aground on the simple math of relative growth. The ‘filter, then publish’ model of Nupedia, for all its considerable virtues, is simply inadequate to deal with rapid growth (only 23 articles completed!). The Wikipedia’s success rests on its ability to grow with minimal constraints, even when that means the whole is a work in progress.

    Comments (0) + TrackBacks (0) | Category: social software

    January 3, 2005

    K5 Article on Wikipedia Anti-elitism

    Posted by Clay Shirky

    Slashdot has a roundup of criticism of the Wikipedia, including a pointer to a Kuro5hin article by Larry Sanger, a co-founder of the Wikipedia, making three strong criticisms of the Wikipedia as it stands.

    The first criticism is that the Wikipedia lacks the perception of accuracy:

    My point is that, regardless of whether Wikipedia actually is more or less reliable than the average encyclopedia, it is not perceived as adequately reliable by many librarians, teachers, and academics. The reason for this is not far to seek: those librarians etc. note that anybody can contribute and that there are no traditional review processes. You might hasten to reply that it does work nonetheless, and I would agree with you to a large extent, but your assurances will not put this concern to rest.

    This analysis seems to be correct on the surface, and at the same time deeply deeply wrong. Of course librarians, teachers, and academics don’t like the Wikipedia. It works without privilege, which is inimical to the way those professions operate.

    This is not some easily fixed cosmetic flaw, it is the Wikipedia’s driving force. You can see the reactionary core of the academy playing out in the horror around Google digitizing books held at Harvard and the Library of Congress — the NY Times published a number of letters by people insisting that real scholarship would still only be possible when done in real libraries. The physical book, the hushed tones, the monastic dedication, and (unspoken) the barriers to use, these are all essential characteristics of the academy today.

    It’s not that it doesn’t matter what academics think of the Wikipedia — it would obviously be better to have as many smart people using it as possible. The problem is that the only thing that would make the academics happy would be to shoehorn it into the kind of ‘filter, then publish’ model that is broken, and would make the Wikipedia broken as well.

    Sanger’s second complaint is about governance:

    Far too much credence and respect accorded to people who in other Internet contexts would be labelled “trolls.” There is a certain mindset associated with unmoderated Usenet groups and mailing lists that infects the collectively-managed Wikipedia project: if you react strongly to trolling, that reflects poorly on you, not (necessarily) on the troll. If you attempt to take trolls to task or demand that something be done about constant disruption by trollish behavior, the other listmembers will cry “censorship,” attack you, and even come to the defense of the troll.

    This complaint is right, I think, inasmuch as it hits the core problem of Wikipedia (and of social software generally), namely governance. How do you take a group of individuals who disagree and get them to co-create, and to agree to be bound by a decision-making process that will assure that no one gets everything they want? And how do you also make that system open?

    However, Sanger gives Wales and the Wikipedia contributors too little credit here, I think. Governance is a certified Hard Problem™, and at the extremes, co-creation, openness, and scale are incompatible. The Wikipedia’s principal advantage over other methods of putting together a body of knowledge is openness, and from the outside, it looks like the Wikipedia’s guiding principle is “Be as open as you can be; close down only where there is evidence that openness causes more harm than good; when this happens, reduce openness in the smallest increment possible, and see if that fixes the problem.” Lather, rinse, repeat.

    You can see this incrementalism in the Wikipedia crew’s creeping approach to limiting edits — not allowing edits on the home page, paragraph-level edits on long articles, etc. These kinds of solutions were deployed only in response to particular problems, and only after those problems were obviously too severe to be dealt with in any other way.

    This pattern means that there will always be problems with governance on the Wikipedia, by definition. If you don’t lock down, you will always get the problems associated with not locking down. However, to take the path Sanger seems to be advocating — lock down more, faster — risks giving up the Wikipedia’s core virtue. The project may yet fail because there is no sweet spot between openness and co-creation at Wikipedia scale. But locking down pre-emptively wouldn’t avoid that failure; it would accelerate it.

    Sanger’s final point, that the Wikipedia is anti-elitist, is quite similar to his first complaint. Yes, it is impossible for experts on a subject to post their views without molestation, but that’s how wikis work. It’s certainly easy to imagine systems where experts are deferred to mechanically. Much of the world, including, significantly, the academy, works that way. But if you want a system that works that way, you don’t want a wiki, and if you want a wiki, you won’t get a system that works that way.

    In place of ordained expertise, my guess is that the Wikipedia will move further towards a ‘core group’ strategy, where there will be increasing separation of powers between committed and casual users, and the system will gain a kind of deference, not for expertise (a fairly elusive quality that Sanger invokes but never defines) but for demonstrated commitment to the project.

    It’s been fascinating to watch the Kübler-Ross stages of people committed to Wikipedia’s failure: denial, anger, bargaining, depression, acceptance. Denial was simple; people who didn’t think it was possible simply disbelieved. But the numbers kept going up. Then they got angry, perhaps most famously in the likening of the Wikipedia to a public toilet by a former editor of the Encyclopedia Britannica. Sanger’s post marks the bargaining phase: “OK, fine, the Wikipedia is interesting, but whatever we do, let’s definitely make sure that we change it into something else rather than letting the current experiment run unchecked.”

    Next up will be a glum realization that there is nothing that can stop people from contributing to the Wikipedia if they want to, or to stop people from using it if they think it’s useful. Freedom’s funny like that.

    Finally, acceptance will come about when people realize that head-to-head comparisons with things like Britannica are as stupid as comparing horseful and horseless carriages — the automobile was a different kind of thing than a surrey. Likewise, though the Wikipedia took the -pedia suffix to make the project comprehensible, it is valuable as a site of argumentation and as a near-real-time reference, functions a traditional encyclopedia isn’t even capable of. (Where, for example, is Britannica’s reference to the Indian Ocean tsunami?)

    The Wikipedia is an experiment in social openness, and it will stand or fall with the ability to manage that experiment. Whining like Sanger’s really only merits one answer: the Wikipedia makes no claim to expertise or authority other than use-value, and if you want to vote against it, don’t use it. Everyone else will make the same choice for themselves, and the aggregate decisions of the population will determine the outcome of the project.

    And 5 years from now, when the Wikipedia is essential infrastructure, we’ll hardly remember what the fuss was about.

    Comments (36) + TrackBacks (0) | Category: social software

    A Really Simple Chat, v3.0b: Back Channels 'R' It

    Posted by Clay Shirky

    I’ve admired Manuel Kiessling’s marvelous A Really Simple Chat (ARSC) program since I saw Greg Elin using it as conference support back in 2002. Greg and I played around with it some more that fall for a small social software conference, adding some features specific to backchannel support (detailed in In-Room Chat as a Social Tool.)

    Now Manuel is beta-testing 3.0 of ARSC, and it’s got a lot of native support for backchannel features — there’s a specific ‘in room chat’ room, with Jerry Michalski’s ‘red card/green card’ system built in. There is also a default ‘Display’ user (beta login ‘Display’, passwd ‘arsc’) whose view of the chat is optimized for projection or a plasma screen by boosting the font size and dropping the input features.

    He’s also added user levels and two-level moderation, along with a number of other new features.

    The goal, says Manuel, is to “…make it possible to set up a digital backchannel in under 2 minutes, without any dirty hacks.” You can play with the beta version on his site, or install your own.

    As always, the advantage of ARSC is its ability to circulate access to a chat via URL, which is often simpler than getting people to download special software for IRC, and which allows a variety of strategies for inclusivity and exclusivity.

    I still have some LazyWeb requests for ARSC. The main one is to be able to turn the current message post-processing tool into a full-fledged Atom feed, to make it easier to archive, monitor, and even bridge between ARSC and IRC.
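As a sketch of what that bridge might look like (the message format below is hypothetical, not ARSC’s actual log format), rendering a chat log as an Atom feed is only a few lines of Python:

```python
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def chat_to_atom(messages, feed_title="backchannel"):
    """Render a list of chat messages as a minimal Atom feed.

    `messages` is a hypothetical format: dicts with 'id', 'user',
    'text', and 'stamp' (an RFC 3339 timestamp).
    """
    ET.register_namespace("", ATOM)
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = feed_title
    for m in messages:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}id").text = m["id"]
        ET.SubElement(entry, f"{{{ATOM}}}title").text = f'{m["user"]} says'
        ET.SubElement(entry, f"{{{ATOM}}}updated").text = m["stamp"]
        ET.SubElement(entry, f"{{{ATOM}}}content").text = m["text"]
    return ET.tostring(feed, encoding="unicode")
```

Point any feed reader at the output and archiving and monitoring come along for free.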

    I’d also like to see some sort of ‘scroll-speed’ setting, where, when there are more than a dozen or so interjections in a minute, the later comments are pre-cached and roll out on the screen at some specified maximum pace, rather than the earlier comments flying off the screen. When things get crazy, make the machines do the work, not the people…
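The pacing idea can be sketched simply (this is an illustration of the requested behavior, not ARSC code): buffer incoming lines, and release them no faster than a set maximum rate:

```python
import collections
import time

class PacedScroller:
    """Buffer incoming chat lines and release them at a maximum pace,
    so a burst rolls out steadily instead of flying off the screen.
    A sketch of the behavior described above, not ARSC's implementation."""

    def __init__(self, max_per_second=2.0, clock=time.monotonic):
        self.interval = 1.0 / max_per_second
        self.queue = collections.deque()
        self.clock = clock          # injectable for testing
        self.next_release = 0.0

    def post(self, line):
        """Accept a new line immediately; display happens later."""
        self.queue.append(line)

    def release_due(self):
        """Return the lines whose display time has arrived."""
        out = []
        now = self.clock()
        while self.queue and now >= self.next_release:
            out.append(self.queue.popleft())
            self.next_release = max(now, self.next_release) + self.interval
        return out
```

With a burst of a dozen posts, the display would then drain the queue at, say, two lines a second, instead of scrolling the earlier comments away.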

    And Manuel has registered inroomchat.org as well — nothing there yet, but he says he’s going to “…set up a wiki there for all things social software/in-room chat/digital backchannel.”

    Comments (2) + TrackBacks (0) | Category: social software

    January 1, 2005

    Good Piece on Folksonomies

    Posted by Clay Shirky

    Good piece by Adam Mathes, Folksonomies - Cooperative Classification and Communication Through Shared Metadata:

    Perhaps the most important strength of a folksonomy is that it directly reflects the vocabulary of users. In an information retrieval system, there are at least two, and possibly many more vocabularies present (Buckland, 1999). These could include that of the user of the system, the designer of the system, the author of the material, the creators of the classification scheme; translating between these vocabularies is often a difficult and defining issue in information systems. As discussed earlier, a folksonomy represents a fundamental shift in that it is derived not from professionals or content creators, but from the users of information and documents. In this way, it directly reflects their choices in diction, terminology, and precision.

    Comments (0) + TrackBacks (0) | Category: social software

    December 22, 2004

    Notes from ITP: Flickr-as-web-services edition

    Posted by Clay Shirky

    Been away, working on a bunch of things including, most speculatively, a proposal for a book with the working title Organization in the Age of Social Devices, where devices refers both to our tools and to the things people do with those tools when left to their own devices. The collected themes of the book will be no surprise to readers here.

    All that is so 2006, however, and this is still 2004, so I want to try to capture some of what I’ve been seeing this semester at ITP. Unlike last year, when the fall semester largely resolved itself for me into a single big surprise (the pattern I’m calling Situated Software), this year I’m seeing lots of distributed effects with no one common thread, so I’m going to do a series of posts on things I’ve seen.

    So, first of all, ITP is Flickr-obsessed. Either the community is in the grip of a fast-moving addiction, or we’re the epicenter of a pandemic; time will tell.

    I’ll start with two quick Flickr stories…

    ...continue reading.

    Comments (1) + TrackBacks (0) | Category: social software

    November 17, 2004

    Monitor110: Collective wisdom for investors

    Posted by Clay Shirky

    Monitor110 is taking the Technorati pattern and customizing it for investors. It’s for-fee and alpha, so it’s not easy to test-drive, but they claim near-real-time monitoring of 6M+ sources, which is half again as large as the Technorati universe; given their subscribing/scraping pattern, this means the RSS universe has grown considerably larger than the weblog universe alone.

    This falls in the latent social software category, but it’s interesting to see the ‘collective wisdom of the weblog world’ pattern of blogdex et al. moving from the general to the specific.

    Comments (2) + TrackBacks (0) | Category: social software

    November 12, 2004

    Blogging as activity, blogging as identity

    Posted by Clay Shirky

    I was mis-somethinged in the paper today — not misquoted or misattributed, exactly, but something like misconstrued, which makes me think both of danah’s post about blogging metaphors and Liz’s post about presenting blogs as diaries vs. blogs as outlets for research.

    In an article on weblogging and politics in today’s NY Times, Tom Zeller, the reporter who wrote the story, has me saying this:
    [Shirky] suggests that the online fact-finding machine has come unmoored, and that some bloggers simply “can’t imagine any universe in which a fair count of the votes would result in George Bush being re-elected president.”

    I said the part inside the quotes, and it is, in a narrow way, accurate — I have seen a remarkable profusion of posts about the election that treat any evidence of e-voting errors as evidence of a theft of the election on a grand scale.*

    But the sentiment overall is not something I believe — I am in fact on the record over at PersonalDemocracy.com as saying that the unmasking of the National Guard memos was the most critical event in the use of internet technology in this campaign.

    What rubs me wrong is that the quote is framed in a way that makes it about identity, not activity. One way to present this would have been to define an axis of interest: ‘some Democrats “can’t imagine any universe in which a fair count of the votes would result in George Bush being re-elected president.”’ Another would have been to define a relatively neutral category: ‘some writers “can’t imagine any universe in which a fair count of the votes would result in George Bush being re-elected president.”’

    Neither of those seems wrong, but the way it’s phrased, I seem to be suggesting that there are bloggers unmoored from the fact-checking pattern because they are bloggers, rather than because they are Democratic partisans who publish their thoughts using weblog tools. And that’s where it goes wrong.

    I have long been of the opinion that the word weblog has no crisp meaning anymore, and is going to fade as a defining term for the same reason ‘portal’ did — there are too many patterns to be conveniently contained by one word. But here the nature of weblogging and webloggers is defined, from outside, as not just a category, but an identity.

    And that, I think, is not just wrong but unfair. So many people have weblogs now that anyone wanting to say anything sweeping and negative about the weblog world — they’re all bored teenagers, they rant instead of writing, they’re conspiracy theorists, whatever — can find, in 10 minutes on Technorati, a hundred weblogs that support their point of view.

    If I had the interview to do over again, I’d say that many of the people who “can’t imagine any universe in which a fair count of the votes would result in George Bush being re-elected president” are blogging about it. Blogging is increasingly an activity rather than an identity, too widespread and various to be pigeonholed, and should be treated as such when people are writing about it.

    ——

    * As for my own views: I voted for Kerry and I’m sorry Bush won. I also think e-voting is a disaster. But I don’t think the two are linked — Bush’s majority was too large, and even if a reversal in Ohio or Florida had won Kerry the electoral college, he still lost the popular vote. I thought it was rotten to have a popular win and an electoral loss in 2000, and I still hold that principle.

    Comments (5) + TrackBacks (0) | Category: social software

    November 8, 2004

    IMsmarter

    Posted by Clay Shirky

    IMsmarter.com is a service designed to add GMail-style functions to your IM conversations, by setting up a proxy service that archives all your IM conversations, making them persistent, accessible from multiple clients, searchable, combinable, etc. (They also have a blog feature, where you can set up a blog through the service, and sub to blogs created by other IMsmarter users, but that feels added-on and irrelevant compared to the main offering.)

    I’ve only been playing with it for a while, and it’s still clunky in parts. Search, for example, only covers the text of a conversation, not the username, so when I say “Hmm, I was talking to Alex, and he said something about…” I can’t (or can’t find a way to) search on his name directly.
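The fix I’m wishing for is not deep. A toy sketch of an archive that indexes both the message text and the conversation partner (purely illustrative, not IMsmarter’s implementation) looks like this:

```python
class IMArchive:
    """In-memory sketch of an IM log archive that searches both message
    text and the conversation partner's name -- the missing feature
    described above. Hypothetical, not IMsmarter's actual design."""

    def __init__(self):
        self.messages = []  # list of (partner, text) pairs

    def log(self, partner, text):
        """Record one message from a conversation with `partner`."""
        self.messages.append((partner, text))

    def search(self, term):
        """Case-insensitive match against text OR partner name."""
        term = term.lower()
        return [(p, t) for (p, t) in self.messages
                if term in t.lower() or term in p.lower()]
```

With the partner’s name in the index, “I was talking to Alex” becomes a one-word query instead of a dead end.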

    Architecturally, though, it’s another turn of the centralization/decentralization screw, where adding a centralized server upstream of a P2Pish app like IM creates novel value. And by using a proxy, which is a pretty low-level tool, they get to be platform and client-agnostic, since most clients have to support proxying to deal with firewalls.

    To be determined: whether AOL tries to kill it. That, more than any subsequent features, will determine its success or failure.

    Comments (8) + TrackBacks (0) | Category: social software

    November 5, 2004

    Group as User

    Posted by Clay Shirky

    A year and a half ago I suggested, in a speech at the Etech conference called A Group Is Its Own Worst Enemy, that…

    It has to be hard to do at least some things on the system for some users, or the core group will not have the tools that they need to defend themselves [against rogue users].

    Now, this pulls against the cardinal virtue of ease of use. But ease of use is wrong. Ease of use is the wrong way to look at the situation, because you’ve got the Necker cube [of individual and group effects] flipped in the wrong direction. The user of social software is the group, not the individual.

    I think we’ve all been to meetings where everyone had a really good time, we’re all talking to one another and telling jokes and laughing, and it was a great meeting, except we got nothing done. Everyone was amusing themselves so much that the group’s goal was defeated by the individual interventions.

    The user of social software is the group, and ease of use should be for the group. If the ease of use is only calculated from the user’s point of view, it will be difficult to defend the group from the “group is its own worst enemy” style attacks from within.

    I’ve just put up another piece on this subject — Group As User: Flaming and the Design of Social Software — about what we can learn from the history of flaming in mailing lists and other conversational spaces:

    Flaming is one of a class of economic problems known as The Tragedy of the Commons. Briefly stated, the tragedy of the commons occurs when a group holds a resource, but each of the individual members has an incentive to overuse it. (The original essay used the illustration of shepherds with common pasture. The group as a whole has an incentive to maintain the long-term viability of the commons, while each individual has an incentive to overgraze, maximizing the value they can extract from the communal resource.)

    In the case of mailing lists (and, again, other shared conversational spaces), the commonly held resource is communal attention. The group as a whole has an incentive to keep the signal-to-noise ratio high and the conversation informative, even when contentious. Individual users, though, have an incentive to maximize expression of their point of view, as well as maximizing the amount of communal attention they receive. It is a deep curiosity of the human condition that people often find negative attention more satisfying than inattention, and the larger the group, the likelier someone is to act out to get that sort of attention.

    However, proposed responses to flaming have consistently steered away from group-oriented solutions and towards personal ones. The logic of collective action, alluded to above, rendered these personal solutions largely ineffective. Meanwhile attempts at encoding social bargains weren’t attempted because of the twin forces of door culture (a resistance to regarding social features as first-order effects) and a horror of censorship (maximizing individual freedom, even when it conflicts with group goals.)
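The grazing logic in that excerpt can be made concrete with a toy payoff model, standard commons arithmetic with all the numbers invented for illustration:

```python
# Toy version of the grazing story: each message "grazes" communal
# attention, so the attention any one message gets falls as total
# volume rises.  N, A, and B are invented, illustrative numbers.

N = 20         # list members
A, B = 100, 1  # attention per message = A - B * total_posts

def payoff(my_posts, total_posts):
    """One member's payoff: their posts times per-message attention."""
    return my_posts * (A - B * total_posts)

def best_response(others_posts):
    """Volume that maximizes one member's payoff, others held fixed."""
    return (A - B * others_posts) / (2 * B)

def nash_posts():
    """Symmetric equilibrium: everyone best-responds to everyone."""
    return A / (B * (N + 1))

def social_posts():
    """Per-member volume that maximizes the group's total payoff."""
    return A / (2 * B * N)
```

In this sketch each member posts about 4.8 messages in equilibrium (a group total of roughly 95), while the group as a whole would be best off at a total of 50; and whenever everyone else holds to the group optimum, any single member profits by posting more. That gap is the mailing-list problem in miniature.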

    The piece goes on to contrast wiki and weblogish ways of avoiding flaming, and uses that as input for suggesting a number of possible experiments with social form.

    Comments (5) + TrackBacks (0) | Category: social software

    October 21, 2004

    Merholz on Metadata for the Masses

    Posted by Clay Shirky

    Great Peter Merholz piece, Metadata for the Masses, continuing the folksonomy/ethnoclassification thread

    We’re beginning to see ethnoclassification in action on the social bookmarks site Del.icio.us, and the photo sharing site Flickr. […] The primary benefit of free tagging is that we know the classification makes sense to users. It can also reveal terms that “experts” might have overlooked. “Cameraphone” and “moblog” are newborn words that are already among Flickr’s most popular; such adoption speed is unheard of in typical classifications. For a content creator who is uploading information into such a system, being able to freely list subjects, instead of choosing from a pre-approved “pick list,” makes tagging content much easier. This, in turn, makes it more likely that users will take time to classify their contributions.
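Part of what Merholz is pointing at is that the mechanics are almost embarrassingly simple: popularity falls straight out of the data, with no pre-approved pick list anywhere. A minimal sketch, with invented tag data:

```python
from collections import Counter

# Hypothetical stream of (item, tags) pairs, as a free-tagging system
# like Flickr or del.icio.us might collect them; the tags are invented.
tagged_items = [
    ("photo-1", ["cameraphone", "moblog", "nyc"]),
    ("photo-2", ["cameraphone", "sunset"]),
    ("link-1",  ["mac", "osx"]),
    ("photo-3", ["moblog", "cameraphone"]),
]

# The "most popular tags" list is just a frequency count over
# whatever words users happened to type.
tag_counts = Counter(tag for _, tags in tagged_items for tag in tags)
print(tag_counts.most_common(2))  # [('cameraphone', 3), ('moblog', 2)]
```

A newborn word like “cameraphone” climbs this list the moment users start typing it, which is the adoption speed no curated classification can match.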

    Comments (1) + TrackBacks (0) | Category:

    October 14, 2004

    Social software as a term

    Posted by Clay Shirky

    danah is right, Allen has done us all a great favor by posting his work on the term social software. I want to address her despisity (despision? despisement?) of the term, though, especially as my shayna punim graces the 2000+ section of Allen’s doc.

    I don’t think the term ‘social software’ is perfect, but I do think it’s optimal, as it’s obviously in use where other terms aren’t. So I think it’s a local, but not global, maximum.

    And I think it’s a local maximum because software is where the action is now, in this kind of social experimentation. In fact, I think that danah’s complaint, “I feel as though the term allows us to emphasize the technology instead of the behavior that it supports” is one of the things the phrase ‘social software’ has going for it.

    Technologists have been looking at social interaction for decades, usually through one of two lenses — either ignoring the users as hopelessly irrational (something with more than a grain of truth in it, as committed groups have emotional, not just intellectual, motivations), or by trying to shoehorn social applications into single-user paradigms, ignoring the fact that groups produce interactions that individuals and pairs don’t (e.g. flaming, trolling, etc.)

    So in this case, emphasizing to technologists the way software can both embody and alter social patterns is the right answer, since it creates a sense of possibility in bringing new techniques to old patterns to see what the interaction is like.

    And the misunderstanding of the need for, and use of, that phrase by technologists seems to me to be at the heart of danah’s complaint. When she asks “Why are we acting like giddy children who just found a new toy?”, it’s because we’re giddy children who have just found a new toy.

    The fact that sociologists have been talking about these kinds of things for decades hasn’t created a lot of value in the way social tools are designed, since there isn’t much of a habit among sociologists of talking in ways or venues that matter to developers (though there are notable exceptions, of course, including danah herself.)

    I spent the summer reading academic journals, principally Small Groups and Group Dynamics, and I can tell you that if you are minded to actually change the way groups interact, there’s more insight in the Flickr and del.icio.us interfaces than in the combined publishing history of those journals.

    The difficulty of pattern fit is a fairer cop, since the domain of social software has fuzzy edges (of necessity, in my view.) For example, I say pairwise communication, such as SMS, falls outside the pattern, while MMOs are in, since the group effects, not the pairwise ones, are the hard ones to deal with. But this is a judgment call; others differ.

    And as for the focus on YASNSes, wikis, weblogs, etc, yes, the tech community suffers from neophilia, but then we would, wouldn’t we? If we thought old things were as good as new things, we wouldn’t invent new things. This creates some loss, but also considerable gain, and our tribe, almost by definition, is made up of people who think the gains from focusing on the new things outweigh the losses. (Not all of us are pure neophiles, though — my first post here was to a piece of writing done in 1970…)

    Complaining that we shouldn’t be delighted that new things are happening reminds me of Dave Farber’s comment that Napster was nothing new, because the original internet treated all machines as peers. Everything since — scale, the rise of the PC, the spread of audio tools, the horror of unstable IP addresses — seemed to him to be mere details. And yet Napster was a big deal — it, and not the original IP-bound tools, changed the world in the direction of both decentralization and what Tom Coates calls the New Musical Functionality.

    And social software feels like that to me now. We’ve had social network maps since the 1930s, and the 6 Degrees pattern since the 1960s, but we’ve only had networking services since 1996, and only had working ones since 2002. Bass-Station, Meetup, Flickr, del.icio.us, Fotowiki, LiveJournal, dodgeball, Audioscrobbler, these are new things, and they play well with others (unlike Lotus Notes et al, which wrecked the earlier term groupware), so we are not just getting new tools but are getting combinatorial complexity, as with the spread of the del.icio.us tagging pattern from feature to infrastructure.

    danah’s term, computer-mediated social interaction, is more descriptive, but probably less galvanizing for tool-builders, and places the edge problems in the technical domain — users don’t regard phones as computers, for example. Just plain “mediated group interaction” is probably even better (with my admitted bias towards triplets and away from pairs), but say that to someone who builds software and see if their eyes light up or glaze over.

    Allen probably credits me too much with popularizing the term (it was, ironically, Dave Winer and Shelley Powers who did most to spread it, by denouncing me and the horse I rode in on, back in 2002.) However, inasmuch as I have spread it, my principal goal hasn’t been accuracy, but inspiration. If you can get people schooled in single-user interaction to get excited about understanding and coding for the inherently social possibilities of software, you get cool new stuff to play with. And that beats uninspiring accuracy any day…

    Comments (8) + TrackBacks (0) | Category: social software

    October 6, 2004

    Blog Explosion and Insider's Club: Brothers in cluelessness

    Posted by Clay Shirky

    A couple years ago, when I was looking for some concise way to define Social Software, one of the definitions I used was “stuff worth spamming”, on the theory that any open group channel worth its salt would attract that form of system gaming.

    Behold Blog Explosion, the first blog spam co-operative. With Blog Explosion, you can now sign up to generate worthless traffic to other weblogs, in return for their generating worthless traffic to you:

    The concept is very simple. You read other blog sites and they in return visit your blog. Blogexplosion is the internet’s first blog exchange where thousands of bloggers visit each other’s blogs in order to receive tons of blog traffic. Imagine how many other people out there could be adding your blog to their blogroller and how many people would be reading your blog every day with this sort of attention. It’s free to use!

    And NJ.com offers more proof, as if any were needed, that fantasizing about weblogs has become a broad cultural obsession, as the article Take the inside track to the insider’s club demonstrates:

    Injecting yourself into the inside ranks of any subculture, from coin collecting to Java software programming, was once an arduous, seemingly impossible task, requiring years of experience, flights to far-off conventions, and lots of schmoozing with insiders. No more. Now anyone can assume the position of insider — one of those in-the-know types who is up on the latest news, is acquainted with all the major players, and is viewed as a personage of some esteem within a discreet arena. From e-mail to Weblogs, the online world opens up avenues to cozy up to experts, make a mark in your avocation or profession, and be viewed, in your own right, as someone who matters.

    It ends with the exhortation “And the ultimate act of insiderdom? Create a Weblog. Do it, devote your life to it, and you will soon be a star.”

    I can’t tell whether to feel happy or sad that I’ve sat through this movie so many times that I can mouth the words, but seeing the idea of web rings and that old “Now you can have direct access to world leaders — through e-mail!” meme run through the “Now with new Blogs!” treatment does suggest we’ve entered the phase where first-mover advantages are being sold to Nth movers, where N is large. Next stop, exposes airing the disappointment of people who started a blog and worked on it all week and still didn’t become famous.

    Just like the web before it, the people selling ‘it’s the easy path to a big audience’ are not the inventors of the pattern but the people who understood how things worked when the crowd was small, and begin selling those virtues to the people whose very entrance into the system pushes it out of communal equilibrium.

    Comments (1) + TrackBacks (0) | Category: social software

    September 30, 2004

    The Seven Two Pieces Social Software Must Have

    Posted by Clay Shirky

    Last year Matt Webb at Interconnected posted On Social Software, which was then picked up by Matt Phillips at drupal, who posted Incentives for online software: the 7 pieces social software must have…. Both Webb’s and Phillips’s pieces were riffing on Stewart Butterfield’s earlier post on the same subject. The list of attributes as posted at drupal was Identity, Presence, Relationships, Conversations, Groups, Reputation, and Sharing. [Updated 9/30, 19:40 EDT to reflect Matt Webb’s work.]

    I just went through the list for this semester’s Social Software class at ITP, and re-arranged it, because the list is too big to be a subset of all social software (very few systems have formal support for Reputation or Relationships, for example), but much too small to be a superset of all interesting features (a potentially infinite list.)

    I think there are in fact only two attributes — Groups and Conversations — which are on the ‘necessary and sufficient’ list (though I have expanded the latter to Conversations or Shared Awareness, for reasons described below.) I doubt there are other elements as fundamental as these two, or, put another way, software that supports these two elements is social, even if it supports none of the others. (Wikis actually come quite close to this theoretical minimum, for reasons also discussed below.)

    Some of the remaining attributes are “technological signature” questions. These are not about essence so much as characterization — what kind of software is it? What are its core technological capabilities? I have four attributes that fall into this category, having added two to the drupal list: Identity and Roles, Presence, Naming and Addressing, and Valence. I think you can learn important things about any piece of social software by examining these four attributes. There are probably others.

    Finally, there are three leftovers from the original seven. These are essentially optional characteristics, which only pertain to a subset of social software and which were, I believe, wrongly included in the original list out of an excitement about recent innovations. The inessential characteristics included on the drupal list are Sharing, Relationships, and Reputation. Others are of course possible: Profiles? FOAF networking? etc.

    My version follows.

    ...continue reading.

    Comments (4) + TrackBacks (0) | Category: social software

    September 28, 2004

    Wiki+ patterns: TiddlyWiki and Web Collaborator

    Posted by Clay Shirky

    We’ve written a number of times over the last two years on various attempts to fuse the use patterns of wikis and weblogs — they are appealingly different in their uses but complementary in the way their users value them.

    Here are three recent entries in that category:

    TiddlyWiki, a wiki-patterned note-taking app written by Jeremy Ruston and designed to run entirely in the browser. No set-up at all, with the acknowledged skeleton in the closet that it can’t save things easily without access to a server. PhpTiddlyWiki is Patrick Curry’s attempt to fix that omission by tying a TiddlyWiki to a MySQL db through PHP.

    The fascinating thing to me about TiddlyWiki isn’t so much the lowered set-up costs as the way a user page is built up during a session. Content creation is pure wiki, down to using CamelCase to specify new bits of content. The units of content themselves, however, are pure blog — post rather than page oriented, with multiple posts per page. And the user interaction is, I think, unique. Clicking on a post moves that post into the blog-standard central column, placing it underneath any posts the user previously clicked on, so the page is not reverse-chronological by date but reverse-chronological by user interaction — page depth as history (though oddly it switches modes when you are clicking on content within a post in the center column — there the content opens under, rather than on top of, the most recent item).

    Anyway, a lot of words to explain what is a very natural feeling but novel pattern of interaction — well worth a look.
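The “page depth as history” pattern amounts to nothing more than keeping the column ordered by interaction: each clicked post joins the column, and re-clicking a post moves it to its new place in the history. A rough sketch of the ordering logic (my own illustration, not TiddlyWiki’s actual code, and the post names are invented):

```python
# The central column is just the user's click history: each newly
# opened post lands below the ones opened earlier, and re-opening a
# post moves it to the end of the history.

def open_post(column, post):
    """Append a clicked post to the column; re-clicking moves it."""
    if post in column:
        column.remove(post)
    column.append(post)
    return column

column = []
for click in ["GettingStarted", "CamelCase", "GettingStarted", "Plugins"]:
    open_post(column, click)

print(column)  # ['CamelCase', 'GettingStarted', 'Plugins']
```

Three lines of logic, but it means the page you end up with is a record of how you read, not of when things were written.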

    The third item is Web Collaborator, a wiki/discussion board fusion, designed as a free collaborative space. The bet there seems to be that the free-formness of the wiki can be improved upon by making two categories of social behavior on wikis explicit in the tool. First, they make a distinction between discussion about and creation of shared content, and provide a discussion board for the former. Second, they provide a contact list of people on the project, with some nesting of editorial controls.

    I’ve just watched student groups in this semester’s Social Software class use a wiki to coordinate group work, and most groups did both — listed their contacts as the first thing they put on the wiki page, and shuttled, sometimes uncomfortably, between conversation and shared editing modes. And Liz will be pleased to see that it passes her “Is it ugly?” test for wikidom generally.

    However, the discussion board on WebCollaborator seems badly designed, suffering from problems ranging from UI issues (too much white space makes comments standalone rather than conversational) to a disconnect with the mission (the conversation is all macro, and I couldn’t see any way to tie a particular discussion to a particular piece of shared content), making it hard to focus the group on proposed edits or changes.

    The overall pattern is interesting, however, and they seem to have come to it with an appreciation for the wiki form and a desire to make a few simple changes that support existing social patterns, rather than attempting a wholesale makeover.

    Comments (0) + TrackBacks (0) | Category: social software

    MSFT releases FlexWiki as Open Source project

    Posted by Clay Shirky

    We wrote about Microsoft’s FlexWiki project last December. Now eWeek is reporting that Microsoft is releasing FlexWiki code under an Open Source license. (Code is available on SourceForge, though it indicates that it is extensions to FlexWiki — I am not sure when or where the full codebase will be released.)

    Comments (0) + TrackBacks (0) | Category: social software

    September 27, 2004

    Ethan Zuckerman on bias in Wikipedia

    Posted by Clay Shirky

    Good Ethan Zuckerman post on systemic bias in the Wikipedia, and on a proposal called CROSSBOW (Committee Regarding Overcoming Serious Systemic Bias On Wikipedia), to address the problem:

    Amazing though it is, Wikipedia is not flawless. It’s got a problem common to almost all peer production projects: people work on what they want to work on. (This “problem” is probably the secret sauce that makes peer production projects work… which is what makes it such a difficult problem to tackle.) Most of the people who work on Wikipedia are white, male technocrats from the US and Europe. They’re especially knowledgeable about certain subjects - technology, science fiction, libertarianism, life in the US/Europe - and tend to write about these subjects. As a result, the resource tends to be extremely deep on technical topics and shallow in other areas. Nigeria’s brilliant author, Chinua Achebe gets a 1582 byte “stub” of an article, while the GSM mobile phone standard gets 16,500 bytes of main entry, with dozens of related articles.

    It is the hallmark of working open source projects that criticism tends to lead to better code rather than just more arguing; the Wikipedia seems to have that same pattern down.

    Comments (2) + TrackBacks (0) | Category: social software

    September 24, 2004

    Social sharing service tutorial

    Posted by Clay Shirky

    Interesting guide to building social services featuring “implicit social discovery”, the pattern behind del.icio.us, Flickr, and Webjay.

    The service we build will let you:

    1. Publish observations or ‘stuff’ onto a website.
    2. Categorize it in a variety of ways.
    3. Pivot on your or others’ observations to discover other related topics or persons.

    Our work will be modelled on newly emerging services including del.icio.us, Flickr and Webjay. The code itself will be a rewrite based on what I’ve learned from developing Thingster and BooksWeLike over the last year.

    Social content services have a strong emphasis on implicit social discovery. Users use these services to organize their own content for later recollection. But since the services are public, other users can peek into the collective space, and discover similar items, topics or persons. We’re going to look for opportunities in this project to stress the ‘synthesis’ aspect of social discovery; to escape from the pattern of curated collections managed and presented by one person.

    It’s cool to see features become patterns become platforms, and I love the ‘implicit social discovery’ label.

    Comments (3) + TrackBacks (0) | Category: social software

    September 15, 2004

    SIPShare: P2P SIP-based filesharing from Earthlink

    Posted by Clay Shirky

    Acronym-laden craziness! Earthlink has released SIPShare, a proof-of-concept P2P filesharing network that uses Session Initiation Protocol (SIP), a tool originally designed for voice and video communications, to set up file sharing networks.

    EarthLink believes an open Internet is a good Internet. An open Internet means users have full end-to-end connectivity to say to each other whatever it is they say, be that voice, video, or other data exchanges, without the help of mediating servers in the middle whenever possible. We believe that if peer-to-peer flourishes, the Internet flourishes. SIPshare helps spread the word that SIP is more than a powerful voice over IP enabler — much more. SIP is a protocol that enables peer-to-peer in a standards-based way. The emerging ubiquity of SIP as a general session-initiation enabler provides a rare opportunity to offer users all manner of P2P applications over a common protocol, instead of inventing a new protocol for each new P2P application that comes along.

    Written in Java, with a BSD-style license, so it should be extensible in a way that Skype, Grouper, et al are not. Will be interesting to see if anyone uses this as a base for file-sharing + group communications.

    Comments (0) + TrackBacks (0) | Category: social software

    Stowe Boyd on Pay-for-Play in the YASNS world

    Posted by Clay Shirky

    Over at Get Real, Stowe notices that Ryze is starting to charge users for access to parts of the overall network, and considers the issues that pay-for-play in social networks raises:

    It’s not that [Pay for Play] is shady, since members theoretically know what they are getting into when they sign up (leaving aside the issue of changing the policies after the fact), but it starts to raise questions:

    * People want to be networked and meet others with whom to do business, so it makes sense to be listed in the ‘yellow pages’ of the future, which is what these services seem to be tending toward. But if it is a ‘yellow pages’ model, shouldn’t people pay to be listed?
    * If it is, on the other hand, a telephone exchange model, certainly the ones making the call (making the search) should pay.
    * If it is a dating service model, people want to get hooked up with people meeting their profiled interests and (theoretically) no one else, and therefore, the service should be managing things so that unwanted contact does not happen.

    So, it looks like we are evolving some scary, blendo model of business, here. I am free to join, but I don’t have the rights of the paying members who can (in some circumstances) see me when I can’t see them. This inequality is troubling, but parallels other fee-for-rights movements, like paid travel lanes in public highways. But since, in principle, I want to be contacted in some circumstances this should be ok, right? Well, only so long as I am never spammed, and it seems likely that those paying for the paid memberships are more likely to be using the service to sell, sell, sell.

    Comments (0) + TrackBacks (0) | Category: social software

    September 14, 2004

    CFP on Virtual Communities

    Posted by Clay Shirky

    ACM’s SIGGROUP has a call for papers on novel approaches to virtual community (Due date: Jan 15, 2005), with the charming title of Less of You, More of Us: The Political Economy of Power in Virtual Communities, explicitly trying to counter some of the research biases present in work to date:

    For example, there are a large number of researchers inquiring into the recent blogging phenomenon, but I have heard many explicitly exclude technologies/communities such as LiveJournal.com with its 3.8 million users (1.7 active), and discount the value of teenage bloggers, who are mostly female (67% of Livejournal users). Because researchers tend to cover familiar territories, we encourage authors to explore alternatives. Our issue will provide researchers with the opportunity to expose the readership to a wider sense of virtual community and what is going on at the edges of the event horizon.

    For my money, of course, the most important research bias to undo is the bias that regards the use of social software as mainly leading to virtual communities, with real-world ties between participants being regarded as an unusual occurrence. Most communities regarded as ‘virtual’ have at least some good old-fashioned face-to-face interaction among some of the members, and as Meetup et al have shown us, that trend increases as the density of internet users grows.

    Nevertheless, it looks like an interesting CFP, and their interest in non-traditional sources of insight could open the door for some much-needed conversation between academics and practitioners.

    Comments (0) + TrackBacks (0) | Category: social software

    September 12, 2004

    Grouper: Groove meets WASTE...

    Posted by Clay Shirky

    Grouper, a new entrant in the category of small-group P2P apps. (The N^2 problem is only a problem if N is large…) It’s the usual mix of “communications plus file sharing for small groups” that we know from Groove, Bad Blue, and WASTE, but it claims (Windows only, so I can’t test it at home) to have put a lot of effort into ease of use. It advertises thumbnailing of pics, streaming of files from remote users, and no adware or spyware.

    If any M2M users have tried it, we’d love comments or pointers to other posts.

    Comments (4) + TrackBacks (0) | Category: social software

    September 9, 2004

    Felten on Wikipedia

    Posted by Clay Shirky

    Continuing the examination of the value of the Wikipedia, Ed Felten compares Wikipedia and Britannica

    Overall verdict: Wikipedia’s advantage is in having more, longer, and more current entries. If it weren’t for the Microsoft-case entry, Wikipedia would have been the winner hands down. Britannica’s advantage is in having lower variance in the quality of its entries.

    The thing I love about this post is that it turns the original complaint on its head. The Syracusan Critique is that the wikipedia can’t be good because it isn’t authoritative, which has a precursor assumption that Collin Brooke brilliantly glosses as “Authority/trustworthiness/reputation/credibility is something that pre-exists the research.” Falstodt the Syracusan presumes that if you cannot point to a pre-existing authority, the content itself is inherently less valuable (his phrase damning the Wikipedia is “without any credentials”.) Felten, by contrast, assumes that the value of content can be derived, rather than presumed, and begins by comparing individual articles, a process that cuts through trivial dismissals, and starts the real, and hard, work of talking about actual strengths and weaknesses.
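Felten’s verdict is really a claim about two different statistics: average quality versus variance of quality. With invented scores for a handful of matched entries (these numbers are illustrative, not Felten’s data), the shape of his conclusion is easy to reproduce:

```python
from statistics import mean, pvariance

# Invented 0-10 quality scores for five matched entries, just to make
# the verdict concrete: similar averages, very different spread.
wikipedia  = [9, 4, 10, 3, 9]
britannica = [7, 7, 8, 6, 7]

print(mean(wikipedia), mean(britannica))           # 7 7
print(pvariance(wikipedia), pvariance(britannica)) # 8.4 0.4
```

Deriving quality article by article, rather than presuming it from credentials, is exactly what makes a comparison like this possible in the first place.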

    Comments (0) + TrackBacks (0) | Category: social software

    Progressive trust and Intimacy gradients

    Posted by Clay Shirky

    Two interesting posts at Life With Alacrity. First, thoughts on the growth of progressive trust in real human relations, and what it means for technology:

    Computer trust rarely works the way that human trust does. It starts with mathematical proofs—that such and such a mathematical algorithm is extremely difficult, thus something built on it must also be difficult. These are then built on top of each other until a system is created. It often seeks a level of “perfect trust” that is rarely required by human trust.

    One of the reasons why I chose to back the then-nascent SSL (Secure Sockets Layer) standard back in 1992-3 was that I felt that it much better mapped to the progressive trust model, and thus to human trust, than did its competitors.

    At the time, the SET standard was backed by all the major players—Visa, Mastercard, Microsoft, etc. […] But SSL starts out very simple—first it just connects two parties, then it establishes simple confidentiality between them. If one party wants more confidentiality, they can upgrade to a stronger algorithm. Then one party can request a credential from the other, or both can.
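That stepwise upgrade pattern can be caricatured as a tiny state machine: start with the weakest guarantee and move up one level at a time, only when a party asks for more. This is a sketch of the idea of progressive trust, not of the actual SSL/TLS handshake, and the step names are invented:

```python
# Progressive trust as a toy state machine: a connection begins with
# no guarantees and upgrades one level at a time on request, rather
# than demanding "perfect trust" up front.

STEPS = ["connected", "confidential", "stronger-cipher", "peer-credential"]

class Connection:
    def __init__(self):
        self.level = 0  # just connected; no guarantees yet

    def upgrade(self):
        """Move one step up the trust ladder, capped at the top."""
        if self.level < len(STEPS) - 1:
            self.level += 1
        return STEPS[self.level]

conn = Connection()
print(STEPS[conn.level])  # connected
print(conn.upgrade())     # confidential
print(conn.upgrade())     # stronger-cipher
```

The contrast with the SET-style approach is that nothing here requires the full credential exchange before the parties can talk at all.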

    Then a post, Intimacy Gradient, on architectural patterns that may have relevance to the design of social software:

    Refuge and prospect come from the landscape architect Jay Appleton. Prospect is a place where we can see others, and refuge is a place where we can retreat and conceal ourselves. A specific prediction of his theory is that people prefer the edges of a space more than the middle. Often prospect and refuge are in conflict, as a prospect tends to be expansive and bright whereas a refuge is small and dark, but there are cases where they are combined in one place; this is why we value private homes with a spectacular view so much, and why we pay so much to stay at scenic retreats. So what are the edges of our social spaces? Are there ways that we can signal either prospect or refuge?

    Comments (1) + TrackBacks (0) | Category: social software

    Educause on wikis in the academy

    Posted by Clay Shirky

    Good Educause post on wikis in the academy. It includes a general overview of wikis that will be familiar to anyone reading M2M, but also some specific observations about wikis in academic settings:

    Indeed, an instructor could structure and regulate interaction to such an extent that the wiki is effectively transformed into a stripped-down course management system. But doing so risks diluting the special qualities that make wikis worth using in the first place, with the result being, in the words of Heather James, “pumped-up PowerPoint.” James has described the experience of using wikis in her teaching as her “brilliant failure.” She regrets that she “changed the tool, but did not change the practice,” and failed to account for the “great potential in this tool to be completely disruptive (in a good way) to the classroom setting.” With the benefit of hindsight, she concludes that for wikis to fulfill their promise, “the participants need to be in control of the content—you have to give it over fully.” This process involves not just adjusting the technical configuration and delivery; it involves challenging the social norms and practices of the course as well.

    Update: Ross also quotes this piece, in his discussion on anonymity and privacy in wikis.

    Comments (0) + TrackBacks (0) | Category: social software

    Joel On Software on social interfaces

    Posted by Clay Shirky

    Great Joel Spolsky post on getting social interfaces right, with great observations about the social appropriation of small UI effects:

    Usenet clients have this big-R command which is used to reply to a message while quoting the original message with those elegant >’s in the left column. And the early newsreaders were not threaded, so if you wanted to respond to someone’s point coherently, you had to quote them using the big-R feature. This led to a particularly Usenet style of responding to an argument: the line-by-line nitpick. It’s fun for the nitpicker but never worth reading. (By the way, the political bloggers, newcomers to the Internet, have reinvented this technique, thinking they were discovering something fun and new, and called it fisking, for reasons I won’t go into. Don’t worry, it’s not dirty.) Even though human beings had been debating for centuries, a tiny feature of a software product produced a whole new style of debating.

    Comments (0) + TrackBacks (0) | Category: social software

    August 25, 2004

    FolksonomyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Folksonomy, a new term for socially created, typically flat name-spaces of the del.icio.us ilk, coined by Thomas Vander Wal.

    In commentary on Atomiq, Gene Smith, who generally likes the idea, lists some disadvantages of folksonomies:
    On the other hand, I can see a few reasons why a folksonomy would be less than ideal in a lot of cases:
    * None of the current implementations have synonym control (e.g. “selfportrait” and “me” are distinct Flickr tags, as are “mac” and “macintosh” on Del.icio.us).
    * Also, there’s a certain lack of precision involved in using simple one-word tags—like which Lance are we talking about? (Though this is great for discovery, e.g. hot or Edmonton)
    * And, of course, there’s no heirarchy and the content types (bookmarks, photos) are fairly simple.

    A lot of this parallels the discussion around the continuing development and use of del.icio.us. I am in the “Wenn ich Ontology höre … entsichere ich meinen Browning” camp, so I think Smith’s points are not so much absolute disadvantages as choices.

    Synonym control is not as wonderful as is often supposed, because synonyms often aren’t. Even closely related terms like movies, films, flicks, and cinema cannot be trivially collapsed into a single word without loss of meaning, and of social context. (You’d rather have a Drain-O® colonic than spend an evening with people who care about cinema.) So the question of controlled vocabularies has a lot to do with the value gained vs. lost in such a collapse. I am predicting that, as with the earlier arc of knowledge management, the question of meaningful markup is going to move away from canonical and a priori to contextual and a posteriori value.

    Lack of precision is a problem, though a function of user behavior, not the tags themselves. del.icio.us allows both hierarchical tags, of the weapon/lance form, and compounds, as with SocialSoftware. So the issue isn’t one of software but of user behavior. As David pointed out, users are becoming savvier about 2+ word searches, and I expect folksonomies to begin using tags as container categories or compounds with increasing frequency.

    No hierarchy I have a hard time seeing as inherently problematic — hierarchy is good for creating non-overlapping but all-inclusive buckets. In a file-system world-view, both of those are desirable characteristics, but in a web world-view, where objects have handles rather than containment paths, neither characteristic is necessary. Thus multiple tags “skateboarding tricks movie” allow for much of the subtlety but few of the restrictions of hierarchy. If hierarchy were a good way to organize links, Yahoo would be king of the hill and Google an also-ran service.

    There is a loss in folksonomies, of course, but also gain, so the question is one of relative value. Given the surprising feedback loop — community creates folksonomy, which helps the community spot its own concerns, which leads them to invest more in folksonomies — I expect the value of communal categorization to continue to grow.
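The trade-off is easy to see in code. Here is a minimal sketch (my illustration, with invented bookmarks and tags, not anything del.icio.us actually runs) of how intersecting flat tags recovers much of hierarchy’s filtering power without non-overlapping buckets:

```python
# Invented sample data: each URL carries a flat set of tags, with no
# hierarchy and no synonym control -- exactly the folksonomy model.
bookmarks = {
    "http://example.com/kickflip-video": {"skateboarding", "tricks", "movie"},
    "http://example.com/skate-shop":     {"skateboarding", "shopping"},
    "http://example.com/film-review":    {"movie", "review"},
}

def tagged_with(bookmarks, *tags):
    """Return URLs carrying every requested tag (an AND query).

    Multiple tags act like a path through a hierarchy would, but any
    tag can combine with any other -- there is no fixed tree.
    """
    wanted = set(tags)
    return [url for url, ts in bookmarks.items() if wanted <= ts]

print(tagged_with(bookmarks, "skateboarding", "movie"))
# -> ['http://example.com/kickflip-video']
```

Each extra tag narrows the result the way descending a hierarchy would, but without committing anyone to a single canonical tree.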

    Comments (5) + TrackBacks (2) | Category: social software

    August 21, 2004

    Multiply, spam, and economic incentivesEmail This EntryPrint This Article

    Posted by Clay Shirky

    Stowe, reading my earlier Multiply rant, responds that Multiply isn’t spam, and says that we need a statement of purpose for social networks to adhere to.

    I’m more pessimistic than he; I believe that Multiply join messages are spam. Now spam has the “I know it when I see it” problem, so to talk carefully about it requires a specified definition. Here’s mine — spam is unsolicited mail, sent without regard to the particular identity of the recipient, and outside the context of an existing relationship.

    Anyone sending me mail because I am on a list I haven’t asked to be on; without having a reason to think that I, in particular, would want this mail; and without us already knowing one another, is spamming me. In particular, ads sent to me as a member of a category, no matter how targeted, count, in this definition, as spam. You could be advertising a new brand of gin specially brewed for Brooklyn-dwelling Python hackers who like bagpipe music and that mail would still be spam.
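For argument’s sake, the three-part definition reduces to a one-line predicate. This is a toy encoding of my definition above, not any real filter:

```python
def is_spam(solicited, targets_recipient_individually, existing_relationship):
    """Mail is spam under this definition only if it fails all three
    tests: it was unsolicited, AND sent without regard to the particular
    recipient, AND sent outside an existing relationship."""
    return not (solicited or targets_recipient_individually or existing_relationship)

# A Multiply join message as described in the post: unsolicited, sent to
# a category of addresses harvested from a friend's address book, and
# (the contested third test) arguably no prior relationship.
print(is_spam(False, False, False))  # -> True
```

Note that passing any single test clears the mail; the disagreement with Stowe is entirely over which value to assign the third argument.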

    If you adopt this definition, even just for the sake of argument, it’s pretty clear that Multiply fails the first and second tests. I did not ask for mail from them, and they are not sending me mail because they know me — they simply have my address on a list furnished by my friends. (IAQH.*) I think where Stowe and I may disagree is in point #3: do I have an existing relationship with the sender of the mail?

    ...continue reading.

    Comments (8) + TrackBacks (0) | Category: social software

    August 20, 2004

    Multiply and social spam: time for a boycottEmail This EntryPrint This Article

    Posted by Clay Shirky

    I’ll go David’s complaint about Multiply one better. Multiply are spammers, and should be treated as such, as should every other service that uses their tactics.

    Imagine that you were offered pills that promised to expand the length and diameter of your fingers, and you wanted some. (Maybe you’re a concert pianist or something — I actually don’t want to know the sordid details…) Now imagine that you could get the first month’s pills for free, if you just uploaded your entire address book and gave the pill manufacturer permission to make the same offer to everyone listed there, in email sent out under your name. Would you do it? Because that is what you are doing if you use Multiply.

    Here is what is happening: anyone launching a new YASNS has to work much harder to get users than in the old days, because the concept is so well established now. Furthermore, the existence of a social profile elsewhere means nothing to Multiply. Therefore they have every incentive to spam non-users mercilessly, because if they can wear them down until they join, great, and if they never join, who cares?

    But Multiply are not ordinary spammers, since they have the email addresses of your friends, and permission to use them. When Jenna NoLastname writes me saying “Want to meet you!”, my spam filter handles it, but when mail from my friend Schmendrick J. Subramanian shows up, my spam filter lets it through, because I’ve known Schmend since Back In The Day.

    ...continue reading.

    Comments (10) + TrackBacks (0) | Category: social software

    August 17, 2004

    The Great Scam: ReactionsEmail This EntryPrint This Article

    Posted by Clay Shirky

    I pointed to The Great Scam [new cached link] over the weekend, a first-person narrative of a scam perpetrated by Nightfreeze in the online game EVE. (Before I go into what caught my eye about it, I want to rectify an omission in my earlier post. The Great Scam contains derogatory references to women and minorities. I should have put a warning in the original pointer; my apologies.)

    The piece is most interesting not as a story but as an artifact — the author does not set out so much to explain as to describe the events in question, and in doing so, ends up documenting several areas of behavior that may be of interest to M2M readers:

    • The use of out-of-band communications tools
    • Issues of identity and presentation
    • Questions of constitutional legitimacy

    ...continue reading.

    Comments (4) + TrackBacks (0) | Category: social software

    i-Neighbors: Local social capitalEmail This EntryPrint This Article

    Posted by Clay Shirky

    i-Neighbors, a service to generalize local social networks.

    From the About page:

    I-neighbors was created by a team of researchers at the Massachusetts Institute of Technology (MIT). These services were designed to encourage neighborhood participation and to help people form local social ties. We believe that the Internet can help people connect to their local communities and to create neighborhoods that are safer, better informed, more trusting, and better equipped to deal with local issues. I-neighbors helps communities build “neighborhood social capital” by providing a place for neighbors to find each other, share information and work together to solve local problems.

    The proposed transformation here is similar to the change from one-off hosting of mailing lists to Yahoo Groups — instead of a set of one-off services, a neighborhood can use an existing template to start a whole related set of services — networking, photos, local reviews — all at once.

    The core intuition is that ‘neighborhood’ is a concept that maps well to such services. Current services that have strong geographic components include Meetup, UrbanBaby and Craigslist — the first two rely on strong affiliational ties, with geography as a filter, rather than vice-versa, and Craigslist assumes that cities are units, and that people have different ranges for different functions — I’ll travel all the way across town to interview for a job, but not to go to a garage sale. It will be interesting to see what the ‘neighborhood first/ then other filters’ model produces.

    The lessons from UpMyStreet, a similar service in the UK that launched several years ago, are a bit mixed: the UK Postcode system is far more granular than that of the US, allowing them much more refined geo-location, but the place has also become a dumping ground for racist and anti-immigrant feeling. Bob Putnam (he of Bowling Alone) is doing some work on neighborhood social capital, and finds that high social capital correlates strongly with ethnic homogeneity — it will also be interesting to see, if i-Neighbors gets enough use, how that dynamic plays out here.

    UPDATE: danah posted about i-Neighbors as well, with interesting questions about the relations between race and neighborhood.

    Comments (4) + TrackBacks (0) | Category: social software

    XFN RelationshipsEmail This EntryPrint This Article

    Posted by Clay Shirky

    The madness of the age — making human relations explicit — continues in the form of XFN’s relationship profiles. Here, as a sample, is the entire range of possible romantic categories:

    - muse - Someone who brings you inspiration. No inverse.
    - crush - Someone you have a crush on. No inverse.
    - date - Someone you are dating. Symmetric. Not transitive.
    - sweetheart - Someone with whom you are intimate and at least somewhat committed, typically exclusively. Symmetric. Not transitive.

    Muse, crush, date, sweetheart. The whole of romantic or sexual feeling in not just four words but those four?

    The odd thing about efforts like this is not merely the lack of completeness. Attempting to get to completeness would be admitting defeat, since the goal is simplification. The odd thing is that even the few proposals there are are obviously wrong.

    “Date” is not the word for someone you are dating, and everyone knows it except the authors of this list. You can be my date to the prom without it ever being an ongoing thing; meanwhile, someone you are dating is never referred to as your date, but as your boyfriend or girlfriend. Ditto ‘sweetheart’, where the assumption is definite on intimacy and lukewarm on commitment, when usage would indicate the opposite balance. And so on.

    The best part, though, is the rationale:

    There were a whole pile of love, romance, and sexually oriented terms we considered and discarded. Some were rejected on the grounds they were unnecessary—for example, polyamorous individuals can indicate their other partners using values already defined (having two links marked sweetheart or spouse, for example). Others were left out because they did not fit with the desire to keep XFN simple. The current set seems to us to accurately capture a sufficiently detailed range of romantic feelings without becoming overwhelming.

    You really can’t make this stuff up. “We left a bunch of stuff out because when you try to model all the ways people really talk about attraction and intimacy and attachment, it just seems messy.” The thought that maybe the domain they are trying to model is messy seems never to have crossed their minds.

    And:

    A special note is merited for the omission of a term to describe a person to whom one is engaged. The terms “fiancé” and “fianceé” are gender-specific, which was a problem. We also decided that describing engagement should be left out since it is intended as a transitional state of affairs, as a prelude to marriage (and thus the value spouse, which is a less intentionally temporary relationship).

    Earth to XFN: Most romantic relations are temporary. Fiancé is a more formal state of affairs than sweetheart. Despite the fact that it doesn’t fit with your model, it is treated as a real category by actual people. Throwing out real-world behavior because it doesn’t fit your model is supposed to make you question your model, no?

    It’s as if the creation of a list is meant to seem complete, because lists are discrete, and domains to be modelled are also supposed to be discrete…

    Comments (7) + TrackBacks (0) | Category: social software

    August 16, 2004

    Social Origin of Good IdeasEmail This EntryPrint This Article

    Posted by Clay Shirky

    Ronald Burt, who created the ‘structural holes’ network measure (find out where the connections between groups aren’t, and look for value in bridging, roughly), wrote a paper last year on the Social Origin of Good Ideas (PDF):

    A theme in the above work is that information, beliefs and behaviors are more homogenous within than between groups. People focus on activities inside their own group, which creates holes in the flow of information between groups. People with contacts in separate groups broker the flow of information across structural holes. Brokerage is social capital in that brokers have a competitive advantage in creating value with projects that integrate otherwise separate ways of thinking or behaving.

    Much of the paper is focused on structural holes in business settings, arguing that brokers create much of the value we associate with innovation.
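The core intuition can be sketched in a few lines. This is my illustration over an invented two-cluster network, not Burt’s actual constraint measure: a broker is simply a node whose contacts span otherwise separate groups.

```python
# Hypothetical network: two dense clusters (A and B) and one node
# whose ties bridge them -- the structural-hole-spanning broker.
group = {
    "ann": "A", "bob": "A", "cal": "A",
    "dee": "B", "eve": "B",
    "broker": None,  # belongs to neither cluster
}
edges = {("ann", "bob"), ("bob", "cal"), ("ann", "cal"),
         ("dee", "eve"),
         ("broker", "cal"), ("broker", "dee")}

def contacts(node):
    """All neighbors of a node in the undirected edge set."""
    return {b if a == node else a for a, b in edges if node in (a, b)}

def spans_groups(node):
    """True if the node's contacts fall in more than one group."""
    gs = {group[c] for c in contacts(node) if group[c]}
    return len(gs) > 1

brokers = [n for n in group if spans_groups(n)]
print(brokers)  # -> ['broker']
```

Everyone inside a cluster talks only to their own cluster, so information pools there; only the bridging node can move it across the hole, which is where Burt locates the advantage.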

    On a related note, here’s a presentation danah co-wrote on social holes in email. The animation of the visualization, referenced in the presentation, is here

    Comments (0) + TrackBacks (0) | Category: social software

    August 15, 2004

    Must Read: The Great ScamEmail This EntryPrint This Article

    Posted by Clay Shirky

    Terrific account of scamming other players in EVE, a massively multiplayer game set in space. It’s got everything — innocent fun, bitter disillusionment, vows of revenge, close calls, a dastardly plan, a network of mostly invented collaborators, and an ending that makes the whole thing more astonishing still.

    This is one of the great first-person narratives of game participation, and touches on several themes we care about here. I’ll write about it later, but for now, I won’t bother commenting, or even quoting from it. It’s long, but it deserves to be read in full.

    It’s at http://www.pq5.com/Nightfreeze/, but may still be slashdotted, so check http://freecache.org/http://www.pq5.com/Nightfreeze/ as well. If anyone knows of an alternate and more persistent URL, lemme know.

    Comments (4) + TrackBacks (0) | Category: social software

    August 12, 2004

    Duncan Watts on Collective IntelligenceEmail This EntryPrint This Article

    Posted by Clay Shirky

    Great Duncan Watts piece on the dangers of centralized intelligence, his argument being that while centrally controlled organizations can respond well to situations they’ve foreseen, only decentralized but coordinated groups can respond to unexpected catastrophe. (Duncan is the guy who worked out the Small Worlds pattern, providing intellectual backstop to Milgram’s Six Degrees work, so he knows whereof he speaks on the subject of decentralized networks.)

    He covers the Japanese auto industry’s recovery after an earthquake, which he also describes in his Six Degrees book, but adds this more recent example:

    Perhaps the most striking example of informal knowledge helping to solve what would appear to be a purely technical problem occurred in a particular company that lost all its personnel associated with maintaining its data storage systems. The data itself had been preserved in remote backup servers but could not be retrieved because not one person who knew the passwords had survived. The solution to this potentially devastating (and completely unforeseeable) combination of circumstances was astonishing, not because it required any technical wizardry or imposing leadership, but because it did not. To access the database, a group of the remaining employees gathered together, and in what must have been an unbearably wrenching session, recalled everything they knew about their colleagues: the names of their children; where they went on holidays; what foods they liked; even their personal idiosyncrasies. And they managed to guess the passwords.

    Comments (7) + TrackBacks (0) | Category: social software

    YASNS Watch: What up with Multiply?Email This EntryPrint This Article

    Posted by Clay Shirky

    Reminiscent of the Zerodegrees spamming incident, I’m getting spammed by requests to join Multiply. Is this happening to anyone else?

    As for the service itself, so far, it looks like a standard YASNS, with an emphasis on broadcast messages, in the manner of Orkut in the bad old days:

    Multiply can also be used to compose and send messages, similar to e-mail but much more powerful. With e-mail your audience is limited to the specific people in your address book. Multiply messages - or multi-messages, as we call them - can reach the entire network of people you are connected to through mutual friends.

    Channeling danah, note the rhetoric of ‘powerful’ messaging here. Power exists in differentials, and here the power being advertised is clearly the power to force your messages onto people who don’t even know you.

    This is reflected in their list of selling points, which are mostly ego-centric rather than communitarian — ‘broadcast to’ rather than ‘join with’. Their idea of a compelling use case is me reviewing a restaurant, then making sure all my friends and their friends see it (so what if they live in Jakarta — the new Chumley’s on Flatbush just rocks!), or making sure that your weblog has a built-in (did someone say captive?) audience.

    But of the 100+ YASNSes out there, what is making Multiply the new choice of social spammers? Enquiring minds want to know. Meantime, put me on the Social Networking post-mortem bus…

    Comments (0) + TrackBacks (0) | Category: social software

    August 11, 2004

    OT: The browser-as-writing-instrument saga continuesEmail This EntryPrint This Article

    Posted by Clay Shirky

    Sadly, the search for a browser that works as a writing environment continues, mostly unrequited. I have switched to Firefox, which supports Undo and Find in the textarea, used a local .css file to open the textarea window to something you can imagine writing more than 50 words in, and written my own version of a CopyURL+ format, so I can grab title, link, and div-bracketed text selection in one click. All good, so good, in fact, that it’s worth switching away from Safari.

    However, after spending several hours playing with various customizations, Firefox is like a cute puppy that chews your shoes — it does a lot of things I find delightful, and one thing that is so wrong: I still can’t auto-save the textarea.

    (Update: Don’t miss Ray Ozzie’s great post in the comments. Very smart, albeit depressing.)

    ...continue reading.

    Comments (25) + TrackBacks (0) | Category:

    August 9, 2004

    Adam Greenfield: Social networking post-mortemEmail This EntryPrint This Article

    Posted by Clay Shirky

    Adam surveys the state of social networking, and finds that things are Not Good:

    And the invitations! Invitations to hip-hop fashion shows (“now with extra bling!”), invitations to parties six time zones away. Though unintentionally, the YASNS’ messaging functionality offers nothing but an extra spam channel - when the rare meaningful message does get sent, it’s hard to discern from among the noise, at more danger of being batch-deleted as a false negative than anything in my Inbox. (What this indicates to me, incidentally, is something wonderful: that people are so manifold and multiple that the mere fact of friendship with someone is a remarkably poor predictor of affinity for that person’s own friends. At least the people I seem to know. Walt Whitman would be delighted.)

    He sweetly points to us as experts on the YASNS phenomenon, but of course the best stuff here is us pointing to writing like this…

    Comments (0) + TrackBacks (0) | Category: social software

    August 6, 2004

    del.icio.us mind mapEmail This EntryPrint This Article

    Posted by Clay Shirky

    An app to take your del.icio.us tags and turn them into a mindmap:

    You can see the mind map of my recent links here: delicious_mind/ (note: loads Java applet and takes a little while). The online version lets you fold and unfold categories; click on or near the little red arrow to follow the named link; click on the node to fold and unfold it. Using the Freemind application, you can add icons (I’ve tweaked the mind map file with a single icon, called the “neato” icon, on the python category), color and resize nodes and edges, do all kinds of things to organize, link, and accentuate the various items in your mind map.

    It’s not social software yet, as it does tags-by-user but not yet users-by-tag or tags-by-user-group; still, it’s more informative than the for-decoration-only extisp.icio.us, and the group-oriented updates seem both obvious and easy. Slowly, slowly we’re getting the visual tools needed to characterize groups…
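The missing users-by-tag view is just an inversion of the map the app already has. A minimal sketch with invented users and tags:

```python
# Invented data in the shape the mind map already uses: tags-by-user.
tags_by_user = {
    "alice": {"python", "mindmap"},
    "bob":   {"python", "delicious"},
    "carol": {"mindmap"},
}

def invert(tags_by_user):
    """Flip a user -> tags map into a tag -> users map, the view that
    lets a tag act as a handle on a group of people."""
    users_by_tag = {}
    for user, tags in tags_by_user.items():
        for tag in tags:
            users_by_tag.setdefault(tag, set()).add(user)
    return users_by_tag

print(sorted(invert(tags_by_user)["python"]))  # -> ['alice', 'bob']
```

Group the result by overlapping tag sets and you get tags-by-user-group as well, which is where the visualization would start to characterize groups rather than individuals.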

    Comments (1) + TrackBacks (0) | Category: social software

    Hermit Crab Pattern on use.perl.orgEmail This EntryPrint This Article

    Posted by Clay Shirky

    A young woman best described as a livejournal user manqué started using the free journal service over at perl.org to ruminate out loud about her life. On LJ, she would have enjoyed the privacy of the mall, but on perl.org, she stuck out so dramatically that the perl hackers accused her of being a bot. (As a friend of mine said, the goal of any true programmer is to fail the Turing test…)

    Some of the denizens of perl.org suggested she should stay and learn perl, but some kind soul pointed her to LJ, to which she decamped.

    im sorry for the inmconvenience i had no idea what this site was about and merely used it as a journal to write my thoughts down. im deeply sorry if i scared anyone and you odnt have to worry im leaving this site for good and ill never bother you again. once again i did not do this on purpose forgive me. im now on livejournal im sorry! i had no idea i just thought this was a site for writing a journal. sorry

    I call this the hermit crab pattern, where the occupant of a social space is a different kind of creature than the one the space was designed for. I first came across this when there was a group of middle-aged women using the ultra-hip word.com bulletin boards as a kind of online kaffeeklatsch. (Prodigy users manqué.)

    This pattern is at least part of the answer to tech-determinism — the software doesn’t actually program what goes on in it; context and contrast are such strong human forces, they overwhelm the simple technical affordances and limitations. use.perl.org runs slashcode, which also runs slashdot — not only is the perl community quite different from the slashdot community, but our friend the livejournal-user-to-be was different from the perl community as well.

    Comments (2) + TrackBacks (0) | Category: social software

    August 3, 2004

    OT: The brokenated terribility of writing in browsers, reduxEmail This EntryPrint This Article

    Posted by Clay Shirky

    Great tip in the comments — making Safari read a .css file with textarea { width: 400px; height: 500px } in it force-resizes the textarea to something you can imagine writing in. I like both MT and Wordpress better already.

    Lots of other good user recommendations.

    ...continue reading.

    Comments (0) + TrackBacks (0) | Category: social software

    Merholz on Paths at BerkeleyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Great post from Peter Merholz, by way of Ross, on the way pedestrians in the aggregate work out where the paths should be on the grounds of UC Berkeley. This “they built the quad but didn’t lay the sidewalks” story has been an urban legend for years; it’s great to see someone documenting it, and making such an important point about the aggregate intelligence of your users in the process:

    For some reason, Berkeley would rather spend it’s money reinforcing it’s poor landscape architecture with barriers and re-sodding, then recognizing that the paths suggest a valuable will of the people.

    Comments (0) + TrackBacks (0) | Category: social software

    Ads and social mediaEmail This EntryPrint This Article

    Posted by Clay Shirky

    Jason Calacanis discovers that Fark has been selling story placement on their front page and calls them on it, getting a priceless quote from Fark management in the process:

    I don’t think that either Drew or I are willing to engage in a discussion regarding the business ethics of our decision However, if you look at any news source, they are influenced by PR agencies, wine & dine’s and similar events. Take a look at the Graydon Carter as example #1. I challenge you to find a pure editorial voice in news today.

    I assume what the Fark rep means is “Everyone who takes money to publish sells their independence”, itself an arguable assertion, but even if that were true, you’d think anyone publishing on the Web would have noticed the arrival of, oh, 4 or 5 million non-commercial sites in the last few years, no?

    Worse, they talk a good game about the inevitability of money-for-links, but they sure never bothered to let their readers in on it.

    On a related note (cluelessness among people trying to manipulate social media being our grand theme here), I got mail from someone asking if I would be interested in talking to company X, who makes a revolutionary firewall and dessert topping? Normally I delete stupid PR mail like that the minute I see it, but I’d gotten mail from this source a couple of times recently, so I hit reply to ask to be taken off his list, and his email address was PayPerClipPR.com.

    I really couldn’t believe it — here was a PR person writing to ask me to do a story on his company (which I have never done in ten years of writing about the net), and his email address announced that his compensation was directly tied to the number of clips he could get run on behalf of his client. The unstable stance of asking a favor while broadcasting contempt left me a little disoriented.

    If three is the official number to declare a trend, I say Fark, PayPerClipPR.com and Change This (bonus meme accelerator: Friendster sells character placement from the movie Anchorman) mean that the Game Social Media for Extractive Value pattern, already bad with comment spam, is going to get much worse.

    Comments (13) + TrackBacks (0) | Category:

    August 2, 2004

    Mimi Ito on Mobile devices and presenceEmail This EntryPrint This Article

    Posted by Clay Shirky

    Mimi Ito wrote an interesting introduction to the ways mobile devices change urban gatherings, including two themes especially near and dear to my heart. First, the ways coordination replaces planning:

    Mobile phones have revolutionized the experience of arranging meetings in urban space. In the past, landmarks and pre-arranged times were the points that coordinated action and convergence in urban space. People would decide on a particular place and time to meet, and converge at that time and place. I recall hours spent at landmarks such as Hachiko Square in Shibuya or Roppongi crossing, making occasional forays to a payphone to check for messages at home or at a friend’s home. Now teens and twenty-somethings generally do not set a fixed time and place for a meeting. Rather, they initially agree on a general time and place (Shibuya, Saturday late afternoon), and exchange approximately 5 to 15 messages that progressively narrow in on a precise time and place, two or more points eventually converging in a coordinated dance through the urban jungle. As the meeting time nears, contact via messaging and voice becomes more concentrated, eventually culminating in face-to-face contact.

    and then the way that mediated and unmediated conversations can now take place among a group at the same time:

    In other cases, mobile messages are used to contact a recipient just out of visual range or unavailable for voice contact. Messaging during class or lectures gets around the limitations on private voice contact. “Hey, look. The teacher buttoned his shirt wrong.” “This class sucks.” Another example from one of our informants was when she was standing in a long line for a bus and saw her friend near the front of the line. She sent her a message to look behind her so that she could see her and wave. In other cases, students have described how they will message their friends upon entering a large lecture hall to ask where they are sitting.

    Comments (2) + TrackBacks (0) | Category: social software

    August 1, 2004

    Networked performance blogEmail This EntryPrint This Article

    Posted by Clay Shirky

    There’s a new weblog on the subject of networked performance, covering among other things, the Big Games pattern of Uncle Roy and Pacmanhattan.

    I love this pattern of work, of course, but can’t wholeheartedly recommend this weblog as an addition to your RSS feed, because of the high risk of pretentious claptrap. (One current post notes that the performers “intervene” in an online game, without even trying to describe what that intervention might be.) The art world has a love-hate relationship with game designers, because the game people have a far greater social reach, but infuriate the art world by caring more about whether things are fun than whether they are illustrative of theory — expect to see that played out here.

    Comments (3) + TrackBacks (0) | Category: social software

    Off-topic rant: Why are browsers such terrible writing instruments?Email This EntryPrint This Article

    Posted by Clay Shirky

    Off-topic Lazyweb-ish rant: Why, with the rise of the writeable web, are browsers still stuck in this “Go to a page, get what you want, go to the next page” mentality? Tabbed browsing means I have 3-4 windows and between 15 and 20 tabs open, with some individual tabs open for days at a time. Partly as cause and partly as effect of tabs, the amount of writing and annotating I do in browser-mediated environments — weblogs, wikis, bulletin boards, even tagging del.icio.us links — already high, is rising still further.

    So why, when my browser crashes or I re-start, do I not just get every URL that was open at the time of the crash? Why, when I accidently reload or close a window with a form in it, do I lose the content of the form? And why can’t I undo edits in the form field? Can it really be 2004 and there is an app with an installed base in the hundreds of millions that doesn’t support Undo?

    ...continue reading.

    Comments (15) + TrackBacks (0) | Category: social software

    Breaking up by Powerpoint, and tangents

    Posted by Clay Shirky

    Joey deVilla is speculating that breaking up by email, usually on the part of the guy (is that redundant? I wonder if anyone has even collected the stats?) is the result of exposure to office-mediated communications, and especially Powerpoint.

    He goes on to detail the next obvious move:

    As a counter-note, though, some months ago there was a piece on breaking up by SMS, suggesting that the rise of mediated breakups in relationships may be an effect of the rise of mediated communication in relationships generally.

    (Tangent: After years of relationships conducted in part through emotionally laden email, chat, and even coded posts in public fora, when I began dating my wife, I made a conscious decision to put nothing in mail more fraught than places and times to meet. The fact that we are now married and have two kids means that this method is sure-fire, based on a sample size of 1.)

    (Tangent 2: Was talking to Matt Jones, who sends his fiancée messages through his choice of del.icio.us links, as she knows she’s subscribed to him. More proof, as if any were needed, that the human condition affects everything it touches.)

    (Tangent 3: The Group Hug site for online confessions specifically tries to filter out posts that attempt to set up user-to-user communication, showing how hard people will try to respond to one another, even in a site designed solely for one-way, one-time use. Curious that they call it Group Hug, since they are intentionally disabling any mediated analog to a real group hug…)

    Comments (0) + TrackBacks (0) | Category: social software

    July 29, 2004

    Feed Me Links: Social bookmark manager

    Posted by Clay Shirky

    I owe John Manoogian of FeedMeLinks an apology — an earlier reference to a walkthrough of FeedMeLinks suggested that some features were not yet implemented. I was wrong — I’ve been playing with it for a couple of days, and everything works.

    Interestingly, of the 4 common organizing tools for social link managers (tag or category, most recent first, by user, popularity), FML seems to have the greatest emphasis on characterizing users, going so far as to provide user icons. Can the FML dating service be far behind?

    Comments (1) + TrackBacks (0) | Category: social software

    WhyWikiWorksNot: 2004 Dance Re-mix

    Posted by Clay Shirky

    Two thoughtful pieces on failures to implement wikis in the field:

    First, Connected, distributed work

    80% of my time goes into coordination - communicating with people. The only tools that aid in communication are e-mail, instant messaging and phone. We made an effort to introduce all involved to the concept of Wiki and use it wherever possible to reduce the time and effort spent in writing/forwarding e-mails and communicating the same idea to a million people in a million ways (ok I’m exaggerating here). However all efforts went in vain…

    Then Wikis in classrooms and Aiming for communal constructivism in a wiki environment

    I guess I’m making a criticism of instructionist classroom methods where they stifle or limit student-to-student interaction. I do think that lectures have their place but for certain subject matter, a lecture would not be suitable. Each week, I prepared the material, each week I contrived some kind of in-class activity to let people ‘interact’. But as I mentioned before, I was merely creating fill-in-the-blanks exercises… I realize now, that to get to the level of which I was aiming, in terms of communal constructivism, you need to let the participants identify their own blanks

    These posts interest me because they are rooted in practice, not theory, and address the sense of surprise and resistance users often feel when exposed to wikis.

    Comments (12) + TrackBacks (0) | Category: social software

    Jimmy Wales on the Wikipedia

    Posted by Clay Shirky

    Good slashdot interview with Jimmy Wales, founder of Wikipedia, covering some of the challenges of running a large distributed social project.

    And the beginning of this question…

    What methods have you found that work best for getting people not only involved in contributing, but also keeping them contributing to the Wiki?

    Jimmy Wales:
    Love. It isn’t very popular in technical circles to say a lot of mushy stuff about love, but frankly it’s a very very important part of what holds our project together.

    brought tears to my eyes. It’s great to see someone go into the belly of the ‘tech is all’ beast and tell the truth about emotional forces…

    Comments (0) + TrackBacks (0) | Category: social software

    July 23, 2004

    Change This

    Posted by Clay Shirky

    The Change This Manifesto has been floating around for a few days:

    In the Internet (and especially blogging), we see the glimpse of an alternative. Taken over time, many of the best blogs create a thoughtful, useful argument that actually teaches readers something.

    Alas, blogging is falling into the same trap as many other forms of media. The short form that works so well online attracts more readers than the long form. Worse, most blogs stake out an emotional position and then preach to the converted, as opposed to challenging people to think in a new way.

    So we’re launching ChangeThis. The bet?

    We’re betting that a significant portion of the population wants to hear thoughtful, rational, constructive arguments about important issues. […]

    ChangeThis doesn’t publish e-books or manuscripts or manuals. Instead, we facilitate the spread of thoughtful arguments…arguments we call manifestos. A manifesto is a five-, ten- or twenty-page PDF file that makes a case. It outlines in careful, thoughtful language why you might want to think about an issue differently.

    It’s obvious this will fail. Why it will fail, however, is instructive.

    Change This is one of the last stands for an idea of the Old Left — media = force. This belief, present since Marx and Engels put state control of media on the Communist Manifesto’s To Do list, says that media is a strong locus of control over the individual. In this view, when you alter media, you alter the public’s worldview, as they are both pliable and mute.

    This idea was attractive, because it took note of the supply-side control of media in the era when everything went mass. It was so attractive in fact, that even when the internet started to erode that supply-side control, most of the O.L. denied that this was happening, lumping social communication like mailing lists and weblogs together with traditional broadcast media, because to admit the alternate possibility — that people could now produce as well as consume, and this would not necessarily lead to a groundswell of support for the left — was too terrifying to contemplate.

    (This is the source, incidentally, of much of the anguish by the O.L. over the war-bloggers. Populist expression is not supposed to be conservative.)

    Look at the charge Change This lays at the feet of weblogging — people like to read short things they agree with more than long things they disagree with. True enough, of course, but Change This assumes that the audience a weblog has is somehow god-given, and that a weblogger’s choice of subject is de-coupled from their audience. This is the key assumption of ‘media = force’ — you can manipulate your audience as you like.

    In fact, the opposite is the case — if the most popular weblogs are trafficking in cant, that’s because of the readers, not the writers, since it is the readers who decide which weblogs are popular.

    And notice what they don’t mention? Comments and trackbacks. They regard a weblog as a publication, and a post as a stand-alone piece, rather than regarding interlinked weblogs as an ecosystem of argument. And why do they ignore the central fact of weblogging as argument? Because admitting that posts are not pieces and that readers are also writers would upset their view of the problem as “We publish, you distribute.”

    Change This doesn’t like weblogs because they don’t want any backtalk; their main goal is to restore the orderly progression of outbound ideas from producer to consumer. Every aspect of their Manifesto, from the choice of the word manifesto on down, screams contempt for the reader, whose principal job is to act as a super-distribution network.

    And then there’s the odd reference to producing PDFs. In the middle of announcing their plans to rescue intellectual discourse, they suddenly point to a specific document format; it’s like listing the brand of knife the chef uses on a menu. What do PDFs have to do with Change This’s larger goals?

    And the answer, of course, is ‘Everything.’ PDF is the ultimate no-backtalk format. It is designed for the page, not the screen; it can’t be annotated, has no provision for comments, and can’t host trackbacks — in short, it is almost useless as a venue for the very conversations Change This says it wants to stir up.

    If their ideas were any good, they’d put them out where people can talk about them. To do so, though, would open up the criticism they say they encourage but actually fear. They want the old days back, where one could publish a magazine of serious discourse without having to deal with the possibility that the audience might have something serious to say in reply. Alas, those days are gone, and Change This’s attempt to re-create the muteness of anti-social media is little more than a nostalgia trip.

    Comments (26) + TrackBacks (0) | Category: social software

    Mobile social software list

    Posted by Clay Shirky

    While we’re posting these lists, here’s a list of mobile social software applications from elastic space.

    Comments (0) + TrackBacks (0) | Category: social software

    List of mobile urban games

    Posted by Clay Shirky

    A list of mobile urban games, of the PacManhattan/Uncle Roy variety.

    Comments (0) + TrackBacks (0) | Category:

    July 22, 2004

    The New Musical Functionality

    Posted by Clay Shirky

    Tom Coates has the first of what looks like a fantastic series of posts on the new musical functionality, an extended musing on the distribution of production, reproduction, and filtering of music, covering especially the newly social context.

    Over the next few days I’m going to write about some of the core trends that I’m seeing in people’s use of digital music, attempting to extrapolate from some current behaviours that we’re all observing around us - concentrating on how people wish to interact and use their music. I’m not going to spend too much time on the way some people may wish to legislate against these desires or build around them - because I believe for the most part that any attempt to do so will inevitably fail. Competing models that more adequately fulfil those needs will rise to take over in their place. […] I’ll be talking about four major areas that seem to me to be indicative of the unevenly-distributed musical functionality of the future - (1) portability and access, (2) navigation, (3) self-presentation and social uses of music and (4) data use and privacy.

    Among the social apps that I think relate to his thesis but which he doesn’t (yet) mention are:

    * songBuddy
    * MusicPlasma
    * MusicMobs
    * Webjay

    And, as an added flavor bonus, here’s a City of Sound post I’ve been meaning to blog on socialising listening habits, tied mostly to the features of audioscrobbler, which Coates also regards as essential.

    Comments (1) + TrackBacks (0) | Category: social software

    A Social Networking Bill of Rights

    Posted by Clay Shirky

    Over at the Planetwork Journal, Duncan Work has proposed a social networking bill of rights, elaborating on these 5 principles:
    1. The right to know who is collecting what and for what purposes;
    2. The right to not participate;
    3. The right to clear and, in some cases, irrevocable privacy policies;
    4. The right to control access to personal information and attention;
    5. The right to participate in a global social networking system without restrictive barriers.
    It’s wrapped up in something that’s a bit too much of an ad for LinkedIn for my taste, but it’s an interesting start. #3, especially, will be interesting to see in practice, since the courts have usually allowed a wide degree of freedom for companies to unilaterally change their bargain with users, especially for businesses in bankruptcy, which triggers freedom from all manner of contractual obligations. Would be fun to write the contract that is designed to survive that sort of change of control for the data.

    Comments (6) + TrackBacks (0) | Category:

    Group sponsorship

    Posted by Clay Shirky

    Ben Hyde is thinking about an anonymous reputation system, analogous to the proposed K5 user-sponsorship model, where users could get sponsored by groups, in a ‘letter of introduction’ kind of way, so as to be able to operate anonymously (though I think he means pseudonymously) but with some visible reputation:

    Is it possible to have useful actor reputation systems without demanding that the actors give up their privacy? This is a key design problem. It appears that the answer is yes. Consider as an example. Let’s say I have an excellent reputation in some community. I request that community write me a letter of introduction to the anonymous community. This letter says nothing more than the bearer of this letter is a good guy. I take the note to the anonymous community and they provide me with an reputation/identity that I can use to on anonymous actions [sic]. Recipients of those actions can then check that anonymous reputation. If I act badly in that persona then they place bad marks on the anonymous reputation; but it these do not go back to my original reputation - there is no back pointer. The only back pointer available is the link to the original community. I have damaged the reputation of my home community, and only that.
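    Hyde’s letter-of-introduction flow can be made concrete with a toy model. The class names and token mechanics below are my own invention — a minimal sketch assuming single-use letters and a simple bad-mark counter, not anything Hyde actually specifies:

    ```python
    import secrets

    class HomeCommunity:
        """A community that can vouch for a member without naming them."""
        def __init__(self, name):
            self.name = name
            self.reputation = 0   # community-level standing
            self.issued = set()   # letters we have signed

        def write_letter(self):
            # The letter says only "the bearer is in good standing here";
            # it carries no member identity at all.
            token = secrets.token_hex(8)
            self.issued.add(token)
            return token

    class AnonymousCommunity:
        """Accepts letters and mints fresh pseudonyms. The only back
        pointer is to the issuing community, never to the individual."""
        def __init__(self):
            self.personas = {}    # pseudonym -> {"home": community, "marks": 0}

        def admit(self, letter, home):
            if letter not in home.issued:
                raise ValueError("unknown letter of introduction")
            home.issued.remove(letter)   # single use
            pseudonym = secrets.token_hex(8)
            self.personas[pseudonym] = {"home": home, "marks": 0}
            return pseudonym

        def report_bad_act(self, pseudonym):
            persona = self.personas[pseudonym]
            persona["marks"] += 1
            # Damage flows back to the vouching community, not the person.
            persona["home"].reputation -= 1

        def standing(self, pseudonym):
            return self.personas[pseudonym]["marks"]
    ```

    The point the sketch makes visible is Hyde’s last sentence: bad behavior under the pseudonym costs the home community’s reputation, which gives that community an incentive to vouch carefully, without ever exposing the member behind the persona.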

    Comments (2) + TrackBacks (0) | Category: social software

    July 21, 2004

    More on the monkeymind

    Posted by Clay Shirky

    About half of any group of people, when they see someone yawning, will begin yawning themselves. Now it turns out chimps, our closest primate relatives, do the same thing.

    The really freaky thing is the other social characteristics that correlate with susceptibility to yawning:

    In research on people, those subjects that perform contagious yawning also recognise images of their own faces and are better at inferring what other people are thinking from their faces. What is more, brain imaging studies have shown that people watching others yawning have more activity in parts of the brain associated with self-information processing. “Our data suggest that contagious yawning is a by-product of the ability to conceive of yourself and to use your experience to make inferences about comparable experiences and mental states in others,”

    An earlier study, from last year, also shows that monkeys can recognize unfairness:

    Knowing when you have been ripped off is not solely a human skill, biologists have discovered. Monkeys can spot a raw deal when they see one, and if they are not treated fairly they throw a tantrum. The finding confirms the idea that cooperative behaviour, which relies on the participants’ having a sense of fair play, appeared early in our evolutionary history.

    This matters because when you are designing software to engage groups, you are triggering primal (which is to say emotional rather than intellectual) behaviors. An engaged group of users is unlikely, almost by definition, to behave rationally, because when we are in group settings, we are guided in part by the monkeymind.

    Comments (5) + TrackBacks (0) | Category: social software

    July 20, 2004

    Communication != content, gzipped version

    Posted by Clay Shirky

    At lunch with a TV exec today, I think I finally came up with a way to explain the communication != content formulation to media people: Online communications aren’t user-generated content for the same reason that phone calls aren’t user-generated radio.

    Comments (5) + TrackBacks (0) | Category:

    Curious evenness of comments at Mefi

    Posted by Clay Shirky

    Haughey has an odd graph of cumulative comments over time up on Metafilter, showing that after the first 100K comments, they have arrived at a rate of almost exactly 100K every six months. Haughey notes the oddity:

    During that time we had 9/11 happen, tons of new users, and then over a year and a half of no new users, yet the # of comments stays steady. […] That’s kind of freaky, maybe we’re hitting up against an information overload limit that no amount of new users or events can influence? Any ideas what could be holding us so steady over such changing times?

    Comments (1) + TrackBacks (0) | Category: social software

    July 19, 2004

    Best writing on the ethics of collaboration?

    Posted by Clay Shirky

    Just had a student ask me a simple but flummoxing question: Is there a good book or essay on the ethics of collaboration? Particularly of collaborative groups with no formal leader?

    There’s lots of prescriptive writing out there, and most of it is garbage of the “First, make everything explicit…” variety, and there’s plenty of work on describing individual problems — I already pointed her to Tyranny of Structurelessness and A Dozen Things I Think I Know About Working In Groups.

    What she’s asking for though (and what I now want as well) is a good overview of all of the most common dilemmas of such groups — tension between individuals, obstinacy in consensus-driven work, slacking group members, etc. — from a descriptive rather than prescriptive point of view.

    Does such a piece of writing exist?

    Comments (3) + TrackBacks (0) | Category:

    George Michael's message boards

    Posted by Clay Shirky

    The much-blogged choice by George Michael to shut down the message boards on his site:

    As many of you will know, much of my reasoning for the future is to stay away from the negativity of the media. I think that it is bad for me and for music in general, so I find it really sad to see the forums so packed full of negative comment, and that so many genuinely positive fans find themselves defending me… constantly against attack. How pointless. […] Those of you that want to carry on the media’s work will have to do it somewhere else. […] Sorry guys, but that’s the way it goes… Peace and Love… or nothing at all.

    Let us stipulate, as the lawyers say, that Michael is an idiot. Giving people tools for uncensored communications and then expecting them to engage in “Peace and Love… or nothing at all” puts him in Radio 4 territory. (I also love the implication that people posting negative posts are doing “the media’s work”.) But this is an old story — the real interest, I think, lies elsewhere.

    The open question, I think, is this: when does a board turn nasty like this? Cory Doctorow made the observation that the comments at boingboing were extremely positive, even when critical, during the early days of the site, but later, as the site grew, they turned nasty and vitriolic. My hypothesis is that two effects are at work here:

    1. The community/audience threshold is critical — when a site is large enough that it reaches an ‘audience’ (which is to say a group of users too large to be communal) it loses communal self-regulation and becomes an attractive nuisance for people who want to use comments as a collateral way of reaching the same audience.

    2. Fame makes people angry. Fame is an imbalance of attention — more people want your time than you have time to give. In practice, this means that interactions with famous people almost always involve you getting dissed. Intellectually, you know this is situational and beyond remedy, but emotionally, it still feels bad. (The closest most people get to feeling famous is at their wedding reception. You gather a room full of people you could talk to for hours, then talk to most of them for just a few minutes each before running on to the next conversation.)

    Here’s how I think those two forces interact: The satisfactions of addressing a community vs. addressing an audience are different. In a community, speech is often used to form, cement or re-affirm social bonds, whereas in addressing an audience, you work for maximum effect. When you get people angry, possibly subconsciously angry, about fame, and they are given a forum on a famous person’s site, acting out is one sure way to maximize attention.

    So if I’m right about the effects and their interaction, what I want to know is: are there clear thresholds where these effects start to manifest themselves, or does it differ in every situation, beyond order-of-magnitude estimates?

    Comments (9) + TrackBacks (0) | Category: social software

    Reputation and Society

    Posted by Clay Shirky

    A good First Monday piece by Hassan Masum and Yi–Cheng Zhang called Manifesto for the Reputation Society, which avoids most of the “I know! Let’s call reputation a number, then work with the numbers!” problems common to such work.

    eBay did us a huge disservice by making reputation look simple. eBay hosts millions of one-time, numerically expressible, single-variable transactions (“How much money for how many Beanie Babies?”), among distributed actors in non-iterated communication. This makes it a game theorist’s wet dream, but a bad proxy for reputation systems generally. Masum and Zhang recognize this, and examine many other reputation systems as well — Slashdot, Amazon, even Google’s PageRank algorithm — making this the best “Start reading here” paper for reputation I’ve found.

    You may mentally assign a friend a bad reputation for being on time or returning borrowed items promptly, while still thinking them reliable for helping out in case of real need. No person can be reduced to a single measure of “quality.”

    So people will have different reputations for different contexts. But even for the same context, people will often have different reputations as assessed by different judges. None of us is omniscient — we all bring our various weaknesses, tastes, bias, and lack of insight to bear when rating each other. And people and organizations often have hidden agendas, leading to consciously distorted opinions.

    Reputations are rarely formed in isolation — we influence each others’ opinions. Studying the structure of social connectivity promises to reveal insights about how we interact, and thinking about simple quantities like the average number of sources consulted before an opinion is formed will help us to better filter these opinions.

    Are reputations only for people? No, their scope is far wider:

    - They can be for groups of people: companies, media sources, non–governmental organizations, fraternities, political movements.

    - They are often used for inanimate objects: books, movies, music, academic papers, consumer products. Typically, whenever we talk about the “quality” of an object with some degree of subjectivity, we can also speak of its reputation, usually as assessed by multiple users — bestseller lists are a simple example.

    - Finally, ideas can have reputations. Belief systems, theories, political ideas, and policy proposals are the bedrock of public discussion. The waxing and waning of idea–reputations directly affects their likelihood of implementation, and thus the environment that we all share.
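    The quoted argument — that reputation varies by context and by judge, and can never be collapsed into a single number — is really a claim about data structures. Here is a minimal sketch of what it implies; the class and method names are hypothetical, and the per-context average is my own simplifying assumption, not anything the paper prescribes:

    ```python
    from collections import defaultdict
    from statistics import mean

    class ReputationBook:
        """Multi-context, multi-judge reputation store. Ratings are kept
        per (subject, context, judge) and never merged across contexts."""
        def __init__(self):
            # subject -> context -> judge -> score
            self.ratings = defaultdict(lambda: defaultdict(dict))

        def rate(self, judge, subject, context, score):
            # Each judge keeps their own opinion; a new rating from the
            # same judge in the same context replaces the old one.
            self.ratings[subject][context][judge] = score

        def reputation(self, subject, context):
            """Aggregate one context's ratings only; a bad score for
            punctuality says nothing about reliability in real need."""
            scores = self.ratings[subject][context].values()
            return mean(scores) if scores else None
    ```

    Even this toy version makes the eBay contrast above concrete: eBay’s model is the degenerate case with exactly one context and an anonymous crowd of judges, which is why it tells you so little about reputation in general.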

    It’s curious that they called it a manifesto, since it’s long on description and short on prescription, but it’s better for not being one.

    They also point to Masum’s earlier First Monday piece on a distributed reputation layer called TOOL (which, unlike the Manifesto, suffers from some of the same problems as RELATIONSHIP markup, I think). They also point to the Reputations Research Network, and to last year’s MIT/NSF Symposium on Reputation Systems as places to find other work in the field.

    Comments (3) + TrackBacks (0) | Category: social software

    List of social software labs

    Posted by Clay Shirky

    Danyel Fisher has posted his list of businesses with social software labs. (He doesn’t say so in the post, but if there are places he’s missed, I’m guessing you could add them in the comments to get the list updated.)

    Comments (0) + TrackBacks (0) | Category:

    July 18, 2004

    Brazilian vs. USAian Throwdown on Orkut

    Posted by Clay Shirky

    What is it about Brazil that makes them such avid users of social software? A year ago we covered the Brazilian connection during the Fotolog controversy; now it’s Portuguese vs. English on Orkut (with the English speakers, I might add, looking like jerks.)

    Says Reuters:

    Tammy Soldaat, a Canadian, got a sample of Brazilian wrath recently when she posted a message asking whether her community site on body piercing should be exclusive to people who speak English.

    Brazilian Orkut users quickly labeled her a “nazi” and “xenophobe.”

    “After that I understood why everyone is complaining about these people, why they’re being called the ‘plague of Orkut,”’ she said in a site called “Crazy Brazilian Invasion.” […]

    “When the average Orkut user goes to look at community listings to see what’s out there, he’ll see a list populated with pretty much all Portuguese communities,” Gibbs said. “This is highly frustrating since Orkut is not a Brazilian service.”

    “Orkut is not a Brazilian service.” It’s hard to know where to begin — the assumption that because English has been the historically dominant language, it should remain the dominant language by fiat, is simply foul.

    (And, on an interesting note about the panic of the majority, the assertion that Orkut is “pretty much all” Brazilian communities has a parallel in the study of sexism — men will report that a given room is half women when the actual proportion of women crosses one-third.)

    Comments (4) + TrackBacks (0) | Category: social software

    July 14, 2004

    Caputa on Wallflowerz

    Posted by Clay Shirky

    Peter Caputa also posts about Wallflowerz, a dating site that pays people to be active on the service.

    The unique thing about the site is that it pays you for being active. And people pay to use it on a per use basis.  Instead of a monthly fee to have messaging capability (eg match.com), you have to buy credits.  But, when people message you or when you suggest matches to people, you receive credits that you can withdraw for cold hard cash.

    I stand agape. Once you’ve set up a market where I pay you for contact, we are conceptually close to a pay-for-(tiny)sex scenario.

    I think linking the intrinsic desire to use a dating site with the extrinsic goal of making money is so distorting that it will kill the business, but I hope it catches on in the short term, because the system-gaming that will go on during the death throes will be fascinating to watch.

    Comments (0) + TrackBacks (0) | Category: social software

    Community != content

    Posted by Clay Shirky

    Peter Caputa, guest-blogging at socialsoftware.weblogsinc.com says “Blogging is the Ultimate Social Software.” So far so good, but he makes that statement based on this assertion — “I think it is safe to say that sharing information is at the center of social networking.”

    This I disagree with. Peter is right about blogs as a social networking tool (Dina Mehta and Lilia Efimova make the same argument) but the thing that makes it work isn’t information sharing. The thing that is at the center of social networking is social networking.

    This is related to yesterday’s theme of panic about Kuro5hin’s proposed sponsorship system — despite 30+ years of evidence that human contact online has irreducibly sophisticated features, there is persistent anxiety driving people to want to express contact in terms of some other, simpler and more tangible thing.

    I think this is partly because we’ve all internalized Shannon, where all communication is to be expressed as information, and it’s partly because media is supposed to be explained as a conduit for content. (All together now, communication is not “content”.)

    By way of example, here, in full, is utterlybemused2’s blog entry for 7 July:

    whoaaa i just came back from swimming at rachels and my hands are bright red, and my finger tips hurt…. im also dead tired and my eyes hurt too

    I probably don’t even have to mention that the site is Livejournal…

    If you reduce this to “sharing information”, this blog entry makes almost no sense. Who cares that you just came back from swimming at Rachel’s! (Who’s Rachel anyway?) But of course, no one reading this is reading it to see if utterlybemused2 has any information to share, they’re reading it to tune in to ub2’s life — the post only makes sense in a social context, and the effect of reading it can’t be reduced to an analysis of its content.

    Blogs are a fantastic social networking tool, and they are a fantastic publishing tool, but those are different and incommensurable patterns.

    Comments (4) + TrackBacks (0) | Category: social software

    July 13, 2004

    Social link management

    Posted by Clay Shirky

    I’m fascinated with the way that a bunch of old ideas floating around from the dot com era are back, and now succeeding. Many of these apps are explicitly social, and are benefiting from the larger user population and increased comfort — it took quite a while for Match.com to catch on, and sixdegrees had much of the Friendster model down by 1996 and flamed out anyway.

    One really interesting category of these v 2.0 apps is shared bookmarking, a la the service Backflip from Back in the Day. So, with a minimum of editorializing, here is a list of places doing some form of shared link management, which are providing some of Tom Coates’ “user-friendly throw-aroundable clumps of groupness.”

  • del.icio.us (Subscribe to users or to user-created tags)
  • Bookmarkmanager (Host your own; http://freshmeat.net/projects/bookmarkmanager/)
  • Dude, Check this out (BEST. URL. EVAR.)
  • Spurl (Sitewide hot list; saves page contents as well as links)
  • Feed Me Links! (Pretty UI, but several features broken)
  • Furl (del.icio.us knock-off; caches pages)
  • Gibeo (shared remote site annotation; more like 3rd Voice)
  • Linkfilter (Moderated)
  • Simpy (Find people like you through their links)
  • Stumble Upon (Cross-platform toolbar; explicit user rating [added 7/23])

    Add more in the comments if you know of any, and I’ll amend the list here.

    My personal recommendation is del.icio.us. If I had to sum up the Web’s effects on the world, I’d say “surprised by simplicity.” Unlike most other technologies, we’re witnessing a shift to simpler apps over time, as with the way million-dollar CMS systems and collaboration via Lotus Notes have given way to weblogs and wikis. del.icio.us hits that same pattern — not a single wasted feature, it just works the way the Web does.

    And my anti-recommendation is Amplify. Using it, I had a horrible flashback to the bad old days of Backflip, where the idea was that the user would store their links on Backflip, which would then make it almost impossible for the user to get at those links in aggregate, to store a copy locally, or to get to their links should Backflip be down.

    Amplify is that same terrible idea — your links are stored as “Amps,” and everything you click is an uninformative Amp redirect, so even if you get to a page with a link on it, you can’t copy the URL without also visiting the link, and then, when you do visit an “Amp” (always mistrust people who try to re-brand key parts of the Web) it’s in a frame, so that you can’t easily share it without also sending the recipient through Amplify.

    And, as the glistening maraschino cherry on the towering sundae of badness, the categories are pre-fab rather than user created, and there are even 14 of them, the Yahoo-official number of top level categories.

    I suppose the flipside of the “everything old is new again” pattern is that the old bad ideas get a re-play as well as the old but good ones. I can’t imagine why anyone would hand their links over to Amplify — the info-to-eye-candy ratio on the pages is at PowerPoint levels, and the “we’ll capture the users’ eyeballs and hold them hostage” link model, already broken in the mid-90s, has now been superseded by things like del.icio.us and Bookmarkmanager. Grrrr.

    Comments (16) + TrackBacks (0) | Category: social software

    Two on moderation: Yay Hooray and Kuro5hinEmail This EntryPrint This Article

    Posted by Clay Shirky

    Yay Hooray is one of many community sites designed to give the community the power to self-moderate. After following the traditional slashdot progression of increasing individual power over moderation, they’ve now added ‘filter content by buddy list’ as a feature.

    Started back in 2001, YH was built by the skinnyCorp team as an experiment in online community. Originally, YH was built to manage itself through a level system that allowed users to earn administration responsibilities. Then it evolved into a point system. With JBA [the new release], rather than administering the actual content of the website, the aim is to allow the filtering of content through an advanced buddy filtering system.

    It’s all about the membranes.

    Yay Hooray includes 4 layers of social filter — no filter (show all); FOAF (two degrees of separation); Posts by friends; Posts by me. Another sign that work on YASNSes is moving from standalone (Friendster, Orkut) to embedded (dodgeball, Flickr).

    (Interesting to note that after the “Let’s give everyone their network to 5 or 6 degrees!” era, services are largely settling on friend-of-a-friend as the default setting, and often the outer bound.)
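A buddy-list filter like Yay Hooray’s can be thought of as a breadth-first search over the friend graph, cut off at the chosen degree of separation. Here is a minimal sketch of that idea; the graph, posts, and function names are hypothetical illustrations, not Yay Hooray’s actual code:

```python
from collections import deque

def within_degrees(graph, start, max_depth):
    """Return everyone within max_depth hops of start (BFS over the friend graph)."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        user = queue.popleft()
        if seen[user] == max_depth:
            continue  # don't expand past the outer bound
        for friend in graph.get(user, ()):
            if friend not in seen:
                seen[friend] = seen[user] + 1
                queue.append(friend)
    return set(seen)

def filter_posts(posts, graph, me, level):
    """level 0: just me; 1: friends; 2: friend-of-a-friend; None: no filter."""
    if level is None:
        return posts
    visible = within_degrees(graph, me, level)
    return [p for p in posts if p["author"] in visible]

graph = {"me": ["ann"], "ann": ["bob"], "bob": ["carol"]}
posts = [{"author": a, "text": "..."} for a in ["me", "ann", "bob", "carol"]]
print(len(filter_posts(posts, graph, "me", 2)))  # me, ann, bob -> 3
```

Note how the FOAF setting (level 2) naturally becomes the outer bound: raising the depth past two adds rapidly more strangers than friends.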

    On a related note, there is an illustrative post on Kuro5hin from some weeks ago, written by Ta bu shi da yu with the title Why sponsored users won’t work, about the proposed sponsorship model on Kuro5hin.

    The piece is particularly noteworthy for its hysterical tone:
    Rusty has already told us that he “can’t stress enough the point that if someone you sponsor does something to get themselves kicked out, you get kicked out too”. Excuse me? In other words, you go to the effort of sponsoring someone, they act up and get kicked off and you get kicked out too? […] Placing the responsiblity of policing someone else’s behaviour is not only stupid and foolhardy, on K5 it’s actually impossible. Unless you are an editor, you can’t delete an account, remove stories or comments, nullify user accounts or in fact do anything that effectively disciplines a sponsored user. If sponsored users can’t be disciplined, then existing users who dare to sponsor a newbie will run the risk of being kicked from K5 for something they didn’t do!

    The disbelief, bordering on moral panic, is palpable. Rusty explained a simple policy — new users will have sponsors — and then Ta bu shi da yu repeats this policy, twice, as if its mere re-statement would make it seem unfeasible.

    I love this post, because it articulates what I think of as the sub-rosa assumptions around earlier forms of community tools:

    - Systems should only use technological, not social, tools
    - A user is responsible only for his or her own behavior
    - Any policy to be enforced must be expressible algorithmically — no judgment calls
    - Users must have access to pseudonymous communications

    The central thesis of the post — that sponsorship can’t work, for these reasons — is suspect at the very least, as sponsorship systems work well elsewhere. Furthermore, humans use both social influence and judgment calls to affect one another’s behavior, and have done so for some time now. But what seems to exercise Ta bu shi da yu is the idea that Kuro5hin will build social infrastructure into the network, and therefore introduce social mistakes.

    (Interestingly, Kuro5hin is still in the “No new users” mode, so the test case for this version of sponsorship is still waiting to be effected, if Rusty decides to go for it.)

    Comments (1) + TrackBacks (0) | Category: social software

    July 7, 2004

    Public Mind: Generic critical massEmail This EntryPrint This Article

    Posted by Clay Shirky

    Public Mind is trying to make a general-purpose site for creating critical mass, supporting a number of different patterns — product feedback (there’s a whole category on Skype), commercial petitions (“A better belt clip for my Ericsson T68-i cellular phone”), and novel product ideas (“A child’s cellular phone that just has two buttons, talk and hang-up”.)

    When you see a proposal you support, you can click through to a page that tells you what’s going on (the “Skype for Mac” page has news of the recent beta tests), but most pages have some version of this message:

    Currently, this special request group is not yet big enough to attract the attention of a company or organization. However, you can help your request group grow to speed up and improve your chances that someone will seize the opportunity and propose a solution through Public Mind. To help this group reach critical mass (get big enough), you need to take action now. Email your friends, associates, and co-workers about Public Mind and your special request. The more people who join your group, the more likely you’ll get what you want.

    Of course, they don’t tell you how big critical mass is for any given idea.

    I go back and forth on these things — critical mass is obviously a useful thing in lots of situations, and on the plus side, they’re very up-front about no spam and opt-out, and the site is more organic than a purely “Sign our poll” thing.

    However, this is so explicit about getting “critical mass” as a first-order goal that it makes me doubt anyone in management will take it seriously. Part of the reason critical mass matters so much is that it’s hard to achieve, and therefore a good sign of real interest or concern. Lowering the barriers to people saying “Sure, I want my kid to have a phone like that”, even if they don’t really care and wouldn’t buy one if it was on offer, denatures the thing that made the message important in the first place.

    Comments (2) + TrackBacks (0) | Category: social software

    extisp.icio.us: mapping user tagsEmail This EntryPrint This Article

    Posted by Clay Shirky

    Behold extisp.icio.us, a 2D display mapping of del.icio.us tags per user, with font size and position indicating relative importance (here is a display mapping of Seb’s tags.)

    Though del.icio.us is social software, extisp.icio.us isn’t yet. #1 on my request list is to see concatenated users — http://kevan.org/extispicious.cgi?name=sebpaquet+cshirky. #2 is to see the inverse mapping — select a tag and see the users arranged in the same manner — http://kevan.org/extispicious.cgi?tag=socialsoftware. (And #3 is a RESTian interface: http://kevan.org/extispicious/name/sebpaquet)
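The “font size indicating relative importance” display is the classic tag-cloud weighting: scale each tag’s size by how often the user has applied it. A minimal sketch of that mapping, with hypothetical tag counts and point sizes (not extisp.icio.us’s actual algorithm):

```python
def font_sizes(tag_counts, min_pt=10, max_pt=36):
    """Scale each tag's font size linearly between min_pt and max_pt by its count."""
    lo, hi = min(tag_counts.values()), max(tag_counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {tag: round(min_pt + (count - lo) / span * (max_pt - min_pt))
            for tag, count in tag_counts.items()}

print(font_sizes({"socialsoftware": 40, "python": 10, "misc": 1}))
# the most-used tag renders at 36pt, the least-used at 10pt
```

The inverse mapping requested above (pick a tag, arrange the users) is the same computation run over a tag-to-user-counts table instead of a user-to-tag-counts one.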

    Comments (3) + TrackBacks (0) | Category: social software

    July 6, 2004

    Two on the Monkey-MindEmail This EntryPrint This Article

    Posted by Clay Shirky

    Ah, the monkey-mind, that primal and social part of our brains that evolved long before the human species emerged. Carl Zimmer has an interesting post, Machiavellian Monkeys, suggesting that neocortex size of primates increases with the propensity for social deception.
    While deception isn’t just an opportunistic result of being in big groups, big groups may well be the ultimate source of deception (and by extension big brains). That’s the hypothesis of Robin Dunbar of Liverpool, as he detailed last fall in the Annual Review of Anthropology. Deception and other sorts of social intelligence can give a primate a reproductive edge in many different ways. It can trick its way to getting more food, for example; a female chimp can ward off an infanticidal male from her kids with the help of alliances. Certain factors make this social intelligence more demanding. If primates live under threat of a lot of predators, for example, they may get huddled up into big groups. Bigger groups mean more individuals to keep track of, which means more demands on the brain. Which, in turn, may lead to a bigger brain.
    And, more rant than research, is David Wong’s the Law of Monkey, covering what he calls the Monkeysphere, that small group of people we actually care about.
    That’s the whole thing, right here. Life on Earth, in a nutshell. We are hard-wired to have a drastic double standard for the people inside and out of our Monkeysphere and those outside make up 99.999% of the world’s population.

    Have you ever gotten pissed off in traffic? Like, really pissed off? I think we all have. We’ve thrown finger gestures and wedged our heads out of the window and screamed “LEARN TO FUCKING DRIVE, FUCKER!!” We’ve all pulled the gun out of the glove compartment and let a few fly at the offending car. Not firing at their head or anything. Just, you know, at their tires.

    Now imagine yourself standing in an elevator with three other people, two friends and a coworker. A friend goes to hit a button and accidentally punches the wrong one. Would you lean over, your mouth two inches from her ear, and scream “LEARN TO OPERATE THE FUCKING ELEVATOR BUTTONS, SHITCAMEL!!”

    They’d think you’d gone insane. We all go a little insane, though, when we get in a group larger than the Monkeysphere. You know the feeling, that invincibility of being an anonymous head in a crowd, screaming curses at a football player you’d never dare say to his face.

    Like all rants, it both over- and mis-states the case in places (I’ve always disliked the rant as a form) but it’s interesting to see that ideas about social congress of just the sort Zimmer covers have permeated this ‘explains it all for you’ style of writing.

    Comments (2) + TrackBacks (0) | Category: social software

    July 1, 2004

    The NY Times on distributed alibisEmail This EntryPrint This Article

    Posted by Clay Shirky

    Times article on groups that offer to provide alibis for one another, using SMS to coordinate, and usually using the phone to create the alibi:
    There is nothing new about making excuses or telling fibs. But the lure of alibi networks, their members say, lies partly with the anonymity of the Internet, which lets people find collaborators who disappear as quickly as they appeared. Engaging a freelance deceiver is also less risky than dragging a friend into a ruse. Cellphone-based alibi clubs, which have sprung up in the United States, Europe and Asia, allow people to send out mass text messages to thousands of potential collaborators asking for help. When a willing helper responds, the sender and the helper devise a lie, and the helper then calls the victim with the excuse — not unlike having a friend forge a doctor’s note for a teacher in the pre-digital age.
    As danah and David have both pointed out, we require a certain degree of flex in our social arrangements, and when technology gets too efficient at making sure we could have called or written, new social structures get invented to take the edge off.

    Comments (2) + TrackBacks (0) | Category:

    Weblogs in the classroom and social spaceEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting post on the use of weblogs to teach writing, noting that blogs upend the usual assumptions about writing as a private activity.

    As a consequence, many writing assignments include opportunities for deep, personal reflective writing that is not possible within the public eye. But what is the tradeoff for that kind of writing opportunity for students? Isn’t it possible that the paradoxical situation of creating a risk-free space in which to enable risk-taking has led compositionists to forget a primary purpose of privacy, which is to provide a comfortable writing space, comfort which can also come from community?

    (Quote pulled from this page.)

    Comments (0) + TrackBacks (1) | Category: social software

    June 1, 2004

    The backchannel and conference designEmail This EntryPrint This Article

    Posted by Clay Shirky

    The use of attendee backchannels at conferences, a favorite theme here, is part of a larger trend towards ad hoc organization, or even ad hoc creation of value.

    You can see the context backchannels are happening in by looking at the Users create the schedule process for this weekend’s 2004 Planetwork conference.

    Anyone can propose a topic, anyone can create a login to rate a topic, and the half-hour speaking slots are given to the top ranked topics. To get such a slot, a talk needs to be both highly and broadly rated. (In subsequent passes at this method of selection, organizers will have to work against gaming-the-system options, of course, but the current style is fine for now.)
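The “highly and broadly rated” requirement can be made concrete with a simple scoring rule: a proposal only ranks by its average rating once enough distinct raters have weighed in. This is a hypothetical sketch of such a rule, not Planetwork’s actual selection code, and the threshold of 5 raters is an arbitrary stand-in:

```python
def score(ratings, min_raters=5):
    """Rank by mean rating, but only once a proposal is broadly rated."""
    if len(ratings) < min_raters:
        return 0.0  # highly rated by too few people doesn't count
    return sum(ratings) / len(ratings)

proposals = {
    "Ad hoc groups": [5, 5, 4, 5, 4, 5],
    "Niche topic":   [5, 5],            # highly but not broadly rated
    "Broad but meh": [3, 3, 2, 3, 3, 3],
}
ranked = sorted(proposals, key=lambda p: score(proposals[p]), reverse=True)
print(ranked[0])  # Ad hoc groups
```

Any rule of this shape is gameable (stuff the rater pool, split ratings across sock puppets), which is why the post notes that later passes will have to work against gaming-the-system options.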

    Interestingly, the Planetwork folks have handed out the first 3 half-hour slots, and are going to do 3 more on June 2nd, and 3 more on June 3rd, meaning that the conference emerges over time. It also might let voters optimize the slots over time, as they see unaddressed topics and vote related proposals up.

    I say might, because it’s not clear how coordinated the voting can get in this framework. One class of risk in this system is ‘slashdot risk’, named after the reflexive stance on slashdot in favor of Linux, which makes even well-meaning criticism of that OS much less popular than the most vapid pro-Linux boosterism. Groups have a hard time selecting topics or speakers who violate their cherished assumptions, so the interface could, in certain groups, amplify existing prejudices.

    The emergence of new classes of risk, however, is inevitable (as with ‘clique risk’ that happens in backchannels) because the weakness of the current conference form is so great that new ways of handing power to the users, however beset with problems, will be preferred by the users themselves.

    The social dilemmas of a conference are many, but most of them can be grouped under one heading: social loss. At a large, topic-specific conference, there are several obvious forms of loss:

    • the conference schedule not matching the interests of the attendees (which Planetwork is trying to address)
    • speakers and panelists not being asked to address hard questions (“So, tell me Bob, just how good is your proprietary product?”)
    • members of the audience having more knowledge than the speakers (as with Alan Kay being lectured to about object-oriented programming)
    • members of the audience preferring to speak with one another, in groups, alongside or instead of listening to the speaker (In-room chat as a social tool.)

    Conference organizers will object that these new styles of arranging and participating in conferences will do more harm than good, and in many cases that will be true, but it won’t matter, because the real change here is not that technology is allowing new forms of participation, but rather that it is allowing new forms of creation — a conference has heretofore been an artifact, crafted by a small group for a large group, and as usual, the small group has found many ways to justify its existence (and I say this as a veteran of conference planning.)

    The ace in the hole, though, was capability — the small group model is required because the coordination cost for the involvement of a large group is simply too high. Whatever arguments there might be for involving attendees directly run aground on the difficulties of actually doing anything about it.

    Until now. Because of its plasticity, because of the tech-savvy nature of the road warrior clan who make up the core of its attendees, and because the “money for value” equation is quite direct, the conference form is an early warning of the pressures other social forms, better but not perfectly insulated, are going to undergo as social software continues to blow back through existing institutions.

    Comments (2) + TrackBacks (0) | Category: social software

    May 28, 2004

    Nomic WorldEmail This EntryPrint This Article

    Posted by Clay Shirky

    A transcript of a talk I gave called Nomic World, at the fantastic State of Play conference last fall. It concerns the possible use of MMOs as experiments in letting the players own and operate the environment, thus modeling the conditions of political freedom. (The title comes from Peter Suber's great game Nomic, in which changing the rules during the game is a legitimate move.)
    Now what would it be like if we set out to design a game environment like that? Instead of just waiting for the players to argue for property rights or democratic involvement, what would it be like to design an environment where they owned their online environment directly, where we took the "Code is Law" equation at face value, and gave the users a constitution that included the ability to both own and alter the environment? There's a curious tension here between political representation and games. The essence of political representation is that the rules are subject to oversight and alteration by the very people expected to abide by them, while games are fun in part because the rule set is fixed. Even in games with highly idiosyncratic adjustments to the rules, as with Monopoly say, the particular rules are fixed in advance of playing. One possible approach to this problem is to make changing the rules fun, to make it part of the game.

    Comments (0) + TrackBacks (0) | Category: social software

    May 27, 2004

    "How to make friends by telephone"Email This EntryPrint This Article

    Posted by Clay Shirky

    Amazing mid-last-century document explaining how to use the telephone. Some of it is technical -- transferring calls, holding the receiver -- but a lot of it is, well, tele-quette, like why the receiving party should answer first, and why the calling party should end the call. Very TCP-ish, in a social way...

    Comments (1) + TrackBacks (0) | Category: social software

    May 26, 2004

    Weblogs and authorityEmail This EntryPrint This Article

    Posted by Clay Shirky

    Cameron has a fantastic post on his ICA paper, 'Weblogs and Authority', in which he differentiates weblogs pointed to in blogrolls and those pointed to as links. (As an aside, I've always thought of the difference between blogrolling someone vs. linking to them in a post as the difference between shouting out to someone on the cover of a rap album vs. actually sampling them.) His most important finding is how radically the lists differ in both who's on them, and, for blogs on both lists, how the rank order differs. Metafilter and boingboing trade places -- on the blogroll list, MeFi is #1 and bB #3, but on the permalink list, they are #3 and #1 respectively. Scripting.com and rebeccablood.com both appear on the blogroll list (#6 and #16, respectively) but neither appears in the Top 20 of rank-by-permalink. Dan Gillmor's column and Jeff Jarvis's blog both appear in the permalink list (#6 and #18 respectively) but neither is on the Top 20 blogroll list. There's more than one power law -- the shape remains but the population changes radically, depending on the ranking characteristics.
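The mechanics of the finding are easy to reproduce: rank the same population by two different inbound-link counts and the orderings diverge. A sketch with made-up counts (the real figures are in Cameron's paper, not here):

```python
def rank(counts):
    """Rank items by descending count; returns item -> rank (1 = most linked)."""
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {item: i + 1 for i, item in enumerate(ordered)}

# Hypothetical link counts for the same three blogs under two metrics
blogroll  = {"metafilter": 900,  "scripting": 700,  "boingboing": 650}
permalink = {"boingboing": 5000, "scripting": 3000, "metafilter": 1200}

print(rank(blogroll)["metafilter"], rank(permalink)["metafilter"])  # 1 3
```

Each metric still produces a power-law-shaped curve; what changes between them is which blogs occupy which positions on it.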

    Comments (5) + TrackBacks (0) | Category: social software

    What is social capital?Email This EntryPrint This Article

    Posted by Clay Shirky

    Interesting post bringing together proposed definitions of the hard-to-define 'social capital'. Almost as interesting is that the list is culled from a Google search using their 'define:[word]' syntax.

    Comments (0) + TrackBacks (0) | Category: social software

    May 18, 2004

    Technical document from iRoomEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting document on some of the technical details behind Stanford's iRoom, part of the larger iWork project. The iRoom is a room designed for highly mediated collaboration among real-world users. The description of the iRoom reads, in part
    Emphasize co-location. There is a long history of research on computer supported cooperative work for distributed access (teleconferencing support). To complement this work, we chose to explore new kinds of support for team meetings in single spaces, taking advantage of the shared physical space for orientation and interaction. Reliance on social conventions. Many projects have attempted to make an interactive workspace “smart” (usually called an intelligent environment). Rather than have the room react to users, we have chosen to focus on providing the affordances necessary for a group to adjust the environment as they proceed with their task. In other words, we have set our semantic Rubicon such that users and social conventions take responsibility for actions, and the system infrastructure is responsible for providing a fluid means to execute those actions.
    Now they've published a paper on the Event Heap, an attempt at making a coordination framework for all the different software users run in the iRoom. The paper is about solving social coordination problems among software by giving every piece of software access to a shared "space" where all the software can see the messages being passed around and acted on by the other software. One app written in this way is a display tool that coordinates presentation in the physical environment of the iRoom:
    While traditional presentation programs coordinate the display of slides across time, SmartPresenter coordinates the display of information across both time and display surfaces. For example, a presentation might specify that at time-step 4, slide 17 from a Power Point presentation be shown on the left touch screen, a 3-d model be displayed on the high-resolution front screen, and web pages be displayed on the other two touch screens.
    As an analogy, the Event Heap is, for the software accessing it, like a project room with whiteboards on all the walls -- no matter what you're working on in your little corner, you can read whatever anyone else has written on any of the other surfaces. Much of it might not be of much use to you, but it's there if you need it. The gory details are, well, gory -- it uses IBM's TSpaces project, an implementation of Gelernter's tuplespaces idea in Java, and all like that -- but the basic message is fascinating: as we start working with the blowback of our mediated social interactions moving into real world interaction, the borders between our tools are going to have to be made semi-permeable as well, so they can function as socially as we can.
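The whiteboard analogy maps directly onto the tuplespace primitives: anyone can post a tuple, and anyone can read or take tuples matching a template. This is a minimal illustrative sketch of that pattern, not the Event Heap's or TSpaces' actual API:

```python
class EventHeap:
    """A minimal tuplespace-style shared board (sketch, not the iRoom API).

    None in a template is a wildcard that matches any field.
    """
    def __init__(self):
        self.tuples = []

    def post(self, *fields):
        """Write a tuple onto the shared board for anyone to see."""
        self.tuples.append(fields)

    def _matches(self, tup, template):
        return len(tup) == len(template) and all(
            want is None or want == have for want, have in zip(template, tup))

    def read(self, *template):
        """Return all matching tuples without removing them (read the whiteboard)."""
        return [t for t in self.tuples if self._matches(t, template)]

    def take(self, *template):
        """Remove and return the first matching tuple, or None (claim a message)."""
        for t in self.tuples:
            if self._matches(t, template):
                self.tuples.remove(t)
                return t
        return None

heap = EventHeap()
heap.post("display", "left-screen", "slide-17")
heap.post("display", "front-screen", "3d-model")
print(len(heap.read("display", None, None)))  # 2
```

A SmartPresenter-style app would post one "display" tuple per surface per time-step, and each screen's driver would take the tuples addressed to it, ignoring the rest.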

    Comments (0) + TrackBacks (0) | Category: social software

    MT 3.0 AddendumEmail This EntryPrint This Article

    Posted by Clay Shirky

    My posting speed is always slow, which prevents me from commenting on the events of the day, as I usually don’t know what I think until they become the events of last week. I am therefore the Last Blogger On Earth® to comment on the MT 3.0 pricing debacle.

    I have only two things to add to Liz’s excellent observations:

    First, most of the analyses have focused on the users, as if MT were a word processor whose main value was to individuals. Seen in this light, the users complaining about the changes are behaving childishly.

    However, that’s what users always do in this situation — the reaction is baked in. The problem is not with these particular users, it would be with any group of users in a similar situation. Weblogging tools are community enablers, and when you create community, you engage people’s emotions. Period. Community membership precedes rationality, both historically (all higher primates are social) and literally (children attach to their families before they can talk.)

    The dilemma for people who build communal tools is this: if you want something that hooks people emotionally, you cannot have rational users, and vice-versa. And when you build a tool that helps create a social fabric, changes to the tool trigger social anxieties. Always. (See the Fuck Fotolog thread from last year.)

    The second, narrower point is to the suggestion that since MT 2.x still works and is still free, nothing has changed. This is nonsense, for two reasons: First, MT is not merely a piece of code, it is a ticket into a community. I still use an ancient version of emacs, because it's personal software, not social software, and what other people do or don't do with emacs doesn't affect me. MT does not have those characteristics — what other people are using matters, and splitting the 2.x and 3.x trees creates two classes of users.

    And this is the other reason the “2.x is still free” argument is nonsense: if other people are better off, you are worse off.

    This one is hard to understand, because classical economics denies that it is true. Classical economics, however, is wrong: if your neighbor wins the lottery, you are worse off.

    There are all sorts of arguments for why this isn’t true, or shouldn’t be true, but none of those arguments matter. We have a set of emotions like jealousy and envy that are decisively negative and triggered by other people enjoying things we don’t have access to.

    This matters for the creators of social software because one of the standard “Launch now, make money later” plans is to add Gold Membership, with enhanced services. This should be a winning strategy — the old users are no worse off, but the new users pay premium prices for premium services. The problem with this, in a social context, is that it creates a class system where some people are visibly better off than others. Classical economics tells us this is not a problem, but the users seem not to have gotten that message.

    This is not to say that MT shouldn’t charge for their product — we use it here, and I’m assuming we’ll upgrade when the time comes. It is to say, though, that because MT has succeeded in creating social value, you cannot expect users to react rationally to change. If you want users to really care about a piece of social software, they will invest in it emotionally. If you change the bargain they think they are operating under, even if that bargain is merely implicit and obviously unsupportable and even if you have the absolute and unilateral right to change it, they will freak out.

    This reaction is part of the social weather, and like the real weather, complaining about it is both immensely satisfying and basically useless.

    Comments (10) + TrackBacks (0) | Category: social software

    May 13, 2004

    Databases built for loveEmail This EntryPrint This Article

    Posted by Clay Shirky

    Building up a store of previously decentralized information used to be so expensive that only big organizations could undertake the process, and then only for important things -- phone books, driver's license records. And now it's everywhere.

    The NY Times today has a story on Mark Thomas, who has built, with distributed help, a global database of the locations and phone numbers of pay phones. It started as a quirky labor of love, but has since been used to find runaways, pedophiles, and stalkers, all of whom were relying on the unfindability of a payphone. Thomas built our first working server-push script (now _that_ dates me) for me back when I was PM of AGENCY.COM, Back in the Day, and started his pay-phone project shortly thereafter, and this is where it's ended up -- a single individual, linking tens of thousands of phone numbers to addresses all over the world _in his spare time_.

    And the Psy.geo.conflux is running a distributed camera-phone street game in NYC where participants send SMS challenges to one another to photograph some abstract thing ("Take a picture of something that tastes good next to something that tastes bad"), another project which would have been impossible two years ago and will be normal two years from now. The network doesn't just give individuals the power to distribute what was previously concentrated, but also to concentrate what was previously distributed...

    Comments (1) + TrackBacks (0) | Category: social software

    Revolution vs. societyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Lucas, current MVP holder in the Comments section here, comments on Moblogging from the Front and the New Reformation, saying:
    I have recently had an opportunity to rethink my position on this issue. Only a few weeks ago I would have agreed with Clay. But I now think that unmediation, and indeed the entire concept of personal empowerment via consumption — and even production — of information via the internet needs to be revised. Why this sudden change of face? Well, first of all there is a hidden (and quite naive and probably dangerous) assumption to the argument that more information — even the right information at the right time — leads to more informed decision making and thus empowerment. [more]

    Let me unhide that assumption, by saying that I am a sometime-student of decision making literature (currently reading Sources of Power: How People Make Decisions, which is absolutely fascinating), and I would never suggest that more information necessarily leads to better decisions.

    In fact, one of the things that makes an expert expert is knowing what information to ignore, so a rising tide of information is almost certainly going to lead to bad decisions in just the way that desktop publishing tools led to party invites with nine different fonts. It will take a long time before we know how to ignore the bulk of this new information we’re getting.

    There’s a larger point to make, though, about historical change: a change is revolutionary if the likelihood of it happening has nothing to do with whether it’s good or not.

    It’s easy to point out the ways in which the network is bad — everyone from Robert Putnam to Naomi Wolf to George Packer to that The Internet is Shit guy has described (often correctly, it must be stipulated) the ways in which more access to more media makes things worse.

    Doesn’t matter. Does not matter. There is never going to be a moment where we as a society ask ourselves “Do we want this? Do we want the changes that the new tsunami of production and access and spread of information are going to bring about?”

    As an illustration, one of my clients is a big library, so I spend a lot of time around librarians, and I have heard speech after speech where librarians tell one another how vital libraries are even in the age of Google.

    These speeches are in a way rehearsals for the Big Moment, when society comes into their office and asks “Dear Librarians, tell us: should we keep on the seductively easy Path of Google, or should we come here and learn The Way of The Card Catalog?” And the librarians will tell society, in impassioned but carefully reasoned and ultimately convincing terms, why libraries are still vital institutions, and why getting your information without the help of Trained Professionals® is a bad bad idea.

    And the one possibility these librarians who rehearse this argument in their heads seem not to have considered is the obvious one, extrapolated from the present: this moment where they get to make their case will never come. One at a time, people will shift from one mode of thought to another, and eventually younger users won’t realize that there ever were two modes — you just google for the stuff you want. How else would you do it?

    The librarians can point out (again correctly, let it be said) the ways in which this is inferior to the present system, but they will never get to make that speech, since no one will ever ask them to, any more than anyone asked the linotype operators to point out the ways in which desktop publishing was inferior to typesetting (which, in the beginning, it was, in every aspect except convenience).

    The comparison with the Protestant Reformation was not to suggest that we are entering a bright new future — for a hundred years after it started, the Protestant Reformation broke more things than it fixed. It was to suggest that even though we can describe, correctly, the ways in which the loss of mediation will be bad for many of society’s core institutions, it’s happening anyway, and our telling ourselves it shouldn’t won’t change much.

    Comments (2) + TrackBacks (0) | Category: social software

    May 11, 2004

    Moblogging from the front and the new Reformation

    Posted by Clay Shirky

    James Hong of HotorNot fame launched YAFRO as a Friendster clone (the acronym is for Yet Another Friendster Rip-off.) Since then, they’ve turned it into a moblog, and Hong has recently posted a list of US soldiers posting pictures to YAFRO from Iraq. Images straight from the front, with Dan Rather nowhere in sight…

    Jacques Barzun, author of the marvelous history of modernity From Dawn to Decadence (1500 - present), makes the point that the Catholic Church as a pan-European political force was done in by the Protestant Reformation, itself fueled by the printing press. Once the Church lost the ability to control the direct perception of scripture, thanks to the printing of (relatively) cheap Bibles in languages other than Latin, its loss of political hegemony followed.

    This is what we are seeing now relative to the military’s control of information. A year or so ago, someone in the DoD told me that the thing that would most affect the prosecution of the war in Iraq would be images of DAB’s — Dead American Bodies. The unplanned spread of photos of coffins, and now of torture victims, means that control of this part of the war is outside the military’s hands.

    The spread of images from Iraq, from the relatively plain ones like most of what’s on the YAFRO blogs to the horrifying images of torture and abuse from the Abu Ghraib prison, is all part of the removal of bottlenecks that will change the political structure in ways we can’t predict.

    And it isn’t just military affairs, it’s politics and business and everything else, from attempts to coordinate evidence of Apple’s manufacturing errors (previously handled case-by-case, but now becoming a kind of grass-roots class action protest, to Apple’s horror) to the distributed amicus brief on the SCO case conducted by the Linux community to the recent right of Americans to get their medical records on request and within 30 days to the publication of spoilers for popular TV shows. (Read this last link now — it’s from the Times and goes away in 5 days, and although on the surface it’s about TV, it’s really a musing on life in a fully disclosed culture.)

    I remember hearing about the security efforts being put into place around the delivery of Ken Starr’s Whitewater (Lewinsky) report, and thought “Why are they bothering? It will be on the web in 48 hours…” I was wrong, of course — it was on the web the next day. Now I hear that military officials are debating whether to release other photos with evidence of American torture of Iraqis, and I wonder again why they are bothering. If the images exist, they will be released. It’s a fantasy to assume that they can re-assert control of the spread of images by fiat.

    A parallel and a counter-parallel jump to mind. The parallel is Barzun’s point that during the initial furor of the Protestant Reformation, neither the Church nor Luther and his peers wanted a schism — on the contrary, all of them constantly maintained that what they wanted was to preserve the Church. It’s just that the Lutherans wanted to preserve the Church while reforming the relationship between the institution and the laity, while the Church itself was willing to talk about all sorts of reforms except institutional privilege.

    At a guess, filtered versus unfiltered information, in many settings and particularly around control of audio and visuals as opposed to words, is going to precipitate the same sort of conflict. (The music industry is a canary in that particular coal mine.)

    The counter-parallel is from The Hunchback of Notre Dame, where Dom Claude holds up a newly cheap and accessible bible, points to his beloved Cathedral, and says “This will kill that.” The word was more powerful than the image.

    Now we are in a mirror world, where the newly free production and distribution of images is the novelty. Hearing about DABs or torture victims is nothing like seeing them — I had to rip the cover off the Economist this week because my wife can’t stand to see the image of the man on the box with the electrodes in his hands.

    New tools for spreading of the word are powerful, of course — witness the weblog explosion in all its complexity. But the spread of images is a different kind of thing, not least because images pass across linguistic borders like a lava flow. Now that production and distribution of images are in the hands of the laity, it’s a safe bet that we are entering a world of “That will kill this.” We just don’t know what parts of society “this” refers to yet.

    Comments (23) + TrackBacks (0) | Category: social software

    May 6, 2004

    unmediated: more for your RSS reader...

    Posted by Clay Shirky

    unmediated: Tracking the tools that decentralize the media. Good group blog on alternate media production and distribution, including communal techniques.

    Comments (0) + TrackBacks (0) | Category: social software

    Matt Webb and a practical guide to social software

    Posted by Clay Shirky

    Matt Webb has posted notes for a practical primer on social software. The essay is in part literature review, which is useful, but the best part is that it takes Stewart Butterfield's 7 Habits of Highly Effective Social Software (Identity, Presence, Relationships, Conversations, Groups, Reputation, Sharing) and uses them as lenses to critique a particular piece of social software (AIM) as a guide to thinking through the issues generally:
    *Identity* | Your identity is shown by a screenname, which remains persistent through time. There are incentives not to change this, like having your list of friends stored on the server and only accessible through your screenname. This acts as a pressure to not change identity. Having a persistent identity is more important than having one brought in from the physical world. *Presence* | Presence is awareness of sharing the same space, and this is implemented as seeing when your friends are online, or busy. AIM isn't particularly good at group presence and visibility of communication, although other chat systems (such as IRC and early Talkers) use the concept of "rooms" and whispers. [...]
    It looks like Matt is really digging into practical advice, and has started a list of links to be included in his primer.

    Comments (0) + TrackBacks (0) | Category: social software

    SWORD: Small world phone directory from BT

    Posted by Clay Shirky

    BT is working on a phone directory that calculates "small world" networks. (That link, alas, is to a PR piece -- lots of words, but the tech content, she is not so much.) The system, being tested internally at BT, is designed so that when you want to look up the number of a Paul Kim or Matt Jones, it presents you the possible numbers sorted by social proximity, not geographic location. "You are likely to want the Matt Jones connected to you through Ben, Alice and Tom, not the Matt Jones who is your boss's secretary's dentist's cousin." There is one interesting bit of speculation in the piece -- while the original design was to improve disambiguation in large search spaces by adding social gradients, the project could also represent an alternate way of discovering unlisted numbers for mobile addresses:
    "SWORD could populate a database by utilising people's personal address books, stored from their mobile phones," Paul Toms added. For example, if you want to contact a friend of a friend, whose number you do not have, it could be ascertained via a link to a mutual acquaintance. Paul added: "It's a question of getting the timing right, and while obvious security issues would need to take precedent, the potential for SWORD to become a useful means of finding mobile numbers is a very interesting prospect."
    It's a switch from a list approach to phone directories -- you're on or off -- to a social map -- only show my number to friends of friends. (And within 6 months of launch, there would be people working to get the world's biggest mobile phone list, and then selling access to it...)
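    The BT piece doesn't say how SWORD actually ranks candidates, but the idea is easy to sketch: treat everyone's address book as edges in a graph, then sort the name matches by hop count from the caller. A minimal sketch in Python (the graph, names, and numbers below are all invented, not BT's data model):

```python
from collections import deque

def social_distance(graph, source, target):
    """BFS hop count from source to target over address-book links;
    returns None if target is unreachable."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if person == target:
            return seen[person]
        for contact in graph.get(person, ()):
            if contact not in seen:
                seen[contact] = seen[person] + 1
                queue.append(contact)
    return None

def lookup(graph, me, directory, name):
    """Return directory entries matching `name`, socially nearest first."""
    matches = [(who, num) for (who, num) in directory if who[0] == name]
    def key(entry):
        d = social_distance(graph, me, entry[0])
        return d if d is not None else float("inf")
    return sorted(matches, key=key)

# Toy data: people are (name, id) pairs so two "Matt Jones" can coexist.
graph = {
    ("Clay", 1): [("Ben", 1)],
    ("Ben", 1): [("Clay", 1), ("Matt Jones", 1)],
    ("Matt Jones", 1): [("Ben", 1)],
    ("Matt Jones", 2): [],
}
directory = [(("Matt Jones", 1), "555-0101"), (("Matt Jones", 2), "555-0202")]

print(lookup(graph, ("Clay", 1), directory, "Matt Jones"))
```

    Running the example, the Matt Jones two hops away (via Ben) sorts ahead of the one with no connection to you at all.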

    Comments (1) + TrackBacks (0) | Category: social software

    May 5, 2004

    SocialGrid: Crazy and live

    Posted by Clay Shirky

    So the deeply crazy attempt to solve dating is now live in a service called SocialGrid, a service that could perhaps best be described as "geek code meets FOAF." You describe yourself (example question -- rate your physical attractiveness on a scale from "Below Average" to "Model Looks" -- no points for guessing the gender of the UI designer *) and it generates a set of tags you embed in your page. Google then indexes those tags, thus letting you search for, e.g. 5' 6" brunettes of above average physical attractiveness interested in dating who live near you. (And a pony.) A quick check of member pages reveals roughly (wait for it) 90% men looking for women. The other 10% is divided among men looking for men, women looking for women, one man looking for transgendered women, and one 20 year old woman with auburn hair and exceptional writing talent, who is looking for men and probably astonished at her incredible good luck right about now... Oh, and in case you were wondering:
    Warning to Copycats & Clones SocialGrid has retained one of the top intellectual property law firms in America. Everything on this site is copyrighted and trademarked, including our search and coding system. Our patent application claims coverage on searches for all complex objects using Internet search engines. Our goal is to ensure a search system that will be free to our members and keep individuals and corporations from profiting by charging for searches. We will marginalize every profit margin. There is no money to be made in creating another ID coding system. The world needs only one system. If necessary, we will give SocialGrid and the patent to Google to insure one standardized coding system. Any copycats and clones will have to answer to Google. Please be advised that any copyright, trademark, and patent infringement will result in legal action.
    So now you know.

    ----

    * There's more, much more, where that came from. The category "Hair", for example, includes "Blonde" but not "Blond" and offers the users the opportunity to differentiate between "Blonde" and "Dark Blonde." Inexplicably, there is no checkbox for "Dark roots"...

    Comments (11) + TrackBacks (0) | Category: social software

    Un-Linked In

    Posted by Clay Shirky

    Openness creates growth which creates value which creates incentive which creates system-gaming which damages openness. This social pattern has now hit LinkedIn. I got this from them a couple days ago:
    Dear Clay, As more professionals are actively adopting LinkedIn, you may have noticed that you are also getting more emails from people inviting you to be their connection. While this is the easiest way to build your network and a testament to your reputation as a professional, we recommend you only accept invitations from people you know and who you are willing and able to recommend to other professionals you know...
    Three themes in one! First, scale in social systems creates a pressure to introduce membranes that shield individual participants from the effects of scale. Second, society is a public good, unowned and unownable, which sets it up for a tragedy of the commons. The tortured phrase "While this is the easiest way to build your network and a testament to your reputation as a professional..." speaks to the tension between rapid growth and long-term value.

    (As an aside: is it really 2004 and people are _still_ retro-fitting sites to deal with the almost universal results of rapid social growth? People people people, this _always_ happens -- community software is unlike, say, audio editing tools in that success is much harder to deal with than failure. If you plan to succeed, plan to deal with success...)

    And the third theme, of course, is the persistent tension between the goals of LinkedIn Inc., of the users of LinkedIn, and of those gaming the system. Here LinkedIn is recommending that I take additional steps most of whose value accrues to them, not me. To get a LinkedIn whitelist I have to upload my address book first. It's the standard YASNS plea -- "Do more work to help us bother you less!®" I assume they are in terror that I'll figure out that simply ignoring all LinkedIn mail once the S/N ratio falls too far is much easier.

    On a related note, Stowe Boyd tells an interesting story about a guy _selling_ one degree connections to him (scroll to "SNA jacking"), on the grounds that he has snammed enough people to act as a valuable bridge. The snammer in question
    ...is making contacts with folks on the LinkedIn network under false pretenses: We all presume that he is like us, and that his network is made up of people like our own business and personal contacts, not clients paying for access. Don't get me wrong, I think that his model — pay to play — is potentially a good one, so long as everyone involved is operating under the same set of rules. However, that's not the set of rules I was operating under when I joined LinkedIn, and it wasn't what I thought was going on when I accepted his request to become a contact. I don't want him to make money on my reputation and contacts, and I especially don't want him to do so without my knowing about it.
    It's interesting to me that Stowe invokes the 'rules' he joined under, when no such things existed. What he means is "I made certain assumptions about the social fabric that this guy is violating, and as in real society, I expect my assumptions to be both shared and actionable." That they are not comes as a surprise, and this tension between what we expect vs. what the terms of service say and the software allows is a lot of what makes the currently clunky YASNS world so interesting to watch.

    Comments (4) + TrackBacks (0) | Category: social software

    May 4, 2004

    Ars Electronica Community Prizes

    Posted by Clay Shirky

    Ars Electronica featured prizes for digital community for the first time this year. (Nice to see several wiki projects; surprising to see so many sites focussed on health issues.) The two winners are:

    *Wikipedia (USA)* www.wikipedia.org
    "Wikipedia" is an online encyclopedia that all Internet users can collaborate on by writing and submitting new articles or improving existing ones.

    *The World Starts With Me (Netherlands / Uganda)* http://www.theworldstarts.org
    "The World Starts With Me" is a sex education and AIDS prevention project that simultaneously gives young Ugandans the opportunity to acquire Internet and computer skills.

    Awards of distinction went to:

    * dol2day - democracy online (Germany): http://www.dol2day.de
    * Krebs-Kompass (Germany): http://www.krebs-kompass.de
    * Open-Clothes - 6 billions way of fashion for 6 billions people (Japan): http://www.open-clothes.com/
    * smart X tension (Austria / Zimbabwe): http://www.mulonga.net

    Honorary mentions to:

    * Cabinas Públicas de Internet http://cabinas.rcp.net.pe
    * Children with Diabetes http://www.childrenwithdiabetes.com/
    * DakNet: Store and Forward http://www.firstmilesolutions.com
    * Del.icio.us http://del.icio.us/
    * DjurslandS.net http://www.djurslands.net
    * iCan http://www.bbc.co.uk/ican
    * kuro5hin http://www.kuro5hin.org/
    * Kythera-Family.net http://www.kythera-family.net
    * Lomography http://www.lomography.com
    * Nabanna http://ictpr.nic.in/baduria/welcome.html
    * NYCwireless http://www.nycwireless.net
    * Télécentre Communautaire Polyvalent Tombouctou
    * Wikitravel http://www.wikitravel.org
    * Daily Prophet http://www.dprophet.com

    Comments (0) + TrackBacks (0) | Category: social software

    Darknet: JD Lasica's experiment in distributed editing

    Posted by Clay Shirky

    JD Lasica is working on a book called Darknet: Remixing the Future of Movies, Music and Television, and he's put it up on a socialtext wiki for group editing, and he makes an interesting distinction between his current use of the wiki and his planned future use of a weblog:
    This is an experiment in trust. Feel free to dive in and make all the changes you think are warranted. I've opened this up as a public wiki, rather than a private space. Feel free to link to this main page from your blog, though I'll also ask at this early stage that people not excerpt material or dissect any of the material in detail because we're not at the public discussion point yet. At a later date, I'll post a considerable amount of material from the book—as well as a great deal of material not included in the book—and at that time we'll open it up to the blog community. But for now, this wiki is set up only for collaborative editing and nothing else.
    So the wiki is "come here and edit" and the blog is for "let me send it out for distributed comments." Will be interesting to see how that transition goes. Also on the wiki front, Common Craft has a very nice description of wikis in plain English:
    The site’s content comes from the users of the wiki. *This is a defining element of wikis*: the users are responsible for the direction and content of the wiki web site over time. Everyone that uses the wiki has the opportunity to contribute to it and/or edit in the way that they see fit. This allows a wiki to change constantly and morph to represent the needs of the users over time. Wikis grow to represent the community of users.

    Comments (0) + TrackBacks (0) | Category: social software

    May 1, 2004

    PacManhattan (and blowback)

    Posted by Clay Shirky

    PacManhattan: like it sounds -- PacMan recreated on the Manhattan street grid, with teams coordinating using cell phones, Wifi and GPS. Done by my colleague Frank Lantz and the folks in his Big Games class at ITP. Frank == genius, and ITP is on fire these days.

    ObObservation: When people ask me why social software isn't just 'online communities' under a new name, I used to offer some complex answer about linking the study of online communities and computer-supported collaborative work under a rubric of computer-mediated communication specifically targeting group interaction. Dull, no? Now I just say "Blowback" -- our tools are doubling back to affect the real world. The principal site of important social software these days is offline, as with the back-channel or MeetUp or Bass-Station or dodgeball. And now, PacManhattan.

    Comments (0) + TrackBacks (0) | Category: social software

    April 30, 2004

    What I Did Next Summer

    Posted by Clay Shirky

    Intel is doing another project on urban interaction this summer, following on the successful Familiar Strangers project last summer that resulted in the production of Jabberwocky *, a Bluetooth phone app for extending the Familiar Stranger pattern. This summer, they're doing an Urban Probe project, and the current info page lists an interesting set of questions it invites people to ask about particular spaces.
    Select a location that is public (i.e. there is no restricted access to it). You must be able to observe this space by co-existing within its confines (i.e. you cannot watch it from a distance). Remain within the space for 15-30 minutes. Perform the following activities and describe your experience: - What are the boundaries of this place? What is the "entrance" and "exit"? - Describe the urban ecology of this place - Excavate or reveal the existence of at least one human trace within or across this place and interpret it - Expose a public secret that is concealed within this place - What one question would you ask this place? - In this place, what is most "beautiful"? Most "disruptive"? - What single word captures the aura of this place? - In a single sentence, what is the meaning of this place? Create a hypothetical digital, physical artifact to introduce to this place (i.e. handheld, mobile, fixed, etc). It can perform a task or be entirely impractical. Explain an envisioned use of your artifact within this place.
    This might be a useful brainstorming exercise for urbano-technologists generally. Given the interest Familiar Strangers generated, this will be worth watching.

    ----

    * Like most phone apps, at least in the States, Jabberwocky is an argument for what good apps could be like if the phone were a real platform, but isn't itself a good app yet, since the phone isn't a platform yet, since the US is so dreadfully behind in mobile infrastructure.

    Comments (0) + TrackBacks (0) | Category: social software

    Spike and Howl: Less is more, and zeroconf is a lot more

    Posted by Clay Shirky

    Anyone who's been to an O'Reilly conference has seen the shared-note-taking wonder that is Hydra SubEthaEdit (most sucktastic renaming EVAR.) The wonder of Rendezvous, Apple's branding for zeroconf wireless networking, drives Hydra, and I've always wondered when that pattern would become more widely supported, both in the sense of more tools and more platforms. Now I know the answer -- it's now. Porchdog software has a cross-platform implementation of zeroconf called Howl (OS X, Linux, BSD, Windows 2K+) _and_ Spike, a cross-platform shared clipboard (OS X, Windows 2K+).
    When you share a Spike clipboard, you see a clipping as soon as it is copied on the source machine. You can immediately drag that clipping into your own document on your own machine, and save valuable time.
    It's all open source, and you get free yummy candy for trying it. (Not really about the candy, but all the other stuff is true.) So go download it already -- it's v cool, and is part and parcel of the 'software that does less, well' pattern that is making me breathe a huge sigh of relief that maybe my life won't be wasted hunting particular features in the sub-sub-sub-menus of giant hulking tools.

    Comments (2) + TrackBacks (0) | Category: social software

    April 29, 2004

    Social hardware: Champaign-Urbana mesh project

    Posted by Clay Shirky

    I've been fascinated with social hardware ever since seeing Ahmi Wolf and Mark Argo build Bass-Station (wifi-in-a-boombox emergent jukebox thingie, and part of their Community Media Platform project.) Now Champaign-Urbana is working on a simple and cheap mesh network tool, with the following design center: pop a disk in a 486 and it works. (The inimitable Glenn Fleischman's take on it is here.) As with straight Wifi, the obvious uses of a simple meshing tool are to replace wireline networks where they would be too expensive, but the second-order benefits that will come out will all be novel and often social uses for temporary creation of self-configuring high-bandwidth LANs -- internet cafes without the cafe, temporary autonomous file trading zones, video re-mix culture throwdowns in real time. As Matt Jones sometimes says "It's getting too future in here..."

    Comments (0) + TrackBacks (0) | Category: social software

    SNAM: Spam for social networks

    Posted by Clay Shirky

    SNAM, a new coinage from Trendsetter.com for social network spam:
    Social networks have spawned a new form of spam that uses the FOAF (Friend of a Friend) message feature frequently found in this new genre of networks. Google’s Orkut, a network of some 200,000 members, offers the ability to send messages to FOAFs. FOAF messages often contain conference promotions or job postings that, while low in volume, will one day require action on the part of network managers.

    Comments (1) + TrackBacks (0) | Category: social software

    April 28, 2004

    Morningstar and Farmer Blog

    Posted by Clay Shirky

    As one of the few pieces of stated editorial policy at M2M, we don't talk much about games as social software, because other sites have that covered. To the list of "Places we love to read about the social life of games", we can now add Chip Morningstar and Randy Farmer's weblog. Morningstar and Farmer wrote the single most important document on the social nature of cyberspace EVAR, the 1990 Lessons from Lucasfilm's Habitat, so add this feed to your newsreader.

    Comments (1) + TrackBacks (0) | Category: social software

    Golan Levin on infoviz

    Posted by Clay Shirky

    So after yesterday's hissy fit about bad information visualization in social software, I figured I ought to point to something interesting on the subject. Here's Golan Levin's syllabus on Information Visualization as Artistic Practice. Of particular interest here is the list of network maps (scroll down in the left-hand frame.)

    Comments (1) + TrackBacks (0) | Category: social software

    April 27, 2004

    Geo-mapping Orkut

    Posted by Clay Shirky

    Here's my geo-map of Orkut. Red lines are friends, blue are friends of friends: Pretty much what you'd expect -- white-hot in NYC and the Valley, random smatterings in SoCal and Texas, and the occasional odd point (CMU, RIT, etc.) Fill in your Orkut name or number in the form field at the top of that page to get yours (and note that they don't ask for a password, meaning they're using cached data.)

    ObInfoVizRant: This is a classic "Oooh cool" followed by "Vanishes without a trace" toy interface choice, in part because it's designed for maximum "Keanu Reeves" interfaceness, even though it actually damages the sense of the data being portrayed. As an artifact of the choice to use lines instead of points to represent distribution, there's a ton of information over the Midwest, even though I know no one there.

    UPDATE: Liz rightly upbraids me in the comments for not differentiating between the Lines and No Lines interfaces, which are an option for the user. I should have said "The _default_ graph would be much better done as a set of icons showing individuals, so that density was in clusters, instead of line intersections." You can get this graph by clicking No Lines, but the designer clearly chose lines as the default for the coolness factor.

    Comments (8) + TrackBacks (0) | Category: social software

    Are MMO's fair?

    Posted by Clay Shirky

    So we don't talk about games much, as the intellectual competition is too fierce (*cough terranova cough *), but Dave Rickey has an interesting article over at Skotos on powerlaw distributions in MMO worlds. He does a thought experiment on the emergence of that characteristically unequal distribution of outcomes in the language of gamers, and comes up with an interesting question and answer:
    So very small changes in overall performance can make very big differences in overall result, depending on how the contests are set up. The question becomes: How much of a factor is personal skill, how wide is the distribution in performance? The more of a factor the personal skill of the player is, the faster the dropout rate. The conclusion we can draw from this is that there are sound psychological and mathematical reasons for the de-emphasis of personal skill in these games, and any efforts to build MMO's around personal-skill based gameplay need to account for these.
    If Rickey is right, designing a game that accurately reflects players' relative skills or investment of time will make them _less_ fun for a majority of players.
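    Rickey's argument can be reproduced in miniature. The toy simulation below (my construction, not Rickey's actual model) pits ten players whose skills differ by only a few percent against each other in random pairwise contests. When outcomes are pure skill, a tiny edge compounds into a lopsided win table; adding luck (noise) flattens the distribution:

```python
import random

def tournament(skills, rounds=10000, noise=0.0, seed=1):
    """Repeated random pairwise contests: higher (skill + noise) wins.
    Returns a list of win counts, one per player."""
    rng = random.Random(seed)
    wins = [0] * len(skills)
    for _ in range(rounds):
        a, b = rng.sample(range(len(skills)), 2)
        score_a = skills[a] + rng.gauss(0, noise)
        score_b = skills[b] + rng.gauss(0, noise)
        wins[a if score_a >= score_b else b] += 1
    return wins

# Ten players whose skills differ by only one percent per step.
skills = [1.00 + 0.01 * i for i in range(10)]

deterministic = tournament(skills, noise=0.0)   # outcomes are pure skill
luck_heavy = tournament(skills, noise=0.5)      # skill mostly drowned out

print("pure skill:", deterministic)
print("luck-heavy:", luck_heavy)
```

    With no noise, the weakest player never wins a single contest and drops out; with heavy noise everyone wins roughly their share, which is Rickey's case for de-emphasizing personal skill.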

    Comments (1) + TrackBacks (0) | Category: social software

    April 26, 2004

    Canadian Green Party turns to the net to rank its planks

    Posted by Clay Shirky

    The Canadian Green Party has put their campaign proposals on a site, and is soliciting public comment in the form of ranking, as with this list of policies affecting the Business climate. Viewers can vote up or down in traditional slashdot style, with the added limitations that there is no further characterization of a vote and that all you can do is re-sort the list -- no explicit numerical distinctions are retained (though they have set up an interface to flag 'at-risk' proposals, which I take to be those modded down by more than 50% of the users.) This was reported on BoingBoing as being a wiki, which is an interesting way to do emergent policy proposals among a group (the Dean campaign was using Socialtext in this way as well), but if it's a wiki, it's not a public one. Because of the dictates of partisan politics, wikis tend not to work well in places where _everyone's_ motives are suspect, meaning that the wikification of policy is mostly among insiders. The Green Party site looks like it embodies this form -- you can suggest new amended policies only through email, where they go through a vetting step before reaching the site (if they ever do), while everyone has access to the voting interface.
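    Mechanically, this kind of interface is tiny: a re-sortable list plus an "at-risk" flag for anything with a majority of down-votes. A sketch of that logic in Python (the proposal names and counts are invented, and the exact at-risk threshold is my guess at their rule):

```python
def rank_proposals(votes):
    """votes: {proposal: (up, down)}. Re-sort by net approval and flag
    proposals with a majority of down-votes as 'at risk'."""
    def net(item):
        up, down = item[1]
        return down - up  # ascending sort puts most-approved first
    ranked = sorted(votes.items(), key=net)
    return [(name, up, down, down > up) for name, (up, down) in ranked]

# Hypothetical planks -- no numerical scores shown, just order and a flag.
votes = {
    "small-business tax credit": (120, 30),
    "carbon tariff": (80, 95),
    "proposal C": (40, 40),
}
for name, up, down, at_risk in rank_proposals(votes):
    print(f"{name}: +{up}/-{down}" + ("  [at risk]" if at_risk else ""))
```

    Note that, as on the Green Party site, readers would only see the ordering and the flag, not the raw tallies.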

    Comments (0) + TrackBacks (0) | Category: social software

    A City Is Not A Tree

    Posted by Clay Shirky

    It’s a moment of disorientation I’ve had a couple of times — you find a great piece of writing, and think “Wow, this is really going to change things!”, only to discover that it is in fact decades old. The clash of historical vertigo with Internet Now is both wonderful and daunting.

    I had that moment yesterday with Christopher Alexander’s A City Is Not A Tree from 1965. Alexander argues that the hallmark of designed cities (Mesa City, Brasilia) is that their builders invariably gravitate to tree-structures, where all sub-units of a similar type roll up into a single super-unit, und so weiter, which creates an artificial and ultimately damaging simplification. He contrasts this with the structure of organic cities (London, NYC), which are organized as semi-lattices, where overlap and shared function is the order of the day.

    Whenever we have a tree structure, it means that within this structure no piece of any unit is ever connected to other units, except through the medium of that unit as a whole.

    The enormity of this restriction is difficult to grasp. It is a little as though the members of a family were not free to make friends outside the family, except when the family as a whole made a friendship.

    In simplicity of structure the tree is comparable to the compulsive desire for neatness and order that insists the candlesticks on a mantelpiece be perfectly straight and perfectly symmetrical about the centre. The semilattice, by comparison, is the structure of a complex fabric; it is the structure of living things, of great paintings and symphonies.

    It must be emphasized, lest the orderly mind shrink in horror from anything that is not clearly articulated and categorized in tree form, that the idea of overlap, ambiguity, multiplicity of aspect and the semilattice are not less orderly than the rigid tree, but more so. They represent a thicker, tougher, more subtle and more complex view of structure.
    Like the 1970 Jo Freeman essay on group structure I pointed to as my inaugural post, A City Is Not A Tree is resonant in part because Alexander is describing the world we live in without having seen it.

    I have an intuition that this essay says something important about planned vs grown communities in general, even when they meet outside the boundaries of real space and even when the architecture in question is an architecture of machines, but I won’t try to pin that down here — the material needs at least a re-reading before trying to work with the ideas.
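    Alexander's distinction has a crisp formal test: a collection of units forms a tree if every pair of units is either disjoint or nested, while a semilattice also permits partial overlap. A toy illustration (my sketch, not Alexander's notation):

```python
# A collection of sets is a tree iff every pair is disjoint or nested;
# any partial overlap (shared sub-units) makes it a semilattice instead.
def is_tree(units):
    sets = [frozenset(u) for u in units]
    for i, a in enumerate(sets):
        for b in sets[i + 1:]:
            if a & b and not (a <= b or b <= a):
                return False  # partial overlap: semilattice, not a tree
    return True

planned = [{"a", "b", "c", "d"}, {"a", "b"}, {"c", "d"}]  # nested only
organic = [{"a", "b", "c", "d"}, {"a", "b"}, {"b", "c"}]  # "b" is shared
print(is_tree(planned))  # True
print(is_tree(organic))  # False
```

    The single shared element in the second collection is all it takes: one unit belonging to two overlapping groups is exactly the "multiplicity of aspect" the tree structure forbids.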

    Go. Hit print.

    Comments (9) + TrackBacks (0) | Category: social software

    April 23, 2004

    Grant Bowman's List of Collaborative Tools on sourceforge

    Posted by Clay Shirky

    Grant Bowman has a huge list of collaborative tools, hosted on sourceforge. It is specifically focussed on open source projects, though it has a smaller number of commercial apps and related links. It's not categorized, and has the usual problems of such lists -- it includes the generic graph-drawing package GraphViz and esr's Fetchmail, for example, so it's hard to see a crisp line drawn around collaboration, and the projects are listed alphabetically, so there isn't a sense of functional category. As an object of contemplation, however, or if you are looking for inspiration, it's great.

    Comments (0) + TrackBacks (0) | Category: social software

    All together now: Ha-a-a-a-a....

    Posted by Clay Shirky

    ...py Birthday to us, Happy Birthday to us, Happy Birthday, deeeeeear M-To-M, Happy Birthday to us. A year and a bit ago, Liz Lawley and Hylton Jolliffe cooked up the idea for a weblog on social software, and Liz, wanting a social blog to be social to its core, then rounded up the rest of us. The first official post was April 23, 2003. After the intros, Liz's first post was Why I don't Like Wikis; mine was a pointer to Jo Freeman's brilliant The Tyranny of Structurelessness; Ross's was The Social Capital of Blogspace; and Seb's was on Smarter, Simpler, Social. And now we're one, 600 or so posts later (no exact count, as some stories didn't get ported over in the move...). Thank you all for reading; one year on, it's pretty obvious that things are just going to get more interesting on this front. Congratulate us for passing the drooling stage; now comes the part where we start toddling around breaking things...

    Comments (4) + TrackBacks (0) | Category: social software

    April 22, 2004

    danah on community awards

    Posted by Clay Shirky

    danah has a set of questions about awards for 'community sites' for the Webby Awards and Ars Electronica:
    • Is the nomination supposed to focus on the site, its design, its intention, etc. or the resultant community?
    • Who is being nominated? The creator or the community? What if the community hates the creator?
    • What practice is being validated? The expected one or the successful one? What if the successful one is subversive?
    • How valuable are communities that transcend the site? Do you count the transcendence?
    • How do you address invisible communities whose only proof of existence is their end-result?
    This is just the right set of questions -- the value of a _site_ and the value of the _community_ are hardly parallel. As an example, Bronze: Beta, home of the Buffistas, is by any technical measure completely dreadful -- a non-threaded write-only dumping ground that should be dead in the water. _Eppur si muove_ -- and yet it moves. Now you'd be tempted to say that B:B has a good community despite the technology, except that it was designed to spec -- the crappiness is intentional. After the old Bronze boards were shut down, the community rallied to build themselves a new home, and the spec for that home included having a single page with a posting form at the top, as if it were a web BBS ca. 1994. When they were re-building Parliament after WWII, Winston Churchill is said to have said "Whatever you do, don't put enough seats in for everybody," on the grounds that, in the old Parliament building, when some matter came up that was important enough for all the members of Parliament to show up at once, the place got uncomfortably crowded, which reinforced the sense of urgency. The surface inadequacy provided deep value. Bronze: Beta is like that (setting aside the difference between Buffy gossip and political discourse that affects the lives of millions.) It isn't just a good community site despite the limited technology, it's a good community in part because of the limited technology -- the limits help shape the community (see the post below this one on Ward's 'limit as a social tool' hack.) I'm pleased to see community as a concern in both camps (though I trust Ars to find more interesting candidates than the Webbys) but like danah I think there's a misfit between actual community and what the award givers are looking for.

    Comments (2) + TrackBacks (0) | Category: social software

    Ward on social engineering in a wiki

    Posted by Clay Shirky

    Giles Turnbull has posted an interesting interview with Ward Cunningham on all things wiki. There's lots of good stuff there, but the thing that caught my eye was this little story about adjusting the software to reinforce cultural norms:
    Every wiki develops a set of norms. Every member of the community sets themselves against those norms. If you have people who post stuff that is waaaay beyond those norms, such as posting pornographic images in pages, then you find that kind of thing gets dealt with very quickly. It just gets removed. But since last Fall we have had an individual who has been posting only *slightly* outside those norms, so close to what's acceptable that others have been unable to agree on whether or not his contributions should remain. [...] People said "ban him" but I'm not really sure I'd be able to effectively do that even if I wanted to. I'd be getting into an arms race that I could never win. Sunir understands what he calls "soft security". I was using code against behaviour but I didn't feel that I was in a very strong position. The problem was that the abuser had too much time. He was too active and could get too worked up about things, so much that he had to fight. So I put a post-limiter in place. People can only post so many times during a set time period. And it worked, almost straight away. We haven't banned the abuser, merely limited his ability to post so that what he does post is more within the norms we can expect and deal with.
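    Ward's post-limiter is just a rate limit on posting; a minimal sliding-window sketch (my invention, not the actual wiki code) might look like this:

```python
import time
from collections import defaultdict, deque

class PostLimiter:
    """Allow at most `limit` posts per user in any `window`-second span."""
    def __init__(self, limit=5, window=3600, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.history = defaultdict(deque)  # user -> recent post timestamps

    def allow(self, user):
        now = self.clock()
        posts = self.history[user]
        while posts and now - posts[0] > self.window:
            posts.popleft()          # forget posts outside the window
        if len(posts) >= self.limit:
            return False             # over the limit: reject, don't ban
        posts.append(now)
        return True
```

    Note the soft-security property: nobody is ever banned, the limiter just caps how fast any one person can act, which drains the abuser's main advantage (having more time than everyone else) without an arms race.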

    Comments (0) + TrackBacks (0) | Category: social software

    April 21, 2004

    Webjay: Lucas Gonze goes after user-created music filtering

    Posted by Clay Shirky

    So last year, I was bitching about how the music industry is stifling the inevitable "Big Flip", where you switch from a "filter, then publish" model for analog production to a "publish, then filter" model for digital production, where content is first made available, and _then_ sorted for quality. (This is how Google and Blogdex work, for example.) I was in particular lamenting the lack of user-generated filtering that could break the bottleneck of the A&R (Artists and Repertoire) departments of the big music firms. So now my homeboy Lucas Gonze has gone and built it. It's Webjay, a site for trading user-generated playlists. Best of all, it's designed for playlists that feature music that is legitimately available over the web:
    Even though we won't censor users, we would be grateful if users would censor themselves. Webjay exists to promote music which has been authorized for distribution on the web, not to make it easier to find unauthorized music. Please do not post links to unauthorized music. It will bring trouble. It will promote hoarded music at the expense of music libre. It will be stupid -- posting hoarded music on the web is a really bad idea.
    So you get three filters in one -- someone else has vetted the music for quality, the music is rolled up in thematic playlists, further raising the "If you like X, you might also like Y" quotient, and everything you hear is (at least putatively) music libre.

    Comments (6) + TrackBacks (0) | Category: social software

    York University Lecture on Social Software

    Posted by Clay Shirky

    There's a long overview of social software history, trends, and possible futures by Darren Wershler-Henry, from a class at York called "Communications for Tomorrow." Particularly interesting to me are the Other Questions for Social Software:
    Rules for Entry
    Another key question for social software and the communities it creates concerns the rules for entry. How complicated or simple, how stringent or loose should they be? Every culture has rules, and online cultures are no exception. But how strong do the sanctions that govern commonspace need to be, really? [...] Paranoia and the urge to control are far too common in the business community's approach to online community. Corporations are anxious about the actions of their users because they are ignorant about the slightly irreverent and iconoclastic nature of online interaction. The failure to allow some room for unruly online behaviour is one of the quickest ways to kill a nascent online society. Clearly, there need to be some disincentives to causing mischief online; but just making it difficult and inconvenient should suffice in most cases.

    Rituals
    We do know that part of what makes any community work, including online communities, is the inclusion of rituals - a subject closely related to community rules. Amy Jo Kim, author of Community Building on the Web, points out that there are rituals specific to particular kinds of social software. [...] Like all life-cycles, the cycle of community includes a reproductive phase. Since reproduction is essential for long-term online survival, online enterprises are wise to capitalize on it. Communities that include features allowing members to assume control of sections of the community's functions over time or split off into sub-communities tend to be more successful than static sites.
    It's a nice broad overview, coupled with some interesting thoughts about future research into identity, visualization, and community life-cycle.

    Comments (1) + TrackBacks (0) | Category: social software

    April 20, 2004

    Historical review of the role of population data in human rights abuses

    Posted by Clay Shirky

    Interesting paper on the use of census and other population data as an input to large-scale human rights abuses.
    Yet such functions do not exhaust the uses of the population data systems. As many commentators have indicated, particularly in the literature on the efforts of European colonialists to control of populations in their far-flung empires, there is a darker side to the development of these systems. Population data systems also permit the identification of vulnerable subpopulations within the larger population, or even the definition of entire populations as "outcasts" and a threat to the overall health of the state.
    The work goes on to detail many historical versions of this problem, from American Indians to Roma in Europe. Even stipulating that the overlap between this work and social software is both oblique and partial, reading this raised the ickiness factor for me from the amount of data our social networks are casually gathering. We are privatizing census functions, allowing private individuals and firms to gather material once reserved only for the state. We're already seeing things like danah's stories of White Supremacists harassing black users on Friendster, and the googlebombing of "Jew". Our systems are not yet of a size or representative depth to make these kinds of abuses much more than ad hoc, but I wonder if we're privatizing systematic abuse as well.

    Comments (0) + TrackBacks (0) | Category: social software

    April 19, 2004

    SubEthaTrack

    Posted by Clay Shirky

    SubEthaTrack, a site for making SubEthaEdit (formerly Hydra) documents globally available. (SubEtha is the group document editing tool perhaps best likened to an IM wiki. Mac only, alas. SubEtha is becoming the new BBEdit.) The design is: open and share a SubEthaEdit document, then go to SubEthaTrack, which will read and share your document, making it globally available. Then other users can search for documents, and join any you have made public. There's even one-button application launch from Safari. Right now, there are so few docs set up that the search filter is a no-op, returning a list of the handful of existing spaces, but the CodingMonkeys folks have an always-on server hosting some test docs, like a global scratchpad. Lots of latent promise, lots of hurdles as well, including, alas, NAT traversal (sweet weeping Jesus, the internet is broken and getting more broken by the day.) I have tried it from a hotel room and a conference network, and am able to join existing SubEtha shared docs over the network, but unable to host any of my own, because the NAT/firewall/router dingus I'm behind drops traffic at the port SubEthaTrack expects to inspect. There was a heady moment in 2000 when we thought the P2P people were fixing NAT traversal as a general solution, but here it is 4 years later, and we're still fixing this problem imperfectly and app by app. For things like SubEtha to work, we need to take into account that the users most likely to need zero configuration tools are the users who are least likely to have a naked IP address.

    Comments (0) + TrackBacks (0) | Category: social software

    Creation of a Social Innovation Map in Vienna

    Posted by Clay Shirky

    A workshop invitation, in Vienna later this week (so most relevant to CHI attendees) to make a "social innovation map", trying to describe where a good set of next moves might lie :
    The workshop is convened by Convivio, the Network for People-Centred Interactive Design. (Convivio is the European Commission-funded network of sixteen research institutions and companies, from nine countries, that seeks to enhance social quality through the use of technology and design, software and hardware, research and the arts, in novel ways.) http://www.convivionet.net/

    Convivio's opportunity map is intended to describe a vision that will influence the research agenda for Information and Communications Technologies in Europe, and beyond. The map will be presented at the Information Societies (IST) conference in The Hague, on 15-17 November; at that event, planning for Europe's Seventh Framework Programme for research begins in earnest. After that, in May 2005, Convivio's vision will be the focus of an international conference.

    The workshop is free, and open to non-members of Convivio. Designers, developers, researchers (and especially CHI participants) are welcome to help us ensure that social and cultural issues drive the innovation agenda. The workshop runs from 09:30 to 14:00 on Monday 26 April.

    Comments (0) + TrackBacks (0) | Category: social software

    April 18, 2004

    LJ Images

    Posted by Clay Shirky

    Short post: A page that displays the 10 most recent images from LiveJournal. And now the long post-script: This goes in the 'I don't get it' category -- saw this a while ago and passed over it, but several people have since forwarded it to me, so there's obviously something there, but what? Here are the answers I've been able to think of: first, people often see things like this, and are interested only for as long as it takes to forward the links. It's like that old Plumb Design 'Visual Thesaurus' -- everyone loved it when they first saw it, but no one ever used it. We've gone from a world where passing something on to a friend meant that you were interested in something for long enough to remember it to a world where you don't have to have anything other than a momentary frisson to forward it to someone who thinks it's cool just long enough to forward it again. LJ Images as the new Hamster Dance. Second, images appeal, necessarily, to a sub-intellectual part of the brain. (Necessarily, because eyes existed long before cerebellums.) It may be that anything using images sparks positive short-term reactions. Third, skin. Enough of the pictures are party snapshots, and the LJ cohort skews young and restless, so there may be a "Girls Gone Wild: LJ Edition!" pleasure in hitting refresh while looking for the occasional moderately revealing photo. But overall, the service is a bore, especially compared with the LJ Random User feature. I wonder if the pressure to get and even lead the rush to any new discovery in the weblog world leads people to over-forward stuff like this, creating an attention market for material with high immediate appeal and short shelf-life? What would happen to our memes if there were a 24-hour lag between viewing and recommending?

    Comments (7) + TrackBacks (0) | Category: social software

    April 17, 2004

    Dodgeball goes multi-city

    Posted by Clay Shirky

    Dodgeball, the social networking tool for mobile phones, is expanding past NYC this weekend, becoming available in SF, LA, Boston, and Philadelphia as well. (Full disclosure: Dodgeball management, aka Dennis Crowley and Alex Rainert, were students of mine, so I'm both reporting and kvelling.) The quickie description -- 'Friendster for Mobile Phones' -- makes the service graspable by potential users, but hides a lot of the complexity actually in the service -- social networking, mobile carrier interoperability, geocoding, lightweight user alert systems, on and on. I've watched these guys putting an astonishing amount of thought and effort into this system for the last couple of years, and it's heartening to see it paying off, especially as the mobile carriers still seem to deeply not get the social potential of their formerly point-to-point devices.

    Comments (3) + TrackBacks (0) | Category: social software

    April 15, 2004

    New word: orkward

    Posted by Clay Shirky

    Don't know how I missed Joey deVilla's great rant on orkut:
    Remember that recent issue of The Onion, in which they wrote an article about a car that ran on anger? Maybe emotion-powered physical devices may not be possible, but that's not the case with software. Orkut is powered by _envy_.
    And, the best single-image critique of orkut EVAR:

    Comments (0) + TrackBacks (0) | Category: social software

    WASTE: It's ba-a-a-ck [And: a plea to readers]

    Posted by Clay Shirky

    Last summer, we wrote about WASTE, the nullsoft tool for secure communication and collaboration among small numbers of clients (10-50 nodes), which was posted under the GPL and almost as quickly pulled by AOL. I was looking at WASTE as an example of the file sharing goes social pattern, but the sourceforge project has lain dormant for some time. Now, though, the chip-maker VIA seems to have stirred the pot by posting a tool which was a WASTE copy, not even a port, under the name PadLock, bringing this response yesterday from the WASTE developers.
    Development on the program will resume soon, and we will begin the major protocol adjustments, to bring about the release of v1.4 final. I would also like to remind everyone that the last two releases are alpha, which is why only minor changes are visible. We have been experimenting with technologies to create a more feature rich program instead of releasing betas. We hope this will turn out well down the road.
    The PadLock code has since been removed from VIA's site, but it has (temporarily?) revived activity and interest around WASTE, whose potential as an open-source platform for building social networks is large but also largely unrealized. Will be worth watching... *UPDATE:* Bill Seitz's question in the comments is worth putting here, for greater visibility. He asks "Do you have any sense of how well this protocol works for sharing smaller packages of data, e.g. tuples/triples?" That is an incredibly good question, both because it is the design pattern of Groove, the best-engineered tool in this space, but also because propagation of small bits means that there are a world of RDF and transclusion-style tools (e.g. purple numbers) which could be integrated into that environment. So, a plea to readers --- can anyone with deeper familiarity with the WASTE protocols than I have answer Bill's question?

    Comments (2) + TrackBacks (0) | Category: social software

    April 12, 2004

    Orcmid on the Back-channel

    Posted by Clay Shirky

    Orcmid has a good post on the back-channel, in which he rightly calls me out for the tone of my earlier post. (In particular, I have since apologized to danah, both in the comments there and in private.) Orcmid's key quote, I think, is this:
    If the problem is the design or structure of meetings, or perceived inequities and power situations, deal with that.  It is not a problem that technology will fix.
    I absolutely agree with this, so let me first abjure any sense I might have given that I think there is a tech solution here -- this is absolutely something that requires careful meeting design. (I wrote about this in In-room Chat as a Social Tool a couple of years ago.) Where I agree with Orcmid (and Andrew Fiore) is in assuming that the core value in the conference is in its groupness. Where I disagree is in assuming that the group value mostly flows from the plenary presentations. From my experience of professional conferences, almost all such meetings have the same characteristic -- the hallway conversations are better than the contents of the talks. So I am making two assumptions that Orcmid and Andrew don't, I think, share. First, the back-channel is a fact, not a choice. Every conference with Wifi will get a back-channel, and every conference will have Wifi in the next couple of years. So for me, any question of _whether_ to have a back-channel is already barking up the wrong tree -- all conference organizers will have to deal with it in some way or other. Even formally asking people to do part or all of the conference 'lids down' is a strategy that assumes the back-channel, rather than ignoring it. The second assumption is that is there is huge untapped potential for lateral value among groups of attendees, and that if unlocking this value comes at the expense of some of the value for the presenter in having a room full of attentive (or at least not obviously distracted) listeners, there is still reason to explore whether the overall value of the conference is higher. A talk is an incredibly lousy way to transmit facts -- if someone had invited me to the MSFT conference with the promise that I would walk away with all of the facts presented at that conference, but none of the social interaction, I wouldn't have gotten in a taxi to go there, much less a plane. 
We went for each other, and while talks have a way of shaping the conversation, they are less important now than they were pre-Web, when the pure information in the talk was harder to come by. We're living in a remarkable period of experimentation with social form, where things like FOO Camp abandon older organizational styles in favor of relying on attendee-created value. The back-channel brings some of that value (and tension and anarchy) into more established conference settings. Taking Orcmid up on his challenge, I'm willing to admit the disadvantages of the back-channel -- it was distracting to people who chose to opt out, and emotionally hurtful to those who felt left out. But I don't believe evidence of harm necessarily leads to the conclusion that the back-channel should be banned, both for practical reasons (it is basically unbannable) and philosophical ones (increasing value for lateral communications may well outweigh harm to older conference styles.) So my counter-challenge is: Assume the back-channel is a permanent option, and in any large gathering (greater than two dozen, let's say) assume that at least some participants will form one. Now what?

    Comments (8) + TrackBacks (0) | Category: social software

    April 9, 2004

    Technology, Agency, and the Back-channel

    Posted by Clay Shirky

    danah compares non-participation in the back-channel at the MSFT conference to racial discrimination:
    everyone loves to tell me that anyone could get on the channel so get over it. This horrifies me because it rings of “any person of color can get on the Internet so the race divide is their fault.”
    This comparison makes no sense. A person has no agency with regard to their race, making racial discrimination manifestly unfair. But look at the characteristics danah likens to race: people didn't bring their laptops to the conference, they can't install the software, they don't like splitting their focus. Now one can certainly imagine a conference in which those characteristics were divisive in the way race is -- the "Dyslexic Seniors and their ADD Tech-mad Grandchildren" conference would create such a split. But this was a conference _about social software_, whose entire invite list had been chosen for their expertise in the topic, whose sponsor provided Wifi, and where the back-channel's existence was announced in public on the morning of Day One. Even "Golly, it sure is confusing installing all that new-fangled software the kids are using today" fails the test, as we were using IRC, a 15-year-old port of Compuserve's 20-year-old CB Simulator. No matter who you were at that conference (unless you were Barry Wellman, godfather of us all), IRC existed the day you first logged in. Now there's certainly no reason anyone should bring a laptop to a conference or log into a back-channel if they don't want to, but it's silly to confuse that set of choices and their attendant ramifications with racial discrimination, when the population in question was selected for their professional engagement with social software. And this matters because playing the race card obscures the parts of the argument that do matter -- the back-channel created negative consequences, because it created a distraction. The problem wasn't that people wanted to opt out of the back-channel for various reasons, but that even when they did, they were affected by it. On top of the obvious annoyances like out-of-synch laughter or distracting typing sounds, a room with a back-channel _feels_ different, because many of the attendees are simply less present. 
It also raises the stakes for presenters, who have to be more expert at holding an audience's attention, because the grace period before you lose people collapses to 30 seconds or so. The critical conversation is whether and in what circumstances the advantages outweigh the disadvantages and, relatedly, how those disadvantages might be mitigated. There's more, much more, to be gotten out of that conversation than in conflating non-participation as a choice with racial discrimination.

    Comments (15) + TrackBacks (0) | Category: social software

    April 8, 2004

    Townsend in Korea

    Posted by Clay Shirky

    I've written earlier about Anthony Townsend's work on the changes coming to urban areas with wireless access. (And I'm pleased to say he's now a colleague at ITP as well.) Now he's gotten a Fulbright to study the social effects of near-ubiquitous broadband penetration in Korea, of both wired and wireless varieties, and has set up a weblog for posting his observations. Read the whole, uh, RSS feed.

    Comments (1) + TrackBacks (0) | Category: social software

    boyd on the backchannel

    Posted by Clay Shirky

    danah with more on the backchannel:
    The thing about the IRC backchannel is that it's *obvious* that there is a second-place to the conference. Thus, those not participating create another target of dislike in addition to the conference. One can despise the conference as well as the IRC channel. In most events, people don't hate either the actual organizers of the conference or the participants of the IRC channel (since they're friends anyhow); they simply despise the organization. With only a fraction of people participating, the IRC channel doesn't become a communication tool; it becomes a second place. And since people are in both the IRC channel and the conference simultaneously, it means that you can't just disregard that population - they are weaved too tightly. (You can disregard the conference attendees that just sit in the bar the whole time.)
    This is right on -- the channel becomes the hallway conference folded back in on the formal conference, and is in many ways a parallel track. This produces both its value and its problems. danah nails the effects created by the backchannel, though she and I (and, I think, Liz) disagree pretty strongly about whether those effects add up to net positive or net negative.

    ----

    Follow-up from Ross: Our dear vacationing danah continues:
    When i bring this up to people, everyone loves to tell me that anyone could get on the channel so get over it. This *horrifies* me because it rings of "any person of color can get on the Internet so the race divide is their fault." There are many reasons why people don't feel comfortable on the IRC channel. It's not their home domain; they don't use laptops during conferences or they don't have the skills to install the backchannel; they don't execute well with continuous partial attention; speed typing is not comfortable.... You name it. It's an environment that privileges those comfortable in it already.
    Two points:
    • We mix different tools within the Eventspace for different situations. IRC, web-based chat, blog, wiki, photos and video. Sometimes aiming to extend the event beyond the four walls to remote participants. Sometimes aiming to enhance participation.
    • The role of event facilitator is fundamentally changing to one that leverages these tools, encourages in-room and out-of-body participation and highlights key issues and contributions. This happens to freak traditional facilitators out and not just because of their honed empathic abilities.
    danah is right that there is risk of an in-room social software divide. And Clay is right that sometimes you want this to happen, sometimes you don't. Again, it depends upon the situation.

    Comments (2) + TrackBacks (0) | Category: social software

    Operation Fuck With the LJ ChristiansEmail This EntryPrint This Article

    Posted by Clay Shirky

    OK, one more from April Fool's, another social hack: A LiveJournal user, Moroveus, decided to lodge a kind of distributed protest on April Fool's, first called Operation Fuck with the LJ Christians and later renamed Operation Jour de Poisson; the prank was a comment on the recent Pledge of Allegiance lawsuit. Here is his To Do list detailing the mechanics of the prank:
    • Cut LJ bio and interests in order to disguise my obvious penchant for atheism and liberalism.
    • Unsubscribe from atheism.
    • Go back two weeks and "friend-only" all the posts about politics, Mel Gibson, religion, the GOP, etc.
    • Create pro-pledge image.
    • Create anti-pledge image.
    • Code and post the meme in my public journal.
    • Post meme in active conservative/Christian communities.
    • Wait a few days and then swap the root image and edit the post it points to.
    Then, on April Fool's, he replaced the original root image with this one:
    Hilarity ensued.

    ...continue reading.

    Comments (3) + TrackBacks (0) | Category:

    April 6, 2004

    Weinberger on ASN's and FOAFEmail This EntryPrint This Article

    Posted by Clay Shirky

    David Weinberger has a piece up at JOHO called The Truth About Why I Hate Friendster, in which he lists the public but fake reasons he doesn't like the current crop of ASNs (Artificial Social Networks, a beautiful observation), as well as the private and real reasons he doesn't like them, and ends up focussing on the centralized v. decentralized debate.
    ASNs are closed networks when it comes to data. Of course they exist on the Net and use the usual Net protocols, but these systems get their benefits by walling off their data. The benefits are powerful. But, like AOL back when the Web started, they are protectionist. As a result, as more data is added to them, their value increases but that value is invisible to the rest of the Net. The open Net becomes less valuable as human links are moved into ASNs. The Friend of a Friend (FOAF) proposal attempts to add value to the open Net. [...] FOAF is kind of catching on. For example, the popular blogging software, TypePad, automatically creates FOAF files based on user profiles. (Leigh Dodds' Foaf-a-matic will create a FOAF file if your blogging app doesn't do it for you.) Applications for FOAF are not catching on, at least not yet.
    David and I disagree somewhat here, as I think that technologies that use a mix of centralization and decentralization are often superior to either extreme -- Napster worked better than either iTunes or Kazaa. Not that Friendster is the be-all and end-all, but rather that the problems he identifies with FOAF -- the lack of applications -- stem from systematic errors in FOAF, rather than some inexplicable lag in application design. Universally inclusive and consumable information about me is, almost by definition, going to be so bland as to be useless ("Mr. Shirky is a Pisces, and likes Chinese noodles.") The membrane-bound characteristics he kicks against with Wallop et al are actually useful for limiting the exposure of information with real social value. This doesn't mean that there aren't non-Wallopish ways to get the value of semi-permeable membranes, but FOAF in its present incarnation sure ain't it.

    Comments (3) + TrackBacks (0) | Category: social software

    April 5, 2004

    A good one from April Fool'sEmail This EntryPrint This Article

    Posted by Clay Shirky

    OK, so I was a little hard on the April Fool's stuff, mostly because I was tired of the "Microsoft buys Red Hat"-level jokes on slashdot. One fascinating and explicitly social hack, though, 'MetaFilter HP becomes wiki', was a doozy.

    ...continue reading.

    Comments (2) + TrackBacks (0) | Category: social software

    April 4, 2004

    FlashMob meets the Grid, part wayEmail This EntryPrint This Article

    Posted by Clay Shirky

    We wrote about the attempt to build a FlashMob supercomputer here, back in February. The event took place earlier this week, and succeeded in networking nearly 700 computers on the spot; however, they failed at their goal of getting a spot on the list of the Top 500 supercomputers because of (all together now) problems of scale:
    Results: FlashMob I was very successful and a lot of fun. Over 700 computers came into the gym and we were able to hook up 669 to the network. Our best Linpack result was a peak rate of 180 Gflops using 256 computers, however a node failed 75% through the computation. Our best completed result was 77 Gflops using 150 computers. The biggest challenge was indentifying flakely computers...
    Dealing with, uh, flakeley nodes is one of the big design challenges of the era, in all sorts of systems. If we're going big and distributed (which we are), then you cannot _ever_ assume things will go your way, certainly not at all points in the system at the same time. Big distributed anything -- supercomputing, file sharing, social networks -- faces the same core challenge: assume flakeliness, then design systems that can withstand it.
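    That design stance -- expect failure, route around it -- fits in a few lines. Here's a toy sketch in Python; the 20% failure rate and the retry-on-another-node policy are invented for the example, not drawn from the FlashMob code:

```python
import random

def run_on_node(node_id, task):
    """Simulate dispatching a task to a node that may flake out."""
    if random.random() < 0.2:           # assume roughly 1 in 5 nodes fails
        raise RuntimeError(f"node {node_id} went away")
    return task * task                  # stand-in for the real computation

def tolerant_map(tasks, nodes):
    """Run every task to completion, reassigning work from failed nodes."""
    results, pending = {}, list(enumerate(tasks))
    while pending:
        task_id, task = pending.pop()
        node = random.choice(nodes)
        try:
            results[task_id] = run_on_node(node, task)
        except RuntimeError:
            pending.append((task_id, task))   # retry on some other node
    return [results[i] for i in range(len(tasks))]

print(tolerant_map([1, 2, 3, 4], nodes=["a", "b", "c"]))
```

    The point of the sketch is that no single call is ever assumed to succeed; the system's guarantee lives in the retry loop, not in the nodes.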

    Comments (0) + TrackBacks (0) | Category: social software

    April 1, 2004

    POKE in the Eye With A Sharp StickEmail This EntryPrint This Article

    Posted by Clay Shirky

    So I have become bored bored bored with the April Fool's stuff by and large, but I was struck by how much conceptual similarity the joke Jabber spec, Presence Obtained via Kinesthetic Excitation (POKE) bears to Matt Webb's Glancing. I've been using Apple's iChat as my IM client for a while now, and am addicted to the gentle 'whuff' sound as users enter and leave presence-space, so while POKE is meant to be ridiculous, it's about 80% of the way to something real, something that both Webb and iChat are getting at -- relying on the limbic system for presence awareness.

    Comments (2) + TrackBacks (0) | Category: social software

    March 31, 2004

    Situated SoftwareEmail This EntryPrint This Article

    Posted by Clay Shirky

    I just published a piece called "Situated Software," about a pattern of software creation I think I'm seeing among my students at ITP:
    We've been killing conversations about software with "That won't scale" for so long we've forgotten that scaling problems aren't inherently fatal. The N-squared problem is only a problem if N is large, and in social situations, N is usually not large. A reading group works better with 5 members than 15; a seminar works better with 15 than 25, much less 50, and so on. This in turn gives software form-fit to a particular group a number of desirable characteristics -- it's cheaper and faster to build, has fewer issues of scalability, and likelier uptake by its target users. It also has several obvious downsides, including less likelihood of use outside its original environment, greater brittleness if it is later called on to handle larger groups, and a potentially shorter lifespan. I see my students making some of these tradeoffs, though, because the kinds of scarcities the Web School was meant to address -- the expense of adequate hardware, the rarity of programming talent, and the sparse distribution of potential users -- are no longer the constraints they once were.
    It's a set of observations about a change in programming practices and costs, but also about building software that is situated in an existing community, and takes advantage of that community's behavior in a way that impersonal Web applications can't.
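    The arithmetic behind the N-squared point is easy to check: the number of potential person-to-person links in a group of n is n(n-1)/2, which is trivial at seminar scale and crushing at web scale.

```python
def pairwise_connections(n):
    """Number of potential person-to-person links in a group of n people."""
    return n * (n - 1) // 2

for n in (5, 15, 50, 5000):
    print(n, pairwise_connections(n))
# 5 -> 10, 15 -> 105, 50 -> 1225, 5000 -> 12497500
```

    A reading group of 5 has 10 possible links to manage; a community of 5,000 has twelve million, which is why "that won't scale" only matters if you ever intend to.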

    Comments (4) + TrackBacks (0) | Category: social software

    March 30, 2004

    Snarkiness on paradeEmail This EntryPrint This Article

    Posted by Clay Shirky

    Liz has a great post on mamamusings, Confessions of a Backchannel Queen, about a back-backchannel. Our story in brief -- during a social software conference yesterday, The Usual Suspects convened on an IRC backchannel. At one point, T.U.S. began criticizing one of the presentations as being pitched at novices, which got us an online shushing from one of the organizers. Liz, rather than meekly staying shushed, then started a back-backchannel, a second IRC channel for the snarkiness, which included about a third of the original channel but none of the organizers.
    But when the snarkiness left the original backchannel, there were some interesting side effects. First, the original channel nearly died. The level and quality of content dropped off significantly as the most high-energy participants shifted their action to the new channel. Second, the level of “bad behavior” in the new channel escalated dramatically. By drawing attention to it, and pushing it out of the mainstream environment, it was focused and amplified. That’s not necessarily a good thing. There were times when things went a little over the top, to the point where people were noticing the ripples of laughter at times when laughter seemed inappropriate.
    Read the whole thing. There was an interesting observation during a presentation yesterday about the tension between informality and inclusiveness in online tools. New tools like email and IM get dragged into organizations by the employees, who start by using personal email or IM for business, and prizing it for its informality. Over time, the tool becomes both inclusive and vital, becoming a core function, and the appearance of business expectations undermines the informality. That is happening now with the backchannel -- if a few connection junkies are creating a backchannel, you can ignore it, but if the backchannel includes half the room, the tension between informality and control breaks out into the open. And so we withdraw behind a semi-permeable membrane, the pattern of the era.

    Comments (3) + TrackBacks (0) | Category: social software

    March 26, 2004

    Deanspace goes *-spaceEmail This EntryPrint This Article

    Posted by Clay Shirky

    Dan Gillmor is reporting that Zack Rosen, leader of the DeanSpace effort (itself built on the open-source Drupal), is now building an easy-to-use open source groupware toolset. Rosen tells Gillmor his goal is
    To establish a permanent foundation that can spearhead social software development projects for nonprofit organizations. Unless an organization is committed to hiring full time engineers to do Web development, the only and most frequent solution is to pay tons of money hiring firms to provide proprietary 'black box' Web application products. These firms have a conflict of interest -- they live off the monthly checks so they have a huge interest in owning the organization's data and locking them into their services. We want to create a much cheaper, open, and powerful option for these kinds of services. [...]
    This is huge. Since the Dean organization was more movement than campaign, the lessons from its use of social software are relevant well beyond political groups. I can't tell you how often I talk to people who have a sense that there is some set of collaborative tools on beyond email that could help their organization, but don't know where to begin. Part of it is confusion -- they think they want weblogs for conversation, BBSes for shared document creation, wikis for personal publishing, and so on -- and part of it is standard-issue tech anxiety -- can we install it? can we maintain it? how much will it cost? and so on. These conversations tend to be long and meandering, starting with a plaintive "Where do I even start?" If Rosen achieves what he's setting out to do, it will be a great pleasure to be able to short-circuit that conversation by saying "Here. Start here."

    Comments (4) + TrackBacks (0) | Category: social software

    Girls on FilmEmail This EntryPrint This Article

    Posted by Clay Shirky

    Spring is at last in the air in New York City, and a couple of times in the last couple of weeks, I've seen a curious sight: two women, one sitting on a bench in some picturesque setting -- Cobble Hill park, busy East Village street -- and the other taking her picture. These weren't photo shoots -- neither the photographer nor the camera were of the professional variety -- but they weren't just snapshots on a fun outing either. The first time I saw it, I didn't know what was going on, til my wife clued me in: Match.com. The next time I saw it, I recognized it instantly -- once you know what the pattern looks like, it becomes obvious. And, like everything interesting about the social uses we are pressing our tools into, it was two parts technology to seven parts humanity. It was interesting that the photos were being taken outdoors -- the message seemed to be (at least interpreted from the Guy side of the aisle) "If you want to see my apartment, you'll have to wait til I invite you in, even if it's just on film." The other commonality was that at one point, the subject threw herself into a faux glamour-girl pose, acting out some of the tension of being photographed for anonymous and distributed judgment and channeling some of the images of womanhood that saturate our lives. And of course, the Vargas pose was a cue for both model and photographer to collapse into giggles. It was sweet, really, a new ritual of friendship for our little corner of the 21st century, when it isn't just models and performers who need to worry about mediated representations of themselves.

    Comments (3) + TrackBacks (0) | Category: social software

    March 25, 2004

    Rusty Adds Membranes to Kuro5hinEmail This EntryPrint This Article

    Posted by Clay Shirky

    If there were a Shirky's Law, it would be something like "The advantages of anonymity grow linearly with the population; the disadvantages grow with the square of the population." After decades where the native design assumption was that anything that minimized user flexibility was A Bad Thing®, we are in an era where the disadvantages of complete user freedom in communal settings have become too high to bear, whether from anonymous flamers, spammers, trolls, or whatever else. One solution that seems to be emerging is the addition of semi-permeable membranes, which raise some threshold to participation, as with Six Apart's proposed TypeKey service. Now Rusty Foster of Kuro5hin has added his own version of the membrane, a "Managed Growth" scheme similar in spirit to LiveJournal's "Get a user to invite you" pattern of growth. Says Rusty, characterizing the problem:
    So the question is, how do we make it more difficult for obnoxious people to disrupt the site, without barring the gates altogether? And from a wider view, how can a large community like this continue to grow in an organic way? I think part of the initial success of the site was due to the word-of-mouth nature of who showed up to use it. Now that half of our pages are result number one for some google search or another, it seems like a lot of that person-to-person growth, and the sense of community that comes with it, has been lost. I'd like to propose a strategy for this with four parts. The overall ideas behind it are first, to create more of a barrier to entry and thereby make losing accounts more of a hardship, and second, to recognize that some administrative oversight of who stays and who goes is necessary, while making it as accountable as we can to the wishes of other members (without, hopefully, turning it into a game itself).
    He goes on to describe new ways of handling Sponsorship (creation of new accounts), Guidelines, Warnings, and Feedback, as well as some speculation about implementation. As with everything Rusty does, it's both interesting and well-written, and true to Kuro5hin form, the comments are fantastic as well. Read the whole thing.

    Comments (4) + TrackBacks (0) | Category: social software

    March 22, 2004

    RELATIONSHIP: Two WorldviewsEmail This EntryPrint This Article

    Posted by Clay Shirky

    There were two immediate and strong criticisms of my RELATIONSHIP post of last Tuesday. The first, and broader, criticism, by Ian Davis, suggests I've misunderstood both the relative newness and the general flexibility of the work -- multi-variate relationships can be expressed in multi-variate terms, and missing characterizations can be added, and so on.

    The second, in a comment by bardia, says that all the objections I raise and more have been discussed by the people on the FOAF list, and that if these were fatal problems, that group, smart as they are, would have caught them.

    I want to deal with these in turn, but first, I want to re-state my views on the subject, because Ian in particular seems to have misconstrued them as practical objections. For the record, I do not believe that RELATIONSHIP suffers from practical problems; I do not believe that it is underdeveloped, or that there are missing but critical implementation details. I believe instead that it suffers from a philosophical error, and one that cannot be fixed by any future iteration of the current line of reasoning.

    ...continue reading.

    Comments (19) + TrackBacks (0) | Category: social software

    danah on Schmidt on social networksEmail This EntryPrint This Article

    Posted by Clay Shirky

    Fine boyd rant, set off by Eric Schmidt's "Find the problem for the tools we have" notion of social networking software:
    The thing is that social network representations require nuance. We can either try to solve the nuances universally (not going to happen) or try to figure out what problems we're trying to employ social networks in and figure out how to negotiate them there IN A CONTEXT. The latter is going to be far more successful. Haven't we already learned that each YASNS models a different social network anyhow (and no, FOAF is not the answer here because the different models are often because people are segmenting their networks differently in order to represent different facets).

    Comments (0) + TrackBacks (0) | Category: social software

    March 19, 2004

    LOAF: Social email filteringEmail This EntryPrint This Article

    Posted by Clay Shirky

    From Joshua Schachter, inventor of memepool, Geo URL, and del.icio.us and Maciej Ceglowski of Idle Words and the Web crawl, comes LOAF, a way of sharing address books without disclosing their contents, so that groups of LOAF-enabled users can build social networks on top of connectedness metrics, without needing central servers or full disclosure.
    When you receive an email from an address you have not previously written to, LOAF checks to see if the email address is known to any of your existing correspondents. This essentially sorts incoming email into three categories:
    • Mail from complete strangers: These are people whom you do not know, and who are also unknown to your correspondents.
    • Mail from partial strangers: These are people you have never sent email to, but who have gotten email from at least one of your own correspondents. [...]
    • Mail from people you know: This last category consists of people whom you have written to before. Presumably this is email you're most interested in, unless it's another forward from your mom.
    Mail in category (2) can be further classified by counting how many correspondents you and the sender have in common. If the originating email appears in the address books of several of your correspondents, this may indicate a person with whom you have many connections. Insert standard social network theory here.
    Also, don't miss the discussion of LOAF attack strategies, including the Dictionary attack, Me Too attack, Ex-Girlfriend attack, and Marc Canter attack. Both Josh and Maciej are geniuses, in the older and rarer sense of the word, so this should be well worth playing with.
    UPDATE: Kellan has pointed out another LOAF in the comments, which is a bizarre and elaborate joke, with verbose but uninformative language and a long list of fake implementations. (The Python implementation of that LOAF consists of the single command 'pass'.) The two LOAFs (LOAVES?) are unrelated -- I doubt Josh and Maciej knew about the joke LOAF in naming their project.
    Update to the update: The real LOAF is named after the joke LOAF. Maybe we can retrofit the acronym to mean List of a Friend?
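    The three-way sort the LOAF docs describe is simple enough to sketch. Here's a toy version in Python; the data shapes are mine (real LOAF shares each address book as a Bloom filter attached to outgoing mail, not as a cleartext set), and the addresses are made up:

```python
def classify_sender(sender, my_contacts, correspondents_books):
    """Sort an incoming address into LOAF's three categories.

    my_contacts: set of addresses I have written to.
    correspondents_books: dict mapping each of my correspondents to the
    set of addresses in *their* book (shared privately in real LOAF).
    """
    if sender in my_contacts:
        return "known"
    # Category (2): count how many of my correspondents vouch for the sender.
    vouchers = sum(1 for book in correspondents_books.values() if sender in book)
    if vouchers:
        return f"partial stranger ({vouchers} connections in common)"
    return "complete stranger"

books = {"alice@example.org": {"carol@example.org"},
         "bob@example.org": {"carol@example.org", "dave@example.org"}}
me = {"alice@example.org", "bob@example.org"}
print(classify_sender("carol@example.org", me, books))
# -> partial stranger (2 connections in common)
```

    The voucher count is where the "insert standard social network theory here" step plugs in: the more of your correspondents already know the sender, the less stranger-like the stranger.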

    Comments (8) + TrackBacks (0) | Category: social software

    Dogging: Smart Mobs Go CarnalEmail This EntryPrint This Article

    Posted by Clay Shirky

    From Wired: Dogging Craze has Brits in Heat:
    "Dogging is the broad term used to cover all the sexual outdoor activities that go on," says the dogging FAQ at Melanies UK Swingers, a popular dogging site. "This can be anything from putting on a show from your car, to a gangbang on a picnic table." [...] Dogging sessions are usually organized through the dozens of dogging sites and message boards that have sprung up in the last couple of years. Photos are exchanged and meetings arranged by e-mail or mobile phone text message. At the meet, cell phones and text messages are used to confirm meeting places and, crucially, identities. Cameras and videophones are increasingly used to record what goes on. "Technology is vital and is the main driver (of the dogging phenomenon)," said Richard Byrne, a lecturer at Harper Adams University College in the United Kingdom who produced a survey (PDF) last year that found dogging to be a widespread and growing problem in Britain's country parks.
    It is leading, predictably, to an increase in sexually transmitted diseases...

    Comments (5) + TrackBacks (0) | Category: social software

    March 18, 2004

    Can social networks stop spam?Email This EntryPrint This Article

    Posted by Clay Shirky

    Interesting article, Can Social Networking Stop Spam?, about the work of UCLA researchers on social clustering as a spam detector, using the latent social network as a filter:
    "When you get an e-mail from Alice with a 'cc' to Bob, you put a link between Alice and Bob," Boykin explained. Examining six weeks worth of e-mails from Bobs, Carols, Alices and others, Boykin and Roychowdhury were able to identify the "components" of their burgeoning e-mail network. "A component is a set of nodes which can all reach each other in the network," Boykin said. "It turns out that spam components and non-spam components are easy to distinguish" in a large enough network by examining so-called "clustering coefficients." "In social networks, if A knows B, and B knows C, A often knows C also," Boykin explained. "Clustering coefficients measure this relationship." In comparison to random networks, Boykin said he and his co-worker discovered that "non-spam components have high clustering coefficients, and spam components have clustering coefficients equal to zero."
    The catch seems to be a large, readable population of email users -- the article doesn't make it clear how many people need to be involved to reach the claimed accuracy.
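    The clustering coefficient Boykin describes is easy to compute on a toy graph: for each node, take every pair of its neighbors and ask what fraction of those pairs are themselves linked. The example graphs below are mine, not from the paper:

```python
from itertools import combinations

def clustering_coefficient(graph, node):
    """Fraction of a node's neighbor pairs that are themselves linked.

    graph: dict mapping each node to the set of its neighbors.
    """
    nbrs = graph.get(node, set())
    if len(nbrs) < 2:
        return 0.0
    linked = sum(1 for a, b in combinations(nbrs, 2)
                 if b in graph.get(a, set()))
    return linked / (len(nbrs) * (len(nbrs) - 1) / 2)

# A sociable cluster: alice cc's bob and carol, who also write each other.
social = {"alice": {"bob", "carol"},
          "bob": {"alice", "carol"},
          "carol": {"alice", "bob"}}
# A spammer blasts many targets who never appear on the same email.
spam = {"spammer": {"v1", "v2", "v3"},
        "v1": {"spammer"}, "v2": {"spammer"}, "v3": {"spammer"}}

print(clustering_coefficient(social, "alice"))   # 1.0
print(clustering_coefficient(spam, "spammer"))   # 0.0
```

    That gap -- 1.0 versus 0.0 -- is exactly the "easy to distinguish" signal the researchers report: real correspondents co-occur on each other's mail, spam targets don't.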

    Comments (3) + TrackBacks (0) | Category: social software

    March 17, 2004

    Henshall: Social networking is brokenEmail This EntryPrint This Article

    Posted by Clay Shirky

    Stuart Henshall, Social Networking is Broken
    For the life of me... When is IM not a social networking device? (Have you ever seen a 12 year old girl reconnect her buddies after taking a new name?) That looks like social networking to me. When are introductions by e-mail not social networking. Or a speakerphone call? It's time to put a stop to categorizing these "things" as social networks. Call them "Associative Networking Tools" or "Structured Association Tools" or something similar. Then you can create a bucket for them. The reason there is no real business model is they are just part of / or component towards building our capabilities to enhance "presence" and connectivity.
    As we say around here: w00t!

    Comments (1) + TrackBacks (0) | Category: social software

    March 16, 2004

    RELATIONSHIP: A vocabulary for describing relationships between peopleEmail This EntryPrint This Article

    Posted by Clay Shirky

    Behold RELATIONSHIP, a vocabulary for describing relationships between people. I don't know if I'm the one to shoot these particular fish in this particular barrel, since both mme. boyd and Herr Weinberger are more eloquent than I on the subject of making the tacit explicit, but this thing is self-critiquing. Here, just in case you were wondering, is how you should be characterizing your relationships with one another:
    friendOf, acquaintanceOf, parentOf, siblingOf, childOf, grandchildOf, spouseOf, enemyOf, antagonistOf, ambivalentOf, lostContactWith, knowsOf, wouldLikeToKnow, knowsInPassing, knowsByReputation, closeFriendOf, hasMet, worksWith, colleagueOf, collaboratesWith, employerOf, employedBy, mentorOf, apprenticeTo, livesWith, neighborOf, grandparentOf, lifePartnerOf, engagedTo, ancestorOf, descendantOf, participantIn, participant
    Describing relationships with a controlled vocabulary can sound credible right up to the moment you see the vocabulary, but this thing is a mess.

    ...continue reading.

    Comments (24) + TrackBacks (0) | Category: social software

    March 12, 2004

    PieSpy and Dynamic Social Networks in ShakespeareEmail This EntryPrint This Article

    Posted by Clay Shirky

    PieSpy, the Java tool for inferring social networks from IRC (which we've written about before) has now been turned on a corpus of static text -- Shakespeare's plays. Here's a bit of Antony and Cleopatra, with Cleopatra in the center.

    Best of all, though, is that since PieSpy is made for streaming rather than static text, it treats each play as an ongoing conversation, and creates animations of the social networks over time.
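    PieSpy's actual heuristics are more elaborate, but the core move -- inferring edges from who speaks near whom, and who addresses whom by name -- can be sketched in a few lines. The window size and weights below are my guesses for illustration, not PieSpy's:

```python
from collections import Counter

def infer_network(lines, window=3):
    """Infer weighted edges from a stream of (speaker, text) lines.

    Links a speaker weakly to anyone who spoke within the last `window`
    lines (temporal proximity), and strongly to anyone they mention by
    name (direct address).
    """
    edges, recent = Counter(), []
    speakers = {s for s, _ in lines}
    for speaker, text in lines:
        for prev in recent[-window:]:
            if prev != speaker:
                edges[frozenset((speaker, prev))] += 1   # spoke nearby
        for name in speakers:
            if name != speaker and name in text:
                edges[frozenset((speaker, name))] += 3   # addressed by name
        recent.append(speaker)
    return edges

scene = [("Antony", "Cleopatra, hear me."),
         ("Cleopatra", "Antony, I will not."),
         ("Charmian", "Madam, be calm.")]
for pair, weight in infer_network(scene).items():
    print(sorted(pair), weight)
```

    Run over a whole play line by line, exactly this kind of accumulator is what lets the tool treat the text as an ongoing conversation and animate the graph as it grows.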

    Comments (1) + TrackBacks (0) | Category: social software

    March 9, 2004

    YASNSes get detailed: Two picturesEmail This EntryPrint This Article

    Posted by Clay Shirky

    Was struck by two recent interface changes, one on Orkut and one on Friendster, both in the direction of gathering more explicit meta-data. Friendster first: they have added these two sets of preferences, to be used in the next rev of the service:

    ...continue reading.

    Comments (9) + TrackBacks (0) | Category:

    March 8, 2004

    Huy Zing on deletion from OrkutEmail This EntryPrint This Article

    Posted by Clay Shirky

    Looks like Orkut is bidding to be the service that nullifies Google's "Don't Be Evil" policy, after subjecting users to random and unannounced deletions. Huy Zing, an incredibly active Orkut user, describes these deletions in a pair of posts. First, in Orkut Times: Uncertainty of Orkut Life:
    For every profile, Orkut.com provides a "Flag as Bogus" button intended to allow any user to report profiles that are known to be fake. This bogus-flagging sounds like a great idea, but it turns out it has been used between forum flamers to spread hate and escalate battles beyond the community discussions. Unfortunately, orkut.com chooses the policy of shooting first and asking questions later, presuming guilt before innocence. So argue with someone in a community and you'll be looking over your shoulder for a while. I don't think that I'm a victim of bogus-flagging, as I'm fairly sure that I've had no enemies on Orkut.com. My crime seems to be the fact that I either created too many communities or that odds are that I created some questionable communities. The problem is that there are no known rules against creating communities that members might enjoy. Why wouldn't I want to make Orkut.com entertaining and fun for others?
    Then, in Tuesdays with Huy Zing, he details the hilarious arbitrariness of the community deletions:
    It became obvious soon enough that the final judgment of my communities was arbitrary: coldplay & U2 remain but all hip-hop acts like Eminem & Jay-Z are gone. Dance Like Everyone's Watching is gone, but All Your Base Are Belong To Us lives on. It appeared a nerd was at the wheel. Some very questionable editorial discretion was exercised: death penalty-related communities or Middle East Conflict, all food communities were destroyed. Luckily, my personal favorite "Fly Chicks For the Geeky Guy" survived; the "G-Spot Search Expedition" didn't. I have to wonder how Orkut expects the geeky guy to know how to satisfy the fly chicks.

    Comments (0) + TrackBacks (0) | Category: social software

    Robert Kaye on Social Networks for File SharingEmail This EntryPrint This Article

    Posted by Clay Shirky

    Robert Kaye has published his ETech talk on a design for social networks for file sharing as an essay on OpenP2P.com:
    To apply this concept, the network starts with a group of trusted people forming a tribe of people. Starting a tribe as a friendnet, where each connection is backed up by a meatspace connection, is an excellent starting point. However, sharing files inside of a small tribe is only interesting for a short while because it presents a limited search horizon. If tribes connect with other tribes to form chiefdoms, the search horizon expands with each new connection in the chiefdom. Finally, connect chiefdoms to other chiefdoms to form states, and the search horizon may start to look similar to the search horizons in open file-trading systems. Each tribe should carefully select tribal elders who will set the tone of the network and determine social policies for the network. The elders should be aware of the tribal members and their strengths and weaknesses in order to set policies that are effective for the group. The elders should focus the tribe on its primary goals and continually evaluate the state of the tribe to ensure that its members are well educated on the tribal policies.
    I've been interested in this idea for some time, but the devil is in the details. In particular, the more a group approaches mutual responsibility over long periods, the more its problems become the problems of a state -- here, one issue that jumps out is tribal elders. I don't know how Robert is instantiating this in software, but the simple phrase "Each tribe should carefully select tribal elders..." hides reams of complexity. Choose how? Voting? But once the elders are set, how are they to be changed, or removed? And do new members simply have to accept the elders that were there when they arrived? Etc etc. The fascinating problem here is political plasticity -- if the system is too easy to change, it will decohere or get hijacked by the RIAA. If it is too hard to change, the users will tear it down from within. It's a good idea, and Robert's got chops, so this effort will be worth watching.

    Comments (0) + TrackBacks (0) | Category: social software

    March 5, 2004

    The Orkut SongEmail This EntryPrint This Article

    Posted by Clay Shirky

    Comments (6) + TrackBacks (0) | Category: social software

    Inner Circle: Social Tool by MSFTEmail This EntryPrint This Article

    Posted by Clay Shirky

    News.com is reporting on a tool out of Lili Cheng's group at Microsoft called Inner Circle.
    "Contacts don't match the way people think," said Lili Cheng, group manager of the social-computing group within Microsoft Research. A better model is the handwritten list of phone numbers many people keep next to their computer. That, Cheng said, "better represents the people that you'd want to talk to." To try to translate that idea into digital terms, Cheng and her team have come up with a concept called Inner Circle, which automatically maintains and updates a list of about 20 people with whom one is e-mailing and instant messaging the most.
    No pointers to the project itself, but I assume it will appear on the Social Computing page eventually.
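    The mechanism the article describes -- a short list maintained automatically from message frequency -- is a one-liner's worth of bookkeeping. This sketch assumes a simple (counterparty, channel) shape for the traffic log; it is not Microsoft's implementation:

```python
from collections import Counter

def inner_circle(messages, size=20):
    """Return the `size` people you exchange messages with most often.

    messages: iterable of (counterparty, channel) tuples drawn from
    recent email and IM traffic.
    """
    counts = Counter(person for person, _channel in messages)
    return [person for person, _ in counts.most_common(size)]

traffic = [("liz", "im"), ("ross", "email"), ("liz", "email"),
           ("danah", "im"), ("liz", "im"), ("ross", "im")]
print(inner_circle(traffic, size=2))   # ['liz', 'ross']
```

    The interesting design question isn't the counting, of course, but the claim behind it: that observed traffic predicts "the people you'd want to talk to" better than a hand-curated contacts list does.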

    Comments (1) + TrackBacks (0) | Category: social software

    YASNS: ICQ UniverseEmail This EntryPrint This Article

    Posted by Clay Shirky

    AOL's ICQ division is launching ICQ Universe, a social networking service built on top of a buddy list. The FAQ is filled with all sorts of interesting notes, including:
    Q. What is the ICQ Universe Lobby? A. The ICQ Universe Lobby is for users waiting to be invited to the ICQ Universe. As long as you're listed in the lobby, you cannot interact with people in the ICQ Universe. However, you can encourage people to invite you by filling the Why I should be invited box or request to join a recruiter's part of the universe. People who are recruiting are listed in the lobby.
    This takes the AOL Lobby/LambdaMOO closet pattern and adds it to the YASNS world -- an entry space where you're in the system, but not yet part of the social world.
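The lobby pattern is a small membership state machine: registered users exist in the system but cannot interact until an existing member invites them in. A sketch with invented method names:

```python
class Universe:
    """Lobby pattern: users join in a 'lobby' state and can only
    interact after an existing member invites them in. Method names
    are mine, not ICQ's."""
    def __init__(self, founders):
        self.members = set(founders)
        self.lobby = set()

    def register(self, user):
        # new arrivals wait in the lobby
        self.lobby.add(user)

    def invite(self, inviter, invitee):
        # only full members may pull someone out of the lobby
        if inviter in self.members and invitee in self.lobby:
            self.lobby.discard(invitee)
            self.members.add(invitee)
            return True
        return False

    def can_interact(self, user):
        return user in self.members
```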

    ...continue reading.

    Comments (2) + TrackBacks (0) | Category: social software

    March 3, 2004

    Rob Cross Explains Social Networking for Business

    Posted by Clay Shirky

    Rob Cross has a good introductory overview on social networks in a business context, including some case studies:

    *Key Findings*: It is obvious from the picture on the left that the consulting practice is broken into two different sub-groups with one person acting as a boundary spanner. Interestingly enough the practice was divided on precisely the dimension it needed to be connected, their unique skill sets. The group on the left side of the network was skilled in the 'softer' issues of strategy or organizational design, whereas the group on the right was composed of people skilled in 'harder' technical aspects of knowledge management such as information architecture, modeling and data warehousing.
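In graph terms, Cross's boundary spanner is roughly a cut vertex: remove it and the network splits into disconnected sub-groups. A brute-force sketch; the tiny sample network is invented, not taken from the case study:

```python
def components(adj, removed=None):
    """Count connected components of an undirected graph, optionally
    ignoring one node. `adj` maps node -> set of neighbours."""
    removed = removed or set()
    seen, count = set(), 0
    for start in adj:
        if start in seen or start in removed:
            continue
        count += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen or node in removed:
                continue
            seen.add(node)
            stack.extend(adj[node] - seen - removed)
    return count

def boundary_spanners(adj):
    """Nodes whose removal disconnects the network."""
    base = components(adj)
    return [n for n in adj if components(adj, removed={n}) > base]

# Two tight clusters joined only through 'eve':
net = {
    "ann": {"bob", "eve"}, "bob": {"ann", "eve"},
    "eve": {"ann", "bob", "cal", "dan"},
    "cal": {"eve", "dan"}, "dan": {"eve", "cal"},
}
```

On real org-network data you would use a proper betweenness-centrality measure rather than this O(n·edges) removal test, but the removal test makes the "broken into two sub-groups with one person spanning them" finding vivid.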

    Comments (0) + TrackBacks (0) | Category: social software

    Wikipedia Code: MediaWiki

    Posted by Clay Shirky

    A new version of the code that runs the Wikipedia (_thanks Tom_!) is available for general users, including multi-lingual support and the ability to display mathematical formulae and other hard layout challenges using LaTeX.

    Comments (2) + TrackBacks (0) | Category: social software

    March 1, 2004

    Chinese-language social software weblog

    Posted by Clay Shirky

    Not reading Mandarin, I can only say that I came across a Chinese language weblog on social software today. Perhaps if any of our readers also reads Mandarin, they can comment on whether or not it's any good.

    Comments (2) + TrackBacks (0) | Category: social software

    danah boyd on Friendster

    Posted by Clay Shirky

    danah boyd's ethnographic research on Friendster has been accepted at CHI, the Computer-Human Interaction conference.
    Fundamentally, context is missing from what one is presenting. On one hand, an individual is constructing a Profile for a potential date. Yet, simultaneously, one must consider all of the friends, colleagues and other relations who might appear on the site. It can be argued that this means an individual will present a more truthful picture, but having to present oneself consistently across connections from various facets of one’s life is often less about truth than about social appropriateness. Another argument is that one is simply performing for the public, but in doing so, one obfuscates the quirks that often make one interesting to a potential suitor. Notably, most users fear the presence of two people on Friendster: boss and mother. Teachers also fear the presence of their students. This articulated concern suggests that users are aware that, in everyday activity they present different information depending on the audience. Given the task of creating a Profile, users elect to present themselves based on how they balance the public/private dimension.
    Congratulations, danah! (PDF taken from her page of published work, home of much other goodness.)

    Comments (0) + TrackBacks (0) | Category: social software

    February 24, 2004

    LiveJournal adds FOAF

    Posted by Clay Shirky

    Comments (1) + TrackBacks (0) | Category: social software

    WikiCourt: The Proposal, and First Rebuttal

    Posted by Clay Shirky

    Aaron Swartz proposed a wiki court, and spells out the basic mechanics, beginning with the premise:
    I’m an optimist. I believe that statements like “Bush went AWOL” or “Gore claims to have invented the Internet” can be evaluated and decided pretty much true or false. (The conclusion can be a little more nuanced, but the important thing is that there’s a definitive conclusion.) And even crazier, I believe that if there was a fair and accurate system for determining which of these things were lies, people would stop repeating the lies. [...] And perhaps most crazy of all, I want to stop repeating falsehoods. I believe the truth is more important than particular political goals, so I want to build a system I can trust. I want to know that when I make claims, I’m not speaking out of political distortion but out of honest truth. And I want to be able to evaluate the claims of others too.
    He goes on to describe a process by which he thinks this might happen. Matthew Thomas then weighs in with a rebuttal:
    What bewilders me most about Aaron’s proposal is his reference to Wikipedia as an example of how collaborative editing by ideological opponents can work. Aaron has contributed substantially to Wikipedia, but so have I, and I’ve seen quite the opposite. Wikipedia works best when dealing with uncontroversial subjects. In controversial subjects — for example, Mother Teresa, or George W. Bush, or anything to do with Israel and Palestine — it often succumbs to edit wars, with two or more contributors repeatedly reverting each other’s changes until one of them gets tired, or until an administrator freezes the article at a state that no-one is happy with. And to the extent any dispute is eventually resolved, it is usually resolved by making the article’s characterization of the dispute so exhaustive and so weasely that few people want to read it anyway.
    (There are lots of good links in that second paragraph; go read the whole thing.) We've written a lot here about the value of wikis, of course, especially in discursive and contentious environments (Atom wiki, a historyflow analysis of the Wikipedia), but they are a tool, not a panacea. My money's on Thomas in this one.

    Comments (0) + TrackBacks (0) | Category: social software

    Umbrella.net: Ad hoc social networks

    Posted by Clay Shirky

    Jonah Brucker-Cohen and Katherine Moriwaki are developing an ad hoc mesh network, both technological and social, with devices embedded in umbrellas. As a result, the network only appears with the "coincidence of need" that occurs with the first drop of rain. Concept:
    In Dublin, Ireland, rainfall is frequent and unpredictable. Often individuals carry umbrellas with them in case they are caught in a downpour. It is common to witness during a sudden and unexpected flash of rain, a sea of umbrellas in the crowded streets sweeping open as raindrops first hit the ground. This collective, yet isolated act of opening an umbrella creates a network of individuals who are connected through similarity of action, and intent. The manifestation of open umbrellas on the street could be tied to a temporary network which is activated through routers and nodes attached to the umbrella, which operate only while it rains. While the coincidence of need exists, the network operates. When the necessity of action and intent ceases, it disappears. We believe these transitory networks can add surprise and beauty to our currently fixed communication channels.
    Tech:
    The UMBRELLA.net system works with a hardware and software component that is integrated into the design of a typical umbrella. By embedding the system into an everyday object, our intent is to lessen the point of entry for people using the system as they are already familiar with the object and how it works. The prototype will include handheld PocketPC (iPaq) computers that will interface to the umbrella and only communicate with each other when the need exists: ie. When rain is present and other nodes exist in close proximity.

    Comments (7) + TrackBacks (0) | Category: social software

    February 22, 2004

    FlashMob meets the Grid

    Posted by Clay Shirky

    A FlashMob designed to assemble, ad hoc, a Top 500 supercomputer:
    *Welcome to FlashMobComputing.org* This is the home of the first Flash Mob Computing supercomputer and the official site for all things Flash Mob Computing. On April 3, 2004 University of San Francisco will host the first Flash Mob Computing computer, FlashMob I, with the purpose of creating one of the Top 500 Supercomputers on the planet. You Can Help! You are invited to join us in Koret Gym at USF in San Francisco from 10pm - 4pm. [...] *What is Flash Mob Computing and FlashMob I?* A Flash Mob supercomputer is hundreds or even thousands of computers connected together via a LAN working together as a single supercomputer. A Flash Mob computer, unlike an ordinary cluster, is temporary and organized on-the-fly for the purpose of working on a single problem. Flash Mob I is the first of its kind. By bringing hundreds of people like you together in one room, we will have enough computing power to become one of the fastest supercomputers on the planet.

    Comments (1) + TrackBacks (0) | Category:

    February 21, 2004

    Two on Dean (one stupid, one smart)

    Posted by Clay Shirky

    Hylton just pointed me to the Campaigns Online report, The Profound Impact of the Internet, Blogs, and E-Technologies in Presidential Political Campaigning (from January 2004, so the title feels a bit premature). The staff of Campaigns Online made the mistake many of us made: assuming the impressive numbers from the Dean campaign were signs of future votes. The phrase that leapt out at me was "Dean has been very successful in recruiting supporters. On November 15, 2003, the number of supporters exceeded more than 500,000", going on to quote the Dean campaign on that historic milestone.

    The problem with the 500,000 number is that it wasn't real, or, rather, it wasn't true. Anyone who has had any dealings with an internet business in the last decade knows how user figures are arrived at: any row in the user database is a "user." Someone fills in the "Join the Dean Campaign!" form just to look around? Count 'em. A login for Mr. Nobody at 123 Main St? Count 'em. Two people from Sheboygan with the same email address but different logins? Count 'em twice. And so on. The ease of listing yourself as a supporter lowered the value of the supporter count as a signal of strength. In the same way that the polls didn't translate into votes, the half-million lines in the database were less predictive of voter behavior than we thought.

    The good news about the Dean loss is that voters still matter, no matter what tools are being used. My new rule is: if I hear someone talking about using the internet to transform democracy, I'll listen for 5 minutes. If, in that time, they don't use the words vote, voter, or voting, I'm going to go back to reading slashdot.

    Which brings me to the smart Dean post. Britt Blaser has a great post on the use of Dean tools post-Dean, with a particular emphasis on the effects of such tools on voters.
    While the customer for these open source tools is any campaign that wants to do things even better than the Dean campaign, their user is the potential voter and campaign donor-activist.The crucial design challenge is the user experience of a voter coming upon a candidate's web site and discovering that there is a place for each voter's voice in this campaign. The thing the campaigns have to do better is to solicit each voter's input on the issues, not just to promote the horse race between two stylized candidates. This is an inversion of the Dean model, where people could only discuss issues among themselves at Meetups and in blog comments, for there was no explicit means for voters to express their policy preferences in a way that could be aggregated as a coherent direction for the campaign. I always maintained that this is what the people wanted most from the campaign, and their admirable efforts would have been amplified if the issues had not been on the back burner.
    It's great. Read it.
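The supporter-count inflation described above is easy to demonstrate: compare the raw row count against even a crude email dedup. The sample rows are invented, and real campaign data would need far more cleaning, but the gap is the point:

```python
def supporter_counts(rows):
    """Compare the naive 'every row is a supporter' count against a
    count that collapses duplicate email addresses and drops rows
    with no email at all. A deliberately crude filter, just enough
    to show how the naive number overstates things."""
    raw = len(rows)
    emails = {r["email"].strip().lower() for r in rows if r.get("email")}
    return raw, len(emails)

# Hypothetical signup rows illustrating the failure modes in the post:
signups = [
    {"name": "Pat", "email": "pat@example.com"},
    {"name": "Pat again", "email": "PAT@example.com"},  # same person, new login
    {"name": "Mr. Nobody", "email": ""},                # junk row
    {"name": "Lee", "email": "lee@example.com"},
]
```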

    Comments (2) + TrackBacks (0) | Category: social software

    February 20, 2004

    Advice to social networking services

    Posted by Clay Shirky

    Over at Life With Alacrity, there's a long, thoughtful post giving advice to social networking services:
    Be very careful of the design of rating systems and reputation systems. They are extremely difficult to design well, as they too often can be gamed, or fall into reciprocity such that they are meaningless. My personal advice is just don't do it at first -- save it for a 2.0 version of the product, not 1.0 beta. If you are going to do it now, really study it -- there is a lot of good academic research on issues of reputation. It can be hard to slog through but it is worth it. Offer a grant to Danah's school for them to do research for you on the topic. Endorsements are the best way of doing reputation for now. They are also imperfect and vulnerable to reciprocity games; however, at least you can see if two people are playing that game just by looking at the endorser and endorsee. If you find too much reciprocity you can basically ignore both players.
    I'm skeptical that all of this advice will be taken by the for-profit networking services, because the net effect will be to reduce the leverage of sites over their users. If, however, we do end up with a standard for linking such networks, a lot of what's here will be valuable.
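The reciprocity check the quoted advice recommends, looking at endorser and endorsee together, is a one-liner over the endorsement graph. A sketch:

```python
def reciprocal_pairs(endorsements):
    """Find pairs who endorse each other -- the reciprocity the post
    says you can spot just by looking at endorser and endorsee.
    `endorsements` is an iterable of (endorser, endorsee) pairs."""
    given = set(endorsements)
    # keep each mutual pair once, in sorted order
    return sorted((a, b) for a, b in given if a < b and (b, a) in given)
```

A service could run this over its endorsement table and discount, or simply ignore, both sides of any heavily reciprocal pair, as the post suggests.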

    Comments (0) + TrackBacks (0) | Category: social software

    February 19, 2004

    Games 'R' Not Us

    Posted by Clay Shirky

    Yahoo has a rather bland overview of the intersection of games and academic work, which reminded me to drop in an editorial overview. We here at Many-2-Many rarely ever use phrases like "we here at Many-2-Many" -- we are small pieces loosely joined, most of the time. However, I've noticed we have a de facto policy here of writing only rarely about games, despite their obviously profoundly social components. For my part, and, I think, for my colleagues, this is because the subject of games is itself enormous, and because other places do it better than we ever could. So, for the record, the best sites I've found for tracking the social dimensions of games:

    Terra Nova -- Game-theory All-Stars
    Ludology.org -- Academic study of games
    Slashdot -- Search for all articles relating to 'MMO' (massively multi-player online games)

    Whenever work from the gaming world becomes a crossover hit (e.g. Flickr, which came out of Ludicorp's game work), we'll blog it like mad, but to track social work in the game industry, the above linked sites are the places to go.

    Comments (3) + TrackBacks (0) | Category: social software

    Lots on lurking

    Posted by Clay Shirky

    Eugene Kim seems to have kicked off a wide-ranging discussion on lurking with his simple question -- Are Lurkers Bad?
    Lurkers are part of a group's latent energy; good things happen when that energy is activated. Lurkers are part of the all-important weak-tie network, and it's important to keep them engaged, even if engagement does not translate to participation. However, having lots of lurkers as a community goes through its nascent "sausage stage" can hurt if it drives lurkers and other potential participants away. Here's another question: Are lurkers members of a community? This question is left as an exercise to the reader.
    (NB: It's a short post, and I've quoted most of it here.) This prompted John Stafford to reply with The Power of Lurking, with really interesting ideas on the relationship of lurking and scale:
    Segmenting the population draws out lurkers (since a "good" lurker normally lacks expertise or motivation -- thus always getting beat by a core member when there is a logical next step -- thus remaining a lurker until a chance intersection with an area of great insight or knowledge). [...] I can imagine that something like Model UN -- where identical topics are debated in small committees and then, after prioritization and revision, sent to the group as a whole -- might effectively flush out lurkers. Though I'm unclear how you implement such a thing online without it feeling unproductive (since you are intentionally creating duplication).
    Lilia Efimova is also thinking about the nature of lurkers and lurking, and has what must surely be the motherlode of lurking research lists.

    Comments (4) + TrackBacks (0) | Category: social software

    Diego Doval on mapping social relationships

    Posted by Clay Shirky

    Interesting trio of posts by Diego Doval on alternate methods of mapping social relationships in social networking services:
    This point of "scalability" however is important I think, because it addresses the issue of fixed representation more directly. How so? Well, current "social networking" tools basically force every person in the network to adapt to whatever categories are generally common. Furthermore, they force the parties in a relationship (implicitly) to agree on what their relationship is. I think it's not uncommon that you'd see a person as being, say, an acquaintance, and that person to view you as a friend (if not a close one). People don't always agree on what the relationship means to each other. This to me points to the need to let each person define their own relationship/trust structures and then let the software mesh them seamlessly if possible.
    Start with the post linked above, but don't miss the other two.
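Doval's suggestion, letting each person define the relationship in their own terms and meshing the views later, maps naturally onto a store keyed by ordered pairs. A sketch; the labels and the API are my invention:

```python
class RelationshipMap:
    """Each person labels the tie in their own terms; the system only
    'meshes' the two views on read, rather than forcing agreement up
    front the way current YASNS category menus do."""
    def __init__(self):
        self._labels = {}  # (from_person, to_person) -> label

    def declare(self, who, whom, label):
        self._labels[(who, whom)] = label

    def view(self, a, b):
        # Return both sides' labels; asymmetry is preserved, not resolved.
        return (self._labels.get((a, b)), self._labels.get((b, a)))
```

The interesting design work is all in what "mesh them seamlessly" means at read time: here the asymmetry is simply surfaced, which is itself more honest than a single forced category.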

    Comments (0) + TrackBacks (0) | Category: social software

    February 18, 2004

    Two negative views of social networking services

    Posted by Clay Shirky

    Jon Udell: Is social networking just another men's group?:
    I am sure that networked software systems can support and amplify that improvisational dynamic, but I often wonder whether our current software development culture can produce such systems. Consider the recent crop of six-degrees-inspired relationship amplifiers, notably LinkedIn and Orkut. Both have tin ears for social nuance. On LinkedIn, when asked to endorse a marginal acquaintance, I was stopped dead in my tracks by the requirement to define our relationship as one of a list of choices such as "You managed X directly" and "You were a client of X's." On Orkut the choice is even more starkly binary: "X is my friend" or "X is not my friend." LinkedIn's and Orkut's tin ears shouldn't really surprise anyone. Social software systems are created by programmers who -- let's face it -- are not renowned for their social skills.
    Richard Stokes: I'll take social software for $1,000 please, Alex:
    Social software has an inherent network externality. That is, much like Microsoft Office or email, it is only valuable to the extent that other people are using it. The "value-add" follows a typical S-curve model, that is, there is some critical mass of users that must be surpassed before the application is compelling to the masses. The average person will receive value from a software network only if a sufficient number of other people participate. The lack of a critical mass of participants acts as a barrier towards achieving that critical mass. Chicken and egg syndrome.
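Stokes's S-curve is the familiar logistic: value per user is near zero below the critical mass, rises steeply around it, and saturates above it. A sketch with arbitrary parameters (the critical mass and steepness are illustrative, not from his post):

```python
import math

def adoption_value(users, critical_mass=1000, steepness=0.01):
    """Logistic 'value-add' curve: near zero below critical mass,
    rising steeply around it, flattening toward 1.0 above it.
    Parameters are invented for illustration."""
    return 1.0 / (1.0 + math.exp(-steepness * (users - critical_mass)))
```

The chicken-and-egg problem lives in the flat left tail: early users sit where the curve is nearly zero, so nothing about the product itself pulls the network past the inflection point.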

    Comments (5) + TrackBacks (0) | Category: social software

    Henshall on Skype

    Posted by Clay Shirky

    Stuart Henshall, the go-to guy for thinking about the ramifications of Skype and VoIP, has a piece on the radicalness of Skype's always-on conference call system:
    After my first Skype conference calls I realized just how "different" it is. In a traditional conference call you dial-in to a phone number and the conference is pinged as you enter. No-one knows who is coming in until they announce themselves. These calls are usually scheduled for a specific time. Hardly a spontaneous way to connect a few people or in the "spur of the moment" bring someone new into the conversation. [...] I'm Skyping with you and want an additional person in the conversation.... I right click on the new contact in my friends list and that individual is added in to the conversation. My screen (see TDavid for picture) expands to show the new connection. I can introduce them, knowing exactly when the connection is established.
    More like walking up to a group at a party than the current generation of conference calls. The biggest difference between VoIP and the circuit-switched phone network we've got is not going to be cheap phone calls. It's going to be ridiculously easy group-forming.

    Comments (2) + TrackBacks (0) | Category: social software

    February 17, 2004

    Justin Hall on the etech backchannel

    Posted by Clay Shirky

    Justin Hall writes about the online backchannel at ETech:
    The active dismissive chatter of the chat rooms primarily stayed in the background. But there were a few moments when the traffic on the wireless networks burst into visible words within the room. During some of the talks, enterprising hackers set up an LED display in the seats - the "HeckleBot." Anyone in the IRC channel could send a message to the scrolling blinking lights on the other side of the room. "Joi is not wearing pants!" came up during one panel. According to digital insurrectionist and indyvoter.org co-founder Marc Powell "One of the main points of IRC is to IRC in a way that makes other people on the channel totally lose it - laughing out loud or busting up or whatever. Having a hecklebot digital sign bridges the digital divide." With attention split between laptops with email and chat, a potentially ringing phone, panelists talking, and a scrolling LED running comedy commentary on the proceedings, it was hard to know who was laughing at what between six different stimuli streams.
    It's an intro for the general reader, so there aren't a lot of proposed "what next" steps, but one of the comments adds a lot to the detail of the article:
    One nice thing about hanging out online is that the notes and transcripts can be saved. There's a wiki page hosted by the conference that has a growing list of links to people's notes from the show. The Hydra links, it is interesting to note, are from an application called "SubEthaEdit" - a tool for collective simultaneous note-taking online. Macintosh only; a tool that really thrives in a wireless-saturated conference! This is definitely the upside of plentiful bandwidth - the productive backchannel.

    Comments (0) + TrackBacks (0) | Category: social software

    Polls, Votes, and Public Signaling

    Posted by Clay Shirky

    Imagine someone hands you a sealed box which, they say, contains a lot of money. You take the box home, open it, and find three dollars inside. You can now ask yourself one of two questions: "Where did all that money go?" or "I wonder how much money was really in that box in the first place?" At the beginning of this year, we were given a stack of boxes marked "Democratic Primaries -- Property of Howard Dean" that, we were told, had a lot of votes in them. Over the last few weeks, we've opened several of those boxes, and none of them have contained the votes we were told to expect. And over and over, people are asking "Where did all those votes go?" without asking whether the votes were ever there in the first place.

    ...continue reading.

    Comments (7) + TrackBacks (0) | Category: social software

    February 15, 2004

    Werbach on Internet Campaigning

    Posted by Clay Shirky

    Interesting Kevin Werbach post on future uses of internet tools in political campaigns, starting from the premise that 'What most people really support are causes, not candidates':
    I have a hunch that the first Internet campaign to truly mobilize voters on a mass scale (as opposed to fundraising and core supporters) won't start around a candidate. One of the key, and under-appreciated, elements of Dean's early success was MoveOn.org. MoveOn never actually endorsed Dean, but its tactics and worldview were aligned with the Dean campaign. Its early online poll was the first demonstration of Dean's "front-runner" status. MoveOn, thanks to help from friends like George Soros, will be a significant player in the Fall campaign. Yet MoveOn wasn't started to elect a President; it was started to defend a President (Clinton) against impeachment efforts.

    Comments (0) + TrackBacks (0) | Category: social software

    February 14, 2004

    Dean and the Last Internet Campaign

    Posted by Clay Shirky

    I just re-read David Weinberger's quotes from the Trippi speech last week, because something felt funny the first time I read it, and this time I found it. Trippi says:
    We did a pretty damn good job of it. Given the Party rules, we should never have been able to get to where we were 3 weeks before Iowa: Ahead in the polls, etc. We did it without the Party. The American people did, using the tools provided by the Internet.
    The Dean campaign has been a proof-of-concept for a number of novel political tools and tactics, and for that, its place in history is assured. However, Trippi comes _this close_ to blaming the voters. "The American people" had little to do with the Dean campaign for the first year of its existence, as is normal. All campaigns are run by a small group of believers and pros until the elections start. The moment the voters arrived, however, they transformed the campaign from frontrunner to a 0-9 also-ran in the space of 16 days. Whenever the voters have been asked, they have said they don't much care for Dean. To invoke the American people without noting their rejection of Dean seems disingenuous at best.

    ...continue reading.

    Comments (15) + TrackBacks (0) | Category:

    Phony LinkedIn Account Gets Real Links

    Posted by Clay Shirky

    Beautiful experiment: Jeffrey Nolan has created a fake LinkedIn account, then sent out link requests, to see how many people would reflexively accept a link from someone they could not, by definition, actually know:
    One person responded that they wished to know more about the fictitious company I created. Another 'connection' actually sent me a business proposition and a request to connect through my network. Finally, and most interesting, there were only 2 emails from people attempting to apply a qualitative filter to the invitation.
    Note that this is not the same as a Fakester -- Liz Goodman draws a distinction between a fake and a Real Fake. I'm a fake if I say I'm an outdoorsy type who loves the ocean; I'm a Real Fake if I say I'm an Elfin mage. The Fakesters were Real Fakes of the highest order -- Jesus, the City of San Francisco, Pure Evil. What Nolan is doing is much more subversive: taking what everyone has noticed -- that no one turns down friend requests -- and turning it from an observation into an attack strategy. A site with a lot of Fakesters could be fun; a site with a lot of fakes would be significantly less useful than advertised, if the system started forwarding communications requests through fictitious nodes. The loss of value would come not merely because such requests might not arrive, but because if they _did_ arrive they would demonstrate to social hackers that they could get to anyone listed on the service. All you would need to do is send link invites to hundreds of people you don't actually know, and take the high yield of links generated, to see more of the network at no cost. (Something like this seems to be happening on Orkut, in fact, albeit as a result of the simplicity of link requests, without needing the person's real-world email address.)

    Comments (4) + TrackBacks (0) | Category: social software

    February 13, 2004

    Sam Ruby: Lessons from !Echo

    Posted by Clay Shirky

    From a remove of 3,000 miles, this year's ETech looks like it was a seminal gathering of social software thinkers. (I had an iron-clad excuse for not going, but I was still sad to miss it.) This post and the two below it point to slides or partial notes from the conference -- not perfect, but not nothing either. Sam has posted the slides from his talk on what he's learned from arranging a standards effort using a wiki. We've quoted Sam Ruby before about the use of the wiki to design a new syndication standard, in History, Personalities, Wikis Redux:
    In defense of the wiki - had this merely been a weblog post or a mailing list, I am confident that we wouldn't be having a naming discussion right now.  Or any discussion.  Quite simply, it was the wiki that made this project possible.
    Talking about what they've learned matters because the strategy actually worked. The wiki broke the logjam around designing a syndication format, which is now being sent to the IETF and showing up in places like photoblogging apps. The story of the centrality of the wiki, and the way it was used, is only partly told in the slides, but even the bullet-point version of the talk is good reading:
    Mailing lists seem very prone to flamebait: statements which may very much be true but are expressed in a provocative way.  Some people seem to just have an inborn ability to attract flames. The most effective strategy for flamebait is to simply ignore them.  Unfortunately, to be effective, this needs to be universally applied. What's worse, is that most flamebaiters don't seem to realize what they are doing. On a wiki, emotionally charged words tend to be quickly replaced with ones that more effectively make the point that is trying to be made without the distracting histrionics.

    Comments (2) + TrackBacks (0) | Category: social software

    Robert Kaye: Social file-sharing

    Posted by Clay Shirky

    Robert Kaye of MusicBrainz gave a talk at ETech on using social infrastructure to improve file-sharing. (Nota Bene: the "Next" button for these slides is in the upper right-hand corner, white on white.) The slides are tantalizing but frustrating, as they convey the basics but not the nuances of the argument.
    *Social models* Emulate human evolution: tribes, chiefdoms, states. Build a strong foundation of trusted people to form a tribe and to give it purpose: Share, discover and protect the tribe from attackers. Connect tribes to build chiefdoms. Connect chiefdoms to build states. With each connection the search horizon expands. Tribal _elders_ set the tone for the network, set growth guidelines and decide inter-tribal relations
    I'm especially curious how the idea of elders will be expressed. I've always felt that there is huge unexplored territory in granting users of software additional powers over time, as happens in real-world human groups almost by default.
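The tribes-to-chiefdoms-to-states progression can be read as nested groups, where each new connection widens the search horizon. A sketch; the merging API is mine, not MusicBrainz's:

```python
class Group:
    """Tribe / chiefdom / state as nested groups: connecting groups
    forms a parent whose search horizon is the union of its children.
    Elders, trust, and growth guidelines are all out of scope here."""
    def __init__(self, *parts):
        self.members = set()
        for p in parts:
            self.members |= p.members if isinstance(p, Group) else {p}

    def horizon(self):
        # everyone reachable through this level of the hierarchy
        return set(self.members)

tribe_a = Group("ann", "bob")
tribe_b = Group("cal", "dan")
tribe_c = Group("eve", "fay")
chiefdom = Group(tribe_a, tribe_b)  # connect tribes
state = Group(chiefdom, tribe_c)    # connect chiefdoms
```

Each merge strictly enlarges the horizon, which is the slide's point: "with each connection the search horizon expands."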

    Comments (4) + TrackBacks (0) | Category: social software

    February 12, 2004

    danah boyd: ++good

    Posted by Clay Shirky

    Seb has already posted on danah's notes from her etech talk, revenge of the user, so I'm not pointing to anything new here. I do want to say, however, that it is a fantastic piece. Go read it now. Here's one little bit from a long, great set of observations and ideas:
    When technologies are built, the creators often have a very limited scope of desired and acceptable behavior. They build the systems aimed at the people who will abide by their desires. Often, their users don't have the same views about how the technology should be used. They use it differently. Creators get aggravated. They don't understand why users won't behave. They demand behavior. First, the creator messages the user, telling them that this isn't what is expected of them. Then, the creator starts carrying a heavier and heavier stick. This is called configuring the user. And y'know what... it doesn't work. [...] Yet, the more we try to force users into desired behavior, the less we pay attention to why they're doing what they're doing. Users are reacting to the designs that creators choose. Why did people try to amass innumerable friends in Friendster? They wanted to see more of the network. In the early days, they wanted to be listed as one of the most popular people in others' networks. Friendster used to list this but they removed this feature when they realized how problematic it was. Yet, it came back in full force with Orkut where every list is based on popularity. Guess what? It came back with the same problem. The more popular someone is, the more others see them and try to link to them because one might assume that this person will take on friends or because other people recognize this person or because it seems like a way to meet more people. It doesn't get us any closer to having a social network that means something.

    Comments (1) + TrackBacks (0) | Category: social software

    Two Pieces on Moderating Community SpacesEmail This EntryPrint This Article

    Posted by Clay Shirky

    Cliff Lampe and Paul Resnick have a paper out called Slash(dot) and Burn: Distributed Moderation in a Large Online Conversation Space (note colon, for extra academic juju). Although much of it is background for people not familiar with slashdot's system, the section on design implications is quite interesting:
    Slashdot’s design, and the usage patterns that have emerged, highlight tensions among four design goals for distributed moderation systems. First, comments should be moderated quickly. Second, they should be moderated accurately according to the community norms. Third, each individual moderator should have limited impact on any particular comment. Fourth, the burden on moderators should be minimized, to encourage their continued participation. Consider the tension among timeliness, accuracy, and minimizing the influence of individual moderators. In the Slashdot system, two to five people (depending on a comment’s initial score) must provide positive moderations before a comment reaches a score of +4. This limits the impact of any individual moderator. But more than 40% of comments that reached +4 took longer than three hours to reach it; in three hours, the typical conversation was already half over. An alternative design would give more weight to early moderators, which would lead to earlier identification of treasures (and trash) but would give more power to those early moderators and lead to more errors caused by items having inappropriately high or low scores that would have to be corrected by future moderators.
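    The trade-off the paper describes can be made concrete with a toy sketch (my own illustration, not Slashdot's actual code): under equal-weight moderation a comment needs several votes to reach +4, while a hypothetical scheme that gives extra weight to the first moderator surfaces comments faster at the cost of letting one early vote swing the score further.

```python
# Toy sketch of the moderation trade-off (illustrative, not Slashdot's code).
# Scores are capped to Slashdot's familiar [-1, +5] range.

def score_equal(initial, votes):
    """Each moderation moves the score by exactly 1, capped at [-1, 5]."""
    s = initial
    for v in votes:
        s = max(-1, min(5, s + v))
    return s

def score_early_weighted(initial, votes):
    """Hypothetical alternative: the first vote counts double,
    identifying treasures (and trash) sooner but amplifying any
    early moderator's error."""
    s = initial
    for i, v in enumerate(votes):
        weight = 2 if i == 0 else 1
        s = max(-1, min(5, s + weight * v))
    return s

# A comment starting at +1 needs three +1 moderations to reach +4
# under equal weighting, but only two under early weighting.
print(score_equal(1, [1, 1, 1]))        # 4
print(score_early_weighted(1, [1, 1]))  # 4
```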
    And, over on Kuro5hin, there is an old post (Oct 2003) by localroger, Notes Towards a Moderation Economy, which notes the same aspects of the slashdot moderation system, and goes on to propose a long list of alternative techniques:
    An economy, like an ecosystem, is more stable if it has multiple feedback pathways. Any single feedback pathway is prone to catastrophe; the cockroaches eat all the bamboo and subsequently starve, or the one company with all the money fires all its own employees to save money, but they're also its customers and as a result the economy goes bankrupt. But if there are many species with interlocking relationships, or many participants in the economy, a catastrophic turn in any single path does not ruin the system. With this in mind, let's consider some additional reward systems that could be automated in an electronic community:
    - Equity should be worth something, so that one has an incentive to use a single account and keep it in good standing. [...]
    - Leaving highly rated posts should grow one's equity, and leaving poorly rated posts should shrink it.
    - Rating itself should cost some equity, so that one thinks before doing it.
    - Extreme rating might cost extra, so that it is meaningful. For example, the old 5-point rating system was eliminated because most people rated 1 or 5; this could be fixed as follows:
      - This was so awful I was willing to spend 2 points kicking it.
      - This was pretty bad. Cost me a point to say so.
      - Read it, nothing special. Costs nothing to say so.
      - This was pretty good, worth a point to say so.
      - This was so good I was willing to spend 2 points saying so. [...]
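    The core of localroger's proposal is simple enough to sketch in a few lines. The names and numbers below are my own illustration, not code from the Kuro5hin post: ratings cost the rater equity (extreme ratings cost more), and a post's ratings feed back into its author's equity.

```python
# Sketch of an equity-based rating economy (illustrative numbers).
# Ratings use a 5-point scale; 3 is neutral and free, extremes cost 2.
RATING_COST = {1: 2, 2: 1, 3: 0, 4: 1, 5: 2}

class Member:
    def __init__(self, name, equity=10):
        self.name = name
        self.equity = equity

def rate(rater, author, rating):
    """Spend the rater's equity on the rating, and adjust the author's
    equity by the rating's distance from the neutral score of 3."""
    cost = RATING_COST[rating]
    if rater.equity < cost:
        raise ValueError("not enough equity to rate")
    rater.equity -= cost
    author.equity += rating - 3  # +2 for a 5, -2 for a 1, 0 for a 3

alice, bob = Member("alice"), Member("bob")
rate(alice, bob, 5)              # costs alice 2 points, earns bob 2
print(alice.equity, bob.equity)  # 8 12
```

    Note how this wires up two of the feedback pathways at once: raters are throttled by the cost of rating, and authors are rewarded or punished by the ratings they attract.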
    The comments on the Kuro5hin piece are also well worth reading (paging Tom Coates...).

    Comments (3) + TrackBacks (0) | Category: social software

    February 11, 2004

    We interrupt your regular programming...Email This EntryPrint This Article

    Posted by Clay Shirky

    To announce the arrival of Marina Charlotte Zelleke Shirky, 7 lbs, 13 oz., 20.5 inches, mother and child doing fine. Papa doing fine too, though my posting tempo is likely to be down for a while. Off to bed...

    Comments (20) + TrackBacks (0) | Category:

    February 9, 2004

    Computerworld on Blogs Bubbling Into BusinessEmail This EntryPrint This Article

    Posted by Clay Shirky

    A piece over at Computerworld on the use of weblogs in business environments. Not much that will be unfamiliar to Many-to-Many readers, and they conflate blogs with other collaborative tools (the lead graf is about Socialtext, which combines weblogs and wikis), but it's a good piece to show the curious. There is one great quote, from Jamie Lewis:
    But Jamie Lewis, an analyst at Burton Group in Midvale, Utah, says he isn't sure all companies should immediately jump on the blogging bandwagon. "Whether companies should look into using it depends on corporate culture and the kind of culture they're trying to develop," Lewis says. Blogging is like a lot of other collaborative tools -- if the company is good about trying to encourage and generate cross-functional and interpersonal collaboration and communication, then it's a good idea, Lewis says.
    This is consistent with what we know from earlier studies of IT in the workplace -- technology tends to be an amplifier, so streamlining things often makes a bad culture able to get worse faster. If a company distrusts employee initiative, blogs won't help much, except maybe in that "precipitate a crisis" way -- they are tools, not magic pixie dust.

    Comments (3) + TrackBacks (0) | Category: social software

    February 8, 2004

    Captain Obvious Forums: Best ToS EVAREmail This EntryPrint This Article

    Posted by Clay Shirky

    I am obsessed with patterns of governance in online spaces, and with the ways those patterns play out over time. One common pattern is for a site to put on a friendly face while hiding restrictive terms of service underneath. The two common crises this leads to are when the users violate the expectations of the site's owners (old skool: Habitat; new skool: the BBC) or when the users rebel because the site begins enforcing the ToS unexpectedly (old skool: Habitat again; new skool: the Fakester revolution). Captain Obvious is trying an alternate pattern: in-your-face descriptions of dictatorial powers. Since the owner of a site has enormous power over the users' interaction, the C.O. strategy is to announce, upfront and in lavish detail, the do's and don'ts of the system:
    If you are banned for whatever reason, you can only post in the Tard farm. If you can prove that you aren't a total moron there, you might be un-banned. If you re-register with another name, your IP will be blocked. Stay the HELL out of “TEH SMARTAY FOREM!!!11” unless you can keep up with the topic. The posters in that forum are very protective of it, and rightfully so. Don't post in that forum until you read ALL THE POSTS in that topic first, and are sure you can speak English and use proper grammar and punctuation. [...] Off topic/generally retarded posts will be moved to Brain Fecal where the regulars will proceed to rip it to shreds and make you cry. Frequent posters of Brain Fecal worthy threads will find themselves at home in Tardfarm. Speaking of Tardfarm, THERE ARE NO RULES IN TARDFARM. Go crazy. I don't give a fuck. Do whatever you want. But keep in mind that the things there may not always be work safe. [...] You must have 50 posts before you can alter your avatar and custom title. Until then, or even AFTER then, if you do something stupid, don't fret! We will supply you with an appropriate avatar and custom title.
    I love the Tard Farm solution -- it's not a question of banning or not banning, which gets the trolling impulses up, but rather of where you post: a place with rules or a place with no rules. It's like the old multi-player game solution of allowing player-killing in the wilderness (Tard Farm) but not in the cities (the Forums, and especially TEH SMARTAY FOREM!!!11), a pattern pioneered by Habitat, because before anyone did anything, Habitat did everything...

    Comments (4) + TrackBacks (0) | Category: social software

    February 5, 2004

    Social networking services and PrivacyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting piece by Roger Clarke on the privacy implications of social networking services, which comes to some grim conclusions. The centerpiece is an analysis of the legal rights of the user on Plaxo:
    Under the doctrine of privity, a contract creates rights and responsibilities for the parties to the contract, but for no-one else. Hence there are no rights whatsoever under the contract for the individuals to whom the data relates. The pop-up box declares the following undertaking by Plaxo to their client: "we respect the privacy of your contacts and maintain a strict policy of not sharing their contact information (received as a result of responding to your update requests) with other Plaxo users who are asking for this information" (my emphasis). The emphasised words appear to exclude the data that is provided by the user when they upload their address-book, and hence the undertaking does not apply to the data about other people that users gift to the company. This assurance falls desperately far short of real privacy protection. It is in any case entirely non-binding because of the clause in the Terms of Service which declares conclusively that "These Terms of Service shall constitute the complete and exclusive agreement between us". (The Terms incorporate the Privacy Policy statement, but not the contents of the pop-up box).

    Comments (3) + TrackBacks (0) | Category: social software

    Stowe Boyd on GushEmail This EntryPrint This Article

    Posted by Clay Shirky

    Stowe Boyd on Gush, an IM client that re-configures the visual tracking of conversations into column view, and integrates RSS:
    One of the limitations of a gated community model like that of IM products like Gush is that there is no way to post content to the wide wide world. I hope that Gush is extended to support blog features like trackback -- then I can comment on external blogs from within the Gush 'announcements' model. Still, I think the notion of community-oriented postings -- within a workgroup or social group -- is well supported by the Gush model. Perhaps a tiny tweak to support posting announcements to external blogs or external users via email gateway would solve the closed world problem. Ditto the need for comments -- if I am using announcements as a mini-blog, then folks need to be able to mini-comment, too.
    One of the really interesting changes in social software in 2003 was when people began thinking in terms of patterns rather than software, and then combining them, as with wiki+blog, blog+BBS and other combos. The two tools I was most surprised not to see integrated as patterns were IM and mailing lists. With Gush and BuddySpace below, it looks like IM is now being really integrated with other tools.

    Comments (2) + TrackBacks (0) | Category: social software

    February 4, 2004

    BuddySpace: Super-buzzword IMEmail This EntryPrint This Article

    Posted by Clay Shirky

    BuddySpace - an experimental IM client with multi-variate presence, and visualization tools built in:
    BuddySpace is an instant messenger with four novel twists: (1) it allows optional maps for geographical & office-plan visualizations in addition to standard 'buddy lists'; (2) it is built on open source Jabber, which makes it interoperable with ICQ, MSN, Yahoo and others; (3) it is implemented in Java, so it is cross-platform; (4) it is built by a UK research lab, so it is 100% free with full sources readily available. But BuddySpace is about more than just 'messaging', as we explain below. [...] The concept of presence has matured in recent years to move away from the simple notion of 'online/offline/away', towards a rich blend of attributes that can be used to characterise an individual's physical and/or spatial location, work trajectory, time frame of reference, mental mood, goals, and even intentions! Our challenge is how best to characterise presence, how to make it easy to manage and easy to visualise, and how to remain consistent with the user's own expectations, work habits, and existing patterns of Instant Messaging and other communication tool usage.

    Comments (5) + TrackBacks (0) | Category: social software

    Welcome David WeinbergerEmail This EntryPrint This Article

    Posted by Clay Shirky

    As you will note from the non-gray background in the post below this one, David Weinberger is no longer a guest blogger, having agreed to become one of the regulars. And there was much rejoicing.

    Comments (0) + TrackBacks (0) | Category:

    February 3, 2004

    Exiting DeanspaceEmail This EntryPrint This Article

    Posted by Clay Shirky

    I wanted to wait ‘til the February 3rd polls opened to post this, because I wanted it to be a post-mortem and not a vivisection. What follows is a long musing on the Dean campaign’s use of internet tools, but it has a short thesis: the hard thing to explain is not how the Dean campaign blew such a huge lead, but rather why we ever thought that lead actually existed. Dean’s campaign didn’t just fail, it dissolved on contact with reality.

    The answer, I think, is that we talked ourselves, but not the voters, into believing. And I think the way the campaign was organized helped inflate and sustain that bubble of belief, right up to the moment that the voters arrived.

    Take this as an early entry in a conversation everyone who was watching Dean’s use of the internet should contribute to: what went right? what went wrong? and what to do differently next time? We should do this now because ‘next time’ still includes a passel of primaries and then, most importantly, the general election. If we have the conversation now, we won’t have to wait til the few uncontested House races of 2006 to see if we learned anything.

    Two caveats at the beginning: first, the stupidest thing I’ve said on this issue was in Dean: (Re)stating the Obvious:

    ...continue reading.

    Comments (64) + TrackBacks (0) | Category: social software

    ACM Queue on Culture in Distributed WorkgroupsEmail This EntryPrint This Article

    Posted by Clay Shirky

    ACM's Queue has a piece entitled Culture Surprises in Remote Software Development Teams:
    _Decision-support systems._ The decision-support systems designed in the United States embody algorithms that fit egalitarian, democratic participation. These systems focus on the task rather than relationships, common in many other cultures. They allow for anonymous voting and weighted decision analysis and other algorithms that ignore any aspect of relationships and obligation. The one exception is "stakeholder analysis," which surfaces the interests of the major participants. Although it does not openly acknowledge decisions on the basis of power and relationships, it reveals who the players are and what their goals are. Furthermore, in the United States, the criteria typically concern cost and benefit to the future material outcome of an organization. The criteria often are neither wisdom from history nor the preservation of long-term personal relationships central to the thinking in other cultures. And, of course, some cultures don't want the details ever to be made explicit.
    This is a B+ piece, the sort I hate posting here -- too good to ignore, not good enough to rave about. They take the trouble to list several ways in which cultures can fail to mesh, and then post _the same trivial assertions we've been reading for literally decades_: it's important to be clear, it's a misfortune that people's characteristics aren't explicit, video will make things better. _Grrrr._ And then, late in the article, they post what should have been the topic sentence: "And, of course, some cultures don't want the details ever to be made explicit." That's right, some cultures do suffer from this problem -- human ones. (Weinberger is required reading on this subject.) When I was in college, there was a communally authored document circulating with remarkable comments received on student papers, and my favorite was "Reading this makes me want to come over to your house, prop your eyes open with toothpicks, and scream 'Look! Look at the text!'" That's how this piece makes me feel -- it took real work to put this together, and there's even a lot here to think about. Their proposed reactions, however, basically amount to world-as-orkut -- "The second step to dealing successfully with multicultural teams is to find out explicitly what the cultural values are of the people you are working with." This makes me want to go over to their houses, prop their eyes open with toothpicks, and scream 'Look! Look at the community!'

    Comments (3) + TrackBacks (0) | Category: social software

    February 2, 2004

    Shannon Clark on Dean and Social SoftwareEmail This EntryPrint This Article

    Posted by Clay Shirky

    Shannon Clark on Dean and Social Software:
    In any network it is very easy to assume that everyone is alike. At the very least, most networks (and especially online communities) assume that the participants each have a set of things in common; over time these grow to include, generally, a specialized vocabulary and a shared worldview. The danger that this holds can be seen as you observe how networks change and grow over time: the more complex the shared history, the harder it can be for new members to join, and once joined, it is that much harder for the new members to influence and shape the network.
    and, later:
    But writing, talking, even contributing money, is a different act than voting. Voting is irrevocable and is, most of the time at least, a representation of making a decision - this candidate or the other (at least here in the US we don't generally have vote off style elections where you indicate a level of support and second/third choices etc, our voting tends to be either/or votes). To get people to vote requires a number of specific and each slightly difficult steps.
    Read the whole thing.

    Comments (0) + TrackBacks (0) | Category: social software

    One from the Old Skool: Godwin on ASCII, from 1994Email This EntryPrint This Article

    Posted by Clay Shirky

    Here's Mike Godwin on video vs. text, from the 1994 Wired:
    Flaming (typically defined as the posting of e-mail or public messages intended to insult or provoke) is an occupational hazard of the Net. Mere text, they'll tell you, is too narrow a communications medium for human beings - it doesn't carry body language or emotional nuance - so misunderstandings are all too probable. Sometimes they'll even go further: When the information superhighways are all built, they say, and we're able to transmit live, full-motion video to each other, we will enter a Golden Age of Telepresence, and online misunderstandings will evaporate. I'm here to tell you they're wrong. Wake up, online belletrists everywhere - the Golden Age is already here, and flames are the proof. The problem is not that ASCII is too restricted a medium - the problem, if anything, is that text says too much, and that the medium is too intimate! Flames are the friction born of minds rubbing too closely together.
    Ten years later (and after ten more years of failed predictions of video-conferencing as a mainstream tool), this is still right. There is a coalition of people eager to sell video as the cool new technology and writing as on the way out, but as things like the spread of weblogs show us, words have a number of really good characteristics that video lacks.

    Comments (5) + TrackBacks (0) | Category: social software

    Steven Johnson on DeanEmail This EntryPrint This Article

    Posted by Clay Shirky

    Steven Johnson has a piece on Dean and the disconnect between the campaign and the voters
    Now here's the slightly more complicated part of the explanation: the Dean campaign's use of the internet has forever changed the way that candidates 1) organize their supporters, and 2) raise their money. But I would argue that it has had almost no effect -- and probably will continue to have little to no effect -- on the way ordinary voters ultimately decide who to vote for. That decision is still largely made via face-to-face inspection, where possible, and then via television, where you get an approximation, however filtered, of that face-to-face encounter. So to me, the story of the last few months is that we all assumed that Dean's mastery of 1) and 2) would transfer over to success in that final stage where people actually pull the lever for a candidate. But that stage is still governed by older media; it's as though fundraising and organizing have jumped ahead to the 21st century, while actually deciding who you are going to vote for remains in the 20th.

    Comments (0) + TrackBacks (0) | Category: social software

    January 31, 2004

    FusedSpace: Contest for energizing public spaceEmail This EntryPrint This Article

    Posted by Clay Shirky

    FusedSpace is running a contest for ideas that energize the public environment (which, confusingly to American ears, they call the public domain, an intellectual property concept.)
    Do you have an idea or proposal through which technology will make possible other interactions with the public domain, will shed new light on it or in any other way will bring about innovation? Then do enter the Fusedspace competition. Fusedspace is an international competition for ideas on inspiring applications for new technology in the public domain. Fusedspace calls for innovative ideas that, by means of existing technology, can change or improve our current relationship with physical public space or that can otherwise bring about innovations in the public domain. Fusedspace calls for submissions:
    - that succeed either in increasing or simplifying the accessibility of virtual public spaces.
    - that (by means of hardware or software) succeed either in making use of or increasing the public potential of the new media.
    - that (by means of hardware or software) develop facilities which generate or enhance 'social coherence'.
    - that stimulate or define the debate on the newly formed public domain.
    - that search for modifications to utilities that predominantly are used commercially, and that can broaden cultural and social perspectives in the public domain.
    Accepting submissions now, due by April 2, 2004. I'd link to more information, but their site uses frames for no easily observable reason, so getting to the info is a bit of a chore.

    Comments (1) + TrackBacks (0) | Category: social software

    January 30, 2004

    Theresa Senft: Against ReputationEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting Theresa Senft article entitled Against Reputation
    Interestingly, although there have been criticisms of specific implementations of reputation management systems (i.e., the existence of plain dumb reviewers on Amazon) I have yet to see a full-on argument regarding what I see as the biggest problem with reputation itself: its reliance on a spherical mode of relationality, as in the phrase, "sphere of influence." Spheres are ways of delimiting space, and with it, people and ideas. Just as our understanding of the public sphere turns on how we define public (and lock out those who don't fit in), to claim a sphere of influence, one must first declare in advance of the interaction "these are the people/ideas currently influencing me, and these are the ones who do not."
    This is timely, with Seb's reference to LJ trying to unpack "the overloaded concept of friend", which I think is going to be a disaster. There are some places where, when technology is made _more_ flexible, it gets notably _less_ usable; we cannot ever render human relations with complete explicitness (Paging Dr. Weinberger to the white courtesy telephone...). The label 'Friend' on LiveJournal has as its primary virtue that it is obviously inadequate, so people don't read too much into it. Turning it into something multi-variate and hard to use will leave it still inadequate, but now confusingly so -- since after all, shouldn't we oughta be able to say exactly what we mean? (To which the answer is of course "No", and the history of computer science's encounters with real people has largely been the history of misunderstanding that constraint.) Taking the simple term Friend and sub-dividing it again and again, til Jason Kottke is hiring someone to manage _just_ his LiveJournal profile, is like trying to open a can to get out just one worm -- it seems like a good idea right up to the moment you open the can.

    Comments (5) + TrackBacks (0) | Category: social software

    Orkut messaging as spamEmail This EntryPrint This Article

    Posted by Clay Shirky

    Adam Greenfield, contemplating orkut, looks at what friend-of-friend messaging does in social networks with high degree nodes, and it looks a lot like spam.
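    A back-of-the-envelope sketch shows why. In a network where hubs have hundreds of connections, a "friends of friends" message balloons to a spam-scale audience; the numbers below are illustrative, not drawn from orkut's actual data.

```python
# Rough upper bound on the friend-of-friend audience, ignoring overlap
# between friend lists (real reach is smaller, but the growth is the point).

def foaf_reach(avg_degree, hops=2):
    """Sum the audience at each hop: friends, friends-of-friends, etc."""
    return sum(avg_degree ** h for h in range(1, hops + 1))

print(foaf_reach(30))   # 930 people within two hops of an average member
print(foaf_reach(300))  # 90300 within two hops of a high-degree node
```

    With a high-degree node in the path, a single broadcast reaches tens of thousands of people who never opted in to hearing from the sender -- which is a workable definition of spam.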

    Comments (2) + TrackBacks (0) | Category: social software

    boyd on Orkut; Meskill on the YASNS numbersEmail This EntryPrint This Article

    Posted by Clay Shirky

    apophenia goes apoplectic on the subject of orkut
    #3) Explain to me why one must be a friend to be a fan of someone? The role of fan is inherently a power differential, not an equalizer. (Don't get me wrong: on Orkut, there's definitely pressure to reciprocate.) The people that i'm a fan of are not my friends; they're idols; they're people that i read on the interweb but do not know. It is sooo weird to read which of my friends are a fan of me. Does that mean that the rest are only following social custom in linking to me? Does that mean that they don't really respect me? [Or does it mean, like it means to me, that it's too bloody weird to consider checking off that fan bit?] And worse... i can see who is a fan of others. This means that i can check on my friends and figure out that they're using the fan feature... just not on me. Hello, socially awkward.
    danah's on a tear -- read the whole thing (and if you are designing an application that relies on social networks, read the whole thing, then print it and tape it up right next to your monitor.) Also, Judith Meskill has compiled a list of 100+ (!) YASNS services (_via McGee's Musings_), and although the number includes things out of the usual Orkut/Friendster orbit (dating sites, social bookmark managers), it makes a convincing case that this is the madness of the age. She also points to a number of people working out the blogs-as-social networks meme (including the Mehta/Efimova exchange we linked to earlier.)

    Comments (2) + TrackBacks (0) | Category: social software

    January 28, 2004

    Dean: (Re)stating the obviousEmail This EntryPrint This Article

    Posted by Clay Shirky

    There are some topics which are so hot-button that any criticism, even speculative and limited, can read as complete dismissal. So it is with my Is Social Software Bad for the Dean campaign? piece of Monday. The thing that put me over the edge was seeing that piece linked to with the phrase “the backlash against the idea of digital campaigning has begun.”

    This is of course nothing like my actual position, so to set the record straight, here are four things I’d like to stipulate, as the lawyers say:

    ...continue reading.

    Comments (10) + TrackBacks (0) | Category: social software

    Reason on Dean and Social SoftwareEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting post over at Reason on the various flavors of post-Iowa and NH Dean campaign analysis:
    And this, perhaps, is the problem—from the perspective of politicians, anyway—with campaigning by smart mob. Politics is a top down business. The old metaphor of the "political machine" is in this sense quite apt: It evokes a vast clockwork mechanism, perhaps composed of many cogs and gears, but governed in the end by a few hands at the levers of control. The organism—reigning metaphor for online social networks—lacks such convenient levers. Dean's network comprises not just his own site, rife with comments, but sites like DeanSpace, which were autonomous, not run by the campaign. In politics, that's a bug, not a feature.

    Comments (5) + TrackBacks (0) | Category: social software

    Two Pieces on the Demise of YASNSsEmail This EntryPrint This Article

    Posted by Clay Shirky

    The first one is serious, from TeledyN:
    And yes, I do think [social networking services] will fail, it's inevitable. Whether by intentional design or by blind emulation, these new black-book stop-shops all share several dubious characteristics:
    * they are not social networks, only flat-taxonomy directories of questionnaire replies, and badly designed questionnaires at that.
    * because they do not interoperate, because they cannot share data or interchange or allow identity migrations, they are essentially anti-social, building protectionist walls around people (called 'clubs' or 'communities' but really meaning the opposite)
    * they don't work.
    So why don't they work? Because they are _not_ social networks. A social network is a network with a social cause, a social reason for being. Social networks fill a niche need for interaction. Church clubs, business clubs, square-dance clubs, these form natural, anthropologically sound social networks with the intelligent self-organization moving from the local (chapter) out to the regional and then clustering still beyond. They are also self-governing, electing their executives from grassroots, organizing on the need to expand the social network.
    Read the whole thing. The second is a funny post from Jason Kottke on craigslist:
    Permanent full-time position for a personal social coordinator for a New York-based web designer. Your primary responsibility will be managing my accounts with various online social networking sites including, but not limited to, Friendster, LinkedIn, Tribe, Orkut, Ryze, Spoke, ZeroDegrees, Ecademy, RealContacts, Ringo, MySpace, Yafro, EveryonesConnected, Friendzy, FriendSurfer, Tickle, Evite, Plaxo, Squiby, and WhizSpark. Specific duties include:
    - approving or rejecting invitations of friendship
    - managing a database of usernames and passwords for each of the social networking sites
    - sending out friendship invitations [...]

    Comments (2) + TrackBacks (0) | Category: social software

    James McGee on thinking in public &/vs thinking collaborativelyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Interesting James McGee post, from April of last year, on the relationship between thinking out loud and thinking together:
    My problem is this. Most of the technology tools for supporting thinking together (e.g. discussion forums, threaded discussion, wikis) depend on skills and norms that I've found to be rare in practice and challenging to promote. My intuitions tell me that there are important differences with weblogs that address at least some of these issues. [...] One of the primary reasons that thinking together is hard is that it requires both that we think in public and that we think collaboratively. I suspect that thinking together fails at least as often because we don't know how to think in public as it does because we don't know how to do it collaboratively. Further I think that order matters. You need to learn how to think in public first. Then you can work on developing skills to think collaboratively.
    He goes on to tie this to the way weblogs can make linking public and collaborative thinking easier (relevant to a panel Liz is putting together on weblogs and collaboration.)

    Comments (0) + TrackBacks (0) | Category: social software

    January 27, 2004

    Ideant on edemocracyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Long, thoughtful musing over at ideant on the limits of edemocracy, especially as concerns the relationship between the public, as defined in politics, and the mass, as in 'that which is reached by mass media':
    - We need to be aware of which aspects of the internet are characteristic of a mass medium, and which can support the creation of a community of publics. We need to figure out how best to use both. - We need to acknowledge that tools that allow people to organize themselves are not as important as the agendas that people are supposed to pursue once they organize themselves. We need not just programmers, designers and entrepreneurs, but citizens who are politically conscious and active. - We need to acknowledge that getting information about the world is not as important as acting upon the world. We have to move away from the idea of defining individuals as intersections of information circuits and back to the idea of individuals as ensembles of social relations, to paraphrase Lorenzo Simpson. We have to ask ourselves honestly to what extent 'social software' is not in fact an oxymoron.

    Comments (1) + TrackBacks (0) | Category: social software

    Solipsis: P2P Virtual WorldsEmail This EntryPrint This Article

    Posted by Clay Shirky

    Preliminary work on Solipsis, a system for building and navigating a virtual world hosted in a distributed fashion. The PDF of the protocols is a curious mix of math and poetry
    If an entity does not know any entity in some large sector, it will hardly know about an entity arriving from this sector. Conversely, if it moves forward a sector with no known entity, it will hardly get aware of entities it should met on its path. The Global Connectivity property aims that an entity will not “turn its back” to a portion of the world.
    There's very little fleshed out here, so I can't recommend it so much as point to it, but I remember Tim Sweeney of the game Unreal talking about something like this back when years had 1's in them, so it may have percolated long enough to be the right time for it.

    Comments (1) + TrackBacks (0) | Category: social software

    Dovester: The O.G. of YASNSsEmail This EntryPrint This Article

    Posted by Clay Shirky

    Fantastic post on a 17th century social network of dove breeders in France
    The oldest club in Europe, an exclusive French society of dove breeders, used social networking tools since the late 17th century to connect its members via a handwritten newsletter, circulating from member to member, and being amended along the way. A special trust metric had been established, which allowed each breeder to rate his peers, a process in which each vote carried weight based on the caster's own ratings. In addition to the mailing, which took roughly one year to travel each of the members, shortcut routes were established, usually between counties, through which smaller groups could reach other groups. To create the shortcuts, each breeder was required to name at least two “sponsors” and four breeders he sponsored. [...]
    Read the whole thing.
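The trust metric described — each vote weighted by the caster's own rating — is essentially an eigenvector-style fixed point, the same idea behind modern reputation algorithms. A minimal sketch (names and rating values invented; repeated normalization stands in for a proper eigensolver):

```python
def trust_scores(ratings, rounds=100):
    """Power-iteration sketch: each member's score is the sum of the
    ratings they receive, each weighted by the rater's own current
    score, renormalized every round until it settles."""
    members = set(ratings) | {m for given in ratings.values() for m in given}
    scores = {m: 1.0 for m in members}
    for _ in range(rounds):
        new = {m: 0.0 for m in members}
        for rater, given in ratings.items():
            for ratee, value in given.items():
                new[ratee] += scores[rater] * value
        total = sum(new.values()) or 1.0
        scores = {m: v / total for m, v in new.items()}
    return scores

# Hypothetical breeders: A and B rate each other highly, C is rated
# lower, so C's votes end up carrying less weight.
ratings = {"A": {"B": 2, "C": 1}, "B": {"A": 2, "C": 1}, "C": {"A": 1, "B": 1}}
scores = trust_scores(ratings)
```

The circularity ("your weight depends on your rating, which depends on everyone's weights") is exactly what the iteration resolves.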

    Comments (0) + TrackBacks (0) | Category: social software

    LinkedIn use up?Email This EntryPrint This Article

    Posted by Clay Shirky

    Is it just me, or has LinkedIn use risen dramatically? Up to last week, I'd gotten 3 requests for forwarding _ever_, and I've been listed on the system since roughly Day 1, but in the last 72 hours, I've gotten another 4. Did Orkut's appearance and flameout advertise the idea of YASNSs, leaving their actual use to be filled elsewhere? Or is LinkedIn doing some kind of promotion? Or am I seeing large change on a small base, so it's basically random?

    Comments (11) + TrackBacks (0) | Category: social software

    Dina Mehta and Lilia Efimova on weblogs as/vs SNSsEmail This EntryPrint This Article

    Posted by Clay Shirky

    Terrific pair of posts, one by Dina Mehta on blogs and social network services
    My blog is my social software. It is also my social network. It has my profile and much more - it has my identity fleshed out, through my posts. * A profile with history that allows you to know so much about me - i started blogging in March 2003 - and already readers have seen me add new professional interests and take my qualitative research skills into new areas, some know i love music and Floyd, others have been with me to my cottage in the hills, read about my holiday and meetings with some wonderful bloggers on my trip, seen me change home, celebrated with me when i got a project due to my blog, and even wondered where i am when i've gone silent on my blog for a few days. * A profile that tells you much more than any homepage i have on Ecademy or Ryze or Tribe or LinkedIn could. * A profile that changes, grows, flows - not a cold resume or 'about me' page filled with past achievements and accolades - but is touchy-feely and one that says more about me through my thoughts, interests, preoccupations, rants, rambles and angst - that makes me more than just a consultant or a qualitative researcher - or a demographic statistic, 'female blogger from India'. [...] When i did not blog, i found social networks far more relevant and useful. Today, my blog is my one-stop shop.
    which prompted a follow-up piece from Lilia Efimova which is a more point-by-point comparison of the two models, and more supportive of the idea that SNSs have different advantages than weblogs
    *Slow uncovering vs. instant visibility* Learning about someone from a weblog takes time. Personality appears in a context and through time to read many lines of weblog posts and to participate in conversations. And it's even more difficult to learn about someone's network: linking, blogrolls and RSS subscription lists tell a bit, but you never know if linking or blogrolling means regular reading and how many e-mails/IMs/calls were exchanged next to blogging. At YASNs finding about someone's profile and network doesn't take much time (only invitation or access rights :) The degree and type of connection are still not clear, but at least you know that it was explicitly approved. Browsing through connections is easy and fun.

    Comments (3) + TrackBacks (0) | Category: social software

    January 26, 2004

    Is Social Software Bad for the Dean Campaign?Email This EntryPrint This Article

    Posted by Clay Shirky

    I’m getting the same cognitive dissonance listening to political handicappers explain Dean’s dismal showing in Iowa that I used to get listening to financial analysts try to explain dot com mania with things like P/E ratios and EBITDA. A stock’s value is not set by those things; it is set by buyer and seller agreeing on price. In ordinary markets, buyers and sellers use financial details to get to that price, but sometimes, as with dot com stocks, the way prices get agreed on has nothing to do with finance.

    In the same way, talking about Dean’s third-place showing in terms of ‘momentum’ and ‘character’, the P/E and EBITDA of campaigns, may miss the point. Dean did poorly because not enough people voted for him, and the usual explanations – potential voters changed their minds because of his character or whatever – seem inadequate to explain the Iowa results. What I wonder is whether Dean has accidentally created a movement (where what counts is believing) instead of a campaign (where what counts is voting.)

    And (if that’s true) I wonder if his use of social software helped create that problem.

    ...continue reading.

    Comments (44) + TrackBacks (0) | Category: social software

    Gillmor on Wikipedia and WikisEmail This EntryPrint This Article

    Posted by Clay Shirky

    Dan Gillmor notes that the Wikipedia is about to reach its 200,000th article, and goes on to explain why it works
    "The only way you can write something that survives is that someone who's your diametrical opposite can agree with it," says Jimmy Wales, a founder of Wikipedia. Urban planners and criminologists talk about the "broken window" syndrome, says Ward Cunningham, who came up with the first Wiki software in the 1990s. If a neighborhood allows broken windows to stay that way, the neighborhood will deteriorate because vandals and other unsavory people will assume no one cares. Similarly, a Wiki draws strength from its volunteers who catch and fix every act of online vandalism. When the bad guys learn that someone will repair their damage within minutes, and therefore prevent the damage from being visible to the world, they tend to give up and move along to more vulnerable places.
    The "broken windows" pattern is reminiscent of what Wattenberg and Viegas found in their historyflow wiki-visualization. Dan also talks about Socialtext, Ross's company, and, fittingly, gives the closing line to Ward Cunningham, inventor of the wiki: "Successful Wikis are inherently fragile, says Cunningham, but they show something important: 'People are generally good.'"

    Comments (0) + TrackBacks (0) | Category: social software

    January 24, 2004

    Orkut: Brief notesEmail This EntryPrint This Article

    Posted by Clay Shirky

    So Orkut seems to be exploding -- people are joining at a rapid rate, albeit from a still-tiny base (my 40 friends link me to 6500 or so people, whereas on Friendster, 15 friends link me to 300,000+.) Their "you gotta be invited to get in" thing seems to be creating just the right sense of 'red velvet rope' to drive traffic, and the fact that it's Google-sponsored can't be bad for business. They've also made it incredibly easy to declare a link to someone already in the system, meaning that even as more users are joining, average path length is falling. It was 4.4 a few hours ago, and it's down to 3.8 now.

    Another interesting detail: they let you do link-by-link path traversal, as in "Show me Liz's friends, let me select one, then show me Liz's friend's friends" and so on. I was wondering how deep they'd let me go (4, 5, or 6 degrees, basically), but they recalculate paths dynamically, so every time I'd get a path like Clay->Liz->Sam and click on one of Sam's friends, it would recalculate to something like Clay->Greg->Sam's Friend. I finally walked off the end of a 5-link traversal, and started just seeing random people with no link calculation (which feels like a violation of the premise of FOAF networks -- "Only let me see and be seen by people who are within N degrees"), but it took some time.

    The network *feels* much denser than Friendster or LinkedIn, which is to say fewer people with single-digit connectivity, but I don't have a global view, so I can't yet say for sure. There is an _incredible_ amount of activity around adding friends on the system right now; my mailbox is mostly Orkut notifications. I wonder if all social services will suffer from the difference between the dollhouse pleasures of setting things up ("ooh, and I know _this_ person and _this_ person and _this_ person...") and the rarity of actual use. I have gotten far more requests to link on LinkedIn, for example, than actual requests for use (a user since launch, 54 connections, 3 total requests to actually use the service.)
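The dynamic recalculation described here is just repeated shortest-path search over the friendship graph: each click re-runs the search against whatever links exist at that moment. A toy sketch (names and links invented, standard breadth-first search) of how one new link reroutes a path:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: return the shortest chain of friends
    from start to goal, or None if they aren't connected."""
    seen = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        for friend in graph.get(path[-1], ()):
            if friend == goal:
                return path + [friend]
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None

# Hypothetical network: at first the only route to Pat runs through
# Liz and Sam; once Greg links to Pat, the search finds a shorter path.
graph = {
    "Clay": {"Liz", "Greg"},
    "Liz": {"Clay", "Sam"},
    "Sam": {"Liz", "Pat"},
    "Pat": {"Sam"},
    "Greg": {"Clay"},
}
p1 = shortest_path(graph, "Clay", "Pat")   # Clay -> Liz -> Sam -> Pat
graph["Greg"].add("Pat")
graph["Pat"].add("Greg")
p2 = shortest_path(graph, "Clay", "Pat")   # Clay -> Greg -> Pat
```

Easy link declaration plus dynamic recalculation is why average path length falls even as membership grows.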

    Comments (7) + TrackBacks (0) | Category: social software

    PieSpy: Java Tool for Inferring and Visualizing Social Networks on IRCEmail This EntryPrint This Article

    Posted by Clay Shirky

    With a title like that, I shouldn't have to do much more to get you to click, should I?

    Comments (1) + TrackBacks (0) | Category: social software

    Will Davies on the net and quasi-democracyEmail This EntryPrint This Article

    Posted by Clay Shirky

    Great Will Davies post at iWire on the difference between democracy and quasi-democracy, using the examples of internet polling gone amok from the American Family Association and the BBC.
    These models are all in a sense quasi-democratic: they copy many features of Government constitutions (representation, voting rights, debating rights) but there is one crucial difference. In a democracy, these rights are handed out to a selected group (once upon a time it was white male property owners; nowadays it's adults); in the case of quasi-democracy, these rights get claimed by those who can be bothered to claim them. [...] Technologically mediated political discussion, bottom-up e-democracy and online polling are the same. They are quasi-democratic, but only quasi because they rely on people caring sufficiently to get involved. The absence of the silent majority is not felt in any way. They are (a new term here!) auto-representative. Their voice speaks only for itself; in a democratic society, voices come together and become something else entirely, which is the nature of genuine representation. Any true democracy has to be, in a fundamental sense, representative not simply vocal.
    Davies nails something that has always bothered me about the swirling conversation around e-democracy -- we're stretching the word democracy to mean "emergent effects from group participation." This is, I think, a huge mistake, because it makes democracy seem easy. Voting alone is not enough for democracy -- the legitimacy of the vote rests on the idea that the people voting and the people affected by the vote are the same group, a situation distinctly missing from either the AFA or BBC polls. I think Davies is working out something important about the intersection of social software and present political practice -- read the whole thing.

    Comments (7) + TrackBacks (0) | Category: social software

    January 23, 2004

    When will they ever learn, reduxEmail This EntryPrint This Article

    Posted by Clay Shirky

    A while ago, I posted about what happened when the BBC's Radio 4 tricked a Member of Parliament into proxying his legislative capabilities to an anonymous group of internet voters. (Hilarity ensued.) Adam Greenfield added a comment pointing to the American Family Association's Gay Marriage poll, plainly designed to deliver a resounding reaffirmation of the special triune bond between fag-hating straights, the US Government, and The Lord. And then, as Adam noted then and Wired is reporting now, Things Did Not Go As Expected. It turns out that when you put something on the internet, other people can get to it, people who might not agree with you.
    "We're very concerned that the traditional state of marriage is under threat in our country by homosexual activists," said AFA representative Buddy Smith. "It just so happens that homosexual activist groups around the country got a hold of the poll -- it was forwarded to them -- and they decided to have a little fun, and turn their organizations around the country (onto) the poll to try to cause it to represent something other than what we wanted it to. And so far, they succeeded with that."
    We asked for user feedback, and _the users caused it to represent something other than what we wanted it to_. That is hard to beat as a direct statement of the glory of this medium. I am reminded of Eastern Europe once the Communist governments collapsed -- country after country suffered from pyramid schemes because the citizens had no idea how market economies worked. Similarly, organizations like the American Family Association and Radio 4, so completely accustomed to dictating the terms of a conversation, suffer from a delusion that the internet offers them a way to reach only the people who agree with them. Organizations like this assume users will happily provide the fig leaf of popular engagement, without any of that messy unpredictability that comes from offering actual people actual choices. We can only be thankful that their failures have been as dramatic as these, though we have to be on the lookout for attempts to make the net more predictable, rather than making the AFA's and Radio 4's of the world more accountable. As Mr. Pound, the British MP, said when the results of his poll came in "We will have to re-evaluate the listeners of Radio 4," as if the people should answer to the pollsters, and not vice versa.

    Comments (2) + TrackBacks (0) | Category: social software

    January 21, 2004

    Weblogs are less self-consistent than Blaze imaginesEmail This EntryPrint This Article

    Posted by Clay Shirky

    In the comments of Mistakes in the Moral Mathematics of Blogging, I suggested that "[a]s weblogs become less personal expression and more lightweight publishing tool, a number of blogs get a significant amount of traffic from outside the blogosphere (e.g. Gizmodo, which is low on the link chart but very high on traffic.)" Seb points out an example of that in William Blaze's analysis of the spread of Linton Freeman's excellent Visualizing Social Networks paper. Seeing that Freeman's article was posted here on the 15th, Blaze concludes "Several hours later the link was duplicated on an even more popular (381 inbound blogs) site Many 2 Many, presumably because they saw the link via Blackbelt Jones, or perhaps on one of several other smaller sites that also picked up the link via that site."

    This is a telling intuition -- Blaze's assumption is that, once discovered, pointers to Freeman's piece simply passed from weblog to weblog. In our case, though, it came to be posted to Many-to-Many because I saw it on del.icio.us, a 'social bookmark manager.' The design pattern of del.icio.us is radically different from that of a weblog. In particular, del.icio.us is designed to be useful to the individual first and foremost, as an easy way to collect and categorize links; its secondary function is to provide an aggregate view of links others find interesting. Many of us, though, possibly a majority of del.icio.us users, find the main page to be as valuable as the bookmark-saving service. In this case, I saw the link appear on del.icio.us on the 15th, and after reading the paper, posted it immediately. So Blaze is right that context was stripped, but wrong in thinking it was stripped in passing from weblog to weblog. It was stripped when an individual user saved the link for him or herself, in a forum where that behavior is public, and where I happened to be watching the link flow.

    So although link credit was stripped, it's not clear that this was because of any negative action. The del.icio.us user who first posted the link can hardly be expected to record contextual data for what is a personal record of interesting material, and webloggers surely shouldn't refrain from posting interesting links simply because they discover them in a low-context environment. Blaze's core assumption, in other words, is leaky. He starts from the idea that when a link shows up on two weblogs in short succession, it must be that it progressed from one to the other. In this case, though, and, I believe, in a growing number of cases, the commingling of the weblog world with other forms of both link management and social tools means that even if there were a code of conduct among people who are self-consciously blogging, this would not be enough to produce the effect he wants. As weblogs become increasingly important as general-purpose publishing tools, the internal consistency of communal understanding or shared purpose will necessarily shrink.

    Comments (3) + TrackBacks (0) | Category: social software

    January 20, 2004

    Open Collaboration Services meeting, Feb 4Email This EntryPrint This Article

    Posted by Clay Shirky

    The Open Collaboration Services Initiative is having a meeting on Feb 4 to discuss, among other things
    1) an "architecture blueprint for collaborative business" that enterprises can reference to enable collaborative business internally and with their partners and customers; this blueprint will be suitable to guide enterprises' collaboration-related investments from a technology perspective; 2) an integrated set of protocols ("universal collaboration connector") that enables "plug-and-play" collaboration across vendor and organizational boundaries
    More at OCSI (warning: their home page isn't very user friendly)

    Comments (0) + TrackBacks (0) | Category: social software

    January 19, 2004

    Marko on Mistakes in the Moral Mathematics of BloggingEmail This EntryPrint This Article

    Posted by Clay Shirky

    Marko has posted a long and fascinating addition to the conversation on blogging, inequality, and justice. I have only read it once, too cursory a viewing of such careful work to reply, but I wanted to flag its appearance immediately.
    The first mistake – let's call it the “Natural Social Institutions” view – is the simplistic but widely held view that the patterns resulting from the operation of freely forming networks are acceptable because the rules of operation of these networks are in some sense natural. “Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality,” Clay writes. “In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome…[I]t arises naturally.” Diversity plus freedom of choice creates inequality, yes, but how much inequality comes out in the wash is determined by a complex mix of institutional arrangements – including informational feedback mechanisms – as well as other factors influencing individual linking behavior. Clay has acknowledged as much by pointing to David Sifry’s Technorati Interesting Newcomers List and later by sketching several possible strategies in modifying the power law distribution. But Clay avoids the mistake only part of the way. He still gives the “natural power law” a kind of moral priority in his picture. The reason this is a mistake is that there is no way in which we can meaningfully say that “the blogging world without the Technorati Interesting Newcomers List” is in any way natural, or the baseline from the point of view of justice, in comparison to “the blogging world with the Technorati Interesting Newcomers List.” Neither has a special claim to be the baseline of moral analysis. It’s not as if there is one distribution and then we tinker with it. In order to answer the question of justice we need to agree on some further point of view from which to judge the justice of the rules and the resulting distributions.
    Read the whole thing. Really.

    Comments (7) + TrackBacks (0) | Category: social software

    Liz on "What is a blog?"Email This EntryPrint This Article

    Posted by Clay Shirky

    Liz has a post over at mamamusings about defining the word 'blog' for research purposes.
    There are a couple of issues to be thought about here. First, figuring out—for the purposes of any other sort of research—what a blog really is. At the AoIR conference last fall, I noticed that most of the people talking about blogs (myself included) either didn’t define blogs, or used a potentially problematic definition. Second, determining whether what we want/need to focus on for meaningful results are the blogs, or the bloggers. I maintain four different blogs, for example—not including the blogs for each of my classes. Choosing to focus on the object produced yields different results from focusing on the producer. Third, deciding how (or whether) to categorize blogs. Reading through the bloggies award page for 2004 (while you’re there, vote for misbehaving for best group blog!), I was struck by many of the categories, and by the assumptions inherent in those categories.
    I described my view on the definitional question while following up to Dave Pollard
    There was a halcyon period (between, say, the launch of Blogger and the launch of Gawker) when the definition of a weblog, weblog technology, and the actual interconnected mass of weblogs were all of a piece. When someone asked “What’s a weblog?”, you could point to Instapundit or Talking Points Memo or the recently updated list on blogger and say “There, that’s it, that’s a weblog”, without having to specify whether you meant the technology driving it, or the actual blog itself, or the abstract notion derived from the two. Those days are over. Weblogs (the technology) have become the premier lightweight publishing platform, and make no requirements that the users of that platform respect or even know about weblogs (the communal practice). The only thing in common among Jeremy Hylton, Dave Barry, Howard Dean's campaign, the US Navy's procurement officers and the_d00shbag over at LiveJournal, who just quit his job at KFC, is that they all use weblog software.
    This is a hard question, of course, so hard, from my point of view, that it is unsolvable in anything other than local declaration, as in "In this paper, we use the word 'blog' to mean X." I don't think there will ever be a definition common enough to take for granted in research contexts.

    Comments (1) + TrackBacks (0) | Category: social software

    January 16, 2004

    Conference calls and VoIPEmail This EntryPrint This Article

    Posted by Clay Shirky

    I just got off a conference call where some of the participants had to gather together in a room and use a speaker phone. Because of the way it had been set up in advance, we were limited to 5 lines on the call. This is not at all surprising, of course -- it's how most conference calls are. I was however struck anew by the brokenated horrorfulment that is the conference call, because this one was hosted by the Library of Congress. A Government agency, needing some outside counsel, could not schedule a conference call without specifying in advance how many people would be in attendance.

    Think what that would feel like in this medium: "Thank you for choosing to host your mailing list with Yahoo Groups. What is the maximum number of subscribers that will be on this list?" "Creating #python on irc... How many users will be on this channel?" "Welcome to the Atom Wiki. You will only be able to see this wiki in read-only mode, as the creator set the participant cap at 100."

    Every now and again, obvious things strike me anew: We have a huge mismatch between the potential of voice and the fact of the phone system because the phone system is almost literally anti-social. Having a party? Great -- just make sure you cap attendance in advance, and distribute lots of hard-to-remember login information beforehand.

    The current regulatory argument for treating voice over the internet (VoIP) like the traditional phone system is the old canard "If it walks like a duck..." The problem with this view is that VoIP _doesn't_ walk like a duck -- when broad use of voice over the internet finally arrives, it won't make people specify in advance how many people should be on a conference call, or require some central facility to set it up. The press often focuses on the ways VoIP is cheaper than regular telephony, but that doesn't hold a candle to the ways the internet is better than the phone system. We've barely even imagined ways of integrating voice into social software, because the one model we have for handling voice at scale has trained us to tolerate ridiculously hard group-forming.

    Comments (4) + TrackBacks (0) | Category: social software

    10 Tools for Networked ActivismEmail This EntryPrint This Article

    Posted by Clay Shirky

    Over at "Designing for Civil Society" they have an interesting list of 10 Open Source Tools for eActivism, taken from the Democracy Online Newswire. A lot of it is social software, including pointers to mailing list managers, wikis, weblogs and discussion forums, and it includes this nifty chart arranging the tools on two axes -- formal to informal, and centralized to distributed: (I loves me some two-axis charts...)

    Comments (6) + TrackBacks (0) | Category: social software

    January 15, 2004

    Visualizing Social NetworksEmail This EntryPrint This Article

    Posted by Clay Shirky

    Great overview of visual analysis of social networks over the years, from the hand-drawn (Friendship choices among 4th graders) to the computer-generated (Social support network of a homeless woman) to the interactive (Women's attendance at social events):
    Computers have been used to do the actual drawing for a number of years. But, more recently, the network research community has shown a tendency to construct and share screen-based images instead of relying entirely on the production and distribution of printed pages. This new approach facilitates the use of color and animation. Currently it offers enough flexibility to allow viewers to begin to interact with the images they receive. [...] Future developments will undoubtedly extend current trends. Network analysts already have made considerable progress in developing programs for computation (Freeman, 1988). And, as I have shown in this paper, we have made progress in developing programs for visualization. We can look forward to similar progress in developing database programs designed to facilitate the storage and retrieval of social network data. But the real breakthrough will occur when we develop a single program that can integrate these three kinds of tools into a single program. Only then will we be able to access network data sets and both compute and visualize their structural properties quickly and easily.
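On the computation side Freeman mentions, the simplest of the classic network measures is degree centrality: a member's tie count divided by the n - 1 ties possible. A small sketch (the friendship data is invented, in the spirit of the hand-drawn 4th-grader sociograms above):

```python
def degree_centrality(edges):
    """Normalized degree centrality: each node's number of ties
    divided by the maximum possible, n - 1."""
    nodes = {n for edge in edges for n in edge}
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    return {node: d / (n - 1) for node, d in degree.items()}

# Invented friendship choices: Ann is the hub of this little clique,
# tied to all three others; Dot hangs off the edge.
edges = [("Ann", "Bea"), ("Ann", "Cal"), ("Ann", "Dot"), ("Bea", "Cal")]
centrality = degree_centrality(edges)
```

Feeding numbers like these to a drawing routine is exactly the computation-plus-visualization integration the paper calls for.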

    Comments (4) + TrackBacks (0) | Category: social software

    January 14, 2004

    Ben Hyde on Powerlaws and InequalityEmail This EntryPrint This Article

    Posted by Clay Shirky

    Ben Hyde follows up on my Inequality post with observations of his own about mechanisms for working with power law distributions.
    He touches on the issue of volatility. This is a two-edged sword; you want stability - since network participants pay a cost to reshuffle the network - and you want mobility/opportunity. I believe, but I don't have enough data or a reasonable model, that the distribution of volatility in most of these networks is similar to that found in the distribution of firm sizes from year to year. Small firms change size a _lot_ more than large firms - it's a double exponential. If that's the right distribution for the volatility then the design problem is to manage the constants in that distribution.
    and
    That in turn brings me to the information issue. I wish Clay had mentioned that one way to reduce the slope of the curve is to improve the information available to the network members. That encourages members to link to things that are more diverse. I.e. the habit of linking to the "more popular blogs" is less egalitarian than the habit of linking to the "most popular blogs that discuss my interests." You can't do the latter if you don't have good information.
    This echoes Seb's note, in the comments of Inequality, about avoiding "linking up" (i.e. the blogroll habit of adding popular weblogs as a shout-out.) Ben has been thinking about power law distributions, and about what kind of active steps we can take in shaping them, for a long time. Read the whole thing. (Postscript: Reading Ben's stuff, I was reminded of the excellent power law overview at http://backspaces.net/PLaw/)
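Ben's information point can be illustrated with a toy simulation (all parameters invented): readers who link purely by popularity produce a steep, winner-take-all curve, while readers with enough information to link by interest spread attention far more evenly.

```python
import random

def top_share(n_blogs=500, n_links=5000, informed=False, seed=2):
    """Simulate inbound-link formation. Uninformed readers link in
    proportion to existing popularity (rich-get-richer preferential
    attachment). Informed readers pick any blog matching their
    interest, flattening the distribution. Returns the share of all
    links held by the top 10% of blogs."""
    rng = random.Random(seed)
    topics = [b % 10 for b in range(n_blogs)]   # 10 interest areas
    links = [1] * n_blogs                       # one seed link each
    for _ in range(n_links):
        if informed:
            topic = rng.randrange(10)
            pool = [b for b in range(n_blogs) if topics[b] == topic]
            choice = rng.choice(pool)           # pick by fit, not fame
        else:
            choice = rng.choices(range(n_blogs), weights=links)[0]
        links[choice] += 1
    links.sort(reverse=True)
    return sum(links[: n_blogs // 10]) / sum(links)

steep = top_share(informed=False)   # large top-decile share
flat = top_share(informed=True)     # much closer to an even 10%
```

The slope isn't changed by exhortation but by the quality of the information available when each link gets made, which is Ben's point.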

    Comments (0) + TrackBacks (0) | Category: social software

    "KVETCH is Dead": Community in adverse technological contextsEmail This EntryPrint This Article

    Posted by Clay Shirky

    Joshua Schachter (inventor of del.icio.us) sent me this nice addition to the literature of "Well, we didn't expect the community to do X!", this time from the eulogy of Kvetch.com (via the Wayback Machine -- all hail Brewster.) Kvetch was meant to be a write-only complaints board -- you could go and kvetch about whatever was on your mind. You would also see other people's kvetching served up randomly, and of course your kvetching would be added to the random pool for others to see. There was no identity, and no search. And yet a community (kvommunity?) of sorts grew up. The eulogy is short but scattered -- I quote the relevant portions here:
    *People try to connect even in the harshest climates*. I never expected this site to actually connect people. After all, the posting was random, and there were thousands of posts. And yet, people tried. They posted responses to other posts, and posted them dozens of times to increase their likelihood of getting seen. Stupid, but valiant. *Wherever there are people, there's the potential for love*. I know that Kvetch was responsible for at least one marriage. A union born of kvetching. Amazing. *Every collaborative project eventually outgrows its owner*. You start a project like this because you have a certain way of looking at the world. But when you open it up for group participation, it always changes. In this case, the amount of hostility the site attracted was sometimes shocking. For me, a kvetch is supposed to be a clever observation of one of life's funny little annoyances. But for others, it was an excuse to really let out their deep dark angry side. And there's nothing wrong with that, I suppose. It's just not what I wanted to cultivate. *Identity is important, even in ephemera like this*. Posters created specific identities and protected them vigorously, even though there were no memberships so anyone could post under any name. It led to some very passionate turf wars over names that anyone could claim.
    I love that last one -- it reminds me of Old Man Murray (now archived), a brilliant gaming site whose bulletin boards were nothing but puerile filth, lovingly written. On OMM, there was no 'identity management' whatsoever -- you just signed yourself in under a particular nickname, but anyone else could post under your name, and you could post under anyone else's as well. And yet not only was there identity, it was so strong that when one poster posted under another's handle, not only could the community usually tell it was a fake, we could often guess who faked it. Our identity systems, and often our reputation systems, try to reduce identity to a question of globally unique IDs (GUIDs), but a GUID is not an identity, and an identity is not a GUID. If people could invent and defend identity on Kvetch, which had not only no identity management but _no login_, then we're missing something in our current approaches to digital identity, which are often both unsubtle and overengineered. "Identity is important, even in ephemera like this."
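    The GUID-versus-identity distinction can be made concrete by putting the two models side by side. This is a hypothetical sketch (the names and structures are mine, not from any real system): the first model gives the *software* a way to tell accounts apart, while the second, OMM/Kvetch-style model gives it none -- yet it is the second one where the eulogy says identity flourished:

```python
import uuid

# Model 1: a GUID-keyed account. The system can distinguish records,
# but the GUID itself carries no social meaning.
accounts = {}

def register(handle):
    guid = str(uuid.uuid4())
    accounts[guid] = handle
    return guid

# Model 2: the OMM/Kvetch model. A "handle" is just whatever string
# the poster types in; nothing stops two people using the same one.
posts = []

def post(handle, text):
    posts.append({"handle": handle, "text": text})

guid = register("OldManMurray")       # a record key, not a person
post("OldManMurray", "genuine post")
post("OldManMurray", "impostor post")  # the software can't tell these apart
```

    In Model 2, only the readers -- not the software -- can judge which post is "really" the person they know by that handle, which is the point: the identity lived in the community's pattern recognition, not in a unique key.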

    Comments (4) + TrackBacks (0) | Category: social software

    Anil on the loss of accidental social intermediaries

    Posted by Clay Shirky

    Interesting Anil Dash post on the loss of accidental social intermediaries as communication tools switch from being place-centric to person-centric:
    We might not notice that those social intermediaries are gone, but I suspect when we recall in the future the anecdotes that result from them, the kids who are born today won't understand how a phone number used to belong to a family or a group of people or how, in the days before email, a message might pass under the wary gaze of a few unanticipated recipients. An "address" used to refer to a place, not a person. Some would say this loss in accidental connectivity is more than made up for by the immediacy and efficiency of contemporary communication, and I wouldn't argue that point, for the most part. But I can't help but wonder if the delightful and frequently inspirational value that can come from a conversation that starts with "Hold on, I'll get him for you... By the way, who should I tell him is calling?" might be worth more than we realized, and that we might be well served by a moment's reflection when noting its passing.

    Comments (2) + TrackBacks (0) | Category: social software

    January 13, 2004

    Inequality

    Posted by Clay Shirky

    I gave a half-hearted answer to Joi's Are Blogs Just post, explicitly ignoring the larger philosophical issues he raised from the "Inequality and Fairness" section of Power Laws, Weblogs, and Inequality. That was a cop-out. I didn't answer his question because it seems to assume some things about inequality -- that one should take a position for or against it -- that I don't actually believe. So here's what I do believe: inequality is inevitable, and being for or against it makes no more sense than being for or against the weather.

    ...continue reading.

    Comments (26) + TrackBacks (0) | Category: social software

    January 12, 2004

    The curious case of Amazon's 800 number

    Posted by Clay Shirky

    So Amazon has a 1-800 number, where you can speak to a real live human-type person. It is, however, hard to find. Or rather, it was, until last week. Kevin Kelly, who publishes the wonderful Cool Tools, listed Amazon's 800 number, saying
    [...] No other merchant online or offline has provided the ease and accuracy of ordering as Amazon does. Still, in my experience there are occasionally glitches that their email-bots can't deal with, usually entailing a minor billing snafu. In these rare cases you need Amazon.com's almost-secret real-person customer service telephone number. You won't find it on their website. I once got it by calling 800 directory assistance. In any case, they make it hard to find because a call costs Amazon more, so you should jot down this number for those special moments when only a human will do: 800-201-7575.
    From Cool Tools, it was later posted to BoingBoing (and now, of course, here). Kevin understands why the number is hard to find, and is trying to pass it along with the caveat that the reader should exercise some self-restraint when using it, and Xeni, who posted it to BoingBoing, passed along Kevin's caveat. But if that worked, there would be no need to make it hard to find in the first place.

    What I find interesting about this is the parallels with spam. Amazon can't afford not to have an 800 number, but they also know that if it gets in wide circulation, their customers will have much lower thresholds for calling it than Amazon wants them to have, so they try to make sure that the potential caller is willing to expend some energy to get the number. But the old social gradients that would mean slow diffusion of the number are gone, so now it's everywhere.

    It will be interesting to see what Amazon does in response. Use of the number will presumably go up. They could let the service get worse, moving the requirement that the customer expend energy to reach a live person into the phone wait time (the usual strategy); they could staff up the 800 number and raise the funds by raising prices on the site; or they could even change 800 numbers periodically, the way people trying to shake off spam do. Even sharing little tips with your friends gets conducted in a g