Corante: Many-to-Many
Category Archives: social software

February 28, 2008

My book. Let me Amazon show you it.

Posted by Clay Shirky

I’m delighted to say that online bookstores are shipping copies of Here Comes Everybody today, and that it has gotten several terrific notices in the blogosphere:

Cory Doctorow:
Clay’s book makes sense of the way that groups are using the Internet. Really good sense. In a treatise that spans all manner of social activity from vigilantism to terrorism, from Flickr to Howard Dean, from blogs to newspapers, Clay unpicks what has made some “social” Internet media into something utterly transformative, while other attempts have fizzled or fallen to griefers and vandals. Clay picks perfect anecdotes to vividly illustrate his points, then shows the larger truth behind them.
Russell Davies:
Here Comes Everybody goes beyond wild-eyed webby boosterism and points out what seems to be different about web-based communities and organisation and why it’s different; the good and the bad. With useful and interesting examples, good stories and sticky theories. Very good stuff.
Eric Nehrlich:
These newly possible activities are moving us towards the collapse of social structures created by technology limitations. Shirky compares this process to how the invention of the printing press impacted scribes. Suddenly, their expertise in reading and writing went from essential to meaningless. Shirky suggests that those associated with controlling the means to media production are headed for a similar fall.
Philip Young:
Shirky has a piercingly sharp eye for spotting the illuminating case studies - some familiar, some new - and using them to energise wider themes. His basic thesis is simple: “Everywhere you look groups of people are coming together to share with one another, work together, take some kind of public action.” The difference is that today, unlike even ten years ago, technological change means such groups can form and act in new and powerful ways. Drawing on a wide range of examples Shirky teases out remarkable contrasts with what has been the expected logic, and shows quite how quickly the dynamics of reputation and relationships have changed.

Comments (30) + TrackBacks (0) | Category: social software

February 7, 2008

My book. Let me show you it.

Posted by Clay Shirky

I’ve written a book, called Here Comes Everybody: The Power of Organizing Without Organizations, which is coming out in a month. It’s coming out first in the US and UK (and in translation later this year in Holland, Portugal and Brazil, Korea, and China.)

Here Comes Everybody is about why new social tools matter for society. It is a non-techie book for the general reader (the letters TCP IP appear nowhere in that order). It is also post-utopian (I assume that the coming changes are both good and bad) and written from the point of view I have adopted from my students, namely that the internet is now boring, and the key question is what we are going to do with it.

One of the great frustrations of writing a book as opposed to blogging is seeing a new story that would have been a perfect illustration, or deepened an argument, and not being able to add it. To remedy that, I’ve just launched a new blog, at HereComesEverybody.org, to continue writing about the effects of social tools.

Also, I’ve convinced the good folks at Penguin Press to let me give a few review copies away to people in the kinds of communities the book is about. I’ve got half a dozen copies to give to anyone reading this, with the only quid pro quo being that you blog your reactions to it, good, bad, or indifferent, some time in the next month or so. Drop me a line if you would like a review copy — clay@shirky.com.

Update: Wow. What a great response — we’ve given out all the copies we can, but many thanks for all the interest.

Comments (117) + TrackBacks (0) | Category: social software

November 13, 2007

It's Live! New JCMC on Social Network Sites

Posted by danah boyd

It gives me unquantifiable amounts of joy to announce that the JCMC special theme issue on “Social Network Sites” is now completely birthed. It was a long and intense labor, but all eight newborn articles are doing just fine and the new mommies are as proud as could be. So please, join us in our celebration by heading on over to the Journal of Computer-Mediated Communication and snuggling up to an article or two. The more you love them, the more they’ll prosper!

JCMC Special Theme Issue on “Social Network Sites”
Guest Editors: danah boyd and Nicole Ellison
http://jcmc.indiana.edu/vol13/issue1/

Please feel free to pass this announcement on to anyone you think might find value from this special issue.

Comments (0) + TrackBacks (1) | Category: social software

November 3, 2007

Race/ethnicity and parent education differences in usage of Facebook and MySpace

Posted by danah boyd

In June, I wrote a controversial blog essay about how U.S. teens appeared to be self-dividing by class on MySpace and Facebook during the 2006-2007 school year. This piece got me into loads of trouble for all sorts of reasons, forcing me to respond to some of the most intense critiques.

While what I was observing went beyond what could be quantitatively measured, certain aspects of it could be measured. To my absolute delight, Eszter Hargittai (professor at Northwestern) had collected data to measure certain aspects of the divide that I was trying to articulate. Not surprising (to me at least), what she was seeing lined up completely with what I was seeing on the ground.

Her latest article “Whose Space? Differences Among Users and Non-Users of Social Network Sites” (published as part of the JCMC special issue on social network sites that Nicole Ellison and I guest edited) suggests that Facebook and MySpace usage are divided by race/ethnicity and parent education (two common measures of “class” in the U.S.). Her findings are based on a survey of 1,060 first-year students at the diverse University of Illinois at Chicago campus during February and March of 2007. For more details on her methodology, see her methods section.

While over 99% of the students had heard of both Facebook and MySpace, 79% use Facebook and 55% use MySpace. The story looks a bit different when you break the numbers down by race/ethnicity and parent education.

While Eszter is not able to measure the other aspects of lifestyle that I was trying to describe that differentiate usage, she is able to show that Facebook and MySpace usage differs by race/ethnicity and parent education. These substitutes for “class” can be contested, but what is important here is that there are genuine differences in usage patterns, even though familiarity with both sites is nearly universal. People are segmenting themselves in networked publics and this links to the ways in which they are segmented in everyday life. Hopefully Eszter’s article helps those who can’t read qualitative data understand that what I was observing is real and measurable.

(We are still waiting for all of the JCMC articles from our special issue to be live on the site. For more information on this special issue, please see the Introduction that Nicole and I wrote: Social Network Sites: Definition, History, and Scholarship.)

Discussion: Apophenia

Comments (0) + TrackBacks (0) | Category: social software

August 2, 2007

history of social network sites (a work-in-progress)

Posted by danah boyd

As many of you know, Nicole Ellison and I are guest editing a special issue of JCMC. As a part of this issue, we are writing an introduction that will include a description of social network sites, a brief history of them, a literature review, a description of the works in this issue, and a discussion of future research. We have decided to put a draft of our history section up to solicit feedback from those of you who know this space well. It is a work-in-progress so please bear with us. But if you have suggestions, shout out.

history of social network sites (a work-in-progress)

In particular, we want to know: 1) Are we reporting anything inaccurately? 2) What are we missing?

Comments (0) + TrackBacks (0) | Category: social software

August 1, 2007

New Freedom Destroys Old Culture: A response to Nick Carr

Posted by Clay Shirky

I have never understood Nick Carr’s objections to the cultural effects of the internet. He’s much too smart to be lumped in with nay-sayers like Keen, and when he talks about the effects of the net on business, he sounds more optimistic, even factoring in the wrenching transition. So why aren’t the cultural effects a similar cause for optimism, even accepting the wrenching transition in those domains as well?

I think I finally understood the dichotomy between his reading of business and culture after reading Long Player, his piece on metadata and what he calls “the myth of liberation”, a post spurred in turn by David Weinberger’s Everything Is Miscellaneous.

Carr discusses the ways in which the long-playing album was both conceived of and executed as an aesthetic unit, its length determined by a desire to hold most of the classical canon on a single record, and its possibilities exploited by musicians who created for the form — who created albums, in other words, rather than mere bags of songs. He illustrates this with an exegesis of the Rolling Stones’ Exile on Main Street, showing how the overall construction makes that album itself a work of art.

Carr uses this point to take on what he calls the myth of liberation: “This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world - a world of profound physical and economic constraints - from which the web is now liberating us.” Carr observes, correctly, that the LP was what it was in part for aesthetic reasons, and the album, as a unit, became what it became in the hands of people who knew how to use it.

That is not, however, the neat story Carr wants it to be, and the messiness of the rest of the story is key, I think, to the anxiety about the effects on culture, his and others’.

The LP was an aesthetic unit, but one designed within strong technical constraints. When Edward Wallerstein of Columbia Records was trying to figure out how long the long-playing format should be, he settled on 17 minutes a side as something that would “…enable about 90% of all classical music to be put on two sides of a record.” But why only 90%? Because 100% would be impossible — the rest of the canon was too long for the technology of the day. And why should you have to flip the record in the middle? Why not have it play straight through? Impossible again.

Contra Carr, in other words, the pre-digital world was a world of profound physical and economic constraints. The LP could hold 34 minutes of music, which was a bigger number of minutes than some possibilities (33 possibilities, to be precise), but smaller than an infinite number of others. The album as a form provided modest freedom embedded in serious constraints, and the people who worked well with the form accepted those constraints as a way of getting at those freedoms. And now the constraints are gone; there is no necessary link between an amount of music and its playback vehicle.

And what Carr dislikes, I think, is evidence that the freedoms of the album were only as valuable as they were in the context of the constraints. If Exile on Main Street was as good an idea as he thinks it was, it would survive the removal of those constraints.

And it hasn’t.

Here is the iTunes snapshot of Exile, sorted by popularity (screenshot not reproduced here):

While we can’t get absolute numbers from this, we can get relative ones — many more people want to listen to Tumbling Dice or Happy than Ventilator Blues or Turd on the Run, even though iTunes makes it cheaper per song to buy the whole album. Even with a financial inducement to preserve the album form, the users still say no thanks.

The only way to support the view that Exile is best listened to as an album, in other words, is to dismiss the actual preferences of most of the people who like the Rolling Stones. Carr sets about this task with gusto:
Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night - or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.
Only a fool. If you are one of those people who has, say, Happy on your iPod (as I do), then you are a fool (though you have lots of company). And of course this foolishness extends to the recording industry, and to the Stones themselves, who went and put Tumbling Dice on a Greatest Hits collection. (One can only imagine how Carr feels about Greatest Hits collections.)

I think Weinberger’s got it right about liberation, even taking at face value the cartoonish version Carr offers. Prior to unlimited perfect copyability, media was defined by profound physical and economic constraints, and now it’s not. Fewer constraints and better matching of supply and demand are good for business, because business is not concerned with historical continuity. Fewer constraints and better matching of supply and demand are bad for current culture, because culture continually mistakes current exigencies for eternal verities.

This isn’t just Carr of course. As people come to realize that freedom destroys old forms just as surely as it creates new ones, the lament for the long-lost present is going up everywhere. As another example, Sven Birkerts, the literary critic, has a post in the Boston Globe, Lost in the blogosphere, that is almost indescribably self-involved. His two complaints are that newspapers are reducing the space allotted to literary criticism, and that too many people on the Web are writing about books. In other words, literary criticism, as practiced during Birkerts’ lifetime, was just right, and having either fewer or more writers is equally lamentable.

In order that the “Life was better when I was younger” flavor of his complaint not become too obvious, Birkerts frames the changing landscape not as a personal annoyance but as A Threat To Culture Itself. As he puts it “…what we have been calling “culture” at least since the Enlightenment — is the emergent maturity that constrains unbounded freedom in the interest of mattering.”

This is silly. The constraints of print were not a product of “emergent maturity.” They were accidents of physical production. Newspapers published book reviews because their customers read books and because publishers took out ads, the same reason they published pieces about cars or food or vacations. Some newspapers hired critics because they could afford to, others didn’t because they couldn’t. Ordinary citizens didn’t write about books in a global medium because no such medium existed. None of this was an attempt to “constrain unbounded freedom” because there was no such freedom to constrain; it was just how things were back then.

Genres are always created in part by limitations. Albums are as long as they are because Wallerstein picked a length his engineers could deliver. Novels are as long as they are because Aldus Manutius’s italic letters and octavo bookbinding could hold about that many words. The album is already a marginal form, and the novel will probably become one in the next fifty years, but that also happened to the sonnet and the madrigal.

I’m old enough to remember the dwindling world, but it never meant enough to me to make me a nostalgist. In my students’ work I see hints of a culture that takes both the new freedoms and the new constraints for granted, but the fullest expression of that world will probably come after I’m dead. But despite living in transitional times, I’m not willing to pretend that the erosion of my worldview is a crisis for culture itself. It’s just how things are right now.

Carr fails to note that the LP was created for classical music, but used by rock and roll bands. Creators work within whatever constraints exist at the time they are creating, and when the old constraints give way, new forms arise while old ones dwindle. Some work from the older forms will survive — Shakespeare’s 116th sonnet remains a masterwork — while other work will wane — Exile as an album-length experience is a fading memory. This kind of transition isn’t a threat to Culture Itself, or even much of a tragedy, and we should resist attempts to preserve old constraints in order to defend old forms.

Comments (49) + TrackBacks (0) | Category: social software

July 26, 2007

responding to critiques of my essay on class

Posted by danah boyd

One month ago, I put out a blog essay that took on a life of its own. This essay addressed one of America’s most taboo topics: class. Due to personal circumstances, I wasn’t online as things spun further and further out of control and I had neither the time nor the emotional energy to address all of the astounding misinterpretations that I saw as a game of digital telephone took hold. I’ve browsed the hundreds of emails, thousands of blog posts, and thousands of comments across the web. I’m in awe of the amount of time and energy people put into thinking through and critiquing my essay. In the process, I’ve also realized that I was not always so effective at communicating what I wanted to communicate. To clarify some issues, I decided to put together a long response that addresses a variety of different issues.

Responding to Responses to: “Viewing American class divisions through Facebook and MySpace”

Please let me know if this does or does not clarify the concerns that you’ve raised.

(Comments on Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

July 25, 2007

Tagmashes from LibraryThing

Posted by David Weinberger

Tim Spalding at LibraryThing has introduced a new wrinkle in the tagosphere…and wrinkles are welcome because they pucker space in semantically interesting ways. (Block that metaphor!)

At LibraryThing, people list their books. And, of course, we tag ‘em up good. For example, Freakonomics has 993 unique tags (ignoring case differences), and 8,760 total tags. Now, tags are of course useful. But so are subject headings. So, Tim has come up with a clever way of deriving subject headings bottom up. He’s introduced “tagmashes,” which are (in essence) searches on two or more tags. So, you could ask to see all the books tagged “france” and “wwii.” But the fact that you’re asking for that particular conjunction of tags indicates that those tags go together, at least in your mind and at least at this moment. LibraryThing turns that tagmash into a page with a persistent URL. The page presents a de-duped list of the results, ordered by interestingness, and with other tagmashes suggested, all based on the magic of statistics. Over time, a large, relatively flat set of subject headings may emerge, which, subject to further analysis, could get clumpier and clumpier with meaning.

You may be asking yourself how this differs from saved searches. I asked Tim. He explained that while the system does a search when you ask for a new tagmash, it presents the tagmash as if it were a topic, not a search. For one thing, lists of search results generally don’t have persistent URLs. More important, to the user, tagmash pages feel like topic pages, not search results pages.

And you may also be asking yourself how this differs from a folksonomy. While I’d want to count it as a folksonomic technique, in a traditional folksonomy (oooh, I hope I’m the first to use that phrase!), a computer can notice which terms are used most often, and might even notice some of the relationships among the terms. With tagmashes, the info that this tag is related to that one is gleaned from the fact that a human said that they were related.
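To make the mechanics concrete, here’s a minimal sketch of how a tagmash page might be derived and stored. Everything in it (the toy data, the function names, the crude stand-in for “interestingness”) is invented for illustration; Tim hasn’t published LibraryThing’s actual implementation.

    # Toy tagging data: book title -> the set of tags users have applied.
    BOOKS = {
        "Suite Francaise":    {"france", "wwii", "fiction"},
        "The Longest Day":    {"france", "wwii", "history"},
        "Is Paris Burning?":  {"france", "wwii", "history"},
        "A Year in Provence": {"france", "memoir"},
        "Stalingrad":         {"wwii", "history"},
    }

    # Already-built tagmash pages, keyed by a canonical name. The stable
    # key is what gives a tagmash a persistent URL, making it feel like
    # a topic page rather than a throwaway search.
    TAGMASH_PAGES = {}

    def tagmash(*tags):
        """Return the de-duped list of books matching every tag in the mash."""
        key = ",".join(sorted(t.lower() for t in tags))
        if key not in TAGMASH_PAGES:
            wanted = set(key.split(","))
            matches = [title for title, tagged in BOOKS.items() if wanted <= tagged]
            # Crude stand-in for "interestingness": rank by how heavily
            # tagged a book is; the real site uses fancier statistics.
            matches.sort(key=lambda title: len(BOOKS[title]), reverse=True)
            TAGMASH_PAGES[key] = matches
        return TAGMASH_PAGES[key]

    print(tagmash("france", "wwii"))
    # ['Suite Francaise', 'The Longest Day', 'Is Paris Burning?']

The detail that matters is the canonical key: ask for “wwii, france” or “france, wwii” and you land on the same persistent page, which is how a search hardens into something like a subject heading.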

LibraryThing keeps innovating this way. It’s definitely a site to watch.

Comments (4) + TrackBacks (0) | Category: social software

July 20, 2007

Spolsky on Blog Comments: Scale matters

Posted by Clay Shirky

Joel Spolsky approvingly quotes Dave Winer on the subject of blog-comments:

The cool thing about blogs is that while they may be quiet, and it may be hard to find what you’re looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed. And if you know history, the most important ideas often are the unpopular ones…. That’s what’s important about blogs, not that people can comment on your ideas. As long as they can start their own blog, there will be no shortage of places to comment.

Joel then adds his own observations:

When a blog allows comments right below the writer’s post, what you get is a bunch of interesting ideas, carefully constructed, followed by a long spew of noise, filth, and anonymous rubbish that nobody … nobody … would say out loud if they had to take ownership of their words.

This can be true, all true, as any casual read of blog comments can attest. BoingBoing turned off their comments years ago, because they’d long since passed the scale where polite conversation was possible. The Tragedy of the Conversational Commons becomes too persistently tempting when an audience grows large. At BoingBoing scale, John Gabriel’s Greater Internet Fuckwad Theory cannot be repealed.

But the uselessness of comments is not the universal truth that Dave or (fixed, per Dave’s comment below) Joel makes it out to be, for two reasons. First, posting and conversation are different kinds of things — same keyboard, same text box, same web page, different modes of expression. Second, the sites that suffer most from anonymous postings and drivel are the ones operating at large scale.

If you are operating below that scale, comments can be quite good, in a way not replicable in any “everyone post to their own blog” scenario. To take but three recent examples, look at the comments on my post on Michael Gorman, on danah’s post at Apophenia on fame, narcissism and MySpace and on Kieran Healy’s biological thought experiment on Crooked Timber.

Those three threads contain a hundred or so comments, including some distinctly low-signal bouquets and brickbats. But there is also spirited disputation and emendation, alternate points of view, linky goodness, and a conversational sharpening of the argument on all sides, in a way that doesn’t happen blog to blog. This, I think, is the missing element in Dave and Joel’s points — two blog posts do not make a conversation. The conversation that can be attached to a post is different in style and content, and in intent and effect, than the post itself.

I have long thought that the ‘freedom of speech means no filtering’ argument is dumb where blogs are concerned — it is the blogger’s space, and he or she should feel free to delete, disemvowel, or otherwise dispose of material, for any reason, or no reason. But we’ve long since passed the point where what happens on a blog is mainly influenced by what the software does — the question to ask about comments is not whether they are available, but how a community uses them. The value in blogs as communities of practice is considerable, and it’s a mistake to write off comment threads on those kinds of blogs just because, in other environments, comments are lame.

Comments (20) + TrackBacks (0) | Category: social software

July 10, 2007

"The internet's output is data, but its product is freedom"

Posted by Clay Shirky

I said that in Andrew Keen: Rescuing ‘Luddite’ from the Luddites, to which Phil, one of the commenters, replied

There are assertions of verifiable fact and then there are invocations of shared values. Don’t mix them up.

I meant this as an assertion of fact, but re-reading it after Phil’s feedback, it comes off as simple flag-waving, since I’d compressed the technical part of the argument out of existence. So here it is again, in slightly longer form:

The internet’s essential operation is to encode and transmit data from sender to receiver. In 1969, this was not a new capability; we’d had networks that did this since the telegraph. By the day of the internet’s launch, we had a phone network that was nearly a hundred years old, alongside more specialized networks for things like telexes and wire-services for photographs.

Thus the basics of what the internet did (and does) aren’t enough to explain its spread; what it is for has to be accounted for by looking at the difference between it and the other data-transfer networks of the day.

The principal difference between older networks and the internet (ARPAnet, at its birth) is the end-to-end principle, which says, roughly, “The best way to design a network is to allow the sender and receiver to decide what the data means, without asking the intervening network to interpret the data.” The original expression of this idea is from the Saltzer, Reed, and Clark paper End-to-End Arguments in System Design; the same argument is explained in other terms in Isenberg’s Stupid Network and Searls and Weinberger’s World of Ends.

What the internet is for, in other words, what made it worth adopting in a world already well provisioned with other networks, was that the sender and receiver didn’t have to ask for either help or permission before inventing a new kind of message. The core virtue of the internet was a huge increase in the technical freedom of all of its participating nodes, a freedom that has been translated into productive and intellectual freedoms for its users.
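To see the principle in miniature, here’s a toy sketch, not any real protocol or API (all names are invented). The “network” below just moves opaque bytes, so the endpoints can invent a new kind of message without asking it for help or permission:

    import json

    def network_send(payload: bytes) -> bytes:
        """All the network does: carry bytes from sender to receiver.
        It never parses the payload, so new uses need no approval."""
        return payload  # stand-in for actual routing and transmission

    # The endpoints decide what the bytes mean. Today, a JSON chat message...
    msg = json.dumps({"kind": "chat", "text": "hello"}).encode()
    print(json.loads(network_send(msg)))

    # ...tomorrow, a brand-new format the network has never seen.
    # Nothing in the middle needs an upgrade.
    new_msg = b"IMG|v1|" + bytes(range(10))
    kind, version, body = network_send(new_msg).split(b"|", 2)
    print(kind, version, len(body))

Contrast the phone network of 1969, where deploying a new kind of message meant convincing the carrier to build support for it.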

As Scott Bradner put it, the Internet means you don’t have to convince anyone else that something is a good idea before trying it. The upshot is that the internet’s output is data, but its product is freedom.

Comments (7) + TrackBacks (0) | Category: social software

July 9, 2007

Andrew Keen: Rescuing 'Luddite' from the Luddites

Posted by Clay Shirky

Last week, while in a conversation with Andrew Keen on the radio show To The Point, he suggested that he was not opposed to the technology of the internet, but rather to how it was being used.

This reminded me of Michael Gorman’s insistence that digital tools are fine, so long as they are shaped to replicate the social (and particularly academic) institutions that have grown up around paper.

There is a similar strand in these two arguments, namely that technology is one thing, but the way it is used is another, and that the two can and should be separated. I think this view is in the main wrong, even Luddite, but to make such an accusation requires a definition of Luddite considerably more grounded than ‘anti-technology’ (a vacuous notion — no one who wears shoes can reasonably be called ‘anti-technology.’) Both Keen and Gorman have said they are not opposed to digital technology. I believe them when they say this, but I still think their views are Luddite, by historical analogy with the real Luddite movement of the early 1800s.

What follows is a long detour into the Luddite rebellion, followed by a reply to Keen about the inseparability of the internet from its basic effects.

Infernal Machines

The historical record is relatively clear. In March of 1811, a group of weavers in Nottinghamshire began destroying mechanical looms. This was not the first such riot — in the late 1700s, when Parliament refused to guarantee the weavers’ control of the supply of woven goods, workers in Nottingham destroyed looms as well. The Luddite rebellion, though, was unusual for several reasons: its breadth and sustained character, taking place in many industrializing towns at once; its having a nominal leader, going by the name Ned Ludd, General Ludd, or King Ludd (the pseudonym itself a reference to an apocryphal figure from an earlier loom-breaking riot in the late 1700s); and its written documentation of grievances and rationale. The rebellion, which lasted two years, was ultimately put down by force, and was over in 1813.

Over the last two decades, several historians have re-examined the record of the Luddite movement, and have attempted to replace the simplistic view of Luddites as being opposed to technological change with a more nuanced accounting of their motivations and actions. The common thread of the analysis is that the Luddites didn’t object to mechanized wide-frame looms per se, they objected to the price collapse of woven goods caused by the way industrialists were using the looms. Though the target of the Luddite attacks were the looms themselves, their concerns and goals were not about technology but about economics.

I believe that the nuanced view is wrong, and that the simpler view of Luddites as counter-revolutionaries is in fact the correct one. The romantic view of Luddites as industrial-age Robin Hoods, concerned not to halt progress but to embrace justice, runs aground on both the written record, in which the Luddites outline a program that is against any technology that increases productivity, and on their actions, which were not anti-capitalist but anti-consumer. It also assumes that there was some coherent distinction between technological and economic effects of the looms; there was none.

A Technology is For Whatever Happens When You Use It

The idea that the Luddites were targeting economic rather than technological change is a category fallacy, where the use of two discrete labels (technology and economics, in this case) is wrongly thought to demonstrate two discrete aspects of the thing labeled (here wide-frame looms.) This separation does not exist in this case; the technological effects of the looms were economic. This is because, at the moment of its arrival, what a technology does and what it is for are different.

What any given technology does is fairly obvious: rifles fire bullets, pencils make marks, looms weave cloth, and so on. What a technology is for, on the other hand, what leads people to adopt it, is whatever new thing becomes possible on the day of its arrival. The Winchester repeating rifle was not for firing bullets — that capability already existed. It was for decreasing the wait between bullets. Similarly, pencils were not for writing but for portability, and so on.

And the wide-frame looms, target of the Luddites’ destructive forays? What were they for? They weren’t for making cloth — humankind was making cloth long before looms arrived. They weren’t for making better cloth — in 1811, industrial cloth was inferior to cloth spun by the weavers. Mechanical looms were for making cheap cloth, lots and lots of cheap cloth. The output of a mechanical loom was cloth, but the product of such a loom was savings.

The wide-frame loom was a cost-lowering machine, and as such, it threatened the old inefficiencies on which the Luddites’ revenues depended. Their revolt had the goal of preventing those savings from being passed along to the customer. One of their demands was that Parliament outlaw “all Machinery hurtful to Commonality” — all machines that worked efficiently enough to lower prices.

Perhaps more tellingly, and against recent fables of Luddism as a principled anti-capitalist movement, they refrained from breaking the looms of industrial weavers who didn’t lower their prices. What the Luddites were rioting in favor of was price gouging; they didn’t care how much a wide-frame loom might save in production costs, so long as none of those savings were passed on to their fellow citizens.

Their common cause was not with citizens and against industrialists, it was against citizens and with those industrialists who joined them in a cartel. The effect of their campaign, had it succeeded, would have been to raise, rather than lower, the profits of the wide-frame operators, while producing no benefit for those consumers who used cloth in their daily lives, which is to say the entire population of England. (Tellingly, none of the “Robin Hood” versions of Luddite history make any mention of the effect of high prices on the buyers of cloth, just on the sellers.)

Back to Keen

A Luddite argument is one in which some broadly useful technology is opposed on the grounds that it will discomfit the people who benefit from the inefficiency the technology destroys. An argument is especially Luddite if the discomfort of the newly challenged professionals is presented as a general social crisis, rather than as trouble for a special interest. (“How will we know what to listen to without record store clerks!”) When the music industry suggests that the prices of music should continue to be inflated, to preserve the industry as we have known it, that is a Luddite argument, as is the suggestion that Google pay reparations to newspapers or the phone company’s opposition to VoIP undermining their ability to profit from older ways of making phone calls.

This is what makes Keen’s argument a Luddite one — he doesn’t oppose all uses of technology, just ones that destroy older ways of doing things. In his view, the internet does not need to undermine the primacy of the copy as the anchor for both filtering and profitability.

But Keen is wrong. What the internet does is move data from point A to B, but what it is for is empowerment. Using the internet without putting new capabilities into the hands of its users (who are, by definition, amateurs in most things they can now do) would be like using a mechanical loom and not lowering the cost of buying a coat — possible, but utterly beside the point.

The internet’s output is data, but its product is freedom, lots and lots of freedom. Freedom of speech, freedom of the press, freedom of association, the freedom of an unprecedented number of people to say absolutely anything they like at any time, with the reasonable expectation that those utterances will be globally available, broadly discoverable at no cost, and preserved for far longer than most utterances are, and possibly forever.

Keen is right in understanding that this massive supply-side shock to freedom will destabilize and in some cases destroy a number of older social institutions. He is wrong in believing that there is some third way — let’s deploy the internet, but not use it to increase the freedom of amateurs to do as they like.

It is possible to want a society in which new technology doesn’t demolish traditional ways of doing things. It is not possible to hold this view without being a Luddite, however. That view — incumbents should wield veto-power over adoption of tools they dislike, no matter the positive effects for the citizenry — is the core of Luddism, then and now.

Comments (26) + TrackBacks (0) | Category: social software

June 27, 2007

knowledge access as a public good

Posted by danah boyd

Over at the Britannica Blog, Michael Gorman (the former president of the American Library Association) wrote a series of posts concerning web2.0. In short, he’s against it and thinks everything to do with web2.0 and Wikipedia is bad bad bad. A handful of us were given access to the posts before they were posted and asked to craft responses. The respondents are scholars and thinkers and writers of all stripes (including my dear friend and fellow M2M blogger Clay Shirky). Because I addressed all of his arguments at once, my piece was held to be released in the final week of the public discussion. And that time is now. So enjoy!

(Comments at Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

June 24, 2007

viewing American class divisions through Facebook and MySpace

Posted by danah boyd

Over the last six months, i’ve noticed an increasing number of press articles about how high school teens are leaving MySpace for Facebook. That’s only partially true. There is indeed a change taking place, but it’s not a shift so much as a fragmentation. Until recently, American teenagers were flocking to MySpace. The picture is now being blurred. Some teens are flocking to MySpace. And some teens are flocking to Facebook. Who goes where gets kinda sticky, because it seems to primarily have to do with socio-economic class.

I’ve been trying to figure out how to articulate this division for months. I have not yet succeeded. So, instead, I decided to write a blog essay addressing what I’m seeing. I suspect that this will be received with criticism, but my hope is that the readers who encounter this essay might be able to help me think through this. In other words, I want feedback on this piece.

Viewing American class divisions through Facebook and MySpace

What I lay out in this essay is rather disconcerting. Hegemonic American teens (i.e. middle/upper class, college-bound teens from upwardly mobile or well-off families) are all on or switching to Facebook. Marginalized teens, teens from poorer or less educated backgrounds, subculturally-identified teens, and other non-hegemonic teens continue to be drawn to MySpace. A class division has emerged and it is playing out in the aesthetics, the kinds of advertising, and the policy decisions being made.

Please check out this essay and share your thoughts in the comments on Apophenia.

Comments (0) + TrackBacks (0) | Category: social software

June 20, 2007

Gorman, redux: The Siren Song of the Internet

Posted by Clay Shirky

Michael Gorman has his next post up at the Britannica blog: The Siren Song of the Internet. My reply is also up, and posted below. The themes of the historical lessons of Luddism are also being discussed in the comments to last week’s Gorman response, Old Revolutions Good, New Revolutions Bad.

Siren Song of the Internet contains a curious omission and a basic misunderstanding. The omission is part of his defense of the Luddites; the misunderstanding is about the value of paper and the nature of e-books.

The omission comes early: Gorman cavils at being called a Luddite, though he then embraces the label, suggesting that they “…had legitimate grievances and that their lives were adversely affected by the mechanization that led to the Industrial Revolution.” No one using the term Luddite disputes the effects on pre-industrial weavers. This is the general case — any technology that fixes a problem (in this case the high cost of homespun goods) threatens the people who profit from the previous inefficiency. However, Gorman omits mentioning the Luddite response: an attempt to halt the spread of mechanical looms which, though beneficial to the general populace, threatened the livelihoods of King Ludd’s band.

By labeling the Luddite program legitimate, Gorman seems to be suggesting that incumbents are right to expect veto power over technological change. Here his stand in favor of printed matter is inconsistent, since printing was itself enormously disruptive, and many people wanted veto power over its spread as well. Indeed, one of the great Luddites of history (if we can apply the label anachronistically) was Johannes Trithemius, who argued in the late 1400s that the printing revolution be contained, in order to shield scribes from adverse effects. This is the same argument Gorman is making, in defense of the very tools Trithemius opposed. His attempt to rescue Luddism looks less like a principled stand than special pleading: the printing press was good, no matter what happened to the scribes, but let’s not let that sort of thing happen to my tribe.

Gorman then defends traditional publishing methods, and ends up conflating several separate concepts into one false conclusion, saying “To think that digitization is the answer to all that ails the world is to ignore the uncomfortable fact that most people, young and old, prefer to interact with recorded knowledge and literature in the form of print on paper.”

Dispensing with the obvious straw man of “all that ails the world”, a claim no one has made, we are presented with a fact that is supposed to be uncomfortable — it’s good to read on paper. Well duh, as the kids say; there’s nothing uncomfortable about that. Paper is obviously superior to the screen for both contrast and resolution; Hewlett-Packard would be about half the size it is today if that were not true. But how did we get to talking about paper when we were talking about knowledge a moment ago?

Gorman is relying on metonymy. When he notes a preference for reading on paper he means a preference for traditional printed forms such as books and journals, but this is simply wrong. The uncomfortable fact is that the advantages of paper have become decoupled from the advantages of publishing; a big part of preference for reading on paper is expressed by hitting the print button. As we know from Lyman and Varian’s “How Much Information” study, “…the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.”

We see these effects everywhere: well over 90% of new information produced in any year is stored electronically. Use of the physical holdings of libraries is falling, while the use of electronic resources is rising. Scholarly monographs, contra Gorman, are increasingly distributed electronically. Even the physical form of newspapers is shrinking in response to shrinking demand, and so on.

The belief that a preference for paper leads to a preference for traditional publishing is a simple misunderstanding, demonstrated by his introduction of the failed e-book program as evidence that the current revolution is limited to “hobbyists and premature adopters.” The problems with e-books are that they are not radical enough: they dispense with the best aspect of books (paper as a display medium) while simultaneously aiming to disable the best aspects of electronic data (sharability, copyability, searchability, editability.) The failure of e-books is in fact bad news for Gorman’s thesis, as it demonstrates yet again that users have an overwhelming preference for the full range of digital advantages, and are not content with digital tools that are designed to be inefficient in the ways that printed matter is inefficient.

If we gathered every bit of output from traditional publishers, we could line them up in order of vulnerability to digital evanescence. Reference works were the first to go — phone books, dictionaries, and thesauri have largely gone digital; the encyclopedia is going, as are scholarly journals. Last to go will be novels — it will be some time before anyone reads One Hundred Years of Solitude in any format other than a traditionally printed book. Some time, however, is not forever. The old institutions, and especially publishers and libraries, have been forced to use paper not just for display, for which it is well suited, but also for storage, transport, and categorization, things for which paper is completely terrible. We are now able to recover from those disadvantages, though only by transforming the institutions organized around the older assumptions.

The ideal situation, which we are groping our way towards, will be to have all written material, wherever it lies on the ‘information to knowledge’ continuum, in digital form, right up to the moment a reader wants it. At that point, the advantages of paper can be made manifest, either by printing on demand, or by using a display that matches paper’s superior readability. Many of the traditional managers of books and journals will suffer from this change, though it will benefit society as a whole. The question Gorman pointedly asks, by invoking Ned Ludd and his company, is whether we want that change to be in the hands of people who would be happy to discomfit society as a whole in order to preserve the inefficiencies that have defined their world.

Comments (6) + TrackBacks (0) | Category: social software

June 16, 2007

The Future Belongs to Those Who Take The Present For Granted: A return to Fred Wilson's "age question"

Posted by Clay Shirky

My friend Fred Wilson had a pair of posts a few weeks back, the first arguing that youth was, in and of itself, an advantage for tech entrepreneurs, and the second waffling on that question with the idea that age is a mindset.

I think Fred got it right the first time, and I said so at the time, in The (Bayesian) Advantages of Youth:

I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that newspapers are where you get your political news and how you look for a job. I know that if you need to take a trip, you visit a travel agent. In the last 15 years or so, I’ve had to unlearn those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.
Today, Fred seems to have returned to his original (and in my view correct) idea in The Age Question (continued):
It is incredibly hard to think of new paradigms when you’ve grown up reading the newspaper every morning. When you turn to TV for your entertainment. When you read magazines on the train home from work. But we have a generation coming of age right now that has never relied on newspapers, TV, and magazines for their information and entertainment.[…] The Internet is their medium and they are showing us how it needs to be used.

This is exactly right.

I think the real issue, of which age is a predictor, is this: the future belongs to those who take the present for granted. I had this thought while talking to Robert Cook of Metaweb, who are making Freebase. They need structured metadata, lots of structured metadata, and one of the places they are getting it is from Wikipedia, by spidering the bio boxes (among other things) for things like the birthplace and age of people listed in Freebase. While Andrew Keen is trying to get a conversation going on whether Wikipedia is a good idea, Metaweb takes it for granted as a stable part of the environment, which lets them see past this hurdle to the next one.
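As a rough illustration of what spidering a bio box involves (my sketch, not Metaweb’s actual pipeline): a Wikipedia infobox is wikitext full of key-value pairs, so extracting structured fields can start as simply as this, with fabricated page text:

    import re

    # Fabricated wikitext of the sort a spider might fetch from a biography page.
    WIKITEXT = """{{Infobox person
    | name        = Ada Lovelace
    | birth_place = London, England
    | birth_date  = 10 December 1815
    }}"""

    def parse_infobox(wikitext):
        """Collect '| key = value' lines from an infobox into a dict."""
        pairs = re.findall(r"^\s*\|\s*(\w+)\s*=\s*(.+)$", wikitext, re.MULTILINE)
        return {key: value.strip() for key, value in pairs}

    print(parse_infobox(WIKITEXT))
    # {'name': 'Ada Lovelace', 'birth_place': 'London, England', 'birth_date': '10 December 1815'}

A production spider would have to cope with nested templates, links, and inconsistent field names, but the bio boxes are regular enough that a taken-for-granted Wikipedia becomes a free source of structured data.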

This is not to handicap the success of Freebase itself — it takes a lot more than taking the present for granted to make a successful tool. But one easy way to fail is to assume that the past is more solid than it is, and the present more contingent. And the people least likely to make this mistake — the people best able to take the present for granted — are young people, for whom knowing what the world is really like is as easy as waking up in the morning, since this is the only world they’ve ever known.

Some things improve with age — I wouldn’t re-live my 20s if you paid me — but high-leverage ignorance isn’t one of them.

Comments (20) + TrackBacks (0) | Category: social software

June 13, 2007

Old Revolutions Good, New Revolutions Bad: A Response to Gorman

Posted by Clay Shirky

Encyclopedia Britannica has started a Web 2.0 Forum, where they are hosting a conversation around a set of posts by Michael Gorman. The first post, in two parts, is titled Web 2.0: The Sleep of Reason Brings Forth Monsters, and is a defense of the print culture against alteration by digital technologies. This is my response, which will be going up on the Britannica site later this week.

Web 2.0: The Sleep of Reason Brings Forth Monsters starts with a broad list of complaints against the current culture, from biblical literalism to interest in alternatives to Western medicine.

The life of the mind in the age of Web 2.0 suffers, in many ways, from an increase in credulity and an associated flight from expertise. Bloggers are called “citizen journalists”; alternatives to Western medicine are increasingly popular, though we can thank our stars there is no discernable “citizen surgeon” movement; millions of Americans are believers in Biblical inerrancy—the belief that every word in the Bible is both true and the literal word of God, something that, among other things, pits faith against carbon dating; and, scientific truths on such matters as medical research, accepted by all mainstream scientists, are rejected by substantial numbers of citizens and many in politics. Cartoonist Garry Trudeau’s Dr. Nathan Null, “a White House Situational Science Adviser,” tells us that: “Situational science is about respecting both sides of a scientific argument, not just the one supported by facts.”

This is meant to set the argument against a big canvas of social change, but the list is so at odds with the historical record as to be self-defeating.

The percentage of the US population believing in the literal truth of the Bible has remained relatively constant since the 1980s, while the percentage listing themselves as having “no religion” has grown. Interest in alternative medicine dates to at least the patent medicines of the 19th century; the biggest recent boost for that movement came under Reagan, when health supplements, soi-disant, were exempted from FDA scrutiny. Trudeau’s welcome critique of the White House’s assault on reason targets a political minority, not the internet-using population, and so on. If you didn’t know that this litany appeared under the heading Web 2.0, you might suspect Gorman’s target was anti-intellectualism during Republican administrations.

Even the part of the list specific to new technology gets it wrong. Bloggers aren’t called citizen-journalists; bloggers are called bloggers. Citizen-journalist describes people like Alisara Chirapongse, the Thai student who posted photos and observations of the recent coup during a press blackout. If Gorman can think of a better label for times when citizens operate as journalists, he hasn’t shared it with us.

Similarly, lumping Biblical literalism with Web 2.0 misses the mark. Many of the most active social media sites — Slashdot, Digg, Reddit — are rallying points for those committed to scientific truth. Wikipedia users have so successfully defended articles on Evolution, Creationism and so on from the introduction of counter-factual beliefs that frustrated literalists helped found Conservapedia, whose entry on Evolution is a farrago of anti-scientific nonsense.

But wait — if use of social media is bad, and attacks on the scientific method are bad, what are we to make of social media sites that defend the scientific method? Surely Wikipedia is better than Conservapedia on that score, no? Well, it all gets confusing when you start looking at the details, but Gorman is not interested in the details. His grand theory, of the hell-in-a-handbasket variety, avoids any look at specific instantiations of these tools — how do the social models of Digg and Wikipedia differ? does Huffington Post do better or worse than Instapundit on factual accuracy? — in favor of one sweeping theme: defense of incumbent stewards of knowledge against attenuation of their erstwhile roles.

There are two alternate theories of technology on display in Sleep of Reason. The first is that technology is an empty vessel, into which social norms may be poured. This is the theory behind statements like “The difference is not, emphatically not, in the communication technology involved.” (Emphasis his.) The second theory is that intellectual revolutions are shaped in part by the tools that sustain them. This is the theory behind his observation that the virtues of print were “…often absent in the manuscript age that preceded print.”

These two theories cannot both be true, so it’s odd to find them side by side, but Gorman does not seem to be comfortable with either of them as a general case. This leads to a certain schizophrenic quality to the writing. We’re told that print does not necessarily bestow authenticity and that some digital material does, but we’re also told that he consulted “authoritative printed sources” on Goya. If authenticity is an option for both printed and digital material, why does printedness matter? Would the same words on the screen be less scholarly somehow?

Gorman is adopting a historically contingent view: Revolution then was good, revolution now is bad. As a result, according to Gorman, the shift to digital and networked reproduction of information will fail unless it recapitulates the institutions and habits that have grown up around print.

Gorman’s theory about print — its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.

Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect — the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.

This improved ability to find both content and people is one of the core virtues of our age. Gorman insists that he was able to find “…the recorded knowledge and information I wanted [about Goya] in seconds.” This is obviously an impossibility for most of the population; if you wanted detailed printed information on Goya and worked in any environment other than a library, it would take you hours at least. This scholar’s-eye view is the key to Gorman’s lament: so long as scholars are content with their culture, the inability of most people to enjoy similar access is not even a consideration.

Wikipedia is the best known example of improved findability of knowledge. Gorman is correct that an encyclopedia is not the product of a collective mind; this is as true of Wikipedia as of Britannica. But Gorman’s unfamiliarity with, and even distaste for, Wikipedia leads him to mistake the dumbest utterances of its most credulous observers for an authentic accounting of its mechanisms; people pushing arguments about digital collectivism, pro or con, know nothing about how Wikipedia actually works. Wikipedia is the product not of collectivism but of unending argumentation; the corpus grows not from harmonious thought but from constant scrutiny and emendation.

The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise ten-fold or a hundred-fold.

Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.

If Gorman were looking at Web 2.0 and wondering how print culture could aspire to that level of accessibility, he would be doing something to bridge the gap he laments. Instead, he insists that the historical mediators of access “…promote intellectual development by exercising judgment and expertise to make the task of the seeker of knowledge easier.” This is the argument Catholic priests made to the operators of printing presses against publishing translations of the Bible — the laity shouldn’t have direct access to the source material, because they won’t understand it properly without us. Gorman offers no hint as to why direct access was an improvement when created by the printing press but a degradation when created by the computer. Despite the high-minded tone, Gorman’s ultimate sentiment is no different from that of everyone from music executives to newspaper publishers: Old revolutions good, new revolutions bad.

Comments (48) + TrackBacks (0) | Category: social software

May 31, 2007

HBR Interactive Case Study: "We Googled You"

Email This Entry

Posted by danah boyd

In my last post, i shared my case study response to the Harvard Business Review Case Study “We Googled You.” Since then, thanks to a kind reader (tx Andy Blanco), i learned that HBR made this case study the First Interactive Case Study. This means that you can read the case (without the respondents’ responses) and submit your own response.

You are still more than welcome to read my response, but i’d be super duper stoked to read your response as well. I found this exercise mentally invigorating and suspect you might as well. HBR wants you to submit your response to them, but i’d also be stoked if you’d be willing to share it with us.

Feel free to add your response to the comments on Apophenia or write your response on your own blog and add a link to the comments. Either way, i’d really love to hear how you would handle this scenario in your own business practices.

(Note: the reason that i use comments on Apophenia is because they notify me… i don’t get notified here and i find it easier to keep the conversation in one place.)

Comments (0) + TrackBacks (0) | Category: social software

May 30, 2007

cribs and commentary, oh my!

Email This Entry

Posted by danah boyd

I have recently uploaded a bunch of talk cribs, a new book essay, and a case commentary for your enjoyment.

Harvard Business Review Case Commentary

The Harvard Business Review has a section called “Case Commentary” where they propose a fictional but realistic scenario and invite different prominent folks to respond. I was given the great honor of being invited to respond to a case entitled “We Googled You.”

In Diane Coutu’s hypothetical scenario, Fred is trying to decide whether or not to hire Mimi after one of Fred’s co-workers googles Mimi and finds newspaper clippings about Mimi protesting Chinese policies. [The case study is 2 pages - this is a very brief synopsis.] Given the scenario, we were then asked, “should Fred hire Mimi despite her online history?”

Unfortunately, Harvard Business Review does not make their issues available for free download (although they are available at the library and the case can be purchased for $6) but i acquired permission to publish my commentary online for your enjoyment. It’s a little odd taken out of context, but i still figured some folks might enjoy my view on this matter, especially given that the press keeps asking me about this exact topic.

“We Googled You: Should Fred hire Mimi despite her online history?”

Cannes Film Festival

At the Cannes Film Festival’s Opening Forum on “Cinema: The Audiences of Tomorrow,” i gave a keynote about youth, DRM, remix, film, MySpace, YouTube, and other such good things. Check out: “Film and the Audience of Tomorrow”

BlogTalks Reloaded

Last fall, i spoke at BlogTalk Reloaded. They’ve turned a bunch of our talks into full papers packaged and published as a book titled: BlogTalks Reloaded. My piece is The Significance of Social Software. I look at the culture surrounding, technology of, and practices embedded in social software.

Personal Democracy Forum

At the Personal Democracy Forum, i argued that politicians should reach out and shake virtual hands with young people rather than just putting up flat profiles on social network sites. Check out the crib: “Digital Handshakes on Virtual Receiving Lines.”

Internet Caucus Panel

The Internet Caucus recently held a panel in DC called “Just The Facts About Online Youth Victimization.” David Finkelhor (Director of Crimes Against Children Research Center), Amanda Lenhart (PEW), and Michele Ybarra (President of Internet Solutions for Kids) all presented quantitative data while i batted qualitative cleanup.

panel video and audio | YouTube video | PDF transcript

Comments (2) + TrackBacks (0) | Category: social software

May 24, 2007

What are we going to say about "Cult of the Amateur"?

Email This Entry

Posted by Clay Shirky

A month or so ago, Micah Sifry offered me a chance to respond to Andrew Keen, author of the forthcoming Cult of the Amateur, at a panel at last week’s Personal Democracy Forum (PdF). The book is a polemic against the current expansion of freedom of speech, freedom of the press, and freedom of association. Also on the panel were Craig Newmark and Robert Scoble, so I was in good company; my role would, I thought, be easy — be pro-amateur production, pro-distributed creation, pro-collective action, and so on, things that come naturally to me.

What I did not expect was what happened — I ended up defending Keen, and key points from Cult of the Amateur, against a panel of my peers.

I won’t review CotA here, except to say that the book is going to get a harsh reception from the blogosphere. It is, as Keen himself says, largely anecdotal, which makes it more a list of ‘bad things that have happened where the internet is somewhere in the story’ than an account of cause and effect; as a result, internet gambling and click fraud are lumped together with the problems with DRM and epistemological questions about peer-produced material. In addition to this structural weakness, it is both aggressive enough and reckless enough to make people spitting mad. Dan Gillmor was furious about the inaccuracies, including his erroneous (and since corrected) description in the book, Yochai Benkler asked me why I was even deigning to engage Andrew in conversation, and so on. I don’t think I talked to anyone who wasn’t dismissive of the work.

But even if we stipulate that the book doesn’t do much to separate cause from effect, and has the problems of presentation that often accompany polemic, the core point remains: Keen’s sub-title, “How today’s internet is destroying our culture”, has more than a grain of truth to it, and the only thing those of us who care about the network could do wrong would be to dismiss Keen out of hand.

Which is exactly what people were gearing up to do last week. Because Keen is a master of the dismissive phrase — bloggers are monkeys, only people who get paid do good work, and so on — he will engender a reaction from our side that assumes that everything he says in the book is therefore wrong. This is a bad (but probably inevitable) reaction, but I want to do my bit to try to stave it off, both because fairness dictates it — Keen is at least in part right, and we need to admit that — and because a book-burning accompanied by a hanging-in-effigy will be fun for us, but will weaken the pro-freedom position, not strengthen it.

The Panel

The panel at PdF started with Andrew speaking, in some generality, about ways in which amateurs were discomfiting people who actually know what they are doing, while producing sub-standard work on their own.

My response started by acknowledging that many of the negative effects Keen talked about were real, but that the source of these effects was an increase in the freedom of people to say what they want, when they want to, on a global stage; that the advantages of this freedom outweigh the disadvantages; that many of the disadvantages are localized to professions based on pre-internet inefficiencies; and that the effort required to take expressive power away from citizens was not compatible with a free society.

This was, I thought, a pretty harsh critique of the book. I was wrong; I didn’t know from harsh.

Scoble was simply contemptuous. He had circled offending passages which he would read, and then offer an aphoristic riposte that was more scorn than critique. For instance, in taking on Andrew’s point that talent is unevenly distributed, Scoble’s only comment was, roughly, “Yeah, Britney must be talented…”

Now you know and I know what Scoble meant — traditional media gives outsize rewards to people based on characteristics other than pure talent. This is true, but because he was so dismissive of Keen, it’s not the point that Scoble actually got across. Instead, he seemed to be denying either that talent is unevenly distributed, or that Britney is talented.

But Britney is talented. She’s not Yo-Yo Ma, and you don’t have to like her music (back when she made music rather than just headlines), but what she does is hard, and she does it well. Furthermore, deriding the music business’s concern with looks isn’t much of a criticism. It escaped no one’s notice that Amanda Congdon and lonelygirl15 were easy on the eyes, and that that was part of their appeal. So cheap shots at mainstream talent or presumptions of the internet’s high-mindedness are both non-starters.

More importantly, talent is unevenly distributed, and everyone knows it. Indeed, one of the many great things about the net is that talent can now express itself outside traditional frameworks; this extends to blogging, of course, but also to music, as Clive Thompson described in his great NY Times piece, or to software, as with Linus’ talent as an OS developer, and so on. The price of this, however, is that the amount of poorly written or produced material has expanded a million-fold. Increased failure is an inevitable byproduct of increased experimentation, and finding new filtering methods for dealing with an astonishingly adverse signal-to-noise ratio is the great engineering challenge of our age (cf. Google). Whatever we think of Keen or CotA, it would be insane to deny that.

Similarly, Scoble scoffed at the idea that there is a war on copyright, but there is a war on copyright, at least as it is currently practiced. As new capabilities go, infinite perfect copyability is a lulu, and it breaks a lot of previously stable systems. In the transition from encoding on atoms to encoding with bits, information goes from having the characteristics of chattel to those of a public good. For the pro-freedom camp to deny that there is a war on copyright puts Keen in the position of truth-teller, and makes us look like employees of the Ministry of Doublespeak.

It will be objected that engaging Keen and discussing a flawed book will give him attention he neither needs nor deserves. This is fantasy. CotA will get an enthusiastic reception no matter what, and whatever we think of it or him, we will be called to account for the issues he raises. This is not right, fair, or just, but it is inevitable, and if we dismiss the book based on its errors or acausal attributions, we will not be regarded as people who have high standards, but rather as defensive cult members who don’t like to explain ourselves to outsiders.

What We Should Say

Here’s my response to the core of Keen’s argument.

Keen is correct in seeing that the internet is not an improvement to modern society; it is a challenge to it. New technology makes new things possible, or, put another way, when new technology appears, previously impossible things start occurring. If enough of those impossible things are significantly important, and happen in a bundle, quickly, the change becomes a revolution.

The hallmark of revolution is that the goals of the revolutionaries cannot be contained by the institutional structure of the society they live in. As a result, either the revolutionaries are put down, or some of those institutions are transmogrified, replaced, or simply destroyed. We are plainly witnessing a restructuring of the music and newspaper businesses, but their suffering isn’t unique, it’s prophetic. All businesses are media businesses, because whatever else they do, all businesses rely on the managing of information for two audiences — employees and the world. The increase in the power of both individuals and groups, outside traditional organizational structures, is epochal. Many institutions we rely on today will not survive this change without radical alteration.

This change will create three kinds of loss.

First, people whose jobs relied on solving a hard problem will lose those jobs when the hard problems disappear. Creating is hard, filtering is hard, but the basic fact of making acceptable copies of information, previously the basis of the aforementioned music and newspaper industries, is a solved problem, and we should regard with suspicion anyone who tries to return copying to its previously difficult state.

Similarly, Andrew describes a firm running a $50K campaign soliciting user-generated ads, and notes that some professional advertising agency therefore missed out on something like $300,000 in fees. It’s possible to regard this as a hardship for the ad guys, but it’s also possible to wonder whether they were really worth the $300K in the first place if an amateur, working in their spare time with consumer-grade equipment, can create something the client is satisfied with. This loss is real, but it is not general. Video tools are sad for ad guys in the same way movable type was sad for scribes, but as they say in show biz, the world doesn’t owe you a living.

The second kind of loss will come from institutional structures that we like as a society, but which are becoming unsupportable. Online ads offer better value for money, but as a result, they are not going to generate enough cash to stand up the equivalent of the NY Times’ 15-person Baghdad bureau. Josh Wolf has argued that journalistic privilege should be extended to bloggers, but the irony is that Wolf’s very position as a videoblogger makes that view untenable — journalistic privilege is a special exemption to a general requirement for citizens to aid the police. We can’t extend that special exemption to the general case.

The old model of defining a journalist by tying their professional identity to employment by people who own a media outlet is broken. Wolf himself has helped transform journalism from a profession to an activity; now we need a litmus test for when to offer source confidentiality for acts of journalism. This will in some ways be a worse compromise than the one we have now, not least because it will take a long time to unfold, but we can’t have mass amateurization of journalism and keep the social mechanisms that regard journalists as a special minority.

The third kind of loss is the serious kind. Some of these Andrew mentions in his book: the rise of spam, the dramatically enlarged market for identity theft. Other examples he doesn’t: terrorist organizations being more resilient as a result of better communications tools, pro-anorexic girls forming self-help groups to help them remain anorexic. These things are not side-effects of the current increase in freedom, they are effects of that increase. Spam is not just a plague in open, low-entry-cost systems; it is a result of those systems. We can no longer limit things like who gets to form self-help groups through social controls (the church will rent its basement to AA but not to the pro-ana kids), because no one needs help or permission to form such a group anymore.

The hard question contained in Cult of the Amateur is “What are we going to do about the negative effects of freedom?” Our side has generally advocated having as few limits as possible (when we even admit that there are downsides), but we’ve been short on particular cases. It’s easy to tell the newspaper people to quit whining, because the writing has been on the wall since Brad Templeton founded Clarinet. It’s harder to say what we should be doing about the pro-ana kids, or the newly robust terror networks.

Those cases are going to shift us from prevention to reaction (a shift that parallels the current model of publishing first, then filtering later), but so much of the conversation about the social effects of the internet has been so upbeat that even when there is an obvious catastrophe (as with the essjay crisis on Wikipedia), we talk about it amongst ourselves, but not in public.

What Wikipedia (and Digg and eBay and craigslist) have shown us is that mature systems have more controls than immature ones, because bad cases get identified and dealt with; as these systems become more critical and more populous, the number of bad cases (and therefore the granularity and sophistication of the controls) will continue to increase.

We are creating a governance model for the world that will coalesce after the pre-internet institutions suffer whatever damage or decay they are going to suffer. The conversation about those governance models, what they look like and why we need them, is going to move out into the general public with CotA, and we should be ready for it. My fear, though, is that we will instead get a game of “Did not!”, “Did so!”, and miss the opportunity to say something much more important.

Comments (19) + TrackBacks (0) | Category: social software

May 19, 2007

The (Bayesian) Advantage of Youth

Email This Entry

Posted by Clay Shirky

A couple of weeks ago, Fred Wilson wrote, in The Mid Life Entrepreneur Crisis “…prime time entrepreneurship is 30s. And its possibly getting younger as web technology meets youth culture.” After some followup from Valleywag, he addressed the question at greater length in The Age Question (continued), saying “I don’t totally buy that age matters. I think, as I said in my original post, that age is a mind set.”

This is a relief for people like me — you’re as young as you feel, and all that — or rather it would be a relief but for one little problem: Fred was right before, and he’s wrong now. Young entrepreneurs have an advantage over older ones (and by older I mean over 30), and contra Fred’s second post, age isn’t in fact a mindset. Young people have an advantage that older people don’t have and can’t fake, and it isn’t about vigor or hunger — it’s a mental advantage. The principal asset a young tech entrepreneur has is that they don’t know a lot of things.

In almost every other circumstance, this would be a disadvantage, but not here, and not now. The reason this is so (and the reason smart old people can’t fake their way into this asset) has everything to do with our innate ability to cement past experience into knowledge.

Probability and the Crisis of Novelty

The classic illustration for learning outcomes based on probability uses a bag of colored balls. Imagine that you can take out one ball, record its color, put it back, and draw again. How long does it take you to form an opinion about the contents of the bag, and how correct is that opinion?

Imagine a bag of black and white balls, with a slight majority of white. Drawing out a single ball would provide little information beyond “There is at least one white (or black) ball in this bag.” If you drew out ten balls in a row, you might guess that there are a similar number of black and white balls. A hundred would make you relatively certain of that, and might give you an inkling that white slightly outnumbers black. By a thousand draws, you could put a rough percentage on that imbalance, and by ten thousand draws, you could say something like “53% white to 47% black” with some confidence.
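
If you want to see that convergence concretely, here is a minimal sketch in Python — my illustration, not anything from the original argument; the 53/47 mix and the draw counts are just the ones from the paragraph above, and the estimates will wobble from run to run, tightening as the sample grows.

    import random

    def estimate_white_share(bag, draws):
        # Draw `draws` balls with replacement and report the observed white share.
        white = sum(1 for _ in range(draws) if random.choice(bag) == "white")
        return white / draws

    bag = ["white"] * 53 + ["black"] * 47  # a slight white majority, as in the text

    for n in (10, 100, 1000, 10000):
        print(f"{n:>6} draws -> estimated white share: {estimate_white_share(bag, n):.3f}")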

This is the world most of us live in, most of the time; the people with the most experience know the most.

But what would happen if the contents of the bag changed overnight? What if the bag suddenly started yielding balls of all colors and patterns — black and white but also green and blue, striped and spotted? The next day, when the expert draws a striped ball, he might well regard it as a mere anomaly. After all, his considerable experience has revealed a predictable and stable distribution over tens of thousands of draws, so no need to throw out the old theory because of just one anomaly. (To put it in Bayesian terms, the prior beliefs of the expert are valuable precisely because they have been strengthened through repetition, which repetition makes the expert confident in them even in the face of a small number of challenging cases.)
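
To put numbers on that Bayesian aside, here is another small sketch — mine, and deliberately oversimplified — using a Beta-Binomial update: the expert’s belief about odd-colored balls, built up over ten thousand ordinary draws, barely moves after five striped ones, while a novice starting from a uniform prior swings nearly all the way.

    # Model "fraction of odd-colored balls" as a Beta(alpha, beta) belief.
    expert_a, expert_b = 1, 10001   # expert: ~10,000 prior draws, none of them odd
    novice_a, novice_b = 1, 1       # novice: no prior draws at all (uniform prior)

    for _ in range(5):              # five striped balls come out of the bag
        expert_a += 1               # each odd draw adds one "success" to each posterior
        novice_a += 1

    print("expert's posterior mean:", expert_a / (expert_a + expert_b))  # ~0.0006
    print("novice's posterior mean:", novice_a / (novice_a + novice_b))  # ~0.857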

But the expert keeps drawing odd colors, and so after a while, he is forced to throw out the ‘this is an anomaly, and the bag is otherwise as it was’ theory, and start on a new one, which is that some novel variability has indeed entered the system. Now, the expert thinks, we have a world of mostly black and white, but with some new colors as well.

But the expert is still wrong. The bag changed overnight, and the new degree of variation is huge compared to the older black-and-white world. Critically, any attempt to rescue the older theory will cause the expert to misunderstand the world, and the more carefully the expert relies on the very knowledge that constitutes his expertise, the worse his misunderstanding will be.

Meanwhile, on the morning after the contents of the bag turn technicolor, someone who just showed up five minutes ago would say “Hey, this bag has lots of colors and patterns in it.” While the expert is still trying to explain away or minimize the change as a fluke, or as a slight adjustment to an otherwise stable situation, the novice, who has no prior theory to throw out, understands exactly what’s going on.

What our expert should have done, the minute he saw the first odd ball, is to say “I must abandon everything I have ever thought about how this bag works, and start from scratch.” He should, in other words, start behaving like a novice.

Which is exactly the thing he — we — cannot do. We are wired to learn from experience. This is, in almost all cases, absolutely the right strategy, because most things in life benefit from mental continuity. Again, today, gravity pulls things downwards. Again, today, I get hungry and need to eat something in the middle of the day. Again, today, my wife will be happier if I put my socks in the hamper than on the floor. We don’t need to re-learn things like this; once we get the pattern, we can internalize it and move on.

A Lot of Knowledge Is A Dangerous Thing

This is where Fred’s earlier argument comes in. In 999,999 cases, learning from experience is a good idea, but what entrepreneurs do is look for the one in a million shot. When the world really has changed overnight, when wild new things are possible if you don’t have any sense of how things used to be, then it is the people who got here five minutes ago who understand that new possibility, and they understand it precisely because, to them, it isn’t new.

These cases, let it be said, are rare. The mistakes novices make come from a lack of experience. They overestimate mere fads, seeing revolution everywhere, and they make this kind of mistake a thousand times before they learn better. But the experts make the opposite mistake, so that when a real once-in-a-lifetime change comes along, they are at risk of regarding it as a fad. As a result of this asymmetry, the novice makes their one good call during an actual revolution, at exactly the same time the expert makes their one big mistake, but at that moment, that’s all that is needed to give the newcomer a considerable edge.

Here’s a tech history question: Which went mainstream first, the PC or the VCR?

People over 35 have a hard time understanding why you’d even ask — VCRs obviously pre-date PCs for general adoption.

Here’s another: Which went mainstream first, the radio or the telephone?

The same people often have to think about this question, even though the practical demonstration of radio came almost two decades after the practical demonstration of the telephone. We have to think about that second question because, to us, radio and the telephone arrived at the same time, which is to say the day we were born. And for college students today, that is true of the VCR and the PC.

People who think of the VCR as old and stable, and the PC as a newer invention, are not the kind of people who think up Tivo. It’s people who are presented with two storage choices, tape or disk, without historical bias making tape seem more normal and disk more provisional, who do that kind of work, and those people are, overwhelmingly, young.

This is sad for a lot of us, but it’s also true, and Fred’s kind lies about age being a mind set won’t reverse that.

The Uses of Experience

I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that you have to try on pants before you buy them. I know that newspapers are where you get your political news and how you look for a job. I know that if you want to have a conversation with someone, you call them on the phone. I know that the library is the most important building on a college campus. I know that if you need to take a trip, you visit a travel agent.

In the last 15 years or so, I’ve had to unlearn every one of those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.

Ten years ago, I was the CTO of a web company we built and sold in what seemed like an eon but what was in retrospect an eyeblink. Looking back, I’m embarrassed at how little I knew, but I was a better entrepreneur because of it.

I can take some comfort in the fact that people much more successful than I succumb to the same fate. IBM learned, from decades of experience, that competitive advantage lay in the hardware; Bill Gates had never had those experiences, and didn’t have to unlearn them. Jerry and David at Yahoo learned, after a few short years, that search was a commodity. Sergey and Larry never knew that. Mark Cuban learned that the infrastructure required for online video made the economics of web video look a lot like TV. That memo was never circulated at YouTube.

So what can you do when you get kicked out of the club? My answer has been to do the things older and wiser people do. I teach, I write, I consult, and when I work with startups, it’s as an advisor, not as a founder.

And the hardest discipline, whether talking to my students or the companies I work with, is to hold back from offering too much advice, too definitively. When I see students or startups thinking up something crazy, and I want to explain why that won’t work, couldn’t possibly work, why this recapitulates the very argument that led to RFC 939 back in the day, I have to remind myself to shut up for a minute and just watch, because it may be me who will be surprised when I see what color comes out of the bag next.

Comments (42) + TrackBacks (0) | Category: social software

May 8, 2007

social network sites: public, private, or what?

Email This Entry

Posted by danah boyd

Over at Knowledge Tree is a recent essay i wrote called Social Network Sites: Public, Private, or What? For many who follow my blog, the arguments are not new, but i suspect some folks might appreciate the consolidated and not-so-spastic version. At the very least, perhaps you’ll be humored to see my writing splattered with the letter ‘s’ instead of the letter ‘z’ (it’s an Australian e-journal). There’s also an MP3 of me reading the essay for those who fear text (which is very novel since y’all know how much i fear audio/video recordings of me, but i did resist trying to sound funny while pronouncing the letter s instead of the letter z). And here’s a PDF of the essay for those wishing to kill trees.

In conjunction with this essay, there’s a live chat at 2PM Australian Eastern on 22 May. This translates to 9PM PST on 21 May and midnight New York time (which is where i’ll be so hopefully i won’t be too loopy, or at least no more loopy than i am feeling right now).

Enjoy! (Comments at Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

April 25, 2007

Sorry, Wrong Number: McCloud Abandons Micropayments

Email This Entry

Posted by Clay Shirky

Four years ago, I wrote a piece called Fame vs Fortune: Micropayments and Free Content. The piece was sparked by the founding of a company called BitPass and its adoption by the comic artist Scott McCloud (author of the seminal Understanding Comics, among other things.) McCloud created a graphic work called “The Right Number”, which you had to buy using BitPass.

It didn’t work. BitPass went out of business in January of this year. I didn’t write about it at the time because its failure was a foregone conclusion. This isn’t just retrospective certainty, either; here’s what I said about BitPass in 2003:
BitPass will fail, as FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, and many others have in the decade since Digital Silk Road, the paper that helped launch interest in micropayments. These systems didn’t fail because of poor implementation; they failed because the trend towards freely offered content is an epochal change, to which micropayments are a pointless response.

I’d love to take credit for having made a brave prediction there, but in fact Nick Szabo wrote a dispositive critique of micropayments back in 1996. The BitPass model never made a lick of sense, so predicting its demise was mere throat-clearing on the way to the bigger argument. The conclusion I drew in 2003 (and which I still believe) was that the vanishingly low cost of making unlimited perfect copies would put creators in the position of having to decide between going for audience size (fame) or restricting and charging for access (fortune), and that the desire for fame, no longer tempered by reproduction costs, would generally win out.

Creators are not publishers, and putting the power to publish directly into their hands does not make them publishers. It makes them artists with printing presses. This matters because creative people crave attention in a way publishers do not. […] with the power to publish directly in their hands, many creative people face a dilemma they’ve never had before: fame vs fortune.

Scott McCloud, who was also an advisor to BitPass, took strong issue with this idea in Misunderstanding Micropayments, a reply to the Fame vs. Fortune argument:

In many cases, it’s no longer a choice between getting it for a price or getting it for free. It’s the choice between getting it for a price or not getting it at all. Fortunately, the price doesn’t have to be high.

McCloud was arguing that the creator’s natural monopoly — only Scott McCloud can produce another Scott McCloud work — would provide the artist the leverage needed to insist on micropayments (true), and that this leverage would create throngs of two-bit users (false).

What’s really interesting is that, after the failure of BitPass, McCloud has now released The Right Number absolutely free of charge. Nothing. Nada. Kein Preis. After the micropayment barrier had proved too high for his potential audience (as predicted), McCloud had to choose between keeping his work obscure, in order to preserve the possibility of charging for it, or going for attention. His actual choice in 2007 upends his argument of four years ago: he went for the fame, at the expense of the fortune. (This recapitulates Tim O’Reilly’s formulation: “Obscurity is a far greater threat to authors and creative artists than piracy.” [ thanks, Cory, for the pointer ])

Everyone who imagines a working micropayment system either misunderstands user preferences, or imagines preventing users from expressing those preferences. The working micropayments systems that people hold up as existence proofs — ringtones, iTunes — are businesses that have escaped from market dynamics through a monopoly or cartel (music labels, carriers, etc.) Indeed, the very appeal of micropayments to content producers (the only people who like them — they offer no feature a user has ever requested) is to re-establish the leverage of the creator over the users. This isn’t going to happen, because the leverage wasn’t based on the valuing of content, but of packaging and distribution.

I’ll let my 2003 self finish the argument:
People want to believe in things like micropayments because without a magic bullet to believe in, they would be left with the uncomfortable conclusion that what seems to be happening — free content is growing in both amount and quality — is what’s actually happening.

The economics of content creation are in fact fairly simple. The two critical questions are “Does the support come from the reader, or from an advertiser, patron, or the creator?” and “Is the support mandatory or voluntary?”

The internet adds no new possibilities. Instead, it simply shifts both answers strongly to the right. It makes all user-supported schemes harder, and all subsidized schemes easier. It likewise makes collecting fees harder, and soliciting donations easier. And these effects are multiplicative. The internet makes collecting mandatory user fees much harder, and makes voluntary subsidy much easier.

The only interesting footnote, in 2007, is that these forces have now reversed even McCloud’s behavior.

Comments (11) + TrackBacks (0) | Category: social software

April 3, 2007

Incantations for Muggles

Email This Entry

Posted by danah boyd

I love Etech. This year, i had the great opportunity to keynote Etech (albeit at an ungodly hour). The talk i wrote was entirely new and intended for the tech designer/developer audience (warning: the academics will hate it). The talk is called:

“Incantations for Muggles:
The Role of Ubiquitous Web 2.0 Technologies in Everyday Life”

It’s about how technologists need to pay attention to the magic that everyday people create using the Web2.0 technologies that we in the tech world think are magical. It’s quite a fun talk and i figured that some might enjoy reading it so i just uploaded my crib notes. It is unlikely that i said exactly what i wrote, but the written form should provide a good sense of the points i was trying to make in the talk.

I should give infinite amounts of appreciation to Raph Koster who took unbelievable notes during my presentation, letting me adjust my crib to be more in tune with what i actually said. THANK YOU! I was half tempted to not bother blogging my crib notes given the fantastic-ness of his notes, but i figure that there still might be some out there who would prefer the crib. Enjoy!

(PS: If you remember me saying something that i didn’t put in the crib, let me know and i’ll add it… i’m stunned at how many of you took notes during the talk.)

Comments (4) + TrackBacks (0) | Category: social software

March 18, 2007

Tweet Tweet (some thoughts on Twitter)

Email This Entry

Posted by danah boyd

SXSW has come and gone and my phone might never recover. Y’see, last year i received over 500 Dodgeballs. To the best that i can tell, i received something like 3000 Tweets during the few days i was in Austin. My phone was constantly hitting its 100 message cap and i spent more time trying to delete messages than reading them. Still, i think that Twitter and Dodgeball are interesting and i want to take a moment to consider their strengths and weaknesses as applications.

While you can use Dodgeball for a variety of things, it’s primarily a way of announcing presence in a social venue where you’d be willing to interact with other people. Given that i’m a hermit, i primarily use Dodgeball to announce my presence at conference outings and to sigh in jealousy as people romp around Los Angeles. Dodgeball is culturally linked to place. I’m still pretty peeved with Google over the lack of development of Dodgeball because i still think it would be a brilliant campus-based application where people actually do party-hop every weekend and want to know if their friends are at the neighboring frat party instead of this one. When it comes to usage at SXSW, Dodgeball is great. I know when 7 of my friends are in one venue and 11 are in another; it helps me decide where to go.

Twitter has taken a different path. It is primarily micro-blogging or group IMing or push away messaging. You write whatever you damn well please and it spams all of the people who agreed to be your friends. The biggest strength AND weakness of Twitter is that it works through your IM client (or Twitterrific) as well as your phone. This means that all of the tech people who spend far too much time bored on their laptops are spamming people at a constant rate. Ah, procrastination devices. If you follow all of your friends on your mobile, you’re in for a hellish (and very expensive) experience. Folks quickly learn to stop following people on their mobile (or, if they don’t, they turn Twitter off altogether). This, unfortunately, kills the mobile value of it, making it far more of a web tool than a mobile tool. Considering how much of a bitch it is to follow/unfollow people, users quickly choose and rarely turn back. Thus, once they stop following someone on their phone, they don’t return just because they are going out with that person that night (unless they run into them and choose to switch it on).

At SXSW, Twitter is fantastic for mobile. Everyone is running around the same town commenting on talks, remarking on venues, bitching about the rain. But dear god did i feel bad for the people who weren’t at SXSW who were getting spammed with that crap. One value of Twitter is that it’s really lightweight and easy. One problem is that this is terrible if your social world is not one giant cluster. While my tech friends who normally attend SXSW moped about how jealous they were upon receiving all of the SXSW messages, my non-tech friends were more in the WTF camp. Without segmentation, i had to choose one audience over the other because there was no way to move seamlessly between the audiences. Of course, groups are much heavier to manage. Still, i think it’s possible and i gave Ev some notes.

I think it’s funny to watch my tech geek friends adopt a social tech. They can’t imagine life without their fingers attached to a keyboard or where they didn’t have all-you-can-eat phone plans. More importantly, the vast majority of their friends are tech geeks too. And their social world is relatively structurally continuous. For most 20/30-somethings, this isn’t so. Work and social are generally separated and there are different friend groups that must be balanced in different ways.

Of course, the population whose social world is most like the tech geeks is the teens. This is why they have no problems with MySpace bulletins (which are quite similar to Twitter in many ways). The biggest challenge with teens is that they do not have all-you-can-eat phone plans. Over and over, the topic of number of text messages in one’s plan comes up. And my favorite pissed off bullying act that teens do involves ganging up to collectively spam someone so that they’ll go over their limit and get into trouble with their parents (phone companies don’t seem to let you block texts from particular numbers and of course you have to pay 10c per text you receive). This is particularly common when a nasty breakup occurs and i was surprised when i found out that switching phone numbers is the only real solution to this. Because most teens are not permanently attached to a computer and because they typically share their computers with other members of the family, Twitterrific-like apps wouldn’t really work so well. And Twitter is not a strong enough app to replace IM time.

Of course, this doesn’t mean that all teens would actually like Twitter. There are numerous complaints about the lameness of bulletins. People forward surveys just as something to do and others complain that this is a waste of their time. (Of course, then they go on to do it themselves.) Still, bulletin space is like Twitter space. You need to keep posting so that your friends don’t forget you. Or you don’t post at all. Such is the way of Twitter. Certain people i see posting 5-15 times a day. Others i never hear from (or like once a week).

There’s another issue at play… Like with bulletins, it’s pretty ostentatious to think that your notes are worth pushing to others en masse. It takes a certain kind of personality to think that this kind of spamming is socially appropriate and desirable. Sure, we all love to have a sense of what’s going on, but this is push technology at its most extreme. You’re pushing your views into the attention of others (until they turn it or you off).

The techno-geek users keep telling me that it’s a conversation. Of course, this is also said of blogging. But i don’t think that either are typically conversations. More often, they are individuals standing on their soap boxes who enjoy people responding to them and may wander around to others’ soap boxes looking for interesting bits of data. By and large, people Twitter to share their experience; only rarely do they expect to receive anything in return. What is returned is typically a kudos or a personal thought or an organizing question. I’d be curious what percentage of Tweets start a genuine back-and-forth dialogue where the parties are on equal ground. It still amazes me that when i respond to someone’s Tweet personally, they often ignore me or respond curtly with an answer to my question. It’s as though the Tweeter wants to be recognized en masse, but doesn’t want to actually start a dialogue with their pronouncements. Of course, this is just my own observation. Maybe there are genuine conversations happening beyond my purview.

Unfortunately, i don’t know how sustainable Twitter is for most people. It’s very easy to burn out on it and once someone does, will they return? It’s also really hard for friend-management. If you add someone, even if you “leave” them, you’ll get Twitterrific posts from them. This creates a huge disincentive for adding people, even if you welcome them to read your Tweets. Post-SXSW, i’ve seen two things: the most active in Austin are still ridiculously active. The rest have turned it off for all intents and purposes. Personally, i’m trying to see how long i’ll last before i can’t stand the invasion any longer. Given that my non-tech friends can’t really join effectively (for the same reasons as teens - text messaging plan and lack of always-on computerness and hatred of IM interruptions), i don’t think that i can get a good sense of how this would play out beyond the geek crowd. But it sure is entertaining to watch.

PS: I should note that my favorite part of Twitter is that when i wander to a non-functioning page, i get this image:

How can that not make you happy?

(Conversation at Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

March 17, 2007

fame, narcissism and MySpace

Email This Entry

Posted by danah boyd

When adults aren’t dismissing MySpace as the land-o-predators, they’re often accusing it of producing narcissistic children. I find it hard to bite my tongue in these situations, but i know that few adults are willing to take the blame for producing narcissistic children. The issue of narcissism and fame is back in public circulation with a vengeance (thanks in part to Britney Spears for having a public meltdown). While the mainstream press is having a field day with blaming celebrities and teens for being narcissistic, more solid research on narcissism is emerging.

For those who are into pop science coverage of academic work, i’d encourage you to start with Jake Halpern’s “Fame Junkies” (tx Anastasia). For simplicity’s sake, let’s list a few of the key findings that have emerged over the years concerning narcissism.

  • While many personality traits stay stable across time, it appears as though levels of narcissism (as tested by the NPI) decrease as people grow older. In other words, while adolescents are more narcissistic than adults, you were also more narcissistic when you were younger than you are now.
  • The scores of adolescents on the NPI continue to rise. In other words, it appears as though young people today are more narcissistic than older people were when they were younger.
  • There appears to be a correlation between narcissism and self-esteem based education. In other words, all of that school crap about how everyone is good and likable has produced a generation of narcissists.
  • Celebrity does not make people narcissists but narcissistic people seek fame.
  • Reality TV stars score higher on the NPI than other celebrities.

OK… given these different findings (some of which are still up for debate in academic circles), what should we make of teens’ participation on social network sites in relation to narcissism?

My view is that we have trained our children to be narcissistic and that this is having all sorts of terrifying repercussions; to deal with this, we’re blaming the manifestations instead of addressing the root causes and the mythmaking that we do to maintain social hierarchies. Let’s unpack that for a moment.

American individualism (and self-esteem education) have allowed us to uphold a myth of meritocracy. We sell young people the idea that anyone can succeed, anyone can be president. We ignore the fact that working class kids get working class jobs. This, of course, has been exacerbated in recent years. There used to be meaningful working class labor that young people were excited to be a part of. It was primarily masculine labor, rewarded through set hierarchies, and unions helped maintain that structure. The unions crumbled in the 1980s, and by the time the 1987 recession hit, there was a teenage wasteland. No longer were young people being socialized into meaningful working class labor; the only path out was the “lottery” (aka becoming a famous rock star, athlete, etc.).

Since the late 80s, the lottery system has become more magnificent and corporatized. While there’s nothing meritocratic about reality TV or the Spice Girls, the myth of meritocracy remains. Over and over, working class kids tell me that they’re a better singer than anyone on American Idol and that this is why they’re going to get to be on the show. This makes me sigh. Do i burst their bubble by explaining that American Idol is another version of Jerry Springer where hegemonic society can mock wannabes? Or does their dream have value?

So, we have a generation growing up being told that they can be anyone, magnifying the level of narcissism. Narcissists seek fame and Hollywood dangles fame like a carrot on a stick. Meanwhile, technology emerges that challenges broadcast’s control over distribution. It just takes a few Internet success stories for fame-seeking narcissists to begin projecting themselves into the web in the hopes of being seen and being validated. While the important baseline of peer-validation still dominates, the hopes of becoming famous are still part of the narrative. Unfortunately, it’s kinda like watching wannabe actors work as waiters in Hollywood. They think that they’ll be found there because one day long ago someone was and so they go to work everyday in a menial service job with a dream.

Perhaps i should rally behind people’s dreams, but i tend to find them quite disturbing. It is these kinds of dreams that uphold the American myths that get us into such trouble. They also uphold hegemony and the powerful feed on their dreams, offering nothing in return. We can talk about reality TV as an amazing opportunity for anyone to act, but realistically, it’s nothing more than Hollywood’s effort to bust the actors’ guild and related unions. Feed on people’s desire for fame, pay them next to nothing, and voilà: profit margin!

Unfortunately, union busting is the least of my worries when it comes to dream parasites. When i was trying to unpack the role of crystal meth in domestic violence, i started realizing that the meth offered a panacea when the fantasy bubble burst. Needless to say, this resulted in a spiral into hell for many once-dreamers. The next step was even more nauseating. When i started seeing how people in rural America recovered from meth, i found one common solution: born-again Christianity. The fervor for fame which was suppressed by meth re-emerged in zealous religiosity. Christianity promised an even less visible salvation: God’s grace. While blind faith is at the root of both fame-seeking and Christianity, Christianity offers a much more viable explanation for failures: God is teaching you a lesson… be patient, worship God, repent, and when you reach heaven you will understand.

While i have little issue with the core tenets of Christianity or religion in general, i am disgusted by the Christian Industrial Complex. In short, i believe that there is nothing Christian about the major institutions behind modern day organized American Christianity. Decades ago, the Salvation Army actively engaged in union-busting in order to maintain the status quo. Today, the Christian Industrial Complex has risen into power in both politics and corporate life, but their underlying mission is the same: justify poor people’s industrial slavery so that the rich and powerful can become more rich and powerful. Ah, the modernization of the Protestant Ethic.

Let’s pop the stack and return to fame-seeking and massively networked society. Often, you hear Internet people modify Andy Warhol’s famous quote to note that on the Internet, everyone will be famous amongst 15. I find this very curious, because aren’t both time and audience needed to be famous? Is one really famous for 15 minutes? Or amongst 15? Or is it just about the perceived rewards around fame?

Why is it that people want to be famous? When i ask teens about their desire to be famous, it all boils down to one thing: freedom. If you’re famous, you don’t have to work. If you’re famous, you can buy anything you want. If you’re famous, your parents can’t tell you what to do. If you’re famous, you can have interesting friends and go to interesting parties. If you’re famous, you’re free! This is another bubble that i wonder whether or not i should burst. Anyone who has worked with celebrities knows that fame comes with a price and that price is unimaginable to those who don’t have to pay it.

How does this view of fame play into narcissism? If you think you’re all that, you don’t want to be told what to do or how to do it… You think you’re above all of that. When your parents are telling you that you have to clean your room and that you’re not allowed out, they’re cramping your style. How can you be anyone you want to be if you can’t even leave the house? Fame appears to be a freedom from all of that.

The question remains… does micro-fame (such as the attention one gets from being very cool on MySpace) feed into the desires of narcissists to get attention? On a certain level, yes. The attention feels good, it feeds the ego. But the thing about micro-celebrities is that they’re not free from attack. One of the reasons that celebrities go batty is that fame feeds into their narcissism, further heightening their sense of self-worth as more and more people tell them that they’re all that. They never see criticism; their narcissism is never held in check. This isn’t true with micro-fame and this is especially not true online when celebrities face their fans (and haters) directly. Net celebrities feel the exhaustion of attention and nagging much quicker than Hollywood celebrities. It’s a lot easier to burn out quickly, before ever reaching that mass scale of fame. Perhaps this keeps some of the desire for fame in check? Perhaps not. I honestly don’t know.

What i do know is that MySpace provides a platform for people to seek attention. It does not inherently provide attention and this is why even if people wanted 90M viewers to their blog, they’re likely to only get 6. MySpace may help some people feel the rush of attention, but it does not create the desire for attention. The desire for attention runs much deeper and has more to do with how we as a society value people than with what technology we provide them.

I am most certainly worried about the level of narcissism that exists today. I am worried by how we feed our children meritocratic myths and dreams of being anyone just so that current powers can maintain their supremacy at a direct cost to those who are supplying the dreams. I am worried that our “solutions” to the burst bubble are physically, psychologically, and culturally devastating, filled with hate and toxic waste. I am worried that Paris Hilton is a more meaningful role model to most American girls than Mother Teresa ever was. But i am not inherently worried about social network technology or video cameras or magazines. I’m worried by how society leverages different media to perpetuate disturbing ideals and prey on people’s desire for freedom and attention. Eliminating MySpace will not stop the narcissistic crisis that we’re facing; it will simply allow us to play ostrich as we continue to damage our children with unrealistic views of the world.

(Conversation at Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

March 16, 2007

web 1-2-3

Email This Entry

Posted by danah boyd

I’m often asked what “Web 3.0” will be about. Lately, i have found myself talking about two critical stages of web sociality in order to explain where we’re going. I realized that i never succinctly described this here so i thought i should.

In early networked publics, there were two primary organizing principles for group sociability: interests and activities. People came together on rec.motorcycles because they shared an interest in motorcycles. People also came together in work groups to discuss activities. Usenet, mailing lists, chatrooms, etc. were organized around these principles.

By and large, these were strangers meeting. Early net adopters were often engaging with people like them who were not geographically proximate. Then the boom hit and everyone got online, often to email with their friends (and consume). With everyone online, the organizing principles of sociality shifted.

As blogging began to take hold, people started arranging themselves around pre-existing friend groups. In this way, the organizing principle was about ego-centric networks. People’s “communities” began being defined by their friends. This model is quite different from group-driven structures where there are defined network boundaries. Ego-centric systems are a (mostly) continuous graph. There are certainly clusters, but rarely bounded groups. This is precisely how we get the notion of “6 degrees of separation.” While blogging (and to a lesser degree homepages) were key to this shift, it was really social network sites that took the ball to the endzone. They made the networks visible, allowing people to put themselves at the center of their world. We finally have a world wide WEB of people, not just documents.

When i think about what’s next, i don’t think it’s going more virtual, more removed from everyday life. Actually, i think it’s even more connected to everyday life. We moved from ideas to people. What’s next? Place.

I believe that geographic-dependent context will be the next key shift. GPS, mesh networks, articulated presence, etc. People want to go mobile and they want to use technology to help them engage in the mobile world. Unfortunately, i think we have huge structural barriers in front of us. It’s not that we can’t do this on a technological level, it’s that there are old-skool institutions that want to get in the way. And they want to do it by plugging the market and shaping the law to their advantage. Primarily, i’m talking about carriers. And the handset makers who help keep the carriers alive. Let me explain.

The internet was not made for social communities. It was not made for social network sites. This grew because some creative folks decided to build on the open platform that was made available. Until recently, network neutrality was never a debate in the internet world because it was assumed. Given a connection (and time and literacy), anyone could contribute. Gotta love libertarian idealism.

Unfortunately, the same is not true for the mobile network. There’s never been neutrality and it’s the last thing that the carriers want. They want to control every byte and every application that can be put on the handsets that they adopt (and control through locking). In short, they want to control everything. It’s near impossible to develop networked social applications for mobiles. If it works on one carrier, it’s bound to be ignored by others. Even worse, the carriers have a disincentive to allow you to spread bytes over the network. (I can’t imagine how much those with all-you-can-eat plans detest Twitter.) Culturally, this is the step that’s next. Too bad i think that inane corporate bullshit is going to get in the way.

Of course, while i think that people want to move in this direction, i also think that privacy confusion has only just begun.

(Conversation at Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

March 10, 2007

Twitter Tips the Tuna

Email This Entry

Posted by Ross Mayfield

On Wednesday, Twitter tipped the tuna.  By that I mean it started peaking.  Adoption amongst the people I know seemed to double immediately, an apparent tipping point. It hasn’t jumped the shark, and probably won’t until Stephen Colbert covers this messaging of the mundane.  As Twitter turns 1 on March 13th, there is a quickening not only of users, but of messages per user.

Twitter's 1st Year

Twitter, in a nutshell, is mobile social software that lets you broadcast and receive short messages with your social network.  You can use it with SMS (sending a message to 40404), on the web or IM.  A darn easy API has enabled other clients such as Twitterrific for the Mac.  Twitter is Continuous Partial Presence, mostly made up of mundane messages in answer to the question, “what are you doing?” A never-ending stream of presence messages prompts you to update your own.  Messages are more ephemeral than IM presence — and posting is of a lower threshold, both because of ease and accessibility, and the informality of the medium.
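
For the curious, a minimal sketch of what posting through that darn easy API looked like, assuming the early Basic-auth REST endpoint (the URL, parameter name, and credentials here are illustrative, not a current or guaranteed interface):

    import urllib.parse
    import urllib.request

    def post_status(username, password, text):
        """POST one status update, early-Twitter style (HTTP Basic auth)."""
        url = "http://twitter.com/statuses/update.json"
        data = urllib.parse.urlencode({"status": text[:140]}).encode()
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, url, username, password)
        handler = urllib.request.HTTPBasicAuthHandler(mgr)
        return urllib.request.build_opener(handler).open(url, data).read()

    post_status("alice", "secret", "waiting for the keynote to start")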

Anil Dash was spot-on to highlight “The sign of success in social software is when your community does something you didn’t expect.”  A couple of weeks ago it became a convention to start messages with @username as a way of saying something to someone while staying visible to everyone.  Within the limited affordances of the tool, people started to use it not only for presence, but for a kind of shouting-at-the-party conversation.  Further, when you see an @username directed at someone who isn’t in your social network, you find yourself inclined to go see who it is, or add them if they are a friend who just joined.  This kind of social discovery goes beyond seeing friend lists on profiles, aids network structure, and quickens adoption.
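
A sketch of how little machinery the @username convention actually needs; the pattern below assumes Twitter’s 15-character username cap, and the function name is mine:

    import re

    # One-line parser for the @username convention described above.
    AT_REPLY = re.compile(r"^@(\w{1,15})\b")  # 15 chars was the username cap

    def directed_to(message):
        """Return the addressee if the message opens with @username, else None."""
        m = AT_REPLY.match(message)
        return m.group(1) if m else None

    assert directed_to("@ev loving the API") == "ev"
    assert directed_to("eating a sandwich") is None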

While the app is viral (you have to get others to adopt to be able to use it), mobile social software has great word-of-mouth properties.  At Wikimania this summer, a buzz went off in my pocket when I was having dinner, which prompted me to get Jason Calacanis, Dave Winer and the brothers Gillmor to adopt.  Wednesday was the first day of TED, so a bunch of A-listers spread it.  At SXSW it seems to be the smart mob tool of choice, and there is even a group for it with a feature I’ve never seen before, JOIN.

Most recently there has been a rise in fake identities and even celebrities, partially because people want to form more than one group, sometimes as integration points with other communities.  Some of the groups I’ve spotted include AdaptivePath, Barcamp, Technorati (a hack that begs people for blurbs in WTF), Techmeme (a hack that posts new top stories) and Wordpress (release updates).  Andy Carvin hypothesizes Twitter could save lives in a catastrophe, but group forming is already ahead of his theory with the USGS Earthquake Center on Twitter.

This week most of my company joined Twitter and I set up http://twitter.com/socialtext for no reason in particular.  I posted the login in a private wiki page to let anyone contribute.  But when Moconner saw how simple the API was, he wrote a bot to let us post from our IRC channel.  Now we have a low threshold way to express group identity that fits with the way we work.
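
Something in the spirit of that IRC-to-Twitter bot might look like the following sketch; the actual implementation isn’t described, so the server, channel, command prefix, and credentials are all invented, and the API call assumes the early Basic-auth endpoint:

    import socket
    import urllib.parse
    import urllib.request

    SERVER, CHANNEL = "irc.example.com", "#socialtext"   # invented
    ACCOUNT, PASSWORD = "socialtext", "secret"           # the shared login

    def tweet(text):
        """Relay one line to the shared account via the early Basic-auth API."""
        url = "http://twitter.com/statuses/update.json"
        data = urllib.parse.urlencode({"status": text[:140]}).encode()
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, url, ACCOUNT, PASSWORD)
        handler = urllib.request.HTTPBasicAuthHandler(mgr)
        urllib.request.build_opener(handler).open(url, data)

    sock = socket.create_connection((SERVER, 6667))
    sock.sendall(b"NICK twitbot\r\nUSER twitbot 0 * :twitbot\r\n")
    sock.sendall(("JOIN %s\r\n" % CHANNEL).encode())

    buf = b""
    while True:
        buf += sock.recv(4096)
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            text = line.decode("utf-8", "replace")
            if text.startswith("PING"):            # keepalive, or the server drops us
                sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
            elif "PRIVMSG" in text:
                msg = text.split(" :", 1)[-1]      # trailing param = channel message
                if msg.startswith("!twitter "):
                    tweet(msg[len("!twitter "):])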

Liz Lawley has addressed well what makes this form of presence different, and the criticisms of mundane content and interruption costs.  She highlights “exploring clusters of loosely related people by looking at the updates from their friends. There are stories told in between updates.”

However, I do think the interruption tax is significant, especially with the quickening of adoption.  You use your social network as a filter, which helps both in scoping participation within a pull model of attention management and, to Liz’s point, in letting my friends digest the web for me, perhaps reducing my discovery costs.  But Twitter’s affordance of both mobile and web (the same one that lets Anil, who is web-only, use it at all) is what helps me manage attention overload.  I can throttle back to web-only and curb interruptions, simply by texting off.

Good thing too, because back when it was called twittr, people held back, believing that what they posted would be an interruption on mostly mobile devices.  Lately I think people just go for it, and most consumption is on the web or other clients.  I’d love to see some research on posts/user, client use, tracking @username, group identities, geographic dispersion, and other undesigned conventions as they reveal themselves.

Cross-posted on ross.typepad.com

Comments (3) + TrackBacks (0) | Category: social software

March 7, 2007

Spam that knows you: anyone else getting this?

Email This Entry

Posted by Clay Shirky

So a few weeks ago, I started getting spam referencing O’Reilly books in the subject line. I thought the spammers had just gotten lucky, and that the universe of possible offensive measures for spammers now included generating so many different subject lines that at least some of them got through to my inbox. But recently I’ve started to get more of this kind of spam, as with:

  • Subject: definition of what “free software” means. Outgrowing its
  • Subject: What makes it particularly interesting to private users is that there has been much activity to bring free UNIXoid operating systems to the PC,
  • Subject: and so have been long-haul links using public telephone lines. A rapidly growing conglomerate of world-wide networks has, however, made joining the global

(All are phrases drawn from http://tldp.org/LDP/nag/node2.html.)

Can it be that spammers are starting to associate context with individual email addresses, in an effort to evade Bayesian filters? (If you wanted to make sure a message got to my inbox, references to free software, open source, and telecom networks would be a pretty good way to do it. I mean, what are the chances?) Some of this stuff is so close to my interests that I thought I’d written some of the subject lines and was receiving this as a reply. Or is this just general Bayes-busting that happens to overlap with my interests?

If it’s the former, then Teilhard de Chardin is laughing it up in some odd corner of the noosphere, as our public expressions are being reflected back to us as a come-on. History repeats itself, first as self-expression, then as ad copy…
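
A toy model of the suspected Bayes-busting, with invented per-word probabilities, shows how a handful of words a filter has learned as strongly legitimate for one particular inbox can drag an otherwise spammy message below the filtering threshold:

    from math import log

    # Invented per-word spam odds, as a filter trained on this inbox might hold them.
    p_spam = {"viagra": 0.99, "offer": 0.90, "unix": 0.02,
              "software": 0.05, "networks": 0.05}

    def spam_score(words):
        """Naive-Bayes log-odds that a message is spam; > 0 means 'filter it'."""
        return sum(log(p_spam.get(w, 0.5) / (1 - p_spam.get(w, 0.5))) for w in words)

    print(spam_score(["viagra", "offer"]))              # ~ +6.8: filtered
    print(spam_score(["viagra", "offer", "unix",
                      "software", "networks"]))         # ~ -3.0: lands in the inbox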

Comments (15) + TrackBacks (0) | Category: social software

March 6, 2007

thoughts on twitter

Email This Entry

Posted by Liz Lawley

I’m completely fascinated by Twitter right now—in much the same way I was by blogging four years ago, and by ICQ years before that.

If you haven’t tried it yet, Twitter is a site that allows you to post one-line messages about what you’re currently doing—via the web interface, IM, or SMS. You can limit who sees the messages to people you’ve explicitly added to your friends list, or you can make the messages public. (My Twitter posts are private, but my friend Joi’s are public.)

What Twitter does, in a simple and brilliant way, is to merge a number of interesting trends in social software usage—personal blogging, lightweight presence indicators, and IM status messages—into a fascinating blend of ephemerality and permanence, public and private.

The big “P” word in technology these days is “participatory.” But I’m increasingly convinced that a more important “P” word is “presence.” In a world where we’re seldom able to spend significant amounts of time with the people we care about (due not only to geographic dispersion, but also the realities of daily work and school commitments), having a mobile, lightweight method for both keeping people updated on what you’re doing and staying aware of what others are doing is powerful.

I’ve experimented a bit with a visual form of this lightweight presence indication, through cameraphone photos taken while traveling. A photo of a boarding gate sign, or of a hotel entrance, conveys where I am and what I’m doing quickly and easily. But that only works if people are near a computer and are watching my Flickr photo feed, and that’s a lot to ask.

I also use IM status messages to broadcast what I’m doing. My iChat has a stack of custom messages that I’ve saved for re-use, from “packing” and “at the airport” to “breaking up sibling squabbles” and “grading…the horror! the horror!” But status messages have no permanence to them, and require some degree of synchronicity—people have to be logged into IM, and looking at status messages, while I’m there. Because Twitter archives your messages on the web (and can send them as SMS that you can check at any time), that requirement for synchronous connections goes away.

Blogs allow this kind of archived update, of course—but they’re not lightweight. Where one might easily post a Twitter message along the lines of “on my way to work”, a blog post like that wouldn’t be worth the effort and overhead.

I’ve heard two kinds of criticisms of Twitter already.

The first criticizes the triviality of the content. But asking “who really cares about that kind of mindless trivia about your day” misses the whole point of presence. This isn’t about conveying complex theory—it’s about letting the people in your distributed network of family and friends have some sense of where you are and what you’re doing. And we crave this, I think. When I travel, the first thing I ask the kids on the phone when I call home is “what are you doing?” Not because I really care that much about the show on TV, or the homework they’re working on, but because I care about the rhythms and activities of their days. No, most people don’t care that I’m sitting in the airport at DCA, or watching a TV show with my husband. But the people who miss being able to share in day-to-day activity with me—family and close friends—do care.

The second type of criticism is that the last thing we need is more interruptions in our already discontinuous and partially attentive connected worlds. What’s interesting to me about Twitter, though, is that it actually reduces my craving to surf the web, ping people via IM, and cruise Facebook. I can keep a Twitter IM window open in the background, and check it occasionally just to see what people are up to. There’s no obligation to respond, which I typically feel when updates come from individuals via IM or email. Or I can just check my text messages or the web site when I feel like getting a big picture of what my friends are up to.

Which then leads to one of the aspects of Twitter that I find most fascinating—exploring clusters of loosely related people by looking at the updates from their friends. There are stories told in between updates. Who’s at a conference, and do they know each other? Who’s on the road, and who’s at home. Narratives that wind around and between the updates and the people, that show connections. Updates that echo each other, or even directly respond to another Twitter post.

There’s more to it than that, but I’m still sorting it all out in my head. Just wanted to post an early-warning signal that I see something important happening here, something worth paying (more than partial) attention to.

(cross-posted from mamamusings; since comments have been unreliable here, any comments can be posted there)

Comments (12) + TrackBacks (0) | Category: social software

February 16, 2007

The Future of Science Fiction and Fantasy Publishing?

Email This Entry

Posted by Paul B Hartzog

Not long ago, I wrote an article on “Social Publishing” here on Many-to-Many, which suggests the possibility of a system where

“authors create and distribute their work, and readers, individually and collectively, including fans as well as editors and peers, review, comment, rank, and tag, everything.”

So I followed up on the post and, along with a colleague, Richard Adler, started Oort-Cloud.org.

Oort-Cloud is a site where science fiction and fantasy readers and writers can build precisely the kind of community that I alluded to in Social Publishing. Oort-Cloud utilizes a process we have termed “OpenLit” which you can read more about on the OpenLit page. Basically, OpenLit is a simple catalytic cycle:

Write - Share - Read - Respond

Write
First, writers write.
Share
Second, writers share with others what they have written.
Read
Third, readers read what is available.
Respond
Fourth, readers respond to what they have read.

In this way, writers become better writers by virtue of having a distribution outlet that embeds constant feedback, and readers have access to better and better stories, where “better” actually means better for them based on their interaction with the writers.

Hopefully, this all means new opportunities for everyone involved in science fiction and fantasy — readers, writers, and publishers alike.

Comments (2) + TrackBacks (0) | Category: social software

February 13, 2007

Facebook's little digital gift

Email This Entry

Posted by danah boyd

Last week, Facebook unveiled a gifting feature. For $1, you can purchase a gift for the person you most adore. If you choose to make the gift public, you are credited with that gift on the person’s profile under the “gift box” region. If you choose to make the gift private, the gift is still there but there’s no notice concerning who gave it.

Before getting into this, let me take a moment to voice my annual bitterness over Hallmark Holidays, particularly the one that involves an obscene explosion of pink, candy, and flowers.

The gifting feature is fantastically timed to align with a holiday built around status: Valentine’s Day. Valentine’s Day is all about pronouncing your relationship to loved ones (and those you obsess over) in the witness of others. Remember those miniature cards in elementary school? Or the carnations in high school? Listening to the radio, you’d think Valentine’s Day was a contest. Who can get the most flowers? The fanciest dinner? This holiday should make most people want to crawl in bed and eat bon-bons while sobbing over sappy movies. But it works. It feeds on people’s desire to be validated and shown as worthy to the people around them, even at the expense of others. It is a holiday built purely on status (under the guise of “love”). You look good when others love you (and the more the merrier).

Of course, Valentine’s Day is not the only hyper-commercialized holiday. The celebration of Christ’s birth is marked by massive shopping. In response, the Festival of Lights has been turned into 8 days of competitive gift giving in American Jewish culture. Acknowledging that people get old in patterns that align with a socially constructed calendar also requires presents. Hell, anything that is seen as a lifestage change requires gifts (marriage, childbirth, graduation, Bat Mitzvah, etc.).

Needless to say, gift giving is perpetuated by a consumer culture that relishes any excuse to incite people to buy. My favorite of these is the “gift certificate” - a piece of paper that says that you couldn’t think of what to give so you assuaged your guilt by giving money to a corporation. You get brainwashed into believing that forcing your loved one to shop at that particular venue is thoughtful, even though the real winner is the corporation since only a fraction of those certificates are ever redeemed. No wonder corporations love gift certificates - they allow them to make bundles and bundles of money, knowing that the receiver will never come back for the goods.

But anyhow… i’ve gone off on a tangent… Gifts. Facebook.

Unlike Fred, i think that gifts make a lot more sense than identity purchases when it comes to micro-payments and social network sites. Sure, buying clothes in virtual systems makes sense, but what’s the value of paying to deck out your profile if the primary purpose of it is to enable communication? I think that for those who actively try to craft a public identity through profiles (celebrities and fame junkies), paying to make a cooler profile makes sense. But most folks are quite content with the crap that they can do for free and i don’t see them paying money to get more fancified backgrounds when they can copy/paste. That said, i think it’s very interesting when you can pay to affect someone else’s profile. I think it’s QQ where you can pay to have a donkey shit on your friend’s page and then they have to pay to clean it up. This prankster “gift” has a lot of value. It becomes a game within the system and it bonds two people together.

In a backchannel conversation, Fred argues with me that digital gifts will have little value because they only make people look good for a very brief period. They do not have the same type of persistence as identity-driven purchases like clothing in WoW. I think that it is precisely this ephemerality that will make gifts popular. There are times for gift giving (predefined by society). Individuals’ reactions to this are already visible in comments on social network sites. People write happy birthday and send glitter for holidays (a.k.a. those animated graphical disasters screaming “happy valentine’s day!”). These expressions are not simply altruistic kindness. By publicly performing the holiday or birthday, the individual doing the expression looks good before hir peers. It also prompts reciprocity so that one’s own profile is then also filled with validating comments. Etc. Etc. (If interested in gifting, you absolutely must read the canon: Marcel Mauss’ “The Gift”.)

Like Fred, i too have an issue with the economic structure of Facebook Gifts, but it’s not because i think that $1 is too expensive. Gifts are part of status play. As such, there are critical elements about gift giving that must be taken into consideration. For example, it’s critical to know who gifted who first. You need to know this because it showcases consideration. Look closely at comments on MySpace and you’ll see that timing matters; there’s no timing on Facebook so you can’t see who gifted who first and who reciprocated. Upon receipt of a gift, one is often required to reciprocate. To handle being second, people up the ante in reciprocating. The second person gives something that is worth more than the first. This requires having the ability to offer more; offering two of something isn’t really the right answer - you want to offer something of more value. All of Facebook’s gifts are $1 so they are all equal. Value, of course, doesn’t have to be about money. Scarcity is quite valuable. If you gift something rare, it’s far more desired than offering a cheesy gift that anyone could get. This is why the handmade gift matters in a culture where you can buy anything.

I don’t think Facebook gifts - in its current incarnation - is sustainable. You can only gift so many kisses and rainbows before it’s meaningless. And what’s the point of paying $1 for them (other than to help the fight against breast cancer)? $1 is nothing if the gift is meaningful, but the 21 gift options will quickly lose meaning. It’s not just about dropping the price down to 20 cents. It’s about recognizing that gifting has variables that must be taken into account.

People want gifts. And they want to give gifts. Comments (or messages on the wall) are a form of gifting and every day, teens and 20-somethings log in hoping that someone left a loving comment. (And all the older folks cling to their Crackberries with the same hope.) It’s very depressing to log in and get no love.

I think that Facebook is right-on for making a gifting-based offering, but i think that to make it work long-term, they need to understand gifting a bit better. It’s about status. It’s about scarcity. It’s about reciprocity and upping the ante. These need to be worked into the system, and evolving it this way will make Facebook look good, not like they are backpedaling. This is not about gifting being a one-time rush; it’s about understanding the social structure of gifting.

(See Apophenia for comments)

Comments (0) + TrackBacks (0) | Category: social software

Debatepedia cures premature neutrality

Email This Entry

Posted by David Weinberger

Wikipedia’s policy of neutrality sometimes forces resolution when we’d rather have debate. Yes, competing sides get represented in the articles, and the discussion pages let us hear people arguing their points, but the arguments themselves are treated as stations on the way to neutral agreement.

So, there’s room for additional approaches that take the arguments themselves as their topics. That’s what Debatepedia.org does, and it looks like it’s on its way to being really useful.

As on Wikipedia, anyone can edit existing content. Unlike Wikipedia, its topics are all up for debate. Each topic presents both sides, structured into sub-questions, with a strong ethos of citation, factuality, and lack of flaming; the first of its Guiding Principles is “No personal opinion.” Rather, it attempts to present the best case and best evidence for each side.

Debatepedia limits itself to topics with yes-no alternatives and with clear pro and con cases. To start a debate, a user has to propose it and the editors (who seem to be the people who founded it…I couldn’t find info about them on the site) have to accept it. This keeps people from proposing stupid topics and boosts the likelihood that if you visit a listed debate, you’ll find content there. It also limits discussion to topics that have two and only two sides, which may turn out to be a serious limitation. But, we’ll see. And it can adapt as required.

Will Debatepedia take off? Who the hell knows. But it’s a welcome addition to the range of experiments in pulling ourselves together.

Comments (3) + TrackBacks (0) | Category: social software

February 6, 2007

about those walled gardens

Email This Entry

Posted by danah boyd

In the tech circles in which i run, the term “walled gardens” evokes a scrunching of the face if not outright spitting. I shouldn’t be surprised by this because these are the same folks who preach the transparent society as the panacea. But i couldn’t help myself from thinking that this immediate revulsion is obfuscating the issue… so i thought i’d muse a bit on walled gardens.

Walled gardens are inevitably built out of corporate greed - a company wants to lock in your data so that you can’t move between services and leave them in the dust. They make money off of your eyeballs. They make money off of your data. (In return, they often provide you with “free” services.) You put blood, sweat, and tears - or at least a little bit of time - into providing them with valuable data and you can’t get it out when you decide you’ve had enough. If this were the full story, walled gardens would of course look foul to the core.

The term “walled garden” implies that there is something beautiful being surrounded by walls. The underlying assumption is that walls are inherently bad. Yet, walls have certain value. For example, i’m very appreciative of walls when i’m having sex. I like to keep my intimate acts intimate and part of that has to do with the construction of barriers that prevent others from accessing me visually and audibly. I’m not so thrilled about tearing down all of the walls in meatspace. Walls are what allow us to construct a notion of “private” and, even more importantly, contextualized publics. Walls help contain the social norms so that you know how to act properly within their confines, whether you’re at a pub or in a classroom.

One of the challenges online is that there really aren’t walls. What walls did exist came tumbling down with the introduction of search. Woosh - one quick query and the walls that separated comp.lang.perl from alt.sex.bondage came crashing down. Before search (a.k.a. Deja), there were pseudo digital walls. Sure, Usenet was public but you had to know where the door was to enter the conversation. Furthermore, you had to care to enter. There are lots of public and commercial places i pass by every day that i don’t bother entering. But, “for the good of all humankind”, search came to pave the roads and Arthur Dent couldn’t stop the digital bulldozer.

We’re living with the complications of no walls online. Determining context is really really hard. Is your boss really addressing you when he puts his pic up on Match.com? Does your daughter take your presence into consideration when she crafts her MySpace? No doubt it’s public, but it’s not like any public that we’re used to in meatspace.

For a long time, one of the accidental blessings of walled gardens was that they kept out search bots as part of their selfish data retention plan. This meant that there were no traces left behind of people’s participation in walled gardens when they opted out - no caches of previous profiles, no records of a once-embarrassing profile. Much to my chagrin, many of the largest social network sites (MySpace, LinkedIn, Friendster, etc.) have begun welcoming the bots. This makes me wonder… are they really walled gardens any longer? It sounds more like chain-link fences to me. Or maybe a fishbowl with a little plastic castle.

What does it mean when the supposed walled gardens begin allowing external sites to cache their content?

[tangent] And what on earth does it mean that MySpace blocks the Internet Archive in its robots.txt but allows anyone else? It’s like they half-realize that posterity might be problematic for profiles, but fail to realize that caches of the major search engines are just as freaky. Of course, to top it off, their terms say that you may not use scripts on the site - isn’t a bot a script? The terms also say that participating in MySpace does not give them a license to distribute your content outside of MySpace - isn’t a Google cache of your profile exactly that? [end tangent]
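
For reference, a robots.txt of the shape described in the tangent would look something like this (illustrative only, not MySpace’s actual file; ia_archiver is the Internet Archive’s crawler):

    User-agent: ia_archiver
    Disallow: /

    User-agent: *
    Disallow: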

Can we really call these sites walled gardens if the walls are see-through? I mean, if a search bot can grab your content for cache, what’s really stopping you from doing so? Most tech folks would say that they are walled gardens because there are no tools to support easy export. Given that thousands of sites have popped up to provide codes for you to turn your MySpace profile into a dizzy display of animated daisies with rainbow hearts fluttering from the top (while inserting phishing scripts), why wouldn’t there be copy/pastable code to let you export/save/transfer your content? Perhaps people don’t actually want to do this. Perhaps the obsessive personal ownership of one’s content is nothing more than a fantasy of the techno-elite (and the businessmen who haven’t yet managed to lock you in to their brainchild). I mean, if you’re producing content into a context, do you really want to transfer it wholesale? I certainly don’t want my MySpace profile displayed on LinkedIn (even if there are no nude photos there).

For all of this rambling, perhaps i should just summarize into three points:
  • If walls have value in meatspace, why are they inherently bad in mediated environments? I would argue that walls provide context and allow us to have some control over the distribution of our expressions. Walls should be appreciated, even if they are near impossible to construct.
  • If robots can run around grabbing the content of supposed walled gardens, are they really walled? It seems to me that the tizzy around walled gardens fails to recognize that those most interested in caching the data (::cough:: Google) can do precisely that. And those most interested does not seem to include the content producers.
  • If the walls come crashing down, what are we actually losing? Walls provide context, context is critical for individuals to properly express themselves in a socially appropriate way. I fear that our loss of walls is resulting in a very confused public space with far more visibility than anyone can actually handle.

Basically, i don’t think that walled gardens are all that bad. I think that they actually provide a certain level of protection for those toiling in the mud. The problem is that i think that we’ve torn down the walls of the supposed walled gardens and replaced them with chain links or glass. Maybe even one-way glass. And i’m not sure that this is such a good thing. ::sigh::

So, what am i missing? What don’t i understand about walled gardens?

(Conversation at Apophenia)

Comments (0) + TrackBacks (0) | Category: social software

February 3, 2007

Technorati's WTF

Email This Entry

Posted by David Weinberger

Technorati has a new feature that’s only slightly confusing but very interesting and potentially quite useful. (Disclosure: I’m on Technorati’s board of advisors.)

It’s called “WTF,” which technically stands for “Where’s the Fire,” but has another more likely meaning. (David Isenberg named one of his conferences “WTF” and then had a contest to decide what it stood for.) So, if you go to Technorati and take a look at the Top Searches in the upper right, to the left of each entry there’s an orange flame. Don’t click on it yet because the page it takes you to is confusing. Instead, click on one of the searches. At the moment, “Boston Mooninites” is the top search. Click on it to go to the search results page. The top result is not a result at all. It’s got a flame icon next to it, indicating that it’s actually the WTF about the phrase “Boston Mooninites.” It’s an explanation of what that phrase means and why people are searching on it now. Who wrote it? Anybody who wants to. So now click on the flame icon. It takes you to the same page you would have gotten to if you had clicked on the flame icon in the Top Searches list on the home page.

Ok, so now you’re on the WTF page for “Boston Mooninites.” Note that this is not the search results page. It’s where you get to create your own WTF for that search query. Or you can vote on one of the existing ones; the one with the most votes is featured on the search results page for the query.

It’ll be very interesting to see how this develops. For example, the current top WTF for Windows Vista is a product review, not a neutral explanation. (I’m not complaining.) Many of the WTFs on the Vista list are responses to previous ones, as if WTFs were a discussion board, probably an artifact of the layout of the WTF page.

Comments (3) + TrackBacks (0) | Category: social software

January 29, 2007

Second Life, Games, and Virtual Worlds

Email This Entry

Posted by Clay Shirky

Introduction: This post is an experiment in synchronization. Since Henry Jenkins, Beth Coleman, and I are all writing about Second Life and because we like each other’s work, even when (or especially when) we disagree, we’ve decided to all post something on Second Life today. Beth’s post will appear at http://www.projectgoodluck.com/blog/, and Henry’s is at http://www.henryjenkins.org/.

Let me start with some background. Because of the number of themes involved in discussions of Second Life, it’s easy to end up talking at different levels of abstraction, so let me start with two core assertions, things that I take as background to my part of the larger conversation:

  • First, Linden’s Residents figures are methodologically worthless. Any claim about Second Life derived from a count of Residents is not to be taken seriously, and anyone making claims about Second Life based on those figures is to be regarded with skepticism. (Explanation here and here.)
  • Second, there are many interesting things going on in Second Life. As I have said in other forums, and will repeat here, passionate users are a law unto themselves, and rightly so. Nothing I could say about their experience in Second Life, pro or con, would matter to those users. My concerns are demographic.

With those assertions covered, I am asking myself two things: will Second Life become a platform for a significant online population? And, second, what can Second Life tell us about the future of virtual worlds generally?

Concerning popularity, I predict that Second Life will remain a niche application, which is to say an application that will be of considerable interest to a small percentage of the people who try it. Such niches can be profitable (an argument I made in the Meganiche article), but they won’t, by definition, appeal to a broad cross-section of users.

The logic behind this belief is simple: most people who try Second Life don’t like it. Something like five out of six new users abandon it before a month is up. The three-month abandonment figure seems to be closer to nine out of ten. (This figure is less firm, as it has only been reported colloquially, with no absolute numbers behind it.)

More importantly, the current active population is still an unknown. (Call this metric something like “How many users in the last 30 days have accounts more than 30 days old?”) We know the highest that figure could be is in the low hundreds of thousands, but no one other than the Lindens (and, presumably, their bigger marketing clients) knows how much lower it is than this theoretical maximum.
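
The proposed metric is easy to state in code; a minimal sketch, with record layout and field names invented for illustration:

    from datetime import datetime, timedelta

    def active_established(users, now, window=timedelta(days=30)):
        """Accounts seen in the last 30 days that are themselves older than 30 days."""
        return [u for u in users
                if now - u["last_seen"] <= window and now - u["created"] > window]

    now = datetime(2007, 1, 29)
    users = [{"created": datetime(2006, 3, 1),  "last_seen": datetime(2007, 1, 28)},
             {"created": datetime(2007, 1, 20), "last_seen": datetime(2007, 1, 28)}]
    print(len(active_established(users, now)))   # 1: the fresh tourist doesn't count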

The poor adoption rate is a form of aggregate judgment. Anything bruited for wide adoption would have trouble with 85%+ abandonment, whether software or toothpaste. One possible explanation for this considerable user defection might be a technological gap. I do not doubt that improvements to the client and server would decrease the abandonment rate. I do doubt the improvement would be anything other than incremental, given 5 years and tens of millions in effort already.

Note too that abandonment is not a problem that all visually traversable spaces suffer from. Both Doom and Cyworld serve as counter-examples; in those cases, the rendering is cartoonish, yet both platforms achieved huge popularity in a short period. If the non-visual experience is good, the rendering does not need to be, but the converse does not seem to be true, on present evidence.

Two Objections

There have been two broad responses to skepticism occasioned by the Linden population numbers. (Three, if you count ad hominem, but Chris Lott has already covered that.)

The first response is not specific to Second Life. Many people have recalled earlier instances of misguided skepticism about new technologies, but the logical end-case of that thought is that skepticism about technology is never appropriate. (Disconfirmation of this thesis is left as an exercise for the reader.) Given that most new technologies fail, the challenge is to figure out which ones won’t. No one has noted examples of software with 85% abandonment rates, after five years of development, that went on to become widespread. Such examples may exist, but I can’t think of any.

The second objection is a conviction that demographics are irrelevant, and that the interesting goings-on in Second Life are what matters, no matter how few users are engaged in those activities.

I have never doubted (and have explicitly noted above) that there are interesting things happening in Second Life. The mistake, from my point of view, is in mixing two different questions. Whether some people like Second Life a lot is a completely separate issue from whether a lot of people like it. It is possible for the first assertion to be true and the second one false, and this is the only reading I believe is supported by the low absolute numbers and high abandonment rates. Nor is this an unusual case. We have several examples of platforms with fascinating in-world effects (Alphaworld, Black Sun/Blaxxun, The Palace, Dreamscape, LambdaMOO and environments on the SuperMOO List, etc.), all of which also failed to achieve wide use.

It is here that assertions about Second Life have most often been inconsistent. Before the uselessness of Linden’s population numbers was widely understood, the illusion of a large and rapidly growing community was touted as evidence of Second Life’s success. When both the absolute numbers and growth turned out to be more modest, population was downgraded and other metrics have been introduced as predictive of Second Life’s inevitable success.

A hypothesis which is strengthened by evidence of popularity, but not weakened by evidence of unpopularity, isn’t really a hypothesis, it’s a religious assertion. And a core tenet of the faithful seems to be that claims about Second Life are buttressed by the certain and proximate arrival of virtual worlds generally.

If we had but worlds enough and time…

It is worth pausing at this junction. Many people writing about Second Life make little distinction between ‘Second Life as a particular platform’ and ‘Second Life as an exemplar of the coming metaverse’. I would like to buck this trend, by explicitly noting the difference between those two conversations. I am basing my prediction of continued niche status for Second Life on the current evidence that most people who try it don’t like it. My beliefs about virtual worlds, on the other hand, are more conjectural. Everything below should be read with this caveat in mind.

With that said, I don’t believe that “virtual worlds” describes a coherent category, or, put another way, I believe that the group of things lumped together as virtual worlds have such variable implementations and user adoption rates that they are not well described as a single conceptual group.

I alluded to Pointcast in an earlier article; one of the ways the comparison is apt is in the abuse of categorization as a PR tool. Pointcast’s management claimed that email, the Web, and Pointcast all were about delivering content, and that the future looked bright for content delivery platforms. And indeed it did, except for Pointcast.

The successes of email and of the Web were better explained by their particular utilities than by their membership in a broad class of “content delivery.” Pointcast tried to shift attention from those particularities to a generic label in order to create a club in which it would automatically be included.

I believe a similar thing happens whenever Second Life is lumped with Everquest, World of Warcraft, et al., into a category called virtual worlds. If we accept the validity of this category, then multi-player games provide an existence proof of millions-strong virtual worlds, and the only remaining question is simply when we arrive at wider adoption of more general-purpose versions.

If, on the other hand, we don’t start off by lumping Second Life with Warcraft as virtual worlds, a very different question emerges: why do virtual game worlds outperform non-game worlds in their adoption? This pattern is quite stable over time; it well predates Second Life and World of Warcraft, with first Ultima Online (1997) and then Everquest (1999) each quickly dwarfing the combined populations of Alphaworld and Black Sun (later Blaxxun), despite the significant lead times of those virtual worlds. What is it about games that would make them a better fit for virtual environments than non-games?

Games have at least three advantages other virtual worlds don’t. First, many games, and most social games, involve an entrance into what theorists call the magic circle, an environment whose characteristics include simplified and knowable rules. The magic circle saves the game from having to live up to expectations carried over from the real world.

Second, games are intentionally difficult. If all you knew about golf was that you had to get this ball in that hole, your first thought would be to hop in your cart and drive it over there. But no, you have to knock the ball in, with special sticks. This is just about the stupidest possible way to complete the task, and also the only thing that makes golf interesting. Games create an environment conducive to the acceptance of artificial difficulties.

Finally, and most relevant to visual environments, our ability to ignore information from the visual field when in pursuit of an immediate goal is nothing short of astonishing (viz. the gorilla experiment.) The fact that we could clearly understand spatial layout even in early and poorly rendered 3D environments like Quake has much to do with our willingness to switch from an observational Architectural Digest mode of seeing (Why has this hallway been accessorized with lava?) to a task-oriented Guns and Ammo mode (Ogre! Quad rocket for you!)

In this telling, games are not just special, they are special in a way that relieves designers of the pursuit of maximal realism. There is still a premium on good design and playability, but the magic circle, acceptance of arbitrary difficulties, and goal-directed visual filtering give designers ways to contextualize or bury at least some platform limitations. These are not options available to designers of non-game environments; asking users to accept such worlds as even passable simulacra subjects those environments to withering scrutiny.

Hubba Hubba

We can also reverse this observation. One question we might ask about successful non-game uses of virtual worlds is whether they too are special cases. One obvious example is erotic imagery. The zaftig avatar has been a trope of 3D rendering since designers have been able to scrape together enough polygons to model a torso, but examples start far earlier than virtual worlds. In fact, visual representation of voluptuous womanhood predates the invention of agriculture by the same historical interval as agriculture predates the present. This is a deep pattern.

It is also a pattern that, like games and unlike ordinary life, has a special relation to visual cues (though this effect is somewhat unbalanced by gender.) If someone is shown a virtual hamburger, it can arouse real hunger. However, to satisfy this hunger, he must then walk away from the image and get his hands on an actual hamburger. This is not the case, to put the matter delicately, with erotic imagery; a fetching avatar can arouse desire, but that desire can then be satiated without recourse to the real.

This pair of characteristics — a human (and particularly male) fixation on even poorly rendered erotic images, plus an ability to achieve a kind of gratification in the presence of those images — means that a sexualized rendering can create both attraction and satisfaction in a way that a rendering of, say, a mountain or an office cannot. As with games, visual worlds work in the context of eros not because the images themselves are so convincing, but because they reach a part of the brain that so desperately wants to be convinced.

More generally, I suspect that the cases where 3D immersion works are, and will continue to be, those uses that most invite the mind to fill in or simply do without missing detail, whether because of a triggering of sexual desire, the fight or flight reflex (many games), avarice (gambling), or other areas where we are willing and even eager to make rapid inferences based on a paucity of data. I also assume that these special cases are not simply adding up to a general acceptance of visual immersion, and that finding another avatar beguiling in a virtual bar is not in fact a predictor of being able to read someone’s face or body language in a virtual meeting as if you were with them. That, I believe, is a neurological problem of a different order.

Jaron Lanier is the Charles Babbage of Our Generation

Here we arrive at the furthest shores of speculation. One of the basic promises of virtual reality, at least in its Snow Crash-inflected version, is that we will be able to re-create the full sense of being in someone’s presence in a mediated environment. This desire, present at least since Shamash appeared to Gilgamesh in a dream, can be re-stated in technological terms as a hope that communications will finally become an adequate substitute for travel. We have been promised that this will come to pass with current technology since AT&T demoed a video phone at the 1964 World’s Fair.

I believe this version of virtual reality will in fact be achieved, someday. I do not, however, believe that it will involve a screen. Trying to trick the brain by tricking the eyes is a mug’s game. The brain is richly arrayed with tools to detect and unmask visual trickery — if the eyes are misreporting, the brain falls back on other externally focused senses like touch and smell, or internally focused ones like balance and proprioception.

Though the conception of virtual reality is clear, the technologies we have today are inadequate to the task. In the same way that the theory of computation arose in the mechanical age, but had to wait first for electrics and then electronics to be fully realized, general purpose virtual reality is an idea waiting on a technology, and specifically on neural interface, which will allow us to trick the brain by tricking the brain. (The neural interface in turn waits on trifling details like an explanation of consciousness.)

In the meantime, the 3D worlds program in the next decade is likely to resemble the AI program in the last century, where early optimism about rapid progress on general frameworks gave way to disconnected research topics (machine vision, natural language processing) and ‘toy worlds’ environments. We will continue to see valuable but specific uses for immersive environments, from flight training and architectural flythroughs to pain relief for burn victims and treatment for acrophobia. These are all indisputably good things, but they are not themselves general, and more importantly don’t suggest rapid progress on generality. As a result, games will continue to dominate the list of well-populated environments for the foreseeable future, rendering ineffectual the category of virtual worlds, and, critically, many of the predictions being attached thereunto.

[We’ve been experiencing continuing problems with our MT-powered commenting system. We’re working on a fix but for now send you to a temporary page where the discussion can continue.]

Comments (0) + TrackBacks (0) | Category: social software

Against Well-designed Reputation Systems (An Argument for Community Patent)

Email This Entry

Posted by Clay Shirky

Intro: I was part of a group of people asked by Beth Noveck to advise the Community Patent review project about the design of a reputation and ranking system, to allow the widest possible input while keeping system gaming to a minimum. This was my reply, edited slightly for posting here.

We’ve all gone to school on the moderation and reputation systems of Slashdot and eBay. In those cases, their growing popularity in the period after their respective launches led to a tragedy of the commons, where open access plus incentives led to nearly constant attack by people wanting to game the system, whether to gain attention for themselves or their point of view in the case of Slashdot, or to defraud other users, as with eBay.

The traditional response to these problems would have been to hire editors or other functionaries to police the system for abuse, in order to stem the damage and to assure ordinary users you were working on their behalf. That strategy, however, would fail at the scale and degree of openness at which those services function. The Slashdot FAQ tells the story of trying to police the comments with moderators chosen from among the userbase, first 25 of them and later 400. Like the Charge of the Light Brigade, however, even hundreds of committed individuals were just cannon fodder, given the size of the problem. The very presence of effective moderators made the problem worse over time. In a process analogous to more roads creating more traffic, the improved moderation saved the site from drowning in noise, so more users joined, but this increase actually made policing the site harder, eventually breaking the very system that made the growth possible in the first place.

eBay faced similar, ugly feedback loops; any linear expenditure of energy required for policing, however small the increment, would ultimately make the service unsustainable. As a result, the only opportunity for low-cost policing of such systems is to make them largely self-policing. From these examples and others we can surmise that large social systems will need ways to highlight good behavior or suppress negative behavior or both. If the guardians are to guard themselves, oversight must be largely replaced by something we might call intrasight, designed in such a way that imbalances become self-correcting.

The obvious conclusion to draw is that, when contemplating a new service with these characteristics, the need for some user-harnessed reputation or ranking system can be regarded as a foregone conclusion, and that these systems should be carefully planned so that tragedy-of-the-commons problems can be avoided from launch. I believe that this conclusion is wrong, and that where it is acted on, its effects are likely to be at least harmful, if not fatal, to the service adopting them.

There is an alternate reading of the Slashdot and eBay stories, one that I believe better describes those successes, and better places Community Patent to take advantage of similar processes. That reading concentrates not on outcome but process; the history of Slashdot’s reputation system should teach us not “End as they began — build your reputation system in advance” but rather “Begin as they began — ship with a simple set of features, watch and learn, and implement reputation and ranking only after you understand the problems you are taking on.” In this telling, constituting users’ relations as a set of bargains developed incrementally and post hoc is more predictive of eventual success than simply adopting any residue from previous successes.

As David Weinberger noted in his talk The Unspoken of Groups, clarity is violence in social settings. You don’t get 1789 without living through 1788; successful constitutions, which necessarily create clarity, are typically ratified only after a group has come to a degree of informal cohesion, and is thus able to absorb some of the violence of clarity, in order to get its benefits. The desire to participate in a system that constrains freedom of action in support of group goals typically requires that the participants have at least seen, and possibly lived through, the difficulties of unfettered systems, while at the same time building up their sense of membership or shared goals in the group as a whole. Otherwise, adoption of a system whose goal is precisely to constrain its participants can seem too onerous to be worthwhile. (Again, contrast the US Constitution with the Articles of Confederation.)

Most current reputation systems have been fit to their situation only after that situation has moved from theoretical to actual; both eBay and Slashdot moved from a high degree of uncertainty to largely stable systems after a period of early experimentation. Perhaps surprisingly, this has not committed them to continual redesign. In those cases, systems designed after launch, but early in the process of user adoption, have survived to this day with only relatively minor subsequent adjustments.

Digg is the important counter-example, the most successful service to date to design a reputation system in advance. Digg differs from the community patent review process in that the designers of Digg had an enormous amount of prior art directly in its domain (Slashdot, Kuro5hin, Metafilter, et al), and still ended up with serious re-design issues. More speculatively, Digg seems to have suffered more from both system gaming and public concern over its methods, possibly because the lack of organic growth of its methods prevented it from becoming legitimized over time in the eyes of its users. Instead, they were asked to take it or leave it (never a choice users have been known to relish.)

Though more reputation design work may become Digg-like over time, in that designers can launch with systems more complete than eBay or Slashdot did, the ability to survey significantly similar prior art, and the ability to adopt a fairly high-handed attitude towards users who dislike the service, are not luxuries the community patent review process currently enjoys.

The Argument in Two Pictures

The argument I’m advancing can be illustrated with two imaginary graphs. The first concerns plasticity, the ease with which any piece of software can be modified.

Plasticity generally decays with time. It is highest in the early parts of the design phase, when a project is in its most formative stages. It is easier to change a list of potential features than a set of partially implemented features, and it is easier to change partially implemented features than fully implemented features. Especially significant is the drop in plasticity at launch; even for web-based services, which exist only in a single instantiation and can be updated frequently and for all users at once, the addition of users creates both inertia, in the direction of not breaking their mental model of the service, and caution in upgrading, so as not to introduce bugs or create downtime in a working service. As the userbase grows, the expectations of the early adopters harden still further, while the expectations of new users follow the norms set up by those adopters; this is particularly true of any service with a social component.

An obvious concern with reputation systems is that, as with any feature, they are easier to implement when plasticity is high. Other things being equal, one would prefer to design the system as early as possible, and certainly before launch. In the current case, however, other things are not equal. In particular, the specificity of information the designers have about the service and how it behaves in the hands of real users moves counter to plasticity over time.

When you are working to understand the ideal design for a particular piece of software, the specificity of your knowledge increases with time. During the design phase, the increasing concreteness of the work provides concomitant gains in specificity, but nothing like the jump at launch. No software, however perfect, survives first contact with the users unscathed, and given the unparalleled opportunities with web-based services to observe user behavior — individually and in bulk, in the moment and over time — the period after launch increases specificity enormously, after which it continues to rise, albeit at a less torrid pace.
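
A sketch of the two imaginary graphs as notional curves (the shapes and numbers are arbitrary; only the directions of change and the discontinuity at launch matter):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 10, 400)                  # time; launch at t = 5 (arbitrary)
    plasticity = np.exp(-0.25 * t) * np.where(t > 5, 0.5, 1.0)  # decays, drops at launch
    specificity = 0.05 * t + np.where(t > 5, 0.5, 0.0)          # rises, jumps at launch

    plt.plot(t, plasticity, label="plasticity")
    plt.plot(t, specificity, label="specificity of knowledge")
    plt.axvline(5, linestyle=":", color="gray", label="launch")
    plt.xlabel("time")
    plt.legend()
    plt.show()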

There is a tension between knowing and doing; in the absence of the ideal scenario where you know just what needs to be done while enjoying complete freedom to do it (and a pony), the essential tradeoff is in understanding which features benefit most from increased specificity of knowledge. Two characteristics tend to push the ideal implementation window to post-launch: when the set of possible features is very large but the set of features that will ultimately be required is small, and when culling the small number of required features from the set of all possible features can only be done by observing actual users. I believe that both conditions apply a fortiori to reputation and ranking.

Costs of Acting In Advance of Knowing

Consider the costs of designing a reputation system in advance. In addition to the well-known problems of feature-creep (“Let’s make it possible to rank reputation rankings!”) and Theory of Everything technologies (“Let’s make it Semantic Web-compliant!”), reputation systems create an astonishing perimeter defense problem. The number of possible threats you can imagine in advance is typically much larger than the number that manifest themselves in functioning communities. Even worse, however large the list of imagined threats, it will not be complete. Social systems are degenerate, which is to say that there are multiple alternate paths to similar goals — someone who wants to act out and is thwarted along one path can readily find others.

As you will not know which of these ills you will face, the perimeter you will end up defending will be very large and, critically, hard to maintain. The likeliest outcome from such an a priori design effort is inertness; a system designed in advance to prevent all negative behavior will typically have as a side effect deflecting almost all behavior, period, as users simply turn away from adoption.

Working social systems are both complex and homeostatic; as a result, any given strategy for mediating social relations can only be analyzed in the context of the other strategies in use, including strategies adopted by the users themselves. Since the user strategies cannot, by definition, be perfectly predicted in advance, and since the only ungameable social system is the one that doesn’t ship, every social system will have some weakness. A system designed in advance is likely to be overdefended while still having serious weaknesses unknown to the designer, because the discovery and exploitation of that class of weakness can only occur in working, which is to say user-populated, systems. (As with many observations about the design of social systems, these are precedents first illustrated in Lessons from Lucasfilm’s Habitat, in the sections “Don’t Trust Anybody” and “Detailed Central Planning Is Impossible, Don’t Even Try”.)

The worst outcome of such a system would be collapse (the Communitree scenario), but even the best outcome would still require post hoc design to fix the system with regard to observed user behavior. You could save effort while improving the possibility of success by letting yourself not know what you don’t know, and then learning as you go.

In Favor of Instrumentation Plus Attention

The N-squared problem is only a problem when N is large; in most social systems the users are the most important N, and the userbase only grows large gradually, even for successful systems. (Indeed, this scaling up only over time typically provides the ability for a core group, once they have self-identified, to acculturate new users a bit at a time, using moral suasion as their principal tool.) As a result, in the early days of a system, the designers occupy a valuable point of transition, after user behavior is observable, but before scale and culture defeat significant intervention.

To take advantage of this designable moment, I believe that what Community Patent needs, at launch, is only this: metadata, instrumentation, and attention.

Metadata: There are, I believe, three primitive types of metadata required for Community Patent — people, patents, and interjections. Each of these will need some namespace to exist in — identity for the people, and named data for the patents themselves and for various forms of interjection, from simple annotation to complex conversation. In addition, two abstract types are needed — links and labels. A link is any unique pair of primitives — this user made that comment, this comment is attached to that conversation, this conversation is about those patents. All links should be readily observable and extractable from the system, even if they are not exposed in the interface the user sees. Finally, following Schachter’s intuition from del.icio.us, all links should be labelable. (Another way to view the same problem is to see labels as another type of interjection, attached to links.) I believe that this will be enough, at launch, to maximize the specificity of observation while minimizing the loss of plasticity.
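
To make that concrete, here is a minimal sketch in Python of the primitives and abstract types described above. Every name in it is invented for illustration; this is a sketch of the idea, not a proposed schema.

    from dataclasses import dataclass

    # Three primitive types, each living in its own namespace.
    @dataclass(frozen=True)
    class Person:
        user_id: str                  # identity

    @dataclass(frozen=True)
    class Patent:
        patent_id: str                # named data for the patent itself

    @dataclass(frozen=True)
    class Interjection:
        interjection_id: str
        kind: str                     # "annotation", "comment", "conversation", ...

    # A link is any unique pair of primitives: this user made that comment,
    # this comment is attached to that conversation. Links are recorded so
    # they can be observed and extracted even when the interface never
    # exposes them.
    @dataclass(frozen=True)
    class Link:
        source: object                # any primitive
        target: object                # any primitive

    # Following the del.icio.us intuition, any link can carry labels.
    labels: dict[Link, set[str]] = {}

    def label(link: Link, tag: str) -> None:
        labels.setdefault(link, set()).add(tag)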

Instrumentation: As we know from collaborative filtering algorithms from Ringo to PageRank, it is not necessary to ask users to rank things in order to derive their rankings. The second necessary element will be the automated delivery of as many possible reports to the system designers as can be productively imagined, and, at least as essential, a good system for quickly running ad hoc queries, and automating their production should they prove fruitful. This will help identify both the kinds of productive interactions on the site that need to be defended and the kinds of unproductive interactions they need to be defended from.
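
As a sketch of what that instrumentation might look like (again with invented names), even trivial tallies over the extracted links yield standing reports, and an ad hoc query is just another expression over the same data:

    from collections import Counter

    # The extractable record described above, reduced to (actor, verb,
    # object) triples; the rows are invented examples.
    links = [
        ("alice", "commented_on", "patent-17"),
        ("bob",   "commented_on", "patent-17"),
        ("alice", "labeled",      "patent-42"),
    ]

    # Two standing reports: where is the action, and who is acting?
    activity_by_patent = Counter(obj for _, _, obj in links)
    activity_by_user   = Counter(actor for actor, _, _ in links)

    # An ad hoc query, e.g. which users have touched a given patent:
    who_touched = {actor for actor, _, obj in links if obj == "patent-17"}

    print(activity_by_patent.most_common(10))   # likely power-law shaped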

Designer Attention: This is the key — it will be far better to invest in smart people watching the social aspects of the system at launch than in smart algorithms guiding those aspects. If we imagine the moment when the system has grown to an average of 10 unique examiners per patent and 10 comments per examiner, then a system with even a thousand patents will be relatively observable without complex ranking or reputation systems, as both the users and the comments will almost certainly exhibit power-law distributions. In a system with as few as ten thousand users and a hundred thousand comments, it will still be fairly apparent where the action is, allowing you the time between Patent #1 and Patent #1000 to work out what sorts of reputation and ranking systems need to be put in place.

This is a simplification, of course, as each of the categories listed above presents its own challenges — how should people record their identity? What’s the right balance between closed and open lists of labels? And so on. I do not mean to minimize those challenges. I do however mean to say that the central design challenge of user governance — self-correcting systems that do not raise crushing participation burdens on the users or crushing policing barriers on the hosts — is so hard to meet in advance that, provided you have the system primitives right, the Boyd Strategy of OODA — Observe, Orient, Decide, Act — will be superior to any amount of advance design work.

[We’ve been experiencing continuing problems with our MT-powered commenting system. We’re working on a fix but for now send you to a temporary page where the discussion can continue.]

Comments (0) + TrackBacks (0) | Category: social software

January 27, 2007

Crowd questions

Email This Entry

Posted by David Weinberger

LinkedIn is now enabling users to pose questions to their social network. Only members can respond. They’re also limiting how many questions you can ask per month. Interestingly, you’re only allowed to give one answer to any one question. As always, it’s those details that determine the shape of the society and its success. (Thanks for the pointer, Eric Scheid.)

Comments (1) + TrackBacks (0) | Category: social software

January 4, 2007

Real Second Life numbers, thanks to David Kirkpatrick

Email This Entry

Posted by Clay Shirky

I’ve been complaining about bad reporting of Second Life population for some time now. David Kirkpatrick at Fortune has finally gotten some signal out of Linden Labs. Kirkpatrick’s report is here, in the comments. (CNN.com comments don’t have permalinks, so scroll down.)

Here are the numbers Philip Rosedale of Linden gave him. These are, I presume, as of Jan 3:

  • 1,525,670 unique people have logged into SL at least once (so now we know: Residents is seeing something a bit over 50% inflation over users.)
  • Of that number, 252,284 people have logged in more than 30 days after their account creation date.
  • Monthly growth in that figure, calculated as the change between last September and last October, was 23%.

    Those of us who wanted the conversation to be grounded in real numbers owe Kirkpatrick our thanks for helping us get there.

    These numbers should have two good effects. First, now that Linden has reported, and Kirkpatrick has published, the real figures, maybe we’ll see the press shift to reporting users and active users, instead of Residents.

    Second, we’re no longer going to be asked to stomach absurd claims of size and growth. The ‘2.3 million user/77% growth in two months’ figures would have meant 70 million Second Life users this time next year. 250 thousand and 23% growth will mean 3 million in a year’s time, a healthy number, but not hyperbolic growth.
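
    The arithmetic behind those projections is plain compounding; a quick check in Python, assuming six two-month periods and twelve monthly periods in a year:

        # 77% growth per two-month period, six periods in a year:
        print(f"{2_300_000 * 1.77 ** 6:,.0f}")   # about 70 million
        # 23% growth per month, twelve months:
        print(f"{252_284 * 1.23 ** 12:,.0f}")    # about 3 million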

    We can start asking more sophisticated questions now, like the use pattern of active users, or the change in monthly growth rates, or whether the Residents-users inflation rate is stable, but those questions are for later. Right now, we’ve got enough real numbers to think about for a while.

    Comments (9) + TrackBacks (0) | Category: social software

    Disney's kiddies network

    Email This Entry

    Posted by David Weinberger

    Disney is launching a social network for kids. My knee-jerk reaction: Yech.

    Gavin O’Malley at Online Media Daily has a more considered reaction. He points to the apparent failure of Wal-Mart’s social network for kids (“The Hub”—an awfully grown-up name), and worries that having parental controls will kill the Disney effort as well. I agree with Gartner’s Andrew Frank that it’s likely to be all product placement all the time…and, if so, I hope kids reject it.

    But, of course, I haven’t seen it and don’t know what it’ll be like. Maybe Disney is smarter than that.

    Comments (0) + TrackBacks (0) | Category: social software

    January 3, 2007

    The future of television and the media triathlon

    Email This Entry

    Posted by Clay Shirky

    Mark Cuban doesn’t understand television. He holds a belief, common to connoisseurs the world over, that quality trumps everything else. The current object of his faith in Qualität Über Alles is HDTV. Says Cuban:
    HDTV is the Internet video killer. Deal with it. Internet bandwidth to the home places a cap on the quality and simplicity of video delivery to the home, and to HDTVs in particular. Not only does internet capacity create an issue, but the complexity of moving HDTV streams around the home and to the HDTV is pretty much a deal killer itself.

    “HDTV is the Internet video killer.” The appeal of this argument — whoever provides the highest quality controls the market — is obvious. So obvious, in fact, that it’s been used before. By audiophiles.

    As January 1, 2000 approaches, and the MP3 whirlpool continues to swirl, one simple fact has made me feel as if I’m stuck at the starting line of the entire download controversy: The sound quality of MP3 has yet to improve above that of the average radio broadcast. Until that changes, I’m merely curious—as opposed to being in the I-want-to-know-it-all-now frenzy that is my usual m.o. when it comes to anything that promises music you can’t get anywhere else. Robert Baird, October, 1999

    MP3s won’t catch on, because they are lower quality than CDs. And this was true, wasn’t it? People cared about audio quality so much that despite other advantages of MP3s (price, shareability, better integration with PCs), they’ve stayed true to the CD all these years. The commercial firms that make CDs, and therefore continue to control the music market, thank these customers daily for their loyalty.

    Meanwhile, back in the real world of the recording business, the news isn’t so rosy.

    Cuban doesn’t understand that television has been cut in half. The idea that there should be a formal link between the tele- part and the vision part has ended. Now, and from now on, the form of a video can be handled separately from its method of delivery. And since they can be handled separately, they will be, because users prefer it that way.

    But Cuban goes further. He doesn’t just believe that, other things being equal, quality will win; he believes quality is so important to consumers that they will accept enormous inconvenience to get that higher-quality playback. When Cuban’s list of advantages of HDTV includes an inability to watch your own video on it (“the complexity of moving HDTV streams around the home and to the HDTV”), you have to wonder what he thinks a disadvantage would look like.

    This is the season of the HDTV gotcha. After Christmas, people are starting to understand that they didn’t buy a nicer TV, they bought only one part of a Total Controlled Content Delivery Package. Got an HDTV monitor and a new computer for Christmas? You might as well have gotten a Fabergé Egg and a framing hammer for all the useful ways you can combine the two presents.

    Media is a triathlon event. People like to watch, but they also like to create, and to share. Doubling down on the watching part while making it harder for the users to play their own stuff or share with their friends makes a medium worse in the users’ eyes. By contrast, the last 50 years have been terrible for user creativity and for sharing, so even moderate improvements in either of those abilities make the public go wild.

    When it comes to media quality, people don’t optimize, they satisfice. Once the medium, whether audio or video or whatever, crosses a minimum threshold, users accept it and move on to caring about other attributes. The change in internet video quality from 1996 to 2006 was the big jump, and YouTube is the proof. After this, firms that offer higher social value for video will have an edge over firms that offer higher production values while reducing social value.

    And because the audience for internet video will grow much faster than the audience for HDTV (and will be less pissed, because YouTube doesn’t rely on a ‘bait and switch’ walled garden play) the premium for making internet video better will grow with it. As Richard Gabriel said of programming languages years ago “[E]ven though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.” That’s where video is today. HDTV provides a better viewing experience than internet video, but many more people care about making internet video better than making HDTV better.

    YouTube is the HDTV killer. Deal with it.

    Comments (15) + TrackBacks (0) | Category: social software

    December 29, 2006

    metadata + reality = politics

    Email This Entry

    Posted by David Weinberger

    The US Food and Drug Administration has decided tentatively that meat and milk from cloned animals are the same as from normal animals, so it is not going to require those products to carry special labels.

    Too bad.

    It’s not that I think cloned food is dangerous. I’d still like the labels to note that the animals were cloned because more metadata is always good. If people don’t want to eat clones for whatever reason, they should be enabled to make that choice. In fact, we’d be better off with full access to the information about what we’re purchasing. Where was the cow raised? What was it fed? What was its weight? What was its body fat ratio? How old was it? Did it get to roam free? Did it have a sweet smile? What was its sign? We’re better off being able to access it all, no matter how farfetched.

    But, because of the nature of non-digital reality, taking up label space with a notice that the meat is cloned would itself be metadata indicating that the government thinks such information is worth noting. Metadata in the physical world is a zero sum game.

    And that means not only is it true that (as Clay says) “metadata is worldview” (or is that “metadata are worldview”?), physical labels are politics. We are forced to make value-driven decisions by the constraints of the physical (labels take up valuable space), the biological (human eyes require fonts to be sized above a certain minimum) and the economic (it is not feasible to attach an almanac of information to every chicken wing). But online, all those limits go away…

    …except for the economic. It would be expensive to do a cholesterol count for every slaughtered cow (assuming that cows have cholesterol) simply to gather information that so far nobody cares about, but there’s plenty of information that we’re gathering anyway or for which there is predictable interest—e.g., cloning—that we could make available online (via a unique identifier for each slab of flesh). There would still be politics in the decision about which information to put into the extended set, but it would be a more inclusive, bigger tent, allowing customers to decide according to their own cockamamie values.

    And isn’t cockamamie consumerism what democracy is all about?

    Comments (4) + TrackBacks (0) | Category: social software

    December 26, 2006

    Linden's Second Life numbers and the press's desire to believe

    Email This Entry

    Posted by Clay Shirky

    “Here at KingsRUs.com, we call our website our Kingdom, and any time our webservers serve up a copy of the home page, we record that as a Loyal Subject. We’re very pleased to announce that in the last two months, we have added over 1 million Loyal Subjects to our Kingdom.”

    Put that baldly, you wouldn’t fall for this bit of re-direction, and yet that is exactly what Linden Labs has pulled off with its Residents™ label. By adopting a term that seems like a simple re-branding of “users”, but which is actually unconnected to head count or adoption, they’ve managed to report what the press wants to hear, while providing no actual information.

    If you like your magic tricks to stay mysterious, leave now, but if you want to understand how Linden has managed to disable the fact-checking apparatus of much of the US business press, turning them into a zombie army of unpaid flacks, read on. (And, as with the earlier piece on Linden, this piece has also been published on Valleywag.)

    The basic trick is to make it hard to remember that Linden’s definition of Resident has nothing to do with the plain meaning of the word resident. My dictionary says a resident is a person who lives somewhere permanently or on a long term basis. Linden’s definition of Residents, however, has nothing to do with users at all — it measures signups for an avatar. (Get it? The avatar, not the user, is the resident of Second Life.)

    The obvious costume-party assumption is that there is one avatar per person, but that’s wrong. There can be more than one avatar per account, and more than one account per person, and there’s no public explanation of which of those units Residents measures, and thus no way to tell anything about how many actual people use Second Life. (An embarrassingly First Life concern, I know.)

    Confused yet? Wait, there’s less! Linden’s numbers also suggest that the Residents figure includes even failed attempts to use the service. They reported adding their second million Residents between mid-October and December 14th, but they also reported just shy of 810 thousand logins for the same period. One million new Residents but only 810K logins leaves nearly 200K new Residents unaccounted for. Linden may be counting as Residents people who signed up and downloaded the client software, but who never logged in, or there may be some other reason for the mismatched figures, but whatever the case, Residents is remarkably inflated with regards to the published measure of use.

    (If there are any actual reporters reading this and doing a big cover story on Linden, you might ask about how many real people use Second Life regularly, as opposed to Residents or signups or avatars. As I write those words, though, I realize I might as well be asking Business Week to send me a pony for my birthday.)

    Like a push-up bra, Linden’s trick is as effective as it is because the press really, really wants to believe:

  • “It has a population of a million.” — Richard Siklos, New York Times
  • “In the Internet-based virtual world known as Second Life, for instance, more than 1 million citizens have created representations of themselves known as avatars…” — Michael Yessis, USA TODAY
  • “Since it started about three years ago, the population of Second Life has grown to 1.2 million users.” — Peter Valdes-Dapena, CNN
  • “So far, it’s signed up 1.3 million members.” — David Kirkpatrick, Fortune

    Professional journalists wrote those sentences. They work for newspapers and magazines that employ (or used to employ) fact-checkers. Yet here they are, supplementing Linden’s meager PR budget by telling their readers that Residents measures something it actually doesn’t.

    This credulity appears even in the smallest items. I discovered the “Residents vs Logins” gap when I came across a Business 2.0 post by Erick Schonfeld, where he included the mismatched numbers while congratulating Linden on a job well done. When I asked the obvious question in the comments — How come there are fewer logins than new Residents in the same period? — I got a nice email from Mr. Schonfeld, complimenting me on a good catch.

    Now I’m generally pretty enthusiastic about taking credit where it isn’t due, but this bit of praise failed to meet even my debased standards. The post was a hundred words long, and it had only two numbers in it. I didn’t have to use forensic accounting to find the discrepancy, I just used subtraction (an oft-overlooked tool in the journalistic toolkit, but surprisingly effective when dealing with numbers.)

    This is the state of business reporting in an age when even the pros want to roll with the cool blogger kids. Got a paragraph that contains only two numbers, and they don’t match? No problem! Post it anyway, and on to the next thing.

    The prize bit of PReporting so far, though, has to be Elizabeth Corcoran’s piece for Forbes called A Walk on the Virtual Side, where she claimed that Second Life had recently passed “a million unique customers.”

    This is three lies in four words. There isn’t one million of anything human inhabiting Second Life. There is no one-to-one correlation between Residents and users. And whatever Residents does measure, it has nothing to do with paying customers. The number of paid accounts is in the tens of thousands, not the millions (and remember, if you’re playing along at home, there can be more than one account per person. Kits, cats, sacks, and wives, how many logged into St. Ives?)

    Despite the credulity of the Fourth Estate (Classic Edition), there are enough questions being asked in the weblogs covering Second Life that the usefulness is going to drain out of the ‘Resident™ doesn’t mean resident’ trick over the next few months. We’re going to see three things happen as a result.

    The first thing that’s going to happen, or rather not happen, is that the regular press isn’t going to go back over this story looking for real figures. As much as they’ve written about the virtual economy and the next net, the press hasn’t really covered Second Life as a business story or a tech story so much as a trend story. The sine qua non of trend stories is that a trend is fast-growing. The Residents figure was never really part of the story, it just provided permission to write about how crazy it is that all the kids these days are getting avatars. By the time any given writer was pitching that story to their editors, any skepticism about the basic proposition had already been smothered.

    No journalist wants to have to write “When we told you that Second Life had 1.3 million members, we in no way meant to suggest that figure referred to individual people. Fortune regrets any misunderstanding.” And since no one wants to write that, no one will. They’ll shift their coverage without pointing out the shift to their readers.

    The second thing that is going to happen is an increase in arguments of the form “We mustn’t let Linden’s numbers blind us to the inevitability of the coming metaverse.” That’s the way it is with things we’re asked to take on faith — when A works, it’s evidence of B, but if A isn’t working as well as everyone thought, it’s suddenly unrelated to B.

    Finally, there is going to be a spike in the number of the posts claiming that the two million number was never important anyway, the press’s misreporting was all an innocent mistake, Linden was planning to call those reporters first thing Monday morning and explain everything. Tateru Nino has already kicked off this genre with a post entitled The Value of One. The flow of her argument is hard to synopsize, but you can get a sense of it from this paragraph:

    So, a hundred thousand, one million, two million. Those numbers mean something to us, but not because they have intrinsic, direct meaning. They have meaning because they’re filtered through the media, disseminated out into the world, believed by people, who then act based on that belief, and that is where the meaning lies.

    Expect more, much more, of this kind of thing in 2007.

    Comments (10) + TrackBacks (0) | Category: social software

    December 21, 2006

    PLoS ONE ... the long tail of scientific research

    Email This Entry

    Posted by David Weinberger

    Public Library of Science has gone beta with PLoS ONE, a peer-reviewed journal that publishes everything that passes the review, not just what it considers to be important. So, if it’s good science about a nit, it’ll find a home at PLoS ONE.

    Articles are all published under a Creative Commons Attribution License. It does, however, cost a scientist (or her institution) $1,250 to be published by PLoS ONE. This is, alas, an improvement over what traditional journals charge scientists. PLoS ONE will waive the fee for authors who don’t have the funds.

    Readers can discuss and annotate the articles. But the site could really use tags ‘n’ feeds. Maybe after beta…

    Comments (7) + TrackBacks (0) | Category: social software

    December 15, 2006

    on being virtual

    Email This Entry

    Posted by danah boyd

    Lately, i’ve become very irritated by the immersive virtual questions i’ve been getting. In particular, “will Web3.0 be all about immersive virtual worlds?” Clay’s post on Second Life reminded me of how irritated i am by this. I have to admit that i get really annoyed when techno-futurists fetishize Stephenson-esque visions of virtuality. Why is it that every 5 years or so we re-instate this fantasy as the utopian end-all be-all of technology? (Remember VRML? That was fun.)

    Maybe i’m wrong, maybe i’ll look back in twenty years and be embarrassed by my lack of foresight. But honestly, i don’t think we’re going virtual.

    There is no doubt that immersive games are on the rise and i don’t think that trend is going to stop. I think that WoW is a strong indicator of one kind of play that will become part of the cultural landscape. But there’s a huge difference between enjoying WoW and wanting to live virtually. There ARE people who want to go virtual and i wouldn’t be surprised if there are many opportunities for sustainable virtual environments. People who feel socially ostracized in meatspace are good candidates for wanting to go virtual. But again, that’s not everyone.

    If you look at the rise of social tech amongst young people, it’s not about divorcing the physical to live digitally. MySpace has more to do with offline structures of sociality than it has to do with virtuality. People are modeling their offline social network; the digital is complementing (and complicating) the physical. In an environment where anyone could socialize with anyone, they don’t. They socialize with the people who validate them in meatspace. The mobile is another example of this. People don’t call up anyone in the world (like is fantasized by some wrt Skype); they call up the people that they are closest with. The mobile supports pre-existing social networks, not purely virtual ones.

    That’s the big joke about the social media explosion. 1980s and 1990s researchers argued that the Internet would make race, class, gender, etc. extinct. There was a huge assumption that geography and language would no longer matter, that social organization would be based on some higher function. Guess what? When the masses adopted social media, they replicated the same social structures present in the offline world. Hell, take a look at how people from India are organizing themselves by caste on Orkut. Nothing gets erased because it’s all connected to the offline bodies that are heavily regulated on a daily basis.

    While social network sites and mobile phones are technology to adults, they are just part of the social infrastructure for teens. Remember what Alan Kay said? “Technology is anything that wasn’t around when you were born.” These technologies haven’t been adopted as an alternative to meatspace; they’ve been adopted to complement it.

    Virtual systems will be part of our lives, but i don’t think immersion is where it’s at. Most people are deeply invested in the physicality of life; this is not going away.

    Update: to discuss this post, please join the conversation at apophenia.

    Comments (0) + TrackBacks (0) | Category: social software

    December 12, 2006

    Second Life: What are the real numbers?

    Email This Entry

    Posted by Clay Shirky

    Second Life is heading towards two million users. Except it isn’t, really. We all know how this game works, and has since the earliest days of the web:

    Member of the Business Press: “How many users do you have?”
    CEO of Startup: (covers phone) “Hey guys, how many rows in the ‘users’ table?”
    [Sound F/X: Typing]
    Offstage Sysadmin: “One million nine hundred and one thousand one hundred and seventy-three.”
    CEO: (Into phone) “We have one point nine million users.”

    Someone who tries a social service once and bails isn’t really a user any more than someone who gets a sample spoon of ice cream and walks out is a customer.

    So here’s my question — how many return users are there? We know from the startup screen that the advertised churn of Second Life is over 60% (as I write this, it’s 690,800 recent users to 1,901,173 signups, or 63%.) That’s not stellar but it’s not terrible either. However, their definition of “recently logged in” includes everyone in the last 60 days, even though the industry standard for reporting unique users is 30 days, so we don’t actually know what the apples to apples churn rate is.
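
    For anyone checking at home, the churn figure is just one minus the ratio of recently-logged-in users to signups; in Python:

        signups = 1_901_173
        recent  = 690_800               # "recently logged in", 60-day window
        churn   = 1 - recent / signups
        print(f"{churn:.1%}")           # 63.7%, the advertised "over 60%"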

    At a guess, Second Life churn measured in the ordinary way is in excess of 85%, with a surge of new users being driven in by the amount of press the service is getting. The wider the Recently Logged In reporting window is, the bigger the bulge of recently-arrived-but-never-to-return users that gets counted in the overall numbers.

    I suspect Second Life is largely a “Try Me” virus, where reports of a strange and wonderful new thing draw the masses to log in and try it, but whose ability to retain anything but a fraction of those users is limited. The pattern of a Try Me virus is a rapid spread of first time users, most of whom drop out quickly, with most of the dropouts becoming immune to later use. Pointcast was a Try Me virus, as was LambdaMOO, the experiment that Second Life most closely resembles.

    Press Pass

    I have been watching the press reaction to Second Life with increasing confusion. Breathless reports of an Imminent Shift in the Way We Live® do not seem to be accompanied by much skepticism. I may have been made immune to the current mania by ODing on an earlier belief in virtual worlds:

    Similar to the way previous media dissolved social boundaries related to time and space, the latest computer-mediated communications media seem to dissolve boundaries of identity as well. […] I know a respectable computer scientist who spends hours as an imaginary ensign aboard a virtual starship full of other real people around the world who pretend they are characters in a Star Trek adventure. I have three or four personae myself, in different virtual communities around the Net. I know a person who spends hours of his day as a fantasy character who resembles “a cross between Thorin Oakenshield and the Little Prince,” and is an architect and educator and bit of a magician aboard an imaginary space colony: By day, David is an energy economist in Boulder, Colorado, father of three; at night, he’s Spark of Cyberion City—a place where I’m known only as Pollenator.

    This wasn’t written about Second Life or any other 3D space, it was Howard Rheingold writing about MUDs in 1993. This was a sentiment I believed and publicly echoed at the time. Per Howard, “MUDs are living laboratories for studying the first-level impacts of virtual communities.” Except, of course, they weren’t. If, in 1993, you’d studied mailing lists, or usenet, or irc, you’d have a better grasp of online community today than if you’d spent a lot of time in LambdaMOO or Cyberion City. Où sont les TinyMUCKs d’antan?

    You can find similar articles touting 3D spaces shortly after the MUD frenzy. Ready for a blast from the past? “August 1996 may well go down in the annals of the Internet as the turning point when the Web was released from the 2D flatland of HTML pages.” Oops.

    So what accounts for the current press interest in Second Life? I have a few ideas, though none is concrete enough to call an answer yet.

    First, the tech beat is an intake valve for the young. Most reporters don’t remember that anyone has ever wrongly predicted a bright future for immersive worlds or flythrough 3D spaces in the past, so they have no skepticism triggered by the historical failure of things like LambdaMOO or VRML. Instead, they hear of a marvelous thing — A virtual world! Where you have an avatar that travels around! And talks to other avatars! — which they then see with their very own eyes. How cool is that? You’d have to be a pretty crotchety old skeptic not to want to believe. I bet few of those reporters ever go back, but I’m sure they’re sure that other people do (something we know to be false, to a first approximation, from the aforementioned churn.) Second Life is a story that’s too good to check.

    Second, virtual reality is conceptually simple. Unlike ordinary network communications tools, which require a degree of subtlety in thinking about them — as danah notes, there is no perfect metaphor for a weblog, or indeed most social software — Second Life’s metaphor is simplicity itself: you are a person, in a space. It’s like real life. (Only, you know, more second.) As Philip Rosedale explained it to Business Week “[I]nstead of using your mouse to move an arrow or cursor, you could walk your avatar up to an Amazon.com (AMZN) shop, browse the shelves, buy books, and chat with any of the thousands of other people visiting the site at any given time about your favorite author over a virtual cuppa joe.”

    Never mind that the cursor is a terrific way to navigate information; never mind that Amazon works precisely because it dispenses with rather than embraces the cyberspace metaphor; never mind that all the “Now you can shop in 3D” efforts like the San Francisco Yellow Pages tanked because 3D is a crappy way to search. The invitation here is to reason about Second Life by analogy, which is simpler than reasoning about it from experience. (Indeed, most of the reporters writing about Second Life seem to have approached it as tourists getting stories about it from natives.)

    Third, the press has a congenital weakness for the Content Is King story. Second Life has made it acceptable to root for the DRM provider, because of their enlightened user agreements concerning ownership. This obscures the fact that an enlightened attempt to make digital objects behave like real world objects suffers from exactly the same problems as an unenlightened attempt, a la the RIAA and MPAA. All the good intentions in the world won’t confer atomicity on binary data. Second Life is pushing against the ability to create zero-cost perfect copies, whereas Copybot relied on that most salient of digital capabilities, which is how Copybot was able to cause so much agita with so little effort — it was working with the actual, as opposed to metaphorical, substrate of Second Life.

    Finally, the current mania is largely push-driven. Many of the articles concern “The first person/group/organization in Second Life to do X”, where X is something like have a meeting or open a store — it’s the kind of stuff you could read off a press release. Unlike Warcraft, where the story is user adoption, here most of the stories are about provider adoption, as with the Reuters office or the IBM meeting or the resident creative agencies. These are things that can be created unilaterally and top-down, catnip to the press, who are generally in the business of covering the world’s deciders.

    The question about American Apparel, say, is not “Did they spend money to set up stores in Second Life?” Of course they did. The question is “Did it pay off?” We don’t know. Even the recent Second Life millionaire story involved eliding the difference between actual and potential wealth, a mistake you’d have thought 2001 would have chased from the press forever. In illiquid markets, extrapolating that a hundred of X are worth the last sale price of X times 100 is a fairly serious error.

    Artifacts vs. Avatars

    Like video phones, which have been just one technological revolution away from mass adoption since 1964, virtual reality is so appealingly simple that its persistent failure to be a good idea, as measured by user adoption, has done little to dampen enthusiasm for the coming day of Keanu Reeves interfaces and Snow Crash interactions.

    I was talking to Irving Wladawsky-Berger of IBM about Second Life a few weeks ago, and his interest in the systems/construction aspect of 3D seems promising, in the same way video phones have been used by engineers who train the camera not on their faces but on the artifacts they are talking about. There is something to environments for modeling or constructing visible things in communal fashion, but as with the video phone, they will probably involve shared perceptions of artifacts, rather than perceptions of avatars.

    This use, however, is specific to classes of problems that benefit from shared visual awareness, and that class is much smaller than the current excitement about visualization would suggest. More to the point, it is at odds with the “Son of MUD+thePalace” story currently being written about Second Life. If we think of a user as someone who has returned to a site after trying it once, I doubt that the number of simultaneous Second Life users breaks 10,000 regularly. If we raise the bar to people who come back for a second month, I wonder if the site breaks 10,000 simultaneous return visitors outside highly promoted events.

    Second Life may be wrought by its more active users into something good, but right now the deck is stacked against it, because the perceptions of great user growth and great value from scarcity are mutually reinforcing but built on sand. Were the press to shift to reporting Recently Logged In as their best approximation of the population, the number of reported users would shrink by an order of magnitude; were they to adopt industry-standard unique users reporting (assuming they could get those numbers), the reported population would probably drop by two orders. If the growth isn’t as currently advertised (and it isn’t), then the value from scarcity is overstated, and if the value of scarcity is overstated, at least one of the engines of growth will cool down.

    There’s nothing wrong with a service that appeals to tens of thousands of people, but in a billion-person internet, that population is also a rounding error. If most of the people who try Second Life bail (and they do), we should adopt a considerably more skeptical attitude about proclamations that the oft-delayed Virtual Worlds revolution has now arrived.

    Comments (85) + TrackBacks (0) | Category: social software

    December 5, 2006

    Friends, Friendsters, and Top 8: Writing community into being on social network sites

    Email This Entry

    Posted by danah boyd

    My new paper on friending practices in social network sites is officially live at First Monday. Friends, Friendsters, and Top 8: Writing community into being on social network sites

    “Are you my friend? Yes or no?” This question, while fundamentally odd, is a key component of social network sites. Participants must select who on the system they deem to be ‘Friends.’ Their choice is publicly displayed for all to see and becomes the backbone for networked participation. By examining what different participant groups do on social network sites, this paper investigates what Friendship means and how Friendship affects the culture of the sites. I will argue that Friendship helps people write community into being in social network sites. Through these imagined egocentric communities, participants are able to express who they are and locate themselves culturally. In turn, this provides individuals with a contextual frame through which they can properly socialize with other participants. Friending is deeply affected by both social processes and technological affordances. I will argue that the established Friending norms evolved out of a need to resolve the social tensions that emerged due to technological limitations. At the same time, I will argue that Friending supports pre-existing social norms yet because the architecture of social network sites is fundamentally different than the architecture of unmediated social spaces, these sites introduce an environment that is quite unlike that to which we are accustomed.

    I very much enjoyed writing this paper and i hope you enjoy reading it!

    Comments (4) + TrackBacks (0) | Category: social software

    November 20, 2006

    Social Facts, Expertise, Citizendium, and Carr

    Email This Entry

    Posted by Clay Shirky

    I want to offer a less telegraphic account of the relationship between expertise, credentials, and authority than I did in Larry Sanger, Citizendium, and the Problem of Expertise, and then say why I think the cost of coordination in the age of social software favors Wikipedia over Citizendium, and over traditionally authoritative efforts such as Britannica.

    Make a pot of coffee; this is going to be long, and boring.

    Those of us who write about Wikipedia, both pro and con, often mix two different views: descriptive — Wikipedia is/is not succeeding — and judgmental — Wikipedia is/is not good. (For the record, my view is that Wikipedia is a success, and that society is better off with Wikipedia than it would be without it.) What I love about the Citizendium proposal is that, by proposing a fusion of collaborative construction and expert authority, it presses people who dislike or mistrust Wikipedia to say whether they think that the wiki form of communal production can be improved, or is per se bad.

    Nicholas Carr, in What will kill Citizendium, came out in the latter camp. Explaining why he thinks Citizendium is a bad idea, he offers his prescription for the right way to do things: “[…] you keep the crowd out of it and, in essence, create a traditional encyclopedia.” No need for that ‘in essence’ there. The presence of the crowd is what distinguishes wiki production; this is a defense of the current construction of authority, suggesting that the traditional mechanism for creating encyclopedias is the correct one, and alternate forms of construction are not.

    This is certainly a coherent point of view, but one that I believe will fail in practical terms, because it is uneconomical. (Carr, in his darker moments, seems to believe something similar, but laments what the economics of peer production mean. This is a “Wikipedia is succeeding/is not good” argument.) In particular, I believe that the costs of nominating and then deferring to experts will make Citizendium underperform its competition, relative to the costs of merely involving experts as ordinary participants, as Wikipedia does.

    Expertise, Credentials, and Authority

    First, let me say that I am a realist, which is to say that I believe in a reality that is not socially constructed. The materials that make up my apartment, wood and stone and so on, actually exist, and are independent of any observer. A real tree that falls in a real forest displaces real air, even if no one is there to interpret that as sound.

    I also believe in social facts, things that are true because everyone agrees they are true. My apartment itself is made of real stuff, but its my-ness is built on agreements: my landlady leases it to me, that lease is predicated on her ownership, that ownership is recognized by the city of New York, and so on. Social facts are no less real than non-social facts — my apartment is actually my apartment, my wife is my wife, my job is my job — they are just real for different reasons.

    If everyone stopped agreeing that my job was my job (I quit or was fired, say), I could still walk down to NYU and draw network diagrams on a whiteboard at 1pm on a Tuesday, but no one would come to listen, because my ramblings wouldn’t be part of a class anymore. I wouldn’t be faculty; I’d be an interloper. Same physical facts — same elevator and room and white board and even the same person — but different social facts.

    Some facts are social, some are not. I believe that Sanger, Carr and I all agree that expertise is not a social fact. As Carr says ‘An architect does not achieve expertise through some arbitrary social process of “credentialing.” He gains expertise through a program of study and apprenticeship in which he masters an array of facts and techniques drawn from such domains as mathematics, physics, and engineering.’ I agree with that, and amended my earlier sloppiness in distinguishing between having expertise and being an expert, after being properly called on it by Eric Finchley in the comments.

    However, though Carr’s description is accurate, it is incomplete: an architect does not achieve expertise through credentialing, but an architect does not become an architect through expertise either. An architect is someone with expertise who has also been granted an architect’s credentials. These credentials are ideally granted on proof of the kinds of antecedents that indicate expertise — in the case of architects, relevant study (itself certified with the social fact of a degree) and significant professional work.

    Consider the following case: a young designer with an architecture degree designs a building, and a credentialed architect working at the same firm then affixes her stamp to the drawings. The presence of the stamp means that a contractor can use the drawings to do certain kinds of work; without it the drawings shouldn’t be used for such things. Both the expertise and the credentials are necessary to make a set of drawings usable, but in this fairly common scenario, the expertise and the credentials are held by different people.

    This system is designed to produce enough liability for architects that they will supervise the uncredentialed; if they fail to, their own credentials will be taken away. Now consider a disbarred architect (or lawyer or doctor.) There has been no change in their expertise, but a great change in their credentials. Most of the time, we can take the link between authority, credentials, and expertise for granted (it’s why we have credentials, in fact), but in edge cases, we can see them as separate things.

    The clarity to be gotten from all this definition is a bit of a damp squib: Carr and I are in large agreement about the Citizendium proposal. He thinks that conferring authority is the hard challenge for Citizendium; I think that conferring authority is the hard challenge for Citizendium. He thinks that the openness of a wiki is incompatible with Citizendium’s proposed form of conferring authority, as do I. And we both believe this weakness will be fatal.

    Where we disagree is in what this means for society.

    The Cost of Credentials

    Lying on a bed in an emergency room, you think “Oh good, here comes the doctor.” Your relief comes in part because the doctor has the expertise necessary to diagnose and treat you, and in part because the doctor has the authority to do things like schedule you for surgery if you need it. Whatever your anxieties at that moment, they don’t include the possibility that the nurses will ignore the doctor’s diagnosis, or refuse to treat you in the manner the doctor suggests.

    You don’t worry that expertise and authority are different kinds of things, in other words, because they line up perfectly from your point of view. You simply ascribe to the visible doctor many things that are actually true of the invisible system the doctor works in. The expertise resides in the doctor, but the authority is granted by the hospital, with credentials helping bridge the gap.

    So here’s the thing: it’s incredibly expensive to create and maintain such systems, including especially the cost of creating and policing credentials and authority. We have to make and enforce myriad refined distinctions — not just physician and soldier and chairman but ‘admitting physician’ and ‘second lieutenant’ and ‘acting chairman.’ We don’t let people get married or divorced without the presence of official oversight. Lots of people can drive the bus; only bus drivers may drive the bus. We make it illegal to impersonate an officer. And so on, through innumerable tiny, self-reinforcing choices, all required to keep the links between expertise, credentials and authority functional.

    These systems are beneficial for society. However, they are not absolutely beneficial, they are only beneficial when their benefits outweigh their costs. And we live in an era where all kinds of costs — social costs, coordination costs, Coasean costs — are undergoing a revolution.

    Cost Changes Everything

    Earlier, writing about folksonomies, I said “We need a phrase for the class of comparisons that assumes that the status quo is cost-free.” We still need that; I propose “Cost-free Present” — when people believe we live in a cost-free present, they also believe that any value they see in the world is absolute, not relative. A related assumption is that any new system that has disadvantages relative to the present one is therefore inferior; if the current system creates no costs, then any proposed change that creates new bad outcomes, whatever the potential new good outcomes, is worse than maintaining the status quo.

    Meanwhile, out here in the real world, cost matters. As a result, when the cost structure for creating, say, an encyclopedia changes, our existing assumptions about encyclopedic value have to be re-examined, because current encyclopedic values are relative, not absolute. It is possible for low-cost, low-value systems to be better than high-cost, high-value systems in the view of the society adopting them. If the low-cost system can increase in value over time while remaining low cost, even better.

    Pick your Innovator’s Dilemma: the Gutenberg bible was considerably less beautiful than scribal copies, the Model T was less well constructed than the Curved Dash Olds, floppy disks were considerably less reliable than hard drives, et cetera. So with Wikipedia and Encyclopedia Britannica: Wikipedia began life as a low-cost, low-value alternative, but it was accessible, shareable, and improvable. Britannica, by contrast, has always been high-value, but it is both difficult and expensive for readers to get to, and worse, they can’t use what they see — a Britannica reader can’t copy and post an article, can’t email the contents to their friends, can’t even email those friends the link with any confidence that they will be able to see it.

    Barriers to both access and re-use are built into the Britannica cost structure, and without those barriers, it will collapse. Nothing about the institution of Britannica has changed in the five years of Wikipedia’s existence, but in the current ecosystem, the 1768 model of creation — you pay us and we make an Encyclopedia — has been transformed from a valuable service to a set of self-perpetuating, use-crippling barriers.

    This is what’s wrong with Cost-free Present arguments: the principal competitive advantages of Wikipedia over Britannica, such as shareability or rapid refactoring (as of the Planet entry after Pluto’s recent demotion) are things which were simply not possible in 1768. Wikipedia is not a better Britannica than Britannica; it is a better fit for the current environment than Britannica is. The measure of possible virtues of an encyclopedia now includes free universal access and unlimited re-use. As a result, maintaining Britannica costs more in a world with Wikipedia than it did in a world without it, in the same way scribal production became more expensive after the invention of movable type than before, without the scribes themselves doing anything different.

    If we do what we always did, we’ll get the result we always got

    Citizendium seems predicated on several related ideas about cost and value: having expertise and being an expert are roughly the same thing; the costs of certifying experts will be relatively low; building and running software that confers a higher degree of authority on them than on non-expert users will be similarly low; and the appeal to non-experts of participating in such a system will be high. If these things are true, then a hybrid of voluntary participation and expert authority will be more valuable than either extreme.

    I am betting that those things aren’t true, because the costs of certifying experts and ensuring deference to them — the costs of creating and sustaining the necessary social facts — will sandbag the system, making it too annoying to use.

    The first order costs will come from the certification and deference itself. By proposing to recognize external credentialing mechanisms, Citizendium sets itself up to take on the expenses of determining thresholds and overlaps of expertise. A masters student in psychology doing work on human motivation may know more about behavioral economics than a Ph.D. in neo-classical economics. It would be easy to label them both experts, but on what grounds should their disputes be adjudicated?

    On Wikipedia, the answer is simple — deference is to contributions, not to contributors, and is always provisional. (As the Pluto example shows, even things as seemingly uncontentious as planethood turned out to be provisional.) Wikipedia certainly has management costs (all social systems do), but it has the advantage that those costs are internal, and much of the required oversight is enforced by moral suasion. It doesn’t take on the costs of forcing deference to experts because it doesn’t recognize the category of ‘expert’ as primitive in the system. Experts contribute to Wikipedia, but without requiring any special consideration.

    Citizendium’s second order costs will come from policing the system as a whole. If the process of certification and enforcement of deference become even slightly annoying to the users, they will quickly become non-users. The same thing will happen if the projection of force needed to manage Citizendium delegitimizes the system in the eyes of the contributors.

    The biggest risk with Wikipedia is ongoing: lousy or malicious edits, an occurrence that happens countless times a day. The biggest risk with Citizendium, on the other hand, is mainly up front, in the form of user inaction. The Citizendium project assumes that the desire of ordinary users to work alongside and be guided by experts is high, but everything in the proposal seems to raise the costs of contribution, relative to Wikipedia. If users do not want to participate in a system where the costs of participating are high, Citizendium will simply fail to grow.

    Comments (12) + TrackBacks (0) | Category: social software

    November 12, 2006

    social network sites: my definition

    Email This Entry

    Posted by danah boyd

    I would like to offer my working definition of “social network sites” per confusion over my request for a timeline.

    A “social network site” is a category of websites with profiles, semi-persistent public commentary on the profile, and a traversable publicly articulated social network displayed in relation to the profile.

    To clarify (a rough sketch in code follows this list):

    1. Profile. A profile includes an identifiable handle (either the person’s name or nick) and information about that person (e.g. age, sex, location, interests, etc.). Most profiles also include a photograph and information about last login. Profiles have unique URLs that can be visited directly.
    2. Traversable, publicly articulated social network. Participants have the ability to list other profiles as “friends” or “contacts” or some equivalent. This generates a social network graph which may be directed (“attention network” type of social network where friendship does not have to be confirmed) or undirected (where the other person must accept friendship). This articulated social network is displayed on an individual’s profile for all other users to view. Each node contains a link to the profile of the other person so that individuals can traverse the network through friends of friends of friends….
    3. Semi-persistent public comments. Participants can leave comments (or testimonials, guestbook messages, etc.) on others’ profiles for everyone to see. These comments are semi-persistent in that they are not ephemeral but they may disappear over some period of time or upon removal. These comments are typically reverse-chronological in display. Because of these comments, profiles are a combination of an individuals’ self-expression and what others say about that individual.
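
    Read as a data model, those three features are compact. Here is a minimal sketch in Python, with invented names rather than any actual site’s schema:

        from dataclasses import dataclass, field

        @dataclass
        class Comment:                  # semi-persistent: public, but removable
            author_handle: str
            text: str

        @dataclass
        class Profile:
            handle: str                 # identifiable name or nick
            info: dict                  # age, sex, location, interests, ...
            url: str                    # unique, directly visitable
            # Publicly displayed list of other profiles' handles; because each
            # entry points at another profile, the network is traversable.
            friends: list[str] = field(default_factory=list)
            # Reverse-chronological comments left by other participants.
            comments: list[Comment] = field(default_factory=list)

        # A directed graph (an "attention network") lets friendship go
        # unconfirmed; an undirected one requires both sides to list each other.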

    This definition includes all of the obvious sites that i talk about as social network sites: MySpace, Facebook, Friendster, Cyworld, Mixi, Orkut, etc. Some of the obvious players like LinkedIn are barely social network sites because of their efforts to privatize the articulated social network but, given that it’s possible, I count them (just like i count MySpace even when the users turn their profiles private).

    There are sites that primarily fit into other categories but contain all of the features of social network sites. This is particularly common with sites that were once a different type of community site but have added new features. BlackPlanet, AsianAvenue, MiGente, QQ, and Xanga all fit into this bucket. I typically include LiveJournal as a social network site but it is sorta an edge case because they do not allow you to comment on people’s profiles. They do however allow you to publicly comment on the blog entries. For this reason, Dodgeball is also a problem - there are no comments whatsoever. In many ways, i do not consider Dodgeball a social network site, but i do consider it a mobile social network tool which is why i often lump it into this cluster of things.

    Of course, things are getting trickier every day. I’m half-inclined to qualify the definition to say that the profile and articulated social network are the centralizing features of these sites because there are tons of sites that have profiles and social network site features as peripheral components of their service but where the primary focus is elsewhere. Examples of this include: YouTube, Flickr, Last.FM, 43Things, Meetup, Vox, Crushspot, etc. (Dating sites are probably the most tricky because they are very profile-centric but the social network is peripheral.) But, on the other hand, most of these sites grew out of this phenomenon. So, for the sake of argument, i leave room to include them but also consider them edge cases.

    At the same time, it’s critical to point out what social network sites are most definitely NOT. They are NOT the same as all sites that support social networks or all sites that allow people to engage in social networking. Your mobile phone, your email, your instant message client… these all support the articulation of social networks (addressbooks) but they do not let you publicly display them in relation to a profile for others to traverse. MUDs/MOOs, BBSes, chatrooms, bulletin boards, mailing lists, MMORPGS… these all allow you to meet new people and make friends but they are not social network sites.

    This is part of why i get really antsy when people talk about this category as “social networks” or “social networking” or “social networking sites.” I think that this is leading to all sorts of confusion about what is and what is not in the category. These alternative categories are far far far too broad and all too often i hear people talking about everything that allows you to talk to anyone in any way as one of these sites (this is the mistake that DOPA makes for example).

    While it’s great to talk about all of these things as part of a broader “social software” or “social media” phenomenon, there are also good reasons to have a label to address a subset of these sites that are permitting very particular practices. This allows academics, politicians, technologists, educators, and others to discuss how structural shifts are prompting different kinds of behaviors. (What happens when people publicly articulate their relationships? How do these systems change the rules of virality because the network is visible? Etc.) Because of this, i don’t want the slippage to be too great because people are using terrible terms or because people want their site to fit into the category of what’s currently cool.

    Of course, like most categories, there are huge issues around the edges and there’s never a clean way to construct boundaries. (To understand the challenges, read Women, Fire, and Dangerous Things.) Just think of the category “game” and try to come up with a comfortable definition and boundary for that. Still, there are things that are most definitely not games. An apple is not a game. Sure, it can be used in a game but it is not inherently a game. Not all sites that allow people to engage in social activity are social network sites and it is ridiculous to try to shove them all there simply because there’s a lot of marketing money to be made (yet i realize that this is often the reason why people do try). For this reason, i really want to stake out “social network sites” as a category that has meaningful properties even if the edges are a little fuzzy. There is still a meaningful family resemblance, and some prototypes are more central than others. I really want to focus on making sense of what’s happening with this category by focusing primarily on the prototypes and less on the edge cases.

    Anyhow, this is a work in progress but i wanted to write some of this down since i seem to be getting into lots of fights via email about this.

    Comments (7) + TrackBacks (0) | Category: social software

    social network site history

    Email This Entry

    Posted by danah boyd

    When i started tracking social network sites, i didn’t think that i would be studying them. I did a terrible job at keeping a timeline and now, i realize, this is important information to have on hand. I’m currently in the process of trying to go backwards and capture critical dates and i need your help. I know a lot of you have a lot of this information and can probably help me (and thus help everyone else interested in this arena).

    I have created a simple pbwiki at http://yasns.pbwiki.com/ (password yasns) where i’m starting to make a timeline. Can you please add what you know to it? Pretty please with a cherry on top? A lot of this information is scattered all over the web and in people’s heads and it’d be great to get it documented in a centralized source. (I know that there is some info on Wikipedia but it’s not complete; as appropriate, i will transfer information back in their format.) Note: i didn’t include citations because i often don’t have them but if you have them, they’d be very very welcome.

    Please let others know about this if you think they might have information to add. Thank you kindly for your time.

    (PS: i have a new academic paper coming out shortly. Stay tuned.)

    Comments (2) + TrackBacks (0) | Category: social software

    November 2, 2006

    tagging vs folksonomy?

    Email This Entry

    Posted by Liz Lawley

    Is this a reasonable statement to make?

    • Tagging is the process of adding descriptive terms to an item, without the constraint of a controlled vocabulary
    • Folksonomy is the aggregation of tags from one or more users

    Yes? No?

    (Full disclosure: You’re helping me prepare for a tutorial on folksonomies that I’m presenting at the CSCW conference in Banff this weekend.)

    Update: Over on mamamusings, one commenter raised the issue of whether a folksonomy requires multiple items to be tagged.

    Can a folksonomy exist around a single item (e.g. a del.icio.us bookmark)?

    My assumption has always been that a folksonomy involved tags for multiple items…but perhaps it’s a set of tags describing multiple items, or a set of tags from multiple users, or both.
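
    One way to pin down both definitions, and the update’s question, is to model each act of tagging as a (user, item, tag) triple; a folksonomy is then whatever aggregation you run over those triples. A minimal sketch in Python, with invented data:

        from collections import Counter

        # Tagging: free-form descriptive terms, no controlled vocabulary.
        tags = [
            ("alice", "http://example.com/a", "python"),
            ("bob",   "http://example.com/a", "programming"),
            ("bob",   "http://example.com/b", "python"),
            ("carol", "http://example.com/a", "python"),
        ]

        # Folksonomy as aggregation across users for a single item
        # (the del.icio.us-bookmark case from the update)...
        one_item = Counter(t for u, i, t in tags if i == "http://example.com/a")
        print(one_item)     # Counter({'python': 2, 'programming': 1})

        # ...or across all items and users at once.
        everything = Counter(t for u, i, t in tags)
        print(everything)   # Counter({'python': 3, 'programming': 1})

    Both aggregations are well-defined, which suggests the question is one of scope rather than mechanism: the single-item folksonomy is just the smallest case of the multi-item, multi-user one.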

    Comments (7) + TrackBacks (0) | Category: social software

    October 29, 2006

    danah profiled

    Email This Entry

    Posted by Ross Mayfield

    The Financial Times has a great profile of danah with broad coverage of social networking.

    Comments (1) + TrackBacks (0) | Category: social software

    October 10, 2006

    comScore misinterprets data: MySpace is *NOT* gray

    Email This Entry

    Posted by danah boyd

    Read the comScore press release. Completely. Read the details. They have found that the unique VISITORS have gotten older. This is not the same thing as USERS. A year ago, most adults hadn’t heard about MySpace. The moral panic has made it such that many US adults have now heard of it. This means that they visit the site. Do they all have accounts? Probably not. Furthermore, MySpace has attracted numerous bands in the last year. If you Google most bands, their MySpace page is either first or second; you can visit these without an account. People of all ages look for bands through search.

    Why is Xanga far greater in terms of young people? Most adults haven’t heard of it. It’s not something that comes up high in search for other things. Facebook’s bimodal population pre-public launch shows that more professors/teachers are present than i thought (or maybe companies are more popular than i thought? or maybe comScore’s data is somehow counting teens/college students as 35-54…).

    Can someone tell me exactly how comScore measures this? Is it based on the known age of the person using a given computer? Remember that many teens are logging in through their parent’s computer in the living room. Is it based on reported age? I kinda doubt it but the fact that there are more 100+ year olds on MySpace than are living should make people think about reported data. Is it based on phone interviews? How do they collect it? This isn’t really parseable into English.

    My problem is that all of these teen sites show a heavy usage amongst 35-54. I cannot for the life of me explain how Xanga is 36% 35-54. There’s just no way. I don’t get how the data is formulated but it seems like an odd pattern across these sites to see a drop in 25-34 and a rise in 35-54. Older folks aren’t suddenly blogging on Xanga. So what gives? My hunch is that comScore’s metrics are consistently counting teens as 35-54 across all sites. My hypothesis is that because comScore is measuring per computer and teens are using their parent’s computer, comScore can’t tell the difference between a teen user and a parent user. If so, maybe all this is telling us is that parents have definitely listened to the warnings over the last year and are now making their teens access these sites through their computer?
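
    The shared-computer hypothesis is easy to sanity-check with a toy simulation: if panel software attributes every pageview from the family machine to the registered (adult) panelist, a heavily teen site will look middle-aged. Everything below is invented for illustration and is not a claim about comScore’s actual methodology:

        import random

        random.seed(0)

        # Toy panel: each household's measured computer is registered to a 35-54 adult.
        TEEN_SHARE_ON_FAMILY_PC = 0.8    # assumed: most teen browsing happens there

        measured = []
        for _ in range(10_000):                  # 10,000 panel households
            teen_visits = random.randint(5, 20)  # actual teen visits to the site
            adult_visits = random.randint(0, 3)  # actual parental "checking in"
            for _ in range(teen_visits):
                if random.random() < TEEN_SHARE_ON_FAMILY_PC:
                    measured.append("35-54")     # misattributed to the registered adult
                else:
                    measured.append("12-17")     # teen's own machine or account
            measured.extend(["35-54"] * adult_visits)

        print(f"measured 35-54 share: {measured.count('35-54') / len(measured):.0%}")

    Under these made-up numbers the “measured” 35-54 share comes out above 80% even though nearly all of the real visits are by teens, which is exactly the shape of distortion described above.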

    Finally, when we talk about data, we also need to separate Visitors from Active Users from Accounts. The number of accounts is not the same as the number of users. The number of visitors is not the same as the number of users.

    All this said, there is no doubt that more older people are creating accounts. Parents are told that they should check in on their kids. Police officers, teachers, marketers… they are all logging in to look at the youth. Is that the same as meaningful users? Some yes, some no.

    From my qualitative experience, the vast majority of actual users are 14-30 with a skew to the lower end. Furthermore, the majority of the accounts are presenting themselves as 14-30. To confirm the latter (which is easier), i did a random sample of 100 profiles with UIDs over 50M (to address the “last year” phenomenon). What i found was:

    • 26 are under 18
    • 45 are 18-30 (with a skew to the lower)
    • 10 are over 30 but under 70
    • 1 is over 70 (but looks less than 18)
    • 6 are bands
    • 11 are invalid or deleted
    • 1 is a completely fake character (explained in the description)
    A few more things of note…
    • 18 have private profiles
    • Of those over 30, only 2 have more than 2 friends (one has 3 friends; one has 5)

    This account data hints that the general assumption that approximately 25% of users are minors is correct. Of the remaining, the bulk is under 30. Qualitatively, i’m seeing the most active use from those under 21. Given account practices, i don’t think that i’m off in what i’m seeing.
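
    The sampling procedure is simple enough to sketch for anyone who wants to replicate it. The fetch step below is a stub standing in for loading and classifying a live profile, and the UID ceiling is a made-up placeholder for the highest UID in use at sampling time:

        import random

        def sample_profiles(fetch, n=100, uid_floor=50_000_000, uid_ceiling=120_000_000):
            """Draw random UIDs above 50M and tally whatever loads."""
            tally = {}
            while sum(tally.values()) < n:
                uid = random.randint(uid_floor, uid_ceiling)
                category = fetch(uid)   # e.g. "under 18", "18-30", "band", "invalid", ...
                tally[category] = tally.get(category, 0) + 1
            return tally

        # Checking the reported tally against the ~25%-minors assumption:
        reported = {"under 18": 26, "18-30": 45, "30-70": 10, "over 70": 1,
                    "band": 6, "invalid/deleted": 11, "fake": 1}
        assert sum(reported.values()) == 100
        print(reported["under 18"] / sum(reported.values()))   # 0.26, roughly 25%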

    I do suspect that MySpace is holding strong at being primarily for younger people but that older folks have definitely been checking it out a LOT more. Still, i’m suspicious of the fact that 35-54 is common across all youth sites. I’d really like to see comScore’s data on something that we can check. Maybe LiveJournal?

    (I’d really really really love to be proven wrong on this. If anyone has data that can provide an alternate explanation to the comScore numbers, please let me know!)

    Update: Fred Stutzman and i just jockeyed back and forth to find something we could agree on wrt the comScore numbers. Here are some ways of making sense of the data of VISITORS:

    • Xanga is more of a teen-flavored site than MySpace, Facebook or Friendster
    • Facebook is more of a college-flavored site than MySpace, Friendster or Xanga
    • Friendster is more of a 20/30-something flavored site than MySpace, Facebook or Xanga
    • Of users going to these four sites, MySpace does not swing to any one group; it draws people of all ages to visit the site.
    • A greater percentage of adults (most likely parents) visit MySpace than any of the other social sites

    This is all fine and well and confirms most intuition. The problem is that what we CANNOT confirm via this data is that more adults visit any of these sites than minors. Again, intuitive but the comScore data seems to indicate that adults visit each of these sites more than their key population. This is really visible in their “total internet” numbers, which seem to suggest that the vast majority of visitors to all of these social sites are adults. I cannot find a single person who works for one of these companies that believes this.

    I’ve spoken to numerous folks since i posted last nite. Most believe that comScore gets this data by running a program on people’s computers. Young people are supposed to use a separate account from their parents. This data seems to indicate that comScore is wrong in assuming that people will do so. Most minors probably use their parent’s account to check these social sites. So, if we assume that, Xanga is obscenely a teen site, Facebook probably has nearly as many high school users as college users and MySpace swings young but is used by a wider variety of age groups than most social sites.

    Finally, it’s all nice and well that Fox Interactive spokespeople confirm this data but i’ve watched over and over as FIM has confirmed or said things that were patently untrue in public. I don’t know if this is because FIM (the parent of MySpace) doesn’t know what’s going on on MySpace or if it’s because they don’t care whether or not they are accurate publicly. I don’t honestly believe that FIM has any clue about the age of its unique visitors. They know the purported age of people who have accounts and it would be patently false to say that 35-54 dominates account holders.

    Frankly, i’m uber disappointed with comScore but even more disappointed with all of the press and bloggers who ran with the story that MySpace is gray without really looking at the data. This encourages inaccurate data and affects the entire tech industry as well as policy makers, advertisers, and users. I’m horrified that AP, Slashdot, Wall Street Journal, and numerous respectable bloggers are just reporting this as truth and speaking about it as though this is about users instead of visitors. C’mon now. If we’re going to fetishize quantitative data, let’s at least use a properly critical eye.

    Comments (7) + TrackBacks (0) | Category: social software

    October 4, 2006

    SlideShare -- the YouTube of Powerpoint

    Email This Entry

    Posted by Ross Mayfield

    SlideShare launches today — the YouTube of Powerpoint. While Powerpoint destroys thought, so does TV. And misgivings aside, slides can be an art form in and of themselves. They are objects you spin stories around. Like this:

    It is easy to embed a presentation and player within a site, blog or wiki. The above presentation is one I found by danah.  I’ve been playing with the Alpha and really have to applaud Rashmi (you may know her from Dcamp), Jonathan and the gang at Uzanto.

    You upload your Powerpoint (PPT and PPS formats) or OpenOffice (ODP format) slides into My Slidespace with a familiar title, description and tags. The flash player is fast and intuitive.

    Slides are findable by search (the content of the presentation is indexed), Latest, Popular, Featured, Profiles and Tags (Latest, Popular this week and Popular all time).  Here is an RSS feed of the latest.

    What’s also fascinating is that their servers are backed by Amazon S3 (Simple Storage Service). The other week when Socialtext 2.0 launched with a large-file webcast, we got Techcrunched and were worried about the load on our servers. After a little scrambling in IRC, Pete Kaminski leveraged S3, and problem solved. In this case, SlideShare has web serviced their scalability. An interesting model to watch, and a good thing if the service is a sudden hit.
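
    For readers wondering what “web servicing your scalability” looks like mechanically, the pattern is: push the heavy static file to S3 once, then hand out URLs that Amazon’s servers answer, so the traffic spike never touches your own machines. A minimal sketch, written against today’s boto3 client rather than the 2006-era REST API; the bucket and file names are invented:

        import boto3

        s3 = boto3.client("s3")            # credentials come from the environment
        BUCKET = "example-webcast-assets"  # hypothetical bucket

        # Upload the large file once...
        s3.upload_file("launch-webcast.mov", BUCKET, "webcast.mov",
                       ExtraArgs={"ContentType": "video/quicktime"})

        # ...then give out links that S3 itself serves, e.g. a time-limited one.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": "webcast.mov"},
            ExpiresIn=3600,                # one hour
        )
        print(url)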

    Comments (6) + TrackBacks (0) | Category: social software

    September 22, 2006

    What is the problem with deference to experts on Wikipedia?

    Email This Entry

    Posted by Clay Shirky

    Interesting pair of comments in Larry Sanger, Citizendium, and the Problem of Expertise, on the nature and seriousness of experts not contributing to Wikipedia:
    22. David Gerard on September 22, 2006 07:08 AM writes…

    Plenty of people complain of Wikipedia’s alleged “anti-expert bias”. I’ve yet to see solid evidence of it. Unless “expert-neutral” is conflated to mean “anti-expert.” Wikipedia is expert-neutral - experts don’t get a free ride. Which is annoying when you know something but are required to show your working, but is giving us a much better-referenced work.

    One thing the claims of “anti-expert bias” fail to explain is: there’s lots of experts who do edit Wikipedia. If Wikipedia is so very hostile to experts, you need to explain their presence.
    Permalink to Comment

    23. engineer_scotty on September 22, 2006 01:19 PM writes…

    I’ve been studying the so-called “expert problem” on Wikipedia—and I’m becoming more and more convinced that it isn’t an expert problem per se; it is a jackass problem. As in some Wikipedians are utter jackasses—in this context, “jackass” is an umbrella category for a wide variety of problem behaviors which are contrary to Wikipedia policy—POV pushing, advocacy of dubious theories, vandalism, abusive behavior, etc. Wikipedia policy is reasonably good at dealing with vandalism, abusive behavior and incivility (too good, some think, as WP:NPA occasionally results in good editors getting blocked for wielding the occasional cluestick ‘gainst idiots who sorely need it). It isn’t currently good at dealing with POV-pushers and crackpots whose edits are civil but unscholarly, and who repeatedly insert dubious material into the encyclopedia. Recent policy proposals are designed to address this.

    Many experts who have left, or otherwise have expressed dissatisfaction with Wikipedia, fall into two categories: Those who have had repeated bad experiences dealing with jackasses, and are frustrated by Wikipedia’s inability to restrain said jackasses; and those who themselves are jackasses. Wikipedia has seen several recent incidents, including one this month, where notable scientists have joined the project and engaged in patterns of edits which demonstrated utter contempt for other editors of the encyclopedia (many of whom were also PhD-holding scientists, though lesser known), attempted to “own” pages, attempted to portray conjecture or unpublished research as fact, or have exaggerated the importance or quality of their own work. When challenged, said editors have engaged in (predictable) tirades accusing the encyclopedia of anti-intellectualism and anti-expert bias—charges we’ve all heard before.

    The former sort of expert the project should try to keep. The latter, I think the project is probably better off without; and I suspect they would wear out their welcomes quickly on Citizendium as well.
    I would love to see a few case studies, linked to the History and Talk pages of a few articles—“Here was the expert contribution, here was the jackass edit, this is what was lost”, etc. Reading Engineer Scotty’s comment, and given the general sense of outraged privilege that seems to run through much of the “Experts have their work edited without permission!” literature, I am guessing that the problem is not so much experts contributing and then being driven away as it is non-contributions by people unwilling to work in an environment where their contributions aren’t sacrosanct.

    Comments (5) + TrackBacks (0) | Category: social software

    September 21, 2006

    Socialtext 2.0

    Email This Entry

    Posted by Ross Mayfield

    We launched Socialtext 2.0 today. Techcrunch has the story, but I thought M2M readers might be interested in this screencast which talks through the design decisions.

    Comments (1) + TrackBacks (0) | Category: social software

    September 20, 2006

    Larry Sanger on me on Citizendium

    Email This Entry

    Posted by Clay Shirky

    A response from Larry Sanger, posted here in its entirety:

    Thanks to Clay Shirky for the opportunity to reply here on Many2Many
    to his “Larry Sanger, Citizendium, and the Problem of Expertise.” First, two points about Clay’s style of argumentation, which I simply cannot let go without comment. Then some replies to his actual arguments.

    1. Allow me to identify my own core animating beliefs, thank you very much.

    Clay’s piece has an annoying tendency to characterize my assumptions uncharitably and without evidence, and to psychologize about me. Thus, Clay says things like: “Sanger’s published opinions seem based on three beliefs”; “Sanger wants to believe that expertise can survive just fine outside institutional frameworks”; “Sanger’s core animating belief seems to be a faith in experts”; “Sanger’s view seems to be that expertise is a quality like height”; and “Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us.”

    I find myself strongly disagreeing with Clay’s straw Sanger. However, I am not that Sanger! Last time I checked, I was made of flesh and blood, not straw.

    2. May I borrow that crystal ball when you’re done with it?

    Repeatedly, Clay makes dire predictions for the Citizendium. “Structural issues…will probably prove quickly fatal”; “institutional overhead…will stifle Citizendium”; “policing certification will be a common case, and a huge time-sink” so “the editor-in-chief will then have to spend considerable time monitoring that process”; “Citizendium will re-create the core failure of Nupedia”; “Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.”

    I think Clay lacks any good reason to think the Citizendium will fail; but clearly he badly wants it to fail, and his comments are animated by wishful thinking. That, anyway, seems the most parsimonious explanation. To borrow one of Clay’s phrases, and return him the favor: it is interesting “how consistent Clay has been about his beliefs” on the low value of officially-recognized expertise in online communities. “His published opinions seem based on” the belief in the supreme value and efficacy of completely flat self-organizing communities. The notion of experts being given special authority, even very circumscribed authority, does extreme violence to this “core animating belief” (to borrow another of Clay’s phrases). It must, therefore, be impossible.

    Less flippantly now. I do make a point of being properly skeptical about all of my projects—that’s another thing I’ve been consistent about. You can probably still find writings from 2000 and 2001 in which I said I didn’t know whether Nupedia or Wikipedia would work. I have no idea if the Citizendium will work. What I do know is that it is worth a try, and we’ll do our best to solve the problems we can anticipate, and the rest as they arise.

    By the way, there’s a certain irony in the situation, isn’t there? Clay Shirky, respected expert about online communities, holds forth about a new proposed online community, and does what so many experts love to do: make bold predictions about the prospects of items in their purview. Meanwhile, I, the alleged expert-lover, cast aspersions on his abilities to make such predictions. If my “core animating belief” were “a faith in experts,” why would I lack faith in this particular expert?

    3. I want to be a social fact, too!

    Let’s move on to Clay’s actual arguments. He begins his first argument with something perfectly true, that expertise (in the relevant sense, an operational concept of expertise) is a social fact, that this social fact is conferred (not always formally, but often) by institutions, and that, therefore, one cannot have expertise without (in some sense) “institutional overhead.” So far, so good. The current proposal—which is open to debate, at this early stage, even from Clay himself—addresses this situation by proposing to avoid editor application review committees in favor of self-designation of editorial status. The details are relevant, so let me quote them from the FAQ:

    We do not want editors to be selected by a committee, which process is too open to abuse and politics in a radically open and global project like this one is. Instead, we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot [most] false claims to editorship.
    What then is Clay’s criticism? “The problem” at the beginning of the argument was that “experts are social facts.” Yeah, so? So, says Clay,
    Sanger expects that decertification will only take place in unusual cases. This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.

    My initial reaction to this was: how on Earth could Shirky know all that? Furthermore, isn’t it quite obvious that, far from being a static proposal, this project is going to be able to move nimbly (I usually propose radical changes and refinements to my projects) in order to solve just such problems, should they arise?

    In any event, based on my own experience, I counter-predict that Clay will probably be wrong in his prediction. There will probably be a lot of people who, whether humorously, out of cluelessness, or whatever, claim to be editors.

    For the easy cases, which will probably be most of them, constables will be able to rein people in, nearly as easily as they can rein in vandalism. No doubt we will have a standard procedure for achieving this. As to the borderline (“edge”) cases (e.g., some grad students and independent scholars), Clay gives us no reason to think that the editor-in-chief will have to spend large amounts of time fighting about them. Unlike Wikipedia, and like many OSS projects, there will be a group of people authorized to select the “release managers” (so to speak). This policy will be written into the project charter, support of which will be a requirement of participation in the project.

    The review process for editor declarations, therefore, will be clear and well-accepted enough—that, after all, is the whole point of establishing a charter and “rule of law” in the online community—that the process can be expected to work smoothly. Mind you, it will be needed because of course there will be borderline cases, and disgruntled people, but Clay has given no reason whatsoever to think it will dominate the entire proceedings.

    Besides, this is a responsibility I propose to delegate to a workgroup; I will probably be too busy to be closely involved in it.

    Far from being persuasive, it is actually ironic that Clay cites primordial fights I had with trolls on Wikipedia as evidence of his points. It was precisely due to a lack of clearly-circumscribed authority and widely-accepted rules that I had to engage in such fights. Consequently, the Citizendium is setting up a charter, editors, and constables precisely to prevent such problems.

    4. Warm and fuzzy yes, a hierarchy no.

    Clay nicely sums up his next argument this way:

    Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II), take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt out (Nupedia in a nutshell).

    (By the way, Clay is completely wrong about citizen participation in Nupedia. They made up the bulk of authors in the pipeline. Our first article was by a grad student. An undergrad wrote several biology articles. So many myths have been made about Nupedia, so completely divorced from reality, that it has become a fascinating and completely fact-free Rorschach test for everything bad that anyone wants to say about expert authority in open collaboration.)

    The Citizendium is, by Clay’s lights, a radical experiment that does violence to his cherished notions of what online communities should be like. Persons inclined to “debauch self-certification” as on IRC chatrooms will be removed from the project; and others will not protest at such perfectly appropriate treatment, because we will have already announced this as a policy.

    Through self-selection the community can be expected to be in favor of such policies; those who dislike them will always have Wikipedia.

    That’s part of the beauty of a world with both a Citizendium and a Wikipedia in it. Those who (like you, Clay) instinctively hate the Citizendium—we’ve seen a little of this in blogs lately, calling the very idea “Wikipedia for stick-in-the-muds,” “Wikipedia for control freaks,” a “horror,” etc.—will always have Wikipedia. I strongly encourage you to stick with Wikipedia if you dislike the idea of the Citizendium that much. That will make matters easier for everyone. If other people want to organize themselves in a different way—a way you’d never dream of doing—then please give them room to do so. As a result we’ll have one project for people who agree with you, Clay, and one for people who agree with me, and the world will be richer.

    Clay does give some more support for thinking that an editor-guided wiki is unworkable. He says that the viability of a community resembles a “U curve” with one end being a total hierarchy and the other end being “a functioning community with a core group.” Apparently, projects that are neither hierarchies nor communities, which Clay implies is where the Citizendium would fit, would incur too many “costs of being an institution” and “significant overhead of process.” What I find particularly puzzling about this is how he describes the ends of the U curve. I would have expected him to say hierarchy on one end and a totally flat, leaderless community on the other end. But instead, opposite the hierarchy is “a functioning community with a core group.” How is it, then, that the Citizendium as proposed would not constitute “a functioning community with a core group”?

    Let me put this more plainly, setting aside Clay’s puzzling theoretical apparatus. What the world has yet to test is the notion of experts and ordinary folks (and remember: experts working outside their areas of expertise are then “ordinary folks”) working together, shoulder-to-shoulder, on a single project according to open, open source principles. That is the radical experiment I propose. This actually hearkens back to the way OSS projects essentially work. So far, to my knowledge, experts have not been invited in to “gently guide” open content projects in a way roughly analogous to the way that senior developers gently guide OSS projects, deciding which changes go into the next release and which don’t. You might say that the analogy does not work because senior developers of OSS projects are chosen based on the merits of their contributions within the project. But what if we regard an encyclopedia as continuous with the larger world of scholarship, so that scholarly work outside of the narrow province of a single project becomes relevant for determining a senior content developer? For an encyclopedia, that’s simply a sane variant on the model.

    Whereas OSS projects have special, idiosyncratic requirements, encyclopedias frankly do not. There’s no point to creating an insular community, an “in group” of people who have mastered the particular system, because it’s not about the system—it’s about something any good scholar can contribute to, an encyclopedia. Then, if the larger, self-selecting community invites and welcomes such people to join them as “senior content developers,” why not think the analogy with OSS is adequately preserved?

    (For more of the latter argument please see a new essay I am going to try to circulate among academics.)

    Comments (9) + TrackBacks (0) | Category: social software

    September 18, 2006

    Larry Sanger, Citizendium, and the Problem of Expertise

    Email This Entry

    Posted by Clay Shirky

    The interesting thing about Citizendium, Larry Sanger’s proposed fork of Wikipedia designed to add expert review, is how consistent Sanger has been about his beliefs over the last 5 years. I’ve been reviewing the literature from the dawn of Wikipedia, born from the failure of the process-laden and expert-driven Nupedia, and from then to now, Sanger’s published opinions seem based on three beliefs:

    1. Experts are a special category of people, who can be readily recognized within their domains of expertise.
    2. A process of open creation in which experts are deferred to as of right will be superior to one in which they are given no special treatment.
    3. Once experts are identified, that deference will mainly be a product of moral suasion, and the only places authority will need to intrude are edge cases.

    All three beliefs are false.

    There are a number of structural issues with Citizendium, many related to the question of motivation on the part of the putative editors; these will probably prove quickly fatal. More interesting to me, though, is the worldview behind Sanger’s attitude towards expertise, and why it is a bad fit for this kind of work. Reading the Citizendium manifesto, two things jump out: his faith in experts as a robust and largely context-free category of people, and his belief that authority can exist largely free of expensive enforcement. Sanger wants to believe that expertise can survive just fine outside institutional frameworks, and that Wikipedia is the anomaly. It can’t, and it isn’t.

    Experts Don’t Exist Independent of Institutions

    Sanger’s core animating belief seems to be a faith in experts. He took great care to invite experts to the Nupedia Advisory Board, and he has consistently lamented that Wikipedia offers no special prerogatives for expert review, and no special defenses against subsequent editing of material written by experts. Much of his writing, and the core of Citizendium, is based on assumptions about how experts should be involved in a project like this.

    The problem Citizendium faces is that experts are social facts — society typically recognizes experts through some process of credentialling, such as the granting of degrees, professional certifications, or institutional engagement. We have a sense of what it means that someone is a doctor, a judge, an architect, or a priest, but these facts are only facts because we agree they are. If I say “I sentence you to 45 days in jail”, nothing happens. If a judge says “I sentence you to 45 days in jail”, in a court of law, dozens of people will make it their business to act on that imperative, from the bailiff to the warden to the prison guards. My words are the same as the judge’s, but the judge occupies a position of authority that gives his words an effect mine lack, an authority that only exists because enough people agree that it does.

    Sanger’s view seems to be that expertise is a quality like height — some people are obviously taller than others, and the rest of us have no problem recognizing who the tall people are. But expertise isn’t like that at all; it is in fact highly subject to shifts in context. A lawyer from New York can’t practice in California without passing the bar there. A surgeon from India can’t operate on a patient in the US without further certification. The UN representative from Yugoslavia went away when Yugoslavia did, and so on.

    As a result, you cannot have expertise without institutional overhead, and institutional overhead is what stifled Nupedia, and what will stifle Citizendium. Sanger is aware of this challenge, and offers mollifying details:

    […]we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot false claims to editorship.


    We will also no doubt need a process where people who do not have the credentials are allowed to become editors, and where (in unusual cases) people who have the credentials are removed as editors.

    Sanger et al. set the bar for editorship, editors self-certify, then, in order to get around the problems this will create, there will be an additional certification and de-certification process internal to the site. On Citizendium, if you are competent but uncredentialed, you will have to be vetted before you are allowed to ascend to the editor’s chair, and if you are credentialed but incompetent, you’re in until decertification. And, critically, Sanger expects that decertification will only take place in unusual cases.

    This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.

    Sanger himself experienced this in his fight with Cunctator at the dawn of Wikipedia; Cunc questioned Sanger’s authority, leading Sanger to defend it with increasing vigor. As Sanger said at the time “…in order to preserve my time and sanity, I have to act like an autocrat. In a way, I am being trained to act like an autocrat.” Sanger’s authority at Wikipedia required his demonstrating it, yet this very demonstration made his job harder, and ultimately untenable. This is the common case; as any parent can tell you, exercise of presumptive authority creates the conditions under which it is tested. As a result, Citizendium will re-create the core failure of Nupedia, namely putting at the center of the effort a process whose maintenance takes more energy than can be mustered by a volunteer project.

    “We’re a Warm And Fuzzy Hierarchy”: The Costs of Enforcement

    In addition to his misplaced faith in the rugged condition of expertise, Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us. Curiously, this underestimation seems to be born of a belief that most of the world shares his views on the appropriate deference to expertise:

    Can you really expect headstrong Wikipedia types to work under the guidance of expert types in this way?

    Probably not. But then, the Citizendium will not be Wikipedia. We do expect people who have proper respect for expertise, for knowledge hard gained, to love the opportunity to work alongside editors. Imagine yourself as a college student who had the opportunity to work alongside, and under the loose and gentle direction of, your professors. This isn’t going to be a top-down, command-and-control system. It is merely a sensible community: one where the people who have made it their life’s work to study certain areas are given a certain appropriate authority—without thereby converting the community into a traditional top-down academic editorial scheme.

    Well, can you expect the experts to want to work “shoulder-to-shoulder” with nonexperts?

    Yes, because some already do on Wikipedia. Furthermore, they will have an incentive to work in this project, because when it comes to content—i.e., what the experts really care about—they will be in charge.

    These passages evince a wounded sense of purpose: Experts are real, and it is only sensible and proper that they be given an appropriate amount of authority. The totality of the normative view on display here is made more striking because Sanger never reveals the source of these judgments. “Sensible” according to whom? How much authority is “appropriate”? How much control is implied by being “in charge”, and what happens when that control is abused?

    These responses are also mutually contradictory. Citizendium, the manifesto claims, will not be a traditional top-down academic scheme, but experts will be in charge of the content. The only way experts can be in charge without top-down imposition is if every participant internalizes respect for authority to the point that it is never challenged in the first place. One need allude only lightly to the history of social software since at least Communitree to note that this condition is vanishingly rare.

    Citizendium is based less on a system of supportable governance than on the belief that such governance will not be necessary, except in rare cases. Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II), take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt out (Nupedia in a nutshell).

    The “U”-Curve of Organization and the Mechanisms of Deference

    Sanger is an incrementalist, and assumes that the current institutional framework for credentialling experts and giving them authority can largely be preserved in a process that is open and communally supported. The problem with incrementalism is that the very costs of being an institution, with the significant overhead of process, create a U curve — it’s good to be a functioning hierarchy, and it’s good to be a functioning community with a core group, but most of the hybrids are less fit than either of the end points.

    The philosophical issue here is one of deference. Citizendium is intended to improve on Wikipedia by adding a mechanism for deference, but Wikipedia already has a mechanism for deference — survival of edits. I recently re-wrote the conceptual recipe for a Menger Sponge, and my edits have survived, so far. The community has deferred not to me, but to my contribution, and that deference is both negative (not edited so far) and provisional (can always be edited.)

    Deference, on Citizendium, will be for people, not contributions, and will rely on external credentials, a priori certification, and institutional enforcement. Deference, on Wikipedia, is for contributions, not people, and relies on behavior on Wikipedia itself, post hoc examination, and peer review. Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.
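
    The contrast in that last paragraph is nearly mechanical, so a toy model may help. Here deference-to-contributions is nothing more than survival in the current revision, while deference-to-people is a status check made before the content is even weighed. This is illustrative only, not a description of either site’s software:

        # Wikipedia-style: deference attaches to contributions, post hoc and provisional.
        article = {"menger sponge": "conceptual recipe, as last rewritten"}

        def edit(section, new_text):
            article[section] = new_text           # any contribution can be displaced

        def deferred_to(section, text):
            return article.get(section) == text   # deference = survival, so far

        # Citizendium-style: deference attaches to people, a priori and credentialed.
        editors = {"alice"}                       # certified before contributing

        def dispute_winner(author_a, author_b):
            """The expert's view prevails by status, whatever the text says."""
            return author_a if author_a in editors else author_b

    Note where the enforcement cost lives in each model: the survival model needs no registry at all, while the status model has to build and police the editors set, which is the time-sink the essay predicts.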

    Comments (34) + TrackBacks (0) | Category: social software

    September 8, 2006

    Facebook's "Privacy Trainwreck": Exposure, Invasion, and Drama

    Email This Entry

    Posted by danah boyd

    Last night, i asked: will Facebook learn from its mistake? In the first paragraph, i alluded to a “privacy trainwreck” and then went on to briefly highlight the political actions that were taking place. I never returned to why i labeled it that way and, in my coarseness, i failed to properly convey what i meant by this.

    When i sat down to explain the significance of the “privacy trainwreck,” a full-length essay came out. Rather than make you read this essay in blog form (or via your RSS reader), i partitioned it off to a printable webpage.

    Facebook’s “Privacy Trainwreck”: Exposure, Invasion, and Drama

    The key points that i make in this essay are:

    • Privacy is an experience that people have, not a state of data.
    • The ickyness that people feel when they panic about privacy comes from the experience of exposure or invasion.
    • We’ve experienced the exposure hiccup before with Cobot. When are we going to learn?
    • Invasion changes social reality and there is a cognitive cap to being able to handle it.
    • Does invasion potentially result in a weakening of meaningful social ties?
    • Facebook lost its innocence this week.

    Please enjoy this essay and forward it on to both technology folks and Facebook participants. I would like to hear feedback!

    Comments (5) + TrackBacks (0) | Category: social software

    September 7, 2006

    Wiki Wired Experiment

    Email This Entry

    Posted by Ross Mayfield

    UPDATE: Veni. Vidi. Wiki. The published story, and commentary by Ryan Singel, The Wiki That Edited Me.

    I believe the Wired Wiki experiment can be called a success, though yesterday I would have said it was doomed. I just came back from Wiki Wednesday, where Wired reporter Ryan Singel held a conversation about it. How we conducted the experiment, what part of the editorial process it was directed at, and the participation of the community give us a lot to learn from.

    Do recall that the use of wikis in journalism has been significantly tainted by the LA Times Wikitorial debacle.  It was a failure in wiki implementation, goal setting, content structure and moderation.  While the media has embraced public blogs, they still have a while to go before public wikis are accepted. 

    ...continue reading.

    Comments (1) + TrackBacks (0) | Category: social software

    August 29, 2006

    Edit this Wired Article

    Email This Entry

    Posted by Ross Mayfield

    Last time someone tried this it was a disaster, but Wired News has boldly put an article about wikis into a Socialtext wiki for anyone to be a Wired editor:

    In an experiment in collaborative journalism, Wired News is putting reporter Ryan Singel at your service.

    This wiki began as an unedited 1,059-word article on the wiki phenomenon, exactly as Ryan filed it. Your mission, should you choose to accept it, is to do the job of a Wired News editor and whip it into shape. Don’t change the quotes, but feel free to reorganize it, make cuts, smooth the prose or add links — whatever it takes to make it a lively, engaging news piece.

    Ryan will answer questions from the comments page, and, when consensus calls for it, conduct additional reporting. If there’s something he missed, let him know, and he’ll get on the phone and investigate, then submit new text to the wiki for your review.

    Readers can also submit headlines for the story, and write and edit the “deck” — a blurb for our front page and RSS feed that promotes the article.

    To make any changes, you’ll first need to create a free account at Socialtext.

    We’ll release the results under a Creative Commons license, and, if the whole thing doesn’t turn into a disaster, run the final story on Wired News on Sept. 7, 2006.

    Comments (0) + TrackBacks (0) | Category: social software

    August 11, 2006

    In-line tagging at LibraryThing

    Email This Entry

    Posted by David Weinberger

    Tim Spalding has taken discussion forums a big step forward over at LibraryThing. The concept is simple but could make a real difference because it allows forum msgs to be aggregated in multiple ways. When you’re entering a msg at a forum, you can put a title or author in brackets and LibraryThing will take a stab at identifying what you have in mind. Think of it as in-place tagging. You can thus easily find all the posts about a book. And all the references to a book or author will be listed on that book or author’s page.

    Because LibraryThing knows which books you own (because you’ve told it), it can feed you msgs about any of them. And, as Tim points out, this unhiding of msgs will change the temporality of posts: Rather than msgs fading into obscurity a few days or weeks after they’re posted, they’ll be easily findable and reply-able.
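
    The mechanics are easy to picture: scan each message for bracketed titles, resolve them against the catalog, and index the message under every work it mentions. A rough sketch; the resolution step is a stand-in for whatever fuzzy matching LibraryThing actually does, and the catalog entries are invented:

        import re
        from collections import defaultdict

        BRACKETED = re.compile(r"\[([^\]]+)\]")
        catalog = {"ulysses": "work:1", "dubliners": "work:2"}   # title -> work ID

        messages_by_work = defaultdict(list)

        def post(msg_id, text):
            # In-place tagging: every [bracketed] title links this msg to a work,
            # so the work's page can list all forum posts that mention it.
            for title in BRACKETED.findall(text):
                work = catalog.get(title.strip().lower())        # "takes a stab" at it
                if work:
                    messages_by_work[work].append(msg_id)

        post("msg-1", "Just reread [Ulysses] and it holds up.")
        post("msg-2", "[Dubliners] is an easier place to start than [Ulysses].")
        print(messages_by_work["work:1"])   # ['msg-1', 'msg-2']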

    Very cool.

    Comments (7) + TrackBacks (0) | Category: social software

    August 7, 2006

    number games and social software

    Email This Entry

    Posted by danah boyd

    Over the last month, i’ve been driving Mimi’s Hybrid on and off. One of my favorite things about the Hybrid is that it tells you how many MPG you’re averaging over time. I find myself driving around town trying to maximize that number, getting uber excited when it goes up and super sad when it goes down. It reminds me of when i used to try to maximize my miles per hour when going from Boston to New York, only this is more environmental. Yet, it’s not the environment that i’m concerning myself with - it’s all about number games in the same way that people obsess over every pound on the scale or the calories in every bite.

    Then i was thinking about Tantek and Jason raving about Consumating. I love the fact that it’s a lot of cool geeky people but i can never get over the lameness that i feel when i log in and look at my score. And yet, i can’t be bothered to answer the questions that make me feel all uncomfortable in the hopes that someone will like my answers and rate me higher. It’s a catch-22 for me. Yet, i totally understand why Tantek and Jason and others absolutely love it and why they go back for more.

    And then i was thinking about the people on Yahoo! Answers who spend hours every day answering questions to get high ranks. It’s very similar to Consumating only it’s not all embarrassing because it’s not really about you - it’s about the answers. There’s no real gain from getting points but still, it’s like a mouse in a cage determined to do well just cuz they can.

    This all reminds me of a scene in some movie. I can’t recall what movie it was but it was about how you just want to be the best at something, anything… to have something to point at and say look, i’m #1! The validation, the proof of greatness! Even if that something is problematic attention getting like being the #1 serial killer. (Was it Bowling for Columbine?)

    I started wondering about these number games… They’re all over social software - Neopets, friends on social network sites, blog visitors, etc. Who is motivated by what number games? Who is demotivated? Does it make a difference if the number game is about the group vs. the individual, about one’s self directly vs. about some abstract capability?

    Are there some number games that work better than others in attracting a broader audience? I’m thinking about Orkut here… if the game is to get as many Brazilians on the site as possible, you only need a few obsessives to be the rallying forces; everyone else is part of the number game simply by signing up. So there are tons competing in the number games but only a few invested.

    Does anyone know anything about how these number games work as incentives?

    [Also posted at Apophenia]

    Comments (12) + TrackBacks (0) | Category: social software

    July 27, 2006

    Culture Jams, Culture Preserves

    Email This Entry

    Posted by Paul B Hartzog

    This post is via my Paul B. Hartzog blog, but I realized that I should’ve posted it here, so here goes….

    I recently read The Rise of Crowdsourcing over at Wired (the author, Jeff Howe, has a blog on the topic at http://www.crowdsourcing.com).

    The article mentions that iStockphoto (cheap stock photography via the Internet) has obliterated the “future for professional stock photography.” (Similarly, Clay Shirky noted way back when that blogs “are such an efficient tool for distributing the written word that they make publishing a financially worthless activity.”)

    But more importantly, the Wired article discusses the rise of R&D networking. For example, InnoCentive matches problems and problem-solvers: “The strength of a network like InnoCentive’s is exactly the diversity of intellectual background…. We actually found the odds of a solver’s success increased in fields in which they had no formal expertise.”

    Now, just this year, Chevy attempted its own kind of crowdsourcing, allowing website visitors to apply their own text over Chevy Tahoe footage to create their own commercial. What they got was a barrage of anti-pollution, anti-accident, and just-about-anti-anything creations. (See them at YouTube: http://youtube.com/results?search=chevy+tahoe.) One participant even launched a website where you can rate the videos.

    Using existing mass media images to twist, mock, refute, subvert, or as wikipedia more politely says “produce negative commentary about itself” is called “culture jamming.”

    Umberto Eco calls this “semiological guerrilla warfare” and supports “action which would urge the audience to control the message and its multiple possibilities of interpretation.” (from Travels in Hyperreality).

    But what happens when the culture jammers actually want to continue and extend the media in question?

    Well, last year Wired ran this story about some Star Trek fans who make their own episodes, which eventually culminated in this article at The New York Times. (See the fan-vids: http://www.newvoyages.com/, http://www.ussintrepid.org.uk/, http://www.hiddenfrontier.com/, and http://www.starshipexeter.com/).

    The fans are saying, look, if we can’t get what we want on television, the technology is out there for us to do it ourselves…. It has become so popular that Walter Koenig, the actor who played Chekov in the original “Star Trek,” is guest starring in an episode, and George Takei, who played Sulu, is slated to shoot another one later this year.

    Now the Star Trek franchise has a real opportunity here that could be taken as a crowdsourcing lesson to other media producers (music, film, books, etc.). Here it comes:

    Free the content!

    Let the Star Trek fans take the initiative and spend the money to keep the interest-level going, crank out a studio movie once in a while, foster crossovers between shows, organize events, provide financial assistance, etc.

    This is what Rebecca Blood calls “participatory culture,” and Clay Shirky “mass amateurization.”

    The Pew Internet & American Life Project released this study which states that “57 percent of 12- to 17-year-olds online – 12 million individuals – are creating content of some sort and posting it to the Web.”

    So if culture jams are the result of the appropriation of mass media images for negative commentary, then the same process used for positive purposes would result in culture preserves, no?

    Kick out the preserves! ;-)

    Comments (7) + TrackBacks (0) | Category: social software

    July 24, 2006

    Shameless Plug

    Email This Entry

    Posted by Ross Mayfield

    It is without shame that I can share the release of Socialtext Open, an Open Source distribution of Socialtext. I figure this is in demand by M2M readers, and, well, we are quite proud of it. For your downloading pleasure.

    Comments (2) + TrackBacks (0) | Category: social software

    July 20, 2006

    The Power of Conversation Redux

    Email This Entry

    Posted by Paul B Hartzog

    In September of 2005, I posted “The Power of Conversation” in which I suggested:

    “The real value of communicative technologies like social software is that they re-enable and enhance our ability to use a time-tested means of information processing, i.e. the conversation, in new and interesting ways!”

    Now, today I caught “The role of conversation in a changing society and public realm”:

    Conversation has long been the cornerstone of our society. New technologies enable us to speak to people anytime, anywhere. However, there is growing concern – both in the UK and elsewhere – that we are talking less than we used to. This work suggests that this is a misconception and that the issue is actually much more complex.

    (thx to this post over at Howard Rheingold’s Smart Mobs)

    Robert Putnam’s book Bowling Alone catalyzed the debate about the decline of community. Putnam, like many others, suffered from ontological blinders. By defining community in a narrow way, he failed to see forms of community that didn’t fit his narrow definition. But:

    The adherence to outdated ways of thinking about social involvement have intensified concern about our sense of community. The way that we engage with those around us has changed. We no longer necessarily connect with either conventional structures like community societies or even less formal associative fora, like markets. Community involvement remains of vital importance, but structures of engagement no longer reflect the ways in which people are comfortable in having their say.

This problem is also rampant in politics, where scholars who focus on the primacy of nation-states ignore transnational social organization, and scholars who focus on the structures of formal government fail to notice the networks of informal governance that are emerging across the globe. The bottom line is that technology ushers in new forms of social organization that escape notice precisely because they are invisible to adherents of the old paradigm. By the time anyone notices the impending social transformation, it is too powerful to contain, and it cascades across the landscape. Or so the theory goes.

    So what about conversation? Well, I venture to suggest that it is through conversation, the connecting of people with other people, the exchange of ideas, the spread of information, debate, dissent, and empathy, that collective wisdom arises. Furthermore, given the resurgence of violent politics, the ambivalence in the face of environmental crises, and profit-driven enclosure movements like overly restrictive copyright law and the Net Neutrality concern, we could definitely benefit from new forms of social organization as carriers of collective wisdom.

    Comments (3) + TrackBacks (0) | Category: social software

    July 18, 2006

    from architecture to urban planning: technology development in a networked age

    Email This Entry

    Posted by danah boyd

    Last week, i had drinks with Ian Rogers and Kareem Mayan and we were talking about shifts in the development of technology. Although all of us have made these arguments before in different forms, we hit upon a set of metaphors that i feel the need to highlight.

    Complete with references to engineering, technology development was originally seen as a type of formalized production. You design, build and ship products. And then they’re out in the wild, removed from the production cycle until you make Version 2. Of course, it didn’t take long for people to realize that when they shipped flaws, they didn’t need to do a recall. Instead, they could just ship free updates in the form of Version 1.1.

    As the world went web-a-rific, companies held onto the ship-final-products mentality in its stodgy archaic form. Until the forever-in-beta hit. I, for one, love the persistent beta. It signals that the system is continuously updating, never fully baked and meant to be organic. This is the way that it should be.

Web development is fundamentally different from packaged software. Because it is the web, there’s no vast distance between producers and consumers. Distribution channels cross space and time (much to the chagrin of most old skool industries). Particularly when it comes to social software, producers can live inside their creations, directly interact with those using the system, and evolve the system alongside the practices that are emerging. In fact, not only can they, they’d be stupid to do anything else.

    The same revolution has happened in writing. Sure, we still ship books but what does it mean to have the author have direct interaction with the reader like they do in blogging? It’s almost as though someone revived the author from the dead [1]. And maybe turned hir into a kind of peculiar looking Frankenstein who realizes that things aren’t quite right in interpretation-land but can’t make them right no matter what. Regardless, with the author able to directly connect to the reader, one must wonder how the process changes. For example, how is the audience imagined when its presence is persistent?

I’m reminded of a book by Stewart Brand - How Buildings Learn. In it, Brand talks about how buildings evolve over time based on their use and the aging that takes place. A building is not just the end-result of the designer, but co-constructed by the designer, nature, and the inhabitant over time. When i started thinking about technology as architecture, i realized the significance of that book. We cannot think about technologies as finalized products, but as evolving architectures. This should affect the design process at the get-go, but it also highlights the differences between physical and digital architectures. What would it mean if 92 million people were living in the house simultaneously with different expectations for what colors the walls should be painted? What would it mean if the architect was living inside the house and fighting with the family about the intention of the mantel?

    The networked nature of web technologies brings the architect into the living room of the house, but the question still remains: what is the responsibility of a live-in architect? Coming in as an authority on the house does no good - in that way, the architect should still be dead. But should the architect just be a glorified fixer-upper/plumber/electrician? Should the architect support the aging of the house to allow it to become eccentric? Should the architect build new additions for the curious tenants? What should the architect be doing? One might think that the architect should just leave the place alone… but is this how digital sites evolve? Do they just need plumbers and electricians? Perhaps the architect is not just an architect but also an urban planner… It is not just the house that is of concern, but the entire city. How the city evolves depends on a whole variety of forces that are constantly in flux. Negotiating this large-scale system is daunting - the house seems so much more manageable. But 92 million people never lived in a single house together.

    [1] Note to Barthes scholars: i’m being snippy here. I realize that the author’s authority should still be contested, that multiple interpretations are still valid, and that the author is still a product of social forces. I also realize that even as i’m writing this blogpost, its reading will be out of my control, but the reality is that i’ll still - as author - get all huffy and puffy and try to be understood. Damnit.

    Comments (9) + TrackBacks (0) | Category: social software

    July 16, 2006

    Twttr

    Email This Entry

    Posted by Ross Mayfield

Prepare to be spammed globally.  Twttr, a mobile social software app for SMSing your social network, developed by Odeo, just launched.  It’s slightly simpler than Dodgeball, not location-centric and a bit more viral.  Biz Stone calls it present-tense blogging. Ev notes you might want to upgrade your SMS plan and that they are working on compatibility outside the US.  To me it’s reply-to-all baked into your phone.
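To make the reply-to-all idea concrete, here is a minimal fan-out sketch in Python; the phone numbers and the send_sms stand-in are invented for illustration, and this is certainly not Twttr’s actual implementation:

    # Hypothetical sketch of SMS fan-out: one inbound message is
    # forwarded to everyone following the sender.
    followers = {
        "+14155550100": ["+14155550101", "+14155550102", "+14155550103"],
    }

    def send_sms(to, body):
        print(f"SMS to {to}: {body}")  # stand-in for a carrier gateway

    def handle_inbound(sender, body):
        """Fan a status update out to the sender's social network."""
        for number in followers.get(sender, []):
            send_sms(number, f"{sender}: {body}")

    handle_inbound("+14155550100", "at the coffee shop")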

If they support MMS and let me send a photo to twttr and CC flickr, it will be a killer app.  But for now, put my SMSes in a sidebar widget or give me feeds I can splice.

    Yes, I am a twtt.

    Comments (1) + TrackBacks (0) | Category: social software

    Dandelife

    Email This Entry

    Posted by Ross Mayfield

    I’m advising a new startup called Dandelife, which is a Social Biography Network.  TechCrunch has the scoop, but let me tell you why I think they will be successful.

Ever get that feeling, while you are blogging and flickring your life away, that you have lost something?  That you are telling your life’s story, but it is lost in the archives and in the minds of the few people who are really paying attention?

There is a gap in social software for binding stories in a chronology.  For building biographies of people, places and things.  I think Dandelife serves as a different object to tell stories around.  Time.

The horizontal and vertical visualizations are what make this work.

Dandelife is definitely beta and Edward and Kelly are working hard on it.  But when you can upload your blog and photos to start your story, it’s pretty powerful.  Go play.  And let them know how it can get better.

    Comments (6) + TrackBacks (0) | Category: social software

    June 27, 2006

    Wiki Case Study: DrKW

    Email This Entry

    Posted by Ross Mayfield

Socialtext released an update to the Dresdner Kleinwort Wasserstein (DrKW) case study on enterprise wiki and blog use.  Based on the usability interviews performed by Suw Charman, the case addresses the ease-of-use and adoption issues that led to wiki traffic outperforming the intranet within six months.  Specific use cases, such as managing meetings, brainstorming, and creating and publishing presentations collaboratively, are explored in depth.

“We had to move away from a static, dead intranet,” says Myrto Lazopoulou. “The wiki has allowed us to improve collaboration, communication and publication. We can cross time zones, improve the way teams work, reduce email and increase transparency.”

    The case study is also available in PDF format and complements other research done on this leading deployment:

    * An Adoption Strategy for Social Software in the Enterprise
    * Enterprise 2.0 article in the MIT Sloan Management Review
    * Harvard Business School Case Study: Wikis at Dresdner Kleinwort Wasserstein
    * JP Rangaswami’s blog

    Comments (0) + TrackBacks (0) | Category: social software

    June 10, 2006

    PennTags - When card catalogs meet tags

    Email This Entry

    Posted by David Weinberger

    University of Pennsylvania’s del.icio.us-like PennTags project allows readers to tag catalogued items. It’s a great way to track resources for a research project and simultaneously make the results of your forays available to future researchers. In fact, it seems just plain selfish not to do so.

    Integrating tagging with the book catalogue (and therefore with the book taxonomy) instantaneously provides the best of both worlds: Structured browsing leads you to nodes with jumping off points into the connections made by others who are putting those nodes into various contexts, and tags lead you back into the structured world organized by experts in structure.

    My guess is that the folksonomy that emerges will not change the existing taxonomy because in a miscellaneous world you don’t have to change something in order to change it. The existing taxonomy could stay exactly as it is, as the folksonomy supplements it by providing synonyms for existing categories (e.g., a search for “recipes” takes you to the “cuisine” category of the existing taxonomy) and leaping-off-points from it into the user-created clusters of meaning (e.g., here’s the tag cloud for the node you’re browsing). Rather than disrupting, transforming or replacing the existing taxonomy, the folksonomy may just affectionately tousle its hair.
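As a toy illustration of that supplement-not-replace relationship, here is a small Python sketch; the catalogue entries, tag tables and function are all invented:

    # Hypothetical sketch: a folksonomy layered over a fixed taxonomy.
    taxonomy = {
        "cuisine": ["TX651 .C45", "TX715 .B38"],  # invented call numbers
    }

    # User tags act as synonyms pointing into the taxonomy...
    tag_synonyms = {"recipes": "cuisine", "cooking": "cuisine"}

    # ...and as jumping-off points into user-created clusters.
    tag_clusters = {"cuisine": ["recipes", "cooking", "food-history"]}

    def lookup(term):
        """Resolve a search term via the folksonomy without altering
        the underlying taxonomy."""
        category = tag_synonyms.get(term, term)
        return category, taxonomy.get(category, []), tag_clusters.get(category, [])

    print(lookup("recipes"))
    # ('cuisine', ['TX651 .C45', 'TX715 .B38'], ['recipes', 'cooking', 'food-history'])

Note that the taxonomy dict is never written to; the folksonomy only adds routes into and out of it.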

    Anyway, PennTags looks like a great project.

(U of Penn’s Library Staff Blog is here. And here is the newtech category of that blog. On a quick browse, this looks like a terrific resource if you’re interested in libraries, taxonomies, folksonomies, tagging, etc.)

    Comments (3) + TrackBacks (0) | Category: social software

    May 25, 2006

    MySpace and Deleting Online Predators Act (with Henry Jenkins)

    Email This Entry

    Posted by danah boyd

    Henry Jenkins (Co-Director of Comparative Media Studies at MIT) and i were interviewed by Sarah Wright of the MIT News Office about the proposed Deleting Online Predators Act (DOPA). Although they only used a fraction of our interview in the MIT Tech Talk, we decided to publish the extended version online. We feel as though our response provides valuable information for parents, legislators, journalists and technologists. It summarizes a lot of what both Henry and i have been trying to get across when interviewed by the media.

    Discussion: MySpace and Deleting Online Predators Act

    Please, feel free to share this. You are also welcome to re-publish this interview (or portions of this interview) with proper attribution.

    Comments (18) + TrackBacks (0) | Category: social software

    News of Wikipedia's Death Greatly Exaggerated

    Email This Entry

    Posted by Clay Shirky

Nicholas Carr has an odd piece up, reacting to the ongoing question of Wikipedia governance as if it were the death of Wikipedia. In Carr’s view
    Where once we had a commitment to open democracy, we now have a commitment to “making sure things are not excessively semi-protected.” Where once we had a commune, we now have a gated community, “policed” by “good editors.” So let’s pause and shed a tear for the old Wikipedia, the true Wikipedia. Rest in peace, dear child. You are now beyond the reach of vandals.
Now this is odd because Carr has in the past cast entirely appropriate aspersions on pure openness as a goal, noting, among other things, that “The open source model is not a democratic model. It is the combination of community and hierarchy that makes it work. Community without hierarchy means mediocrity.”

Carr was right earlier, and he is wrong now. Carr would like Wikipedia to have committed itself to openness at all costs, so that changes in the model are failure conditions. That isn’t the case, however; Wikipedia is committed to effectiveness, and one of the things it has found to be effective is openness, but where openness fails to provide the necessary defenses on its own, they’ll make changes to remain effective. The changes in Wikipedia do not represent the death of Wikipedia but adaptation, and more importantly, adaptation in exactly the direction Carr suggests will work.

We’ve said it here before: Openness allows for innovation. Innovation creates value. Value creates incentive. If that were all there was, it would be a virtuous circle, because the incentive would be to create more value. But incentive is value-neutral, so it also creates distortions — free riders, attempts to protect value by stifling competition, and so on. And distortions threaten openness.

As a result, successful open systems create the very conditions that threaten openness. Systems that handle this pressure effectively continue (Slashdot comments.) Systems that can’t or don’t find ways to balance openness and closedness — to become semi-protected — fail (Usenet.)

A huge number of our current systems are hanging in the balance, because the more valuable a system, the greater the incentive for free-riding. Our largest and most spontaneous sources of conversation and collaboration are busily being retrofit with filters and logins and distributed ID systems, in an attempt to save some of what is good about openness while defending against wiki spam, email spam, comment spam, splogs, and other attempts at free-riding. Wikipedia falls into that category.

    And this is the possibility that Carr doesn’t entertain, but is implicit in his earlier work — this isn’t happening because the Wikipedia model is a failure, it is happening because it is a success. Carr attempts to deflect this line of thought by using a lot of scare quotes around words like vandal, as if there were no distinction between contribution and vandalism, but this line of reasoning runs aground on the evidence of Wikipedia’s increasing utility. If no one cared about Wikipedia, semi-protection would be pointless, but with Wikipedia being used as reference material in the Economist and the NY Times, the incentive for distortion is huge, and behavior that can be sensibly described as vandalism, outside scare quotes, is obvious to anyone watching Wikipedia. The rise of governance models is a reaction to the success that creates incentives to vandalism and other forms of attack or distortion.

We’ve also noted before that governance is a certified Hard Problem. At the extremes, co-creation, openness, and scale are incompatible. Wikipedia’s principal advantage over other methods of putting together a body of knowledge is openness, and from the outside, it looks like Wikipedia’s guiding principle is “Be as open as you can be; close down only where there is evidence that openness causes more harm than good; when this happens, reduce openness in the smallest increment possible, and see if that fixes the problem.”
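That guiding principle reads almost like a control loop. Here it is as a hypothetical Python sketch; the protection levels and the harm signal are invented simplifications, not Wikipedia’s actual mechanism:

    # Escalate in the smallest increment only on evidence of harm;
    # relax again when the harm subsides.
    PROTECTION_LEVELS = ["open", "semi-protected", "protected"]

    def adjust_protection(page, harm_observed):
        level = PROTECTION_LEVELS.index(page["protection"])
        if harm_observed and level + 1 < len(PROTECTION_LEVELS):
            page["protection"] = PROTECTION_LEVELS[level + 1]
        elif not harm_observed and level > 0:
            page["protection"] = PROTECTION_LEVELS[level - 1]
        return page

    page = {"title": "Elephant", "protection": "open"}
    adjust_protection(page, harm_observed=True)
    print(page)  # {'title': 'Elephant', 'protection': 'semi-protected'}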

People who build or manage large-scale social software form the experimental wing of political philosophy — in the same way that the US Constitution is harder to change than local parking regulations, Wikipedia is moving towards a system where evidence of abuse generates antibodies, and those antibodies vary in form and rigidity depending on the nature and site of the threat. By responding to the threats caused by its growth, Wikipedia is moving toward the hierarchy+community model that Carr favored earlier. His current stance — that this change is killing the model of pure openness he loved — is simply crocodile tears.

    Comments (21) + TrackBacks (0) | Category: social software

    May 22, 2006

    Enterprise 2.0, SoA and the Freeform Advantage

    Email This Entry

    Posted by Ross Mayfield

    Andrew McAfee, who first mentioned the term Enterprise 2.0 to me on December 1st 2005, provides a definition:

Now, since I was the first to write extensively about Enterprise 2.0, I feel I’m entitled to define it:

    Enterprise 2.0 is the use of freeform social software within companies.

    ‘Freeform’ in this case means that the software is most or all of the following:

    • Optional
    • Free of up-front workflow
    • Egalitarian, or indifferent to formal organizational identities
    • Accepting of many types of data

‘Social’ means that there’s always a person on at least one end of the wire with Enterprise 2.0 technologies.  With wikis, prediction markets, blogs, del.icio.us, and other Web 2.0 technologies with clear enterprise applications, people are doing all the interacting and providing some or all of the content; the IT is just doing housekeeping and/or bookkeeping.

I’m in agreement, and find this easier to accept than the naming debates of the past (it is also reminiscent of my first stab at naming: “Social Software adapts to its environment, instead of requiring its environment to adapt to software”).

If there is debate, it will be on two fronts: the role of organizational identities (Egalitarian) or an emphasis on technology over social dynamics.  McAfee focuses on the second, that of Enterprise 2.0 vs. SoA:

    Full post is on my blog…

    Comments (0) + TrackBacks (0) | Category: social software

    May 12, 2006

    Social Science and Design Questions

    Email This Entry

    Posted by Ross Mayfield

Last week Liz organized the Microsoft Research Social Computing Symposium. I shared some raw notes here, and here is a good gaming summary, but most of the activity was in a private Socialtext wiki. Among other things, Clay and danah held a session on the lingering questions in our field. Sharing these should tease out what work is already done or in progress, and I thought they might be thought-provoking at the least:

    Social Science Questions

* How can we measure the success of different types of online communities, and their survival and productivity by various criteria?
* Coates: which community software is more successful in which environments?
* What are the boundary conditions for mobile and pervasive (social) computing systems?
* To what extent, in what ways, at what rate/time scale will mobile and/or pervasive systems change the way humans interact socially?
* Do natives of social media systems have a different notion of themselves as individuals and about their relation to broader social groups?
* What are the mechanisms that cause people to act, mark up, buy or sell bits they care about online?
* What tips people to try something, and what’s enough to bring value?
* Does the “regular public” want to connect with people they do not know? (outside the context of dating)
* What level of visual representation of the body is necessary to trigger mirror neurons?
* Are the online community members of tomorrow going to be more or less participatory than today’s? And why?
* What impact do computer/video games have on the everyday habits and routines of the gamers?
* Is society becoming more or less individualized?
* How can we use the computational ability of our machines to transform communication?
* How can we get access to behavioral (server logs) and attitudinal (survey) data from large-scale worlds?

    Design Questions

* What elements of MMOGs can be adapted to web applications?
* How can we build virtual worlds/spaces where we can operate parallel servers with slightly variable rulesets?
* …so that we can change one experimental condition and observe the response by the inhabitants?
* What are the barriers to contributing to social group interaction (social bookmarking, wikis)?
* …What are the steps to mitigate those barriers?
* How do we make memories portable?
* How do we use social judgement to surface what your peers care about or are interested in? What the crowd is interested in?
* How can communities support veterans going off-topic together and newcomers seeking topical information and connections?

    What lingering questions do you have for possible research?

    Comments (2) + TrackBacks (0) | Category: social software

    May 11, 2006

    anti-social networks legislation

    Email This Entry

    Posted by danah boyd

    Earlier, i spoke about how the MySpace panic was likely to cause legislation proposals. Today, Congressperson Fitzpatrick proposed legislation to amend the Communications Act of 1934 “to require recipients of universal service support for schools and libraries to protect minors from commercial social networking websites and chat rooms.” This legislation broadly defines social network sites as anything that includes a Profile plus an ability to communicate with strangers. It covers social networking sites, chatrooms, bulletin boards. Obviously, the target is MySpace but most of our industry would be affected. Blogger, Flickr, Odeo, LiveJournal, Xanga, MySpace, Facebook, AIM, Yahoo! Groups, MSN Spaces, YouTube, eBaumsworld, Slashdot. It would affect Wikipedia if there wasn’t a special clause for non-commercial sites. Because many news sites (NYTimes, CNN, the Post) allow people to login and create profiles and comment, it might affect them too.

    Because it affects both libraries and schools, it will dramatically increase the digital divide. Poor youth only gain access to these sites through libraries and schools. With this ban, poor youth will have no access to the cultural artifacts of their day. Furthermore, because libraries won’t be able to maintain separate 18+ and minor computers, this legislation will affect everyone who uses libraries, including adults.

    This legislation is horrifying and culturally damaging. Please, all of you invested in social technologies, do something to make this stop.

    Comments (3) + TrackBacks (0) | Category: social software

    May 3, 2006

    innovating mobile social technologies (damn you helio)

    Email This Entry

    Posted by danah boyd

The next step in social technologies is mobile. Duh. Yet a set of factors has made innovation in this space near impossible. First, carriers want to control everything. They control what goes on a handset, how much you pay for it and who else you can communicate with. Next, you have hella diverse handsets. Even if you can put an application on a phone, there’s no standard. Developers have to make a bazillion different versions of an app. To make matters worse, installing on a phone sucks and most users don’t want to do it. Plus, to make their lives easier, developers often go for Java apps and web apps which are atrociously slow and painful. All around, it’s a terrible experience for innovators, designers and users.

These headaches have a detrimental effect on the development of mobile social software. Successful social technologies require cluster effects. Cluster effects require everyone within a particular social cluster to be able to play. If 20% of your friends can’t play because their phone/carrier won’t let them, the end result is often that NO ONE plays. Of course, there’s a tipping point where people buy a new phone or switch carriers, but that tipping point is hefty and right now, it’s for things like SMS, not neuvo apps. Switching carriers is even uglier - it requires a huge drop in price.

    Being able to get to basic cluster effects is the baseline for a mobile social app to succeed. This alone won’t make it work, but you need that to even begin. There are lots of other limitations, especially when the MoSoApp depends on geography. Take a look at something like Dodgeball. It was utterly brilliant at SXSW because 1) everyone was able to use it; 2) huge clusters were on it; 3) everyone was geographically proximate. There was a curve of use so that a fraction checked in all of the time, most checked in occasionally and a fraction never checked in. But that’s the ideal distribution for cluster effects. Still, because everyone could use it, it was used.
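The 20% figure is easy to sanity-check with a toy model. Assuming, purely for illustration, that a 20-person cluster adopts only when at least 90% of its members can participate at all, a small simulation shows how sharply compatibility gates adoption:

    import random

    def cluster_adopts(n_friends=20, compatible_share=0.8, threshold=0.9,
                       trials=10_000):
        """Estimate how often a friend cluster reaches critical mass."""
        successes = 0
        for _ in range(trials):
            compatible = sum(random.random() < compatible_share
                             for _ in range(n_friends))
            if compatible / n_friends >= threshold:
                successes += 1
        return successes / trials

    print(cluster_adopts(compatible_share=0.80))  # ~0.21
    print(cluster_adopts(compatible_share=0.95))  # ~0.92

Under these made-up numbers, locking out 20% of handsets means only about a fifth of clusters ever reach critical mass; at 95% compatibility, nearly all do.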

    Over and over, i hear about cool technologies that involve multimedia sharing, GPS applications, graphical interfaces, etc. In theory, as research, these are great. Unfortunately, without clusters, you cannot even test the idea to see if it would make sense to a given population. :(

    There are only three phones out there with cluster effects right now: Crackberry, Treo and Sidekick. Even still, the killer app for each of these (email or AIM) connects them not to each other but to a broader network because of non-mobile technology. Plus, each of these clusters has issues when it comes to developing for them. Crackberry appeals to the business world who is on leash to their boss. Productivity-centric apps could be helpful to this crowd, but it will not be fun and most of these ideas involve privacy destruction. The Treo is central around the business tech world but most of this population socializes with people who are trying out every new phone on the planet; this group is too finicky and besides, they want everything OPEN. Then there’s the Sidekick - it has penetrated the hearts and minds of urban street youth. Sadly, few designers are really interested in thinking about black urban culture. ::grumble::grumble::

When i heard that the Helio was going to launch with MySpace on board, i got super super excited. Like IM and email, MySpace is a perfect application to bridge web and mobile interactions. Sure, it would only include the communication messages and not really take advantage of the mobile issues with social networks, but it would be a good step, no? The target would inevitably be 16-30, an ideal target for dealing with mobile sociability. I was anxiously awaiting the launch, figuring that if anything could push youth to center around a technology, it would involve MySpace. From MySpace, you could actually start innovating with youth networks, location-based activities, image sharing, etc. Opportunity!

And then they launched. What marketing asshole chose the prices? $85 a month minimum on top of a $275 phone??? Has anyone not noticed that the target youth market is using the free generic phone and a $40 a month plan? You need to lure them away from their T-mobile/Sprint/Verizon plan and entice them to come over. You need to do this en masse, with enthusiasm. You cannot do this for $85 a month on top of a $275 phone. ::sigh:: Opportunity lost.

    There are two ways to get mobile social applications going:
    1) A population needs to have access to a universal interaction platform which (except for SMS and dialing) means being on the same technology;
    2) Carriers/handsets need to standardize and open up to development by outsiders.

    The latter is the startup fantasy and i don’t see it happening any time soon (stupid carriers). The former is really hard because it means enticing people over away from their contracts. Plus, it means moving against gadget individuality, which is something that people have really bought into. The only way to do that is for it to be super accessible and super cool. This is unfortunately an oxymoron because cool in gadgets equals expensive which means inaccessible. While the trendsetters will all opt-in, you need the followers to come along too for cluster effects to work.

    There is a third option: destroy the carriers. The possibility of WiFi phones (following blanketed WiFi) means that you just have to deal with multiple handset makers but, right now at least, they are better about openness. At least then, you’d just have one development roadblock. Unfortunately, this is probably a long way off because the telcos are in bed with legislators who are being extremely slow about universal WiFi and are all about protecting dying industries.

    I hate when innovation is jammed up by bad politics and stupid forms of competition. One of the hugest challenges of convergence culture is that traditional competition doesn’t work. We’re not competing for who can create the coolest toothbrush design anymore. We’re now competing for who can build the biggest roadblocks in convergence. Today, innovation means figuring out how to best undermine the roadblocks without getting into legal trouble. Talk about a buzz kill.

    So what should be done? Oh carriers, handset makers, innovators, venture capitalists, legal people… Is the goal to innovate or to control? What should be done to push past these roadblocks? (And for all of you in favor of control, remember that there are other markets besides the US/UK/Japan where innovation will occur and laws will not protect.)

    Update: I want to clarify some things around youth purchasing. The youth market is 14-28. The 14-21s get their phones from their parents and are on their plans. The 21-28s get their own plans. The 14-21s are stuck with whatever free phone they get unless they can beg and plead for a cooler phone for their birthday. They also get shit plans, although many have been able to convince their parents to support SMS these days. This segment of the youth population is key because they are hyper active and this is when they are setting their norms for phone use these days. The way to get to them is to either make a phone that is so cool that they beg and beg for their birthday (and it fits into their parents’ plan) or to make a package so cheap that they can convince their parents to get them a separate plan because it’s economically viable. The 21-28s have more flexibility but they are still strapped for cash and are quite cautious with their plans, but if they’ve gotten used to SMS they don’t give it up. They are also more likely to take the free phone unless they are the trendsetters (because they now have to pay and begging doesn’t work). The exception to this is actually working class teens who tend to buy their own phone starting at 15/16 - they buy cooler phones but still have shit mobile plans. This is why the Sidekick worked so well in this demographic. (Note: these observations and this post are based on what i’ve seen hanging out in youth culture, not any interactions i’ve had with mobile or tech companies or any formal data i’ve collected for my dissertation. In other words, i may be very wrong.)

    Also posted at apophenia

    Comments (5) + TrackBacks (0) | Category: social software

    May 1, 2006

    Tag nation

    Email This Entry

    Posted by David Weinberger

Technorati reports that 47% of blog posts have a user-created category or tag associated with them, excluding default categories such as “diary” and “general.”

    That’s a lot of tags.

    Comments (4) + TrackBacks (0) | Category: social software

    April 25, 2006

    great facebook guidelines for administrators

    Email This Entry

    Posted by Liz Lawley

    While preparing for a panel on “Blogs, Wikis, MMORPGs, and YASNS: Shaking Up Traditional Education” at the Milken Institute Global Conference, I stumbled across Fred Stutzman’s post “How University Administrators Should Approach the Facebook: Ten Rules.” Great stuff. I particularly liked #9:

    Since you can’t make Facebook go away, and even if you tried to, you couldn’t, you might as well accept it and deal with it. The fact of the matter is that students need to understand the long view, and they need to understand the importance of the written record. They’ve spent their entire lives online, and they are completely comfortable posting information about themselves online. Now that they’re 18, economic motivations step in, and it is our obligation and duty to protect them. Telling them not to say anything controversial, or forcing them to use privacy settings just won’t cut it - remember, the students who are on the Facebook want to be found and listened to. What they need to understand is the context. They have to understand the need to act now on behalf of the person they’ll be in 4 or 5 or 6 years. Give them that context. Explain to them the value of maintaining a self-image they can be proud of down the road. Work with them on this, not against them - it may be your only chance.

    That advice should be going to parents and teachers, as well—not just administrators. Thinking about the “long view” of these media—blogs, wiki editing history, social network site profiles—is a skill that we need to be teaching kids.

    Comments (18) + TrackBacks (0) | Category: social software

    April 13, 2006

    Enterprise 2.0

    Email This Entry

    Posted by Ross Mayfield

    Harvard Professor Andrew McAfee:

    I have an article in the spring 2006 issue of Sloan Management Review (SMR) on what I call Enterprise 2.0 —  the emerging use of Web 2.0 technologies like blogs and wikis (both perfect examples of network IT) within the Intranet.  The article describes why I think this is an important and welcome development, the contents of the Enterprise 2.0 ‘toolkit,’ and the experiences to date of an early adopter.  It also offers some guidelines to business leaders interested in building an Enterprise 2.0 infrastructure within their companies.

One question not addressed in the article is: Why is Enterprise 2.0 an appealing reality now?…

    He continues, in his blog:

    As described in the SMR article, these tools include powerful search, tags (the basis for the folksonomies at del.icio.us and flickr), and automatic RSS signals whenever new content appears.  As I type these words I don’t know the best site to serve as the link behind the abbreviation ‘RSS’ in the previous sentence.  To find this site, I’m going to type ‘RSS’ into Google and see what pops up (sure enough, the Wikipedia entry for ‘RSS’ was pretty high in Google’s results).  I also don’t know the URL of the page I’m using right now to type this blog entry.  I do know that it’s on my del.icio.us page, tagged as ‘APMblog,’ so I can find it whenever I want.  And I don’t know what work my three collaborators on a research project are doing right now; I just know that when any of them has some results to share or a new draft of the paper they’ll post it on the project’s wiki (which is powered by Socialtext) and I’ll immediately get an RSS notification about it.

    These examples are not meant to show that my professional life is perfectly organized (that assertion would be worse than false; it would be fraudulent) or that we’ve addressed all the challenges associated with the growth of the Web.  They’re meant instead to illustrate how technologists have done a brilliant job at three tasks: building platforms to let lots of users express themselves, letting the structure of these platforms emerge over time instead of imposing it up front, and helping users deal with the resulting flood of content.

    As the SMR article discusses, the important question for business leaders is how to import these three trends from the Internet to the Intranet —  how to harness Web 2.0 to create Enterprise 2.0.
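The RSS piece of that workflow is simple enough to sketch. Here is a minimal poller using the feedparser library, with a made-up URL standing in for a project wiki’s change feed:

    import time
    import feedparser

    FEED_URL = "https://example.com/project-wiki/changes.rss"  # hypothetical

    def watch(feed_url, interval=300):
        """Poll a feed and report entries we haven't seen before."""
        seen = set()
        while True:
            feed = feedparser.parse(feed_url)
            for entry in feed.entries:
                key = entry.get("id") or entry.get("link")
                if key not in seen:
                    seen.add(key)
                    print("New:", entry.get("title", "(untitled)"))
            time.sleep(interval)

    # watch(FEED_URL)  # runs until interrupted

In practice a feed reader or the wiki’s own notifier does this for you; the point is only how little machinery the notification pattern requires.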

    Andrew also dug deep to develop a Harvard Business School Case Study: Wikis at Dresdner Kleinwort Wasserstein.

    Former HBR Editor Nick Carr, always one for orderly skepticism, comments on the SMR article:

    McAfee sounds a note of caution along these lines. He notes the possibility that “busy knowledge workers won’t use the new technologies, despite training and prodding,” and points to the fact that “most people who use the Internet today aren’t bloggers, wikipedians or taggers. They don’t help produce the platform - they just use it.” There’s the rub. Managers, professionals and other employees don’t have much spare time, and the ones who have the most valuable business knowledge have the least spare time of all. (They’re the ones already inundated with emails, instant messages, phone calls, and meeting requests.) Will they turn into avid bloggers and taggers and wiki-writers? It’s not impossible, but it’s a long way from a sure bet.

This is true: adoption is the rub.  But one hedge we have is, to McAfee’s point, how these tools help cope with overload.  I’d wager (in fact I have) that email volume will only increase, that some devices only exacerbate the problem, and that, unlike KM, more productive and simpler models have the upper hand.

    Dion Hinchcliffe focuses on the technical aspects of this trend: Ajax, SaaS and SoA.  But what is really different is the focus on users ahead of buyers and architecture.  Remember, it’s made of people.

    Comments (22) + TrackBacks (0) | Category: social software

    March 21, 2006

    Friendster lost steam. Is MySpace just a fad?

    Email This Entry

    Posted by danah boyd

    People keep asking me “What went wrong with Friendster? Why is MySpace any different?” Although i’ve danced around this issue in every talk i’ve given, i guess i’ve never addressed the question directly. So i sat down to do so tonite. I meant to write a short blog post, but a full-length essay came out. Rather than make you read this essay in blog form (or via your RSS reader), i partitioned it off to a printable webpage. If you are building social technologies or online communities, please read this. I think it’s really important to understand the history of these sites, how users engaged with them, how the architects engaged with users, and how design decisions had social consequences. Hopefully, my essay can help with this.

    Friendster lost steam. Is MySpace just a fad?

    I do want to highlight a section towards the end because i think that it’s quite problematic that folks aren’t thinking about the repercussions of the moral panic around MySpace.

    If MySpace falters in the next 1-2 years, it will be because of this moral panic. Before all of you competitors get motivated to exacerbate the moral panic, think again. If the moral panic succeeds:
    1. Youth will lose (even more) freedom of speech. How far will the curtailment of the First Amendment go?
    2. All users will lose the safety and opportunities of pseudonymity, particularly around political speech and particularly internationally.
    3. Internet companies will be required to confirm the real life identity of all users. At their own cost.
    4. International growth on social communities will be massively curtailed because it is much harder to confirm non-US populations.
    5. Internet companies will lose the protections of common carrier which will have ramifications in all sorts of directions.
    6. Internet companies will see a massive increase in subpoenas and will be forced to turn over data on their users which will in turn destroy the trust relationship between companies and users.
    7. There will be a much greater barrier for new communities to form and for startups to build out new social environments.
    8. International companies will be far better positioned to create new social technologies because they won’t have to abide by American laws even if American citizens use their technology (assuming the servers are hosted outside of the US). Unless, of course, we decide to block sites on a nation-wide basis….

    Comments (31) + TrackBacks (3) | Category: social software

    March 13, 2006

    glocalization talk at Etech

    Email This Entry

    Posted by danah boyd

    Last week, i gave a talk at O’Reilly’s Etech on how large-scale digital communities can handle the tensions between global information networks and local interaction and culture. I’ve uploaded the crib for those who are interested in reading the talk: “G/localization: When Global Information and Local Interaction Collide”.

This talk was written for designers and business folks working in social tech. I talk about the significance of culture and its role in online communities. I go through some of the successful qualities of Craigslist, Flickr and MySpace to lay out a critical practice: design through embedded observation. I then discuss a few issues that are playing out on tech and social levels.

    Anyhow, enjoy! And let me know what you think!

    Comments (14) + TrackBacks (0) | Category: social software

    March 12, 2006

    Clash of Uncivilizations

    Email This Entry

    Posted by Ross Mayfield

    Jon Turow passed on an open letter to Mark Zuckerberg in the Daily Princetonian. Facebook recently expanded from college to high school, resulting in a clash of uncivilizations:

…If we really wanted to, we could steer clear of the groups by just avoiding the high school profiles. But we can’t ignore it when they post on our walls. And my god, do they post. Unfortunately, they don’t understand that by posting “OMG how are you? I haven’t seen you since our Model UN trip three years ago!” they are undermining the college personas that we have so carefully constructed over the past three years. And when a 16-year-old girl pokes us, we worry that poking back could result in a cyber-statutory rape conviction. Something tells us that when having sex with one of your facebook friends could result in a criminal violation, things have gone too far….

    Comments (16) + TrackBacks (0) | Category: social software

    March 9, 2006

The Experimental Wing of Political Philosophy

    Email This Entry

    Posted by Ross Mayfield

Clay may end up posting something about pattern languages for moderation systems here, but Nat has great notes from his talk at Etech and I couldn’t help but lift this quote:

    This is the direction that the conversation around social software is taking. Hobbes would say that Dave had the right and all was good. Rousseau would reply, “no he didn’t, software systems that don’t allow the users to fight back are immoral.”
Social software is the experimental wing of political philosophy, a discipline that doesn’t realize it has an experimental wing. We are literally encoding the principles of freedom of speech and freedom of expression in our tools. We need to have conversations about the explicit goals of what it is that we’re supporting and what we are trying to do, because that conversation matters. Because we have short-term goals and the cliff-face of annoyance comes in quickly when we let users talk to each other. But we also need to get it right in the long term because society needs us to get it right. I think having the language to talk about this is the right place to start.

    Then again, Plato argued in the Seventh Letter that only philosophers are fit to rule.

    Comments (13) + TrackBacks (1) | Category: social software

    March 6, 2006

    An Adoption Strategy for Social Software in the Enterprise

    Email This Entry

    Posted by Ross Mayfield

Perhaps the greatest competency Socialtext has gained over the past three years is fostering adoption of social software.  Adoption matters most for IT to have value.  It should be obvious that if only a third of a company uses a portal, then the value proposition of that portal is two thirds less than its potential.  But for social software, value is almost wholly generated by the contributions of the group, and imposed adoption is marked for failure.  Suw Charman has been working with Socialtext on site at Dresdner Kleinwort Wasserstein and has spearheaded the creation of the following practice documentation.  I believe this will be a critical contribution for enterprise practices, so do read on…

    An Adoption Strategy for Social Software in the Enterprise

    Experience has shown that simply installing a wiki or blog (referred to collectively as ‘social software’) and making it available to users is not enough to encourage widespread adoption. Instead, active steps need to be taken to both foster use amongst key members of the community and to provide easily accessible support.

    There are two ways to go about encouraging adoption of social software: fostering grassroots behaviours which develop organically from the bottom-up; or via top-down instruction. In general, the former is more desirable, as it will become self-sustaining over time - people become convinced of the tools’ usefulness, demonstrate that to colleagues, and help develop usage in an ad hoc, social way in line with their actual needs.

Top-down instruction may seem more appropriate in some environments, but it may not be effective in the long term: if the team leader stops actively making subordinates use the software, they may naturally give up, having never become convinced of its usefulness. Bottom-up adoption taps into social incentives for contribution and fosters a culture of working openly that has greater strategic benefits. Inevitably in a successful deployment, top-down and bottom-up align themselves in what Ross Mayfield calls ‘middlespace’.

    ...continue reading.

    Comments (9) + TrackBacks (2) | Category: social software

    February 26, 2006

    AirTroductions

    Email This Entry

    Posted by danah boyd

    I spend too much time in airports and i can’t imagine i’m alone in this crowd. While i often like to get work done, i also like interesting interactions… or at least sane seatmates. Social software should be able to help but there are so many barriers to this. You need to articulate too much and who has time? Still, as broken as they are, i’m interested in exploring the tools that might lead to entertaining interactions or at least to the development of better systems to do so. One of the ones i’m curious about is AirTroductions. Yeah, it kinda has dating overtones to it, but i’m still curious if it’d ever work. At the very least, who else is en route to Etech or SXSW or IASummit when? I have to imagine that lots of folks i know will be passing through the same airports in the next month. Anyone else willing to give it a try just to see?

    Comments (13) + TrackBacks (1) | Category: social software

    February 21, 2006

    the significance of MySpace

    Email This Entry

    Posted by danah boyd

    While MySpace has skyrocketed to success beyond any of the other social technologies on the web, too few folks in the industry talk about it, participate in it or otherwise pay attention to it…. mostly because it’s particularly populated by teens, musicians and other folks who are nowhere near connected to the tech industry. Much of what’s discussed is the culture of fear put forward by the mass media. This is quite unfortunate because there’s a lot of interesting stuff going on there.

    At AAAS this week, i had the opportunity to present the first phase of my findings in a talk called Identity Production in a Networked Culture. If you want insight into what teens are doing on MySpace and why, check it out.

    Comments (4) + TrackBacks (3) | Category: social software

    February 14, 2006

    Powerlaws: 2006 Dance Re-mix

    Email This Entry

    Posted by Clay Shirky

    The question of inequality and unfairness has come up again, from Seth’s Gatekeepers posts and subsequent conversation, to pointers to Clive Thompson’s A-Listers article in New York magazine, which article discusses the themes from Powerlaws, Weblogs, and Inequality (though without mentioning that essay or noting that the original powerlaw work was done in 2003.)

    The most interesting thing I’ve read on the subject was in Doc Searls post:

    I’ve always thought the most important thesis in Cluetrain was not the first, but the seventh: Hyperlinks subvert hierarchies.
    What I’ve tried to say, in my posts responding to Tristan’s, Scott’s and others making the same point, is nothing more than what David Weinberger said in those three words.

    I thought I was giving subversion advice in the post that so offended Seth. But maybe I was wrong. Maybe being widely perceived as a high brick in the blogosphere’s pyramid gives my words an unavoidable hauteur — even if I’m busy insisting that all the ‘sphere’s pyramids are just dunes moving across wide open spaces.
    […]
    I’ll just add that, if ya’ll want to subvert some hierarchies, including the one you see me in now, I’d like to help.

    The interesting thing to me here is the tension between two facts: a) Doc is smart and b) that line of thinking is unsupportable, even in theory. The thing he wants to do — subvert the hierarchy of the weblog world as reflected in lists ranked by popularity — is simply impossible to do as a participant.

    Part of the problem here is language. Hierarchy has multiple definitions; the sort of hierarchy-subverting that networks do well is routing around or upending nested structures, whether org charts or ontologies. This is the Cluetrain idea that hyperlinks subvert hierarchies.

The list of weblogs ranked by popularity is not a hierarchy in that sense, however. It is instead a ranking by status. The difference is critical, since what’s being measured when we measure links or traffic is not structure but judgment. When I’m not the CEO, I’m not the CEO because there’s an org chart, and I’m not at the top of it. There is an actual structure holding the hierarchy in place; if you want to change the hierarchy, you change the structure.

    When I’m not the #1 blogger, however, there are no such structural forces making that so. Ranking systems don’t work that way; they are just lists ordered by some measured characteristic. To say you want to subvert that sort of hierarchy makes little sense, because there are only two sorts of attack: you can say that what’s being measured isn’t important (and if it isn’t, why try to subvert it in the first place?), or you can claim that lists are irrelevant (which is tough if the list is measuring something real and valuable.)

    Lists are different from org charts. The way to subvert a list is to opt out; were Doc to stop writing, he would cede his place in the rankings to others. At the other extreme, for him to continue to champion the good over the mediocre, as he sees it, sharpens the very hierarchy he wants to subvert. Huis clos.

The basic truth of such ranking systems is unchanged: for you to win, someone else must lose, because rank is a differential. Furthermore, in this particular system, the larger the blogosphere grows, the greater the inequality between the most- and the median-trafficked weblog.
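A quick worked example of why rank-as-differential implies growing inequality: if traffic at rank r falls off like 1/r (a Zipf curve, used here purely as an illustrative assumption, not a measured fit), the gap between the #1 and the median weblog scales with the size of the whole system:

    # Illustrative Zipf assumption: traffic(rank) is proportional to 1/rank.
    def top_to_median_gap(n_blogs, exponent=1.0):
        """Ratio of traffic at rank 1 to traffic at the median rank."""
        median_rank = (n_blogs + 1) // 2
        return median_rank ** exponent

    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10,} blogs: top/median traffic ratio ~{top_to_median_gap(n):,.0f}")
    # 1,000 blogs: ~500x; 100,000: ~50,000x; 10,000,000: ~5,000,000x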

    All of that is the same as it was in 2003. The power law is always there, any time anyone wants to worry about it. Why the worrying happens in spasms instead of steadily is one of the mysteries of the weblog world.

The only things that are different in 2006 are the rise of groups and of commercial interests. Of the top 10 Technorati-measured blogs (Disclosure: I am an advisor to Technorati), all but one are either run by more than one poster, or generate revenue from ads or subscriptions. (The exception is PostSecret, whose revenue comes from book sales, not directly from running the site.) Four of the top five and five of the ten are both group and commercial efforts — BoingBoing, Engadget, Kos, Huffington Post, and Gizmodo.

    Groups have wider inputs and outputs than individuals — the staff of BoingBoing or Engadget can review more potential material, from a wider range of possibilities, and post more frequently, than can any individual. Indeed, the only two of those ten blogs operating in the classic “Individual Outlet” mode are at #9 and 10 — Michelle Malkin and Glenn Reynolds, respectively.

And blogs with business models create financial incentives to maximize audience size, both because that increases potential subscriber and advertiser pools, and because a high ranking is attractive to advertisers even outside per-capita calculations of dollars per thousand viewers.

    (As an aside, there’s a pair of interesting technical questions here: First, how big is the A-list ad-rate premium over pure per-capita calculations? Second, if such a premium exists, is it simply a left-over bias from broadcast media, or does popularity actually create measurable value over mere audience count for the advertiser? Only someone with access to ad rate cards from a large sample could answer those questions, however.)

    In his post Shirky’s Law, Hugh Macleod quotes me saying:

    Once a power law distribution exists, it can take on a certain amount of homeostasis, the tendency of a system to retain its form even against external pressures. Is the weblog world such a system? Are there people who are as talented or deserving as the current stars, but who are not getting anything like the traffic? Doubtless. Will this problem get worse in the future? Yes.

    I still think that analysis is correct. From the perspective of 2003, it’s the future already, and attaining the upper reaches of traffic, for even very committed bloggers, is much harder. That trend will continue. In February of 2009, I expect far more than the Top 10 to be dominated by professional, group efforts. The most popular blogs are no longer quirky or idiosyncratic individual voices; hard work by committed groups beats individuals working in their spare time for generating and keeping an audience.

    Comments (7) + TrackBacks (3) | Category: social software

    February 8, 2006

    an open letter to blizzard entertainment

    Email This Entry

    Posted by Liz Lawley

[Editorial Note: The following letter, which is also being posted on the Terra Nova weblog, is not intended to be seen as an “official stance” of either Terra Nova or Many-to-Many. It is simply an open letter authored by a group of authors and scholars who also have affiliations with one or the other of these weblogs.]

    Open Letter to Blizzard Entertainment—Speech Policy for GLBT guilds in World of Warcraft

Recently, Sara Andrews, a player in Blizzard Entertainment’s World of Warcraft (WoW), recruited for a Gay, Lesbian, Bisexual and Transsexual (GLBT) Friendly guild in the general chat channel on the Shadowmoon server. She was reported to a Game Master by another player, and subsequently sanctioned for “Harassment – Sexual Orientation”. Under the Terms of Use of WoW, it is forbidden to transmit offensive material, including abusive or sexually explicit material.

    Ms Andrews was given a warning not to undertake this again. She assumed this was a mistake, but Blizzard confirmed that the sanction and the punishment would stand. An official from Blizzard responded:

    “To promote a positive game environment for everyone and help prevent such harassment from taking place as best we can, we prohibit mention of topics related to sensitive real-world subjects in open chat within the game, and we do our best to take action whenever we see such topics being broadcast. This includes openly advertising a guild friendly to players based on a particular political, sexual, or religious preference, to list a few examples. For guilds that wish to use such topics as part of their recruiting efforts, our Guild Recruitment forum, located at our community Web site, serves as one open avenue for doing so.”

    As a result of public comments about this issue, Blizzard has reversed its decision and has privately communicated to Ms Andrews that no punishment will stem from this incident. It also has privately indicated that it is reviewing its sexual harassment policy. It has issued no public statement about the issue.

    We write this letter as educators, journalists, writers and players interested in the development of virtual worlds like World of Warcraft. We congratulate Blizzard on the courage to rescind its initial decision, and urge it to make a formal announcement that they were wrong to make it. The decision to sanction and punish Ms Andrews was wrong as a narrow matter of interpretation, and as a general principle of policy for WoW and other virtual worlds.

    ...continue reading.

    Comments (21) + TrackBacks (0) | Category: social software

    January 21, 2006

    The Bottom-Up $100,000 Pyramid

    Email This Entry

    Posted by David Weinberger

Zephyr Teachout and Britt Blaser, both veterans of the Howard Dean Internet campaign, reflect on how to fix what's going wrong at the well-intentioned Since Sliced Bread contest. The Service Employees International Union (SEIU) is sponsoring the contest, offering $100,000 to the person who comes up with the best idea for improving the lives of working women and men. Some 22,000 ideas were submitted, which "a group of diverse experts" winnowed to 70, a process some felt was too top-down.

    This is a fascinating case in which a bottom-up process is supposed to squeeze out a single winner, the contest is intended to advance the social good, and the reward includes a hefty chunk of change.

    Comments (3) + TrackBacks (0) | Category: social software

    January 2, 2006

    Social Software Top 10

    Email This Entry

    Posted by Ross Mayfield

    Ev:

    …With the caveats that Alexa’s data is not comprehensive—and even if they had perfect stats, “Alexa Rank” is still just one definition of popularity (a combination of reach and pageviews)—here’s the 10 most popular social media sites (with corresponding Alexa 100 rank):

    1. MySpace (8)
    2. Blogger (16)
    3. Xanga (20)
    4. Hi5 (31)
    5. Orkut (33)
    6. Thefacebook (41)
    7. Friendster (46)
    8. Flickr  (51)
    9. LiveJournal (NA)
    10. Photobucket (77)…

As the caveat noted, this is just one dimension along which to view such things.

    UPDATE: A constructive comment points out that Wikipedia isn’t on this list.

    Comments (26) + TrackBacks (1) | Category: social software

    December 30, 2005

    The Business Blogging 500

    Email This Entry

    Posted by Ross Mayfield

    Chris Anderson (Wired/Long Tail Blog) kicks off an open research project:

    Short Form: In collaboration with Socialtext, we’ve created a wiki that tracks which of the Fortune 500 is blogging. Check it out here. 

Jason Calacanis already has, contributing Time Warner Inc (he should know) and bringing the count to 14 of the Fortune 500, or about 3%:

Blogging F500 companies, each with a sample blog:

• Amazon.com Inc. - Amazon Web Services Blog
• Avaya Inc. - 2006 FIFA World Cup Blog
• Avon Products, Inc. - Beauty Dish
• Cisco Systems, Inc. - Cisco High Tech Policy Blog
• Dell, Inc - Linux Engineering
• Electronic Data Systems - EDS' Next Big Thing Blog
• Ford Motor Company - 2005 Mustang Blog
• General Motors Corporation - FastLane Blog
• Hewlett-Packard Company - HP Blogs
• Microsoft Corporation - MSDN's Microsoft Blogs
• Motorola Inc - Motoblog: 4 bloggers & a phone
• Oracle Corporation - OraBlogs
• Sprint - Things That Make You Go Wireless
• Sun Microsystems Inc - Jonathan Schwartz
• Texas Instruments - Video 360 Blog
• Time Warner Inc - AOL Blogs
• The Boeing Company - Randy's Journal

Chris (and Doc) may be on to something in observing the correlation between F500 blogging and stock performance.  But at the least, this can serve as a renewable resource for informing social software adoption.
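For what that correlation check might look like, here is a deliberately toy sketch with invented numbers (nothing below is real market data, and none of it is Chris's or Doc's analysis):

```python
# Hypothetical illustration only: the returns below are made up,
# not real market data.
blogging =     [0.12, 0.08, 0.15, 0.05, 0.11]   # annual returns, blogging firms
non_blogging = [0.06, 0.09, 0.02, 0.07, 0.04]   # annual returns, non-bloggers

def mean(xs):
    return sum(xs) / len(xs)

lift = mean(blogging) - mean(non_blogging)
print(f"mean return lift for blogging firms: {lift:+.1%}")
# A serious version would need far more firms, sector and size controls,
# and a story for reverse causation (healthy firms may simply blog more).
```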

    Comments (25) + TrackBacks (2) | Category: social software

    December 28, 2005

    blurring boundaries between virtual and real worlds

    Email This Entry

    Posted by Liz Lawley

Ted Castronova has a fascinating post up on Terra Nova entitled "The Horde is Evil," in which he argues that the Horde races on World of Warcraft are "on the whole evil," and that this has moral implications for avatar choices:

    I’ve advanced two controversial positions: that avatar choice is not a neutral thing from the standpoint of personal integrity, and that the Horde, in World of Warcraft, is evil. Nobody agrees, but it’s been suggested that the community could chew on this a bit.

So here's my view: When a real person chooses an evil avatar, he or she should be conscious of the evil inherent in the role. There are good reasons for playing evil characters - to give others an opportunity to be good, to help tell a story, to explore the nature of evil. But when the avatar is considered an expression of self, in a social environment, then deliberately choosing a wicked character is itself a (modestly) wicked act.

I don't agree with Castronova (my horde character is a Tauren, a peaceful bison-like creature that lives in a Native American-inspired cultural context), nor do many of the commenters—but the issues he brings up are powerful and interesting, and the lengthy discussion in the comments is well worth reading.

Lately I've been thinking a lot about the relationship between "real life" and "game life," since I have personal and/or professional relationships with most of the people in my World of Warcraft guild, including both of my children. Castronova's argument, which he bolsters by citing his 3-year-old's reaction to his undead character, relates directly to those boundary-crossing issues.

    When I was playing online on Monday, Joi Ito said that he thought World of Warcraft was becoming the “new golf” for the technology set. I think there’s some truth in that, but it brings with it all kinds of additional social pressures and complexities, of which avatar racial choices are only the beginning. I think there’s some fertile ground for research in that boundary area, the crossover between the real and game worlds, and the extent to which they influence each other.

    (cross-posted from mamamusings)

    Comments (6) + TrackBacks (1) | Category: social software

    December 12, 2005

    Tag, you're gay!

    Email This Entry

    Posted by David Weinberger

    The Guardian has a story by Mark Honigsbaum about an attempt to identify gay-related items:

    Backed by the museums documentation watchdog, MDA, the group Proud Heritage this week began sending out a two-page survey requesting that institutions throughout the country list the gay and lesbian documents and artefacts in their collections. “For the first time ever, we are asking museums, libraries and archives throughout Britain to revisit their holdings and reveal what they have that is queer,” said Proud Heritage’s director Jack Gilbert. “At the moment these are not classified correctly, or held completely out of context and never see the light of day.”

… At the Llangollen Museum in Denbighshire, north Wales, for instance, there is an exhibit commemorating the lives of Eleanor Butler and Sarah Ponsonby. Known locally as the Ladies of Llangollen, they lived together in a small cottage from 1819 until their deaths in 1829 and 1831, and were renowned for wearing dark riding habits, an eccentric choice of dress for the time.

    “They would never have used the word lesbian to describe their relationship but there is no question that they lived together and shared the same bed,” said Mr Gilbert. “We think there may well be similar examples in other archives, but because people didn’t use words like lesbian and gay 200 years ago archivists have either overlooked it or simply don’t realise it’s there.”

    Great example of why authors/creators/publishers are not the best or final taggers of their own stuff. (Thanks to Phil Edwards for the link.)

    Comments (12) + TrackBacks (1) | Category: social software

    December 8, 2005

    Freedom of Anonymous Speech

    Email This Entry

    Posted by Ross Mayfield

Assume that John Seigenthaler gets what he wants from his criticism of Wikipedia.  He very well may get congressional hearings on anonymity.  Purportedly, in comments on a post by Larry Sanger, his intent is to have the private sector regulate anonymity on the Net.

The way he described it, you could shift the burden by changing the law so that Internet Service Providers would evaluate the plaintiff's evidence, and decide themselves whether revealing the customer's identity might be appropriate. If the decision is yes, at that point the ISP notifies the customer, who is given the opportunity to initiate legal proceedings to enjoin the ISP from revealing his identity.

Given the consolidation of telecom, this would empower a handful of ISPs, as few as five, to be judge and jury for revealing identity.  Anonymity is a critical facet of society, and its value goes beyond whistle-blowing.  I wouldn't call it a right, but I would call it a feature of the virtual and real worlds (we don't walk around with name-tags).  Regardless of how you value anonymity, you should agree that this would:

    1. create undue costs for ISPs,
    2. privatize governance and enforcement,
    3. create undue legal costs for consumers, which
    4. could lead to infringements on civil liberties, because
    5. customers would be guilty until proven innocent.

    Now, if the ISP or legal action revealed the libelous party it would resolve Seigenthaler’s complaint against Wikipedia. 

    Beyond this attempt to weaken anonymity on the Net, Wikipedia’s open nature is also under attack.  Adam Curry edited podcasting history in his favor.  Big deal.  It’s a wiki, just edit it if you disagree and let the community’s practice work over time.

    Consider regulating against graffiti.  You have two options:

    • Guard every wall in town to prevent the infraction from occurring
    • Paint over infractions and enforce the law by chasing down perpetrators

The former is not just prohibitively expensive, it kills creativity and culture.  The latter is the status quo and generally works, especially where communities flourish.

So what would you have Wikipedia do?  Lock down contributions through a fact-checking process with rigid policy?  Or let people contribute, leverage revision history, and let the group revert infractions?
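The second option is cheap precisely because of how revision history works: every edit is kept, and painting over an infraction is just one more edit. A toy model of the idea (my sketch, not MediaWiki's actual code):

```python
class WikiPage:
    """Toy revision-history model: every edit is kept, and a revert is
    itself just another edit, so nothing is ever lost."""

    def __init__(self, text=""):
        self.history = [text]

    @property
    def current(self):
        return self.history[-1]

    def edit(self, new_text):
        self.history.append(new_text)

    def revert(self, to_revision=-2):
        # "Painting over": re-publish an earlier revision.
        self.history.append(self.history[to_revision])

page = WikiPage("Podcasting was developed by many people.")
page.edit("Podcasting was invented by one visionary.")  # self-serving edit
page.revert()                                           # the group paints over it
print(page.current)  # community version restored, vandalism still on record
```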

Social media is disruptive.  How it is regulated will significantly shape how society manages the transition.  Today much of media is regulated through complaints (e.g. indecency).  It only takes one horror story for us to lose freedom of anonymous speech.  The easiest and most dangerous way to curb social media is to make it conform to mainstream models.

UPDATE: CNET has a pretty good article on the liability reform sought by Seigenthaler, the first argument I made.  Mitch Ratcliffe takes issue with my second argument, about how a wiki works and how best to regulate it.  Mitch, you keep trying to fit Wikipedia into your model of how an encyclopedia should be instead of recognizing how it is different.  A print version of Wikipedia should have an editorial process bolted on to emergent practice, as it is a comparable product, frozen in time.  But the evolving nature of Wikipedia itself needs to be recognized and celebrated for what it is.  Help people understand what it is, not what it is not.

    Comments accepted over here.

    Comments (0) + TrackBacks (0) | Category: social software

    December 7, 2005

    Sanger on Seigenthaler’s criticism of Wikipedia

    Email This Entry

    Posted by Clay Shirky

    Larry Sanger, in regards to John Seigenthaler’s criticism of Wikipedia:

    I have long worried that something like this would happen—from the very start of Wikipedia, in fact. Last year I wrote a paper, “Why Collaborative Free Works Should Be Protected by the Law” (here’s another copy). When Seigenthaler interviewed me for his column, I sent him a copy of the paper and he agreed that it was prophetic. It is directly relevant to the part of Seigenthaler’s column that says: “And so we live in a universe of new media with phenomenal opportunities for worldwide communications and research—but populated by volunteer vandals with poison-pen intellects. Congress has enabled them and protects them.” That was a part of Seigenthaler’s column that bothered me: what exactly does Seigenthaler want Congress to do?

    Comments (82) + TrackBacks (0) | Category: social software

    November 17, 2005

    The End of Process

    Email This Entry

    Posted by Ross Mayfield

If a knowledge worker has the organization's information in a social context at their fingertips, and the organization is sufficiently connected to tap experts and form groups instantly to resolve exceptions — is there a role for business process as we know it?

    Post continues over here…

    Comments (0) + TrackBacks (2) | Category: social software

    November 11, 2005

    round-up on MySpace and culture of fear

    Email This Entry

    Posted by danah boyd

    I’ve been thinking a lot about how anti-MySpace propaganda has been rooted in the culture of fear. Given that youth play a critical, but different, role in social software, i suspect that folks might be interested in how MySpace is getting perceived as a scary, scary place.

    Growing up in a culture of fear: from Columbine to banning of MySpace looks at how mainstream media is inciting moral panic around youth participation in public spaces. The article is framed around the ban of MySpace in certain schools. MySpace blamed for alienated youth’s threats follows up on this, looking specifically at how Columbine-esque situations are still not being addressed for their core problem: youth alienation. Instead, we’re still blaming the technology.

    Comments (53) + TrackBacks (1) | Category: social software

    November 3, 2005

    Programmer's Definition of Social Software

    Email This Entry

    Posted by Ross Mayfield

    Jimmy Wales:
    “I think, partly because of the personality types who become programmers… I don’t know what it is exactly… a lot of programmers, seem to me to think that the whole point of social software is to replace the social with the software. Which is not really what you want to do, right? Social Software should exist to empower us to be human… to interact… in all the normal ways that humans do.”

    Via a correction in danah’s comments

    Comments (4) + TrackBacks (1) | Category: social software

    October 28, 2005

    Social Software Critic

    Email This Entry

    Posted by Ross Mayfield

A slew of social software startups has arisen of late, and while we don't cover the news here, it's a good time to be a culture critic.

    Ning — Social Apps

Ning is the latest entry into the social applications space, aiming to be the mother of all social software. Aiming to be a platform from the get-go is a tough haul; the prize is admirable, but most platforms start as apps first. I've never heard someone utter the words "killer platform." As a result, the applications are relatively shallow, and they are competing against decentralized open source application publishing.

Since I used them as an example of stealth as an old-school model, I should note that they turn out to be located a block away from my office and I have met a bunch of great people there. So let me offer this more constructive takeaway. Today Ning fosters transient micro-communities with only pivots to bind them. When the first-class node is an app, as opposed to a profile, group or other object that centers on people, you have to construct an overlay of sorts to enable group forming across networks. In other words, object-centered sociality is currently isolated, which limits network effects. On the upside, the information architecture does a decent job handling underlying complexity, their terms of service are well done, and they are leveraging standard languages instead of seeking lock-in.

    One sentence suggestion: Focus less on the apps and more on the social.

    Flock — Social Browser

Flock is aiming to be the browser that we always wanted. Yes, it's more of an alpha than a beta, and after you start playing with it you want more. For Innovators, we already do all this stuff with well-groomed bookmarklets and personal hacks. For Early Adopters, it's not quite there yet.

Maybe that's the point. It's an open source play that is releasing early and often. If the Innovators build upon it (and from what I understand, like Greasemonkey and Ruby on Rails, it's like being a Connecticut Yankee in King Arthur's Court for developers) it may fulfill the needs of a more active mainstream. Today the blogging client and favorites features are too shallow to move me off of Firefox, bookmarklets and Etco/1001. There are two almost hidden features that demonstrate synergy (cough) between modalities:

• Search auto-completes with the breadcrumbs you leave behind. It's not social search, but it could be a perfect complement to Yahoo (which points both to the Biz Dev challenge that will really enhance the product and is their core revenue stream — but also to the potential exits as the browser war heats up). A sketch of the idea follows this list.
    • When you add a favorite, if the page has a feed, you can go back to see what’s new from the source.
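Here is the promised sketch of the first feature, as I imagine the mechanism working (a guess, not Flock's implementation): completions drawn from the trail of pages you have already visited, ranked by visit count.

```python
class BreadcrumbSearch:
    """A guess at the mechanism, not Flock's code: suggest completions
    from the pages you have already visited, most-visited first."""

    def __init__(self):
        self.visits = {}  # url -> visit count

    def visit(self, url):
        self.visits[url] = self.visits.get(url, 0) + 1

    def suggest(self, fragment, limit=3):
        hits = [u for u in self.visits if fragment in u]
        return sorted(hits, key=lambda u: -self.visits[u])[:limit]

history = BreadcrumbSearch()
for url in ["corante.com/many", "corante.com/many", "flock.com", "del.icio.us"]:
    history.visit(url)
print(history.suggest("corante"))  # your own breadcrumbs, not anyone else's
```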

Aggregation may be the modality (compared to Browse, Search and Author) that could blossom: it needs better interaction design, there is a lot of demand to bring reading and writing together, and the client gives you offline capabilities. I'm starting to speculate here, but that's the exciting thing about Flock: it makes you speculate to the point that you want to engage.

    One sentence suggestion: Focus on interaction between modalities and services, manage for quality and get busy with Biz Dev (I can’t believe that’s a job title again).

    Wink — Social Search

Wink is a nice Social Search play that incorporates user tagging and ranking to provide recommended results and block spam. My favorite feature, of course, is the ability to create a concept around a query as an unstructured wiki page. If the concept exists as a pagename within Wikipedia, it populates the page with that content and offers related concepts based upon it. I'm not sure that Wikipedia eats Google, but there is higher quality metadata available and a great way to augment the user experience. Wink is a small startup with a lot of promise, but it has the inherent challenges of a vertical search play (how to attract users, whether Google ad revenue is enough, and the fact that the portals are not acquiring).

    One sentence suggestion: Bake into blogspace.

    Memeorandum — Social Aggregator

Okay, this one may not be social yet. But Memeorandum is starting to solve a problem for me: where to go for a dashboard view of blogs and MSM with the ability to drill down into conversations. I'm not sure it yet has the accuracy that Google News does for the top two stories, but this is an invaluable dimension to get me out of my subscribed echo chamber.

    One sentence suggestion: Let me filter using my social network, even if it’s uploading my subscriptions.

    Sphere — Blog Search

    I’d agree with John Battelle that Sphere offers a good incremental improvement over existing blog search engines, but others have already extended to advanced tagging and feed features that make it more useful for bloggers. It is relatively spam free and speedy, but we will have to see how it scales.

    One sentence suggestion: Differentiate beyond core search for blog reader utility.

    Rollyo — Personalized Search

Rollyo's roll-your-own search engine is more than a great tag line. Letting people build their own search with a strong identity has utility for the creator, and users may benefit from those that bubble up. But there is something missing here, something more socialized than personalized.

    One sentence suggestion: Give searchers as well as creators a way to intertwingle for greater engagement.

    Comments (8) + TrackBacks (4) | Category: social software

    October 24, 2005

    Friendster publications

    Email This Entry

    Posted by danah boyd

    Various folks have been asking me about my Friendster publications and i thought i’d do a simple round-up for anyone who is trying to learn about Friendster. Below are directly relevant papers and their abstracts (or a brief excerpt); full citations can be found on my papers page. Please feel free to email me if you have any questions.

    “None of this is Real: Networked Participation in Friendster” by danah boyd - currently in review (email for a copy), ethnographic analysis of Friendster, Fakesters, and digital social play

Excerpt from introduction: Using ethnographic and observational data, this paper analyzes the emergence of Friendster, looking at the structural aspects that affected participation in early adopter populations. How did Friendster become a topic of conversation amongst disparate communities? What form does participation take and how does it evolve as people join? How do people negotiate awkward social situations and collapsed social contexts? What is the role of play in the development of norms? How do people recalibrate social structure? By incorporating social networks in a community site, Friendster introduces a mechanism for juxtaposing global and proximate social contexts. It is this juxtaposition that is at the root of many new forms of social software, from social bookmarking services like del.icio.us to photo sharing services like Flickr. Capturing proximate social contexts and pre-existing social networks are core to the development of these new technologies. Friendster is not an answer to the network question, but an experiment in capture and exposure of proximate relations in a global Internet environment. While Friendster is now not nearly as popular as in its heyday, the lessons learned through people's exploration of it are increasingly critical to the development of new social technologies. As a case study, this paper seeks to reveal those lessons in a manner useful to future development.

    Profiles as Conversation: Networked Identity Performance on Friendster by danah boyd and Jeffrey Heer - 2006 HICSS paper on how Friendster Profiles become sites of conversation

    Abstract: Profiles have become a common mechanism for presenting one’s identity online. With the popularity of online social networking services such as Friendster.com, Profiles have been extended to include explicitly social information such as articulated “Friend” relationships and Testimonials. With such Profiles, users do not just depict themselves, but help shape the representation of others on the system. In this paper, we will discuss how the performance of social identity and relationships shifted the Profile from being a static representation of self to a communicative body in conversation with the other represented bodies. We draw on data gathered through ethnography and reaffirmed through data collection and visualization to analyze the communicative aspects of Profiles within the Friendster service. We focus on the role of Profiles in context creation and interpretation, negotiating unknown audiences, and initiating conversations. Additionally, we explore the shift from conversation to static representation, as active Profiles fossilize into recorded traces.

    Vizster: Visualizing Online Social Networks by Jeffrey Heer and danah boyd - a 2005 InfoVis paper about visualizing Friendster data (including arguments about using visualization in ethnography and recognizing the value of play in visualization)

Recent years have witnessed the dramatic popularity of online social networking services, in which millions of members publicly articulate mutual "friendship" relations. Guided by ethnographic research of these online communities, we have designed and implemented a visualization system for playful end-user exploration and navigation of large-scale online social networks. Our design builds upon familiar node-link network layouts to contribute techniques for exploring connectivity in large graph structures, supporting visual search and analysis, and automatically identifying and visualizing community structures. Both public installation and controlled studies of the system provide evidence of the system's usability, capacity for facilitating discovery, and potential for fun and engaged social activity.

    Public Displays of Connection by Judith Donath and danah boyd - a 2004 BT Journal article on how people publicly perform their social relations

    Abstract: Participants in social network sites create self-descriptive profiles that include their links to other members, creating a visible network of connections — the ostensible purpose of these sites is to use this network to make friends, dates, and business connections. In this paper we explore the social implications of the public display of one’s social network. Why do people display their social connections in everyday life, and why do they do so in these networking sites? What do people learn about another’s identity through the signal of network display? How does this display facilitate connections, and how does it change the costs and benefits of making and brokering such connections compared to traditional means? The paper includes several design recommendations for future networking sites.

    Friendster and Publicly Articulated Social Networks by danah boyd - a 2004 short CHI paper staking out what Friendster is.

    Abstract: This paper presents ethnographic fieldwork on Friendster, an online dating site utilizing social networks to encourage friend-of-friend connections. I discuss how Friendster applies social theory, how users react to the site, and the tensions that emerge between creator and users when the latter fails to conform to the expectations of the former. By offering this ethnographic piece as an example, I suggest how the HCI community should consider the co-evolution of the social community and the underlying technology.

    Comments (3) + TrackBacks (0) | Category: social software

    October 22, 2005

    Social Verbs

    Email This Entry

    Posted by Ross Mayfield

Social verbs in online gaming are gestures that do not change the meaning of an object. When someone's WoW Mage waves to your Paladin, you choose how the object's meaning will change because of the gesture. Language is power, just as an emoticon can get you out of trouble for telling a borderline joke.

I'm paying particular attention to verbs these days as they seem to have greater meaning than nouns, especially places (which are non-persistent; persistence is vested in objects that take actions). The reason I keep coming back to my WoW research (cough) isn't the virtual world itself, but what I do with a group there.
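A toy illustration of the distinction (mine, not anything from WoW's internals): a social verb is an event that mutates no state, so its meaning is left entirely to the receiver.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    name: str
    health: int = 100

def attack(target: Avatar, damage: int):
    """Not a social verb: it changes the object's state."""
    target.health -= damage

def wave(actor: Avatar, target: Avatar) -> str:
    """A social verb: a pure gesture. No state changes; what the wave
    means is left entirely to the receiving player."""
    return f"{actor.name} waves at {target.name}"

mage, paladin = Avatar("Mage"), Avatar("Paladin")
print(wave(mage, paladin))  # meaning decided by the Paladin's player
attack(paladin, 10)         # this, by contrast, is not open to interpretation
print(paladin.health)
```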

    Beyond this gesture, the extended entry riffs on attention management, pull vs. push, marketing strategy and ownership of identity.

    ...continue reading.

    Comments (1) + TrackBacks (1) | Category: social software

    October 20, 2005

    I don't trust your attention

    Email This Entry

    Posted by Ross Mayfield

    I’ve been meaning to blog about a simply great article in the NY Times, Meet the Life Hackers, as I am a fan of the interruption tax, but I keep getting interrupted.

    When [Gloria] Mark [from UCI] crunched the data, a picture of 21st-century office work emerged that was, she says, “far worse than I could ever have imagined.” Each employee spent only 11 minutes on any given project before being interrupted and whisked off to do something else. What’s more, each 11-minute project was itself fragmented into even shorter three-minute tasks, like answering e-mail messages, reading a Web page or working on a spreadsheet. And each time a worker was distracted from a task, it would take, on average, 25 minutes to return to that task. To perform an office job today, it seems, your attention must skip like a stone across water all day long, touching down only periodically. Yet while interruptions are annoying, Mark’s study also revealed their flip side: they are often crucial to office work…

Focusing on the cost of interruption is one of the better design principles, not just for productivity applications but for all those social software apps clamoring for attention. The answer is not automation, but using the social network as a filter and pushing things down to asynchronous modalities.
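As a sketch of that principle (my own toy, not a description of any shipping product): score each interruption by social proximity, let close ties through now, and batch the rest for asynchronous reading.

```python
# Toy interruption filter, my own illustration of the principle;
# the 25-minute recovery figure comes from the study quoted above.
CLOSE_TIES = {"alice", "bob"}   # the social network doing the filtering

inbox_now, inbox_digest = [], []

def route(sender, message):
    """Interrupt immediately only for close ties; defer everything else."""
    if sender in CLOSE_TIES:
        inbox_now.append((sender, message))      # worth the recovery cost
    else:
        inbox_digest.append((sender, message))   # read in batch, on your schedule

route("alice", "lunch?")
route("stranger", "check out my startup")
print(f"{len(inbox_now)} interruption(s) now, {len(inbox_digest)} deferred")
```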

My 11 minutes are almost up. Really, it's a great read, and for now I'll point you towards Jon Udell.

    Comments (2) + TrackBacks (0) | Category: social software

    Nick Carr's Amorality

    Email This Entry

    Posted by Ross Mayfield

Cast aside the anti-hype rhetoric, keep in mind it is an argument not of fact or policy but of value, and you will find Nicholas Carr's post on the amorality of Web 2.0 has a salient point — that social software is on an inevitable march of disruption. Commoditization wrought by commons-based peer production does enable the triumph of the amateur over the professional. But this does not portend the destruction of mainstream media, only its reformation.

    Yes, the economics favor the bottom-up. This allows the creation of an alternative we have never had before. A choice. But media selection theory holds that old media simply doesn’t die. Carr’s very desire to retain professional media as his selection is one consumer’s proof point.

The underlying economics of MSM must change, and it will, through creative destruction and, unfortunately, the loss of many jobs in the transition. Think of social media as a fork in social software, or a third-party movement in politics. Unfulfilled demand is self-fulfilled by a new grassroots constituency. New and previously unrepresented constituencies are forming fast as the cost of personal publishing and group forming trends toward zero. But the mainstream gradually co-opts these experiments and movements as its own to stay in power. Today MSM is experimenting with social media in areas where the cost structure previously prevented it from accessing the market, such as hyperlocal media. To say that mainstream media will not leverage the tools and co-opt the culture of the amateur smacks of technological determinism.

But this is an argument about values, so it's important to highlight which values need to diffuse from professional to amateur. Dan Gillmor's mission to pass on ethical standards from journalists to citizen media is a case in point. The former audience is about to go through media training on a massive scale, all in all a good thing, but there is much we can do to pass on practices.

    Carr provides a healthy contrarian perspective for the blogosphere. Perhaps by claiming amorality he makes us think, and is advancing our values.

    Where I have to take issue on fact is with his post on Wikipedia. I won’t repeat the dead, tired and defeated arguments on quality, so let’s center on fact:

Now, there's a way around this "collective mediocrity" trap. You can abandon democracy and impose centralized control over the output. That's one of the things that separates open-source software projects from wikis; they incorporate a rigorous quality-control filter to weed out the crap before it pollutes the product. If Wikipedia wants to achieve its goal of being "authoritative," I think it will have to abandon its current structure, admit that "collective intelligence" makes a pretty buzzphrase but a poor organizational model, and define and impose some kind of hierarchical power structure. But that, of course, would raise a whole other dilemma: Is a wiki still a wiki if it isn't a pure democracy? Can some wikipedians be more equal than others?

Open source software and Wikipedia are both driven by commons-based peer production. How they differ, and the reason software development requires rigorous quality control, is that code has dependencies. Writing code is vertical information assembly, while contributions to a wiki are horizontal information assembly. Wikipedia does have quality control and an organizational model, but it isn't a feature embodied in code; it is embodied in the group. I know of no goal of being authoritative, but the group voice that emerges on a page with enough edits (not time) represents a social authority that provides choice for the media literate. Carr could create a Wikipedia page to help define what "pure democracy" is to help him answer his rhetorical question — but a wiki is just a tool, and Wikipedia is an exceptional community using it.

Keep in mind that most wiki use is behind the firewall, where there is an organizational hierarchy and norms in place. There it taps into similar economics, without the great debates on social truth, and to the competitive advantage of firms.

Back to values: when you tap into the renewable resource of people in mass collaboration, allocated against the scarcity of time, driven by social signals — is this not of greater benefit to social and economic welfare than the disruption that created mainstream media in the first place? I'm glad we agree with Carr on the facts of the disruption. If we can get past the misunderstanding that there is a difference in values, maybe we can focus on the right policies to help us in the years to come.

    Comments (9) + TrackBacks (0) | Category: social software

    October 19, 2005

    seattle mind camp, november 5-6

    Email This Entry

    Posted by Liz Lawley

    In the grand tradition of bar camp, web 2.01, and other creative, self-organizing tech events comes Seattle’s first Mind Camp. It will be held from noon on Saturday, November 5th through noon the following day.

    Take a look at the sidebar to see the people already committed to being there—Chris Pirillo & Ponzi Indharasophang, Julie & Ted Leung, Beth Goza & Phil Torrone, Nancy White, Shelly Farnham…

    (did you notice all the cool women on that list? w00t!)

    Registration is open (and free), but the event is capped at 150—so act fast if you’re planning to attend.

    See you there, I hope!

    Comments (3) + TrackBacks (1) | Category: social software

    October 18, 2005

    seattle social computing event - october 19th

    Email This Entry

    Posted by Liz Lawley

    I’ve been planning to post an announcement here about an upcoming event in Seattle, but kept forgetting. (Well, that, and I tend to be reluctant to self-promote, but the organizers kept asking…) As a result, this is rather short notice.

This Wednesday night, I'll be one of the panelists at an MIT Enterprise Forum dinner event titled "Two Degrees of Separation - How Social Network Technology is Connecting Us for Money, Jobs, and Love." It will take place at the Bellevue Hyatt. Doors open at 5:30, and there will be dinner and a chance to network with other attendees before the panel itself.

    I’ll be joined on the podium by Konstantin Guericke, co-founder of LinkedIn, Bill Bryant, CEO of Mobile Operandi, and our moderator Mike Flynn, publisher of the Puget Sound Business Journal.

    You can register online or at the door—the $40 price includes dinner, of course.

    If you’re in the area, it would be lovely to see you there. Be sure to come say “hi”—it’s always nice to meet people who actually read the blog. :)

    Comments (4) + TrackBacks (1) | Category: social software

    October 17, 2005

    Ward Cunningham on the Crucible of Creativity

    Email This Entry

    Posted by Ross Mayfield

    UPDATE: Ward left Microsoft

    Impressionistic transcript from Ward Cunningham’s opening keynote at wikisym

I don't need to explain wiki to this audience. It's so tiny it doesn't need explanation, but you don't understand it until you have been there and done that. It's you and the community that participates that makes it real, which gives me perhaps too much credit. My hope is that wiki becomes a totem for a way of interacting with people. Tradition in the work world has been more top down, while wiki, standing for the Internet, is becoming a model for a new way of work. Largely driven by reduced communication costs, it changes what needs to be done and how it's going to get done. I hope that the wiki nature, if not the wiki code, makes some contribution.

A wiki is a work sustained by a community. Often asked about the difference between wiki and blog. The blogosphere is the magic that happens above blogs — the blogosphere is a community that might produce a work, whereas a wiki is a work that might produce a community. It's all just people communicating.

One's words are a gift to the community. For the wiki nature to take hold, you have to let go of your words. You have to be okay with that. This goes by the name refactoring. To collaborate on a work, one must trust. The reason the cooperation happens is that we are people and it is deep in our nature to do things together. Important to make a distinction: cooperation has a transactional nature, we agree it is a mutual good. Collaboration is deeper; we don't know what the transaction is, or if there is one, but if I give of myself to this collaboration, some good will come out of it. You have to trust somebody to collaborate. With wiki, you have to trust people more than you have any reason to trust them. In 1995 it was a safer environment; I don't know if I could have launched wiki today.

Refactoring makes the work supple. Word borrowed from mathematics: not going to change the meaning of the work, but change it so I can understand it better. Continuous refactoring. Putting a new feature into a program is important, but refactoring so new features can be added in the future is equally important. The ability to do things in the future is something that I consider suppleness, like clay in your hands that accepts your expression. Programs and documents get brittle very quickly. Wiki imagines a more dynamic environment where we accept change, with the aid of a computer to make that less dramatic; it embraces hypertext, which lets a document start small and grow while always being the right size. When there are two ideas in the page, split them into different pages with new names, so a third page can reference both. This is built into the web in some sense; it's just exploited in a wiki. Phenomenal that so much has been done in a tiny text interface, writing an encyclopedia. I have to apologize as a computer scientist that we have to go through that, but it also says how strong the desire is for people to work together. I look forward to the day where we don't have to do it just this way.

I was in favor of anonymity when I started this. Anonymity relieves refactoring friction. I have learned that people want to sign things. But try to write in a way where you don't have to know who said it. When someone who is not in a giving mood uses anonymity (spammers), that abuse can drive us away from anonymity. But I hope we can drive the ill-intended out without having to give up the openness. Can one trust the anonymous? If you think of trust as believing people will behave the way they did before, it seems dependent upon identity, but it may not be important to know whether online behavior is consistent with offline behavior. Knowing what is going to happen when you give something away, though, is significant.

    The web has been an experiment in anonymity. Conscious design of low level protocols. Lots of identity infrastructure has been created to make it an online shopping mall, which makes it unpleasant for all of us because the machinery isn’t that great.

    Result: people can and do trust works produced by people they don’t know. The real world is still trying to figure out how Wikipedia works. A fantastic resource. Open source is produced by people that you can’t track down, but you can trust it in very deep ways. People can trust works by people they don’t know in this low communication cost environment.

Result: the clubby days of the friendly internet are over. Lots of technical questions about how to sustain something we have experienced, now in a more complicated environment.

Opportunity: reputation systems for the creative (non-transactional). Reputation systems are an umbrella term for systems where the computer keeps more track of who you are and tries to make that visible in controlled ways to other people. eBay is an outstanding example, creating a space that didn't exist before. Again, going back to collaboration vs. cooperation: doing this well depends upon excellent collaboration between the scientific community and the practitioners. He hopes this symposium becomes the center of this exchange.

Opportunity: organizational forms supporting creative work. The form we have today is a legacy from GM. Corporations aggregate and deploy capital to make things happen; that was necessary back when communication was more expensive in this country. Top-down hierarchies make communication work when it is expensive. I hope that wiki can be a flagship in this move in the industry to produce computer support for this kind of work and to evolve organizational forms.

Eugene Kim asks about the conflict between anonymity and reputation. Ward calls it an opportunity because it isn't reconciled. The first thing we think of with reputation will be wrong and will have adverse impacts. Do it by watching the impact it has on people in the area of creativity. It doesn't have to be complicated, but be careful with what it reveals. If you walk in…

Richard Gabriel: reputation can be attached to an individual or to something, such as words. Reputation attached to the words can enable anonymity. Ward says: great idea — take notes.

    On moderating change in the original wiki over the past year, and the tools he created for it (the following is probably only of interest to wiki moderators)…

    ...continue reading.

    Comments (5) + TrackBacks (2) | Category: social software

    October 15, 2005

    M2.0M

    Email This Entry

    Posted by Ross Mayfield

    Comments seem to be broken here, so I’m replying to danah’s existential post here.

Wrestling with the same issue, I've found it's difficult to decide what to contribute here, because topics are being commercially exhausted. We went through a period where new companies and products were passed on as news, in between well-thought-out posts. The job of covering social software news started being done by others elsewhere. As we engaged deeper in our own kinds of ventures, this was well appreciated. We also found less that was really new to report. The bar was set pretty high for the well-thought-out pieces, almost introducing a formality for contribution that, in busy times, couldn't be met.

    But with the whole Web 2.0 thing, it may be more important than ever.

What was unique about social software and its design principles was how it emphasized not tools, but practice and an understanding of social context. Too much of Web 2.0 is made not just of white people, but of an alphabet soup of supporting technologies that mean nothing without communities, networks and even real business models. As the market we helped found continues to froth, commentary on new business models based on power laws matters even more.

But the real reason I haven't been contributing as much as I used to is that we forbade MMOGs as a topic, and I've been playing too much World of Warcraft.

    Comments (5) + TrackBacks (0) | Category: social software

    October 13, 2005

    Web 2.0 and Many-To-Many

    Email This Entry

    Posted by danah boyd

    So, when this blog started, it was intended to capture various aspects of social software. The hype has kinda gotten taken over by Web2.0. But what is the relationship between Web2.0 and social software? And what about Many-To-Many?

    Over on my personal blog, i’ve written two long posts on Web2.0 that i think are pretty interesting for those invested in social software:

It's pretty clear that social software has become essential to Web2.0 - social networks, communication, identity production, etc. But how do we discuss social software as something separate from all that? Have we gotten to the point where that concept has escaped us? I look at my co-bloggers here and we're all still doing our thing, and yet, are we all still talking about social software? We're certainly doing a terrible job at blogging, at least here. There's something funny about group blogging around a topic. What about when things change?

    The thing about a personal blog is that it changes with you because you don’t feel so compelled to stick with a topic (much to the chagrin of some readers). I know it sounds like a broken record, but i’m still always at a loss over when to cross-post to M2M. Consider this pair of recent posts:

    These are certainly at the center of Web2.0 and at the center of culture and sociability. But is it about social software? Quite a few folks have asked me to repost these here, but i think it’s weird that i don’t think of it as the core to social software.

    Herein lies the problem with all of this… Our lives have started to escape categories. And topical blogs are categories. Hmmm…

    Comments (6) + TrackBacks (2) | Category: social software

    October 11, 2005

    Intranet Wiki Case Study

    Email This Entry

    Posted by Ross Mayfield

When a bank replaces its Intranet with a wiki, something wonderful is bound to happen. We've been working with Suw Charman to document it, and the first version of the case study is in. It's a great account of the adoption pattern, user experience and mass collaboration.

    Dresdner Kleinwort Wasserstein has adopted Socialtext at a depth and scope well beyond what most businesses have attempted. The following case study points to the near-future of simple collaboration in the enterprise.

One thing that didn't make it into the case study in time is a practice I'm considering myself. The manager of an equity trading group has created an email filter that auto-replies to any team member with instructions to put their message on the wiki. I've had managers tell their team they will only read what is in the wiki before, but this is truly grabbing the bull by the horns.

    Comments (6) + TrackBacks (0) | Category: social software

    This thing on?

    Email This Entry

    Posted by Ross Mayfield

    About all I can offer is that Web 2.0 is made of people, while keeping this blog clean of commercialization.

    But let me share two neat wiki communities with you. Om Malik just put up the Broadband Wiki: We are building a “broadband profile” of the planet. What I would like to do is find contributors who are kind enough to write 250 words about the broadband situation in their country. In the spirit of Loic’s European Blogosphere, the data is coming in fast and furious.

    Also check out the Startup Exchange, a renewable resource for those working with fewer resources. It’s chock-full of links to resources and includes a Startup Kit of wiki templates and best practices. Given the number of Web 2.0 products out there without businesses, it might be a good place to start — over.

    Comments (5) + TrackBacks (0) | Category: social software

    October 4, 2005

    Email 2.0

    Email This Entry

    Posted by Ross Mayfield

    Tim O’Reilly (I’m not worthy! — huh, that kind of rhymes) picks up on my email signature meme:

    This is a first for me, but I expect it will eventually become common. I received an email with the following addition to the signature block:

    this email is: [ ] bloggable [x] ask first [ ] private

    Now that’s a social hack that could one day be replaced by a technical hack. Email messages could have “bloggable” as a mime-type for example, and forwarding to a blog client would set up an entry. Lacking that mime-type, you’d have to resort to cut and paste, as now…
I post this here not for the sake of memetic vanity, but to make a point. The reason we are building Web 2.0 is that we were not able to build Email 2.0. The first web didn't support our social needs, so we used email for everything. But we couldn't really hack it. Most social software has by now adapted to email, but email could never have adapted to it.
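To make the idea concrete, here is a sketch of one shape the technical hack could take (the "X-Bloggable" header name is hypothetical; no such standard exists, which is exactly the gap being pointed at): a custom header that a blog client checks before quoting.

```python
import email

# "X-Bloggable" is a hypothetical header name, invented for illustration;
# no such standard exists.
raw = """\
From: someone@example.com
Subject: a thought on wikis
X-Bloggable: ask-first

Here is the quotable bit.
"""

msg = email.message_from_string(raw)
policy = msg.get("X-Bloggable", "private").lower()
if policy == "bloggable":
    print("safe to quote on the blog")
elif policy == "ask-first":
    print("reply and ask before quoting")
else:
    print("keep it private")
```

Defaulting to "private" when the header is absent mirrors the social hack: permission is opt-in, not assumed.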

    Comments (11) + TrackBacks (0) | Category: social software

    September 24, 2005

    LibraryThing

    Email This Entry

    Posted by David Weinberger

    Timothy Spalding has put together a really interesting site, called LibraryThing, that lets you list your books, tag them, and share the list with others. You can search by bibliographic info, user or tags. And Tim does some useful listing of the top 25 books by author, tags, etc.

    One of the cool things: You enter a book into your list by typing in sloppy information. For example, if you want to enter The Social Construction of What? by Ian Hacking, you can type in “social construction hacking” and LibraryThing will search the Library of Congress and Amazon. Sure enough, it finds the right one. Click and all the bibliographic info, plus the cover graphic, are added to your list.
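A rough sketch of how that kind of sloppy matching can work (my guess at the idea, not LibraryThing's code): rank catalog records by how many of the query's words they contain.

```python
# Toy catalog lookup: a guess at the idea, not LibraryThing's implementation.
catalog = [
    {"title": "The Social Construction of What?", "author": "Ian Hacking"},
    {"title": "Smart Mobs", "author": "Howard Rheingold"},
    {"title": "The Wisdom of Crowds", "author": "James Surowiecki"},
]

def sloppy_lookup(query):
    """Rank records by how many of the query's words appear in title + author."""
    words = query.lower().split()
    def score(record):
        haystack = f"{record['title']} {record['author']}".lower()
        return sum(1 for w in words if w in haystack)
    return max(catalog, key=score)

print(sloppy_lookup("social construction hacking")["title"])
```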

    It’s basically free, although to add more than 200 books to your list, Tim asks for a one-time fee of $10, which seems pretty reasonable to me…especially once Tim adds RSS feeds so we can subscribe to a tag, reader, etc., and discover the new books others are reading.

    Comments (11) + TrackBacks (0) | Category: social software

    September 16, 2005

    Facets + Tags

    Email This Entry

    Posted by David Weinberger

    Siderean has always allowed their customers to embed hierarchical trees within their faceted classification system (example here) when appropriate. E.g., if someone is navigating via the geography category, the system can know that SoHo is in NYC which is in NY state which is in the US. And Siderean has shown an early curiosity about tags: Its fac.etio.us thought-experiment/demo turns del.icio.us bookmarks into a faceted system.

I got briefed by the company a couple of days ago and learned that future releases of their navigation software are going to incorporate tagging more directly, enabling users to annotate/tag the data they find. A faceted system might add just the right amount of organization to a pile of tags, making that pile far more useful. Imagine a folksonomic faceted system…
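To see why the combination is useful, here is a minimal sketch (my illustration of the concept, not Siderean's software): items carry free-form tags plus a geography facet, and the hierarchy lets a broad region match everything beneath it.

```python
# Toy facets-plus-tags navigation; an illustration, not Siderean's software.
GEO_PARENT = {"SoHo": "NYC", "NYC": "NY", "NY": "US"}  # child -> parent

def within(place, region):
    """True if `place` is `region` or sits beneath it in the hierarchy."""
    while place is not None:
        if place == region:
            return True
        place = GEO_PARENT.get(place)
    return False

items = [
    {"name": "gallery", "geo": "SoHo", "tags": {"art", "tagging"}},
    {"name": "diner",   "geo": "NYC",  "tags": {"food"}},
    {"name": "ranch",   "geo": "US",   "tags": {"art"}},
]

def navigate(region=None, tag=None):
    return [i["name"] for i in items
            if (region is None or within(i["geo"], region))
            and (tag is None or tag in i["tags"])]

print(navigate(region="NY", tag="art"))  # ['gallery']: SoHo rolls up into NY
```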

    Comments (31) + TrackBacks (0) | Category: social software

    September 11, 2005

    The Power of Conversation

    Email This Entry

    Posted by Paul B Hartzog

    “I don’t read anymore; I just talk to people who have.” — Dr. Tom Malloy, University of Utah

    Dr. Malloy’s tongue-in-cheek comment sparked an interesting conversation about… well… conversation. When two people have a conversation, they act as proxies for the many ideas in their heads which are drawn from the many things they have read. In effect, a conversation is a many-to-many interaction that is both mediated and moderated by the participants. The individuals catalog, sort, tag, and filter ideas as they are drawn into the shared space of the conversation.

The upshot of this is that the memes, or actual ideas, gain a tremendous advantage in establishing new connections when conversations happen. Similar to Dawkins' principle of the "selfish gene," these "selfish memes" promote their longevity every time humans converse. For memes, the conversation is like sex, an opportunity to mingle, merge, and generate offspring that will outlast them.

    Moreover, the use of the Internet, cell phones, and social software has greatly increased the number of conversations happening at any given moment via chat, newsgroups, discussion forums, and even comment-savvy blogs. Without a doubt, the potential for survival of various memes has skyrocketed as these channels have emerged.

    But the great thing about all this is that conversation gives us an incredible way of processing the world as we move into an age of relentless and omnipresent information. Rather than setting up a really clever RSS reader using technology, just go talk to someone who reads blogs. Rather than spend hours organizing bookmarks, just ask around for what’s useful when you need it.

    I discovered a while back that I could get what I need faster by asking someone else than by looking for it myself — precisely because of the time it takes to process the glut of information now available on any given topic (just hit google sometime and you’ll see what I mean)!

    So, the real value of communicative technologies like social software is that they re-enable and enhance our ability to use a time-tested means of information processing, i.e. the conversation, in new and interesting ways!

    Now stop reading this and go have a conversation with someone. :-)

    Comments (12) + TrackBacks (0) | Category: social software

    September 9, 2005

    Patient Opinion

    Email This Entry

    Posted by Ross Mayfield

    I'm at Our Social World in Cambridge, UK today and taking notes here. But wanted to point out a really interesting Enterprise Social Software project that Headshift launched today:

    Patient Opinion is all about enabling patients to share their experiences of health care, and by doing so help other patients — and perhaps even change the NHS. As well as allowing everyone to see what patients are saying about their services, it also offers a way to feed the experience of patients back to the NHS so that their insights and ideas can be put to good use.

    They leverage structured calls on a new NHS web service for data about health service providers, then let people tag and blog about their experience with them. What a wonderful feedback loop.

    Comments (11) + TrackBacks (0) | Category: social software

    September 6, 2005

    web2.0 and glocalization

    Email This Entry

    Posted by danah boyd

    I just wrote a rather lengthy essay on glocalization and Web2.0 that discusses the socio-technical aspects of Web2.0. Most M2M readers are interested in social software; this essay is important if you are interested in understanding how social software is being taken to the next level, building a broader paradigm. I argue that the key to Web2.0 is not technology but a process of designing with glocalization in mind.

    Because of its length, i have not copied it to M2M.

    Comments (8) + TrackBacks (0) | Category: social software

    September 5, 2005

    Emerging Tech Call for Proposals

    Email This Entry

    Posted by danah boyd

    Each year, O’Reilly hosts the Emerging Technology Conference where geeks gather to discuss the latest innovations in technology. Although a lot of folks don’t realize it, they have an open call for proposals where people can suggest talks and topics that will provide new insights for the tech geek community.

    Conferences are typically word-of-mouth events where people attend because their friends are attending. I would really like to attend E-Tech this year but i really want to be blown away by talks and topics that are not part of the echo chamber. Thus, i have a request for you dear reader. Think about the people that you know and the people that they know. In the comments, suggest people and/or topics that you don’t think will be addressed at E-Tech, things that i don’t know about. Bonus points for the inclusion of innovations that are occurring outside of the US/UK. Also, pass on the CFP to people who you think might not know about it. Please help expand the diversity of this conference by including diverse topics and people. And please, if you’re working on something that fits into emerging technologies, consider submitting a proposal, especially if your voice is not typically heard at the various O’Reilly conferences. The broader the network of people, the more enjoyable the conference.

    Proposals are due September 19!

    Comments (0) + TrackBacks (0) | Category: social software

    September 2, 2005

    Seb Joins Socialtext

    Email This Entry

    Posted by Ross Mayfield

I'm completely stoked to share the news that longtime M2M contributor Seb Paquet has joined Socialtext. I've wanted to bring him on board since we started the company and was pleasantly surprised to find us at the top of the list he put out when he announced on his blog that he was looking for something new.

    Let me use this as an excuse to reintroduce you to Seb. Prior to coming on board, Seb was an Associate Research Officer at the National Research Council of Canada, where he worked on innovative uses of social software, in particular in collaborative learning and knowledge management. Over the past several years, Seb has been contributing insightful articles and talks about those topics in English and French and has been running blogs in both languages. He will help us reach out to new customers and pitch into enhancing the experience and value of our software.

    Yet another great person hired by blog. Welcome aboard, and see you at Wiki Wednesday, Seb!

    Comments (6) + TrackBacks (0) | Category: social software

    August 31, 2005

    RawSugar

    Posted by David Weinberger

    You can think of RawSugar as a searchable del.icio.us with automagic, hierarchical clustering. (Users can also manually create hierarchical tag sets.) So, instead of seeing a long list of links on the left and a long list of tags on the right, at RawSugar you see a list of links on the bottom and your top-level tag categories on the top. The higher level tags are automatically propagated to the lower level ones. So far there is no way for users to publish their tag sets so others can use them.

    I spoke briefly with founder Ofer Ben-Schachar, who told me only that the auto-hierarchy infers relationships among multiple tags an individual gives to a single object and among multiple tags multiple people give to the same object. He says the company has 5 patents.
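
    RawSugar's patented method is not public, so purely as illustration, here is one well-known way a tag hierarchy can be inferred from co-occurrence -- a subsumption heuristic: treat tag A as a parent of tag B when B almost always appears alongside A, while A frequently stands on its own. The data below is made up:

        # Illustrative subsumption heuristic -- not RawSugar's actual algorithm.
        from collections import Counter
        from itertools import combinations

        # Each entry is the set of tags one user gave one object.
        bookmarks = [
            {"python", "programming"},
            {"python", "programming", "tutorial"},
            {"programming", "java"},
        ]

        tag_count = Counter(t for b in bookmarks for t in b)
        pair_count = Counter()
        for b in bookmarks:
            pair_count.update(combinations(sorted(b), 2))

        def subsumes(parent, child, threshold=0.8):
            # parent subsumes child if child nearly always co-occurs with
            # parent, while parent also stands on its own.
            pair = pair_count[tuple(sorted((parent, child)))]
            return (pair / tag_count[child] >= threshold
                    and pair / tag_count[parent] < threshold)

        for p in tag_count:
            for c in tag_count:
                if p != c and subsumes(p, c):
                    print(p, ">", c)   # e.g. "programming > python"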

    The site is new and only has a few thousand users and about 15,000 links. It looks very usable. Now we’ll just have to see if it reaches the critical masses…

    Comments (2) + TrackBacks (0) | Category: social software

    August 24, 2005

    apophenia round-up: posts that slipped through

    Posted by danah boyd

    I’ve been doing a terrible job at posting to M2M because i’m never quite sure what fraction of my posts belong here and what tone is appropriate. I’ve been actively posting to my personal blog apophenia and looking back, i realize that some of what i’ve written this month might be interesting to M2M readers. So here’s a listing round-up:

    If you, dear reader, have an opinion on what you think is appropriate for M2M, i’d love to hear it in the comments because i’m definitely struggling with it. My personal blog gives me freedom to post whatever, but i don’t want to abandon M2M since i know many of you appreciate what we post here.

    Comments (5) + TrackBacks (0) | Category: social software

    August 22, 2005

    Wikiwyg

    Posted by Ross Mayfield

    This weekend we put something cool out into the world. Wikiwyg is a what-you-see-is-what-you-get editor for wikis, or pretty much any other text area on the web. It's open source licensed, available for download and demo. Jeff Jarvis said wikiwyg is "the way wikis are supposed to be."

    Our hope is that this makes the two-way web usable. You can see the genius of Socialtext lead developer Brian Ingerson in something that is almost a bug but might be a feature: double-click anywhere to edit. You will notice it snaps into edit mode instantly, because the editor is already loaded with the page -- reducing, but keeping, the distinction between display and edit mode. You can toggle between wysiwyg and wiki text (more efficient once you know it). Sexy Ajax pixie dust lets you edit without touching the server until you are ready to save. Always remember that Wiki Wiki is Very Quick in Hawaiian.

    Here are some wikis running it:

    * http://wiki.oreillynet.com/foocamp05/
    * http://www.kwiki.org/
    * http://wiki.wikiwyg.net/
    * http://barcamp.org/

    One of the benefits of being based on open source is not only that we can share, but that we can innovate openly. We still have some work to do (IE support, ugh) before it's ready for Socialtext production, and we would appreciate feedback and participation.

    Comments (10) + TrackBacks (0) | Category: social software

    August 16, 2005

    I am 344, hear me roar

    Posted by Ross Mayfield

    Feedster launched the Feedster Top 500, setting a new standard for length -- the first salvo in the size-matters war of microcontent. Go here and bitch about how M2M isn't on the list but my crappy blog is, or, if you have to, contribute something constructive.

    Kidding, but they should be commended for providing an inclusive process for an otherwise exclusive outcome, by both opening the algorithm and being open to feedback on a wiki page. An index is a reflection of a community, and the more inclusive and open the process for its creation, the more we trust it and grant it authority.

    Mary Hodder's latest activist wiki, topicindex, is a Community Algorithm project to open the engine of attention. Given the importance of rankism, it's worth paying attention to. My hope is that this does more than shift the debate from ranks to clouds, and actually gives us the tools to seed our own.

    Comments (3) + TrackBacks (0) | Category: social software

    August 13, 2005

    Does frequency count?

    Posted by David Weinberger

    Pito Salas blogs about a new beta feature of his open source BlogBridge aggregator: A small histogram shows each feed’s frequency of posts.

    Is this useful information? I think so. If I see one of the feeds has been very active, I may be driven to catch up. Of course, there are many feeds I value where the posts are few, and I would worry about a widget that drives people merely to the frequently-updated blogs. On the one hand, this is an aggregator of feeds I’ve chosen, so I already know that I’m going to read, say, Jay Rosen’s feed even if he’s not posting eight times a day. On the other hand, BlogBridge prides itself on its ability to help users discover new feeds, and there the frequency chart may slightly skew people towards the more frenetic blogs.
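
    BlogBridge itself is a Java application, so this is not its code; purely as illustration, here is a minimal Python sketch, on invented data, of how a per-feed posting-frequency meter might be computed and drawn:

        # Sketch: posts-per-week histogram for a feed, rendered as spark bars.
        from datetime import datetime, timedelta

        def weekly_histogram(post_times, weeks=8, now=None):
            now = now or datetime.now()
            counts = [0] * weeks                       # oldest week first
            for t in post_times:
                age = (now - t).days // 7
                if 0 <= age < weeks:
                    counts[weeks - 1 - age] += 1
            peak = max(counts) or 1
            return " ".join("▁▂▃▄▅▆▇█"[round(c / peak * 7)] for c in counts)

        # A frenetic feed versus a sparse-but-valued one:
        now = datetime(2005, 8, 13)
        busy = [now - timedelta(days=d) for d in range(0, 56, 2)]
        calm = [now - timedelta(days=d) for d in (3, 17, 40)]
        print(weekly_histogram(busy, now=now))
        print(weekly_histogram(calm, now=now))

    The interesting design question is exactly the one raised above: the busy feed's bars look "alive" and the sparse feed's look dead, even when the sparse one is the more valuable read.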

    Overall, it looks like a useful meter. I hope Pito lets us turn it off if we want, but I’ll probably leave it on. (Disclosure: I’m an unpaid advisor to BlogBridge.)

    Comments (5) + TrackBacks (0) | Category: social software

    August 12, 2005

    Governance, Scaling and Anonymity in Wikipedia.

    Posted by Ross Mayfield

    I'm sitting in Jimmy Wales' talk at OSAF, as though I am his roadie these days, and am reminded about anonymity in Wikipedia. Anonymity is not something commonly valued in the blog world, where writing is largely a strong expression of identity, but it seems to be an essential attribute within the Wikipedia community. Maybe it's just the difference between people working together and people having conversations. Perhaps it's the initial user experience of being able to edit without logging in, or social bonds strong enough -- and edge cases extreme enough -- to generate widespread support for maintaining anonymity.

    Jimmy describes the basics of Wikipedia, and then gets on his self-acknowledged soapbox. Most social software is designed in a way that makes no sense. Think of a restaurant serving steak: you need knives, but the customers might stab each other, so, no knives. This creates a culture without trust or community. Most software is made too complex by trying to keep people from being bad. Leave things open even when you know people can do bad things. Instead of locking pages, leave a note asking people not to damage them -- an opportunity to build trust. When someone hasn't done any damage in a while -- I know Stewart, for example, has not vandalized this page -- I trust him more.

    ...continue reading.

    Comments (2) + TrackBacks (0) | Category: social software

    August 9, 2005

    Valuing Social Gestures

    Posted by Ross Mayfield

    Mary Hodder offers an open source algorithm for scoring blogs beyond authority:

    We wanted to see these measures used in an algorithm that balanced the weight of each social gesture, put against large data sets to see whether the resulting score or characterization felt right against what we know about blogs as readers and writers. One thing to consider is that some data sets are made up of spidered data (including blogrolls), while others are made up of RSS feed information (some partial and some whole posts, but there are no blogrolls in RSS feeds) and some are a blend. So we would want to adjust the algorithm for different types of data sets.

    So this is my first post thinking about making an open source algorithm...

    The value of the Paris Index approach is three-fold:

    1. Current indexes value blogs without involving blog readers (link ranks) or without involving blog writers (sub ranks). It's like a market where price is set only by sellers or only by buyers.
    2. An open algorithm is akin to a standardized contract for commodity markets. Today the market for AdWords gives the market owner the benefits of information arbitrage, while buyers and sellers have little transparency into market clearing mechanisms.
    3. An open algorithm is akin to an open standard, upon which new services can be built. If this algorithm gave significant weight to 2nd generation links, this could be the Cost Per Influence metric for Sell Side Advertising. (A minimal sketch of such a weighted score follows below.)
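
    To make the idea concrete, here is a minimal sketch of what such an open, weighted gesture score might look like. The gesture names and weights are invented placeholders; the point of an open algorithm is precisely that a community could inspect and tune them:

        # Sketch of an open, weighted social-gesture score (invented weights).
        WEIGHTS = {
            "inbound_links": 1.0,   # writers voting with links
            "blogroll_links": 1.5,  # persistent endorsement
            "comments": 0.5,        # readers engaging
            "subscribers": 2.0,     # readers committing attention
        }

        def score(gestures):
            # gestures: e.g. {"inbound_links": 120, "comments": 300}
            return sum(WEIGHTS[g] * n for g, n in gestures.items() if g in WEIGHTS)

        blogs = {
            "example-a": {"inbound_links": 120, "comments": 300, "subscribers": 10},
            "example-b": {"inbound_links": 40, "blogroll_links": 60, "subscribers": 90},
        }
        for name in sorted(blogs, key=lambda b: score(blogs[b]), reverse=True):
            print(name, score(blogs[name]))

    With the weights out in the open, an argument about the ranking becomes an argument about numbers anyone can change and re-run -- which is the transparency point made in item 2 above.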

    See Also: Seth Goldstein points to Michael Goldhaber's 11 Principles of the New Economy, which directly relates to CPI. Stowe Boyd ruminates on the Paris Index. Shelley Powers on good and evil. danah on the biases of links. Calacanis does his thing. Adina Levin on ranks vs. clouds. There is probably more to see, but after disconnecting for two days I don't have any way to sift through the 1,500 posts in my aggregator to tell what's worth attention.

    Comments (0) + TrackBacks (0) | Category: social software

    August 8, 2005

    the biases of links

    Posted by danah boyd

    I have a hard time respecting anyone who believes that science or technology is neutral. Unfortunately, even when people consciously know that they are not, they give credence to the biased outputs without questioning the underlying assumptions. This is why i’m an academic - nothing gives me greater joy than to think about what biases go into the creation of a particular system.

    After reminding folks at Blogher that there are gender differences in networking habits, i decided to do some investigation into the network structures of blogs. Kevin Marks of Technorati kindly gave me a random sample of 500 blogs to play with. I began coding them by gender (which is surprisingly easy to do given the amount of personal information people post about themselves) and looking for patterns in links and blogrolls.

    I decided to do the same for non-group blogs in the Technorati Top 100. I hadn’t looked at the Top 100 in a while and was floored to realize that most of those blogs are group blogs and/or professional blogs (with “editors” and clear financial backing). Most are covered in advertisements and other things meant to make them money. It’s very clear that their creators have worked hard to reach many eyes (for fame, power or money?).

    Here are some of the patterns that i saw*:

    ...continue reading.

    Comments (29) + TrackBacks (0) | Category: social software

    August 5, 2005

    Jimbo's Problems: A Free Culture Manifesto

    Posted by Ross Mayfield

    I'm in Frankfurt this week for the first Wikipedia conference. Jimmy Wales has been warming up for his Wikimania keynote on Larry Lessig's blog, talking about 10 things that should be free. The idea for this list comes from Hilbert's problems: in 1900, the mathematician David Hilbert posed 23 problems -- 10 announced at a conference, the full list published later -- that proved enormously influential. Jimmy notes that all of these ideas are obvious, having been suggested or proposed by others.

    10 Challenges for the Free Culture Movement

    1. Free the Encyclopedia!

    The mission is to create a free encyclopedia for every person on the planet in their own language. For English and German, this work is done (of course there could be more quality control, etc.). French and Japanese will follow in a year or so; there is a ton of work to be done globally. It will be done in 10 years' time -- an amazing thing when you consider minority languages that have never had an encyclopedia.

    2. Free the Dictionary!

    Not as far along, but picking up speed. A dictionary is only useful when it's full of words you don't know, unlike an encyclopedia. It needs software development, such as WikiData, since a dictionary is structured information, for cross-reference and search.

    3. Free the Curriculum!

    There should be a complete curriculum in every language. This is a much bigger task than the encyclopedia: you need not just one article about the Moon, but one for every grade level. WikiBooks isn't the only project working on this. The price of university textbooks is a real burden for students, and the book market doesn't take advantage of the potential supply of expertise. It's not hard to imagine 500 economics professors writing, instead of one or two, to create a better offering than the traditional model.

    4. Free the Music!

    The most amazing works in history are public domain, but not many public domain recordings exist (even in classical music). Proper scores are often proprietary derivative works (such as arrangements for a modern orchestra). Volunteer orchestras and student orchestras could provide the music for free.

    5. Free the Art!

    He shows two 400-year-old paintings. Wikipedia routinely gets complaints from museums claiming copyright infringement. The National Portrait Gallery of England threatens to sue -- a chilling effect, but they have no grounds. Controlling physical access keeps people from getting high quality images: "I wouldn't encourage you to break the law, but if you accidentally take a photo of these works it would be great to put it on Wikipedia for the public domain."

    6. Free the File Formats!

    Proprietary file formats are worse than proprietary software because they leave you with no ability to switch at a later time. Your data is controlled. If all of your personal documents are in an open file format, then free software could serve you in the future. We need to educate the public on lock-in. There is considerable progress here, and continued European rejection of software patents is critical.

    7. Free the Maps!

    "What could be more public domain than basic information about location on the planet?" -- Stefan Magdalinksi. FreeGIS software, Free GeoData. This will become increasingly important for open competition in mobile data services.

    8. Free the Product Identifiers!

    From the Hobby Princess blog: there is a huge subculture of people making crafts and selling them on eBay, but they need competition among distributors.

    Increasingly, small producers can have a global market. Such producers need global identifiers -- similar to ISBN, not ASIN (which is proprietary to Amazon). He suggests "LTIN: Long Tail Identification Numbers," which would be inexpensive to obtain (they have to have some cost to fend off spam). An extensive database, freely licensed and easily downloadable, would empower multiple rating systems, e-commerce, etc. The alternative is proprietary eBay and Amazon. Small craft producers should be able to get a number and immediately gain distribution across them.

    9. Free the TV Listings!

    It may seem a smaller issue, but development of free software digital PVRs is going on. Free-as-in-beer listings exist, but this is tenuous. Free listings could be used to power many different innovations in this area. Otherwise we will be in a world where everything you watch is DRM'ed -- so this is important.

    10. Free the Communities!

    Wikipedia demonstrates the power of a free community. Consumers of web forum and wiki services should demand a free license; otherwise, the company controls the community. Company-maintained communities have a hold on their members much as a feudal lord has on serfs. Are you a serf living on your master's estate, or free to move? The social compact: communities need Open Data and openly licensed software to be truly free. Wikicities -- for-profit, free communities -- was founded by Jimmy and Angela. Free licensing attracts contributors.

    He will be adding more on Larry Lessig's blog over the coming weeks.

    Notes from the extended Q&A are here.

    Comments (4) + TrackBacks (0) | Category: social software

    August 2, 2005

    Hacking the A-List

    Posted by Ross Mayfield

    Following Liz's read of BlogHer, one of the more interesting points to come out of the conference is the need for constituent algorithms -- ways of revealing hidden groups. For the BlogHer community, the Technorati 100 was more than a whipping boy: it was an index in which a group was under-represented. Mary Hodder's approach, spot on, is to develop alternative indexes.

    No index is all-inclusive and all are biased. This isn't necessarily a bad thing. Each is just a way to view the world and its information. But the interesting part is the sociology of how coders frame the world with each index and how we accept, reject or game the indexes that frame us.

    Think about the politics at play in the US Census, in gerrymandering jurisdictions, or in any list constructed by the mainstream media. Or how we over-react any time someone makes a new blog index that hints at a hierarchy. Suddenly we are thrown back to gold stars, grades, being picked for the kickball team, caste judgments, nationalism, ageism, other isms, cliques, ins and outs. But an index is just one way to view the world. What happens when creating and distributing an index is as democratized as blogging is today?

    Each index is an attempt to institutionalize, and merely publishing it with credentialed claims invites circumspect vigilance. Somehow we treat lists as authorities, further incenting people to create lists to claim authority. Lists are just groupings, or clusters, but as such we treat inclusion seriously. With easy group forming, we also get easy group representation -- so on the whole the scarcity of groups decreases with the right, and the convenience, to fork.

    The other great idea to come out of BlogHer was a list. Mary started a Speaker's Wiki as a simple answer for event organizers who say there aren't enough women speakers. What's great about this idea is that it was implemented on a Sunday morning. Initially, it's an answer, but I think it will raise some questions. The index begins with all women. But will it evolve to reflect the state of the events market, with a male-dominated power law? Or will it shape the curve? As the gender or other balance tips, will it spawn a fork for under-represented constituencies?

    Comments (4) + TrackBacks (0) | Category: social software

    blogher from afar

    Posted by Liz Lawley

    I was very disappointed not to be attending BlogHer, but I’m delighted to see the level of discourse that it has been generating online. That’s an excellent sign of a good conference, and was one of the stated goals of the organizers.

    Among the post-conference posts that caught my eye was Mary Hodder’s discussion of creating a community-based algorithm to address some of the problems and frustration surrounding current blog “ranking” mechanisms (like the Technorati 100):

    After 45 minutes of intense anger and frustration from many audience speakers in the room toward Technorati link counts and top 100, I suggested we create a community based algorithm, based on more complex social relationships than links. It's something I've been working on for a few months, trying to frame what this problem is and how we might solve it. But it's a complex issue and I'm also busy. So it's taken a while. However, my blog post is almost done, and I do plan to put it up in the next day or so.

    I loved Halley Suitt’s comment about the Q&A sessions at the conference:

    During Q&A — and this will shock you too — the people asking questions aren’t standing up to hog the mike and show off for the most part. The people at Blogher who asked questions actually wanted answers, wanted to be educated and were happy to be educated by anyone in the room who could educate them. The speakers deferred to others in the audience who could answer questions better than they could.

    It reminded me of someone once telling me about an academic conference where an unofficial award was regularly given for "best statement phrased in the form of a question." Anyone who goes to tech conferences (or academic conferences) is well aware of this phenomenon, in which someone who believes they know more than the presenters steps up to "ask a question" but instead uses the microphone as a personal soapbox.

    For a visual assessment of how Blogher was different, take a look at TW's "Blogher Vs Gnomedex":

    There was one thing I really wanted to comment on. Look at the pictures on Flickr tagged Gnomedex vs. those tagged for Blogher. These are totally different sorts of pictures. Pictures of PowerPoint projections at Gnomedex. Pictures of women, their FACES, at BlogHer (as opposed to the backs of heads at Gnomedex). It speaks to what women value.

    Particularly gratifying to me is the fact that it’s not just the women who are talking about the conference and its participants. I loved this post from Christopher Carfi, who attended the conference. Here’s an excerpt:

    This problem has deep roots, and a number of them. How did it come to pass that “number of links” became a surrogate for “quality?” It’s a result of a number of factors that lie in the technical underpinnings of how we currently “discover” new things online, namely PageRank and related algorithms. If a lot of people link to something it must be good, right? Well…sort of. The concept of “a link is a vote” is a blunt instrument.

    Read the whole post. It’s good stuff.

    And finally, Evelyn Rodriguez has a great roundup of quotes and highlights from the conference, including this great observation:

    Although Marc’s heart is in the right place, his suggestion that BlogHers create our own list, our own companies and tell the guys to fuck off…is ultimately simply playing the game by the same old (tired, not wired) rules. (Guys aren’t the real issue; it’s the metaphors we unconsciously live by, the worldviews embedded in the games.) Marc’s Implicit Assumption much like August issue of Wired: You only change the world when you are on a list. You only change the world when you are heading a company. Bigger is better. Louder is more impactful. Celebrity matters.

    Go forth and read the posts I’ve linked to, and the posts they link to, and the posts that link to them. Scan the blogher tag in del.icio.us. Don’t just dip your toes into the stream of conversation. Plunge in, and learn. There’s a lot being said that’s worth listening to.

    Comments (3) + TrackBacks (0) | Category: social software

    July 28, 2005

    SmashedTogetherSearches

    Posted by Ross Mayfield

    Ever notice that SmashedTogetherWords, like you find in some wikis, can be queries of a machine-code culture? Try people's names: clayshirky, danahboyd, sebpaquet, lizlawley, davidweinberger and rossmayfield on Google, or the same on Technorati. Try other proper nouns, and even more than nouns, and you discover the emerging culture. Or maybe it's just a byproduct of blunt tagging and usable URLs. Anywho, maybe it's better spaced out, but this is higher quality metadata.

    Comments (1) + TrackBacks (0) | Category: social software

    July 23, 2005

    social networks and drug networks

    Posted by danah boyd

    Rule #1 for studying social culture: pay attention to the sex and drugs.

    When it was reported that Orkut is being used as a drug networking tool in Brazil, my immediate response was duh.

    I have interviewed subjects who distributed cocaine in Baltimore via Friendster. (To my knowledge, they were never caught, which makes it different from the situation with Orkut.) Other subjects have told me ways to find drugs on Tribe.net and MySpace. Obviously, i am not willing to disclose how or who. But this is definitely not unique to Orkut, nor to social networking in general. For example, in college, people used to buy drugs on eBay.

    Give people the ability to distribute information and they will distribute drugs. 'Tis just as obvious as the fact that if you give people access to attractive people, they will date. So, i find it very entertaining that people get up in arms about this.

    Comments (0) + TrackBacks (0) | Category: social software

    July 20, 2005

    The tagging culture war

    Posted by David Weinberger

    Tom Coates does some analysis to illustrate what he suggests is a cultural difference in how people use tags. Some use tags as folders to house objects, others use them as descriptions of objects. (And, it seems to me, many of us do both.) His example: If you tag an URL as “blogs,” you are collecting blogs into a virtual folder. If you tag an URL “blog,” you are describing it as an example of a blog. In the first case, you’re probably putting blogs aside so you can read them. In the second, you may be researching the blog phenomenon. Tom’s research leads him to conjecture that “the folder metaphor is losing ground and the keyword one is currently assuming dominance.”
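
    Tom's actual method may differ, but the flavor of the analysis is easy to reproduce: pair each singular tag with its plural and compare counts, using a deliberately naive add-an-s rule and made-up data:

        # Crude plural-vs-singular tag comparison (naive English-only rule).
        from collections import Counter

        counts = Counter(["blog", "blog", "blogs", "photo", "photos", "photos", "photos"])

        for singular in [t for t in counts if not t.endswith("s")]:
            plural = singular + "s"
            if plural in counts:
                total = counts[singular] + counts[plural]
                print(f"{singular}: {counts[plural]/total:.0%} folder-style (plural), "
                      f"{counts[singular]/total:.0%} descriptive (singular)")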

    I assume this is correlated with tagging for myself versus tagging to add to the social tagstream: I tend to folder for myself and to keyword when contributing to a social tagstream.

    It’s all very confusing. Fortunately, Tom is a good explainer…

    Comments (13) + TrackBacks (0) | Category: social software

    July 19, 2005

    Cinema-On-Demand: Theater as Social Software

    Posted by Paul B Hartzog

    A darkened theatre. A full house. A heroic act. A mighty roar from the crowd. This is the delight of good cinema.

    I love going to the movies with people, even people I don’t know. I love to hear others’ reactions, and discuss the movie with people afterwards. In fact, I love it so much, that when my neighbor shows movies in many languages from all over the world in his backyard on Saturday nights during the summer, I often go down for the movie and end up enjoying the wine, cheese, and conversation more than the images flickering across a bedsheet waving gently in the breeze.

    So, I got to thinking: What if you could rent a theater for a night? Then I read this: “At this year’s Sundance Film Festival in Park City, Utah, filmmaker David LaChapelle screened his new hi-def movie, Rize, by streaming it from Oregon and then transmitting it through a WiMax station in Salt Lake City. It worked flawlessly - soon even theaters won’t have to rely on physical media anymore” (from http://www.wired.com/wired/archive/13.04/start.html?pg=2).

    Improvements in bandwidth and compression will usher in the possibility of streaming movies directly to local theaters.
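
    A back-of-envelope check with assumed (not sourced) numbers suggests the arithmetic is already plausible:

        # Can a WiMax-class link carry a feature film in real time? All
        # numbers below are rough assumptions for illustration only.
        FILM_MINUTES = 110        # typical feature length
        HD_MBPS = 15              # assumed compressed HD stream bitrate
        WIMAX_MBPS = 70           # oft-quoted early WiMax peak throughput

        film_gigabytes = FILM_MINUTES * 60 * HD_MBPS / 8 / 1000
        print(f"stream size ~{film_gigabytes:.0f} GB")             # ~12 GB
        print(f"headroom {WIMAX_MBPS / HD_MBPS:.1f}x real time")   # ~4.7x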

    ...continue reading.

    Comments (4) + TrackBacks (0) | Category: guests | social software

    July 18, 2005

    MySpace -> News Corp.

    Posted by danah boyd

    I’ve been waiting for a mega-media company to buy MySpace and sure enough, it happened. News Corp bought Intermix Media (the half-parent of MySpace). Unlike the other YASNS, the value of MySpace comes from the data on media trends that is the core of what people share on that service. You have millions of American youth identifying with media and expressing their cultural values on the site. Marketers who want to understand the constantly shifting youth trends are often looking for a perch from which to be the ideal voyeur. And with MySpace, they found it. Here, youth are sharing media left right and center and forgetting that they are doing so under the watchful eye of Big Media who are certain to use this to manipulate them. Because youth believe that MySpace is a social tool for them, they are not conscious of how much data they’re giving to marketers about their habits.

    Really, it’s a brilliant move for News Corp. (assuming they can stay out of the courts and that the RIAA is nice to them). I’m just not so certain how good it is for youth culture.

    Comments (23) + TrackBacks (0) | Category: social software

    July 8, 2005

    Tag Spam Enclosure

    Posted by Ross Mayfield

    Steve Rubel points out that Yahoo's Social Search is cluttered with tag spam. Further evidence that Clay's definition of social software may be spot on.

    But take a deeper look. Everyone's Tags are about to be overrun by Nigerian-style spammers -- a likely future for most social bookmarking services. My Community's Tags (2 degrees) are definitively not spam. At least in my little community.
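
    The reason the two-degree view resists spam is simple: a spammer's tags only count once the spammer is within two hops of you. A minimal sketch, with a made-up follow graph:

        # Sketch: keep only tags applied within 2 hops of "me" in the graph.
        from collections import deque

        friends = {
            "me": ["alice", "bob"],
            "alice": ["carol"],
            "bob": ["dave"],
            "carol": ["spammer"],   # 3 hops out -- excluded
        }

        def within_hops(graph, start, max_hops=2):
            seen, queue = {start}, deque([(start, 0)])
            while queue:
                node, dist = queue.popleft()
                if dist == max_hops:
                    continue
                for nxt in graph.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, dist + 1))
            return seen

        community = within_hops(friends, "me")
        tagstream = [("alice", "python"), ("spammer", "v1agra")]
        print([t for t in tagstream if t[0] in community])  # spam filtered out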

    Comments (4) + TrackBacks (0) | Category: social software

    July 4, 2005

    wikiHow to Open Content

    Posted by Ross Mayfield

    wikiHow is one of the more interesting cases of opening a proprietary content and community site. A couple of entrepreneurs bought eHow (editorially produced How To Guides, a dot com showcase) out of hock and appended a wiki to it. Today it may be the second fastest growing public wiki and they recently adopted Creative Commons licensing. The real story is the process of opening an asset, transitioning a community and how to be a net-enabled entrepreneur.

    During the boom, eHow spent $30 million and developed a rich base of How To content, respectable traffic and loyal contributors-as-users. Many of these contributors were experts in their fields and valued how they could contribute content while retaining copyright. Under a questionable business model, eHow filed for bankruptcy in February 2001, but traffic continued at 250k visitors per month. Another now-defunct internet company called IdeaExchange.com purchased eHow, but it also was unable to run the site profitably and began to look for buyers.

    Two entrepreneurs who happened to love the site bought the asset and worked part time to keep it operational. Literally, it is a nights-and-weekends labor of love.

    They leveraged the Internet Archive to find and republish content lost during the bankruptcy, restoring 1,000 articles previously composed by the dot com's professional editors. But noting the parallel with the Nupedia/Wikipedia story, they looked to evolve toward a user-generated content model. One of them happened to be a Socialtext customer for their day job (the first deal I closed via Skype, incidentally), so I've been helping them out informally.

    They adapted the open source MediaWiki to fit the eHow format by bre