Clay’s book makes sense of the way that groups are using the Internet. Really good sense. In a treatise that spans all manner of social activity from vigilantism to terrorism, from Flickr to Howard Dean, from blogs to newspapers, Clay unpicks what has made some “social” Internet media into something utterly transformative, while other attempts have fizzled or fallen to griefers and vandals. Clay picks perfect anecdotes to vividly illustrate his points, then shows the larger truth behind them.
Here Comes Everybody goes beyond wild-eyed webby boosterism and points out what seems to be different about web-based communities and organisation and why it’s different; the good and the bad. With useful and interesting examples, good stories and sticky theories. Very good stuff.
These newly possible activities are moving us towards the collapse of social structures created by technology limitations. Shirky compares this process to how the invention of the printing press impacted scribes. Suddenly, their expertise in reading and writing went from essential to meaningless. Shirky suggests that those associated with controlling the means to media production are headed for a similar fall.
Shirky has a piercingly sharp eye for spotting the illuminating case studies - some familiar, some new - and using them to energise wider themes. His basic thesis is simple: “Everywhere you look groups of people are coming together to share with one another, work together, take some kind of public action.” The difference is that today, unlike even ten years ago, technological change means such groups can form and act in new and powerful ways. Drawing on a wide range of examples Shirky teases out remarkable contrasts with what has been the expected logic, and shows quite how quickly the dynamics of reputation and relationships have changed.
Here Comes Everybody is about why new social tools matter for society. It is a non-techie book for the general reader (the letters TCP/IP appear nowhere in that order). It is also post-utopian (I assume that the coming changes are both good and bad) and written from the point of view I have adopted from my students, namely that the internet is now boring, and the key question is what we are going to do with it.
One of the great frustrations of writing a book as opposed to blogging is seeing a new story that would have been a perfect illustration, or deepened an argument, and not being able to add it. To remedy that, I’ve just launched a new blog, at HereComesEverybody.org, to continue writing about the effects of social tools.
Wow. What a great response — we’ve given out all the copies we can, but many thanks for all the interest. Also, I’ve convinced the good folks at Penguin Press to let me give a few review copies away to people in the kinds of communities the book is about. I’ve got half a dozen copies to give to anyone reading this, with the only quid pro quo being that you blog your reactions to it, good bad or indifferent, some time in the next month or so. Drop me a line if you would like a review copy — email@example.com.
It gives me unquantifiable amounts of joy to announce that the JCMC special theme issue on “Social Network Sites” is now completely birthed. It was a long and intense labor, but all eight newborn articles are doing just fine and the new mommies are as proud as could be. So please, join us in our celebration by heading on over to the Journal for Computer-Mediated Communication and snuggling up to an article or two. The more you love them, the more they’ll prosper!
In June, I wrote a controversial blog essay about how U.S. teens appeared to be self-dividing by class on MySpace and Facebook during the 2006-2007 school year. This piece got me into loads of trouble for all sorts of reasons, forcing me to respond to some of the most intense critiques.
While what I was observing went beyond what could be quantitatively measured, certain aspects of it could be measured. To my absolute delight, Eszter Hargittai (professor at Northwestern) had collected data to measure certain aspects of the divide that I was trying to articulate. Not surprisingly (to me at least), what she was seeing lined up completely with what I was seeing on the ground.
While over 99% of the students had heard of both Facebook and MySpace, 79% use Facebook and 55% use MySpace. The story looks a bit different when you break it down by race/ethnicity and parent education:
While Eszter is not able to measure the other aspects of lifestyle that I was trying to describe as differentiating usage, she is able to show that Facebook and MySpace usage differs by race/ethnicity and parent education. These substitutes for “class” can be contested, but what is important here is that there are genuine differences in usage patterns, even with consistent familiarity. People are segmenting themselves in networked publics and this links to the ways in which they are segmented in everyday life. Hopefully Eszter’s article helps those who can’t read qualitative data understand that what I was observing is real and measurable.
As many of you know, Nicole Ellison and I are guest editing a special issue of JCMC. As a part of this issue, we are writing an introduction that will include a description of social network sites, a brief history of them, a literature review, a description of the works in this issue, and a discussion of future research. We have decided to put a draft of our history section up to solicit feedback from those of you who know this space well. It is a work-in-progress so please bear with us. But if you have suggestions, shout out.
I have never understood Nick Carr’s objections to the cultural effects of the internet. He’s much too smart to lump in with nay-sayers like Keen, and when he talks about the effects of the net on business, he sounds more optimistic, even factoring in the wrenching transition, so why aren’t the cultural effects similar cause for optimism, even accepting the wrenching transition in those domains as well?
I think I finally understood the dichotomy between his reading of business and culture after reading Long Player, his piece on metadata and what he calls “the myth of liberation”, a post spurred in turn by David Weinberger’s Everything Is Miscellaneous.
Carr discusses the ways in which the long-playing album was both conceived of and executed as an aesthetic unit, its length determined by a desire to hold most of the classical canon on a single record, and its possibilities exploited by musicians who created for the form — who created albums, in other words, rather than mere bags of songs. He illustrates this with an exegesis of the Rolling Stones’ Exile on Main Street, showing how the overall construction makes that album itself a work of art.
Carr uses this point to take on what he calls the myth of liberation: “This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world - a world of profound physical and economic constraints - from which the web is now liberating us.” Carr observes, correctly, that the LP was what it was in part for aesthetic reasons, and the album, as a unit, became what it became in the hands of people who knew how to use it.
That is not, however, the neat story Carr wants it to be, and the messiness of the rest of the story is key, I think, to the anxiety about the effects on culture, his and others’.
The LP was an aesthetic unit, but one designed within strong technical constraints. When Edward Wallerstein of Columbia Records was trying to figure out how long the long-playing format should be, he settled on 17 minutes a side as something that would “…enable about 90% of all classical music to be put on two sides of a record.” But why only 90%? Because 100% would be impossible — the rest of the canon was too long for the technology of the day. And why should you have to flip the record in the middle? Why not have it play straight through? Impossible again.
Contra Carr, in other words, the pre-digital world was a world of profound physical and economic constraints. The LP could hold 34 minutes of music, which was a bigger number of minutes than some possibilities (33 possibilities, to be precise), but smaller than an infinite number of others. The album as a form provided modest freedom embedded in serious constraints, and the people who worked well with the form accepted those constraints as a way of getting at those freedoms. And now the constraints are gone; there is no necessary link between an amount of music and its playback vehicle.
And what Carr dislikes, I think, is evidence that the freedoms of the album were only as valuable as they were in the context of the constraints. If Exile on Main Street was as good an idea as he thinks it was, it would survive the removal of those constraints.
And it hasn’t.
Here is the iTunes snapshot of Exile, sorted by popularity:
While we can’t get absolute numbers from this, we can get relative ones — many more people want to listen to Tumbling Dice or Happy than Ventilator Blues or Turd on the Run, even though iTunes makes it cheaper per song to buy the whole album. Even with a financial inducement to preserve the album form, the users still say no thanks.
The only way to support the view that Exile is best listened to as an album, in other words, is to dismiss the actual preferences of most of the people who like the Rolling Stones. Carr sets about this task with gusto:
Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night - or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.
Only a fool. If you are one of those people who has, say, Happy on your iPod (as I do), then you are a fool (though you have lots of company). And of course this foolishness extends to the recording industry, and to the Stones themselves, who went and put Tumbling Dice on a Greatest Hits collection. (One can only imagine how Carr feels about Greatest Hits collections.)
I think Weinberger’s got it right about liberation, even taking at face value the cartoonish version Carr offers. Prior to unlimited perfect copyability, media was defined by profound physical and economic constraints, and now it’s not. Fewer constraints and better matching of supply and demand are good for business, because business is not concerned with historical continuity. Fewer constraints and better matching of supply and demand are bad for current culture, because culture continually mistakes current exigencies for eternal verities.
This isn’t just Carr of course. As people come to realize that freedom destroys old forms just as surely as it creates new ones, the lament for the long-lost present is going up everywhere. As another example, Sven Birkerts, the literary critic, has a post in the Boston Globe, Lost in the blogosphere, that is almost indescribably self-involved. His two complaints are that newspapers are reducing the space allotted to literary criticism, and too many people on the Web are writing about books. In other words, literary criticism, as practiced during Birkerts’ lifetime, was just right, and having either fewer or more writers would be equally lamentable.
In order that the “Life was better when I was younger” flavor of his complaint not become too obvious, Birkerts frames the changing landscape not as a personal annoyance but as A Threat To Culture Itself. As he puts it, “…what we have been calling ‘culture’ at least since the Enlightenment — is the emergent maturity that constrains unbounded freedom in the interest of mattering.”
This is silly. The constraints of print were not a product of “emergent maturity.” They were accidents of physical production. Newspapers published book reviews because their customers read books and because publishers took out ads, the same reason they published pieces about cars or food or vacations. Some newspapers hired critics because they could afford to, others didn’t because they couldn’t. Ordinary citizens didn’t write about books in a global medium because no such medium existed. None of this was an attempt to “constrain unbounded freedom” because there was no such freedom to constrain; it was just how things were back then.
Genres are always created in part by limitations. Albums are as long as they are because Wallerstein picked a length his engineers could deliver. Novels are as long as they are because Aldus Manutius’s italic letters and octavo bookbinding could hold about that many words. The album is already a marginal form, and the novel will probably become one in the next fifty years, but that also happened to the sonnet and the madrigal.
I’m old enough to remember the dwindling world, but it never meant enough to me to make me a nostalgist. In my students’ work I see hints of a culture that takes both the new freedoms and the new constraints for granted, but the fullest expression of that world will probably come after I’m dead. But despite living in transitional times, I’m not willing to pretend that the erosion of my worldview is a crisis for culture itself. It’s just how things are right now.
Carr fails to note that the LP was created for classical music, but used by rock and roll bands. Creators work within whatever constraints exist at the time they are creating, and when the old constraints give way, new forms arise while old ones dwindle. Some work from the older forms will survive — Shakespeare’s 116th sonnet remains a masterwork — while other work will wane — Exile as an album-length experience is a fading memory. This kind of transition isn’t a threat to Culture Itself, or even much of a tragedy, and we should resist attempts to preserve old constraints in order to defend old forms.
One month ago, I put out a blog essay that took on a life of its own. This essay addressed one of America’s most taboo topics: class. Due to personal circumstances, I wasn’t online as things spun further and further out of control and I had neither the time nor the emotional energy to address all of the astounding misinterpretations that I saw as a game of digital telephone took hold. I’ve browsed the hundreds of emails, thousands of blog posts, and thousands of comments across the web. I’m in awe of the amount of time and energy people put into thinking through and critiquing my essay. In the process, I’ve also realized that I was not always so effective at communicating what I wanted to communicate. To clarify some issues, I decided to put together a long response that addresses a variety of different issues.
Tim Spalding at LibraryThing has introduced a new wrinkle in the tagosphere…and wrinkles are welcome because they pucker space in semantically interesting ways. (Block that metaphor!)
At LibraryThing, people list their books. And, of course, we tag ’em up good. For example, Freakonomics has 993 unique tags (ignoring case differences), and 8,760 total tags. Now, tags are of course useful. But so are subject headings. So, Tim has come up with a clever way of deriving subject headings bottom up. He’s introduced “tagmashes,” which are (in essence) searches on two or more tags. So, you could ask to see all the books tagged “france” and “wwii.” But the fact that you’re asking for that particular conjunction of tags indicates that those tags go together, at least in your mind and at least at this moment. LibraryThing turns that tagmash into a page with a persistent URL. The page presents a de-duped list of the results, ordered by interestingness, and with other tagmashes suggested, all based on the magic of statistics. Over time, a large, relatively flat set of subject headings may emerge, which, subject to further analysis, could get clumpier and clumpier with meaning.
You may be asking yourself how this differs from saved searches. I asked Tim. He explained that while the system does a search when you ask for a new tagmash, it presents the tagmash as if it were a topic, not a search. For one thing, lists of search results generally don’t have persistent URLs. More important, to the user, tagmash pages feel like topic pages, not search results pages.
And you may also be asking yourself how this differs from a folksonomy. While I’d want to count it as a folksonomic technique, in a traditional folksonomy (oooh, I hope I’m the first to use that phrase!), a computer can notice which terms are used most often, and might even notice some of the relationships among the terms. With tagmashes, the info that this tag is related to that one is gleaned from the fact that a human said that they were related.
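The mechanics Tim describes can be sketched in a few lines. This is only an illustration, with made-up data and function names (LibraryThing’s actual implementation isn’t public): a tagmash is an intersection query over tag sets, and suggested mashes fall out of simple co-occurrence counts:

```python
from collections import Counter
from itertools import combinations

# Toy catalog: book title -> set of user-applied tags (hypothetical data).
BOOKS = {
    "Suite Francaise":   {"france", "wwii", "fiction"},
    "Is Paris Burning?": {"france", "wwii", "history"},
    "The Longest Day":   {"wwii", "history", "normandy"},
    "A Moveable Feast":  {"france", "memoir"},
    "Citizens":          {"france", "history", "revolution"},
}

def tagmash(*tags):
    """Return the de-duped, sorted list of books carrying every tag in the mash."""
    wanted = set(tags)
    return sorted(title for title, ts in BOOKS.items() if wanted <= ts)

def suggested_mashes(min_count=2):
    """Suggest tag pairs that co-occur on at least `min_count` books."""
    pairs = Counter()
    for ts in BOOKS.values():
        pairs.update(combinations(sorted(ts), 2))
    return [pair for pair, n in pairs.most_common() if n >= min_count]

print(tagmash("france", "wwii"))   # the "france" + "wwii" tagmash page
print(suggested_mashes())          # candidate related tagmashes
```

The point of the sketch is the second function: the “magic of statistics” is just counting which tags humans put on the same books.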
LibraryThing keeps innovating this way. It’s definitely a site to watch.
The cool thing about blogs is that while they may be quiet, and it may be hard to find what you’re looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed. And if you know history, the most important ideas often are the unpopular ones…. That’s what’s important about blogs, not that people can comment on your ideas. As long as they can start their own blog, there will be no shortage of places to comment.
When a blog allows comments right below the writer’s post, what you get is a bunch of interesting ideas, carefully constructed, followed by a long spew of noise, filth, and anonymous rubbish that nobody … nobody … would say out loud if they had to take ownership of their words.
But the uselessness of comments is not the universal truth that Dave or (fixed, per Dave’s comment below) Joel makes it out to be, for two reasons. First, posting and conversation are different kinds of things — same keyboard, same text box, same web page, different modes of expression. Second, the sites that suffer most from anonymous postings and drivel are the ones operating at large scale.
Those three threads contain a hundred or so comments, including some distinctly low-signal bouquets and brickbats. But there is also spirited disputation and emendation, alternate points of view, linky goodness, and a conversational sharpening of the argument on all sides, in a way that doesn’t happen blog to blog. This, I think, is the missing element in Dave and Joel’s points — two blog posts do not make a conversation. The conversation that can be attached to a post is different in style and content, and in intent and effect, than the post itself.
I have long thought that the ‘freedom of speech means no filtering’ argument is dumb where blogs are concerned — it is the blogger’s space, and he or she should feel free to delete, disemvowel, or otherwise dispose of material, for any reason, or no reason. But we’ve long since passed the point where what happens on a blog is mainly influenced by what the software does — the question to ask about comments is not whether they are available, but how a community uses them. The value of blogs as communities of practice is considerable, and it’s a mistake to write off comment threads on those kinds of blogs just because, in other environments, comments are lame.
There are assertions of verifiable fact and then there are invocations of shared values. Don’t mix them up.
I meant this as an assertion of fact, but re-reading it after Tom’s feedback, it comes off as simple flag-waving, since I’d compressed the technical part of the argument out of existence. So here it is again, in slightly longer form:
The internet’s essential operation is to encode and transmit data from sender to receiver. In 1969, this was not a new capability; we’d had networks that did this since the telegraph, and by the day of the internet’s launch we had a phone network that was nearly a hundred years old, alongside more specialized networks for things like telexes and wire-services for photographs.
Thus the basics of what the internet did (and does) aren’t enough to explain its spread; what it is for has to be accounted for by looking at the difference between it and the other data-transfer networks of the day.
The principal difference between older networks and the internet (ARPAnet, at its birth) is the end-to-end principle, which says, roughly, “The best way to design a network is to allow the sender and receiver to decide what the data means, without asking the intervening network to interpret the data.” The original expression of this idea is from the Saltzer, Reed, and Clark paper End-to-End Arguments in System Design; the same argument is explained in other terms in Isenberg’s Stupid Network and Searls and Weinberger’s World of Ends.
What the internet is for, in other words, what made it worth adopting in a world already well provisioned with other networks, was that the sender and receiver didn’t have to ask for either help or permission before inventing a new kind of message. The core virtue of the internet was a huge increase in the technical freedom of all of its participating nodes, a freedom that has been translated into productive and intellectual freedoms for its users.
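A minimal sketch makes the principle concrete (mine, not from the paper; the socket pair stands in for the network). The transport only moves bytes; the endpoints alone agree that messages are newline-delimited JSON, so inventing a new kind of message requires no change to — and no permission from — the network:

```python
import json
import socket

# Two endpoints joined by a dumb byte channel; socketpair() plays the network.
a, b = socket.socketpair()

def send(sock, kind, payload):
    # The format lives entirely at the endpoints: one JSON object per line.
    # A new message type is just a new "kind" value -- the network never knows.
    sock.sendall((json.dumps({"kind": kind, "payload": payload}) + "\n").encode())

def recv(sock):
    # Read bytes until the endpoint-defined delimiter, then interpret them.
    data = b""
    while not data.endswith(b"\n"):
        data += sock.recv(1024)
    return json.loads(data)

send(a, "greeting", "hello")
msg = recv(b)
print(msg["kind"], msg["payload"])
a.close(); b.close()
```

The design choice to notice: all the intelligence sits in `send` and `recv` at the edges, while the channel between them does nothing but carry bytes.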
As Scott Bradner put it, the Internet means you don’t have to convince anyone else that something is a good idea before trying it. The upshot is that the internet’s output is data, but its product is freedom.
Last week, while I was in a conversation with Andrew Keen on the radio show To The Point, he suggested that he was not opposed to the technology of the internet, but rather to how it was being used.
This reminded me of Michael Gorman’s insistence that digital tools are fine, so long as they are shaped to replicate the social (and particularly academic) institutions that have grown up around paper.
There is a similar strand in these two arguments, namely that technology is one thing, but the way it is used is another, and that the two can and should be separated. I think this view is in the main wrong, even Luddite, but to make such an accusation requires a definition of Luddite considerably more grounded than ‘anti-technology’ (a vacuous notion — no one who wears shoes can reasonably be called ‘anti-technology.’) Both Keen and Gorman have said they are not opposed to digital technology. I believe them when they say this, but I still think their views are Luddite, by historical analogy with the real Luddite movement of the early 1800s.
What follows is a long detour into the Luddite rebellion, followed by a reply to Keen about the inseparability of the internet from its basic effects.
The historical record is relatively clear. In March of 1811, a group of weavers in Nottinghamshire began destroying mechanical looms. This was not the first such riot — in the late 1700s, when Parliament refused to guarantee the weavers’ control of the supply of woven goods, workers in Nottingham destroyed looms as well. The Luddite rebellion, though, was unusual for several reasons: its breadth and sustained character, taking place in many industrializing towns at once; its having a nominal leader, going by the name Ned Ludd, General Ludd, or King Ludd (the pseudonym itself a reference to an apocryphal figure from an earlier loom-breaking riot in the late 1700s); and its written documentation of grievances and rationale. The rebellion, which lasted two years, was ultimately put down by force, and was over in 1813.
Over the last two decades, several historians have re-examined the record of the Luddite movement, and have attempted to replace the simplistic view of Luddites as being opposed to technological change with a more nuanced accounting of their motivations and actions. The common thread of the analysis is that the Luddites didn’t object to mechanized wide-frame looms per se, they objected to the price collapse of woven goods caused by the way industrialists were using the looms. Though the targets of the Luddite attacks were the looms themselves, their concerns and goals were not about technology but about economics.
I believe that the nuanced view is wrong, and that the simpler view of Luddites as counter-revolutionaries is in fact the correct one. The romantic view of Luddites as industrial-age Robin Hoods, concerned not to halt progress but to embrace justice, runs aground on both the written record, in which the Luddites outline a program that is against any technology that increases productivity, and on their actions, which were not anti-capitalist but anti-consumer. It also assumes that there was some coherent distinction between technological and economic effects of the looms; there was none.
A Technology is For Whatever Happens When You Use It
The idea that the Luddites were targeting economic rather than technological change is a category fallacy, where the use of two discrete labels (technology and economics, in this case) is wrongly thought to demonstrate two discrete aspects of the thing labeled (here wide-frame looms). That separation does not exist here; the technological effects of the looms were economic. This is because, at the moment of its arrival, what a technology does and what it is for are different.
What any given technology does is fairly obvious: rifles fire bullets, pencils make marks, looms weave cloth, and so on. What a technology is for, on the other hand, what leads people to adopt it, is whatever new thing becomes possible on the day of its arrival. The Winchester repeating rifle was not for firing bullets — that capability already existed. It was for decreasing the wait between bullets. Similarly, pencils were not for writing but for portability, and so on.
And the wide-frame looms, target of the Luddites’ destructive forays? What were they for? They weren’t for making cloth — humankind was making cloth long before looms arrived. They weren’t for making better cloth — in 1811, industrial cloth was inferior to cloth spun by the weavers. Mechanical looms were for making cheap cloth, lots and lots of cheap cloth. The output of a mechanical loom was cloth, but the product of such a loom was savings.
The wide-frame loom was a cost-lowering machine, and as such, it threatened the old inefficiencies on which the Luddites’ revenues depended. Their revolt had the goal of preventing those savings from being passed along to the customer. One of their demands was that Parliament outlaw “all Machinery hurtful to Commonality” — all machines that worked efficiently enough to lower prices.
Perhaps more tellingly, and against recent fables of Luddism as a principled anti-capitalist movement, they refrained from breaking the looms of industrial weavers who didn’t lower their prices. What the Luddites were rioting in favor of was price gouging; they didn’t care how much a wide-frame loom might save in production costs, so long as none of those savings were passed on to their fellow citizens.
Their common cause was not with citizens and against industrialists, it was against citizens and with those industrialists who joined them in a cartel. The effect of their campaign, had it succeeded, would have been to raise, rather than lower, the profits of the wide-frame operators, while producing no benefit for those consumers who used cloth in their daily lives, which is to say the entire population of England. (Tellingly, none of the “Robin Hood” versions of Luddite history make any mention of the effect of high prices on the buyers of cloth, just on the sellers.)
Back to Keen
A Luddite argument is one in which some broadly useful technology is opposed on the grounds that it will discomfit the people who benefit from the inefficiency the technology destroys. An argument is especially Luddite if the discomfort of the newly challenged professionals is presented as a general social crisis, rather than as trouble for a special interest. (“How will we know what to listen to without record store clerks!”) When the music industry suggests that the prices of music should continue to be inflated, to preserve the industry as we have known it, that is a Luddite argument, as is the suggestion that Google pay reparations to newspapers or the phone company’s opposition to VoIP undermining their ability to profit from older ways of making phone calls.
This is what makes Keen’s argument a Luddite one — he doesn’t oppose all uses of technology, just ones that destroy older ways of doing things. In his view, the internet does not need to undermine the primacy of the copy as the anchor for both filtering and profitability.
But Keen is wrong. What the internet does is move data from point A to B, but what it is for is empowerment. Using the internet without putting new capabilities into the hands of its users (who are, by definition, amateurs in most things they can now do) would be like using a mechanical loom and not lowering the cost of buying a coat — possible, but utterly beside the point.
The internet’s output is data, but its product is freedom, lots and lots of freedom. Freedom of speech, freedom of the press, freedom of association, the freedom of an unprecedented number of people to say absolutely anything they like at any time, with the reasonable expectation that those utterances will be globally available, broadly discoverable at no cost, and preserved for far longer than most utterances are, and possibly forever.
Keen is right in understanding that this massive supply-side shock to freedom will destabilize and in some cases destroy a number of older social institutions. He is wrong in believing that there is some third way — let’s deploy the internet, but not use it to increase the freedom of amateurs to do as they like.
It is possible to want a society in which new technology doesn’t demolish traditional ways of doing things. It is not possible to hold this view without being a Luddite, however. That view — incumbents should wield veto-power over adoption of tools they dislike, no matter the positive effects for the citizenry — is the core of Luddism, then and now.
Over at the Britannica Blog, Michael Gorman (the former president of the American Library Association) wrote a series of posts concerning web2.0. In short, he’s against it and thinks everything to do with web2.0 and Wikipedia is bad bad bad. A handful of us were given access to the posts before they were posted and asked to craft responses. The respondents are scholars and thinkers and writers of all stripes (including my dear friend and fellow M2M blogger Clay Shirky). Because I addressed all of his arguments at once, my piece was held for release in the final week of the public discussion. And that time is now. So enjoy!
Over the last six months, I’ve noticed an increasing number of press articles about how high school teens are leaving MySpace for Facebook. That’s only partially true. There is indeed a change taking place, but it’s not a shift so much as a fragmentation. Until recently, American teenagers were flocking to MySpace. The picture is now being blurred. Some teens are flocking to MySpace. And some teens are flocking to Facebook. Which teens go where gets kinda sticky, because it seems to primarily have to do with socio-economic class.
I’ve been trying to figure out how to articulate this division for months. I have not yet succeeded. So, instead, I decided to write a blog essay addressing what I’m seeing. I suspect that this will be received with criticism, but my hope is that the readers who encounter this essay might be able to help me think through this. In other words, I want feedback on this piece.
What I lay out in this essay is rather disconcerting. Hegemonic American teens (i.e. middle/upper class, college-bound teens from upwardly mobile or well-off families) are all on or switching to Facebook. Marginalized teens, teens from poorer or less educated backgrounds, subculturally-identified teens, and other non-hegemonic teens continue to be drawn to MySpace. A class division has emerged and it is playing out in the aesthetics, the kinds of advertising, and the policy decisions being made.
Gorman’s Siren Song of the Internet contains a curious omission and a basic misunderstanding. The omission is part of his defense of the Luddites; the misunderstanding is about the value of paper and the nature of e-books.
The omission comes early: Gorman cavils at being called a Luddite, though he then embraces the label, suggesting that they “…had legitimate grievances and that their lives were adversely affected by the mechanization that led to the Industrial Revolution.” No one using the term Luddite disputes the effects on pre-industrial weavers. This is the general case — any technology that fixes a problem (in this case the high cost of homespun goods) threatens the people who profit from the previous inefficiency. However, Gorman omits mentioning the Luddite response: an attempt to halt the spread of mechanical looms which, though beneficial to the general populace, threatened the livelihoods of King Ludd’s band.
By labeling the Luddite program legitimate, Gorman seems to be suggesting that incumbents are right to expect veto power over technological change. Here his stand in favor of printed matter is inconsistent, since printing was itself enormously disruptive, and many people wanted veto power over its spread as well. Indeed, one of the great Luddites of history (if we can apply the label anachronistically) was Johannes Trithemius, who argued in the late 1400s that the printing revolution be contained, in order to shield scribes from adverse effects. This is the same argument Gorman is making, in defense of the very tools Trithemius opposed. His attempt to rescue Luddism looks less like a principled stand than special pleading: the printing press was good, no matter what happened to the scribes, but let’s not let that sort of thing happen to my tribe.
Gorman then defends traditional publishing methods, and ends up conflating several separate concepts into one false conclusion, saying “To think that digitization is the answer to all that ails the world is to ignore the uncomfortable fact that most people, young and old, prefer to interact with recorded knowledge and literature in the form of print on paper.”
Dispensing with the obvious straw man of “all that ails the world”, a claim no one has made, we are presented with a fact that is supposed to be uncomfortable — it’s good to read on paper. Well duh, as the kids say; there’s nothing uncomfortable about that. Paper is obviously superior to the screen for both contrast and resolution; Hewlett-Packard would be about half the size it is today if that were not true. But how did we get to talking about paper when we were talking about knowledge a moment ago?
Gorman is relying on metonymy. When he notes a preference for reading on paper he means a preference for traditional printed forms such as books and journals, but this is simply wrong. The uncomfortable fact is that the advantages of paper have become decoupled from the advantages of publishing; a big part of preference for reading on paper is expressed by hitting the print button. As we know from Lyman and Varian’s “How Much Information” study, “…the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.”
We see these effects everywhere: well over 90% of new information produced in any year is stored electronically. Use of the physical holdings of libraries is falling, while the use of electronic resources is rising. Scholarly monographs, contra Gorman, are increasingly distributed electronically. Even the physical form of newspapers is shrinking in response to shrinking demand, and so on.
The belief that a preference for paper leads to a preference for traditional publishing is a simple misunderstanding, demonstrated by his introduction of the failed e-book program as evidence that the current revolution is limited to “hobbyists and premature adopters.” The problem with e-books is that they are not radical enough: they dispense with the best aspect of books (paper as a display medium) while simultaneously aiming to disable the best aspects of electronic data (sharability, copyability, searchability, editability.) The failure of e-books is in fact bad news for Gorman’s thesis, as it demonstrates yet again that users have an overwhelming preference for the full range of digital advantages, and are not content with digital tools that are designed to be inefficient in the ways that printed matter is inefficient.
If we gathered every bit of output from traditional publishers, we could line them up in order of vulnerability to digital evanescence. Reference works were the first to go — phone books, dictionaries, and thesauri have largely gone digital; the encyclopedia is going, as are scholarly journals. Last to go will be novels — it will be some time before anyone reads One Hundred Years of Solitude in any format other than a traditionally printed book. Some time, however, is not forever. The old institutions, and especially publishers and libraries, have been forced to use paper not just for display, for which it is well suited, but also for storage, transport, and categorization, things for which paper is completely terrible. We are now able to recover from those disadvantages, though only by transforming the institutions organized around the older assumptions.
The ideal situation, which we are groping our way towards, will be to have all written material, wherever it lies on the ‘information to knowledge’ continuum, in digital form, right up to the moment a reader wants it. At that point, the advantages of paper can be made manifest, either by printing on demand, or by using a display that matches paper’s superior readability. Many of the traditional managers of books and journals will suffer from this change, though it will benefit society as a whole. The question Gorman pointedly asks, by invoking Ned Ludd and his company, is whether we want that change to be in the hands of people who would be happy to discomfit society as a whole in order to preserve the inefficiencies that have defined their world.
I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that newspapers are where you get your political news and how you look for a job. I know that if you need to take a trip, you visit a travel agent. In the last 15 years or so, I’ve had to unlearn those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.
It is incredibly hard to think of new paradigms when you’ve grown up reading the newspaper every morning. When you turn to TV for your entertainment. When you read magazines on the train home from work. But we have a generation coming of age right now that has never relied on newspapers, TV, and magazines for their information and entertainment.[…] The Internet is their medium and they are showing us how it needs to be used.
This is exactly right.
I think the real issue, of which age is a predictor, is this: the future belongs to those who take the present for granted. I had this thought while talking to Robert Cook of Metaweb, who are making Freebase. They need structured metadata, lots of structured metadata, and one of the places they are getting it is from Wikipedia, by spidering the bio boxes (among other things) for things like birthplace and age of the people listed. While Andrew Keen is trying to get a conversation going on whether Wikipedia is a good idea, Metaweb takes it for granted as a stable part of the environment, which lets them see past this hurdle to the next one.
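That kind of bio-box spidering is easy to picture in miniature. The sketch below is a hypothetical illustration, not Metaweb’s actual pipeline: it pulls key/value fields out of a toy infobox in Wikipedia-style wikitext (real infobox markup is far messier, with nested templates and references).

```python
import re

def parse_bio_box(wikitext):
    """Extract key/value fields from a Wikipedia-style infobox.

    A toy sketch: each field line looks like '| key = value'.
    """
    fields = {}
    for match in re.finditer(r"^\|\s*(\w+)\s*=\s*(.+?)\s*$",
                             wikitext, re.MULTILINE):
        fields[match.group(1)] = match.group(2)
    return fields

# A minimal, made-up bio box of the sort a spider might encounter.
sample = """{{Infobox person
| name        = Francisco Goya
| birth_place = Fuendetodos, Spain
| birth_date  = 30 March 1746
}}"""

print(parse_bio_box(sample)["birth_place"])  # Fuendetodos, Spain
```

The structured record falls out of markup that was written for human display, which is exactly the move that treating Wikipedia as a stable part of the environment makes possible.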
This is not to handicap the success of Freebase itself — it takes a lot more than taking the present for granted to make a successful tool. But one easy way to fail is to assume that the past is more solid than it is, and the present more contingent. And the people least likely to make this mistake — the people best able to take the present for granted — are young people, for whom knowing what the world is really like is as easy as waking up in the morning, since this is the only world they’ve ever known.
Some things improve with age — I wouldn’t re-live my 20s if you paid me — but high-leverage ignorance isn’t one of them.
Encyclopedia Britannica has started a Web 2.0 Forum, where they are hosting a conversation going on around a set of posts by Michael Gorman. The first post, in two parts, is titled Web 2.0: The Sleep of Reason Brings Forth Monsters, and is a defense of the print culture against alteration by digital technologies. This is my response, which will be going up on the Britannica site later this week.
Web 2.0: The Sleep of Reason Brings Forth Monsters starts with a broad list of complaints against the current culture, from biblical literalism to interest in alternatives to Western medicine.
The life of the mind in the age of Web 2.0 suffers, in many ways, from an increase in credulity and an associated flight from expertise. Bloggers are called “citizen journalists”; alternatives to Western medicine are increasingly popular, though we can thank our stars there is no discernable “citizen surgeon” movement; millions of Americans are believers in Biblical inerrancy—the belief that every word in the Bible is both true and the literal word of God, something that, among other things, pits faith against carbon dating; and, scientific truths on such matters as medical research, accepted by all mainstream scientists, are rejected by substantial numbers of citizens and many in politics. Cartoonist Garry Trudeau’s Dr. Nathan Null, “a White House Situational Science Adviser,” tells us that: “Situational science is about respecting both sides of a scientific argument, not just the one supported by facts.”
This is meant to set the argument against a big canvas of social change, but the list is so at odds with the historical record as to be self-defeating.
The percentage of the US population believing in the literal truth of the Bible has remained relatively constant since the 1980s, while the percentage listing themselves as having “no religion” has grown. Interest in alternative medicine dates to at least the patent medicines of the 19th century; the biggest recent boost for that movement came under Reagan, when health supplements, soi-disant, were exempted from FDA scrutiny. Trudeau’s welcome critique of the White House’s assault on reason targets a political minority, not the internet-using population, and so on. If you didn’t know that this litany appeared under the heading Web 2.0, you might suspect Gorman’s target was anti-intellectualism during Republican administrations.
Even the part of the list specific to new technology gets it wrong. Bloggers aren’t called citizen-journalists; bloggers are called bloggers. Citizen-journalist describes people like Alisara Chirapongse, the Thai student who posted photos and observations of the recent coup during a press blackout. If Gorman can think of a better label for times when citizens operate as journalists, he hasn’t shared it with us.
Similarly, lumping Biblical literalism with Web 2.0 misses the mark. Many of the most active social media sites — Slashdot, Digg, Reddit — are rallying points for those committed to scientific truth. Wikipedia users have so successfully defended articles on Evolution, Creationism and so on from the introduction of counter-factual beliefs that frustrated literalists helped found Conservapedia, whose entry on Evolution is a farrago of anti-scientific nonsense.
But wait — if use of social media is bad, and attacks on the scientific method are bad, what are we to make of social media sites that defend the scientific method? Surely Wikipedia is better than Conservapedia on that score, no? Well, it all gets confusing when you start looking at the details, but Gorman is not interested in the details. His grand theory, of the hell-in-a-handbasket variety, avoids any look at specific instantiations of these tools — how do the social models of Digg and Wikipedia differ? does Huffington Post do better or worse than Instapundit on factual accuracy? — in favor of one sweeping theme: defense of incumbent stewards of knowledge against attenuation of their erstwhile roles.
There are two alternate theories of technology on display in Sleep of Reason. The first is that technology is an empty vessel, into which social norms may be poured. This is the theory behind statements like “The difference is not, emphatically not, in the communication technology involved.” (Emphasis his.) The second theory is that intellectual revolutions are shaped in part by the tools that sustain them. This is the theory behind his observation that the virtues of print were “…often absent in the manuscript age that preceded print.”
These two theories cannot both be true, so it’s odd to find them side by side, but Gorman does not seem to be comfortable with either of them as a general case. This leads to a certain schizophrenic quality to the writing. We’re told that print does not necessarily bestow authenticity and that some digital material does, but we’re also told that he consulted “authoritative printed sources” on Goya. If authenticity is an option for both printed and digital material, why does printedness matter? Would the same words on the screen be less scholarly somehow?
Gorman is adopting a historically contingent view: Revolution then was good, revolution now is bad. As a result, according to Gorman, the shift to digital and networked reproduction of information will fail unless it recapitulates the institutions and habits that have grown up around print.
Gorman’s theory about print — its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.
Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect — the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.
This improved ability to find both content and people is one of the core virtues of our age. Gorman insists that he was able to find “…the recorded knowledge and information I wanted [about Goya] in seconds.” This is obviously an impossibility for most of the population; if you wanted detailed printed information on Goya and worked in any environment other than a library, it would take you hours at least. This scholar’s-eye view is the key to Gorman’s lament: so long as scholars are content with their culture, the inability of most people to enjoy similar access is not even a consideration.
Wikipedia is the best known example of improved findability of knowledge. Gorman is correct that an encyclopedia is not the product of a collective mind; this is as true of Wikipedia as of Britannica. Gorman’s unfamiliarity with, and even distaste for, Wikipedia leads him to mistake the dumbest utterances of its most credulous observers for an authentic accounting of its mechanisms; people pushing arguments about digital collectivism, pro or con, know nothing about how Wikipedia actually works. Wikipedia is the product not of collectivism but of unending argumentation; the corpus grows not from harmonious thought but from constant scrutiny and emendation.
The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise ten-fold or a hundred-fold.
Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.
If Gorman were looking at Web 2.0 and wondering how print culture could aspire to that level of accessibility, he would be doing something to bridge the gap he laments. Instead, he insists that the historical mediators of access “…promote intellectual development by exercising judgment and expertise to make the task of the seeker of knowledge easier.” This is the argument Catholic priests made to the operators of printing presses against publishing translations of the Bible — the laity shouldn’t have direct access to the source material, because they won’t understand it properly without us. Gorman offers no hint as to why direct access was an improvement when created by the printing press then but a degradation when created by the computer. Despite the high-minded tone, Gorman’s ultimate sentiment is no different from that of everyone from music executives to newspaper publishers: Old revolutions good, new revolutions bad.
In my last post, i shared my case study response to the Harvard Business Review Case Study “We Googled You.” Since then, thanks to a kind reader (tx Andy Blanco), i learned that HBR made this case study the First Interactive Case Study. This means that you can read the case (without the respondents’ responses) and submit your own response.
You are still more than welcome to read my response, but i’d be super duper stoked to read your response as well. I found this exercise mentally invigorating and suspect you might as well. HBR wants you to submit your response to them, but i’d also be stoked if you’d be willing to share it with us.
Feel free to add your response to the comments on Apophenia or write your response on your own blog and add a link to the comments. Either way, i’d really love to hear how you would handle this scenario in your own business practices.
(Note: the reason that i use comments on Apophenia is because they notify me… i don’t get notified here and i find it easier to keep the conversation in one place.)
I have recently uploaded a bunch of talk cribs, a new book essay, and a case commentary for your enjoyment.
Harvard Business Review Case Commentary
The Harvard Business Review has a section called “Case Commentary” where they propose a fictional but realistic scenario and invite different prominent folks to respond. I was given the great honor of being invited to respond to a case entitled “We Googled You.”
In Diane Coutu’s hypothetical scenario, Fred is trying to decide whether or not to hire Mimi after one of Fred’s co-workers googles Mimi and finds newspaper clippings about Mimi protesting Chinese policies. [The case study is 2 pages - this is a very brief synopsis.] Given the scenario, we were then asked, “should Fred hire Mimi despite her online history?”
Unfortunately, Harvard Business Review does not make their issues available for free download (although they are available at the library and the case can be purchased for $6) but i acquired permission to publish my commentary online for your enjoyment. It’s a little odd taken out of context, but i still figured some folks might enjoy my view on this matter, especially given that the press keep asking me about this exact topic.
At the Cannes Film Festival’s Opening Forum on “Cinema: The Audiences of Tomorrow,” i gave a keynote about youth, DRM, remix, film, MySpace, YouTube, and other such good things. Check out: “Film and the Audience of Tomorrow”
A month or so ago, Micah Sifry offered me a chance to respond to Andrew Keen, author of the forthcoming Cult of the Amateur, at a panel at last week’s Personal Democracy Forum (PdF). The book is a polemic against the current expansion of freedom of speech, freedom of the press, and freedom of association. Also on the panel were Craig Newmark and Robert Scoble, so I was in good company; my role would, I thought, be easy — be pro-amateur production, pro-distributed creation, pro-collective action, and so on, things that come naturally to me.
What I did not expect was what happened — I ended up defending Keen, and key points from Cult of the Amateur, against a panel of my peers.
I won’t review CotA here, except to say that the book is going to get a harsh reception from the blogosphere. It is, as Keen himself says, largely anecdotal, which makes it more a list of ‘bad things that have happened where the internet is somewhere in the story’ than an account of cause and effect; as a result, internet gambling and click fraud are lumped together with the problems with DRM and epistemological questions about peer-produced material. In addition to this structural weakness, it is both aggressive enough and reckless enough to make people spitting mad. Dan Gillmor was furious about the inaccuracies, including his erroneous (and since corrected) description in the book, Yochai Benkler asked me why I was even deigning to engage Andrew in conversation, and so on. I don’t think I talked to anyone who wasn’t dismissive of the work.
But even if we stipulate that the book doesn’t do much to separate cause from effect, and has the problems of presentation that often accompany polemic, the core point remains: Keen’s sub-title, “How today’s internet is destroying our culture”, has more than a grain of truth to it, and the only thing those of us who care about the network could do wrong would be to dismiss Keen out of hand.
Which is exactly what people were gearing up to do last week. Because Keen is a master of the dismissive phrase — bloggers are monkeys, only people who get paid do good work, and so on — he will engender a reaction from our side that assumes that everything he says in the book is therefore wrong. This is a bad (but probably inevitable) reaction, but I want to do my bit to try to stave it off, both because fairness dictates it — Keen is at least in part right, and we need to admit that — and because a book-burning accompanied by a hanging-in-effigy will be fun for us, but will weaken the pro-freedom position, not strengthen it.
The panel at PdF started with Andrew speaking, in some generality, about ways in which amateurs were discomfiting people who actually know what they are doing, while producing sub-standard work on their own.
My response started by acknowledging that many of the negative effects Keen talked about were real, but that the source of these effects was an increase in the freedom of people to say what they want, when they want to, on a global stage; that the advantages of this freedom outweigh the disadvantages; that many of the disadvantages are localized to professions based on pre-internet inefficiencies; and that the effort required to take expressive power away from citizens was not compatible with a free society.
This was, I thought, a pretty harsh critique of the book. I was wrong; I didn’t know from harsh.
Scoble was simply contemptuous. He had circled offending passages which he would read, and then offer an aphoristic riposte that was more scorn than critique. For instance, in taking on Andrew’s point that talent is unevenly distributed, Scoble’s only comment was, roughly, “Yeah, Britney must be talented…”
Now you know and I know what Scoble meant — traditional media gives outsize rewards to people on characteristics other than pure talent. This is true, but because he was so dismissive of Keen, it’s not the point that Scoble actually got across. Instead, he seemed to be denying either that talent is unevenly distributed, or that Britney is talented.
But Britney is talented. She’s not Yo-Yo Ma, and you don’t have to like her music (back when she made music rather than just headlines), but what she does is hard, and she does it well. Furthermore, deriding the music business’s concern with looks isn’t much of a criticism. It escaped no one’s notice that Amanda Congdon and lonelygirl15 were easy on the eyes, and that that was part of their appeal. So cheap shots at mainstream talent or presumptions of the internet’s high-mindedness are both non-starters.
More importantly, talent is unevenly distributed, and everyone knows it. Indeed, one of the many great things about the net is that talent can now express itself outside traditional frameworks; this extends to blogging, of course, but also to music, as Clive Thompson described in his great NY Times piece, or to software, as with Linus’ talent as an OS developer, and so on. The price of this, however, is that the amount of poorly written or produced material has expanded a million-fold. Increased failure is an inevitable byproduct of increased experimentation, and finding new filtering methods for dealing with an astonishingly adverse signal-to-noise ratio is the great engineering challenge of our age (c.f. Google.) Whatever we think of Keen or CotA, it would be insane to deny that.
Similarly, Scoble scoffed at the idea that there is a war on copyright, but there is a war on copyright, at least as it is currently practiced. As new capabilities go, infinite perfect copyability is a lulu, and it breaks a lot of previously stable systems. In the transition from encoding on atoms to encoding with bits, information goes from having the characteristics of chattel to those of a public good. For the pro-freedom camp to deny that there is a war on copyright puts Keen in the position of truth-teller, and makes us look like employees of the Ministry of Doublespeak.
It will be objected that engaging Keen and discussing a flawed book will give him attention he neither needs nor deserves. This is fantasy. CotA will get an enthusiastic reception no matter what, and whatever we think of it or him, we will be called to account for the issues he raises. This is not right, fair, or just, but it is inevitable, and if we dismiss the book based on its errors or a-causal attributions, we will not be regarded as people who have high standards, but rather as defensive cult members who don’t like to explain ourselves to outsiders.
What We Should Say
Here’s my response to the core of Keen’s argument.
Keen is correct in seeing that the internet is not an improvement to modern society; it is a challenge to it. New technology makes new things possible, or, put another way, when new technology appears, previously impossible things start occurring. If enough of those impossible things are significantly important, and happen in a bundle, quickly, the change becomes a revolution.
The hallmark of revolution is that the goals of the revolutionaries cannot be contained by the institutional structure of the society they live in. As a result, either the revolutionaries are put down, or some of those institutions are transmogrified, replaced, or simply destroyed. We are plainly witnessing a restructuring of the music and newspaper businesses, but their suffering isn’t unique, it’s prophetic. All businesses are media businesses, because whatever else they do, all businesses rely on the managing of information for two audiences — employees and the world. The increase in the power of both individuals and groups, outside traditional organizational structures, is epochal. Many institutions we rely on today will not survive this change without radical alteration.
This change will create three kinds of loss.
First, people whose jobs relied on solving a hard problem will lose those jobs when the hard problems disappear. Creating is hard, filtering is hard, but the basic fact of making acceptable copies of information, previously the basis of the aforementioned music and newspaper industries, is a solved problem, and we should regard with suspicion anyone who tries to return copying to its previously difficult state.
Similarly, Andrew describes a firm running a $50K campaign soliciting user-generated ads, and notes that some professional advertising agency therefore missed out on something like $300,000 of fees. It’s possible to regard this as a hardship for the ad guys, but it’s also possible to wonder whether they were really worth the $300K in the first place if an amateur, working in their spare time with consumer-grade equipment, can create something the client is satisfied with. This loss is real, but it is not general. Video tools are sad for ad guys in the same way movable type was sad for scribes, but as they say in show biz, the world doesn’t owe you a living.
The second kind of loss will come from institutional structures that we like as a society, but which are becoming unsupportable. Online ads offer better value for money, but as a result, they are not going to generate enough cash to stand up the equivalent of the NY Times’ 15-person Baghdad bureau. Josh Wolf has argued that journalistic privilege should be extended to bloggers, but the irony is that Wolf’s very position as a videoblogger makes that view untenable — journalistic privilege is a special exemption to a general requirement for citizens to aid the police. We can’t have a general exception to that case.
The old model of defining a journalist by tying their professional identity to employment by people who own a media outlet is broken. Wolf himself has helped transform journalism from a profession to an activity; now we need a litmus test for when to offer source confidentiality for acts of journalism. This will in some ways be a worse compromise than the one we have now, not least because it will take a long time to unfold, but we can’t have mass amateurization of journalism and keep the social mechanisms that regard journalists as a special minority.
The third kind of loss is the serious kind. Some of these Andrew mentions in his book: the rise of spam, the dramatically enlarged market for identity theft. Other examples he doesn’t: terrorist organizations being more resilient as a result of better communications tools, pro-anorexic girls forming self-help groups to help them remain anorexic. These things are not side-effects of the current increase in freedom, they are effects of that increase. Spam is not just a plague in open, low-entry-cost systems; it is a result of those systems. We can no longer limit things like who gets to form self-help groups through social controls (the church will rent its basement to AA but not to the pro-ana kids), because no one needs help or permission to form such a group anymore.
The hard question contained in Cult of the Amateur is “What are we going to do about the negative effects of freedom?” Our side has generally advocated having as few limits as possible (when we even admit that there are downsides), but we’ve been short on particular cases. It’s easy to tell the newspaper people to quit whining, because the writing has been on the wall since Brad Templeton founded Clarinet. It’s harder to say what we should be doing about the pro-ana kids, or the newly robust terror networks.
Those cases are going to shift us from prevention to reaction (a shift that parallels the current model of publishing first, then filtering later), but so much of the conversation about the social effects of the internet has been so upbeat that even when there is an obvious catastrophe (as with the essjay crisis on Wikipedia), we talk about it amongst ourselves, but not in public.
What Wikipedia (and Digg and eBay and craigslist) have shown us is that mature systems have more controls than immature ones, as bad cases are identified and dealt with; and as these systems become more critical and more populous, the number of bad cases (and therefore the granularity and sophistication of the controls) will continue to increase.
We are creating a governance model for the world that will coalesce after the pre-internet institutions suffer whatever damage or decay they are going to suffer. The conversation about those governance models, what they look like and why we need them, is going to move out into the general public with CotA, and we should be ready for it. My fear, though, is that we will instead get a game of “Did not!”, “Did so!”, and miss the opportunity to say something much more important.
This is a relief for people like me — you’re as young as you feel, and all that — or rather it would be a relief but for one little problem: Fred was right before, and he’s wrong now. Young entrepreneurs have an advantage over older ones (and by older I mean over 30), and contra Fred’s second post, age isn’t in fact a mindset. Young people have an advantage that older people don’t have and can’t fake, and it isn’t about vigor or hunger — it’s a mental advantage. The principal asset a young tech entrepreneur has is that they don’t know a lot of things.
In almost every other circumstance, this would be a disadvantage, but not here, and not now. The reason this is so (and the reason smart old people can’t fake their way into this asset) has everything to do with our innate ability to cement past experience into knowledge.
Probability and the Crisis of Novelty
The classic illustration for learning outcomes based on probability uses a bag of colored balls. Imagine that you can take out one ball, record its color, put it back, and draw again. How long does it take you to form an opinion about the contents of the bag, and how correct is that opinion?
Imagine a bag of black and white balls, with a slight majority of white. Drawing out a single ball would provide little information beyond “There is at least one white (or black) ball in this bag.” If you drew out ten balls in a row, you might guess that there are a similar number of black and white balls. A hundred would make you relatively certain of that, and might give you an inkling that white slightly outnumbers black. By a thousand draws, you could put a rough percentage on that imbalance, and by ten thousand draws, you could say something like “53% white to 47% black” with some confidence.
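The convergence Shirky describes here is just the law of large numbers. A minimal simulation (my own sketch, assuming the 53/47 split he uses) shows the estimate tightening as the draws accumulate:

```python
import random

def estimate_white_fraction(draws, true_white=0.53, seed=1):
    """Draw `draws` balls with replacement from a bag that is
    53% white, and return the observed fraction of white balls."""
    rng = random.Random(seed)
    whites = sum(1 for _ in range(draws) if rng.random() < true_white)
    return whites / draws

# The more draws, the closer the guess gets to the true 53% majority.
for n in (10, 100, 1000, 10000):
    print(n, round(estimate_white_fraction(n), 3))
```

Ten draws tell you almost nothing; ten thousand pin the imbalance down to within a point or two, which is exactly the expert's advantage in a stable world.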
This is the world most of us live in, most of the time; the people with the most experience know the most.
But what would happen if the contents of the bag changed overnight? What if the bag suddenly started yielding balls of all colors and patterns — black and white but also green and blue, striped and spotted? The next day, when the expert draws a striped ball, he might well regard it as a mere anomaly. After all, his considerable experience has revealed a predictable and stable distribution over tens of thousands of draws, so there is no need to throw out the old theory because of just one anomaly. (To put it in Bayesian terms, the expert's prior beliefs are valuable precisely because they have been strengthened through repetition, and that same repetition makes the expert confident in them even in the face of a small number of challenging cases.)
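The Bayesian point can be made concrete with a toy Beta-Binomial sketch (my own illustration; the pseudo-count numbers are invented). Encoding the prior as pseudo-counts of previous draws shows why a heavily reinforced prior barely moves under the same new evidence that swings a novice's estimate:

```python
def posterior_mean(prior_odd, prior_draws, odd_seen, new_draws):
    """Posterior mean of the 'odd-colored ball' fraction under a
    Beta prior expressed as pseudo-counts: `prior_odd` odd balls
    seen in `prior_draws` prior draws."""
    return (prior_odd + odd_seen) / (prior_draws + new_draws)

# Overnight the bag becomes 60% odd colors; both observers then
# see 30 odd balls in 50 fresh draws.
expert = posterior_mean(prior_odd=1, prior_draws=10000, odd_seen=30, new_draws=50)
novice = posterior_mean(prior_odd=1, prior_draws=2, odd_seen=30, new_draws=50)
print(round(expert, 3))  # the expert's estimate barely budges
print(round(novice, 3))  # the novice lands near the new reality
```

With ten thousand prior draws behind him, fifty anomalous draws move the expert's estimate by a fraction of a percent; the newcomer, carrying almost no prior weight, reads the new distribution almost immediately.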
But the expert keeps drawing odd colors, and so after a while, he is forced to throw out the ‘this is an anomaly, and the bag is otherwise as it was’ theory, and start on a new one, which is that some novel variability has indeed entered the system. Now, the expert thinks, we have a world of mostly black and white, but with some new colors as well.
But the expert is still wrong. The bag changed overnight, and the new degree of variation is huge compared to the older black-and-white world. Critically, any attempt to rescue the older theory will cause the expert to misunderstand the world, and the more carefully the expert relies on the very knowledge that constitutes his expertise, the worse his misunderstanding will be.
Meanwhile, on the morning after the contents of the bag turn technicolor, someone who just showed up five minutes ago would say “Hey, this bag has lots of colors and patterns in it.” While the expert is still trying to explain away or minimize the change as a fluke, or as a slight adjustment to an otherwise stable situation, the novice, who has no prior theory to throw out, understands exactly what’s going on.
What our expert should have done, the minute he saw the first odd ball, is to say “I must abandon everything I have ever thought about how this bag works, and start from scratch.” He should, in other words, start behaving like a novice.
Which is exactly the thing he — we — cannot do. We are wired to learn from experience. This is, in almost all cases, absolutely the right strategy, because most things in life benefit from mental continuity. Again, today, gravity pulls things downwards. Again, today, I get hungry and need to eat something in the middle of the day. Again, today, my wife will be happier if I put my socks in the hamper than on the floor. We don’t need to re-learn things like this; once we get the pattern, we can internalize it and move on.
A Lot of Knowledge Is A Dangerous Thing
This is where Fred’s earlier argument comes in. In 999,999 cases, learning from experience is a good idea, but what entrepreneurs do is look for the one in a million shot. When the world really has changed overnight, when wild new things are possible if you don’t have any sense of how things used to be, then it is the people who got here five minutes ago who understand that new possibility, and they understand it precisely because, to them, it isn’t new.
These cases, let it be said, are rare. The mistakes novices make come from a lack of experience. They overestimate mere fads, seeing revolution everywhere, and they make this kind of mistake a thousand times before they learn better. But the experts make the opposite mistake, so that when a real once-in-a-lifetime change comes along, they are at risk of regarding it as a fad. As a result of this asymmetry, the novice makes their one good call during an actual revolution, at exactly the moment the expert makes their one big mistake, and at that moment, that is all that's needed to give the newcomer a considerable edge.
Here’s a tech history question: Which went mainstream first, the PC or the VCR?
People over 35 have a hard time understanding why you'd even ask — VCRs obviously pre-date PCs for general adoption.
Here’s another: Which went mainstream first, the radio or the telephone?
The same people often have to think about this question, even though the practical demonstration of radio came almost two decades after the practical demonstration of the telephone. We have to think about that second question because, to us, radio and the telephone arrived at the same time, which is to say the day we were born. And for college students today, that is true of the VCR and the PC.
People who think of the VCR as old and stable, and the PC as a newer invention, are not the kind of people who think up Tivo. It’s people who are presented with two storage choices, tape or disk, without historical bias making tape seem more normal and disk more provisional, who do that kind of work, and those people are, overwhelmingly, young.
This is sad for a lot of us, but it's also true, and Fred's kind lies about age being a mindset won't reverse that.
The Uses of Experience
I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that you have to try on pants before you buy them. I know that newspapers are where you get your political news and how you look for a job. I know that if you want to have a conversation with someone, you call them on the phone. I know that the library is the most important building on a college campus. I know that if you need to take a trip, you visit a travel agent.
In the last 15 years or so, I’ve had to unlearn every one of those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.
Ten years ago, I was the CTO of a web company we built and sold in what seemed like an eon but was, in retrospect, an eyeblink. Looking back, I'm embarrassed at how little I knew, but I was a better entrepreneur because of it.
I can take some comfort in the fact that people much more successful than I succumb to the same fate. IBM learned, from decades of experience, that competitive advantage lay in the hardware; Bill Gates had never had those experiences, and didn’t have to unlearn them. Jerry and David at Yahoo learned, after a few short years, that search was a commodity. Sergey and Larry never knew that. Mark Cuban learned that the infrastructure required for online video made the economics of web video look a lot like TV. That memo was never circulated at YouTube.
So what can you do when you get kicked out of the club? My answer has been to do the things older and wiser people do. I teach, I write, I consult, and when I work with startups, it’s as an advisor, not as a founder.
And the hardest discipline, whether talking to my students or the companies I work with, is to hold back from offering too much advice, too definitively. When I see students or startups thinking up something crazy, and I want to explain why that won’t work, couldn’t possibly work, why this recapitulates the very argument that led to RFC 939 back in the day, I have to remind myself to shut up for a minute and just watch, because it may be me who will be surprised when I see what color comes out of the bag next.
Over at Knowledge Tree is a recent essay i wrote called Social Network Sites: Public, Private, or What? For many who follow my blog, the arguments are not new, but i suspect some folks might appreciate the consolidated and not-so-spastic version. At the very least, perhaps you'll be humored to see my writing splattered with the letter 's' instead of the letter 'z' (it's an Australian e-journal). There's also an MP3 of me reading the essay for those who fear text (which is very novel since y'all know how much i fear audio/video recordings of me, but i did resist trying to sound funny while pronouncing the letter s instead of the letter z). And here's a PDF of the essay for those wishing to kill trees.
In conjunction with this essay, there's a live chat at 2PM Australian Eastern on 22 May. This translates to 9PM PST on 21 May and midnight New York time (which is where i'll be so hopefully i won't be too loopy, or at least no more loopy than i am feeling right now).
Four years ago, I wrote a piece called Fame vs Fortune: Micropayments and Free Content. The piece was sparked by the founding of a company called BitPass and its adoption by the comic artist Scott McCloud (author of the seminal Understanding Comics, among other things). McCloud created a graphic work called "The Right Number", which you had to buy using BitPass.
BitPass will fail, as FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, and many others have in the decade since Digital Silk Road, the paper that helped launch interest in micropayments. These systems didn’t fail because of poor implementation; they failed because the trend towards freely offered content is an epochal change, to which micropayments are a pointless response.
I’d love to take credit for having made a brave prediction there, but in fact Nick Szabo wrote a dispositive critique of micropayments back in 1996. The BitPass model never made a lick of sense, so predicting its demise was mere throat-clearing on the way to the bigger argument. The conclusion I drew in 2003 (and which I still believe) was that the vanishingly low cost of making unlimited perfect copies would put creators in the position of having to decide between going for audience size (fame) or restricting and charging for access (fortune), and that the desire for fame, no longer tempered by reproduction costs, would generally win out.
Creators are not publishers, and putting the power to publish directly into their hands does not make them publishers. It makes them artists with printing presses. This matters because creative people crave attention in a way publishers do not. […] with the power to publish directly in their hands, many creative people face a dilemma they’ve never had before: fame vs fortune.
Scott McCloud, who was also an advisor to BitPass, took strong issue with this idea in Misunderstanding Micropayments, a reply to the Fame vs. Fortune argument:
In many cases, it's no longer a choice between getting it for a price or getting it for free. It's the choice between getting it for a price or not getting it at all. Fortunately, the price doesn't have to be high.
McCloud was arguing that the creator’s natural monopoly — only Scott McCloud can produce another Scott McCloud work — would provide the artist the leverage needed to insist on micropayments (true), and that this leverage would create throngs of two-bit users (false).
What's really interesting is that, after the failure of BitPass, McCloud has now released The Right Number absolutely free of charge. Nothing. Nada. Kein Preis. After the micropayment barrier had proved too high for his potential audience (as predicted), McCloud had to choose between keeping his work obscure, in order to preserve the possibility of charging for it, or going for attention. His actual choice in 2007 upends his argument of four years ago: he went for the fame, at the expense of the fortune. (This recapitulates Tim O'Reilly's formulation: "Obscurity is a far greater threat to authors and creative artists than piracy." [ thanks, Cory, for the pointer ])
Everyone who imagines a working micropayment system either misunderstands user preferences, or imagines preventing users from expressing those preferences. The working micropayments systems that people hold up as existence proofs — ringtones, iTunes — are businesses that have escaped from market dynamics through a monopoly or cartel (music labels, carriers, etc.) Indeed, the very appeal of micropayments to content producers (the only people who like them — they offer no feature a user has ever requested) is to re-establish the leverage of the creator over the users. This isn't going to happen, because the leverage was never based on the value of the content, but on control of packaging and distribution.
I’ll let my 2003 self finish the argument:
People want to believe in things like micropayments because without a magic bullet to believe in, they would be left with the uncomfortable conclusion that what seems to be happening — free content is growing in both amount and quality — is what’s actually happening.
The economics of content creation are in fact fairly simple. The two critical questions are “Does the support come from the reader, or from an advertiser, patron, or the creator?” and “Is the support mandatory or voluntary?”
The internet adds no new possibilities. Instead, it simply shifts both answers strongly to the right. It makes all user-supported schemes harder, and all subsidized schemes easier. It likewise makes collecting fees harder, and soliciting donations easier. And these effects are multiplicative. The internet makes collecting mandatory user fees much harder, and makes voluntary subsidy much easier.
The only interesting footnote, in 2007, is that these forces have now reversed even McCloud’s behavior.
I love Etech. This year, i had the great opportunity to keynote Etech (albeit at an ungodly hour). The talk i wrote was entirely new and intended for the tech designer/developer audience (warning: the academics will hate it). The talk is called:
It’s about how technologists need to pay attention to the magic that everyday people create using the Web2.0 technologies that we in the tech world think are magical. It’s quite a fun talk and i figured that some might enjoy reading it so i just uploaded my crib notes. It is unlikely that i said exactly what i wrote, but the written form should provide a good sense of the points i was trying to make in the talk.
I should give infinite amounts of appreciation to Raph Koster who took unbelievable notes during my presentation, letting me adjust my crib to be more in tune with what i actually said. THANK YOU! I was half tempted to not bother blogging my crib notes given the fantastic-ness of his notes, but i figure that there still might be some out there who would prefer the crib. Enjoy!
(PS: If you remember me saying something that i didn’t put in the crib, let me know and i’ll add it… i’m stunned at how many of you took notes during the talk.)
SXSW has come and gone and my phone might never recover. Y’see, last year i received over 500 Dodgeballs. To the best that i can tell, i received something like 3000 Tweets during the few days i was in Austin. My phone was constantly hitting its 100 message cap and i spent more time trying to delete messages than reading them. Still, i think that Twitter and Dodgeball are interesting and i want to take a moment to consider their strengths and weaknesses as applications.
While you can use Dodgeball for a variety of things, it's primarily a way of announcing presence in a social venue where you'd be willing to interact with other people. Given that i'm a hermit, i primarily use Dodgeball to announce my presence at conference outings and to sigh in jealousy as people romp around Los Angeles. Dodgeball is culturally linked to place. I'm still pretty peeved with Google over the lack of development of Dodgeball because i still think it would be a brilliant campus-based application where people actually do party-hop every weekend and want to know if their friends are at the neighboring frat party instead of this one. When it comes to usage at SXSW, Dodgeball is great. I know when 7 of my friends are in one venue and 11 are in another; it helps me decide where to go.
Twitter has taken a different path. It is primarily micro-blogging or group IMing or push away messaging. You write whatever you damn well please and it spams all of the people who agreed to be your friends. The biggest strength AND weakness of Twitter is that it works through your IM client (or Twitterrific) as well as your phone. This means that all of the tech people who spend far too much time bored on their laptops are spamming people at a constant rate. Ah, procrastination devices. If you follow all of your friends on your mobile, you're in for a hellish (and very expensive) experience. Folks quickly learn to stop following people on their mobile (or, if they don't, they turn Twitter off altogether). This, unfortunately, kills the mobile value of it, making it far more of a web tool than a mobile tool. Considering how much of a bitch it is to follow/unfollow people, users quickly choose and rarely turn back. Thus, once they stop following someone on their phone, they don't return just because they are going out with that person that night (unless they run into them and choose to switch it on).
At SXSW, Twitter is fantastic for mobile. Everyone is running around the same town commenting on talks, remarking on venues, bitching about the rain. But dear god did i feel bad for the people who weren’t at SXSW who were getting spammed with that crap. One value of Twitter is that it’s really lightweight and easy. One problem is that this is terrible if your social world is not one giant cluster. While my tech friends who normally attend SXSW moped about how jealous they were upon receiving all of the SXSW messages, my non-tech friends were more of the WTF camp. Without segmentation, i had to choose one audience over the other because there was no way to move seamlessly between the audiences. Of course, groups are much heavier to manage. Still, i think it’s possible and i gave Ev some notes.
I think it's funny to watch my tech geek friends adopt a social tech. They can't imagine life without their fingers attached to a keyboard, or a world where they didn't have all-you-can-eat phone plans. More importantly, the vast majority of their friends are tech geeks too. And their social world is relatively structurally continuous. For most 20/30-somethings, this isn't so. Work and social are generally separated and there are different friend groups that must be balanced in different ways.
Of course, the population whose social world is most like the tech geeks is the teens. This is why they have no problems with MySpace bulletins (which are quite similar to Twitter in many ways). The biggest challenge with teens is that they do not have all-you-can-eat phone plans. Over and over, the topic of number of text messages in one's plan comes up. And my favorite pissed off bullying act that teens do involves ganging up to collectively spam someone so that they'll go over their limit and get into trouble with their parents (phone companies don't seem to let you block texts from particular numbers and of course you have to pay 10c per text you receive). This is particularly common when a nasty breakup occurs and i was surprised when i found out that switching phone numbers is the only real solution to this. Because most teens are not permanently attached to a computer and because they typically share their computers with other members of the family, Twitterrific-like apps wouldn't really work so well. And Twitter is not a strong enough app to replace IM time.
Of course, this doesn’t mean that all teens would actually like Twitter. There are numerous complaints about the lameness of bulletins. People forward surveys just as something to do and others complain that this is a waste of their time. (Of course, then they go on to do it themselves.) Still, bulletin space is like Twitter space. You need to keep posting so that your friends don’t forget you. Or you don’t post at all. Such is the way of Twitter. Certain people i see flowing 5-15 times a day. Others i never hear from (or like once a week).
There’s another issue at play… Like with bulletins, it’s pretty ostentatious to think that your notes are worth pushing to others en masse. It takes a certain kind of personality to think that this kind of spamming is socially appropriate and desirable. Sure, we all love to have a sense of what’s going on, but this is push technology at its most extreme. You’re pushing your views into the attention of others (until they turn it or you off).
The techno-geek users keep telling me that it's a conversation. Of course, this is also said of blogging. But i don't think that either are typically conversations. More often, they are individuals standing on their soap boxes who enjoy people responding to them and may wander around to others' soap boxes looking for interesting bits of data. By and large, people Twitter to share their experience; only rarely do they expect to receive anything in return. What is returned is typically a kudos or a personal thought or an organizing question. I'd be curious what percentage of Tweets start a genuine back-and-forth dialogue where the parties are on equal ground. It still amazes me that when i respond to someone's Tweet personally, they often ignore me or respond curtly with an answer to my question. It's as though the Tweeter wants to be recognized en masse, but doesn't want to actually start a dialogue with their pronouncements. Of course, this is just my own observation. Maybe there are genuine conversations happening beyond my purview.
Unfortunately, i don't know how sustainable Twitter is for most people. It's very easy to burn out on it and once someone does, will they return? It's also really hard for friend-management. If you add someone, even if you "leave" them, you'll get Twitterrific posts from them. This creates a huge disincentive for adding people, even if you welcome them to read your Tweets. Post-SXSW, i've seen two things: the most active in Austin are still ridiculously active. The rest have turned it off for all intents and purposes. Personally, i'm trying to see how long i'll last before i can't stand the invasion any longer. Given that my non-tech friends can't really join effectively (for the same reasons as teens - text messaging plan and lack of always-on computerness and hatred of IM interruptions), i don't think that i can get a good sense of how this would play out beyond the geek crowd. But it sure is entertaining to watch.
PS: I should note that my favorite part of Twitter is that when i wander to a non-functioning page, i get this image:
When adults aren’t dismissing MySpace as the land-o-predators, they’re often accusing it of producing narcissistic children. I find it hard to bite my tongue in these situations, but i know that few adults are willing to take the blame for producing narcissistic children. The issue of narcissism and fame is back in public circulation with a vengeance (thanks in part to Britney Spears for having a public meltdown). While the mainstream press is having a field day with blaming celebrities and teens for being narcissistic, more solid research on narcissism is emerging.
For those who are into pop science coverage of academic work, i'd encourage you to start with Jake Halpern's "Fame Junkies" (tx Anastasia). For simplicity's sake, let's list a few of the key findings that have emerged over the years concerning narcissism.
While many personality traits stay stable across time, it appears as though levels of narcissism (as tested by the NPI) decrease as people grow older. In other words, while adolescents are more narcissistic than adults, you were also more narcissistic when you were younger than you are now.
The scores of adolescents on the NPI continue to rise. In other words, it appears as though young people today are more narcissistic than older people were when they were younger.
There appears to be a correlation between narcissism and self-esteem-based education. In other words, all of that school crap about how everyone is good and likable has produced a generation of narcissists.
Celebrity does not make people narcissists but narcissistic people seek fame.
Reality TV stars score higher on the NPI than other celebrities.
OK… given these different findings (some of which are still up for debate in academic circles), what should we make of teens’ participation on social network sites in relation to narcissism?
My view is that we have trained our children to be narcissistic and that this is having all sorts of terrifying repercussions; to deal with this, we’re blaming the manifestations instead of addressing the root causes and the mythmaking that we do to maintain social hierarchies. Let’s unpack that for a moment.
American individualism (and self-esteem education) have allowed us to uphold a myth of meritocracy. We sell young people the idea that anyone can succeed, anyone can be president. We ignore the fact that working class kids get working class jobs. This, of course, has been exacerbated in recent years. There used to be meaningful working class labor that young people were excited to be a part of. It was primarily masculine labor and it was rewarded through set hierarchies and unions helped maintain that structure. The unions crumbled in the 1980s and by the time the 1987 recession hit, there was a teenage wasteland. No longer were young people being socialized into meaningful working class labor; the only path out was the "lottery" (aka becoming a famous rock star, athlete, etc.).
Since the late 80s, the lottery system has become more magnificent and corporatized. While there’s nothing meritocratic about reality TV or the Spice Girls, the myth of meritocracy remains. Over and over, working class kids tell me that they’re a better singer than anyone on American Idol and that this is why they’re going to get to be on the show. This makes me sigh. Do i burst their bubble by explaining that American Idol is another version of Jerry Springer where hegemonic society can mock wannabes? Or does their dream have value?
So, we have a generation growing up being told that they can be anyone, magnifying the level of narcissism. Narcissists seek fame and Hollywood dangles fame like a carrot on a stick. Meanwhile, technology emerges that challenges broadcast’s control over distribution. It just takes a few Internet success stories for fame-seeking narcissists to begin projecting themselves into the web in the hopes of being seen and being validated. While the important baseline of peer-validation still dominates, the hopes of becoming famous are still part of the narrative. Unfortunately, it’s kinda like watching wannabe actors work as waiters in Hollywood. They think that they’ll be found there because one day long ago someone was and so they go to work everyday in a menial service job with a dream.
Perhaps i should rally behind people's dreams, but i tend to find them quite disturbing. It is these kinds of dreams that uphold the American myths that get us into such trouble. They also uphold hegemony and the powerful feed on their dreams, offering nothing in return. We can talk about reality TV as an amazing opportunity for anyone to act, but realistically, it's nothing more than Hollywood's effort to bust the actors' guild and related unions. Feed on people's desire for fame, pay them next to nothing, and voila, profit margin!
Unfortunately, union busting is the least of my worries when it comes to dream parasites. When i was trying to unpack the role of crystal meth in domestic violence, i started realizing that the meth offered a panacea when the fantasy bubble burst. Needless to say, this resulted in a spiral into hell for many once-dreamers. The next step was even more nauseating. When i started seeing how people in rural America recovered from meth, i found one common solution: born-again Christianity. The fervor for fame which was suppressed by meth re-emerged in zealous religiosity. Christianity promised an even less visible salvation: God’s grace. While blind faith is at the root of both fame-seeking and Christianity, Christianity offers a much more viable explanation for failures: God is teaching you a lesson… be patient, worship God, repent, and when you reach heaven you will understand.
While i have little issue with the core tenets of Christianity or religion in general, i am disgusted by the Christian Industrial Complex. In short, i believe that there is nothing Christian about the major institutions behind modern day organized American Christianity. Decades ago, the Salvation Army actively engaged in union-busting in order to maintain the status quo. Today, the Christian Industrial Complex has risen into power in both politics and corporate life, but their underlying mission is the same: justify poor people's industrial slavery so that the rich and powerful can become more rich and powerful. Ah, the modernization of the Protestant Ethic.
Let’s pop the stack and return to fame-seeking and massively networked society. Often, you hear Internet people modify Andy Warhol’s famous quote to note that on the Internet, everyone will be famous amongst 15. I find this very curious, because aren’t both time and audience needed to be famous? Is one really famous for 15 minutes? Or amongst 15? Or is it just about the perceived rewards around fame?
Why is it that people want to be famous? When i ask teens about their desire to be famous, it all boils down to one thing: freedom. If you’re famous, you don’t have to work. If you’re famous, you can buy anything you want. If you’re famous, your parents can’t tell you what to do. If you’re famous, you can have interesting friends and go to interesting parties. If you’re famous, you’re free! This is another bubble that i wonder whether or not i should burst. Anyone who has worked with celebrities knows that fame comes with a price and that price is unimaginable to those who don’t have to pay it.
How does this view of fame play into narcissism? If you think you’re all that, you don’t want to be told what to do or how to do it… You think you’re above all of that. When your parents are telling you that you have to clean your room and that you’re not allowed out, they’re cramping your style. How can you be anyone you want to be if you can’t even leave the house? Fame appears to be a freedom from all of that.
The question remains… does micro-fame (such as the attention one gets from being very cool on MySpace) feed into the desires of narcissists to get attention? On a certain level, yes. The attention feels good, it feeds the ego. But the thing about micro-celebrities is that they’re not free from attack. One of the reasons that celebrities go batty is that fame feeds into their narcissism, further heightening their sense of self-worth as more and more people tell them that they’re all that. They never see criticism; their narcissism is never held in check. This isn’t true with micro-fame and this is especially not true online when celebrities face their fans (and haters) directly. Net celebrities feel the exhaustion of attention and nagging much quicker than Hollywood celebrities. It’s a lot easier to burn out quickly, long before reaching that mass scale of fame. Perhaps this keeps some of the desire for fame in check? Perhaps not. I honestly don’t know.
What i do know is that MySpace provides a platform for people to seek attention. It does not inherently provide attention and this is why even if people wanted 90M viewers to their blog, they’re likely to only get 6. MySpace may help some people feel the rush of attention, but it does not create the desire for attention. The desire for attention runs much deeper and has more to do with how we as a society value people than with what technology we provide them.
I am most certainly worried about the level of narcissism that exists today. I am worried by how we feed our children meritocratic myths and dreams of being anyone just so that current powers can maintain their supremacy at a direct cost to those who are supplying the dreams. I am worried that our “solutions” to the burst bubble are physically, psychologically, and culturally devastating, filled with hate and toxic waste. I am worried that Paris Hilton is a more meaningful role model to most American girls than Mother Teresa ever was. But i am not inherently worried about social network technology or video cameras or magazines. I’m worried by how society leverages different media to perpetuate disturbing ideals and prey on people’s desire for freedom and attention. Eliminating MySpace will not stop the narcissistic crisis that we’re facing; it will simply allow us to play ostrich as we continue to damage our children with unrealistic views of the world.
I’m often asked what “Web 3.0” will be about. Lately, i have found myself talking about two critical stages of web sociality in order to explain where we’re going. I realized that i never succinctly described this here so i thought i should.
In early networked publics, there were two primary organizing principles for group sociability: interests and activities. People came together on rec.motorcycles because they shared an interest in motorcycles. People also came together in work groups to discuss activities. Usenet, mailing lists, chatrooms, etc. were organized around these principles.
By and large, these were strangers meeting. Early net adopters were often engaging with people like them who were not geographically proximate. Then the boom hit and everyone got online, often to email with their friends (and consume). With everyone online, the organizing principles of sociality shifted.
As blogging began to take hold, people started arranging themselves around pre-existing friend groups. In this way, the organizing principle was about ego-centric networks. People’s “communities” began being defined by their friends. This model is quite different from group-driven structures where there are defined network boundaries. Ego-centric systems are a (mostly) continuous graph. There are certainly clusters, but rarely bounded groups. This is precisely how we get the notion of “6 degrees of separation.” While blogging (and to a lesser degree homepages) were key to this shift, it was really social network sites that took the ball to the endzone. They made the networks visible, allowing people to put themselves at the center of their world. We finally have a world wide WEB of people, not just documents.
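The “6 degrees” intuition is just shortest-path search over that continuous friend graph. A minimal sketch, with an invented toy network (all names and connections are made up for illustration):

```python
from collections import deque

def degrees_of_separation(graph, start, target):
    """Breadth-first search over a friend graph; returns the number
    of friend-hops between two people, or None if no chain exists."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for friend in graph.get(person, ()):
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # no path: this corner of the graph is a bounded cluster

# Toy ego-centric network: visible clusters, but one continuous graph.
friends = {
    "ana": ["bob", "cat"],
    "bob": ["ana", "dev"],
    "cat": ["ana"],
    "dev": ["bob", "eli"],
    "eli": ["dev"],
}
```

In a group-driven structure with hard boundaries the search would simply stop at the boundary; in an ego-centric graph it usually finds a short chain, which is the whole “small world” observation.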
When i think about what’s next, i don’t think it’s going more virtual, more removed from everyday life. Actually, i think it’s even more connected to everyday life. We moved from ideas to people. What’s next? Place.
I believe that geographic-dependent context will be the next key shift. GPS, mesh networks, articulated presence, etc. People want to go mobile and they want to use technology to help them engage in the mobile world. Unfortunately, i think we have huge structural barriers in front of us. It’s not that we can’t do this on a technological level, it’s that there are old-skool institutions that want to get in the way. And they want to do it by plugging the market and shaping the law to their advantage. Primarily, i’m talking about carriers. And the handset makers who help keep the carriers alive. Let me explain.
The internet was not made for social communities. It was not made for social network sites. This grew because some creative folks decided to build on the open platform that was made available. Until recently, network neutrality was never a debate in the internet world because it was assumed. Given a connection (and time and literacy), anyone could contribute. Gotta love libertarian idealism.
Unfortunately, the same is not true for the mobile network. There’s never been neutrality and it’s the last thing that the carriers want. They want to control every byte and every application that can be put on the handsets that they adopt (and control through locking). In short, they want to control everything. It’s near impossible to develop networked social applications for mobiles. If it works on one carrier, it’s bound to be ignored by others. Even worse, the carriers have a disincentive to allow you to spread bytes over the network. (I can’t imagine how much those with all-you-can-eat plans detest Twittr.) Culturally, this is the step that’s next. Too bad i think that inane corporate bullshit is going to get in the way.
Of course, while i think that people want to move in this direction, i also think that privacy confusion has only just begun.
On Wednesday, Twitter tipped the tuna. By that I mean it started peaking. Adoption amongst the people I know seemed to double immediately, an apparent tipping point. It hasn’t jumped the shark, and probably won’t until Stephen Colbert covers this messaging of the mundane. As Twitter turns 1 on March 13th, there is a quickening not only of users, but of messages per user.
Twitter, in a nutshell, is mobile social software that lets you broadcast and receive short messages with your social network. You can use it with SMS (sending a message to 40404), on the web or IM. A darn easy API has enabled other clients such as Twitterific for the Mac. Twitter is Continuous Partial Presence, mostly made up of mundane messages in answer to the question, “what are you doing?” A never-ending stream of presence messages prompts you to update your own. Messages are more ephemeral than IM presence — and posting is of a lower threshold, both because of ease and accessibility, and the informality of the medium.
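To give a sense of how “darn easy” the API was at the time: posting a status was a single basic-auth HTTP POST to a REST endpoint. A rough sketch of what a client did (endpoint and auth scheme as they stood circa 2007; the modern API uses OAuth and entirely different URLs, so treat this as a historical illustration):

```python
import base64
import urllib.parse
import urllib.request

def build_update_request(username, password, status):
    """Build a basic-auth POST for the old statuses/update endpoint.
    Credentials here are placeholders; the endpoint is long gone."""
    data = urllib.parse.urlencode({"status": status}).encode("ascii")
    req = urllib.request.Request(
        "http://twitter.com/statuses/update.xml", data=data)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# Sending it was one more line:
# urllib.request.urlopen(build_update_request("me", "secret", "at SXSW"))
```

That one-call surface area is why clients like Twitterific, and bots like the IRC bridge described below, appeared so quickly.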
Anil Dash was spot-on to highlight “The sign of success in social software is when your community does something you didn’t expect.” A couple of weeks ago it became a convention to start messages with @username as a way of saying something to someone visible to everyone. Within the limited affordances of the tool, people started to use it not only for presence, but a kind of shouting at the party conversation. Further, when you see an @message to someone who isn’t in your social network, you find yourself inclined to go see who it is or add them if they are a friend who just joined. This kind of social discovery goes beyond seeing friend lists on profiles, aids network structure and quickens adoption.
While the app is viral (you have to get others to adopt to be able to use it), mobile social software has great word-of-mouth properties. At Wikimania this summer, a buzz went off in my pocket when I was having dinner, which prompted me to get Jason Calacanis, Dave Winer and the brothers Gillmor to adopt. Wednesday was the first day of TED, so a bunch of A-listers spread it. At SXSW it seems to be the smart mob tool of choice, and there is even a group for it with a feature I’ve never seen before, JOIN.
This week most of my company joined Twitter and I set up http://twitter.com/socialtext for no reason in particular. I posted the login in a private wiki page to let anyone contribute. But when Moconner saw how simple the API was, he wrote a bot to let us post from our IRC channel. Now we have a low threshold way to express group identity that fits with the way we work.
Liz Lawley well addressed the differences of this form of presence and criticisms of mundane content and interruption costs. She highlights “exploring clusters of loosely related people by looking at the updates from their friends. There are stories told in between updates.”
However, I do think the interruption tax is significant, especially with the quickening of adoption. You use your social network as a filter, which helps both in scoping participation within a pull model of attention management, and, to Liz’s point, in letting my friends digest the web for me, perhaps reducing my discovery costs. But the affordance within Twitter of both mobile and web (which is what lets Anil use it at all; he is web-only) is what helps me manage attention overload. I can throttle back to web-only and curb interruptions, simply by texting off.
Good thing too, because back when it was called twittr people held back believing what they posted would be interrupting on mostly mobile devices. Lately I think people just go for it, and most consumption is on the web or other clients. I’d love to see some research on posts/user, client use, tracking @username, group identities, geographic dispersion and revealing other undesigned conventions.
So a few weeks ago, I started getting spam referencing O’Reilly books in the subject line, and I thought that the spammers had just gotten lucky, and that the universe of possible offensive measures for spammers now included generating so many different subject lines that at least some of them got through to my inbox, but recently I’ve started to get more of this kind of spam, as with:
Subject: definition of what “free software” means. Outgrowing its
Subject: What makes it particularly interesting to private users is that there has been much activity to bring free UNIXoid operating systems to the PC,
Subject: and so have been long-haul links using public telephone lines. A rapidly growing conglomerate of world-wide networks has, however, made joining the global
(All are phrases drawn from http://tldp.org/LDP/nag/node2.html.)
Can it be that spammers are starting to associate context with individual email addresses, in an effort to evade Bayesian filters? (If you wanted to make sure a message got to my inbox, references to free software, open source, and telecom networks would be a pretty good way to do it. I mean, what are the chances?) Some of this stuff is so close to my interests that I thought I’d written some of the subject lines and was receiving this as a reply. Or is this just general Bayes-busting that happens to overlap with my interests?
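The Bayes-busting mechanism is easy to see in a toy naive-Bayes filter. The per-token probabilities below are invented, but the logic is the real one such filters use: a subject line stuffed with words the filter has learned to associate with legitimate mail drags the overall spam score down.

```python
import math

# Invented per-token spam probabilities, standing in for what a
# trained filter might hold for this particular inbox.
p_spam_given_token = {
    "viagra": 0.99, "winner": 0.95, "free": 0.60,
    "software": 0.10, "unix": 0.05, "networks": 0.10,
}

def spam_score(tokens, prior=0.5):
    """Combine per-token probabilities the classic naive-Bayes way,
    in log-odds space so repeated multiplication doesn't underflow."""
    log_odds = math.log(prior / (1 - prior))
    for t in tokens:
        p = p_spam_given_token.get(t)
        if p is not None:  # unknown tokens carry no evidence
            log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

spam_score(["winner", "free"])            # high: reads like spam
spam_score(["free", "software", "unix"])  # low: on-topic ham words win
```

Which is exactly why quoting the Linux Network Administrator’s Guide at a recipient who writes about free software and telecom is such an effective, if creepy, tactic.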
If it’s the former, then Teilhard de Chardin is laughing it up in some odd corner of the noosphere, as our public expressions are being reflected back to us as a come-on. History repeats itself, first as self-expression, then as ad copy…
I’m completely fascinated by Twitter right now—in much the same way I was by blogging four years ago, and by ICQ years before that.
If you haven’t tried it yet, Twitter is a site that allows you to post one-line messages about what you’re currently doing—via the web interface, IM, or SMS. You can limit who sees the messages to people you’ve explicitly added to your friends list, or you can make the messages public. (My Twitter posts are private, but my friend Joi’s are public.)
What Twitter does, in a simple and brilliant way, is to merge a number of interesting trends in social software usage—personal blogging, lightweight presence indicators, and IM status messages—into a fascinating blend of ephemerality and permanence, public and private.
The big “P” word in technology these days is “participatory.” But I’m increasingly convinced that a more important “P” word is “presence.” In a world where we’re seldom able to spend significant amounts of time with the people we care about (due not only to geographic dispersion, but also the realities of daily work and school commitments), having a mobile, lightweight method for both keeping people updated on what you’re doing and staying aware of what others are doing is powerful.
I’ve experimented a bit with a visual form of this lightweight presence indication, through cameraphone photos taken while traveling. A photo of a boarding gate sign, or of a hotel entrance, conveys where I am and what I’m doing quickly and easily. But that only works if people are near a computer and are watching my Flickr photo feed, and that’s a lot to ask.
I also use IM status messages to broadcast what I’m doing. My iChat has a stack of custom messages that I’ve saved for re-use, from “packing” and “at the airport” to “breaking up sibling squabbles” and “grading…the horror! the horror!” But status messages have no permanence to them, and require some degree of synchronicity—people have to be logged into IM, and looking at status messages, while I’m there. Because Twitter archives your messages on the web (and can send them as SMS that you can check at any time), that requirement for synchronous connections goes away.
Blogs allow this kind of archived update, of course—but they’re not lightweight. Where one might easily post a Twitter message along the lines of “on my way to work”, a blog post like that wouldn’t be worth the effort and overhead.
I’ve heard two kinds of criticisms of Twitter already.
The first criticizes the triviality of the content. But asking “who really cares about that kind of mindless trivia about your day” misses the whole point of presence. This isn’t about conveying complex theory—it’s about letting the people in your distributed network of family and friends have some sense of where you are and what you’re doing. And we crave this, I think. When I travel, the first thing I ask the kids on the phone when I call home is “what are you doing?” Not because I really care that much about the show on TV, or the homework they’re working on, but because I care about the rhythms and activities of their days. No, most people don’t care that I’m sitting in the airport at DCA, or watching a TV show with my husband. But the people who miss being able to share in day-to-day activity with me—family and close friends—do care.
The second type of criticism is that the last thing we need is more interruptions in our already discontinuous and partially attentive connected worlds. What’s interesting to me about Twitter, though, is that it actually reduces my craving to surf the web, ping people via IM, and cruise Facebook. I can keep a Twitter IM window open in the background, and check it occasionally just to see what people are up to. There’s no obligation to respond, which I typically feel when updates come from individuals via IM or email. Or I can just check my text messages or the web site when I feel like getting a big picture of what my friends are up to.
Which then leads to one of the aspects of Twitter that I find most fascinating—exploring clusters of loosely related people by looking at the updates from their friends. There are stories told in between updates. Who’s at a conference, and do they know each other? Who’s on the road, and who’s at home? Narratives that wind around and between the updates and the people, that show connections. Updates that echo each other, or even directly respond to another Twitter post.
There’s more to it than that, but I’m still sorting it all out in my head. Just wanted to post an early-warning signal that I see something important happening here, something worth paying (more than partial) attention to.
(cross-posted from mamamusings; since comments have been unreliable here, any comments can be posted there)
Not long ago, I wrote an article on “Social Publishing” here on Many-to-Many, which suggests the possibility of a system where
“authors create and distribute their work, and readers, individually and collectively, including fans as well as editors and peers, review, comment, rank, and tag, everything.”
So I followed up on the post and, along with a colleague, Richard Adler, started Oort-Cloud.org.
Oort-Cloud is a site where science fiction and fantasy readers and writers can build precisely the kind of community that I alluded to in Social Publishing. Oort-Cloud utilizes a process we have termed “OpenLit” which you can read more about on the OpenLit page. Basically, OpenLit is a simple catalytic cycle:
Write - Share - Read - Respond
First, writers write.
Second, writers share with others what they have written.
Third, readers read what is available.
Fourth, readers respond to what they have read.
In this way, writers become better writers by virtue of having a distribution outlet that embeds constant feedback, and readers have access to better and better stories, where “better” actually means better for them based on their interaction with the writers.
Hopefully, this all means new opportunities for everyone involved in science fiction and fantasy — readers, writers, and publishers alike.
Last week, Facebook unveiled a gifting feature. For $1, you can purchase a gift for the person you most adore. If you choose to make the gift public, you are credited with that gift on the person’s profile under the “gift box” region. If you choose to make the gift private, the gift is still there but there’s no notice concerning who gave it.
Before getting into this, let me take a moment to voice my annual bitterness over Hallmark Holidays, particularly the one that involves an obscene explosion of pink, candy, and flowers.
The gifting feature is fantastically timed to align with a holiday built around status: Valentine’s Day. Valentine’s Day is all about pronouncing your relationship to loved ones (and those you obsess over) in the witness of others. Remember those miniature cards in elementary school? Or the carnations in high school? Listening to the radio, you’d think Valentine’s Day was a contest. Who can get the most flowers? The fanciest dinner? This holiday should make most people want to crawl in bed and eat bon-bons while sobbing over sappy movies. But it works. It feeds on people’s desire to be validated and shown as worthy to the people around them, even at the expense of others. It is a holiday built purely on status (under the guise of “love”). You look good when others love you (and the more the merrier).
Of course, Valentine’s Day is not the only hyper-commercialized holiday. The celebration of Christ’s birth is marked by massive shopping. In response, the Festival of Lights has been turned into 8 days of competitive gift giving in American Jewish culture. Acknowledging that people get old in patterns that align with a socially constructed calendar also requires presents. Hell, anything that is seen as a lifestage change requires gifts (marriage, childbirth, graduation, Bat Mitzvah, etc.).
Needless to say, gift giving is perpetuated by a consumer culture that relishes any excuse to incite people to buy. My favorite example of this is the “gift certificate” - a piece of paper that says that you couldn’t think of what to give so you assuaged your guilt by giving money to a corporation. You get brainwashed into believing that forcing your loved one to shop at that particular venue is thoughtful, even though the real winner is the corporation since only a fraction of those certificates are ever redeemed. No wonder corporations love gift certificates - they allow them to make bundles and bundles of money, knowing that the receiver will never come back for the goods.
But anyhow… i’ve gone off on a tangent… Gifts. Facebook.
Unlike Fred, i think that gifts make a lot more sense than identity purchases when it comes to micro-payments and social network sites. Sure, buying clothes in virtual systems makes sense, but what’s the value of paying to deck out your profile if the primary purpose of it is to enable communication? I think that for those who actively try to craft a public identity through profiles (celebrities and fame junkies), paying to make a cooler profile makes sense. But most folks are quite content with the crap that they can do for free and i don’t see them paying money to get more fancified backgrounds when they can copy/paste. That said, i think it’s very interesting when you can pay to affect someone else’s profile. I think it’s QQ where you can pay to have a donkey shit on your friend’s page and then they have to pay to clean it up. This prankster “gift” has a lot of value. It becomes a game within the system and it bonds two people together.
In a backchannel conversation, Fred argues with me that digital gifts will have little value because they only make people look good for a very brief period. They do not have the same type of persistence as identity-driven purchases like clothing in WoW. I think that it is precisely this ephemeralness that will make gifts popular. There are times for gift giving (predefined by society). Individuals’ reaction to this is already visible on social network sites comments. People write happy birthday and send glitter for holidays (a.k.a. those animated graphical disasters screaming “happy valentine’s day!”). These expressions are not simply altruistic kindness. By publicly performing the holiday or birthday, the individual doing the expression looks good before hir peers. It also prompts reciprocity so that one’s own profile is then also filled with validating comments. Etc. Etc. (If interested in gifting, you absolutely must read the canon: Marcel Mauss’ “The Gift”.)
Like Fred, i too have an issue with the economic structure of Facebook Gifts, but it’s not because i think that $1 is too expensive. Gifts are part of status play. As such, there are critical elements about gift giving that must be taken into consideration. For example, it’s critical to know who gifted who first. You need to know this because it showcases consideration. Look closely at comments on MySpace and you’ll see that timing matters; there’s no timing on Facebook so you can’t see who gifted who first and who reciprocated. Upon receipt of a gift, one is often required to reciprocate. To handle being second, people up the ante in reciprocating. The second person gives something that is worth more than the first. This requires having the ability to offer more; offering two of something isn’t really the right answer - you want to offer something of more value. All of Facebook’s gifts are $1 so they are all equal. Value, of course, doesn’t have to be about money. Scarcity is quite valuable. If you gift something rare, it’s far more desired than offering a cheesy gift that anyone could get. This is why the handmade gift matters in a culture where you can buy anything.
I don’t think Facebook gifts - in its current incarnation - is sustainable. You can only gift so many kisses and rainbows before it’s meaningless. And what’s the point of paying $1 for them (other than to help the fight against breast cancer)? $1 is nothing if the gift is meaningful, but the 21 gift options will quickly lose meaning. It’s not just about dropping the price down to 20 cents. It’s about recognizing that gifting has variables that must be taken into account.
People want gifts. And they want to give gifts. Comments (or messages on the wall) are a form of gifting and every day, teens and 20-somethings log in hoping that someone left a loving comment. (And all the older folks cling to their Crackberries with the same hope.) It’s very depressing to log in and get no love.
I think that Facebook is right-on for making a gifting-based offering, but i think that to make it work long-term, they need to understand gifting a bit better. It’s about status. It’s about scarcity. It’s about reciprocity and upping the ante. These need to be worked into the system, and evolving this will make Facebook look good, not like they are backpedaling. This is not about gifting being a one-time rush; it’s about understanding the social structure of gifting.
Wikipedia’s policy of neutrality sometimes forces resolution when we’d rather have debate. Yes, competing sides get represented in the articles, and the discussion pages let us hear people arguing their points, but the arguments themselves are treated as stations on the way to neutral agreement.
So, there’s room for additional approaches that take the arguments themselves as their topics. That’s what Debatepedia.org does, and it looks like it’s on its way to being really useful.
Like Wikipedia, anyone can edit existing content. Unlike Wikipedia, its topics are all up for debate. Each topic presents both sides, structured into sub-questions, with a strong ethos of citation, factuality, and lack of flaming; the first of its Guiding Principles is “No personal opinion.” Rather, it attempts to present the best case and best evidence for each side.
Debatepedia limits itself to topics with yes-no alternatives and with clear pro and con cases. To start a debate, a user has to propose it and the editors (who seem to be the people who founded it…I couldn’t find info about them on the site) have to accept it. This keeps people from proposing stupid topics and boosts the likelihood that if you visit a listed debate, you’ll find content there. It also limits discussion to topics that have two and only two sides, which may turn out to be a serious limitation. But, we’ll see. And it can adapt as required.
Will Debatepedia take off? Who the hell knows. But it’s a welcome addition to the range of experiments in pulling ourselves together.
In the tech circles in which i run, the term “walled gardens” evokes a scrunching of the face if not outright spitting. I shouldn’t be surprised by this because these are the same folks who preach the transparent society as the panacea. But i couldn’t help myself from thinking that this immediate revulsion is obfuscating the issue… so i thought i’d muse a bit on walled gardens.
Walled gardens are inevitably built out of corporate greed - a company wants to lock in your data so that you can’t move between services and leave them in the dust. They make money off of your eyeballs. They make money off of your data. (In return, they often provide you with “free” services.) You put blood, sweat, and tears - or at least a little bit of time - into providing them with valuable data and you can’t get it out when you decide you’ve had enough. If this were the full story, of course walled gardens look foul to the core.
The term “walled garden” implies that there is something beautiful being surrounded by walls. The underlying assumption is that walls are inherently bad. Yet, walls have certain value. For example, i’m very appreciative of walls when i’m having sex. I like to keep my intimate acts intimate and part of that has to do with the construction of barriers that prevent others from accessing me visually and audibly. I’m not so thrilled about tearing down all of the walls in meatspace. Walls are what allow us to construct a notion of “private” and, even more importantly, contextualized publics. Walls help contain the social norms so that you know how to act properly within their confines, whether you’re at a pub or in a classroom.
One of the challenges online is that there really aren’t walls. What walls did exist came tumbling down with the introduction of search. Woosh - one quick query and the walls that separated comp.lang.perl from alt.sex.bondage came crashing down. Before search (a.k.a. Deja), there were pseudo digital walls. Sure, Usenet was public but you had to know where the door was to enter the conversation. Furthermore, you had to care to enter. There are lots of public and commercial places i pass by every day that i don’t bother entering. But, “for the good of all humankind”, search came to pave the roads and Arthur Dent couldn’t stop the digital bulldozer.
We’re living with the complications of no walls online. Determining context is really really hard. Is your boss really addressing you when he puts his pic up on Match.com? Does your daughter take your presence into consideration when she crafts her MySpace? No doubt it’s public, but it’s not like any public that we’re used to in meatspace.
For a long time, one of the accidental blessings of walled gardens was that they kept out search bots as part of their selfish data retention plan. This meant that there were no traces left behind of people’s participation in walled gardens when they opted out - no caches of previous profiles, no records of a once-embarrassing profile. Much to my chagrin, many of the largest social network sites (MySpace, LinkedIn, Friendster, etc.) have begun welcoming the bots. This makes me wonder… are they really walled gardens any longer? It sounds more like chain linked fences to me. Or maybe a fishbowl with a little plastic castle.
What does it mean when the supposed walled gardens begin allowing external sites to cache their content?
[tangent] And what on earth does it mean that MySpace blocks the Internet Archive in its robots.txt but allows anyone else? It’s like they half-realize that posterity might be problematic for profiles, but fail to realize that caches of the major search engines are just as freaky. Of course, to top it off, their terms say that you may not use scripts on the site - isn’t a bot a script? The terms also say that participating in MySpace does not give them a license to distribute your content outside of MySpace - isn’t a Google cache of your profile exactly that? [end tangent]
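For reference, a robots.txt that behaves the way described above would look something like this (ia_archiver is the Internet Archive’s crawler; the rest of the file is an illustrative reconstruction, not MySpace’s actual file):

```
# Block the Internet Archive's crawler...
User-agent: ia_archiver
Disallow: /

# ...while letting every other bot cache everything.
User-agent: *
Disallow:
```

Two lines to deny posterity, two lines to welcome the search engines’ caches. The asymmetry is the whole tangent in miniature.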
Can we really call these sites walled gardens if the walls are see-through? I mean, if a search bot can grab your content for cache, what’s really stopping you from doing so? Most tech folks would say that they are walled gardens because there are no tools to support easy export. Given that thousands of sites have popped up to provide codes for you to turn your MySpace profile into a dizzy display of animated daisies with rainbow hearts fluttering from the top (while inserting phishing scripts), why wouldn’t there be copy/pastable code to let you export/save/transfer your content? Perhaps people don’t actually want to do this. Perhaps the obsessive personal ownership of one’s content is nothing more than a fantasy of the techno-elite (and the businessmen who haven’t yet managed to lock you in to their brainchild). I mean, if you’re producing content into a context, do you really want to transfer it wholesale? I certainly don’t want my MySpace profile displayed on LinkedIn (even if there are no nude photos there).
For all of this rambling, perhaps i should just summarize into three points:
If walls have value in meatspace, why are they inherently bad in mediated environments? I would argue that walls provide context and allow us to have some control over the distribution of our expressions. Walls should be appreciated, even if they are near impossible to construct.
If robots can run around grabbing the content of supposed walled gardens, are they really walled? It seems to me that the tizzy around walled gardens fails to recognize that those most interested in caching the data (::cough:: Google) can do precisely that. And those most interested does not seem to include the content producers.
If the walls come crashing down, what are we actually losing? Walls provide context, context is critical for individuals to properly express themselves in a socially appropriate way. I fear that our loss of walls is resulting in a very confused public space with far more visibility than anyone can actually handle.
Basically, i don’t think that walled gardens are all that bad. I think that they actually provide a certain level of protection for those toiling in the mud. The problem is that i think that we’ve torn down the walls of the supposed walled gardens and replaced them with chain links or glass. Maybe even one-way glass. And i’m not sure that this is such a good thing. ::sigh::
So, what am i missing? What don’t i understand about walled gardens?
Technorati has a new feature that’s only slightly confusing but very interesting and potentially quite useful. (Disclosure: I’m on Technorati’s board of advisors.)
It’s called “WTF,” which technically stands for “Where’s the Fire,” but has another more likely meaning. (David Isenberg named one of his conferences “WTF” and then had a contest to decide what it stood for.) So, if you go to Technorati and take a look at the Top Searches in the upper right, to the left of each entry there’s an orange flame. Don’t click on it yet because the page it takes you to is confusing. Instead, click on one of the searches. At the moment, “Boston Mooninites” is the top search. Click on it to go to the search results page. The top result is not a result at all. It’s got a flame icon next to it, indicating that it’s actually the WTF about the phrase “Boston Mooninites.” It’s an explanation of what that phrase means and why people are searching on it now. Who wrote it? Anybody who wants to. So now click on the flame icon. It takes you to the same page you would have gotten to if you had clicked on the flame icon in the Top Searches list on the home page.
Ok, so now you’re on the WTF page for “Boston Mooninites.” Note that this is not the search results page. It’s where you get to create your own WTF for that search query. Or you can vote on the existing ones; the one with the most votes is featured on the search results page for the query.
It’ll be very interesting to see how this develops. For example, the current top WTF for Windows Vista is a product review, not a neutral explanation. (I’m not complaining.) Many of the WTFs on the Vista list are responses to previous ones, as if WTFs were a discussion board, probably an artifact of the layout of the WTF page.
Introduction: This post is an experiment in synchronization. Since Henry Jenkins, Beth Coleman, and I are all writing about Second Life and because we like each other’s work, even when (or especially when) we disagree, we’ve decided to all post something on Second Life today. Beth’s post will appear at http://www.projectgoodluck.com/blog/, and Henry’s is at http://www.henryjenkins.org/.
Let me start with some background. Because of the number of themes involved in discussions of Second Life, it’s easy to end up talking at different levels of abstraction, so let me start with two core assertions, things that I take as background to my part of the larger conversation:
First, Linden’s Residents figures are methodologically worthless. Any claim about Second Life derived from a count of Residents is not to be taken seriously, and anyone making claims about Second Life based on those figures is to be regarded with skepticism. (Explanation here and here.)
Second, there are many interesting things going on in Second Life. As I have said in other forums, and will repeat here, passionate users are a law unto themselves, and rightly so. Nothing I could say about their experience in Second Life, pro or con, would matter to those users. My concerns are demographic.
With those assertions covered, I am asking myself two things: will Second Life become a platform for a significant online population? And, second, what can Second Life tell us about the future of virtual worlds generally?
Concerning popularity, I predict that Second Life will remain a niche application, which is to say an application that will be of considerable interest to a small percentage of the people who try it. Such niches can be profitable (an argument I made in the Meganiche article), but they won’t, by definition, appeal to a broad cross-section of users.
The logic behind this belief is simple: most people who try Second Life don’t like it. Something like five out of six new users abandon it before a month is up. The three month abandonment figure seems to be closer to nine out of ten. (This figure is less firm, as it has only been reported colloquially, with no absolute numbers behind it.)
More importantly, the current active population is still an unknown. (Call this metric something like “How many users in the last 30 days have accounts more than 30 days old?”) We know the highest that figure could be is in the low hundreds of thousands, but no one other than the Lindens (and, presumably, their bigger marketing clients) knows how much lower it is than this theoretical maximum.
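That metric is simple enough to sketch directly. A minimal illustration in Python, assuming each user record carries `created` and `last_active` timestamps (the field names and data are hypothetical, not Linden’s actual schema):

```python
from datetime import datetime, timedelta

def active_population(users, now, window_days=30):
    """Count users active in the last `window_days` whose accounts are
    older than `window_days` -- i.e. returning users, not fresh signups."""
    window = timedelta(days=window_days)
    return sum(
        1 for u in users
        if now - u["last_active"] <= window   # active recently
        and now - u["created"] > window        # but not a new account
    )

now = datetime(2007, 2, 1)
users = [
    # returning user: old account, active this month -> counted
    {"created": datetime(2006, 6, 1), "last_active": datetime(2007, 1, 20)},
    # fresh signup: active, but the account is only a week old -> excluded
    {"created": datetime(2007, 1, 25), "last_active": datetime(2007, 1, 30)},
    # lapsed user: old account, last seen in August -> excluded
    {"created": datetime(2006, 6, 1), "last_active": datetime(2006, 8, 1)},
]
print(active_population(users, now))  # 1
```

The point of the two-sided filter is exactly the one above: gross Residents counts conflate all three kinds of user, and only the first kind tells you anything about retention.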
The poor adoption rate is a form of aggregate judgment. Anything bruited for wide adoption would have trouble with 85%+ abandonment, whether software or toothpaste. One possible explanation for this considerable user defection might be a technological gap. I do not doubt that improvements to the client and server would decrease the abandonment rate. I do doubt the improvement would be anything other than incremental, given 5 years and tens of millions in effort already.
Note too that abandonment is not a problem that all visually traversable spaces suffer from. Both Doom and Cyworld serve as counter-examples; in those cases, the rendering is cartoonish, yet both platforms achieved huge popularity in a short period. If the non-visual experience is good, the rendering does not need to be, but the converse does not seem to be true, on present evidence.
There have been two broad responses to skepticism occasioned by the Linden population numbers. (Three, if you count ad hominem, but Chris Lott has already covered that.)
The first response is not specific to Second Life. Many people have recalled earlier instances of misguided skepticism about new technologies, but the logical end-case of that thought is that skepticism about technology is never appropriate. (Disconfirmation of this thesis is left as an exercise for the reader.) Given that most new technologies fail, the challenge is to figure out which ones won’t. No one has noted examples of software with 85% abandonment rates, after five years of development, that went on to become widespread. Such examples may exist, but I can’t think of any.
The second objection is a conviction that demographics are irrelevant, and that the interesting goings-on in Second Life are what matters, no matter how few users are engaged in those activities.
I have never doubted (and have explicitly noted above) that there are interesting things happening in Second Life. The mistake, from my point of view, is in mixing two different questions. Whether some people like Second Life a lot is a completely separate issue from whether a lot of people like it. It is possible for the first assertion to be true and the second one false, and this is the only reading I believe is supported by the low absolute numbers and high abandonment rates. Nor is this an unusual case. We have several examples of platforms with fascinating in-world effects (Alphaworld, Black Sun/Blaxxun, The Palace, Dreamscape, LambdaMOO and environments on the SuperMOO List, etc.), all of which also failed to achieve wide use.
It is here that assertions about Second Life have most often been inconsistent. Before the uselessness of Linden’s population numbers was widely understood, the illusion of a large and rapidly growing community was touted as evidence of Second Life’s success. When both the absolute numbers and growth turned out to be more modest, population was downgraded and other metrics have been introduced as predictive of Second Life’s inevitable success.
A hypothesis which is strengthened by evidence of popularity, but not weakened by evidence of unpopularity, isn’t really a hypothesis, it’s a religious assertion. And a core tenet of the faithful seems to be that claims about Second Life are buttressed by the certain and proximate arrival of virtual worlds generally.
If we had but worlds enough and time…
It is worth pausing at this junction. Many people writing about Second Life make little distinction between ‘Second Life as a particular platform’ and ‘Second Life as an exemplar of the coming metaverse’. I would like to buck this trend, by explicitly noting the difference between those two conversations. I am basing my prediction of continued niche status for Second Life on the current evidence that most people who try it don’t like it. My beliefs about virtual worlds, on the other hand, are more conjectural. Everything below should be read with this caveat in mind.
With that said, I don’t believe that “virtual worlds” describes a coherent category, or, put another way, I believe that the group of things lumped together as virtual worlds have such variable implementations and user adoption rates that they are not well described as a single conceptual group.
I alluded to Pointcast in an earlier article; one of the ways the comparison is apt is in the abuse of categorization as a PR tool. Pointcast’s management claimed that email, the Web, and Pointcast all were about delivering content, and that the future looked bright for content delivery platforms. And indeed it did, except for Pointcast.
The successes of email and of the Web were better explained by their particular utilities than by their membership in a broad class of “content delivery.” Pointcast tried to shift attention from those particularities to a generic label in order to create a club in which it would automatically be included.
I believe a similar thing happens whenever Second Life is lumped with Everquest, World of Warcraft, et al., into a category called virtual worlds. If we accept the validity of this category, then multi-player games provide an existence proof of millions-strong virtual worlds, and the only remaining question is simply when we arrive at wider adoption of more general-purpose versions.
If, on the other hand, we don’t start off by lumping Second Life with Warcraft as virtual worlds, a very different question emerges: why do virtual game worlds outperform non-game worlds in their adoption? This pattern is quite stable over time, and it well predates Second Life and World of Warcraft: first Ultima Online (1997) and then Everquest (1999) each quickly dwarfed the combined populations of Alphaworld and Black Sun (later Blaxxun), despite the significant lead times of those virtual worlds. What is it about games that would make them a better fit for virtual environments than non-games?
Games have at least three advantages other virtual worlds don’t. First, many games, and most social games, involve an entrance into what theorists call the magic circle, an environment whose characteristics include simplified and knowable rules. The magic circle saves the game from having to live up to expectations carried over from the real world.
Second, games are intentionally difficult. If all you knew about golf was that you had to get this ball in that hole, your first thought would be to hop in your cart and drive it over there. But no, you have to knock the ball in, with special sticks. This is just about the stupidest possible way to complete the task, and also the only thing that makes golf interesting. Games create an environment conducive to the acceptance of artificial difficulties.
Finally, and most relevant to visual environments, our ability to ignore information from the visual field when in pursuit of an immediate goal is nothing short of astonishing (viz. the gorilla experiment.) The fact that we could clearly understand spatial layout even in early and poorly rendered 3D environments like Quake has much to do with our willingness to switch from an observational Architectural Digest mode of seeing (Why has this hallway been accessorized with lava?) to a task-oriented Guns and Ammo mode (Ogre! Quad rocket for you!)
In this telling, games are not just special, they are special in a way that relieves designers of the pursuit of maximal realism. There is still a premium on good design and playability, but the magic circle, acceptance of arbitrary difficulties, and goal-directed visual filtering give designers ways to contextualize or bury at least some platform limitations. These are not options available to designers of non-game environments; asking users to accept such worlds as even passable simulacra subjects those environments to withering scrutiny.
We can also reverse this observation. One question we might ask about successful non-game uses of virtual worlds is whether they too are special cases. One obvious example is erotic imagery. The zaftig avatar has been a trope of 3D rendering since designers have been able to scrape together enough polygons to model a torso, but examples start far earlier than virtual worlds. In fact, visual representation of voluptuous womanhood predates the invention of agriculture by the same historical interval as agriculture predates the present. This is a deep pattern.
It is also a pattern that, like games and unlike ordinary life, has a special relation to visual cues (though this effect is somewhat unbalanced by gender.) If someone is shown a virtual hamburger, it can arouse real hunger. However, to satisfy this hunger, he must then walk away from the image and get his hands on an actual hamburger. This is not the case, to put the matter delicately, with erotic imagery; a fetching avatar can arouse desire, but that desire can then be satiated without recourse to the real.
This pair of characteristics — a human (and particularly male) fixation on even poorly rendered erotic images, plus an ability to achieve a kind of gratification in the presence of those images — means that a sexualized rendering can create both attraction and satisfaction in a way that a rendering of, say, a mountain or an office cannot. As with games, visual worlds work in the context of eros not because the images themselves are so convincing, but because they reach a part of the brain that so desperately wants to be convinced.
More generally, I suspect that the cases where 3D immersion works are, and will continue to be, those uses that most invite the mind to fill in or simply do without missing detail, whether because of a triggering of sexual desire, the fight or flight reflex (many games), avarice (gambling), or other areas where we are willing and even eager to make rapid inferences based on a paucity of data. I also assume that these special cases are not simply adding up to a general acceptance of visual immersion, and that finding another avatar beguiling in a virtual bar is not in fact a predictor of being able to read someone’s face or body language in a virtual meeting as if you were with them. That, I believe, is a neurological problem of a different order.
Jaron Lanier is the Charles Babbage of Our Generation
Here we arrive at the furthest shores of speculation. One of the basic promises of virtual reality, at least in its Snow Crash-inflected version, is that we will be able to re-create the full sense of being in someone’s presence in a mediated environment. This desire, present at least since Shamash appeared to Gilgamesh in a dream, can be re-stated in technological terms as a hope that communications will finally become an adequate substitute for travel. We have been promised that this will come to pass with current technology since AT&T demoed a video phone at the 1964 World’s Fair.
I believe this version of virtual reality will in fact be achieved, someday. I do not, however, believe that it will involve a screen. Trying to trick the brain by tricking the eyes is a mug’s game. The brain is richly arrayed with tools to detect and unmask visual trickery — if the eyes are misreporting, the brain falls back on other externally focussed senses like touch and smell, or internally focussed ones like balance and proprioception.
Though the conception of virtual reality is clear, the technologies we have today are inadequate to the task. In the same way that the theory of computation arose in the mechanical age, but had to wait first for electrics and then electronics to be fully realized, general purpose virtual reality is an idea waiting on a technology, and specifically on neural interface, which will allow us to trick the brain by tricking the brain. (The neural interface in turn waits on trifling details like an explanation of consciousness.)
In the meantime, the 3D worlds program in the next decade is likely to resemble the AI program in the last century, where early optimism about rapid progress on general frameworks gave way to disconnected research topics (machine vision, natural language processing) and ‘toy worlds’ environments. We will continue to see valuable but specific uses for immersive environments, from flight training and architectural flythroughs to pain relief for burn victims and treatment for acrophobia. These are all indisputably good things, but they are not themselves general, and more importantly don’t suggest rapid progress on generality. As a result, games will continue to dominate the list of well-populated environments for the foreseeable future, rendering ineffectual the category of virtual worlds, and, critically, many of the predictions being attached thereunto.
[We’ve been experiencing continuing problems with our MT-powered commenting system. We’re working on a fix but for now send you to a temporary page where the discussion can continue.]
Intro: I was part of a group of people asked by Beth Noveck to advise the Community Patent review project about the design of a reputation and ranking system, to allow the widest possible input while keeping system gaming to a minimum. This was my reply, edited slightly for posting here.
We’ve all gone to school on the moderation and reputation systems of Slashdot and eBay. In those cases, their growing popularity in the period after their respective launches led to a tragedy of the commons, where open access plus incentives led to nearly constant attack by people wanting to game the system, whether to gain attention for themselves or their point of view in the case of Slashdot, or to defraud other users, as with eBay.
The traditional response to these problems would have been to hire editors or other functionaries to police the system for abuse, in order to stem the damage and to assure ordinary users you were working on their behalf. That strategy, however, would fail at the scale and degree of openness at which those services function. The Slashdot FAQ tells the story of trying to police the comments with moderators chosen from among the userbase, first 25 of them and later 400. Like the Charge of the Light Brigade, however, even hundreds of committed individuals were just cannon fodder, given the size of the problem. The very presence of effective moderators made the problem worse over time. In a process analogous to more roads creating more traffic, the improved moderation saved the site from drowning in noise, so more users joined, but this increase actually made policing the site harder, eventually breaking the very system that made the growth possible in the first place.
EBay faced similar, ugly feedback loops; any linear expenditure of energy required for policing, however small the increment, would ultimately make the service unsustainable. As a result, the only opportunity for low-cost policing of such systems is to make them largely self-policing. From these examples and others we can surmise that large social systems will need ways to highlight good behavior or suppress negative behavior or both. If the guardians are to guard themselves, oversight must be largely replaced by something we might call intrasight, designed in such a way that imbalances become self-correcting.
The obvious conclusion to draw is that, when contemplating a new service with these characteristics, the need for some user-harnessed reputation or ranking system can be regarded as a foregone conclusion, and that these systems should be carefully planned so that tragedy of the commons problems can be avoided from launch. I believe that this conclusion is wrong, and that where it is acted on, its effects are likely to be at least harmful, if not fatal, to the service adopting them.
There is an alternate reading of the Slashdot and eBay stories, one that I believe better describes those successes, and better places Community Patent to take advantage of similar processes. That reading concentrates not on outcome but process; the history of Slashdot’s reputation system should teach us not “End as they began — build your reputation system in advance” but rather “Begin as they began — ship with a simple set of features, watch and learn, and implement reputation and ranking only after you understand the problems you are taking on.” In this telling, constituting users’ relations as a set of bargains developed incrementally and post hoc is more predictive of eventual success than simply adopting any residue from previous successes.
As David Weinberger noted in his talk The Unspoken of Groups, clarity is violence in social settings. You don’t get 1789 without living through 1788; successful constitutions, which necessarily create clarity, are typically ratified only after a group has come to a degree of informal cohesion, and is thus able to absorb some of the violence of clarity, in order to get its benefits. The desire to participate in a system that constrains freedom of action in support of group goals typically requires that the participants have at least seen, and possibly lived through, the difficulties of unfettered systems, while at the same time building up their sense of membership or shared goals in the group as a whole. Otherwise, adoption of a system whose goal is precisely to constrain its participants can seem too onerous to be worthwhile. (Again, contrast the US Constitution with the Articles of Confederation.)
Most current reputation systems have been fit to their situation only after that situation has moved from theoretical to actual; both eBay and Slashdot moved from a high degree of uncertainty to largely stable systems after a period of early experimentation. Perhaps surprisingly, this has not committed them to continual redesign. In those cases, systems designed after launch, but early in the process of user adoption, have survived to this day with only relatively minor subsequent adjustments.
Digg is the important counter-example, the most successful service to date to design a reputation system in advance. Digg differs from the community patent review process in that the designers of Digg had an enormous amount of prior art directly in its domain (Slashdot, Kuro5hin, Metafilter, et al), and still ended up with serious re-design issues. More speculatively, Digg seems to have suffered more from both system gaming and public concern over its methods, possibly because the lack of organic growth of its methods prevented it from becoming legitimized over time in the eyes of its users. Instead, they were asked to take it or leave it (never a choice users have been known to relish.)
Though more reputation design work may become Digg-like over time, in that designers can launch with systems more complete than eBay or Slashdot did, the ability to survey significantly similar prior art, and the ability to adopt a fairly high-handed attitude towards users who dislike the service, are not luxuries the community patent review process currently enjoys.
The Argument in Two Pictures
The argument I’m advancing can be illustrated with two imaginary graphs. The first concerns plasticity, the ease with which any piece of software can be modified.
Plasticity generally decays with time. It is highest in the early parts of the design phase, when a project is in its most formative stages. It is easier to change a list of potential features than a set of partially implemented features, and it is easier to change partially implemented features than fully implemented features. Especially significant is the drop in plasticity at launch; even for web-based services, which exist only in a single instantiation and can be updated frequently and for all users at once, the addition of users creates both inertia, in the direction of not breaking their mental model of the service, and caution in upgrading, so as not to introduce bugs or create downtime in a working service. As the userbase grows, the expectations of the early adopters harden still further, while the expectations of new users follow the norms set up by those adopters; this is particularly true of any service with a social component.
An obvious concern with reputation systems is that, as with any feature, they are easier to implement when plasticity is high. Other things being equal, one would prefer to design the system as early as possible, and certainly before launch. In the current case, however, other things are not equal. In particular, the specificity of information the designers have about the service and how it behaves in the hands of real users moves counter to plasticity over time.
When you are working to understand the ideal design for a particular piece of software, the specificity of your knowledge increases with time. During the design phase, the increasing concreteness of the work provides concomitant gains in specificity, but nothing increases specificity like launch. No software, however perfect, survives first contact with the users unscathed, and given the unparalleled opportunities with web-based services to observe user behavior — individually and in bulk, in the moment and over time — the period after launch increases specificity enormously, after which it continues to rise, albeit at a less torrid pace.
There is a tension between knowing and doing; in the absence of the ideal scenario where you know just what needs to be done while enjoying complete freedom to do it (and a pony), the essential tradeoff is in understanding which features benefit most from increased specificity of knowledge. Two characteristics that will tend to push the ideal implementation window to post-launch are when a set of possible features is very large, but the set of those features that will ultimately be required is small; and when culling the small number of required features from the set of all possible features can only be done by observing actual users. I believe that both conditions apply a fortiori to reputation and ranking.
Costs of Acting In Advance of Knowing
Consider the costs of designing a reputation system in advance. In addition to the well-known problems of feature-creep (“Let’s make it possible to rank reputation rankings!”) and Theory of Everything technologies (“Let’s make it Semantic Web-compliant!”), reputation systems create an astonishing perimeter defense problem. The number of possible threats you can imagine in advance is typically much larger than the number that manifest themselves in functioning communities. Even worse, however large the list of imagined threats, it will not be complete. Social systems are degenerate, which is to say that there are multiple alternate paths to similar goals — someone who wants to act out and is thwarted along one path can readily find others.
As you will not know which of these ills you will face, the perimeter you will end up defending will be very large and, critically, hard to maintain. The likeliest outcome from such an a priori design effort is inertness; a system designed in advance to prevent all negative behavior will typically have as a side effect deflecting almost all behavior, period, as users simply turn away from adoption.
Working social systems are both complex and homeostatic; as a result, any given strategy for mediating social relations can only be analyzed in the context of the other strategies in use, including strategies adopted by the users themselves. Since the user strategies cannot, by definition, be perfectly predicted in advance, and since the only ungameable social system is the one that doesn’t ship, every social system will have some weakness. A system designed in advance is likely to be overdefended while still having serious weaknesses unknown to the designer, because the discovery and exploitation of that class of weakness can only occur in working, which is to say user-populated, systems. (As with many observations about the design of social systems, these are precedents first illustrated in Lessons from Lucasfilm’s Habitat, in the sections “Don’t Trust Anybody” and “Detailed Central Planning Is Impossible, Don’t Even Try”.)
The worst outcome of such a system would be collapse (the Communitree scenario), but even the best outcome would still require post hoc design to fix the system with regard to observed user behavior. You could save effort while improving the possibility of success by letting yourself not know what you don’t know, and then learning as you go.
In Favor of Instrumentation Plus Attention
The N-squared problem is only a problem when N is large; in most social systems the users are the most important N, and the userbase only grows large gradually, even for successful systems. (Indeed, this scaling up only over time typically provides the ability for a core group, once they have self-identified, to inculcate new users a bit at a time, using moral suasion as their principal tool.) As a result, in the early days of a system, the designers occupy a valuable point of transition, after user behavior is observable, but before scale and culture defeat significant intervention.
To take advantage of this designable moment, I believe that what Community Patent needs, at launch, is only this: metadata, instrumentation, and attention.
Metadata: There are, I believe, three primitive types of metadata required for Community Patent — people, patents, and interjections. Each of these will need some namespace to exist in — identity for the people, and named data for the patents themselves and for various forms of interjection, from simple annotation to complex conversation. In addition, two abstract types are needed — links and labels. A link is any unique pair of primitives — this user made that comment, this comment is attached to that conversation, this conversation is about those patents. All links should be readily observable and extractable from the system, even if they are not exposed in the interface the user sees. Finally, following Schachter’s intuition from del.icio.us, all links should be labelable. (Another way to view the same problem is to see labels as another type of interjection, attached to links.) I believe that this will be enough, at launch, to maximize the specificity of observation while minimizing the loss of plasticity.
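The primitives above are deliberately spare, and a sketch makes that concreteness visible. This is a hypothetical Python rendering of the scheme, not a proposed implementation for Community Patent; the class and field names are mine:

```python
from dataclasses import dataclass

# Three primitive types, each living in its own namespace of IDs.
@dataclass(frozen=True)
class Person:
    person_id: str

@dataclass(frozen=True)
class Patent:
    patent_id: str

@dataclass(frozen=True)
class Interjection:
    interjection_id: str
    kind: str  # "annotation", "comment", "conversation", ...

# A link is any unique pair of primitives, identified here by their IDs.
@dataclass(frozen=True)
class Link:
    source_id: str
    target_id: str

class LinkStore:
    """All links are observable and extractable; every link is labelable."""
    def __init__(self):
        self.labels = {}  # Link -> set of labels

    def connect(self, a, b, label=None):
        link = Link(a, b)
        self.labels.setdefault(link, set())
        if label is not None:
            self.labels[link].add(label)
        return link

store = LinkStore()
store.connect("user:alice", "comment:17")                      # this user made that comment
store.connect("comment:17", "patent:US123", label="prior-art")  # this comment is about that patent
```

Note that the labels here play the del.icio.us role described above: they are attached to links, so the same primitives can carry different meanings in different relationships without new schema.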
Instrumentation: As we know from collaborative filtering algorithms from Ringo to PageRank, it is not necessary to ask users to rank things in order to derive their rankings. The second necessary element will be the automated delivery of as many possible reports to the system designers as can be productively imagined, and, at least as essential, a good system for quickly running ad hoc queries, and automating their production should they prove fruitful. This will help identify both the kinds of productive interactions on the site that need to be defended and the kinds of unproductive interactions they need to be defended from.
Designer Attention: This is the key — it will be far better to invest in smart people watching the social aspects of the system at launch than in smart algorithms guiding those aspects. If we imagine the moment when the system has grown to an average of 10 unique examiners per patent and 10 comments per examiner, then a system with even a thousand patents will be relatively observable without complex ranking or reputation systems, as both the users and the comments will almost certainly exhibit power-law distributions. In a system with as few as ten thousand users and a hundred thousand comments, it will still be fairly apparent where the action is, allowing you the time between Patent #1 and Patent #1000 to work out what sorts of reputation and ranking systems need to be put in place.
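A toy simulation shows why a power-law distribution keeps even a mid-sized system observable by hand. The Zipf weighting below is an assumption for illustration, not Community Patent data, but it is the shape such systems almost always exhibit:

```python
# Assume user activity follows a Zipf distribution (weight ~ 1/rank).
N_USERS = 10_000
weights = [1 / rank for rank in range(1, N_USERS + 1)]
total = sum(weights)

# What share of all activity comes from the 100 most active users?
top_100_share = sum(weights[:100]) / total
print(f"top 1% of users account for {top_100_share:.0%} of activity")
```

Under this assumption the top hundred users generate roughly half the activity, which is why smart people watching the system can see "where the action is" long before any ranking algorithm is needed.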
This is a simplification, of course, as each of the categories listed above presents its own challenges — how should people record their identity? What’s the right balance between closed and open lists of labels? And so on. I do not mean to minimize those challenges. I do however mean to say that the central design challenge of user governance — self-correcting systems that do not impose crushing participation burdens on the users or crushing policing burdens on the hosts — is so hard to design in advance that, provided you have the system primitives right, the Boyd Strategy of OODA — Observe, Orient, Decide, Act — will be superior to any amount of advance design work.
LinkedIn is now enabling users to pose questions to their social network. Only members can respond, and the number of questions you can ask per month is limited. Interestingly, you’re only allowed to give one answer to any one question. As always, it’s those details that determine the shape of the society and its success. (Thanks for the pointer, Eric Scheid.)
I’ve been complaining about bad reporting of Second Life population for some time now. David Kirkpatrick at Fortune has finally gotten some signal out of Linden Labs. Kirkpatrick’s report is here, in the comments. (CNN.com comments don’t have permalinks, so scroll down.)
Here are the numbers Philip Rosedale of Linden gave him. These are, I presume, as of Jan 3:
1,525,670 unique people have logged into SL at least once (so now we know: Residents is seeing something a bit over 50% inflation over users.)
Of that number, 252,284 people have logged in more than 30 days after their account creation date.
Monthly growth in that figure, calculated as the change between last September and last October, was 23%.
Those of us who wanted the conversation to be grounded in real numbers owe Kirkpatrick our thanks for helping us get there.
These numbers should have two good effects. First, now that Linden has reported, and Kirkpatrick has published, the real figures, maybe we’ll see the press shift to reporting users and active users, instead of Residents.
Second, we’re no longer going to be asked to stomach absurd claims of size and growth. The ‘2.3 million user/77% growth in two months’ figures would have meant 70 million Second Life users this time next year. 250 thousand and 23% growth will mean 3 million in a year’s time, a healthy number, but not hyperbolic growth.
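The compounding behind those two projections is easy to check; the inputs below are the figures quoted above (77% growth compounds six times in a year, 23% monthly growth compounds twelve times):

```python
residents_now = 2_300_000   # the inflated 'Residents' figure
actives_now = 250_000       # users active 30+ days after signup

hyped = residents_now * 1.77 ** 6   # '77% growth in two months', for a year
sober = actives_now * 1.23 ** 12    # 23% monthly growth, for a year

print(f"hyped projection: {hyped / 1e6:.0f} million")   # roughly 70 million
print(f"sober projection: {sober / 1e6:.1f} million")   # roughly 3 million
```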
We can start asking more sophisticated questions now, like the use pattern of active users, or the change in monthly growth rates, or whether the Residents-users inflation rate is stable, but those questions are for later. Right now, we’ve got enough real numbers to think about for a while.
Disney is launching a social network for kids. My knee-jerk reaction: Yech.
Gavin O’Malley at Online Media Daily has a more considered reaction. He points to the apparent failure of Wal-Mart’s social network for kids (“The Hub”—an awfully grown-up name), and worries that having parental controls will kill the Disney effort as well. I agree with Gartner’s Andrew Frank that it’s likely to be all product placement all the time…and, if so, I hope kids reject it.
But, of course, I haven’t seen it and don’t know what it’ll be like. Maybe Disney is smarter than that.
Mark Cuban doesn’t understand television. He holds a belief, common to connoisseurs the world over, that quality trumps everything else. The current object of his faith in Qualität Über Alles is HDTV. Says Cuban:
HDTV is the Internet video killer. Deal with it. Internet bandwidth to the home places a cap on the quality and simplicity of video delivery to the home, and to HDTVs in particular. Not only does internet capacity create an issue, but the complexity of moving HDTV streams around the home and to the HDTV is pretty much a deal killer itself.
“HDTV is the Internet video killer.” The appeal of this argument — whoever provides the highest quality controls the market — is obvious. So obvious, in fact, that it’s been used before. By audiophiles.
As January 1, 2000 approaches, and the MP3 whirlpool continues to swirl, one simple fact has made me feel as if I’m stuck at the starting line of the entire download controversy: The sound quality of MP3 has yet to improve above that of the average radio broadcast. Until that changes, I’m merely curious—as opposed to being in the I-want-to-know-it-all-now frenzy that is my usual m.o. when it comes to anything that promises music you can’t get anywhere else. Robert Baird, October 1999
MP3s won’t catch on, because they are lower quality than CDs. And this was true, wasn’t it? People cared about audio quality so much that despite other advantages of MP3s (price, shareability, better integration with PCs), they’ve stayed true to the CD all these years. The commercial firms that make CDs, and therefore continue to control the music market, thank these customers daily for their loyalty.
Cuban doesn’t understand that television has been cut in half. The idea that there should be a formal link between the tele- part and the vision part has ended. Now, and from now on, the form of a video can be handled separately from its method of delivery. And since they can be handled separately, they will be, because users prefer it that way.
But Cuban goes further. He doesn’t just believe that, other things being equal, quality will win; he believes quality is so important to consumers that they will accept enormous inconvenience to get that higher-quality playback. When Cuban’s list of advantages of HDTV includes an inability to watch your own video on it (“the complexity of moving HDTV streams around the home and to the HDTV”), you have to wonder what he thinks a disadvantage would look like.
This is the season of the HDTV gotcha. After Christmas, people are starting to understand that they didn’t buy a nicer TV, they bought only one part of a Total Controlled Content Delivery Package. Got an HDTV monitor and a new computer for Christmas? You might as well have gotten a Fabergé Egg and a framing hammer for all the useful ways you can combine the two presents.
Media is a triathlon event. People like to watch, but they also like to create, and to share. Doubling down on the watching part while making it harder for the users to play their own stuff or share with their friends makes a medium worse in the users’ eyes. By contrast, the last 50 years have been terrible for user creativity and for sharing, so even moderate improvements in either of those abilities make the public go wild.
When it comes to media quality, people don’t optimize, they satisfice. Once the medium, whether audio or video or whatever, crosses a minimum threshold, users accept it and move on to caring about other attributes. The change in internet video quality from 1996 to 2006 was the big jump, and YouTube is the proof. After this, firms that offer higher social value for video will have an edge over firms that offer higher production values while reducing social value.
And because the audience for internet video will grow much faster than the audience for HDTV (and will be less pissed, because YouTube doesn’t rely on a ‘bait and switch’ walled garden play) the premium for making internet video better will grow with it. As Richard Gabriel said of programming languages years ago “[E]ven though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.” That’s where video is today. HDTV provides a better viewing experience than internet video, but many more people care about making internet video better than making HDTV better.
The US Food and Drug Administration has decided tentatively that meat and milk from cloned animals are the same as from normal animals, so it is not going to require those products to carry special labels.
It’s not that I think cloned food is dangerous. I’d still like the labels to note that the animals were cloned because more metadata is always good. If people don’t want to eat clones for whatever reason, they should be enabled to make that choice. In fact, we’d be better off with full access to the information about what we’re purchasing. Where was the cow raised? What was it fed? What was its weight? What was its body fat ratio? How old was it? Did it get to roam free? Did it have a sweet smile? What was its sign? We’re better off being able to access it all, no matter how farfetched.
But, because of the nature of non-digital reality, taking up label space with a notice that the meat is cloned would itself be metadata indicating that the government thinks such information is worth noting. Metadata in the physical world is a zero sum game.
And that means not only is it true that (as Clay says) “metadata is worldview” (or is that “metadata are worldview”?), but also that physical labels are politics. We are forced to make value-driven decisions by the constraints of the physical (labels take up valuable space), the biological (human eyes require fonts to be sized above a certain minimum) and the economic (it is not feasible to attach an almanac of information to every chicken wing). But online, all those limits go away…
…except for the economic. It would be expensive to do a cholesterol count for every slaughtered cow (assuming that cows have cholesterol) simply to gather information that so far nobody cares about, but there’s plenty of information that we’re gathering anyway or for which there is predictable interest—e.g., cloning—that we could make available online (via a unique identifier for each slab of flesh). There would still be politics in the decision about which information to put into the extended set, but it would be a more inclusive, bigger tent, allowing customers to decide according to their own cockamamie values.
And isn’t cockamamie consumerism what democracy is all about?
“Here at KingsRUs.com, we call our website our Kingdom, and any time our webservers serve up a copy of the home page, we record that as a Loyal Subject. We’re very pleased to announce that in the last two months, we have added over 1 million Loyal Subjects to our Kingdom.”
Put that baldly, you wouldn’t fall for this bit of re-direction, and yet that is exactly what Linden Labs has pulled off with its Residents™ label. By adopting a term that seems like a simple re-branding of “users”, but which is actually unconnected to head count or adoption, they’ve managed to report what the press wants to hear, while providing no actual information.
If you like your magic tricks to stay mysterious, leave now, but if you want to understand how Linden has managed to disable the fact-checking apparatus of much of the US business press, turning them into a zombie army of unpaid flacks, read on. (And, as with the earlier piece on Linden, this piece has also been published on Valleywag.)
The basic trick is to make it hard to remember that Linden’s definition of Resident has nothing to do with the plain meaning of the word resident. My dictionary says a resident is a person who lives somewhere permanently or on a long term basis. Linden’s definition of Residents, however, has nothing to do with users at all — it measures signups for an avatar. (Get it? The avatar, not the user, is the resident of Second Life.)
The obvious costume-party assumption is that there is one avatar per person, but that’s wrong. There can be more than one avatar per account, and more than one account per person, and there’s no public explanation of which of those units Residents measures, and thus no way to tell anything about how many actual people use Second Life. (An embarrassingly First Life concern, I know.)
Confused yet? Wait, there’s less! Linden’s numbers also suggest that the Residents figure includes even failed attempts to use the service. They reported adding their second million Residents between mid-October and December 14th, but they also reported just shy of 810 thousand logins for the same period. One million new Residents but only 810K logins leaves nearly 200K new Residents unaccounted for. Linden may be counting as Residents people who signed up and downloaded the client software, but who never logged in, or there may be some other reason for the mismatched figures, but whatever the case, Residents is remarkably inflated with regards to the published measure of use.
(If there are any actual reporters reading this and doing a big cover story on Linden, you might ask about how many real people use Second Life regularly, as opposed to Residents or signups or avatars. As I write those words, though, I realize I might as well be asking Business Week to send me a pony for my birthday.)
Like a push-up bra, Linden’s trick is as effective as it is because the press really, really wants to believe:
Professional journalists wrote those sentences. They work for newspapers and magazines that employ (or used to employ) fact-checkers. Yet here they are, supplementing Linden’s meager PR budget by telling their readers that Residents measures something it actually doesn’t.
This credulity appears even in the smallest items. I discovered the “Residents vs Logins” gap when I came across a Business 2.0 post by Erick Schonfeld, where he included the mismatched numbers while congratulating Linden on a job well done. When I asked the obvious question in the comments — How come there are fewer logins than new Residents in the same period? — I got a nice email from Mr. Schonfeld, complimenting me on a good catch.
Now I’m generally pretty enthusiastic about taking credit where it isn’t due, but this bit of praise failed to meet even my debased standards. The post was a hundred words long, and it had only two numbers in it. I didn’t have to use forensic accounting to find the discrepancy, I just used subtraction (an oft-overlooked tool in the journalistic toolkit, but surprisingly effective when dealing with numbers.)
This is the state of business reporting in an age when even the pros want to roll with the cool blogger kids. Got a paragraph that contains only two numbers, and they don’t match? No problem! Post it anyway, and on to the next thing.
The prize bit of PReporting so far, though, has to be Elizabeth Corcoran’s piece for Forbes called A Walk on the Virtual Side, where she claimed that Second Life had recently passed “a million unique customers.”
This is three lies in four words. There isn’t one million of anything human inhabiting Second Life. There is no one-to-one correlation between Residents and users. And whatever Residents does measure, it has nothing to do with paying customers. The number of paid accounts is in the tens of thousands, not the millions (and remember, if you’re playing along at home, there can be more than one account per person. Kits, cats, sacks, and wives, how many logged into St. Ives?)
Despite the credulity of the Fourth Estate (Classic Edition), there are enough questions being asked in the weblogs covering Second Life that the usefulness is going to drain out of the ‘Resident™ doesn’t mean resident’ trick over the next few months. We’re going to see three things happen as a result.
The first thing that’s going to happen, or rather not happen, is that the regular press isn’t going to go back over this story looking for real figures. As much as they’ve written about the virtual economy and the next net, the press hasn’t really covered Second Life as a business or tech story so much as a trend story. The sine qua non of trend stories is that a trend is fast-growing. The Residents figure was never really part of the story; it just provided permission to write about how crazy it is that all the kids these days are getting avatars. By the time any given writer was pitching that story to their editors, any skepticism about the basic proposition had already been smothered.
No journalist wants to have to write “When we told you that Second Life had 1.3 million members, we in no way meant to suggest that figure referred to individual people. Fortune regrets any misunderstanding.” And since no one wants to write that, no one will. They’ll shift their coverage without pointing out the shift to their readers.
The second thing that is going to happen is an increase in arguments of the form “We mustn’t let Linden’s numbers blind us to the inevitability of the coming metaverse.” That’s the way it is with things we’re asked to take on faith — when A works, it’s evidence of B, but if A isn’t working as well as everyone thought, it’s suddenly unrelated to B.
Finally, there is going to be a spike in the number of the posts claiming that the two million number was never important anyway, the press’s misreporting was all an innocent mistake, Linden was planning to call those reporters first thing Monday morning and explain everything. Tateru Nino has already kicked off this genre with a post entitled The Value of One. The flow of her argument is hard to synopsize, but you can get a sense of it from this paragraph:
So, a hundred thousand, one million, two million. Those numbers mean something to us, but not because they have intrinsic, direct meaning. They have meaning because they’re filtered through the media, disseminated out into the world, believed by people, who then act based on that belief, and that is where the meaning lies.
Expect more, much more, of this kind of thing in 2007.
The Public Library of Science has gone beta with PLoS ONE, a peer-reviewed journal that publishes everything that passes the review, not just what it considers to be important. So, if it’s good science about a nit, it’ll find a home at PLoS ONE.
Articles are all published under a Creative Commons Attribution License. It does, however, cost a scientist (or her institution) $1,250 to be published by PLoS ONE. This is, alas, an improvement over what traditional journals charge scientists. PLoS ONE will waive the fee for authors who don’t have the funds.
Readers can discuss and annotate the articles. But the site could really use tags ‘n’ feeds. Maybe after beta…
Lately, i’ve become very irritated by the immersive virtual questions i’ve been getting. In particular, “will Web3.0 be all about immersive virtual worlds?” Clay’s post on Second Life reminded me of how irritated i am by this. I have to admit that i get really annoyed when techno-futurists fetishize Stephenson-esque visions of virtuality. Why is it that every 5 years or so we re-instate this fantasy as the utopian end-all be-all of technology? (Remember VRML? That was fun.)
Maybe i’m wrong; maybe i’ll look back twenty years from now and be embarrassed by my lack of foresight. But honestly, i don’t think we’re going virtual.
There is no doubt that immersive games are on the rise and i don’t think that trend is going to stop. I think that WoW is a strong indicator of one kind of play that will become part of the cultural landscape. But there’s a huge difference between enjoying WoW and wanting to live virtually. There ARE people who want to go virtual and i wouldn’t be surprised if there are many opportunities for sustainable virtual environments. People who feel socially ostracized in meatspace are good candidates for wanting to go virtual. But again, that’s not everyone.
If you look at the rise of social tech amongst young people, it’s not about divorcing the physical to live digitally. MySpace has more to do with offline structures of sociality than it has to do with virtuality. People are modeling their offline social network; the digital is complementing (and complicating) the physical. In an environment where anyone could socialize with anyone, they don’t. They socialize with the people who validate them in meatspace. The mobile is another example of this. People don’t call up anyone in the world (like is fantasized by some wrt Skype); they call up the people that they are closest with. The mobile supports pre-existing social networks, not purely virtual ones.
That’s the big joke about the social media explosion. 1980s and 1990s researchers argued that the Internet would make race, class, gender, etc. extinct. There was a huge assumption that geography and language would no longer matter, that social organization would be based on some higher function. Guess what? When the masses adopted social media, they replicated the same social structures present in the offline world. Hell, take a look at how people from India are organizing themselves by caste on Orkut. Nothing gets erased because it’s all connected to the offline bodies that are heavily regulated on a daily basis.
While social network sites and mobile phones are technology to adults, they are just part of the social infrastructure for teens. Remember what Alan Kay said? “Technology is anything that wasn’t around when you were born.” These technologies haven’t been adopted as an alternative to meatspace; they’ve been adopted to complement it.
Virtual systems will be part of our lives, but i don’t think immersion is where it’s at. Most people are deeply invested in the physicality of life; this is not going away.
Update: to discuss this post, please join the conversation at apophenia.
Second Life is heading towards two million users. Except it isn’t, really. We all know how this game works, and has since the earliest days of the web:
Member of the Business Press: “How many users do you have?”
CEO of Startup: (covers phone) “Hey guys, how many rows in the ‘users’ table?”
[Sound F/X: Typing]
Offstage Sysadmin: “One million nine hundred and one thousand one hundred and seventy-three.”
CEO: (into phone) “We have one point nine million users.”
Someone who tries a social service once and bails isn’t really a user any more than someone who gets a sample spoon of ice cream and walks out is a customer.
So here’s my question — how many return users are there? We know from the startup screen that the advertised churn of Second Life is over 60% (as I write this, it’s 690,800 recent users out of 1,901,173 signups, i.e. only 36% recent, for roughly 63% churn.) That’s not stellar, but it’s not terrible either. However, their definition of “recently logged in” includes everyone from the last 60 days, even though the industry standard for reporting unique users is 30 days, so we don’t actually know what the apples-to-apples churn rate is.
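The arithmetic, for anyone who wants to check it (the inputs are the startup-screen numbers quoted above, using Linden’s 60-day window):

```python
recent_users = 690_800     # 'recently logged in', a 60-day window
signups = 1_901_173        # total signups ('Residents')

retention = recent_users / signups
churn = 1 - retention
print(f"retention: {retention:.1%}, churn: {churn:.1%}")  # ~36.3% / ~63.7%
```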
At a guess, Second Life churn measured in the ordinary way is in excess of 85%, with a surge of new users being driven in by the amount of press the service is getting. The wider the Recently Logged In reporting window is, the bigger the bulge of recently-arrived-but-never-to-return users that gets counted in the overall numbers.
I suspect Second Life is largely a “Try Me” virus, where reports of a strange and wonderful new thing draw the masses to log in and try it, but whose ability to retain more than a fraction of those users is limited. The pattern of a Try Me virus is a rapid spread of first-time users, most of whom drop out quickly, with most of the dropouts becoming immune to later use. Pointcast was a Try Me virus, as was LambdaMOO, the experiment that Second Life most closely resembles.
I have been watching the press reaction to Second Life with increasing confusion. Breathless reports of an Imminent Shift in the Way We Live® do not seem to be accompanied by much skepticism. I may have been made immune to the current mania by ODing on an earlier belief in virtual worlds:
Similar to the way previous media dissolved social boundaries related to time and space, the latest computer-mediated communications media seem to dissolve boundaries of identity as well. […] I know a respectable computer scientist who spends hours as an imaginary ensign aboard a virtual starship full of other real people around the world who pretend they are characters in a Star Trek adventure. I have three or four personae myself, in different virtual communities around the Net. I know a person who spends hours of his day as a fantasy character who resembles “a cross between Thorin Oakenshield and the Little Prince,” and is an architect and educator and bit of a magician aboard an imaginary space colony: By day, David is an energy economist in Boulder, Colorado, father of three; at night, he’s Spark of Cyberion City—a place where I’m known only as Pollenator.
This wasn’t written about Second Life or any other 3D space, it was Howard Rheingold writing about MUDs in 1993. This was a sentiment I believed and publicly echoed at the time. Per Howard, “MUDs are living laboratories for studying the first-level impacts of virtual communities.” Except, of course, they weren’t. If, in 1993, you’d studied mailing lists, or usenet, or irc, you’d have a better grasp of online community today than if you’d spent a lot of time in LambdaMOO or Cyberion City. Où sont les TinyMUCKs d’antan?
You can find similar articles touting 3D spaces shortly after the MUD frenzy. Ready for a blast from the past? “August 1996 may well go down in the annals of the Internet as the turning point when the Web was released from the 2D flatland of HTML pages.” Oops.
So what accounts for the current press interest in Second Life? I have a few ideas, though none is concrete enough to call an answer yet.
First, the tech beat is an intake valve for the young. Most reporters don’t remember that anyone has ever wrongly predicted a bright future for immersive worlds or flythrough 3D spaces in the past, so they have no skepticism triggered by the historical failure of things like LambdaMOO or VRML. Instead, they hear of a marvelous thing — A virtual world! Where you have an avatar that travels around! And talks to other avatars! — which they then see with their very own eyes. How cool is that? You’d have to be a pretty crotchety old skeptic not to want to believe. I bet few of those reporters ever go back, but I’m sure they’re sure that other people do (something we know to be false, to a first approximation, from the aforementioned churn.) Second Life is a story that’s too good to check.
Second, virtual reality is conceptually simple. Unlike ordinary network communications tools, which require a degree of subtlety in thinking about them — as danah notes, there is no perfect metaphor for a weblog, or indeed most social software — Second Life’s metaphor is simplicity itself: you are a person, in a space. It’s like real life. (Only, you know, more second.) As Philip Rosedale explained it to Business Week “[I]nstead of using your mouse to move an arrow or cursor, you could walk your avatar up to an Amazon.com (AMZN) shop, browse the shelves, buy books, and chat with any of the thousands of other people visiting the site at any given time about your favorite author over a virtual cuppa joe.”
Never mind that the cursor is a terrific way to navigate information; never mind that Amazon works precisely because it dispenses with rather than embraces the cyberspace metaphor; never mind that all the “Now you can shop in 3D” efforts like the San Francisco Yellow Pages tanked because 3D is a crappy way to search. The invitation here is to reason about Second Life by analogy, which is simpler than reasoning about it from experience. (Indeed, most of the reporters writing about Second Life seem to have approached it as tourists getting stories about it from natives.)
Third, the press has a congenital weakness for the Content Is King story. Second Life has made it acceptable to root for the DRM provider, because of their enlightened user agreements concerning ownership. This obscures the fact that an enlightened attempt to make digital objects behave like real world objects suffers from exactly the same problems as an unenlightened attempt, a la the RIAA and MPAA. All the good intentions in the world won’t confer atomicity on binary data. Second Life is pushing against the ability to create zero-cost perfect copies, whereas Copybot relied on that most salient of digital capabilities, which is how Copybot was able to cause so much agita with so little effort — it was working with the actual, as opposed to metaphorical, substrate of Second Life.
Finally, the current mania is largely push-driven. Many of the articles concern “The first person/group/organization in Second Life to do X”, where X is something like have a meeting or open a store — it’s the kind of stuff you could read off a press release. Unlike Warcraft, where the story is user adoption, here most of the stories are about provider adoption, as with the Reuters office or the IBM meeting or the resident creative agencies. These are things that can be created unilaterally and top-down, catnip to the press, who are generally in the business of covering the world’s deciders.
The question about American Apparel, say, is not “Did they spend money to set up stores in Second Life?” Of course they did. The question is “Did it pay off?” We don’t know. Even the recent Second Life millionaire story involved eliding the difference between actual and potential wealth, a mistake you’d have thought 2001 would have chased from the press forever. In illiquid markets, extrapolating that a hundred of X are worth the last sale price of X times 100 is a fairly serious error.
Artifacts vs. Avatars
Like video phones, which have been just one technological revolution away from mass adoption since 1964, virtual reality is so appealingly simple that its persistent failure to be a good idea, as measured by user adoption, has done little to dampen enthusiasm for the coming day of Keanu Reeves interfaces and Snow Crash interactions.
I was talking to Irving Wladawsky-Berger of IBM about Second Life a few weeks ago, and his interest in the systems/construction aspect of 3D seems promising, in the same way video phones have been used by engineers who train the camera not on their faces but on the artifacts they are talking about. There is something to environments for modeling or constructing visible things in communal fashion, but as with the video phone, they will probably involve shared perceptions of artifacts, rather than perceptions of avatars.
This use, however, is specific to classes of problems that benefit from shared visual awareness, and that class is much smaller than the current excitement about visualization would suggest. More to the point, it is at odds with the “Son of MUD+thePalace” story currently being written about Second Life. If we think of a user as someone who has returned to a site after trying it once, I doubt that the number of simultaneous Second Life users breaks 10,000 regularly. If we raise the bar to people who come back for a second month, I wonder if the site breaks 10,000 simultaneous return visitors outside highly promoted events.
Second Life may be wrought by its more active users into something good, but right now the deck is stacked against it, because the perceptions of great user growth and great value from scarcity are mutually reinforcing but built on sand. Were the press to shift to reporting Recently Logged In as their best approximation of the population, the number of reported users would shrink by an order of magnitude; were they to adopt industry-standard unique users reporting (assuming they could get those numbers), the reported population would probably drop by two orders. If the growth isn’t as currently advertised (and it isn’t), then the value from scarcity is overstated, and if the value of scarcity is overstated, at least one of the engines of growth will cool down.
There’s nothing wrong with a service that appeals to tens of thousands of people, but in a billion-person internet, that population is also a rounding error. If most of the people who try Second Life bail (and they do), we should adopt a considerably more skeptical attitude about proclamations that the oft-delayed Virtual Worlds revolution has now arrived.
“Are you my friend? Yes or no?” This question, while fundamentally odd, is a key component of social network sites. Participants must select who on the system they deem to be ‘Friends.’ Their choice is publicly displayed for all to see and becomes the backbone for networked participation. By examining what different participant groups do on social network sites, this paper investigates what Friendship means and how Friendship affects the culture of the sites. I will argue that Friendship helps people write community into being in social network sites. Through these imagined egocentric communities, participants are able to express who they are and locate themselves culturally. In turn, this provides individuals with a contextual frame through which they can properly socialize with other participants. Friending is deeply affected by both social processes and technological affordances. I will argue that the established Friending norms evolved out of a need to resolve the social tensions that emerged due to technological limitations. At the same time, I will argue that Friending supports pre-existing social norms, yet because the architecture of social network sites is fundamentally different from the architecture of unmediated social spaces, these sites introduce an environment quite unlike that to which we are accustomed.
I very much enjoyed writing this paper and i hope you enjoy reading it!
I want to offer a less telegraphic account of the relationship between expertise, credentials, and authority than I did in Larry Sanger, Citizendium, and the Problem of Expertise, and then say why I think the cost of coordination in the age of social software favors Wikipedia over Citizendium, and over traditionally authoritative efforts such as Britannica.
Make a pot of coffee; this is going to be long, and boring.
Those of us who write about Wikipedia, both pro and con, often mix two different views: descriptive — Wikipedia is/is not succeeding — and judgmental — Wikipedia is/is not good. (For the record, my view is that Wikipedia is a success, and that society is better off with Wikipedia than it would be without it.) What I love about the Citizendium proposal is that, by proposing a fusion of collaborative construction and expert authority, it presses people who dislike or mistrust Wikipedia to say whether they think that the wiki form of communal production can be improved, or is per se bad.
Nicholas Carr, in What will kill Citizendium, came out in the latter camp. Explaining why he thinks Citizendium is a bad idea, he offers his prescription for the right way to do things: “[…] you keep the crowd out of it and, in essence, create a traditional encyclopedia.” No need for that ‘in essence’ there. The presence of the crowd is what distinguishes wiki production; this is a defense of the current construction of authority, suggesting that the traditional mechanism for creating encyclopedias is the correct one, and alternate forms of construction are not.
This is certainly a coherent point of view, but one that I believe will fail in practical terms, because it is uneconomical. (Carr, in his darker moments, seems to believe something similar, but laments what the economics of peer production mean. This is a “Wikipedia is succeeding/is not good” argument.) In particular, I believe that the costs of nominating and then deferring to experts will make Citizendium underperform its competition, relative to the costs of merely involving experts as ordinary participants, as Wikipedia does.
Expertise, Credentials, and Authority
First, let me say that I am a realist, which is to say that I believe in a reality that is not socially constructed. The materials that make up my apartment, wood and stone and so on, actually exist, and are independent of any observer. A real tree that falls in a real forest displaces real air, even if no one is there to interpret that as sound.
I also believe in social facts, things that are true because everyone agrees they are true. My apartment itself is made of real stuff, but its my-ness is built on agreements: my landlady leases it to me, that lease is predicated on her ownership, that ownership is recognized by the city of New York, and so on. Social facts are no less real than non-social facts — my apartment is actually my apartment, my wife is my wife, my job is my job — they are just real for different reasons.
If everyone stopped agreeing that my job was my job (I quit or was fired, say), I could still walk down to NYU and draw network diagrams on a whiteboard at 1pm on a Tuesday, but no one would come to listen, because my ramblings wouldn’t be part of a class anymore. I wouldn’t be faculty; I’d be an interloper. Same physical facts — same elevator and room and white board and even the same person — but different social facts.
Some facts are social, some are not. I believe that Sanger, Carr and I all agree that expertise is not a social fact. As Carr says ‘An architect does not achieve expertise through some arbitrary social process of “credentialing.” He gains expertise through a program of study and apprenticeship in which he masters an array of facts and techniques drawn from such domains as mathematics, physics, and engineering.’ I agree with that, and amended my earlier sloppiness in distinguishing between having expertise and being an expert, after being properly called on it by Eric Finchley in the comments.
However, though Carr’s description is accurate, it is incomplete: an architect does not achieve expertise through credentialing, but an architect does not become an architect through expertise either. An architect is someone with expertise who has also been granted an architect’s credentials. These credentials are ideally granted on proof of the kinds of antecedents that indicate expertise — in the case of architects, relevant study (itself certified with the social fact of a degree) and significant professional work.
Consider the following case: a young designer with an architect’s degree designs a building, and a credentialed architect working at the same firm then affixes her stamp to the drawings. The presence of the stamp means that a contractor can use the drawings to do certain kinds of work; without it the drawings shouldn’t be used for such things. Both the expertise and the credentials are necessary to make a set of drawings usable, but in this fairly common scenario, the expertise and the credentials are held by different people.
This system is designed to produce enough liability for architects that they will supervise the uncredentialed; if they fail to, their own credentials will be taken away. Now consider a disbarred architect (or lawyer or doctor.) There has been no change in their expertise, but a great change in their credentials. Most of the time, we can take the link between authority, credentials, and expertise for granted (it’s why we have credentials, in fact), but in edge cases, we can see them as separate things.
The clarity to be gotten from all this definition is a bit of a damp squib: Carr and I are in large agreement about the Citizendium proposal. He thinks that conferring authority is the hard challenge for Citizendium; I think that conferring authority is the hard challenge for Citizendium. He thinks that the openness of a wiki is incompatible with Citizendium’s proposed form of conferring authority, as do I. And we both believe this weakness will be fatal.
Where we disagree is in what this means for society.
The Cost of Credentials
Lying on a bed in an emergency room, you think “Oh good, here comes the doctor.” Your relief comes in part because the doctor has the expertise necessary to diagnose and treat you, and in part because the doctor has the authority to do things like schedule you for surgery if you need it. Whatever your anxieties at that moment, they don’t include the possibility that the nurses will ignore the doctor’s diagnosis, or refuse to treat you in the manner the doctor suggests.
You don’t worry that expertise and authority are different kinds of things, in other words, because they line up perfectly from your point of view. You simply ascribe to the visible doctor many things that are actually true of the invisible system the doctor works in. The expertise resides in the doctor, but the authority is granted by the hospital, with credentials helping bridge the gap.
So here’s the thing: it’s incredibly expensive to create and maintain such systems, including especially the cost of creating and policing credentials and authority. We have to make and enforce myriad refined distinctions — not just physician and soldier and chairman but ‘admitting physician’ and ‘second lieutenant’ and ‘acting chairman.’ We don’t let people get married or divorced without the presence of official oversight. Lots of people can drive the bus; only bus drivers may drive the bus. We make it illegal to impersonate an officer. And so on, through innumerable tiny, self-reinforcing choices, all required to keep the links between expertise, credentials and authority functional.
These systems are beneficial for society. However, they are not absolutely beneficial; they are beneficial only when their benefits outweigh their costs. And we live in an era where all kinds of costs — social costs, coordination costs, Coasean costs — are undergoing a revolution.
Cost Changes Everything
Earlier, writing about folksonomies, I said “We need a phrase for the class of comparisons that assumes that the status quo is cost-free.” We still need that; I propose “Cost-free Present” — when people believe we live in a cost-free present, they also believe that any value they see in the world is absolute, not relative. A related assumption is that any new system that has disadvantages relative to the present one is therefore inferior; if the current system creates no costs, then any proposed change that creates new bad outcomes, whatever the potential new good outcomes, is worse than maintaining the status quo.
Meanwhile, out here in the real world, cost matters. As a result, when the cost structure for creating, say, an encyclopedia changes, our existing assumptions about encyclopedic value have to be re-examined, because current encyclopedic values are relative, not absolute. It is possible for low-cost, low-value systems to be better than high-cost, high-value systems in the view of the society adopting them. If the low-cost system can increase in value over time while remaining low cost, even better.
Pick your Innovator’s Dilemma: the Gutenberg bible was considerably less beautiful than scribal copies, the Model T was less well constructed than the Curved Dash Olds, floppy disks were considerably less reliable than hard drives, et cetera. So with Wikipedia and Encyclopedia Britannica: Wikipedia began life as a low-cost, low-value alternative, but it was accessible, shareable, and improvable. Britannica, by contrast, has always been high-value, but it is both difficult and expensive for readers to get to, and worse, they can’t use what they see — a Britannica reader can’t copy and post an article, can’t email the contents to their friends, can’t even email those friends the link with any confidence that they will be able to see it.
Barriers to both access and re-use are built into the Britannica cost structure, and without those barriers, it will collapse. Nothing about the institution of Britannica has changed in the five years of Wikipedia’s existence, but in the current ecosystem, the 1768 model of creation — you pay us and we make an Encyclopedia — has been transformed from a valuable service to a set of self-perpetuating, use-crippling barriers.
This is what’s wrong with Cost-free Present arguments: the principal competitive advantages of Wikipedia over Britannica, such as shareability or rapid refactoring (as of the Planet entry after Pluto’s recent demotion) are things which were simply not possible in 1768. Wikipedia is not a better Britannica than Britannica; it is a better fit for the current environment than Britannica is. The measure of possible virtues of an encyclopedia now include free universal access and unlimited re-use. As a result, maintaining Britannica costs more in a world with Wikipedia than it did in a world without it, in the same way scribal production became more expensive after the invention of movable type than before, without the scribes themselves doing anything different.
If we do what we always did, we’ll get the result we always got
Citizendium seems predicated on several related ideas about cost and value: having expertise and being an expert are roughly the same thing; the costs of certifying experts will be relatively low; building and running software that confers a higher degree of authority on them than on non-expert users will be similarly low; and the appeal to non-experts of participating in such a system will be high. If these things are true, then a hybrid of voluntary participation and expert authority will be more valuable than either extreme.
I am betting that those things aren’t true, because the costs of certifying experts and ensuring deference to them — the costs of creating and sustaining the necessary social facts — will sandbag the system, making it too annoying to use.
The first order costs will come from the certification and deference itself. By proposing to recognize external credentialing mechanisms, Citizendium sets itself up to take on the expenses of determining thresholds and overlaps of expertise. A master’s student in psychology doing work on human motivation may know more about behavioral economics than a Ph.D. in neo-classical economics. It would be easy to label them both experts, but on what grounds should their disputes be adjudicated?
On Wikipedia, the answer is simple — deference is to contributions, not to contributors, and is always provisional. (As the Pluto example shows, even things as seemingly uncontentious as planethood turned out to be provisional.) Wikipedia certainly has management costs (all social systems do), but it has the advantage that those costs are internal, and much of the required oversight is enforced by moral suasion. It doesn’t take on the costs of forcing deference to experts because it doesn’t recognize the category of ‘expert’ as primitive in the system. Experts contribute to Wikipedia, but without requiring any special consideration.
Citizendium’s second order costs will come from policing the system as a whole. If the process of certification and enforcement of deference become even slightly annoying to the users, they will quickly become non-users. The same thing will happen if the projection of force needed to manage Citizendium delegitimizes the system in the eyes of the contributors.
The biggest risk with Wikipedia is ongoing: lousy or malicious edits, an occurrence that happens countless times a day. The biggest risk with Citizendium, on the other hand, is mainly up front, in the form of user inaction. The Citizendium project assumes that the desire of ordinary users to work alongside and be guided by experts is high, but everything in the proposal seems to raise the costs of contribution, relative to Wikipedia. If users do not want to participate in a system where the costs of participating are high, Citizendium will simply fail to grow.
I would like to offer my working definition of “social network sites,” given the confusion over my request for a timeline.
A “social network site” is a category of websites with profiles, semi-persistent public commentary on the profile, and a traversable publicly articulated social network displayed in relation to the profile.
Profile. A profile includes an identifiable handle (either the person’s name or nick) and information about that person (e.g. age, sex, location, interests). Most profiles also include a photograph and information about last login. Profiles have unique URLs that can be visited directly.
Traversable, publicly articulated social network. Participants have the ability to list other profiles as “friends” or “contacts” or some equivalent. This generates a social network graph which may be directed (“attention network” type of social network where friendship does not have to be confirmed) or undirected (where the other person must accept friendship). This articulated social network is displayed on an individual’s profile for all other users to view. Each node contains a link to the profile of the other person so that individuals can traverse the network through friends of friends of friends….
Semi-persistent public comments. Participants can leave comments (or testimonials, guestbook messages, etc.) on others’ profiles for everyone to see. These comments are semi-persistent in that they are not ephemeral but they may disappear over some period of time or upon removal. These comments are typically reverse-chronological in display. Because of these comments, profiles are a combination of an individual’s self-expression and what others say about that individual.
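The three structural features above — profiles, a publicly articulated (directed or undirected) friend network, and semi-persistent comments — can be sketched as a toy data model. The names and fields here are illustrative, not any real site’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    handle: str                                   # the person's name or nick
    info: dict = field(default_factory=dict)      # age, sex, location, interests...
    friends: list = field(default_factory=list)   # publicly displayed, traversable
    comments: list = field(default_factory=list)  # semi-persistent, newest first

def add_friend(a: Profile, b: Profile, directed: bool = False) -> None:
    """A directed link is an 'attention network'; undirected adds both edges."""
    a.friends.append(b)
    if not directed:
        b.friends.append(a)

def leave_comment(on: Profile, author: Profile, text: str) -> None:
    """Comments display reverse-chronologically, so prepend the newest."""
    on.comments.insert(0, (author.handle, text))

def friends_of_friends(p: Profile) -> list:
    """One hop of the 'friends of friends' traversal the definition describes."""
    seen = {id(p)} | {id(f) for f in p.friends}
    out = []
    for friend in p.friends:
        for fof in friend.friends:
            if id(fof) not in seen:
                seen.add(id(fof))
                out.append(fof)
    return out

# Example: alice -- bob -- carol (undirected); carol is one hop from alice.
alice, bob, carol = Profile("alice"), Profile("bob"), Profile("carol")
add_friend(alice, bob)
add_friend(bob, carol)
print([p.handle for p in friends_of_friends(alice)])  # ['carol']
```

The traversability is the key property: because each friend link points at another public profile, anyone can walk the graph, which is exactly what distinguishes these sites from an address book.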
This definition includes all of the obvious sites that i talk about as social network sites: MySpace, Facebook, Friendster, Cyworld, Mixi, Orkut, etc. Some of the obvious players like LinkedIn are barely social network sites because of their efforts to privatize the articulated social network but, given that it’s possible, I count them (just like i count MySpace even when the users turn their profiles private).
There are sites that primarily fit into other categories but contain all of the features of social network sites. This is particularly common with sites that were once a different type of community site but have added new features. BlackPlanet, AsianAvenue, MiGente, QQ, and Xanga all fit into this bucket. I typically include LiveJournal as a social network site but it is sorta an edge case because they do not allow you to comment on people’s profiles. They do however allow you to publicly comment on the blog entries. For this reason, Dodgeball is also a problem - there are no comments whatsoever. In many ways, i do not consider Dodgeball a social network site, but i do consider it a mobile social network tool which is why i often lump it into this cluster of things.
Of course, things are getting trickier every day. I’m half-inclined to qualify the definition to say that the profile and articulated social network are the centralizing feature of these sites because there are tons of sites that have profiles and social network site features as peripheral components of their service but where the primary focus is elsewhere. Examples of this include: YouTube, Flickr, Last.FM, 43Things, Meetup, Vox, Crushspot, etc. (Dating sites are probably the most tricky because they are very profile-centric but the social network is peripheral.) But, on the other hand, most of these sites grew out of this phenomenon. So, for the sake of argument, i leave room to include them but also consider them edge cases.
At the same time, it’s critical to point out what social network sites are most definitely NOT. They are NOT the same as all sites that support social networks or all sites that allow people to engage in social networking. Your mobile phone, your email, your instant message client… these all support the articulation of social networks (addressbooks) but they do not let you publicly display them in relation to a profile for others to traverse. MUDs/MOOs, BBSes, chatrooms, bulletin boards, mailing lists, MMORPGS… these all allow you to meet new people and make friends but they are not social network sites.
This is part of why i get really antsy when people talk about this category as “social networks” or “social networking” or “social networking sites.” I think that this is leading to all sorts of confusion about what is and what is not in the category. These alternative categories are far far far too broad and all too often i hear people talking about everything that allows you to talk to anyone in any way as one of these sites (this is the mistake that DOPA makes for example).
While it’s great to talk about all of these things as part of a broader “social software” or “social media” phenomenon, there are also good reasons to have a label to address a subset of these sites that are permitting very particular practices. This allows academics, politicians, technologists, educators, and others to discuss how structural shifts are prompting different kinds of behaviors. (What happens when people publicly articulate their relationships? How do these systems change the rules of virality because the network is visible? Etc.) Because of this, i don’t want the slippage to be too great because people are using terrible terms or because people want their site to fit into the category of what’s currently cool.
Of course, like most categories, there are huge issues around the edges and there’s never a clean way to construct boundaries. (To understand the challenges, read Women, Fire, and Dangerous Things.) Just think of the category “game” and try to come up with a comfortable definition and boundary for that. Still, there are things that are most definitely not games. An apple is not a game. Sure, it can be used in a game but it is not inherently a game. Not all sites that allow people to engage in social activity are social network sites and it is ridiculous to try to shove them all there simply because there’s a lot of marketing money to be made (yet i realize that this is often the reason why people do try). For this reason, i really want to stake out “social network sites” as a category that has meaningful properties even if the edges are a little fuzzy. There is still meaningful family resemblance and more central prototypes than others. I really want to focus on making sense of what’s happening with this category by focusing primarily on the prototypes and less on the edge cases.
Anyhow, this is a work in progress but i wanted to write some of this down since i seem to be getting into lots of fights via email about this.
When i started tracking social network sites, i didn’t think that i would be studying them. I did a terrible job at keeping a timeline and now, i realize, this is important information to have on hand. I’m currently in the process of trying to go backwards and capture critical dates and i need your help. I know a lot of you have a lot of this information and can probably help me (and thus help everyone else interested in this arena).
I have created a simple pbwiki at http://yasns.pbwiki.com/ (password yasns) where i’m starting to make a timeline. Can you please add what you know to it? Pretty please with a cherry on top? A lot of this information is scattered all over the web and in people’s heads and it’d be great to get it documented in a centralized source. (I know that there is some info on Wikipedia but it’s not complete; as appropriate, i will transfer information back in their format.) Note: i didn’t include citations because i often don’t have them but if you have them, they’d be very very welcome.
Please let others know about this if you think they might have information to add. Thank you kindly for your time.
(PS: i have a new academic paper coming out shortly. Stay tuned.)
Read the ComScore press release. Completely. Read the details. They have found that the unique VISITORS have gotten older. This is not the same thing as USERS. A year ago, most adults hadn’t heard about MySpace. The moral panic has made it such that many US adults have now heard of it. This means that they visit the site. Do they all have accounts? Probably not. Furthermore, MySpace has attracted numerous bands in the last year. If you Google most bands, their MySpace page is either first or second; you can visit these without an account. People of all ages look for bands through search.
Why is Xanga far greater in terms of young people? Most adults haven’t heard of it. It’s not something that comes up high in search for other things. Facebook’s bimodal population pre-public launch shows that more professors/teachers are present than i thought (or maybe companies are more popular than i thought? or maybe comScore’s data is somehow counting teens/college students as 35-54…).
Can someone tell me exactly how comScore measures this? Is it based on the known age of the person using a given computer? Remember that many teens are logging in through their parent’s computer in the living room. Is it based on reported age? I kinda doubt it but the fact that there are more 100+ year olds on MySpace than are living should make people think about reported data. Is it based on phone interviews? How do they collect it? Their explanation isn’t really parseable into English.
My problem is that all of these teen sites show a heavy usage amongst 35-54. I cannot for the life of me explain how Xanga is 36% 35-54. There’s just no way. I don’t get how the data is formulated but it seems like an odd pattern across these sites to see a drop in 25-34 and a rise in 35-54. Older folks aren’t suddenly blogging on Xanga. So what gives? My hunch is that comScore’s metrics are consistently counting teens as 35-54 across all sites. My hypothesis is that because comScore is measuring per computer and teens are using their parent’s computer, comScore can’t tell the difference between a teen user and a parent user. If so, maybe all this is telling us is that parents have definitely listened to the warnings over the last year and are now making their teens access these sites through their computer?
Finally, when we talk about data, we also need to separate Visitors from Active Users from Accounts. The number of accounts is not the same as the number of users. The number of visitors is not the same as the number of users.
All this said, there is no doubt that more older people are creating accounts. Parents are told that they should check in on their kids. Police officers, teachers, marketers… they are all logging in to look at the youth. Is that the same as meaningful users? Some yes, some no.
From my qualitative experience, the vast majority of actual users are 14-30 with a skew to the lower end. Furthermore, the majority of the accounts are presenting themselves as 14-30. To confirm the latter (which is easier), i did a random sample of 100 profiles with UIDs over 50M (to address the “last year” phenomenon). What i found was:
26 are under 18
45 are 18-30 (with a skew to the lower)
10 are over 30 but under 70
1 is over 70 (but looks less than 18)
6 are bands
11 are invalid or deleted
1 is a completely fake character (explained in the description)
A few more things of note…
18 have private profiles
Of those over 30, only 2 have more than 2 friends (one has 3 friends; one has 5)
This account data hints that the general assumption that approximately 25% of users are minors is correct. Of the remaining, the bulk is under 30. Qualitatively, i’m seeing the most active use from those under 21. Given account practices, i don’t think that i’m off in what i’m seeing.
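The sampling procedure described above could be sketched roughly like this; `fetch_profile` is a hypothetical stand-in for however one actually retrieves a public profile by UID, and the private-profile and fake-character buckets are omitted for brevity:

```python
import random
from collections import Counter

def classify(profile) -> str:
    """Bucket a sampled profile roughly the way the counts above are bucketed."""
    if profile is None:
        return "invalid or deleted"
    if profile.get("is_band"):
        return "band"
    age = profile.get("age", 0)
    if age < 18:
        return "under 18"
    if age <= 30:
        return "18-30"
    if age < 70:
        return "over 30 but under 70"
    return "over 70"

def sample_profiles(fetch_profile, n=100, min_uid=50_000_000, max_uid=120_000_000):
    """Draw n distinct random UIDs above min_uid (the 'last year' cutoff)
    and tally the buckets. fetch_profile(uid) -> dict or None is assumed."""
    uids = random.sample(range(min_uid, max_uid), n)
    return Counter(classify(fetch_profile(uid)) for uid in uids)
```

Note this tallies what accounts claim about themselves, which is exactly why it can confirm presentation (“the accounts are presenting themselves as 14-30”) but not actual user age.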
I do suspect that MySpace is holding strong at being primarily for younger people but that older folks have definitely been checking it out a LOT more. Still, i’m suspicious of the fact that 35-54 are common across all youth sites. I’d really like to see comScore’s data on something that we can check. Maybe LiveJournal?
(I’d really really really love to be proven wrong on this. If anyone has data that can provide an alternate explanation to the comScore numbers, please let me know!)
Update: Fred Stutzman and i just jockeyed back and forth to find something we could agree on wrt the comScore numbers. Here are some ways of making sense of the data of VISITORS:
Xanga is more of a teen-flavored site than MySpace, Facebook or Friendster
Facebook is more of a college-flavored site than MySpace, Friendster or Xanga
Friendster is more of a 20/30-something flavored site than MySpace, Facebook or Xanga
Of users going to these four sites, MySpace does not swing to any one group; it draws people of all ages to visit the site.
A greater percentage of adults (most likely parents) visit MySpace than any of the other social sites
This is all fine and well and confirms most intuition. The problem is that what we CANNOT confirm via this data is that more adults visit any of these sites than minors. Again, this is intuitive, but the comScore data seems to indicate that adults visit each of these sites more than their key population. This is really visible in their “total internet” numbers, which seem to suggest that the vast majority of visitors to all of these social sites are adults. I cannot find a single person who works for one of these companies that believes this.
I’ve spoken to numerous folks since i posted last nite. Most believe that comScore gets this data by running a program on people’s computers. Young people are supposed to use a separate account from their parents. This data seems to indicate that comScore is wrong in assuming that people will do so. Most minors probably use their parent’s account to check these social sites. So, if we assume that, Xanga is obscenely a teen site, Facebook probably has nearly as many high school users as college users and MySpace swings young but is used by a wider variety of age groups than most social sites.
Finally, it’s all nice and well that Fox Interactive spokespeople confirm this data but i’ve watched over and over as FIM has confirmed or said things that were patently untrue in public. I don’t know if this is because FIM (the parent of MySpace) doesn’t know what’s going on on MySpace or if it’s because they don’t care whether or not they are accurate publicly. I don’t honestly believe that FIM has any clue about the age of its unique visitors. They know the purported age of people who have accounts and it would be patently false to say that 35-54 dominates account holders.
Frankly, i’m uber disappointed with comScore but even more disappointed with all of the press and bloggers who ran with the story that MySpace is gray without really looking at the data. This encourages inaccurate data and affects the entire tech industry as well as policy makers, advertisers, and users. I’m horrified that AP, Slashdot, Wall Street Journal, and numerous respectable bloggers are just reporting this as truth and speaking about it as though this is about users instead of visitors. C’mon now. If we’re going to fetishize quantitative data, let’s at least use a properly critical eye.
SlideShare launches today — the YouTube of Powerpoint. Sure, Powerpoint destroys thought, but so does TV. And misgivings aside, slides can be an art form in and of themselves. They are objects you spin stories around. Like this:
It is easy to embed a presentation and player within a site, blog or wiki. The above presentation is one I found by danah. I’ve been playing with the Alpha and really have to applaud Rashmi (you may know her from Dcamp), Jonathan and the gang at Uzanto.
You upload your Powerpoint (PPT and PPS formats) or OpenOffice (ODP format) slides into My Slidespace with a familiar title, description and tags. The flash player is fast and intuitive.
What’s also fascinating is their servers are backed by Amazon S3 (Simple Storage Service). The other week when Socialtext 2.0 launched with a large-file webcast, we got Techcrunched and were worried about the load on our servers. After a little scrambling in IRC, Pete Kaminski leveraged S3, and problem solved. In this case, SlideShare has web serviced their scalability. An interesting model to watch, and a good thing if this thing is a sudden hit.
22. David Gerard on September 22, 2006 07:08 AM writes…
Plenty of people complain of Wikipedia’s alleged “anti-expert bias”. I’ve yet to see solid evidence of it. Unless “expert-neutral” is conflated to mean “anti-expert.” Wikipedia is expert-neutral - experts don’t get a free ride. Which is annoying when you know something but are required to show your working, but is giving us a much better-referenced work.
One thing the claims of “anti-expert bias” fail to explain: plenty of experts do edit Wikipedia. If Wikipedia is so very hostile to experts, you need to explain their presence.
Permalink to Comment
23. engineer_scotty on September 22, 2006 01:19 PM writes…
I’ve been studying the so-called “expert problem” on Wikipedia—and I’m becoming more and more convinced that it isn’t an expert problem per se; it is a jackass problem. As in some Wikipedians are utter jackasses—in this context, “jackass” is an umbrella category for a wide variety of problem behaviors which are contrary to Wikipedia policy—POV pushing, advocacy of dubious theories, vandalism, abusive behavior, etc. Wikipedia policy is reasonably good at dealing with vandalism, abusive behavior and incivility (too good, some think, as WP:NPA occasionally results in good editors getting blocked for wielding the occasional cluestick ‘gainst idiots who sorely need it). It isn’t currently good at dealing with POV-pushers and crackpots whose edits are civil but unscholarly, and who repeatedly insert dubious material into the encyclopedia. Recent policy proposals are designed to address this.
Many experts who have left, or otherwise have expressed dissatisfaction with Wikipedia, fall into two categories: Those who have had repeated bad experiences dealing with jackasses, and are frustrated by Wikipedia’s inability to restrain said jackasses; and those who themselves are jackasses. Wikipedia has seen several recent incidents, including one this month, where notable scientists have joined the project and engaged in patterns of edits which demonstrated utter contempt for other editors of the encyclopedia (many of whom were also PhD-holding scientists, though lesser known), attempted to “own” pages, attempted to portray conjecture or unpublished research as fact, or have exaggerated the importance or quality of their own work. When challenged, said editors have engaged in (predictable) tirades accusing the encyclopedia of anti-intellectualism and anti-expert bias—charges we’ve all heard before.
The former sort of expert the project should try to keep. The latter, I think the project is probably better off without; and I suspect they would wear out their welcomes quickly on Citizendium as well.
I would love to see a few case studies, linked to the History and Talk pages of a few articles— “Here was the expert contribution, here was the jackass edit, this is what was lost”, etc. Reading Engineer Scotty’s comment, and given the general sense of outraged privilege that seems to run through much of the “Experts have their work edited without permission!” literature, I am guessing that the problem is not so much experts contributing and then being driven away as it is non-contributions by people unwilling to work in an environment where their contributions aren’t sacrosanct.
A response from Larry Sanger, posted here in its entirety:
Thanks to Clay Shirky for the opportunity to reply here on Many2Many
to his “Larry Sanger, Citizendium, and the Problem of Expertise,” First, two points about Clay’s style of argumentation, which I simply cannot let go without comment. Then some replies to his actual arguments.
1. Allow me to identify my own core animating beliefs, thank you very much.
Clay’s piece has an annoying tendency to characterize my assumptions uncharitably and without evidence, and to psychologize about me. Thus, Clay says things like: “Sanger’s published opinions seem based on three beliefs”; “Sanger wants to believe that expertise can survive just fine outside institutional frameworks”; “Sanger’s core animating belief seems to be a faith in experts”; “Sanger’s view seems to be that expertise is a quality like height”; and “Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us.”
I find myself strongly disagreeing with Clay’s straw Sanger. However, I am not that Sanger! Last time I checked, I was made of flesh and blood, not straw.
2. May I borrow that crystal ball when you’re done with it?
Repeatedly, Clay makes dire predictions for the Citizendium. “Structural issues…will probably prove quickly fatal”; “institutional overhead…will stifle Citizendium”; “policing certification will be a common case, and a huge time-sink” so “the editor-in-chief will then have to spend considerable time monitoring that process”; “Citizendium will re-create the core failure of Nupedia”; “Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.”
I think Clay lacks any good reason to think the Citizendium will fail; but clearly he badly wants it to fail, and his comments are animated by wishful thinking. That, anyway, seems the most parsimonious explanation. To borrow one of Clay’s phrases, and return him the favor: it is interesting “how consistent Clay has been about his beliefs” on the low value of officially-recognized expertise in online communities. “His published opinions seem based on” the belief in the supreme value and efficacy of completely flat self-organizing communities. The notion of experts being given special authority, even very circumscribed authority, does extreme violence to this “core animating belief” (to borrow another of Clay’s phrases). It must, therefore, be impossible.
Less flippantly now. I do make a point of being properly skeptical about all of my projects—that’s another thing I’ve been consistent about. You can probably still find writings from 2000 and 2001 in which I said I didn’t know whether Nupedia or Wikipedia would work. I have no idea if the Citizendium will work. What I do know is that it is worth a try, and we’ll do our best to solve problems that we can anticipate and as they arise.
By the way, there’s a certain irony in the situation, isn’t there? Clay Shirky, respected expert about online communities, holds forth about a new proposed online community, and does what so many experts love to do: make bold predictions about the prospects of items in their purview. Meanwhile, I, the alleged expert-lover, cast aspersions on his abilities to make such predictions. If my “core animating belief” were “a faith in experts,” why would I lack faith in this particular expert?
3. I want to be a social fact, too!
Let’s move on to Clay’s actual arguments. He begins his first argument with something perfectly true, that expertise (in the relevant sense, an operational concept of expertise) is a social fact, that this social fact is conferred (not always formally, but often) by institutions, and that, therefore, one cannot have expertise without (in some sense) “institutional overhead.” So far, so good. The current proposal—which is open to debate, at this early stage, even from Clay himself—addresses this situation by proposing to avoid editor application review committees in favor of self-designation of editorial status. The details are relevant, so let me quote them from the FAQ:
We do not want editors to be selected by a committee, which process is too open to abuse and politics in a radically open and global project like this one is. Instead, we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot [most] false claims to editorship.
What then is Clay’s criticism? “The problem” at the beginning of the argument was that “experts are social facts.” Yeah, so? So, says Clay,
Sanger expects that decertification will only take place in unusual cases. This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.
My initial reaction to this was: how on Earth could Shirky know all that? Furthermore, isn’t it quite obvious that, far from being a static proposal, this project is going to be able to move nimbly (I usually propose radical changes and refinements to my projects) in order to solve just such problems, should they arise?
In any event, based on my own experience, I counter-predict that Clay will probably be wrong. There will probably be a lot of people who claim to be editors humorously, out of cluelessness, or whatever.
For the easy cases, which will probably be most of them, constables will be able to rein people in, nearly as easily as they can rein in vandalism. No doubt we will have a standard procedure for achieving this. As to the borderline (“edge”) cases (e.g., some grad students and independent scholars), Clay gives us no reason to think that the editor-in-chief will have to spend large amounts of time fighting about them. Unlike Wikipedia, and like many OSS projects, there will be a group of people authorized to select the “release managers” (so to speak). This policy will be written into the project charter, support of which will be a requirement of participation in the project.
The review process for editor declarations, therefore, will be clear and well-accepted enough—that, after all, is the whole point of establishing a charter and “rule of law” in the online community—that the process can be expected to work smoothly. Mind you, it will be needed because of course there will be borderline cases, and disgruntled people, but Clay has given no reason whatsoever to think it will dominate the entire proceedings.
Besides, this is a responsibility I propose to delegate to a workgroup; I will probably be too busy to be closely involved in it.
Far from being persuasive, it is actually ironic that Clay cites primordial fights I had with trolls on Wikipedia as evidence of his points. It was precisely due to a lack of clearly-circumscribed authority and widely-accepted rules that I had to engage in such fights. Consequently, the Citizendium is setting up a charter, editors, and constables precisely to prevent such problems.
4. Warm and fuzzy yes, a hierarchy no.
Clay nicely sums up his next argument this way:
Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II), take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt-out (Nupedia in a nutshell.)
(By the way, Clay is completely wrong about citizen participation in Nupedia. They made up the bulk of authors in the pipeline. Our first article was by a grad student. An undergrad wrote several biology articles. So many myths have been made about Nupedia, so completely divorced from reality, that it has become a fascinating and completely fact-free Rorschach test for everything bad that anyone wants to say about expert authority in open collaboration.)
The Citizendium is, by Clay’s lights, a radical experiment that does violence to his cherished notions of what online communities should be like. Persons inclined to “debauch self-certification” as on IRC chatrooms will be removed from the project; and others will not protest at such perfectly appropriate treatment, because we will have already announced this as a policy.
Through self-selection the community can be expected to be in favor of such policies; those who dislike them will always have Wikipedia.
That’s part of the beauty of a world with both a Citizendium and a Wikipedia in it. Those who (like you, Clay) instinctively hate the Citizendium—we’ve seen a little of this in blogs lately, calling the very idea “Wikipedia for stick-in-the-muds,” “Wikipedia for control freaks,” a “horror,” etc.—will always have Wikipedia. I strongly encourage you to stick with Wikipedia if you dislike the idea of the Citizendium that much. That will make matters easier for everyone. If other people want to organize themselves in a different way—a way you’d never dream of doing—then please give them room to do so. As a result we’ll have one project for people who agree with you, Clay, and one for people who agree with me, and the world will be richer.
Clay does give some more support for thinking that an editor-guided wiki is unworkable. He says that the viability of a community resembles a “U curve” with one end being a total hierarchy and the other end being “a functioning community with a core group.” Apparently, projects that are neither hierarchies nor communities, which Clay implies is where the Citizendium would fit, would incur too many “costs of being an institution” and “significant overhead of process.” What I find particularly puzzling about this is how he describes the ends of the U curve. I would have expected him to say hierarchy on one end and a totally flat, leaderless community on the other end. But instead, opposite the hierarchy is “a functioning community with a core group.” How is it, then, that the Citizendium as proposed would not constitute “a functioning community with a core group”?
Let me put this more plainly, setting aside Clay’s puzzling theoretical apparatus. What the world has yet to test is the notion of experts and ordinary folks (and remember: experts working outside their areas of expertise are then “ordinary folks”) working together, shoulder-to-shoulder, on a single project according to open, open source principles. That is the radical experiment I propose. This actually hearkens back to the way OSS projects essentially work. So far, to my knowledge, experts have not been invited in to “gently guide” open content projects in a way roughly analogous to the way that senior developers gently guide OSS projects, deciding which changes are in the next release and which aren’t. You might say that the analogy does not work because senior developers of OSS projects are chosen based on the merits of their contributions within the project. But what if we regard an encyclopedia as continuous with the larger world of scholarship, so that scholarly work outside of the narrow province of a single project becomes relevant for determining a senior content developer? For an encyclopedia, that’s simply a sane variant on the model.
Whereas OSS projects have special, idiosyncratic requirements, encyclopedias frankly do not. There’s no point to creating an insular community, an “in group” of people who have mastered the particular system, because it’s not about the system—it’s about something any good scholar can contribute to, an encyclopedia. Then, if the larger, self-selecting community invites and welcomes such people to join them as “senior content developers,” why not think the analogy with OSS is adequately preserved?
(For more of the latter argument please see a new essay I am going to try to circulate among academics.)
The interesting thing about Citizendium, Larry Sanger’s proposed fork of Wikipedia designed to add expert review, is how consistent Sanger has been about his beliefs over the last 5 years. I’ve been reviewing the literature from the dawn of Wikipedia, born from the failure of the process-laden and expert-driven Nupedia, and from then to now, Sanger’s published opinions seem based on three beliefs:
1. Experts are a special category of people, who can be readily recognized within their domains of expertise.
2. A process of open creation in which experts are deferred to as of right will be superior to one in which they are given no special treatment.
3. Once experts are identified, that deference will mainly be a product of moral suasion, and the only place authority will need to intrude are edge cases.
All three beliefs are false.
There are a number of structural issues with Citizendium, many related to the question of motivation on the part of the putative editors; these will probably prove quickly fatal. More interesting to me, though, is the worldview behind Sanger’s attitude towards expertise, and why it is a bad fit for this kind of work. Reading the Citizendium manifesto, two things jump out: his faith in experts as a robust and largely context-free category of people, and his belief that authority can exist largely free of expensive enforcement. Sanger wants to believe that expertise can survive just fine outside institutional frameworks, and that Wikipedia is the anomaly. It can’t, and it isn’t.
Experts Don’t Exist Independent of Institutions
Sanger’s core animating belief seems to be a faith in experts. He took great care to invite experts to the Nupedia Advisory Board, and he has consistently lamented that Wikipedia offers no special prerogatives for expert review, and no special defenses against subsequent editing of material written by experts. Much of his writing, and the core of Citizendium, is based on assumptions about how experts should be involved in a project like this.
The problem Citizendium faces is that experts are social facts — society typically recognizes experts through some process of credentialling, such as the granting of degrees, professional certifications, or institutional engagement. We have a sense of what it means that someone is a doctor, a judge, an architect, or a priest, but these facts are only facts because we agree they are. If I say “I sentence you to 45 days in jail”, nothing happens. If a judge says “I sentence you to 45 days in jail”, in a court of law, dozens of people will make it their business to act on that imperative, from the bailiff to the warden to the prison guards. My words are the same as the judge’s, but the judge occupies a position of authority that gives his words an effect mine lack, an authority that only exists because enough people agree that it does.
Sanger’s view seems to be that expertise is a quality like height — some people are obviously taller than others, and the rest of us have no problem recognizing who the tall people are. But expertise isn’t like that at all; it is in fact highly subject to shifts in context. A lawyer from New York can’t practice in California without passing the bar there. A surgeon from India can’t operate on a patient in the US without further certification. The UN representative from Yugoslavia went away when Yugoslavia did, and so on.
As a result, you cannot have expertise without institutional overhead, and institutional overhead is what stifled Nupedia, and what will stifle Citizendium. Sanger is aware of this challenge, and offers mollifying details:
[…]we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot false claims to editorship.
We will also no doubt need a process where people who do not have the credentials are allowed to become editors, and where (in unusual cases) people who have the credentials are removed as editors.
Sanger et al. set the bar for editorship, editors self-certify, then, in order to get around the problems this will create, there will be an additional certification and de-certification process internal to the site. On Citizendium, if you are competent but uncredentialed, you will have to be vetted before you are allowed to ascend to the editor’s chair, and if you are credentialed but incompetent, you’re in until decertification. And, critically, Sanger expects that decertification will only take place in unusual cases.
This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.
Sanger himself experienced this in his fight with Cunctator at the dawn of Wikipedia; Cunc questioned Sanger’s authority, leading Sanger to defend it with increasing vigor. As Sanger said at the time “…in order to preserve my time and sanity, I have to act like an autocrat. In a way, I am being trained to act like an autocrat.” Sanger’s authority at Wikipedia required his demonstrating it, yet this very demonstration made his job harder, and ultimately untenable. This is the common case; as any parent can tell you, exercise of presumptive authority creates the conditions under which it is tested. As a result, Citizendium will re-create the core failure of Nupedia, namely putting at the center of the effort a process whose maintenance takes more energy than can be mustered by a volunteer project.
“We’re a Warm And Fuzzy Hierarchy”: The Costs of Enforcement
In addition to his misplaced faith in the rugged condition of expertise, Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us. Curiously, this underestimation seems to be borne of a belief that most of the world shares his views on the appropriate deference to expertise:
Can you really expect headstrong Wikipedia types to work under the guidance of expert types in this way?
Probably not. But then, the Citizendium will not be Wikipedia. We do expect people who have proper respect for expertise, for knowledge hard gained, to love the opportunity to work alongside editors. Imagine yourself as a college student who had the opportunity to work alongside, and under the loose and gentle direction of, your professors. This isn’t going to be a top-down, command-and-control system. It is merely a sensible community: one where the people who have made it their life’s work to study certain areas are given a certain appropriate authority—without thereby converting the community into a traditional top-down academic editorial scheme.
Well, can you expect the experts to want to work “shoulder-to-shoulder” with nonexperts?
Yes, because some already do on Wikipedia. Furthermore, they will have an incentive to work in this project, because when it comes to content—i.e., what the experts really care about—they will be in charge.
These passages evince a wounded sense of purpose: Experts are real, and it is only sensible and proper that they be given an appropriate amount of authority. The totality of the normative view on display here is made more striking because Sanger never reveals the source of these judgments. “Sensible” according to whom? How much authority is “appropriate”? How much control is implied by being “in charge”, and what happens when that control is abused?
These responses are also mutually contradictory. Citizendium, the manifesto claims, will not be a traditional top-down academic scheme, but experts will be in charge of the content. The only way experts can be in charge without top-down imposition is if every participant internalizes respect for authority to the point that it is never challenged in the first place. One need allude only lightly to the history of social software since at least Communitree to note that this condition is vanishingly rare.
Citizendium is based less on a system of supportable governance than on the belief that such governance will not be necessary, except in rare cases. Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II), take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt-out (Nupedia in a nutshell.)
The “U”-Curve of Organization and the Mechanisms of Deference
Sanger is an incrementalist, and assumes that the current institutional framework for credentialling experts and giving them authority can largely be preserved in a process that is open and communally supported. The problem with incrementalism is that the very costs of being an institution, with the significant overhead of process, creates a U curve — it’s good to be a functioning hierarchy, and it’s good to be a functioning community with a core group, but most of the hybrids are less fit than either of the end points.
The philosophical issue here is one of deference. Citizendium is intended to improve on Wikipedia by adding a mechanism for deference, but Wikipedia already has a mechanism for deference — survival of edits. I recently re-wrote the conceptual recipe for a Menger Sponge, and my edits have survived, so far. The community has deferred not to me, but to my contribution, and that deference is both negative (not edited so far) and provisional (can always be edited.)
Deference, on Citizendium will be for people, not contributions, and will rely on external credentials, a priori certification, and institutional enforcement. Deference, on Wikipedia, is for contributions, not people, and relies on behavior on Wikipedia itself, post hoc examination, and peer-review. Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.
Last night, i asked will Facebook learn from its mistake? In the first paragraph, i alluded to a “privacy trainwreck” and then went on to briefly highlight the political actions that were taking place. I never returned to why i labeled it that way and in my coarseness, i failed to properly convey what i meant by this.
When i sat down to explain the significance of the “privacy trainwreck,” a full-length essay came out. Rather than make you read this essay in blog form (or via your RSS reader), i partitioned it off to a printable webpage.
I believe the Wired Wiki experiment can be called a success, and yesterday I would have said it was doomed. Just came back from Wiki Wednesday, where Wired reporter Ryan Singel held a conversation about it. How we conducted the experiment, which part of the editorial process it was directed at, and how the community participated give us a lot to learn from.
Do recall that the use of wikis in journalism has been significantly tainted by the LA Times Wikitorial debacle. It was a failure in wiki implementation, goal setting, content structure and moderation. While the media has embraced public blogs, they still have a while to go before public wikis are accepted.
In an experiment in collaborative journalism, Wired News is putting reporter Ryan Singel at your service.
This wiki began as an unedited 1,059-word article on the wiki phenomenon, exactly as Ryan filed it. Your mission, should you choose to accept it, is to do the job of a Wired News editor and whip it into shape. Don’t change the quotes, but feel free to reorganize it, make cuts, smooth the prose or add links — whatever it takes to make it a lively, engaging news piece.
Ryan will answer questions from the comments page, and, when consensus calls for it, conduct additional reporting. If there’s something he missed, let him know, and he’ll get on the phone and investigate, then submit new text to the wiki for your review.
Tim Spalding has taken discussion forums a big step forward over at LibraryThing. The concept is simple but could make a real difference because it allows forum msgs to be aggregated in multiple ways. When you’re entering a msg at a forum, you can put a title or author in brackets and LibraryThing will take a stab at identifying what you have in mind. Think of it as in-place tagging. You can thus easily find all the posts about a book. And all the references to a book or author will be listed on that book or author’s page.
Because LibraryThing knows which books you own (because you’ve told it), it can feed you msgs about any of them. And, as Tim points out, this unhiding of msgs will change the temporality of posts: Rather than msgs fading into obscurity a few days or weeks after they’re posted, they’ll be easily findable and reply-able.
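The in-place tagging Tim describes can be approximated with a simple scan: find bracketed spans in a message, then look each one up in the book catalog. A toy sketch, assuming a plain dictionary as the catalog; none of these names are LibraryThing’s actual code:

```python
import re

# Toy catalog standing in for LibraryThing's book database:
# lowercase title -> catalog id.
CATALOG = {"here comes everybody": 42, "the wealth of networks": 7}

def extract_book_refs(message: str) -> list[int]:
    """Find [bracketed] titles in a forum message and return the
    catalog ids of the books they refer to (unknown titles skipped)."""
    ids = []
    for title in re.findall(r"\[([^\]]+)\]", message):
        book_id = CATALOG.get(title.strip().lower())
        if book_id is not None:
            ids.append(book_id)
    return ids

print(extract_book_refs("Just finished [Here Comes Everybody], thoughts?"))
# -> [42]
```

With references resolved to catalog ids, aggregating posts per book or per reader’s library becomes a simple lookup, which is what makes the messages findable long after they scroll off the forum’s front page.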
Over the last month, i’ve been driving Mimi’s Hybrid on and off. One of my favorite things about the Hybrid is that it tells you how many MPG you’re averaging over time. I find myself driving around town trying to maximize that number, getting uber excited when it goes up and super sad when it goes down. It reminds me of when i used to try to maximize my miles per hour when going from Boston to New York only this is more environmental. Yet, it’s not the environment that i’m concerning myself with - it’s all about number games in the same way that people obsess over every pound on the scale or the calories in every bite.
Then i was thinking about Tantek and Jason raving about Consumating. I love the fact that it’s a lot of cool geeky people but i can never get over the lameness that i feel when i log in and look at my score. And yet, i can’t be bothered to answer the questions that make me feel all uncomfortable in the hopes that someone will like my answers and rate me higher. It’s a catch-22 for me. Yet, i totally understand why Tantek and Jason and others absolutely love it and why they go back for more.
And then i was thinking about the people on Yahoo! Answers who spend hours every day answering questions to get high ranks. It’s very similar to Consumating only it’s not all embarrassing because it’s not really about you - it’s about the answers. There’s no real gain from getting points but still, it’s like a mouse in a cage determined to do well just cuz they can.
This all reminds me of a scene in some movie. I can’t recall what movie it was but it was about how you just want to be the best at something, anything… to have something to point at and say look, i’m #1! The validation, the proof of greatness! Even if that something is problematic attention getting like being the #1 serial killer. (Was it Bowling for Columbine?)
I started wondering about these number games… They’re all over social software - Neopets, friends on social network sites, blog visitors, etc. Who is motivated by what number games? Who is demotivated? Does it make a difference if the number game is about the group vs. the individual, about one’s self directly vs. about some abstract capability?
Are there some number games that work better than others in attracting a broader audience? I’m thinking about Orkut here… if the game is to get as many Brazilians on the site as possible, you only need a few obsessives to be the rallying forces; everyone else is part of the number game simply by signing up. So there are tons competing in the number games but only a few invested.
Does anyone know anything about how these number games work as incentives?
The article mentions that iStockphoto (cheap stock photography via the Internet) has obliterated the “future for professional stock photography.” (Similarly, Clay Shirky noted way back when that blogs “are such an efficient tool for distributing the written word that they make publishing a financially worthless activity.”)
But more importantly, the Wired article discusses the rise of R&D networking. For example, InnoCentive matches problems and problem-solvers: “The strength of a network like InnoCentive’s is exactly the diversity of intellectual background…. We actually found the odds of a solver’s success increased in fields in which they had no formal expertise.”
Now, just this year, Chevy attempted its own kind of crowdsourcing, allowing website visitors to apply their own text input over Chevy Tahoe footage to create-your-own-commercial. What they got was a barrage of anti-pollution, anti-accident, and just-about-anti-anything creations. (See them at YouTube: http://youtube.com/results?search=chevy+tahoe). One participant even launched a website where you can rate the videos.
Using existing mass media images to twist, mock, refute, or subvert them (or, as Wikipedia more politely puts it, to “produce negative commentary about itself”) is called “culture jamming.”
Umberto Eco calls this “semiological guerrilla warfare” and supports “action which would urge the audience to control the message and its multiple possibilities of interpretation.” (from Travels in Hyperreality).
But what happens when the culture jammers actually want to continue and extend the media in question?
The fans are saying, look, if we can’t get what we want on television, the technology is out there for us to do it ourselves…. It has become so popular that Walter Koenig, the actor who played Chekov in the original “Star Trek,” is guest starring in an episode, and George Takei, who played Sulu, is slated to shoot another one later this year.
Now the Star Trek franchise has a real opportunity here that could be taken as a crowdsourcing lesson to other media producers (music, film, books, etc.). Here it comes:
Free the content!
Let the Star Trek fans take the initiative and spend the money to keep the interest-level going, crank out a studio movie once in a while, foster crossovers between shows, organize events, provide financial assistance, etc.
It is without shame that I can share the release of Socialtext Open, an Open Source distribution of Socialtext. I figure this is in demand by M2M readers, and, well, we are quite proud of it. For your downloading pleasure.
“The real value of communicative technologies like social software is that they re-enable and enhance our ability to use a time-tested means of information processing, i.e. the conversation, in new and interesting ways!”
Conversation has long been the cornerstone of our society. New technologies enable us to speak to people anytime, anywhere. However, there is growing concern – both in the UK and elsewhere – that we are talking less than we used to. This work suggests that this is a misconception and that the issue is actually much more complex.
Robert Putnam’s book Bowling Alone catalyzed the debate about the decline of community. Putnam, like many others, suffered from ontological blinders. By defining community narrowly, he failed to see forms of community that didn’t fit his definition. But:
The adherence to outdated ways of thinking about social involvement has intensified concern about our sense of community. The way that we engage with those around us has changed. We no longer necessarily connect with either conventional structures like community societies or even less formal associative fora, like markets. Community involvement remains of vital importance, but structures of engagement no longer reflect the ways in which people are comfortable in having their say.
This problem is also rampant in politics where scholars who focus on the primacy of nation-states ignore transnational social organization, and scholars who focus on the structures of formal government fail to notice the networks of informal governance that are emerging across the globe. The bottom line is that technology ushers in new forms of social organization that escape notice precisely because they are invisible to adherents of the old paradigm. By the time anyone notices the impending social transformation, it is too powerful to contain, and social transformation cascades across the landscape. Or so the theory goes.
So what about conversation? Well, I venture to suggest that it is through conversation, the connecting of people with other people, the exchange of ideas, the spread of information, debate, dissent, and empathy, that collective wisdom arises. Furthermore, given the resurgence of violent politics, the ambivalence in the face of environmental crises, and profit-driven enclosure movements like overly restrictive copyright law and the Net Neutrality concern, we could definitely benefit from new forms of social organization as carriers of collective wisdom.
Last week, i had drinks with Ian Rogers and Kareem Mayan and we were talking about shifts in the development of technology. Although all of us have made these arguments before in different forms, we hit upon a set of metaphors that i feel the need to highlight.
With its roots in engineering, technology development was originally seen as a kind of formalized production. You design, build and ship products. And then they’re out in the wild, removed from the production cycle until you make Version 2. Of course, it didn’t take long for people to realize that when they shipped flaws, they didn’t need to do a recall. Instead, they could just ship free updates in the form of Version 1.1.
As the world went web-a-rific, companies held onto the ship-final-products mentality in its stodgy archaic form. Until the forever-in-beta hit. I, for one, love the persistent beta. It signals that the system is continuously updating, never fully baked and meant to be organic. This is the way that it should be.
Web development is fundamentally different from packaged software. Because it is the web, there’s no vast distance between producers and consumers. Distribution channels cross space and time (much to the chagrin of most old skool industries). Particularly when it comes to social software, producers can live inside their creations, directly interact with those using the system, and evolve the system alongside the practices that are emerging. In fact, not only can they, they’d be stupid to do anything else.
The same revolution has happened in writing. Sure, we still ship books, but what does it mean for the author to interact directly with the reader, as they do in blogging? It’s almost as though someone revived the author from the dead. And maybe turned hir into a kind of peculiar-looking Frankenstein who realizes that things aren’t quite right in interpretation-land but can’t make them right no matter what. Regardless, with the author able to directly connect to the reader, one must wonder how the process changes. For example, how is the audience imagined when its presence is persistent?
I’m reminded of a book by Stewart Brand - How Buildings Learn. In it, Brand talks about how buildings evolve over time based on their use and the aging that takes place. A building is not just the end-result of the designer, but co-constructed by the designer, nature, and the inhabitant over time. When i started thinking about technology as architecture, i realized the significance of that book. We cannot think about technologies as finalized products, but as evolving architectures. This should affect the design process at the get-go, but it also highlights the differences between physical and digital architectures. What would it mean if 92 million people were living in the house simultaneously with different expectations for what colors the walls should be painted? What would it mean if the architect was living inside the house and fighting with the family about the intention of the mantel?
The networked nature of web technologies brings the architect into the living room of the house, but the question still remains: what is the responsibility of a live-in architect? Coming in as an authority on the house does no good - in that way, the architect should still be dead. But should the architect just be a glorified fixer-upper/plumber/electrician? Should the architect support the aging of the house to allow it to become eccentric? Should the architect build new additions for the curious tenants? What should the architect be doing? One might think that the architect should just leave the place alone… but is this how digital sites evolve? Do they just need plumbers and electricians? Perhaps the architect is not just an architect but also an urban planner… It is not just the house that is of concern, but the entire city. How the city evolves depends on a whole variety of forces that are constantly in flux. Negotiating this large-scale system is daunting - the house seems so much more manageable. But 92 million people never lived in a single house together.
 Note to Barthes scholars: i’m being snippy here. I realize that the author’s authority should still be contested, that multiple interpretations are still valid, and that the author is still a product of social forces. I also realize that even as i’m writing this blogpost, its reading will be out of my control, but the reality is that i’ll still - as author - get all huffy and puffy and try to be understood. Damnit.
Prepare to be spammed globally. Twttr just launched, a mobile social software app for SMSing your social network developed by Odeo. It’s slightly simpler than Dodgeball, not location centric and a bit more viral. Biz Stone calls it present-tense blogging. Ev notes you might want to upgrade your SMS plan and they are working on compatibility outside the US. To me it’s reply-to-all baked into your phone.
If they support MMS and let me send a photo to twttr and CC flickr, it will be a killer app. But for now, put my SMSes in a sidebar widget or give me feeds I can splice.
Ever get that feeling, while blogging and flickring your life away, that you have lost something? That you are telling your life’s story, but it is lost in the archives and in the minds of the people who are really paying attention?
There is a gap in social software for binding stories in a chronology. For building biographies of people, places and things. I think Dandelife serves as a different object to tell stories around. Time.
The horizontal and vertical visualizations are what makes this work:
Dandelife is definitely beta and Edward and Kelly are working hard on it. But when you can upload your blog and photos to start your story, it’s pretty powerful. Go play. And let them know how it can get better.
“We had to move away from a static, dead intranet,” says Myrto Lazopoulou. “The wiki has allowed us to improve collaboration, communication and publication. We can cross time zones, improve the way teams work, reduce email and increase transparency.”
The case study is also available in PDF format and complements other research done on this leading deployment:
University of Pennsylvania’s del.icio.us-like PennTags project allows readers to tag catalogued items. It’s a great way to track resources for a research project and simultaneously make the results of your forays available to future researchers. In fact, it seems just plain selfish not to do so.
Integrating tagging with the book catalogue (and therefore with the book taxonomy) instantaneously provides the best of both worlds: Structured browsing leads you to nodes with jumping off points into the connections made by others who are putting those nodes into various contexts, and tags lead you back into the structured world organized by experts in structure.
My guess is that the folksonomy that emerges will not change the existing taxonomy because in a miscellaneous world you don’t have to change something in order to change it. The existing taxonomy could stay exactly as it is, as the folksonomy supplements it by providing synonyms for existing categories (e.g., a search for “recipes” takes you to the “cuisine” category of the existing taxonomy) and leaping-off-points from it into the user-created clusters of meaning (e.g., here’s the tag cloud for the node you’re browsing). Rather than disrupting, transforming or replacing the existing taxonomy, the folksonomy may just affectionately tousle its hair.
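The supplement-don’t-replace relationship described above can be made concrete with a small sketch. This is purely illustrative: the data structures and the `search` function are hypothetical, not PennTags’ actual implementation. The point is that the expert taxonomy is never modified; tags merely add synonym routes into it and user-created clusters hanging off each node.

```python
# A minimal sketch of a folksonomy supplementing a fixed taxonomy.
# All names and data here are hypothetical, not drawn from PennTags.

# The library's existing taxonomy stays exactly as it is.
taxonomy = {
    "cuisine": ["The Joy of Cooking", "Mastering the Art of French Cooking"],
}

# User tags act as synonyms pointing into existing categories...
tag_synonyms = {"recipes": "cuisine", "cookbooks": "cuisine"}

# ...and as user-created clusters (a tag cloud) attached to each node.
node_tags = {"cuisine": {"recipes": 12, "vegetarian": 5, "baking": 3}}

def search(term):
    """Resolve a term via the taxonomy first, then via tag synonyms."""
    category = term if term in taxonomy else tag_synonyms.get(term)
    if category is None:
        return None
    # Return the structured results plus the tag cloud as jumping-off points.
    return {
        "category": category,
        "items": taxonomy[category],
        "tag_cloud": node_tags.get(category, {}),
    }

result = search("recipes")  # a tag, not a category, yet it lands on "cuisine"
```

A search for “recipes” never touches the taxonomy itself; it just gets routed to the “cuisine” node and comes back with both the expert-structured items and the user-generated tag cloud, which is exactly the hair-tousling rather than hair-cutting relationship suggested above.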