Clay’s book makes sense of the way that groups are using the Internet. Really good sense. In a treatise that spans all manner of social activity from vigilantism to terrorism, from Flickr to Howard Dean, from blogs to newspapers, Clay unpicks what has made some “social” Internet media into something utterly transformative, while other attempts have fizzled or fallen to griefers and vandals. Clay picks perfect anecdotes to vividly illustrate his points, then shows the larger truth behind them.
Here Comes Everybody goes beyond wild-eyed webby boosterism and points out what seems to be different about web-based communities and organisation and why it’s different; the good and the bad. With useful and interesting examples, good stories and sticky theories. Very good stuff.
These newly possible activities are moving us towards the collapse of social structures created by technology limitations. Shirky compares this process to how the invention of the printing press impacted scribes. Suddenly, their expertise in reading and writing went from essential to meaningless. Shirky suggests that those associated with controlling the means to media production are headed for a similar fall.
Shirky has a piercingly sharp eye for spotting the illuminating case studies - some familiar, some new - and using them to energise wider themes. His basic thesis is simple: “Everywhere you look groups of people are coming together to share with one another, work together, take some kind of public action.” The difference is that today, unlike even ten years ago, technological change means such groups can form and act in new and powerful ways. Drawing on a wide range of examples Shirky teases out remarkable contrasts with what has been the expected logic, and shows quite how quickly the dynamics of reputation and relationships have changed.
Here Comes Everybody is about why new social tools matter for society. It is a non-techie book for the general reader (the letters TCP IP appear nowhere in that order). It is also post-utopian (I assume that the coming changes are both good and bad) and written from the point of view I have adopted from my students, namely that the internet is now boring, and the key question is what we are going to do with it.
One of the great frustrations of writing a book as opposed to blogging is seeing a new story that would have been a perfect illustration, or deepened an argument, and not being able to add it. To remedy that, I’ve just launched a new blog, at HereComesEverybody.org, to continue writing about the effects of social tools.
Wow. What a great response — we’ve given out all the copies we can, but many thanks for all the interest. Also, I’ve convinced the good folks at Penguin Press to let me give a few review copies away to people in the kinds of communities the book is about. I’ve got half a dozen copies to give to anyone reading this, with the only quid pro quo being that you blog your reactions to it, good bad or indifferent, some time in the next month or so. Drop me a line if you would like a review copy — firstname.lastname@example.org.
It gives me unquantifiable amounts of joy to announce that the JCMC special theme issue on “Social Network Sites” is now completely birthed. It was a long and intense labor, but all eight newborn articles are doing just fine and the new mommies are as proud as could be. So please, join us in our celebration by heading on over to the Journal for Computer-Mediated Communication and snuggling up to an article or two. The more you love them, the more they’ll prosper!
In June, I wrote a controversial blog essay about how U.S. teens appeared to be self-dividing by class on MySpace and Facebook during the 2006-2007 school year. This piece got me into loads of trouble for all sorts of reasons, forcing me to respond to some of the most intense critiques.
While what I was observing went beyond what could be quantitatively measured, certain aspects of it could be measured. To my absolute delight, Eszter Hargittai (professor at Northwestern) had collected data to measure certain aspects of the divide that I was trying to articulate. Not surprisingly (to me at least), what she was seeing lined up completely with what I was seeing on the ground.
While over 99% of the students had heard of both Facebook and MySpace, 79% use Facebook and 55% use MySpace. The story looks a bit different when you break it down by race/ethnicity and parent education:
While Eszter is not able to measure the other aspects of lifestyle that I was trying to describe that differentiate usage, she is able to show that Facebook and MySpace usage differs by race/ethnicity and parent education. These substitutes for “class” can be contested, but what is important here is that there are genuine differences in usage patterns, even with consistent familiarity. People are segmenting themselves in networked publics and this links to the ways in which they are segmented in everyday life. Hopefully Eszter’s article helps those who can’t read qualitative data understand that what I was observing is real and measurable.
As many of you know, Nicole Ellison and I are guest editing a special issue of JCMC. As a part of this issue, we are writing an introduction that will include a description of social network sites, a brief history of them, a literature review, a description of the works in this issue, and a discussion of future research. We have decided to put a draft of our history section up to solicit feedback from those of you who know this space well. It is a work-in-progress so please bear with us. But if you have suggestions, shout out.
I have never understood Nick Carr’s objections to the cultural effects of the internet. He’s much too smart to lump in with nay-sayers like Keen, and when he talks about the effects of the net on business, he sounds more optimistic, even factoring in the wrenching transition, so why aren’t the cultural effects similar cause for optimism, even accepting the wrenching transition in those domains as well?
I think I finally understood the dichotomy between his reading of business and culture after reading Long Player, his piece on metadata and what he calls “the myth of liberation”, a post spurred in turn by David Weinberger’s Everything Is Miscellaneous.
Carr discusses the ways in which the long-playing album was both conceived of and executed as an aesthetic unit, its length determined by a desire to hold most of the classical canon on a single record, and its possibilities exploited by musicians who created for the form — who created albums, in other words, rather than mere bags of songs. He illustrates this with an exegesis of the Rolling Stones’ Exile on Main Street, showing how the overall construction makes that album itself a work of art.
Carr uses this point to take on what he calls the myth of liberation: “This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world - a world of profound physical and economic constraints - from which the web is now liberating us.” Carr observes, correctly, that the LP was what it was in part for aesthetic reasons, and the album, as a unit, became what it became in the hands of people who knew how to use it.
That is not, however, the neat story Carr wants it to be, and the messiness of the rest of the story is key, I think, to the anxiety about the effects on culture, his and others.
The LP was an aesthetic unit, but one designed within strong technical constraints. When Edward Wallerstein of Columbia Records was trying to figure out how long the long-playing format should be, he settled on 17 minutes a side as something that would “…enable about 90% of all classical music to be put on two sides of a record.” But why only 90%? Because 100% would be impossible — the rest of the canon was too long for the technology of the day. And why should you have to flip the record in the middle? Why not have it play straight through? Impossible again.
Contra Carr, in other words, the pre-digital world was a world of profound physical and economic constraints. The LP could hold 34 minutes of music, which was a bigger number of minutes than some possibilities (33 possibilities, to be precise), but smaller than an infinite number of others. The album as a form provided modest freedom embedded in serious constraints, and the people who worked well with the form accepted those constraints as a way of getting at those freedoms. And now the constraints are gone; there is no necessary link between an amount of music and its playback vehicle.
And what Carr dislikes, I think, is evidence that the freedoms of the album were only as valuable as they were in the context of the constraints. If Exile on Main Street were as good an idea as he thinks it was, it would survive the removal of those constraints.
And it hasn’t.
Here is the iTunes snapshot of Exile, sorted by popularity:
While we can’t get absolute numbers from this, we can get relative ones — many more people want to listen to Tumbling Dice or Happy than Ventilator Blues or Turd on the Run, even though iTunes makes it cheaper per song to buy the whole album. Even with a financial inducement to preserve the album form, the users still say no thanks.
The only way to support the view that Exile is best listened to as an album, in other words, is to dismiss the actual preferences of most of the people who like the Rolling Stones. Carr sets about this task with gusto:
Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night - or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.
Only a fool. If you are one of those people who has, say, Happy on your iPod (as I do), then you are a fool (though you have lots of company). And of course this foolishness extends to the recording industry, and to the Stones themselves, who went and put Tumbling Dice on a Greatest Hits collection. (One can only imagine how Carr feels about Greatest Hits collections.)
I think Weinberger’s got it right about liberation, even taking at face value the cartoonish version Carr offers. Prior to unlimited perfect copyability, media was defined by profound physical and economic constraints, and now it’s not. Fewer constraints and better matching of supply and demand are good for business, because business is not concerned with historical continuity. Fewer constraints and better matching of supply and demand are bad for current culture, because culture continually mistakes current exigencies for eternal verities.
This isn’t just Carr of course. As people come to realize that freedom destroys old forms just as surely as it creates new ones, the lament for the long-lost present is going up everywhere. As another example, Sven Birkerts, the literary critic, has a post in the Boston Globe, Lost in the blogosphere, that is almost indescribably self-involved. His two complaints are that newspapers are reducing the space allotted to literary criticism, and too many people on the Web are writing about books. In other words, literary criticism, as practiced during Birkerts’ lifetime, was just right, and having either fewer or more writers is equally lamentable.
In order that the “Life was better when I was younger” flavor of his complaint not become too obvious, Birkerts frames the changing landscape not as a personal annoyance but as A Threat To Culture Itself. As he puts it, “…what we have been calling ‘culture’ at least since the Enlightenment — is the emergent maturity that constrains unbounded freedom in the interest of mattering.”
This is silly. The constraints of print were not a product of “emergent maturity.” They were accidents of physical production. Newspapers published book reviews because their customers read books and because publishers took out ads, the same reason they published pieces about cars or food or vacations. Some newspapers hired critics because they could afford to, others didn’t because they couldn’t. Ordinary citizens didn’t write about books in a global medium because no such medium existed. None of this was an attempt to “constrain unbounded freedom” because there was no such freedom to constrain; it was just how things were back then.
Genres are always created in part by limitations. Albums are as long as they are because Wallerstein picked a length his engineers could deliver. Novels are as long as they are because Aldus Manutius’s italic letters and octavo bookbinding could hold about that many words. The album is already a marginal form, and the novel will probably become one in the next fifty years, but that also happened to the sonnet and the madrigal.
I’m old enough to remember the dwindling world, but it never meant enough to me to make me a nostalgist. In my students’ work I see hints of a culture that takes both the new freedoms and the new constraints for granted, but the fullest expression of that world will probably come after I’m dead. But despite living in transitional times, I’m not willing to pretend that the erosion of my worldview is a crisis for culture itself. It’s just how things are right now.
Carr fails to note that the LP was created for classical music, but used by rock and roll bands. Creators work within whatever constraints exist at the time they are creating, and when the old constraints give way, new forms arise while old ones dwindle. Some work from the older forms will survive — Shakespeare’s 116th sonnet remains a masterwork — while other work will wane — Exile as an album-length experience is a fading memory. This kind of transition isn’t a threat to Culture Itself, or even much of a tragedy, and we should resist attempts to preserve old constraints in order to defend old forms.
One month ago, I put out a blog essay that took on a life of its own. This essay addressed one of America’s most taboo topics: class. Due to personal circumstances, I wasn’t online as things spun further and further out of control and I had neither the time nor the emotional energy to address all of the astounding misinterpretations that I saw as a game of digital telephone took hold. I’ve browsed the hundreds of emails, thousands of blog posts, and thousands of comments across the web. I’m in awe of the amount of time and energy people put into thinking through and critiquing my essay. In the process, I’ve also realized that I was not always so effective at communicating what I wanted to communicate. To clarify some issues, I decided to put together a long response that addresses a variety of different issues.
Tim Spalding at LibraryThing has introduced a new wrinkle in the tagosphere…and wrinkles are welcome because they pucker space in semantically interesting ways. (Block that metaphor!)
At LibraryThing, people list their books. And, of course, we tag ‘em up good. For example, Freakonomics has 993 unique tags (ignoring case differences), and 8,760 total tags. Now, tags are of course useful. But so are subject headings. So, Tim has come up with a clever way of deriving subject headings bottom up. He’s introduced “tagmashes,” which are (in essence) searches on two or more tags. So, you could ask to see all the books tagged “france” and “wwii.” But the fact that you’re asking for that particular conjunction of tags indicates that those tags go together, at least in your mind and at least at this moment. LibraryThing turns that tagmash into a page with a persistent URL. The page presents a de-duped list of the results, ordered by interestingness, and with other tagmashes suggested, all based on the magic of statistics. Over time, a large, relatively flat set of subject headings may emerge, which, subject to further analysis, could get clumpier and clumpier with meaning.
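The mechanism described above — a conjunction of tags treated as a derived topic page, de-duplicated, with related tagmashes suggested by co-occurrence statistics — can be sketched in a few lines. Everything below (the catalog, titles, and tags) is invented for illustration; this is a minimal sketch of the idea, not LibraryThing’s actual implementation.

```python
from itertools import combinations
from collections import Counter

# Hypothetical catalog: book title -> set of user-applied tags
catalog = {
    "Suite Francaise": {"france", "wwii", "fiction"},
    "Is Paris Burning?": {"france", "wwii", "history"},
    "The Longest Day": {"wwii", "history"},
    "A Year in Provence": {"france", "travel"},
}

def tagmash(*tags):
    """Return the de-duped list of books carrying ALL the given tags."""
    wanted = set(tags)
    return sorted(title for title, ts in catalog.items() if wanted <= ts)

# Asking for the conjunction signals that these tags go together:
print(tagmash("france", "wwii"))

# A site could suggest related tagmashes by counting which tag pairs
# co-occur most often across the whole catalog:
pair_counts = Counter(
    pair for ts in catalog.values() for pair in combinations(sorted(ts), 2)
)
print(pair_counts.most_common(2))
```

Giving each tagmash a persistent URL is then just a matter of keying a page on the sorted tag tuple, so “france, wwii” and “wwii, france” land on the same topic page.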
You may be asking yourself how this differs from saved searches. I asked Tim. He explained that while the system does a search when you ask for a new tagmash, it presents the tagmash as if it were a topic, not a search. For one thing, lists of search results generally don’t have persistent URLs. More important, to the user, tagmash pages feel like topic pages, not search results pages.
And you may also be asking yourself how this differs from a folksonomy. While I’d want to count it as a folksonomic technique, in a traditional folksonomy (oooh, I hope I’m the first to use that phrase!), a computer can notice which terms are used most often, and might even notice some of the relationships among the terms. With tagmashes, the info that this tag is related to that one is gleaned from the fact that a human said that they were related.
LibraryThing keeps innovating this way. It’s definitely a site to watch.
The cool thing about blogs is that while they may be quiet, and it may be hard to find what you’re looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed. And if you know history, the most important ideas often are the unpopular ones…. That’s what’s important about blogs, not that people can comment on your ideas. As long as they can start their own blog, there will be no shortage of places to comment.
When a blog allows comments right below the writer’s post, what you get is a bunch of interesting ideas, carefully constructed, followed by a long spew of noise, filth, and anonymous rubbish that nobody … nobody … would say out loud if they had to take ownership of their words.
But the uselessness of comments is not the universal truth that Dave or (fixed, per Dave’s comment below) Joel makes it out to be, for two reasons. First, posting and conversation are different kinds of things — same keyboard, same text box, same web page, different modes of expression. Second, the sites that suffer most from anonymous postings and drivel are the ones operating at large scale.
Those three threads contain a hundred or so comments, including some distinctly low-signal bouquets and brickbats. But there is also spirited disputation and emendation, alternate points of view, linky goodness, and a conversational sharpening of the argument on all sides, in a way that doesn’t happen blog to blog. This, I think, is the missing element in Dave and Joel’s points — two blog posts do not make a conversation. The conversation that can be attached to a post is different in style and content, and in intent and effect, than the post itself.
I have long thought that the ‘freedom of speech means no filtering’ argument is dumb where blogs are concerned — it is the blogger’s space, and he or she should feel free to delete, disemvowel, or otherwise dispose of material, for any reason, or no reason. But we’ve long since passed the point where what happens on a blog is mainly influenced by what the software does — the question to ask about comments is not whether they are available, but how a community uses them. The value in blogs as communities of practice is considerable, and it’s a mistake to write off comment threads on those kinds of blogs just because, in other environments, comments are lame.
There are assertions of verifiable fact and then there are invocations of shared values. Don’t mix them up.
I meant this as an assertion of fact, but re-reading it after Tom’s feedback, it comes off as simple flag-waving, since I’d compressed the technical part of the argument out of existence. So here it is again, in slightly longer form:
The internet’s essential operation is to encode and transmit data from sender to receiver. In 1969, this was not a new capability; we’d had networks that did this since the telegraph. By the day of the internet’s launch, we had a phone network that was nearly a hundred years old, alongside more specialized networks for things like telexes and wire-services for photographs.
Thus the basics of what the internet did (and does) aren’t enough to explain its spread; what it is for has to be accounted for by looking at the difference between it and the other data-transfer networks of the day.
The principal difference between older networks and the internet (ARPAnet, at its birth) is the end-to-end principle, which says, roughly, “The best way to design a network is to allow the sender and receiver to decide what the data means, without asking the intervening network to interpret the data.” The original expression of this idea is from the Saltzer, Reed, and Clark paper End-to-End Arguments in System Design; the same argument is explained in other terms in Isenberg’s Stupid Network and Searls and Weinberger’s World of Ends.
What the internet is for, in other words, what made it worth adopting in a world already well provisioned with other networks, was that the sender and receiver didn’t have to ask for either help or permission before inventing a new kind of message. The core virtue of the internet was a huge increase in the technical freedom of all of its participating nodes, a freedom that has been translated into productive and intellectual freedoms for its users.
As Scott Bradner put it, the Internet means you don’t have to convince anyone else that something is a good idea before trying it. The upshot is that the internet’s output is data, but its product is freedom.
Last week, during a conversation on the radio show To The Point, Andrew Keen suggested that he was not opposed to the technology of the internet, but rather to how it was being used.
This reminded me of Michael Gorman’s insistence that digital tools are fine, so long as they are shaped to replicate the social (and particularly academic) institutions that have grown up around paper.
There is a similar strand in these two arguments, namely that technology is one thing, but the way it is used is another, and that the two can and should be separated. I think this view is in the main wrong, even Luddite, but to make such an accusation requires a definition of Luddite considerably more grounded than ‘anti-technology’ (a vacuous notion — no one who wears shoes can reasonably be called ‘anti-technology.’) Both Keen and Gorman have said they are not opposed to digital technology. I believe them when they say this, but I still think their views are Luddite, by historical analogy with the real Luddite movement of the early 1800s.
What follows is a long detour into the Luddite rebellion, followed by a reply to Keen about the inseparability of the internet from its basic effects.
The historical record is relatively clear. In March of 1811, a group of weavers in Nottinghamshire began destroying mechanical looms. This was not the first such riot — in the late 1700s, when Parliament refused to guarantee the weavers’ control of the supply of woven goods, workers in Nottingham destroyed looms as well. The Luddite rebellion, though, was unusual for several reasons: its breadth and sustained character, taking place in many industrializing towns at once; its having a nominal leader, going by the name Ned Ludd, General Ludd, or King Ludd (the pseudonym itself a reference to an apocryphal figure from an earlier loom-breaking riot in the late 1700s); and its written documentation of grievances and rationale. The rebellion, which lasted two years, was ultimately put down by force, and was over in 1813.
Over the last two decades, several historians have re-examined the record of the Luddite movement, and have attempted to replace the simplistic view of Luddites as being opposed to technological change with a more nuanced accounting of their motivations and actions. The common thread of the analysis is that the Luddites didn’t object to mechanized wide-frame looms per se, they objected to the price collapse of woven goods caused by the way industrialists were using the looms. Though the targets of the Luddite attacks were the looms themselves, their concerns and goals were not about technology but about economics.
I believe that the nuanced view is wrong, and that the simpler view of Luddites as counter-revolutionaries is in fact the correct one. The romantic view of Luddites as industrial-age Robin Hoods, concerned not to halt progress but to embrace justice, runs aground on both the written record, in which the Luddites outline a program that is against any technology that increases productivity, and on their actions, which were not anti-capitalist but anti-consumer. It also assumes that there was some coherent distinction between technological and economic effects of the looms; there was none.
A Technology is For Whatever Happens When You Use It
The idea that the Luddites were targeting economic rather than technological change is a category fallacy, where the use of two discrete labels (technology and economics, in this case) is wrongly thought to demonstrate two discrete aspects of the thing labeled (here wide-frame looms.) This separation does not exist in this case; the technological effects of the looms were economic. This is because, at the moment of its arrival, what a technology does and what it is for are different.
What any given technology does is fairly obvious: rifles fire bullets, pencils make marks, looms weave cloth, and so on. What a technology is for, on the other hand, what leads people to adopt it, is whatever new thing becomes possible on the day of its arrival. The Winchester repeating rifle was not for firing bullets — that capability already existed. It was for decreasing the wait between bullets. Similarly, pencils were not for writing but for portability, and so on.
And the wide-frame looms, target of the Luddites’ destructive forays? What were they for? They weren’t for making cloth — humankind was making cloth long before looms arrived. They weren’t for making better cloth — in 1811, industrial cloth was inferior to cloth spun by the weavers. Mechanical looms were for making cheap cloth, lots and lots of cheap cloth. The output of a mechanical loom was cloth, but the product of such a loom was savings.
The wide-frame loom was a cost-lowering machine, and as such, it threatened the old inefficiencies on which the Luddites’ revenues depended. Their revolt had the goal of preventing those savings from being passed along to the customer. One of their demands was that Parliament outlaw “all Machinery hurtful to Commonality” — all machines that worked efficiently enough to lower prices.
Perhaps more tellingly, and against recent fables of Luddism as a principled anti-capitalist movement, they refrained from breaking the looms of industrial weavers who didn’t lower their prices. What the Luddites were rioting in favor of was price gouging; they didn’t care how much a wide-frame loom might save in production costs, so long as none of those savings were passed on to their fellow citizens.
Their common cause was not with citizens and against industrialists, it was against citizens and with those industrialists who joined them in a cartel. The effect of their campaign, had it succeeded, would have been to raise, rather than lower, the profits of the wide-frame operators, while producing no benefit for those consumers who used cloth in their daily lives, which is to say the entire population of England. (Tellingly, none of the “Robin Hood” versions of Luddite history make any mention of the effect of high prices on the buyers of cloth, just on the sellers.)
Back to Keen
A Luddite argument is one in which some broadly useful technology is opposed on the grounds that it will discomfit the people who benefit from the inefficiency the technology destroys. An argument is especially Luddite if the discomfort of the newly challenged professionals is presented as a general social crisis, rather than as trouble for a special interest. (“How will we know what to listen to without record store clerks!”) When the music industry suggests that the prices of music should continue to be inflated, to preserve the industry as we have known it, that is a Luddite argument, as is the suggestion that Google pay reparations to newspapers or the phone company’s opposition to VoIP undermining their ability to profit from older ways of making phone calls.
This is what makes Keen’s argument a Luddite one — he doesn’t oppose all uses of technology, just ones that destroy older ways of doing things. In his view, the internet does not need to undermine the primacy of the copy as the anchor for both filtering and profitability.
But Keen is wrong. What the internet does is move data from point A to B, but what it is for is empowerment. Using the internet without putting new capabilities into the hands of its users (who are, by definition, amateurs in most things they can now do) would be like using a mechanical loom and not lowering the cost of buying a coat — possible, but utterly beside the point.
The internet’s output is data, but its product is freedom, lots and lots of freedom. Freedom of speech, freedom of the press, freedom of association, the freedom of an unprecedented number of people to say absolutely anything they like at any time, with the reasonable expectation that those utterances will be globally available, broadly discoverable at no cost, and preserved for far longer than most utterances are, and possibly forever.
Keen is right in understanding that this massive supply-side shock to freedom will destabilize and in some cases destroy a number of older social institutions. He is wrong in believing that there is some third way — let’s deploy the internet, but not use it to increase the freedom of amateurs to do as they like.
It is possible to want a society in which new technology doesn’t demolish traditional ways of doing things. It is not possible to hold this view without being a Luddite, however. That view — incumbents should wield veto-power over adoption of tools they dislike, no matter the positive effects for the citizenry — is the core of Luddism, then and now.
Over at the Britannica Blog, Michael Gorman (the former president of the American Library Association) wrote a series of posts concerning web2.0. In short, he’s against it and thinks everything to do with web2.0 and Wikipedia is bad bad bad. A handful of us were given access to the posts before they were posted and asked to craft responses. The respondents are scholars and thinkers and writers of all stripes (including my dear friend and fellow M2M blogger Clay Shirky). Because I addressed all of his arguments at once, my piece was held to be released in the final week of the public discussion. And that time is now. So enjoy!
Over the last six months, i’ve noticed an increasing number of press articles about how high school teens are leaving MySpace for Facebook. That’s only partially true. There is indeed a change taking place, but it’s not a shift so much as a fragmentation. Until recently, American teenagers were flocking to MySpace. The picture is now being blurred. Some teens are flocking to MySpace. And some teens are flocking to Facebook. Which teens go where gets kinda sticky, because it seems to primarily have to do with socio-economic class.
I’ve been trying to figure out how to articulate this division for months. I have not yet succeeded. So, instead, I decided to write a blog essay addressing what I’m seeing. I suspect that this will be received with criticism, but my hope is that the readers who encounter this essay might be able to help me think through this. In other words, I want feedback on this piece.
What I lay out in this essay is rather disconcerting. Hegemonic American teens (i.e. middle/upper class, college bound teens from upwardly mobile or well off families) are all on or switching to Facebook. Marginalized teens, teens from poorer or less educated backgrounds, subculturally-identified teens, and other non-hegemonic teens continue to be drawn to MySpace. A class division has emerged and it is playing out in the aesthetics, the kinds of advertising, and the policy decisions being made.
Gorman’s Siren Song of the Internet contains a curious omission and a basic misunderstanding. The omission is part of his defense of the Luddites; the misunderstanding is about the value of paper and the nature of e-books.
The omission comes early: Gorman cavils at being called a Luddite, though he then embraces the label, suggesting that they “…had legitimate grievances and that their lives were adversely affected by the mechanization that led to the Industrial Revolution.” No one using the term Luddite disputes the effects on pre-industrial weavers. This is the general case — any technology that fixes a problem (in this case the high cost of homespun goods) threatens the people who profit from the previous inefficiency. However, Gorman omits mentioning the Luddite response: an attempt to halt the spread of mechanical looms which, though beneficial to the general populace, threatened the livelihoods of King Ludd’s band.
By labeling the Luddite program legitimate, Gorman seems to be suggesting that incumbents are right to expect veto power over technological change. Here his stand in favor of printed matter is inconsistent, since printing was itself enormously disruptive, and many people wanted veto power over its spread as well. Indeed, one of the great Luddites of history (if we can apply the label anachronistically) was Johannes Trithemius, who argued in the late 1400s that the printing revolution be contained, in order to shield scribes from adverse effects. This is the same argument Gorman is making, in defense of the very tools Trithemius opposed. His attempt to rescue Luddism looks less like a principled stand than special pleading: the printing press was good, no matter what happened to the scribes, but let’s not let that sort of thing happen to my tribe.
Gorman then defends traditional publishing methods, and ends up conflating several separate concepts into one false conclusion, saying “To think that digitization is the answer to all that ails the world is to ignore the uncomfortable fact that most people, young and old, prefer to interact with recorded knowledge and literature in the form of print on paper.”
Dispensing with the obvious straw man of “all that ails the world”, a claim no one has made, we are presented with a fact that is supposed to be uncomfortable — it’s good to read on paper. Well duh, as the kids say; there’s nothing uncomfortable about that. Paper is obviously superior to the screen for both contrast and resolution; Hewlett-Packard would be about half the size it is today if that were not true. But how did we get to talking about paper when we were talking about knowledge a moment ago?
Gorman is relying on metonymy. When he notes a preference for reading on paper he means a preference for traditional printed forms such as books and journals, but this is simply wrong. The uncomfortable fact is that the advantages of paper have become decoupled from the advantages of publishing; a big part of the preference for reading on paper is expressed by hitting the print button. As we know from Lyman and Varian’s “How Much Information” study, “…the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.”
We see these effects everywhere: well over 90% of new information produced in any year is stored electronically. Use of the physical holdings of libraries is falling, while the use of electronic resources is rising. Scholarly monographs, contra Gorman, are increasingly distributed electronically. Even the physical form of newspapers is shrinking in response to shrinking demand, and so on.
The belief that a preference for paper leads to a preference for traditional publishing is a simple misunderstanding, demonstrated by his introduction of the failed e-book program as evidence that the current revolution is limited to “hobbyists and premature adopters.” The problem with e-books is that they are not radical enough: they dispense with the best aspect of books (paper as a display medium) while simultaneously aiming to disable the best aspects of electronic data (shareability, copyability, searchability, editability). The failure of e-books is in fact bad news for Gorman’s thesis, as it demonstrates yet again that users have an overwhelming preference for the full range of digital advantages, and are not content with digital tools that are designed to be inefficient in the ways that printed matter is inefficient.
If we gathered every bit of output from traditional publishers, we could line them up in order of vulnerability to digital evanescence. Reference works were the first to go — phone books, dictionaries, and thesauri have largely gone digital; the encyclopedia is going, as are scholarly journals. Last to go will be novels — it will be some time before anyone reads One Hundred Years of Solitude in any format other than a traditionally printed book. Some time, however, is not forever. The old institutions, and especially publishers and libraries, have been forced to use paper not just for display, for which it is well suited, but also for storage, transport, and categorization, things for which paper is completely terrible. We are now able to recover from those disadvantages, though only by transforming the institutions organized around the older assumptions.
The ideal situation, which we are groping our way towards, will be to have all written material, wherever it lies on the ‘information to knowledge’ continuum, in digital form, right up to the moment a reader wants it. At that point, the advantages of paper can be made manifest, either by printing on demand, or by using a display that matches paper’s superior readability. Many of the traditional managers of books and journals will suffer from this change, though it will benefit society as a whole. The question Gorman pointedly asks, by invoking Ned Ludd and his company, is whether we want that change to be in the hands of people who would be happy to discomfit society as a whole in order to preserve the inefficiencies that have defined their world.
I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that newspapers are where you get your political news and how you look for a job. I know that if you need to take a trip, you visit a travel agent. In the last 15 years or so, I’ve had to unlearn those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.
It is incredibly hard to think of new paradigms when you’ve grown up reading the newspaper every morning. When you turn to TV for your entertainment. When you read magazines on the train home from work. But we have a generation coming of age right now that has never relied on newspapers, TV, and magazines for their information and entertainment.[…] The Internet is their medium and they are showing us how it needs to be used.
This is exactly right.
I think the real issue, of which age is a predictor, is this: the future belongs to those who take the present for granted. I had this thought while talking to Robert Cook of Metaweb, who are making Freebase. They need structured metadata, lots of structured metadata, and one of the places they are getting it is from Wikipedia, by spidering the bio boxes (among other things) for things like the birthplace and age of the people listed. While Andrew Keen is trying to get a conversation going on whether Wikipedia is a good idea, Metaweb takes it for granted as a stable part of the environment, which lets them see past this hurdle to the next one.
This is not to handicap the success of Freebase itself — it takes a lot more than taking the present for granted to make a successful tool. But one easy way to fail is to assume that the past is more solid than it is, and the present more contingent. And the people least likely to make this mistake — the people best able to take the present for granted — are young people, for whom knowing what the world is really like is as easy as waking up in the morning, since this is the only world they’ve ever known.
Some things improve with age — I wouldn’t re-live my 20s if you paid me — but high-leverage ignorance isn’t one of them.
Encyclopedia Britannica has started a Web 2.0 Forum, where they are hosting a conversation going on around a set of posts by Michael Gorman. The first post, in two parts, is titled Web 2.0: The Sleep of Reason Brings Forth Monsters, and is a defense of the print culture against alteration by digital technologies. This is my response, which will be going up on the Britannica site later this week.
Web 2.0: The Sleep of Reason Brings Forth Monsters starts with a broad list of complaints against the current culture, from biblical literalism to interest in alternatives to Western medicine.
The life of the mind in the age of Web 2.0 suffers, in many ways, from an increase in credulity and an associated flight from expertise. Bloggers are called “citizen journalists”; alternatives to Western medicine are increasingly popular, though we can thank our stars there is no discernable “citizen surgeon” movement; millions of Americans are believers in Biblical inerrancy—the belief that every word in the Bible is both true and the literal word of God, something that, among other things, pits faith against carbon dating; and, scientific truths on such matters as medical research, accepted by all mainstream scientists, are rejected by substantial numbers of citizens and many in politics. Cartoonist Garry Trudeau’s Dr. Nathan Null, “a White House Situational Science Adviser,” tells us that: “Situational science is about respecting both sides of a scientific argument, not just the one supported by facts.”
This is meant to set the argument against a big canvas of social change, but the list is so at odds with the historical record as to be self-defeating.
The percentage of the US population believing in the literal truth of the Bible has remained relatively constant since the 1980s, while the percentage listing themselves as having “no religion” has grown. Interest in alternative medicine dates to at least the patent medicines of the 19th century; the biggest recent boost for that movement came under Reagan, when health supplements, soi-disant, were exempted from FDA scrutiny. Trudeau’s welcome critique of the White House’s assault on reason targets a political minority, not the internet-using population, and so on. If you didn’t know that this litany appeared under the heading Web 2.0, you might suspect Gorman’s target was anti-intellectualism during Republican administrations.
Even the part of the list specific to new technology gets it wrong. Bloggers aren’t called citizen-journalists; bloggers are called bloggers. Citizen-journalist describes people like Alisara Chirapongse, the Thai student who posted photos and observations of the recent coup during a press blackout. If Gorman can think of a better label for times when citizens operate as journalists, he hasn’t shared it with us.
Similarly, lumping Biblical literalism with Web 2.0 misses the mark. Many of the most active social media sites — Slashdot, Digg, Reddit — are rallying points for those committed to scientific truth. Wikipedia users have so successfully defended articles on Evolution, Creationism and so on from the introduction of counter-factual beliefs that frustrated literalists helped found Conservapedia, whose entry on Evolution is a farrago of anti-scientific nonsense.
But wait — if use of social media is bad, and attacks on the scientific method are bad, what are we to make of social media sites that defend the scientific method? Surely Wikipedia is better than Conservapedia on that score, no? Well, it all gets confusing when you start looking at the details, but Gorman is not interested in the details. His grand theory, of the hell-in-a-handbasket variety, avoids any look at specific instantiations of these tools — how do the social models of Digg and Wikipedia differ? does Huffington Post do better or worse than Instapundit on factual accuracy? — in favor of one sweeping theme: defense of incumbent stewards of knowledge against attenuation of their erstwhile roles.
There are two alternate theories of technology on display in Sleep of Reason. The first is that technology is an empty vessel, into which social norms may be poured. This is the theory behind statements like “The difference is not, emphatically not, in the communication technology involved.” (Emphasis his.) The second theory is that intellectual revolutions are shaped in part by the tools that sustain them. This is the theory behind his observation that the virtues of print were “…often absent in the manuscript age that preceded print.”
These two theories cannot both be true, so it’s odd to find them side by side, but Gorman does not seem to be comfortable with either of them as a general case. This leads to a certain schizophrenic quality to the writing. We’re told that print does not necessarily bestow authenticity and that some digital material does, but we’re also told that he consulted “authoritative printed sources” on Goya. If authenticity is an option for both printed and digital material, why does printedness matter? Would the same words on the screen be less scholarly somehow?
Gorman is adopting a historically contingent view: Revolution then was good, revolution now is bad. As a result, according to Gorman, the shift to digital and networked reproduction of information will fail unless it recapitulates the institutions and habits that have grown up around print.
Gorman’s theory about print — its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.
Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect — the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.
This improved ability to find both content and people is one of the core virtues of our age. Gorman insists that he was able to find “…the recorded knowledge and information I wanted [about Goya] in seconds.” This is obviously an impossibility for most of the population; if you wanted detailed printed information on Goya and worked in any environment other than a library, it would take you hours at least. This scholar’s-eye view is the key to Gorman’s lament: so long as scholars are content with their culture, the inability of most people to enjoy similar access is not even a consideration.
Wikipedia is the best known example of improved findability of knowledge. Gorman is correct that an encyclopedia is not the product of a collective mind; this is as true of Wikipedia as of Britannica. Gorman’s unfamiliarity with, and even distaste for, Wikipedia leads him to mistake the dumbest utterances of its most credulous observers for an authentic accounting of its mechanisms; people pushing arguments about digital collectivism, pro or con, know nothing about how Wikipedia actually works. Wikipedia is the product not of collectivism but of unending argumentation; the corpus grows not from harmonious thought but from constant scrutiny and emendation.
The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise ten-fold or a hundred-fold.
Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.
If Gorman were looking at Web 2.0 and wondering how print culture could aspire to that level of accessibility, he would be doing something to bridge the gap he laments. Instead, he insists that the historical mediators of access “…promote intellectual development by exercising judgment and expertise to make the task of the seeker of knowledge easier.” This is the argument Catholic priests made to the operators of printing presses against publishing translations of the Bible — the laity shouldn’t have direct access to the source material, because they won’t understand it properly without us. Gorman offers no hint as to why direct access was an improvement when created by the printing press but a degradation when created by the computer. Despite the high-minded tone, Gorman’s ultimate sentiment is no different from that of everyone from music executives to newspaper publishers: Old revolutions good, new revolutions bad.
In my last post, i shared my case study response to the Harvard Business Review Case Study “We Googled You.” Since then, thanks to a kind reader (tx Andy Blanco), i learned that HBR made this case study the First Interactive Case Study. This means that you can read the case (without the respondents’ responses) and submit your own response.
You are still more than welcome to read my response, but i’d be super duper stoked to read your response as well. I found this exercise mentally invigorating and suspect you might as well. HBR wants you to submit your response to them, but i’d also be stoked if you’d be willing to share it with us.
Feel free to add your response to the comments on Apophenia or write your response on your own blog and add a link to the comments. Either way, i’d really love to hear how you would handle this scenario in your own business practices.
(Note: the reason that i use comments on Apophenia is because they notify me… i don’t get notified here and i find it easier to keep the conversation in one place.)
I have recently uploaded a bunch of talk cribs, a new book essay, and a case commentary for your enjoyment.
Harvard Business Review Case Commentary
The Harvard Business Review has a section called “Case Commentary” where they propose a fictional but realistic scenario and invite different prominent folks to respond. I was given the great honor of being invited to respond to a case entitled “We Googled You.”
In Diane Coutu’s hypothetical scenario, Fred is trying to decide whether or not to hire Mimi after one of Fred’s co-workers googles Mimi and finds newspaper clippings about Mimi protesting Chinese policies. [The case study is 2 pages - this is a very brief synopsis.] Given the scenario, we were then asked, “should Fred hire Mimi despite her online history?”
Unfortunately, Harvard Business Review does not make their issues available for free download (although they are available at the library and the case can be purchased for $6) but i acquired permission to publish my commentary online for your enjoyment. It’s a little odd taken out of context, but i still figured some folks might enjoy my view on this matter, especially given that the press keep asking me about this exact topic.
At the Cannes Film Festival’s Opening Forum on “Cinema: The Audiences of Tomorrow,” i gave a keynote about youth, DRM, remix, film, MySpace, YouTube, and other such good things. Check out: “Film and the Audience of Tomorrow”
A month or so ago, Micah Sifry offered me a chance to respond to Andrew Keen, author of the forthcoming Cult of the Amateur, at a panel at last week’s Personal Democracy Forum (PdF). The book is a polemic against the current expansion of freedom of speech, freedom of the press, and freedom of association. Also on the panel were Craig Newmark and Robert Scoble, so I was in good company; my role would, I thought, be easy — be pro-amateur production, pro-distributed creation, pro-collective action, and so on, things that come naturally to me.
What I did not expect was what happened — I ended up defending Keen, and key points from Cult of the Amateur, against a panel of my peers.
I won’t review CotA here, except to say that the book is going to get a harsh reception from the blogosphere. It is, as Keen himself says, largely anecdotal, which makes it more a list of ‘bad things that have happened where the internet is somewhere in the story’ than an account of cause and effect; as a result, internet gambling and click fraud are lumped together with the problems with DRM and epistemological questions about peer-produced material. In addition to this structural weakness, it is both aggressive enough and reckless enough to make people spitting mad. Dan Gillmor was furious about the inaccuracies, including his erroneous (and since corrected) description in the book, Yochai Benkler asked me why I was even deigning to engage Andrew in conversation, and so on. I don’t think I talked to anyone who wasn’t dismissive of the work.
But even if we stipulate that the book doesn’t do much to separate cause from effect, and has the problems of presentation that often accompany polemic, the core point remains: Keen’s sub-title, “How today’s internet is destroying our culture”, has more than a grain of truth to it, and the only thing those of us who care about the network could do wrong would be to dismiss Keen out of hand.
Which is exactly what people were gearing up to do last week. Because Keen is a master of the dismissive phrase — bloggers are monkeys, only people who get paid do good work, and so on — he will engender a reaction from our side that assumes that everything he says in the book is therefore wrong. This is a bad (but probably inevitable) reaction, but I want to do my bit to try to stave it off, both because fairness dictates it — Keen is at least in part right, and we need to admit that — and because a book-burning accompanied by a hanging-in-effigy will be fun for us, but will weaken the pro-freedom position, not strengthen it.
The panel at PdF started with Andrew speaking, in some generality, about ways in which amateurs were discomfiting people who actually know what they are doing, while producing sub-standard work on their own.
My response started by acknowledging that many of the negative effects Keen talked about were real, but that the source of these effects was an increase in the freedom of people to say what they want, when they want to, on a global stage; that the advantages of this freedom outweigh the disadvantages; that many of the disadvantages are localized to professions based on pre-internet inefficiencies; and that the effort required to take expressive power away from citizens was not compatible with a free society.
This was, I thought, a pretty harsh critique of the book. I was wrong; I didn’t know from harsh.
Scoble was simply contemptuous. He had circled offending passages which he would read, and then offer an aphoristic riposte that was more scorn than critique. For instance, in taking on Andrew’s point that talent is unevenly distributed, Scoble’s only comment was, roughly, “Yeah, Britney must be talented…”
Now you know and I know what Scoble meant — traditional media gives outsize rewards to people on characteristics other than pure talent. This is true, but because he was so dismissive of Keen, it’s not the point that Scoble actually got across. Instead, he seemed to be denying either that talent is unevenly distributed, or that Britney is talented.
But Britney is talented. She’s not Yo-Yo Ma, and you don’t have to like her music (back when she made music rather than just headlines), but what she does is hard, and she does it well. Furthermore, deriding the music business’s concern with looks isn’t much of a criticism. It escaped no one’s notice that Amanda Congdon and lonelygirl15 were easy on the eyes, and that that was part of their appeal. So cheap shots at mainstream talent or presumptions of the internet’s high-mindedness are both non-starters.
More importantly, talent is unevenly distributed, and everyone knows it. Indeed, one of the many great things about the net is that talent can now express itself outside traditional frameworks; this extends to blogging, of course, but also to music, as Clive Thompson described in his great NY Times piece, or to software, as with Linus’ talent as an OS developer, and so on. The price of this, however, is that the amount of poorly written or produced material has expanded a million-fold. Increased failure is an inevitable byproduct of increased experimentation, and finding new filtering methods for dealing with an astonishingly adverse signal-to-noise ratio is the great engineering challenge of our age (c.f. Google.) Whatever we think of Keen or CotA, it would be insane to deny that.
Similarly, Scoble scoffed at the idea that there is a war on copyright, but there is a war on copyright, at least as it is currently practiced. As new capabilities go, infinite perfect copyability is a lulu, and it breaks a lot of previously stable systems. In the transition from encoding on atoms to encoding with bits, information goes from having the characteristics of chattel to those of a public good. For the pro-freedom camp to deny that there is a war on copyright puts Keen in the position of truth-teller, and makes us look like employees of the Ministry of Doublespeak.
It will be objected that engaging Keen and discussing a flawed book will give him attention he neither needs nor deserves. This is fantasy. CotA will get an enthusiastic reception no matter what, and whatever we think of it or him, we will be called to account for the issues he raises. This is not right, fair, or just, but it is inevitable, and if we dismiss the book based on its errors or a-causal attributions, we will not be regarded as people who have high standards, but rather as defensive cult members who don’t like to explain ourselves to outsiders.
What We Should Say
Here’s my response to the core of Keen’s argument.
Keen is correct in seeing that the internet is not an improvement to modern society; it is a challenge to it. New technology makes new things possible, or, put another way, when new technology appears, previously impossible things start occurring. If enough of those impossible things are significantly important, and happen in a bundle, quickly, the change becomes a revolution.
The hallmark of revolution is that the goals of the revolutionaries cannot be contained by the institutional structure of the society they live in. As a result, either the revolutionaries are put down, or some of those institutions are transmogrified, replaced, or simply destroyed. We are plainly witnessing a restructuring of the music and newspaper businesses, but their suffering isn’t unique, it’s prophetic. All businesses are media businesses, because whatever else they do, all businesses rely on the managing of information for two audiences — employees and the world. The increase in the power of both individuals and groups, outside traditional organizational structures, is epochal. Many institutions we rely on today will not survive this change without radical alteration.
This change will create three kinds of loss.
First, people whose jobs relied on solving a hard problem will lose those jobs when the hard problems disappear. Creating is hard, filtering is hard, but the basic fact of making acceptable copies of information, previously the basis of the aforementioned music and newspaper industries, is a solved problem, and we should regard with suspicion anyone who tries to return copying to its previously difficult state.
Similarly, Andrew describes a firm running a $50K campaign soliciting user-generated ads, and notes that some professional advertising agency therefore missed out on something like $300,000 of fees. It’s possible to regard this as a hardship for the ad guys, but it’s also possible to wonder whether they were really worth the $300K in the first place if an amateur, working in their spare time with consumer-grade equipment, can create something the client is satisfied with. This loss is real, but it is not general. Video tools are sad for ad guys in the same way movable type was sad for scribes, but as they say in show biz, the world doesn’t owe you a living.
The second kind of loss will come from institutional structures that we like as a society, but which are becoming unsupportable. Online ads offer better value for money, but as a result, they are not going to generate enough cash to stand up the equivalent of the NY Times’ 15-person Baghdad bureau. Josh Wolf has argued that journalistic privilege should be extended to bloggers, but the irony is that Wolf’s very position as a videoblogger makes that view untenable — journalistic privilege is a special exemption to a general requirement for citizens to aid the police. We can’t make that exemption general.
The old model of defining a journalist by tying their professional identity to employment by people who own a media outlet is broken. Wolf himself has helped transform journalism from a profession to an activity; now we need a litmus test for when to offer source confidentiality for acts of journalism. This will in some ways be a worse compromise than the one we have now, not least because it will take a long time to unfold, but we can’t have mass amateurization of journalism and keep the social mechanisms that regard journalists as a special minority.
The third kind of loss is the serious kind. Some of these Andrew mentions in his book: the rise of spam, the dramatically enlarged market for identity theft. Other examples he doesn’t: terrorist organizations being more resilient as a result of better communications tools, pro-anorexic girls forming self-help groups to help them remain anorexic. These things are not side-effects of the current increase in freedom, they are effects of that increase. Spam is not just a plague in open, low-entry-cost systems; it is a result of those systems. We can no longer limit things like who gets to form self-help groups through social controls (the church will rent its basement to AA but not to the pro-ana kids), because no one needs help or permission to form such a group anymore.
The hard question contained in Cult of the Amateur is "What are we going to do about the negative effects of freedom?" Our side has generally advocated having as few limits as possible (when we even admit that there are downsides), but we've been short on particular cases. It's easy to tell the newspaper people to quit whining, because the writing has been on the wall since Brad Templeton founded ClariNet. It's harder to say what we should be doing about the pro-ana kids, or the newly robust terror networks.
Those cases are going to shift us from prevention to reaction (a shift that parallels the current model of publishing first, then filtering later), but so much of the conversation about the social effects of the internet has been so upbeat that even when there is an obvious catastrophe (as with the essjay crisis on Wikipedia), we talk about it amongst ourselves, but not in public.
What Wikipedia (and Digg and eBay and craigslist) have shown us is that mature systems have more controls than immature ones, as bad cases are identified and dealt with, and as these systems become more critical and more populous, the number of bad cases (and therefore the granularity and sophistication of the controls) will continue to increase.
We are creating a governance model for the world that will coalesce after the pre-internet institutions suffer whatever damage or decay they are going to suffer. The conversation about those governance models, what they look like and why we need them, is going to move out into the general public with CotA, and we should be ready for it. My fear, though, is that we will instead get a game of “Did not!”, “Did so!”, and miss the opportunity to say something much more important.
This is a relief for people like me — you’re as young as you feel, and all that — or rather it would be a relief but for one little problem: Fred was right before, and he’s wrong now. Young entrepreneurs have an advantage over older ones (and by older I mean over 30), and contra Fred’s second post, age isn’t in fact a mindset. Young people have an advantage that older people don’t have and can’t fake, and it isn’t about vigor or hunger — it’s a mental advantage. The principal asset a young tech entrepreneur has is that they don’t know a lot of things.
In almost every other circumstance, this would be a disadvantage, but not here, and not now. The reason this is so (and the reason smart old people can’t fake their way into this asset) has everything to do with our innate ability to cement past experience into knowledge.
Probability and the Crisis of Novelty
The classic illustration for learning outcomes based on probability uses a bag of colored balls. Imagine that you can take out one ball, record its color, put it back, and draw again. How long does it take you to form an opinion about the contents of the bag, and how correct is that opinion?
Imagine a bag of black and white balls, with a slight majority of white. Drawing out a single ball would provide little information beyond “There is at least one white (or black) ball in this bag.” If you drew out ten balls in a row, you might guess that there are a similar number of black and white balls. A hundred would make you relatively certain of that, and might give you an inkling that white slightly outnumbers black. By a thousand draws, you could put a rough percentage on that imbalance, and by ten thousand draws, you could say something like “53% white to 47% black” with some confidence.
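The sampling story above can be sketched in a few lines of Python; the 53/47 split and the draw counts are just the essay's illustrative numbers, not data from anywhere:

```python
import random

def estimate_white_fraction(true_white=0.53, draws=10_000, seed=42):
    """Draw balls with replacement and estimate the white fraction.

    Each rng.random() call is one draw; a value below true_white
    counts as a white ball.
    """
    rng = random.Random(seed)
    white = sum(rng.random() < true_white for _ in range(draws))
    return white / draws

# More draws -> a tighter estimate of the true 53/47 split.
for n in (10, 100, 1_000, 10_000):
    print(n, estimate_white_fraction(draws=n))
```

At ten draws the estimate is nearly useless; by ten thousand it reliably lands within a point or two of 53%, which is the whole mechanism behind "the people with the most experience know the most."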
This is the world most of us live in, most of the time; the people with the most experience know the most.
But what would happen if the contents of the bag changed overnight? What if the bag suddenly started yielding balls of all colors and patterns — black and white but also green and blue, striped and spotted? The next day, when the expert draws a striped ball, he might well regard it as a mere anomaly. After all, his considerable experience has revealed a predictable and stable distribution over tens of thousands of draws, so there is no need to throw out the old theory over a single anomaly. (To put it in Bayesian terms, the expert's prior beliefs are valuable precisely because they have been strengthened through repetition, and that repetition makes him confident in them even in the face of a small number of challenging cases.)
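The parenthetical Bayesian point can be made concrete with a toy Beta-Binomial update; the prior sizes and draw counts here are invented purely for illustration:

```python
def posterior_mean(prior_hits, prior_misses, new_hits, new_misses):
    """Posterior mean of P(odd-colored ball) under a Beta-Binomial model.

    The +1/+1 encodes a uniform Beta(1,1) starting point; every observed
    draw, old or new, simply adds to one of the two counts.
    """
    a = prior_hits + new_hits + 1
    b = prior_misses + new_misses + 1
    return a / (a + b)

# Expert: 10,000 prior draws, none of them odd.
# Novice: no prior draws at all.
# Both then see 5 odd balls in 5 new draws.
expert = posterior_mean(0, 10_000, 5, 0)   # still well under 1%
novice = posterior_mean(0, 0, 5, 0)        # roughly 86%
```

After five striped balls in a row, the novice's estimate has swung almost entirely to "this bag is full of odd colors," while the expert's ten thousand prior draws swamp the new evidence — exactly the confidence-in-the-face-of-anomaly the essay describes.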
But the expert keeps drawing odd colors, and so after a while, he is forced to throw out the ‘this is an anomaly, and the bag is otherwise as it was’ theory, and start on a new one, which is that some novel variability has indeed entered the system. Now, the expert thinks, we have a world of mostly black and white, but with some new colors as well.
But the expert is still wrong. The bag changed overnight, and the new degree of variation is huge compared to the older black-and-white world. Critically, any attempt to rescue the older theory will cause the expert to misunderstand the world, and the more carefully the expert relies on the very knowledge that constitutes his expertise, the worse his misunderstanding will be.
Meanwhile, on the morning after the contents of the bag turn technicolor, someone who just showed up five minutes ago would say “Hey, this bag has lots of colors and patterns in it.” While the expert is still trying to explain away or minimize the change as a fluke, or as a slight adjustment to an otherwise stable situation, the novice, who has no prior theory to throw out, understands exactly what’s going on.
What our expert should have done, the minute he saw the first odd ball, is to say “I must abandon everything I have ever thought about how this bag works, and start from scratch.” He should, in other words, start behaving like a novice.
Which is exactly the thing he — we — cannot do. We are wired to learn from experience. This is, in almost all cases, absolutely the right strategy, because most things in life benefit from mental continuity. Again, today, gravity pulls things downwards. Again, today, I get hungry and need to eat something in the middle of the day. Again, today, my wife will be happier if I put my socks in the hamper than on the floor. We don’t need to re-learn things like this; once we get the pattern, we can internalize it and move on.
A Lot of Knowledge Is A Dangerous Thing
This is where Fred’s earlier argument comes in. In 999,999 cases, learning from experience is a good idea, but what entrepreneurs do is look for the one in a million shot. When the world really has changed overnight, when wild new things are possible if you don’t have any sense of how things used to be, then it is the people who got here five minutes ago who understand that new possibility, and they understand it precisely because, to them, it isn’t new.
These cases, let it be said, are rare. The mistakes novices make come from a lack of experience. They overestimate mere fads, seeing revolution everywhere, and they make this kind of mistake a thousand times before they learn better. But the experts make the opposite mistake, so that when a real once-in-a-lifetime change comes along, they are at risk of regarding it as a fad. As a result of this asymmetry, the novice makes their one good call during an actual revolution, at exactly the moment the expert makes their one big mistake, and at that moment, that's all that's needed to give the newcomer a considerable edge.
Here’s a tech history question: Which went mainstream first, the PC or the VCR?
People over 35 have a hard time understanding why you'd even ask — VCRs obviously pre-date PCs for general adoption.
Here’s another: Which went mainstream first, the radio or the telephone?
The same people often have to think about this question, even though the practical demonstration of radio came almost two decades after the practical demonstration of the telephone. We have to think about that second question because, to us, radio and the telephone arrived at the same time, which is to say the day we were born. And for college students today, that is true of the VCR and the PC.
People who think of the VCR as old and stable, and the PC as a newer invention, are not the kind of people who think up TiVo. It's people who are presented with two storage choices, tape or disk, without historical bias making tape seem more normal and disk more provisional, who do that kind of work, and those people are, overwhelmingly, young.
This is sad for a lot of us, but it's also true, and Fred's kind lies about age being a mindset won't reverse that.
The Uses of Experience
I’m old enough to know a lot of things, just from life experience. I know that music comes from stores. I know that you have to try on pants before you buy them. I know that newspapers are where you get your political news and how you look for a job. I know that if you want to have a conversation with someone, you call them on the phone. I know that the library is the most important building on a college campus. I know that if you need to take a trip, you visit a travel agent.
In the last 15 years or so, I’ve had to unlearn every one of those things and a million others. This makes me a not-bad analyst, because I have to explain new technology to myself first — I’m too old to understand it natively. But it makes me a lousy entrepreneur.
Ten years ago, I was the CTO of a web company we built and sold in what seemed like an eon but what was in retrospect an eyeblink. Looking back, I’m embarrassed at how little I knew, but I was a better entrepreneur because of it.
I can take some comfort in the fact that people much more successful than I succumb to the same fate. IBM learned, from decades of experience, that competitive advantage lay in the hardware; Bill Gates had never had those experiences, and didn’t have to unlearn them. Jerry and David at Yahoo learned, after a few short years, that search was a commodity. Sergey and Larry never knew that. Mark Cuban learned that the infrastructure required for online video made the economics of web video look a lot like TV. That memo was never circulated at YouTube.
So what can you do when you get kicked out of the club? My answer has been to do the things older and wiser people do. I teach, I write, I consult, and when I work with startups, it’s as an advisor, not as a founder.
And the hardest discipline, whether talking to my students or the companies I work with, is to hold back from offering too much advice, too definitively. When I see students or startups thinking up something crazy, and I want to explain why that won’t work, couldn’t possibly work, why this recapitulates the very argument that led to RFC 939 back in the day, I have to remind myself to shut up for a minute and just watch, because it may be me who will be surprised when I see what color comes out of the bag next.
Over at Knowledge Tree is a recent essay i wrote called Social Network Sites: Public, Private, or What? For many who follow my blog, the arguments are not new, but i suspect some folks might appreciate the consolidated and not-so-spastic version. At the very least, perhaps you'll be humored to see my writing splattered with the letter 's' instead of the letter 'z' (it's an Australian e-journal). There's also an MP3 of me reading the essay for those who fear text (which is very novel since y'all know how much i fear audio/video recordings of me, but i did resist trying to sound funny while pronouncing the letter s instead of the letter z). And here's a PDF of the essay for those wishing to kill trees.
In conjunction with this essay, there's a live chat at 2PM Australian Eastern on 22 May. This translates to 9PM PST on 21 May and midnight New York time (which is where i'll be so hopefully i won't be too loopy, or at least no more loopy than i am feeling right now).
Four years ago, I wrote a piece called Fame vs Fortune: Micropayments and Free Content. The piece was sparked by the founding of a company called BitPass and its adoption by the comic artist Scott McCloud (author of the seminal Understanding Comics, among other things). McCloud created a graphic work called "The Right Number", which you had to buy using BitPass.
BitPass will fail, as FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, and many others have in the decade since Digital Silk Road, the paper that helped launch interest in micropayments. These systems didn’t fail because of poor implementation; they failed because the trend towards freely offered content is an epochal change, to which micropayments are a pointless response.
I’d love to take credit for having made a brave prediction there, but in fact Nick Szabo wrote a dispositive critique of micropayments back in 1996. The BitPass model never made a lick of sense, so predicting its demise was mere throat-clearing on the way to the bigger argument. The conclusion I drew in 2003 (and which I still believe) was that the vanishingly low cost of making unlimited perfect copies would put creators in the position of having to decide between going for audience size (fame) or restricting and charging for access (fortune), and that the desire for fame, no longer tempered by reproduction costs, would generally win out.
Creators are not publishers, and putting the power to publish directly into their hands does not make them publishers. It makes them artists with printing presses. This matters because creative people crave attention in a way publishers do not. […] with the power to publish directly in their hands, many creative people face a dilemma they’ve never had before: fame vs fortune.
Scott McCloud, who was also an advisor to BitPass, took strong issue with this idea in Misunderstanding Micropayments, a reply to the Fame vs. Fortune argument:
In many cases, it’s no longer a choice between getting it for a price or getting it for free. It’s the choice between getting it for price or not getting it at all. Fortunately, the price doesn’t have to be high.
McCloud was arguing that the creator’s natural monopoly — only Scott McCloud can produce another Scott McCloud work — would provide the artist the leverage needed to insist on micropayments (true), and that this leverage would create throngs of two-bit users (false).
What's really interesting is that, after the failure of BitPass, McCloud has now released The Right Number absolutely free of charge. Nothing. Nada. Kein Preis. After the micropayment barrier had proved too high for his potential audience (as predicted), McCloud had to choose between keeping his work obscure, in order to preserve the possibility of charging for it, or going for attention. His actual choice in 2007 upends his argument of four years ago: he went for the fame, at the expense of the fortune. (This recapitulates Tim O'Reilly's formulation: "Obscurity is a far greater threat to authors and creative artists than piracy." [ thanks, Cory, for the pointer ])
Everyone who imagines a working micropayment system either misunderstands user preferences, or imagines preventing users from expressing those preferences. The working micropayments systems that people hold up as existence proofs — ringtones, iTunes — are businesses that have escaped from market dynamics through a monopoly or cartel (music labels, carriers, etc.) Indeed, the very appeal of micropayments to content producers (the only people who like them — they offer no feature a user has ever requested) is to re-establish the leverage of the creator over the users. This isn't going to happen, because the leverage wasn't based on the value of content, but on control of packaging and distribution.
I’ll let my 2003 self finish the argument:
People want to believe in things like micropayments because without a magic bullet to believe in, they would be left with the uncomfortable conclusion that what seems to be happening — free content is growing in both amount and quality — is what’s actually happening.
The economics of content creation are in fact fairly simple. The two critical questions are “Does the support come from the reader, or from an advertiser, patron, or the creator?” and “Is the support mandatory or voluntary?”
The internet adds no new possibilities. Instead, it simply shifts both answers strongly to the right. It makes all user-supported schemes harder, and all subsidized schemes easier. It likewise makes collecting fees harder, and soliciting donations easier. And these effects are multiplicative. The internet makes collecting mandatory user fees much harder, and makes voluntary subsidy much easier.
The only interesting footnote, in 2007, is that these forces have now reversed even McCloud’s behavior.
I love Etech. This year, i had the great opportunity to keynote Etech (albeit at an ungodly hour). The talk i wrote was entirely new and intended for the tech designer/developer audience (warning: the academics will hate it). The talk is called:
It’s about how technologists need to pay attention to the magic that everyday people create using the Web2.0 technologies that we in the tech world think are magical. It’s quite a fun talk and i figured that some might enjoy reading it so i just uploaded my crib notes. It is unlikely that i said exactly what i wrote, but the written form should provide a good sense of the points i was trying to make in the talk.
I should give infinite amounts of appreciation to Raph Koster who took unbelievable notes during my presentation, letting me adjust my crib to be more in tune with what i actually said. THANK YOU! I was half tempted to not bother blogging my crib notes given the fantastic-ness of his notes, but i figure that there still might be some out there who would prefer the crib. Enjoy!
(PS: If you remember me saying something that i didn’t put in the crib, let me know and i’ll add it… i’m stunned at how many of you took notes during the talk.)
SXSW has come and gone and my phone might never recover. Y’see, last year i received over 500 Dodgeballs. To the best that i can tell, i received something like 3000 Tweets during the few days i was in Austin. My phone was constantly hitting its 100 message cap and i spent more time trying to delete messages than reading them. Still, i think that Twitter and Dodgeball are interesting and i want to take a moment to consider their strengths and weaknesses as applications.
While you can use Dodgeball for a variety of things, it's primarily a way of announcing presence in a social venue where you'd be willing to interact with other people. Given that i'm a hermit, i primarily use Dodgeball to announce my presence at conference outings and to sigh in jealousy as people romp around Los Angeles. Dodgeball is culturally linked to place. I'm still pretty peeved with Google over the lack of development of Dodgeball because i still think it would be a brilliant campus-based application where people actually do party-hop every weekend and want to know if their friends are at the neighboring frat party instead of this one. When it comes to usage at SXSW, Dodgeball is great. I know when 7 of my friends are in one venue and 11 are in another; it helps me decide where to go.
Twitter has taken a different path. It is primarily micro-blogging or group IMing or push away-messaging. You write whatever you damn well please and it spams all of the people who agreed to be your friends. The biggest strength AND weakness of Twitter is that it works through your IM client (or Twitterrific) as well as your phone. This means that all of the tech people who spend far too much time bored on their laptops are spamming people at a constant rate. Ah, procrastination devices. If you follow all of your friends on your mobile, you're in for a hellish (and very expensive) experience. Folks quickly learn to stop following people on their mobile (or, if they don't, they turn Twitter off altogether). This, unfortunately, kills the mobile value of it, making it far more of a web tool than a mobile tool. Considering how much of a bitch it is to follow/unfollow people, users quickly choose and rarely turn back. Thus, once they stop following someone on their phone, they don't return just because they are going out with that person that night (unless they run into them and choose to switch it on).
At SXSW, Twitter is fantastic for mobile. Everyone is running around the same town commenting on talks, remarking on venues, bitching about the rain. But dear god did i feel bad for the people who weren't at SXSW who were getting spammed with that crap. One value of Twitter is that it's really lightweight and easy. One problem is that this is terrible if your social world is not one giant cluster. While my tech friends who normally attend SXSW moped about how jealous they were upon receiving all of the SXSW messages, my non-tech friends were more in the WTF camp. Without segmentation, i had to choose one audience over the other because there was no way to move seamlessly between the audiences. Of course, groups are much heavier to manage. Still, i think it's possible and i gave Ev some notes.
I think it’s funny to watch my tech geek friends adopt a social tech. They can’t imagine life without their fingers attached to a keyboard or where they didn’t have all-you-can-eat phone plans. More importantly, the vast majority of their friends are tech geeks too. And their social world is relatively structurally continuous. For most 20/30-somethings, this isn’t so. Work and social are generally separated and there are different friend groups that must be balanced in different ways.
Of course, the population whose social world is most like the tech geeks is the teens. This is why they have no problems with MySpace bulletins (which are quite similar to Twitter in many ways). The biggest challenge with teens is that they do not have all-you-can-eat phone plans. Over and over, the topic of number of text messages in one's plan comes up. And my favorite pissed off bullying act that teens do involves ganging up to collectively spam someone so that they'll go over their limit and get into trouble with their parents (phone companies don't seem to let you block texts from particular numbers and of course you have to pay 10c per text you receive). This is particularly common when a nasty breakup occurs and i was surprised when i found out that switching phone numbers is the only real solution to this. Because most teens are not permanently attached to a computer and because they typically share their computers with other members of the family, Twitterrific-like apps wouldn't really work so well. And Twitter is not a strong enough app to replace IM time.
Of course, this doesn’t mean that all teens would actually like Twitter. There are numerous complaints about the lameness of bulletins. People forward surveys just as something to do and others complain that this is a waste of their time. (Of course, then they go on to do it themselves.) Still, bulletin space is like Twitter space. You need to keep posting so that your friends don’t forget you. Or you don’t post at all. Such is the way of Twitter. Certain people i see flowing 5-15 times a day. Others i never hear from (or like once a week).
There’s another issue at play… Like with bulletins, it’s pretty ostentatious to think that your notes are worth pushing to others en masse. It takes a certain kind of personality to think that this kind of spamming is socially appropriate and desirable. Sure, we all love to have a sense of what’s going on, but this is push technology at its most extreme. You’re pushing your views into the attention of others (until they turn it or you off).
The techno-geek users keep telling me that it's a conversation. Of course, this is also said of blogging. But i don't think that either are typically conversations. More often, they are individuals standing on their soap boxes who enjoy people responding to them and may wander around to others' soap boxes looking for interesting bits of data. By and large, people Twitter to share their experience; only rarely do they expect to receive anything in return. What is returned is typically a kudos or a personal thought or an organizing question. I'd be curious what percentage of Tweets start a genuine back-and-forth dialogue where the parties are on equal ground. It still amazes me that when i respond to someone's Tweet personally, they often ignore me or respond curtly with an answer to my question. It's as though the Tweeter wants to be recognized en masse, but doesn't want to actually start a dialogue with their pronouncements. Of course, this is just my own observation. Maybe there are genuine conversations happening beyond my purview.
Unfortunately, i don't know how sustainable Twitter is for most people. It's very easy to burn out on it and once someone does, will they return? It's also really hard for friend-management. If you add someone, even if you "leave" them, you'll get Twitterrific posts from them. This creates a huge disincentive for adding people, even if you welcome them to read your Tweets. Post-SXSW, i've seen two things: the most active in Austin are still ridiculously active. The rest have turned it off for all intents and purposes. Personally, i'm trying to see how long i'll last before i can't stand the invasion any longer. Given that my non-tech friends can't really join effectively (for the same reasons as teens - text messaging plan and lack of always-on computerness and hatred of IM interruptions), i don't think that i can get a good sense of how this would play out beyond the geek crowd. But it sure is entertaining to watch.
PS: I should note that my favorite part of Twitter is that when i wander to a non-functioning page, i get this image:
When adults aren’t dismissing MySpace as the land-o-predators, they’re often accusing it of producing narcissistic children. I find it hard to bite my tongue in these situations, but i know that few adults are willing to take the blame for producing narcissistic children. The issue of narcissism and fame is back in public circulation with a vengeance (thanks in part to Britney Spears for having a public meltdown). While the mainstream press is having a field day with blaming celebrities and teens for being narcissistic, more solid research on narcissism is emerging.
For those who are into pop science coverage of academic work, i'd encourage you to start with Jake Halpern's "Fame Junkies" (tx Anastasia). For simplicity's sake, let's list a few of the key findings that have emerged over the years concerning narcissism.
While many personality traits stay stable across time, it appears as though levels of narcissism (as tested by the NPI) decrease as people grow older. In other words, while adolescents are more narcissistic than adults, you were also more narcissistic when you were younger than you are now.
The scores of adolescents on the NPI continue to rise. In other words, it appears as though young people today are more narcissistic than older people were when they were younger.
There appears to be a correlation between narcissism and self-esteem-based education. In other words, all of that school crap about how everyone is good and likable has produced a generation of narcissists.
Celebrity does not make people narcissists but narcissistic people seek fame.
Reality TV stars score higher on the NPI than other celebrities.
OK… given these different findings (some of which are still up for debate in academic circles), what should we make of teens’ participation on social network sites in relation to narcissism?
My view is that we have trained our children to be narcissistic and that this is having all sorts of terrifying repercussions; to deal with this, we’re blaming the manifestations instead of addressing the root causes and the mythmaking that we do to maintain social hierarchies. Let’s unpack that for a moment.
American individualism (and self-esteem education) have allowed us to uphold a myth of meritocracy. We sell young people the idea that anyone can succeed, anyone can be president. We ignore the fact that working class kids get working class jobs. This, of course, has been exacerbated in recent years. There used to be meaningful working class labor that young people were excited to be a part of. It was primarily masculine labor and it was rewarded through set hierarchies and unions helped maintain that structure. The unions crumbled in the 1980s and by the time the 1987 recession hit, there was a teenage wasteland. No longer were young people being socialized into meaningful working class labor; the only path out was the "lottery" (aka becoming a famous rock star, athlete, etc.).
Since the late 80s, the lottery system has become more magnificent and corporatized. While there’s nothing meritocratic about reality TV or the Spice Girls, the myth of meritocracy remains. Over and over, working class kids tell me that they’re a better singer than anyone on American Idol and that this is why they’re going to get to be on the show. This makes me sigh. Do i burst their bubble by explaining that American Idol is another version of Jerry Springer where hegemonic society can mock wannabes? Or does their dream have value?
So, we have a generation growing up being told that they can be anyone, magnifying the level of narcissism. Narcissists seek fame and Hollywood dangles fame like a carrot on a stick. Meanwhile, technology emerges that challenges broadcast’s control over distribution. It just takes a few Internet success stories for fame-seeking narcissists to begin projecting themselves into the web in the hopes of being seen and being validated. While the important baseline of peer-validation still dominates, the hopes of becoming famous are still part of the narrative. Unfortunately, it’s kinda like watching wannabe actors work as waiters in Hollywood. They think that they’ll be found there because one day long ago someone was and so they go to work everyday in a menial service job with a dream.
Perhaps i should rally behind people’s dreams, but i tend to find them quite disturbing. It is these kinds of dreams that uphold the American myths that get us into such trouble. They also uphold hegemony and the powerful feed on their dreams, offering nothing in return. We can talk about reality TV as an amazing opportunity for anyone to act, but realistically, it’s nothing more than Hollywood’s effort to bust the actors’ guild and related unions. Feed on people’s desire for fame, pay them next to nothing and voila profit margin!
Unfortunately, union busting is the least of my worries when it comes to dream parasites. When i was trying to unpack the role of crystal meth in domestic violence, i started realizing that the meth offered a panacea when the fantasy bubble burst. Needless to say, this resulted in a spiral into hell for many once-dreamers. The next step was even more nauseating. When i started seeing how people in rural America recovered from meth, i found one common solution: born-again Christianity. The fervor for fame which was suppressed by meth re-emerged in zealous religiosity. Christianity promised an even less visible salvation: God’s grace. While blind faith is at the root of both fame-seeking and Christianity, Christianity offers a much more viable explanation for failures: God is teaching you a lesson… be patient, worship God, repent, and when you reach heaven you will understand.
While i have little issue with the core tenets of Christianity or religion in general, i am disgusted by the Christian Industrial Complex. In short, i believe that there is nothing Christian about the major institutions behind modern day organized American Christianity. Decades ago, the Salvation Army actively engaged in union-busting in order to maintain the status quo. Today, the Christian Industrial Complex has risen into power in both politics and corporate life, but their underlying mission is the same: justify poor people's industrial slavery so that the rich and powerful can become more rich and powerful. Ah, the modernization of the Protestant Ethic.
Let’s pop the stack and return to fame-seeking and massively networked society. Often, you hear Internet people modify Andy Warhol’s famous quote to note that on the Internet, everyone will be famous amongst 15. I find this very curious, because aren’t both time and audience needed to be famous? Is one really famous for 15 minutes? Or amongst 15? Or is it just about the perceived rewards around fame?
Why is it that people want to be famous? When i ask teens about their desire to be famous, it all boils down to one thing: freedom. If you’re famous, you don’t have to work. If you’re famous, you can buy anything you want. If you’re famous, your parents can’t tell you what to do. If you’re famous, you can have interesting friends and go to interesting parties. If you’re famous, you’re free! This is another bubble that i wonder whether or not i should burst. Anyone who has worked with celebrities knows that fame comes with a price and that price is unimaginable to those who don’t have to pay it.
How does this view of fame play into narcissism? If you think you’re all that, you don’t want to be told what to do or how to do it… You think you’re above all of that. When your parents are telling you that you have to clean your room and that you’re not allowed out, they’re cramping your style. How can you be anyone you want to be if you can’t even leave the house? Fame appears to be a freedom from all of that.
The question remains… does micro-fame (such as the attention one gets from being very cool on MySpace) feed into the desires of narcissists to get attention? On a certain level, yes. The attention feels good; it feeds the ego. But the thing about micro-celebrities is that they’re not free from attack. One of the reasons that celebrities go batty is that fame feeds into their narcissism, further heightening their sense of self-worth as more and more people tell them that they’re all that. They never see criticism; their narcissism is never held in check. This isn’t true with micro-fame and this is especially not true online when celebrities face their fans (and haters) directly. Net celebrities feel the exhaustion of attention and nagging much quicker than Hollywood celebrities. It’s a lot easier to burn out quickly, before ever reaching that mass scale of fame. Perhaps this keeps some of the desire for fame in check? Perhaps not. I honestly don’t know.
What i do know is that MySpace provides a platform for people to seek attention. It does not inherently provide attention and this is why even if people wanted 90M viewers to their blog, they’re likely to only get 6. MySpace may help some people feel the rush of attention, but it does not create the desire for attention. The desire for attention runs much deeper and has more to do with how we as a society value people than with what technology we provide them.
I am most certainly worried about the level of narcissism that exists today. I am worried by how we feed our children meritocratic myths and dreams of being anyone just so that current powers can maintain their supremacy at a direct cost to those who are supplying the dreams. I am worried that our “solutions” to the burst bubble are physically, psychologically, and culturally devastating, filled with hate and toxic waste. I am worried that Paris Hilton is a more meaningful role model to most American girls than Mother Teresa ever was. But i am not inherently worried about social network technology or video cameras or magazines. I’m worried by how society leverages different media to perpetuate disturbing ideals and prey on people’s desire for freedom and attention. Eliminating MySpace will not stop the narcissistic crisis that we’re facing; it will simply allow us to play ostrich as we continue to damage our children with unrealistic views of the world.
I’m often asked what “Web 3.0” will be about. Lately, i have found myself talking about two critical stages of web sociality in order to explain where we’re going. I realized that i never succinctly described this here so i thought i should.
In early networked publics, there were two primary organizing principles for group sociability: interests and activities. People came together on rec.motorcycles because they shared an interest in motorcycles. People also came together in work groups to discuss activities. Usenet, mailing lists, chatrooms, etc. were organized around these principles.
By and large, these were strangers meeting. Early net adopters were often engaging with people like them who were not geographically proximate. Then the boom hit and everyone got online, often to email with their friends (and consume). With everyone online, the organizing principles of sociality shifted.
As blogging began to take hold, people started arranging themselves around pre-existing friend groups. In this way, the organizing principle was about ego-centric networks. People’s “communities” began being defined by their friends. This model is quite different from group-driven structures where there are defined network boundaries. Ego-centric systems form a (mostly) continuous graph. There are certainly clusters, but rarely bounded groups. This is precisely how we get the notion of “6 degrees of separation.” While blogging (and to a lesser degree homepages) were key to this shift, it was really social network sites that took the ball to the endzone. They made the networks visible, allowing people to put themselves at the center of their world. We finally have a world wide WEB of people, not just documents.
When i think about what’s next, i don’t think it’s going more virtual, more removed from everyday life. Actually, i think it’s even more connected to everyday life. We moved from ideas to people. What’s next? Place.
I believe that geographic-dependent context will be the next key shift. GPS, mesh networks, articulated presence, etc. People want to go mobile and they want to use technology to help them engage in the mobile world. Unfortunately, i think we have huge structural barriers in front of us. It’s not that we can’t do this on a technological level, it’s that there are old-skool institutions that want to get in the way. And they want to do it by plugging the market and shaping the law to their advantage. Primarily, i’m talking about carriers. And the handset makers who help keep the carriers alive. Let me explain.
The internet was not made for social communities. It was not made for social network sites. These grew because some creative folks decided to build on the open platform that was made available. Until recently, network neutrality was never a debate in the internet world because it was assumed. Given a connection (and time and literacy), anyone could contribute. Gotta love libertarian idealism.
Unfortunately, the same is not true for the mobile network. There’s never been neutrality and it’s the last thing that the carriers want. They want to control every byte and every application that can be put on the handsets that they adopt (and control through locking). In short, they want to control everything. It’s near impossible to develop networked social applications for mobiles. If it works on one carrier, it’s bound to be ignored by others. Even worse, the carriers have a disincentive to allow you to spread bytes over the network. (I can’t imagine how much those with all-you-can-eat plans detest Twittr.) Culturally, this is the step that’s next. Too bad i think that inane corporate bullshit is going to get in the way.
Of course, while i think that people want to move in this direction, i also think that privacy confusion has only just begun.
On Wednesday, Twitter tipped the tuna. By that I mean it started peaking. Adoption amongst the people I know seemed to double immediately, an apparent tipping point. It hasn’t jumped the shark, and probably won’t until Stephen Colbert covers this messaging of the mundane. As Twitter turns 1 on March 13th, not only is there a quickening of users, but of messages per user.
Twitter, in a nutshell, is mobile social software that lets you broadcast and receive short messages with your social network. You can use it with SMS (sending a message to 40404), on the web or IM. A darn easy API has enabled other clients such as Twitterific for the Mac. Twitter is Continuous Partial Presence, mostly made up of mundane messages in answer to the question, “what are you doing?” A never-ending stream of presence messages prompts you to update your own. Messages are more ephemeral than IM presence — and posting is of a lower threshold, both because of ease and accessibility, and the informality of the medium.
Anil Dash was spot-on to highlight “The sign of success in social software is when your community does something you didn’t expect.” A couple of weeks ago it became a convention to start messages with @username as a way of saying something to someone visible to everyone. Within the limited affordances of the tool, people started to use it not only for presence, but a kind of shouting at the party conversation. Further, when you see an @ to someone who isn’t in your social network, you find yourself inclined to go see who it is or add them if they are a friend who just joined. This kind of social discovery goes beyond seeing friend lists on profiles, aids network structure and quickens adoption.
While the app is viral (you have to get others to adopt to be able to use it), mobile social software has great word-of-mouth properties. At Wikimania this summer, a buzz went off in my pocket when I was having dinner, which prompted me to get Jason Calacanis, Dave Winer and the brothers Gillmor to adopt. Wednesday was the first day of TED, so a bunch of A-listers spread it. At SXSW it seems to be the smart mob tool of choice, and there is even a group for it with a feature I’ve never seen before, JOIN.
This week most of my company joined Twitter and I set up http://twitter.com/socialtext for no reason in particular. I posted the login in a private wiki page to let anyone contribute. But when Moconner saw how simple the API was, he wrote a bot to let us post from our IRC channel. Now we have a low threshold way to express group identity that fits with the way we work.
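Moconner’s IRC bot hints at how small such an integration could be. As a rough sketch (not his actual code), Twitter’s original REST API accepted a basic-auth POST to statuses/update with a single status parameter; the helper below just constructs that request, and the endpoint, credentials, and message are illustrative:

```python
# Hedged sketch of posting a status via Twitter's original basic-auth API.
# Builds the HTTP request without sending it; endpoint/params are historical
# and no longer accepted by the live service.
import base64
from urllib.parse import urlencode
from urllib.request import Request

def build_update_request(username, password, status):
    """Construct a POST request for the early statuses/update endpoint."""
    data = urlencode({"status": status}).encode("utf-8")
    req = Request("https://twitter.com/statuses/update.json", data=data)
    # Pre-OAuth Twitter used HTTP Basic authentication.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req
```

An IRC bot would simply call something like this with each channel message it wanted to mirror to the group account.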
Liz Lawley well addressed the differences of this form of presence and criticisms of mundane content and interruption costs. She highlights “exploring clusters of loosely related people by looking at the updates from their friends. There are stories told in between updates.”
However, I do think the interruption tax is significant — especially with the quickening of adoption. You use your social network as a filter, which helps both in scoping participation within a pull model of attention management and, to Liz’s point, in letting my friends digest the web for me, perhaps reducing my discovery costs. But Twitter’s affordance of both mobile and web, which is what lets Anil (who is Web-only) use it at all, is also what helps me manage attention overload. I can throttle back to web-only and curb interruptions, simply by texting off.
Good thing too, because back when it was called twittr, people held back, believing that what they posted would be interrupting on mostly mobile devices. Lately I think people just go for it, and most consumption is on the web or other clients. I’d love to see some research on posts/user, client use, tracking @username, group identities, geographic dispersion and revealing other undesigned conventions.
So a few weeks ago, I started getting spam referencing O’Reilly books in the subject line, and I thought that the spammers had just gotten lucky, and that the universe of possible offensive measures for spammers now included generating so many different subject lines that at least some of them got through to my inbox, but recently I’ve started to get more of this kind of spam, as with:
Subject: definition of what “free software” means. Outgrowing its
Subject: What makes it particularly interesting to private users is that there has been much activity to bring free UNIXoid operating systems to the PC,
Subject: and so have been long-haul links using public telephone lines. A rapidly growing conglomerate of world-wide networks has, however, made joining the global
(All are phrases drawn from http://tldp.org/LDP/nag/node2.html.)
Can it be that spammers are starting to associate context with individual email addresses, in an effort to evade Bayesian filters? (If you wanted to make sure a message got to my inbox, references to free software, open source, and telecom networks would be a pretty good way to do it. I mean, what are the chances?) Some of this stuff is so close to my interests that I thought I’d written some of the subject lines and was receiving this as a reply. Or is this just general Bayes-busting that happens to overlap with my interests?
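For readers unfamiliar with what the spammers are gaming: a Graham-style Bayesian filter combines per-word spam probabilities into a score for the whole message, so stuffing the subject with words the recipient legitimately reads drags the score down. A toy sketch (the per-word probabilities here are invented for illustration):

```python
# Toy naive-Bayes spam scorer in the style of Graham's "A Plan for Spam".
# word_spamminess maps a word to an estimated P(spam | word); unseen words
# get a mildly "hammy" default. Computed in log space to avoid underflow.
import math

def spam_probability(words, word_spamminess, default=0.4):
    """Combine per-word spam probabilities into P(spam) for the message."""
    log_spam = 0.0
    log_ham = 0.0
    for w in words:
        p = word_spamminess.get(w, default)
        log_spam += math.log(p)
        log_ham += math.log(1.0 - p)
    # P(spam) = prod(p) / (prod(p) + prod(1-p))
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))
```

A subject line full of words like “free software” that the filter has learned are hammy for this particular inbox pulls the combined probability below the spam threshold, which is exactly the evasion being speculated about.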
If it’s the former, then Teilhard de Chardin is laughing it up in some odd corner of the noosphere, as our public expressions are being reflected back to us as a come-on. History repeats itself, first as self-expression, then as ad copy…
I’m completely fascinated by Twitter right now—in much the same way I was by blogging four years ago, and by ICQ years before that.
If you haven’t tried it yet, Twitter is a site that allows you to post one-line messages about what you’re currently doing—via the web interface, IM, or SMS. You can limit who sees the messages to people you’ve explicitly added to your friends list, or you can make the messages public. (My Twitter posts are private, but my friend Joi’s are public.)
What Twitter does, in a simple and brilliant way, is to merge a number of interesting trends in social software usage—personal blogging, lightweight presence indicators, and IM status messages—into a fascinating blend of ephemerality and permanence, public and private.
The big “P” word in technology these days is “participatory.” But I’m increasingly convinced that a more important “P” word is “presence.” In a world where we’re seldom able to spend significant amounts of time with the people we care about (due not only to geographic dispersion, but also the realities of daily work and school commitments), having a mobile, lightweight method for both keeping people updated on what you’re doing and staying aware of what others are doing is powerful.
I’ve experimented a bit with a visual form of this lightweight presence indication, through cameraphone photos taken while traveling. A photo of a boarding gate sign, or of a hotel entrance, conveys where I am and what I’m doing quickly and easily. But that only works if people are near a computer and are watching my Flickr photo feed, and that’s a lot to ask.
I also use IM status messages to broadcast what I’m doing. My iChat has a stack of custom messages that I’ve saved for re-use, from “packing” and “at the airport” to “breaking up sibling squabbles” and “grading…the horror! the horror!” But status messages have no permanence to them, and require some degree of synchronicity—people have to be logged into IM, and looking at status messages, while I’m there. Because Twitter archives your messages on the web (and can send them as SMS that you can check at any time), that requirement for synchronous connections goes away.
Blogs allow this kind of archived update, of course—but they’re not lightweight. Where one might easily post a Twitter message along the lines of “on my way to work”, a blog post like that wouldn’t be worth the effort and overhead.
I’ve heard two kinds of criticisms of Twitter already.
The first criticizes the triviality of the content. But asking “who really cares about that kind of mindless trivia about your day” misses the whole point of presence. This isn’t about conveying complex theory—it’s about letting the people in your distributed network of family and friends have some sense of where you are and what you’re doing. And we crave this, I think. When I travel, the first thing I ask the kids on the phone when I call home is “what are you doing?” Not because I really care that much about the show on TV, or the homework they’re working on, but because I care about the rhythms and activities of their days. No, most people don’t care that I’m sitting in the airport at DCA, or watching a TV show with my husband. But the people who miss being able to share in day-to-day activity with me—family and close friends—do care.
The second type of criticism is that the last thing we need is more interruptions in our already discontinuous and partially attentive connected worlds. What’s interesting to me about Twitter, though, is that it actually reduces my craving to surf the web, ping people via IM, and cruise Facebook. I can keep a Twitter IM window open in the background, and check it occasionally just to see what people are up to. There’s no obligation to respond, which I typically feel when updates come from individuals via IM or email. Or I can just check my text messages or the web site when I feel like getting a big picture of what my friends are up to.
Which then leads to one of the aspects of Twitter that I find most fascinating—exploring clusters of loosely related people by looking at the updates from their friends. There are stories told in between updates. Who’s at a conference, and do they know each other? Who’s on the road, and who’s at home. Narratives that wind around and between the updates and the people, that show connections. Updates that echo each other, or even directly respond to another Twitter post.
There’s more to it than that, but I’m still sorting it all out in my head. Just wanted to post an early-warning signal that I see something important happening here, something worth paying (more than partial) attention to.
(cross-posted from mamamusings; since comments have been unreliable here, any comments can be posted there)
Not long ago, I wrote an article on “Social Publishing” here on Many-to-Many, which suggests the possibility of a system where
“authors create and distribute their work, and readers, individually and collectively, including fans as well as editors and peers, review, comment, rank, and tag, everything.”
So I followed up on the post and, along with a colleague, Richard Adler, started Oort-Cloud.org.
Oort-Cloud is a site where science fiction and fantasy readers and writers can build precisely the kind of community that I alluded to in Social Publishing. Oort-Cloud utilizes a process we have termed “OpenLit” which you can read more about on the OpenLit page. Basically, OpenLit is a simple catalytic cycle:
Write - Share - Read - Respond
First, writers write.
Second, writers share with others what they have written.
Third, readers read what is available.
Fourth, readers respond to what they have read.
In this way, writers become better writers by virtue of having a distribution outlet that embeds constant feedback, and readers have access to better and better stories, where “better” actually means better for them based on their interaction with the writers.
Hopefully, this all means new opportunities for everyone involved in science fiction and fantasy — readers, writers, and publishers alike.
Last week, Facebook unveiled a gifting feature. For $1, you can purchase a gift for the person you most adore. If you choose to make the gift public, you are credited with that gift on the person’s profile under the “gift box” region. If you choose to make the gift private, the gift is still there but there’s no notice concerning who gave it.
Before getting into this, let me take a moment to voice my annual bitterness over Hallmark Holidays, particularly the one that involves an obscene explosion of pink, candy, and flowers.
The gifting feature is fantastically timed to align with a holiday built around status: Valentine’s Day. Valentine’s Day is all about pronouncing your relationship to loved ones (and those you obsess over) in the witness of others. Remember those miniature cards in elementary school? Or the carnations in high school? Listening to the radio, you’d think Valentine’s Day was a contest. Who can get the most flowers? The fanciest dinner? This holiday should make most people want to crawl in bed and eat bon-bons while sobbing over sappy movies. But it works. It feeds on people’s desire to be validated and shown as worthy to the people around them, even at the expense of others. It is a holiday built purely on status (under the guise of “love”). You look good when others love you (and the more the merrier).
Of course, Valentine’s Day is not the only hyper-commercialized holiday. The celebration of Christ’s birth is marked by massive shopping. In response, the Festival of Lights has been turned into 8 days of competitive gift giving in American Jewish culture. Acknowledging that people get old in patterns that align with a socially constructed calendar also requires presents. Hell, anything that is seen as a lifestage change requires gifts (marriage, childbirth, graduation, Bat Mitzvah, etc.).
Needless to say, gift giving is perpetuated by a consumer culture that relishes any excuse to incite people to buy. My favorite example of this is the “gift certificate” - a piece of paper that says that you couldn’t think of what to give so you assuaged your guilt by giving money to a corporation. You get brainwashed into believing that forcing your loved one to shop at that particular venue is thoughtful, even though the real winner is the corporation since only a fraction of those certificates are ever redeemed. No wonder corporations love gift certificates - they allow them to make bundles and bundles of money, knowing that the receiver will never come back for the goods.
But anyhow… i’ve gone off on a tangent… Gifts. Facebook.
Unlike Fred, i think that gifts make a lot more sense than identity purchases when it comes to micro-payments and social network sites. Sure, buying clothes in virtual systems makes sense, but what’s the value of paying to deck out your profile if the primary purpose of it is to enable communication? I think that for those who actively try to craft a public identity through profiles (celebrities and fame junkies), paying to make a cooler profile makes sense. But most folks are quite content with the crap that they can do for free and i don’t see them paying money to get more fancified backgrounds when they can copy/paste. That said, i think it’s very interesting when you can pay to affect someone else’s profile. I think it’s QQ where you can pay to have a donkey shit on your friend’s page and then they have to pay to clean it up. This prankster “gift” has a lot of value. It becomes a game within the system and it bonds two people together.
In a backchannel conversation, Fred argues with me that digital gifts will have little value because they only make people look good for a very brief period. They do not have the same type of persistence as identity-driven purchases like clothing in WoW. I think that it is precisely this ephemeralness that will make gifts popular. There are times for gift giving (predefined by society). Individuals’ reaction to this is already visible on social network sites comments. People write happy birthday and send glitter for holidays (a.k.a. those animated graphical disasters screaming “happy valentine’s day!”). These expressions are not simply altruistic kindness. By publicly performing the holiday or birthday, the individual doing the expression looks good before hir peers. It also prompts reciprocity so that one’s own profile is then also filled with validating comments. Etc. Etc. (If interested in gifting, you absolutely must read the canon: Marcel Mauss’ “The Gift”.)
Like Fred, i too have an issue with the economic structure of Facebook Gifts, but it’s not because i think that $1 is too expensive. Gifts are part of status play. As such, there are critical elements about gift giving that must be taken into consideration. For example, it’s critical to know who gifted who first. You need to know this because it showcases consideration. Look closely at comments on MySpace and you’ll see that timing matters; there’s no timing on Facebook so you can’t see who gifted who first and who reciprocated. Upon receipt of a gift, one is often required to reciprocate. To handle being second, people up the ante in reciprocating. The second person gives something that is worth more than the first. This requires having the ability to offer more; offering two of something isn’t really the right answer - you want to offer something of more value. All of Facebook’s gifts are $1 so they are all equal. Value, of course, doesn’t have to be about money. Scarcity is quite valuable. If you gift something rare, it’s far more desired than offering a cheesy gift that anyone could get. This is why the handmade gift matters in a culture where you can buy anything.
I don’t think Facebook gifts - in its current incarnation - is sustainable. You can only gift so many kisses and rainbows before it’s meaningless. And what’s the point of paying $1 for them (other than to help the fight against breast cancer)? $1 is nothing if the gift is meaningful, but the 21 gift options will quickly lose meaning. It’s not just about dropping the price down to 20 cents. It’s about recognizing that gifting has variables that must be taken into account.
People want gifts. And they want to give gifts. Comments (or messages on the wall) are a form of gifting and every day, teens and 20-somethings log in hoping that someone left a loving comment. (And all the older folks cling to their Crackberries with the same hope.) It’s very depressing to log in and get no love.
I think that Facebook is right-on for making a gifting-based offering, but i think that to make it work long-term, they need to understand gifting a bit better. It’s about status. It’s about scarcity. It’s about reciprocity and upping the ante. These need to be worked into the system, and evolving this will make Facebook look good, not like they are backpedaling. This is not about gifting being a one-time rush; it’s about understanding the social structure of gifting.
Wikipedia’s policy of neutrality sometimes forces resolution when we’d rather have debate. Yes, competing sides get represented in the articles, and the discussion pages let us hear people arguing their points, but the arguments themselves are treated as stations on the way to neutral agreement.
So, there’s room for additional approaches that take the arguments themselves as their topics. That’s what Debatepedia.org does, and it looks like it’s on its way to being really useful.
Like Wikipedia, anyone can edit existing content. Unlike Wikipedia, its topics are all up for debate. Each topic presents both sides, structured into sub-questions, with a strong ethos of citation, factuality, and lack of flaming; the first of its Guiding Principles is “No personal opinion.” Rather, it attempts to present the best case and best evidence for each side.
Debatepedia limits itself to topics with yes-no alternatives and with clear pro and con cases. To start a debate, a user has to propose it and the editors (who seem to be the people who founded it…I couldn’t find info about them on the site) have to accept it. This keeps people from proposing stupid topics and boosts the likelihood that if you visit a listed debate, you’ll find content there. It also limits discussion to topics that have two and only two sides, which may turn out to be a serious limitation. But, we’ll see. And it can adapt as required.
Will Debatepedia take off? Who the hell knows. But it’s a welcome addition to the range of experiments in pulling ourselves together.
In the tech circles in which i run, the term “walled gardens” evokes a scrunching of the face if not outright spitting. I shouldn’t be surprised by this because these are the same folks who preach the transparent society as the panacea. But i couldn’t help myself from thinking that this immediate revulsion is obfuscating the issue… so i thought i’d muse a bit on walled gardens.
Walled gardens are inevitably built out of corporate greed - a company wants to lock in your data so that you can’t move between services and leave them in the dust. They make money off of your eyeballs. They make money off of your data. (In return, they often provide you with “free” services.) You put blood, sweat, and tears - or at least a little bit of time - into providing them with valuable data and you can’t get it out when you decide you’ve had enough. If this were the full story, of course walled gardens look foul to the core.
The term “walled garden” implies that there is something beautiful being surrounded by walls. The underlying assumption is that walls are inherently bad. Yet, walls have certain value. For example, i’m very appreciative of walls when i’m having sex. I like to keep my intimate acts intimate and part of that has to do with the construction of barriers that prevent others from accessing me visually and audibly. I’m not so thrilled about tearing down all of the walls in meatspace. Walls are what allow us to construct a notion of “private” and, even more importantly, contextualized publics. Walls help contain the social norms so that you know how to act properly within their confines, whether you’re at a pub or in a classroom.
One of the challenges online is that there really aren’t walls. What walls did exist came tumbling down with the introduction of search. Woosh - one quick query and the walls that separated comp.lang.perl from alt.sex.bondage came crashing down. Before search (a.k.a. Deja), there were pseudo digital walls. Sure, Usenet was public but you had to know where the door was to enter the conversation. Furthermore, you had to care to enter. There are lots of public and commercial places i pass by every day that i don’t bother entering. But, “for the good of all humankind”, search came to pave the roads and Arthur Dent couldn’t stop the digital bulldozer.
We’re living with the complications of no walls online. Determining context is really really hard. Is your boss really addressing you when he puts his pic up on Match.com? Does your daughter take your presence into consideration when she crafts her MySpace? No doubt it’s public, but it’s not like any public that we’re used to in meatspace.
For a long time, one of the accidental blessings of walled gardens was that they kept out search bots as part of their selfish data retention plan. This meant that there were no traces left behind of people’s participation in walled gardens when they opted out - no caches of previous profiles, no records of a once-embarrassing profile. Much to my chagrin, many of the largest social network sites (MySpace, LinkedIn, Friendster, etc.) have begun welcoming the bots. This makes me wonder… are they really walled gardens any longer? It sounds more like chain linked fences to me. Or maybe a fishbowl with a little plastic castle.
What does it mean when the supposed walled gardens begin allowing external sites to cache their content?
[tangent] And what on earth does it mean that MySpace blocks the Internet Archive in its robots.txt but allows anyone else? It’s like they half-realize that posterity might be problematic for profiles, but fail to realize that caches of the major search engines are just as freaky. Of course, to top it off, their terms say that you may not use scripts on the site - isn’t a bot a script? The terms also say that participating in MySpace does not give them a license to distribute your content outside of MySpace - isn’t a Google cache of your profile exactly that? [end tangent]
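For the curious, the half-measure described in the tangent is easy to express in the Robots Exclusion Protocol. A hypothetical robots.txt of the shape being described (ia_archiver is the crawler the Internet Archive has historically honored; the exact file MySpace served is not reproduced here) would look like:

```
# Block the Internet Archive's crawler...
User-agent: ia_archiver
Disallow: /

# ...while allowing every other bot, including search-engine cachers.
User-agent: *
Disallow:
```

Which is precisely the inconsistency: posterity is refused, but the major search engines’ caches, which are just as durable a trace, are waved through.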
Can we really call these sites walled gardens if the walls are see-through? I mean, if a search bot can grab your content for cache, what’s really stopping you from doing so? Most tech folks would say that they are walled gardens because there are no tools to support easy export. Given that thousands of sites have popped up to provide codes for you to turn your MySpace profile into a dizzy display of animated daisies with rainbow hearts fluttering from the top (while inserting phishing scripts), why wouldn’t there be copy/pastable code to let you export/save/transfer your content? Perhaps people don’t actually want to do this. Perhaps the obsessive personal ownership of one’s content is nothing more than a fantasy of the techno-elite (and the businessmen who haven’t yet managed to lock you in to their brainchild). I mean, if you’re producing content into a context, do you really want to transfer it wholesale? I certainly don’t want my MySpace profile displayed on LinkedIn (even if there are no nude photos there).
For all of this rambling, perhaps i should just summarize into three points:
If walls have value in meatspace, why are they inherently bad in mediated environments? I would argue that walls provide context and allow us to have some control over the distribution of our expressions. Walls should be appreciated, even if they are near impossible to construct.
If robots can run around grabbing the content of supposed walled gardens, are they really walled? It seems to me that the tizzy around walled gardens fails to recognize that those most interested in caching the data (::cough:: Google) can do precisely that. And those most interested do not seem to include the content producers.
If the walls come crashing down, what are we actually losing? Walls provide context, and context is critical for individuals to properly express themselves in a socially appropriate way. I fear that our loss of walls is resulting in a very confused public space with far more visibility than anyone can actually handle.
Basically, i don’t think that walled gardens are all that bad. I think that they actually provide a certain level of protection for those toiling in the mud. The problem is that i think that we’ve torn down the walls of the supposed walled gardens and replaced them with chain links or glass. Maybe even one-way glass. And i’m not sure that this is such a good thing. ::sigh::
So, what am i missing? What don’t i understand about walled gardens?
Technorati has a new feature that’s only slightly confusing but very interesting and potentially quite useful. (Disclosure: I’m on Technorati’s board of advisors.)
It’s called “WTF,” which technically stands for “Where’s the Fire,” but has another more likely meaning. (David Isenberg named one of his conferences “WTF” and then had a contest to decide what it stood for.) So, if you go to Technorati and take a look at the Top Searches in the upper right, to the left of each entry there’s an orange flame. Don’t click on it yet because the page it takes you to is confusing. Instead, click on one of the searches. At the moment, “Boston Mooninites” is the top search. Click on it to go to the search results page. The top result is not a result at all. It’s got a flame icon next to it, indicating that it’s actually the WTF about the phrase “Boston Mooninites.” It’s an explanation of what that phrase means and why people are searching on it now. Who wrote it? Anybody who wants to. So now click on the flame icon. It takes you to the same page you would have gotten to if you had clicked on the flame icon in the Top Searches list on the home page.
Ok, so now you’re on the WTF page for “Boston Mooninites.” Note that this is not the search results page. It’s where you get to create your own WTF for that search query. Or you can vote for one of the existing ones; the one with the most votes is featured on the search results page for the query.
It’ll be very interesting to see how this develops. For example, the current top WTF for Windows Vista is a product review, not a neutral explanation. (I’m not complaining.) Many of the WTFs on the Vista list are responses to previous ones, as if WTFs were a discussion board, probably an artifact of the layout of the WTF page.
Introduction: This post is an experiment in synchronization. Since Henry Jenkins, Beth Coleman, and I are all writing about Second Life and because we like each other’s work, even when (or especially when) we disagree, we’ve decided to all post something on Second Life today. Beth’s post will appear at http://www.projectgoodluck.com/blog/, and Henry’s is at http://www.henryjenkins.org/.
Let me start with some background. Because of the number of themes involved in discussions of Second Life, it’s easy to end up talking at different levels of abstraction, so let me start with two core assertions, things that I take as background to my part of the larger conversation:
First, Linden’s Residents figures are methodologically worthless. Any claim about Second Life derived from a count of Residents is not to be taken seriously, and anyone making claims about Second Life based on those figures is to be regarded with skepticism. (Explanation here and here.)
Second, there are many interesting things going on in Second Life. As I have said in other forums, and will repeat here, passionate users are a law unto themselves, and rightly so. Nothing I could say about their experience in Second Life, pro or con, would matter to those users. My concerns are demographic.
With those assertions covered, I am asking myself two things: will Second Life become a platform for a significant online population? And, second, what can Second Life tell us about the future of virtual worlds generally?
Concerning popularity, I predict that Second Life will remain a niche application, which is to say an application that will be of considerable interest to a small percentage of the people who try it. Such niches can be profitable (an argument I made in the Meganiche article), but they won’t, by definition, appeal to a broad cross-section of users.
The logic behind this belief is simple: most people who try Second Life don’t like it. Something like five out of six new users abandon it before a month is up. The three month abandonment figure seems to be closer to nine out of ten. (This figure is less firm, as it has only been reported colloquially, with no absolute numbers behind it.)
More importantly, the current active population is still an unknown. (Call this metric something like “How many users in the last 30 days have accounts more than 30 days old?”) We know the highest that figure could be is in the low hundreds of thousands, but no one other than the Lindens (and, presumably, their bigger marketing clients) knows how much lower it is than this theoretical maximum.
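For concreteness, the metric in question can be sketched in a few lines of Python over invented data. The dates and users here are purely illustrative, since no one outside Linden has the real logs:

```python
from datetime import date, timedelta

# Proposed metric: of users active in the last 30 days, how many
# have accounts more than 30 days old? All data below is invented.
today = date(2007, 1, 15)
cutoff = today - timedelta(days=30)

# (account_created, last_active) per user
users = [
    (date(2006, 1, 1), date(2007, 1, 10)),  # old account, recently active
    (date(2007, 1, 5), date(2007, 1, 14)),  # new account, recently active
    (date(2006, 6, 1), date(2006, 9, 1)),   # old account, dormant
]

# Count users who were active within the window but whose accounts
# predate it -- i.e. people who came back after the first month.
active_established = sum(
    1 for created, last_active in users
    if last_active >= cutoff and created < cutoff
)
print(active_established)  # 1
```

Note how the new-but-active account and the old-but-dormant account are both excluded; only retained users count, which is the whole point of the metric.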
The poor adoption rate is a form of aggregate judgment. Anything bruited for wide adoption would have trouble with 85%+ abandonment, whether software or toothpaste. One possible explanation for this considerable user defection might be a technological gap. I do not doubt that improvements to the client and server would decrease the abandonment rate. I do doubt the improvement would be anything other than incremental, given 5 years and tens of millions in effort already.
Note too that abandonment is not a problem that all visually traversable spaces suffer from. Both Doom and Cyworld serve as counter-examples; in those cases, the rendering is cartoonish, yet both platforms achieved huge popularity in a short period. If the non-visual experience is good, the rendering does not need to be, but the converse does not seem to be true, on present evidence.
There have been two broad responses to skepticism occasioned by the Linden population numbers. (Three, if you count ad hominem, but Chris Lott has already covered that.)
The first response is not specific to Second Life. Many people have recalled earlier instances of misguided skepticism about new technologies, but the logical end-case of that thought is that skepticism about technology is never appropriate. (Disconfirmation of this thesis is left as an exercise for the reader.) Given that most new technologies fail, the challenge is to figure out which ones won’t. No one has noted examples of software with 85% abandonment rates, after five years of development, that went on to become widespread. Such examples may exist, but I can’t think of any.
The second objection is a conviction that demographics are irrelevant, and that the interesting goings-on in Second Life are what matters, no matter how few users are engaged in those activities.
I have never doubted (and have explicitly noted above) that there are interesting things happening in Second Life. The mistake, from my point of view, is in mixing two different questions. Whether some people like Second Life a lot is a completely separate issue from whether a lot of people like it. It is possible for the first assertion to be true and the second one false, and this is the only reading I believe is supported by the low absolute numbers and high abandonment rates. Nor is this an unusual case. We have several examples of platforms with fascinating in-world effects (Alphaworld, Black Sun/Blaxxun, The Palace, Dreamscape, LambdaMOO and environments on the SuperMOO List, etc.), all of which also failed to achieve wide use.
It is here that assertions about Second Life have most often been inconsistent. Before the uselessness of Linden’s population numbers was widely understood, the illusion of a large and rapidly growing community was touted as evidence of Second Life’s success. When both the absolute numbers and growth turned out to be more modest, population was downgraded and other metrics have been introduced as predictive of Second Life’s inevitable success.
A hypothesis which is strengthened by evidence of popularity, but not weakened by evidence of unpopularity, isn’t really a hypothesis, it’s a religious assertion. And a core tenet of the faithful seems to be that claims about Second Life are buttressed by the certain and proximate arrival of virtual worlds generally.
If we had but worlds enough and time…
It is worth pausing at this juncture. Many people writing about Second Life make little distinction between ‘Second Life as a particular platform’ and ‘Second Life as an exemplar of the coming metaverse’. I would like to buck this trend, by explicitly noting the difference between those two conversations. I am basing my prediction of continued niche status for Second Life on the current evidence that most people who try it don’t like it. My beliefs about virtual worlds, on the other hand, are more conjectural. Everything below should be read with this caveat in mind.
With that said, I don’t believe that “virtual worlds” describes a coherent category, or, put another way, I believe that the group of things lumped together as virtual worlds have such variable implementations and user adoption rates that they are not well described as a single conceptual group.
I alluded to Pointcast in an earlier article; one of the ways the comparison is apt is in the abuse of categorization as a PR tool. Pointcast’s management claimed that email, the Web, and Pointcast all were about delivering content, and that the future looked bright for content delivery platforms. And indeed it did, except for Pointcast.
The successes of email and of the Web were better explained by their particular utilities than by their membership in a broad class of “content delivery.” Pointcast tried to shift attention from those particularities to a generic label in order to create a club in which it would automatically be included.
I believe a similar thing happens whenever Second Life is lumped with Everquest, World of Warcraft, et al., into a category called virtual worlds. If we accept the validity of this category, then multi-player games provide an existence proof of millions-strong virtual worlds, and the only remaining question is simply when we arrive at wider adoption of more general-purpose versions.
If, on the other hand, we don’t start off by lumping Second Life with Warcraft as virtual worlds, a very different question emerges: why do virtual game worlds outperform non-game worlds in their adoption? This pattern is quite stable over time — it well predates Second Life and World of Warcraft, as first Ultima Online (1997) and then Everquest (1999) each quickly dwarfed the combined populations of Alphaworld and Black Sun (later Blaxxun), despite the significant lead times of those virtual worlds. What is it about games that would make them a better fit for virtual environments than non-games?
Games have at least three advantages other virtual worlds don’t. First, many games, and most social games, involve an entrance into what theorists call the magic circle, an environment whose characteristics include simplified and knowable rules. The magic circle saves the game from having to live up to expectations carried over from the real world.
Second, games are intentionally difficult. If all you knew about golf was that you had to get this ball in that hole, your first thought would be to hop in your cart and drive it over there. But no, you have to knock the ball in, with special sticks. This is just about the stupidest possible way to complete the task, and also the only thing that makes golf interesting. Games create an environment conducive to the acceptance of artificial difficulties.
Finally, and most relevant to visual environments, our ability to ignore information from the visual field when in pursuit of an immediate goal is nothing short of astonishing (viz. the gorilla experiment.) The fact that we could clearly understand spatial layout even in early and poorly rendered 3D environments like Quake has much to do with our willingness to switch from an observational Architectural Digest mode of seeing (Why has this hallway been accessorized with lava?) to a task-oriented Guns and Ammo mode (Ogre! Quad rocket for you!)
In this telling, games are not just special, they are special in a way that relieves designers of the pursuit of maximal realism. There is still a premium on good design and playability, but the magic circle, acceptance of arbitrary difficulties, and goal-directed visual filtering give designers ways to contextualize or bury at least some platform limitations. These are not options available to designers of non-game environments; asking users to accept such worlds as even passable simulacra subjects those environments to withering scrutiny.
We can also reverse this observation. One question we might ask about successful non-game uses of virtual worlds is whether they too are special cases. One obvious example is erotic imagery. The zaftig avatar has been a trope of 3D rendering since designers have been able to scrape together enough polygons to model a torso, but examples start far earlier than virtual worlds. In fact, visual representation of voluptuous womanhood predates the invention of agriculture by the same historical interval as agriculture predates the present. This is a deep pattern.
It is also a pattern that, like games and unlike ordinary life, has a special relation to visual cues (though this effect is somewhat unbalanced by gender.) If someone is shown a virtual hamburger, it can arouse real hunger. However, to satisfy this hunger, he must then walk away from the image and get his hands on an actual hamburger. This is not the case, to put the matter delicately, with erotic imagery; a fetching avatar can arouse desire, but that desire can then be satiated without recourse to the real.
This pair of characteristics — a human (and particularly male) fixation on even poorly rendered erotic images, plus an ability to achieve a kind of gratification in the presence of those images — means that a sexualized rendering can create both attraction and satisfaction in a way that a rendering of, say, a mountain or an office cannot. As with games, visual worlds work in the context of eros not because the images themselves are so convincing, but because they reach a part of the brain that so desperately wants to be convinced.
More generally, I suspect that the cases where 3D immersion works are, and will continue to be, those uses that most invite the mind to fill in or simply do without missing detail, whether because of a triggering of sexual desire, the fight or flight reflex (many games), avarice (gambling), or other areas where we are willing and even eager to make rapid inferences based on a paucity of data. I also assume that these special cases are not simply adding up to a general acceptance of visual immersion, and that finding another avatar beguiling in a virtual bar is not in fact a predictor of being able to read someone’s face or body language in a virtual meeting as if you were with them. That, I believe, is a neurological problem of a different order.
Jaron Lanier is the Charles Babbage of Our Generation
Here we arrive at the furthest shores of speculation. One of the basic promises of virtual reality, at least in its Snow Crash-inflected version, is that we will be able to re-create the full sense of being in someone’s presence in a mediated environment. This desire, present at least since Shamash appeared to Gilgamesh in a dream, can be re-stated in technological terms as a hope that communications will finally become an adequate substitute for travel. We have been promised that this will come to pass with current technology since AT&T demoed a video phone at the 1964 World’s Fair.
I believe this version of virtual reality will in fact be achieved, someday. I do not, however, believe that it will involve a screen. Trying to trick the brain by tricking the eyes is a mug’s game. The brain is richly arrayed with tools to detect and unmask visual trickery — if the eyes are misreporting, the brain falls back on other externally focussed senses like touch and smell, or internally focussed ones like balance and proprioception.
Though the conception of virtual reality is clear, the technologies we have today are inadequate to the task. In the same way that the theory of computation arose in the mechanical age, but had to wait first for electrics and then electronics to be fully realized, general purpose virtual reality is an idea waiting on a technology, and specifically on neural interface, which will allow us to trick the brain by tricking the brain. (The neural interface in turn waits on trifling details like an explanation of consciousness.)
In the meantime, the 3D worlds program in the next decade is likely to resemble the AI program in the last century, where early optimism about rapid progress on general frameworks gave way to disconnected research topics (machine vision, natural language processing) and ‘toy worlds’ environments. We will continue to see valuable but specific uses for immersive environments, from flight training and architectural flythroughs to pain relief for burn victims and treatment for acrophobia. These are all indisputably good things, but they are not themselves general, and more importantly don’t suggest rapid progress on generality. As a result, games will continue to dominate the list of well-populated environments for the foreseeable future, rendering ineffectual the category of virtual worlds, and, critically, many of the predictions being attached thereunto.
[We’ve been experiencing continuing problems with our MT-powered commenting system. We’re working on a fix but for now send you to a temporary page where the discussion can continue.]
Intro: I was part of a group of people asked by Beth Noveck to advise the Community Patent review project about the design of a reputation and ranking system, to allow the widest possible input while keeping system gaming to a minimum. This was my reply, edited slightly for posting here.
We’ve all gone to school on the moderation and reputation systems of Slashdot and eBay. In both cases, growing popularity in the period after their respective launches produced a tragedy of the commons, where open access plus incentives led to nearly constant attack by people wanting to game the system, whether to gain attention for themselves or their point of view in the case of Slashdot, or to defraud other users, as with eBay.
The traditional response to these problems would have been to hire editors or other functionaries to police the system for abuse, in order to stem the damage and to assure ordinary users you were working on their behalf. That strategy, however, would fail at the scale and degree of openness at which those services function. The Slashdot FAQ tells the story of trying to police the comments with moderators chosen from among the userbase, first 25 of them and later 400. Like the Charge of the Light Brigade, however, even hundreds of committed individuals were just cannon fodder, given the size of the problem. The very presence of effective moderators made the problem worse over time. In a process analogous to more roads creating more traffic, the improved moderation saved the site from drowning in noise, so more users joined, but this increase actually made policing the site harder, eventually breaking the very system that made the growth possible in the first place.
EBay faced similar, ugly feedback loops; any linear expenditure of energy required for policing, however small the increment, would ultimately make the service unsustainable. As a result, the only opportunity for low-cost policing of such systems is to make them largely self-policing. From these examples and others we can surmise that large social systems will need ways to highlight good behavior or suppress negative behavior or both. If the guardians are to guard themselves, oversight must be largely replaced by something we might call intrasight, designed in such a way that imbalances become self-correcting.
The obvious conclusion to draw is that, when contemplating a new service with these characteristics, the need for some user-harnessed reputation or ranking system can be regarded as a foregone conclusion, and that these systems should be carefully planned so that tragedy of the commons problems can be avoided from launch. I believe that this conclusion is wrong, and that where it is acted on, its effects are likely to be at least harmful, if not fatal, to the service adopting them.
There is an alternate reading of the Slashdot and eBay stories, one that I believe better describes those successes, and better places Community Patent to take advantage of similar processes. That reading concentrates not on outcome but process; the history of Slashdot’s reputation system should teach us not “End as they began — build your reputation system in advance” but rather “Begin as they began — ship with a simple set of features, watch and learn, and implement reputation and ranking only after you understand the problems you are taking on.” In this telling, constituting users’ relations as a set of bargains developed incrementally and post hoc is more predictive of eventual success than simply adopting any residue from previous successes.
As David Weinberger noted in his talk The Unspoken of Groups, clarity is violence in social settings. You don’t get 1789 without living through 1788; successful constitutions, which necessarily create clarity, are typically ratified only after a group has come to a degree of informal cohesion, and is thus able to absorb some of the violence of clarity, in order to get its benefits. The desire to participate in a system that constrains freedom of action in support of group goals typically requires that the participants have at least seen, and possibly lived through, the difficulties of unfettered systems, while at the same time building up their sense of membership or shared goals in the group as a whole. Otherwise, adoption of a system whose goal is precisely to constrain its participants can seem too onerous to be worthwhile. (Again, contrast the US Constitution with the Articles of Confederation.)
Most current reputation systems have been fit to their situation only after that situation has moved from theoretical to actual; both eBay and Slashdot moved from a high degree of uncertainty to largely stable systems after a period of early experimentation. Perhaps surprisingly, this has not committed them to continual redesign. In those cases, systems designed after launch, but early in the process of user adoption, have survived to this day with only relatively minor subsequent adjustments.
Digg is the important counter-example, the most successful service to date to design a reputation system in advance. Digg differs from the community patent review process in that the designers of Digg had an enormous amount of prior art directly in its domain (Slashdot, Kuro5hin, Metafilter, et al.), and still ended up with serious re-design issues. More speculatively, Digg seems to have suffered more from both system gaming and public concern over its methods, possibly because the lack of organic growth of its methods prevented it from becoming legitimized over time in the eyes of its users. Instead, they were asked to take it or leave it (never a choice users have been known to relish.)
Though more reputation design work may become Digg-like over time, in that designers can launch with systems more complete than eBay or Slashdot did, the ability to survey significantly similar prior art, and the ability to adopt a fairly high-handed attitude towards users who dislike the service, are not luxuries the community patent review process currently enjoys.
The Argument in Two Pictures
The argument I’m advancing can be illustrated with two imaginary graphs. The first concerns plasticity, the ease with which any piece of software can be modified.
Plasticity generally decays with time. It is highest in the early parts of the design phase, when a project is in its most formative stages. It is easier to change a list of potential features than a set of partially implemented features, and it is easier to change partially implemented features than fully implemented features. Especially significant is the drop in plasticity at launch; even for web-based services, which exist only in a single instantiation and can be updated frequently and for all users at once, the addition of users creates both inertia, in the direction of not breaking their mental model of the service, and caution in upgrading, so as not to introduce bugs or create downtime in a working service. As the userbase grows, the expectations of the early adopters harden still further, while the expectations of new users follow the norms set up by those adopters; this is particularly true of any service with a social component.
An obvious concern with reputation systems is that, as with any feature, they are easier to implement when plasticity is high. Other things being equal, one would prefer to design the system as early as possible, and certainly before launch. In the current case, however, other things are not equal. In particular, the specificity of information the designers have about the service and how it behaves in the hands of real users moves counter to plasticity over time.
When you are working to understand the ideal design for a particular piece of software, the specificity of your knowledge increases with time. During the design phase, the increasing concreteness of the work provides concomitant gains in specificity, but nothing like launch. No software, however perfect, survives first contact with the users unscathed, and given the unparalleled opportunities with web-based services to observe user behavior — individually and in bulk, in the moment and over time — the period after launch increases specificity enormously, after which it continues to rise, albeit at a less torrid pace.
There is a tension between knowing and doing; in the absence of the ideal scenario where you know just what needs to be done while enjoying complete freedom to do it (and a pony), the essential tradeoff is in understanding which features benefit most from increased specificity of knowledge. Two characteristics that will tend to push the ideal implementation window to post-launch are when a set of possible features is very large, but the set of those features that will ultimately be required is small; and when culling the small number of required features from the set of all possible features can only be done by observing actual users. I believe that both conditions apply a fortiori to reputation and ranking.
Costs of Acting In Advance of Knowing
Consider the costs of designing a reputation system in advance. In addition to the well-known problems of feature-creep (“Let’s make it possible to rank reputation rankings!”) and Theory of Everything technologies (“Let’s make it Semantic Web-compliant!”), reputation systems create an astonishing perimeter defense problem. The number of possible threats you can imagine in advance is typically much larger than the number that manifest themselves in functioning communities. Even worse, however large the list of imagined threats, it will not be complete. Social systems are degenerate, which is to say that there are multiple alternate paths to similar goals — someone who wants to act out and is thwarted along one path can readily find others.
As you will not know which of these ills you will face, the perimeter you will end up defending will be very large and, critically, hard to maintain. The likeliest outcome from such an a priori design effort is inertness; a system designed in advance to prevent all negative behavior will typically have as a side effect deflecting almost all behavior, period, as users simply turn away from adoption.
Working social systems are both complex and homeostatic; as a result, any given strategy for mediating social relations can only be analyzed in the context of the other strategies in use, including strategies adopted by the users themselves. Since the user strategies cannot, by definition, be perfectly predicted in advance, and since the only ungameable social system is the one that doesn’t ship, every social system will have some weakness. A system designed in advance is likely to be overdefended while still having a serious weakness unknown to the designer, because the discovery and exploitation of that class of weakness can only occur in working, which is to say user-populated, systems. (As with many observations about the design of social systems, these are precedents first illustrated in Lessons from Lucasfilm’s Habitat, in the sections “Don’t Trust Anybody” and “Detailed Central Planning Is Impossible, Don’t Even Try”.)
The worst outcome of such a system would be collapse (the Communitree scenario), but even the best outcome would still require post hoc design to fix the system with regard to observed user behavior. You could save effort while improving the possibility of success by letting yourself not know what you don’t know, and then learning as you go.
In Favor of Instrumentation Plus Attention
The N-squared problem is only a problem when N is large; in most social systems the users are the most important N, and the userbase only grows large gradually, even for successful systems. (Indeed, this scaling up only over time typically provides the ability for a core group, once they have self-identified, to inculcate new users a bit at a time, using moral suasion as their principal tool.) As a result, in the early days of a system, the designers occupy a valuable point of transition, after user behavior is observable, but before scale and culture defeat significant intervention.
To take advantage of this designable moment, I believe that what Community Patent needs, at launch, is only this: metadata, instrumentation, and attention.
Metadata: There are, I believe, three primitive types of metadata required for Community Patent — people, patents, and interjections. Each of these will need some namespace to exist in — identity for the people, and named data for the patents themselves and for various forms of interjection, from simple annotation to complex conversation. In addition, two abstract types are needed — links and labels. A link is any unique pair of primitives — this user made that comment, this comment is attached to that conversation, this conversation is about those patents. All links should be readily observable and extractable from the system, even if they are not exposed in the interface the user sees. Finally, following Schachter’s intuition from del.icio.us, all links should be labelable. (Another way to view the same problem is to see labels as another type of interjection, attached to links.) I believe that this will be enough, at launch, to maximize the specificity of observation while minimizing the loss of plasticity.
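To make those primitives concrete, here is a minimal sketch in Python; every name in it is illustrative, not a proposal for the actual Community Patent schema:

```python
from dataclasses import dataclass, field

# Three primitive types -- people, patents, interjections -- each
# living in its own namespace, plus links (unique pairs of
# primitives) that carry labels. All identifiers are invented.

@dataclass(frozen=True)
class Person:
    person_id: str

@dataclass(frozen=True)
class Patent:
    patent_id: str

@dataclass(frozen=True)
class Interjection:
    interjection_id: str
    body: str

@dataclass
class Link:
    # Any unique pair of primitives -- this user made that comment,
    # that comment is attached to this patent -- with free-form labels.
    source: object
    target: object
    labels: set = field(default_factory=set)

alice = Person("alice")
comment = Interjection("c1", "Is there prior art for claim 3?")
patent = Patent("US-hypothetical-1")

links = [
    Link(alice, comment, {"authored"}),
    Link(comment, patent, {"annotates"}),
]

# The point of the design: links are observable and extractable,
# even if never exposed in the user-facing interface.
by_alice = [link for link in links if link.source == alice]
print(len(by_alice))  # 1
```

Nothing here ranks or scores anything; it only guarantees that, when a reputation system is designed later, every relation it might need will already have been recorded.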
Instrumentation: As we know from collaborative filtering algorithms from Ringo to PageRank, it is not necessary to ask users to rank things in order to derive their rankings. The second necessary element will be the automated delivery of as many possible reports to the system designers as can be productively imagined, and, at least as essential, a good system for quickly running ad hoc queries, and automating their production should they prove fruitful. This will help identify both the kinds of productive interactions on the site that need to be defended and the kinds of unproductive interactions they need to be defended from.
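As a toy illustration of deriving rankings without asking users to rank, here is a sketch that tallies implicit attention from an invented event log; the event types and weights are assumptions for illustration, not part of any real instrumentation plan:

```python
from collections import Counter

# An invented event log of (action, patent) pairs, the kind of
# trace an instrumented system would collect for free.
events = [
    ("view", "patent-17"), ("comment", "patent-17"),
    ("view", "patent-17"), ("view", "patent-3"),
    ("comment", "patent-3"), ("comment", "patent-3"),
    ("view", "patent-9"),
]

# Weight comments more heavily than views when tallying attention;
# the weights here are arbitrary placeholders.
weights = {"view": 1, "comment": 3}
attention = Counter()
for action, patent in events:
    attention[patent] += weights[action]

# "Where the action is" falls out of the log, unprompted.
print(attention.most_common(2))  # [('patent-3', 7), ('patent-17', 5)]
```

This is the ad hoc query pattern in miniature: if a report like this proves fruitful, it gets automated; if not, it costs almost nothing to discard.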
Designer Attention: This is the key — it will be far better to invest in smart people watching the social aspects of the system at launch than in smart algorithms guiding those aspects. If we imagine the moment when the system has grown to an average of 10 unique examiners per patent and 10 comments per examiner, then a system with even a thousand patents will be relatively observable without complex ranking or reputation systems, as both the users and the comments will almost certainly exhibit power-law distributions. In a system with as few as ten thousand users and a hundred thousand comments, it will still be fairly apparent where the action is, allowing you the time between Patent #1 and Patent #1000 to work out what sorts of reputation and ranking systems need to be put in place.
This is a simplification, of course, as each of the categories listed above presents its own challenges — how should people record their identity? What’s the right balance between closed and open lists of labels? And so on. I do not mean to minimize those challenges. I do, however, mean to say that the central design challenge of user governance — self-correcting systems that do not raise crushing participation burdens on the users or crushing policing barriers on the hosts — is so hard to design in advance that, provided you have the system primitives right, the Boyd strategy of OODA — Observe, Orient, Decide, Act — will be superior to any amount of advance design work.
LinkedIn is now enabling users to pose questions to their social network. Only members can respond. They’re also limiting how many questions you can ask per month. Interestingly, you’re only allowed to give one answer to any one question. As always, it’s those details that determine the shape of the society and its success. (Thanks for the pointer, Eric Scheid.)
I’ve been complaining about bad reporting of Second Life population for some time now. David Kirkpatrick at Fortune has finally gotten some signal out of Linden Labs. Kirkpatrick’s report is here, in the comments. (CNN.com comments don’t have permalinks, so scroll down.)
Here are the numbers Philip Rosedale of Linden gave him. These are, I presume, as of Jan 3:
1,525,670 unique people have logged into SL at least once (so now we know: Residents is seeing something a bit over 50% inflation over users.)
Of that number, 252,284 people have logged in more than 30 days after their account creation date.
Monthly growth in that figure, calculated as the change between last September and last October, was 23%.
Those of us who wanted the conversation to be grounded in real numbers owe Kirkpatrick our thanks for helping us get there.
These numbers should have two good effects. First, now that Linden has reported, and Kirkpatrick has published, the real figures, maybe we’ll see the press shift to reporting users and active users, instead of Residents.
Second, we’re no longer going to be asked to stomach absurd claims of size and growth. The ‘2.3 million user/77% growth in two months’ figures would have meant 70 million Second Life users this time next year. 250 thousand and 23% growth will mean 3 million in a year’s time, a healthy number, but not hyperbolic growth.
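The compounding behind those two projections is easy to check; the bases and rates below are taken from the figures above:

```python
# Compound-growth check of the two projections above.

def project(base: float, rate: float, periods: int) -> float:
    """Project a population forward by compounding one growth rate."""
    return base * (1 + rate) ** periods

# 2.3M "users" growing 77% per two months: six periods in a year
hype = project(2_300_000, 0.77, 6)    # roughly 70 million

# 250K active users growing 23% per month: twelve periods in a year
real = project(250_000, 0.23, 12)     # roughly 3 million

print(f"{hype / 1e6:.0f}M vs {real / 1e6:.1f}M")
```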
We can start asking more sophisticated questions now, like the use pattern of active users, or the change in monthly growth rates, or whether the Residents-users inflation rate is stable, but those questions are for later. Right now, we’ve got enough real numbers to think about for a while.
Disney is launching a social network for kids. My knee-jerk reaction: Yech.
Gavin O’Malley at Online Media Daily has a more considered reaction. He points to the apparent failure of Wal-Mart’s social network for kids (“The Hub”—an awfully grown-up name), and worries that having parental controls will kill the Disney effort as well. I agree with Gartner’s Andrew Frank that it’s likely to be all product placement all the time…and, if so, I hope kids reject it.
But, of course, I haven’t seen it and don’t know what it’ll be like. Maybe Disney is smarter than that.
Mark Cuban doesn’t understand television. He holds a belief, common to connoisseurs the world over, that quality trumps everything else. The current object of his faith in Qualität Über Alles is HDTV. Says Cuban:
HDTV is the Internet video killer. Deal with it. Internet bandwidth to the home places a cap on the quality and simplicity of video delivery to the home, and to HDTVs in particular. Not only does internet capacity create an issue, but the complexity of moving HDTV streams around the home and to the HDTV is pretty much a deal killer itself.
“HDTV is the Internet video killer.” The appeal of this argument — whoever provides the highest quality controls the market — is obvious. So obvious, in fact, that it’s been used before. By audiophiles.
As January 1, 2000 approaches, and the MP3 whirlpool continues to swirl, one simple fact has made me feel as if I’m stuck at the starting line of the entire download controversy: The sound quality of MP3 has yet to improve above that of the average radio broadcast. Until that changes, I’m merely curious—as opposed to being in the I-want-to-know-it-all-now frenzy that is my usual m.o. when it comes to anything that promises music you can’t get anywhere else. Robert Baird, October, 1999
MP3s won’t catch on, because they are lower quality than CDs. And this was true, wasn’t it? People cared about audio quality so much that despite other advantages of MP3s (price, shareability, better integration with PCs), they’ve stayed true to the CD all these years. The commercial firms that make CDs, and therefore continue to control the music market, thank these customers daily for their loyalty.
Cuban doesn’t understand that television has been cut in half. The idea that there should be a formal link between the tele- part and the vision part has ended. Now, and from now on, the form of a video can be handled separately from its method of delivery. And since they can be handled separately, they will be, because users prefer it that way.
But Cuban goes further. He doesn’t just believe that, other things being equal, quality will win; he believes quality is so important to consumers that they will accept enormous inconvenience to get that higher-quality playback. When Cuban’s list of advantages of HDTV includes an inability to watch your own video on it (“the complexity of moving HDTV streams around the home and to the HDTV”), you have to wonder what he thinks a disadvantage would look like.
This is the season of the HDTV gotcha. After Christmas, people are starting to understand that they didn’t buy a nicer TV, they bought only one part of a Total Controlled Content Delivery Package. Got an HDTV monitor and a new computer for Christmas? You might as well have gotten a Fabergé Egg and a framing hammer for all the useful ways you can combine the two presents.
Media is a triathlon event. People like to watch, but they also like to create, and to share. Doubling down on the watching part while making it harder for users to play their own stuff or share with their friends makes a medium worse in the users’ eyes. By contrast, the last 50 years have been terrible for user creativity and for sharing, so even moderate improvements in either of those abilities make the public go wild.
When it comes to media quality, people don’t optimize, they satisfice. Once the medium, whether audio or video or whatever, crosses a minimum threshold, users accept it and move on to caring about other attributes. The change in internet video quality from 1996 to 2006 was the big jump, and YouTube is the proof. After this, firms that offer higher social value for video will have an edge over firms that offer higher production values while reducing social value.
And because the audience for internet video will grow much faster than the audience for HDTV (and will be less pissed, because YouTube doesn’t rely on a ‘bait and switch’ walled garden play) the premium for making internet video better will grow with it. As Richard Gabriel said of programming languages years ago “[E]ven though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.” That’s where video is today. HDTV provides a better viewing experience than internet video, but many more people care about making internet video better than making HDTV better.
The US Food and Drug Administration has decided tentatively that meat and milk from cloned animals are the same as from normal animals, so it is not going to require those products to carry special labels.
It’s not that I think cloned food is dangerous. I’d still like the labels to note that the animals were cloned because more metadata is always good. If people don’t want to eat clones for whatever reason, they should be enabled to make that choice. In fact, we’d be better off with full access to the information about what we’re purchasing. Where was the cow raised? What was it fed? What was its weight? What was its body fat ratio? How old was it? Did it get to roam free? Did it have a sweet smile? What was its sign? We’re better off being able to access it all, no matter how farfetched.
But, because of the nature of non-digital reality, taking up label space with a notice that the meat is cloned would itself be metadata indicating that the government thinks such information is worth noting. Metadata in the physical world is a zero sum game.
And that means not only is it true that (as Clay says) “metadata is worldview” (or is that “metadata are worldview”?), but also that physical labels are politics. We are forced to make value-driven decisions by the constraints of the physical (labels take up valuable space), the biological (human eyes require fonts to be sized above a certain minimum) and the economic (it is not feasible to attach an almanac of information to every chicken wing). But online, all those limits go away…
…except for the economic. It would be expensive to do a cholesterol count for every slaughtered cow (assuming that cows have cholesterol) simply to gather information that so far nobody cares about, but there’s plenty of information that we’re gathering anyway or for which there is predictable interest—e.g., cloning—that we could make available online (via a unique identifier for each slab of flesh). There would still be politics in the decision about which information to put into the extended set, but it would be a more inclusive, bigger tent, allowing customers to decide according to their own cockamamie values.
And isn’t cockamamie consumerism what democracy is all about?
“Here at KingsRUs.com, we call our website our Kingdom, and any time our webservers serve up a copy of the home page, we record that as a Loyal Subject. We’re very pleased to announce that in the last two months, we have added over 1 million Loyal Subjects to our Kingdom.”
Put that baldly, you wouldn’t fall for this bit of re-direction, and yet that is exactly what Linden Labs has pulled off with its Residents™ label. By adopting a term that seems like a simple re-branding of “users”, but which is actually unconnected to head count or adoption, they’ve managed to report what the press wants to hear, while providing no actual information.
If you like your magic tricks to stay mysterious, leave now, but if you want to understand how Linden has managed to disable the fact-checking apparatus of much of the US business press, turning them into a zombie army of unpaid flacks, read on. (And, as with the earlier piece on Linden, this piece has also been published on Valleywag.)
The basic trick is to make it hard to remember that Linden’s definition of Resident has nothing to do with the plain meaning of the word resident. My dictionary says a resident is a person who lives somewhere permanently or on a long term basis. Linden’s definition of Residents, however, has nothing to do with users at all — it measures signups for an avatar. (Get it? The avatar, not the user, is the resident of Second Life.)
The obvious costume-party assumption is that there is one avatar per person, but that’s wrong. There can be more than one avatar per account, and more than one account per person, and there’s no public explanation of which of those units Residents measures, and thus no way to tell anything about how many actual people use Second Life. (An embarrassingly First Life concern, I know.)
Confused yet? Wait, there’s less! Linden’s numbers also suggest that the Residents figure includes even failed attempts to use the service. They reported adding their second million Residents between mid-October and December 14th, but they also reported just shy of 810 thousand logins for the same period. One million new Residents but only 810K logins leaves nearly 200K new Residents unaccounted for. Linden may be counting as Residents people who signed up and downloaded the client software, but who never logged in, or there may be some other reason for the mismatched figures, but whatever the case, Residents is remarkably inflated with regards to the published measure of use.
(If there are any actual reporters reading this and doing a big cover story on Linden, you might ask about how many real people use Second Life regularly, as opposed to Residents or signups or avatars. As I write those words, though, I realize I might as well be asking Business Week to send me a pony for my birthday.)
Like a push-up bra, Linden’s trick is as effective as it is because the press really, really wants to believe:
Professional journalists wrote those sentences. They work for newspapers and magazines that employ (or used to employ) fact-checkers. Yet here they are, supplementing Linden’s meager PR budget by telling their readers that Residents measures something it actually doesn’t.
This credulity appears even in the smallest items. I discovered the “Residents vs Logins” gap when I came across a Business 2.0 post by Erick Schonfeld, where he included the mismatched numbers while congratulating Linden on a job well done. When I asked the obvious question in the comments — How come there are fewer logins than new Residents in the same period? — I got a nice email from Mr. Schonfeld, complimenting me on a good catch.
Now I’m generally pretty enthusiastic about taking credit where it isn’t due, but this bit of praise failed to meet even my debased standards. The post was a hundred words long, and it had only two numbers in it. I didn’t have to use forensic accounting to find the discrepancy, I just used subtraction (an oft-overlooked tool in the journalistic toolkit, but surprisingly effective when dealing with numbers.)
This is the state of business reporting in an age when even the pros want to roll with the cool blogger kids. Got a paragraph that contains only two numbers, and they don’t match? No problem! Post it anyway, and on to the next thing.
The prize bit of PReporting so far, though, has to be Elizabeth Corcoran’s piece for Forbes called A Walk on the Virtual Side, where she claimed that Second Life had recently passed “a million unique customers.”
This is three lies in four words. There isn’t one million of anything human inhabiting Second Life. There is no one-to-one correlation between Residents and users. And whatever Residents does measure, it has nothing to do with paying customers. The number of paid accounts is in the tens of thousands, not the millions (and remember, if you’re playing along at home, there can be more than one account per person. Kits, cats, sacks, and wives, how many logged into St. Ives?)
Despite the credulity of the Fourth Estate (Classic Edition), there are enough questions being asked in the weblogs covering Second Life that the usefulness is going to drain out of the ‘Resident™ doesn’t mean resident’ trick over the next few months. We’re going to see three things happen as a result.
The first thing that’s going to happen, or rather not happen, is that the regular press isn’t going to go back over this story looking for real figures. As much as they’ve written about the virtual economy and the next net, the press hasn’t really covered Second Life as a business story or a tech story so much as a trend story. The sine qua non of trend stories is that a trend is fast-growing. The Residents figure was never really part of the story, it just provided permission to write about how crazy it is that all the kids these days are getting avatars. By the time any given writer was pitching that story to their editors, any skepticism about the basic proposition had already been smothered.
No journalist wants to have to write “When we told you that Second Life had 1.3 million members, we in no way meant to suggest that figure referred to individual people. Fortune regrets any misunderstanding.” And since no one wants to write that, no one will. They’ll shift their coverage without pointing out the shift to their readers.
The second thing that is going to happen is an increase in arguments of the form “We mustn’t let Linden’s numbers blind us to the inevitability of the coming metaverse.” That’s the way it is with things we’re asked to take on faith — when A works, it’s evidence of B, but if A isn’t working as well as everyone thought, it’s suddenly unrelated to B.
Finally, there is going to be a spike in the number of the posts claiming that the two million number was never important anyway, the press’s misreporting was all an innocent mistake, Linden was planning to call those reporters first thing Monday morning and explain everything. Tateru Nino has already kicked off this genre with a post entitled The Value of One. The flow of her argument is hard to synopsize, but you can get a sense of it from this paragraph:
So, a hundred thousand, one million, two million. Those numbers mean something to us, but not because they have intrinsic, direct meaning. They have meaning because they’re filtered through the media, disseminated out into the world, believed by people, who then act based on that belief, and that is where the meaning lies.
Expect more, much more, of this kind of thing in 2007.
Public Library of Science has gone beta with PLoS ONE, a peer-reviewed journal that publishes everything that passes the review, not just what it considers to be important. So, if it’s good science about a nit, it’ll find a home at PLoS ONE.
Articles are all published under a Creative Commons Attribution License. It does, however, cost a scientist (or her institution) $1,250 to be published by PLoS ONE. This is, alas, an improvement over what traditional journals charge scientists. PLoS ONE will waive the fee for authors who don’t have the funds.
Readers can discuss and annotate the articles. But the site could really use tags ‘n’ feeds. Maybe after beta…
Lately, i’ve become very irritated by the immersive virtual questions i’ve been getting. In particular, “will Web3.0 be all about immersive virtual worlds?” Clay’s post on Second Life reminded me of how irritated i am by this. I have to admit that i get really annoyed when techno-futurists fetishize Stephenson-esque visions of virtuality. Why is it that every 5 years or so we re-instate this fantasy as the utopian end-all be-all of technology? (Remember VRML? That was fun.)
Maybe i’m wrong; maybe twenty years from now i’ll look back and be embarrassed by my lack of foresight. But honestly, i don’t think we’re going virtual.
There is no doubt that immersive games are on the rise and i don’t think that trend is going to stop. I think that WoW is a strong indicator of one kind of play that will become part of the cultural landscape. But there’s a huge difference between enjoying WoW and wanting to live virtually. There ARE people who want to go virtual and i wouldn’t be surprised if there are many opportunities for sustainable virtual environments. People who feel socially ostracized in meatspace are good candidates for wanting to go virtual. But again, that’s not everyone.
If you look at the rise of social tech amongst young people, it’s not about divorcing the physical to live digitally. MySpace has more to do with offline structures of sociality than it has to do with virtuality. People are modeling their offline social network; the digital is complementing (and complicating) the physical. In an environment where anyone could socialize with anyone, they don’t. They socialize with the people who validate them in meatspace. The mobile is another example of this. People don’t call up anyone in the world (as is fantasized by some wrt Skype); they call up the people that they are closest with. The mobile supports pre-existing social networks, not purely virtual ones.
That’s the big joke about the social media explosion. 1980s and 1990s researchers argued that the Internet would make race, class, gender, etc. extinct. There was a huge assumption that geography and language would no longer matter, that social organization would be based on some higher function. Guess what? When the masses adopted social media, they replicated the same social structures present in the offline world. Hell, take a look at how people from India are organizing themselves by caste on Orkut. Nothing gets erased because it’s all connected to the offline bodies that are heavily regulated on a daily basis.
While social network sites and mobile phones are technology to adults, they are just part of the social infrastructure for teens. Remember what Alan Kay said? “Technology is anything that wasn’t around when you were born.” These technologies haven’t been adopted as an alternative to meatspace; they’ve been adopted to complement it.
Virtual systems will be part of our lives, but i don’t think immersion is where it’s at. Most people are deeply invested in the physicality of life; this is not going away.
Update: to discuss this post, please join the conversation at apophenia.
Second Life is heading towards two million users. Except it isn’t, really. We all know how this game works, and has since the earliest days of the web:
Member of the Business Press: “How many users do you have?”
CEO of Startup: (covers phone) “Hey guys, how many rows in the ‘users’ table?”
[Sound F/X: Typing]
Offstage Sysadmin: “One million nine hundred and one thousand one hundred and seventy-three.”
CEO: (Into phone) “We have one point nine million users.”
Someone who tries a social service once and bails isn’t really a user any more than someone who gets a sample spoon of ice cream and walks out is a customer.
So here’s my question — how many return users are there? We know from the startup screen that the advertised churn of Second Life is over 60% (as I write this, it’s 690,800 recent users to 1,901,173 signups, or 63%.) That’s not stellar but it’s not terrible either. However, their definition of “recently logged in” includes everyone in the last 60 days, even though the industry standard for reporting unique users is 30 days, so we don’t actually know what the apples to apples churn rate is.
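The churn figure quoted above is just one division and one subtraction, using the startup-screen numbers:

```python
# Churn computed from the figures quoted above.

signups = 1_901_173   # total Residents signups
recent = 690_800      # "recently logged in" (a 60-day window)

churn = 1 - recent / signups
print(f"{churn:.1%}")   # a bit under 64% — the "over 60%" figure
```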
At a guess, Second Life churn measured in the ordinary way is in excess of 85%, with a surge of new users being driven in by the amount of press the service is getting. The wider the Recently Logged In reporting window is, the bigger the bulge of recently-arrived-but-never-to-return users that gets counted in the overall numbers.
I suspect Second Life is largely a “Try Me” virus, where reports of a strange and wonderful new thing draw the masses to log in and try it, but whose ability to retain anything but a fraction of those users is limited. The pattern of a Try Me virus is a rapid spread of first time users, most of whom drop out quickly, with most of the dropouts becoming immune to later use. Pointcast was a Try Me virus, as was LambdaMOO, the experiment that Second Life most closely resembles.
I have been watching the press reaction to Second Life with increasing confusion. Breathless reports of an Imminent Shift in the Way We Live® do not seem to be accompanied by much skepticism. I may have been made immune to the current mania by ODing on an earlier belief in virtual worlds:
Similar to the way previous media dissolved social boundaries related to time and space, the latest computer-mediated communications media seem to dissolve boundaries of identity as well. […] I know a respectable computer scientist who spends hours as an imaginary ensign aboard a virtual starship full of other real people around the world who pretend they are characters in a Star Trek adventure. I have three or four personae myself, in different virtual communities around the Net. I know a person who spends hours of his day as a fantasy character who resembles “a cross between Thorin Oakenshield and the Little Prince,” and is an architect and educator and bit of a magician aboard an imaginary space colony: By day, David is an energy economist in Boulder, Colorado, father of three; at night, he’s Spark of Cyberion City—a place where I’m known only as Pollenator.
This wasn’t written about Second Life or any other 3D space, it was Howard Rheingold writing about MUDs in 1993. This was a sentiment I believed and publicly echoed at the time. Per Howard, “MUDs are living laboratories for studying the first-level impacts of virtual communities.” Except, of course, they weren’t. If, in 1993, you’d studied mailing lists, or usenet, or irc, you’d have a better grasp of online community today than if you’d spent a lot of time in LambdaMOO or Cyberion City. Où sont les TinyMUCKs d’antan?
You can find similar articles touting 3D spaces shortly after the MUD frenzy. Ready for a blast from the past? “August 1996 may well go down in the annals of the Internet as the turning point when the Web was released from the 2D flatland of HTML pages.” Oops.
So what accounts for the current press interest in Second Life? I have a few ideas, though none is concrete enough to call an answer yet.
First, the tech beat is an intake valve for the young. Most reporters don’t remember that anyone has ever wrongly predicted a bright future for immersive worlds or flythrough 3D spaces in the past, so they have no skepticism triggered by the historical failure of things like LambdaMOO or VRML. Instead, they hear of a marvelous thing — A virtual world! Where you have an avatar that travels around! And talks to other avatars! — which they then see with their very own eyes. How cool is that? You’d have to be a pretty crotchety old skeptic not to want to believe. I bet few of those reporters ever go back, but I’m sure they’re sure that other people do (something we know to be false, to a first approximation, from the aforementioned churn.) Second Life is a story that’s too good to check.
Second, virtual reality is conceptually simple. Unlike ordinary network communications tools, which require a degree of subtlety in thinking about them — as danah notes, there is no perfect metaphor for a weblog, or indeed most social software — Second Life’s metaphor is simplicity itself: you are a person, in a space. It’s like real life. (Only, you know, more second.) As Philip Rosedale explained it to Business Week “[I]nstead of using your mouse to move an arrow or cursor, you could walk your avatar up to an Amazon.com (AMZN) shop, browse the shelves, buy books, and chat with any of the thousands of other people visiting the site at any given time about your favorite author over a virtual cuppa joe.”
Never mind that the cursor is a terrific way to navigate information; never mind that Amazon works precisely because it dispenses with rather than embraces the cyberspace metaphor; never mind that all the “Now you can shop in 3D” efforts like the San Francisco Yellow Pages tanked because 3D is a crappy way to search. The invitation here is to reason about Second Life by analogy, which is simpler than reasoning about it from experience. (Indeed, most of the reporters writing about Second Life seem to have approached it as tourists getting stories about it from natives.)
Third, the press has a congenital weakness for the Content Is King story. Second Life has made it acceptable to root for the DRM provider, because of their enlightened user agreements concerning ownership. This obscures the fact that an enlightened attempt to make digital objects behave like real world objects suffers from exactly the same problems as an unenlightened attempt, a la the RIAA and MPAA. All the good intentions in the world won’t confer atomicity on binary data. Second Life is pushing against the ability to create zero-cost perfect copies, whereas Copybot relied on that most salient of digital capabilities, which is how Copybot was able to cause so much agita with so little effort — it was working with the actual, as opposed to metaphorical, substrate of Second Life.
Finally, the current mania is largely push-driven. Many of the articles concern “The first person/group/organization in Second Life to do X”, where X is something like have a meeting or open a store — it’s the kind of stuff you could read off a press release. Unlike Warcraft, where the story is user adoption, here most of the stories are about provider adoption, as with the Reuters office or the IBM meeting or the resident creative agencies. These are things that can be created unilaterally and top-down, catnip to the press, who are generally in the business of covering the world’s deciders.
The question about American Apparel, say, is not “Did they spend money to set up stores in Second Life?” Of course they did. The question is “Did it pay off?” We don’t know. Even the recent Second Life millionaire story involved eliding the difference between actual and potential wealth, a mistake you’d have thought 2001 would have chased from the press forever. In illiquid markets, extrapolating that a hundred of X are worth the last sale price of X times 100 is a fairly serious error.
Artifacts vs. Avatars
Like video phones, which have been just one technological revolution away from mass adoption since 1964, virtual reality is so appealingly simple that its persistent failure to be a good idea, as measured by user adoption, has done little to dampen enthusiasm for the coming day of Keanu Reeves interfaces and Snow Crash interactions.
I was talking to Irving Wladawsky-Berger of IBM about Second Life a few weeks ago, and his interest in the systems/construction aspect of 3D seems promising, in the same way video phones have been used by engineers who train the camera not on their faces but on the artifacts they are talking about. There is something to environments for modeling or constructing visible things in communal fashion, but as with the video phone, they will probably involve shared perceptions of artifacts, rather than perceptions of avatars.
This use, however, is specific to classes of problems that benefit from shared visual awareness, and that class is much smaller than the current excitement about visualization would suggest. More to the point, it is at odds with the “Son of MUD+thePalace” story currently being written about Second Life. If we think of a user as someone who has returned to a site after trying it once, I doubt that the number of simultaneous Second Life users breaks 10,000 regularly. If we raise the bar to people who come back for a second month, I wonder if the site breaks 10,000 simultaneous return visitors outside highly promoted events.
Second Life may be wrought by its more active users into something good, but right now the deck is stacked against it, because the perceptions of great user growth and great value from scarcity are mutually reinforcing but built on sand. Were the press to shift to reporting Recently Logged In as their best approximation of the population, the number of reported users would shrink by an order of magnitude; were they to adopt industry-standard unique users reporting (assuming they could get those numbers), the reported population would probably drop by two orders. If the growth isn’t as currently advertised (and it isn’t), then the value from scarcity is overstated, and if the value of scarcity is overstated, at least one of the engines of growth will cool down.
There’s nothing wrong with a service that appeals to tens of thousands of people, but in a billion-person internet, that population is also a rounding error. If most of the people who try Second Life bail (and they do), we should adopt a considerably more skeptical attitude about proclamations that the oft-delayed Virtual Worlds revolution has now arrived.
“Are you my friend? Yes or no?” This question, while fundamentally odd, is a key component of social network sites. Participants must select who on the system they deem to be ‘Friends.’ Their choice is publicly displayed for all to see and becomes the backbone for networked participation. By examining what different participant groups do on social network sites, this paper investigates what Friendship means and how Friendship affects the culture of the sites. I will argue that Friendship helps people write community into being in social network sites. Through these imagined egocentric communities, participants are able to express who they are and locate themselves culturally. In turn, this provides individuals with a contextual frame through which they can properly socialize with other participants. Friending is deeply affected by both social processes and technological affordances. I will argue that the established Friending norms evolved out of a need to resolve the social tensions that emerged due to technological limitations. At the same time, I will argue that Friending supports pre-existing social norms, yet because the architecture of social network sites is fundamentally different from the architecture of unmediated social spaces, these sites introduce an environment that is quite unlike that to which we are accustomed.
I very much enjoyed writing this paper and i hope you enjoy reading it!
I want to offer a less telegraphic account of the relationship between expertise, credentials, and authority than I did in Larry Sanger, Citizendium, and the Problem of Expertise, and then say why I think the cost of coordination in the age of social software favors Wikipedia over Citizendium, and over traditionally authoritative efforts such as Britannica.
Make a pot of coffee; this is going to be long, and boring.
Those of us who write about Wikipedia, both pro and con, often mix two different views: descriptive — Wikipedia is/is not succeeding — and judgmental — Wikipedia is/is not good. (For the record, my view is that Wikipedia is a success, and that society is better off with Wikipedia than it would be without it.) What I love about the Citizendium proposal is that, by proposing a fusion of collaborative construction and expert authority, it presses people who dislike or mistrust Wikipedia to say whether they think that the wiki form of communal production can be improved, or is per se bad.
Nicholas Carr, in What will kill Citizendium, came out in the latter camp. Explaining why he thinks Citizendium is a bad idea, he offers his prescription for the right way to do things: “[…] you keep the crowd out of it and, in essence, create a traditional encyclopedia.” No need for that ‘in essence’ there. The presence of the crowd is what distinguishes wiki production; this is a defense of the current construction of authority, suggesting that the traditional mechanism for creating encyclopedias is the correct one, and alternate forms of construction are not.
This is certainly a coherent point of view, but one that I believe will fail in practical terms, because it is uneconomical. (Carr, in his darker moments, seems to believe something similar, but laments what the economics of peer production mean. This is a “Wikipedia is succeeding/is not good” argument.) In particular, I believe that the costs of nominating and then deferring to experts will make Citizendium underperform its competition, relative to the costs of merely involving experts as ordinary participants, as Wikipedia does.
Expertise, Credentials, and Authority
First, let me say that I am a realist, which is to say that I believe in a reality that is not socially constructed. The materials that make up my apartment, wood and stone and so on, actually exist, and are independent of any observer. A real tree that falls in a real forest displaces real air, even if no one is there to interpret that as sound.
I also believe in social facts, things that are true because everyone agrees they are true. My apartment itself is made of real stuff, but its my-ness is built on agreements: my landlady leases it to me, that lease is predicated on her ownership, that ownership is recognized by the city of New York, and so on. Social facts are no less real than non-social facts — my apartment is actually my apartment, my wife is my wife, my job is my job — they are just real for different reasons.
If everyone stopped agreeing that my job was my job (I quit or was fired, say), I could still walk down to NYU and draw network diagrams on a whiteboard at 1pm on a Tuesday, but no one would come to listen, because my ramblings wouldn’t be part of a class anymore. I wouldn’t be faculty; I’d be an interloper. Same physical facts — same elevator and room and white board and even the same person — but different social facts.
Some facts are social, some are not. I believe that Sanger, Carr and I all agree that expertise is not a social fact. As Carr says ‘An architect does not achieve expertise through some arbitrary social process of “credentialing.” He gains expertise through a program of study and apprenticeship in which he masters an array of facts and techniques drawn from such domains as mathematics, physics, and engineering.’ I agree with that, and amended my earlier sloppiness in distinguishing between having expertise and being an expert, after being properly called on it by Eric Finchley in the comments.
However, though Carr’s description is accurate, it is incomplete: an architect does not achieve expertise through credentialing, but an architect does not become an architect through expertise either. An architect is someone with expertise who has also been granted an architect’s credentials. These credentials are ideally granted on proof of the kinds of antecedents that indicate expertise — in the case of architects, relevant study (itself certified with the social fact of a degree) and significant professional work.
Consider the following case: a young designer with an architecture degree designs a building, and a credentialed architect working at the same firm then affixes her stamp to the drawings. The presence of the stamp means that a contractor can use the drawings to do certain kinds of work; without it the drawings shouldn’t be used for such things. Both the expertise and the credentials are necessary to make a set of drawings usable, but in this fairly common scenario, the expertise and the credentials are held by different people.
This system is designed to produce enough liability for architects that they will supervise the uncredentialed; if they fail to, their own credentials will be taken away. Now consider a disbarred architect (or lawyer or doctor). There has been no change in their expertise, but a great change in their credentials. Most of the time, we can take the link between authority, credentials, and expertise for granted (it’s why we have credentials, in fact), but in edge cases, we can see them as separate things.
The clarity to be gotten from all this definition is a bit of a damp squib: Carr and I are in large agreement about the Citizendium proposal. He thinks that conferring authority is the hard challenge for Citizendium; I think that conferring authority is the hard challenge for Citizendium. He thinks that the openness of a wiki is incompatible with Citizendium’s proposed form of conferring authority, as do I. And we both believe this weakness will be fatal.
Where we disagree is in what this means for society.
The Cost of Credentials
Lying on a bed in an emergency room, you think “Oh good, here comes the doctor.” Your relief comes in part because the doctor has the expertise necessary to diagnose and treat you, and in part because the doctor has the authority to do things like schedule you for surgery if you need it. Whatever your anxieties at that moment, they don’t include the possibility that the nurses will ignore the doctor’s diagnosis, or refuse to treat you in the manner the doctor suggests.
You don’t worry that expertise and authority are different kinds of things, in other words, because they line up perfectly from your point of view. You simply ascribe to the visible doctor many things that are actually true of the invisible system the doctor works in. The expertise resides in the doctor, but the authority is granted by the hospital, with credentials helping bridge the gap.
So here’s the thing: it’s incredibly expensive to create and maintain such systems, including especially the cost of creating and policing credentials and authority. We have to make and enforce myriad refined distinctions — not just physician and soldier and chairman but ‘admitting physician’ and ‘second lieutenant’ and ‘acting chairman.’ We don’t let people get married or divorced without the presence of official oversight. Lots of people can drive the bus; only bus drivers may drive the bus. We make it illegal to impersonate an officer. And so on, through innumerable tiny, self-reinforcing choices, all required to keep the links between expertise, credentials and authority functional.
These systems are beneficial for society. However, they are not absolutely beneficial, they are only beneficial when their benefits outweigh their costs. And we live in an era where all kinds of costs — social costs, coordination costs, Coasean costs — are undergoing a revolution.
Cost Changes Everything
Earlier, writing about folksonomies, I said “We need a phrase for the class of comparisons that assumes that the status quo is cost-free.” We still need that; I propose “Cost-free Present” — when people believe we live in a cost-free present, they also believe that any value they see in the world is absolute, not relative. A related assumption is that any new system with disadvantages relative to the present one is therefore inferior; if the current system creates no costs, then any proposed change that creates new bad outcomes, whatever the potential new good outcomes, is worse than maintaining the status quo.
Meanwhile, out here in the real world, cost matters. As a result, when the cost structure for creating, say, an encyclopedia changes, our existing assumptions about encyclopedic value have to be re-examined, because current encyclopedic values are relative, not absolute. It is possible for low-cost, low-value systems to be better than high-cost, high-value systems in the view of the society adopting them. If the low-cost system can increase in value over time while remaining low cost, even better.
Pick your Innovator’s Dilemma: the Gutenberg bible was considerably less beautiful than scribal copies, the Model T was less well constructed than the Curved Dash Olds, floppy disks were considerably less reliable than hard drives, et cetera. So with Wikipedia and Encyclopedia Britannica: Wikipedia began life as a low-cost, low-value alternative, but it was accessible, shareable, and improvable. Britannica, by contrast, has always been high-value, but it is both difficult and expensive for readers to get to, and worse, they can’t use what they see — a Britannica reader can’t copy and post an article, can’t email the contents to their friends, can’t even email those friends the link with any confidence that they will be able to see it.
Barriers to both access and re-use are built into the Britannica cost structure, and without those barriers, it will collapse. Nothing about the institution of Britannica has changed in the five years of Wikipedia’s existence, but in the current ecosystem, the 1768 model of creation — you pay us and we make an Encyclopedia — has been transformed from a valuable service to a set of self-perpetuating, use-crippling barriers.
This is what’s wrong with Cost-free Present arguments: the principal competitive advantages of Wikipedia over Britannica, such as shareability or rapid refactoring (as with the Planet entry after Pluto’s recent demotion), are things which were simply not possible in 1768. Wikipedia is not a better Britannica than Britannica; it is a better fit for the current environment than Britannica is. The measure of possible virtues of an encyclopedia now includes free universal access and unlimited re-use. As a result, maintaining Britannica costs more in a world with Wikipedia than it did in a world without it, in the same way scribal production became more expensive after the invention of movable type than before, without the scribes themselves doing anything different.
If we do what we always did, we’ll get the result we always got
Citizendium seems predicated on several related ideas about cost and value: having expertise and being an expert are roughly the same thing; the costs of certifying experts will be relatively low; building and running software that confers a higher degree of authority on them than on non-expert users will be similarly low; and the appeal to non-experts of participating in such a system will be high. If these things are true, then a hybrid of voluntary participation and expert authority will be more valuable than either extreme.
I am betting that those things aren’t true, because the costs of certifying experts and ensuring deference to them — the costs of creating and sustaining the necessary social facts — will sandbag the system, making it too annoying to use.
The first order costs will come from the certification and deference itself. By proposing to recognize external credentialing mechanisms, Citizendium sets itself up to take on the expenses of determining thresholds and overlaps of expertise. A master’s student in psychology doing work on human motivation may know more about behavioral economics than a Ph.D. in neo-classical economics. It would be easy to label them both experts, but on what grounds should their disputes be adjudicated?
On Wikipedia, the answer is simple — deference is to contributions, not to contributors, and is always provisional. (As the Pluto example shows, even things as seemingly uncontentious as planethood turned out to be provisional.) Wikipedia certainly has management costs (all social systems do), but it has the advantage that those costs are internal, and much of the required oversight is enforced by moral suasion. It doesn’t take on the costs of forcing deference to experts because it doesn’t recognize the category of ‘expert’ as primitive in the system. Experts contribute to Wikipedia, but without requiring any special consideration.
Citizendium’s second order costs will come from policing the system as a whole. If the process of certification and enforcement of deference become even slightly annoying to the users, they will quickly become non-users. The same thing will happen if the projection of force needed to manage Citizendium delegitimizes the system in the eyes of the contributors.
The biggest risk with Wikipedia is ongoing: lousy or malicious edits, an occurrence that happens countless times a day. The biggest risk with Citizendium, on the other hand, is mainly up front, in the form of user inaction. The Citizendium project assumes that the desire of ordinary users to work alongside and be guided by experts is high, but everything in the proposal seems to raise the costs of contribution, relative to Wikipedia. If users do not want to participate in a system where the costs of participating are high, Citizendium will simply fail to grow.
I would like to offer my working definition of “social network sites,” given the confusion over my request for a timeline.
A “social network site” is a category of websites with profiles, semi-persistent public commentary on the profile, and a traversable publicly articulated social network displayed in relation to the profile.
Profile. A profile includes an identifiable handle (either the person’s name or nick), information about that person (e.g. age, sex, location, interests, etc.). Most profiles also include a photograph and information about last login. Profiles have unique URLs that can be visited directly.
Traversable, publicly articulated social network. Participants have the ability to list other profiles as “friends” or “contacts” or some equivalent. This generates a social network graph which may be directed (“attention network” type of social network where friendship does not have to be confirmed) or undirected (where the other person must accept friendship). This articulated social network is displayed on an individual’s profile for all other users to view. Each node contains a link to the profile of the other person so that individuals can traverse the network through friends of friends of friends….
Semi-persistent public comments. Participants can leave comments (or testimonials, guestbook messages, etc.) on others’ profiles for everyone to see. These comments are semi-persistent in that they are not ephemeral but they may disappear over some period of time or upon removal. These comments are typically reverse-chronological in display. Because of these comments, profiles are a combination of an individual’s self-expression and what others say about that individual.
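The three defining features above can be read as a small data structure. Here is a minimal sketch in Python (all names are invented for illustration and do not correspond to any real site's API): a profile with a handle, personal info, a publicly displayed friend list, and comments left by others. The friend list forms a directed "attention network" graph, which a visitor can traverse friends-of-friends style.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    handle: str                                   # identifiable name or nick
    info: dict = field(default_factory=dict)      # e.g. age, location, interests
    friends: list = field(default_factory=list)   # publicly displayed handles
    comments: list = field(default_factory=list)  # (author, text) pairs, shown reverse-chronologically

# A directed "attention network": listing someone requires no confirmation.
profiles = {h: Profile(h) for h in ("alice", "bob", "carol")}
profiles["alice"].friends.append("bob")    # alice lists bob...
profiles["bob"].friends.append("carol")    # ...but bob need not list alice back

def traverse(profiles, start, hops):
    """Walk friends-of-friends outward, as a visitor clicking through
    the linked profiles on each friend list."""
    seen = {start}
    frontier = [start]
    for _ in range(hops):
        nxt = []
        for h in frontier:
            for f in profiles[h].friends:
                if f not in seen:
                    seen.add(f)
                    nxt.append(f)
        frontier = nxt
    return seen
```

In this sketch, two hops from alice reach carol via bob even though no friendship was ever mutually confirmed; an undirected network would instead require both sides to list each other before the edge is displayed.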
This definition includes all of the obvious sites that i talk about as social network sites: MySpace, Facebook, Friendster, Cyworld, Mixi, Orkut, etc. Some of the obvious players like LinkedIn are barely social network sites because of their efforts to privatize the articulated social network but, given that it’s possible, I count them (just like i count MySpace even when the users turn their profiles private).
There are sites that primarily fit into other categories but contain all of the features of social network sites. This is particularly common with sites that were once a different type of community site but have added new features. BlackPlanet, AsianAvenue, MiGente, QQ, and Xanga all fit into this bucket. I typically include LiveJournal as a social network site but it is sorta an edge case because they do not allow you to comment on people’s profiles. They do, however, allow you to publicly comment on blog entries. For this reason, Dodgeball is also a problem - there are no comments whatsoever. In many ways, i do not consider Dodgeball a social network site, but i do consider it a mobile social network tool, which is why i often lump it into this cluster of things.
Of course, things are getting trickier every day. I’m half-inclined to qualify the definition to say that the profile and articulated social network are the centralizing features of these sites, because there are tons of sites that have profiles and social network site features as peripheral components of their service but where the primary focus is elsewhere. Examples of this include: YouTube, Flickr, Last.FM, 43Things, Meetup, Vox, Crushspot, etc. (Dating sites are probably the most tricky because they are very profile-centric but the social network is peripheral.) But, on the other hand, most of these sites grew out of this phenomenon. So, for the sake of argument, i leave room to include them but also consider them edge cases.
At the same time, it’s critical to point out what social network sites are most definitely NOT. They are NOT the same as all sites that support social networks or all sites that allow people to engage in social networking. Your mobile phone, your email, your instant message client… these all support the articulation of social networks (addressbooks) but they do not let you publicly display them in relation to a profile for others to traverse. MUDs/MOOs, BBSes, chatrooms, bulletin boards, mailing lists, MMORPGS… these all allow you to meet new people and make friends but they are not social network sites.
This is part of why i get really antsy when people talk about this category as “social networks” or “social networking” or “social networking sites.” I think that this is leading to all sorts of confusion about what is and what is not in the category. These alternative categories are far far far too broad and all too often i hear people talking about everything that allows you to talk to anyone in any way as one of these sites (this is the mistake that DOPA makes for example).
While it’s great to talk about all of these things as part of a broader “social software” or “social media” phenomenon, there are also good reasons to have a label to address a subset of these sites that are permitting very particular practices. This allows academics, politicians, technologists, educators, and others to discuss how structural shifts are prompting different kinds of behaviors. (What happens when people publicly articulate their relationships? How do these systems change the rules of virality because the network is visible? Etc.) Because of this, i don’t want the slippage to be too great because people are using terrible terms or because people want their site to fit into the category of what’s currently cool.
Of course, like most categories, there are huge issues around the edges and there’s never a clean way to construct boundaries. (To understand the challenges, read Women, Fire, and Dangerous Things.) Just think of the category “game” and try to come up with a comfortable definition and boundary for that. Still, there are things that are most definitely not games. An apple is not a game. Sure, it can be used in a game but it is not inherently a game. Not all sites that allow people to engage in social activity are social network sites and it is ridiculous to try to shove them all there simply because there’s a lot of marketing money to be made (yet i realize that this is often the reason why people do try). For this reason, i really want to stake out “social network sites” as a category that has meaningful properties even if the edges are a little fuzzy. There is still meaningful family resemblance, and some prototypes are more central than others. I really want to focus on making sense of what’s happening with this category by focusing primarily on the prototypes and less on the edge cases.
Anyhow, this is a work in progress but i wanted to write some of this down since i seem to be getting into lots of fights via email about this.
When i started tracking social network sites, i didn’t think that i would be studying them. I did a terrible job at keeping a timeline and now, i realize, this is important information to have on hand. I’m currently in the process of trying to go backwards and capture critical dates and i need your help. I know a lot of you have a lot of this information and can probably help me (and thus help everyone else interested in this arena).
I have created a simple pbwiki at http://yasns.pbwiki.com/ (password yasns) where i’m starting to make a timeline. Can you please add what you know to it? Pretty please with a cherry on top? A lot of this information is scattered all over the web and in people’s heads and it’d be great to get it documented in a centralized source. (I know that there is some info on Wikipedia but it’s not complete; as appropriate, i will transfer information back in their format.) Note: i didn’t include citations because i often don’t have them but if you have them, they’d be very very welcome.
Please let others know about this if you think they might have information to add. Thank you kindly for your time.
(PS: i have a new academic paper coming out shortly. Stay tuned.)
Read the ComScore press release. Completely. Read the details. They have found that the unique VISITORS have gotten older. This is not the same thing as USERS. A year ago, most adults hadn’t heard about MySpace. The moral panic has made it such that many US adults have now heard of it. This means that they visit the site. Do they all have accounts? Probably not. Furthermore, MySpace has attracted numerous bands in the last year. If you Google most bands, their MySpace page is either first or second; you can visit these without an account. People of all ages look for bands through search.
Why is Xanga far greater in terms of young people? Most adults haven’t heard of it. It’s not something that comes up high in search for other things. Facebook’s bimodal population pre-public launch shows that more professors/teachers are present than i thought (or maybe companies are more popular than i thought? or maybe comScore’s data is somehow counting teens/college students as 35-54…).
Can someone tell me exactly how comScore measures this? Is it based on the known age of the person using a given computer? Remember that many teens are logging in through their parent’s computer in the living room. Is it based on reported age? I kinda doubt it but the fact that there are more 100+ year olds on MySpace than are living should make people think about reported data. Is it based on phone interviews? How do they collect it? This isn’t really parseable into English.
My problem is that all of these teen sites show a heavy usage amongst 35-54. I cannot for the life of me explain how Xanga is 36% 35-54. There’s just no way. I don’t get how the data is formulated but it seems like an odd pattern across these sites to see a drop in 25-34 and a rise in 35-54. Older folks aren’t suddenly blogging on Xanga. So what gives? My hunch is that comScore’s metrics are consistently counting teens as 35-54 across all sites. My hypothesis is that because comScore is measuring per computer and teens are using their parent’s computer, comScore can’t tell the difference between a teen user and a parent user. If so, maybe all this is telling us is that parents have definitely listened to the warnings over the last year and are now making their teens access these sites through their computer?
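The hypothesis above, that per-computer measurement misfiles teen activity under the registered adult's age, can be made concrete with a toy simulation. All numbers here are invented purely for illustration; this is not comScore's actual methodology, just a sketch of the confound being proposed: if a panel meter attributes every visit from a shared household computer to the adult who registered it, teen visits show up as 35-54 traffic.

```python
import random

random.seed(1)

def simulate(households=10_000, teen_share=0.7):
    """One metered computer per household, registered to a parent aged
    35-54. With probability teen_share the actual visitor to the social
    site is the teen, but the meter reports the registered adult's age
    regardless of who is at the keyboard."""
    actual = {"13-17": 0, "35-54": 0}
    reported = {"13-17": 0, "35-54": 0}
    for _ in range(households):
        teen_visiting = random.random() < teen_share
        actual["13-17" if teen_visiting else "35-54"] += 1
        reported["35-54"] += 1   # the meter only knows the registered adult
    return actual, reported

actual, reported = simulate()
# actual skews heavily teen, yet every reported visit lands in 35-54
```

Under these made-up assumptions, a site whose real visitors are 70% teens would be reported as 100% 35-54, which is the shape of the distortion being hypothesized for the Xanga and MySpace numbers.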
Finally, when we talk about data, we also need to separate Visitors from Active Users from Accounts. The number of accounts is not the same as the number of users. The number of visitors is not the same as the number of users.
All this said, there is no doubt that more older people are creating accounts. Parents are told that they should check in on their kids. Police officers, teachers, marketers… they are all logging in to look at the youth. Is that the same as meaningful users? Some yes, some no.
From my qualitative experience, the vast majority of actual users are 14-30 with a skew to the lower end. Furthermore, the majority of the accounts are presenting themselves as 14-30. To confirm the latter (which is easier), i did a random sample of 100 profiles with UIDs over 50M (to address the “last year” phenomenon). What i found was:
26 are under 18
45 are 18-30 (with a skew to the lower)
10 are over 30 but under 70
1 is over 70 (but looks less than 18)
6 are bands
11 are invalid or deleted
1 is a complete fake character (explained in the description)
A few more things of note…
18 have private profiles
Of those over 30, only 2 have more than 2 friends (one has 3 friends; one has 5)
This account data hints that the general assumption that approximately 25% of users are minors is correct. Of the remaining, the bulk is under 30. Qualitatively, i’m seeing the most active use from those under 21. Given account practices, i don’t think that i’m off in what i’m seeing.
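The sampling procedure described above can be sketched in code: draw 100 random user IDs above 50M, fetch each profile, and tally it by category. The sketch below is only an outline of that method (the real tally was done by hand against live profiles, which this code cannot reproduce; the profile dicts are a stand-in format invented for illustration).

```python
import random

random.seed(0)

def sample_uids(n=100, lo=50_000_000, hi=100_000_000):
    """Uniform sample of n distinct UIDs in [lo, hi). Restricting to
    UIDs above 50M limits the sample to recently created accounts,
    which is what addresses the 'last year' phenomenon."""
    return random.sample(range(lo, hi), n)

def tally(profiles):
    """Each profile is a dict with a 'status' ('valid', 'band', or
    'invalid') and, when valid, a presented 'age'. Returns counts in
    the same buckets used in the writeup above."""
    counts = {"under 18": 0, "18-30": 0, "over 30": 0,
              "band": 0, "invalid": 0}
    for p in profiles:
        if p["status"] != "valid":
            counts[p["status"]] += 1
        elif p["age"] < 18:
            counts["under 18"] += 1
        elif p["age"] <= 30:
            counts["18-30"] += 1
        else:
            counts["over 30"] += 1
    return counts
```

One caveat this makes visible: the tally measures presented age on accounts, not verified age of people, which is exactly the distinction between accounts and users drawn earlier.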
I do suspect that MySpace is holding strong at being primarily for younger people but that older folks have definitely been checking it out a LOT more. Still, i’m suspicious of the fact that 35-54 are common across all youth sites. I’d really like to see comScore’s data on something that we can check. Maybe LiveJournal?
(I’d really really really love to be proven wrong on this. If anyone has data that can provide an alternate explanation to the comScore numbers, please let me know!)
Update: Fred Stutzman and i just jockeyed back and forth to find something we could agree on wrt the comScore numbers. Here are some ways of making sense of the data of VISITORS:
Xanga is more of a teen-flavored site than MySpace, Facebook or Friendster
Facebook is more of a college-flavored site than MySpace, Friendster or Xanga
Friendster is more of a 20/30-something flavored site than MySpace, Facebook or Xanga
Of users going to these four sites, MySpace does not swing to any one group; it draws people of all ages to visit the site.
A greater percentage of adults (most likely parents) visit MySpace than any of the other social sites
This is all fine and well and confirms most intuition. The problem is that what we CANNOT confirm via this data is that more adults visit any of these sites than minors. Again, intuitive but the comScore data seems to indicate that adults visit each of these sites more than their key population. This is really visible in their “total internet” numbers, which seem to suggest that the vast majority of visitors to all of these social sites are adults. I cannot find a single person who works for one of these companies that believes this.
I’ve spoken to numerous folks since i posted last nite. Most believe that comScore gets this data by running a program on people’s computers. Young people are supposed to use a separate account from their parents. This data seems to indicate that comScore is wrong in assuming that people will do so. Most minors probably use their parent’s account to check these social sites. So, if we assume that, Xanga is overwhelmingly a teen site, Facebook probably has nearly as many high school users as college users, and MySpace swings young but is used by a wider variety of age groups than most social sites.
Finally, it’s all nice and well that Fox Interactive spokespeople confirm this data but i’ve watched over and over as FIM has confirmed or said things that were patently untrue in public. I don’t know if this is because FIM (the parent of MySpace) doesn’t know what’s going on on MySpace or if it’s because they don’t care whether or not they are accurate publicly. I don’t honestly believe that FIM has any clue about the age of its unique visitors. They know the purported age of people who have accounts and it would be patently false to say that 35-54 dominates account holders.
Frankly, i’m uber disappointed with comScore but even more disappointed with all of the press and bloggers who ran with the story that MySpace is gray without really looking at the data. This encourages inaccurate data and affects the entire tech industry as well as policy makers, advertisers, and users. I’m horrified that AP, Slashdot, Wall Street Journal, and numerous respectable bloggers are just reporting this as truth and speaking about it as though this is about users instead of visitors. C’mon now. If we’re going to fetishize quantitative data, let’s at least use a properly critical eye.
SlideShare launches today — the YouTube of Powerpoint. While Powerpoint destroys thought, so does TV. And misgivings aside, slides can be an art form in and of themselves. They are objects you spin stories around. Like this:
It is easy to embed a presentation and player within a site, blog or wiki. The above presentation is one I found by danah. I’ve been playing with the Alpha and really have to applaud Rashmi (you may know her from Dcamp), Jonathan and the gang at Uzanto.
You upload your Powerpoint (PPT and PPS formats) or OpenOffice (ODP format) slides into My Slidespace with a familiar title, description and tags. The flash player is fast and intuitive.
What’s also fascinating is that their servers are backed by Amazon S3 (Simple Storage Service). The other week when Socialtext 2.0 launched with a large-file webcast, we got Techcrunched and were worried about the load on our servers. After a little scrambling in IRC, Pete Kaminski leveraged S3, and problem solved. In this case, SlideShare has web-serviced their scalability. An interesting model to watch, and a good thing if the service is a sudden hit.
22. David Gerard on September 22, 2006 07:08 AM writes…
Plenty of people complain of Wikipedia’s alleged “anti-expert bias”. I’ve yet to see solid evidence of it. Unless “expert-neutral” is conflated to mean “anti-expert.” Wikipedia is expert-neutral - experts don’t get a free ride. Which is annoying when you know something but are required to show your working, but is giving us a much better-referenced work.
One thing the claims of “anti-expert bias” fail to explain is that there are lots of experts who do edit Wikipedia. If Wikipedia is so very hostile to experts, you need to explain their presence.
Permalink to Comment
23. engineer_scotty on September 22, 2006 01:19 PM writes…
I’ve been studying the so-called “expert problem” on Wikipedia—and I’m becoming more and more convinced that it isn’t an expert problem per se; it is a jackass problem. As in some Wikipedians are utter jackasses—in this context, “jackass” is an umbrella category for a wide variety of problem behaviors which are contrary to Wikipedia policy—POV pushing, advocacy of dubious theories, vandalism, abusive behavior, etc. Wikipedia policy is reasonably good at dealing with vandalism, abusive behavior and incivility (too good, some think, as WP:NPA occasionally results in good editors getting blocked for wielding the occasional cluestick ‘gainst idiots who sorely need it). It isn’t currently good at dealing with POV-pushers and crackpots whose edits are civil but unscholarly, and who repeatedly insert dubious material into the encyclopedia. Recent policy proposals are designed to address this.
Many experts who have left, or otherwise have expressed dissatisfaction with Wikipedia, fall into two categories: those who have had repeated bad experiences dealing with jackasses, and are frustrated by Wikipedia’s inability to restrain said jackasses; and those who themselves are jackasses. Wikipedia has seen several recent incidents, including one this month, where notable scientists have joined the project and engaged in patterns of edits which demonstrated utter contempt for other editors of the encyclopedia (many of whom were also PhD-holding scientists, though lesser known), attempted to “own” pages, attempted to portray conjecture or unpublished research as fact, or have exaggerated the importance or quality of their own work. When challenged, said editors have engaged in (predictable) tirades accusing the encyclopedia of anti-intellectualism and anti-expert bias—charges we’ve all heard before.
The former sort of expert the project should try to keep. The latter, I think the project is probably better off without; and I suspect they would wear out their welcomes quickly on Citizendium as well.
I would love to see a few case studies, linked to the History and Talk pages of a few articles— “Here was the expert contribution, here was the jackass edit, this is what was lost”, etc. Reading Engineer Scotty’s comment, and given the general sense of outraged privilege that seems to run through much of the “Experts have their work edited without permission!” literature, I am guessing that the problem is not so much experts contributing and then being driven away as it is non-contributions by people unwilling to work in an environment where their contributions aren’t sacrosanct.
A response from Larry Sanger, posted here in its entirety:
Thanks to Clay Shirky for the opportunity to reply here on Many2Many
to his “Larry Sanger, Citizendium, and the Problem of Expertise.” First, two points about Clay’s style of argumentation, which I simply cannot let go without comment. Then some replies to his actual arguments.
1. Allow me to identify my own core animating beliefs, thank you very much.
Clay’s piece has an annoying tendency to characterize my assumptions uncharitably and without evidence, and to psychologize about me. Thus, Clay says things like: “Sanger’s published opinions seem based on three beliefs”; “Sanger wants to believe that expertise can survive just fine outside institutional frameworks”; “Sanger’s core animating belief seems to be a faith in experts”; “Sanger’s view seems to be that expertise is a quality like height”; and “Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us.”
I find myself strongly disagreeing with Clay’s straw Sanger. However, I am not that Sanger! Last time I checked, I was made of flesh and blood, not straw.
2. May I borrow that crystal ball when you’re done with it?
Repeatedly, Clay makes dire predictions for the Citizendium. “Structural issues…will probably prove quickly fatal”; “institutional overhead…will stifle Citizendium”; “policing certification will be a common case, and a huge time-sink” so “the editor-in-chief will then have to spend considerable time monitoring that process”; “Citizendium will re-create the core failure of Nupedia”; “Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.”
I think Clay lacks any good reason to think the Citizendium will fail; but clearly he badly wants it to fail, and his comments are animated by wishful thinking. That, anyway, seems the most parsimonious explanation. To borrow one of Clay’s phrases, and return him the favor: it is interesting “how consistent Clay has been about his beliefs” on the low value of officially-recognized expertise in online communities. “His published opinions seem based on” the belief in the supreme value and efficacy of completely flat self-organizing communities. The notion of experts being given special authority, even very circumscribed authority, does extreme violence to this “core animating belief” (to borrow another of Clay’s phrases). It must, therefore, be impossible.
Less flippantly now. I do make a point of being properly skeptical about all of my projects—that’s another thing I’ve been consistent about. You can probably still find writings from 2000 and 2001 in which I said I didn’t know whether Nupedia or Wikipedia would work. I have no idea if the Citizendium will work. What I do know is that it is worth a try, and that we’ll do our best to solve the problems we can anticipate, and the rest as they arise.
By the way, there’s a certain irony in the situation, isn’t there? Clay Shirky, respected expert about online communities, holds forth about a new proposed online community, and does what so many experts love to do: make bold predictions about the prospects of items in their purview. Meanwhile, I, the alleged expert-lover, cast aspersions on his abilities to make such predictions. If my “core animating belief” were “a faith in experts,” why would I lack faith in this particular expert?
3. I want to be a social fact, too!
Let’s move on to Clay’s actual arguments. He begins his first argument with something perfectly true, that expertise (in the relevant sense, an operational concept of expertise) is a social fact, that this social fact is conferred (not always formally, but often) by institutions, and that, therefore, one cannot have expertise without (in some sense) “institutional overhead.” So far, so good. The current proposal—which is open to debate, at this early stage, even from Clay himself—addresses this situation by proposing to avoid editor application review committees in favor of self-designation of editorial status. The details are relevant, so let me quote them from the FAQ:
We do not want editors to be selected by a committee, which process is too open to abuse and politics in a radically open and global project like this one is. Instead, we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot [most] false claims to editorship.
What then is Clay’s criticism? “The problem” at the beginning of the argument was that “experts are social facts.” Yeah, so? So, says Clay,
Sanger expects that decertification will only take place in unusual cases. This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.
My initial reaction to this was: how on Earth could Shirky know all that? Furthermore, isn’t it quite obvious that, far from being a static proposal, this project is going to be able to move nimbly (I usually propose radical changes and refinements to my projects) in order to solve just such problems, should they arise?
In any event, based on my own experience, I counter-predict that Clay will probably be wrong in his prediction. There will probably be a lot of people who, as a joke, out of cluelessness, or whatever, claim to be editors.
For the easy cases, which will probably be most of them, constables will be able to rein people in nearly as easily as they can rein in vandalism. No doubt we will have a standard procedure for achieving this. As to the borderline (“edge”) cases (e.g., some grad students and independent scholars), Clay gives us no reason to think that the editor-in-chief will have to spend large amounts of time fighting about them. Unlike Wikipedia, and like many OSS projects, there will be a group of people authorized to select the “release managers” (so to speak). This policy will be written into the project charter, support of which will be a requirement of participation in the project.
The review process for editor declarations, therefore, will be clear and well-accepted enough—that, after all, is the whole point of establishing a charter and “rule of law” in the online community—that the process can be expected to work smoothly. Mind you, it will be needed because of course there will be borderline cases, and disgruntled people, but Clay has given no reason whatsoever to think it will dominate the entire proceedings.
Besides, this is a responsibility I propose to delegate to a workgroup; I will probably be too busy to be closely involved in it.
Far from being persuasive, it is actually ironic that Clay cites primordial fights I had with trolls on Wikipedia as evidence of his points. It was precisely due to a lack of clearly-circumscribed authority and widely-accepted rules that I had to engage in such fights. Consequently, the Citizendium is setting up a charter, editors, and constables precisely to prevent such problems.
4. Warm and fuzzy yes, a hierarchy no.
Clay nicely sums up his next argument this way:
Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II), take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt out (Nupedia in a nutshell).
(By the way, Clay is completely wrong about citizen participation in Nupedia. They made up the bulk of authors in the pipeline. Our first article was by a grad student. An undergrad wrote several biology articles. So many myths have been made about Nupedia, so completely divorced from reality, that it has become a fascinating and completely fact-free Rorschach test for everything bad that anyone wants to say about expert authority in open collaboration.)
The Citizendium is, by Clay’s lights, a radical experiment that does violence to his cherished notions of what online communities should be like. Persons inclined to “debauch self-certification” as on IRC chatrooms will be removed from the project; and others will not protest at such perfectly appropriate treatment, because we will have already announced this as a policy.
Through self-selection the community can be expected to be in favor of such policies; those who dislike them will always have Wikipedia.
That’s part of the beauty of a world with both a Citizendium and a Wikipedia in it. Those who (like you, Clay) instinctively hate the Citizendium—we’ve seen a little of this in blogs lately, calling the very idea “Wikipedia for stick-in-the-muds,” “Wikipedia for control freaks,” a “horror,” etc.—will always have Wikipedia. I strongly encourage you to stick with Wikipedia if you dislike the idea of the Citizendium that much. That will make matters easier for everyone. If other people want to organize themselves in a different way—a way you’d never dream of doing—then please give them room to do so. As a result we’ll have one project for people who agree with you, Clay, and one for people who agree with me, and the world will be richer.
Clay does give some more support for thinking that an editor-guided wiki is unworkable. He says that the viability of a community resembles a “U curve” with one end being a total hierarchy and the other end being “a functioning community with a core group.” Apparently, projects that are neither hierarchies nor communities, which Clay implies is where the Citizendium would fit, would incur too many “costs of being an institution” and “significant overhead of process.” What I find particularly puzzling about this is how he describes the ends of the U curve. I would have expected him to say hierarchy on one end and a totally flat, leaderless community on the other end. But instead, opposite the hierarchy is “a functioning community with a core group.” How is it, then, that the Citizendium as proposed would not constitute “a functioning community with a core group”?
Let me put this more plainly, setting aside Clay’s puzzling theoretical apparatus. What the world has yet to test is the notion of experts and ordinary folks (and remember: experts working outside their areas of expertise are then “ordinary folks”) working together, shoulder-to-shoulder, on a single project according to open, open source principles. That is the radical experiment I propose. This actually hearkens back to the way OSS projects essentially work. So far, to my knowledge, experts have not been invited in to “gently guide” open content projects in a way roughly analogous to the way that senior developers gently guide OSS projects, deciding what changes are in the next release and what aren’t. You might say that the analogy does not work because senior developers of OSS projects are chosen based on the merits of their contributions within the project. But what if we regard an encyclopedia as continuous with the larger world of scholarship, so that scholarly work outside of the narrow province of a single project becomes relevant for determining a senior content developer? For an encyclopedia, that’s simply a sane variant on the model.
Whereas OSS projects have special, idiosyncratic requirements, encyclopedias frankly do not. There’s no point to creating an insular community, an “in group” of people who have mastered the particular system, because it’s not about the system—it’s about something any good scholar can contribute to, an encyclopedia. Then, if the larger, self-selecting community invites and welcomes such people to join them as “senior content developers,” why not think the analogy with OSS is adequately preserved?
(For more of the latter argument please see a new essay I am going to try to circulate among academics.)
The interesting thing about Citizendium, Larry Sanger’s proposed fork of Wikipedia designed to add expert review, is how consistent Sanger has been about his beliefs over the last 5 years. I’ve been reviewing the literature from the dawn of Wikipedia, born from the failure of the process-laden and expert-driven Nupedia, and from then to now, Sanger’s published opinions seem based on three beliefs:
1. Experts are a special category of people, who can be readily recognized within their domains of expertise.
2. A process of open creation in which experts are deferred to as of right will be superior to one in which they are given no special treatment.
3. Once experts are identified, that deference will mainly be a product of moral suasion, and the only place authority will need to intrude are edge cases.
All three beliefs are false.
There are a number of structural issues with Citizendium, many related to the question of motivation on the part of the putative editors; these will probably prove quickly fatal. More interesting to me, though, is the worldview behind Sanger’s attitude towards expertise, and why it is a bad fit for this kind of work. Reading the Citizendium manifesto, two things jump out: his faith in experts as a robust and largely context-free category of people, and his belief that authority can exist largely free of expensive enforcement. Sanger wants to believe that expertise can survive just fine outside institutional frameworks, and that Wikipedia is the anomaly. It can’t, and it isn’t.
Experts Don’t Exist Independent of Institutions
Sanger’s core animating belief seems to be a faith in experts. He took great care to invite experts to the Nupedia Advisory Board, and he has consistently lamented that Wikipedia offers no special prerogatives for expert review, and no special defenses against subsequent editing of material written by experts. Much of his writing, and the core of Citizendium, is based on assumptions about how experts should be involved in a project like this.
The problem Citizendium faces is that experts are social facts — society typically recognizes experts through some process of credentialling, such as the granting of degrees, professional certifications, or institutional engagement. We have a sense of what it means that someone is a doctor, a judge, an architect, or a priest, but these facts are only facts because we agree they are. If I say “I sentence you to 45 days in jail”, nothing happens. If a judge says “I sentence you to 45 days in jail”, in a court of law, dozens of people will make it their business to act on that imperative, from the bailiff to the warden to the prison guards. My words are the same as the judge’s, but the judge occupies a position of authority that gives his words an effect mine lack, an authority that only exists because enough people agree that it does.
Sanger’s view seems to be that expertise is a quality like height — some people are obviously taller than others, and the rest of us have no problem recognizing who the tall people are. But expertise isn’t like that at all; it is in fact highly subject to shifts in context. A lawyer from New York can’t practice in California without passing the bar there. A surgeon from India can’t operate on a patient in the US without further certification. The UN representative from Yugoslavia went away when Yugoslavia did, and so on.
As a result, you cannot have expertise without institutional overhead, and institutional overhead is what stifled Nupedia, and what will stifle Citizendium. Sanger is aware of this challenge, and offers mollifying details:
[…]we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot false claims to editorship.
We will also no doubt need a process where people who do not have the credentials are allowed to become editors, and where (in unusual cases) people who have the credentials are removed as editors.
Sanger et al. set the bar for editorship, editors self-certify, then, in order to get around the problems this will create, there will be an additional certification and de-certification process internal to the site. On Citizendium, if you are competent but uncredentialed, you will have to be vetted before you are allowed to ascend to the editor’s chair, and if you are credentialed but incompetent, you’re in until decertification. And, critically, Sanger expects that decertification will only take place in unusual cases.
This is wrong; policing certification will be a common case, and a huge time-sink. If there is a value to being an expert, people will self-certify to get at that value, no matter what their credentials. The editor-in-chief will then have to spend considerable time monitoring that process, and most of that time will be spent fighting about edge cases.
Sanger himself experienced this in his fight with Cunctator at the dawn of Wikipedia; Cunc questioned Sanger’s authority, leading Sanger to defend it with increasing vigor. As Sanger said at the time “…in order to preserve my time and sanity, I have to act like an autocrat. In a way, I am being trained to act like an autocrat.” Sanger’s authority at Wikipedia required his demonstrating it, yet this very demonstration made his job harder, and ultimately untenable. This is the common case; as any parent can tell you, exercise of presumptive authority creates the conditions under which it is tested. As a result, Citizendium will re-create the core failure of Nupedia, namely putting at the center of the effort a process whose maintenance takes more energy than can be mustered by a volunteer project.
“We’re a Warm And Fuzzy Hierarchy”: The Costs of Enforcement
In addition to his misplaced faith in the rugged condition of expertise, Sanger also underestimates the costs of setting up and then enforcing a process that divides experts from the rest of us. Curiously, this underestimation seems to be borne of a belief that most of the world shares his views on the appropriate deference to expertise:
Can you really expect headstrong Wikipedia types to work under the guidance of expert types in this way?
Probably not. But then, the Citizendium will not be Wikipedia. We do expect people who have proper respect for expertise, for knowledge hard gained, to love the opportunity to work alongside editors. Imagine yourself as a college student who had the opportunity to work alongside, and under the loose and gentle direction of, your professors. This isn’t going to be a top-down, command-and-control system. It is merely a sensible community: one where the people who have made it their life’s work to study certain areas are given a certain appropriate authority—without thereby converting the community into a traditional top-down academic editorial scheme.
Well, can you expect the experts to want to work “shoulder-to-shoulder” with nonexperts?
Yes, because some already do on Wikipedia. Furthermore, they will have an incentive to work in this project, because when it comes to content—i.e., what the experts really care about—they will be in charge.
These passages evince a wounded sense of purpose: Experts are real, and it is only sensible and proper that they be given an appropriate amount of authority. The totality of the normative view on display here is made more striking because Sanger never reveals the source of these judgments. “Sensible” according to whom? How much authority is “appropriate”? How much control is implied by being “in charge”, and what happens when that control is abused?
These responses are also mutually contradictory. Citizendium, the manifesto claims, will not be a traditional top-down academic scheme, but experts will be in charge of the content. The only way experts can be in charge without top-down imposition is if every participant internalizes respect for authority to the point that it is never challenged in the first place. One need allude only lightly to the history of social software since at least Communitree to note that this condition is vanishingly rare.
Citizendium is based less on a system of supportable governance than on the belief that such governance will not be necessary, except in rare cases. Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification (leading to irc-style chanop wars), contest expert prerogatives, raising the cost of review to unsupportable levels (Wikitorial, round II), take to distributed protest (q.v. Hank the Angry Drunken Dwarf), or simply opt out (Nupedia in a nutshell).
The “U”-Curve of Organization and the Mechanisms of Deference
Sanger is an incrementalist, and assumes that the current institutional framework for credentialling experts and giving them authority can largely be preserved in a process that is open and communally supported. The problem with incrementalism is that the very costs of being an institution, with the significant overhead of process, create a U curve — it’s good to be a functioning hierarchy, and it’s good to be a functioning community with a core group, but most of the hybrids are less fit than either of the end points.
The philosophical issue here is one of deference. Citizendium is intended to improve on Wikipedia by adding a mechanism for deference, but Wikipedia already has a mechanism for deference — survival of edits. I recently re-wrote the conceptual recipe for a Menger Sponge, and my edits have survived, so far. The community has deferred not to me, but to my contribution, and that deference is both negative (not edited so far) and provisional (can always be edited.)
Deference, on Citizendium, will be for people, not contributions, and will rely on external credentials, a priori certification, and institutional enforcement. Deference, on Wikipedia, is for contributions, not people, and relies on behavior on Wikipedia itself, post hoc examination, and peer review. Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.
Last night, i asked: will Facebook learn from its mistake? In the first paragraph, i alluded to a “privacy trainwreck” and then went on to briefly highlight the political actions that were taking place. I never returned to why i labeled it that way and, in my coarseness, i failed to properly convey what i meant by this.
When i sat down to explain the significance of the “privacy trainwreck,” a full-length essay came out. Rather than make you read this essay in blog form (or via your RSS reader), i partitioned it off to a printable webpage.
I believe the Wired Wiki experiment can be called a success, though yesterday I would have said it was doomed. I just came back from Wiki Wednesday, where Wired reporter Ryan Singel held a conversation about it. How we conducted the experiment, which part of the editorial process it was directed at, and the participation of the community all give us a lot to learn from.
Do recall that the use of wikis in journalism has been significantly tainted by the LA Times Wikitorial debacle. It was a failure in wiki implementation, goal setting, content structure and moderation. While the media has embraced public blogs, they still have a while to go before public wikis are accepted.
In an experiment in collaborative journalism, Wired News is putting reporter Ryan Singel at your service.
This wiki began as an unedited 1,059-word article on the wiki phenomenon, exactly as Ryan filed it. Your mission, should you choose to accept it, is to do the job of a Wired News editor and whip it into shape. Don’t change the quotes, but feel free to reorganize it, make cuts, smooth the prose or add links — whatever it takes to make it a lively, engaging news piece.
Ryan will answer questions from the comments page, and, when consensus calls for it, conduct additional reporting. If there’s something he missed, let him know, and he’ll get on the phone and investigate, then submit new text to the wiki for your review.
Tim Spalding has taken discussion forums a big step forward over at LibraryThing. The concept is simple but could make a real difference because it allows forum msgs to be aggregated in multiple ways. When you’re entering a msg at a forum, you can put a title or author in brackets and LibraryThing will take a stab at identifying what you have in mind. Think of it as in-place tagging. You can thus easily find all the posts about a book. And all the references to a book or author will be listed on that book or author’s page.
Because LibraryThing knows which books you own (because you’ve told it), it can feed you msgs about any of them. And, as Tim points out, this unhiding of msgs will change the temporality of posts: Rather than msgs fading into obscurity a few days or weeks after they’re posted, they’ll be easily findable and reply-able.
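The bracket convention described above amounts to a tiny inline markup: scan each message for bracketed titles or authors, then build an index from each work to the posts that mention it. Here is a toy sketch of that idea — an illustration of the mechanism, not LibraryThing's actual parser or matching logic (which also has to resolve names against its book database).

```python
import re

# Matches a bracketed reference like [Ulysses] or [James Joyce];
# disallows nested brackets inside the reference.
BRACKET_REF = re.compile(r"\[([^\[\]]+)\]")

def extract_refs(message: str) -> list:
    """Return the bracketed titles/authors mentioned in a forum message."""
    return [ref.strip() for ref in BRACKET_REF.findall(message)]

def index_posts(posts) -> dict:
    """Map each referenced work/author to the post ids that mention it.

    `posts` is an iterable of (post_id, message_text) pairs.
    """
    index = {}
    for post_id, text in posts:
        for ref in extract_refs(text):
            index.setdefault(ref, []).append(post_id)
    return index
```

With such an index, showing "all the posts about a book" on that book's page is a single dictionary lookup, which is what makes old forum messages findable instead of fading into obscurity.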
Over the last month, i’ve been driving Mimi’s Hybrid on and off. One of my favorite things about the Hybrid is that it tells you how many MPG you’re averaging over time. I find myself driving around town trying to maximize that number, getting uber excited when it goes up and super sad when it goes down. It reminds me of when i used to try to maximize my miles per hour when going from Boston to New York only this is more environmental. Yet, it’s not the environment that i’m concerning myself with - it’s all about number games in the same way that people obsess over every pound on the scale or the calories in every bite.
Then i was thinking about Tantek and Jason raving about Consumating. I love the fact that it’s a lot of cool geeky people but i can never get over the lameness that i feel when i log in and look at my score. And yet, i can’t be bothered to answer the questions that make me feel all uncomfortable in the hopes that someone will like my answers and rate me higher. It’s a catch-22 for me. Yet, i totally understand why Tantek and Jason and others absolutely love it and why they go back for more.
And then i was thinking about the people on Yahoo! Answers who spend hours every day answering questions to get high ranks. It’s very similar to Consumating only it’s not all embarrassing because it’s not really about you - it’s about the answers. There’s no real gain from getting points but still, it’s like a mouse in a cage determined to do well just cuz they can.
This all reminds me of a scene in some movie. I can’t recall what movie it was but it was about how you just want to be the best at something, anything… to have something to point at and say look, i’m #1! The validation, the proof of greatness! Even if that something is problematic attention getting like being the #1 serial killer. (Was it Bowling for Columbine?)
I started wondering about these number games… They’re all over social software - Neopets, friends on social network sites, blog visitors, etc. Who is motivated by what number games? Who is demotivated? Does it make a difference if the number game is about the group vs. the individual, about one’s self directly vs. about some abstract capability?
Are there some number games that work better than others in attracting a broader audience? I’m thinking about Orkut here… if the game is to get as many Brazilians on the site as possible, you only need a few obsessives to be the rallying forces; everyone else is part of the number game simply by signing up. So there are tons competing in the number games but only a few invested.
Does anyone know anything about how these number games work as incentives?
The article mentions that iStockphoto (cheap stock photography via the Internet) has obliterated the “future for professional stock photography.” (Similarly, Clay Shirky noted way back when that blogs “are such an efficient tool for distributing the written word that they make publishing a financially worthless activity.”)
But more importantly, the Wired article discusses the rise of R&D networking. For example, InnoCentive matches problems and problem-solvers: “The strength of a network like InnoCentive’s is exactly the diversity of intellectual background…. We actually found the odds of a solver’s success increased in fields in which they had no formal expertise.”
Now, just this year, Chevy attempted its own kind of crowdsourcing, allowing website visitors to apply their own text input over Chevy Tahoe footage to create-your-own-commercial. What they got was a barrage of anti-pollution, anti-accident, and just-about-anti-anything creations. (See them at YouTube: http://youtube.com/results?search=chevy+tahoe). One participant even launched a website where you can rate the videos.
Using existing mass media images to twist, mock, refute, subvert, or as wikipedia more politely says “produce negative commentary about itself” is called “culture jamming.”
Umberto Eco calls this “semiological guerrilla warfare” and supports “action which would urge the audience to control the message and its multiple possibilities of interpretation.” (from Travels in Hyperreality).
But what happens when the culture jammers actually want to continue and extend the media in question?
The fans are saying, look, if we can’t get what we want on television, the technology is out there for us to do it ourselves…. It has become so popular that Walter Koenig, the actor who played Chekov in the original “Star Trek,” is guest starring in an episode, and George Takei, who played Sulu, is slated to shoot another one later this year.
Now the Star Trek franchise has a real opportunity here that could be taken as a crowdsourcing lesson to other media producers (music, film, books, etc.). Here it comes:
Free the content!
Let the Star Trek fans take the initiative and spend the money to keep the interest-level going, crank out a studio movie once in a while, foster crossovers between shows, organize events, provide financial assistance, etc.
It is without shame that I can share the release of Socialtext Open, an Open Source distribution of Socialtext. I figure this is in demand by M2M readers, and, well, we are quite proud of it. For your downloading pleasure.
“The real value of communicative technologies like social software is that they re-enable and enhance our ability to use a time-tested means of information processing, i.e. the conversation, in new and interesting ways!”
Conversation has long been the cornerstone of our society. New technologies enable us to speak to people anytime, anywhere. However, there is growing concern – both in the UK and elsewhere - that we are talking less than we used to. This work suggests that this is a misconception and that the issue is actually much more complex.
Robert Putnam’s book Bowling Alone catalyzed the debate about the decline of community. Putnam, like many others, suffered from ontological blinders. By defining community in a narrow way, he failed to see forms of community that didn’t fit his narrow definition. But:
The adherence to outdated ways of thinking about social involvement has intensified concern about our sense of community. The way that we engage with those around us has changed. We no longer necessarily connect with either conventional structures like community societies or even less formal associative fora, like markets. Community involvement remains of vital importance, but structures of engagement no longer reflect the ways in which people are comfortable in having their say.
This problem is also rampant in politics where scholars who focus on the primacy of nation-states ignore transnational social organization, and scholars who focus on the structures of formal government fail to notice the networks of informal governance that are emerging across the globe. The bottom line is that technology ushers in new forms of social organization that escape notice precisely because they are invisible to adherents of the old paradigm. By the time anyone notices the impending social transformation, it is too powerful to contain, and social transformation cascades across the landscape. Or so the theory goes.
So what about conversation? Well, I venture to suggest that it is through conversation, the connecting of people with other people, the exchange of ideas, the spread of information, debate, dissent, and empathy, that collective wisdom arises. Furthermore, given the resurgence of violent politics, the ambivalence in the face of environmental crises, and profit-driven enclosure movements like overly restrictive copyright law and the Net Neutrality concern, we could definitely benefit from new forms of social organization as carriers of collective wisdom.
Last week, i had drinks with Ian Rogers and Kareem Mayan and we were talking about shifts in the development of technology. Although all of us have made these arguments before in different forms, we hit upon a set of metaphors that i feel the need to highlight.
Replete with references to engineering, technology development was originally seen as a type of formalized production. You design, build and ship products. And then they’re out in the wild, removed from the production cycle until you make Version 2. Of course, it didn’t take long for people to realize that when they shipped flaws, they didn’t need to do a recall. Instead, they could just ship free updates in the form of Version 1.1.
As the world went web-a-rific, companies held onto the ship-final-products mentality in its stodgy archaic form. Until the forever-in-beta hit. I, for one, love the persistent beta. It signals that the system is continuously updating, never fully baked and meant to be organic. This is the way that it should be.
Web development is fundamentally different than packaged software. Because it is the web, there’s no vast distance between producers and consumers. Distribution channels cross space and time (much to the chagrin of most old skool industries). Particularly when it comes to social software, producers can live inside their creations, directly interact with those using the system, and evolve the system alongside the practices that are emerging. In fact, not only can they, they’re stupid to do anything else.
The same revolution has happened in writing. Sure, we still ship books, but what does it mean for the author to have direct interaction with the reader like they do in blogging? It’s almost as though someone revived the author from the dead. And maybe turned hir into a kind of peculiar looking Frankenstein who realizes that things aren’t quite right in interpretation-land but can’t make them right no matter what. Regardless, with the author able to directly connect to the reader, one must wonder how the process changes. For example, how is the audience imagined when its presence is persistent?
I’m reminded of a book by Stewart Brand - How Buildings Learn. In it, Brand talks about how buildings evolve over time based on their use and the aging that takes place. A building is not just the end-result of the designer, but co-constructed by the designer, nature, and the inhabitant over time. When i started thinking about technology as architecture, i realized the significance of that book. We cannot think about technologies as finalized products, but as evolving architectures. This should affect the design process at the get-go, but it also highlights the differences between physical and digital architectures. What would it mean if 92 million people were living in the house simultaneously with different expectations for what colors the walls should be painted? What would it mean if the architect was living inside the house and fighting with the family about the intention of the mantel?
The networked nature of web technologies brings the architect into the living room of the house, but the question still remains: what is the responsibility of a live-in architect? Coming in as an authority on the house does no good - in that way, the architect should still be dead. But should the architect just be a glorified fixer-upper/plumber/electrician? Should the architect support the aging of the house to allow it to become eccentric? Should the architect build new additions for the curious tenants? What should the architect be doing? One might think that the architect should just leave the place alone… but is this how digital sites evolve? Do they just need plumbers and electricians? Perhaps the architect is not just an architect but also an urban planner… It is not just the house that is of concern, but the entire city. How the city evolves depends on a whole variety of forces that are constantly in flux. Negotiating this large-scale system is daunting - the house seems so much more manageable. But 92 million people never lived in a single house together.
 Note to Barthes scholars: i’m being snippy here. I realize that the author’s authority should still be contested, that multiple interpretations are still valid, and that the author is still a product of social forces. I also realize that even as i’m writing this blogpost, its reading will be out of my control, but the reality is that i’ll still - as author - get all huffy and puffy and try to be understood. Damnit.
Prepare to be spammed globally. Twttr just launched, a mobile social software app for SMSing your social network, developed by Odeo. It’s slightly simpler than Dodgeball, not location centric and a bit more viral. Biz Stone calls it present-tense blogging. Ev notes you might want to upgrade your SMS plan and they are working on compatibility outside the US. To me it’s reply-to-all baked into your phone.
If they support MMS and let me send a photo to twttr and CC flickr, it will be a killer app. But for now, put my SMSes in a sidebar widget or give me feeds I can splice.
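The “feeds I can splice” wish is easy to picture: republish the stream of short SMS updates as a bare-bones RSS channel that sidebar widgets and feed splicers already understand. This is a speculative sketch, not anything twttr actually exposes; a real service would add dates, GUIDs, links, and use a proper feed library.

```python
from xml.sax.saxutils import escape

def status_feed(author, messages):
    """Render short status updates as a minimal RSS 2.0 channel."""
    items = "\n".join(
        f"  <item><title>{escape(m)}</title></item>" for m in messages
    )
    return (
        '<rss version="2.0"><channel>\n'
        f"  <title>{escape(author)} status</title>\n"
        f"{items}\n"
        "</channel></rss>"
    )

# A widget or splicer can now consume the stream like any other feed.
print(status_feed("ross", ["in transit", "at the office"]))
```

The point is only that the plumbing is trivial; the value is in letting the updates escape the phone and recombine with everything else you publish.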
Ever get that feeling, while you are blogging and flickring your life away, that you have lost something? That you are telling your life’s story, but it is lost in the archives and in the minds of people who are really paying attention?
There is a gap in social software for binding stories in a chronology. For building biographies of people, places and things. I think Dandelife serves as a different object to tell stories around. Time.
The horizontal and vertical visualizations are what makes this work:
Dandelife is definitely beta and Edward and Kelly are working hard on it. But when you can upload your blog and photos to start your story, it’s pretty powerful. Go play. And let them know how it can get better.
“We had to move away from a static, dead intranet,” says Myrto Lazopoulou. “The wiki has allowed us to improve collaboration, communication and publication. We can cross time zones, improve the way teams work, reduce email and increase transparency.”
The case study is also available in PDF format and complements other research done on this leading deployment:
University of Pennsylvania’s del.icio.us-like PennTags project allows readers to tag catalogued items. It’s a great way to track resources for a research project and simultaneously make the results of your forays available to future researchers. In fact, it seems just plain selfish not to do so.
Integrating tagging with the book catalogue (and therefore with the book taxonomy) instantaneously provides the best of both worlds: Structured browsing leads you to nodes with jumping off points into the connections made by others who are putting those nodes into various contexts, and tags lead you back into the structured world organized by experts in structure.
My guess is that the folksonomy that emerges will not change the existing taxonomy because in a miscellaneous world you don’t have to change something in order to change it. The existing taxonomy could stay exactly as it is, as the folksonomy supplements it by providing synonyms for existing categories (e.g., a search for “recipes” takes you to the “cuisine” category of the existing taxonomy) and leaping-off-points from it into the user-created clusters of meaning (e.g., here’s the tag cloud for the node you’re browsing). Rather than disrupting, transforming or replacing the existing taxonomy, the folksonomy may just affectionately tousle its hair.
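The supplement-don’t-replace relationship described above can be sketched as a small data-structure exercise: the taxonomy stays untouched while tags layer synonyms and jumping-off points over it. A hedged illustration; the catalog entries, item names, and tags below are invented for the example, not drawn from PennTags.

```python
from collections import Counter

# Expert-maintained taxonomy: category -> catalogued items (untouched by tagging)
CATALOG = {
    "cuisine": ["joy-of-cooking", "mastering-the-art"],
    "history": ["guns-germs-steel"],
}

# User-applied tags: item -> tags (the folksonomy layered on top)
TAGS = {
    "joy-of-cooking": ["recipes", "kitchen", "classic"],
    "mastering-the-art": ["recipes", "french"],
    "guns-germs-steel": ["anthropology"],
}

def synonyms(tag):
    """Map a tag search to the taxonomy categories its tagged items live in."""
    return sorted({cat for cat, items in CATALOG.items()
                   for item in items if tag in TAGS.get(item, [])})

def tag_cloud(category):
    """Tag counts for one taxonomy node: leaping-off points into user clusters."""
    return Counter(t for item in CATALOG[category] for t in TAGS[item])

# A search for "recipes" leads into the expert "cuisine" category...
print(synonyms("recipes"))   # ['cuisine']
# ...and the cuisine node exposes the clusters of meaning users created around it.
print(tag_cloud("cuisine"))
```

Note that nothing here ever writes to CATALOG: the folksonomy only adds routes into and out of the structured world, which is the hair-tousling rather than the replacement.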
Henry Jenkins (Co-Director of Comparative Media Studies at MIT) and i were interviewed by Sarah Wright of the MIT News Office about the proposed Deleting Online Predators Act (DOPA). Although they only used a fraction of our interview in the MIT Tech Talk, we decided to publish the extended version online. We feel as though our response provides valuable information for parents, legislators, journalists and technologists. It summarizes a lot of what both Henry and i have been trying to get across when interviewed by the media.
Nicholas Carr has an odd piece up, reacting to the ongoing question of Wikipedia governance as if it were the death of Wikipedia. In Carr’s view:
Where once we had a commitment to open democracy, we now have a commitment to “making sure things are not excessively semi-protected.” Where once we had a commune, we now have a gated community, “policed” by “good editors.” So let’s pause and shed a tear for the old Wikipedia, the true Wikipedia. Rest in peace, dear child. You are now beyond the reach of vandals.
Now this is odd because Carr has in the past cast entirely appropriate aspersions on pure openness as a goal, noting, among other things, that “The open source model is not a democratic model. It is the combination of community and hierarchy that makes it work. Community without hierarchy means mediocrity.”
Carr was right earlier, and he is wrong now. Carr would like Wikipedia to have committed itself to openness at all costs, so that changes in the model are failure conditions. That isn’t the case, however; Wikipedia is committed to effectiveness, and one of the things it has found to be effective is openness, but where openness fails to provide the necessary defenses on its own, they’ll make changes to remain effective. The changes in Wikipedia do not represent the death of Wikipedia but adaptation, and more importantly, adaptation in exactly the direction Carr suggests will work.
We’ve said it here before: Openness allows for innovation. Innovation creates value. Value creates incentive. If that were all there was, it would be a virtuous circle, because the incentive would be to create more value. But incentive is value-neutral, so it also creates distortions — free riders, attempts to protect value by stifling competition, and so on. And distortions threaten openness.
As a result, successful open systems create the very conditions that threaten openness. Systems that handle this pressure effectively continue (Slashdot comments.) Systems that can’t or don’t find ways to balance openness and closedness — to become semi-protected — fail (Usenet.)
A huge number of our current systems are hanging in the balance, because the more valuable a system, the greater the incentive for free-riding. Our largest and most spontaneous sources of conversation and collaboration are busily being retrofit with filters and logins and distributed ID systems, in an attempt to save some of what is good about openness while defending against Wiki spam, email spam, comment spam, splogs, and other attempts at free-riding. Wikipedia falls into that category.
And this is the possibility that Carr doesn’t entertain, but is implicit in his earlier work — this isn’t happening because the Wikipedia model is a failure, it is happening because it is a success. Carr attempts to deflect this line of thought by using a lot of scare quotes around words like vandal, as if there were no distinction between contribution and vandalism, but this line of reasoning runs aground on the evidence of Wikipedia’s increasing utility. If no one cared about Wikipedia, semi-protection would be pointless, but with Wikipedia being used as reference material in the Economist and the NY Times, the incentive for distortion is huge, and behavior that can be sensibly described as vandalism, outside scare quotes, is obvious to anyone watching Wikipedia. The rise of governance models is a reaction to the success that creates incentives to vandalism and other forms of attack or distortion.
We’ve also noted before that governance is a certified Hard Problem. At the extremes, co-creation, openness, and scale are incompatible. Wikipedia’s principal advantage over other methods of putting together a body of knowledge is openness, and from the outside, it looks like Wikipedia’s guiding principle is “Be as open as you can be; close down only where there is evidence that openness causes more harm than good; when this happens, reduce openness in the smallest increment possible, and see if that fixes the problem.”
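That guiding principle reads almost like an algorithm: escalate by the smallest increment on evidence of harm, relax when the harm subsides. A toy sketch of it follows; the protection levels and the 20% threshold are invented for illustration and are not Wikipedia’s actual policy.

```python
# Invented protection ladder, loosely echoing open/semi-protected states.
LEVELS = ["open", "semi-protected", "protected"]

def next_level(current, vandal_edits, total_edits, threshold=0.2):
    """Move one step up only on evidence of harm; step back down otherwise."""
    if total_edits == 0:
        return current            # no evidence either way: stay put
    harm = vandal_edits / total_edits
    i = LEVELS.index(current)
    if harm > threshold and i < len(LEVELS) - 1:
        return LEVELS[i + 1]      # smallest possible increment of closure
    if harm <= threshold and i > 0:
        return LEVELS[i - 1]      # re-open once the attack subsides
    return current

print(next_level("open", vandal_edits=30, total_edits=100))           # semi-protected
print(next_level("semi-protected", vandal_edits=2, total_edits=100))  # open
```

The interesting property is the second branch: closure is treated as an experiment to be rolled back, not a permanent retreat from openness.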
People who build or manage large-scale social software form the experimental wing of political philosophy — in the same way that the US Constitution is harder to change than local parking regulations, Wikipedia is moving towards a system where evidence of abuse generates antibodies, and those antibodies vary in form and rigidity depending on the nature and site of the threat. By responding to the threats caused by its growth, Wikipedia is moving toward the hierarchy+community model that Carr favored earlier. His current stance — that this change is killing the model of pure openness he loved — is simply crocodile tears.
* Egalitarian, or indifferent to formal organizational identities
* Accepting of many types of data

“Social” means that there’s always a person on at least one end of the wire with Enterprise 2.0 technologies. With wikis, prediction markets, blogs, del.icio.us, and other Web 2.0 technologies with clear enterprise applications, people are doing all the interacting and providing some or all of the content; the IT is just doing housekeeping.

If there is debate, it will be on two fronts: the role of organizational identities (Egalitarian) or an emphasis on technology over social dynamics. McAfee focuses on the second, that of Enterprise 2.0 vs. SOA:
Last week Liz organized the Microsoft Research Social Computing Symposium. I shared some raw notes here, and here is a good gaming summary, but most of the activity was in a private Socialtext wiki. Among other things, Clay and danah held a session on the lingering questions in our field. This should tease out what work is already done or in progress, but I thought the questions might be thought-provoking at the least:
Social Science Questions
* How can we measure the success of different types of online communities, and their survival and productivity against various criteria?
* Coates: which community software is more successful in which environments?
* What are the boundary conditions for mobile and pervasive (social) computing systems?
* To what extent, in what ways, at what rate/time scale will mobile and/or pervasive systems change the way humans interact socially?
* Do natives of social media systems have a different notion of themselves as individuals and about their relation to broader social groups?
* What are the mechanisms that cause people to act, mark up, buy or sell bits they care about online?
* What tips people to try something, what’s enough to bring value?
* Does the “regular public” want to connect with people they do not know? (outside the context of dating)
* What level of visual representation of the body is necessary to trigger mirror neurons?
* Are the online community members of tomorrow going to be more or less participatory than today’s? And why?
* What impact do computer/video games have on the everyday habits and routines of the gamers?
* Is society becoming more or less individualized?
* How can we use the computational ability of our machines to transform communication?
* How can we get access to behavioral (server logs) and attitudinal data (survey data) from large scale worlds?
* What elements of MMOG can be adapted to web applications?
* How can we build virtual worlds/spaces where we can operate parallel servers with slightly variable rulesets?
* … so that we can change one experimental condition and observe the response by the inhabitants?
* What are the barriers to contributing to social group interaction (social bookmarking, wikis)?
* …What are the steps to mitigate the barriers?
* How do we make memories portable?
* How do we use social judgement to surface what your peers care about or are interested in? What the crowd is interested in?
* How can communities support veterans going off topic together and newcomers seeking topical information and connections?
What lingering questions do you have for possible research?
Earlier, i spoke about how the MySpace panic was likely to cause legislation proposals. Today, Congressperson Fitzpatrick proposed legislation to amend the Communications Act of 1934 “to require recipients of universal service support for schools and libraries to protect minors from commercial social networking websites and chat rooms.” This legislation broadly defines social network sites as anything that includes a Profile plus an ability to communicate with strangers. It covers social networking sites, chatrooms, bulletin boards. Obviously, the target is MySpace but most of our industry would be affected. Blogger, Flickr, Odeo, LiveJournal, Xanga, MySpace, Facebook, AIM, Yahoo! Groups, MSN Spaces, YouTube, eBaumsworld, Slashdot. It would affect Wikipedia if there wasn’t a special clause for non-commercial sites. Because many news sites (NYTimes, CNN, the Post) allow people to login and create profiles and comment, it might affect them too.
Because it affects both libraries and schools, it will dramatically increase the digital divide. Poor youth only gain access to these sites through libraries and schools. With this ban, poor youth will have no access to the cultural artifacts of their day. Furthermore, because libraries won’t be able to maintain separate 18+ and minor computers, this legislation will affect everyone who uses libraries, including adults.
This legislation is horrifying and culturally damaging. Please, all of you invested in social technologies, do something to make this stop.
The next step in social technologies is mobile. Duh. Yet, a set of factors has made innovation in this space near impossible. First, carriers want to control everything. They control what goes on a handset, how much you pay for it and who else you can communicate with. Next, you have hella diverse handsets. Even if you can put an application on a phone, there’s no standard. Developers have to make a bazillion different versions of an app. To make matters worse, installing on a phone sucks and most users don’t want to do it. Plus, to make their lives easier, developers often go for Java apps and web apps which are atrociously slow and painful. All around, it’s a terrible experience for innovators, designers and users.
These headaches have a detrimental effect on the development of mobile social software. Successful social technologies require cluster effects. Cluster effects require everyone within a particular social cluster to be able to play. If 20% of your friends can’t play because their phone/carrier won’t let them, the end result is often that NO ONE plays. Of course, there’s a tipping point where people buy a new phone or switch carriers, but that tipping point is hefty and right now, it’s for things like SMS not nuevo apps. Switching carriers is even uglier - it requires a huge drop in price.
Being able to get to basic cluster effects is the baseline for a mobile social app to succeed. This alone won’t make it work, but you need that to even begin. There are lots of other limitations, especially when the MoSoApp depends on geography. Take a look at something like Dodgeball. It was utterly brilliant at SXSW because 1) everyone was able to use it; 2) huge clusters were on it; 3) everyone was geographically proximate. There was a curve of use so that a fraction checked in all of the time, most checked in occasionally and a fraction never checked in. But that’s the ideal distribution for cluster effects. Still, because everyone could use it, it was used.
Over and over, i hear about cool technologies that involve multimedia sharing, GPS applications, graphical interfaces, etc. In theory, as research, these are great. Unfortunately, without clusters, you cannot even test the idea to see if it would make sense to a given population. :(
There are only three phones out there with cluster effects right now: Crackberry, Treo and Sidekick. Even still, the killer app for each of these (email or AIM) connects them not to each other but to a broader network because of non-mobile technology. Plus, each of these clusters has issues when it comes to developing for them. Crackberry appeals to the business world who is on a leash to their boss. Productivity-centric apps could be helpful to this crowd, but it will not be fun and most of these ideas involve privacy destruction. The Treo is centered on the business tech world but most of this population socializes with people who are trying out every new phone on the planet; this group is too finicky and besides, they want everything OPEN. Then there’s the Sidekick - it has penetrated the hearts and minds of urban street youth. Sadly, few designers are really interested in thinking about black urban culture. ::grumble::grumble::
When i heard that the Helio was going to launch with MySpace on board, i got super super excited. Like IM and email, MySpace is a perfect application to bridge web and mobile interactions. Sure, it only would include the communications messages and not really take advantage of the mobile issues with social networks, but it would be a good step, no? The target would inevitably be 16-30, an ideal target for dealing with mobile sociability. I was anxiously awaiting the launch, figuring that if anything could push youth to center around a technology, it would involve MySpace. From MySpace, you could actually start innovating with youth networks, location-based activities, image sharing, etc. Opportunity!
And then they launched. What marketing asshole chose the prices? $85 a month minimum on top of a $275 phone??? Has anyone not noticed that the target youth market is using the free generic phone and a $40 a month plan? You need to lure them away from their T-mobile/Sprint/Verizon plan and entice them to come over. You need to do this en masse, with enthusiasm. You cannot do this for $85 a month on top of a $275 phone. ::sigh:: Opportunity lost.
There are two ways to get mobile social applications going:
1) A population needs to have access to a universal interaction platform which (except for SMS and dialing) means being on the same technology;
2) Carriers/handsets need to standardize and open up to development by outsiders.
The latter is the startup fantasy and i don’t see it happening any time soon (stupid carriers). The former is really hard because it means enticing people over away from their contracts. Plus, it means moving against gadget individuality, which is something that people have really bought into. The only way to do that is for it to be super accessible and super cool. This is unfortunately an oxymoron because cool in gadgets equals expensive which means inaccessible. While the trendsetters will all opt-in, you need the followers to come along too for cluster effects to work.
There is a third option: destroy the carriers. The possibility of WiFi phones (following blanketed WiFi) means that you just have to deal with multiple handset makers but, right now at least, they are better about openness. At least then, you’d just have one development roadblock. Unfortunately, this is probably a long way off because the telcos are in bed with legislators who are being extremely slow about universal WiFi and are all about protecting dying industries.
I hate when innovation is jammed up by bad politics and stupid forms of competition. One of the hugest challenges of convergence culture is that traditional competition doesn’t work. We’re not competing for who can create the coolest toothbrush design anymore. We’re now competing for who can build the biggest roadblocks in convergence. Today, innovation means figuring out how to best undermine the roadblocks without getting into legal trouble. Talk about a buzz kill.
So what should be done? Oh carriers, handset makers, innovators, venture capitalists, legal people… Is the goal to innovate or to control? What should be done to push past these roadblocks? (And for all of you in favor of control, remember that there are other markets besides the US/UK/Japan where innovation will occur and laws will not protect.)
Update: I want to clarify some things around youth purchasing. The youth market is 14-28. The 14-21s get their phones from their parents and are on their plans. The 21-28s get their own plans. The 14-21s are stuck with whatever free phone they get unless they can beg and plead for a cooler phone for their birthday. They also get shit plans, although many have been able to convince their parents to support SMS these days. This segment of the youth population is key because they are hyperactive and this is when they are setting their norms for phone use. The way to get to them is to either make a phone that is so cool that they beg and beg for it for their birthday (and it fits into their parents’ plan) or to make a package so cheap that they can convince their parents to get them a separate plan because it’s economically viable. The 21-28s have more flexibility but they are still strapped for cash and are quite cautious with their plans, but if they’ve gotten used to SMS they don’t give it up. They are also more likely to take the free phone unless they are the trendsetters (because they now have to pay and begging doesn’t work). The exception to this is actually working class teens who tend to buy their own phone starting at 15/16 - they buy cooler phones but still have shit mobile plans. This is why the Sidekick worked so well in this demographic. (Note: these observations and this post are based on what i’ve seen hanging out in youth culture, not any interactions i’ve had with mobile or tech companies or any formal data i’ve collected for my dissertation. In other words, i may be very wrong.)
Since you can’t make Facebook go away, and even if you tried to, you couldn’t, you might as well accept it and deal with it. The fact of the matter is that students need to understand the long view, and they need to understand the importance of the written record. They’ve spent their entire lives online, and they are completely comfortable posting information about themselves online. Now that they’re 18, economic motivations step in, and it is our obligation and duty to protect them. Telling them not to say anything controversial, or forcing them to use privacy settings just won’t cut it - remember, the students who are on the Facebook want to be found and listened to. What they need to understand is the context. They have to understand the need to act now on behalf of the person they’ll be in 4 or 5 or 6 years. Give them that context. Explain to them the value of maintaining a self-image they can be proud of down the road. Work with them on this, not against them - it may be your only chance.
That advice should be going to parents and teachers, as well—not just administrators. Thinking about the “long view” of these media—blogs, wiki editing history, social network site profiles—is a skill that we need to be teaching kids.
I have an article in the spring 2006 issue of Sloan Management Review (SMR) on what I call Enterprise 2.0 — the emerging use of Web 2.0 technologies like blogs and wikis (both perfect examples of network IT) within the Intranet. The article describes why I think this is an important and welcome development, the contents of the Enterprise 2.0 ‘toolkit,’ and the experiences to date of an early adopter. It also offers some guidelines to business leaders interested in building an Enterprise 2.0 infrastructure within their companies.
One question not addressed in the article is: Why is Enterprise 2.0 an appealing reality now?…
He continues, in his blog:
As described in the SMR article, these tools include powerful search, tags (the basis for the folksonomies at del.icio.us and flickr), and automatic RSS signals whenever new content appears. As I type these words I don’t know the best site to serve as the link behind the abbreviation ‘RSS’ in the previous sentence. To find this site, I’m going to type ‘RSS’ into Google and see what pops up (sure enough, the Wikipedia entry for ‘RSS’ was pretty high in Google’s results). I also don’t know the URL of the page I’m using right now to type this blog entry. I do know that it’s on my del.icio.us page, tagged as ‘APMblog,’ so I can find it whenever I want. And I don’t know what work my three collaborators on a research project are doing right now; I just know that when any of them has some results to share or a new draft of the paper they’ll post it on the project’s wiki (which is powered by Socialtext) and I’ll immediately get an RSS notification about it.
These examples are not meant to show that my professional life is perfectly organized (that assertion would be worse than false; it would be fraudulent) or that we’ve addressed all the challenges associated with the growth of the Web. They’re meant instead to illustrate how technologists have done a brilliant job at three tasks: building platforms to let lots of users express themselves, letting the structure of these platforms emerge over time instead of imposing it up front, and helping users deal with the resulting flood of content.
As the SMR article discusses, the important question for business leaders is how to import these three trends from the Internet to the Intranet — how to harness Web 2.0 to create Enterprise 2.0.
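The workflow McAfee describes above leans on RSS notifications whenever new content appears on a wiki. Here is a minimal sketch of that pattern, assuming a generic RSS 2.0 feed; the feed XML below is an invented stand-in, not Socialtext's actual output, and a real deployment would fetch the feed URL on a schedule rather than parse an inline string:

```python
# Minimal sketch of RSS-based change notification: poll a feed, remember
# which entries have been seen, surface only the new ones.
import xml.etree.ElementTree as ET

def new_entries(feed_xml, seen):
    """Return titles of items whose guid is not yet in `seen`, updating `seen`."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid") or item.findtext("link")
        if guid and guid not in seen:
            seen.add(guid)
            fresh.append(item.findtext("title"))
    return fresh

# Hypothetical stand-in for a wiki's change feed.
SAMPLE = """<rss version="2.0"><channel>
  <item><title>New draft posted</title><guid>wiki/42</guid></item>
  <item><title>Results page updated</title><guid>wiki/43</guid></item>
</channel></rss>"""

seen = set()
print(new_entries(SAMPLE, seen))  # both items are new on the first poll
print(new_entries(SAMPLE, seen))  # nothing new on the second poll
```

The same seen-set idea underlies most feed readers: an item's guid (or its link, as a fallback) identifies it across polls, so only genuinely new content triggers a notification.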
McAfee sounds a note of caution along these lines. He notes the possibility that “busy knowledge workers won’t use the new technologies, despite training and prodding,” and points to the fact that “most people who use the Internet today aren’t bloggers, wikipedians or taggers. They don’t help produce the platform - they just use it.” There’s the rub. Managers, professionals and other employees don’t have much spare time, and the ones who have the most valuable business knowledge have the least spare time of all. (They’re the ones already inundated with emails, instant messages, phone calls, and meeting requests.) Will they turn into avid bloggers and taggers and wiki-writers? It’s not impossible, but it’s a long way from a sure bet.
People keep asking me “What went wrong with Friendster? Why is MySpace any different?” Although i’ve danced around this issue in every talk i’ve given, i guess i’ve never addressed the question directly. So i sat down to do so tonite. I meant to write a short blog post, but a full-length essay came out. Rather than make you read this essay in blog form (or via your RSS reader), i partitioned it off to a printable webpage. If you are building social technologies or online communities, please read this. I think it’s really important to understand the history of these sites, how users engaged with them, how the architects engaged with users, and how design decisions had social consequences. Hopefully, my essay can help with this.
I do want to highlight a section towards the end because i think that it’s quite problematic that folks aren’t thinking about the repercussions of the moral panic around MySpace.
If MySpace falters in the next 1-2 years, it will be because of this moral panic. Before all of you competitors get motivated to exacerbate the moral panic, think again. If the moral panic succeeds:
Youth will lose (even more) freedom of speech. How far will the curtailment of the First Amendment go?
All users will lose the safety and opportunities of pseudonymity, particularly around political speech and particularly internationally.
Internet companies will be required to confirm the real life identity of all users. At their own cost.
International growth of social communities will be massively curtailed because it is much harder to confirm the identities of non-US populations.
Internet companies will lose the protections of common carrier which will have ramifications in all sorts of directions.
Internet companies will see a massive increase in subpoenas and will be forced to turn over data on their users which will in turn destroy the trust relationship between companies and users.
There will be a much greater barrier for new communities to form and for startups to build out new social environments.
International companies will be far better positioned to create new social technologies because they won’t have to abide by American laws even if American citizens use their technology (assuming the servers are hosted outside of the US). Unless, of course, we decide to block sites on a nation-wide basis….
This talk was written for designers and business folks working in social tech. I talk about the significance of culture and its role in online communities. I go through some of the successful qualities of Craigslist, Flickr and MySpace to lay out a critical practice: design through embedded observation. I then discuss a few issues that are playing out on tech and social levels.
Jon Turow passed on an open letter to Mark Zuckerberg in the Daily Princetonian. Facebook recently expanded from college to high school, resulting in a clash of uncivilizations:
…If we really wanted to, we could steer clear of the groups by just avoiding the high school profiles. But we can’t ignore it when they post on our walls. And my god, do they post. Unfortunately, they don’t understand that by posting “OMG how are you? I haven’t seen you since our Model UN trip three years ago!” they are undermining the college personas that we have so carefully constructed over the past three years. And when a 16-year-old girl pokes us, we worry that poking back could result in a cyber-statutory rape conviction. Something tells us that when having sex with one of your facebook friends could result in a criminal violation, things have gone too far….
Clay may end up posting something about pattern languages for moderations systems here, but Nat has great notes from his talk at Etech and I couldn’t help but lift this quote:
This is the direction that the conversation around social software is taking. Hobbes would say that Dave had the right and all was good. Rousseau would reply, “no he didn’t, software systems that don’t allow the users to fight back are immoral.”
Social software is the experimental wing of political philosophy, a discipline that doesn’t realize it has an experimental wing. We are literally encoding the principles of freedom of speech and freedom of expression in our tools. We need to have conversations about the explicit goals of what it is that we’re supporting and what we are trying to do, because that conversation matters. Because we have short-term goals and the cliff-face of annoyance comes in quickly when we let users talk to each other. But we also need to get it right in the long term because society needs us to get it right. I think having the language to talk about this is the right place to start.
Then again, Plato argued in the Seventh Letter that only philosophers are fit to rule.
Perhaps the greatest competency Socialtext has gained over the past three years is fostering adoption of social software. Adoption matters most for IT to have value. It should be obvious that if only a third of a company uses a portal, then the value proposition of that portal is two thirds less than its potential. But for social software, value is almost wholly generated by the contributions of the group, and imposed adoption is marked for failure. Suw Charman has been working with Socialtext on site at Dresdner Kleinwort Wasserstein and has spearheaded the creation of the following practice documentation. I believe this will be a critical contribution for enterprise practices, so do read on…
An Adoption Strategy for Social Software in the Enterprise
Experience has shown that simply installing a wiki or blog (referred to collectively as ‘social software’) and making it available to users is not enough to encourage widespread adoption. Instead, active steps need to be taken to both foster use amongst key members of the community and to provide easily accessible support.
There are two ways to go about encouraging adoption of social software: fostering grassroots behaviours which develop organically from the bottom-up; or via top-down instruction. In general, the former is more desirable, as it will become self-sustaining over time - people become convinced of the tools’ usefulness, demonstrate that to colleagues, and help develop usage in an ad hoc, social way in line with their actual needs.
Top-down instruction may seem more appropriate in some environments, but it may not be effective in the long term: if the team leader stops actively making subordinates use the software, they may naturally give up, having never become convinced of its usefulness. Bottom-up adoption taps into social incentives for contribution and fosters a culture of working openly that has greater strategic benefits. Inevitably in a successful deployment, top-down and bottom-up align themselves in what Ross Mayfield calls ‘middlespace’.
I spend too much time in airports and i can’t imagine i’m alone in this crowd. While i often like to get work done, i also like interesting interactions… or at least sane seatmates. Social software should be able to help but there are so many barriers to this. You need to articulate too much and who has time? Still, as broken as they are, i’m interested in exploring the tools that might lead to entertaining interactions or at least to the development of better systems to do so. One of the ones i’m curious about is AirTroductions. Yeah, it kinda has dating overtones to it, but i’m still curious if it’d ever work. At the very least, who else is en route to Etech or SXSW or IASummit when? I have to imagine that lots of folks i know will be passing through the same airports in the next month. Anyone else willing to give it a try just to see?
While MySpace has skyrocketed to success beyond any of the other social technologies on the web, too few folks in the industry talk about it, participate in it or otherwise pay attention to it…. mostly because it’s particularly populated by teens, musicians and other folks who are nowhere near connected to the tech industry. Much of what’s discussed is the culture of fear put forward by the mass media. This is quite unfortunate because there’s a lot of interesting stuff going on there.
At AAAS this week, i had the opportunity to present the first phase of my findings in a talk called Identity Production in a Networked Culture. If you want insight into what teens are doing on MySpace and why, check it out.
The most interesting thing I’ve read on the subject was in Doc Searls post:
I’ve always thought the most important thesis in Cluetrain was not the first, but the seventh: Hyperlinks subvert hierarchies.
What I’ve tried to say, in my posts responding to Tristan’s, Scott’s and others making the same point, is nothing more than what David Weinberger said in those three words.
I thought I was giving subversion advice in the post that so offended Seth. But maybe I was wrong. Maybe being widely perceived as a high brick in the blogosphere’s pyramid gives my words an unavoidable hauteur — even if I’m busy insisting that all the ‘sphere’s pyramids are just dunes moving across wide open spaces.
I’ll just add that, if ya’ll want to subvert some hierarchies, including the one you see me in now, I’d like to help.
The interesting thing to me here is the tension between two facts: a) Doc is smart and b) that line of thinking is unsupportable, even in theory. The thing he wants to do — subvert the hierarchy of the weblog world as reflected in lists ranked by popularity — is simply impossible to do as a participant.
Part of the problem here is language. Hierarchy has multiple definitions; the sort of hierarchy-subverting that networks do well is routing around or upending nested structures, whether org charts or ontologies. This is the Cluetrain idea that hyperlinks subvert hierarchies.
The list of weblogs ranked by popularity is not a hierarchy in that sense, however. It is instead a ranking by status. The difference is critical, since what’s being measured when we measure links or traffic is not structure but judgment. When I’m not the CEO, I’m not the CEO because there’s an org chart, and I’m not at the top of it. There is an actual structure holding the hierarchy in place; if you want to change the hierarchy, you change the structure.
When I’m not the #1 blogger, however, there are no such structural forces making that so. Ranking systems don’t work that way; they are just lists ordered by some measured characteristic. To say you want to subvert that sort of hierarchy makes little sense, because there are only two sorts of attack: you can say that what’s being measured isn’t important (and if it isn’t, why try to subvert it in the first place?), or you can claim that lists are irrelevant (which is tough if the list is measuring something real and valuable.)
Lists are different from org charts. The way to subvert a list is to opt out; were Doc to stop writing, he would cede his place in the rankings to others. At the other extreme, for him to continue to champion the good over the mediocre, as he sees it, sharpens the very hierarchy he wants to subvert. Huis clos.
The basic truth of such ranking systems is unchanged: for you to win, someone else must lose, because rank is a differential. Furthermore, in this particular system, the larger the blogosphere grows, the greater the inequality will be between the most- and median-trafficked weblogs.
All of that is the same as it was in 2003. The power law is always there, any time anyone wants to worry about it. Why the worrying happens in spasms instead of steadily is one of the mysteries of the weblog world.
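The widening gap between top and median can be made concrete with a toy model. This is a sketch under an assumed Zipf-style power law (traffic proportional to 1/rank), an illustration of the dynamic rather than a measurement of actual blog traffic:

```python
# Toy power-law model: traffic falls off as 1/rank**s. With s = 1, the
# top blog's lead over the median blog equals half the population size,
# so inequality widens mechanically as the blogosphere grows.
def traffic(rank, s=1.0):
    """Modeled traffic for a blog at a given popularity rank."""
    return 1.0 / rank ** s

for n in (100, 10_000, 1_000_000):
    median_rank = n // 2
    gap = traffic(1) / traffic(median_rank)
    print(f"{n:>9,} blogs -> top blog draws {gap:,.0f}x the median's traffic")
```

With a steeper exponent (s > 1) the gap grows even faster, but the qualitative point holds for any power law: rank differentials sharpen with scale.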
The only things that are different in 2006 are the rise of groups and of commercial interests. Of the top 10 Technorati-measured blogs (Disclosure: I am an advisor to Technorati), all but one of them are either run by more than one poster, or generate revenue from ads or subscriptions. (The exception is PostSecret, whose revenue comes from book sales, not directly from running the site.) Four of the top five and five of the ten are both group and commercial efforts — BoingBoing, Engadget, Kos, Huffington Post, and Gizmodo.
Groups have wider inputs and outputs than individuals — the staff of BoingBoing or Engadget can review more potential material, from a wider range of possibilities, and post more frequently, than can any individual. Indeed, the only two of those ten blogs operating in the classic “Individual Outlet” mode are at #9 and 10 — Michelle Malkin and Glenn Reynolds, respectively.
And blogs with business models create financial incentives to maximize audience size, both because that increases potential subscriber and advertiser pools, and because a high ranking is attractive to advertisers even outside per-capita calculations of dollars per thousand viewers.
(As an aside, there’s a pair of interesting technical questions here: First, how big is the A-list ad-rate premium over pure per-capita calculations? Second, if such a premium exists, is it simply a left-over bias from broadcast media, or does popularity actually create measurable value over mere audience count for the advertiser? Only someone with access to ad rate cards from a large sample could answer those questions, however.)
Once a power law distribution exists, it can take on a certain amount of homeostasis, the tendency of a system to retain its form even against external pressures. Is the weblog world such a system? Are there people who are as talented or deserving as the current stars, but who are not getting anything like the traffic? Doubtless. Will this problem get worse in the future? Yes.
I still think that analysis is correct. From the perspective of 2003, it’s the future already, and attaining the upper reaches of traffic, for even very committed bloggers, is much harder. That trend will continue. In February of 2009, I expect far more than the Top 10 to be dominated by professional, group efforts. The most popular blogs are no longer quirky or idiosyncratic individual voices; hard work by committed groups beats individuals working in their spare time for generating and keeping an audience.
[Editorial Note: The following letter, which is also being posted on the Terra Nova weblog, is not intended to be seen as an “official stance” of either Terra Nova or Many-to-Many. It is simply an open letter authored by a group of authors and scholars who also have affiliations with one or the other of these weblogs.]
Open Letter to Blizzard Entertainment—Speech Policy for GLBT guilds in World of Warcraft
Ms Andrews was given a warning not to undertake this again. She assumed this was a mistake, but Blizzard confirmed that the sanction and the punishment would stand. An official from Blizzard responded:
“To promote a positive game environment for everyone and help prevent such harassment from taking place as best we can, we prohibit mention of topics related to sensitive real-world subjects in open chat within the game, and we do our best to take action whenever we see such topics being broadcast. This includes openly advertising a guild friendly to players based on a particular political, sexual, or religious preference, to list a few examples. For guilds that wish to use such topics as part of their recruiting efforts, our Guild Recruitment forum, located at our community Web site, serves as one open avenue for doing so.”
As a result of public comments about this issue, Blizzard has reversed its decision and has privately communicated to Ms Andrews that no punishment will stem from this incident. It also has privately indicated that it is reviewing its sexual harassment policy. It has issued no public statement about the issue.
We write this letter as educators, journalists, writers and players interested in the development of virtual worlds like World of Warcraft. We congratulate Blizzard on the courage to rescind its initial decision, and urge it to make a formal announcement that they were wrong to make it. The decision to sanction and punish Ms Andrews was wrong as a narrow matter of interpretation, and as a general principle of policy for WoW and other virtual worlds.
Zephyr Teachout and Britt Blaser, both veterans of the Howard Dean Internet campaign, reflect on how to fix what’s going wrong at the well-intentioned Since Sliced Bread contest. The Service Employees International Union (SEIU) is sponsoring the contest, offering $100,000 to the person who comes up with the best idea for improving the lives of working women and men. 22,000 ideas were submitted which “a group of diverse experts” winnowed to 70, a process some felt was too top-down.
This is a fascinating case in which a bottom-up process is supposed to squeeze out a single winner, the contest is intended to advance the social good, and the reward includes a hefty chunk of change.
…With the caveats that Alexa’s data is not comprehensive—and even if they had perfect stats, “Alexa Rank” is still just one definition of popularity (a combination of reach and pageviews)—here’s the 10 most popular social media sites (with corresponding Alexa 100 rank):
Chris (and Doc) may be on to something about observing the correlation between F500 blogging and stock performance. But at the least, this can serve as a renewable resource for informing social software adoption.
Ted Castronova has a fascinating post up on Terra Nova entitled “The Horde is Evil,” in which he argues that the Horde races on World of Warcraft are “on the whole evil,” and that this has moral implications for avatar choices:
I’ve advanced two controversial positions: that avatar choice is not a neutral thing from the standpoint of personal integrity, and that the Horde, in World of Warcraft, is evil. Nobody agrees, but it’s been suggested that the community could chew on this a bit.
So here’s my view: When a real person chooses an evil avatar, he or she should be conscious of the evil inherent in the role. There are good reasons for playing evil characters - to give others an opportunity to be good, to help tell a story, to explore the nature of evil. But when the avatar is considered an expression of self, in a social environment, then deliberately choosing a wicked character is itself a (modestly) wicked act.
I don’t agree with Castronova (my horde character is a Tauren, a peaceful bison-like creature that lives in a Native American-inspired cultural context), nor do many of the commenters—but the issues he brings up are powerful and interesting, and the lengthy discussion in the comments is well worth reading.
Lately I’ve been thinking a lot about the relationship between “real life” and “game life,” since I have personal and/or professional relationships with most of the people in my World of Warcraft guild, including both of my children. Castronova’s argument, which he bolsters by citing his 3-year-old’s reaction to his undead character, relates directly to those boundary-crossing issues.
When I was playing online on Monday, Joi Ito said that he thought World of Warcraft was becoming the “new golf” for the technology set. I think there’s some truth in that, but it brings with it all kinds of additional social pressures and complexities, of which avatar racial choices are only the beginning. I think there’s some fertile ground for research in that boundary area, the crossover between the real and game worlds, and the extent to which they influence each other.
The Guardian has a story by Mark Honigsbaum about an attempt to identify gay-related items:
Backed by the museums documentation watchdog, MDA, the group Proud Heritage this week began sending out a two-page survey requesting that institutions throughout the country list the gay and lesbian documents and artefacts in their collections. “For the first time ever, we are asking museums, libraries and archives throughout Britain to revisit their holdings and reveal what they have that is queer,” said Proud Heritage’s director Jack Gilbert. “At the moment these are not classified correctly, or held completely out of context and never see the light of day.”
… At the Llangollen Museum in Denbighshire, north Wales, for instance, there is an exhibit commemorating the lives of Eleanor Butler and Sarah Ponsonby. Known locally as the Ladies of Llangollen, they lived together in a small cottage from 1819 until their deaths in 1829 and 1831, and were renowned for wearing dark riding habits, an eccentric choice of dress for the time.
“They would never have used the word lesbian to describe their relationship but there is no question that they lived together and shared the same bed,” said Mr Gilbert. “We think there may well be similar examples in other archives, but because people didn’t use words like lesbian and gay 200 years ago archivists have either overlooked it or simply don’t realise it’s there.”
Great example of why authors/creators/publishers are not the best or final taggers of their own stuff. (Thanks to Phil Edwards for the link.)
The way he described it, you could shift the burden by changing the law so that Internet Service Providers would evaluate the plaintiff’s evidence, and decide themselves whether revealing the customer’s identity might be appropriate. If the decision is yes, at that point the ISP notifies the customer, who is given the opportunity to initiate legal proceedings to enjoin the ISP from revealing his identity.
Given the consolidation of telecom, this would empower a handful of ISPs, as in 5, to be judge and jury for revealing identity. Anonymity is a critical facet of society, and its value is more than whistle-blowing. I wouldn’t call it a right, but would call it a feature of the virtual and real worlds (we don’t walk around with name-tags). Regardless of how you value anonymity, you should agree that this would:
create undue costs for ISPs,
privatize governance and enforcement,
create undue legal costs for consumers, which
could lead to infringements on civil liberties, because
customers would be guilty until proven innocent.
Now, if the ISP or legal action revealed the libelous party it would resolve Seigenthaler’s complaint against Wikipedia.
Beyond this attempt to weaken anonymity on the Net, Wikipedia’s open nature is also under attack. Adam Curry edited podcasting history in his favor. Big deal. It’s a wiki, just edit it if you disagree and let the community’s practice work over time.
Consider regulating against graffiti. You have two options:
Guard every wall in town to prevent the infraction from occurring
Paint over infractions and enforce the law by chasing down perpetrators
The former is not just prohibitively expensive, it kills creativity and culture. The latter is the status quo and generally works, especially where communities flourish.
So what would you have Wikipedia do? Lock down contributions through a fact-checking process with rigid policy? Or let people contribute, leverage revision history, and let the group revert infractions?
Social media is disruptive. The role of regulation significantly impacts how society will manage the transition. Today much of media is regulated through complaints (e.g. indecency). It only takes one horror story for us to lose freedom of anonymous speech. The easiest and most dangerous way to curb social media is to have it conform to mainstream models.
UPDATE: Cnet has a pretty good article on the liability reform sought by Seigenthaler, the first argument I made. Mitch Ratcliffe takes issue with my second argument, about how a wiki works and how best to regulate it. Mitch, you keep trying to fit Wikipedia into your model of how an encyclopedia should be instead of recognizing how it is different. A print version of Wikipedia should have an editorial process bolted on to emergent practice, as it is a comparable product, frozen in time. But instead, the evolving nature of Wikipedia needs to be recognized and celebrated for what it is. Help people understand what it is, not what it is not.
I have long worried that something like this would happen—from the very start of Wikipedia, in fact. Last year I wrote a paper, “Why Collaborative Free Works Should Be Protected by the Law” (here’s another copy). When Seigenthaler interviewed me for his column, I sent him a copy of the paper and he agreed that it was prophetic. It is directly relevant to the part of Seigenthaler’s column that says: “And so we live in a universe of new media with phenomenal opportunities for worldwide communications and research—but populated by volunteer vandals with poison-pen intellects. Congress has enabled them and protects them.” That was a part of Seigenthaler’s column that bothered me: what exactly does Seigenthaler want Congress to do?
If a knowledge worker has the organization’s information in a social context at their finger tips, and the organization is sufficiently connected to tap experts and form groups instantly to resolve exceptions — is there a role for business process as we know it?
“I think, partly because of the personality types who become programmers… I don’t know what it is exactly… a lot of programmers, seem to me to think that the whole point of social software is to replace the social with the software. Which is not really what you want to do, right? Social Software should exist to empower us to be human… to interact… in all the normal ways that humans do.”
A slew of social software startups have arisen as of late, and while we don’t cover the news here, it’s a good time to be a culture critic.
Ning — Social Apps
Ning is the latest entry into the social applications space, aiming to be the mother of all social software. Aiming to be a platform from the get-go is a tough haul: the prize is admirable, but most platforms start as apps first. I’ve never heard someone utter the words “killer platform.” As a result, the applications are relatively shallow, and they are competing against decentralized open source application publishing.
After I used them as an example of stealth as an old-school model, it turned out they are located a block away from my office, and I have met a bunch of great people there. So let me offer this more constructive takeaway. Today Ning fosters transient micro-communities with only pivots to bind them. When the first-class node is an app, as opposed to a profile, group or other object that centers on people, you have to construct an overlay of sorts to enable group-forming across networks. In other words, object-centered sociality is currently isolated, which limits network effects. On the upside, the information architecture does a decent job handling the underlying complexity, their terms of service are well done, and they are leveraging standard languages instead of seeking lock-in.
One sentence suggestion: Focus less on the apps and more on the social.
Flock — Social Browser
Flock is aiming to be the browser that we always wanted. Yes, it’s more of an alpha than a beta, and after you start playing with it you want more. For Innovators, we already do all this stuff with well groomed bookmarklets and personal hacks. For Early Adopters, it’s not quite there yet.
Maybe that’s the point. It’s an open source play that is releasing early and often. If the Innovators build upon it (and from what I understand, like Greasemonkey and RoR, it’s like being a Connecticut Yankee in King Arthur’s Court for developers) it may fulfill the needs of a more active mainstream. Today the blogging client and favorites features are too shallow to move me off of Firefox, bookmarklets and Etco/1001. There are two almost hidden features that demonstrate synergy (cough) between modalities:
Search auto-completes with the breadcrumbs you leave behind. It’s not social search, but it could be a perfect complement to Yahoo (which points both to the Biz Dev challenge that will really enhance the product, since Yahoo is their core revenue stream, and to the potential exits as the browser war heats up).
When you add a favorite, if the page has a feed, you can go back to see what’s new from the source.
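The first feature above - completing searches from your own trail - can be sketched as a simple prefix match over visit history. This is an illustration of the idea, assuming a plain list of visited page titles, not Flock's actual implementation:

```python
# Breadcrumb autocomplete sketch: suggest previously visited page titles
# that start with what the user has typed, most recent visits first.
def autocomplete(prefix, history, limit=5):
    """Return visited titles starting with `prefix`, newest first, deduplicated."""
    p = prefix.lower()
    seen, matches = set(), []
    for title in reversed(history):          # walk from most recent visit
        t = title.lower()
        if t.startswith(p) and t not in seen:
            seen.add(t)
            matches.append(title)
        if len(matches) == limit:
            break
    return matches

# Hypothetical browsing trail.
history = ["Social Software", "Socialtext wiki", "Search engines",
           "Social bookmarking", "Flock browser"]
print(autocomplete("so", history))
```

A real browser would index titles, URLs and search terms rather than scan a list, but the user-visible behaviour - your own trail feeding the search box - is the same.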
Aggregation may be the modality (compared to Browse, Search and Author) that could blossom: it needs better interaction design, there is a lot of demand to bring reading and writing together, and the client gives you offline capabilities. I’m starting to speculate here, but that’s the exciting thing about Flock: it makes you speculate to the point that you want to engage.
One sentence suggestion: Focus on interaction between modalities and services, manage for quality and get busy with Biz Dev (I can’t believe that’s a job title again).
Wink — Social Search
Wink is a nice Social Search play that incorporates user tagging and ranking to provide recommended results and block spam. My favorite feature, of course, is the ability to create a concept around a query that is an unstructured wiki page. If the concept exists as a pagename within Wikipedia, it populates it with that page and offers related concepts based upon the content. I’m not sure that Wikipedia eats Google, but there is higher-quality metadata available and a great way to augment the user experience. Wink is a small startup with a lot of promise, but it has the inherent challenges of a vertical search play (how to attract users, whether Google ad revenue is enough, and the fact that the portals are not acquiring).
One sentence suggestion: Bake into blogspace.
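Wink's concept-seeding behaviour described above can be sketched roughly as follows. The pagename set, the capitalization rule, and the related-concept heuristic are all assumptions for illustration, not Wink's or Wikipedia's actual mechanics:

```python
# Sketch: seed a query's concept page from a matching Wikipedia pagename.
WIKIPEDIA = {  # hypothetical pagename -> article text
    "Social software": "Tools such as wikis and weblogs support group interaction.",
    "Folksonomy": "A folksonomy emerges from user tagging, as on del.icio.us.",
}

def seed_concept(query):
    """Return a starter concept page if the query matches a pagename, else None."""
    title = query.strip().capitalize()       # naive pagename normalization
    text = WIKIPEDIA.get(title)
    if text is None:
        return None
    # Naive related-concept guess: other pagenames mentioned in the text.
    related = [t for t in WIKIPEDIA if t != title and t.lower() in text.lower()]
    return {"title": title, "text": text, "related": related}

print(seed_concept("social software"))  # seeded from the matching pagename
print(seed_concept("ajax"))             # no match, so the page starts empty
```

The real value, as the post notes, is that the pagename match gives the search engine higher-quality metadata for free; everything past the lookup is where a product would differentiate.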
Memeorandum — Social Aggregator
Okay, this one may not be social yet. But Memeorandum is starting to solve a problem for me: where to go for a dashboard view of blogs and MSM with the ability to drill down into conversations. I’m not sure that it has the accuracy yet that Google News does for the top two stories, but this is an invaluable dimension to get me out of my subscribed echo chamber.
One sentence suggestion: Let me filter using my social network, even if it’s uploading my subscriptions.
Sphere — Blog Search
I’d agree with John Battelle that Sphere offers a good incremental improvement over existing blog search engines, but others have already extended to advanced tagging and feed features that make it more useful for bloggers. It is relatively spam free and speedy, but we will have to see how it scales.
One sentence suggestion: Differentiate beyond core search for blog reader utility.
Various folks have been asking me about my Friendster publications and i thought i’d do a simple round-up for anyone who is trying to learn about Friendster. Below are directly relevant papers and their abstracts (or a brief excerpt); full citations can be found on my papers page. Please feel free to email me if you have any questions.
“None of this is Real: Networked Participation in Friendster” by danah boyd - currently in review (email for a copy), ethnographic analysis of Friendster, Fakesters, and digital social play
Excerpt from introduction: Using ethnographic and observational data, this paper analyzes the emergence of Friendster, looking at the structural aspects that affected participation in early adopter populations. How did Friendster become a topic of conversation amongst disparate communities? What form does participation take and how does it evolve as people join? How do people negotiate awkward social situations and collapsed social contexts? What is the role of play in the development of norms? How do people recalibrate social structure? By incorporating social networks in a community site, Friendster introduces a mechanism for juxtaposing global and proximate social contexts. It is this juxtaposition that is at the root of many new forms of social software, from social bookmarking services like del.icio.us to photo sharing services like Flickr. Capturing proximate social contexts and pre-existing social networks are core to the development of these new technologies. Friendster is not an answer to the network question, but an experiment in capture and exposure of proximate relations in a global Internet environment. While Friendster is not nearly as popular now as in its heyday, the lessons learned through people’s exploration of it are increasingly critical to the development of new social technologies. As a case study, this paper seeks to reveal those lessons in a manner useful to future development.
Abstract: Profiles have become a common mechanism for presenting one’s identity online. With the popularity of online social networking services such as Friendster.com, Profiles have been extended to include explicitly social information such as articulated “Friend” relationships and Testimonials. With such Profiles, users do not just depict themselves, but help shape the representation of others on the system. In this paper, we will discuss how the performance of social identity and relationships shifted the Profile from being a static representation of self to a communicative body in conversation with the other represented bodies. We draw on data gathered through ethnography and reaffirmed through data collection and visualization to analyze the communicative aspects of Profiles within the Friendster service. We focus on the role of Profiles in context creation and interpretation, negotiating unknown audiences, and initiating conversations. Additionally, we explore the shift from conversation to static representation, as active Profiles fossilize into recorded traces.
Vizster: Visualizing Online Social Networks by Jeffrey Heer and danah boyd - a 2005 InfoVis paper about visualizing Friendster data (including arguments about using visualization in ethnography and recognizing the value of play in visualization)
Recent years have witnessed the dramatic popularity of online social networking services, in which millions of members publicly articulate mutual “friendship” relations. Guided by ethnographic research of these online communities, we have designed and implemented a visualization system for playful end-user exploration and navigation of large-scale online social networks. Our design builds upon familiar node-link network layouts to contribute techniques for exploring connectivity in large graph structures, supporting visual search and analysis, and automatically identifying and visualizing community structures. Both public installation and controlled studies of the system provide evidence of the system’s usability, capacity for facilitating discovery, and potential for fun and engaged social activity.
Abstract: Participants in social network sites create self-descriptive profiles that include their links to other members, creating a visible network of connections — the ostensible purpose of these sites is to use this network to make friends, dates, and business connections. In this paper we explore the social implications of the public display of one’s social network. Why do people display their social connections in everyday life, and why do they do so in these networking sites? What do people learn about another’s identity through the signal of network display? How does this display facilitate connections, and how does it change the costs and benefits of making and brokering such connections compared to traditional means? The paper includes several design recommendations for future networking sites.
Abstract: This paper presents ethnographic fieldwork on Friendster, an online dating site utilizing social networks to encourage friend-of-friend connections. I discuss how Friendster applies social theory, how users react to the site, and the tensions that emerge between creator and users when the latter fails to conform to the expectations of the former. By offering this ethnographic piece as an example, I suggest how the HCI community should consider the co-evolution of the social community and the underlying technology.
Social verbs in online gaming are gestures that do not change the meaning of an object. When someone’s WoW Mage waves to your Paladin, you choose how the object’s meaning will change because of the gesture. Language is power, just as an emoticon can get you out of trouble for telling a borderline joke.
I’m paying particular attention to verbs these days as they seem to have greater meaning than nouns, especially places (which are non-persistent; persistence is vested in objects that take actions). The reason I keep coming back to my WoW research (cough) isn’t because of the virtual world, but what I do with a group.
Beyond this gesture, the extended entry riffs on attention management, pull vs. push, marketing strategy and ownership of identity.
When [Gloria] Mark [from UCI] crunched the data, a picture of 21st-century office work emerged that was, she says, “far worse than I could ever have imagined.” Each employee spent only 11 minutes on any given project before being interrupted and whisked off to do something else. What’s more, each 11-minute project was itself fragmented into even shorter three-minute tasks, like answering e-mail messages, reading a Web page or working on a spreadsheet. And each time a worker was distracted from a task, it would take, on average, 25 minutes to return to that task. To perform an office job today, it seems, your attention must skip like a stone across water all day long, touching down only periodically.
Yet while interruptions are annoying, Mark’s study also revealed their flip side: they are often crucial to office work…
Focusing on the cost of interruption is one of the better design principles, not just for productivity applications, but all those social software apps clamoring for attention. The answer is not automation, but using the social network as a filter and pushing things down to asynchronous modalities.
My 11 minutes are almost up. Really, it’s a great read, and for now I’ll point you towards Jon Udell…
Cast aside the anti-hype rhetoric, keep in mind it is an argument not of fact or policy but of value, and you will find Nicholas Carr’s post on the amorality of Web 2.0 has a salient point — that social software is on an inevitable march of disruption. Commoditization wrought by commons-based peer production does enable the triumph of the amateur over the professional. But this does not portend the destruction of mainstream media, only its reformation.
Yes, the economics favor the bottom-up. This allows the creation of an alternative we have never had before. A choice. But media selection theory holds that old media simply doesn’t die. Carr’s very desire to retain professional media as his selection is one consumer’s proof point.
The underlying economics of MSM must change, and it will, through creative destruction and, unfortunately, the loss of many jobs in the transition period. Think of social media as a fork in social software, or a third-party movement in politics. Unfulfilled demand is self-fulfilled by a new grassroots constituency. New and previously unrepresented constituencies are forming fast as the cost of personal publishing and group forming trends towards zero. But the mainstream gradually co-opts these experiments and movements as its own to stay in power. Today MSM is experimenting with social media in areas where the cost structure previously prevented it from accessing the market, such as hyperlocal media. To say that mainstream media will not leverage the tools and co-opt the culture of the amateur smacks of technological determinism.
But this is an argument about values, so it’s important to highlight what values need to diffuse from professional to amateur. Dan Gillmor’s mission to pass on ethical standards from journalists to citizen media is a case in point. The former audience is about to go through media training on a massive scale, all in all a good thing, but there is much we can do to pass on practices.
Carr provides a healthy contrarian perspective for the blogosphere. Perhaps by claiming amorality he makes us think, and is advancing our values.
Now, there’s a way around this “collective mediocrity” trap. You can abandon democracy and impose centralized control over the output. That’s one of the things that separates open-source software projects from wikis; they incorporate a rigorous quality-control filter to weed out the crap before it pollutes the product. If Wikipedia wants to achieve its goal of being “authoritative,” I think it will have to abandon its current structure, admit that “collective intelligence” makes a pretty buzzphrase but a poor organizational model, and define and impose some kind of hierarchical power structure. But that, of course, would raise a whole other dilemma: Is a wiki still a wiki if it isn’t a pure democracy? Can some wikipedians be more equal than others?
Open source software and Wikipedia are both driven by commons-based peer production. How they differ, and the reason software development requires rigorous quality-control, is that code has dependencies. Writing code is vertical information assembly, while contributions to a wiki are horizontal information assembly. Wikipedia does have quality control and an organizational model, but it isn’t a feature embodied in code, it is embodied in the group. I know of no goal of being authoritative, but the group voice that emerges on a page with enough edits (not time) represents a social authority that provides choice for the media literate. Carr could create a Wikipedia page to help define what “pure democracy” is to help him answer his rhetorical question — but a wiki is just a tool, and Wikipedia is an exceptional community using it.
Keep in mind that most wiki use is behind the firewall where there is an organizational hierarchy and norms in place. There it taps into similar economics, without the great debates on social truth, and for the competitive advantage of firms.
Back to values, when you tap into the renewable resource of people in mass collaboration, allocated against the scarcity of time, driven by social signals — is this not of greater benefit for social and economic welfare than the disruption that created mainstream media in the first place? I’m glad we agree with Carr on the facts of the disruption. If we can get past the misunderstanding that there is a value difference, we could maybe focus on the right policies that will help us in years to come.
In the grand tradition of bar camp, web 2.01, and other creative, self-organizing tech events comes Seattle’s first Mind Camp. It will be held from noon on Saturday, November 5th through noon the following day.
Take a look at the sidebar to see the people already committed to being there—Chris Pirillo & Ponzi Indharasophang, Julie & Ted Leung, Beth Goza & Phil Torrone, Nancy White, Shelly Farnham…
(did you notice all the cool women on that list? w00t!)
Registration is open (and free), but the event is capped at 150—so act fast if you’re planning to attend.
I’ve been planning to post an announcement here about an upcoming event in Seattle, but kept forgetting. (Well, that, and I tend to be reluctant to self-promote, but the organizers kept asking…) As a result, this is rather short notice.
I don’t need to explain wiki to this audience. It’s so tiny it doesn’t need explanation, but you don’t understand it until you have been there and done that. That it’s you and the community that participates that makes it real gives me perhaps too much credit. My hope is that wiki becomes a totem for a way of interacting with people. Tradition in the work world has been more top down, while wiki, standing for the Internet, is becoming a model for a new way of work. Largely driven by reduced communication costs, it changes what needs to be done and how it’s going to get done. I hope that the wiki nature, if not the wiki code, makes some contribution.
A wiki is a work sustained by a community. Often asked about the difference between wiki and blog. The blogosphere is the magic that happens above blogs — the blogosphere is a community that might produce a work, whereas a wiki is a work that might produce a community. It’s all just people communicating.
One’s words are a gift to the community. For the wiki nature to take hold, you have to let go of your words. You have to be okay with that. This goes into the name, called refactoring. To collaborate on a work, one must trust. The reason the cooperation happens is we are people and it is deep in our nature to do things together. Important to make a distinction. Cooperation has a transactional nature, we agree it is a mutual good. Collaboration is deeper, we don’t know what the transaction is, or if there is one, but if I give of myself to this collaboration, some good will come out of it. You have to trust somebody to collaborate. With wiki, you have to trust people more than you have any reason to trust them. In 1995, it was a safer environment, don’t know if I could have launched wiki today.
Refactoring makes the work supple. Word borrowed from mathematics: not going to change the meaning of the work, but change it so I can understand it better. Continuous refactoring. Putting a new feature into a program is important, but refactoring so new features can be added in the future is equally important. The ability to do things in the future is something that I consider suppleness, like clay in your hands that accepts your expression. Programs and documents get brittle very quickly. Wiki imagines a more dynamic environment where we accept change, with the aid of a computer to make that less dramatic, and embraces hypertext, which lets a document start small and grow while always being the right size. When there are two ideas in the page, split them into different pages with new names, so a third page can reference both. This is built into the web in some sense, it’s just exploited in a wiki. Phenomenal that so much has been done in a tiny text interface, writing an encyclopedia. I have to apologize as a computer scientist that we have to go through that, but it also says how strong the desire is for people to work together. I look forward to the day when we don’t have to do it just this way.
I was in favor of anonymity when I started this. Anonymity relieves refactoring friction. Have learned that people want to sign things. But try to write in a way where you don’t have to know who said it. But when someone who is not in a giving mood uses anonymity (spammers), that abuse can drive us away from anonymity. But I hope we can drive the ill-intended out without having to give up the openness. Can one trust the anonymous? If you think of trust as believing people will behave in the way they did before, it seems dependent upon identity, but it may not be important to know if online behavior is consistent with offline behavior. But knowing what is going to happen when you give something away is significant.
The web has been an experiment in anonymity. Conscious design of low level protocols. Lots of identity infrastructure has been created to make it an online shopping mall, which makes it unpleasant for all of us because the machinery isn’t that great.
Result: people can and do trust works produced by people they don’t know. The real world is still trying to figure out how Wikipedia works. A fantastic resource. Open source is produced by people that you can’t track down, but you can trust it in very deep ways. People can trust works by people they don’t know in this low communication cost environment.
Result: the clubby days of the friendly internet are over. Lots of technical questions about how to sustain something we have experienced in a more complicated environment.
Opportunity: reputation systems for the creative (non-transactional). Reputation systems are an umbrella term for where the computer keeps track of who you are and tries to make that visible in controlled ways to other people. eBay is an outstanding example, creating a space that didn’t exist before. Again, going back to collaboration vs. cooperation. Doing this well depends upon excellent collaboration between the scientific community and the practitioners. Hopes this symposium becomes the center of this exchange.
Opportunity: organizational forms supporting creative work. The form we have today is a legacy from GM. Corporations aggregate and deploy capital to make things happen. Necessary back when communication was more expensive in this country. Top down hierarchies make communication work when it is expensive, I hope that wiki can be a flagship in this move in the industry to produce computer support for this kind of work and evolve organizational forms.
Eugene Kim asks about the conflict between anonymity and reputation. He calls it an opportunity because it isn’t reconciled. The first thing we think of with reputation will be wrong and has adverse impacts. Do it by watching the impact it has on people in the area of creativity. Doesn’t have to be complicated, but careful with what it reveals. If you walk in…
Richard Gabriel: reputation can be attached to an individual or to something, such as words. The reputation attached to the words can enable anonymity. Ward says: great idea — take notes.
On moderating change in the original wiki over the past year, and the tools he created for it (the following is probably only of interest to wiki moderators)…
Wrestling with the same issue, I’ve found it’s difficult to decide what to contribute here, because topics are being commercially exhausted. We went through a period where new companies and products were passed on as news, in between well-thought-out posts. The job of covering social software news started being done by others elsewhere. As we engaged deeper in our own kind of ventures, this effort was well appreciated. We also found less that was really new to report. The bar was set pretty high for the well-thought-out pieces, almost introducing a formality for contribution that in busy times couldn’t be met.
But with the whole Web 2.0 thing, it may be more important than ever.
What was unique about social software and its design principles was how it didn’t emphasize tools, but practice and an understanding of social context. Too much of Web 2.0 is not just made of white people, but an alphabet soup of supporting technologies that mean nothing without communities, networks and even real business models. As the market we helped found continues to froth, commentary on new business models based on power laws matters even more.
But the real reason I haven’t been contributing as much as I used to is because we forbade MMOGs in the topic, and I’ve been playing too much World of Warcraft.
So, when this blog started, it was intended to capture various aspects of social software. The hype has kinda gotten taken over by Web2.0. But what is the relationship between Web2.0 and social software? And what about Many-To-Many?
Over on my personal blog, i’ve written two long posts on Web2.0 that i think are pretty interesting for those invested in social software:
It’s pretty clear that social software has become essential to Web2.0 - social networks, communication, identity production, etc. But how do we discuss social software as something separate from all that? Have we gotten to the point where that concept has escaped us? I look at my co-bloggers here and we’re all still doing our thing but yet, are we all still talking about social software? We’re certainly doing a terrible job at blogging, or at least here. There’s something funny about group blogging around a topic. What about when things change?
The thing about a personal blog is that it changes with you because you don’t feel so compelled to stick with a topic (much to the chagrin of some readers). I know it sounds like a broken record, but i’m still always at a loss over when to cross-post to M2M. Consider this pair of recent posts:
These are certainly at the center of Web2.0 and at the center of culture and sociability. But is it about social software? Quite a few folks have asked me to repost these here, but i think it’s weird that i don’t think of it as the core to social software.
Herein lies the problem with all of this… Our lives have started to escape categories. And topical blogs are categories. Hmmm…
When a bank replaces their Intranet with a wiki, something wonderful is bound to happen. We’ve been working with Suw Charman to document it and the first version of the case study is in. It’s a great account of the adoption pattern, user experience and mass collaboration.
Dresdner Kleinwort Wasserstein has adopted Socialtext at a depth and scope well beyond what most businesses have attempted. The following case study points to the near-future of simple collaboration in the enterprise.
One thing that didn’t make it into the case study in time is a practice I’m considering myself. The manager of an equity trading group has created an email filter that auto-replies to any team member with instructions to put their message on the wiki. I’ve had managers tell their team they will only read what is in the wiki before, but this is truly grabbing the bull by the horns.
About all I can offer is that Web 2.0 is made of people, while keeping this blog clean of commercialization.
But let me share two neat wiki communities with you. Om Malik just put up the Broadband Wiki: We are building a “broadband profile” of the planet. What I would like to do is find contributors who are kind enough to write 250 words about the broadband situation in their country. In the spirit of Loic’s European Blogosphere, the data is coming in fast and furious.
Also check out the Startup Exchange, a renewable resource for those working with fewer resources. It’s chock-full of links to resources and includes a Startup Kit of wiki templates and best practices. Given the number of Web 2.0 products out there without businesses, it might be a good place to start — over.
This is a first for me, but I expect it will eventually become common. I received an email with the following addition to the signature block:
this email is: [ ] bloggable [x] ask first [ ] private
Now that’s a social hack that could one day be replaced by a technical hack. Email messages could have “bloggable” as a mime-type for example, and forwarding to a blog client would set up an entry. Lacking that mime-type, you’d have to resort to cut and paste, as now…
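A minimal sketch of what the technical hack might look like, using Python's stdlib email module. In practice this would more likely be a custom header than a MIME type; the header name X-Blog-Policy and the message text are invented for illustration:

```python
from email import message_from_string

# A hypothetical header carrying the sender's blogging policy.
raw = """\
From: alice@example.com
Subject: Conference notes
X-Blog-Policy: ask-first

Here are my notes from the session...
"""

msg = message_from_string(raw)
# Default to the safest option when the header is absent.
policy = msg.get("X-Blog-Policy", "private")
print(policy)  # ask-first
```

A blog client receiving a forwarded message could check this value and only draft an entry when the policy is "bloggable", falling back to cut and paste (or asking first) otherwise.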
I post this here not for sake of memetic vanity, but to make a point. The reason we are building Web 2.0 is because we were not able to build Email 2.0. The first web didn’t support our social needs, so we used email for everything. But we couldn’t really hack it. Most social software has by now adapted to email, but email could never have adapted to it.
Timothy Spalding has put together a really interesting site, called LibraryThing, that lets you list your books, tag them, and share the list with others. You can search by bibliographic info, user or tags. And Tim does some useful listing of the top 25 books by author, tags, etc.
One of the cool things: You enter a book into your list by typing in sloppy information. For example, if you want to enter The Social Construction of What? by Ian Hacking, you can type in “social construction hacking” and LibraryThing will search the Library of Congress and Amazon. Sure enough, it finds the right one. Click and all the bibliographic info, plus the cover graphic, are added to your list.
It’s basically free, although to add more than 200 books to your list, Tim asks for a one-time fee of $10, which seems pretty reasonable to me…especially once Tim adds RSS feeds so we can subscribe to a tag, reader, etc., and discover the new books others are reading.
Siderean has always allowed their customers to embed hierarchical trees within their faceted classification system (example here) when appropriate. E.g., if someone is navigating via the geography category, the system can know that SoHo is in NYC which is in NY state which is in the US. And Siderean has shown an early curiosity about tags: Its fac.etio.us thought-experiment/demo turns del.icio.us bookmarks into a faceted system.
I got briefed by the company a couple of days ago and learned that future releases of their navigation software are going to incorporate tagging more directly, enabling users to annotate/tag the data they find. A faceted system might add a right amount of organization to a pile of tags, making that pile far more useful. Imagine a folksonomic faceted system…
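To make the idea concrete, here is a toy sketch of a faceted system with an embedded hierarchy, like the SoHo-in-NYC-in-NY-in-US example above. The data, place names, and function names are all invented for illustration, not Siderean's actual model:

```python
# Hypothetical facet hierarchy: child place -> parent place.
geo_parent = {"SoHo": "NYC", "NYC": "NY", "NY": "US", "US": None}

# Hypothetical items, each with a structured facet and free-form tags.
items = [
    {"name": "Cafe A", "geo": "SoHo", "tags": {"coffee", "wifi"}},
    {"name": "Diner B", "geo": "NYC", "tags": {"breakfast"}},
    {"name": "Motel C", "geo": "NV", "tags": {"wifi"}},
]

def within(place, region):
    """True if `place` falls under `region` in the hierarchy."""
    while place is not None:
        if place == region:
            return True
        place = geo_parent.get(place)
    return False

def browse(region, tag=None):
    """Faceted navigation: narrow by region, optionally by a user tag."""
    return [i["name"] for i in items
            if within(i["geo"], region) and (tag is None or tag in i["tags"])]

print(browse("NY"))          # ['Cafe A', 'Diner B']
print(browse("US", "wifi"))  # ['Cafe A']
```

The point of the sketch: the facet supplies the "right amount of organization" (SoHo items show up under NY), while the tags stay a flat, user-contributed pile that the facets make navigable.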
“I don’t read anymore; I just talk to people who have.” — Dr. Tom Malloy, University of Utah
Dr. Malloy’s tongue-in-cheek comment sparked an interesting conversation about… well… conversation. When two people have a conversation, they act as proxies for the many ideas in their heads which are drawn from the many things they have read. In effect, a conversation is a many-to-many interaction that is both mediated and moderated by the participants. The individuals catalog, sort, tag, and filter ideas as they are drawn into the shared space of the conversation.
The upshot of this is that the memes, or actual ideas, gain a tremendous advantage in establishing new connections when conversations happen. Similar to Dawkins’s principle of the “selfish gene,” these “selfish memes” promote their longevity every time humans converse. For memes, the conversation is like sex, an opportunity to mingle, merge, and generate offspring that will outlast them.
Moreover, the use of the Internet, cell phones, and social software has greatly increased the number of conversations happening at any given moment via chat, newsgroups, discussion forums, and even comment-savvy blogs. Without a doubt, the potential for survival of various memes has skyrocketed as these channels have emerged.
But the great thing about all this is that conversation gives us an incredible way of processing the world as we move into an age of relentless and omnipresent information. Rather than setting up a really clever RSS reader using technology, just go talk to someone who reads blogs. Rather than spend hours organizing bookmarks, just ask around for what’s useful when you need it.
I discovered a while back that I could get what I need faster by asking someone else than by looking for it myself — precisely because of the time it takes to process the glut of information now available on any given topic (just hit google sometime and you’ll see what I mean)!
So, the real value of communicative technologies like social software is that they re-enable and enhance our ability to use a time-tested means of information processing, i.e. the conversation, in new and interesting ways!
Now stop reading this and go have a conversation with someone. :-)
Patient Opinion is all about enabling patients to share their experiences of health care, and by doing so help other patients — and perhaps even change the NHS. As well as allowing everyone to see what patients are saying about their services, it also offers a way to feed the experience of patients back to the NHS so that their insights and ideas can be put to good use.
They leverage structured calls on a new NHS web service for data about health service providers, then let people tag and blog about their experience with them. What a wonderful feedback loop.
I just wrote a rather lengthy essay on glocalization and Web2.0 that discusses the socio-technical aspects of Web2.0. Most M2M readers are interested in social software; this essay is important if you are interested in understanding how social software is being taken to the next level, building a broader paradigm. I argue that the key to Web2.0 is not technology but a process of designing with glocalization in mind.
Because of its length, i have not copied it to M2M.
Each year, O’Reilly hosts the Emerging Technology Conference where geeks gather to discuss the latest innovations in technology. Although a lot of folks don’t realize it, they have an open call for proposals where people can suggest talks and topics that will provide new insights for the tech geek community.
Conferences are typically word-of-mouth events where people attend because their friends are attending. I would really like to attend E-Tech this year but i really want to be blown away by talks and topics that are not part of the echo chamber. Thus, i have a request for you dear reader. Think about the people that you know and the people that they know. In the comments, suggest people and/or topics that you don’t think will be addressed at E-Tech, things that i don’t know about. Bonus points for the inclusion of innovations that are occurring outside of the US/UK. Also, pass on the CFP to people who you think might not know about it. Please help expand the diversity of this conference by including diverse topics and people. And please, if you’re working on something that fits into emerging technologies, consider submitting a proposal, especially if your voice is not typically heard at the various O’Reilly conferences. The broader the network of people, the more enjoyable the conference.
I'm completely stoked to share the news that longtime M2M contributor Seb Paquet has joined Socialtext. I've wanted to bring him on board since we started the company and was pleasantly surprised to find us at the top of the list he put out when he announced on his blog that he was looking for something new.
Let me use this as an excuse to reintroduce you to Seb. Prior to coming on board, Seb was an Associate Research Officer at the National Research Council of Canada, where he worked on innovative uses of social software, in particular in collaborative learning and knowledge management. Over the past several years, Seb has been contributing insightful articles and talks about those topics in English and French and has been running blogs in both languages. He will help us reach out to new customers and pitch into enhancing the experience and value of our software.
Yet another great person hired by blog. Welcome aboard, and see you at Wiki Wednesday, Seb!
You can think of RawSugar as a searchable del.icio.us with automagic, hierarchical clustering. (Users can also manually create hierarchical tag sets.) So, instead of seeing a long list of links on the left and a long list of tags on the right, at RawSugar you see a list of links on the bottom and your top-level tag categories on the top. The higher level tags are automatically propagated to the lower level ones. So far there is no way for users to publish their tag sets so others can use them.
I spoke briefly with founder Ofer Ben-Schachar who told me only that the auto-hierarchy infers relationships among multiple tags an individual gives to a single object and among multiple tags multiple people give to the same object. He says the company has 5 patents.
The site is new and only has a few thousand users and about 15,000 links. It looks very usable. Now we’ll just have to see if it reaches the critical masses…
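Out of curiosity, here is one plausible way such automatic hierarchy inference could work, sketched in Python: treat tag B as a parent of tag A when most objects tagged A also carry B, and B is the broader tag. The data, threshold, and function name are all invented for illustration; RawSugar's patented method is not public.

```python
from collections import defaultdict

# Hypothetical bookmark data: URL -> set of tags applied to it.
bookmarks = {
    "http://example.com/wordpress":   {"blog", "software", "publishing"},
    "http://example.com/movabletype": {"blog", "software"},
    "http://example.com/technorati":  {"blog", "search"},
    "http://example.com/emacs":       {"software"},
}

def infer_hierarchy(bookmarks, threshold=0.8):
    """Return (child, parent) pairs via tag co-occurrence subsumption."""
    objects_with = defaultdict(set)
    for url, tags in bookmarks.items():
        for tag in tags:
            objects_with[tag].add(url)

    pairs = []
    for a in objects_with:
        for b in objects_with:
            if a == b:
                continue
            overlap = len(objects_with[a] & objects_with[b])
            # b is a plausible parent of a if most a-tagged objects also
            # carry b, and b covers strictly more objects than a.
            if (overlap / len(objects_with[a]) >= threshold
                    and len(objects_with[b]) > len(objects_with[a])):
                pairs.append((a, b))
    return pairs
```

On this toy data, "publishing" and "search" surface as children of "blog", while "blog" and "software" stay siblings because their overlap is too small.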
I’ve been doing a terrible job at posting to M2M because i’m never quite sure what fraction of my posts belong here and what tone is appropriate. I’ve been actively posting to my personal blog apophenia and looking back, i realize that some of what i’ve written this month might be interesting to M2M readers. So here’s a listing round-up:
If you, dear reader, have an opinion on what you think is appropriate for M2M, i’d love to hear it in the comments because i’m definitely struggling with it. My personal blog gives me freedom to post whatever, but i don’t want to abandon M2M since i know many of you appreciate what we post here.
This weekend we put something cool out into the world. Wikiwyg is a what-you-see-is-what-you-get editor for wikis, or pretty much any other text area on the web. It's open source licensed, available for download and demo. Jeff Jarvis said Wikiwyg is "the way wikis are supposed to be."
Our hope is that this makes the two-way web usable. You can see the genius of Socialtext lead developer Brian Ingerson in something that is almost a bug, but might be a feature: double click anywhere to edit. You will notice it snaps into edit mode instantly, because the editor is already loaded with the page -- reducing, but keeping, the distinction between display and edit mode. You can toggle between wysiwyg and wiki text (more efficient once you know it). Sexy Ajax pixie dust lets you edit without touching the server until you are ready to save. Always remember that Wiki Wiki is Very Quick in Hawaiian.
One of the benefits of being based on open source is not only that we can share, but innovate openly. We still have some work to do (IE support, ugh) until it's ready for Socialtext production and would appreciate feedback and participation.
Feedster launched the Feedster Top 500, setting a new standard for length -- the first salvo in the size-matters war of microcontent. Go here and bitch about how M2M isn't on the list but my crappy blog is, or, if you have to, contribute something constructive.
Kidding, but they should be commended for providing an inclusive process for an otherwise exclusive outcome, by both opening the algorithm and being open for feedback on a wiki page. An index is a reflection of a community, and the more inclusive and open the process for its creation, the more we trust it and grant it authority.
Mary Hodder's latest activist wiki, topicindex, is a Community Algorithm project to open the engine of attention. Given the importance of rankism, it's worth paying attention to. My hope is this does more than shift the debate from ranks to clouds, but gives us the tools to seed our own.
Pito Salas blogs about a new beta feature of his open source BlogBridge aggregator: A small histogram shows each feed’s frequency of posts.
Is this useful information? I think so. If I see one of the feeds has been very active, I may be driven to catch up. Of course, there are many feeds I value where the posts are few, and I would worry about a widget that drives people merely to the frequently-updated blogs. On the one hand, this is an aggregator of feeds I’ve chosen, so I already know that I’m going to read, say, Jay Rosen’s feed even if he’s not posting eight times a day. On the other hand, BlogBridge prides itself on its ability to help users discover new feeds, and there the frequency chart may slightly skew people towards the more frenetic blogs.
Overall, it looks like a useful meter. I hope Pito lets us turn it off if we want, but I’ll probably leave it on. (Disclosure: I’m an unpaid advisor to BlogBridge.)
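For the curious, the meter Pito describes can be approximated in a few lines: bucket each feed's post timestamps into trailing days and count. This is a rough sketch of the idea only (BlogBridge itself is written in Java, and its actual code surely differs); the function name and sample dates are mine.

```python
from datetime import date

def activity_histogram(post_dates, days=7, today=None):
    """Count posts per day over the trailing `days` days, oldest bucket first."""
    today = today or date.today()
    buckets = [0] * days
    for d in post_dates:
        age = (today - d).days
        if 0 <= age < days:           # ignore posts outside the window
            buckets[days - 1 - age] += 1
    return buckets
```

Feeding the buckets to any sparkline or bar widget then gives exactly the kind of per-feed frequency chart shown in the screenshot.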
I'm sitting in Jimmy Wales' talk at OSAF, as though I am his roadie these days, and am reminded about anonymity in Wikipedia. Anonymity is not something commonly valued in the blog world, where blogging is largely a strong expression of identity, but it seems to be an essential attribute within the Wikipedia community. Maybe it's just the difference between people working together vs. having conversations. Perhaps it's the initial user experience of being able to edit without logging in, or social bonds and extreme cases strong enough to build widespread support for maintaining anonymity.
Jimmy describes the basics of Wikipedia, and then gets on his self-acknowledged soap box. Most social software is designed in a way that makes no sense. Think about a restaurant serving steak: the customers need knives, but they might stab each other, so, no knives. This creates a culture without trust, without community. Most software is too complex because it tries to keep people from being bad. Leave things open even when you know people can do bad things. Instead of locking pages, leave a note asking people not to damage them -- an opportunity to build trust. When they haven't done any damage in a while, I know that Stewart, for example, has not vandalized this page, so I trust him more.
Mary Hodder offers an open source algorithm for scoring blogs beyond authority:
We wanted to see these measures used in an algorithm that balanced the weight of each social gesture, put against large data sets to see whether the resulting score or characterization felt right against what we know about blogs as readers and writers. One thing to consider is that some data sets are made up of spidered data (including blogrolls), while others are made up of RSS feed information (some partial and some whole posts, but there are no blogrolls in RSS feeds) and some are a blend. So we would want to adjust the algorithm for different types of data sets.
So this is my first post thinking about making an open source algorithm...
The value of the Paris Index approach is three-fold:
Current indexes value blogs without involving blog readers (link ranks) or without involving blog writers (sub ranks). It's like a market where price is only set by sellers or buyers.
An open algorithm is akin to a standardized contract for commodity markets. Today the market for AdWords gives the market owner the benefits of information arbitrage while buyers and sellers have little transparency into market clearing mechanisms.
An open algorithm is akin to an open standard, upon which new services can be built. If this algorithm gave significant weight to 2nd generation links, this could be the Cost Per Influence metric for Sell Side Advertising.
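To make the "open algorithm" idea concrete, here is a minimal illustrative sketch in Python: each social gesture gets an explicit, published weight, so anyone can audit or re-tune the index for their data set. The gesture names and weights below are invented for demonstration and are not Mary Hodder's actual proposal.

```python
# Published, auditable weights -- the whole point of an open algorithm
# is that this table is public and debatable.
WEIGHTS = {
    "inbound_links": 1.0,   # links from other blogs (link rank)
    "subscribers":   2.0,   # feed subscriptions (sub rank)
    "comments":      0.5,   # reader conversation
}

def blog_score(gestures, weights=WEIGHTS):
    """Weighted sum over whatever gesture counts the data set provides."""
    return sum(weights.get(name, 0.0) * count
               for name, count in gestures.items())
```

Because unknown gestures default to a weight of zero, the same function works against spidered data sets (which include blogrolls) and RSS-only data sets (which don't), matching the adjust-per-data-set requirement in the quote above.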
I have a hard time respecting anyone who believes that science or technology is neutral. Unfortunately, even when people consciously know that they are not, they give credence to the biased outputs without questioning the underlying assumptions. This is why i’m an academic - nothing gives me greater joy than to think about what biases go into the creation of a particular system.
After reminding folks at Blogher that there are gender differences in networking habits, i decided to do some investigation into the network structures of blogs. Kevin Marks of Technorati kindly gave me a random sample of 500 blogs to play with. I began coding them based on gender (which is surprisingly easy to do given the amount of personal information people put about themselves) and looking for patterns in links and blogrolls.
I decided to do the same for non-group blogs in the Technorati Top 100. I hadn’t looked at the Top 100 in a while and was floored to realize that most of those blogs are group blogs and/or professional blogs (with “editors” and clear financial backing). Most are covered in advertisements and other things meant to make them money. It’s very clear that their creators have worked hard to reach many eyes (for fame, power or money?).
I'm in Frankfurt this week for the first Wikipedia conference. Jimmy Wales has been warming up for his Wikimania keynote on Larry Lessig's blog, talking about 10 things that should be free. The idea for this list comes from Hilbert's problems. In 1900, mathematician David Hilbert posed 23 problems; 10 were announced at a conference and the full list published later, to great influence. He notes that all of these things are obvious, suggested or proposed by others.
10 Challenges for the Free Culture Movement
1. Free the Encyclopedia!
Mission is to create a free encyclopedia for every person on the planet in their own language. For English and German, this work is done (of course there could be quality control, etc.). French and Japanese in a year or so, ton of work to be done globally. Will be done in 10 years time, an amazing thing when you consider minority languages that have never had an encyclopedia.
2. Free the Dictionary!
Not as far along, but picking up speed. A dictionary is only useful when it's full of words you don't know, unlike an encyclopedia. Needs software development, such as WikiData. It is structured information, for cross reference and search.
3. Free the Curriculum!
There should be a complete curriculum in every language. A much bigger task than the encyclopedia. Need not just one article about the Moon, but one for every grade level. WikiBooks isn't the only one working on this project. The price of university textbooks is a real burden for students. The book market doesn't take advantage of potential supply of expertise. Not hard to imagine 500 economics professors writing instead of one or two to create a better offering than the traditional model.
4. Free the Music!
The most amazing works in history are public domain but not many public domain recordings exist (even in classical music). Proper scores are often proprietary derivative works (such as arrangements for a modern orchestra). Volunteer orchestras, student orchestras could provide the music for free.
5. Free the Art!
He shows two 400-year-old paintings. Wikipedia routinely gets complaints from museums claiming copyright infringement. The National Portrait Gallery of England threatens to sue -- a chilling effect, but they have no grounds. Controlling physical access keeps people from getting high quality images. "I wouldn't encourage you to break the law, but if you accidentally take a photo of these works it would be great to put it on Wikipedia for the public domain."
6. Free the File Formats!
Proprietary file formats are worse than proprietary software because they leave you with no ability to switch at a later time. Your data is controlled. If all of your personal documents are in an open file format, then free software could serve you in the future. Need to educate the public on lock-in. There is considerable progress here and continued European rejection of software patents is critical.
7. Free the Maps!
"What could be more public domain than basic information about location on the planet?" -- Stefan Magdalinski. FreeGIS software, free geodata. This will become increasingly important for open competition in mobile data services.
8. Free the Product Identifiers!
He points to the Hobby Princess blog: a huge subculture of people making crafts and selling them on eBay, but they need competition among distributors.
Increasingly, small producers can have a global market. Such producers need global identifiers, similar to the ISBN, not the ASIN (proprietary to Amazon). He suggests "LTIN: Long Tail Identification Numbers," which would be cheap to obtain (they have to have some cost to fend off spam). An extensive database, freely licensed and easily downloadable, would empower multiple rating systems, e-commerce, etc. The alternative is proprietary eBay and Amazon. Small craft producers should be able to get a number and immediately gain distribution across such services.
9. Free the TV Listings!
A smaller issue, it may seem. But development of free software digital PVRs is going on. Free-as-in-beer listings exist, but this is tenuous. Free listings could be used to power many different innovations in this area. Otherwise we will be in a world where everything you watch will be DRM'ed -- so this is important.
10. Free the Communities!
Wikipedia demonstrates the power of a free community. Consumers of web forum and wiki services should demand a free license; otherwise, the company controls the community. Company-maintained communities hold their members like feudal serfs. Are you a serf living on your master's estate, or free to move? The social compact: communities need Open Data and openly licensed software to truly be free. Wikicities -- for-profit, free communities -- founded by Jimmy and Angela. Free licensing attracts contributors.
He will be adding more on Larry Lessig's blog over the coming weeks.
Following Liz's read of BlogHer, one of the more interesting points to come out of the conference is the need for constituent algorithms -- ways of revealing hidden groups. For the BlogHer community, the Technorati 100 was more than a whipping boy; it was an index where a group was under-represented. Mary Hodder's approach, spot on, is to develop alternative indexes.
No index is all-inclusive and all are biased. This isn't necessarily a bad thing. Each is just a way to view the world and its information. But the interesting part is the sociology of how coders frame the world with each index and how we accept, reject or game the indexes that frame us.
Think about the politics at play with the US Census, gerrymandered jurisdictions or any list constructed by the mainstream media. Or how we over-react any time someone makes a new blog index when it hints at a hierarchy. Suddenly we are thrown back to gold stars, grades, being picked for the kickball team, caste judgments, nationalism, ageism, other isms, cliques, ins and outs. But an index is just one way to view the world. What happens when creating and distributing an index is as democratized as blogging is today?
Each index is an attempt to institutionalize, where merely publishing it with credentialed claims invites circumspect vigilance. Somehow we treat lists as authorities, further incenting people to create lists to claim authority. Lists are just groupings, or clusters, but as such, we treat inclusion seriously. With easy group forming, we also get easy group representation -- so on the whole the scarcity of groups decreases with the right and convenience to fork.
Another great idea to come out of BlogHer was a list. Mary started a Speaker's Wiki as a simple answer to event organizers who say there aren't enough women speakers. What's great about this idea is that it was implemented on a Sunday morning. Initially, it's an answer, but I think it will raise some questions. The index begins with all women. But will it evolve to reflect the state of the events market, with a male-dominated power law? Or will it shape the curve? As the gender or other balance tips, will it spawn a fork for under-represented constituencies?
I was very disappointed not to be attending BlogHer, but I’m delighted to see the level of discourse that it has been generating online. That’s an excellent sign of a good conference, and was one of the stated goals of the organizers.
After 45 minutes of intense anger and frustration from many audience speakers in the room toward Technorati link counts and the Top 100, I suggested we create a community-based algorithm, based on more complex social relationships than links. It’s something I’ve been working on for a few months, trying to frame what this problem is and how we might solve it. But it’s a complex issue and I’m also busy. So it’s taken a while. However, my blog post is almost done, and I do plan to put it up in the next day or so.
During Q&A — and this will shock you too — the people asking questions aren’t standing up to hog the mike and show off for the most part. The people at Blogher who asked questions actually wanted answers, wanted to be educated and were happy to be educated by anyone in the room who could educate them. The speakers deferred to others in the audience who could answer questions better than they could.
It reminded me of someone once telling me about an academic conference where an unofficial award was regularly given for “best statement phrased in the form of a question.” Anyone who goes to tech conferences (or academic conferences) is well aware of this phenomenon, where someone who believes they know more than the presenters steps up to “ask a question” but instead uses the microphone as their personal soapbox.
For a visual assessment of how Blogher was different, take a look at TW’s “Blogher Vs Gnomedex”:
There was one thing I really wanted to comment on. Look at the pictures on Flickr tagged Gnomedex vs those tagged for Blogher. These are totally different sorts of pictures. Pictures of PowerPoint projections at Gnomedex. Pictures of women, their FACES, at BlogHer (as opposed to the backs of heads at Gnomedex). It speaks to what women value.
Particularly gratifying to me is the fact that it’s not just the women who are talking about the conference and its participants. I loved this post from Christopher Carfi, who attended the conference. Here’s an excerpt:
This problem has deep roots, and a number of them. How did it come to pass that “number of links” became a surrogate for “quality?” It’s a result of a number of factors that lie in the technical underpinnings of how we currently “discover” new things online, namely PageRank and related algorithms. If a lot of people link to something it must be good, right? Well…sort of. The concept of “a link is a vote” is a blunt instrument.
Although Marc’s heart is in the right place, his suggestion that BlogHers create our own list, our own companies and tell the guys to fuck off…is ultimately simply playing the game by the same old (tired, not wired) rules. (Guys aren’t the real issue; it’s the metaphors we unconsciously live by, the worldviews embedded in the games.) Marc’s implicit assumption, much like the August issue of Wired: You only change the world when you are on a list. You only change the world when you are heading a company. Bigger is better. Louder is more impactful. Celebrity matters.
Go forth and read the posts I’ve linked to, and the posts they link to, and the posts that link to them. Scan the blogher tag in del.icio.us. Don’t just dip your toes into the stream of conversation. Plunge in, and learn. There’s a lot being said that’s worth listening to.
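Carfi's point that "a link is a vote" is a blunt instrument can be made concrete with a bare-bones PageRank iteration -- the textbook sketch below is my own illustration, not Google's implementation, and it drops dangling-page rank for brevity:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.

    A page's score is the damped sum of the shares passed along by
    pages linking to it -- which is exactly why raw link counts became
    a surrogate for quality.
    """
    pages = set(links) | {p for outs in links.values() for p in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            if outs:
                share = damping * rank[page] / len(outs)
                for target in outs:
                    new[target] += share
        rank = new  # note: dangling pages' rank is discarded here
    return rank
```

Run on a tiny graph where two pages both link to a third, the third page dominates regardless of its quality -- the bluntness the post describes.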
Ever notice that SmashedTogetherWords, like you find in some wikis, can be queries of a machine code culture? Try people's names: clayshirky, danahboyd, sebpaquet, lizlawley, davidweinberger and rossmayfield on Google, or the same on Technorati. Try it with other proper nouns, and more than nouns, and you discover the emerging culture. Or maybe it's just a byproduct of blunt tagging and usable URLs. Anywho, maybe it's better spaced out, but this is higher quality metadata.
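The query trick above is trivially mechanizable -- a two-function sketch (my own, purely illustrative) that smashes a name into a query token and splits a WikiWord back apart:

```python
import re

def smash(name):
    """'danah boyd' -> 'danahboyd': one lowercase token, ready to search."""
    return "".join(part.lower() for part in name.split())

def unsmash(wikiword):
    """'SmashedTogetherWords' -> 'smashed together words'."""
    return " ".join(w.lower() for w in re.findall(r"[A-Z][a-z]*", wikiword))
```

Feeding `smash("Ross Mayfield")` to a search engine is exactly the "machine code culture" probe described above.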
I have interviewed subjects who distributed cocaine in Baltimore via Friendster. (To my knowledge, they were never caught which makes it different than the situation with Orkut.) Other subjects have told me ways to find drugs on Tribe.net and MySpace. Obviously, i am not willing to disclose how or who. But this is definitely not unique to Orkut nor to social networking in general. For example, in college, people used to buy drugs on eBay.
Give people the ability to distribute information and they will distribute drugs. Tis just as obvious as if you give people access to attractive people, they will date. So, i find it very entertaining that people get up in arms about this.
Tom Coates does some analysis to illustrate what he suggests is a cultural difference in how people use tags. Some use tags as folders to house objects, others use them as descriptions of objects. (And, it seems to me, many of us do both.) His example: If you tag a URL as “blogs,” you are collecting blogs into a virtual folder. If you tag a URL “blog,” you are describing it as an example of a blog. In the first case, you’re probably putting blogs aside so you can read them. In the second, you may be researching the blog phenomenon. Tom’s research leads him to conjecture that “the folder metaphor is losing ground and the keyword one is currently assuming dominance.”
I assume this is correlated to tagging for myself versus tagging to add to the social tagstream: I tend to folder for myself and to keyword when contributing to a social tagstream.
It’s all very confusing. Fortunately, Tom is a good explainer…
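A toy version of the heuristic behind this kind of analysis (assumed here, not Tom's actual code) is just a plural test: plural tags like "blogs" suggest folder-style collecting, singular tags like "blog" suggest descriptive keywording.

```python
def tag_style(tag):
    """Crude, English-only plural heuristic: plural -> folder, else keyword."""
    return "folder" if tag.endswith("s") and not tag.endswith("ss") else "keyword"
```

Obviously "glass" and irregular plurals defeat it, which is why any real study would need a proper morphological check, but it is enough to chart the folder-vs-keyword trend over a tag corpus.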
A darkened theatre. A full house. A heroic act. A mighty roar from the crowd. This is the delight of good cinema.
I love going to the movies with people, even people I don’t know. I love to hear others’ reactions, and discuss the movie with people afterwards. In fact, I love it so much, that when my neighbor shows movies in many languages from all over the world in his backyard on Saturday nights during the summer, I often go down for the movie and end up enjoying the wine, cheese, and conversation more than the images flickering across a bedsheet waving gently in the breeze.
So, I got to thinking: What if you could rent a theater for a night? Then I read this: “At this year’s Sundance Film Festival in Park City, Utah, filmmaker David LaChapelle screened his new hi-def movie, Rize, by streaming it from Oregon and then transmitting it through a WiMax station in Salt Lake City. It worked flawlessly - soon even theaters won’t have to rely on physical media anymore” (from http://www.wired.com/wired/archive/13.04/start.html?pg=2).
Improvements in bandwidth and compression will usher in the possibility of streaming movies directly to local theaters.
I’ve been waiting for a mega-media company to buy MySpace and sure enough, it happened. News Corp bought Intermix Media (the half-parent of MySpace). Unlike the other YASNS, the value of MySpace comes from the data on media trends that is the core of what people share on that service. You have millions of American youth identifying with media and expressing their cultural values on the site. Marketers who want to understand the constantly shifting youth trends are often looking for a perch from which to be the ideal voyeur. And with MySpace, they found it. Here, youth are sharing media left, right and center and forgetting that they are doing so under the watchful eye of Big Media, who are certain to use this to manipulate them. Because youth believe that MySpace is a social tool for them, they are not conscious of how much data they’re giving to marketers about their habits.
Really, it’s a brilliant move for News Corp. (assuming they can stay out of the courts and that the RIAA is nice to them). I’m just not so certain how good it is for youth culture.
But take a deeper look. Everyone's Tags are about to be overrun by Nigerians, a future for most social bookmarking services. My Community's Tags (2 degrees) are definitively not spam. At least in my little community.
wikiHow is one of the more interesting cases of opening a proprietary content and community site. A couple of entrepreneurs bought eHow (editorially produced How To Guides, a dot com showcase) out of hock and appended a wiki to it. Today it may be the second fastest growing public wiki and they recently adopted Creative Commons licensing. The real story is the process of opening an asset, transitioning a community and how to be a net-enabled entrepreneur.
During the boom, eHow spent $30 million and developed a rich base of How To content, respectable traffic and loyal contributors-as-users. Many of these contributors were experts in their fields and valued how they could contribute content while retaining copyright. Under a questionable business model, eHow filed for bankruptcy in February 2001, but traffic continued at 250k visitors per month. Another now defunct internet company called IdeaExchange.com purchased eHow, but it also was unable to run the site profitably and began to look for buyers.
Two entrepreneurs who happened to love the site, bought the asset and worked part time to keep the site operational. Literally, it is a nights and weekend labor of love.
They leveraged the Internet Archive to find and republish content lost during the bankruptcy, restoring 1,000 articles previously composed by the dot com's professional editors. But noting the parallel with the Nupedia/Wikipedia story, they looked to evolve toward the user-generated content model. One of them happened to be a Socialtext customer (the first deal I closed via Skype, incidentally) for their day job, so I've been helping them out informally.
They adapted the open source MediaWiki to fit the eHow format by breaking the wiki page into title, summary, steps, tips and warnings. With zero publicity, they simply stuck a wikiHow tab on the top of the site. wikiHow is six months old and has already generated 1400 articles (by comparison, Wikitravel, a great resource, generated 1000 articles in seven months) and traffic is doubling every three months.
The very first piece of advice I gave was to focus on the social contract and adopt Creative Commons licensing. They executed the social contract (in human readable summary: a civil group effort, family content and limit egregious self-advertising) quite well, but licensing proved to be an issue.
A big part of the co-founding intent was to share and develop the asset with the community. Unfortunately, we don't have an analytical framework for opening intellectual property (like we do with transaction cost analysis for buy vs. build). The co-founder decisions were further complicated by the existing community structure. Many eHow contributors were considered experts in their fields. They valued the ability to retain copyright on their work as a promotion of their expertise. On the other hand, while the site purposely shied away from publicity, it began to attract another generation of contributors more familiar with Creative Commons licensing.
Yeah, except that, unlike Wikipedia, their Wiki isn't under the GNU Free Documentation License. In other words, they're basically asking people to slave away for them for free. Thanks, but no thanks.
The Open License Proposal provides some good detail on the narrative of adopting Copyleft. Most of the conversation on open licensing occurred within the wikiHow discussion board. One key issue was the risk of screen scrapers and spammers bastardizing content for search engine optimization. I put them in touch with Creative Commons, and Mia Garlick (General Counsel) provided compelling arguments and guided them through the process. At a certain point, they were able to gain support from the existing eHow community. Now at the bottom of every wikiHow page you will find the (CC) logo and "This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License."
There is no better way to conclude this story, for now, than with co-founder Jack H's own words in an email:
I’m very happy to report that wikiHow has rolled out a Creative Commons license over the entire site. Our small but growing community had a long discussion about which license to choose and why. As you may remember, Josh and I had originally proposed giving authors the ability to opt-in or opt-out of an open license. And the community liked the idea of the open license, but the majority of the participants wanted the open license to be mandatory rather than optional. So Josh and I wisely decided to follow their lead. And after hearing their views, it is now obvious that they (and you) were right. It just didn’t make sense for wikiHow to be half free. The most active community members work on the entire site, not just their own articles and therefore they should have the satisfaction of knowing that everything they do can be used by anyone under the terms of the license. I’m very excited to have made the switch to this license. I know that I will be really proud the first time I hear about a blogger or school using our content on their website or other publication. Offering free, helpful instructions to the problems of everyday life is wikiHow’s core mission and the open license will help us get these instructions in the hands of even more people. I’m really stoked.
During the SARS epidemic I noted that a Wikipedia page was the best source of information for an evolving event. Now three bloggers have launched a new experiment in collaborative problem solving in public health, The Flu Wiki. They hope the wiki will be:
a reliable source of information, as neutral as possible, about important facts useful for a public health approach to pandemic influenza
a venue for anticipating the vast range of problems that may arise if a pandemic does occur
a venue for thinking about implementable solutions to foreseeable problems
What can you and two of your friends start to change the world?
• Act I: Public (e.g. Web Search)
• Act II: Personal (e.g. Desktop Search)
• Act III: Social (e.g. search communities)
I got a sneak peek at this. You can save, annotate and tag any webpage -- and then share it with two degrees of separation in your Yahoo 360 network, or with everyone. Social discovery happens around time, people, locations and topics.
Google once took the lead for the annotated web by fostering blogs. But subscription is the new search, and sharing trusted annotation and tagging will build the best index to feed it. Think for a minute about what happens to search when you introduce high quality metadata, scoping and authority relevant to you. Search has had two great innovations: PageRank (links are votes, thank you Google) and AnchorText (the text of a link, thank you AltaVista). With My Web 2.0 (which I prefer to pronounce "squared," as it's not about me anymore), trusted groups are adding a third dimension to search -- one that enhances the search index even for free riders. And those who do participate get top-level benefits, whether they be filers, pilers or neithers.
When you make search social, what matters is trust, expertise and context. Yahoo may gain object-centered sociality around web pages, where stories around pages yield connections that yield stories. While this may at first glance look like a real threat to del.icio.us and other social bookmarking sites, they don't have the social incentives quite right, yet. They either need to strengthen them (eyeing personal, social and economic [ack!] incentives) or remove many clicks to get to Act III.
Two degrees of separation is a coarse model for all the facets of our identity and the groups we seek to share with. Unlike a site like Flickr or del.icio.us, there is less enclosure with a web-wide search function, which may lead to socially awkward situations. Privacy issues may arise. In contrast to browsing, search is a filtering function -- and this is the first large-scale implementation to use social networks for their true strength: as a filter.
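The two-degree filter is simple to express in code. Here is a sketch over a hypothetical data model (the contact graph and annotation tuples are invented; this is not Yahoo's implementation): an annotation is visible to you if its author is a contact, or a contact of a contact.

```python
def within_two_degrees(me, contacts):
    """contacts: dict mapping each user to the set of their direct contacts."""
    first = contacts.get(me, set())
    second = set()
    for friend in first:
        second |= contacts.get(friend, set())
    return (first | second) - {me}

def visible_annotations(me, contacts, annotations):
    """annotations: list of (author, url, note) tuples; filter to my circle."""
    circle = within_two_degrees(me, contacts)
    return [a for a in annotations if a[0] in circle or a[0] == me]
```

The coarseness the paragraph above describes is visible right in the model: one flat contact set stands in for all the facets of identity, so everyone within two hops sees the same annotations.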
But if subscribe is the new search, where are the streams? Openness is forthcoming, and Yahoo! does have a recent track record of participating in its surrounding community and supporting open standards. Whenever I hear the word integration, I reach for my gun (I do the same for the word content). The risk is the pull of a major enterprise's portfolio when misguided groupthink starts to believe it can own the social web. Maybe I want to leverage the tagging activity I do in del.icio.us, EVDB, Twaggle and my blog/Technorati, or my graph in LinkedIn or Tribe, or annotations in Socialtext or Typepad -- Flickr isn't the only service made of people. Not just import/export but synching across services. Maybe I want to develop upon API goodness (even for non-competitive commercial entities, such as a search group for a Meetup). Maybe I want to see contributions to open source, even though it is a consumer service. Most likely, alternatives will be available that don't depend upon integration and embrace open, loosely coupled business architectures. So the big question will be whether Yahoo! continues down the path to the Open Web or cubbyholes itself in a Closed Web.
So yes, this is a very big thing. A clear watermark of social infrastructure being developed upon physical infrastructure. I'm not apologetic for calling it a new kind of web, and I think my friends will call it that too. The great promise, of course, is for non-bloggers to annotate the web. Which is perhaps Act III.
Okay, I’m still a bit irked that the LA Times editors shut down the Wikitorials community. I had started to become engaged in the community and saw promise. They shut it down without warning, and without having thought things through to begin with.
So, why not use a wiki to compose a letter to the editors of the LA Times? Let’s write an Open Letter to the Wikitor. Who knows, they might even acknowledge or print it.
UPDATE: The letter is looking pretty good, I’m sending it in on Sunday, so go contribute if you so desire.
Clay Shirky: I was at CSC last Thursday watching a project manager using Lotus. I asked what she used it for, and she said she only used email. They have a bunch of database apps created a few years ago. Lotus is the most expensive email platform in the history of IT. When you don't give your employees a vote, you give them a veto. Vetoes are more expensive. Anything that requires the employee to coordinate with the IT department, or to get the IT department to do something, will have worse propagation properties. This is how PCs and spreadsheets spread. Perimeter-based defense works great except with two kinds of companies: those with vendors and those with customers. People use IM and wikis because those ports aren't blocked. How much an employee can do on their own, while still collaborating with third parties, determines how technology will trend away from IT.
Melanie Turek: What's happening now is IT taking control of things that are entering the enterprise from the bottom up. But will they step up to the plate and adapt?
Michael Sampson: With email, we had departmental solutions until SMTP allowed enterprise-wide productivity. Today I can sit in a SharePoint interface, you in a Lotus interface, and someone else in a Socialtext interface, but we can't all work together. Those standards simply aren't there yet.
Someone in the audience, from McKinsey, says the question is framed wrong: you need to get groups together first, then decide how to support them. Melanie Turek responds that not everyone wants to collaborate; the question is how we incent them to change, less what technology to apply.
Clay Shirky: Users will find the tools that fit their practices. Employees know what they are doing, sticking with email despite the problems until something better comes along.
It seems impossible for someone who disagrees with the central thrust of the original editorial to both respect the intentions of the authors, and also to have a voice. So I'm proposing this page as an alternative to what is otherwise inevitable, which is extensive editing of the original to make it neutral... which would be fine for Wikipedia, but would not be an editorial.
Wikis can be adapted to most any form of content and conversation. They inherently foster trust through shared control. By de-emphasizing identity, wikis are fairly disarming. When conflict arises, because there is infinite space, you can fork the conflict and give everyone space of their own.
By quoting Jimbo's comment, this post, depending upon how you interpret fair use, is in violation of the Terms of Service:
You may not, for example, republish any portion of the Content on any Internet, Intranet or extranet site or incorporate the Content in any database, compilation, archive or cache. You may not distribute any Content to others, whether or not for payment or other consideration, and you may not modify, copy, frame, cache, reproduce, sell, publish, transmit, display or otherwise use any portion of the Content.
There is already a discussion on licensing. But this conversation cannot be one-sided and the LA Times staff are nowhere to be seen to address this issue before the next fork.
UPDATE: /. -> goatse -> shutdown -> failed -> history. At one point, I removed a goatse myself by tracking Recent Changes. How disappointing for the MSM to open and close with a single slashdotting, forsaking our contributions. I'm sure they will open up again, and there are other MSM pilots, but let's clarify the social contract.
Michael Snow has followed up on the issue, and I was wrong. Esavard did not know Elias, and was not acting in concert with him. I owe an apology to both Esavard and Ryan Quinn, the technical lead for Symphony. I apologize to you both.
The Wikipedia entry itself is more complicated. Snow notes that there is a vote as to whether to delete the SymphonyOS entry from Wikipedia, and its running strongly to leave it. This, in my view, is the right answer; the fact of a Wikipedia entry on a software project should be tied to its existence, rather than being a referendum on other aspects of the project.
Furthermore, the entry has now been edited to a much more neutral point of view, including, in particular, the deletion of the Trivia section, which was created with a single piece of trivia — that the site had been slashdotted on June 8. There were, in my view, two things wrong with that section: first, if the section really was trivial, it should not, by definition, have been included. If it was not trivial, it should have had another name, but there’s no obvious alternative section for it, since the fact of the slashdotting is unrelated to the technical merit of the effort.
Second, and more importantly, though the entry mentioned slashdot, it didn’t link to the actual slashdot thread on SymphonyOS, surely far more important than the effect slashdot traffic had on its servers. By mentioning the slashdot effect without pointing to slashdot itself, the Trivia section had the look of an advertisement.
There’s a long thread on this issue on the Talk page, which is interesting both for Elias’ declarations of autonomy w/r/t to an article he clearly feels he owns (my favorite quote: “So what if this is an advertisement campaign? What are you going to do about it? Nothing.”) and for the view it offers about how the Wikipedia community works generally, with a kind of measured deliberativeness that is quite rare in online communities.
"Watch next week for the introduction of "wikitorials" — an online feature that will empower you to rewrite Los Angeles Times editorials."
This is one media experiment to watch. However, from Socialtext's experience with public wikis, offering up otherwise finished text for rewrite has limited effect. Generally, wikis can work best when something is slightly unfinished, when room for contribution is left clear. Finished text leads people to drop in links or short comments. Quite different from wikitechture that involves people in the process of production and encourages development of shared practices.
Also, this is a marked departure from the reference model most public wiki users know, the neutral point of view of Wikipedia. Almost begs for edit wars. But starting with the least newsy section of the news could be a good place to start.
Slashdot, one of my few ‘must scan three times a day’ sites, has notoriously poorly coordinated and unskeptical editors. As a result, they often run stories that are different from ads only in that /. doesn’t charge for the service.
Yesterday, though, I saw a new wrinkle: a post sent in by an esavard, using the already pointless sound and fury around the Apple/Intel matchup, to flog a new! improved! YALD (Yet Another Linux for the Desktop) with the goals — who could imagine such audacious goals! — of making Linux easier to use, making applications simpler to create, and just generally making sure everyone has a pony.
So, to add a little foam to what was pretty small beer, esavard pointed to the Wikipedia entry about their YALD, saying “If you want to know more about Symphony OS, a good starting point is a Wikipedia article describing the innovations proposed by this new desktop OS.”
Now at that point the Wikipedia entry was around three weeks old, had been edited 29 times, and 20 of those edits were by the same user, EliasAlucard. The first edit to that page after being picked up by slashdot (from an IP address with no associated username and with no other history of edits) added a note under the header Trivia: “On 8 June 2005, the Symphony OS website was a victim of the Slashdot effect.” (I deleted this bit of self-aggrandizement just now, though we’ll see how long Elias lets it go.)
Then, today, when someone pointed out on the related Talk page that our pal EliasAlucard had created a Wikipedia advertisement, he replied “Guess what? No one cares about your opinion of what it looks like. Give it a rest already.”
This is an interesting kind of spam, or maybe we could call it a reputation hack. I have no way of knowing who esavard is in relation to EliasAlucard, but I am betting they are pretty closely related. They create a Wikipedia page, point to it as if to demonstrate independent interest for the project in their potential slashdot post, then point to the slashdot effect on the Wikipedia page as proof of said independent interest. Voila, an instant trend.
This is the downside of the mass amateurization of publishing. Since the threshold for inclusion in Wikipedia is so low, there is almost no value in thinking “Hey, it’s got a Wikipedia article — must be serious.” We have the sense-memory of that way of thinking from the days when it cost money to publish something, and this class of reputation hack relies on that memory to seed the network with highly targeted ads.
And it’s a hard hack to stop, since it isn’t exactly vandalism. Most articles have only a few editors in the early days, so it’s an attack that doesn’t have an obvious signature either. It’s relatively easy to see how to defend against vandalism of high-stakes pages, but it’s hard to see how to defend against the creation of pages where so little is at stake for anyone but the advertiser.
I think there are two problems with the official and community encouragement to resolve disputes before leaving negative feedback. First, patterns of mild dissatisfaction are not recorded, so lots of useful information is lost. Second, sellers have become overly sensitive to any negative or even neutral feedback because it is so rare. If negative feedback were given 5% or 10% of the time, on average, then sellers would worry about keeping their percentage down, but wouldn’t be as concerned about any particular feedback.
Negative feedback is rare because it is powerful, a kind of nuclear option, but as a result there is a huge information asymmetry, where frequently but mildly poor sellers are less likely to be spotted.
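The asymmetry is easy to put in numbers. A back-of-the-envelope sketch (the dissatisfaction and reporting rates below are invented for illustration, not measured eBay data): if social pressure means only a small fraction of unhappy buyers actually leave a negative, a mildly poor seller becomes nearly indistinguishable from a good one.

```python
def visible_negative_rate(true_dissatisfaction, report_rate):
    """Fraction of all transactions that surface as negative feedback."""
    return true_dissatisfaction * report_rate

# Assume only 5% of dissatisfied buyers escalate to a negative rating.
poor = visible_negative_rate(0.10, 0.05)  # seller who disappoints 10% of buyers
good = visible_negative_rate(0.01, 0.05)  # seller who disappoints 1% of buyers

# Both sellers display a "feedback positive" score above 99%.
poor_display = 1 - poor
good_display = 1 - good
```

A tenfold difference in actual quality collapses into a fraction of a percentage point on the visible score, which is exactly why tools that surface individual negatives become worth a buyer's time.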
Earlier this year, Toolhaus launched Ebay Negs!, which is the next phase of that cat/mouse game.
Ebay Negs! lets you view all the negative feedback an eBay user has received. To use it, first highlight the ebay username you want to check with your mouse, then right click and select “Ebay Negs!” You will then be transferred to a page at http://www.toolhaus.org where all the negative feedback remarks that user have received will be displayed.
This assumes the very imbalance that Resnick was talking about in 03 — indeed, the comments posted on the tool page all call it a time saver, indicating how little value is placed on even an overwhelming preponderance of positive comments.
This is analogous to stocks falling when a company exactly meets its earnings target. Since the target was announced by the company itself, and since the accounting tricks that can be used to massage earnings are many, a company that can’t beat a hurdle it sets for itself is assumed to be in trouble. In the same way, if a negative rating on eBay means that all communal norms and attempts at dispute resolution failed, then tools for ferreting out even single examples of negative comments are worth the user’s time.
It’s interesting that as transparent a market as eBay has grown an information asymmetry problem all its own, and tools like Ebay Negs!, while helpful to individual buyers in the short run, are just going to ratchet up the overall pressure more.
So how’d we fare this time around? Well, we’re glad to report that the removal of cold, impersonal email from our workplace reminded us of the value of getting up and talking with each other, reforging lasting connections that will do far more for us than any fancy software system could ever do. Yeah right. And then we went out and planted a tree.
No, what really happened was a day of false starts, fluttering hands and embarrassed shrugs, vaguely agonizing and occasionally amusing. […] Those with email also became lifelines for meeting organizers — because our calendars are all tied into our email, most of our schedules were instantly erased, leaving harried-looking meeting organizers trying to find people with working email who could peek at the organizers’ schedules, or who’d been invited to a meeting and could reply-all to the invite as a method of reconstructing the list of attendees.
The key losses to the workplace from the lack of email included not just the data stored in the mail itself, but a critical — and now irreplaceable — social lubricant.
Joi highlights the Korean exception, where the most wired country on the planet has developed social software traction through centralized models like OhMyNews and Hompy (derivative homepages). This is in stark contrast to decentralized blogging that leverages open standards, which is all the rage in some larger countries like the US, France (no!) and the UK.
While many factors contribute to consumer blog adoption (broadband, regulation, culture, social networks, celebrity and mass media to name a few), my sense is that smaller countries like Korea will trend towards centralized models. Language barriers to existing network effects, the simplicity of a single location, and cultivation of a community within bounds all contribute to my generalization. In the absence of connections, nodes are state attractors.
So, when podcasting first emerged and people told me that it was the answer to blogging, i rolled my eyes. I have zero interest in listening to random blogs. While i’m happy to scan across large quantities of text, there’s no way that i have any desire to listen to blogs or produce a podcast. None.
From the beginning, i said that i would like podcasting when NPR was podcasting, when electronic music was podcast and when it was otherwise adopted by people who know how to turn voice into an art. In theory, amateurism is interesting to me; in reality, i don’t want to listen to it.
This morning, i woke up to the word podcast coming out of NPR every few seconds. ABC is podcasting. Wow… i’m impressed. Podcasting is not that old but it has already reached mainstream news. But this actually makes sense. They already produce large quantities of media ready-to-go for mobile listening. Why not just deploy it in a new way? They are doing their own TiVo for radio (and for TV). The practice is already there. While audio-bloggers have to develop a new practice, radio and TV folks have this medium down. Podcasting does what i’ve wanted Audible to do wrt radio for a while. And it is simpler and quicker.
Second, think about the value of the term “podcast.” What was the number one device sold at Christmas? iPod. The term “pod” is hip, cool and yet mainstream as hell.
I’m super super stoked that the mainstream media has taken this and run with it - this is impressively fast adoption. There’s only one problem… how are they going to feel when we forward through the ads and NPR’s annoying requests for money? Are we going to see the same TiVo fights on podcasting? Are deals going to be made such that podcasting is limited to just the mainstream folks, or iPods created to not allow forwarding? Goddess, i hope not. As much as i have no interest in listening to any audio-blogs, by all means, let those who do relish in it.
What are the costs of mainstream adoption during the early adopter phase? What does it mean when it fits so well with a practice and yet, allows for a different form of it?
Enterprises are adopting social software out of both fear and greed. Fear is the primary driver for corporate blogging, while greed is driving adoption of social software within the enterprise. I have used this metaphor to explain what I see in the market lately, so here it is in one place.
Fear Drives Corporate Blogging
Fear is a powerful emotion for the corporate animal. An early adopter wave of non-brand-centric tech companies from Sun to Microsoft to SAP saw an opportunity to engage developers with the tools they use. Today most every F500 company is looking into blogging, particularly brand-centric companies, but they do so differently. All those revolutionary bloggers having conversations about their brands and influencing others is pretty scary. Suddenly your brand is being watched, augmented, de-located.
Corporate executives unfortunately fear their employees more than they trust them. An even greater risk to their brand, they fear, comes from within. Since the advent of email, employees have had the ability to message and forward to the influencers, the press, regulators, anyone. Further, where the hierarchical structure once confined commands to flowing down and information to flowing up, email enabled a horizontal flow of information.
What is new are cases like Microsoft discrimination policy being Scobleized and the Los Alamos National Laboratory revolt. Here the heterarchy transcends the firewall and pressure can be applied from without. Sometimes business follows developments in politics. When Reagan ran into resistance from a Democratic Congress in the 1980s (lobbying or institutional pluralism failed him), he leveraged the media for mass appeal to fax representatives (individual pluralism). In other words, he was Going Public, in a way similar to how employees can through blogs when institutional mechanisms to influence executive decisions fail them.
In practice, only a few employees (e.g. Scoble, Tim Bray) have gained enough of a following to consistently lead through Going Public. However, the emergent attention-forming structure of the blogosphere can take a fit message and self-organize around it at a moment's notice. While extremely rare, this pattern gives employees a notion of empowerment by pulpit that can be ignorantly abused. Nobody gets fired for blogging; the real role of a blogging policy isn't the policy itself, but the opportunity for education and for re-engaging employees in common sense.
Fearing these scenarios, the corporate animal uses its fight-or-flight instincts. No better way to keep your employees from blogging than to sue other bloggers. When conversations aren't going your way, carpetbomb them. View the people in these conversations as consumers instead of participants, and set up fake blogs for them to consume. Or do what you are great at, nothing, ceding early mover opportunities to others.
Sidebar: Please understand that I am generalizing about Fear in corporate blogging, but I do think it is the norm. There are wonderful exceptions where corporations are embracing the blogosphere as an opportunity. But they are exceptions. The other qualifier I will put on the above remarks is that fear quickly turns to greed. What we once fear we then understand, see opportunity and embrace. Oh, and one more, fear may not get you laid, but it does in the parlance of corporate M&A (while governments treat corporations as individuals, they are no more than a Fakester in my heavily bounded reality). Anywho...
Greed Drives Enterprise Social Software
Behind the firewall, it is a different story. We are emerging from a post 9-11 phase of insecurity that put a premium on security and compliance. While regulatory requirements have leveled new burdens in the enterprise, demand is shifting back to the traditional reasons enterprises invest in IT -- competitive advantage.
But this time, it may be different. Where competitive advantage used to stem from automation of business processes to drive down costs, those opportunities may be gone. Not that Nicholas Carr was right, far from it, but value has shifted yet again.
Most will read this book as casting offshore outsourcing as a positive rather than a negative. The world is flat, and it helps to understand the Ricardian specialization at play, and how clusters of capabilities are not only natural, but a good thing. The book actually suggests this as both a fact and a value argument; I am imposing a frame of value.
But, returning to the fact of IT for competitive advantage, the readers of this blog will be interested in this. "95% of IT expenditure in companies supports business processes. Almost nothing goes into the social fabric." Meanwhile, the vast majority of what workers actually do is handling exceptions to process, what you could call the domain of business practice.
Wikis, Blogs, RSS Aggregators and other Social Software provide an alternative to email for supporting the social fabric. Hidden in email is 90% of collaboration and 75% of knowledge assets, but all the value disappears below the fold -- while spam, occupational spam and viruses hamper productivity.
Sidebar: The Social Life of Information was the one book that perhaps inspired me most to co-found Socialtext -- with cases of how value is realized from the social context of tools, and perhaps how social context within tools fosters value. Full circle. My takeaway when we were all defining Social Software (I still say Social Software adapts to its environment, instead of requiring its environment to adapt to software):
People are smart about how they get their work done. If a software-driven business process fails to serve their activities, they will adapt using their informal network resources to get it done. In other words, when business process fails, business practice takes its place. This is a major point of John Seely Brown's Social Life of Information.
If the opportunities to gain advantage from automation are largely gone, the remaining frontier is innovation. This latest work observes how leading companies like Li & Fung build capabilities across loosely coupled networks with productive friction to foster innovation. They envision a new stack to accelerate not only productivity, but innovation:
Social Software -- easy group forming to handle exceptions with diverse specialization, innovate, remember and learn
Service Oriented Architectures -- to realize economies of scope and span
Virtualization -- to realize economies of speed and scale for underlying data commodities.
Back to adoption. Fear is hardly the reason for IT adoption of social software. Interestingly enough, enterprise social software is orders of magnitude cheaper than previous generations of collaboration, portal, content, document, knowledge and other "management" systems, while providing 80% of the functionality -- but this only lowers the barrier to pilot. Simple group productivity may be the spark, but the great intangible is helping people innovate together. Enterprises adopt social software because of the opportunity to change through innovation.
But a funny thing happened on the way to the forum. Individuals are as greedy as the next individual. Like all disruptive technologies (PCs, spreadsheets, local area networks, email, IM) and horizontal productivity apps, Social Software is entering the enterprise from the bottom up. It is the individual who brings an open source or hosted tool to serve her needs or her workgroup's needs, to gain advantage over others within the enterprise.
But if you follow JSB and Hagel's work -- the language and source of competitive advantage is changing from competitive advantage to cooperative edge. We innovate through trust, sharing and productive friction between individuals and partners with diverse expertise. Open source is more than a licensing scheme, it is a way of working to learn from.
Turning Fear into Greed
Perception of risk can foster new markets, prompting each player to at least bet their ante. In practice, for publishing for example, the ante at this stage is simply offering an RSS feed for existing content. But when you act only out of fear, fight-or-flight instincts kick in and prevent you from seeing opportunities. The upside is when someone else isn't acting out of fear and zero-sum competition (e.g. Sun in corporate blogging, DrKW in enterprise social software). Enlightened enterprises will act on opportunity and gain an edge, later to be copied out of greed, but the edge is sustained by innovation.
UPDATE: Some of the feedback I have received points to the need for more success stories, particularly in corporate blogging. Anyone know of any studies that have demonstrated the value proposition of letting employees blog or having a corporate blogging initiative? It could help turn fear into greed.
When I was in NYC last week, a friend praised the serendipitous sociality of Manhattan. It is LA's turn. Roadcasting allows anyone to create their own radio station, broadcasted among cars in an ad-hoc network.
Om Malik interviews the team behind the automaker-funded Carnegie Mellon HCI project (linking in hopes of Bob Lutz's opinion), saying: think of it as pirate radio meets smart mobs at 60 miles per hour. It's open source, which may prompt use beyond the car (think roaming laptops, condos and mobile devices). Good thing too, as earbudded New Yorkers are starting to function like Angelenos, without the crash protection and cup holders.
Feedster is introducing a Tag This widget that blog authors can include in their posts for readers to anonymously tag posts. A volunteer manual way of building a database. After you enter a tag, you get to see the list of tags for the post, but they don’t link anywhere so the reward for the effort is unfulfilling. (Rafer notes: The tags submitted now are “real” and being databased, so give it a shot on your blog or mine. Just due to time constraints, the tags are only displayed once a new tag is submitted. All the tag data will be available via the expected and reasonable mechanisms shortly.) Blog search engines serve readers and with future iterations this hints at a good distributed way to engage them.
This spring, I gave a pair of talks on opposite coasts on the subject of categorization and tagging. The first was entitled Ontology Is Overrated, given at the O’Reilly ETech conference in March. Then, in April I gave a talk at IMCExpo called Folksonomies & Tags: The rise of user-developed classification.
I’ve just put up an edited concatenation of those two talks, coupled with invaluable editorial suggestions from Alicia Cervini. It’s called Ontology is Overrated — Categories, Links, and Tags. Though much of it is not about social software per se, I try to extend the argument that the ‘people infrastructure’ hidden in traditional classification systems is an Achilles’ heel for systems that have to operate at internet scale, and that the logic of tagging overcomes that weakness:
DSM-IV, the 4th version of the psychiatrists’ Diagnostic and Statistical Manual, is a classic example of a classification scheme that works because of these characteristics [of the user base]. DSM-IV allows psychiatrists all over the US, in theory, to make the same judgment about a mental illness when presented with the same list of symptoms. There is an authoritative source for DSM-IV, the American Psychiatric Association. The APA gets to say what symptoms add up to psychosis. They have both expert cataloguers and expert users. The amount of ‘people infrastructure’ that’s hidden in a working system like DSM-IV is a big part of what makes this sort of categorization work.
This ‘people infrastructure’ is very expensive, though. One of the problem users have with categories is that when we do head-to-head tests — we describe something and then we ask users to guess how we described it — there’s a very poor match. Users have a terrifically hard time guessing how something they want will have been categorized in advance, unless they have been educated about those categories in advance as well, and the bigger the user base, the more work that user education is.
Before the advent of email, senders bore the brunt of communication costs. Spam is an economic problem, and solutions with the greatest potential are seeking to correct this imbalance. This is well known.
But consider IM for a moment. It is yet another push medium, where the most efficient way to get someone's attention happens to be very expensive for the receiver. Not only the time you are interrupted, but the interruption tax of the 15 minutes it takes to cognitively recover the task at hand. Receivers are responsible for communicating presence to avoid interruptions, but we don't have ways of automagically signaling presence that are both rich enough and leverage the social network as a filter. Heck, the most efficient ways of communicating rich presence are asynchronous (blog posts, Flickr, Plazes) and yet to be integrated -- there is no Xfire for the real world.
When you factor in the rise of RSS as a Pull mechanism that the receiver controls -- there is a significant shift underway to make senders pay. If you don't write a worthwhile blog post, people don't pay attention. Readers slap through posts with their space bar and have their trigger finger on the unsubscribe button.
Within the next five years or so senders will pay the postage due.
As social networking becomes core infrastructure, you gain the filter to respect privacy while enabling presence. Breadcrumbs will sprinkle trails beyond the beaten path of on/off/sleep. With cameraphones we are really just experiencing the first wave of rich and convenient presence. Presence that provides object-centered sociality to tell even richer stories.
The behavior we are seeing around events is a perfect example of what happens when you add Where to the presence mix. Today events provide a fixed object for activity to organize around and are public enough to share stories and artifacts without breaking social norms. When cell phones capture and constantly transmit spatial presence, we may be in for the biggest privacy shock of our time. Like a camera over our shoulder, only it's in your pocket, everywhere and nearly always on. Social norms will significantly evolve.
However, with the social network as a filter -- coordinates of time, space and activity (what am I listening to, my calendar, use of modalities) can automagically provide a reasonably rich presence. When the cost of presence and interruptions are reduced from the receiver, we may find it more efficient to connect.
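That filtering idea can be sketched as code. A minimal, hypothetical interruption filter (the signal names, weights and thresholds are all invented for illustration): combine presence signals into a busyness score, and let social distance set how busy you must be before a sender is deferred.

```python
def should_interrupt(sender_distance, busy_signals):
    """Crude presence filter.

    sender_distance: hops in the social graph (1 = friend, 2 = friend-of-friend, ...).
    busy_signals: dict of boolean presence coordinates (calendar, typing, etc.).
    Close ties get through unless nearly every signal says busy; distant
    senders are deferred unless you look almost entirely idle.
    """
    busy_score = sum(busy_signals.values()) / len(busy_signals)
    threshold = {1: 0.9, 2: 0.6}.get(sender_distance, 0.3)
    return busy_score < threshold

# A friend pinging while you're typing but not in a meeting gets through;
# a stranger pinging while you're in a meeting does not.
friend_ok = should_interrupt(1, {"in_meeting": False, "typing": True})
stranger_ok = should_interrupt(3, {"in_meeting": True, "typing": False})
```

The point of the sketch is the structure, not the numbers: the social network supplies the threshold, and the coordinates of time, space and activity supply the score, so the cost of presence falls off the receiver.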
While you are playing Dodgeball MoSoSo, you should grok Cellphedia. It’s like Dodgeball for trivia instead of getting laid, with topical groups instead of friends. It’s not Wikipedia, tho inspired by it, but like the community behind it that loves to know it all. It’s like Google SMS without the algorithms getting in the way of people. Anywho, it’s neat, and as people game the game it might create more interesting games.
Google, the publicly held Mountain View, CA firm best known for its search engine, has acquired dodgeball, a social networking tool for mobile urbanites and one of the earliest examples of mobile social software.
Dennis Crowley and Alex Rainert were students of mine at ITP. I’ve watched them build Dodgeball over the last few years, which was both inspiring and instructional. Given the level of thought and effort they’ve put into it, this is really good news, for them and for Google.
More to say later, but the important thing now is that Dodgeball adds to a really interesting set of ‘sand in the oyster’ issues for Google. Google has historically been information-centric. The content and character of social relations don’t fit well into that view of the world, but matter, a lot, to users. (As we’ve often said around here, community != content.)
Gmail, Orkut, and now Dodgeball all touch this issue. Dodgeball in particular is built on a mix of three different kinds of maps: maps of location (118 rivington St), maps of place (a bar called The Magician), and maps of social environment (“I’m here. Where are my friends?”) By mixing them, Dodgeball mingles informational and social aspects of a user’s life into something more valuable than either of those things in isolation.
As Brewster Kahle says, “If you want to solve hard problems, have hard problems.” The integration of information-centric and social-centric views of the world will be awfully valuable, if Google gets them right.
I’ve been meaning to write a paper on “The Significance of ‘Social Software’” for some time, but… In the meantime, I’ve written an abstract for public criticism.
In 2002, Clay Shirky (re)claimed the term “social software” to encompass “all uses of software that supported interacting groups, even if the interaction was offline, e.g. Meetup, nTag, etc.” (Allen). His choice was intentional, because he felt older terms such as “groupware” were either polluted or a bad fit to address certain new technologies. Shirky crafted the term while organizing an event - the “Social Software Summit” - intended to gather like minds to talk about this kind of technology.
Although Shirky’s definition can encompass a wide array of technologies, those invited to the Summit were invested in the development of new genres of social technologies. In many ways, the term took on the scope of that community, referring only to the kinds of technologies emerging from the Summit attendees, their friends and their identified community.
The term proliferated within this community and spread on all fronts where this community regularly exercises its voice, most notably the blogosphere and various events, including the O’Reilly Emerging Technologies Conference (Etcon). These gatherings, most notably the social software track at Etcon, serve to reinforce the notion that social software primarily refers to a particular set of new technologies, often through the exclusion of research on older technologies.
Although social software events include only limited technologies, people continue to define the term broadly. Shirky often uses the succinct “stuff worth spamming” (Shirky, 10/6/2004) while Tom Coates notes that “Social Software can be loosely defined as software which supports, extends, or derives added value from, human social behaviour - message-boards, musical taste-sharing, photo-sharing, instant messaging, mailing lists, social networking” (Coates, 1/5/05).
Given the emergence of blogging over the last few years and the large audiences of many involved in the community of social software, this term and its definitional efforts have spread widely, much to the dismay - if not outrage - of some. The primary argument is that social software is simply a hyped term used by the blogosphere in order to make a phenomenon out of something that always was; there are no technological advances in social software - it’s just another term that encompasses “groupware,” “computer-mediated communication,” “social computing” and “sociable media.” Embedded in this complaint is an argument that social software is simply a political move to separate the technologists from the researchers and elevate one set of practices over another. Shirky’s term is undoubtedly political in that it rejects other terms and, in doing so, implicitly rejects the researchers as irrelevant.
While the term social software may be contested, it is undeniable that this community has created a resurgence of interest in a particular set of sociable technologies inciting everyone from the media to entrepreneurs, venture capitalists to academics to pay attention. What is questionable, and often the source of dismissal from researchers, is whether or not the social software community has contributed any innovations or intellectual progress.
A high profile experiment for the low end of media launched today in Backfence.com. The classic problem of local media is the cost of production relative to the scale of distribution. You can’t send reporters to every Little League game, and only a subset of the local community is interested in the coverage. MSM doesn’t touch this untapped segment. Apply a little social software to enable participatory journalism and you could get local social media — not only changing the economics of production and distribution and tapping the edge between local classifieds and yellow pages, but fulfilling our need to efficiently participate in local community.
That’s the promise, anyway. I had a chance to meet the co-founders, Mark Potts and Susan DeFife, and admire their community vision. They are starting with McLean and Reston, Virginia with a simple and clean ColdFusion site. At launch there are a couple of bugs that prevent posting to news, but the scope of features is ambitious. Members post news, express blog-like voices, contribute to a wiki-like community guide, share photos openly, add events to the calendar and can post classified ads. The Yellow Pages is coming soon.
Interestingly enough, one bit of news is whether locals think a Metro line to Dulles Airport is worth their local tax dollars, whereas travelers and the greater metro area wouldn’t hesitate to say yes. These are the conversations that usually remain in coffee shops; perhaps now they can become news. Jay Rosen and others will have more…
Gotta love a setup like this. Korby Parnell frames his recent discussions with me about the backchannel this way:
Liz Lawley called me on my indignant, gut-level reaction to back-back channels: those secret cabals where the “popular kids” congregate in virtual space to bitch and bemoan the sophomoric inadequacies of everyone else. Liz, I’m holding my ground: social software should enable, but not by default, the creation of back-back channels. IMO, the back-back channel is as anti-social as it is social. This issue is very relevant to a project I’m working on… When you and your family make the move to Redmond ;-), we should meet at Victor’s Coffee or on campus to debate this issue in greater detail. Congratulations on your new job! JFYI, as a member of the Redmond Planning Commission I will be happy to provide as much information as you’d like in deciding whether to locate here, especially with regards to neighborhoods, parks, schools, natural features, and planned development, both now and 20 years into the future.
My response: Huh?! The “popular kids” in whose book? (If you’re talking about last year’s MS symposium, some of the people in that back-back-channel were among the least well-known of the participants.) By whose account did you determine that the people in the backchannel “bitch and bemoan the sophomoric inadequacies” of their colleagues? (Probably not anyone who’s actually participated in one.) Gol-lee, I wouldn’t like a place like that either, Korby. (And you know that!)
You’re setting up a straw man here. You’re assuming that private is necessarily elitist, and that anything people don’t want made public is necessarily mean-spirited. At the symposium, I asked you why you saw IRC as different from other contexts where people can break off into smaller, private groups. Are private, friends-only LiveJournals (which are as easily enabled in LJ as “back-back-channels” are in IRC) something you find as distasteful? Are a group of friends sitting together at a dinner elitist? Should we assume that if two people walk out into the hallway to talk that they’re bitching and moaning about the sophomoric inadequacies of those they left behind?
Of course people can use IRC to say mean things about each other. They can also use IM, email, hand-written notes, and whispers to do the same. So, why does this particular technology evoke such a strong reaction? (Not just in Korby, but in many people I’ve spoken to.) That in and of itself is something worth understanding.
(An up-front disclaimer: Korby is smart and funny and delightful to spend time with, and I’m not trying to pick a fight here any more than I was at the symposium!)
5/4 Update: Let me clarify that what Korby is talking about is not the public, open backchannel that’s increasingly becoming available at conferences and symposia. He’s talking about side conversations that break off from the main group, and that aren’t publicized. He feels that the software should “announce” private meetings that form in that way, and I disagreed. There’s value in allowing people to meet and talk privately, I think, and “calling them out” by default strikes me as invasive. I’m also troubled by the underlying assumption that private is more likely to be negative or “anti-social” than public.
Some of us talking about tagging have launched a group weblog called “You’re It: A blog on tagging,” at tagsonomy.com. (Authors are Christian Crumlish, David Weinberger, Don Turnbull, Jon Lebkowsky, Kaliya Hamlin, Mary Hodder, Timo Hannay, and me.)
My introductory post there pointed to my earlier tagging articles at M2M. My first real post is a response to Tim Bray’s question: “Are there any questions you want to ask, or jobs you want to do, where tags are part of the solution, and clearly work better than old-fashioned search?” I think the answer is Yes, and try to delineate some of the reasons why.
(And, because tagging straddles social and organizational concerns, I’ll have to figure out when to post here vs there, but I’m planning to x-post pointers generally.)
For Creative Commons to start using BzzAgents is, not to put too fine a point on it, a betrayal of the work done by grassroots activists who are genuinely concerned about the state of copyright today. The people who have been working hard on promoting CC, who are contributing CC material to the ever-growing commons, who are writing about copyright reform, putting together seminars and events, these are CC's 'buzz agents', and they do all this work for free, because they believe on a fundamental level that it is important.
BzzAgent and undercover marketing are, in a word, creepy. The premise is that people will go to social events or places where people gather and have conversations with people, judge whether there is a chance to discuss a product that that person has been tasked with mentioning, and bring it up as naturally as possible. [...]
Their top 100 agents page highlights someone who interrupts a conversation about politics to talk about what shoes the politicians were wearing.
Why do they feel so betrayed?
I think this is because BzzAgents crosses the line between the two moral syndromes that Jane Jacobs identifies in Systems of Survival - the Guardian syndrome, which is based on loyalty and social groups, and the Commercial one, which is based on honest dealing and collaboration with strangers.
By giving people incentives to subvert social situations for their paying customers, BzzAgents criss-cross these lines thoroughly. Petulantly calling people liars when they mention their distaste for this sits ill with a professed desire for "honest, authentic word of mouth".
The vibrant growth of the French blogosphere is something to behold. French is the second largest language in the blogosphere, and half of students in France blog. This is due, in no small part, to Skyradio telling their listeners to Skyblog what they think at most commercial breaks -- a multi-million dollar advertising investment from an MSM to make blogging cool. Effective, considering they have 1.5 million bloggers according to Pierre Bellanger's presentation. Wonder what will happen when they begin podcasting.
I really enjoyed the contrast Jochen Wegner provided in his presentation on how Germany needs a second pope. Basically, nobody blogs in Germany despite their population and broadband penetration. He implied that there hadn't been an event, or celebrity, or major marketing push to help it along. Could also be similar to when I asked Orkut why Estonians were the sixth most populous nationality on Orkut, a population the size of Skybloggers -- he said one of his good friends was Estonian. Adoption happens from social networks of founders plus mass event exceptions.
The Germans I spoke to said wikis were far more popular than blogs, and they credited Wikipedia (the German version is the second largest), which is both a network and a mass driver.
One of the recurring conversations at Les Blogs, beyond metaphysical notions of what is a blog, is why doesn't everybody have a blog? Lots of blog pundits are quick to agree that the real action isn't blogs as publishing (aside: Doc's presentation put the nail in content instead of conversation) -- but chatter with friends that happens to be in the open. We have explored this as part of the network structure, demographics, interests, everything. Barak from 6A noted that focus groups show people consistently think of bloggers as people who are self-important and have too much time on their hands. My wife, who was outed as part of the community this week, and is my favorite focus group, agrees violently. And nobody gives a damn who has more traffic than whom.
However, the reason I cringe when toolmakers say all the action is in the skinny part of the power law (uh, long tail) is that the toolmakers haven't followed through. Two notable exceptions are LiveJournal and Flickr. We all know that social networking (especially as a filter) is due to merge with blogging. However, one consensus from insiders over the past week was that tool innovation significantly lags social practice. I'd suggest this is where toolmakers will catch up over the next year or so.
Caterina made the claim that not everyone has something to write, but everyone can take snapshots. All true, and the tech makes it dreadfully easy. Time-spread media like audio and video have a tougher time until editing is emergent. But people who use computers are generally literate enough to write a letter to friends.
Back to the rest of the world. Not every country has a salon culture. Some are waiting for inflections of networks and mass. Many, like Iran, are oppressed and don't have events to move their voices. Some still look for a third way, like what I can't wait to see emerge from countries like Korea.
The story at Les Blogs wasn't some hot heads from the network core coming over to barf up panel sessions that have been heard before. It was the mix of cultures at a moment in time that expect a day when we all write what we really think through the web.
Notes from a talk by Yossi Vardi of ICQ at Les Blogs.
20 million bloggers are not journalists; what are they? They want to fulfill a human desire for self-expression. ICQ was founded by four Israeli kids who wanted an indication of when their friends would enter a chat room. Initially they bet they might have 3k users; now it is approaching 400 million. ICQ 297M, Jesus 277M and Bible 250M mentions on MSN.
I'm not of the digital generation. When arguing over a feature in ICQ he didn't understand, the kids said, "it doesn't matter, your generation is dying anyway." If I tried to understand ICQ use (14 days a month is 6 1/2 hours a day). Those up to the age of 35 thank me, and if they are above 35 they say, "my daughters..." Can't reduce the human user experience down to an algorithm, otherwise anybody could copy it. However, there are 3-4 major forces on the Internet:
Most people want to get Joi's video and share it with others -- we have a need, a desire to share; it gives us comfort to collaborate. We used to pay an unjustified premium to rhetoric. Imagine if in every class there was a backchannel. Now everyone is in charge, can create and express themselves. If you want to understand blogging, understand social software. The killer app on the Internet is people. It provides tools for people to enhance their social potential. Other than the telephone (communicate) and telegraph (collaborate) -- we didn't have much of an invention before it.
Social signals in presence. At Yahoo IM, the most desired feature is seeing the song their friends are listening to. What I am doing now, generally, synch/asynch, on all the time. Facebook doesn't provide dating, they provide social signalling and social cues.
Social software like Flickr takes the power to create APIs out of the hands of programmers and gives it to the general public, creating a whole phenomenon of innovation without having to create. Blogs will be an interface for many applications.
Enhancing reputation and verification: Hal Varian in Information Rules: when you want to consume an experience good, you know whether you want it only after you have consumed it. How do you know if a restaurant, theatre or book is a good one?
In an Atlanta study, 32 women played the Prisoner's Dilemma; their brains released five times more dopamine when they collaborated. We get more satisfaction when collaborating than competing.
Del.icio.us has a feature in beta that lets you collect a set of your tags into a “bundle” that then shows up at the top of your personal page. For example, if you declare the tags “parody,” “sarcasm” and “puns” to be part of a “humor” bundle, all three of those tags will be listed under a big, bold “Humor” on the right hand side of your del.icio.us home page. You can create a bundle by going to http://del.icio.us/settings/YOURUSERID/bundle.
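A bundle is really just a one-level grouping layered over a flat tag list. Here is a rough sketch of that grouping (the function, the bundle names, and the tags are all illustrative; this is not del.icio.us's actual code):

```python
def bundle_tags(bundles, tags):
    """Group a flat list of tags under bundle headings.

    `bundles` maps a bundle name to the set of tags it claims;
    tags not claimed by any bundle fall under "(unbundled)".
    """
    grouped = {name: [] for name in bundles}
    grouped["(unbundled)"] = []
    for tag in tags:
        for name, members in bundles.items():
            if tag in members:
                grouped[name].append(tag)
                break
        else:
            grouped["(unbundled)"].append(tag)
    return grouped

bundles = {"humor": {"parody", "sarcasm", "puns"}}
tags = ["parody", "sarcasm", "puns", "politics"]
print(bundle_tags(bundles, tags))
# → {'humor': ['parody', 'sarcasm', 'puns'], '(unbundled)': ['politics']}
```

The point of the sketch: bundling doesn't change the tags themselves, it only adds a presentation layer on top of them, which is why a tag can sit under a bold heading on your page while still behaving like any other tag.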
The new form of the problem can be described in terms of a game which we call the "imitation game". It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
"My hair is shingled, and the longest strands are about nine inches long."
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
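The protocol Turing describes can be sketched as a tiny simulation, purely to make the roles and labels concrete. The stand-in "players" below are trivial canned-answer functions (my invention, not anything from Turing's paper):

```python
import random

def player_a(question):
    # A tries to cause the wrong identification.
    return "My hair is shingled, and the longest strands are about nine inches long."

def player_b(question):
    # B tries to help the interrogator.
    return "I am the woman, don't listen to him!"

def imitation_game(interrogate, rng):
    """Hide A and B behind the labels X and Y at random; return True
    if the interrogator correctly names the label concealing A."""
    labels = {"X": player_a, "Y": player_b}
    if rng.random() < 0.5:
        labels = {"X": player_b, "Y": player_a}
    guess = interrogate(labels)       # interrogator answers "X" or "Y"
    return labels[guess] is player_a

def naive_interrogator(labels):
    # Ask both hidden players the same question and guess from the answers.
    for label, player in labels.items():
        answer = player("Will X please tell me the length of his or her hair?")
        if "shingled" in answer:
            return label

print(imitation_game(naive_interrogator, random.Random(1)))
```

Replacing `player_a` with a machine, as Turing proposes, changes nothing in the protocol itself; the interrogator still only ever sees typed answers behind the labels X and Y.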
Over the years the gender aspect of this was forgotten, and 'Turing Test' came to refer to computers impersonating people over live chat, and being quizzed about it.
[They] created a web site, which announced an opportunity to participate in an online gender-guessing game. The participants were asked to chat with two companions over AOL instant messenger for five minutes, and then to guess which was a man and which was a woman. In order to attract these prospective interrogators, the organizers publicized their web site widely in a number of online communities, but specifically avoided any reference to bots, A. I., the Turing Test, or anything else that might give away the deception. Any prospective interrogators who indicated a suspicion or knowledge of Turing Tests were disqualified.
I'm interested to see how many participants did realise one of their interlocutors was a bot - in my case the first question I asked made the bot give it away:
Kevin Marks: so how did you find out about this game?
user593867: Dr. Richard S. Wallace programmed me for it.
Evidently more deceptive bots are needed...
I look forward to their paper, but in any case, re-reading Turing's paper is well worth doing, covering as it does emergence, genetic algorithms, learning machines, and Gödel's incompleteness theorem in lucid and coherent prose. Turing has always been one of my heroes.
Web-based aggregation network Rojo came out of Beta today. Been playing with a preview version and have to say it’s a nice re-design and a simpler way to share while reading. In effect, they are trying to blur the line between blog writer and reader — emphasizing a social network of readers that tag and share.
Therein lies the strength and weakness, as it is trying to be many things to many people. Some bloggers will note that they engage openly in the same activities as readers in the course of writing and linking — contrast with blogging and del.icio.us as more open infrastructure. Some readers still view it as an entirely private activity. On the other hand, Rojo may introduce more people to sharing on the web — just as social networking got more people to express at least a facet of their identity, and Flickr did for photo sharing.
Wherein lies the threat and opportunity. The threat is that more accessible models from an ecosystem of tools may gain faster traction. The opportunity is that this is a well implemented tool that is a great fit for distribution by established media companies. The prospect for a branded aggregator with modest viral attributes to engage readers with purposeful sharing activities while accreting metadata is pretty interesting.
We’re seeing more and more ways to connect, and no one mode is all of the story. The virtual communities I hang out within these days are more fluid and less enclosed than the conversations on the WELL, and you can’t zero in on a single technology or mode that the typical community uses. They may have conversations via their blogs, collaborate via wikis, have realtime discussions via chat, do email and IM, have conference calls, find each other in social network sites, share bookmarks via del.icio.us and photos via flickr.com, etc. What’s happened is that communities are no longer tethered to specific technologies or virtual places. They find many ways to connect, and they keep searching for more.
He summarizes: We often argue that blogs are conversations and that blogs in aggregate work as platforms for online community, but they really are less conversational than dedicated discussion forums, so if you focus on blogs alone, it’s harder to get the sense of community that you have in more traditional virtual spaces like the WELL.
What’s your take on the changing sense of community? Are these less conversational forms?
The second half of Larry Sanger’s piece on Wikipedia and Nupedia is up. I haven’t even read the whole thing yet, but it’s fascinating, especially as it goes considerably deeper into the governance issues.
It is one thing to lack any equivalent to “police” and “courts” that can quickly and effectively eliminate abuse; such enforcement systems were rarely entertained in Wikipedia’s early years, because according to the wiki ideal, users can effectively police each other. It is another thing altogether to lack a community ethos that is unified in its commitment to its basic ideals, so that the community’s champions could claim a moral high ground. So why was there no such unified community ethos and no uncontroversial “moral high ground”? I think it was a simple consequence of the fact that the community was to be largely self-organizing and to set its own policy by consensus. Any loud minority, even a persistent minority of one person, can remove the appearance of consensus.
It has all of the benefits and disadvantages of being written by someone present at the creation: the details of early choices are fascinating, while the score-settling is a bit tedious. (He takes Daniel Pink to task for misquoting the tiny number of finished Nupedia articles, even though the gap between Wikipedia and Nupedia covers orders of magnitude.)
What’s most fascinating, though, is not the historical element, but Sanger’s own position. He understands why Wikipedia works and Nupedia didn’t, and yet is constantly maintaining that the Wikipedia would benefit from being more like the planned Nupedia:
This point bears some emphasis: Wikipedia became what it is today because, having been seeded with great people with a fairly clear idea of what they wanted to achieve, we proceeded to make a series of free decisions that determined the policy of the project and culture of its supporting community. Wikipedia’s system is neither the only way to run a wiki, nor the only way to run an open content encyclopedia. Its particular conjunction of policies is in no way natural, “organic,” or necessary. It is instead artificial, a result of a series of free choices, and we could have chosen differently in many cases; and choosing differently on some issues might have led to a project better than the one that exists today.
I have a hard time understanding how a loosely bound community, choosing among available options, isn’t an organic process, but Sanger has always been convinced that setting and enforcing a Nupedian-style respect for authority was a) possible for Wikipedia and b) desirable for Wikipedia. (I’ve disagreed with Sanger on both points in the past, but based on a less complete re-telling than this looks to be.)
In any case, since the whole piece isn’t yet published, it’s too soon to see how the various themes will develop, but for anyone following Wikipedia, this will be a key piece of writing.
Original research is welcome, but not required. Be bold in your submissions! Wikimania is meant to be both a scientific conference and a social event. Relevant topics include:
* Wiki research: How do wikis, and the Wikimedia wikis in particular, operate? Which processes scale and which ones don’t? What kinds of people or social structures are well-suited to wikis? How does introducing a wiki into existing project groups change group dynamics?
* Wiki sociology: What motivates Wikimedians and what drives them away? Who are they, anyway? And where do they come from?
* Wiki critics: Critical positions are welcome: why Wikipedia will never be an encyclopedia, why Wikinews can never substitute newspapers, why amateurs shouldn’t be allowed to edit, and so forth.
* Wiki technology ideas: What can we do to address perceived and real problems, for example, peer review? How can we provide better-nuanced or more immediate user feedback?
* Wiki software ideas […]
* Wiki community ideas […]
* Wiki project ideas […]
* Wiki content ideas […]
* Multimedia […]
* Free knowledge […]
* Collaborative writing […]
* Multilingualism […]
Matt McAlister explains that the Infoworld.com upgrade isn’t merely cosmetic: On the articles pages they’ve moved from a fixed taxonomy that took them a lot of time to develop to a semi-structured tagging system:
What I like most in this new architecture is that the related links are now driven by del.icio.us. Our edit team is tagging content in del.icio.us. The engineers are pulling down the del.icio.us RSS feeds. And then we create matching logic based on the common tags. We also link back out to del.icio.us pages via the tags for the article on display.
This is a first step with several more ideas for leveraging tags coming soon. We need a more densely tagged data set behind us before some of the other plans can become real. The accuracy of the related links will also be a little shady, I’m sure, until we get more sophisticated with our tagging. But we’re all excited about the possibilities for the site now that we have these tags. New ideas seem to crop up daily.
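The "matching logic based on the common tags" Matt describes comes down to scoring other articles by how many tags they share with the one on display. A minimal sketch of that idea, with made-up article titles and tags (the real system works off del.icio.us RSS feeds, not an in-memory dict):

```python
def related_articles(current_tags, catalog, limit=3):
    """Rank articles in `catalog` by the number of tags they share
    with `current_tags`; articles with no overlap are excluded."""
    current = set(current_tags)
    scored = [
        (len(current & set(tags)), title)
        for title, tags in catalog.items()
        if current & set(tags)
    ]
    # Most shared tags first; break ties alphabetically by title.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [title for score, title in scored[:limit]]

catalog = {
    "Oracle ships RFID pack": ["oracle", "rfid"],
    "Sybase joins RFID race": ["sybase", "rfid"],
    "Office suite roundup":   ["office", "software"],
}
print(related_articles(["oracle", "rfid"], catalog))
# → ['Oracle ships RFID pack', 'Sybase joins RFID race']
```

Note how the density problem Matt mentions shows up even here: with only a handful of broad tags per article, overlap scores are coarse, which is why a more densely tagged data set makes the related links less shady.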
Fascinating. Matt also talks about the intersection of tagging and marketing.
So, see Ephraim Schwartz’s article on Oracle and Sybase offering RFID integration. To the right is a “See Also” box that lists the article’s tags: Ephraim_Schwartz Oracle_RFID Sybase_RFID. (You can also click on “Complete List of Tags,” which takes you to Infoworld’s del.icio.us page.) The Oracle_RFID link takes you to the del.icio.us list of pages Infoworld has tagged as “oracle_RFID.” It being del.icio.us, that page also shows all the articles every other del.icio.us user has tagged that way. (The fact that zero non-Infoworlders have used that tag tells me it’s a tad overly specific. Why not tag the article “oracle” and “rfid” instead?)
I’m not sure what it means that Infoworld is applying matching logic to del.icio.us feeds. Does that mean they’re looking at tags from non-Infoworlders?
In any case, this is exciting because a high-traffic site that lives and dies by content is trusting the looser bonds of tagging to help us explore what’s related. And if Infoworld is using del.icio.us to include related links outside of their site — even if they don’t, because Infoworld is using del.icio.us we can do that for ourselves — then we have a great example of the social power of links: The owners of the information are no longer the sole proprietors of the organization of that information.
First, I admire the message Scott Heiferman, founder of Meetup.com, posted on the site explaining the change. It’s straightforward and frank. I know Scott a bit (we’re conference buddies at least) and I know that Meetup was founded to realize an ideal, not to make a quick buck. So, I assume that the company is facing some serious financial issues.
But I’m afraid that charging each meetup’s organizer $19/month ($9/month if you sign up before May 1) is going to alter the social dynamics that helped Meetup become such an important part of our infrastructure.
First, it creates a serious obstacle to people founding a group on hope or curiosity: $19 is a lot to answer the question “I wonder whether anyone else in my town wants to talk about Chad Everett?” (Meetup could fix this by offering the first three months for free.)
Second, as the FAQ says, “The Group Fee will weed out less committed groups.” But why is this a good thing? Committed groups often grow from less committed groups. And some committed groups — not to mention seasonal ones — go through slack periods. Now it’s less likely they’ll survive.
So, if I were Meetup, I’d be worried that Craigslist will be the new Meetup. Initiating charges that apply to established Meetup groups is going to abrade the good will Meetup has earned. And while Meetup has added lots of services for groups and their organizers, some good percentage of people are obviously going to prefer freeness to servitude.
I appreciate as a member and as an observer what Meetup has been doing for us. I hope lots of people stick with it and sign up anew. But I’m worried. And I’m sure Meetup is, too.
Spent the first half of this week at Buying and Selling eContent in Scottsdale and the Gilbane Content Technologies Conference in San Francisco. Provided some Guerrilla Event Wifi and, though I wasn’t on conference blogging duty, took some notes:
Quite a mindwarp to go from the Open Source Business Conference to be exposed to industries with top-down enterprise applications and DRM models of monetization. Wonderful to hear praise from a content buyer at Pfizer for Open Access, Factiva is in beta with RSS and a desire for content licenses that let enterprise users freely remix and share. Bizarre how some XML gurus can’t wrap their heads around the beautiful mess of social software or even fully grok last year’s lessons of blogs undermining CMSs.
There are real needs for the boring stuff like directory, monitoring, backup and storage to fulfill the promise of collaboration at scale. There are real data integrity issues for adding structure in erstwhile unstructured enterprise apps. There are enterprises beginning to see their problems as opportunities for innovation. I’m starting to feel like an old guy with an ever-evolving product that has been in the market for two years now. Many still need to hear the basics (ppt), but the conversation quickly leads to real issues and an interest in driving adoption. We, not just my company, are starting to shake up the enterprise market for good.
The Content Industry repeats the familiar refrain of those that avoid commoditization. In the absence of business-level standardization (contracts), the market is flocking to the free (where you need no contract, and people are happy to produce). Technology providers seem to focus on managing complexity at cost, without seeing the importance of practices and the willingness of users to play a role when it’s made simple. Both of these issues center on trust, but spillover has yet to occur aside from some key early standards work. Meanwhile, simpler and empowering alternatives are arising from the bottom-up.
Hmm, now people have a choice. They can donate their time and energy to a nonprofit effort to make the world a better place by giving away an encyclopedia under a free license. Or they can go to work for free, enriching Microsoft.
I wonder what the most talented and dedicated people will choose. :-)
Funny how Microsoft never came up in the list of potential donors to Wikipedia alongside Google and Yahoo. I signed into Passport and tried adding some facts about Microsoft being a convicted monopolist to the Bill Gates entry; it is still pending editorial approval.
I also took notes on a panel on community practices with Brian Behlendorf from Apache/Collabnet, Josh from PostgreSQL, Chris Hoffman from Mozilla, Larry Wall from Perl and David Wheeler from Bricolage that may be of interest.
Like many over the past few months, I have happily filled my aggregator with persistent queries from the likes of PubSub, Newsgator, Technorati and Feedster. At first it was ego surfing without leaving the couch. Now I'm creating lots of queries for even short term memes I want to track. There is a lot of buzz about
One of the many disturbing points a Spammer made when interviewed by Chris Pirillo was that they could even spam RSS. Chris said something to the effect of, "bullshit, there is an unsubscribe button." But when he explained that RSS provided perfect fodder for creating blogs that looked real, there was an Oh Shit moment. No need for scraping, blogging has structured it for you.
All this clicked for me recently when I noticed an uptick in stupid fake blogs in my pretty smart feeds (I am not linking to examples). All that persistence is pretty easy to use for spam. Of course, there will be countermeasures, as with any spam war. A link-based reputation and confirmed ties beat the heck out of black- or white-listing. But it is a shame when social software is a victim of its own openness, when you have to sacrifice your peripheral vision for greater focus on nagging problems. Ah well, at least I can still subscribe to my friends, and some of them have time to filter for me.
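The countermeasure mentioned here, reputation built from confirmed ties rather than black- or white-lists, can be sketched as a toy filter: trust your own subscriptions outright, and extend provisional trust to feeds that enough already-trusted feeds link to. This is only an illustration of the idea; the function and feed names are hypothetical, not any real aggregator's algorithm.

```python
# Toy link-based reputation filter for feeds: a sketch of the idea,
# not a real aggregator's implementation.
def trusted_feeds(subscriptions, links, threshold=2):
    """subscriptions: set of feed IDs you chose yourself.
    links: dict mapping a feed ID to the set of feeds it links to.
    A feed becomes trusted once at least `threshold` trusted feeds
    link to it; trust propagates until the set stabilizes."""
    trusted = set(subscriptions)
    changed = True
    while changed:
        changed = False
        votes = {}
        for feed in trusted:
            for target in links.get(feed, set()):
                votes[target] = votes.get(target, 0) + 1
        for target, count in votes.items():
            if count >= threshold and target not in trusted:
                trusted.add(target)
                changed = True
    return trusted
```

With two subscriptions that both link to "carol" but only one linking to "spamblog", the filter admits carol and keeps the fake blog out, which is roughly the peripheral vision the post is mourning.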
While at first it will not seem on-topic for M2M, here are my notes from a talk by one of my favorite people. Geoffrey Moore at OSBC.
I’m a little bit of a late arriver at this party. Personally, a late adopter. You want to catch up when you are late, but I don’t think sobriety is your strongest suit. Want to talk about what you look like to someone coming late to the open source cultural, personal and technical movement. And why are we where we are now?
Phil Gyford, in With great audiences…, wonders what it takes to get a story propagated in the weblog world, and is afraid that the answer is merely ‘attention grabbing headline + a patina of Old Media validity.’ He writes about a “banning blogging” story picked up from the traditional press, where the weblog…
…got carried away with the newspaper’s headline, repeating it in theirs even though a cursory read of the newspaper article reveals that no one “banned blogging.” The newspaper claims the principal doesn’t think blogging is educational, and Cory could certainly have criticised him for this alone, although it would make for a less dramatic post. The repetition of the lie about the principal banning blogging, rather than his apparent opinion, is possibly also what prompted a reader to suggest people should email the principal to complain.
The pressure to give things a dramatic headline, online or off, is tremendous, because if you don’t get readers with the headline, you won’t get them at all. This leads, in the weblog world, to a curious moral hazard, where fact-checking can be left to the furthest upstream source. “Well, if the Osceola Star-Ledger, with their enormous resources, can’t fact-check the article, how can I be expected to???” And so we get ourselves in high dudgeon at injustices that may never have happened, because they are the kind of thing we would hate if they had happened.
This contrasts with the magnificent distributed fact-checking done elsewhere, as with the Trent Lott or Dan Rather investigations. The choice to fact-check vigorously, even when a story is reported by well-funded news outlets, seems only to happen when the writers in question disagree with the story, while the decision to accept the fact-checking of any traditional media outlet, in order to be able to fast-forward to the aforementioned high dudgeon, seems to come when the weblogger likes repeating or even amplifying the claims made further upstream.
It seems harmless, except that many of the subsequent references weren’t about ‘toothing per se (understandable, as there was nothing to study), but rather referenced ‘toothing as one member of a set of activities mobile technologies enabled. ‘Toothing went from being a thing to being a touchstone for reasoning about mobile technologies generally.
A couple years ago, I spent some time on the trail of the urban legend that half the world had never made a phone call. While ‘toothing was never likely to achieve that degree of saturation, it was, like the ‘half the world’ phrase, a distortion not only in itself but as the avatar of larger social patterns.
Pausing only to spill some famous London ale down the front of his XXL-sized rugby shirt, Barry outlined some key points in the rapidly-evolving lexicon of British desire. “So what you do, right, is you spot a nice tart over by the bar and you think, lovely, I’ll have a bit of that. And you tip her the wink, you know? And then, if she looks back at you, she’s gagging for it.”
“Just like Bluetooth signalling,” I commented as I tapped hurried notes into my Zaurus. “Ingenious!”
One lesson we could all take from this is “Pay more attention to Yoz”, which couldn’t hurt, but a better motto is ‘WWYD?’ Note that he didn’t fact-check the ‘toothing story, he sense-checked it. The thing wrong with the ‘toothing story isn’t that the participants of the ‘toothing scene aren’t IDed; it’s that the story itself doesn’t make any sense. Most of us will not be able to afford the calling and re-calling of sources to double-check a quote, but all of us can ask ourselves, just before we hit Submit, ‘Is this true?’
And the time we should be most careful to do that is if we feel really satisfied with what we’ve written — “How dare the House of Representatives propose a mandatory bar code tattooed on the foreheads of liberal bloggers!!! Must. Denounce. Now.”
All the phrases we use to separate the weblog world from other media outlets weaken with elapsed time — old media, new media, traditional media, all of it suggests that newcomers join the club when they’ve been around long enough to be familiar. As weblogs continue their symbiosis with the forms of media that went before, we will make ourselves targets of truly malevolent hoaxes if we simply decide to repeat what we agree with. The echo chamber is of far less danger overall than unchecked amplification.
We can all come up with ways to justify even our worst behavior. This is why i’m always a bit wary of “don’t be evil”-esque mantras. Evil on what terms?
When i heard about Wordpress’ questionable practices, i couldn’t help but sigh. I totally agree with Waxy’s request that we not engage in angry mob justice. That said, i’m very concerned that folks are justifying, defending or explaining Matt’s decision (ex: 12). He is a nice guy - i totally agree. And perhaps we should all be very defensive of nice guys who are friends or friend-of-friends. But he did fuck up. And he did use our collective social capital for his personal gains.
I don’t want to talk about should’ves but i want to talk about what ethics we are promoting and what happens when we rake companies/enemies over the coals for similar behavior….
I hardly know what to make of this — Waxy.org has discovered that WordPress, the great open source blogging platform, has been pimping out its highly rated home page to an SEO (Search Engine “Optimization”) firm, effectively selling the community capital it built up to spammers by “publishing” articles that are hidden to users but visible to spiders.
There’s also a bizarre defense of this practice on Planet Wordpress, on the grounds that WordPress needed money to grow, and wasn’t getting it from donations.
This is such an interesting and uncharted area — as the net gets bigger and karma, previously bottled up in human relations, becomes convertible for real currency, in everything from ZeroDegrees/SMS.ac style spam to real sales of virtual characters to this, we are going to have to find ways to defend against this sort of karmic hijacking.
Hedlund comes away skeptical, noting that the lack of interoperable standards and widely available APIs violate some of the LLH tenets, as with the LLH assertion “Data communications standards are vital.”
Those who do not learn the lessons of Habitat are doomed to repeat them, indeed. In 360, we see this problem, the lack of communication standards, expressed most acutely in the IM sidebar, which lists the online status of all of your buddies — excuse me, your Yahoo buddies. You can IM them and send them messages in the system (messages which are like email but not email, so that you have yet a third voice with which to speak to a subset of your friends). Why do I need a web view on my IM buddy list when I have that list on my computer already? If 360 becomes your home, perhaps that would be useful.
The fault here is easy to see with a thought experiment. Let’s say Yahoo 360 were implemented today by a startup, a company without ties or loyalty to an existing body of users. Would they make the same decision? Is it in the best interest of new users of 360 to have their Yahoo buddies be the only ones available for sharing, or is that more in the interest of Yahoo?
Data communication standards are vital, and the lack of them has kept IM from becoming a platform for innovation as email and the web have become. 360 suffers from the lack of a standard just as would any startup, but it hasn’t sought out a solution, as would a company that needed new users to survive.
I’m less convinced than Marc that this is fatal, starting from the premise that much human congress happens within essentially arbitrary divisions like this one — you know your co-workers on the 5th floor or your neighbors on your street better than you know the people on the 6th floor, or on the next block over.
However, I am, like Marc, convinced that this ‘proprietary standards and messaging’ weakness will prevent 360 from becoming a complete digital hub. It may simply be a good fusion of Orkut and fotolog.
Very noteworthy that M2M guestblogger Joshua Schachter has quit his very good job to go full time with del.icio.us, the social bookmarking network that all of us are so fond of. Much of tagging originated with Josh, and he deserves praise for taking this calculated risk.
“Blogs and wikis play opposite roles,” says Martin Wattenberg, a researcher on the collaborative user experience team at IBM Watson Research Center. “Blogs are based on an individual voice; a blog is sort of a personal broadcasting system. Wikis, because they give people the chance to edit each other’s words, are designed to blend many voices. Reading a blog is like listening to a diva sing, reading a wiki is like listening to a symphony.”
Just got email from a headhunter looking for leads for a ‘VP of Social Computing,’ whose job will include building and managing a staff of 75-80 (!) people.
No word on what the company is (though it’s obviously large) and the work doubtless includes a number of more broadcast-oriented efforts as well (e.g. weblogs and RSS as publishing tools as well as conversational ones,) but it was interesting to a) see a VP level hire in this area and b) to see how large a staff is being imagined.
Consumerpedia is Wikipedia for products. It’s in .00000001 alpha, the site says, but it seems usable, albeit empty. (I put in a review of the Thinkpad X40, just to try it out.) The Help page highlights its tools for constructing a hierarchical folksonomy: Anyone can create a category, a sub-category, a re-direct (= synonym), or a related-to (= reciprocal link). It explicitly has avoided creating a top-down categorization scheme.
…As long as we don’t let the ontologists take over and tell us why tags are all wrong, need to be classified into domains, and need to be systematized, this is going to work well, albeit sloppily. What it does is open up ways to find things related to anything interesting you’ve found and navigate not a web of links but a web of tags. At the same time Wikipedia has shown that a model in which content is contributed not just by a few employees, but by self-forming, self-managing communities on the web can be amazingly detailed, complete, and robust. So now people are looking at ways in which the same emergent self-forming, self-administering models of tagging and wikis and moderation can be used for events (EVDB) and for music and for video and for medical information. It’s all very exciting. It is a true renaissance. I haven’t seen this much true innovation for quite a while. What I particularly like about all this is how human these innovations are. They are sloppy. To me tags are sloppy, practical, de-facto ontologies. Wikis are sloppy about changes and version editing. It is accepted that we’re trying new things and that sometimes messes will occur. In short, it is unabashedly creative and imprecise. I’ve always believed in the twin values of rationalism and humanism, but humanism has often felt as though it got short shrift in our community. In this world, it’s all about people and belonging and working with others….
Adam goes on to note that social software gets spammed (nod to Clay), “We got, unfortunately, any application talking to anyone (we call this spam).” He raises privacy concerns and the cost of interruptions to conclude:
It is going to be fascinating and exciting to watch how these tensions play out, namely the rising trend of people working together and collaborating and communicating over the web in increasingly real time ways contending with the human needs for privacy and reflection and with the unfortunate nature of some humans to vandalize rather than to construct.
As things play out, I’d suggest we will see forms of communication more asynchronous than email, the social network employed as a filter, richer forms of presence, easier group forming and reputation used only at large scales.
Today, Yahoo invited a handful of “influencers” to have early access to their new product 360 degrees. Apparently, i’m one of them so i got to sit around a table at Yahoo, learn about the product and speak my mind. I have to say that i’m impressed that Yahoo folks wanted to hear all of our crankiness head-on rather than waiting for it to appear in our random ramblings online. Even better: they didn’t make us sign any NDAs so we can blog all we want. I lurve that.
So, the tool comes out in like a week. I don’t know how final the version that we saw today is, but i thought i’d offer some impressions based on what i saw since i know folks out there are curious.
360 will be invite-only but they are not seeding through employees, rather, they are seeding through active Yahoo users. This is actually very important because frankly, 360 isn’t meant for people like me (or like you). It’s meant for your average not-technically inclined individual who is scared of blogging but wants to share their thoughts, photos, and recommendations with their friends. Thus, before we all get into a blogizzy, it’s important to remember the target.
The feature set that i saw included integrated YIM, a blogging tool, a recommendations engine (linked to local), photos (linked to Y photos, not Flickr) and a social network. It’s all very integrated and emphasizes Yahoo products (although they were talking about connecting it with other products and they are doing some RSS stuff). Throughout all of this are heavy controls for privacy/publication, although it is all strict categorization schemes where you can make things available to groups (think: LJ).
Of course, it has all of the social problems of bi-directional, articulated social networks (nothing solved there). And the controls are really overwhelming. In fact, a lot of the product is overwhelming for the not-technically-savvy and i think that this will be their major problem unless they figure out how to slowly expose things (one of our strongest recommendations). For the techgeek, it will feel like they didn’t go far enough, didn’t have enough features, etc. That’s actually a lot easier to solve than the overwhelming problem and i expect they’ll build new features soon so i think that the techgeeks should wait. But i’m really worried about the novice user because it has many of the problems of blogging, privacy and social networks rolled into one big problem. Plus, you really need to be heavily integrated into the Yahoo network for it to really make sense.
Frankly, i think that they should take the word “blog” out of the picture entirely. While the service allows you to share your materials with layered groups of friends, the term ‘blog’ is intimidating to the mainstream who see it as publishing or otherwise uber-public. Since Yahoo isn’t requiring uber-public, i think that they should get rid of the term. We’ll see what happens.
Anyhow, my general impression is that i’m wary, but i don’t think that this is for me and i think it will be nice for the heavily integrated Yahoo user.
Once I returned home, however, I discovered that I had suddenly been added to the “KM Cluster” mailing list. The reason? John Maloney from Colabria (hmmm…I’m starting to like the nofollow thing already…), another of the speakers at the conference, had added my email address to mailing lists used to advertise books and upcoming workshops. In fact, my name was added three times; once with the address on my card, once with the address provided to attendees as part of the participant list, and once with the form of my address that often appears in my return address.
This isn’t the first time someone has done this—taken my contact information from a conference attendee list and put me on a mailing list without my permission. And it drives me totally nuts. To me, that’s a serious breach of conference etiquette, one that will drive people to stop providing their contact information to new acquaintances.
When I complained, politely, to John, he informed me that I could simply follow “common practice” and click the “unsubscribe” button at the bottom of the messages. But as many of you know, that’s often a tool used by spammers to determine whether the email addresses they’re using are legitimate. It’s not, and shouldn’t be, “common practice” to have to opt out of a mailing list that you never chose to be added to.
I’ve also received a spate of messages from Plaxo recently, asking me to update my information so that the person using the system—typically someone I don’t even remember meeting—doesn’t have to go to any personal trouble to ask for my current contact details.
I’m sick of acquaintance spam. It’s not that I’m not willing to be contacted by people I don’t already know. It’s just that I think it should be a personal contact. Don’t add me to a mailing list without asking me. Don’t set up an automated system to harass me for contact info. (Plaxo even sends a “I noticed you didn’t respond to my earlier request” message if you try to ignore it!) This strikes me as such a basic rule of etiquette, whether the contact is personal or professional. Relationships begin with and are maintained through personal interactions. Don’t screw them up by trusting them to software.
Update: John Maloney has responded via email to this post. He feels I’ve misrepresented him, and wants me to “correct” the post. Read on for his take on this….
You just knew this kind of potato salad would happen. BusinessWeek reports on a PARC project, promising the social aspects of the Super Bowl experience without the dropped popcorn and the spilled beer:
The Social TV project is in research stages right now. But the idea is that, with the help of a bit of software, perhaps a keyboard or two and several strategically-placed microphones, people can remotely discuss a TV program while they are watching it. You’ll be able to see which of your buddies is watching which program in his or her house, and join into the viewing. Or, you might start a program-watching session of your own and invite friends.
Indeed, in many ways, Social TV will be similar to the Instant Messenger you already use on your computer. Only it will be more dynamic: Social TV software, located on a device like TiVo or even your TV set, might notice that your and your buddy’s yacking has gone well past the commercial break. The software would conclude that you are no longer watching the show and, perhaps, pause the program until you are ready to resume, says Nic Ducheneau, member of PARC research staff.
The follow-on invention, of course, is a social spam filter that mutes your friends when you are trying to watch TV.
Interesting speculation over on Life With Alacrity about Dunbar, Altruistic Punishment, and Meta-Moderation — Allen discusses work on an agent-based simulation that suggests a phase transition from cooperating groups to Tragedy of the Commons scenarios at ~15 people, a much lower threshold than many of us assumed for the onset of commons-based problems. (My assumption had been closer to 25.)
This is a very interesting result. To explain it in different terms, if you have a system that depends on sharing some commons and there are no process or trust metrics, a group as small as 16 may find themselves not cooperating very effectively.
The idea of commons can be as simple as how much speaking time participants in a meeting share. The time that each participant uses during the meeting can be considered the shared “commons”. If there are no enforced rules, with a group size of 16 there will inevitably be someone who will abuse the time and speak more than their share.
The one big caveat is that this is based on studies of agents, not actual humans, making the results fairly provisional. However, the study at least points to some experimental designs that could be tried with real live groups.
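Allen's simulation isn't reproduced here, but a back-of-the-envelope public-goods calculation (my own toy numbers, not his model) shows how a threshold in that neighborhood can fall out of group size: the private gain from defecting grows as the pot is split among more people, while the expected punishment from a fixed handful of altruistic punishers stays constant.

```python
# Toy public-goods arithmetic, not Allen's agent-based model. Each
# cooperator contributes 1 unit; the pot is multiplied by r and split
# evenly among the n members; a few "altruistic punishers" fine defectors.
def defection_gain(n, r=3.0):
    # By withholding a 1-unit contribution a defector keeps the unit,
    # but forgoes their r/n share of its multiplied value.
    return 1.0 - r / n

def cooperation_stable(n, punishers=4, fine=0.2, r=3.0):
    # Cooperation holds while expected fines outweigh the defection gain.
    return punishers * fine >= defection_gain(n, r)

largest_stable = max(n for n in range(2, 100) if cooperation_stable(n))
```

With these purely illustrative parameters (4 punishers, a 0.2 fine, a 3x multiplier), cooperation stops being stable just past 15 members. The exact numbers are chosen for the example; the point is the shape of the argument, that a sharp breakdown can appear at surprisingly small group sizes.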
If done right, this can be quite beneficial for everyone, especially if, as reported, Yahoo! doesn’t try to swallow it and turn it into Yahoo! photos. Yahoo! has the resources to deal with backend stability which would allow Flickr to focus on iterating based on its users - a skill that i’m very in awe of wrt Flickr.
On a completely selfish note, it is my hope that the gang will finally move to San Francisco where they belong.
RageBoy has discovered that Amazon seems to be rolling out a feature that shows you, for any particular book, which phrases in it are “statistically improbable.” For example, Chris’ own Gonzo Marketing uses the phrases “public journalism” and “market advocacy.” Obviously those are not phrases unique to Chris’ book, so Amazon is doing some sort of statistical analysis to find phrases that are significantly distinctive and prominent within a book relative to other books. Fascinating. And, as Chris points out, these SIPs can serve as machine-generated tags. [Technorati tag: tags]
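One plausible mechanic for SIPs (a guess at the flavor of analysis, not Amazon's actual algorithm) is to compare how often each phrase occurs in one book against a background corpus and rank by the ratio:

```python
# Sketch of "statistically improbable phrase" extraction: rank a book's
# bigrams by how much more frequent they are than in a background corpus.
# This is an illustration, not Amazon's actual method.
from collections import Counter

def bigrams(text):
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

def improbable_phrases(book, corpus, top=3):
    book_counts = Counter(bigrams(book))
    corpus_counts = Counter(bigrams(corpus))
    total_book = sum(book_counts.values()) or 1
    total_corpus = sum(corpus_counts.values()) or 1

    def score(phrase):
        p_book = book_counts[phrase] / total_book
        # add-one smoothing so phrases absent from the corpus
        # don't divide by zero
        p_corpus = (corpus_counts[phrase] + 1) / (total_corpus + 1)
        return p_book / p_corpus

    return sorted(book_counts, key=score, reverse=True)[:top]
```

A phrase like “public journalism” that recurs in one book but rarely elsewhere scores high; common connective phrases score near 1 and sink. A real system would presumably use longer n-grams and a proper significance test, but the ratio captures the “distinctive within a book, rare across books” idea.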
So the many2many crew divided at the 2. The boys went to Etech and the girls went to SXSW. Coincidence? Not entirely. Embarrassing? And how.
One salient detail that didn’t strike me until after talking with danah a couple of nights ago: When I was contemplating submitting a paper to eTech, I called Rael Dornfest, the conference chair, to ask his advice about whether the topic would be better as a panel discussion or a one-person session. I forget what he said, and ultimately my paper was rejected, but the point is that because I know Rael a little (and I’m an admirer, btw), I felt comfortable picking up the phone. I wasn’t thinking, “Hey, time to work the old boys network!” but that’s what I was doing.
Other than that, I don’t have anything to add to danah’s and Liz’s posts. But I didn’t want my silence to be mistaken for disagreement.
In short, i believe that you can’t acquire diversity at SXSW or Etech simply through a CFP. These are networking events; there’s a large body of people working in those spaces who don’t even know about the event, let alone attend. People come because they heard about it from their friends the previous years. Social networks are homophilous, which means that the less diverse an event is, the less diverse it will continue to be over time. And to counter that, you can’t expect marginalized populations to suddenly appear because you ask them to apply - you have to work actively to reverse the slide in diversity. Read my full post to hear out the logic in various arguments. Blind review is not the answer - the problem is far more systemic.
People want answers. Here are some.
Diverse committee (along multiple axes).
Diverse advisory board that will help you brainstorm who to invite.
Active recruitment of diverse populations working in the field.
Identity-driven BOFs or panels if appropriate.
Bring diverse voices to the smaller events too - integrate them into the community because they’re not represented at all levels of the social network.
Please note: i love the members of the Etech committee - some of them are my friends. This is not a problem with them nor should it be read as an attack. It is a systemic problem that affects all of us; perhaps many of you reading this are dealing with it in your own domain. The reason that Liz and i are not being quiet is that we believe that change should happen and we believe that folks like the Etech committee are allies and will work with us to make change if we make it clear that it’s a problem and that there are ways to fix it.
This year, two tech conferences directly related to social computing—SXSW and Etech—were scheduled so close together that many of us with an interest in these topics had to choose between the two. Clay and David and Ross are at ETech. danah and I were at SXSW.
Why did I choose SXSW? The biggest factor for me was the gender balance. Increasingly, I’m finding that I want to be in places where there are women I respect and enjoy spending time with. It changes the nature of the conference experience for me. I feel more at ease, more relaxed, more like I belong.
This year’s Etech is perhaps the least diverse yet. Of the twenty featured speakers on the main page, one is a woman, and none are people of color.
At SXSW, in contrast, strong and wonderful women were everywhere. I don’t recall seeing a single all-male panel. When I hung out in the hotel bar, my companions were mostly women. When I went to the evening parties, everywhere I looked there were other women.
Three of my co-authors here on misbehaving—Gina Trapani, danah boyd, and Caterina Fake—were there. Fabulous women like Molly Steenson and Molly Holzschlag and MJ Kim and Cecily Walker Kidd and Adina Levin and Mary Hodder were there. Not all the faces were male. Not all of them were caucasian. The voices were rich and varied. The vibe was open and warm. There were more conversations than there were pontifications. (SXSW doesn’t call panel participants “speakers,” either, which I like. We’re panelists. A subtle distinction, but one that makes a difference.)
Many of the topics being covered at ETech are things I’m interested in. Ideally, I would have gone to both. But O’Reilly made a decision to move ETech up this year and place it in competition with SXSW—splitting the audience and forcing too many of us to have to make a choice. For me, conferences are far less about the presentations and far more about the people and the connections. And I chose SXSW because it offers me a far richer environment for those connections than ETech.
I’m reminded of a quote from Tom Melcher, formerly of there.com, that I use often in presentations: “If you build a place that women love, the men will follow. The reverse is not true.” Perhaps more conference organizers need to take that line to heart.
(Update: David Weinberger posted about why he’s at ETech, and an interesting discussion about the gender balance there is brewing in the comments of his post.)
(Update 2: Trolls will be disemvowelled. Keep it civil, please.)
A transcript from a talk with Clay, Stewart, Joshua and Jimmy.
Clay: Not a debate about the meaning of folksonomy. This is about allowing a large group of users in on organizing a large volume of material. This is usually a function of professionals. Why did you do this, and what have you observed?
Jimmy: Launched in June; didn't have software to support it before. First few weeks were a madhouse in English. Germans held off, but then the floodgates opened with order. Became more sensible as people adjusted the categories. We let the masses categorize because it's the crazy Wikipedia way.
Stewart: Activity is for the individual first. Because of the word folksonomy, people assume it is for categorization.
Joshua: Started with a text file collection of links. Started putting short descriptions in, a hash mark and some text, to find links. Built a web version so I could point friends at a URL. Then made it massively multiplayer. There is behavior around tags that has nothing to do with categorization: the tag to_read is the quality of the document and the context of the user combined. Groups, workflow, RSS stuff, multiple unintended uses.
Clay: you both emphasized value of individual, what tensions arise and how do you resolve?
Jimmy: Entire community organized around high quality Wikipedia, so tension is between individual and the goal. Category scheme doesn't allow people to categorize individually, which is against the goal.
Joshua: But (with Wikipedia) there is some consensus on how it fits together. Sometimes it's clear, sometimes not. What category something is in may require consensus. In Delicious, Wikipedia is tagged free, encyclopedia and reference; reference is not a word used by Wikipedia itself.
Stewart: less of an issue dealing with the individual than a group. A person went to Tijuana, used the Etech tag, but for everyone else they want something else under the tag. At the group level, need to filter these things out. Pictures of hotel rooms in Tokyo aren't interesting to people looking for Tokyo.
Clay: Circle and square pattern. Some social activity has arisen despite the social bias. People using the comments field within delicious for conversations.
Stewart: First uses of tagging were for group forming on Flickr
Joshua: why distinction between groups and tags?
Stewart: there are differences
Marc Canter: now that we have tags, can we connect them between different systems?
Jimmy: very interested in this, talking with folks at technorati, should share dumps of tags.
Stewart: to a certain extent Technorati is already doing that. Lots of collisions. 200k tags in a shared space, not sure what the utility is.
Joshua: 190k tags, mostly single use. Need more tools to trim the hedges in the data garden. Flickr you tag for yourself, delicious mostly the same, Technorati you are tagging for someone else. Does it make sense for these different kinds of tags to be brought together, need more understanding.
Clay: the pull and reuse model, having Rest-like APIs may make this happen. Bring tags into remix culture.
Alex: how are you giving the user feedback to help their tagging get better?
Jimmy: once you get involved, it's a community of 600-1000 people who do the bulk of the work.
Stewart: In Flickr there are no bad tags. The point is giving people tools that create happy accidents (Ward Cunningham's term) at a global level.
Joshua: two types of feedback, your own tags and the experimental interface that gives you your tags, top couple of tags for the thing you are bookmarking and the intersection between them. Don't want to have people dominated by groupthink.
Clay: User and time as impermissible categories usually. But it allows you, however context dependent, something responsive to user interests.
Stewart: Wikipedia model of large group and core group to develop semantic web approaches might work.
Jimmy: To create a large scale category system, a large group with feedback and monitoring will outperform a small group of experts.
Clay: Switch motivations from intrinsic to extrinsic
Stewart: Philosophical issue of meaning, cleaving nature at the joints.
Joshua: one thing that bothers me about the semantic web is that it doesn't pay attention to what people are actually trying to do. They want to find and remember things. A natural scale. Tagging too broadly or too narrowly serves neither yourself nor groups.
Audience question: What happens when Technorati is searching more tag services?
Jimmy: Google is the real answer to that question.
Joshua: tagging for you to find vs. for others to find
Dozed off on a question about RDF
David Weinberger: trying to make sense of this mass of tags. Need metadata about the tags: who, what, when, where, why? How much meta-meta?
Joshua: if you say this tag is a child of other tags, then we are back to hierarchy. But the thing is they are easy to type and use, a lower barrier to entry. You lose that if you encumber them and make them complex entries.
Stewart: has to happen after the fact; can't force people to specify language.
Jimmy: Cardinal baseball and bird, fits into hierarchies.
Joshua: like that you can type java and perl instead of categorizing. May do two level tags, letting you bundle them.
An impressionistic transcript of Jimmy Wales' talk at Etech on Wikipedia and the Future of Social Computing.
What is Wikipedia and how successful is it?
500k articles in English as of today, 200k in German, 100k in Japanese, and much more: 1.5 million articles across 200 languages, 19 languages with > 100k articles.
350k articles with categories, hierarchical peer reviewed taxonomy. Just barely more popular than the NY times, 500M page views monthly.
The original dream of the Internet and what went wrong
People sharing information freely. Early experimentation was homepages. Worked well, but problems: quality control (reputation of homepage author), author fatigue (thousands of hits can be found for 'haven't updated' at geocities.com today).
Founded Wikicities, which extends the social model to new areas. Growing faster than Wikipedia. A social computing successor to free homepages. Right to fork; uses a free license to build community trust. For profit, with a portion of profit donated to Wikipedia.
How Social Computing addresses what went wrong
Author fatigue -- since the site is managed by a community, people can come and go and the site is still maintained/improved.
Quality control -- everything is peer reviewed, leading to higher quality generally. Shows diff feature in Wikipedia as an example.
The social model of a wiki is hard to explain. In Wikipedia there is democracy, consensus, aristocracy, and monarchy. His role is the constitutional monarch, but a German paper quoted him as being the queen of England. We don't settle a priori how decisions will be made; the software does not enforce rules. Shows the Votes for Deletion page in the English Wikipedia. Voting not enforced by software, just an editable page with Deletes and Keeps.
Wikipedia is a social innovation. This social innovation will spread to other areas beyond just the encyclopedia. Software which enables collaboration is the future of the net.
Daniel Pink starts out this session by saying that he’s giving the whole audience a copy of his new book A Whole New Mind. The publisher won’t let him sell copies ‘til next week, but he can give them away…and he wants the buzz that SXSW attendees can generate. Very smart!
Says that brevity, levity, and repetition are key to good talks. (And my snap judgment here? He’s an entertaining and interesting speaker.)
His key thesis is that the future no longer belongs to analytical professionals—the linear, logical knowledge people (the “SAT people,” he calls them, pointing to his article in today’s USA Today on the SATs). It belongs instead to creators and empathizers.
A picture may be worth a thousand words, but a metaphor can be worth a thousand pictures. Talks about the hemispheres of the brain—left vs. right hemisphere. The future belongs to the right hemisphere—holistic, empathic, big picture.
This is the talk I’ve been looking forward to for months, but I’m a bit worried. How could the talk live up to the book(s)? That’s quite a challenge.
Gladwell opens with a story from his latest book, Blink, about a woman auditioning for the Munich philharmonic, not realizing that the director really only wants men. She auditions from behind a screen, and thinks she’s done terribly. She’s despondent, begins to leave for Italy. Audition is a classic example of a snap judgement—the maestro has already decided that she is the new first trombonist of his orchestra. When she’s introduced to him, he’s astonished to find that she’s a woman.
(Turns out that Gladwell is as wonderful a storyteller in person as he is in his book. Maybe better. This talk is worth the trip to Austin.)
Unfortunately, two of the original three speakers for this panel—Stewart Butterfield and Peter Merholz—couldn’t make it today. Jeff Veen is moderating, and Tantek Çelik, Don Turnbull, and Thomas Vander Wal are the participants.
Jeff Veen starts by framing the context, since the title is…well…somewhat oblique. He points out that tools that help us manage information are becoming more socially aware. del.icio.us, for example, which allows you to discover people as well as information, and to discover information based on people rather than simply topics. Last year social networks were all the rage; but he felt that tools like Friendster were like yearbooks—fun and useful for showing off who you know, but that’s a short term activity that doesn’t sustain long term interest. It gains ongoing attraction once you add in the kind of value-added media that tools like Flickr (and, I’d add, last.fm) provide.
He makes an important observation—what’s most interesting here is the blending of public and private. That needs more elaboration; I think it’s a key concept. He also talks about the need for more interoperability between these systems. Can Travelocity, for example, know where he is and share that information in useful ways with other systems he’s on (like Flickr, for instance)?
Thomas Vander Wal is up first, and discusses personal views of information. Too much online information is ephemeral—so we end up emailing things to ourselves, copying and pasting into new documents and losing context. We need a way to get back to information we’ve seen. (Reminds me of Microsoft Research’s “Stuff I’ve Seen” approach to searching.)
He says that we “get lost early” in the information around us, and asks how we can get to “findability” in our own information spaces. del.icio.us, for example, allows us to name things in ways that make sense to us. But how do you tie different personalities together? How do we jump across disciplinary vocabulary boundaries?
Our current tools don’t support us well. (His slide is titled “that synching feeling”) Synchronization frequently makes mistakes and overwrites inappropriately. We need a “mothership of information” to tie together our various devices and collections of information.
How do we build a “personal infocloud”? Many requirements. It has to be portable (or ubiquitous), the access appropriate to the context, organized in a way that makes sense to the user in the context they’re in.
External storage and management is important. We need smarter aggregation, attention.xml for everything on your own hard drive as well as the online sources we’re following. What’s important? What should I be focused on? Need standard formats for being able to pull information in and organize it. Aggregation only works when information is in a recognizable format.
(“Unbolding” as a constant activity; great term.)
The next speaker is Don Turnbull from UT Austin’s School of Information. He opens with a great line: “I’m from the university, and I’m here to help.” Launches into an interesting discussion of tagging and folksonomy issues.
Turnbull poses some key questions related to folksonomies:
How do you get people to cooperate?
How good can the tags be? Can you find things you wouldn’t have found? But more interesting: can you browse through categories you never would have thought of (like the “me” tag, or “whatsinyourbag”)?
Is there a point where we stop tagging? where we feel we don’t need to tell the system anything else about us? (for example, he himself has tagged thousands of movies on netflix “mostly because I go to a lot of faculty meetings and we have wireless access…”; is there any point in tagging more?)
What about changing interests? You buy a gift for someone on amazon, and your recommendations are skewed towards it for a while. How can you tell recommender systems “I’m not interested in that any more?” [my note: last.fm handles this pretty well]
There are still lots of people not using these systems; this is a small slice of the information world
He raises some issues related to tagging, as well, such as the potential for spamming and gaming, the inherently explicit nature of tags (not always a good thing), and the value of tags being easy-to-parse and analyze plain text.
Then he moves on to social and community issues related to tagging and sharing of data:
Who controls the sharing? And who controls those controls??
anonymity vs community (and privacy issues related to this)
free riders—people who never tag, just browse
what constitutes a community? are personal relationships necessary? do they grow out of the information sharing, or define with whom you share information?
(Ack! I want his slides! I’m missing a lot!)
Talks about all the implicit metadata that could be added to explicit tags, such as “i bought this,” “i own this,” dwell time, clicks, chatter, etc.
He ends with the concept of “don’t fence me in” - we need tag mobility across systems, (flickr, email box names, amazon ratings), a common api for tags, and the ability to move between desktop and server-based views of our data.
The last speaker is Tantek Çelik from Technorati. This is a much less theoretical, much more “look at our cool Technorati tags” presentation.
He says “Anybody can be their own delicious.” But this misses the point, I think. The value of delicious isn’t just your own bookmarks or even your own tags; it’s the collaborative filtering and discovery. He says that technorati’s approach allows you to own your own data—but the user owns his or her own data on server-based sites, too; it’s easy to import/export and backup. The value to me is in cross-user data, and new ways of thinking about things.
A questioner mentions open space technology—how can we do that virtually? How can we extend the conversation in this room beyond the borders. Panel member (can’t see who) says “that’s why I maintain a blog.”
Tantek says that things like using the technorati tag for sxsw2005 in a blog entry provides “unprecedented” aggregation, but this is exactly what trackback provides. O’Reilly did this last year by allowing people to trackback to conference session pages.
A few more questions, and I’m off to eat. I’m starved! More later from the Malcolm Gladwell keynote this afternoon.
(A meta comment about sxsw: it’s hard to get called on to ask a question; that’s where IRC really helps, but it’s surprisingly underutilized here. Too bad.)
I arrived at SXSW/Interactive last night, and am starting the conference today with Eric Meyer’s talk on “Emergent Semantics.”
He starts with a laugh line—that his talk’s title is “so buzzword-compliant it almost makes me sick.” Then goes on to say that this is a fancy way of saying ground-up, grassroots, evolutionary semantics. “Semantics” (I’m uncomfortable with this use of the noun form; I think perhaps he’s talking about semantic relationships) are created on an ad-hoc basis, and evolve over time.
He talks about microformats for solving specific problems, generally expressing a human-understandable semantic definition using xhtml markup (e.g. rel=nofollow). Then he uses the example of colleges paving well-worn walkways (“pave the cow paths”). Acknowledges that there’s an opposing view, but dismisses it as wrong. But I’m not sure that “herd mentality” always derives the best possible answer. (It’s not hard to find examples to support my concerns in current politics…) I think he should acknowledge that there’s a need for deriving patterns from trusted networks, not just global populations.
I’m baffled by the lack of discussion of folksonomy in the context of emergent semantics. That’s genuinely emergent, as opposed to the examples being provided here. Most of these strike me not as emergent, but top-down, created and implemented by a relatively small group of people; the fact that they’re not coming from a standards organization doesn’t make them any less deterministic.
Why the emphasis on “met”—this strikes me as a not particularly useful thing. And it prioritizes geographic proximity and, to a large extent, wealth. If you can’t afford to travel to conferences, you become excluded from the “met” network, and marginalized if that becomes a significant factor in trust.
Ah…a brief reference to what he’s calling “free tagging,” but goes back to Technorati, saying that rel=”tag” provides a necessary definition of tagging. But why should Technorati be defining meaning in this space? Again, that’s the antithesis of emergence.
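For what it’s worth, the rel=”tag” convention he keeps coming back to is mechanical enough to extract in a few lines. Here’s a sketch using Python’s standard html.parser; the markup and the Technorati-style URL in the example are mine, and per the rel-tag convention the tag value is taken from the last segment of the href path:

```python
from html.parser import HTMLParser

class TagLinkParser(HTMLParser):
    """Collect tags from <a rel="tag" href="..."> links.
    Per the rel-tag convention, the tag is the last path segment of the href."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel can hold several space-separated values, e.g. "tag nofollow"
        if tag == "a" and "tag" in a.get("rel", "").split():
            self.tags.append(a.get("href", "").rstrip("/").rsplit("/", 1)[-1])

markup = '<p><a rel="tag" href="http://technorati.com/tag/sxsw2005">sxsw</a></p>'
p = TagLinkParser()
p.feed(markup)
print(p.tags)  # → ['sxsw2005']
```

Which is the whole trick: the “definition of tagging” here is just an attribute value and a URL convention.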
An audience member asks about how to make large collections more accessible (like library books). This is exactly where free tagging makes so much sense, but he goes back to seeing this as a format construction issue.
CiteULike is del.icio.us for academics. It saves citation details and exports them in a couple of standard formats. It aggregates journal articles for your posting pleasure. It encourages long-ish descriptions and lets you assign stars. Nice!
I’m not sure i fully get the map-based model that Clay is espousing, but i can buy that we view the world from a different point of view. It’s also no accident that i claim my primary identity as an academic and Clay, while at an academic institution, does not. Perhaps it’ll help if i try to clarify some of my model and situate it in Clay’s mapping.
Part of how my model works, and i think that this fits into Clay’s Cartesian map, is that i don’t care if a new artifact is better than an old artifact. In other words, i have no interest in comparing Wikipedia to the encyclopedia. Grr to them both - they don’t solve the underlying problems that bother me. It’s like telling me that PPOs are better than HMOs when i want a health care system that universally helps people. I also can’t even fathom factoring out anything that is still bad from Point A to Point B, particularly when they are the most salient features of the problem. To me, framing it in the world of encyclopedias is about doing horizontal moves. And i definitely get frustrated when people get so excited about horizontal moves because they stop putting energy into moving vertically, into truly solving the underlying problems that are salient.
But Clay’s right - i like research and i’m interested in solving big problems even if it takes a while. I don’t like doing incrementalism because it takes so much cultural and cognitive energy to make any shift that i’d rather see people not expend the energy for each new little advancement - we all got sick of joining the next social networking service. Now that we’ve burnt out on horizontal, there’s very little energy to actually solve the vertical problems.
Of course, unlike other pure academics, i do actually have an appreciation for the tools that emerge out of incremental change or that are pretty darn flawed. I do appreciate Wikipedia. I do appreciate the social networking services. I do appreciate blogging. I mostly appreciate them for the cultural shifts that happen though, not for the technology itself. Many of my colleagues are stuck on the fact that there’s no radical technology shift. That said, i refuse to believe that it’s THE solution to anything and i don’t want energy to be lost congratulating each other when there are still big problems to solve - technologically and socially.
My love of cultural change first and foremost is what makes me appreciate social software at a core level. And one of the reasons that i only have so much patience for research is that i want to see things deployed and creating shifts. But, i always want to take it a step further, i always want to go deeper. I want to see huge waves of social change and then take a step back and make another huge wave, not a bazillion duplicates that burn everyone out to make a buck or follow a trend. Boring. So the canonical tools, the ones that make the first wave of huge change - these are the things i follow. To understand the wave.
Oh, given that others have assumed that Clay and i are vicious enemies, i would like to affirm my admiration and love for him as well. We bicker because we love each other to bits and we’re both invested in knowledge even when we think the other nutso.
When thinking about technological change, there are two kinds of people, or rather, people with two kinds of maps of the world — radial, and Cartesian. Radial maps are circular, and express position in relative coordinates — angle and distance — from the center. Cartesian maps are grids, and express position in absolute coordinates. Each of the views has good and bad points on their own, but reading danah on Wikipedia has made me contemplate the tendency of the two groups to talk past each other.
Radial people assume that any technological change starts from where we are now — reality is at the center of the map, and every possible change is viewed as a vector, a change from reality with both a direction and a distance. Radial people want to know, of any change, how big a change is it from current practice, in what direction, and at what cost.
Cartesian people assume that any technological change lands you somewhere — reality is just one point of many on the map, and is not especially privileged over other states you could be in. Cartesian people want to know, for any change, where you end up, and what the characteristics of the new landscape are. They are less interested in the cost of getting there.
Radial people tend to think more about change than end state, and more about local maxima (are things getting better?) than about a global maximum (are things as good as they could be?) Cartesian people think more about end state than change, and more about global than local maxima.
I am a radial person; danah is a Cartesian person. Cory Doctorow is a radial person; Nicholas Negroponte is a Cartesian person. Richard Gabriel is radial; Alan Kay is Cartesian. This is not a question of technology but outlook. Extreme Programming is a radial method; the Capability Maturity Model is Cartesian. Open Source groups tend towards radial methods, closed source groups tend towards Cartesian methods. It’s incrementalism vs. planned jumps, evolution vs. directed labor.
When we make mistakes, radial people tend to overestimate the value of incrementalism, and to underestimate the gap between local and global maxima. When they make mistakes, Cartesian people tend to underestimate the cost in moving from reality to some imagined alternate state, and to overestimate their ability to predict what a global maximum would look like.
This is, plainly, an overstatement of the Everyone is a Pirate or a Ninja sort, but I think there is a grain of truth to it — when Negroponte rails against incrementalism, there’s an interesting discussion to be had about how big he thinks a change has to be before it no longer counts as an increment, but there’s no denying that he is advancing a different idea about technological improvement than Gabriel is in his Worse Is Better argument. There’s a similar difference in the way danah or Matt Locke talk about Wikipedia vs. the way Cory or I do. There are lots of blended cases, but the basic impulse is different.
This has been an era of radial triumphs, because radial maps tend to be better guides to large, homeostatic systems. When thinking about change on the internet, the tools that have been driven by a thousand tiny adoptions and alterations have tended to be more important than the tools designed in advance to change the landscape. However, radial vision requires that someone, somewhere, have pushed through a large, destabilizing change, in order for the radial people to be playing in new terrain with lots of unexplored local maxima. Shawn Fanning could only change the world in 1999 because Vint Cerf changed the world in 1969.
Bob Spinrad, who used to run PARC (an echt Cartesian organization) said “The only institutions that fund pure research are either monopolies or think they are.” Cartesian development is economically draining, and never pays for itself in the short term, so it’s no accident that R&D happens outside traditional profit maximizing institutions, whether governmental, academic, or monopolists.
You can see the differences in the two worldviews most clearly when we argue across that gap. I literally cannot understand danah’s complaints; I read “The problem that i’m having with the Wikipedia hype is the assumption that it is the panacea for it too has its problems”, and I wonder who she’s talking about. The radialists praising the Wikipedia are not saying it’s perfect, or even good in any absolute sense — we don’t ever talk about absolute quality.
Wikipedia interests us because it’s better, and sustainably better, than what went before — it’s a move from a simple product (“Pay us and we’ll write an encyclopedia”) to a complex system, where a million differing, internal motivations of the users and contributors are causing an encyclopedia to coalesce. How cool is that? (The radialist motto…)
But danah and Matt cannot understand our enthusiasm. From the Cartesian point of view, the thing that would excite you would be dramatic change to a new state. Radialists never say things like ‘panacea’ or ‘utopia’, but the Cartesians hear us saying those things, or think they do, because otherwise what would the fuss be about? Mere incrementalism is nothing more than a Panglossian fetishization of reality, and excitement about a technological change that doesn’t create a dramatic new equilibrium is simply hype, from the Cartesian point of view.
And so, when they see us high-fiving over Wikipedia, the Cartesians think we’ve taken leave of our senses, and, more to the point, they think we’ve misunderstood what is happening. They then launch a corrective set of arguments, pointing out, for example, that Wikipedia still leaves unanswered questions about social exclusion. But this, from a radialist point of view, is no more meaningful than pointing out that Wikipedia doesn’t cure skin cancer — no one ever said it would. Anything that was bad at Point A and is still bad at Point B gets factored out of the radialist critique. Any change where most of the bad things are still bad but a few of the bad things are somewhat less bad seems like a good thing to us, and if it can happen in a way that requires less energy, or better harnesses individual motivation, that seems like a great thing.
And so we go, back and forth, tastes great, less filling. We want to ask them why they aren’t excited about Wikipedia, since it is, to us, so obviously progress, but they want to know “Progress towards what?” They can’t even read their map without a posited end state. And they want to ask us why we’re not concerned about where all this is going, but we don’t have an answer to that question, because our maps only show us the way up the next hill, not what we’ll see when we get there.
There’s no answer to any of this — as Grandma used to say, “Both your maps are nice.” But after months of cognitive dissonance — I both admire and love danah; what she’s saying about Wikipedia simply confuses me — I think I now have a way of understanding why the current conversation seems so unmoored.
I continue to get painted as anti-Wikipedia which couldn’t be further from the truth. I want to clarify a few things and i think that the latest BoingBoing entry on Wikipedia helps.
It is presumed that the data contained in a dictionary is ‘true’ but scholars have pointed out that there are ‘inaccuracies.’ There are two issues at play here. The first concerns the truth-value of any record - when is there truth and when is only interpretation possible? I’ll leave that one alone for now. The better question concerns who has the authority to say whether or not something is ‘true’ where truth refers to presumed collective knowledge. The article that BoingBoing cites tells us explicitly that it is ‘scholars’ that have such authority.
Herein lies my primary complaint with Wikipedia - the lack of known authorship. (Note: i have the same problem with encyclopedias and dictionaries too, but i don’t see the Wikipedia arguments as boiled down to paper references vs. digital references.) I want to know what part of the Wikipedia entry the Jane Austen scholar wrote and what was edited out by others. I want to know that the Jane Austen scholar looked at the entry that a 14 year old wrote and thought it was perfect. I want to know the investment level of the authors. I don’t think i’m alone on this one.
Secondly, i may be a scaredy-cat but i’m not afraid of Wikipedia. Like Clay, i firmly believe that students should cite their sources; nothing is more gut-wrenching than throwing a line of someone’s paper into Google and finding it on the web. My concern with academic citation is metaphorically concerned with citing Cliffnotes. Don’t tell me what Wikipedia tells you about Benjamin’s essay - tell me what Benjamin says and tell me your critique. If you want to use a third party’s critique to contend with, great, but that’s rarely what students do. Wikipedia’s interpretation may or may not be accurate and if you haven’t read the primary source (which is often the problem), you don’t know. There is no doubt that this is a problem with a broader variety of sources but the efforts to legitimize Wikipedia as better than an encyclopedia wreaks havoc. This is not because i want students using the encyclopedia - they’re far more likely to read the 10 page essay than hike up the hill to the library to find an encyclopedia that may or may not give them a clue about what’s going on. Encyclopedia citations are rarely my problem but Wikipedia as Cliffnotes is. I want students to be critical thinkers, not just piece together the varying levels of supposed critical thought that they find on the web. And if the web is useful to them, it should be as an interlocutor for argument’s sake, not a source of authority.
In both of these cases, comparisons to other media can be made and the problems that manifest are not necessarily new. The problem that i’m having with the Wikipedia hype is the assumption that it is the panacea for it too has its problems and those problems must be acknowledged, addressed and situated. It certainly has great value, both as a tool for information and as a site of community. But there are limitations and i believe that the incessant hype is damaging to being able to situate it properly and to recognize its strengths and weaknesses.
At the beginning of this week, Technorati will launch a new tag aggregation feature: When you search on a tag, you’ll be shown a list of “related” tags. The relationships are automatically discerned by the software, analyzing the other tags used by people tagging the same set of pages and photos. Dave Sifry let me play with a beta of it, and the suggested tags were generally quite relevant.
There are two types of relationships the “related” tags help with. First, they suggest slightly divergent topics so you can browse off the path you were heading down. Second, they help get over the problem that people use different words to flag the same ideas; the “related” tags can help you find more sources that are directly on the path you were heading down. So they help with both digression and focus. [Disclosure: I’m on technorati’s board of advisors. And yes, I have permission to blog this.]
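The co-occurrence analysis behind the “related” tags can be sketched quite simply. This is my guess at the general shape of such a feature, assuming a plain list of per-item tag sets rather than Technorati’s actual data model:

```python
from collections import Counter

def related_tags(tagged_items, query, top=5):
    """Rank tags by how often they co-occur with `query` on the same item."""
    co = Counter()
    for tags in tagged_items:
        tagset = set(tags)
        if query in tagset:
            co.update(tagset - {query})  # count every companion tag once per item
    return [t for t, _ in co.most_common(top)]

# Toy data: each inner list is the tags one person applied to one page/photo.
items = [
    ["sxsw", "austin", "conference"],
    ["sxsw", "austin", "music"],
    ["sxsw", "conference", "tagging"],
    ["python", "code"],
]
print(related_tags(items, "sxsw"))  # austin and conference lead the ranking
```

The digression/focus duality falls out naturally: high-count co-occurring tags are near-synonyms (focus), while lower-count ones are adjacent topics (digression).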
At first glance, I can’t say I’m going to switch from Del.icio.us to Wists. I like the fact that Del.icio.us is text-based… I find that with Wists, I have to look at all the pictures, then read the underlying text anyway to make a decision on whether this is interesting or not. I can’t trust the picture to be worth my while.
My left brain agrees with his left brain. Your brains may vary. [Technorati tag: del.icio.us]
Francois Hodierne replies in an email that blogmarks.net does the same thing, except it automatically generates a screen shot as the image. (Wists does that if you don’t specify another image.)
I have no idea when Friendster launched Friendster blogs because i’ve been pretty far out of the loop, but Charlie noted them this morning. They are powered by Typepad and there’s a free option available (with ads of course). They’re all branded with Friendster’s logo at the top and have the Friendster domain. To update your Friendster blog, you have to log in. Plus, all Friendster blogs have easy links to your Profile.
Thomas Burg reports that the Austrian government has commissioned new social software from Thomas’ company, Permalink Information Architecture, Ltd. It combines blogs, wikis, tagging, events management, RSS feeds, email and search. (I believe there is also a shoe-polishing attachment. :) The government is using it internally. I spent a few minutes in a sandbox Thomas made available and the system seems cleanly designed, easy to “get”, and flexible.
Anyone who is involved in using, researching, or developing wikis is invited to participate. We are seeking submissions for research papers, practitioner reports, demonstrations, workshops, and panels.
The deadlines vary according to the type of contribution. (See the official call for submissions for more details.)
The general idea of a recommender system is that it asks for a few examples of things you like and then gives you more things it thinks you might like, based on its knowledge of other people’s preferences.
One problem you can often run into when using a recommender system is a bias towards popular items, which are not really that close to what you like but have the favor of many users because of their high visibility. For instance, based on my subscriptions, the Bloglines recommender keeps suggesting that I have a look at Slashdot, always putting it near the top of its list of suggestions. The effect of designs like this, of course, is to reinforce the “short head” (as opposed to the “long tail”) by directing users towards the roads well traveled.
An easy way to mitigate this is to selectively decapitate the recommendation engine’s results. Last year I blogged about Andrew Grumet’s “Similar Feeds”, which implements this. I just came across a music filtering site that makes the feature more prominent and intuitive by putting a nice, fat “popularity slider” right at the top of recommendations pages. Try playing with the slider on this page to see how it works.
I like how things like this underscore the idea that “this is popular” is not the same as “you’ll like it”.
Perhaps this illustrates the limit of folksonomies - they are only useful in a context in which nothing is at stake. [Emphasis his] Folksonomies are, in essence, just vernacular vocabularies; the ad-hoc languages of intimate networks. They have existed as long as language itself, but have been limited to the intimate networks that created them. At the point in which something is at stake, either within that network or due to its engagement with other networks (legal, financial, political, etc) vernacular communication will harden into formal taxonomy, and in this process some of its slipperiness and playfulness will be lost.
He relates this to the idea of play from finite and infinite games. (I’m more optimistic about the shift here than he is, for reasons I’ll discuss below, but I think he’s spot on about the gap between playful and serious categorization.)
The other idea, from Bowker and Star’s marvelous Sorting Things Out, is about the inherent tension in classification generally:
Bowker and Star identify three values that are in competition within classification structures: comparability, visibility and control. Folksonomies have elevated visibility, but at the expense of comparability (being able to translate classifications across taxonomies or contexts) and control (the ability of the classification to limit interpretation, rather than interpret ‘emergent’ behaviour). Whilst nothing is at stake, and there is little lost by not being able to transfer taxonomies from one context to the other, or users are not disadvantaged by the need to independently assess and contextualise meaning, folksonomies will provide a useful service.
Just a fantastic post.
The only place I vary from Matt (it’s not even a disagreement, really, just a prediction about the future) is in the eventual value of folksonomy. He likens folksonomies to vernacular vocabularies, but this doesn’t describe their first-order importance, at least not where systems like del.icio.us are concerned.
Here’s what’s radical about what del.icio.us portends: My vocabulary on del.icio.us is personal, not vernacular — no one knows or needs to know which class I’m talking about when I tag something ‘class’, or that I use LOC to mean Library of Congress. This isn’t the same as, say, the dictionary of thieves’ slang from the mid-18th c., because no one else needs to know my bookmark system, and I don’t need to know anyone else’s, or, to quote Adam Smith: “It is not from the benevolence of the butcher, the brewer, or the baker, that we can expect our dinner, but from their regard to their own interest.”
This is really, truly different, because it uses the intuition of markets — aggregate self-interest creates shared value. Locke points to the loss of control as one of the downsides of folksonomic classification (at least in its del-style form), but there are significant upsides as well. The LOC has no top-level category for queer issues, but del.icio.us does, because its users want it to.
By forcing a less onerous choice between personal and shared vocabularies, del.icio.us shows us a way to get categorization that is low-cost enough to be able to operate at internet scale, while ensuring that the emergent consensus view does not have to be pushed onto any given participant.
Which is why it mystifies me that both Matt and danah are so concerned with exclusion — who is excluded here who isn’t also excluded from using the internet generally? Put another way, is anyone excluded from using del.icio.us who has better representation in other classification schemes?
The del.icio.us answer is “If you don’t like the way something is tagged, tag it yourself. No one can tell you not to.” Prior to del.icio.us, controlled vocabularies were almost inevitably vocabularies that pushed the politics of the creators onto the users; that is upended here.
danah said, in Academia and Wikipedia, “All the same, i roll my eyes whenever students submit papers with Wikipedia as a citation.”
I didn’t comment on this at the time, but grading papers over the weekend, I had a student cite the Wikipedia for the first time, referencing its entry on the OSI Reference Model. Seeing it in the footnotes, I wondered what the fuss was about. The Wikipedia article is a perfectly good overview of the Reference Model, and students should document, to the extent they are able, the sources of their research. When they have learned something from the Wikipedia, in it goes; to exclude it would in fact be dishonest.
Curiously, the Wikipedia reference came in the same week that another student was referring to Walter Benjamin’s The Work of Art in the Age of Mechanical Reproduction, an essay that is tremendously influential and, in a bunch of non-trivial ways, wrong about the inherent politicization of reproducible art, and especially of film. I’m much more worried about students overestimating the value of the Benjamin essay, because of its patina of authority, than I am about them overestimating the value of the Wikipedia as a source for explaining the 7-layer networking model.
And I assume I am hardly alone in the academy. Hundreds, if not thousands of us must be getting papers this year with Wikipedia URLs in the footnotes, and despite the moral panic, the Wikipedia is a fine resource on a large number of subjects, and can and should be cited in those cases. There are articles, as danah has pointed out, where it would be far better to go to the primary sources, but that would be as true were a student to cite any encyclopedia. If someone cited the Wikipedia to discuss Benjamin’s work, I’d send them back to the trenches, but I would also do that if they cited Encyclopedia Britannica.
To borrow some Hemingway, this is how the academy will get used to Wikipedia — slowly, then all at once.
Britannica editor Robert McHenry's “The Faith-Based Encyclopedia” is a criticism of Wikipedia asserting that quality declines over time. Rather silly, as the one thing that is known about the quality of a given Wikipedia article is that it is better than it was before and will get better with more time and attention. In "The FUD-based Encyclopedia" Aaron Krowne has not only fisked McHenry's claims, but relates open content to open source -- a topic very similar to what I just contributed to a forthcoming book on open source to be published by O'Reilly. Krowne sees McHenry's efforts as similar to the Fear, Uncertainty and Doubt campaigns waged by threatened incumbent software vendors. But of particular interest to M2M readers are Krowne's first two laws of commons-based peer production, and the illustration of their interplay:
(Law 1.) When positive contributions exceed negative contributions by a sufficient factor in a CBPP project, the project will be successful.
With wikis, as phantom authority pointed out, transaction costs are low for making a contribution and even lower for fixing mistakes.
(Law 2.) Cohesion quality is the quality of the presentation of the concepts in a collaborative component (such as an encyclopedia entry). Assuming the success criterion of Law 1 is met, cohesion quality of a component will overall rise. However, it may temporarily decline. The declines are by small amounts and the rises are by large amounts.
Coding is vertical information assembly, marked by dependencies between contributions. Writing, as in the case of Wikipedia, is horizontal information assembly, with little dependency. You can get a date of birth wrong in an article, but the article still generally works and can be built upon in the process. Doing the same in software could result in a Y2Kish meltdown. This distinction accounts for the authority models that Krowne describes later in his article, owner-centric and free-form. Krowne also adds a corollary to the two laws:
(Corollary.) Laws 1 and 2 explain why cohesion quality of the entire collection (or project) increases over time: the uncoordinated temporary declines in cohesion quality cancel out with small rises in other components, and the less frequent jumps in cohesion quality accumulate to nudge the bulk average upwards. This is without even taking into account coverage quality, which counts any conceptual addition as positive, regardless of the elegance of its integration.
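The dynamic in Law 2 and its corollary can be illustrated with a toy simulation. To be clear, this is not Krowne’s model and the probabilities and step sizes are made up; it just shows how frequent small declines plus rarer, larger rises produce a net upward drift in quality.

```python
import random

def simulate(steps=1000, seed=42):
    """Toy model of an article's cohesion quality over many edits:
    most edits nick quality by a small amount; the less frequent
    substantive improvements raise it by a large amount."""
    rng = random.Random(seed)
    quality = 0.0
    for _ in range(steps):
        if rng.random() < 0.6:
            quality -= rng.uniform(0.0, 0.5)  # frequent small declines
        else:
            quality += rng.uniform(0.0, 5.0)  # rarer, larger rises
    return quality

print(simulate())  # ends well above where it started
```

With these (invented) parameters the expected gain per edit is positive, so the bulk average is nudged upwards even though any snapshot may catch a temporary decline — which is exactly why “quality at time t” is a poor rebuttal to a process argument.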
Dependency is not necessarily a negative factor, as it can prompt refactoring. It has been said (link? will refactor in later) that Wikipedia could not be a poem because of inherent structure. But I wonder what impact a language or fact-checking refactoring tool could have on cohesion by highlighting dependencies.
I’ve posted the longish overview section of an article I wrote for the latest issue of Esther Dyson’s Release 1.0. The article is called “Taxonomies and Tags: From Trees to Piles of Leaves,” which is pretty much what it’s about.
We are currently looking for papers, panels and demos on all aspects of how social software affects and reflects academia (deadline: March 31). Please check out the Call for Participation for more information.
Fascinating new effort called Social Physics, affiliated with Berkman, with two large goals:
- Create a robust, multi-disciplinary, multi-constituency community for addressing, vetting and conducting experiments in such issues as privacy, authentication, reputation, transparency, trust building and information exchange.
- Develop a reusable, open source software framework based on the Eclipse Rich Client Platform that provides core services including: identity management, social network data models, authentication management, encryption, and privacy controls. On top of this framework we are also developing a demo app that provides identity management and social networking functions, tools to create peer-to-peer identity sharing and facilities to support communities of interest around emerging topics.
I’m generally skeptical of identity management — it has the same hollow ring as knowledge management — but since the focus here is on trust building, rather than simple transactions that treat trust as a binary condition or simple threshold, this will be worth watching.
While del.icio.us is delicious, fac.etio.us isn’t facetious. It’s a thought experiment embodied in software from Siderean, a company that creates faceted classification systems for big-ass enterprises. (Note the “facet” in “fac.etio.us”? Damn clever!)
Faceted classification assigns a set of parameters (facets) to the objects it’s classifying and then lets users sort them using the facets in any order. For example, appointments in your calendar might have facets for time, date, person, location, subject, and importance. You could then ask to sort first by person, then by location, and then by date, and a minute later walk through them by importance, then date, then subject, etc. In short, faceted classification systems let you construct trees with the roots and branches in whatever order suits you at that moment. And faceted systems never lead you down branches that have no fruit.
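The calendar example can be sketched in a few lines. The appointment data and facet names here are invented for illustration; the point is that the same flat records yield a person-first tree or a location-first tree depending on the drill-down order, and empty branches simply never appear.

```python
from collections import defaultdict

# Hypothetical appointments, each carrying the same set of facets.
appointments = [
    {"person": "Ana", "location": "NYC",    "importance": "high"},
    {"person": "Ben", "location": "NYC",    "importance": "low"},
    {"person": "Ana", "location": "Boston", "importance": "high"},
]

def drill(items, facet):
    """Group items by one facet. Branches with no fruit are never
    created, because groups only exist where items exist."""
    groups = defaultdict(list)
    for item in items:
        groups[item[facet]].append(item)
    return dict(groups)

# Browse by person first...
by_person = drill(appointments, "person")
# ...then, inside "Ana", by location -- same data, a different tree.
ana_by_location = drill(by_person["Ana"], "location")
print(sorted(ana_by_location))  # -> ['Boston', 'NYC']
```

A minute later you could call `drill` with `"importance"` first and get an entirely different hierarchy from the same records, which is the whole trick.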
So, Siderean is playing around with doing a faceted classification of about five days’ worth of bookmarks at del.icio.us.
If you want to do something that’s going to change the world, build software that people want to use instead of software that managers want to buy.
When words like “groupware” and “enterprise” start getting tossed around, you’re doing the latter. You start adding features to satisfy line-items on some checklist that was constructed by interminable committee meetings among bureaucrats, and you’re coding toward an externally-dictated product specification that maybe some company will want to buy a hundred “seats” of, but that nobody will ever love. With that kind of motivation, nobody will ever find it sexy. It won’t make anyone happy.
He then offered a more upbeat definition of social software than ‘stuff that gets spammed’:
But with a groupware product, nobody would ever work on it unless they were getting paid to, because it’s just fundamentally not interesting to individuals.
So I said, narrow the focus. Your “use case” should be, there’s a 22 year old college student living in the dorms. How will this software get him laid?
That got me a look like I had just sprouted a third head, but bear with me, because I think that it’s not only crude but insightful. “How will this software get my users laid” should be on the minds of anyone writing social software (and these days, almost all software is social software).
“Social software” is about making it easy for people to do other things that make them happy: meeting, communicating, and hooking up.
Lloyd Dalton has created another experiment in tagging. This time, we get to tag colors.
At Colr.org, you can choose any color and tag it with any tag. A search for tags turns up all the colors with that tag.
You can also create a scheme, clustering colors you find copasetic. For example, search for “baby” tags and you’ll currently find six colors with that tag (e.g., “alice blue bambino”) and two schemes (“baby blue” and “baby pink”).
It will be interesting to see if we folksonomically develop color clusters. For example, if you tag a light blue as “sky,” it won’t be found when people search for blues, so you might want to add a “blue” tag as well. On the other hand, a search for “sky” turns up 11 blues already.
By the way, Lloyd is also the author of Plans, a free online calendar. (I like the fact that the Plans home page is not shy about listing the “competition.”)
Alex Primo at the Universidade Federal do Rio Grande do Sul in Brazil and his research group have released a prototype of Co-Links. It allows readers to add links to any word on a page. A single word may have multiple links. The user can either go to one of the linked pages or see metadata about it. It’s a cool idea. You can try it here.
Yahoo's potential to own a huge piece of the blogosphere via distribution, tool sets and content acquisition did not go unnoticed by media companies in the room---just the perception they can dominate could possibly spur progress by online newspapers (I hope.) Grassroots media folk and search companies present at the event took notice as well.
Yahoo has blended personalization and RSS to form the most widely used aggregator on the planet. Keep in mind that the vast majority of traffic goes through a handful of portals (and an oligopoly of carriers) and mainstream attention follows the power-law. Most users do not enjoy the diversity or serendipity that blog readers do. Blog writers who want to make impressionistic returns will feed off of major portals. Somewhere in middlespace, the bottom up will be incented by the top down. A new editor is rising and it isn't your blogging client, nor branded aggregators; it's an algorithm that supposedly will grow to know you better than people can.
Personalization is supposed to be the answer for how industrial era print media evolves into the information age. A shift from media companies broadcasting to the world to the media broadcasting to you.
If you share your tastes and demands, you get matching information. You browse without effort, sit back and consume. This is sheer bliss for marketers; you also get increasingly framejacked ads. With search, you narrowcast what you are looking for and get ads that supposedly could be helpful along the way. For now, there is no memory of your queries and no profiling for others, but it will happen, because personalized search is too useful an engine not to build.
Corporate personalization is also a bargain of consummate efficiency. The value proposition of enterprise portals is reducing the time spent looking for information. Of course, part of the contract for employees is to perform a specific function and submit any conceivable data to assist the system. There are no ads; all interactions are commerce, yielding ruthlessly modeled efficiency.
The criticisms of personalization as an instrument of control are not new. Yahoo! is actually taking personalization in new directions by emphasizing user programmability. And a branded aggregator based on open standards is a big leap into a second web. But it's important to realize that personalization is not a world of ends, and the means of the trend ensnare us just as before.
Over the next year or so, every major portal will have personalized aggregation of RSS. I say personalized because branded aggregators will have initial appeal to the existing audience of a media site, but no differentiation. Older media will apply traditional editorship to suggest the best feeds according to expert judgment. Newer media will suggest feeds based on what we like. Both approaches will provide limited differentiation, but even more limited utility -- because finding feeds is not a significant problem when most posts in a feed provide their own suggestions, link by link.
Brandmasters will disagree. They will say their promise is strong and trust held by the audience will lead them to trust their expert or automated judgment. But being a provider of information does not beget a relationship, you have no clue if your audience is even impressed. People trust themselves over brands and now they have their fingers on the unsubscribe button for anything they are fed. They roll their own media personally. And before trusting a brand, people are inclined to trust other people -- the promise of influential people is stronger than brands. Now more information flows through and between them, and these flows underpin relationships. Every meme is underwritten by social capital. The most influential mass or custom marketing is in concert with buzz. All media becomes saturated with advertising and consumers are sensitized with each new form. Today this happens at an accelerated pace.
A corporate portal may provide information required for process, but will fail to inform decisions when exceptions happen and hinder my ability to form relationships that help resolve them. Worse, without a diversity of input and the socialization of information, saving time looking for information is pointless when the information isn't shared in the first place.
The basic problem with Personalization is that tailoring information to you limits social discovery. Users contribute value to the database only for them and the service provider, not for each other. People design algorithms outside social context, and error arises in profiling, categorization and filtering. Narrowcasting creates micro-silos as it limits a user's view from more diverse and otherwise peripheral information compared to modes of browsing and searching. Over time, users are taught to rely upon this mode as their primary source of information. Nowhere in this mode is sharing, conversations, remixing and socializing information.
By contrast, consider how social software enables people to create their own networks. Groups form, information is shared and implicit and explicit relationships are fostered. Profiles, ties, posts, links and tags provide dimensions to explore. Spam happens as a consequence of openness, but as social networks become the new filters, it is a minor problem and yields benefits of connecting people. The appeal of personalization is sheer convenience. Today social software fails, with a few exceptions, to deliver the same level of convenience at scale, but give it time.
Replace the word information with relationship, and you get how people want to use the net, with other people. What is shared through filters is very different from a blogger saying, "hey, my group of readers would be interested in this," or "Doc makes a fine point, but when you consider what Jon says it really changes things," or "everybody I know is talking about this." When my network socializes information for me as a natural byproduct of interaction, while respecting my privacy (an important aspect of keeping things personal), I discover relationships that make my life convenient and empowered.
Academics often use hand-rolled systems to keep track of and (less often, sadly) share literature references. I have used my personal wiki to that end for a while, but it wasn't the ideal solution.
Now, the rapidly-developing CiteULike looks quite interesting. It borrows from del.icio.us' simple interface and social software features, but it is tailor-made for academic papers that are available online. It lets you build a "personal library" (here's the one I just started), recording bibliographic information and enabling you to tag papers for future retrieval and group sharing. For instance, here is an ongoing stream of papers on blogging, collected by various individuals. Development is very much alive, as you can see from the development journal and the discussion list.
Because so much of the literature is still stuck behind subscription walls, surfing CiteULike can be frustrating if you're not on a university network, as you can very often be denied access to anything beyond the abstracts (even if you are, digital bouncers are legion and you're bound to bump into one of them sooner or later). This highlights how nice it would be for the public to have open access to the published research it has often paid for out of its own pocket. (The general web-unfriendliness of academic production is a pet peeve of mine - it hurts the impact and dissemination of research findings, and obviously deprives academia of influence on the "real world". How ironic that the Web was originally built in a research lab, to share results...)
(A similar service is Connotea, but I haven't done a thorough comparison between the two. And Alf Eaton's pioneering Biologging has been providing a similar service for biomedical researchers for a while now.)
Great article on Tagging in Salon that covers the applications, social use and commercial implications. Quotes three M2Mers, but you have to love this:
"It's like Friendster for knowledge as far as I'm concerned," says Howard Rheingold. "I look to see who the other people are on del.icio.us who tag the same things that I think are important. Then, I can look and see what else they've tagged ... And isn't that part of the collective intelligence of the Web? You meet people who find things that you find interesting and useful -- and that multiplies your ability to find things that are interesting and useful, and other people feed off of you."
Christopher Allen tackles the issue of social network saturation and what to do when you have more than 150 connections on a social networking service. I previously distinguished between active and latent ties and their impact on social capital. The issue is similar to Steve Gillmor's comment to the latest gang that when feeds are abundant you need social attention-based filters. Chris provides some tactics for dealing with contact overload, one of which Jeff Clavier used to prevent extending undue social credit, but I can't help but think this isn't a significant issue.
Social networking services that do not leverage social spam to grow membership do not burden your attention to function as contact repositories. Recall that the Dunbar number is what you can manage with your own faculties, so somehow you are cognizant of your active network. Having a repository of your latent ties and the ability for those in it to grab your attention, at the risk of their own social capital, is convenient augmentation.
Steve has a great point that we will need greater feed filtering as the network grows. Not for discovery of feeds, there are enough inherent and implicit ways to find good sources in blogging. But for those busy moments where you need to go on vacation or really work and want to stem the tide.
But like contacts, you only want to stem the tide at the moment of congestion. The ability to recall and search gives you the confidence to skim or skip when need be. When you initiate a connection, you have to make an investment, deciding what impact it may have on your attention and social capital. But so long as the flow is passive and under control, the augmentation is more productive than not.
Beyond overload, Adina Levin provides a far more considered take on the issue.
Ben Hyde looks at four popular bookmarks at del.icio.us and plots how many times each is tagged with the same word. E.g., BoingBoing is tagged as “blog” 200 times and as “news” 90 times. The curve is that of a classic power law: The most frequently used tags are used waaaay more frequently than lesser-used tags.
Ben stresses that four bookmarks don’t constitute a significant sample, but wouldn’t we expect a folksonomy to assume the shape of a power law distribution?
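A quick way to see the shape Ben is describing: sort a bookmark’s tag counts in descending order and compare the head of the list to its middle. Only the “blog”: 200 and “news”: 90 figures below come from the post; the remaining counts are invented to fill out the curve.

```python
from collections import Counter

# Illustrative tag counts for one bookmark (first two from the post,
# the rest made up for the sake of the example).
tags = Counter({"blog": 200, "news": 90, "culture": 40, "tech": 35,
                "fun": 12, "daily": 9, "weird": 4, "gadgets": 2})

counts = sorted(tags.values(), reverse=True)
# In a power-law-shaped curve the top tag dwarfs the median tag:
print(counts[0] / counts[len(counts) // 2])
```

If a handful of self-interested taggers converge on the same obvious labels while the rest scatter, this head-heavy curve is exactly what you would expect a folksonomy to produce.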
ProgeSOFT is encouraging its users to tag content about its products (IntelliCAD, progeCAD) so they can learn from one another. It’s recommending three tags — intellicad, learnintellicad, and “learn software” — for use at del.icio.us, flickr and blog sites via technorati tags.
Great experiment, although I’m not convinced that those are the right tags, especially the “learn software” one. Is that so you can search for items tagged both as “intellicad” and “learn software”? It’ll be interesting to see how the folks develop their own folksonomy.
I don’t mean to carp. I think this is a truly interesting idea. My hat is off to ProgeSOFT.
Back In The Day, when I was trying to explain what I meant when I was talking about social software, but before Coates pulled my fat out of the fire by doing the work for me, I had all these wicked abstruse definitions that made everyone’s eyes glaze over.
The only definition I ever found that created the lightbulb moment I was feeling was “Social software is stuff that gets spammed.” Not a perfect definition, but serviceable in its way.
Comes now del.icio.us tag spam from user DaFox, as if to illustrate the principle — a single link, whose extended description is a variation on the form “Best site EVAR!” and who has tagged the site (for his or her own retrieval doubtless) with the following tags:
.imported .net 10placesofmycity 2005 3d academic accessibility activism advertising ai amazon amusing animation anime apache api app apple apps architecture art article articles astronomy audio backup bands bittorrent blog blogging blogs book bookmark books browser business c canada career china christian clothing cms code coding collaboration color comic comics community computer computers computing cooking cool creativity css culture daily database deals …
The list includes another couple hundred items — that must be some site, containing as it does not just the above listed items but info relevant to Ruby programming, New York City, typography, economics, and porn. DaFox is the Canter and Siegel for the social software generation.
In What Do Tags Mean, Tim Bray says “There is no cheap metadata” (quoting himself from the earlier On Search.) He’s right, of course, in both the mathematical sense (metadata, like all entropy-fighting moves, requires energy) and in the human sense — in On Search, he talks about the difficulties of getting users to enter metadata.
And yet I keep having this feeling that folksonomy, and particularly amateur tagging, is profound in a way that the ‘no cheap metadata’ dictum doesn’t cover.
Imagine a world where there was really no cheap metadata. In that world, let’s say you head on down to the local Winn-Dixie to do your weekly grocery accrual. In that world, once you pilot your cart abreast of the checkout clerk, the bargaining begins.
You tell her what you think a 28 oz of Heinz ketchup should cost. She tells you there’s a premium for the squeezable bottle, and if you’re penny-pinching, you should get the Del Monte. You counter by saying you could shop elsewhere. And so on, until you arrive at a price for the ketchup. Next out of your cart, the Mrs. Paul’s fish sticks…
Meanwhile, back in the real world, you don’t have to do anything of the kind. When you get to the store, you find that, mirabile dictu, the metadata you need is already there, attached to the shelves in advance of your arrival!
Consider what goes into pricing a bottle of Heinz: the profit margin of the tomato grower, the price of a barrel of oil, local commercial rents, average disposable incomes in your area, and the cost of providing soap in the employee bathrooms. Yet all those inputs have already been calculated, and the resulting price then listed on handy little stickers right there on the shelves. And you didn’t have to do any work to produce that metadata.
Except, of course, you did. Every time you pick between the Heinz and the Del Monte, it’s like clicking a link, the simplest possible informative transaction. Your choice says “The Heinz, at $2.25 per 28 oz., is a better buy than the Del Monte at $1.89.” This is so simple it doesn’t seem like you’re producing metadata at all — you’re just getting ketchup for your fish sticks. But in aggregate, those choices tell Del Monte and Heinz how to capture the business of the price-sensitive and premium-tropic, respectively.
That looks like cheap metadata to me. And the secret is that that metadata is created through aggregate interaction. We know how much more Heinz ketchup should cost than Del Monte because Heinz Inc. has watched what customers do when they raise or lower their prices, and those millions of tiny, self-interested transactions have created the metadata that you take for granted. And when you buy ketchup, you add your little bit of preference data to the mix.
So this is my Get Out of Jail Free card to Tim’s conundrum. Cheap metadata is metadata made by someone else, or rather by many someone elses. Or, put another way, the most important ingredient in folksonomy is people.
I think cheap metadata has (at least) these characteristics:
1. It’s made by someone else
2. Its creation requires very few learned rules
3. It’s produced out of self-interest (Corollary: it is guilt-free)
4. Its value grows with aggregation
5. It does not break when there is incomplete or degenerate data
And this is what’s special about tagging. Lots of people tag links on del.icio.us, so I get lots of other people’s metadata for free. There is no long list of rules for tagging things ‘well,’ so there are few deflecting effects from transaction cost. People tag things for themselves, so there are no motivation issues. The more tags the better, because with more tags, I can better see both communal judgment and the full range of opinion. And no one cares, for example, that when I tag things ‘loc’ I mean the Library of Congress — the system doesn’t break with tags that are opaque to other users.
But other people will tag your posts if they need to group them, find them later, or classify them for any other reason. And out of this welter of tiny transactions comes something useful for someone else. And because the added value from the aggregate tags is simply the product of self-interest + ease of use + processor time, the resulting metadata is cheap. It’s not free, of course, but it is cheap.
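The aggregation step is simple enough to sketch. The users, URL, and tags below are invented (the ‘loc’ example echoes the one above); the point is that each person tags purely for their own retrieval, yet summing those private vocabularies yields a communal consensus label, and an opaque tag breaks nothing.

```python
from collections import Counter

# Hypothetical personal tag stores: user -> {url: [tags]}.
personal_tags = {
    "clay":  {"http://loc.gov": ["loc", "research"]},     # 'loc' is opaque to others
    "liz":   {"http://loc.gov": ["library", "congress"]},
    "david": {"http://loc.gov": ["library", "books"]},
}

def communal_view(url):
    """Aggregate every user's self-interested tags for one URL
    into a shared, frequency-ranked view."""
    counts = Counter()
    for tags_by_url in personal_tags.values():
        counts.update(tags_by_url.get(url, []))
    return counts.most_common()

# 'library' emerges as the consensus label simply because two users
# chose it independently; no one had to agree on a vocabulary first.
print(communal_view("http://loc.gov"))
```

This is the market intuition in miniature: the shared value is a byproduct of aggregation, not of coordination, which is what keeps the metadata cheap.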
I think Dave has pointed out a key problem with tagging. It seems like a nice idea but it requires us to always do it. The system wants 100% participation. If you don't do it even once, or don't do it well enough (by not choosing the "right" categories), then you are at fault for messing it up for others -- the searches won't be complete or will return wrong results. Guilt. But because it's manual and requires judgment you can't help but mess up sometimes so guilt is guaranteed. Doing it makes you feel bad because you can't ever really do it right. So, you might as well not play at all and just not tag.
This is the opposite of what I was getting at in my old Cornucopia of the Commons essay about volunteer labor. In that case, in a good system, just doing what you normally would do to help yourself helps everybody. Even helping a bit once in a while (like typing in the track names of a CD nobody else had ever entered) benefited you and the system. Instead of making you feel bad for "only" doing 99%, a well designed system makes you feel good for doing 1%. People complain about systems that have lots of "freeloaders". Systems that do well with lots of "freeloading" and make the best of periodic participation are good. Open Source software fits these criteria well and its success speaks for itself.
Blog post categorization is still not tagging, but will be soon
Tagging in del.icio.us and Flickr supports freeloading and rewards contribution
Categorization in blogging still lacks an easy tagging interface. In Typepad today, for example, you have to (a) add a new tag to your list, and (b) be done if you want just one tag, or (c) select multiple tags from your list. Encouraging a single tag is categorization, the pursuit of topic by design.
The Topic Exchange (nod to Phillip and M2M's Seb) lets you categorize your post through a trackback or manual entry into a topic channel on an aggregating site. More persistent groups within this system had a fascination with RidiculouslyEasyGroupForming, such as social software bloggers. Easy New Topics (nod to Matt and Paolo) took additional steps to enable extensible categorization within the blog client and easy group forming around topics. K-collector's pioneering implementation of ENT did bring together some early adopter bloggers around select topics, not too coincidentally among those more fascinated with ontology, like KM bloggers. Note that contribution to these systems is not a byproduct of regular use (without adding a category, your post is not added to the database) and carries relatively high transaction costs. Since use centers around formed groups, I would agree that guilt may come into play.
You can, and many do, use del.icio.us and Flickr without adding tags to links and pictures. You still contribute value to the system, the object itself, which others can pick out of the stream and add value to. When you do tag, however, you gain the reward of your own organization and the emergent structure of the group. Use centers, first and foremost, around individuals instead of groups, so guilt is barely a factor.
Dan's original example of Napster demonstrated Cornucopia effects where Greed is Good. You can take advantage of the common resource, but as a byproduct, you contribute to the commons, thereby increasing its value. But it must be noted that in some social systems, Guilt is Good. In particular, it can be used to curb negative behavior and even freeloading, which can increase the value of the system. UCLA researchers have highlighted the role of shunning in social systems:
"Up to this point, social scientists interested in the evolutionary roots of cooperative behavior have been hard-pressed to explain why any single individual would stick his neck out to punish those who fail to pull their weight in society," [Anthropologist Robert] Boyd said. "But without individuals willing to mete out punishment, we have a hard time explaining how societies develop and sustain cooperative behavior. Our model shows that as long as it is socially permissible, withholding help from a deadbeat actually proves to be in an individual's self-interest."
Perhaps a system isn't social if it only has first order commons dilemmas (governing the resource) and doesn't support management of the second order (governing each other). When a group explicitly forms around a tag, guilt may come into play (for example, shame on you people for not posting really ugly and fairly pointless parking lot photos!), and that's not necessarily a bad thing.
Matt Biddulph has put up a del.icio.us tag stemmer, which will take your username (or indeed any username) and point out possible inconsistencies based on word stemming (tag/tags/tagging, etc.). It will also take a URL, scan all users who tagged it, and look for the same thing.
What it will not (yet) do is return the full list of tags sorted by frequency, listing both tags with alternate stems and those without, but I assume this is simply a matter of time.
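A rough sketch of the idea follows. To be clear, this is not Matt Biddulph's actual code: the naive suffix-stripping rule, the sample tags, and the `inconsistencies` helper are all my own assumptions (a real stemmer such as Porter's would do better). It groups a user's tags by a crude stem, flags stems with more than one surface form, and returns the groups sorted by frequency, the missing piece noted above.

```python
from collections import Counter, defaultdict

def crude_stem(tag):
    """Naive suffix stripping, for illustration only; a real stemmer would do better."""
    for suffix in ("ging", "ing", "ges", "es", "s"):
        if tag.endswith(suffix) and len(tag) > len(suffix) + 2:
            return tag[: -len(suffix)]
    return tag

def inconsistencies(tags):
    """Group tags by stem, then sort stems by total frequency (descending)."""
    groups = defaultdict(Counter)
    for tag in tags:
        groups[crude_stem(tag)][tag] += 1
    return sorted(
        ((stem, counts.most_common()) for stem, counts in groups.items()),
        key=lambda item: -sum(n for _, n in item[1]),
    )

# Hypothetical tag history for one user.
user_tags = ["tag", "tags", "tagging", "folksonomy", "tags", "web"]
for stem, forms in inconsistencies(user_tags):
    flag = "INCONSISTENT" if len(forms) > 1 else "ok"
    print(stem, forms, flag)
```

Running this flags the tag/tags/tagging cluster as one stem with three surface forms, while leaving singletons like 'folksonomy' alone.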
This is part of why I think tags are such a big deal — they are annotations for the only native unit of accounting the Web has, namely the URL; the annotations are themselves URLs that can be further annotated; and they are simple enough in both concept and technical design that third-party services like ‘stemtags’ can easily be built on top of the system.
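That recursive property, annotations that are themselves URLs and so can be annotated in turn, is easy to sketch. The store and the tag-page URL scheme below are assumptions of mine, not any real service's design; the point is only that a tag page is just another URL in the same system.

```python
# Hypothetical tag store: annotations keyed by the Web's native
# unit of accounting, the URL.
annotations = {}

def tag(url, tag_name):
    """Attach a tag to a URL and return the (assumed) URL of the tag's own page."""
    annotations.setdefault(url, set()).add(tag_name)
    # The annotation is itself addressable as a URL, so third-party
    # services (or users) can annotate the annotation.
    return "http://example.org/tag/" + tag_name

tag_page = tag("http://example.com/essay", "folksonomy")
tag(tag_page, "meta")  # annotate the annotation: tags all the way down
```

Because the whole system bottoms out in URLs, a third-party service needs nothing more than HTTP and string handling to build on top of it.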
Clay’s right - i’m a huge skeptic, although i don’t attribute it to the academy at all. My first reaction to hype is and always was critique (unless, of course, i’m doing the hyping). This has resulted in me always ::raising eyebrows:: over everything from the best bands to “i just met the best girl in the world” stories.
I’m not actually in disagreement with Clay about classification - i am, after all, in library school. My first indoctrination was “classification is impossible - here are a bazillion techniques that we use to try to get better schemas.” So, when i critique folksonomy, it is not in comparison to formal structures of classification. My critical reaction comes from any and all claims that folksonomy is the panacea to hundreds of years of librarian woe. I know that formal systems are screwed, but i think that folksonomy has its own set of problems.
While i acknowledge the comparisons that can be made about the problematic similarities between folksonomy and formal classification, i also think that the effort towards ‘accuracy’ is actually clouding a few major differences. The differences are not that surprising, but very important. It comes down to benevolent dictator vs. crowd behavior. Sometimes the benevolent dictator goes way wrong, but also, sometimes crowds are scary.
There’s a problematic feature to crowds - they like to homogenize. Yes, the guy with the mohawk can assert his independence, but folks might trample him. Or he might be left to his own planet. Should he be given more attention than others because he is different? Should a classification schema be concerned with frequency/popularity or the full range? What does it mean to classify things that are rare viewpoints? Who gets to decide? That’s a heavily contested domain in classification.
Folksonomy isn’t asking the questions about the implications of collective action classification. Who benefits? Who becomes marginalized? What priorities bubble up? How does pressure to homogenize affect the schema and the people involved? How are some people hurt or offended by decisions that are made? Should moderation of classifications occur? If so, what are the consequences?
I totally appreciate the just-do model that is often espoused here, but i don’t subscribe to it. I believe that you have to go into the doing with the questions always at hand and always in check. What makes formal classification interesting is not its end result, its “technology” but the huge discourse around it, trying to figure out the implications of any and all decisions. Those questions have been around for years and i think that it’s important that we use those questions, those concerns, not for comparison but as a guideline for our hyping.
In short, i love tagging and folksonomy. But once it is taken seriously and people are talking about ‘accuracy’ and being offended, there are questions that must be asked despite the hype. “Folksonomy is better” is not good enough for me.