A Consuming Experience

Blogging, internet, software, mobile, telecomms, gadgets, technology, media and digital rights from the perspective of a consumer / user, including reviews, rants and random thoughts. Aimed at intelligent non-geeks, who are all too often unnecessarily disenfranchised by excessive use of tech jargon, this blog aims to be informative and practical without being patronising. With guides, tutorials, tips - and the occasional ever so slightly naughty observation.



Diginomics: why DRM promotes piracy, & why monetizing Web 2.0 is tough, etc

Monday, August 04, 2008





By applying principles of economics and management theory to analyse the nature and characteristics of digital goods and Web 2.0, economist Thierry Rayna (Thierry Rayna's papers / articles) and management scientist Ludmila Striukova (some papers) reach some pertinent conclusions and make some interesting and thought-provoking suggestions on the economics of the digital world, specifically on:
  • Piracy and DRM - why people pirate digital goods, why current technical implementations of digital rights management don't work (in fact are counter-productive) and are bad for society and consumers, and what kind of DRM might be effective while striking a fairer balance, and

  • Monetising Web 2.0 - why it's hard for content producers / providers / publishers to make money out of Web 2.0 under traditional business models, and what sort of new business model might work to monetise Web 2.0.
I first heard their ideas at "Diginomics" (the economics of digital technologies and Web 2.0), chaired by Thierry with a panel including Ludmila, at The Wealth of Networks conference 2008 (see that post for a summary and the MP3 recording of that session). It very much typified the "eureka scenario" for me, as the economics / management perspective was totally new to me, though no doubt not to others. There were lots of "lightbulb" moments when I was going, "Aha! That makes sense! That explains it!".

I later read their detailed papers with the same sense of excitement and dawning understanding. In this post I want to share, and raise some comments and queries on, their key theses - based on their session and the papers Thierry kindly made available to attendees (referred to below as the Web 2.0 paper, the White Knight paper and the Privacy paper).
Blogging. To whet bloggers' appetites, a couple of points on blogging struck me from the Web 2.0 paper (pgs. 5 & 12, my emphasis):
  • "Basically, in order to create a successful blog, the time spent on marketing is expected to be, at least, the same as time creating the blog." [Well there goes ACE then...]
  • Is a successful blogger one who understands technology and keeps up with change, or one who provides quality content? "The current incentive system, even for professional bloggers, is not so much about the quality of the content, but instead about the ability of the blogger to ‘play the rules of the game’ and make their blog more known than others." [Ditto!]
[In this context, they cited a 2006 book by blogger and internet marketing consultant Chris Garrett called Killer Flagship Content. However, I'd have liked to hear more thoughts from them personally on the economics / management principles behind successfully monetising blogs, and why, in economics terms, it's more important to increase fame and popularity than quality - but perhaps that will be the subject of a future paper.]

I'm doing this post in 3 parts:
  1. A summary of the nature of digital goods

  2. What that means in relation to DRM, and

  3. The nature and features of Web 2.0 and their implications for the monetization of Web 2.0.

1. Digital goods - their nature and special features

Digital goods - e.g. music, movies, computer games, software or documents transformed into binary code, like MP3s, Flash movies, DVDs and PDFs - are unique because:
  • they're infinitely durable - perfect copies can be made easily and cheaply in all kinds of formats;
  • they're "public goods" - different copies can be used by different consumers at the same time ("non-rival"), and producers can't stop non-paying consumers from getting and using their own copies ("non-excludable"); and
  • they're "experience goods" - you can't assess their value to you until you've experienced them, e.g. heard a song or seen a movie, so you're not willing to pay before trying them ("sampling", free trials or tasters) - yet suppliers rarely provide adequate samples.

As digital goods are effectively public goods, piracy is in fact economically rational behaviour on the part of consumers - with the motivation often being to sample, not just to free ride - and it reduces demand for legitimate versions; cheap, perfect replicability also drives the price of digital goods towards zero. All this hits the ability of creators / producers / providers to recover their initial production costs, never mind make profits on digital goods, so they're less willing to produce them, resulting in "under-provision" or under-supply, which is bad for society - unless there is appropriate public intervention or an adequate protection system.

The usual solution to this kind of problem is government intervention in the form of intellectual property rights (IPRs) in order to incentivise creators. Although IPRs in the form of patents may be sufficient in the case of inventions (or maybe not?), IPRs in the form of copyright don't have much effect in the case of digital goods in practice, because piracy is so widespread.

2. DRM - why it doesn't work and promotes piracy, & what DRM systems could work

DRM protection is the use of technical measures built into the product (e.g. a media file) from the get go, to enable content creators / publishers to control or restrict access to digital media and/or its distribution, sharing, copying or conversion into other formats, as well as to prevent its simultaneous use by more than one person. DRM is commonly implemented by encrypting the digital good and embedding in it DRM tags with information on the owner / device and their rights of usage, so that the good can't be consumed until it's activated: the player contacts the producer (or rather its server), which identifies and authenticates the consumer (checks the owner / device info matches its records), checks the number of times the good has been used, etc, and if all is as required authorises the use by providing a decoding key. (The producer obviously won't activate or authorise copies it finds are pirated.) The most widespread DRM system currently is FairPlay, used by Apple to control and restrict use of both audio and video content in its products and services: iPods, iTunes and the iTunes Store.
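For the more technically minded, here's a rough sketch in Python of that activation handshake - purely my own illustration of the general scheme, with invented names and limits, not the workings of FairPlay or any real DRM system:

```python
# Rough sketch of the DRM activation handshake described above. All names
# and limits are invented - an illustration of the scheme, not a real system.

RECORDS = {
    # purchase_id -> registered owner / device and usage allowance
    "P-1001": {"owner": "alice", "device": "ipod-42", "max_uses": 5, "uses": 0},
}

KEYS = {"P-1001": "decoding-key-xyz"}  # decoding keys held by the producer's server

def request_activation(purchase_id, owner, device):
    """Identify and authenticate the consumer, check usage, then hand over a key."""
    record = RECORDS.get(purchase_id)
    if record is None:
        return None  # unknown (e.g. pirated) copy - producer refuses to activate
    if record["owner"] != owner or record["device"] != device:
        return None  # owner / device info doesn't match the producer's records
    if record["uses"] >= record["max_uses"]:
        return None  # usage allowance exhausted
    record["uses"] += 1
    return KEYS[purchase_id]  # with the key, the good can finally be consumed

print(request_activation("P-1001", "alice", "ipod-42"))  # decoding-key-xyz
print(request_activation("P-9999", "bob", "pc-1"))       # None - unknown copy
```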

DRM aims to reduce piracy of digital goods by restricting who can access them and how, controlling how many times they can be used or consumed (played), and/or restricting their lifespan to a fixed period, which anyway can't extend beyond the life of the hardware device authorised to play them. (DRM could theoretically be used to provide better, more tailored sampling, but it isn't.)

In a way, DRM should be more effective than IPRs because you can only try to enforce copyright after it's been infringed (punitive) and have to bear the enforcement costs (litigation costs etc), whereas DRM aims to stop infringement from happening (preventative).

But in fact, DRM doesn't work, and is bad for consumers and society to boot.

DRM will work only if consumers can be persuaded to buy DRM-protected goods instead of getting them from pirates or filesharing sites. But people aren't switching to buying DRM'd goods. Why?

Because unprotected versions are still easily available (all it takes is one unprotected leaked copy for pirated copies to spread all over the Net), and DRM'd goods are less valuable to consumers than non-protected goods (legal or illegal): you can't lend them, back them up or resell them secondhand; transferring them to other formats, media or devices is restricted or impossible; and if bought online they're often of lower quality (compressed) compared with, say, CDs or DVDs.

Rational consumers aren't likely to buy DRM-protected goods when they can get, with wider availability:
  • (for only a slightly higher price) legal, fully-featured unrestricted non-DRM goods (e.g. CDs), or
  • (for free or negligible cost) pirated, full-featured unrestricted non-DRM goods (ripped files) obtained through illegal cracking, burning iTunes downloads to CD then ripping from that, or through the analogue hole (if you can hear or see it you can copy it).
Even those willing to buy DRM'd goods won't pay as high a price for them as for unprotected goods. And in fact DRM-protected goods usually are priced more cheaply than the unprotected version - but generally (and deliberately) not cheaply enough to persuade people who buy legal unprotected versions like CDs or DVDs to switch. Suppliers feature-strip DRM'd versions precisely so that the profitable class of consumer who is willing to buy more expensive unprotected versions (DVDs etc) won't want to switch to "value-subtracted" DRM versions, and will keep on buying CDs and DVDs. So in practice DRM may induce some of the people who were pirating digital goods, or not consuming them at all, to buy DRM versions - and at least suppliers will get a bit of money from that class of consumer that they weren't getting before - but that's all it achieves.

As the authors put it (White Knight paper, pg.12):
"Thus, consumers are facing a dilemma. If they want to access digital goods online, they can either choose DRM protected files, which are legal, but have a low value due to the restrictions of DRM, and a comparatively high price; or they can download pirated digital goods, which are illegal, but have no restrictions, and are available at no cost. It can even be argued that law-abiding consumers are, in a way, “punished”: although they do pay for their digital goods, the digital goods they obtain have fewer features and involve tedious authorisation process. In contrast, consumers who decide to pirate obtain full featured digital goods, for free."

Consumers are generally more willing to risk buying "experience goods" (whose value is uncertain before they've consumed them) if the goods are durable and can be re-sold in the secondary market to recoup some of the initial purchase price should they not like them - e.g. printed books. But without the ability to on-sell purchased DRM-protected digital goods (because of the DRM), consumers will be even more reluctant to buy them without having tried them first. And, in fact, a major reason consumers download pirated digital goods from file sharing sites is to sample or try them. So unless producers come up with a better sampling strategy (at the moment it's very "one size fits all"), consumers will be even more likely to pirate. [Note: I'd be interested to see examples of exactly what kinds of alternative sampling strategies the authors have in mind as better options.]

The key point: consumers haven't been given enough incentives to buy DRM protected goods in preference to unprotected goods. On the contrary, paradoxically the restrictions and reduced features imposed by DRM, the increased risk to consumers due to the lack of a secondhand market, and the inadequacy of the samples or free trials currently provided, together all mean that DRM-protected goods are much less valuable to the consumer than unprotected digital goods (whether legal or illegal) - and the introduction of DRM has thereby increased piracy. DRM hasn't made it harder for consumers to access pirated digital goods; it's just made legal DRM-protected goods less attractive to consumers than pirated goods, and it's also made legitimate buyers more willing to share their purchased digital goods.

DRM is also bad for society because the above factors mean that consumers avoid buying DRM-protected digital goods, leading to their "under-utilisation" - which like "underprovision" is bad for social welfare.

Furthermore, DRM systems are usually incompatible with each other and non-interoperable, partly because of the lack of common standards for DRM. So there's a risk of anti-competitive, even monopolistic, behaviour on the part of suppliers, and consumers have to bear switching costs (which are generally more significant in networked than non-networked environments) if they move systems - which adds to their reluctance to buy DRM-protected goods, as they may not want to be "locked in" to one system. So from a public policy viewpoint, a minimum pre-requisite for DRM to be considered socially beneficial would be a universal DRM technology based on open standards.

However, even if DRM systems were standardised, the benefits of DRM for society are still questionable. It aims to prevent piracy, but piracy can never be prevented while non-protected digital goods are available - even a single unprotected copy is enough to start "a stream of piracy".

The authors conclude that current DRM systems are wasteful and socially undesirable, decreasing the welfare of society as a whole, because:
  • DRM doesn't encourage consumers to buy DRM-protected goods and doesn't stop piracy, so pirating consumers are in the same position as before the introduction of DRM while law-abiding consumers are worse off due to the lower value of DRM'd goods, and
  • DRM is costly - to produce deliberately value-subtracted goods involves additional costs (it would actually be cheaper for firms to distribute full featured legal goods than to strip them down), and there are also costs to develop and continually upgrade DRM systems to counter new cracks, which together outweigh the benefits to suppliers of the extra money they might get from persuading pirating consumers or non-consumers to buy DRM'd goods (recall that they're unlikely to get buyers of more expensive unprotected goods to switch, and indeed they don't want to).
Also, DRM systems make anonymity of consumption impossible (since they work by identifying users), so they have privacy implications - another social concern: the collection of information about consumers and their consumption activities, often without their knowledge. These concerns may further deter consumers from buying DRM-restricted products. (See the Privacy paper for definitions of privacy; the authors settled on: "Privacy generally guarantees that personal information, which is not in the public domain, is not released without authorisation.")

As the authors say (White Knight paper, pg.17):
"Instead of stripping digital goods of their distinctive positive features, firms using DRM should instead increase the value of protected digital goods. So far, law abiding consumers are punished for their honesty: the digital goods they pay for have less features than pirated digital goods. Such consumers should, on the contrary be rewarded. It is clear when examining the current DRM policies used by the firms that they do not use DRM to its full potential, but merely as a way to capture additional surplus from honest consumers, who end up paying for pirating consumers. DRM is a very powerful tool, and it could enable firms to achieve near-first degree price discrimination [i.e. charging individual consumers differently, selling at a higher price to someone who's willing to pay more]. But this would certainly require a complete rethinking of firms marketing and pricing strategies."

Is there a form of DRM which would reduce piracy while protecting privacy?

DRM could theoretically be designed to collect enough information to allow first degree price discrimination (charging different individual customers different prices depending on how much the individual is prepared to pay), to make more money for suppliers. If DRM tracks every consumption of a particular good (e.g. each time you play a song), they can work out the value of the good (or type of good) to the individual consumer and charge them accordingly (Privacy paper pg. 6): "For example, consumers who listen to a certain group/artist on a regular basis could be charged a standard price when a new album is released, whereas those who are not familiar with this group/artist could be offered a discount to encourage the purchase."
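A toy example, with invented prices and thresholds, of the kind of usage-based pricing rule that quote describes:

```python
# Toy example of near-first degree price discrimination from play counts.
# The threshold and both prices are invented for illustration.

STANDARD_PRICE = 9.99
DISCOUNT_PRICE = 4.99
REGULAR_LISTENER_THRESHOLD = 20  # plays of this artist in the tracked history

def new_album_price(plays_of_artist):
    """Charge regular listeners the standard price; offer newcomers a discount."""
    if plays_of_artist >= REGULAR_LISTENER_THRESHOLD:
        return STANDARD_PRICE  # their play history shows they value this artist
    return DISCOUNT_PRICE      # discount to encourage an unfamiliar purchase

print(new_album_price(57))  # 9.99 - a regular listener pays the standard price
print(new_album_price(2))   # 4.99 - someone unfamiliar is offered the discount
```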

However, current DRM systems are designed to collect relatively little user information. First degree price discrimination isn't commonly used for digital goods because it's unprofitable - as long as there are alternative sources for digital goods, the consumer would switch to a different supplier for goods they value more, buying only goods they value less (and which are therefore priced lower, e.g. discounted) from the discriminating supplier, so the discriminating seller would end up making less and less money. Also, of course, consumers are unlikely to be willing to disclose enough information to allow firms to charge them more for the products they like! If DRM attempted to track this information, it would further reduce the demand for DRM protected goods and increase demand for unprotected goods.

The authors suggest the possibility of what they call a "mutually advantageous disclosure" or "rewarded disclosure" DRM system: firms would pay (share with?) consumers a certain portion of their higher profits (higher due to being able to apply first degree discrimination), in return for consumers disclosing the extra information, and the result would be better for consumers, firms and society (including allowing firms to make enough money to recover their initial sunk costs i.e. fixed costs of production). For the maths behind this idea, which I won't even try to go into, see their model of the demand function on pages 7-8 of the Privacy paper and their graph - quantity on the x axis, price on the y axis.

The authors note that price discrimination would be easier for repeat consumption products (music, software, games) or products supplied in parts (TV show series / serials) than for films or books. Also, consumers who value privacy intrinsically may still not be willing to disclose more personal information unless the reward is still higher - even so, the authors think the potential gains would make it worthwhile for suppliers to offer the higher reward.

[Personally, especially as I'm no expert in economics, I would find it helpful to see concrete hypothetical examples, with figures, of precisely how all this might work. What extra personal information would be given, precisely? How would suppliers calculate exactly what amount should be "given back" to which individual? Would each person get the same proportion, or would different individuals be rewarded differently? I think a major issue will be consumer trust, and transparency on the part of providers. Can consumers trust that suppliers, having been given enough information to make even more money from them, will actually then pay the due reward over to consumers? How will an individual know if the amount of "reward" rebated to them is the correct fair amount? And how do they ensure they'll receive the right amount?]

Another interesting idea is for an alternative type of DRM. Current DRM systems work by authenticating users and controlling lifespan. The authors suggest it's possible to design what they call a "rivalness-based DRM system" that ensures each unit of the digital good (e.g. a media file) can only be used by one consumer at one time - i.e. by:
  • identifying each unit of the good (e.g. through a unique ID code or serial number for each unit), rather than identifying the individual customer, and
  • contacting a central server before each consumption to check that the unit is not already being used.
The key point: any number of copies of a unit can be made, but if someone is already consuming a copy of that unit (e.g. playing a music or movie file), the server won't let anyone else play any other copy of the same unit. Owners of other units of that product can however play their units independently.

Example: say I record a death metal version of Greensleeves. With a rivalness-based DRM system, unit 1 of my recording is given the unique serial / ID no. of 001. Unit 2 bears unique serial no. 002. And so on. My mum, who buys unit 1 (001), can make as many copies of unit 1 as she likes, and store copies on e.g. her home computer, work computer, portable MP3 player, car MP3 player etc - but only one of those copies can be played at a time. As long as it's still playing on her home computer, the copy on her iPod (or car etc) can't be played. However, playing unit 001 won't stop the owner of 002 from playing their unit, because it has a different ID number (even though it's the same recording of the same song). So my best friend, who bought unit 2 ID no. 002 of my recording (with only a little arm twisting), can play her unit 2 at the same time as my mum is playing her unit 1. Different units can be used at the same time; but different copies of the same unit can't be.
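In code terms, the server-side check might look something like this minimal sketch (my own, with invented details - the authors don't specify an implementation):

```python
# Minimal sketch of a rivalness-based DRM check: the server tracks which
# UNITS are currently in use, never who is using them. My own illustration.

import threading

class RivalnessServer:
    def __init__(self):
        self._in_use = set()          # unit IDs currently being played
        self._lock = threading.Lock()

    def start_playback(self, unit_id):
        """Allow playback only if no other copy of this unit is playing."""
        with self._lock:
            if unit_id in self._in_use:
                return False          # another copy of this unit is playing
            self._in_use.add(unit_id)
            return True

    def stop_playback(self, unit_id):
        with self._lock:
            self._in_use.discard(unit_id)

server = RivalnessServer()
print(server.start_playback("001"))  # True  - mum plays her copy of unit 001
print(server.start_playback("001"))  # False - a second copy of 001 is blocked
print(server.start_playback("002"))  # True  - my friend's unit 002 is unaffected
```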

This method has advantages for both suppliers and buyers:
  • It allows copying to different devices and for backup; only one copy can be played at a time, but other copies can be made and used.
  • It identifies the unit, not the consumer, and tracks usage of the unit, not the individual consumer - thus preserving personal privacy and anonymity.
  • It reduces consumers' willingness to share copies and so reduces the dissemination of illegal copies - people are happy to share music and video files because sharing doesn't deprive them of the use of their own copy in future. But if only one person can play a copy of any one unit at a time, the original owner / buyer of that unit won't be so willing to let other people have copies, because if someone else happens to be already playing a copy of the unit at a time when the original owner wants to play it, the original owner won't be able to play it; and the more people that have a copy of a unit, the more likely it is that someone else will be already using it when the owner (or anyone else) wants to. So, legitimate owners have good reasons not to share copies of their files (just as they'd be reluctant to lend someone else their car, or garden fork, if they know they're going to need to use it). This also means that there will be fewer illegal copies around, as most copies would remain in the hands of only their original legal buyers, so if a crack is discovered suppliers would have more time to update their DRM system to counter it before illegal copies became too widespread.
  • It re-aligns the interests of suppliers and consumers by transferring the burden of piracy on to consumers. Currently consumers don't suffer from piracy (except in the indirect, weak sense of fewer digital goods being created generally). In fact, they benefit from it, because piracy results in more goods they can get for free. With rivalness-based DRM, it's not just suppliers who suffer from piracy (due to decreased sales) - it's consumers too (due to inability to consume the goods they've bought if too many copies of their unit are in use).
  • It decreases the value of pirated digital goods and piracy generally - there's little incentive for consumers to acquire or disseminate pirated copies, because they can't use their copy if someone else is already using it, and the more copies that are spread around, the more likely it is that someone else is already using it, so in time it would become totally unusable. A pirating consumer currently bears a (small) risk of being sued for copyright breach; with a rivalness-based DRM system, the pirating consumer would face a (more likely, and much larger) risk of not being able to use the pirated good, a risk that would increase as more people pirated it.
There's one obvious difficulty. A rivalness-based DRM system requires collecting usage information in real time, for all copies of particular units - but how can it do that if the media player or computer isn't connected to the Net at the time? The authors suggest collecting and storing the usage information and uploading it to the server only as and when the device is connected to the internet (or to a connected computer), and if it's found that more than one copy has been used on more than one machine at the same time, it will trigger a "punishment mechanism" stopping all copies of that unit from being used for a certain period of time (or other units, in the case of goods consumed only once). [Is this a loophole? It may be possible to ensure certain machines are never connected to the Net or to a computer that's connected to the Net; if copies are played only on those machines, their use can never be detected and they can never be "punished". If the device is a multimedia phone, checking existing usage over the Net would also incur data charges for the phone owner, perhaps behind the scenes, which won't endear the system to the owner unless they have a flat rate unlimited data plan; and if it's an N95 smartphone like mine, the owner can just deny the player permission to access the Net! I suppose the system could be set up so that if a unit's usage is not checked at least once every say 3 or 5 plays on the same device then it stops working, in order to force the owner to connect it.]
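Again purely as my own illustration of that suggestion (log format, overlap test and punishment period all invented), the deferred check might work something like this:

```python
# Sketch of the deferred check: devices log plays while offline, and on the
# next connection the server scans the uploaded logs for overlapping plays
# of the same unit on different devices, triggering the punishment mechanism.

from datetime import datetime, timedelta
from itertools import combinations

PUNISHMENT = timedelta(days=7)   # invented length of the punishment window
blocked_until = {}               # unit_id -> end of the punishment window

def reconcile(unit_id, logs):
    """logs: (device_id, start, end) tuples collected while devices were offline."""
    for (d1, s1, e1), (d2, s2, e2) in combinations(logs, 2):
        if d1 != d2 and s1 < e2 and s2 < e1:
            # same unit playing on two devices at once: block all copies for a while
            blocked_until[unit_id] = datetime.now() + PUNISHMENT
            return
```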

Their suggested alternative system is certainly ingenious and it will be interesting to see if anyone can create such a system and get it to work; personally, being keen on my privacy, I like the sound of this option much better than the "Keep existing DRM but pay consumers to disclose more personal information" option.

Now, on to how to monetize Web 2.0.

3. Monetizing Web 2.0

The really major issues for society which the authors raise on Web 2.0 (apart from the old chestnut of copyright) are:
  • How do you fund the production of original content on Web 2.0? The most successful strategy so far seems to be to provide free content whose production is funded through selling ads.

  • How do you incentivise the production of quality content? On Web 2.0, more often than not the monetary rewards go, not to the producers or creators of that content, but to intermediaries (with the exception of ad-funded blogs and the like).
Incentives to Web 2.0 content producers are currently mainly provided not through price, as in a traditional free market, but through advertising income. The problem this creates is that the relationship between ad income and value is loose, so market distortions and inefficiencies are likely to result. [Here I'd have liked to see reference to studies on the links between ad income and actual market / social value, or other explanations of why they're not closely coupled enough. Or maybe that's a principle that's just obvious to economists? Especially as the authors acknowledged that advertising has still proved to be the most successful model so far, I'd have liked more on why they think it doesn't work, and on why and in what way advertising income and market value aren't tightly coupled enough.]

Furthermore, most Web 2.0 content creators are mainly incentivised by non-financial considerations (reputation, altruism etc) which don't tie in with the social value of what they produce.

Web 2.0 doesn't operate like a simple traditional market, because of two key factors: the economic nature of digital goods, explained above (which means the main issue with monetising Web 2.0 is the large volume of freeriding, i.e. piracy), and transaction costs coupled with the low value of most Web 2.0 content. So, what are "transaction costs"?

Transaction costs and search costs

In economics terms, transaction costs (the costs incurred in making an economic exchange) will include search costs (the costs of checking what's the best or most suitable product, where is it available, which seller offers the lowest price etc).

There are 2 types of search costs: external search costs (the monetary cost and opportunity cost of the time taken searching), which depend on technology etc and are usually the same for everyone; and internal search costs (cognitive costs), which vary with the individual consumer (thinking time / load to formulate search queries, analyse results in order to make decisions, etc).

Search engines like Google have reduced external search costs effectively to zero, but cognitive costs remain, and indeed will grow with the amount of information to be processed. Search costs are further increased by the availability of more and more content generally (because Web 2.0 has facilitated content creation by the masses), more and more of which is relatively invisible / inaccessible: "private" content not indexed by search engines (whose existence also increases external search costs), and multimedia content like videos and images, which isn't accessible to search engines unless the content producer tags it.

Web 2.0 - the costs, incentives and inefficiencies

It costs content creators / publishers time / money to tag their content accurately, and producers by and large don't directly benefit from their contributions, so why should they expend even more time / effort on tagging? So, they don't.

Now if I upload a video without tagging it, it costs other people (lots of people, potentially the whole connected world) extra time to try to find my video via current search engines (or, perhaps more likely, to find other stuff that they're really looking for amongst the extra "noise" added by my content!). The publisher fails to tag, yet it's society, not the publisher, who has to bear the extra (retrieval) costs. It costs society more than it cost the creator. (That's what economists call a "negative externality").

The result, as with other negative externalities: too much untagged multimedia content is produced by publishers, increasing search costs more and more, which, as the authors put it, is socially inefficient. Less content, but fully tagged, would actually be more beneficial to society.

Most contributors don't benefit directly (or indeed financially) from their own produced content. So, why do people contribute to Web 2.0? Incentives are similar to those with open source software: the immediate satisfaction of bug fixing/producing content, delayed benefits (reputation, ego gratification, career improvement), altruism and community identification. These incentives, along with the desire to publish accurate facts particularly in areas of personal interest (Wikipedia contributors), and the attractions of receiving positive (or indeed any) feedback, are sufficient to motivate production of at least some content.

Also, although it's rare, some professional bloggers do make money, a few of them good money, through ads, affiliate commissions, product sales, donations etc. However, substantial investment is needed first to generate reputation (participation in forums, online communities, social networking sites) and traffic (marketing the blog: knowledge of blog publishing software, feed aggregators, blog carnivals, SEO, tagging etc). Indeed, to create a successful blog, as much time has to be spent on marketing as creating content.

The main issue is that, presumably with the exception of professional blogs (which are rare), the (often subjective) incentives for Web 2.0 contributors - the private benefits perceived by the contributors - don't match up to the social value, the actual benefit to society, of their contributions: high quality contributions can benefit society more than they benefit the contributor, so the contributor isn't rewarded for that extra social value. (In economics terms, there's a "positive externality".)

Although there are some incentives to produce valuable high quality Web 2.0 content, unfortunately there's no Web 2.0 mechanism to systematically ensure that the incentives for producers / creators match up closely enough with the social value of their contributions. (Despite the long tail concept, which might suggest that all content has some social value, research shows that on e.g. YouTube, 10% of videos in fact account for 90% of the views.)

Web 2.0 is unusual in being the source of both negative and positive externalities at the same time. The production of more (untagged, low value) Web 2.0 content costs society more than it costs the creator; while high value content benefits society more than it benefits its creator (Web 2.0 paper pg. 9): "Unfortunately, this means that low (social) value content is very likely to be over-produced, while high (social) value content is, probably, produced in insufficient quantity, thereby leading to an inefficient outcome".

The large supply of content reduces the market value of all online content generally, and thereby reduces the supply of good quality content. Too much free (or low cost) low quality content competing for consumers' time and attention crowds out the good quality content (in economics, the market for "lemons"). Where there are information asymmetries between buyers and sellers, e.g. regarding the quality of Web 2.0 content (producers know more about the content than consumers do; consumers won't know the quality of the content till after they've consumed it), what's known as "adverse selection" applies: "bad" products are more likely to be selected than good ones.

Where there's adverse selection, consumers' willingness to pay is usually a weighted average of the quality present in the market. So if there's proportionally a lot of low quality content, consumers' willingness to pay becomes close to zero - so much so that higher quality content may be driven out of the market altogether.
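A back-of-envelope illustration, with invented numbers, of how that weighted average drives willingness to pay towards zero:

```python
# Back-of-envelope illustration of adverse selection (all figures invented).
# Willingness to pay = weighted average of the quality present in the market.

VALUE_OF_GOOD_CONTENT = 1.00  # what a consumer would pay for known-good content
VALUE_OF_BAD_CONTENT  = 0.05  # what they'd pay for known-poor content

def willingness_to_pay(share_of_good):
    """Expected value of a randomly chosen piece of content in the market."""
    return (share_of_good * VALUE_OF_GOOD_CONTENT
            + (1 - share_of_good) * VALUE_OF_BAD_CONTENT)

print(willingness_to_pay(0.50))  # 0.525 - a mixed market
print(willingness_to_pay(0.10))  # 0.145 - mostly low quality: WTP nears zero,
                                 # so producing good content stops paying off
```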

To counter adverse selection, producers normally use "signalling" to indicate the quality of their product objectively and clearly. But signalling is unlikely to be effective in Web 2.0:
  • usual signalling strategies (guarantees, money back etc) don't generally apply to Web 2.0 content;
  • the cost of signalling (especially for bloggers) may not be thought worthwhile;
  • the value of content to the consumer is often subjective rather than objective in Web 2.0, so it's hard to arrive at universal signalling criteria;
  • the multitude of amateur producers, lacking the market knowledge / experience to assess the market or social value of their product, may overestimate the quality of their own content and decide to signal, while producers of higher quality content may not signal; and
  • professional bloggers need to spend as much time promoting their blogs as writing them, as the current incentive system is more about the blogger's ability to play the rules of the game and make their blog better known than others' than it is about the quality of their content.
[Is the point here that signalling doesn't work with blogs because it's fame rather than quality which matters there? Here I'd have liked to see more evidence / research and economics explanations as to why that's the case.]

In summary, Web 2.0 is largely non-monetary and suffers from economic inefficiencies like search costs, crowding out and adverse selection: the costs and benefits of producing extra content, to the creator and to society, don't match up, so too much low quality content is produced, which drowns out / drives out the good content.

Possible business models for Web 2.0

The authors, as previously mentioned, feel that the "free, funded by advertisements" business model often used in Web 2.0 is inefficient, in terms of the market and society, because there isn't a close enough link between ad income and the market / social value of the Web 2.0 content produced. What alternative models might be used?

Pay per use.
Monetizing Web 2.0 by switching from a free to "pay per use" or "pay per access" model would involve large transaction costs as there would be an enormous number of transactions owing to the huge amount of content and large numbers of producers.

Micropayments are the most likely way due to the relatively low value of most content. But micropayments won't be worthwhile, and so won't be adopted, unless the transaction costs are low enough - and that's not just the monetary costs e.g. payment systems fees, but also opportunity costs (time spent on and leading up to the purchase) and cognitive costs (again on and leading up to that particular purchase).

For consumers, "pay per use" costs them in terms not only of money but also time and cognitive costs, e.g. checking out options fully to ensure they're forking out their hard-earned (even micro) cash on the best value product for them. And with experience goods, even extensive research may not help - only actual consumption enables proper evaluation. Also, there are the costs of coordinating with the supplier, entering into a contractual relationship, and the potential costs of dealing with any problems that might arise (which people would shrug off if it was free). For relatively low value goods, would the extra "hassle factors" (including search costs) be worth it to consumers, even if the actual monetary spend is very small?
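Some illustrative arithmetic (all figures invented) of why the sticker price is the least of it:

```python
# Illustrative arithmetic (invented figures): a micropayment is worthwhile
# only if the content's value exceeds price plus ALL transaction costs.

price          = 0.25  # the micropayment itself
payment_fee    = 0.05  # payment system fee, borne by or priced on to the consumer
time_cost      = 0.50  # opportunity cost of finding and checking out the item
cognitive_cost = 0.30  # effort of deciding whether it's worth buying at all

total_cost = price + payment_fee + time_cost + cognitive_cost
print(total_cost)  # 1.10 - over four times the sticker price

# A consumer who values the good at, say, 0.60 never buys, even though the
# monetary price alone (0.25) looks trivially affordable.
```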

For producers / providers, even reducing transaction costs for pay per use might not make pay per use profitable enough - with information goods the best route to profit has normally been to bundle information goods and/or their use, so much so that different independent producers get together to package all their goods together into one bundle sold in one transaction (e.g. MacHeist), and thereby lower transaction costs.

All of which is why the authors think pay per use doesn't seem to be a viable way to monetise Web 2.0.

Subscription fees. A subscription route may be better (e.g. to a feed), as it would reduce the volume of transactions (subscription once per year instead of per use). But it won't work for all Web 2.0 content - e.g. irregularly published content is hard to price, and there would still be lots of transactions given the large number of content producers. A subscription via an intermediary like YouTube, probably a flat fee to reduce transaction costs, might work - they could then further divvy up fees amongst creators based on e.g. usage. Yet no attempts along those lines have so far worked out. [Note: I'd have liked to see examples here, and suggestions of reasons based on principles of economics as to why they've not worked out and aren't considered viable.]

In Web 2.0, participants are both consumers and providers, so exchanges between them are many and frequent (which according to transaction cost theory is why firms and corporations arose: there's a point where transaction costs become too high for exchanges between individuals in a market environment, resulting in the creation of "non-market entities" like companies). Charging for so many exchanges would be very costly, particularly in terms of time and cognitive resources, so it's not surprising that a very collaborative environment like Web 2.0 has developed into an environment which is primarily non-monetary. But will it always stay that way?

Another way to monetise Web 2.0 - a "demand-driven" Web?

The authors believe the key challenge to monetising Web 2.0 is: how to better align incentives for producers with social value (i.e. how to incentivise providers to produce higher quality content), in light of the economic characteristics of digital goods as public goods (which means piracy can't in practice be prevented, so reproduction / distribution of existing digital goods won't be very profitable), without incurring excessive transaction costs (e.g. how to reduce the volume of transactions).

They suggest one method which would address all 3 of those issues: instigate a demand-driven Web 2.0, instead of or alongside the current supply-driven Web 2.0 - i.e. publish content (whether pre-existing or created to order) only to order.

That should incentivise the production of high quality content as those demanding it would be willing to pay for it, and producers who believe their content has high value would wait for demand before publishing (while those who think their content is relatively low value will still continue to publish).

Access to the first ever unit of any digital good can be fully controlled, so at the time of first publication it can be charged for, but after it's been published copies will become available to lots of people over time, so there's little point in insisting that every copy has to be charged for forever, or trying to restrict access or copying (and a compulsory charge only for the initial publication would also reduce transactions, and therefore transaction costs, compared with pay per use). However, because it will take time for the goods to spread amongst consumers, some people may still be willing to pay for early access to it (as in the case of e.g. breaking news), so it's possible for the producer and first buyers to charge for access in the meantime - which means initial buyers should be willing to pay even more for the first unit, as they can charge for access to their copy in the early days. A new field of research in economics has in fact shown that an efficient competitive market can be achieved with digital goods, even though they're public goods, as long as there is "finite expansibility" (i.e. they don't spread through the economy instantly).

The authors think such a demand-driven Web would involve an intermediary with whom potential providers register existing content on offer (e.g. tagged holiday pics) or their ability to produce content (e.g. coding skills), and potential consumers register their needs and demands. The intermediary's system would match suppliers and buyers, who would agree on price, etc. After supply of the digital good, although the producer retains the copyright in it everyone - buyers too - would be allowed to distribute or resell it. A reputation system could also be used (feedback, ratings etc presumably).
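To picture the kind of intermediary they describe, here's a toy sketch in Python - the data model and matching rule are entirely my own invention, not the authors' design:

```python
# Toy sketch of a demand-driven intermediary: providers register offers,
# consumers register demands, and the system matches them. All invented.

offers = [
    {"provider": "p1", "tags": {"holiday", "photos", "italy"}, "min_price": 5.0},
    {"provider": "p2", "tags": {"python", "coding"},           "min_price": 50.0},
]

demands = [
    {"consumer": "c1", "tags": {"italy", "photos"}, "max_price": 10.0},
]

def match(offers, demands):
    """Pair each demand with any offer sharing tags at a mutually acceptable price."""
    for d in demands:
        for o in offers:
            if d["tags"] & o["tags"] and o["min_price"] <= d["max_price"]:
                yield (d["consumer"], o["provider"], o["min_price"])

print(list(match(offers, demands)))  # [('c1', 'p1', 5.0)]
```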

While a demand-based system would not guarantee an efficient market, it could work alongside and complement the existing Web 2.0 to increase the proportion of high quality content on Web 2.0 and reduce search costs (because less low value content would be published and/or it would incentivise the tagging of content to make it easier to match with demand).

[Question: this might work for some things, but how would it work for other things where there isn't an existing known demand, like new music? Fans of well known bands may be willing to pay for them to produce more songs, but what about unknowns? I suppose this is where something like Creative Commons free samples comes into it, to build up a fan base. But then persuading people to thereafter pay for new material - is that feasible? And if even the hugely popular bestselling Stephen King couldn't get fans to pay enough for him to finish The Plant serial, what hope for others? I'd love to hear the authors' thoughts on the "Give recordings away, make money on touring and merchandising" approach, or any alternatives. Songwriting to order, back to the days of commissioned composers, Mozart etc?]

Aside: demand-driven approach for NGOs?

The "demand-driven", "first buyer pays" (or perhaps, rather, "first request is paid for") concept seems to me to very much tie in with the approach Alan Mitchell wrote about in FT 23 March 2008 of The Key, an experimental problem-solving community formed by 2 UK government-sponsored agencies (including the Training and Development Agency for Schools) and provided by lifestyle and concierge / management services company Ten UK, which "tries to combine human beings’ ability to understand significance and meaning with the efficiencies of new technologies" in order to build up a very focused and relevant knowledge base for a particular community. It started offline, but is now moving to the Web.

The idea is to provide information within a tightly focused community facing similar problems (in this case head teachers and other school leaders) by using individuals' specific questions to define the content of what's provided (my emphases):

"In The Key’s first phase, school leaders phone or e-mail questions to researchers who find the best possible answer from official sources, experts and published res­earch. The researchers, some of whom are former school leaders, compile a full answer, with references, sources and suggestions for further reading, and tag it for future reference.

At first sight, the model looks economic nonsense. Paying for human beings to research answers to tricky questions from potentially 20,000 school leaders, one by one, would be expensive. But they are all facing similar problems; and the more times a question comes up, the lower the cost per answer. The aim is to manage the resulting information so that each answer adds to an ever-expanding knowledge base. In the first four weeks, half the questions required new research. At three months, nearly 90 per cent could be answered using existing content...

...nine out of 10 users saying it has saved an average of five hours per question. In addition, most say it has improved the way the school works because better decisions are made more quickly..."

In the second phase previously-answered questions are being made available on the Internet for direct access to the knowledge base. Thereafter, the researchers should only need to answer new questions and update old answers. "The goal is to turn ignorance (individuals' questions) into a valuable resource".
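Using the reuse rates quoted above, a rough model (my own, with invented costs) shows why the cost per answer falls so fast:

```python
# Rough model of The Key's economics, using the reuse rates quoted above.
# The two cost figures are invented; only the reuse rates come from the article.

COST_OF_NEW_RESEARCH = 100.0  # cost of researching one brand-new answer
COST_OF_REUSE        = 5.0    # cost of serving an answer from the knowledge base

def avg_cost_per_answer(reuse_rate):
    """Only questions needing NEW research incur the big researcher-time cost."""
    return (1 - reuse_rate) * COST_OF_NEW_RESEARCH + reuse_rate * COST_OF_REUSE

print(avg_cost_per_answer(0.50))  # 52.5 - first four weeks: half needed research
print(avg_cost_per_answer(0.90))  # 14.5 - at three months: ~90% answered from
                                  #        the existing knowledge base
```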

While this approach could extend to other fields in both public and private sectors, important issues need to be addressed to make it workable. The service must be sufficiently usable and user-centric - in terms of not just the information content provided but also the structure of the website (e.g. number and type of questions asked) and the language used (the words and phraseology actually used by school leaders in asking questions, rather than by policymakers or government officials). Cost management is necessary for the community to be economically viable; relative costs could rise too high if the volume of questions falls too much (so the knowledge base becomes less useful and may not be worth the investment), if the community is too diffuse and not narrowly-focused enough (so the same questions are not repeated and again the investment isn't worthwhile), or if too many users use phone or e-mail rather than web self-service for their questions (which they will if the search system isn't up to scratch). And user confidence and trust in the answers need to be maintained: that the answers will remain unbiased (in this particular case, will politicians and civil servants stay out of the way and let policy implementers rather than policymakers set the information agenda?!) and also, I think, accurate (confidence that the researchers have suitable expertise in the field, both theoretical and practical).

So it seems a possible business model would be: find a narrowly-focused field or community (a narrow focus seems to be good for blog SEO too, see item 2.7 of that post), set up as an intermediary for that community (finding and paying researchers with appropriate expertise to answer questions - as employees? or more likely independent contractors), charge for the service (perhaps just for running it, and/or a cut for each answer), but critically make sure the service and website are sufficiently well structured and usable for users to find what they're looking for efficiently and quickly. However, I think this approach would be less sustainable in an area where new questions keep cropping up (e.g. software support where the software changes very often? Indeed, isn't this approach much the same as that already used within corporations that provide support for their products or services, to build up their FAQs / customer knowledge base?).

Like me, you may also wonder whether The Key's approach is more useful as a way to help a narrowly-focused, non-profitmaking community - like NGOs operating in certain niche or specialist fields where things don't change too rapidly - to club together (perhaps with some government support) and build up a communal knowledge base cost-effectively, than as a way to make profits (though it does seem to be a way for the intermediary, in this case Ten UK, to make money, especially given that it's government-financed or subsidised!). Perhaps the model is: widen the user base, and charge others outside the original community a subscription fee to access the knowledge base once it's been built up?

Back to the wider issue of the suggested demand-driven Web 2.0: I think usability, cost management and maintaining user trust will be as important for a demand-driven Web 2.0 system as for The Key. It will be interesting to see to what extent a demand-driven system arises and is profitable. And I'm still not sure what system could be devised that would reward and incentivise creators of Web 2.0 content adequately and more than the intermediary / middleman - to me, as a writer and musician, for the sake of the creative industries and innovation, a system that rewards producers more than distributors or middlemen would be the best. I do wonder if a demand-driven web would be more profitable for the intermediary matchmaker than the content providers...?



Wealth of Networks 2008: digital economies & next generation internet - audio recordings






Here are audio recordings and notes on a few of the sessions at the Wealth of Networks conference hosted by Imperial College on 24 July 2008, organised by the Imperial College Internet Centre, Margaret Gold and others, which to me was one of the most interesting and useful conferences ever - it was much better than many expensive conferences, yet it was free to attendees as it was sponsored by the EPSRC (Engineering and Physical Sciences Research Council), who wanted to encourage public attendance and feedback.


You can read Margaret's write up of the day (intro, keynote, panel discussion, internet angel) - and another summary of the internet angel (personalised agents) session. See also the pages on transport, e-healthcare, and cascade of information (identifying influential bloggers etc) which link to the slides for those sessions.

Digital Economy Programme

Digital Economy Programme speech - audio MP3 recording. By John Hand of the EPSRC (Engineering and Physical Sciences Research Council), about the joint Research Councils' visionary, multi-disciplinary Digital Economy Programme, which initially is concentrating on:

  • healthcare
  • transport, and
  • the creative industries.

Keynote speech: John Varney, MaximumClarity



The keynote speech was an absolutely brilliant exposition by John Varney, founder and CEO of MaximumClarity (and formerly CTO of the BBC), on the Net and networks - where they've come from, where they're going.

I'm gutted that something went wrong with the MP3 recording of the keynote presentation, but the video should be up shortly and no doubt will be linked to from the main Wealth of Networks site - it's very well worth watching, even for non-geeks (also see this blog post summary).

Here are some key points based on his slides and some jottings.

John Varney's 5 C's

Starting with radio / TV pioneer David Sarnoff in 1935, there have been increasing advances in terms of:
  • Capability - power of the device (Gordon Moore) 1965
  • Connectivity - power of the network (Bob Metcalfe) 1973
  • Collaboration - power of the group-forming network (David Reed) 1984
  • Communities - power of the participants (Ross Mayfield) 2003
  • Content - all driven by a content (and service) rich Web.
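For the non-geeks, these names are usually attached to scaling "laws" - here's a quick comparison (my gloss, not from the slides, with Sarnoff's broadcast law thrown in for contrast) showing why each step changes the game:

```python
# The scaling laws usually associated with these names (my gloss, not the
# slides): broadcast value grows with n (Sarnoff), pairwise-network value
# with n^2 (Metcalfe), and group-forming network value with 2^n (Reed).

def sarnoff(n):   # broadcast: value ~ number of receivers
    return n

def metcalfe(n):  # pairwise connections: value ~ n squared
    return n * n

def reed(n):      # possible sub-groups: value ~ 2 to the power n
    return 2 ** n

for n in (10, 20, 30):
    print(n, sarnoff(n), metcalfe(n), reed(n))
# 10   10   100         1024
# 20   20   400      1048576
# 30   30   900   1073741824  - why group-forming communities dominate
```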

The democratisation of culture

He mentioned The Wealth of Networks: how social production transforms markets and freedom, a book (available online) by Yochai Benkler published in 2006. The title of the conference is no coincidence... Social wealth, monetary wealth and creative wealth have all increased - plus, there's increasing information symmetry, as illustrated by changes in news reporting:
  • 9/11 - in 2001, the mainstream media provided the primary coverage and dictated what information people received.
  • Indian Ocean tsunami - in 2004, it was all over by the time journalists from the mainstream media reached the affected regions. But people who were on the spot, ordinary people, took photos and videos of the events, and sent them in to newspapers and TV stations.
  • 7 July Tube bombings - in 2005, people who were directly affected by the bombings recorded their own experiences. Some sent their cameraphone videos and photos to the press and broadcasters, but many just sent them straight to family and friends, or uploaded them online.

The demand for network

It took:
  • 50 years for books to reach 50 million people
  • 75 years for the telephone
  • 11 years for the television
  • only 5 years for the internet (...and in another 10 years the Net will reach 1 billion people).

John Varney's top 5

His personal predictions for the future:
  • Death of the organisation, rise of the collaboration (firms will still exist, but the way they work will be different)
  • Micro-entrepreneurialism (as exemplified by online micro-lending platform Kiva, which he personally contributes to)
  • Open source innovation
  • Telemedicine
  • Computing-free enterprise (thin clients, cloud computing, Google Docs etc) - there will be no IT departments in a dozen years...!

"The future has arrived, it's just not evenly distributed yet." William Gibson, science fiction writer.

Plenary Panel on Digital Economies and the Next-generation Internet

Plenary Panel discussion - audio MP3 recording. An excellent discussion, focusing on:
  • music - P2P filesharing and ISPs' recent deal with music industry (letters to ISP customers); business model for the music business
  • internet - business models for internet service providers, bandwidth, internet blackout, Net neutrality.
Panelists: John Varney (MaximumClarity), Professor John Darlington (Director, Internet Centre), Nick Leon (Design London), Dan Appelquist (Vodafone), Dr. Gary Graham (University of Manchester), John Hand (EPSRC), John Barr (451 Group), and David deRoure (University of Southampton), moderated by Gareth Mitchell (Imperial College and Digital Planet, BBC World Service).



Diginomics - the importance of being (re)born digital



Diginomics panel discussion - audio MP3 recording, on the economic impact of digital technologies: how the news and newspaper industry is changing, why people pirate music etc, why DRM doesn't work, the economic inefficiencies of Web 2.0 and why it's hard to monetise Web 2.0, the impact on privacy, and possible new business models. See also the slides for the session, and Broadstuff's writeup.

It was chaired by Thierry Rayna (Imperial College Internet Centre), with Samuel Landau (Comwax CTO), Gary Graham (University of Manchester), Ludmila Striukova (University College London), and Paul Bourgine (Ecole Polytechnique, France).


This session was in my view (along with the keynote speech) the best session of the day, and certainly the best in terms of the "eureka scenario" - what was said was very insightful and made a lot of sense, and I've finally begun to understand what's behind some of the issues and problems unique to the digital economy.

Or, as I'd paraphrase it, "It's the economics, stupid!". There's no need for anything fancy or new - the principles of good ol' fashioned economics apply equally to the digital world; they just need to be applied appropriately, and then suddenly a lot of things make sense. This really shows how important it is to take a multidisciplinary approach to the internet and technology, and how much technologists can learn from fields of study which may look at things from a different perspective (in this session most of the panel were economists or management/business experts, rather than IT geeks).

Newspapers - Gary Graham thinks that for newspapers to survive in the Internet age, they'll have to move to:
  • more customisation / personalisation
  • consumer participation - so readers become more involved
- and become more entrepreneurial.

Digital goods (Thierry Rayna) are the only products which have all 3 of the following characteristics:
  • they have the economic characteristics of "public goods" (e.g. a lighthouse, which ships from nations that haven't funded its cost of building can still benefit from) - and therefore there will be "free riders" (who consume more than their fair share of a resource, or bear less than their fair share of production costs), which is economically rational behaviour for consumers (why pay for something you can get without paying?) - hence piracy
  • they are infinitely durable - so the market will shrink unless new products are created
  • they are mainly "experience goods" - you can't find out their value to you before trying them out, e.g. watching a movie or listening to music. Consumers aren't willing to pay a lot for something whose value is uncertain (how do you know how much to pay when you don't know how much something is worth?). But if asked to pay after the event, they can say they didn't like it!
The result: people can get digital goods without having to pay for them, when they decide to buy or get them they can keep them forever, and they're not willing to pay before consuming them (and possibly after). Each of these characteristics undermines suppliers' profitability.

But of course digital goods aren't rare, they are an increasingly big part of the economy, so it's important to address these issues.

Suppliers could try to pretend digital goods don't have these economic characteristics, or try to alter their nature (e.g. by using existing business models and DRM) - which won't work - or they could acknowledge their economic characteristics and work with them to devise different business models:
  • Advertising business models (Google, etc.)
  • Subscription business models (newspapers, MobileMe)
  • Tangible complement business models (Microsoft, Apple)
  • Tax funded business models (BBC).
However, Thierry thinks that none of those business models can be generally applied. [Note: there wasn't time to discuss why he thinks so - I'd certainly like to know, e.g. is there empirical evidence suggesting it, or economics principles etc? Many would say that Google's advertising-based model does work - for Google, anyway. Does he mean that none of those models will be sustainable in the longer term?]

Again there wasn't time to discuss fully the interesting points about Web 2.0 that economics can help explain, like why people pirate and why it can't be stopped, why existing DRM methods are bound to fail, why Web 2.0 is inefficient and how it could be improved. For other points and questions please see the slides. Thierry was kind enough to give out copies of some papers co-written by Ludmila and him, which expand on many of the points raised (and more) and which attempt to address some of the questions raised in the session.

Having read them I think they're fascinating, and very worthy of summary and comment in a separate dedicated post on digital economics - please see that post for details.