TPI Research Roundup

By Brandon Silberstein
November 9th, 2015

Edition 2015.1

Welcome to the return of the Technology Policy Institute’s semi-frequent Research Roundup (or “TPIRR,” since everything in DC needs an acronym in order to be taken seriously). The TPIRR digs through the dark depths of research journals and working papers to bring you hand-selected, craftsman-quality scholarship of the most recent vintage (note the subtle earthy tones, with hints of jasmine and citrus). More specifically, your friendly neighborhood research associate, Brandon Silberstein, painstakingly sorts through updates from SSRN, NBER, and [everything else] so you don’t have to.

The only criterion for inclusion in the TPIRR is that we find the research interesting and think others in the tech policy world will, too. We will not necessarily agree with the opinions or conclusions expressed in the research, so, to paraphrase Twitter feeds, a mention does not necessarily imply endorsement.

With today’s edition, the research roundup re-establishes itself as the pre-eminent and post-eminent (possibly currently-eminent as well) source for fascinating and riveting research on any and all topics technology-related.

This week, we highlight research on an IBM patent experiment and bring you some scintillating work on how best to leverage the power of the masses in research, the pros and cons of making your new products backward compatible with old ones, and a primer on bitcoin.

Opening up IP Strategy: Implications for Open Source Software Entry by Start-Up Firms – Using IBM patent data, researchers show a strong relationship between certain strategic decisions (in particular, not asserting patents against Open Source Software (OSS) developers) and innovation by other companies. The authors conclude that the fear of patent litigation is a substantial barrier to development in the software world, and mitigating this risk could drive further innovation by small OSS developers.

The authors reviewed thousands of OSS product announcements, sorted them into 33 software categories, and then compared the number of entries in each category to the number of patents in each sector that IBM put into a non-litigation pool called “The Commons,” as well as to the rest of IBM’s patent portfolio outside The Commons. They found statistically significant correlations between a larger number of patents in The Commons and an increase in novel OSS market entries per sector, even after controlling for time fixed effects and other potentially confounding variables.
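
As a rough sketch of the kind of sector-level panel regression described above (the data file, variable names, and count-model specification here are hypothetical illustrations, not the authors’ actual code), the estimation might look something like this:

    # Hypothetical sketch of the sector-level panel regression described above:
    # OSS entries per software category as a function of IBM patents placed in
    # The Commons, with category and time fixed effects.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed columns (one row per category-quarter): oss_entries, commons_patents,
    # other_ibm_patents, category, quarter -- all names are illustrative.
    panel = pd.read_csv("oss_entry_panel.csv")  # hypothetical file

    model = smf.poisson(
        "oss_entries ~ commons_patents + other_ibm_patents + C(category) + C(quarter)",
        data=panel,
    ).fit()
    print(model.params[["commons_patents", "other_ibm_patents"]])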

The authors’ conclusions imply that private companies could take unilateral strategic action, as IBM did, and create environments that encourage new OSS entrants into the market. By carving out a non-litigation exception for OSS competitors, companies can increase complementary innovation by startups, potentially strengthening their own positions by bolstering their market segment as a whole.

Author-written abstracts:

Opening up IP Strategy: Implications for Open Source Software Entry by Start-Up Firms

Wen Wen, Marco Ceccagnoli, Chris Forman

We examine whether a firm’s IP strategy in support of the open source software (OSS) community stimulates new OSS product entry by start-up software firms. In particular, we analyze the impact of strategic decisions taken by IBM around the mid-2000s, such as its announcement that it will not assert its patents against the OSS community and its creation of a patent commons. These decisions formed a coherent IP strategy in support of OSS. We find that IBM’s actions stimulated new OSS product introductions by entrepreneurial firms, and that their impact is increasing in the cumulativeness of innovation in the market and the extent to which patent ownership in the market is concentrated.

Crowd Science User Contribution Patterns And Their Implications

Henry Sauermann, Chiara Franzoni

Scientific research performed with the involvement of the broader public (the “crowd”) attracts increasing attention from scientists and policy makers. A key premise is that project organizers may be able to draw on underutilized human resources to advance research at relatively low cost. Despite a growing number of examples, systematic research on the effort contributions volunteers are willing to make to crowd science projects is lacking. Analyzing data on seven different projects, we quantify the financial value volunteers can bring by comparing their unpaid contributions with counterfactual costs in traditional or online labor markets. The volume of total contributions is substantial, although some projects are much more successful in attracting effort than others. Moreover, contributions received by projects are very uneven across time – a tendency towards declining activity is interrupted by spikes typically resulting from outreach efforts or media attention. Analyzing user-level data, we find that most contributors participate only once and with little effort, leaving a relatively small share of users who return responsible for most of the work. While top contributor status is earned primarily through higher levels of effort, top contributors also tend to work faster. This speed advantage develops over multiple sessions, suggesting that it reflects learning rather than inherent differences in skills. Our findings inform recent discussions about potential benefits from crowd science, suggest that involving the crowd may be more effective for some kinds of projects than others, provide guidance for project managers, and raise important questions for future research.

The Double-Edged Sword of Backward Compatibility: The Adoption of Multi-Generational Platforms in the Presence of Intergenerational Services

Il-Horn Hann, Byungwan Koh, Marius F. Niculescu

We investigate the impact of the intergenerational nature of services, via backward compatibility, on the adoption of multi-generational platforms. We consider a mobile Internet platform that has evolved over several generations and for which users download complementary services from third party providers. These services are often intergenerational: newer platform generations are backward compatible with respect to services released under earlier generation platforms. In this paper, we propose a model to identify the main drivers of consumers’ choice of platform generation, accounting for (i) the migration from older to newer platform generations, (ii) the indirect network effect on platform adoption due to same-generation services, and (iii) the effect on platform adoption due to the consumption of intergenerational services via backward compatibility. Using data on mobile Internet platform adoption and services consumption for the time period of 2001 – 2007 from a major wireless carrier in an Asian country, we estimate the three effects noted above. We show that both the migration from older to newer platform generations and the indirect network effects are significant. The surprising finding is that intergenerational services that connect subsequent generations of platforms essentially engender backward compatibility with two opposing effects. While an intergenerational service may accelerate the migration to the subsequent platform generations, it may also, perhaps unintentionally, provide a fresh lease on life for earlier generation platforms due to the continued use of earlier generation services on newer platform generations.


Cryptofinance

Campbell R. Harvey

Cryptography is about communication in the presence of an adversary. Cryptofinance is the efficient exchange of ownership, the verification of ownership, as well as the ability to algorithmically design conditional contracts, all with security, privacy, and minimal trust without using centralized institutions. Our current financial system is ripe for disruption. At a swipe of a debit or credit card, we are at risk (think of Target’s breach of 40 million cards). The cost of transacting using traditional methods is enormous and will increase in the future. Cryptofinance offers some solutions. This paper explores the mechanics of cryptofinance and a number of applications including bitcoin. Also attached is a slide deck that I use in my graduate course.

Because you worked hard to get here.

A technologically advanced puppy

2015 TPI Aspen Forum – Monday Lunch Keynote Discussion and Dinner Address Videos Available

By Amy Smorodin
August 20th, 2015

The Monday morning TPI Aspen Forum activities concluded with a special luncheon discussion featuring Michelle K. Lee, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office, and Daniel Marti, U.S. Intellectual Property Enforcement Coordinator. John Duffy from the University of Virginia School of Law moderated the discussion.

After short remarks from Director Lee, they discussed a range of intellectual property topics, including recent patent reform legislation efforts, proposals to relocate the U.S. Copyright Office, and China’s efforts to move from a manufacturing economy to an innovation economy. Video of this discussion can be viewed here.

The Monday night dinner keynote this year was Kelly Merryman, Vice President of Content Partnerships for YouTube. In her speech, Merryman covered the role of YouTube in the creative industries. She discussed YouTube’s use as a platform for original content, such as how-to videos and hugely popular gaming videos (which the author of this post is all too familiar with thanks to her kid). In addition, she discussed how YouTube is used in tandem with traditional media outlets.

Video of the dinner keynote, in addition to the general session and other remarks, is posted on the TPI YouTube channel.

Dispatch from the 2015 TPI Aspen Forum – Monday General Session Keynotes and Panels

By Amy Smorodin
August 17th, 2015

The first full day of the Forum began with a keynote by Tim Bresnahan, Landau Professor of Technology and the Economy, Department of Economics and, by courtesy, Professor of Economics for the Graduate School of Business at Stanford University. Bresnahan kicked off the conference with a riveting talk on ICT innovation over the past 50 years and his prediction of what’s to come. During the Q&A session, he was asked if we are accurately measuring ICT innovations and their effect on the economy. Bresnahan explained that jobs and shifts in the labor force were a fairly accurate representation, as quality improvements are hard to quantify. The entire keynote can be viewed here.

The first panel of the day was a nod to the theme of the conference, “Fall and Rise of the Regulatory State,” moderated by TPI President Thomas Lenard. Many of the panelists took issue with the idea that we’ve reverted to pre-emptive regulation void of evidence of harm. Robert Crandall, TPI Adjunct Senior Fellow and Nonresident Senior Fellow at the Brookings Institution, stated that much of this pre-emptive regulation is concentrated in a few key areas, such as banking and environmental regulation, and is not necessarily a trend. Roger Noll of Stanford University observed that market-based solutions are actually preferred, as illustrated by cap-and-trade for environmental concerns and the auctioning of spectrum. Nancy Rose, Deputy Assistant Attorney General for Economic Analysis, Antitrust Division at the U.S. Department of Justice, also stated that there is not an obvious resurgence and that we are seeing more regulation to deal with externalities due to higher standards of living, not necessarily economic regulation.

Taking a slightly different view was William Kovacic from George Washington University Law School. He explained that Europe now sets the global norms and standards for regulation, and the shift is taking place outside of the U.S. However, Howard Shelanski from OMB took issue with this perception and stated that the U.S. has been a leader in regulatory analysis and therefore other countries have followed. The panel can now be viewed online.

The second panel, “Congress and the FCC after Title II,” was moderated by TPI’s Scott Wallsten. Rebecca Arbogast from Comcast warned that “we are coming to the requiem of good policymaking.” She stated that Title II reclassification and the related regulatory requirements are putting a drag on what has been a bright spot in the U.S. economy and will hamper new services and risk-taking by ISPs. Robert Quinn from AT&T echoed many of Arbogast’s concerns and warned that rate regulation will certainly begin this fall when the FCC looks at fiber and wholesale prices.

Although admitting that “the other side doesn’t always tell me” what actions the FCC plans to take next, FCC Commissioner Michael O’Rielly predicted that the Commission will begin soliciting reports of violations of the rules and will attempt to enforce vague claims and expand its authority. Ominously, O’Rielly warned that “there has been a power grab” in the form of the right to regulate in the future.

David Redl from the House Commerce Committee Subcommittee on Communications and Technology doesn’t see any movement in Congress regarding alternative network neutrality legislation until the D.C. Circuit rules. He also warned that the FCC will indeed add privacy to its regulatory reach and will act on regulating personally identifiable information in addition to its current regulation of customer proprietary network information.

The lone supporter on the panel of the Open Internet Order, Jonathan Baker from American University’s Washington College of Law, stated that the supporting analysis in the order is “infused with economics.” Baker explained that investment in edge providers leads to investment in infrastructure, and that the FCC was appropriately thinking about the core and edge as a whole. You can watch the entire (very entertaining) panel here.

The final panel of the day was “Whose Rules? Internet Regulations in a Global Economy,” moderated by Ambassador David Gross from Wiley Rein. FTC Commissioner Julie Brill focused on transatlantic issues. She argued that the EU’s new Digital Single Market strategy does not stem from protectionism for EU companies, and explained that there are deep-rooted cultural and legal differences between the US and the EU that affect how each looks at the gig economy. Andrea Glorioso from the Delegation of the European Union to the USA also addressed the Digital Single Market strategy. He explained that antitrust investigations in the EU concerning the tech industry have mostly been directed at US companies, but that is because they are so successful; in other sectors, EU companies are investigated much more than US companies.

He took issue with the idea that the EU and the US have deep-rooted differences, stating that “we have so much more in common than we each do with other countries” and must therefore work together.

Adam Kovacevich from Google reflected on the past vision that the internet would be the “leveler” of government policy. This, he explained, was disproven, as illustrated by the strong incentive to have local policy reflect local norms. Kevin Martin from Facebook identified the importance of regulatory regimes for expanding infrastructure for internet access for areas that currently do not have it. He urged policymakers to not lose sight of the broader goal of connectivity. Peter Davidson from Verizon agreed that tax and regulatory policies should be viewed through the lens of connecting people. He identified digital protectionism as a concern and urged principles and norms for cross-border data flows to be included in trade agreements to encourage investment. Video of the discussion will be up on the TPI YouTube page.

There is much more to come soon, including a wrap-up of the IP-themed luncheon discussion and tonight’s dinner keynote by Kelly Merryman, Vice President of Content Partnerships for YouTube. Stay tuned.

Dispatch from the 2015 TPI Aspen Forum – Sunday Discussion

By Amy Smorodin
August 17th, 2015

The rain, thankfully, held off for the opening reception of this year’s TPI Aspen Forum.

After short remarks from TPI President Tom Lenard, the Sunday night reception featured a timely discussion with Michael Daniel, Special Assistant to the President and U.S. Cybersecurity Coordinator, and Alan Raul from Sidley Austin.

After Alan Raul assured the crowd that “sometimes a computer glitch is just a computer glitch,” referring to the breakdown of air traffic control on the east coast Saturday, the two discussed a broad range of issues concerning cybersecurity.

Topics discussed included: the tools available to the U.S. government to respond to threats, the experience of government agencies with cybersecurity vs. the private sector, recent cybersecurity legislation, and the role of the U.S. in leading international cybersecurity efforts in the aftermath of the Snowden leaks.

Daniel’s final predictions for the evening? He noted that the many successes concerning U.S. cybersecurity happen in the dark, unreported and unnoticed. Additionally, he is optimistic that the U.S. can tackle the cybersecurity issue. However, it’s going to look worse before it gets better because they are more actively looking for threats.

You can watch the entire discussion on the TPI YouTube channel, and you can follow along with the pithy and insightful tweets from attendees at #TPIAspen.


Highlights of today’s panels and keynotes will be coming soon.

The Perfect Storm: Snowstorms and the Impact of Theatrical Attendance on DVD Sales

By Michael Smith
August 12th, 2015

By Michael Smith, Peter Boatwright and Patrick Choi

Everyone knows that movies that are popular in theaters are also popular at home. But no one knows whether increased theater viewing actually causes increased home viewing. Scientifically speaking, this is the difference between correlation and causation. In this instance, it’s difficult to test causation because a movie’s intrinsic appeal affects both measures. To do so accurately, we need an event that changes the number of people who see the movie in theaters, but does so in a way that is completely unrelated to specific movie characteristics.

In our recent paper, we show how snowstorms can provide just such a “perfect” measurement event. When a snowstorm occurs on a movie’s opening weekend in a particular city, fewer people go to see that movie in that city for reasons completely unrelated to the movie itself. In other words, for the purposes of this experiment, snowstorms are essentially random events: Whether it snows in Buffalo versus Minneapolis on the second weekend of November has nothing to do with the characteristics of the movies opening that weekend.

Using this information and examining box office and home video sales data, we can ask: “when fewer people attend a movie’s opening weekend in a particular city, does that change the number of DVD and Blu-ray sales for that movie in that city when DVDs and Blu-ray Discs are released a few months later?”

Our results show that theatrical demand actually causes increases in DVD/Blu-ray demand. Specifically, a 10 percent increase (decline) in theatrical attendance causes an 8 percent increase (decline) in DVD/Blu-ray demand. This result suggests that there is significant differentiation between these two products, meaning that theatrical sales complement DVD/Blu-ray demand, which is an important thing to consider in this rapidly evolving media marketplace.
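
For readers curious about the shape of this kind of analysis, here is a minimal two-stage least squares sketch that uses an opening-weekend snowstorm indicator as an instrument for theatrical attendance. The file name and variable names are hypothetical, and the paper’s actual specification includes controls and fixed effects that are omitted here.

    # Hypothetical 2SLS sketch of the identification idea: snowstorms shift opening-weekend
    # attendance for reasons unrelated to a movie's appeal, so attendance predicted from the
    # snowstorm indicator can be used to estimate the causal effect on later disc sales.
    import pandas as pd
    import statsmodels.api as sm

    # Assumed columns (one row per movie-city pair): log_attendance, log_disc_sales, snowstorm
    df = pd.read_csv("movie_city_data.csv")  # hypothetical file

    # First stage: opening-weekend attendance on the snowstorm indicator.
    first = sm.OLS(df["log_attendance"], sm.add_constant(df["snowstorm"])).fit()
    df["attendance_hat"] = first.fittedvalues

    # Second stage: disc sales on instrumented attendance. (A manual two-step like this
    # gives the right point estimate but not corrected standard errors; a proper IV
    # estimator would be used in practice.)
    second = sm.OLS(df["log_disc_sales"], sm.add_constant(df["attendance_hat"])).fit()
    print(second.params)  # in logs, the slope is an elasticity comparable to the ~0.8 above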

The Effectiveness of Site Blocking as an Anti-Piracy Strategy: Evidence from the U.K.

By Michael Smith
June 3rd, 2015

Brett Danaher, Michael D. Smith, Rahul Telang

It is well established in the academic research that piracy harms sales for entertainment goods;[1] and there is emerging evidence that, by reducing the profitability of content creation, piracy may reduce the quality and quantity of the content that is created.[2]

Given these empirical results, as academic researchers, we have spent considerable effort trying to understand the effectiveness of various anti-piracy strategies that attempt to mitigate the impact of piracy on industry revenues by either making legal content more appealing or making illegal content less appealing (see for example here and here). Our latest research examines an anti-piracy strategy known as “site-blocking,” adopted in many countries, including the United Kingdom, where we conduct our analysis. In the U.K., courts respond to blocking requests and, where they find cause, order Internet Service Providers (ISPs) to block access to specific piracy-enabling sites.

This approach is notably different from shutting down entire sites that store pirated content: the sites and pirated content remain online worldwide, and within the affected country the blocked sites can still be accessed by technologically sophisticated users. Given these differences, we decided to study the effectiveness of site-blocking strategies at changing consumer behavior, focusing on two court-ordered blocks in the UK: the May 2012 block of one site, The Pirate Bay, and the October/November 2013 block of 19 major piracy sites.

Our analysis, first presented to an academic audience at the December 2014 Workshop on Information Systems and Economics, used consumer data from an Internet panel tracking company to examine the behavior of a set of UK Internet users before and after these sites were blocked. We considered users who had not visited the site(s) before the block as a control group (since they were largely unaffected by the block) and asked how treated users – those who had used the site(s) before the block – changed their behavior after the block, relative to the control group.
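
As a stylized sketch of that treated-versus-control comparison (the file, column names, and specification below are hypothetical; the study’s actual panel and controls are richer), the difference-in-differences estimate might be computed roughly like this:

    # Hypothetical difference-in-differences sketch: "treated" users visited a blocked site
    # before the block, "control" users did not, and we compare changes in visits to legal
    # streaming sites after the block across the two groups.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed columns (one row per user-week): user_id, legal_visits, treated (0/1), post (0/1)
    panel = pd.read_csv("uk_user_panel.csv")  # hypothetical file

    # The coefficient on treated:post is the difference-in-differences estimate.
    did = smf.ols("legal_visits ~ treated * post", data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["user_id"]}
    )
    print(did.params["treated:post"])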

Our analysis found that blocking The Pirate Bay had little impact on UK users’ consumption through legal channels. Instead blocked users switched to other piracy sites or circumvented the block by using Virtual Private Networks. However, unlike the May 2012 Pirate Bay block, our results showed that when 19 sites were blocked simultaneously, former users of these sites increased their usage of paid legal streaming sites by 12% on average, relative to the control group.[3]  The blocks caused the lightest users of the blocked sites (and thus the users who were least affected by the blocks, other than the control group) to increase their use of paid streaming sites by 3.5% while they caused the heaviest users of the blocked sites to increase paid streaming visits by 23.6%, strengthening the causal inference in our result.

As we discuss in our paper, the most likely explanation for this result — and one supported by other observations in the data — is that when only one site is blocked, most pirates have an easy time finding and switching to other piracy sites. But, blocking many sites can increase the cost of finding alternate sources of piracy enough that a significant number of former pirates will switch their behavior toward legal sources.

As with our other empirical findings, summarized above, this finding suggests that consumers behave like consumers: They make choices based on the costs and benefits of what is available, and will change their behavior based on sufficient changes in those costs and benefits.


[1]       See this paper or this paper for a review of the academic literature on how piracy impacts sales.

[2]       See, for example, this paper or its summary in this blog post.

[3]       Importantly, our data did not allow us to determine whether this 12% increase reflected new users coming to these paid sites or simply increased usage of an already existing customer base.

The NABU Network: A Great Lesson, But Not About Openness

By Scott Wallsten
February 5th, 2015

When announcing his plan to regulate Internet Service Providers under Title II in Wired, FCC Chairman Tom Wheeler argued that his experience at NABU Network in the 1980s helped inform his decision. He writes that NABU failed because “The phone network was open whereas the cable networks were closed. End of story.”

But that’s not the whole story, and its lessons aren’t really about openness. Instead, it teaches us about the importance of investment and network effects.

NABU sprang from the mind of Canadian entrepreneur John Kelly, who realized that cable television networks were uniquely suited to high-speed data transmission. The service allowed a home computer to connect to a remote software library and, in principle, play games, shop, bank, and do email. And apparently it could do all that at speeds up to 6.5 Mbps—even more than Chairman Wheeler claimed in his recent Wired article.[1] Not too shabby. NABU first launched in Ottawa in 1983 and Sowa, Japan and Fairfax, VA in 1984. By the time it went out of business it had reached agreements with cable companies in 40 other regions.

As it turned out, the world wasn’t ready for NABU, and it failed in 1986.

Analyses of NABU, however, do not point to issues of openness as the cause of death. After all, other computer networks in the early 1980s that relied on the telephone network also failed.[2]

Instead, post-mortems point to issues we know are important in network industries: network effects and investment, or, rather, the lack thereof in both cases.

As has been written ad nauseam, the Internet is a two-sided (actually, multi-sided) platform. In order to succeed, it must attract both users and applications. In early stages, when uses and users are scarce, it can be difficult to get anyone on board. The presence of indirect network effects makes it worse, since the benefit from each new user or application is greater than the benefit that accrues just to the new subscriber or developer. That is, a new user benefits by being able to access all the available content, but the entire network benefits due to increased incentives to develop new applications. The new user, however, does not realize all those benefits, meaning that adoption, at least in the early stages, may be artificially slow.
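
To make the chicken-and-egg logic concrete, here is a purely illustrative toy simulation (not from the post, and not a model of NABU specifically): each side of the platform grows in proportion to the size of the other, so a network seeded with few users and few applications grows very slowly at first.

    # Illustrative toy model of indirect network effects: users join in proportion to the
    # applications available, and developers add applications in proportion to the installed
    # base. With small seeds on both sides, early adoption is very slow.
    def simulate(periods=20, users=1.0, apps=1.0, a=0.05, b=0.03):
        history = []
        for t in range(periods):
            users += a * apps   # new users attracted by available applications
            apps += b * users   # new applications attracted by the installed base
            history.append((t, users, apps))
        return history

    for t, u, s in simulate():
        print(f"period {t:2d}: users={u:7.2f}  apps={s:7.2f}")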

Early commercial data networks faced precisely this problem. Why would someone pay to go online if there were nothing to do when he logged on? To subscribe to NABU, consumers not only paid a monthly fee but also had to buy or lease a $299 NABU personal computer. Data networks tried to induce consumers to subscribe by making collections of software available. In the 1980s, however, most commercial data networks just could not provide enough of an incentive to attract or keep subscribers.

The difficulty of creating reasons to subscribe played an important role in NABU’s failure. As one source put it, “the NABU Network did not catch on due to lack of accessible resources.”

Another reason for its failure appears to have been the inability of the then-existing infrastructure to fully deliver on NABU’s promises. Cable TV systems were not built to handle much upstream traffic—an issue they still face today. Upgrading the cable infrastructure for two-way communication required significant investment.

Competition also made survival difficult for NABU. NABU faced direct competitors in the form of other data networks like AOL (founded in 1985), Prodigy, and the dominant firm, Compuserve. Additionally, to the extent that consumers would sign up to play games, NABU also faced competition from packaged software games and gaming consoles, and faced the same over-saturation of the market that led to the great video game crash. It even faced potential competition from The Games Network, a firm that was developing a system that used cable networks to distribute video games but failed to get off the ground.

In short, the market wasn’t quite ready for the kind of service NABU was selling, although NABU founder Kelly was right about the potential of cable networks. As Marty McFly might have said to potential subscribers in the 1980s, “your kids are gonna love it.”

Openness is a key part of the Internet. It just wasn’t a key part of the NABU story. Instead, it reminds us of the importance of network effects, the economics of multi-sided networks, and network investment. Unlike the 1980s, these are now working together in a virtuous cycle favoring innovation. Let’s make sure any new rules don’t change that.


For a fascinating and detailed history of early data networks, including NABU, see

Zbigniew Stachniak, “Early Commercial Electronic Distribution of Software,” IEEE Annals of the History of Computing 36, no. 1 (2014): 39–51, doi:10.1109/MAHC.2013.55.

[1] Stachniak, “Early Commercial Electronic Distribution of Software”, n. 21.

[2] Stachniak, “Early Commercial Electronic Distribution of Software,” Table 1.


A Closer Look at Those FCC Emails

By Scott Wallsten
November 24th, 2014

Recently, Vice News received 623 pages of emails from the FCC in response to a Freedom of Information Act request. Vice News has kindly made the entire PDF file available for download.

We decided to categorize the emails to get a picture of who contacts the FCC and what they want to talk about. This simple categorization is time-consuming given the need to review each page to pull out the relevant information. Nevertheless, our intrepid research associate, Nathan Kliewer, managed to slog his way through the pile, leaving us with a clean dataset. The fruits of his labor are presented below.

The statistics derived from this dataset come with important caveats. First, and most importantly, we categorize only the initial email in any given chain of emails. As a result, this analysis tells us nothing about the extent of a given email interaction. Second, it is possible that some emails are mischaracterized (seriously, you try reading 623 pages of blurry PDFs). Third, because the FCC released only selected emails, we do not know if these emails are representative of FCC email correspondence.

Nevertheless, let’s see what we’ve got.

Figure 1 shows the number of emails from different types of organizations.

Figure 1: Number of Emails by Type of Organization


The figure shows that most emails were initiated by news organizations, followed closely by industry. The FCC itself appears as the originator of a good number of these emails, most of which are from one FCC staff member to another. Eleven emails are from law firms (which represent industry clients), nine from people affiliated with universities, eight from other government agencies, seven from consumer advocacy groups, and six from think tanks. Among the unexpected emails is one from a representative of the government of Serbia simply inquiring about “current regulatory challenges,” and another from someone applying for an internship at the FCC (the latter we did not include in the figure).
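
For anyone who wants to reproduce this kind of tally from their own categorization of the PDF, a minimal sketch follows (the file and column names are hypothetical, not TPI’s actual spreadsheet):

    # Hypothetical sketch of the tallies behind Figures 1-3: count initial emails by
    # organization type, by subject, and by the combination of the two.
    import pandas as pd

    # Assumed columns: org_type (e.g., "news", "industry") and subject (e.g., "policy views")
    emails = pd.read_csv("fcc_email_categories.csv")  # hypothetical file

    by_org = emails["org_type"].value_counts()                # Figure 1
    by_subject = emails["subject"].value_counts()             # Figure 2
    by_both = emails.groupby(["org_type", "subject"]).size()  # Figure 3

    print(by_org, by_subject, by_both, sep="\n\n")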

Figure 2 highlights the general subject or topic of each email. The largest number of emails, not surprisingly, contains the senders’ views on policy issues relevant to net neutrality. The second-largest group is news items people forwarded to FCC staff. Next are requests for comments, followed by information about events and requests for meetings.

Figure 2: Emails by Subject

Figure 3 combines these two categories to reveal which types of organizations focus on which issues. Industry, consumer groups, and other government agencies tend to send emails discussing views on policy issues. News organizations send requests for comments. Industry and law firms, generally representing industry, send ex parte notices.

Figure 3: Emails by Organization and Topic


Unfortunately, this meta-analysis tells us little about whether those emails mattered in any real way. I also can’t believe I spent so much time on this.

According to the Vice News story, the FCC plans on releasing more emails on November 26. I look forward to seeing an updated meta-analysis of those emails, but prepared by somebody else.

Google, Search Ranking, and the Fight Against Piracy

By Michael Smith
October 20th, 2014

Last month, Rahul Telang and I blogged about research we conducted with Liron Sivan where we used a field experiment to analyze how the position of pirate links in search results impacts consumer behavior. Given this research, we were very interested in Google’s announcement last Friday that they were changing their ranking algorithm to make pirate links harder to find in search results.

According to the announcement, Google changed their ranking algorithm to more aggressively demote links from sites that receive a large number of valid DMCA notices, and to make legal links more prominent in search results. The hope is that these changes will move links from many “notorious” pirate sites off the first page of Google’s search results and will make legal content easier to find.

One might ask whether these changes — moving pirate results from the first to the second page of search results and promoting legal results — could have any effect on user behavior. According to our experimental results, the answer seems to be “yes, they can.”

Specifically, in our experiment we gave users the task of finding a movie of their choosing online. We then randomly assigned users to a control group and to two treatment groups: one where pirate links were removed from the first page of search results and where legal links were highlighted (legal treatment), and one where legal links were removed from the first page of search results (piracy treatment).

Our results show that users are much more likely to purchase legally in the legal treatment condition than in the control. We also found that these results hold even among users who initially search using terms related to piracy (e.g., by including the terms “torrent” or “free” in their search, or by including the name of well-known pirate sites), suggesting that even users with a predisposition to pirate can be persuaded to purchase legally through small changes in search results.

Given our findings, reducing the prominence of pirated links and highlighting legal links seems like a very promising and productive decision by Google. While it remains to be seen just how dramatically Google’s new search algorithm will reduce the prominence of pirate links, we are hopeful that Google’s efforts to fight piracy will usher in a new era of cooperation with the creative industries to improve how consumers discover movies and other copyrighted content, and to encourage users to consume this content through legal channels instead of through theft. If implemented well, both Google and studios stand to benefit significantly from such a partnership.

Using Search Results to Fight Piracy

By Michael Smith
September 15th, 2014

With the growing consensus in the empirical literature that piracy harms sales, and emerging evidence that increased piracy can affect both the quantity and quality of content produced (here and here for example), governments and industry partners are exploring a variety of ways to reduce the harm caused by intellectual property theft. In addition to graduated response efforts and site shutdowns, Internet intermediaries such as Internet Service Providers, hosting companies, and web search engines are increasingly being asked to play a role in limiting the availability of pirated content to consumers.

However, for this to be a viable strategy, it must first be the case that these sorts of efforts influence consumers’ decisions to consume legally. Surprisingly, there is very little empirical evidence one way or the other on this question.

In a recent paper, my colleagues Liron Sivan, Rahul Telang and I used a field experiment to address one aspect of this question: Does the prominence of pirate and legal sites in search results impact consumers’ choices for infringing versus legal content? Our results suggest that reducing the prominence of pirate links in search results can reduce copyright infringement.

To conduct our study, we first developed a custom search engine that allows us to experimentally manipulate what results are shown in response to user search queries. We then studied how changing which sites are listed in search results impacted the consumption behavior of a panel of users drawn from a general population and a separate panel of college-aged participants.

In our experiments, we first randomly assigned users to one of three groups: a control group shown the same search results they would receive from a major search engine, and two treatment groups in which pirate sites were artificially promoted or artificially demoted in the displayed search results. We then asked users to obtain a movie they were interested in watching and to use our search engine instead of the search engine they would normally use. We observed what queries each set of users issued to search for their chosen movie, and surveyed them regarding what site they used to obtain the movie.
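
As a stylized sketch of that design (the group labels, sample size, and outcome coding below are hypothetical placeholders, not the study’s actual data), the assignment and the comparison of legal-consumption rates might look like this:

    # Hypothetical sketch of random assignment to the control and promote/demote treatment
    # arms, followed by a comparison of the share of users who obtained the movie legally.
    import random
    import pandas as pd

    users = [f"user_{i}" for i in range(300)]
    arms = ["control", "pirate_promoted", "pirate_demoted"]
    assignment = {u: random.choice(arms) for u in users}  # simple random assignment

    # After the sessions, record whether each user reported obtaining the movie legally
    # (1) or from a pirate site (0). The survey responses here are random placeholders.
    outcomes = pd.DataFrame({
        "user": users,
        "arm": [assignment[u] for u in users],
        "legal": [random.randint(0, 1) for _ in users],
    })

    # Compare the share consuming legally in each arm.
    print(outcomes.groupby("arm")["legal"].mean())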

Our results suggest that changing the prominence of pirate and legal links has a strong impact on user choices: relative to the control condition, users are more likely to consume legally (and less likely to infringe copyright) when legal content is more prominent in search results, and users are more likely to consume pirate content when pirate content is more prominent in search results.

By analyzing users’ initial search terms we find that these results hold even among users with an apparent predisposition to pirate: users whose initial search terms indicate an intention to consume pirated content are more likely to use legal channels when pirated content is harder to find in search results.

Our results suggest that reducing the prominence of pirate links in search results can reduce copyright infringement. We also note that there is both precedent and available data for this sort of response. In terms of precedent, search engines are already required to block a variety of information, including content from non-FDA approved pharmacies in the U.S. and content that violates an individual’s “right to be forgotten” in a variety of EU countries. Likewise, the websites listed in DMCA notices give search engines some of the raw data necessary to determine which sites are most likely to host infringing content.

Thus, while more research and analysis is needed to craft effective policy, we believe that our experimental results provide important initial evidence that users’ choices for legal versus infringing content can be influenced by what information they are shown, and thus that search engines can play a role in the ongoing fight against intellectual property theft.