2015 TPI Aspen Forum – Monday Lunch Keynote Discussion and Dinner Address Videos Available

By Amy Smorodin
August 20th, 2015

The Monday morning TPI Aspen Forum activities concluded with a special luncheon discussion featuring Michelle K. Lee, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office, and Daniel Marti, U.S. Intellectual Property Enforcement Coordinator. John Duffy from the University of Virginia School of Law moderated the discussion.

After short remarks from Director Lee, the panelists discussed a range of intellectual property topics, including recent patent reform legislation efforts, proposals to relocate the U.S. Copyright Office, and China’s efforts to move from a manufacturing economy to an innovation economy. Video of this discussion can be viewed here.

The Monday night dinner keynote this year was Kelly Merryman, Vice President of Content Partnerships for YouTube. In her speech, Merryman covered the role of YouTube in the creative industries. She discussed YouTube’s use as a platform for original content, such as how-to videos and hugely popular gaming videos (which the author of this post is all too familiar with thanks to her kid). In addition, she discussed how YouTube is used in tandem with traditional media outlets.

Video of the dinner keynote, in addition to the general session and other remarks, is posted on the TPI YouTube channel.

Dispatch from the 2015 TPI Aspen Forum – Monday General Session Keynotes and Panels

By Amy Smorodin
August 17th, 2015

The first full day of the Forum began with a keynote by Tim Bresnahan, Landau Professor of Technology and the Economy, Department of Economics and, by courtesy, Professor of Economics for the Graduate School of Business at Stanford University. Bresnahan kicked off the conference with a riveting talk on ICT innovation over the past 50 years and his predictions of what’s to come. During the Q&A session, he was asked whether we are accurately measuring ICT innovations and their effect on the economy. Bresnahan explained that jobs and shifts in the labor force are a fairly accurate representation, since quality improvements themselves are hard to quantify. The entire keynote can be viewed here.

The first panel of the day was a nod to the theme of the conference, “Fall and Rise of the Regulatory State,” moderated by TPI President Thomas Lenard. Many of the panelists took issue with the idea that we’ve reverted to pre-emptive regulation void of evidence of harm. Robert Crandall, TPI Adjunct Senior Fellow and Nonresident Senior Fellow at the Brookings Institution, stated that much of this pre-emptive regulation is concentrated in a few key areas, such as banking and environmental regulation, and is not necessarily a trend. Roger Noll of Stanford University argued that market-based solutions are actually preferred, as illustrated by cap-and-trade for environmental concerns and the auctioning of spectrum. Nancy Rose, Deputy Assistant Attorney General for Economic Analysis, Antitrust Division at the U.S. Department of Justice, also stated that there is not an obvious resurgence and that we are seeing more regulation to deal with externalities due to higher standards of living, not necessarily economic regulation.

Taking a slightly different view was William Kovacic from George Washington University Law School. He explained that Europe now sets the global norms and standards for regulation, and the shift is taking place outside of the U.S. However, Howard Shelanski from OMB took issue with this perception and stated that the U.S. has been a leader in regulatory analysis and therefore other countries have followed. The panel can now be viewed online.

The second panel, “Congress and the FCC after Title II,” was moderated by TPI’s Scott Wallsten. Rebecca Arbogast from Comcast warned that “we are coming to the requiem of good policymaking.” She stated that Title II reclassification and the related regulatory requirements are putting a drag on what has been a bright spot in the U.S. economy and will hamper new services and risk-taking by ISPs. Robert Quinn from AT&T echoed many of Arbogast’s concerns and warned that rate regulation will certainly begin this fall when the FCC looks at fiber and wholesale prices.

Although admitting that “the other side doesn’t always tell me” what actions the FCC plans to take next, FCC Commissioner Michael O’Rielly predicted that the Commission will begin soliciting reports of violations of the rules and will attempt to enforce vague claims and expand its authority. Ominously, O’Rielly warned that “there has been a power grab” in the form of the right to regulate in the future.

David Redl from the House Commerce Committee Subcommittee on Communications and Technology doesn’t see any movement in Congress on alternative network neutrality legislation until the D.C. Circuit rules. He also warned that the FCC will indeed add privacy to its regulatory reach and will act on regulating personally identifiable information in addition to its current regulation of customer proprietary network information.

The lone supporter on the panel of the Open Internet Order, Jonathan Baker from American University’s Washington College of Law, stated that the supporting analysis in the order is “infused with economics.” Baker explained that investment in edge providers leads to investment in infrastructure, and that the FCC was appropriately thinking about the core and edge as a whole. You can watch the entire (very entertaining) panel here.

The final panel of the day was “Whose Rules? Internet Regulations in a Global Economy,” moderated by Ambassador David Gross from Wiley Rein. FTC Commissioner Julie Brill focused on transatlantic issues. She argued that the EU’s new Digital Single Market strategy does not stem from protectionism for EU companies, and explained that there are deep-rooted cultural and legal differences between the US and the EU that affect how each looks at the gig economy. Andrea Glorioso from the Delegation of the European Union to the USA also addressed the Digital Single Market strategy. He explained that antitrust investigations in the EU concerning the tech industry have mostly targeted US companies, but that’s because they are so successful. In other sectors, EU companies are investigated much more than US companies.

He took issue with the idea that the EU and the US have deep-rooted differences, stating that “we have so much more in common than we each do with other countries” and that the two must therefore work together.

Adam Kovacevich from Google reflected on the past vision that the internet would be the “leveler” of government policy. This, he explained, was disproven, as illustrated by the strong incentive to have local policy reflect local norms. Kevin Martin from Facebook identified the importance of regulatory regimes for expanding infrastructure for internet access for areas that currently do not have it. He urged policymakers to not lose sight of the broader goal of connectivity. Peter Davidson from Verizon agreed that tax and regulatory policies should be viewed through the lens of connecting people. He identified digital protectionism as a concern and urged principles and norms for cross-border data flows to be included in trade agreements to encourage investment. Video of the discussion will be up on the TPI YouTube page.

There’s much more to come soon, including a wrap-up of the IP-themed luncheon discussion and tonight’s dinner keynote by Kelly Merryman, Vice President of Content Partnerships for YouTube. Stay tuned.

Dispatch from the 2015 TPI Aspen Forum – Sunday Discussion

By Amy Smorodin
August 17th, 2015

The rain, thankfully, held off for the opening reception of this year’s TPI Aspen Forum.

After short remarks from TPI President Tom Lenard, the Sunday night reception featured a timely discussion with Michael Daniel, Special Assistant to the President and U.S. Cybersecurity Coordinator, and Alan Raul from Sidley Austin.

After Alan Raul assured the crowd that “sometimes a computer glitch is just a computer glitch,” referring to the breakdown of air traffic control on the East Coast on Saturday, the two discussed a broad range of issues concerning cybersecurity.

Topics discussed included: the tools available to the U.S. government to respond to threats, the experience of government agencies with cybersecurity vs. the private sector, recent cybersecurity legislation, and the role of the U.S. in leading international cybersecurity efforts in the aftermath of the Snowden leaks.

Daniel’s final predictions for the evening? He noted that many successes concerning U.S. cybersecurity happen in the dark, unreported and unnoticed. Additionally, he is optimistic that the U.S. can tackle the cybersecurity issue. However, things are going to look worse before they get better, because the government is more actively looking for threats.

You can watch the entire discussion on the TPI YouTube channel, and you can follow along with the pithy and insightful tweets from attendees at #TPIAspen.


Highlights of today’s panels and keynotes will be coming soon.

The Perfect Storm: Snowstorms and the Impact of Theatrical Attendance on DVD Sales

By Michael Smith
August 12th, 2015

By Michael Smith, Peter Boatwright and Patrick Choi

Everyone knows that movies that are popular in theaters are also popular at home. But no one knows whether increased theater viewing actually causes increased home viewing. Scientifically speaking, this is the difference between correlation and causation. In this instance, it’s difficult to test causation because a movie’s intrinsic appeal affects both measures. To do so accurately, we need an event that changes the number of people who see the movie in theaters, but does so in a way that is completely unrelated to specific movie characteristics.

In our recent paper, we show how snowstorms can provide just such a “perfect” measurement event. When a snowstorm occurs on a movie’s opening weekend in a particular city, fewer people go to see that movie in that city for reasons completely unrelated to the movie itself. In other words, for the purposes of this experiment, snowstorms are essentially random events: Whether it snows in Buffalo versus Minneapolis on the second weekend of November has nothing to do with the characteristics of the movies opening that weekend.

Using this variation, and examining box office and home video sales data, we can ask: when fewer people attend a movie’s opening weekend in a particular city, does that change the number of DVD and Blu-ray copies sold for that movie in that city when the discs are released a few months later?

Our results show that theatrical demand actually causes increases in DVD/Blu-ray demand. Specifically, a 10 percent increase (decline) in theatrical attendance causes an 8 percent increase (decline) in DVD/Blu-ray demand. This result suggests that there is significant differentiation between these two products, meaning that theatrical sales complement DVD/Blu-ray demand, which is an important thing to consider in this rapidly evolving media marketplace.
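
For curious readers, here is a minimal sketch in Python of the two-stage logic described above, using hypothetical column names and an invented input file; it illustrates the identification strategy and is not the actual code or data from our paper.

```python
# A sketch of the snowstorm identification strategy, assuming a hypothetical
# city-by-movie panel ("movie_city_panel.csv"); not the paper's actual code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("movie_city_panel.csv")

# First stage: an opening-weekend snowstorm shifts theatrical attendance
# for reasons unrelated to a movie's intrinsic appeal.
first = sm.OLS(
    np.log(df["opening_attendance"]),
    sm.add_constant(df[["snowstorm_opening_weekend"]]),
).fit()
df["log_attendance_hat"] = first.fittedvalues

# Second stage: use only the snowstorm-driven variation in attendance
# to explain DVD/Blu-ray sales a few months later.
second = sm.OLS(
    np.log(df["disc_sales"]),
    sm.add_constant(df[["log_attendance_hat"]]),
).fit()

# The slope is the elasticity of disc sales with respect to theatrical
# attendance (roughly 0.8 per the result above). A full 2SLS routine would
# also correct the second-stage standard errors.
print(second.params["log_attendance_hat"])
```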

The Effectiveness of Site Blocking as an Anti-Piracy Strategy: Evidence from the U.K.

By Michael Smith
June 3rd, 2015

Brett Danaher, Michael D. Smith, Rahul Telang

It is well established in the academic research that piracy harms sales of entertainment goods,[1] and there is emerging evidence that, by reducing the profitability of content creation, piracy may reduce the quality and quantity of the content that is created.[2]

Given these empirical results, as academic researchers we have spent considerable effort trying to understand the effectiveness of various anti-piracy strategies that attempt to mitigate the impact of piracy on industry revenues by either making legal content more appealing or making illegal content less appealing (see, for example, here and here). Our latest research examines an anti-piracy strategy known as “site blocking,” adopted in many countries, including the United Kingdom, where we conduct our analysis. In the U.K., courts respond to blocking requests and, where they find cause, order Internet Service Providers (ISPs) to block access to specific piracy-enabling sites.

This approach is notably different from shutting down entire sites that store pirated content: the sites and pirated content remain online worldwide, and within the affected country the blocked sites can still be accessed by technologically sophisticated users. Given these differences, we decided to study the effectiveness of site-blocking strategies at changing consumer behavior, focusing on two court-ordered blocks in the UK: the May 2012 block of one site, The Pirate Bay, and the October/November 2013 block of 19 major piracy sites.

Our study, whose results were first presented to an academic audience at the December 2014 Workshop on Information Systems and Economics, used consumer data from an Internet panel tracking company to examine the behavior of a set of UK Internet users before and after these sites were blocked. We treated users who had not visited the site(s) before the block as a control group (since they were largely unaffected by the block) and asked how treated users (those who had used the site(s) before the block) changed their behavior after the block, relative to the control group.
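
For the technically inclined, this comparison amounts to a simple difference-in-differences regression. The sketch below, written in Python with hypothetical column names, illustrates the idea; it is not our actual estimation code.

```python
# A sketch of the treated-vs-control, before-vs-after comparison, assuming a
# hypothetical user-week panel ("uk_panel.csv"); not our actual code.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("uk_panel.csv")
# One row per user per week, with (hypothetical) columns:
#   user_id               -- panelist identifier
#   paid_streaming_visits -- visits to paid legal streaming sites
#   treated               -- 1 if the user visited a blocked site pre-block
#   post_block            -- 1 for weeks after the blocks took effect

model = smf.ols("paid_streaming_visits ~ treated * post_block", data=panel)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["user_id"]})

# The interaction term estimates how much more treated users' legal
# streaming changed after the block than control users' did.
print(result.params["treated:post_block"])
```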

Our analysis found that blocking The Pirate Bay had little impact on UK users’ consumption through legal channels. Instead, blocked users switched to other piracy sites or circumvented the block by using Virtual Private Networks. However, unlike the May 2012 Pirate Bay block, our results showed that when 19 sites were blocked simultaneously, former users of these sites increased their usage of paid legal streaming sites by 12% on average, relative to the control group.[3] The blocks caused the lightest users of the blocked sites (the users least affected by the blocks, apart from the control group) to increase their use of paid streaming sites by 3.5%, while they caused the heaviest users of the blocked sites to increase paid streaming visits by 23.6%, strengthening the causal inference in our result.

As we discuss in our paper, the most likely explanation for this result — and one supported by other observations in the data — is that when only one site is blocked, most pirates have an easy time finding and switching to other piracy sites. But, blocking many sites can increase the cost of finding alternate sources of piracy enough that a significant number of former pirates will switch their behavior toward legal sources.

As with our other empirical findings, summarized above, this finding suggests that consumers behave like consumers: They make choices based on the costs and benefits of what is available, and will change their behavior based on sufficient changes in those costs and benefits.


[1] See this paper or this paper for a review of the academic literature on how piracy impacts sales.

[2] See, for example, this paper or its summary in this blog post.

[3] Importantly, our data did not allow us to determine whether this 12% increase reflected new users coming to these paid sites or simply increased usage of an already existing customer base.

The NABU Network: A Great Lesson, But Not About Openness

By Scott Wallsten
February 5th, 2015

When announcing his plan to regulate Internet Service Providers under Title II in Wired, FCC Chairman Tom Wheeler argued that his experience at NABU Network in the 1980s helped inform his decision. He writes that NABU failed because “The phone network was open whereas the cable networks were closed. End of story.”

But that’s not the whole story, and its lessons aren’t really about openness. Instead, it teaches us about the importance of investment and network effects.

NABU sprang from the mind of Canadian entrepreneur John Kelly, who realized that cable television networks were uniquely suited to high-speed data transmission. The service allowed a home computer to connect to a remote software library and, in principle, play games, shop, bank, and do email. And apparently it could do all that at speeds of up to 6.5 Mbps—even more than Chairman Wheeler claimed in his recent Wired article.[1] Not too shabby. NABU first launched in Ottawa in 1983, and in Sowa, Japan, and Fairfax, VA, in 1984. By the time it went out of business it had reached agreements with cable companies in 40 other regions.

As it turned out, the world wasn’t ready for NABU, and it failed in 1986.

Analyses of NABU, however, do not point to issues of openness as the cause of death. After all, other computer networks in the early 1980s that relied on the telephone network also failed.[2]

Instead, post-mortems point to issues we know are important in network industries: network effects and investment, or, rather, the lack thereof in both cases.

As has been written ad nauseam, the Internet is a two-sided (actually, multi-sided) platform. In order to succeed, it must attract both users and applications. In early stages, when uses and users are scarce, it can be difficult to get anyone on board. The presence of indirect network effects makes it worse, since the benefit from each new user or application is greater than the benefit that accrues just to the new subscriber or developer. That is, a new user benefits by being able to access all the available content, but the entire network benefits due to increased incentives to develop new applications. The new user, however, does not realize all those benefits, meaning that adoption, at least in the early stages, may be artificially slow.
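
To make that chicken-and-egg dynamic concrete, here is a toy simulation (entirely my own illustration, with made-up parameters, not anything from NABU’s history) of a platform where users adopt only if the available content justifies the price, and developers build content only if the user base is big enough:

```python
# A toy two-sided platform: users adopt when the value of available
# applications exceeds the price; developers add apps in proportion to the
# installed base. All parameters are invented for illustration.
def simulate(periods: int = 20, price: float = 1.0, seed_apps: int = 1):
    users, apps = 0, seed_apps
    for _ in range(periods):
        value = 0.3 * apps                      # value grows with the catalog
        new_users = max(0, int(10 * (value - price)))
        new_apps = users // 20                  # developers follow users
        users += new_users
        apps += new_apps
    return users, apps

# Launching with a thin catalog stalls; a richer one compounds.
print(simulate(seed_apps=1))  # -> (0, 1): adoption never starts
print(simulate(seed_apps=5))  # -> users and apps grow period after period
```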

Early commercial data networks faced precisely this problem. Why would someone pay to go online if there were nothing to do once he logged on? To subscribe to NABU, consumers not only paid a monthly fee but also had to buy or lease a $299 NABU personal computer. Data networks tried to induce consumers to subscribe by making collections of software available. In the 1980s, however, most commercial data networks simply could not provide enough of an incentive to attract or keep subscribers.

The failure to create reasons to subscribe played an important role in NABU’s demise. As one source put it, “the NABU Network did not catch on due to lack of accessible resources.”

Another reason for its failure appears to have been the inability of the then-existing infrastructure to fully deliver on NABU’s promises. Cable TV systems were not built to handle much upstream traffic—an issue they still face today. Upgrading the cable infrastructure for two-way communication required significant investment.

Competition also made survival difficult for NABU. NABU faced direct competitors in the form of other data networks like AOL (founded in 1985), Prodigy, and the dominant firm, CompuServe. Additionally, to the extent that consumers would sign up to play games, NABU also faced competition from packaged software games and gaming consoles, and it faced the same over-saturation of the market that led to the great video game crash. It even faced potential competition from The Games Network, a firm that was developing a system to distribute video games over cable networks but failed to get off the ground.

In short, the market wasn’t quite ready for the kind of service NABU was selling, although NABU founder Kelly was right about the potential of cable networks. As Marty McFly might have said to potential subscribers in the 1980s, “your kids are gonna love it.”

Openness is a key part of the Internet. It just wasn’t a key part of the NABU story. Instead, the NABU story reminds us of the importance of network effects, the economics of multi-sided networks, and network investment. Unlike in the 1980s, these are now working together in a virtuous cycle favoring innovation. Let’s make sure any new rules don’t change that.


For a fascinating and detailed history of early data networks, including NABU, see

Zbigniew Stachniak, “Early Commercial Electronic Distribution of Software,” IEEE Annals of the History of Computing 36, no. 1 (2014): 39–51, doi:10.1109/MAHC.2013.55.

[1] Stachniak, “Early Commercial Electronic Distribution of Software,” n. 21.

[2] Stachniak, “Early Commercial Electronic Distribution of Software,” Table 1.


A Closer Look at Those FCC Emails

By Scott Wallsten
November 24th, 2014

Recently, Vice News received 623 pages of emails from the FCC in response to a Freedom of Information Act request. Vice News has kindly made the entire PDF file available for download.

We decided to categorize the emails to get a picture of who contacts the FCC and what they want to talk about. This simple categorization is time-consuming given the need to review each page to pull out the relevant information. Nevertheless, our intrepid research associate, Nathan Kliewer, managed to slog his way through the pile, leaving us with a clean dataset. The fruits of his labor are presented below.

The statistics derived from this dataset come with important caveats. First, and most importantly, we categorize only the initial email in any given chain of emails. As a result, this analysis tells us nothing about the extent of a given email interaction. Second, it is possible that some emails are mischaracterized (seriously, you try reading 623 pages of blurry PDFs). Third, because the FCC released only selected emails, we do not know if these emails are representative of FCC email correspondence.

Nevertheless, let’s see what we’ve got.

Figure 1 shows the number of emails from different types of organizations.

Figure 1: Number of Emails by Type of Organization


The figure shows that most emails were initiated by news organizations, followed closely by industry. The FCC itself appears as the originator of a good number of these emails, most of which are from one FCC staff member to another. Eleven emails are from law firms (which represent industry clients), nine from people affiliated with universities, eight from other government agencies, seven from consumer advocacy groups, and six from think tanks. Among the unexpected emails is one from a representative of the government of Serbia simply inquiring about “current regulatory challenges,” and another from someone applying for an internship at the FCC (the latter we did not include in the figure).
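
(For the curious, once the emails are hand-coded, these tallies reduce to a couple of lines of Python. The sketch below assumes a hypothetical CSV of our categorizations, not an actual released dataset.)

```python
# A sketch of the tallies behind Figures 1-3, assuming the hand-coded
# categorizations were saved to a CSV with these hypothetical columns.
import pandas as pd

emails = pd.read_csv("fcc_foia_emails.csv")  # columns: org_type, topic

# Figure 1: number of initiating emails by sender's organization type.
print(emails["org_type"].value_counts())

# Figure 3: cross-tabulation of organization type against email topic.
print(pd.crosstab(emails["org_type"], emails["topic"]))
```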

Figure 2 highlights the general subject or topic of the email. The largest number of emails, not surprisingly, contains the sender’s views on policy issues relevant to net neutrality. The second largest number is news items people forward to FCC staff. Next are requests for comments, followed by information about events and requests for meetings.

Figure 2: Emails by Subject

Figure 3 combines these two categories to reveal which type of organizations focus on which issues. Industry, consumer groups, and other government agencies tend to send emails discussing views on policy issues. News organizations send requests for comments. Industry and law firms, generally representing industry, send ex parte notices.

Figure 3: Emails by Organization and Topic


Unfortunately, this meta-analysis tells us little about whether those emails mattered in any real way. I also can’t believe I spent so much time on this.

According to the Vice News story, the FCC plans on releasing more emails on November 26. I look forward to seeing an updated meta-analysis of those emails, but prepared by somebody else.

Google, Search Ranking, and the Fight Against Piracy

By Michael Smith
October 20th, 2014

Last month, Rahul Telang and I blogged about research we conducted with Liron Sivan, in which we used a field experiment to analyze how the position of pirate links in search results impacts consumer behavior. Given this research, we were very interested in Google’s announcement last Friday that they were changing their ranking algorithm to make pirate links harder to find in search results.

According to the announcement, Google changed their ranking algorithm to more aggressively demote links from sites that receive a large number of valid DMCA notices, and to make legal links more prominent in search results. The hope is that these changes will move links from many “notorious” pirate sites off the first page of Google’s search results and will make legal content easier to find.

One might ask whether these changes — moving pirate results from the first to the second page of search results and promoting legal results — could have any effect on user behavior. According to our experimental results, the answer seems to be “yes, they can.”

Specifically, in our experiment we gave users the task of finding a movie of their choosing online. We then randomly assigned users to a control group and to two treatment groups: one where pirate links were removed from the first page of search results and where legal links were highlighted (legal treatment), and one where legal links were removed from the first page of search results (piracy treatment).

Our results show that users are much more likely to purchase legally in the legal treatment condition than in the control. We also found that these results hold even among users who initially search using terms related to piracy (e.g., by including the terms “torrent” or “free” in their search, or by including the name of well-known pirate sites), suggesting that even users with a predisposition to pirate can be persuaded to purchase legally through small changes in search results.

Given our findings, reducing the prominence of pirated links and highlighting legal links seems like a very promising and productive decision by Google. While it remains to be seen just how dramatically Google’s new search algorithm will reduce the prominence of pirate links, we are hopeful that Google’s efforts to fight piracy will usher in a new era of cooperation with the creative industries to improve how consumers discover movies and other copyrighted content, and to encourage users to consume this content through legal channels instead of through theft. If implemented well, both Google and studios stand to benefit significantly from such a partnership.

Using Search Results to Fight Piracy

By Michael Smith
September 15th, 2014

With the growing consensus in the empirical literature that piracy harms sales, and emerging evidence that increased piracy can affect both the quantity and quality of content produced (here and here, for example), governments and industry partners are exploring a variety of ways to reduce the harm caused by intellectual property theft. In addition to graduated response efforts and site shutdowns, Internet intermediaries such as Internet Service Providers, hosting companies, and web search engines are increasingly being asked to play a role in limiting the availability of pirated content to consumers.

However, for this to be a viable strategy, it must first be the case that these sorts of efforts influence consumers’ decisions to consume legally. Surprisingly, there is very little empirical evidence one way or the other on this question.

In a recent paper, my colleagues Liron Sivan, Rahul Telang and I used a field experiment to address one aspect of this question: Does the prominence of pirate and legal sites in search results impact consumers’ choices for infringing versus legal content? Our results suggest that reducing the prominence of pirate links in search results can reduce copyright infringement.

To conduct our study, we first developed a custom search engine that allowed us to experimentally manipulate which results are shown in response to user search queries. We then studied how changing which sites are listed in search results impacted the consumption behavior of a panel of users drawn from the general population, and of a separate panel of college-aged participants.

In our experiments, we first randomly assigned users to one of three groups: a control group shown the same search results they would receive from a major search engine, and two treatment groups in which pirate sites were artificially promoted or artificially demoted in the displayed search results. We then asked users to obtain a movie they were interested in watching, using our search engine instead of the one they would normally use. We observed what queries each set of users issued to search for their chosen movie, and surveyed them about which site they used to obtain the movie.
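
As a rough sketch of the design, the random assignment and the headline comparison look something like the Python below; the counts are invented for illustration and are not our experimental data.

```python
# A sketch of the experiment: random assignment to conditions, then a test
# of whether legal-purchase rates differ across them. All counts are made up.
import random
from statsmodels.stats.proportion import proportions_ztest

CONDITIONS = ["control", "pirate_promoted", "pirate_demoted"]

def assign_condition() -> str:
    # Each participant is randomized into one of the three conditions.
    return random.choice(CONDITIONS)

# Hypothetical tallies of participants who obtained their movie legally.
legal = {"pirate_demoted": 81, "control": 62}    # made-up counts
n     = {"pirate_demoted": 150, "control": 150}  # made-up sample sizes

# Two-sample proportions test: did demoting pirate links raise the rate?
stat, pval = proportions_ztest(
    [legal["pirate_demoted"], legal["control"]],
    [n["pirate_demoted"], n["control"]],
)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```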

Our results suggest that changing the prominence of pirate and legal links has a strong impact on user choices: relative to the control condition, users are more likely to consume legally (and less likely to infringe copyright) when legal content is more prominent in search results, and more likely to consume pirated content when pirate content is more prominent.

By analyzing users’ initial search terms we find that these results hold even among users with an apparent predisposition to pirate: users whose initial search terms indicate an intention to consume pirated content are more likely to use legal channels when pirated content is harder to find in search results.

Our results suggest that reducing the prominence of pirate links in search results can reduce copyright infringement. We also note that there is both precedent and available data for this sort of response. In terms of precedent, search engines are already required to block a variety of information, including content from non-FDA approved pharmacies in the U.S. and content that violates an individual’s “right to be forgotten” in a variety of EU countries. Likewise, the websites listed in DMCA notices give search engines some of the raw data necessary to determine which sites are most likely to host infringing content.

Thus, while more research and analysis is needed to craft effective policy, we believe that our experimental results provide important initial evidence that users’ choices for legal versus infringing content can be influenced by what information they are shown, and thus that search engines can play a role in the ongoing fight against intellectual property theft.


Does Piracy Undermine Product Creation?

By Michael Smith
September 5th, 2014

(Below is a guest post by my colleague, Rahul Telang from Carnegie Mellon University)

That piracy undermines demand for products in copyright industries is intuitive and well supported by data. Music, movies, books, and software have all seen demand degradation due to various forms of piracy. What is not so well supported by data is whether piracy undermines product creation. For example, does piracy reduce the number of movies made, the quality of movies made, or investments in movies? Common sense suggests that this must be true. After all, this is the core principle of copyright: large-scale copyright infringement should affect revenues, which in turn should affect producers’ incentives to create.

Despite this compelling argument, the data do not readily support this claim. The reasons are many. For one, while the change in demand due to infringement happens quickly, production adjustments take time. So unless the infringement persists for a period of time, the contraction in production is not readily visible. Moreover, the technology that leads to widespread infringement (say, P2P networks and the broadband infrastructure that facilitates online piracy) might also be accompanied by a period in which the costs of production and distribution decline or new markets open up. All we can see in the data is the net effect of these two opposing factors, and the net effect could very well be that production has actually increased. This is not evidence that piracy does not matter. Finally, there may be distributional bottlenecks (say, the number of theatres) that prevent growth in the number of productions but lead to larger investments in movies or, in some cases, higher input costs (actors and directors become more expensive).

In short, to see the effects of piracy in the data, we need a setting where other factors are largely unchanged. My co-author Joel Waldfogel and I explore the Indian movie industry around the diffusion of cable television and the VCR, a phenomenon that took place during 1985-2000. The paper is here. The story of our paper, from the abstract, is essentially this:

The diffusion of the VCR and cable television in India between 1985 and 2000 created substantial opportunities for unpaid movie consumption. We first document, from narrative sources, conditions conducive to piracy as these technologies diffused. We then provide strong circumstantial evidence of piracy in diminished appropriability: movies’ revenues fell by a third to a half, conditional on their ratings by movie-goers and their ranks in their annual revenue distributions. Weaker effective demand undermined creative incentives. While the number of new movies released had grown steadily from 1960 to 1985, it fell markedly between 1985 and 2000, suggesting a supply elasticity in the range of 0.2-0.7.
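
As a back-of-the-envelope illustration of what that elasticity range implies (my arithmetic here, not a calculation from the paper):

```python
# If revenues fall 40% (within the "a third to a half" range above), a supply
# elasticity of 0.2-0.7 maps to a drop in new releases of roughly 8-28%.
revenue_drop = 0.40  # hypothetical revenue decline
for elasticity in (0.2, 0.7):
    release_drop = elasticity * revenue_drop
    print(f"elasticity {elasticity}: releases fall ~{release_drop:.0%}")
```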

Even quality, as measured by IMDb ratings, declined substantially. Thus, our study provides affirmative evidence on a central tenet of copyright policy: that stronger effective copyright protection effects more creation. For empirical research, sometimes you have to look at the historical context to see the evidence of a policy’s effect. Doing a similar study in the post-2000 era for any other country might be tricky because the other competing factors have changed; researchers will need to be more creative in defining and measuring product creation in this new context, and I am sure we will see such efforts in the near future. Needless to say, a lot more research is needed to settle this issue. However, our paper does provide evidence that, in an appropriate setting, the effects of copyright infringement on product creation can be measured.