Archive for the ‘Uncategorized’ Category

TPI Research Roundup

Monday, November 9th, 2015

TPI Research Roundup
Edition 2015.1

Welcome to the return of the Technology Policy Institute’s semi-frequent Research Roundup (or “TPIRR,” since everything in DC needs an acronym in order to be taken seriously). The TPIRR digs through the dark depths of research journals and working papers to bring you hand-selected, craftsman-quality scholarship of the most recent vintage (note the subtle earthy tones, with hints of jasmine and citrus). More specifically, your friendly neighborhood research associate, Brandon Silberstein, painstakingly sorts through updates from SSRN, NBER, [everything else] so you don’t have to.

The criterion for inclusion in the TPIRR is simply that we find the research interesting and think others in the tech policy world will, too. We do not necessarily agree with the opinions or conclusions expressed in the research, so, to paraphrase Twitter feeds, a mention does not necessarily imply endorsement.

With today’s edition, the research roundup re-establishes itself as the pre-eminent and post-eminent (possibly currently-eminent as well) source for fascinating and riveting research on any and all topics technology-related.

This week, we highlight research on an IBM patent experiment and bring you some scintillating work on how to best leverage the power of the masses in research, the pros and cons of making your new products backwards compatible with old ones, and a primer on bitcoin.

Opening up IP Strategy: Implications for Open Source Software Entry by Start-Up Firms – Using IBM patent data, researchers show a strong relationship between certain strategic decisions (in particular, not asserting patents against Open Source Software (OSS) developers) and innovation by other companies. The authors conclude that the fear of patent litigation is a substantial barrier to development in the software world, and mitigating this risk could drive further innovation by small OSS developers.

The authors reviewed thousands of OSS product announcements, sorted them into 33 software categories, and then compared the number in each category to the number of patents in each sector that IBM put into a non-litigation pool called “The Commons,” as well as to the remainder of IBM’s patent portfolio not included in The Commons. The authors found statistically significant correlations between an increased number of patents in The Commons and an increase in novel OSS market entries per sector, even after controlling for time fixed effects and other potentially confounding variables.
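As a rough illustration of this kind of design, here is a toy sketch of a sector-by-year panel regression with fixed effects. It is not the authors’ actual specification, and the file and column names are hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per (sector, year), with counts of new
    # OSS product entries and of IBM patents pledged to The Commons.
    panel = pd.read_csv("oss_entry_panel.csv")

    # OSS entry as a function of pledged patents, with sector and year
    # fixed effects captured by categorical dummies.
    fit = smf.ols(
        "oss_entries ~ commons_patents + other_ibm_patents"
        " + C(sector) + C(year)",
        data=panel,
    ).fit()
    print(fit.summary())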

The authors’ conclusions imply that private companies could take unilateral strategic action, as IBM did, and create environments that encourage new OSS entrants into the market. By carving out a non-litigation exception for OSS competitors, companies can increase complementary innovation by startups, potentially strengthening their own positions by bolstering their market segment as a whole.

Author-written abstracts:

Opening up IP Strategy: Implications for Open Source Software Entry by Start-Up Firms

Wen Wen, Marco Ceccagnoli, Chris Forman

We examine whether a firm’s IP strategy in support of the open source software (OSS) community stimulates new OSS product entry by start-up software firms. In particular, we analyze the impact of strategic decisions taken by IBM around the mid-2000s, such as its announcement that it will not assert its patents against the OSS community and its creation of a patent commons. These decisions formed a coherent IP strategy in support of OSS. We find that IBM’s actions stimulated new OSS product introductions by entrepreneurial firms, and that their impact is increasing in the cumulativeness of innovation in the market and the extent to which patent ownership in the market is concentrated.

Crowd Science User Contribution Patterns And Their Implications

Henry Sauermann, Chiara Franzoni

Scientific research performed with the involvement of the broader public (the “crowd”) attracts increasing attention from scientists and policy makers. A key premise is that project organizers may be able to draw on underutilized human resources to advance research at relatively low cost. Despite a growing number of examples, systematic research on the effort contributions volunteers are willing to make to crowd science projects is lacking. Analyzing data on seven different projects, we quantify the financial value volunteers can bring by comparing their unpaid contributions with counterfactual costs in traditional or online labor markets. The volume of total contributions is substantial, although some projects are much more successful in attracting effort than others. Moreover, contributions received by projects are very uneven across time – a tendency towards declining activity is interrupted by spikes typically resulting from outreach efforts or media attention. Analyzing user-level data, we find that most contributors participate only once and with little effort, leaving a relatively small share of users who return responsible for most of the work. While top contributor status is earned primarily through higher levels of effort, top contributors also tend to work faster. This speed advantage develops over multiple sessions, suggesting that it reflects learning rather than inherent differences in skills. Our findings inform recent discussions about potential benefits from crowd science, suggest that involving the crowd may be more effective for some kinds of projects than others, provide guidance for project managers, and raise important questions for future research.

The Double-Edged Sword of Backward Compatibility: The Adoption of Multi-Generational Platforms in the Presence of Intergenerational Services

Il-Horn Hann, Byungwan Koh, Marius F. Niculescu

We investigate the impact of the intergenerational nature of services, via backward compatibility, on the adoption of multi-generational platforms. We consider a mobile Internet platform that has evolved over several generations and for which users download complementary services from third party providers. These services are often intergenerational: newer platform generations are backward compatible with respect to services released under earlier generation platforms. In this paper, we propose a model to identify the main drivers of consumers’ choice of platform generation, accounting for (i) the migration from older to newer platform generations, (ii) the indirect network effect on platform adoption due to same-generation services, and (iii) the effect on platform adoption due to the consumption of intergenerational services via backward compatibility. Using data on mobile Internet platform adoption and services consumption for the time period of 2001 – 2007 from a major wireless carrier in an Asian country, we estimate the three effects noted above. We show that both the migration from older to newer platform generations and the indirect network effects are significant. The surprising finding is that intergenerational services that connect subsequent generations of platforms essentially engender backward compatibility with two opposing effects. While an intergenerational service may accelerate the migration to the subsequent platform generations, it may also, perhaps unintentionally, provide a fresh lease on life for earlier generation platforms due to the continued use of earlier generation services on newer platform generations.


Cryptofinance

Campbell R. Harvey

Cryptography is about communication in the presence of an adversary. Cryptofinance is the efficient exchange of ownership, the verification of ownership, as well as the ability to algorithmically design conditional contracts, all with security, privacy, and minimal trust without using centralized institutions. Our current financial system is ripe for disruption. At a swipe of a debit or credit card, we are at risk (think of Target’s breach of 40 million cards). The cost of transacting using traditional methods is enormous and will increase in the future. Cryptofinance offers some solutions. This paper explores the mechanics of cryptofinance and a number of applications including bitcoin. Also attached is a slide deck that I use in my graduate course.

Because you worked hard to get here.

A technologically advanced puppy

Dispatch from the 2015 TPI Aspen Forum – Monday General Session Keynotes and Panels

Monday, August 17th, 2015

The first full day of the Forum began with a keynote by Tim Bresnahan, Landau Professor of Technology and the Economy, Department of Economics and, by courtesy, Professor of Economics for the Graduate School of Business at Stanford University. Bresnahan kicked off the conference with a riveting talk on ICT innovation over the past 50 years and his prediction of what’s to come. During the Q&A session, he was asked if we are accurately measuring ICT innovations and their effect on the economy. Bresnahan explained that jobs and shifts in the labor force provide a fairly accurate representation, since quality improvements themselves are hard to quantify. The entire keynote can be viewed here.

The first panel of the day was a nod to the theme of the conference, “Fall and Rise of the Regulatory State,” moderated by TPI President Thomas Lenard. Many of the panelists took issue with the idea that we’ve reverted to pre-emptive regulation devoid of evidence of harm. Robert Crandall, TPI Adjunct Senior Fellow and Nonresident Senior Fellow at the Brookings Institution, stated that much of this pre-emptive regulation is concentrated in a few key areas, such as banking and environmental regulation, and is not necessarily a trend. Roger Noll of Stanford University noted that market-based solutions are often preferred in practice, as illustrated by cap-and-trade for environmental concerns and the auctioning of spectrum. Nancy Rose, Deputy Assistant Attorney General for Economic Analysis, Antitrust Division at the U.S. Department of Justice, also stated that there is not an obvious resurgence and that we are seeing more regulation to deal with externalities due to higher standards of living, not necessarily economic regulation.

Taking a slightly different view was William Kovacic from George Washington University Law School. He explained that Europe now sets the global norms and standards for regulation, and the shift is taking place outside of the U.S. However, Howard Shelanski from OMB took issue with this perception and stated that the U.S. has been a leader in regulatory analysis and therefore other countries have followed. The panel can now be viewed online.

The second panel, “Congress and the FCC after Title II,” was moderated by TPI’s Scott Wallsten. Rebecca Arbogast from Comcast warned that “we are coming to the requiem of good policymaking.” She stated that Title II reclassification and the related regulatory requirements are putting a drag on what has been a bright spot in the U.S. economy and will hamper new services and risk-taking by ISPs. Robert Quinn from AT&T echoed many of Arbogast’s concerns and warned that rate regulation will certainly begin this fall when the FCC looks at fiber and wholesale prices.

Although admitting that “the other side doesn’t always tell me” what actions the FCC plans to take next, FCC Commissioner Michael O’Rielly predicted that the Commission will begin soliciting reports of violations of the rules and will attempt to enforce vague claims and expand its authority. Ominously, O’Rielly warned that “There has been a power grab” in the form of the right to regulate in the future.

David Redl from the House Commerce Committee Subcommittee on Communications and Technology doesn’t see any movement in Congress on alternative network neutrality legislation until the D.C. Circuit rules. He also warned that the FCC will indeed add privacy to its regulatory reach and will act on regulating personally identifiable information in addition to its current regulation of customer proprietary network information.

The lone supporter on the panel of the Open Internet Order, Jonathan Baker from American University’s Washington College of Law, stated that the supporting analysis in the order is “infused with economics.” Baker explained that investment in edge providers leads to investment in infrastructure, and that the FCC was appropriately thinking about the core and edge as a whole. You can watch the entire (very entertaining) panel here.

The final panel of the day was “Whose Rules? Internet Regulations in a Global Economy,” moderated by Ambassador David Gross from Wiley Rein. FTC Commissioner Julie Brill focused on transatlantic issues. She argued that the EU’s new Digital Single Market strategy does not stem from protectionism for EU companies. She explained that there are deep-rooted cultural and legal differences between the US and the EU that affect how each looks at the gig economy. Andrea Glorioso from the Delegation of the European Union to the USA also addressed the Digital Single Market strategy. He explained that antitrust investigations in the EU concerning the tech industry have mostly been directed at US companies, but that’s because they are so successful. In other sectors, EU companies are investigated much more than US companies.

He took issue with the idea that the EU and the US have deep-rooted differences, stating that “we have so much more in common than we each do with other countries” and that they therefore must work together.

Adam Kovacevich from Google reflected on the past vision that the internet would be the “leveler” of government policy. This, he explained, was disproven, as illustrated by the strong incentive to have local policy reflect local norms. Kevin Martin from Facebook identified the importance of regulatory regimes for expanding infrastructure for internet access for areas that currently do not have it. He urged policymakers to not lose sight of the broader goal of connectivity. Peter Davidson from Verizon agreed that tax and regulatory policies should be viewed through the lens of connecting people. He identified digital protectionism as a concern and urged principles and norms for cross-border data flows to be included in trade agreements to encourage investment. Video of the discussion will be up on the TPI YouTube page.

There is much more to come soon, including a wrap-up of the IP-themed luncheon discussion and tonight’s dinner keynote by Kelly Merryman, Vice President of Content Partnerships for YouTube. Stay tuned.

The NABU Network: A Great Lesson, But Not About Openness

Thursday, February 5th, 2015

When announcing his plan to regulate Internet Service Providers under Title II in Wired, FCC Chairman Tom Wheeler argued that his experience at NABU Network in the 1980s helped inform his decision. He writes that NABU failed because “The phone network was open whereas the cable networks were closed. End of story.”

But that’s not the whole story, and its lessons aren’t really about openness. Instead, the NABU story teaches us about the importance of investment and network effects.

NABU sprang from the mind of Canadian entrepreneur John Kelly, who realized that cable television networks were uniquely suited to high-speed data transmission. The service allowed a home computer to connect to a remote software library and, in principle, play games, shop, bank, and do email. And apparently it could do all that at speeds up to 6.5 Mbps—even more than Chairman Wheeler claimed in his recent Wired article.[1] Not too shabby. NABU first launched in Ottawa in 1983, followed by Sowa, Japan, and Fairfax, VA, in 1984. By the time it went out of business it had reached agreements with cable companies in 40 other regions.

As it turned out, the world wasn’t ready for NABU, and it failed in 1986.

Analyses of NABU, however, do not point to issues of openness as the cause of death. After all, other computer networks in the early 1980s that relied on the telephone network also failed.[2]

Instead, post-mortems point to issues we know are important in network industries: network effects and investment, or, rather, the lack thereof in both cases.

As has been written ad nauseam, the Internet is a two-sided (actually, multi-sided) platform. In order to succeed, it must attract both users and applications. In early stages, when uses and users are scarce, it can be difficult to get anyone on board. The presence of indirect network effects makes it worse, since the benefit from each new user or application is greater than the benefit that accrues just to the new subscriber or developer. That is, a new user benefits by being able to access all the available content, but the entire network benefits due to increased incentives to develop new applications. The new user, however, does not realize all those benefits, meaning that adoption, at least in the early stages, may be artificially slow.

Early commercial data networks faced precisely this problem. Why would someone pay to go online if there were nothing to do when he logged on? In order to subscribe to NABU, consumers not only paid a monthly fee but also had to buy or lease a $299 NABU personal computer. Data networks tried to induce consumers to subscribe by making collections of software available. In the 1980s, however, most commercial data networks just could not provide enough of an incentive to attract or keep subscribers.

The failure to create enough reasons to subscribe played an important role in NABU’s demise. As one source put it, “the NABU Network did not catch on due to lack of accessible resources.”

Another reason for its failure appears to have been the inability of the then-existing infrastructure to fully deliver on NABU’s promises. Cable TV systems were not built to handle much upstream traffic—an issue they still face today. Upgrading the cable infrastructure for two-way communication required significant investment.

Competition also made survival difficult for NABU. NABU faced direct competitors in the form of other data networks like AOL (founded in 1985), Prodigy, and the dominant firm, Compuserve. Additionally, to the extent that consumers would sign up to play games, NABU also faced competition from packaged software games and gaming consoles, and faced the same over-saturation of the market that led to the great video game crash. It even faced potential competition from The Games Network, a firm that was developing a system that used cable networks to distribute video games but failed to get off the ground.

In short, the market wasn’t quite ready for the kind of service NABU was selling, although NABU founder Kelly was right about the potential of cable networks. As Marty McFly might have said to potential subscribers in the 1980s, “your kids are gonna love it.”

Openness is a key part of the Internet. It just wasn’t a key part of the NABU story. Instead, that story reminds us of the importance of network effects, the economics of multi-sided networks, and network investment. Unlike in the 1980s, these are now working together in a virtuous cycle favoring innovation. Let’s make sure any new rules don’t change that.


For a fascinating and detailed history of early data networks, including NABU, see

Zbigniew Stachniak, “Early Commercial Electronic Distribution of Software,” IEEE Annals of the History of Computing 36, no. 1 (2014): 39–51, doi:10.1109/MAHC.2013.55.

[1] Stachniak, “Early Commercial Electronic Distribution of Software,” n. 21.

[2] Stachniak, “Early Commercial Electronic Distribution of Software,” Table 1.


Google, Search Ranking, and the Fight Against Piracy

Monday, October 20th, 2014

Last month, Rahul Telang and I blogged about research we conducted with Liron Sivan where we used a field experiment to analyze how the position of pirate links in search results impacts consumer behavior. Given this research, we were very interested in Google’s announcement last Friday that they were changing their ranking algorithm to make pirate links harder to find in search results.

According to the announcement, Google changed their ranking algorithm to more aggressively demote links from sites that receive a large number of valid DMCA notices, and to make legal links more prominent in search results. The hope is that these changes will move links from many “notorious” pirate sites off the first page of Google’s search results and will make legal content easier to find.

One might ask whether these changes — moving pirate results from the first to the second page of search results and promoting legal results — could have any effect on user behavior. According to our experimental results, the answer seems to be “yes, they can.”

Specifically, in our experiment we gave users the task of finding a movie of their choosing online. We then randomly assigned users to a control group and to two treatment groups: one where pirate links were removed from the first page of search results and where legal links were highlighted (legal treatment), and one where legal links were removed from the first page of search results (piracy treatment).

Our results show that users are much more likely to purchase legally in the legal treatment condition than in the control. We also found that these results hold even among users who initially search using terms related to piracy (e.g., by including the terms “torrent” or “free” in their search, or by including the name of well-known pirate sites), suggesting that even users with a predisposition to pirate can be persuaded to purchase legally through small changes in search results.
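To make the comparison concrete, here is a minimal sketch of how one could tabulate and test legal-purchase rates across the three conditions. This is not the code from our paper, and the file and column names are hypothetical:

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical per-user data: assigned condition ("control",
    # "legal_treatment", or "piracy_treatment") and whether the user
    # ultimately purchased through a legal channel (0/1).
    users = pd.read_csv("search_experiment.csv")

    # Legal-purchase rate by condition.
    print(users.groupby("condition")["purchased_legal"].mean())

    # Chi-squared test of independence between condition and outcome.
    table = pd.crosstab(users["condition"], users["purchased_legal"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")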

Given our findings, reducing the prominence of pirated links and highlighting legal links seems like a very promising and productive decision by Google. While it remains to be seen just how dramatically Google’s new search algorithm will reduce the prominence of pirate links, we are hopeful that Google’s efforts to fight piracy will usher in a new era of cooperation with the creative industries to improve how consumers discover movies and other copyrighted content, and to encourage users to consume this content through legal channels instead of through theft. If implemented well, both Google and studios stand to benefit significantly from such a partnership.

Does Paying Olympians for Winning Medals Make Them More Likely to Win?

Wednesday, August 15th, 2012

Missy Franklin, America’s newest star swimmer and one of its most winning Olympians this summer, took home $110,000 in addition to her four gold medals and one bronze. That’s because the United States Olympic Committee pays athletes $25,000 for each gold, $15,000 for each silver, and $10,000 for each bronze medal they win (4 × $25,000 + 1 × $10,000 = $110,000). The United States isn’t alone in rewarding winning athletes. At least 42 other countries also pay winners, with payouts for gold ranging from about $11,000 in Kenya to $1.2 million in Georgia.


Incentives might mean more than money. In 2008, medalists from Belarus received not just money, but also free meat for life.[1] At least one country includes a stick in addition to its incentive carrot. North Korea, which is apparently the basis for The Hunger Games, reportedly rewards its gold medal winners “with handsome prize money, an apartment, a car, and additional perks like refrigerators and television sets,” while “poor performances” could yield time in a labor camp.[2]

Why do countries pay winners? The most obvious answer is that they want to create an additional incentive to win. But does it work? We set out to answer this question.

We put together data on each country’s medal count and medal payout scheme for the 2012 Olympics in order to test for a correlation.[3] We also control for country size and income, since those are surely the biggest predictors of how many medals a country will win: more populous countries are more likely to have that rare human who is physically built and mentally able to become an Olympic athlete, while richer countries are more likely to be able to invest in training those people.

Thus, we estimate the following regression:

Medal_c = f(population_c, per capita income_c, medal payout_c), where c indicates the country. The table below shows the results. As expected, bigger and richer countries win more medals, especially gold medals. Curiously, however, medal payouts are negatively correlated with winning medals, and this correlation is statistically significant for bronze and silver medal payouts.

Table 1: Relationship Between the Number of Gold, Silver, and Bronze Medals and Country Characteristics

(including only countries for which we could find information)
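For readers who want to try this at home, here is a minimal sketch of the estimation in Python. The CSV and its column names are hypothetical, and the plain OLS specification is illustrative rather than our exact estimation:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per country with 2012 medal counts,
    # population, per capita income, and payout per medal (USD).
    df = pd.read_csv("medal_payouts_2012.csv")

    # One regression per medal color, as in the tables here.
    for medal in ["gold", "silver", "bronze"]:
        fit = smf.ols(
            f"{medal}_count ~ population + income_pc + {medal}_payout",
            data=df,
        ).fit()
        print(fit.summary())

Re-estimating under the assumption discussed below, that countries with no payout data offer no payouts, amounts to running the same code after filling the missing payout columns with zeros (for example, with df.fillna).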

What could explain this counterintuitive result?

One possibility is that it is simply wrong. Data on medal payouts do not exist in one place, so we had to search for information on each country. As a result, just because we could not find data on a country’s payouts does not mean it doesn’t have any. In the analysis above I excluded countries for which we had no information on medal payouts. Most of the omitted countries probably did not offer rewards, but we cannot say this with certainty.

We can partially address this question by assuming—and I emphasize that this may be an inappropriate assumption—that countries for which we found no data have no payouts and re-estimating the regression. The table below shows these results. Under this assumption, the results change slightly. Gold and silver payouts become positively correlated with number of medals won, but are far from statistically significant. In other words, this analysis simply suggests that there is little, if any, correlation between awards and medals. Payouts for bronze medals, however, become positive and statistically significant.

Table 2: Relationship Between the Number of Gold, Silver, and Bronze Medals and Country Characteristics

(including all countries and assuming that no information on rewards means the country does not offer rewards)

Another complication is that this analysis does not take into account non-monetary payouts, such as meat-for-life, or punishments like banishment to labor camps, but we do not have a way at the moment to incorporate that information empirically.

Finding no correlation between monetary payments and medals is not surprising in some countries. In the United States, for example, $25,000—while perhaps today a sizable per-medal sum for high school student Missy Franklin—is dwarfed by the millions of dollars she can earn in endorsements now that she has won her medals.

Such valuable endorsement opportunities, however, are probably less common in other countries, while the relative value of the rewards might be much higher. So, for example, $11,000 in Kenya may provide a much stronger incentive than $25,000 in the United States. We test this hypothesis by running the original regression and excluding richer countries. I exclude the tables here, but the results do not change: even including only poorer countries, where the rewards are more likely to make a material difference in someone’s life, rewards are not significantly correlated with winning more medals, even controlling for population and income.

Another explanation for these results is that elite athletes likely compete for a host of reasons unrelated to money: because they love their sport, because they want to be the best, or for other personal reasons that give them satisfaction.

But why would payouts be negatively correlated with winning medals? My guess is that in reality they aren’t. Instead, the payouts proxy for something else. Perhaps countries that perform worse than one would expect given their size and wealth offer larger monetary payouts in the hope that the money will incent better performance by their athletes. In that case, payouts would be negatively correlated with a country’s performance, all else equal, not because they fail to provide incentives, but because they appear precisely where the country should be doing better.

Still, overall the evidence suggests that these payments don’t increase the medal count. And because they benefit such a small number of athletes, it’s hard to argue that they are important for making it possible to earn a living as an Olympic-level athlete.

Overall, if the desire is to create additional incentives to win, countries might want to re-think their reward system.

Note: I thank Corwin Rhyan and Anjney Midha for their excellent research assistance and for indulging me by hunting laboriously for data on the Olympics even though we work at a think tank devoted to technology issues.



[3] We tried to find information for 2008, but could confirm information for only 10 countries.

Where do vendors to cable think the industry is heading? Evidence from trade show data, 2012

Tuesday, May 29th, 2012

For the past three years I have been collecting data about exhibitors at the NCTA trade show from the show’s website. With this year’s show having just ended, it’s time to take a look.

Exhibitor attendance

This year, the website listed 259 exhibitors, down from 271 in 2011 and 345 in 2010. These statistics somewhat exaggerate the number of companies that participate, since each company at each booth counts as a separate exhibitor. So, for example, Zodiac Interactive was represented both at booth ES-59 and at CableNET’s booth, and is therefore counted twice.

Hot or Not?

The show website lists the categories of products, services, or technologies each exhibitor selects to describe itself. An exhibitor can select several categories. To evaluate the prevalence of each category, I total the number of times each category is selected and then divide that by the number of exhibitors to make the figure comparable across years.
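Concretely, the tallying step looks something like the following sketch, which assumes the scraped data live in a CSV with one row per exhibitor-category pair (the file and column names are hypothetical):

    import pandas as pd

    # Hypothetical scrape of the show website: one row per
    # (exhibitor, category) pair.
    df = pd.read_csv("ncta_exhibitors_2012.csv")

    # Share of exhibitors selecting each category, which makes the
    # counts comparable across years with different attendance.
    prevalence = (
        df.groupby("category")["exhibitor"].nunique()
        / df["exhibitor"].nunique()
    )
    print(prevalence.sort_values(ascending=False).head(20))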

The figure below shows the top 20 categories for 2010, 2011, and 2012. The top three categories remain constant and are not surprising, given that this is the cable show: cable programming, video on demand, and IPTV.

Top 20 Product Categories, 2010-2012


The biggest winners were cloud services, mobile apps, and “multiscreen content” (although it is possible this last category was called something else in past years), which were not (officially) represented at all in previous years but are now in the top 20. Other new categories this year included social TV, broadband services, home networking, and content navigation.

Major New Product Categories 2012

“Broadband services” is rather vague and probably does not indicate any particular new product or service. The others, however, appear to represent new developments in cable. “Home networking” is related to cable companies’ interest in home monitoring services, and “content navigation” indicates interest in user interfaces that do more than change channels.

The following table shows the biggest gainers, in percentage points, for products and services that had also been exhibited in 2010 or 2011. WiFi products and services saw the biggest increase, followed by data analysis, home information services, and business services.

Biggest Gainers 2011-2012


The following table shows the category losers. The biggest losers appear to be categories most associated with traditional cable television: pay programming, pay-per-view, program guides, and video on demand. Personal video recorders showed a sharp dropoff, perhaps corresponding to the increase in cloud services, suggesting the industry expects consumers to do less recording of content at home and more downloading or streaming of it from the cloud. Educational programming decreased significantly, although “children’s programming” increased a bit (see above).

Biggest Losers 2011-2012

What does this mean?

The data themselves have certain problems that make drawing strong conclusions difficult. For example, they don’t control for the size of the exhibitors’ booths. NBCUniversal’s exhibit space (449), for example, is hardly comparable to Cycle30’s booth (2242). This problem is partly mitigated by larger booths holding multiple exhibitors and more categories. Additionally, the categories are self-reported by the exhibitors and do not appear to have strict definitions. Exhibitors have no incentive to select grossly inaccurate categories, since that would attract people unlikely to purchase their products, but exhibitors probably tend towards being overly inclusive so as not to miss potential clients. This tendency might bias the counts towards especially popular technologies. For example, perhaps exhibitors take liberties in claiming they offer “social TV” or “cloud services” because those contain popular buzzwords rather than because their products truly offer much in the way of those services.

2012 Cable Show Floor Plan

Despite these shortcomings, the data provide one source of information on where economic actors with money at stake think the industry is headed over the next year. And, according to them, the industry is moving away from its traditional role as linear video distributor and toward storing content in the cloud, trying to capitalize on trends in social everything, and providing other services like home monitoring.

FCC Reform Bills

Friday, November 4th, 2011

Politico’s Morning Tech reported Thursday that the release of the text of the already-approved USF order would probably be delayed until next week. Yet another delay in releasing an adopted FCC order to the public makes recently introduced legislation all the more appropriate.

Wednesday, Rep. Walden and Sen. Heller released legislation aimed at improving agency transparency and process at the FCC. Although some interest groups have voiced concern that the proposed reforms on transaction reviews would benefit telecom companies, or overall would curtail the agency’s ability to protect the public interest, the proposals concerning a cost-benefit analysis of regulations are sensible – and desperately needed.

The reforms, as described in Sen. Heller’s press release, would:

Require the Commission to survey the state of the marketplace through a Notice of Inquiry before initiating new rulemakings to ensure the Commission has an up-to-date understanding of the rapidly evolving and job-creating telecommunications marketplace.

Require the Commission to identify a market failure, consumer harm, or regulatory barrier to investment before adopting economically significant rules. After identifying such an issue, the Commission must demonstrate that the benefits of regulation outweigh the costs while taking into account the need for regulation to impose the least burden on society.

Require the Commission to establish performance measures for all program activities so that when the Commission spends hundreds of millions of federal or consumer dollars, Congress and the public have a straightforward means of seeing what bang we’re getting for our buck.

Apply to the Commission, an independent agency, the regulatory reform principles that President Obama endorsed in his January 2011 Executive Order.

Prevent regulatory overreach by requiring any conditions imposed on transactions to be within the Commission’s existing authority and be tailored to transaction-specific harms.

Identifying the actual market failure a regulation is meant to address should be a given for policymakers, but, unfortunately, the FCC rarely takes that approach. Even if attempts at pre-emptive regulation are well-intended, it is virtually impossible to analyze the effects of a regulation without some measurable outcome. TPI President Tom Lenard echoed both the need for an identified market problem and a cost-benefit analysis before enacting regulation in comments to the FCC in response to the Open Internet Order NPRM and in comments to the FTC regarding its proposed privacy framework, illustrating that such principles can, and should, apply across regulatory agencies. Recently, Scott Wallsten showed how the FCC could incorporate cost-effectiveness analysis into its decision-making process in the context of universal service reforms.

I’m crossing my fingers that some iteration of Rep. Walden and Sen. Heller’s legislation actually passes. It’s a great start toward sensible, meaningful reform of the agency.

Health Information Technology, High-Skilled Immigration, and Tax Administration: Radio Interview

Friday, October 14th, 2011

I was a guest on Progressive Radio Network’s “Of Consuming Interest” on September 9th, where I spoke about my work at TPI on health information technology, high-skilled immigration, and tax administration.

In my conversation with radio host Jim Turner, I discussed links between health policy and technology. I outlined the effects innovation can have on costs—to raise or reduce them—and the importance of looking at evidence to make sure policies are on the right track. I also talked about how technology affects privacy, both broadly and more specifically with regard to electronic health records. Privacy is important for consumers but privacy is not free—there are tradeoffs that require striking a balance. For example, stringent privacy rules have slowed hospitals’ adoption of electronic records, resulting in higher infant mortality.

Jim Turner and I also talked about issues involving federal subsidies to health information technology. While such technologies have the potential to spur innovation, reduce costs, and improve patient care, the roughly $30 billion provided to health care providers to speed the adoption of electronic health records in the 2009 economic stimulus could result in substantial waste and unintended consequences, even slowing the adoption of electronic records. As I argued in published comments to proposed program rules, these subsidies may end up funding activities already underway rather than inducing new investment and innovation. They can also backfire with results opposite to their intent if complex rules and uncertainties about qualifying for payment increase investment risk.

Health information technologies were the subject of the Aspen Forum workshop session I organized on the Internet, social media, and drug advertising. Consumers need information because they are playing an increasingly active role in their health care, and they are increasingly turning to the Internet and social media. Advertising goes hand in hand with public information, and studies show that the benefits of prescription drug advertising outweigh the costs. Indeed, restricting information about approved products results in the dissemination of inaccurate information and counterfeit products. In a recent opinion, the Supreme Court reaffirmed that drug advertising is protected speech under the Constitution.

Turning to immigration and innovation, I said that although immigration is always controversial, especially when unemployment is high, most analysts agree that lifting our stringent caps on immigration by scientists and engineers would boost innovation, productivity, and economic growth. What is less well understood is that high-skilled immigrants pay substantially more in taxes than they receive in federal benefits and are a plus for the federal budget, as my study showed. In response to Jim’s question about immigrants potentially displacing American workers, I pointed out that immigrants with advanced degrees tend to be complementary with domestic workers rather than substituting for them, resulting in higher earnings and more investment. But high-skilled immigration policy has been held hostage to comprehensive immigration reform, which is highly controversial as it involves border control issues and the problem of undocumented aliens.

Innovation in computer technology has led many people to assume that having the government prepare individual tax returns would reduce tax compliance costs. But a study I co-wrote with Prof. Joseph Cordes of George Washington University examined the evidence and concluded that implementing such a program is not advisable. Filers may not realize significant cost savings because checking a return for completeness and accuracy requires much of the same work as preparing a return. Advances in tax preparation software and other assistance have sharply reduced the cost of tax preparation, reducing the potential savings from return-free filing. Further, the additional costs to employers and other payers of income would be large and would disproportionately burden small businesses: employers’ data reporting deadlines would have to be moved up to allow the timely tax refunds that people count on. A return-free system would also introduce problems involving privacy, security, and taxpayers’ retained liability for errors in government-prepared returns, which could pose a particular issue for low-income filers.

Please go to the Of Consuming Interest Website to hear the full interview.

Spectrum Allocation in Japan

Tuesday, July 5th, 2011

I’m working on a case study of broadband in Japan. In the process I’ve translated the spectrum map for 335 MHz-2.2 GHz into English. Because I have not seen this in English, I’m posting it here for anyone who might be interested. The translations are based on Google Translate, so feel free to send me any corrections.

Research Roundup: Cyber Security, Network Neutrality and More

Monday, June 27th, 2011

This edition of Research Roundup highlights a paper by Amalia R. Miller and Catherine Tucker on the risks of publicized data breaches in the health sector. Miller and Tucker perform one of the first empirical analyses of data security in the medical sector by looking at how hospitals have adopted encryption software over time. They find that “the use of encryption software does not reduce overall instances of publicized data loss. Instead, its installation is associated with an increase in the likelihood of publicized data loss due to fraud or loss of computer equipment.” (p. 3) The authors speculate that a focus on encryption software may come at the expense of effective internal access controls and may lead to employee carelessness. In other words, without human-based company processes that complement encryption’s effectiveness, the risks of data loss could increase with the software’s implementation.

(Click through to the full post to see the abstract and link to this paper and 11 others on topics from privacy to copyright policy)