Archive for the ‘Competition and Antitrust’ Category

2014 TPI Aspen Forum has Ended, but the Videos Live On…

Friday, August 22nd, 2014

Did you miss the Aspen Forum this year?  Or, do you just want to watch some of the panels again?  Videos of the panels and keynotes from the 2014 event are now up on the TPI website.

Some highlights from Monday night and Tuesday:

Comcast’s David Cohen was the Monday night dinner speaker.  In front of a packed room, Cohen spoke about the benefits of the Comcast/TWC deal, vertical and horizontal integration in the industry in general, and even revealed what keeps him up at night (hint: it’s not the communications industry).  His speech can be viewed here.

First up on Tuesday morning was a panel on copyright moderated by Mike Smith, TPI Senior Adjunct Fellow and Professor at Carnegie Mellon.  “Copyright Protection: Government vs. Voluntary Arrangements” featured Robert Brauneis from GW Law School, the Center for Copyright Information’s Jill Lesser, Jeff Lowenstein from the Office of Congressman Schiff, Shira Perlmutter from the USPTO, and NYU’s Chris Sprigman. Panelists discussed the copyright alert system, the state of the creative market in general, and the perennial question of what can be done to reduce piracy.  Video of the spirited panel can be viewed here.

Next up was the panel, “Internet Governance in Transition:  What’s the Destination?” moderated by Amb. David Gross.  The pretty impressive group of speakers discussed issues surrounding the transition of ICANN away from the loose oversight provided by the U.S. Dept. of Commerce.  Participants were ICANN Chair Steve Crocker, Reinhard Wieck from Deutsche Telekom, Shane Tews from AEI, Amb. Daniel Sepulveda, the U.S. Coordinator for International Communications and Information Policy, and NYU’s Lawrence White.  Video is here.

Finally, the Forum concluded with a panel on “Data and Trade,” moderated by TPI’s Scott Wallsten.  The panelists discussed how cybersecurity, local privacy laws, and national security issues are barriers to digital trade.  Speakers were USITC Chairman Meredith Broadbent, Anupam Chander from the University of California, Davis, PPI’s Michael Mandel, Joshua Meltzer from Brookings, and Facebook’s Matthew Perault.  Video of the discussion is here.

We hope all attendees and participants at the TPI Aspen Forum found it interesting, educational, and enjoyable.  We hope to see you next year!

Hope the FTC reads the Wall Street Journal

Monday, July 9th, 2012

This morning’s Wall Street Journal reported on the pending IPO of the travel website Kayak, which has been doing quite well.  Kayak’s revenue increased 39%, to $73 million, during the first quarter of 2012 compared to the same period a year earlier.  Over that period net income increased to $4 million from a loss of $7 million.

The Kayak IPO follows on the heels of a successful Yelp IPO in March.  The review site also has been prospering.  Just last week, the Journal reported that Apple was planning to incorporate Yelp into its new mapping application.

The Yelp IPO was preceded by the spinoff of travel site TripAdvisor from Expedia last December.  Since that time, TripAdvisor’s share price has increased by two-thirds.

What do all these companies have in common, in addition to the fact that they’ve been succeeding?  They have all been complaining to Congress and the FTC about Google’s supposedly anticompetitive practices.

Now it’s possible that they could be doing well in spite of anticompetitive behavior on the part of Google.  It’s also possible that they could have done poorly despite a lack of anticompetitive behavior.  But the fact is, the companies have been succeeding, even in these tough economic times.  Perhaps they’re the ones who, by lobbying the government, are competing unfairly.

Should Google Be a Public Utility?

Friday, June 8th, 2012

Jeffrey Katz, the CEO of price-comparison site Nextag, is an outlier from the virtually unanimous view that the Internet should remain unregulated.  In an op-ed in today’s Wall Street Journal, Mr. Katz takes the position that Google should be turned into a public utility, although he doesn’t use that terminology.

The op-ed is aimed at European Competition Commissioner Joaquin Almunia, who has set a July 2 deadline for Google to respond to the EU’s antitrust concerns. Commissioner Almunia will make a big mistake and risk serious damage to the Internet if he follows any part of Mr. Katz’s advice.

Mr. Katz is nostalgic for the old days.  Maybe he should get into a different, slower-moving industry.  He laments the fact that Google doesn’t work the way it “used to work.”  It now promotes its own (according to Mr. Katz) “less relevant and inferior” products.  Google used to highlight Nextag’s services, because they “were better – and they still are.  But Google’s latest changes are clearly no longer about helping users.”

In the U.S., antitrust authorities are skeptical about complaints from competitors and, hopefully, Mr. Almunia will be as well.  Indeed, there is no evidence that Google has engaged in the type of exclusionary practices that were the focus of the Microsoft case, for example.  It is true that both Google and Bing sometimes favor their own specialized search results.  Understandably, Mr. Katz doesn’t like this.  But both search engines have discovered this is a service their users do like.

The scope of Mr. Katz’s proposed remedy is astounding:

  • “Google needs to be transparent about how its search engine operates.”  Presumably that means making Google’s algorithm, and the changes that occur continually, public.  Perhaps Mr. Katz would like a forum where Nextag could express its views on Google’s algorithm changes before they are implemented.  That would certainly speed innovation along.
  • “When a competitor’s service is the best response for the user, Google should highlight it instead of its own service.”  Who determines the “best response”?  Does Mr. Katz want a say?
  • “Google should provide consumers with access to unbiased search results.” Who determines what is “unbiased” and how is it even defined?
  • “Google should grant all companies equal access to advertising opportunities regardless of whether they are considered a competitor.”  “Equal access” is a defining feature of public utility regulation.  It has no meaning in the absence of price regulation.  Is Mr. Katz suggesting price regulation for advertising on Google?

There is a large literature on public utility regulation that people tend to forget.  Suffice it to say, the experience overall was not beneficial for consumers.  That is why there has been a worldwide movement toward regulatory liberalization over the last few decades.  If regulating traditional industries was difficult, regulating an Internet company like Google, and a product like a search engine, in a pro-efficiency, pro-consumer manner would be far more complex – basically, impossible.

In the U.S., public officials and various other stakeholders are in the process of preparing for the international telecommunications negotiations at the December ITU meeting in Dubai, with the goal of keeping the Internet unregulated.  This argument becomes more difficult to make if we are in the process of doing the opposite.

Fundamentally, Mr. Katz wants Google to work “the way it used to work.”  That is not a recipe for innovation.  Hopefully, the authorities will see his recommendations for what they are – the self-interested proposals of a competitor – and discount them accordingly.

What Cable Monopoly?

Thursday, May 3rd, 2012

“The future is in fiber optic high-speed Internet access, as compared to DSL and cable modem service.”

“Many new business models are made possible by high-speed access, and fiber access in particular. By contrast, DSL and cable modem access are subject to sharp capacity limitations which are rapidly rendering them obsolete for the types of activities Americans want to engage in online.”

- Crawford, Susan P. “Transporting Communications.” Boston University Law Review 89, no. 3 (2009): 871–937, pp. 928 & 930.

“…the broad consensus seems to be that the long-term fixed platform will likely be fiber, and cable plant too will likely become increasingly fiber-based over time, as the theoretical and long-term practical capacity of fiber to the home systems will be orders of magnitude larger than for cable systems.”

- Benkler, Yochai, Rob Faris, Urs Gasser, Laura Miyakawa, and Stephen Schultze. Next Generation Connectivity: A Review of Broadband Internet Transitions and Policy from Around the World. The Berkman Center for Internet & Society, 2010, p. 63.

What a difference a few years makes! As late as 2009 Susan Crawford was arguing that cable broadband was becoming obsolete and Harvard’s Berkman Center believed the only long-term answer to increasing broadband demand was fiber.

Today, Crawford is warning of a looming cable monopoly. To be sure, DOCSIS 3.0 technology has given cable a relatively low-cost upgrade path while traditional telcos generally have to invest far more in fiber to achieve similar performance.[1]

So, what is really happening in the market? As the chart below shows, data on fixed broadband subscriptions contradict the claims of monopoly. The most recent FCC data go only through December 2010, so we extend the figure to June 2011 using data from the OECD.[2] The data show that cable has always held the majority of connections, peaking around 2003 when it held close to 60 percent of the fixed broadband market.

Sources: FCC reports on local telephone competition and broadband deployment, and OECD http://www.oecd.org/document/23/0,3746,en_2649_34225_33987543_1_1_1_1,00.html

The share of cable connections has been trending upwards but, at least as of last year, did not appear to be significantly different from the past.
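As a quick illustration of how the shares in the figure are computed, here is a minimal sketch; the subscription counts below are placeholders for illustration only, not the actual FCC or OECD series.

```python
# Cable's share of fixed broadband, computed from per-technology subscription
# counts. The numbers below are illustrative placeholders, NOT FCC or OECD data;
# the actual figure splices the FCC series (through Dec 2010) with OECD data
# (through Jun 2011).
fixed_broadband = {
    # period: subscriptions in millions, by technology
    "Dec 2010": {"cable": 45.0, "dsl": 31.0, "fiber_other": 6.0},
    "Jun 2011": {"cable": 46.5, "dsl": 30.5, "fiber_other": 7.0},
}

for period, subs in fixed_broadband.items():
    total = sum(subs.values())
    print(f"{period}: cable share = {subs['cable'] / total:.1%}")
```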

More recent data comes from companies’ financial reports. The following chart shows the quarter-to-quarter percentage change in the number of high-speed Internet subscribers for Comcast, Time Warner Cable, Verizon, and AT&T. Cable companies have been doing well in terms of net additions for several quarters, but not significantly better than Verizon, and even AT&T is reporting net gains from its U-Verse platform.

Sources: Company quarterly and trending reports.

Note: Time Warner Cable reported 10.716 million HSI subscribers in Q1 2012, which represented close to a 7 percent increase over Q4 2011. However, 550,000 of that increase came from TWC’s acquisition of Insight Communications and 42,000 of it from the acquisition of NewWave Communications. The percentage shown in the figure deducts increases due to acquisitions.[3]
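For readers who want to reproduce that adjustment, here is a minimal sketch of the arithmetic; the Q4 2011 base is backed out of the reported roughly 7 percent increase, so treat it as an illustrative assumption rather than a figure taken from TWC’s reports.

```python
# Sketch of the acquisition adjustment described in the note above.
# The Q4 2011 base is implied by the reported ~7 percent increase, so it is
# an illustrative assumption, not a reported figure.

Q1_2012_HSI = 10.716e6        # TWC high-speed Internet subscribers, Q1 2012
REPORTED_GROWTH = 0.07         # ~7 percent increase over Q4 2011
ACQUIRED = 550_000 + 42_000    # Insight Communications + NewWave Communications

q4_2011_base = Q1_2012_HSI / (1 + REPORTED_GROWTH)  # implied prior-quarter base
total_adds = Q1_2012_HSI - q4_2011_base             # total net additions
organic_adds = total_adds - ACQUIRED                # net adds excluding acquisitions
organic_growth = organic_adds / q4_2011_base        # growth rate net of acquisitions

print(f"Implied Q4 2011 base: {q4_2011_base / 1e6:.2f}M")
print(f"Organic net adds:     {organic_adds / 1e3:.0f}K")
print(f"Organic growth:       {organic_growth:.1%}")
```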

None of this evidence means that Crawford’s warnings are necessarily wrong, of course. Whether cable’s cost advantage will ultimately translate into a monopoly or any increased market power, however, will depend not just on technological differences but also on changes in demand.

When an HD video stream from Netflix requires less than 5 Mbps, cable’s DOCSIS 3.0 offers little advantage over a DSL connection that can deliver at least 5 Mbps. But demand will surely change over time, and cable’s cost advantage will be an important point in its favor. That’s one reason why AllianceBernstein analyst Craig Moffett is so bullish on cable stocks.

Even as critics pivot from demanding that we focus only on fiber to warning of a cable monopoly, the market is shifting under their feet again. Today, consumers are adopting smartphones and tablets in droves. The trend towards wireless is already affecting the development of Internet innovation (think mobile apps). Cable still has some advantages in that area—wireless providers need to offload their data somewhere, after all—but it may still not end up as the dominant technology.

More generally, this market changes quickly. A few years ago policymakers were being urged to focus on fiber. Now they are being warned about a cable monopoly even as wireless broadband is taking center stage, as the FCC data shown in the figure below demonstrate. And surely in a few years technology and demand will have moved us in directions we can’t yet predict.

Source: FCC Internet Access Services Report, October 2011, Table 7 http://transition.fcc.gov/wcb/iatd/comp.html

Policymakers should, without a doubt, keep a close eye on market conditions and work to ensure an environment conducive to competition. But if this fast-changing market teaches us anything, it’s that we should think twice before we conclude we know the endgame.


[1] Christopher Yoo has pointed out the fiber-cable worry flip-flop, so I can’t claim credit for noticing it.

[2] I have been very critical of the OECD rankings. However, data for a single country over time should be reliable if the within-country definitions remain constant. Judging from how closely the OECD data track the FCC data, it is likely they come from similar sources.

[3] http://www.fiercecable.com/story/insight-leadership-team-departs-following-completion-time-warner-cable-acqu/2012-03-01 and http://www.businesswire.com/news/home/20110613005676/en/Time-Warner-Cable-Acquire-Cable-Systems-NewWave

The Search Neutrality Police

Monday, December 19th, 2011

Three months after holding a hearing on Google’s search engine business practices, Senators Kohl and Lee have written a letter to FTC Chairman Leibowitz urging a thorough investigation of the company.  As anyone with even the remotest interest in the subject knows, the FTC has had such an investigation underway for some time now, and it is undoubtedly the most high-profile antitrust issue currently on the agency’s agenda.  Thus, the only purpose of such a letter would seem to be to apply political pressure to the agency in what is, essentially, an antitrust law enforcement matter.

Most worrisome, the letter contains hardly a mention of what is in consumers’ best interests, which should be the focus of antitrust enforcement.  Instead, while the Senators write that it is not their intention to protect any specific competitor, their arguments are based on the complaints of several competitors who testified at a committee hearing the Senators convened to hear those complaints.

We hope and trust that the FTC is undertaking a thorough investigation based on antitrust law as opposed to bowing to pressure from elected officials.  This will increase the likelihood of a result that is truly in the interest of consumers.

Internet Hysteria – Are We Losing Our Edge?

Thursday, December 15th, 2011

Scott Wallsten and Amy Smorodin

From Anthony Weiner’s wiener to the FCC’s brave stand on Americans’ shameful inability to turn down the damn volume by themselves, 2011 has been a big year for tech and communications policy. But how has one of the Washington tech crowd’s most important products—Internet hype—fared this year?  In this post, we seek to answer this crucial question.

The Internet Hysteria Index

The Internet is without doubt the most powerful inspiration for hyperbole in the history of mankind. Some extol the Internet’s greatness, like Howard Dean, who called the Internet “the most important tool for re-democratizing the world since Gutenberg invented the printing press.”[1] Others fret about the future, like Canada’s Office of the Privacy Commissioner, which claimed, “Nothing in society poses as grave a threat to privacy as the Internet Service Provider.”[2]

Sometimes the hyperbole is justified. For example, thanks to Twitter, attendees at this past summer’s TPI Aspen Summit were privy to a steady stream of misinformation even before the DC-area earthquake stopped.[3]

In the same spirit, we present the Internet Hysteria Index (IHI). The IHI, which the DOJ and FCC should take care not to confuse with the HHI, is the most rigorous and flexible tool ever conceived for gauging the Internet’s “worry zeitgeist”. It’s rigorous[4] because it uses numbers and flexible[5] because you can interpret it in so many different ways that it won’t threaten your preconceived ideas no matter what you believe.

The IHI has two components. The first tracks fears of an unrecognizable, but certainly Terminator-esque, future Internet. We count the number of times the exact phrases “the end of the internet as we know it” and “break the internet” appear in Nexis news searches each year since 2000.
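For the curious, here is a minimal sketch of how such a tally could be computed, assuming the search results have been exported as a list of dated article texts; the record format and the sample entries are assumptions for illustration, not how Nexis actually delivers results.

```python
from collections import Counter
from datetime import date

# Exact phrases tracked for the first component of the IHI.
PHRASES = ("the end of the internet as we know it", "break the internet")

def yearly_phrase_counts(articles):
    """Count, by year, the articles containing any of the tracked phrases.

    `articles` is assumed to be an iterable of (published: date, text: str)
    pairs exported from a news search; the format is illustrative only.
    """
    counts = Counter()
    for published, text in articles:
        lowered = text.lower()
        if any(phrase in lowered for phrase in PHRASES):
            counts[published.year] += 1
    return dict(sorted(counts.items()))

# Toy usage with made-up records:
sample = [
    (date(2006, 5, 1), "Critics warn this would break the Internet."),
    (date(2011, 11, 16), "SOPA could mean the end of the Internet as we know it."),
    (date(2011, 12, 1), "Another bill, another fight."),
]
print(yearly_phrase_counts(sample))  # -> {2006: 1, 2011: 1}
```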

Figure 1: The End of the Internet as we Know It!


Figure 1 shows that 2011 produced a bumper crop of “break the internet” stories, mostly related to the Stop Online Piracy Act and the Protect IP Act. The spike in 2006 reflects a wave of Net Neutrality stories after AT&T’s then-CEO proclaimed that “what they [content providers] would like to do is use my pipes free, and I ain’t going to let them do that because we have spent this capital and we have to have a return on it.”

As our research illustrates, the “End of the Internet” hyperbole shows a healthy, generally upward trend, reflecting the effectiveness of our collective fretting and hand-wringing. Our data do not allow us to identify[6] whether the trend is due to clever Washington PR, lazy hacks retreading old lines, real concerns, or collusion among interest groups simply ensuring they can all stay in business by responding to each other.

The second component of our index measures the incidence of hand-wringing regarding the state of broadband in the U.S. In particular, this measure counts the number of times phrases suggesting lagging U.S. broadband performance show up in Nexis since 2000.[7] Figure 2 shows the results of our analysis.

Figure 2: The Grass is So Much Greener on the Other Side of the Pond: U.S. Broadband Sucks


The big spike in 2010 is related to the release of the National Broadband Plan. The prior high, in 2007, saw stories focusing on the OECD rankings, broadband mapping, and the beginnings of broadband plan discussions.

Unfortunately, 2011 was not a good year for misinterpreting shoddily-gathered statistics. Figure 2 shows a dramatic drop-off in bemoaning the dire state of U.S. broadband, possibly after everyone just got really, really tired of talking about the National Broadband Plan. We’re extremely concerned that as a result, the U.S. may have fallen dramatically in the OECD worry rankings. In fact, in a warning shot across our bow, on December 14 the BBC reported that “the UK remains in danger of falling behind when it comes to next-generation mobile services” and superfast broadband.[8] We’re hopeful American fretting will pick up once analysts actually read the FCC’s USF order that was promulgated under the cover of 23 days between approval and publication. On the other hand, there is a risk that the sheer volume of the Order—the equivalent of more than 4 million tweets—might dissuade people from talking about it ever again.

For generations, Americans have taken a back seat to nobody on the important issue of Internet hyperbole. Let’s hope the inside-the-beltway crowd pulls itself together and breathes some life back into the speech economy. Happy New Year.


[1] http://motherjones.com/politics/2007/06/interview-howard-dean-chairman-democratic-national-committee

[2] http://dpi.priv.gc.ca/index.php/essays/the-greatest-threat-to-privacy/

[3] Picture from Funny Potato, http://www.funny-potato.com/blog/august-23rd-2011-east-coast-quake.

[4] It’s not.

[5] In other words, “probably pretty meaningless.”

[6] Actually, they do, but we don’t want to do the work.

[7] Specifically, the search is ((“U.S. falling behind” OR “U.S. lagging”) AND broadband) OR ((“United States falling behind” OR “United States lagging”) AND broadband).

[8] http://www.bbc.co.uk/news/technology-16174745

The AT&T/T-Mobile Merger Conundrum: Increase Efficiency AND Create Jobs?

Friday, December 2nd, 2011

How did the proposed AT&T and T-Mobile merger, which many viewed as so certain when announced, end up on life support? Is it because of the decision by the Department of Justice (DOJ) to challenge the merger in court? Or maybe because of skeptics’ doubts about the merger’s promise of “creating jobs”?

Those factors certainly played a role, but another reason the merger reached the brink of collapse is arguably that the current jobs crisis made it impossible for AT&T to justify the merger to antitrust authorities while also making it palatable to politicians and to the FCC, with its broader “public interest” standard.

For antitrust purposes, AT&T had to demonstrate that the merger would not substantially reduce competition and that, if it did, the increased efficiency of the merged company would greatly outweigh those costs. For political purposes, in an era of persistent unemployment, AT&T decided it had to demonstrate that the merger would create jobs.

Horizontal mergers between large competitors, such as the proposed one between AT&T and T-Mobile, are generally subject to tough antitrust scrutiny. Antitrust policy is indifferent to the effect of a merger on jobs, instead focusing on the effects of the merger on competition and consumers while weighing those effects against the potential economic benefits of a more efficient merged firm.

As the DOJ-FTC Horizontal Merger Guidelines note, “Competition usually spurs firms to achieve efficiencies internally. Nevertheless, a primary benefit of mergers to the economy is their potential to generate significant efficiencies and thus enhance the merged firm’s ability and incentive to compete, which may result in lower prices, improved quality, enhanced service, or new products” (p.29).

The efficiency argument is always a high bar in a merger case since “the antitrust laws give competition, not internal operational efficiency, primacy in protecting customers” (p.31). One way the merged company might increase efficiency would be to lay off large numbers of workers if it believed it could maintain service quality while doing so. By appearing to take that option off the table and arguing that the merger was, in fact, good for jobs, AT&T raised the efficiency bar even higher than it normally is.

It is, of course, possible to increase employment and efficiency if the firm increases output by more than it increases costs. AT&T made an argument consistent with that outcome in its filings by contending that spectrum constraints are distorting investment decisions at both AT&T and T-Mobile.

AT&T’s biggest claim regarding jobs was that the merger would lead to more jobs through better mobile broadband. However, the empirical link demonstrating that broadband increases employment—rather than simply being correlated with higher employment—has not been rigorously established, as Georgetown Professor John Mayo and I demonstrate in a paper published earlier this year.

As a result, even if DOJ were willing to consider effects external to the firms, industry, and direct consumers, the speculative nature of the claims would probably cause the DOJ to disregard them. As the Merger Guidelines note,

Efficiency claims will not be considered if they are vague, speculative, or otherwise cannot be verified by reasonable means. Projections of efficiencies may be viewed with skepticism, particularly when generated outside of the usual business planning process. (p.30)

The FCC is more sympathetic to the effect on jobs than DOJ, but the staff report made it clear that it expected the merger to result in a net loss of direct employment and was highly skeptical of the claims regarding the indirect effects on employment (see Section V(G), beginning at paragraph 259 for the jobs discussion).

In short, even setting aside the substantive questions of the net effects on competition, consumers, and broadband availability, the merger was always going to be an especially tough sell in the current economic and political climate.

To win the day, AT&T had to convince antitrust authorities that improved efficiencies at the merged firm would outweigh any resulting reduction in competition while simultaneously convincing politicians that the merger was good for jobs. But arguing to politicians and the FCC that the merged company would increase employment risked signaling to DOJ that the merger was not really about efficiency, while arguing to DOJ that the merger was about efficiency risked signaling to the FCC that it would not produce jobs.

Unable to thread that needle, AT&T’s strategy collapsed. Whether it will succeed with a new strategy remains to be seen.

Penalizing Success – The FTC’s Google Investigation

Wednesday, June 29th, 2011

In theory, the antitrust laws do not penalize size, but it seems that virtually every firm that has become dominant in the technology sector—IBM, Microsoft, Intel, and now Google—ultimately becomes the subject of a major antitrust action.  The FTC formally opened its investigation of Google last week, and Paul Rubin and I wrote a piece on it that was published in Forbes.com.

We discuss the problems with antitrust action in high tech industries and, specifically, the nature of the complaints against Google:

Some websites are complaining that Google is manipulating its search results to advantage its own products and disadvantage its competitors. They want search to be “neutral.” But what does “search neutrality” mean? Does it mean that search engines should rank websites randomly?

Google’s market position was earned precisely because it found a way of ranking search results that is more useful for consumers, and it will quickly lose that position if someone can find an even better ranking algorithm. Before Google, the Web was much less useful precisely because search engines did not rank results in a way that consumers found informative. “Neutrality” could return us to that world.

Also problematic are the possible remedies the FTC could impose if it finds Google has violated antitrust law:

Google’s most valuable asset is its search algorithm, which is secret and constantly being refined. The secrecy of the algorithm is an integral part of its value because there is an entire industry trying to game it in order to achieve higher rankings. Would the FTC ask Google to reveal its algorithm so that the FTC lawyers and their technical advisors can try to determine how to make it neutral?

It is quite possible that the FTC investigation will not lead to further action because thus far there is no publicly available evidence that Google has violated the antitrust laws.  Let’s hope that the investigation doesn’t divert too much of Google’s attention and resources from what it should be doing—improving its current products and developing new ones.

The Google-ITA Merger Review Approaches the Finish Line

Monday, February 7th, 2011

The Department of Justice appears to be in the final stages of its review of Google’s acquisition of ITA Software.  Several travel sites, some (but not all) of which use ITA, oppose the deal.

Google is reportedly willing to honor ITA’s existing contracts with customers and to renew them.  Some of those customers who oppose the deal now want Google also to make available upgrades to the ITA software.  DOJ is reportedly considering challenging the deal if Google does not make such a commitment, even though six other companies produce and market travel software.  In fact, the three biggest travel sites—Expedia, Travelocity, and Priceline—do not use ITA software.

Former DOJ chief economist Bruce Owen, in a recent paper, “Antitrust and Vertical Integration in ‘New Economy’ Industries,” prepared for a TPI conference, “Antitrust and the Dynamics of Competition in High Tech Industries,” found that the empirical evidence shows that vertical integration is generally welfare enhancing, even when market power is present.  This suggests there is a high bar for blocking an acquisition such as Google’s acquisition of ITA.

Owen made a more general observation that is perhaps even more noteworthy.  Merger reviews focus on the specific transaction under review, but the more important effect may be the signals that the authorities send about how they will view future transactions.  These signals are incorporated into the risk assessments and investment decisions of potential acquirers (e.g., Google) and acquirees (e.g., ITA).

By making it difficult for Google to improve its search engine by incorporating better travel search features, the government is sending a signal to large companies, particularly in the tech sector, that it is going to make it difficult for them to improve their products, at least by acquisition.  Other things equal, this reduces the potential acquirer’s value.

The signal sent to ITA and other potential acquirees is that it is going to be more difficult for them to be acquired.  Making a major exit strategy more difficult reduces the expected payoff to venture capitalists and other investors and, hence, their willingness to risk their capital.  Requiring Google to make its upgrades available to competitors would certainly diminish the value of ITA to Google.  Google might walk away from the deal altogether or go through with it and let DOJ sue.

Even if Google accepted the condition and closed the deal, being required to make software upgrades available to competitors would presumably reduce the incentives to upgrade.  Such a condition also raises the question of how the upgrades would be priced and whether the Justice Department would become involved in pricing decisions.  (This pricing issue arises even if Google commits only to making the existing product available.)

Some improvements might be Google-specific.  Would Google have to make those improvements available, possibly compromising proprietary information?  Would a court decide which improvements were specific to Google and which were more generally applicable?  Such determinations could easily turn into quite a mess.

The most important consideration for the antitrust authorities is the effect on consumers.  Enhanced travel search capabilities that are part of the Google search engine have the potential to produce significant benefits for consumers.  Those benefits, and broader benefits that could result from other tech acquisitions down the road, may be lost if DOJ kills the Google-ITA deal by putting too many conditions on it.

Net Neutrality Regulation’s First Target: Small Wireless Competitors?

Friday, January 14th, 2011

Telecommunications regulations have a long history of protecting incumbents, often because incumbents are able to use the regulatory process to insulate themselves from competition.  Unfortunately, we already see the seeds of that outcome in the response to a restrictive data plan offered by MetroPCS, though in this case the pressure comes not from incumbents but from some public interest groups.

MetroPCS, a regional mobile provider, offers a number of service plans with different voice and data combinations.  Its cheapest plan is $40 per month and offers unlimited voice, messaging, and web access.  The unlimited web access, however, does not allow access to certain sites like Netflix and Skype, but does allow access to YouTube.  Access to the full Internet requires a more expensive plan.

Net neutrality advocates argue that the restricted plans violate at least the spirit, if not the letter, of the new regulations.  The advocates may very well be correct, and that’s the problem.

MetroPCS is a small player in the mobile market, as the table from the FCC below demonstrates. It has no market power. Subscribers are not “locked in” when they sign up because they don’t have to sign contracts.

Wireless Subscribers, Year-End 2009

Source: FCC 14th Annual Report and Analysis of Competitive Market Conditions With Respect to Mobile Wireless, Including Commercial Mobile Services. 2010. P.9. Note that these are voice subscribers.

MetroPCS must believe that this combination of unlimited voice and unlimited use of a restricted set of web services will appeal to some people, and that walling off certain parts of the Internet will reduce its costs.

As an entrant in a high fixed-cost market, MetroPCS must find ways to differentiate itself from the larger carriers and reduce costs if it is to succeed. While it sounds appealing on its face to make the entire web accessible to MetroPCS subscribers, requiring MetroPCS to offer precisely the same services as larger carriers could leave it with no sustainable business model.

Allowing MetroPCS to experiment with business plans does not, however, mean that it should mislead consumers.  Our perusal of its website and calls to customer service left us confused about which services, exactly, it excludes from the plan.  Presumably MetroPCS uses a well-defined algorithm for deciding which sites it excludes. It should be able to explain that algorithm to potential subscribers, though any harm is limited due to the absence of contracts, meaning that consumers can switch plans or cancel if they find the restrictions too onerous.

Despite this (hopefully soon-to-be-rectified) transparency issue, the plan represents a business model that one of the smallest players in the mobile industry hopes will help it compete successfully against its much bigger rivals.

Prohibiting MetroPCS from offering its new plan would benefit the large, incumbent carriers, not consumers. Let MetroPCS experiment.  It would be a shame if the Commission’s first enforcement action under the new regulation ended up reducing wireless competition.