Archive for the ‘Wireless and Spectrum’ Category

Unleashing the Potential of Mobile Broadband: What Julius Missed

Thursday, March 7th, 2013

In yesterday’s Wall Street Journal op-ed, FCC Chairman Genachowski correctly focuses on the innovation potential of mobile broadband.  For that potential to be realized, he points out, the U.S. needs to make more spectrum available.  A spectrum price index developed by my colleague, Scott Wallsten, demonstrates what most observers believe – that spectrum has become increasingly scarce over the last few years.

The Chairman’s op-ed highlights three new policy initiatives the FCC and the Obama Administration are taking in an attempt to address the spectrum scarcity:  (1) the incentive auctions designed to reclaim as much as 120 MHz of high-quality broadcast spectrum for flexibly licensed – presumably, mobile broadband – uses;   (2) freeing up the TV white spaces for unlicensed uses; and (3) facilitating sharing of government spectrum by private users.

There are two notable omissions from the Chairman’s list.  First, he does not mention the 150 MHz of mobile satellite service (MSS) spectrum, which has been virtually unused for over twenty years due to gross government mismanagement.  A major portion of this spectrum, now licensed to three firms – LightSquared, Globalstar, and Dish – could quickly be made available for mobile broadband uses. The FCC is now considering a proposal from LightSquared that would enable at least some of its spectrum to be productively used.  That proposal should be approved ASAP.  The MSS spectrum truly represents the low-hanging fruit and making it available should be given the same priority as the other items on the Chairman’s list.

Second, if the FCC and NTIA truly want to be innovative with respect to government spectrum, they should focus on the elusive task of developing a system that requires government users to face the opportunity cost of the spectrum they use.  This is currently not the case, which is a major reason why it is so difficult to get government users to relinquish virtually any of the spectrum they control.  To introduce opportunity cost into government decision making, Larry White and I have proposed the establishment of a Government Spectrum Ownership Corporation (GSOC). A GSOC would operate similarly to the General Services Administration (GSA).  Government agencies would pay a market-based “rent” for spectrum to the GSOC, just as they do now to the GSA for the office space and other real estate they use.  Importantly, the GSOC could then sell surplus spectrum to the private sector (as the GSA does with real estate). The GSOC would hopefully give government agencies appropriate incentives to use spectrum efficiently, just as they now have that incentive with real estate.  This would be a true innovation.

In the short run, administrative mechanisms are probably a more feasible way to make more government spectrum available.  For example, White and I also proposed cash prizes for government employees who devise ways their agency can economize on its use of spectrum.  This would be consistent with other government bonuses that reward outstanding performance.

Sharing of government spectrum is a second-best solution.  It would be far better if government used its spectrum more efficiently and more of it was then made exclusively available to private sector users.  This is, admittedly, a difficult task, but worth the Administration’s efforts.

Life on the Dark Side of Network Effects: Why I Ditched My Windows Phone

Wednesday, January 2nd, 2013

For consumers, 2012 was a great year in wireless. Carriers rolled out 4G networks in earnest and smartphone competition heated up. Apple’s iPhone 5 release was no surprise. But no longer was Android relegated primarily to low-end phones. Ice Cream Sandwich received strong reviews and Samsung launched high end Android devices like the Galaxy S3 that rivaled the iPhone. Microsoft kept plugging away at the margins and introduced Windows Phone 8 with a new partner in Nokia, which had seen better days. For its part, RIM provided investors with numerous opportunities to short its stock.

I love gadgets. Especially new gadgets. So I eagerly awaited the day my wireless contract expired so I could participate in the ritual biennial changing of the phone. (I wish I could change it more frequently, but I wait to qualify for a subsidized upgrade because we also have to do things like occasionally buy food for the kids). But what phone to choose?

The iPhone 5 was mostly well-received, and even early skeptics like Farhad Manjoo wrote that once you held it you realized how awesome it was. Still, even though it had become a cliche critique, to me it just looked like a taller iPhone, not a newer iPhone, and I wanted to get something that felt really new, not really tall. Am I a little shallow for rejecting an upgrade for that reason? Yes, yes I am.

So after reading rave reviews and talking to friends who had already upgraded, I got the Samsung Galaxy S3.

I hated it.

The Android lock screen customizations and widgets should have made me happy, but they didn’t. I couldn’t find a setup I liked. Samsung’s hardware didn’t work for me, either. Buttons on both the right and left sides of the phone meant that every time I tried to press the button on the right I would also press the button on the left, screwing up whatever important task I was doing (OK, maybe that “important task” was Angry Birds, but still). Those aren’t inherent criticisms of Android or the Galaxy S3. They’re just my own quirks. (It isn’t you, it’s me).

Finally I got so frustrated with my phone that one day I hopped off the Metro on my way to work, went to the nearest AT&T store, returned it, and re-activated my old iPhone 4.

My first reaction to reanimating my 4 was relief that I could once again operate the phone properly. My second reaction was, “holy **** this screen is tiny!” I was sure my iPhone 4 had turned itself into a Nano out of spite while languishing unused.

After that, the Nokia Lumia 920 with Windows Phone 8 caught my eye. Great reviews (including this thoughtful and thorough review by a self-professed “iPhone-loving Apple fangirl” at Mashable who switched to the Lumia for two weeks), beautiful phone. And those “Live Tiles” on the home screen! No more old-fashioned grid-style icons. This, finally, was something new.

I wanted to love it. I tried to love it. I brought it home to meet my family. Some features are wonderful. The People Hub, in particular, combines Facebook, Twitter, and LinkedIn feeds in a nicely readable format. Nokia helped by developing a suite of apps for it and making great hardware. The phone is a nice size, has an excellent camera and a two-LED flash (which makes it the most versatile, if not the most powerful, $450 flashlight on the market). And while some reviews have complained about its heft, I appreciate a phone that can be used for self-defense.

But at the end of the day — and after the return period, natch — I just couldn’t handle being on the wrong side of the network effects.

Network Effects

Network effects come in two flavors: direct and indirect. With direct network effects, every user benefits as other users adopt the technology. Old-fashioned voice telephones are the classic example. If you own the only phone it is worthless because you can’t call anybody. But when the next person buys a phone you immediately benefit because now you can call him or her. (Unless you can’t stand that person, in which case his phone reduces the value of your phone to you, especially since with only two phones in the world it’s not like you can just change your number).
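To put rough numbers on that intuition: if every pair of phone owners can call each other, the number of possible connections grows much faster than the number of phones. Here’s a toy sketch of that arithmetic (an illustration of the standard pairwise count, nothing more):

```python
# Toy illustration of direct network effects: a network of n phones supports
# n*(n-1)/2 distinct calling pairs, so each new phone makes every existing
# phone more valuable.

def possible_connections(n: int) -> int:
    """Number of distinct calling pairs in a network of n phones."""
    return n * (n - 1) // 2

for n in [1, 2, 10, 100]:
    print(f"{n:>3} phones -> {possible_connections(n):>5,} possible connections")

# 1 phone supports 0 connections (worthless), 2 support 1, 10 support 45,
# and 100 support 4,950: other people's adoption does the work for you.
```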

Direct network effects aren’t a big issue with smartphones for most people. You can call any number from any device. (Though for curmudgeons like me that’s increasingly a cost rather than a benefit. Why am I expected to drop everything I’m doing and answer the phone just because someone else decided it was time to chat?) Popular apps like Facebook and Twitter, whose value derives from the size of their networks, are platform-agnostic, at least with respect to hardware and operating systems, so each user gets the benefit of additional users regardless of the (hardware and OS) platform.[1]

But, like The Force, indirect network effects are all-powerful among mobile operating systems. To paraphrase Obi-Wan Kenobi, “…indirect network effects are what give a smartphone its power….They surround us and penetrate us; they bind the users and app developers together.”

In other words, because the vast majority of all potential customers are on iOS or Android devices, it makes sense for developers to build apps for those platforms. If apps are successful there, then maybe it’s worth building apps for a small platform like Windows Phone. Those general incentives are true whether you are the proverbial kid in the garage or Google.

These incentives apparently even affect developers at Microsoft. While Microsoft seems to be putting significant resources into the Windows Phone operating system, it’s not clear that other Microsoft developers share the love. For example, although Microsoft owns Skype, the Windows Phone 8 Skype app was not available when the first phones went on sale. Skype is still only a “preview” app in the Windows Phone store.

As a result, Windows Phone users get the short end of the app stick.

To be sure, the Windows Phone store is far from empty, and some people will find everything they need. Certain apps I rely on, like Evernote and Expensify, are there and work well.

But, overall, the Windows Phone store feels like a dollar store in Chinatown. It has a lot of stuff–75,000 new apps added in 2012, according to Microsoft–but when you look closely you realize they’re selling Sherple pens rather than Sharpie pens. Sometimes the Sherple pen works fine. For example, Microsoft promised to deliver a Pandora app sometime in 2013, but in the meantime users can rely on the “MetroRadio” app, which somehow manages to play Pandora stations. God bless those third-party developers for stepping in and making popular services available to those of us who love them so much we’re willing to pay any price, as long as the price is zero. But third-party apps can stop working anytime the original source changes something, and it feels like being a second-class citizen in the app world.

Small platforms also have problems at the high and low end of the app ecosystem. Windows Phone is missing certain hugely popular apps like Instagram.  At the same time, because of the small customer base the odds of this month’s hot new app being readily available on (much less originating on) Windows Phone are tiny.

http://www.youtube.com/watch?v=Nn-dD-QKYN4

Relying on a competitor can be OK if you have some power

Even worse, not only does Microsoft need to overcome its network effects disadvantage in order to succeed, it must also have good access to products developed by its arch-nemesis, Google.

Relying on a competitor isn’t inherently disastrous. Apple clearly benefits from the excellent products Google makes for iOS. Recent stories have even suggested that some of Google’s iOS products are better than its companion Android products. There is no love lost between Google and Apple, but Google apparently needs Apple’s huge customer base as much as Apple needs Google.

That’s not to say such cooperation is easy or without risk. Apple buys chips for its mobile devices from archrival Samsung, but has become wary of relying on a competitor for such a crucial part of its golden goose. Similarly, Netflix relies on Amazon’s AWS data facilities for its video streaming, even though they compete in the video delivery market. That relationship, too, makes some uneasy, as when Netflix’s service went down over Christmas and Amazon’s did not. Nevertheless, Amazon and Netflix apparently believe each has enough to gain by working with the other that the relationship continues despite such hiccups.

But with only about two percent of the market, Microsoft is but a fart in a mobile windstorm. Even if Windows Phone were not a potential competitor to Android, it’s hard to make a business case for Google to care one whit about Windows Phone today. That is, Google faces the same lack of incentive to develop apps for Windows Phone that all developers face. And, given that Windows Phone is trying to compete with Android, it’s hard to come up with a good reason why Google should invest in the Windows Phone platform. In other words, Microsoft needs Google but Google doesn’t need Microsoft.

And Google’s lack of need for Windows Phone shows. YouTube doesn’t work well on Windows Phone, Gmail on the Windows Phone web browser looks like it was designed for an old feature phone, and Google itself offers only one lonely app–a basic search app–in the Windows Phone store.[2]

This isn’t anti-competitive behavior by Google by a long shot. The small number of Windows Phone users means that Google is unlikely to earn much of a return on investments in Windows Phone. And given that those returns are likely to be even lower if the investments help the Windows platform succeed, it becomes difficult, indeed, to see a reason for Google to invest much. If Windows Phone acquires enough users to generate sufficient ad revenues, however, you can bet Google will develop apps for it.

A New Hope

A third mobile platform could still succeed, despite these obstacles. Overcoming them will require enormous resources, and Microsoft, with an estimated $66 billion in cash, clearly has them. Whether it will deploy those resources effectively remains to be seen. IMO, more resources devoted to developing apps and fewer to embarrassingly bad ads might be an effective approach.

Like I said, I wanted to love my Lumia 920. And I want this new platform to succeed–more competition is good. I just don’t want it to succeed badly enough to suffer on the wrong side of the network effects in the meantime.

My iPhone 5 comes tomorrow. Don’t tell my wife I used her upgrade.

____________

[1] There are exceptions, of course. For example, Apple’s FaceTime and Find My Friends apps work only on Apple devices, but–much to Apple’s dismay, I’m sure–these do not appear to have had much effect on aggregate sales, at least in part because of close cross-platform substitutes like Skype and Google Latitude.

[2] Again, some third-party developers come partly to the rescue. Gmaps Pro, for example, provides a wonderful Google Maps experience on Windows Phones.

Unintended—But Not Necessarily Bad—Consequences of the 700 MHz Open Access Provisions

Tuesday, November 6th, 2012

Wireless data pricing has been evolving almost as rapidly as new wireless devices are entering the marketplace. The FCC has mostly sat on the sidelines, watching developments but not intervening.

Mostly.

Last summer, the FCC decided that Verizon was violating the open access rules of the 700 MHz spectrum licenses it purchased in 2008 by charging customers an additional $20 per month to tether their smartphones to other devices. Verizon paid a fine and allowed tethering on all new data plans.[1]

Much digital ink has been spilled over how to choose the shared data plan best tailored to a family with a myriad of wireless devices and varying demands for data. Very little, however, appears to have been said about individual plans and, more specifically, about those targeted to light users.

One change that has gone largely unnoticed is that Verizon effectively abandoned the post-paid market for light users after the FCC decision.

Verizon no longer offers individual plans. Even consumers with only a single smartphone must purchase a shared data plan. That’s sensible from Verizon’s perspective since mandatory tethering means that Verizon effectively cannot enforce a single-user contract. The result is that Verizon no longer directly competes for light users.

The figure below shows the least amount of money a consumer can pay each month on a contract at the major wireless providers. As the table below the figure highlights, the figure does not present an apples-to-apples comparison, but that’s not the point—the point is to show the choices facing a user who wants voice and data, but the smallest possible amount of each.

[Figure: Minimum monthly cost of a contract plan at each major carrier. Note: Assumes no data overages.]

The figure shows that this thrifty consumer could spend $90/month at Verizon, $60/month at AT&T, $70/month at T-Mobile, and $65/month at Sprint if the consumer is willing to purchase voice/text and data plans separately. Even Verizon’s prepaid plan, at $80/month, costs more than the others’ cheapest postpaid plans.

Moreover, prior to the shift to “share everything” plans, this consumer could have purchased an individual plan from Verizon for $70/month—$20/month less than he could today. At AT&T the price was $55/month but increased by only $5/month. Again, the point is not to show that one plan is better than another. Verizon’s cheapest plan offers 2 GB of data, unlimited voice and texts, and tethering while AT&T’s cheapest plan offers 300 MB of data, 450 voice minutes, and no texts or tethering. Which plan is “better” depends on the consumer’s preferences. Instead, the point is to show the smallest amount of money a light user could spend on a postpaid plan at different carriers, and that comparison reveals that Verizon’s cheapest option is significantly more expensive than other post-paid options and, moreover, increased significantly with the introduction of the shared plan.
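Pulling those numbers into one place, here is a minimal restatement of the comparison (nothing new, just the figures cited above):

```python
# Cheapest possible postpaid voice+data combination at each carrier, per the
# figures above, plus the pre-"share everything" individual plans where the
# post gives them.

min_monthly = {"Verizon": 90, "AT&T": 60, "T-Mobile": 70, "Sprint": 65}
pre_share_everything = {"Verizon": 70, "AT&T": 55}  # old individual plans

for carrier, old in pre_share_everything.items():
    new = min_monthly[carrier]
    print(f"{carrier}: ${old} -> ${new} (+${new - old}/month)")
# Verizon: $70 -> $90 (+$20/month); AT&T: $55 -> $60 (+$5/month)
```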

Is the FCC’s Verizon Tethering Decision Responsible for this Industry Price Structure?

There’s no way to know for sure. The rapidly increasing ubiquity of households with multiple wireless devices means that shared data plans were probably inevitable. And carriers compete on a range of criteria other than just price, including network size, network quality, and handset availability, to name a few.

Nevertheless, Verizon introduced its “share everything” plans about a month before the FCC’s decision. If we make the not-so-controversial assumption that Verizon knew it would be required to allow “free” tethering before the decision was made public and that individual plans would no longer be realistic for it, then the timing supports the assertion that “share everything” was, at least in part, a response to the rule.

How Many Customers Use These “Light” Plans?

Cisco estimated that in 2011 the average North American mobile connection “generated” 324 megabytes. The average for 2012 will almost surely be higher overall, and higher still among those with high-end phones. Regardless, even average use close to 1 GB would imply a large number of consumers who could benefit from buying light-use plans, regardless of whether they do.

Did the FCC’s Tethering Decision Benefit or Harm Consumers?

It probably did both.

The consumer benefits: First, Verizon customers who want to tether their devices can do so without an extra charge. Second, AT&T and Sprint followed Verizon in offering shared data plans, with AT&T’s shared plans also including tethering. Craig Moffett of AllianceBernstein noted recently that “Family Share plans are not, as has often been characterized, price increases. They are price cuts…”[2] because the plans allow consumers to allocate their data more efficiently. As a result, he notes, investors should worry that the plans will reduce revenues. In other words, the shared plans on balance probably represent a shift from producer to consumer surplus.

The consumer costs: Verizon is no longer priced competitively for light users.

The balance: Given that other carriers still offer postpaid plans to light users and that a plethora of prepaid and other non-contract options exist for light users, the harm to consumers from Verizon’s exit is probably small, while the benefits to consumers may be nontrivial. In other words, the net effect was most likely a benefit to consumers.

What Does This Experience Tell Us?

The FCC’s decision and industry reaction should serve as a gentle reminder to those who tend to favor regulatory intervention: even the smallest interventions can have unintended ripple effects. Rare indeed is the rule that affects only the firm and activity targeted and nothing else. More specifically, rules that especially help the technorati—those at the high end of the digital food chain—may hurt those at the other end of the spectrum.

But those who tend to oppose regulatory intervention should also take note: not all unintended consequences are disastrous, and some might even be beneficial.

Is That a Unique Observation?

Not really.

Could I Have Done Something Better With My Time Instead of Reading This?

Maybe. Read this paper to find out.


[1] The FCC allowed Verizon to continue charging customers with grandfathered “unlimited” data plans an additional fee for tethering.

[2] Moffett, Craig. The Perfect Storm. Weekend Media Blast. AllianceBernstein, November 2, 2012.

The AT&T/T-Mobile Merger Conundrum: Increase Efficiency AND Create Jobs?

Friday, December 2nd, 2011

How did the proposed AT&T and T-Mobile merger, which many viewed as so certain when announced, end up on life support? Is it because of the decision by the Department of Justice (DOJ) to challenge the merger in court? Or maybe because of skeptics’ claims regarding the likelihood of the merger “creating jobs?”

Those factors certainly played a role, but another reason the merger reached the brink of collapse is arguably that the current jobs crisis made it impossible for AT&T to justify the merger to antitrust authorities while also making it palatable to politicians and the FCC with its broader “public interest” standard.

For antitrust purposes, AT&T had to demonstrate that the merger would not substantially reduce competition and that, if it did, the increased efficiency of the merged company would greatly outweigh those costs. For political purposes, in an era of persistent unemployment AT&T decided it had to demonstrate that the merger would create jobs.

Horizontal mergers between large competitors, such as the proposed one between AT&T and T-Mobile, are generally subject to tough antitrust scrutiny. Antitrust policy is indifferent to the effect of a merger on jobs, instead focusing on the effects of the merger on competition and consumers while weighing those effects against the potential economic benefits of a more efficient merged firm.

As the DOJ-FTC Horizontal Merger Guidelines note, “Competition usually spurs firms to achieve efficiencies internally. Nevertheless, a primary benefit of mergers to the economy is their potential to generate significant efficiencies and thus enhance the merged firm’s ability and incentive to compete, which may result in lower prices, improved quality, enhanced service, or new products” (p.29).

The efficiency argument is always a high bar in a merger case since “the antitrust laws give competition, not internal operational efficiency, primacy in protecting customers” (p.31). One way the merged company might increase efficiency would be to lay off large numbers of workers if it believed it could maintain service quality while doing so. By appearing to take that option off the table and arguing that the merger was, in fact, good for jobs, AT&T raised the efficiency bar even higher than it normally is.

It is, of course, possible to increase employment and efficiency if the firm increases output by more than it increases costs. AT&T made an argument consistent with that outcome in its filings by contending that spectrum constraints are distorting investment decisions at both AT&T and T-Mobile.

AT&T’s biggest claim regarding jobs was that the merger would lead to more jobs through better mobile broadband. However, the empirical link demonstrating that broadband increases employment—rather than simply being correlated with higher employment—has not been rigorously established, as Georgetown Professor John Mayo and I demonstrate in a paper published earlier this year.

As a result, even if DOJ were willing to consider effects external to the firms, industry, and direct consumers, the speculative nature of the claims would probably cause the DOJ to disregard them. As the Merger Guidelines note,

Efficiency claims will not be considered if they are vague, speculative, or otherwise cannot be verified by reasonable means. Projections of efficiencies may be viewed with skepticism, particularly when generated outside of the usual business planning process. (p.30)

The FCC is more sympathetic to the effect on jobs than DOJ, but the staff report made it clear that it expected the merger to result in a net loss of direct employment and was highly skeptical of the claims regarding the indirect effects on employment (see Section V(G), beginning at paragraph 259 for the jobs discussion).

In short, even setting aside the substantive questions of the net effects on competition, consumers, and broadband availability, the merger was always going to be an especially tough sell in the current economic and political climate.

To win the day, AT&T had to convince antitrust authorities that improved efficiencies by the merged firm would outweigh any resulting reduction in competition while simultaneously convincing politicians that the merger was good for jobs. But convincing DOJ that the company would increase employment risked signaling to DOJ that the merger was not about efficiency, and convincing the FCC that the merger was good for efficiency risked signaling to the FCC that the merger would not produce jobs.

Unable to thread that needle, AT&T’s strategy collapsed. Whether it will succeed with a new strategy remains to be seen.

Use the Market to Allocate Spectrum

Wednesday, November 2nd, 2011

TPI President Tom Lenard has a post on The Hill’s Congress Blog discussing the benefits of allocating spectrum via voluntary incentive auctions.  Authorizing the FCC to hold auctions would not only make more spectrum available for the development of wireless broadband, but would also be a big step toward creating a more efficient, market-oriented spectrum regime.

Purchasers of spectrum through an FCC auction receive an “exclusive license” allowing them to use the spectrum for whatever purpose they want, so long as they don’t interfere with other licensees.  Those uses can change as new technologies emerge—e.g., as subscription TV overtakes over-the-air TV.  This is why this market-based system is flexible and can be expected to achieve an efficient allocation over time.  Moreover, these quasi-property rights are necessary for providers to invest the tens of billions of dollars required for advanced wireless services.

Lenard also addresses calls to allocate a significant portion of spectrum freed-up by incentive auctions to unlicensed uses.

Under the unlicensed model, the FCC establishes rules—such as power limits for approved devices—under which any device and any user can operate.  While this approach has yielded benefits—WiFi most notably—as with the legacy command-and-control model, there is no market mechanism in an unlicensed regime to move spectrum to its highest-valued uses.  It is also extremely difficult to determine the opportunity cost of allocating spectrum to unlicensed uses, and no way—other than relative lobbying clout—to determine how much, if any, should be so allocated.

Lenard warns that any spectrum freed up by incentive auctions but set aside for unlicensed uses would directly reduce the funds available for reducing the federal deficit.

The Congressional Budget Office estimates that incentive auctions would yield about $16 billion assuming proposals on the table to allocate spectrum and money to a public safety network are adopted.  The net contribution of incentive auctions to deficit reduction would be reduced substantially if any significant part of the spectrum is not auctioned and instead is set aside for unlicensed uses.

Read the entire post on The Hill’s Congress Blog.

Where Does the Cable Industry think It’s Going? Empirical Observations from the 2010 and 2011 Cable Shows: More Programming and Consumer Interface Applications

Monday, June 20th, 2011

Many aspects of the 2011 Cable Show were the same as the previous year. Like last year, the show featured:

  • Lots of swag,
  • My inability to understand why some people wait in lines of 30 minutes or more to get a free backpack (do they really value their time that little?),
  • The need to stay far away from the booth with the purple dinosaur crooning about how he loves you and you love him except that clearly nobody loves him, probably because of his pathetic cries for attention,
  • Company slogans that make you hope they put more thought into their products, like Huawei’s “Innovation Through Technology” (which is kind of like “construction through equipment”),
  • Lots of white, grey, and black boxes packed with all kinds of cool stuff that still just look like white, grey, and black boxes, and
  • Painful feet at the end of the day from too much walking and not enough sitting.

Despite those consistencies, some things were conspicuously (almost) absent this year. Most notably, the 2010 show floor was full of 3D television exhibits. This year a few booths had a 3D TV, but it was typically shoved into a corner, and nobody ever seemed to be watching it. Whether this means that companies that sell to cable have decided consumer demand for 3DTV is less than expected or simply decided nobody wanted to see that display again is hard to know.

Aside from the (thankfully, in my opinion) missing 3D experience, the plethora of inscrutable metal boxes makes it almost impossible to determine just from browsing the show floor what is new this year even if I were able to remember last year’s boxes.

Fortunately, the Cable Show categorizes exhibitors by what they do. These data make it possible to take an empirical look at where current industry participants think the cable industry is headed compared to what they thought last year.

The 2011 show featured 271 exhibitors, compared to 345 in 2010. On average, however, each exhibitor claimed to be promoting products in 4.0 product categories in 2011 compared to 2.7 product categories per exhibitor in 2010.  Because exhibitors chose more categories and the number of categories remained roughly constant, the average share of firms in each category increased by almost one percentage point. Even recognizing that general increase, certain product categories showed large increases. The share of firms offering programming increased by 21 percentage points, consumer interface technologies (e.g., set-top boxes, program guides) increased by 8.4 percentage points, and wireless technologies increased by 8 percentage points. The biggest decrease was among exhibitors offering system management, by about two percentage points.
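For anyone who wants to check the normalization, here is a minimal sketch using the totals from Table 1 and, as a single-category example, the Cable Programming counts from Table 2 (the 21-point “programming” figure above presumably aggregates several programming-related categories):

```python
# Sketch of the share arithmetic described above, using the Table 1 totals.

exhibitors = {2010: 345, 2011: 271}
avg_categories = {2010: 2.7, 2011: 4.0}
n_categories = {2010: 130, 2011: 128}

# Average share of exhibitors per category: with exhibitors picking more
# categories, the baseline share rises mechanically.
for year in (2010, 2011):
    avg_share = avg_categories[year] / n_categories[year]
    print(f"{year}: average category share = {avg_share:.1%}")
# 2010: ~2.1%, 2011: ~3.1%; that is the roughly one-percentage-point
# baseline increase attributed above to exhibitors choosing more categories.

# One category from Table 2: Cable Programming (57 of 345, then 77 of 271).
change = 77 / 271 - 57 / 345
print(f"Cable Programming share change: {change:+.1%} points")  # about +11.9
```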

Data

Presumably to make it easier for attendees to find the products that interest them, the Cable Show website groups exhibitors into business categories: 130 categories in 2010 and 128 in 2011. Most categories appear in both years, but 2010 had 11 categories not represented in 2011, while 2011 had 9 categories not represented in 2010. Table 1 lists the categories in alphabetical order and the number of firms in each.

It is not possible to compare the numbers directly, however, due to changes in the number of exhibitors. As Table 1 shows, the number of exhibitors fell from 2010 to 2011 while each exhibitor identified itself, on average, as offering products in more categories.

Table 1: Cable Show Number of Exhibitors and Categories

Year    Number of exhibitors    Average categories per exhibitor
2010    345    2.7
2011    271    4.0

Who’s at the show and how did that change from 2010 to 2011?

Figure 1 shows how well represented each category is at the show. In particular, it shows the share of exhibitors in each category, ordered from least to most in 2010. This approach only partially normalizes the data—it controls for the smaller show size but does not control for possible reasons why firms chose to include themselves in so many more categories in 2011 than they did in 2010. Nevertheless, the figure provides a good view of which categories are the most popular.

Figure 1: Share of exhibitors in each category

Figure 2 shows the percentage point change in the share of firms in each category. Because firms chose so many more categories in 2011, the average change is about 0.9 percentage points. Thus, any change bigger than 0.9 percentage points means the category is better represented, while any change smaller than that means the category is relatively less prevalent at the 2011 show.

The Figure shows that the share of exhibitors categorizing themselves as “programming” increased substantially, as did exhibitors focusing on end-user interfaces including set-top boxes, personal video recording, and interactive services. Mobile also increased from 2010. Systems management appeared to have the biggest decrease from the previous year.

Figure 2: Percentage point change in the share of exhibitors in each category

Conclusions

The 2011 show had about 20 percent fewer exhibitors than did the 2010 show. Those exhibitors placed themselves into far more categories, on average, than they did the previous year.

Controlling for the smaller show size, programming was substantially better represented in 2011 than in 2010, as were all manner of devices and software targeted at end-user interfaces, and wireless. Systems management showed the biggest decrease.

These changes are broadly consistent with what we observe in the broader communications landscape: the growing power of content companies relative to distributors and the growing importance of wireless. Firms that sell to cable apparently see growing expected profits in those areas, as well. Whether they turn out to be correct remains to be seen.

Table 2: Total Number Exhibitors in Each Category

Category 2010 2011
Accounting 3 5
Advertising 20 23
Amplifiers 3 6
Antennas 1 2
Architectural/Drafting 1 0
Billing Systems 14 14
Broadband Service Provider 5 4
Brokerage 0 1
Business Services 13 11
Cable Drop Installation 5 1
Cable Information 3 1
Cable Modem Manufacturer 0 3
Cable Modem Reseller 0 1
Cable Modems 3 4
Cable Programming 57 77
Cable Residential Gateways 7 14
Cable Supplies 1 0
Cablecasting Equipment 1 2
Calibrators 1 0
Children’s Programming 6 4
CMTS 4 6
Coaxial Cable Connectors 4 1
Coaxial Drop Cable 4 2
Commercial Insertion Equipment 2 2
Competitive Intelligence 2 4
Computer Aided Dispatch 1 1
Computer Services 3 5
Computer Software 22 24
Conditional Access 3 12
Construction Materials & Equipment 1 1
Consultants 10 6
Customer Retention 4 10
Datacommunications Equipment 2 8
Datacommunications Services 1 5
Digital Cable Receiver 4 4
Digital Compression 4 2
Digital Headend Equipment 14 18
Digital Video 14 23
Distribution Equipment 8 4
DVB Product 2 7
EAS Systems 1 0
Educational Programming 7 19
Electronic Entertainment 3 3
Electronic Recycling 1 1
Emergency Warning Systems 1 1
Engineering & Construction Services 0 1
Enhanced Systems 2 2
Equipment Recovery 2 0
Equipment Repair 3 1
Fiber Optic Cable 6 4
Fiber Optic Distribution Systems 5 6
Fiber Optic Equipment 6 7
Field Services 4 5
Filters 2 0
Financial Services 0 2
Fleet Management Services 3 4
Games 6 3
HDTV 36 36
Headend Equipment 17 14
HFC Cable Demodulators 3 1
HFC Cable Modulators 3 1
High-Speed Internet Access 4 3
Home Information Services 0 2
Home Shopping Program/Services 3 4
Installation Services 4 1
Intelligent Networking 6 5
Interactive Databases 4 4
Interactive Programming 14 21
Interactive Services 24 34
International Supplier 3 4
Internet Service Provider 4 6
Internet TV Provider 7 14
IPTV 42 46
Market Research 1 2
Marketing 7 7
Microwave Equipment 3 3
MMDS Equipment 1 0
Mobile 17 26
Multi-Media Systems 5 7
Music Library 1 0
Music Programming/Services 5 3
Network Management Systems 16 20
New Networks 3 6
News Services 3 6
Non-Profit Organization 6 2
Operational Support Systems Solutions 10 13
Optical Networking 8 6
Outside Plant, Fiber & Cable Enclosures 1 1
Pay Cable Programming 8 27
Pay-Per-View Equipment 1 1
Pay-Per-View Service 2 9
Personal Video Recording (PVR) 6 17
Primary Interactive Programming 0 2
Program Guides 7 10
Program Navigation Systems 1 4
Program Networks 29 19
Promotional Programs 3 0
Publications 3 2
Religious Programming 5 3
Remote Controls 5 6
Research & Development 5 2
Return Path Products 4 3
Routing Systems 1 3
Satellite 10 11
Security Dealer Programs 0 1
Security Systems 3 4
Set Top Boxes 18 25
Signal Security 2 1
Sound Services/Audio Equipment 2 0
Splitters 4 2
Sports Programming 9 7
Status Monitoring 6 3
Studios 0 6
Subscriber Authorization Systems 7 8
Subscriber Collection Services 2 5
Subscriber Pre-Screening 1 1
Subscriber Promotion 3 5
System Auditing 1 1
System Management 16 7
Systems Integrator 11 12
Telecommunications Equipment 20 14
Telecommunications Services 24 23
Telemarketing Services 1 0
Telephony Services 4 5
Test Equipment 6 6
Tools 2 2
Training Services 1 2
tru2way 19 20
Trunk & Distribution Cable 3 2
Video on Demand 52 52
VOIP 17 16
Voting/Polling 4 4
Weather Forecast Services 2 3
Weather Programming 2 3
WiFi Products/Services 6 9
Wire and Cable 3 1
Wireless Networking 9 9
Wireless Telephony Systems 1 4
Workforce Management System 7 14

[1] For an overview of the focus of the 2010 show, see http://www.cablefax.com/cfp/just_in/Cable-Show-Takeaways_41407.html

Net Neutrality Regulation’s First Target: Small Wireless Competitors?

Friday, January 14th, 2011

Telecommunications regulations have a long history of protecting incumbents, often because incumbents are able to use the regulatory process to insulate themselves from competition.  Unfortunately, we already see the seeds of that outcome in the response to a restrictive data plan offered by MetroPCS, though in this case due not to the actions of incumbents but to the actions of some public interest groups.

MetroPCS, a regional mobile provider, offers a number of service plans with different voice and data combinations.  Its cheapest plan is $40 per month and offers unlimited voice, messaging, and web access.  The unlimited web access, however, does not allow access to certain sites like Netflix and Skype, but does allow access to YouTube.  Access to the full Internet requires a more expensive plan.

Net neutrality advocates argue that the restricted plans violate at least the spirit, if not the letter, of the new regulations.  The advocates may very well be correct, and that’s the problem.

MetroPCS is a small player in the mobile market, as the table from the FCC below demonstrates. It has no market power. Subscribers are not “locked in” when they sign up because they don’t have to sign contracts.

[Table: Wireless subscribers, year-end 2009. Source: FCC, 14th Annual Report and Analysis of Competitive Market Conditions With Respect to Mobile Wireless, Including Commercial Mobile Services, 2010, p. 9. Note that these are voice subscribers.]

MetroPCS must believe that this combination of unlimited voice and unlimited use of a restricted set of web services will appeal to some people, and that walling off certain parts of the Internet will reduce its costs.

As an entrant in a high fixed-cost market, MetroPCS must find ways to differentiate itself from the larger carriers and reduce costs if it is to succeed. While it sounds appealing on its face to make the entire web accessible to MetroPCS subscribers, requiring MetroPCS to offer precisely the same services as larger carriers could leave it with no sustainable business model.

Allowing MetroPCS to experiment with business plans does not, however, mean that it should mislead consumers.  Our perusal of its website and calls to customer service left us confused about which services, exactly, it excludes from the plan.  Presumably MetroPCS uses a well-defined algorithm for deciding which sites it excludes. It should be able to explain that algorithm to potential subscribers, though any harm is limited due to the absence of contracts, meaning that consumers can switch plans or cancel if they find the restrictions too onerous.

Despite this (hopefully soon-to-be-rectified) transparency issue, this plan is a business model that one of the smallest players in the mobile industry hopes will help it to compete successfully against its much bigger rivals.

Prohibiting MetroPCS from offering its new plan would benefit the large, incumbent carriers, not consumers. Let MetroPCS experiment.  It would be a shame if the Commission’s first enforcement action under the new regulation reduces wireless competition.

Satellite Broadband: Line-of-Sight, Not Out of Mind

Wednesday, August 11th, 2010

The National Broadband Plan (NBP) estimates that firms would need subsidies totaling $23.5 billion to invest in the infrastructure necessary for universal broadband coverage in the United States (Exhibit 1-A).[1]

[Exhibit 1-A: Base-case broadband availability gap]

The problem with the Plan’s estimate is that it includes only DSL and 4G wireless and omits broadband-over-satellite, which is by far the cheapest option for serving the most costly areas.  Thus this “base case” grossly overstates the necessary costs of achieving 100% broadband availability.

The Broadband Plan notes that “while satellite is capable of delivering speeds that meet the National Broadband Availability Target, satellite capacity can meet only a small portion of broadband demand in unserved areas for the foreseeable future….[w]hile satellite can serve any given household, satellite capacity does not appear sufficient to serve every unserved household.” (p 137)

But satellite need not serve all “unserved” households.  Serving only the highest-cost households would yield enormous savings.

The 250,000 housing units (0.2% of the U.S. total) with the highest costs account for $13.4 billion of the claimed investment gap (OBI Technical Paper No. 1, p 41).  This eye-popping estimate reflects hypothetical decisions such as one to build out DSL to a single house in Orange County, NY for $366,126, which exceeds the county’s median home value, and to 30 dwellings in Kauai County, Hawaii at an average cost of $205,890 each, or about half of that county’s median.[2] Exhibit 3-H graphs the steep “hockey stick” costs implied by the base-case model.
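To see how a single house can produce a six-figure gap, here is a back-of-the-envelope version of the net-cost calculation the model uses (per note [2]: initial capex plus the discounted value of ongoing costs net of revenue, over 20 years at the NBP’s 11.25% discount rate). The inputs below are hypothetical:

```python
# Hypothetical net-cost calculation in the spirit of the NBP gap model:
# net cost = upfront capex + present value of (operating cost - revenue).

def net_buildout_cost(capex: float, annual_cost: float, annual_revenue: float,
                      years: int = 20, rate: float = 0.1125) -> float:
    """Present value of serving one location over the model horizon."""
    pv_net_operating = sum(
        (annual_cost - annual_revenue) / (1 + rate) ** t
        for t in range(1, years + 1)
    )
    return capex + pv_net_operating

# Invented remote household: $350,000 of capex, $1,500/yr to operate,
# $900/yr of revenue.
print(f"${net_buildout_cost(350_000, 1_500, 900):,.0f}")  # ~$354,700
# At these distances the capex alone dwarfs any plausible revenue stream,
# which is how one house can show a ~$366,000 gap.
```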

In its technical supplement explaining the investment gap, the Broadband Team estimates that using satellite (with minor federal support) to serve those 250,000 homes would reduce the gap by at least $11.4 billion, or almost 50%.[3]

The authors have clearly considered the tremendous efficiencies afforded by satellite access, and acknowledge the adequacy of broadband-over-satellite at meeting the NBP requirement for connection quality.[4] Recommendation 8.13 urges the FCC to consider “alternative approaches, such as satellite broadband, for addressing the most costly areas of the country” (p 150).  As such, the “broadband availability gap” as calculated should not be considered a strict endorsement of the technologies assumed (DSL and 4G), but rather a starting point for comparing the costs and benefits of alternative proposals.

To be sure, broadband-over-satellite has some drawbacks compared to other technologies.  Existing satellite broadband plans offer slower download and upload speeds than most wireline or other wireless technologies, are more expensive, and exhibit higher latency due to the extreme length of the “last mile” (more than 20,000 miles) to orbiting geostationary satellites.  Speeds will become less of an issue with two new satellites expected to go into service in the next two years, both offering up to 10 Mbps downstream to homes; Hughes says it will even sell business plans of up to 25 Mbps.

The question then becomes whether it is worth spending an additional $12 billion to give those households a DSL or 4G wireless broadband option.  To put that in perspective, consider that the U.S. government (NIH) budgets $50 million for discovery and development of drugs for “rare diseases”—defined as those affecting 200,000 or fewer people.[5] Many of those illnesses are deadly.  Does it make sense to spend billions to allow 250,000 households the option of reducing delays in their Internet transmissions by half a second?
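That half-second figure, incidentally, falls straight out of the geometry. A quick sanity check, assuming a geostationary satellite roughly 22,236 miles up and a request/response path that traverses that distance four times:

```python
# Minimum physics-imposed round-trip latency for geostationary satellite
# broadband: user -> satellite -> ground station, and back again.

SPEED_OF_LIGHT_MPS = 186_282   # miles per second, in vacuum
GEO_ALTITUDE_MILES = 22_236    # approximate geostationary altitude

round_trip_miles = 4 * GEO_ALTITUDE_MILES
latency = round_trip_miles / SPEED_OF_LIGHT_MPS
print(f"{latency:.2f} seconds")  # ~0.48 s before any terrestrial routing
```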

Apparently the Omnibus Broadband team didn’t think so.  And thanks to the recent stimulus package, the USDA (the federal government’s longstanding supporter of rural broadband) is increasingly on board.  It’s time we unite to make satellite broadband a priority in proposals for access in America’s most remote communities.


[1] The chart is actually taken from the corresponding technical supplement.  In the Broadband Plan itself (See Section 8.1) the gap is referred to as the “broadband availability gap” and was pegged at $24.3 billion before the estimate was revised.

[2] Estimated costs of buildout reflect net cost (initial capex and ongoing support less revenue) with a 20-year time horizon and 11.25% discount rate (the NBP standard). Data on gap by county available at http://www.broadband.gov/maps/availability.htm

[3] The team reports the gap to be $10.1 billion—that is, reduced by a full $13.4 billion—when factoring in satellite “even with a potential buy-down” (p 41).  It appears they have not factored in their estimate of an $800 million-$2 billion buy-down, a program in which the government would subsidize subscriptions to existing (planned) satellite capacity to bring the expected high subscription charges to a level approaching terrestrial service (p 93-94).  If needed, this cost would rightfully be subtracted from the savings, yielding the possible low of $11.4 billion reported above.

[4] That is, they acknowledge that satellite broadband will be sufficient for the “actual” 4 Mbps download, 1 Mbps upload minimum (NBP p 137).

[5] NIH requested this amount for the Therapeutic Rare and Neglected Diseases Initiative (see FY 2011 budget, p 5).  The amount overstates the magnitude of spending per patient because the program also covers neglected diseases, from which very few Americans suffer, and because it includes more than 6,800 diseases classified as rare, which together afflict an estimated 25-30 million Americans.

The FCC’s New Wireless Competition Report: The Right Way to Look at the Industry

Saturday, May 22nd, 2010

“If we had any more innovation [in wireless] I think our heads would explode.”
– Professor Gerald Faulhaber, comment at wireless conference in Berkeley, April 2010.

Hats off to the FCC for its new approach to evaluating wireless competition.  Its latest report on wireless competition explicitly recognizes that wireless services now include such a broad range of industries, activities, and linkages to other sectors that it no longer makes sense to think of wireless as a single, overarching “industry.”  Many observers believe—happily or indignantly, depending on who they are—that by failing to apply the phrase “effective competition” to everything wireless the FCC is sending a signal that it sees reasons to be concerned.

Perhaps that is the Commission’s intent.  Perhaps not.  I’ll leave divining its intentions to the Kremlinologists.  Instead, let’s step back and take a look at some of the economics underlying the analysis and the report’s central conclusions.

Until the 1980s economic analysis relied on the so-called “structure-conduct-performance paradigm” (SCPP), in which market structure was taken as given and a concentrated market was assumed to allow firms to behave as monopolists and therefore raise prices and reduce output.  Thus, a small number of firms was, by itself, cause for concern.  It sounds reasonable, and policymakers still seem implicitly to embrace the SCPP.  But a funny thing happened on the way to testing this seemingly obvious theory.  The empirical relationship between market structure and firm performance turned out to be weak.

It remains true that it is easier for a smaller number of firms to collude to raise prices and lower output than it is for a larger number of firms, so estimating market concentration can be a useful starting point for analysis.  However, economists realized in the 1980s that analyses of competition had to recognize that there is no straight line between market structure and performance, other factors are involved, and, indeed, firm performance itself plays an important role in determining market structure.  That means analyses of competition must focus on firm behavior and actual market outcomes to determine whether an industry is competitive.

Which brings us back to the FCC’s latest wireless competition report.

The report compiles lots of data on both market structure and the various aspects of behavior and performance of firms related to wireless.

Any concerns about the industry must come from certain features of market structure.  In particular, the report notes that the weighted average national Herfindahl-Hirschman Index increased to 2848, which indicates a concentrated industry, though the indicator varies widely across geographic areas.
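For readers who don’t keep the formula handy: the HHI is the sum of squared market shares, measured in percentage points, so it runs from near zero (many tiny firms) to 10,000 (a monopoly). A quick illustration with invented shares:

```python
# HHI = sum of squared market shares (shares in percent, index 0-10,000).

def hhi(shares_pct):
    """Herfindahl-Hirschman Index from market shares expressed in percent."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([25, 25, 25, 25]))   # 2500: four equal firms
print(hhi([40, 30, 20, 10]))   # 3000: same four firms, more skewed
# The report's 2848 sits between these; it is a weighted average across
# local markets, each with its own mix of carriers.
```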

Key indicators of behavior and performance in the industry—and therefore the most relevant features to evaluating its competitiveness—are eye-opening.

  • Churn (a measure of how many people switch providers and therefore an indicator of switching costs): increased slightly from 1.9% to 2.1% per month.  A recent study suggests that this rate of churn is higher than the average across industries.
  • Prices: The annual cellular consumer price index (CPI) decreased by 0.2% from 2007-2008, while the overall CPI increased by 3.8% during the same time period.
  • Pricing plans: New pricing plans emerged, based on prepaid and unlimited models, in addition to the more standard post-paid “bucket” of minutes.
  • Average Revenue per User (ARPU): ARPU has been flat, in nominal dollars, for years at about $47.
  • Profit margins: The report notes, “While the seven largest mobile wireless service providers all had EBITDA margins over 20 percent during the second quarter of 2009, only four – AT&T, MetroPCS, T-Mobile, and Verizon Wireless – had EBITDA margins greater than 30 percent, and the two largest providers had the highest EBITDA margins.”  The weighted average EBITDA for 2008 Q4 (the latest data the report provides that allow us to create the weights) is about 35 percent, down slightly from about 36 percent three years earlier.
  • Investment inputs: Wireless providers invested somewhere between $20-$25 billion in their networks in 2008 (the report notes that different sources had different estimates), which represents either a small decrease or increase from the previous year and a decrease in terms of investment as a share of revenue.
  • Investment outputs: The number of cell sites increased from about 18,000 in 2007 to almost 29,000 in 2008.
  • Advertising: In 2008 the wireless industry was the sixth-highest spender on advertising among product categories.  It was the second-highest spender on Spanish-language television advertising.
  • Handsets: Between 2006 and 2009 the number of manufacturers selling mobile handsets in the U.S. increased from 8 to 16, and the number of available handset models increased from 124 to 260.
  • Device innovation: In addition to the rise of smartphones, the report notes that entirely new wireless devices, like mifi cards that receive a cellular wireless signal and transmit a wifi signal, and machine-to-machine hardware are emerging.
  • Entry and exit: We see lots of both, with entry by providers like Leap, MetroPCS, and Clearwire, and exit through mergers.
  • Call quality: Problems per hundred calls decreased to its lowest level ever in 2008 and remained there in 2009.  Moreover, the gap in quality across providers decreased.

Other data are more ambiguous.  The report notes, for example, that some providers have increased early termination fees, but that those increases seem to be associated with higher handset subsidies.  The report further notes that the same handsets are available without early termination fees at much higher prices.

In short, we see in wireless an excellent example of why economists have largely abandoned the SCPP approach to evaluating competition in favor of looking at actual outcomes.  Thus, even if we accept the premise that the market for wireless providers has become more concentrated, we nevertheless see an incredibly dynamic market that is yielding new devices, new services, and lower prices.  Professor Gerry Faulhaber remarked at a conference on wireless in Berkeley last month, “if we had any more innovation I think our heads would explode” (see video at 9:35).

The FCC made a smart decision to gather lots of data about the myriad components of wireless and to focus most of its efforts on examining outcomes.  This approach will allow the Commission to make changes to its reports almost in real-time—a necessity given the rate of change in the industry itself.