The NABU Network: A Great Lesson, But Not About Openness

Thursday, February 5th, 2015

When announcing his plan to regulate Internet Service Providers under Title II in Wired, FCC Chairman Tom Wheeler argued that his experience at NABU Network in the 1980s helped inform his decision. He writes that NABU failed because “The phone network was open whereas the cable networks were closed. End of story.”

But that’s not the whole story, and its lessons aren’t really about openness. Instead, it teaches us about the importance of investment and network effects.

NABU sprang from the mind of Canadian entrepreneur John Kelly, who realized that cable television networks were uniquely suited to high-speed data transmission. The service allowed a home computer to connect to a remote software library and, in principle, play games, shop, bank, and do email. And apparently it could do all that at speeds up to 6.5 Mbps—even more than Chairman Wheeler claimed in his recent Wired article.[1] Not too shabby. NABU first launched in Ottawa in 1983 and Sowa, Japan and Fairfax, VA in 1984. By the time it went out of business it had reached agreements with cable companies in 40 other regions.

As it turned out, the world wasn’t ready for NABU, and it failed in 1986.

Analyses of NABU, however, do not point to issues of openness as the cause of death. After all, other computer networks in the early 1980s that relied on the telephone network also failed.[2]

Instead, post-mortems point to issues we know are important in network industries: network effects and investment, or, rather, the lack thereof in both cases.

As has been written ad nauseam, the Internet is a two-sided (actually, multi-sided) platform. In order to succeed, it must attract both users and applications. In early stages, when uses and users are scarce, it can be difficult to get anyone on board. The presence of indirect network effects makes it worse, since the benefit from each new user or application is greater than the benefit that accrues just to the new subscriber or developer. That is, a new user benefits by being able to access all the available content, but the entire network benefits due to increased incentives to develop new applications. The new user, however, does not realize all those benefits, meaning that adoption, at least in the early stages, may be artificially slow.
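The wedge between private and social benefits can be sketched in a toy model (every parameter below is invented for illustration): each new subscriber values the apps available today, but also induces developers to build more apps, a benefit that accrues to everyone else and that the subscriber ignores when deciding whether to sign up.

```python
# Toy illustration of indirect network effects (all parameters hypothetical).
# Each new user privately values the apps available; each new user also
# raises the number of apps developers find worthwhile to build, a benefit
# that accrues to everyone else on the network.

def apps_supported(users, apps_per_user=0.5):
    """Developers build more apps as the user base grows (stylized)."""
    return apps_per_user * users

def private_benefit(users, value_per_app=1.0):
    """What the marginal subscriber gets: access to today's apps."""
    return value_per_app * apps_supported(users)

def social_benefit(users, value_per_app=1.0):
    """Private benefit plus the spillover: existing users gain because
    the larger base induces more apps."""
    new_apps = apps_supported(users + 1) - apps_supported(users)
    spillover = new_apps * value_per_app * users
    return private_benefit(users) + spillover

for n in (10, 100, 1000):
    gap = social_benefit(n) - private_benefit(n)
    print(f"users={n:5d}  private={private_benefit(n):7.1f}  spillover gap={gap:7.1f}")
```

Because the gap between social and private benefit grows with the user base, early adopters capture the smallest share of the value they create, which is exactly why adoption can start artificially slowly.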

Early commercial data networks faced precisely this problem. Why would someone pay to go online if there were nothing to do when he logged on? In order to subscribe to NABU, consumers paid not just a monthly fee, but also had to buy or lease a $299 NABU personal computer. Data networks tried to induce consumers to subscribe by making collections of software available. In the 1980s, however, most commercial data networks just could not provide enough of an incentive to attract or keep subscribers.

Creating reasons to subscribe played an important role in NABU’s failure. As one source put it, “the NABU Network did not catch on due to lack of accessible resources.”

Another reason for its failure appears to have been the inability of the then-existing infrastructure to fully deliver on NABU’s promises. Cable TV systems were not built to handle much upstream traffic—an issue they still face today. Upgrading the cable infrastructure for two-way communication required significant investment.

Competition also made survival difficult for NABU. NABU faced direct competitors in the form of other data networks like AOL (founded in 1985), Prodigy, and the dominant firm, Compuserve. Additionally, to the extent that consumers would sign up to play games, NABU also faced competition from packaged software games and gaming consoles, and faced the same over-saturation of the market that led to the great video game crash. It even faced potential competition from The Games Network, a firm that was developing a system that used cable networks to distribute video games but failed to get off the ground.

In short, the market wasn’t quite ready for the kind of service NABU was selling, although NABU founder Kelly was right about the potential of cable networks. As Marty McFly might have said to potential subscribers in the 1980s, “your kids are gonna love it.”

Openness is a key part of the Internet. It just wasn’t a key part of the NABU story. Instead, that story reminds us of the importance of network effects, the economics of multi-sided networks, and network investment. Unlike in the 1980s, these are now working together in a virtuous cycle favoring innovation. Let’s make sure any new rules don’t change that.


For a fascinating and detailed history of early data networks, including NABU, see

Zbigniew Stachniak, “Early Commercial Electronic Distribution of Software,” IEEE Annals of the History of Computing 36, no. 1 (2014): 39–51, doi:10.1109/MAHC.2013.55.

[1] Stachniak, “Early Commercial Electronic Distribution of Software,” n. 21.

[2] Stachniak, “Early Commercial Electronic Distribution of Software,” Table 1.


A Closer Look at Those FCC Emails

Monday, November 24th, 2014

Recently, Vice News received 623 pages of emails from the FCC in response to a Freedom of Information Act request. Vice News has kindly made the entire PDF file available for download.

We decided to categorize the emails to get a picture of who contacts the FCC and what they want to talk about. This simple categorization is time-consuming given the need to review each page to pull out the relevant information. Nevertheless, our intrepid research associate, Nathan Kliewer, managed to slog his way through the pile, leaving us with a clean dataset. The fruits of his labor are presented below.

The statistics derived from this dataset come with important caveats. First, and most importantly, we categorize only the initial email in any given chain of emails. As a result, this analysis tells us nothing about the extent of a given email interaction. Second, it is possible that some emails are mischaracterized (seriously, you try reading 623 pages of blurry PDFs). Third, because the FCC released only selected emails, we do not know if these emails are representative of FCC email correspondence.

Nevertheless, let’s see what we’ve got.

Figure 1 shows the number of emails from different types of organizations.

Figure 1

Number of Emails by Type of Organization


The figure shows that most emails were initiated by news organizations, followed closely by industry. The FCC itself appears as the originator of a good number of these emails, most of which are from one FCC staff member to another. Eleven emails are from law firms (which represent industry clients), nine from people affiliated with universities, eight from other government agencies, seven from consumer advocacy groups, and six from think tanks. Among the unexpected emails is one from a representative of the government of Serbia simply inquiring about “current regulatory challenges,” and another from someone applying for an internship at the FCC (the latter we did not include in the figure).

Figure 2 highlights the general subject or topic of each email. The largest number of emails, not surprisingly, contains the sender’s views on policy issues relevant to net neutrality. The second-largest group consists of news items forwarded to FCC staff. Next come requests for comments, followed by information about events and requests for meetings.

Figure 2

Emails by Subject

Figure 3 combines these two categories to reveal which type of organizations focus on which issues. Industry, consumer groups, and other government agencies tend to send emails discussing views on policy issues. News organizations send requests for comments. Industry and law firms, generally representing industry, send ex parte notices.

Figure 3

Emails by Organization and Topic
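All three figures are simple cross-tabulations of the same categorized dataset. A minimal sketch of the tally (the records below are illustrative stand-ins, not the actual dataset):

```python
from collections import Counter

# Minimal sketch of the tallies behind Figures 1-3. The records below are
# illustrative stand-ins, not the actual 623 pages of categorized emails.
emails = [
    {"org": "news", "topic": "request for comment"},
    {"org": "industry", "topic": "views on policy"},
    {"org": "law firm", "topic": "ex parte notice"},
    {"org": "news", "topic": "news item"},
]

by_org = Counter(e["org"] for e in emails)                      # Figure 1
by_topic = Counter(e["topic"] for e in emails)                  # Figure 2
by_org_topic = Counter((e["org"], e["topic"]) for e in emails)  # Figure 3

print(by_org.most_common())
```

The same three counters, built over the full dataset, generate all of the charts above; the only analytical choices are the category labels themselves.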


Unfortunately, this meta-analysis tells us little about whether those emails mattered in any real way. I also can’t believe I spent so much time on this.

According to the Vice News story, the FCC plans on releasing more emails on November 26. I look forward to seeing an updated meta-analysis of those emails, but prepared by somebody else.

Where Do Vendors To Cable Think The Industry Is Heading? Evidence From Cable Show Data In 2014

Friday, April 25th, 2014

Scott Wallsten and Corwin Rhyan

For the past five years we have collected data about the exhibitors at the annual NCTA Cable Show from its website.  Each year we analyze trends in the industry through the categories used to classify the exhibitors.  Key observations this year include:

       »      The number of exhibitors continues to fall as it has in each of the past 4 years, from 345 in 2010 to 241 in 2014 (Figure 1).

       »      Cable Programming, Video on Demand, IPTV, and Multi-Screen Content are the first, second, third, and fourth most popular categories in 2014. The top three increased in popularity since last year, while Multi-Screen Content decreased slightly (Figure 2).

       »      New popular categories this year included RDK (Reference Design Kit)[1] and Content Search/Navigation Systems, with each having over 10 exhibitors in their first year. Fiber was absent in 2013, but has a few exhibitors in 2014 (Figure 4).

       »      Games, Consultants, and Research & Development show some of the largest year-over-year increases from 2013-2014, after Video on Demand and IPTV (Figure 5).

       »      Four previously popular categories are absent from the 2014 show—HDTV, New Networks, tru2way, and VOIP. Other notable decliners include 3D TV and Mobile Apps. These highlight some difficulties in interpreting the results without other information. HDTV likely disappeared because it is so ubiquitous while 3D TV disappeared because it has generally been a market disappointment (Figure 7).

Number of Participants

The number of exhibitors in 2014 is down over 30% from 2010. After a large drop in 2011, the count has declined at a more moderate 3% annual rate over the last three years. The number of exhibitors is biased slightly upwards because an exhibitor with multiple booth locations is counted once per location. However, the number of duplicates over the years is relatively small and consistent.

Figure 1: Number of Exhibitors 2010-2014


Hot Tech This Year

The Cable Show allows its exhibitors to define their companies by categorical labels which signal to potential customers the types of products and services offered.  An exhibitor can select multiple categories for their products.  In 2014, the average number of categories per exhibitor was 3.87, down slightly from 4.33 in 2013.  In general, we expect exhibitors to classify their products as generally and widely as possible, with the hope of attracting interested attendees to their booths.  In order to normalize the data for year over year comparisons, we divide the number of exhibitors in each category by the total number of exhibitors, yielding a percentage of exhibitors that select each category.  The top 20 categories are listed below for the last 3 years, with Cable Programming defining over a third of all exhibitors this year.
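That normalization is just a share computation. A quick sketch, using the 2014 total of 241 exhibitors from the text (the per-category counts below are invented placeholders; the text reports only that Cable Programming covered over a third of exhibitors):

```python
# Share of exhibitors selecting each category. The total (241) is from the
# text; the per-category counts below are hypothetical placeholders.
total_exhibitors = 241
category_counts = {
    "Cable Programming": 85,  # hypothetical; text says "over a third"
    "Video on Demand": 60,    # hypothetical
    "IPTV": 55,               # hypothetical
}

shares = {cat: n / total_exhibitors for cat, n in category_counts.items()}
for cat, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{cat:20s} {share:6.1%}")
```

Dividing by the yearly total is what makes the figures comparable across years, since the overall exhibitor count itself fell by about a third over the period.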

Figure 2: Most Popular Categories 2012 – 2014


In graphical form below, we plot the trends of this year’s top 5 most popular categories over the past 5 years.  While many of these categories have traditionally been near the top, most have grown over the past 5 years. 

Figure 3: Top 5 Most Popular Categories


While we cannot rule any particular hypothesis in or out based on these data, it is worth noting that the large increase in programming-related exhibitors coincides with unprecedented increases in the retransmission fees cable companies pay to programmers. It would be consistent with economic theory to see entry into this market as prices increase.

What’s In and Out in 2014

The categories used to classify products and services change regularly.  The new categories used in 2014 are listed in Figure 4. Some are similar to previous categories, such as Content Search/Navigation Services, which likely evolved from the separate category of Content Navigation, while others come with little previous background like RDK (Reference Design Kit).

Figure 4: New 2014 Categories


Many of the most popular 2013 categories continued to gain ground in 2014, with Video on Demand, IPTV, and Cable Programming showing strong gains.  Games and Consultants showed a strong increase in representation as well.  A complete list of the top gainers in 2014 is shown in Figure 5.  Some gainers declined in 2013 but returned with stronger showings in 2014.  This list includes categories such as Video on Demand, Billing, Internet TV Providers, IPTV, and Telecommunications Providers.  A chart of these categories is shown in Figure 6.

Figure 5: Biggest 2013-2014 Gainers


Figure 6: Categories that switched to growth in 2014


At the same time, some categories disappear between years.  In 2014 some notable categories are no longer present.

The categories that declined in 2014 included many that disappeared from the list completely, such as HDTV, New Networks, tru2way, and VOIP. Several theories could explain these disappearances: a category may have become so ubiquitous as to be meaningless in the context of a trade show (e.g., HDTV or VOIP); show organizers may have dropped a category because it overlapped too heavily with others (e.g., was “New Networks” the same as “Program Networks”?); or the category may simply no longer be relevant.

Other notable decliners include once “up and coming” technologies such as 3D TV, Mobile Apps, and Social TV. A decrease in a category is probably easier to interpret than an outright disappearance. 3D television, for example, has been a notable market disappointment and it is no surprise to see it disappearing from the show.

Figure 7: Biggest 2013-2014 Losers


Figure 8: Categories that switched to decline in 2014



Using data from the Cable Show’s exhibitors is advantageous because it is representative of the actors in the industry who have real money on the line.  In a tech world that loves to exaggerate the next “big thing”, using data directly from industry members might help provide a better understanding of where the industry is headed.  However, these data must be used with caution.  First, the categories are self-reported by exhibitors, and while they have a clear incentive to categorize their products and services accurately, some might also see advantages in identifying with hyped industry technologies to attract customers. Second, the analysis weighs each exhibitor identically, which clearly isn’t accurate, as some booths are massive and staffed by dozens of people while others are little more than a table and the company owner (Figure 9).

Despite these shortcomings, the data show a continued trend towards a cable industry more focused on its traditional role as a television service provider, with programming, television, video, and networks topping our list in 2014, while the hyped technologies that were set to revolutionize the cable industry in 2012 and 2013 fell in 2014.

Figure 9: 2014 Cable Show Floor Plan


[1] RDK is “a pre-integrated software bundle that provides a common framework for powering customer-premises equipment (CPE) from TV service providers, including set-top boxes, gateways, and converged devices.”

Where do vendors to cable think the industry is heading? Evidence from 2013 Cable Show data

Tuesday, June 11th, 2013

For the past four years (2010 – 2013) I have been collecting data about exhibitors at the Cable Show. Key observations based on the most recent data:

  • The number of exhibitors continues to decline, down to 251 in 2013 from 345 in 2010 (Figure 1).
  • Programming is the most popular exhibitor category, and has been steadily increasing in popularity since 2010. In 2013 nearly one-third of exhibitors classify themselves under programming. Multi-screen content, HDTV, video on demand, and IPTV are the second, third, fourth, and fifth most popular categories (Figure 2).
  • The categories with the biggest increases in representation since 2010 are multi-screen content, programming, HDTV, new technology, and cloud services (Figure 3).
  • The categories with the biggest decreases in representation since 2010 include telecommunications equipment, services, and VOIP (Figure 4).

Exhibitor attendance

This year, the website listed 251 exhibitors, continuing a steady decline from 2010 (Figure 1). The number is biased upwards because an exhibitor can be counted multiple times if it appears in multiple booths.

Figure 1: Number of Cable Show Exhibitors, 2010-2013



Hot or Not?

The website shows the categories of products, services, or technologies each exhibitor selects to describe itself. An exhibitor can select several categories. To evaluate the prevalence of each category I total the number of times each category is selected, and then divide that by the number of exhibitors to make it comparable across years.

The table below shows the top 20 categories for 2010 – 2013. Programming has remained the top category for all four years. However, multi-screen content jumped to second place, followed by HDTV, pushing video on demand and IPTV to numbers four and five.




Figure 2 shows how the top 5 exhibitor categories for 2013 have evolved over the past four years. Fully one-third of all exhibitors classify themselves as programming, nearly twice as many as in 2010. Multi-screen content did not exist as a category in 2010 while 16 percent of all exhibitors included themselves in this category in 2013.

Figure 2: Share of Exhibitors with Products in Top 5 2013 Categories Over Time



Consistent with the above figure, from 2010 – 2013 cable programming increased in representation more than any other category. Multi-screen content saw the second-largest increase, followed by mobile apps, new technology, and cloud services.

Figure 3: Categories with Biggest Increase in Representation Since 2010


Telecommunications services and equipment have seen the biggest decreases in representation since 2010, followed by VOIP, program guides, and optical networking. However, because “program guides” was not included as a category in 2013, it is not clear whether the category truly became less popular or is now simply called something else.

Figure 4: Categories with Biggest Decreases in Representation Since 2010



What does this mean?

The data themselves have certain problems that make drawing strong conclusions difficult. For example, counting exhibitors and categories implicitly assumes that each exhibitor is identical in size and importance, which clearly is not true (Figure 5). Additionally, the categories are self-reported by the exhibitors and do not appear to have strict definitions. Exhibitors have no incentive to select grossly inaccurate categories, since that would attract people unlikely to purchase their products, but they probably tend towards being overly inclusive so as not to miss potential clients. This tendency might bias the counts towards especially popular technologies. For example, perhaps exhibitors take liberties in claiming they offer “cloud services” because the term is a popular buzzword rather than because their products truly offer much in the way of those services.

Despite these shortcomings in the data, they provide one source of information on where economic actors with money at stake think the industry is headed over the next year. And, according to them, this year the industry is trending more towards its traditional role as video provider, focusing on programming and multi-screen content.

Figure 5: Exhibitor Map, 2013 Cable Show



Life on the Dark Side of Network Effects: Why I Ditched My Windows Phone

Wednesday, January 2nd, 2013

For consumers, 2012 was a great year in wireless. Carriers rolled out 4G networks in earnest and smartphone competition heated up. Apple’s iPhone 5 release was no surprise. But no longer was Android relegated primarily to low-end phones. Ice Cream Sandwich received strong reviews and Samsung launched high end Android devices like the Galaxy S3 that rivaled the iPhone. Microsoft kept plugging away at the margins and introduced Windows Phone 8 with a new partner in Nokia, which had seen better days. For its part, RIM provided investors with numerous opportunities to short its stock.

I love gadgets. Especially new gadgets. So I eagerly awaited the day my wireless contract expired so I could participate in the ritual biennial changing of the phone. (I wish I could change it more frequently, but I wait to qualify for a subsidized upgrade because we also have to do things like occasionally buy food for the kids). But what phone to choose?

The iPhone 5 was mostly well-received, and even early skeptics like Farhad Manjoo wrote that once you held it you realized how awesome it was. Still, even though it had become a cliche critique, to me it just looked like a taller iPhone, not a newer iPhone, and I wanted to get something that felt really new, not really tall. Am I a little shallow for rejecting an upgrade for that reason? Yes, yes I am.

So after reading rave reviews and talking to friends who had already upgraded, I got the Samsung Galaxy S3.

I hated it.

The Android lock screen customizations and widgets should have made me happy, but they didn’t. I couldn’t find a setup I liked. The Samsung’s hardware didn’t work for me. Buttons on both the right and on the left sides of the phone meant that every time I tried to press the button on the right I would also press the button on the left, screwing up whatever important task I was doing (OK, maybe that “important task” was Angry Birds, but still). Those aren’t inherent criticisms of Android or the Galaxy S3. They’re just my own quirks. (It isn’t you, it’s me).

Finally I got so frustrated with my phone that one day I hopped off the Metro on my way to work, went to the nearest AT&T store, returned it, and re-activated my old iPhone 4.

My first reaction to reanimating my 4 was relief that I could once again operate the phone properly. My second reaction was, “holy **** this screen is tiny!” I was sure my iPhone 4 had turned itself into a Nano out of spite while languishing unused.

After that, the Nokia Lumia 920 with Windows Phone 8 caught my eye. Great reviews (including this thoughtful and thorough review by a self-professed “iPhone-loving Apple fangirl” at Mashable who switched to the Lumia for two weeks), beautiful phone. And those “Live Tiles” on the home screen! No more old-fashioned grid-style icons. This, finally, was something new.

I wanted to love it. I tried to love it. I brought it home to meet my family. Some features are wonderful. The People Hub, in particular, combines Facebook, Twitter, and LinkedIn feeds in a nicely readable format. Nokia helped by developing a suite of apps for it and making great hardware. The phone is a nice size, has an excellent camera and a two-LED flash (which makes it the most versatile, if not the most powerful, $450 flashlight on the market). And while some reviews have complained about its heft, I appreciate a phone that can be used for self-defense.

But at the end of the day — and after the return period, natch — I just couldn’t handle being on the wrong side of the network effects.

Network Effects

Network effects come in two flavors: direct and indirect. With direct network effects, every user benefits as other users adopt the technology. Old-fashioned voice telephones are the classic example. If you own the only phone it is worthless because you can’t call anybody. But when the next person buys a phone you immediately benefit because now you can call him or her. (Unless you can’t stand that person, in which case his phone reduces the value of your phone to you, especially since with only two phones in the world it’s not like you can just change your number).
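The classic arithmetic behind this example: with n phones, each owner can reach the other n - 1, so the number of possible connections grows roughly as n(n - 1), and a lone phone supports exactly zero. A two-line sketch:

```python
# Stylized direct network effect: n phones support n*(n-1) directed calls.
# One phone alone supports zero, matching the "only phone is worthless" point.
def connections(n: int) -> int:
    return n * (n - 1)

for n in (1, 2, 10, 100):
    print(f"{n:3d} phones -> {connections(n):5d} possible calls")
```

The roughly quadratic growth in possible connections is why each marginal phone confers a benefit on everyone who already owns one.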

Direct network effects aren’t a big issue with smartphones for most people. You can call any number from any device. (Though for curmudgeons like me that’s increasingly a cost rather than a benefit. Why am I expected to drop everything I’m doing and answer the phone just because someone else decided it was time to chat?) Popular apps like Facebook and Twitter, whose value derives from the size of their networks, are platform-agnostic, at least with respect to hardware and operating systems, so each user gets the benefit of additional users regardless of the (hardware and OS) platform.[1]

But, like The Force, indirect network effects are all-powerful among mobile operating systems. To paraphrase Obi-Wan Kenobi, “…indirect network effects are what give a smartphone its power….They surround us and penetrate us; they bind the users and app developers together.”

In other words, because the vast majority of all potential customers are on iOS or Android devices, it makes sense for developers to build apps for those platforms. If apps are successful there, then maybe it’s worth building apps for a small platform like Windows Phone. Those general incentives are true whether you are the proverbial kid in the garage or Google.
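The developer’s calculus can be made concrete with a back-of-the-envelope expected-payoff comparison. Every number below is invented except Windows Phone’s roughly two percent share, which comes up later in this post; the point is only that the same fixed development cost looks very different against a large installed base than a small one.

```python
# Hypothetical developer payoff by platform: users * adoption * revenue - cost.
# All figures are invented for illustration; Windows Phone's ~2% share is the
# one number echoed from the text below.
def expected_profit(platform_users, adoption_rate, revenue_per_user, dev_cost):
    return platform_users * adoption_rate * revenue_per_user - dev_cost

addressable_market = 500e6  # hypothetical smartphone users worldwide
shares = {"iOS": 0.45, "Android": 0.50, "Windows Phone": 0.02}

for name, share in shares.items():
    profit = expected_profit(addressable_market * share, 0.01, 1.50, 250_000)
    print(f"{name:14s} expected profit: ${profit:>12,.0f}")
```

With these (made-up) parameters the identical app is profitable on iOS and Android but loses money on Windows Phone, which is the garage-to-Google incentive problem in one line of arithmetic.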

These incentives apparently even affect developers at Microsoft. While Microsoft seems to be putting significant resources into the Windows Phone operating system, it’s not clear that other Microsoft developers share the love. For example, although Microsoft owns Skype, the Windows Phone 8 Skype app was not available when the first phones went on sale. Skype is still only a “preview” app in the Windows Phone store.

As a result, Windows Phone users get the short end of the app stick.

To be sure, the Windows Phone store is far from empty, and some people will find everything they need. Certain apps I rely on, like Evernote and Expensify, are there and work well.

But, overall, the Windows Phone store feels like a dollar store in Chinatown. It has a lot of stuff–75,000 new apps added in 2012, according to Microsoft–but when you look closely you realize they’re selling Sherple pens rather than Sharpie pens. Sometimes the Sherple pen works fine. For example, Microsoft promised to deliver a Pandora app sometime in 2013, but in the meantime users can rely on the “MetroRadio” app, which somehow manages to play Pandora stations. God bless those third-party developers for stepping in and making popular services available to those of us who love them so much we’re willing to pay any price, as long as the price is zero. But third-party apps can stop working anytime the original source changes something, and it feels like being a second-class citizen in the app world.

Small platforms also have problems at the high and low ends of the app ecosystem. Windows Phone is missing certain hugely popular apps like Instagram. At the same time, because of the small customer base, the odds of this month’s hot new app being readily available on (much less originating on) Windows Phone are tiny.

Relying on a competitor can be OK if you have some power

Even worse, not only does Microsoft need to overcome its network effects disadvantage in order to succeed, it must also have good access to products developed by its arch-nemesis, Google.

Relying on a competitor isn’t inherently disastrous. Apple clearly benefits from the excellent products Google makes for iOS. Recent stories have even suggested that some of Google’s iOS products are better than its companion Android products. There is no love lost between Google and Apple, but Google apparently needs Apple’s huge customer base as much as Apple needs Google.

That’s not to say such cooperation is easy or without risk. Apple buys chips for its mobile devices from archrival Samsung, but has become wary of relying on a competitor for such a crucial part of its golden goose. Similarly, Netflix relies on Amazon’s AWS data facilities for its video streaming, even though they compete in the video delivery market. That relationship, too, makes some uneasy, for example, when Netflix service went down over Christmas and Amazon’s did not. Nevertheless, Amazon and Netflix apparently believe each has enough to gain by working with the other that the relationship continues despite such hiccups.

But with only about two percent of the market, Microsoft is but a fart in a mobile windstorm. Even if Windows Phone were not a potential competitor to Android, it’s hard to make a business case for Google to care one whit about Windows Phone today. That is, Google faces the same lack of incentive to develop apps for Windows Phone that all developers face. And, given that Windows Phone is trying to compete with Android, it’s hard to come up with a good reason why Google should invest in the Windows Phone platform. In other words, Microsoft needs Google but Google doesn’t need Microsoft.

And Google’s lack of need for Windows Phone shows. YouTube doesn’t work well on Windows Phone, Gmail on the Windows Phone web browser looks like it was designed for an old feature phone, and Google itself offers only one, lonely, app–a basic search app–in the Windows app store.[2]

This isn’t anti-competitive behavior by Google by a long shot. The small number of Windows Phone users means that Google is unlikely to earn much of a return on investments in Windows Phone. And given that those returns are likely to be even lower if the investments help the Windows platform succeed, it becomes difficult, indeed, to see a reason for Google to invest much. If Windows Phone acquires enough users to generate sufficient ad revenues, however, you can bet Google will develop apps for it.

A New Hope

A third mobile platform could still succeed, despite these obstacles. Overcoming them will require enormous resources, and Microsoft, with an estimated $66 billion in cash, clearly has them. Whether it will deploy those resources effectively remains to be seen. IMO, spending more on developing apps and less on embarrassingly bad ads might be an effective approach.

Like I said, I wanted to love my Lumia 920. And I want this new platform to succeed–more competition is good. I just don’t want to see it enough to suffer on the wrong side of the network effects in the meantime.

My iPhone 5 comes tomorrow. Don’t tell my wife I used her upgrade.


[1] There are exceptions, of course. For example, Apple’s FaceTime and Find My Friends apps work only on Apple devices, but–much to Apple’s dismay, I’m sure–these do not appear to have had much effect on aggregate sales, at least in part because of close cross-platform substitutes like Skype and Google Latitude.

[2] Again, some third-party developers come partly to the rescue. Gmaps Pro, for example, provides a wonderful Google Maps experience on Windows Phones.

Unintended—But Not Necessarily Bad—Consequences of the 700 MHz Open Access Provisions

Tuesday, November 6th, 2012

Wireless data pricing has been evolving almost as rapidly as new wireless devices are entering the marketplace. The FCC has mostly sat on the sidelines, watching developments but not intervening.


Last summer, the FCC decided that Verizon was violating the open access rules of the 700 MHz spectrum licenses it purchased in 2008 by charging customers an additional $20 per month to tether their smartphones to other devices. Verizon paid the fine and allowed tethering on all new data plans.[1]

Much digital ink has been spilled regarding how to choose a shared data plan best-tailored for families with a myriad of wireless devices and demand for data. Very little, however, appears to have been said about individual plans and, more specifically, about those targeted to light users.

One change that has gone largely unnoticed is that Verizon effectively abandoned the post-paid market for light users after the FCC decision.

Verizon no longer offers individual plans. Even consumers with only a single smartphone must purchase a shared data plan. That’s sensible from Verizon’s perspective since mandatory tethering means that Verizon effectively cannot enforce a single-user contract. The result is that Verizon no longer directly competes for light users.

The figure below shows the least amount of money a consumer can pay each month on a contract at the major wireless providers. As the table below the figure highlights, the figure does not present an apples-to-apples comparison, but that’s not the point—the point is to show the choices facing a user who wants voice and data, but the smallest possible amount of each.

Note: Assumes no data overages.

The figure shows that this thrifty consumer could spend $90/month at Verizon, $60/month at AT&T, $70/month at T-Mobile, and $65/month at Sprint if the consumer is willing to purchase voice/text and data plans separately. Even Verizon’s prepaid plan, at $80/month, costs more than the others’ cheapest postpaid plans.

Moreover, prior to the shift to “share everything” plans, this consumer could have purchased an individual plan from Verizon for $70/month—$20/month less than he could today. At AT&T the price was $55/month but increased by only $5/month. Again, the point is not to show that one plan is better than another. Verizon’s cheapest plan offers 2 GB of data, unlimited voice and texts, and tethering while AT&T’s cheapest plan offers 300 MB of data, 450 voice minutes, and no texts or tethering. Which plan is “better” depends on the consumer’s preferences. Instead, the point is to show the smallest amount of money a light user could spend on a postpaid plan at different carriers, and that comparison reveals that Verizon’s cheapest option is significantly more expensive than other post-paid options and, moreover, increased significantly with the introduction of the shared plan.

Is the FCC’s Verizon Tethering Decision Responsible for this Industry Price Structure?

There’s no way to know for sure. The rapidly increasing ubiquity of households with multiple wireless devices means that shared data plans were probably inevitable. And carriers compete on a range of criteria other than just price, including network size, network quality, and handset availability, to name a few.

Nevertheless, Verizon introduced its “share everything” plans about a month before the FCC’s decision. If we make the not-so-controversial assumption that Verizon knew it would be required to allow “free” tethering before the decision was made public and that individual plans would no longer be realistic for it, then the timing supports the assertion that “share everything” was, at least in part, a response to the rule.

How Many Customers Use These “Light” Plans?

Cisco estimated that in 2011 the average North American mobile connection “generated” 324 megabytes. The average for 2012 will almost surely be higher, and higher still among those with higher-end phones. Regardless, even average use close to 1 GB would imply a large number of consumers who could benefit from buying light-use plans, regardless of whether they do.

Did the FCC’s Tethering Decision Benefit or Harm Consumers?

It probably did both.

The consumer benefits: First, Verizon customers who want to tether their devices can do so without an extra charge. Second, AT&T and Sprint followed Verizon in offering shared data plans, with AT&T’s shared plans also including tethering. Craig Moffett of Alliance Bernstein noted recently that “Family Share plans are not, as has often been characterized, price increases. They are price cuts…”[2] because the plans allow consumers to allocate their data more efficiently. As a result, he notes, investors should worry that the plans will reduce revenues. In other words, the shared plans on balance probably represent a shift from producer to consumer surplus.

The consumer costs: Verizon is no longer priced competitively for light users.

The balance: Given that other carriers still offer postpaid plans to light users and that a plethora of prepaid and other non-contract options exist for light users, the harm to consumers from Verizon’s exit is probably small, while the benefits to consumers may be nontrivial. In other words, the net effect was most likely a net benefit to consumers.

What Does This Experience Tell Us?

The FCC’s decision and industry reaction should serve as a gentle reminder to those who tend to favor regulatory intervention: even the smallest interventions can have unintended ripple effects. Rare indeed is the rule that affects only the firm and activity targeted and nothing else. More specifically, rules that especially help the technorati—those at the high end of the digital food chain—may hurt those at the other end of the spectrum.

But those who tend to oppose regulatory intervention should also take note: not all unintended consequences are disastrous, and some might even be beneficial.

Is That a Unique Observation?

Not really.

Could I Have Done Something Better With My Time Instead of Reading This?

Maybe. Read this paper to find out.

[1] The FCC allowed Verizon to continue charging customers with grandfathered “unlimited” data plans an additional fee for tethering.

[2] Moffett, Craig. The Perfect Storm. Weekend Media Blast. AllianceBernstein, November 2, 2012.

Is a broadband tax a good idea?

Thursday, August 30th, 2012

The FCC recently asked for comments on a proposal to raise money for universal service obligations by taxing broadband connections. Let’s set aside, for the moment, the question of whether the universal service program has worked (it hasn’t), whether it is efficient (it isn’t), and whether the reforms will actually improve it (they won’t). Instead, let’s focus on the specific question of whether taxing broadband is the best way to raise money for any given program telecommunications policymakers want to fund.

The answer, in typical economist fashion, is that it depends.

A tax is generally evaluated on two criteria: efficiency and equity. The more “deadweight loss” the tax causes, the more inefficient it is. Deadweight loss results from people changing their behavior in response to the tax and, in principle, can be calculated as a welfare loss.

Closely related to efficiency is how the tax affects policy goals. This question is particularly relevant here because the service being taxed is precisely the service the tax is supposed to support, making it possible that the tax itself could undo any benefits of the spending it funds.

Equity—in general, how much people of different income levels pay—is simple in concept but difficult in practice since it is not possible to say what the “right” share of the tax any given group should pay.

Perhaps surprisingly to some, a broadband tax may actually be efficient relative to some other options, including income taxes (i.e., coming from general revenues). Historically, universal service funds were raised by taxes on long distance service, which is highly price sensitive, making the tax quite inefficient.

By contrast, for the typical household, fixed (and, increasingly, mobile) broadband has likely become quite inelastic. In 2010, one study estimated that the typical household was willing to pay about $80 per month for “fast” broadband service, while the median monthly price for that service was about $40. Since then, the number of applications and available online services has increased, meaning that consumer willingness to pay has presumably also increased, while according to the Bureau of Labor Statistics broadband prices have remained about the same.
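The intuition that a modest tax on an inelastically demanded service causes little inefficiency can be made concrete with a stylized calculation. The sketch below assumes a linear demand curve calibrated so the choke price matches the roughly $80/month willingness to pay and quantity is normalized to 1 at the roughly $40/month market price cited above; the $5 tax and all resulting figures are illustrative, not estimates from the studies discussed.

```python
# Stylized deadweight-loss calculation for a broadband connection tax.
# Assumes linear demand P = choke_price - slope * Q, calibrated so that
# Q = 1 at the observed market price. All numbers are illustrative.

def linear_demand_tax(choke_price, market_price, tax):
    slope = choke_price - market_price                  # since Q = 1 at market_price
    q0 = 1.0                                            # pre-tax quantity (normalized)
    q1 = (choke_price - (market_price + tax)) / slope   # post-tax quantity
    revenue = tax * q1                                  # tax revenue per subscriber
    dwl = 0.5 * tax * (q0 - q1)                         # Harberger triangle
    return q1, revenue, dwl

q1, revenue, dwl = linear_demand_tax(choke_price=80, market_price=40, tax=5)
print(f"quantity after $5 tax: {q1:.3f}")    # 0.875
print(f"tax revenue:           ${revenue:.3f}")
print(f"deadweight loss:       ${dwl:.4f}")
```

Under this calibration the deadweight loss is about $0.31 per subscriber against $4.38 of revenue, roughly 7 percent, which is consistent with the claim that a tax of plausible size would not be hugely inefficient overall.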

Consumer Price Index, Internet Services and Electronic Information Providers

Source: Bureau of Labor Statistics, Series ID CUUR0000SEEE03, adjusted so January 2007=100

While no recent study has specifically evaluated price elasticity, the large gap between prices and willingness to pay suggests that a tax of any size likely to be considered might not be hugely inefficient overall.

The problem is that even if the tax does not affect subscription decisions by most people, it can still affect precisely the population policymakers want to help. Even though only 10 percent of people who do not have broadband cite price as the barrier, there is some lower price at which people will subscribe. A tax effectively increases the price consumers pay, meaning that it puts people at that margin—people who may be on the verge of subscribing—that much further away from deciding broadband is worthwhile. Similarly, people on the other side of that margin—those who believe broadband is worthwhile, but just barely—will either cancel their subscriptions or subscribe to less robust offerings.

To be sure, people who would be eligible for low-income support would probably receive more in subsidies than they pay in taxes, but this is an absurdly inefficient way to connect more people. As one astute observer noted, it is not merely like trying to fill a leaky bucket, but perhaps more like trying to fill that bucket upside-down through the holes.[1]

Higher prices for everyone highlight the equity problem. A connection tax is the same for everyone regardless of income, making it regressive. The tax becomes even more regressive because much of the payments go to rural residents regardless of their income, meaning the tax includes a transfer from the urban poor to the rural rich.

Even without an income test, methods exist to mitigate the equity problem. Unfortunately, the methods the FCC proposes are likely to undermine other policy goals. In particular, the FCC asks about the effects of taxing by tier of service, presumably with higher tiers of service paying more (paragraph 249). The FCC does not specifically mention equity in its discussion, but if higher income people are more likely to have faster connections then it would help mitigate equity issues.

This tiered tax approach is commonly used for other services, including electricity and water, where a low tier of use is taxed at a low rate and higher usage tiers are taxed incrementally more. In the case of water, for example, a family that uses water only for cooking and cleaning will pay a lower tax rate than a family that also waters its lawn and fills a swimming pool. And while it is not a perfect measure of income, in general wealthier people are more likely to have big lawns and pools.

The problem with this approach in broadband is that while willingness to pay for “fast” broadband is relatively high, most people are not yet willing to pay much more at all for “very fast” broadband. Thus, taxing higher tiers of service at a higher price, while more equitable, may lead to other efficiency problems if it reduces demand for higher tiers of service.

So what’s the solution?

The FCC should decide which objectives are the most important: efficiency, equity, or other policy objectives such as inducing more people to subscribe or upgrade their speeds, and then design the tax system that best achieves that goal. Then it should compare this “best” tax against simply funding the program from general revenues and determine which would lead to a better outcome.

But no tax is worthwhile if the program it supports is itself inefficient and inequitable. The real solution is to dramatically reduce spending on ineffective universal service programs in order to minimize the amount of money needed to fund them. Unfortunately, the reforms appear to do just the opposite. In 2011, the high cost fund spent $4.03 billion and had been projected to decrease even further. The reforms, however, specified that spending should not fall below $4.5 billion (see paragraph 560 of the order), meaning that the first real effect of the reforms was to increase spending by a half billion dollars. And, as the GAO noted, the FCC “has not addressed its inability to determine the effect of the fund and lacks a specific data-analysis plan for carrier data it will collect” and “lacks a mechanism to link carrier rates and revenues with support payments.”

The right reforms include integrating true, third-party evaluation mechanisms into the program and, given the vast evidence of inefficiency and ineffectiveness, a future path of steady and significant budget cuts. Those changes, combined with an efficient tax-collection method, might yield a program that efficiently targets those truly in need.

[1] This excellent analogy comes from Greg Rosston via personal communications.

Does Paying Olympians for Winning Medals Make Them More Likely to Win?

Wednesday, August 15th, 2012

Missy Franklin, America’s newest star swimmer and most decorated Olympian this summer, took home $110,000 in addition to her four gold medals and one bronze. That’s because the United States Olympic Committee pays athletes $25,000 for each gold, $15,000 for each silver, and $10,000 for each bronze medal they win. The United States isn’t alone in rewarding winning athletes. At least 42 other countries also pay winners, with payouts for gold ranging from about $11,000 in Kenya to $1.2 million in Georgia.


Incentives might mean more than money. In 2008, medalists from Belarus received not just money, but also free meat for life.[1] At least one country includes a stick in addition to its incentive carrot. North Korea, which is apparently the basis for The Hunger Games, reportedly rewards its gold medal winners “with handsome prize money, an apartment, a car, and additional perks like refrigerators and television sets,” while “poor performances” could yield time in a labor camp.[2]

Why do countries pay winners? The most obvious answer is that they want to create an additional incentive to win. But does it work? We set out to answer this question.

We put together data on each country’s medal count and medal payout scheme for the 2012 Olympics in order to test for a correlation.[3] We also control for country size and income, since those are surely the biggest predictors of how many medals a country will win: more populous countries are more likely to have that rare human who is physically built and mentally able to become an Olympic athlete, while richer countries are more likely to be able to invest in training those people.

Thus, we estimate the following regression:

Medals_c = f(population_c, per capita income_c, medal payout_c), where c indexes the country. The table below shows the results. As expected, bigger and richer countries win more medals, especially gold medals. Curiously, however, medal payouts are negatively correlated with winning medals, and this correlation is statistically significant for bronze and silver medal payouts.
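The specification above can be sketched as an ordinary least squares regression. The data below are made-up placeholders purely to show the mechanics; the post’s actual country dataset is not reproduced here, and a real analysis would also report standard errors (e.g., via statsmodels).

```python
# Sketch of the medal-count regression described above, fit by OLS.
# The rows below are hypothetical countries, NOT the dataset from the post.
import numpy as np

# columns: log population, log per-capita income, gold payout ($ thousands)
X = np.array([
    [17.0, 10.5, 25.0],   # hypothetical country A
    [19.2,  9.1, 11.0],   # hypothetical country B
    [16.1, 10.8,  0.0],   # hypothetical country C (no payout)
    [18.4,  8.7, 50.0],   # hypothetical country D
    [15.9, 10.2,  5.0],   # hypothetical country E
])
medals = np.array([46, 10, 8, 4, 7])

# add an intercept column and solve the least-squares problem
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, medals, rcond=None)
print("intercept and coefficients:", beta.round(3))
```

The sign and significance of the payout coefficient is what the tables below summarize for the real data.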

Table 1: Relationship Between the Number of Gold, Silver, and Bronze Medals and Country Characteristics

(including only countries for which we could find information)

What could explain this counterintuitive result?

One possibility is that it is simply wrong. Data on medal payouts do not exist in one place, so we had to search for information on each country. As a result, just because we could not find data on a country’s payouts does not mean it doesn’t have any. In the analysis above we excluded countries for which we had no information on payouts. Most of the countries omitted probably did not offer rewards, but we cannot say this with certainty.

We can partially address this question by assuming—and I emphasize that this may be an inappropriate assumption—that countries for which we found no data have no payouts and re-estimating the regression. The table below shows these results. Under this assumption, the results change slightly. Gold and silver payouts become positively correlated with number of medals won, but are far from statistically significant. In other words, this analysis simply suggests that there is little, if any, correlation between awards and medals. Payouts for bronze medals, however, become positive and statistically significant.

Table 2: Relationship Between the Number of Gold, Silver, and Bronze Medals and Country Characteristics

(including all countries and assuming no information on rewards means country does not offer rewards)

Another complication is that this analysis does not take into account non-monetary payouts, such as meat-for-life, or punishments like banishment to labor camps, but we do not have a way at the moment to incorporate that information empirically.

Finding no correlation between monetary payments and medals is not surprising in some countries. In the United States, for example, $25,000—while perhaps today a sizable per-medal sum for high school student Missy Franklin—is dwarfed by the millions of dollars she can earn in endorsements now that she has won her medals.

Such valuable endorsement opportunities, however, are probably less common in other countries, while the relative value of the rewards might be much higher. So, for example, $11,000 in Kenya may provide a much stronger incentive than $25,000 in the United States. We test this hypothesis by running the original regression and excluding richer countries. I exclude the tables here, but the results do not change: even including only poorer countries, where the rewards are more likely to make a material difference in someone’s life, rewards are not significantly correlated with winning more medals, even controlling for population and income.

Another explanation for these results is that elite athletes likely compete for a host of reasons unrelated to money: because they love their sport, because they want to be the best, or for other personal reasons that give them satisfaction.

But why would payouts be negatively correlated with winning medals? My guess is that in reality they aren’t. Instead, the payouts proxy for something else. Perhaps countries that perform worse than their size and wealth would predict offer larger monetary payouts in the hope of spurring better performance. In that case, payouts would be negatively correlated with a country’s performance, all else equal, not because they fail as incentives, but because they tend to exist precisely where countries are underperforming.

Still, overall the evidence suggests that these payments don’t increase the medal count. And because they benefit such a small number of athletes it’s hard to argue that they are important for making it possible to make a living as an Olympic-level athlete.

Overall, if the desire is to create additional incentives to win, countries might want to re-think their reward system.

Note: I thank Corwin Rhyan and Anjney Midha for their excellent research assistance and for indulging me by hunting laboriously for data on the Olympics even though we work at a think tank devoted to technology issues.



[3] We tried to find information for 2008, but could confirm information for only 10 countries.

Hey, FCC: Stop Counting!

Friday, June 1st, 2012

By June 2011, nearly one-third of American households relied solely on wireless voice service, with lower income households more likely to be wireless-only. This information doesn’t come from the FCC, as you might expect. Instead, it comes from the twice-yearly National Health Interview Survey, conducted by the U.S. Census Bureau for the Centers for Disease Control and Prevention (CDC).[1] The example highlights three points policymakers should take to heart for data collection relevant to telecommunications:

  • FCC does not always produce the most relevant telecommunications data.
  • Careful, representative surveys—not population counts, which the FCC uses for measuring voice and broadband markets—are usually the most effective and efficient way to gather data.
  • Policymaking agencies like the FCC can obtain relevant data from other agencies like the U.S. Census that specialize in data collection but have no vested interest in any particular policy outcome.

Counting telephones began at the turn of the 20th Century

The U.S. Census began to collect data on telephones as they became an increasingly important part of American life. In 1922 the Bureau noted, “The census of telephones has been taken quinquennially since 1902, and statistics of telephones were compiled and published in the decennial censuses of 1880 and 1890.”[2]

The FCC has largely continued this tradition, attempting to count each line or connection for communications technologies. (Some—not me, of course—might say delays in producing some reports indicate a desire to revert to the quinquennial release schedule).

Maintaining a consistent approach to data-gathering has certain advantages, such as facilitating comparisons over time. However, that advantage diminishes as it becomes less clear what, exactly, we should measure and as market changes make any particular count less relevant.

Counting is inefficient and misses the most important data

Most economic and social policy is based on surveys conducted by agencies such as the Census Bureau and the Bureau of Labor Statistics. We rely on surveys because gathering information on an entire population is typically not feasible. For the constitutionally-mandated decennial census, for example, the U.S. Census spends about $11 billion and hires about one million temporary workers.[3] By contrast, in a non-census year, the Census Bureau spends about $1 billion on all its data collection efforts.[4] Additionally, surveys make it possible to gather data about particular groups and estimate the likelihood that different measures truly reflect the actual population.
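The last point, that a survey quantifies how well its estimates reflect the population, is worth making concrete. The sketch below computes a standard 95 percent margin of error for an estimated proportion; the sample size is hypothetical, and the one-third figure merely echoes the CDC wireless-only statistic cited above.

```python
# Margin of error for a survey-based proportion, illustrating why a
# well-designed sample can substitute for a full population count.
# The sample size here is hypothetical.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat, n = 0.33, 15_000        # e.g., one-third wireless-only, n households
moe = margin_of_error(p_hat, n)
print(f"estimate: {p_hat:.0%} ± {moe:.2%}")
```

Even a sample of 15,000 households pins the estimate down to within about three-quarters of a percentage point, at a tiny fraction of the cost of counting every line.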

The FCC attempts to count all lines, connections, and other factors related to telecommunications by requiring companies to provide certain data. Large firms spend significant resources providing these data. Small firms often do not have the resources to provide this information, and the FCC’s skilled data staff then must spend enormous time and effort trying to gather this information from firms who either will not or cannot respond.

The result is that the FCC has the least reliable count data in precisely the topical and geographic areas that it needs data for sound decisionmaking. For example, counts of broadband connections provide some measure of the intersection of supply (availability) and demand, but not good information on either separately. The counts provide no information on how those connections are used nor on how they break down demographically.[5]

This telecommunications counting fetish has spread to other parts of the government, as well. The National Broadband Map is based on the same flawed premise: the belief that the best dataset comes from observing every detail of every broadband connection. The effort cost about $350 million and still apparently yields inaccurate results in rural areas where policymakers want to direct resources.[6]

The FCC Should Stop Counting and Start Contracting the Census Bureau to do Surveys

Nearly all other areas of economic policy are informed by surveys, many of which are conducted monthly to provide real-time information to markets and policymakers. Nothing in particular about telecommunications requires a total population count rather than survey data.

Additionally, there is no reason why the FCC itself should be responsible for data collection. The U.S. Census Bureau is much better equipped to design and implement surveys. It is not uncommon for Census to do survey work for (and funded by) other agencies. In addition to the CDC survey mentioned above, Census also does surveys for the Department of Justice,[7] the National Center for Education Statistics,[8] and State Library Agencies[9] to name a few.

Embracing surveys conducted by other agencies would have several advantages:

  • Surveys are almost certain to be cheaper than counts both to the government and to the private sector.
  • Surveys of users, rather than counts submitted by providers, are more likely to yield data not influenced by providers’ incentives to game the data collection process to their own benefit.
  • Data collection by outside agencies would reduce any inherent conflict of interest the FCC might face when gathering data related to its agenda.

Surveys by other agencies, of course, are not a silver bullet for obtaining better and more timely data. They can be done poorly. And the FCC should remain involved. As the expert agency it should largely determine the questions it needs answered and the type of information necessary for policymaking and provide the resources necessary to do it. Additionally, the FCC needs the ability to compel data from regulated companies for specific decisions when necessary.

Today, unfortunately, surveys are being subjected to attacks by Congressional Republicans, who want to reduce the ability of the U.S. Census Bureau to collect data.[10] These attacks have been roundly and correctly criticized by conservative and liberal commentators alike, who note that these data are crucial to good policymaking.[11]

Despite the Congressional statistical ignorance du jour, surveys by agencies expert in data collection will yield far better data at lower cost than today’s methods. Hopefully the FCC will take note and begin to move our ability to study telecommunications out of the 19th Century.





[5] It is possible to merge geographic counts with demographic data from the Census, but even this approach would be more effective if done in a way that explicitly incorporates connections to the Current Population Survey.






[11] See, for example, Matthew Yglesias’s discussion:

Where do vendors to cable think the industry is heading? Evidence from trade show data, 2012

Tuesday, May 29th, 2012

For the past three years I have been collecting data about exhibitors at the NCTA trade show from the show’s website. With this year’s show having just ended, it’s time to take a look.

Exhibitor attendance

This year, the website listed 259 exhibitors, down from 271 in 2011 and 345 in 2010. These statistics somewhat exaggerate the number of companies that participate since each company at each booth counts as a separate exhibitor. Zodiac Interactive, for example, was represented both at booth ES-59 and at CableNET’s booth, so it is counted twice.

Hot or Not?

The show’s website lists the categories of products, services, or technologies each exhibitor selects to describe itself. An exhibitor can select several categories. To evaluate the prevalence of each category, I total the number of times each category is selected and then divide by the number of exhibitors to make the measure comparable across years.
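The prevalence measure just described can be sketched in a few lines: count how often each self-reported category appears, then normalize by the number of exhibitors so that years with different show sizes are comparable. The exhibitor list below is a made-up sample, not the actual show data.

```python
# Sketch of the category-prevalence measure described above.
# Each exhibitor self-reports one or more categories (hypothetical sample).
from collections import Counter

exhibitors = [
    ["cable programming", "video on demand"],
    ["IPTV", "cloud services", "video on demand"],
    ["cable programming"],
]

# total mentions of each category, normalized by exhibitor count
counts = Counter(cat for cats in exhibitors for cat in cats)
prevalence = {cat: n / len(exhibitors) for cat, n in counts.items()}

for cat, share in sorted(prevalence.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: {share:.2f}")
```

Because exhibitors can select several categories, the shares do not sum to one; they answer “what fraction of exhibitors claim this category,” which is the quantity compared across 2010-2012 below.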

The figure below shows the top 20 categories for 2010, 2011, and 2012. The top three categories remain constant and are not surprising, given that this is the cable show: cable programming, video on demand, and IPTV.

Top 20 Product Categories, 2010-2012


The biggest winners were cloud services, mobile apps, and “multiscreen content” (although it is possible this last category was called something else in past years), which were not (officially) represented at all in previous years but now rank in the top 20. Other new categories this year included social TV, broadband services, home networking, and content navigation.

Major New Product Categories 2012

“Broadband services” is rather vague and probably does not indicate any particular new product or service. The others, however, appear to represent new developments in cable. “Home networking” is related to cable companies’ interest in home monitoring services, and “content navigation” indicates interest in user interfaces that do more than change channels.

The following table shows the biggest gainers, in percentage points, for products and services that had also been exhibited in 2010 or 2011. WiFi products and services saw the biggest increase, followed by data analysis, home information services, and business services.

Biggest Gainers 2011-2012


The following table shows the category losers. The biggest losers appear to be categories most associated with traditional cable television: pay programming, pay-per-view, program guides, and video on demand. Personal video recorders showed a sharp dropoff, perhaps corresponding to the increase in cloud services: the industry may see consumers as less likely to record content at home than to download or stream it from the cloud. Educational programming decreased significantly, although “children’s programming” increased a bit (see above).

Biggest Losers 2011-2012

What does this mean?

The data themselves have certain problems that make drawing strong conclusions difficult. For example, they don’t control for the size of the exhibitors’ booths. NBCUniversal’s exhibit space (449), for example, is hardly comparable to Cycle30’s booth (2242). This problem is partly mitigated by larger booths holding multiple exhibitors and more categories. Additionally, the categories are self-reported by the exhibitors and do not appear to have strict definitions. Exhibitors have no incentive to select grossly inaccurate categories, since that would attract people unlikely to purchase their products, but exhibitors probably tend towards being overly-inclusive so as not to miss potential clients. This tendency might bias towards especially popular technologies. For example, perhaps exhibitors take liberties in claiming they offer “social TV” or “cloud services” because those contain popular buzzwords rather than because their products truly offer much in the way of those services.

2012 Cable Show Floor Plan

Despite these shortcomings in the data, they provide one source of information on where economic actors with money at stake think the industry is headed over the next year. And, according to them, the industry is moving away from its traditional role as linear video distributor to storing content in the cloud, trying to capitalize on trends in social everything, and providing other services like home monitoring.