Author Archive

Takeaways from the White House Big Data Reports

Monday, May 5th, 2014

On May 1, the White House released its two eagerly awaited reports on “big data” resulting from the 90-day study President Obama announced on January 17—one by a team led by Presidential Counselor John Podesta, and a complementary study by the President’s Council of Advisors on Science and Technology (PCAST).  The reports contain valuable detail about the uses of big data in both the public and private sectors.  At the risk of oversimplifying, I see three major takeaways from the reports.

First, the reports recognize big data’s enormous benefits and potential.  Indeed, the Podesta report starts out by observing that “properly implemented, big data will become an historic driver of progress.”  It adds, “Unprecedented computational power and sophistication make possible unexpected discoveries, innovations, and advancements in our quality of life.”  The report is filled with examples of the value of big data in medical research and health care delivery, education, homeland security, fraud detection, improving efficiency and reducing costs across the economy, as well as in providing targeted information to consumers and the raw material for the advertising-supported internet ecosystem.  The report states that the “Administration remains committed to supporting the digital economy and the free flow of data that drives its innovation.”

Second, neither report provides any actual evidence of harms from big data.  While the reports provide concrete examples of beneficial uses of big data, the harmful uses are hypothetical.  Perhaps the most publicized conclusion of the Podesta report concerns the possibility of discrimination—that “big data analytics have the potential to [italics added] eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace.”  However, the two examples of discrimination cited turn out to be almost non-examples.

The first example involves StreetBump, a mobile application developed to collect information about potholes and other road conditions in Boston.  Even before its launch, the city recognized that this app, by itself, would be biased toward identifying problems in wealthier neighborhoods, because wealthier individuals would be more likely to own smartphones and make use of the app.  As a result, the city adjusted its approach to ensure that reporting of road conditions was accurate and consistent throughout the city.

The second example involves the E-Verify program used by employers to check the eligibility of employees to work legally in the United States.  The report cites a study that “found the rate at which U.S. citizens have their authorization to work be initially erroneously unconfirmed by the system was 0.3 percent, compared to 2.1 percent for non-citizens.  However, after a few days many of these workers’ status was confirmed.”  It seems almost inevitable that the error rate for citizens would be lower, since citizens automatically are eligible to work, whereas additional information is needed to confirm eligibility for non-citizens (i.e., evidence of some sort of work permit).  Hence, it is not clear this is an example of discrimination.

It is notable that both these examples are of government activities.  The reports do not present examples of commercial uses of big data that discriminate against particular groups.  To the contrary, the PCAST report notes the private-sector use of big data to help underserved individuals with loan and credit-building alternatives.

Finally, and perhaps most importantly, both reports indicate that the Fair Information Practice Principles (FIPPs) that focus on limiting data collection are increasingly irrelevant and, indeed, harmful in a big data world.  The Podesta report observes that “these trends may require us to look closely at the notice and consent framework that has been a central pillar of how privacy practices have been organized for more than four decades.”  The PCAST report notes, “The beneficial uses of near-ubiquitous data collection are large, and they fuel an increasingly important set of economic activities.  Taken together, these considerations suggest that a policy focus on limiting data collection will not be a broadly applicable or scalable strategy—nor one likely to achieve the right balance between beneficial results and unintended negative consequences (such as inhibiting economic growth).”  The Podesta report suggests examining “whether a greater focus on how data is used and reused would be a more productive basis for managing privacy rights in a big data environment.”  The PCAST report is even clearer:

Policy attention should focus more on the actual uses of big data and less on its collection and analysis.  By actual uses, we mean the specific events where something happens that can cause an adverse consequence or harm to an individual or class of individuals…. By contrast, PCAST judges that policies focused on the regulation of data collection, storage, retention, a priori limitations on applications, and analysis… are unlikely to yield effective strategies for improving privacy.  Such policies would be unlikely to be scalable over time, or to be enforceable by other than severe and economically damaging measures.

In sum, the two reports have much to like:  their acknowledgement of the importance and widespread use of big data and their attempt, particularly in the PCAST report, to refocus the policy discussion in a more productive direction.  The reports also, however, suffer from a lack of evidence to substantiate their claim of harms.

Comcast and Netflix—What’s the Big Deal?

Wednesday, February 26th, 2014

Netflix and Comcast recently announced an agreement whereby Netflix will pay Comcast for direct access to its network.  This agreement addresses congestion that is slowing delivery of Netflix videos to Comcast’s broadband subscribers and resolves a dispute between the two companies concerning how to pay for the needed network upgrades.  Netflix and Verizon are currently working through a similar dispute.  While some commentators think deals such as the one between Netflix and Comcast are problematic, the reality is that the agreement reflects a common market transaction that yields a more efficient outcome, more quickly, than any regulatory intervention could have achieved.

The following series of stylized figures illustrates how the growth of Netflix and other streaming video services has affected the volume and flow of internet traffic and corresponding payments in recent years.  Traditionally (Figure 1), Internet backbone providers and ISPs entered into “peering” agreements, which did not call for payments on either side, reflecting a relatively balanced flow of traffic.  Content distributors paid backbone providers for “transit,” reflecting the unbalanced flow of traffic along that route.

[Figure 1]

With the growth of online video and with Netflix accounting for 30 percent of traffic at some times of the day, this system was bound to become strained, as we are now seeing and as shown in Figure 2.  The flow of traffic between the backbone provider and the ISP is unbalanced and has grown enormously, requiring investments in additional capacity.

[Figure 2]

One way to address this problem is for the backbone provider to pay the ISP, reflecting the greater amount of traffic (and greater capacity needed) going in that direction (see Figure 3).  In fact, that is what happened following a dispute between Level 3 and Comcast in late 2010.

[Figure 3]

Another solution is the just-announced Comcast-Netflix deal, reflected in Figure 4.  In this case, Netflix and Comcast are bypassing the intermediate backbone provider (either partially or completely), presumably because it is more efficient to do so.  One or both of them is investing in the needed capacity.  Regulatory interference with such a deal runs the risk of blocking an advance that would lower costs and/or raise quality for consumers.

[Figure 4]

The Wall Street Journal has described the debate as being “over who should bear the cost of upgrading the Internet’s pipes to carry the nation’s growing volume of online video:  broadband providers like cable and phone companies, or content companies like Netflix, which make money by sending news or entertainment through those pipes.”  Ultimately, of course, consumers pay one way or the other.  When Netflix pays Comcast, the cost is passed through to Netflix subscribers.  This is both efficient and fair, because the consumer of Netflix services is paying for the cost of that service.

In the absence of such an agreement, quality would suffer or the ISP would bear the cost.  The ISP might recover these costs by increasing prices to subscribers generally.  This would involve a cross-subsidy of Netflix subscribers by non-subscribers, which would be neither efficient nor fair.  Alternatively, Comcast could increase prices for those subscribers who consume a lot of bandwidth, which might have similar effects to the just-announced deal, but would probably lose some efficiencies.  In any event, it is difficult to see how such an arrangement would be better for consumers than the announced agreement.

The FCC Tries Yet Again

Wednesday, February 19th, 2014

FCC Chairman Tom Wheeler’s official response to the DC Appeals Court decision on the Commission’s “net neutrality” rules promises to keep the issue on the table for the foreseeable future.  That is unfortunate, because there are better ways for the Commission and its staff to spend their time.

The Appeals Court took away from the Commission with one hand, while giving back with the other:  It struck down the more onerous provisions of the net neutrality rules—the “anti-discrimination” and “anti-blocking” provisions—because they imposed common carrier obligations and broadband is not classified as a Title II common carrier service.  However, the Court affirmed the Commission’s argument that it has general authority (under section 706 of the Telecommunications Act of 1996) to regulate in order to encourage broadband deployment.

Since the Appeals Court decision came down, the FCC has been under considerable pressure from net neutrality proponents to reclassify broadband as a Title II common carrier.  In today’s announcement, the Commission declined to do that.  However, the Commission also declined to close the Title II docket, keeping alive the threat of reclassification and the regulatory burdens and oversight that go with it.

In addition, the Commission announced its intention to start yet another net neutrality rulemaking, under its section 706 authority, in order to fulfill its no-blocking and non-discrimination goals as well as to enhance the transparency rule (the one major provision that the court upheld).

With all the activity aimed at asserting legal justification for its net neutrality rules, it is easy to lose sight of the fact that the FCC had no convincing economic or consumer-welfare justification for the rules in the first place.

While there is widespread agreement that the Internet should be open and provide consumers with access to content, applications and services of their choice, the rules were always a solution in search of a problem, a sentiment echoed today by FCC Commissioner Pai.  The Commission never provided the necessary data and analysis to show that the rules would address a significant market failure, did not identify harms to users that the rules would remedy, and did not demonstrate that the benefits of the rules would exceed their costs.  In other words, the Commission neglected to explain why the broadband market, which has generally thrived under minimal regulation, should now be subject to an enhanced regulatory regime.  Indeed, a good argument can be made that, by making the adoption of innovative business models more difficult, the rules would have hindered rather than encouraged the deployment of broadband infrastructure, notwithstanding the Commission’s assertions to the contrary.

There is now substantial concern that the Appeals Court has expanded the Commission’s authority to include the entire Internet ecosystem—including potentially content, applications, and service providers—as long as it can make some plausible argument that its actions encourage broadband deployment.  Expanding the Commission’s domain in this way would be a serious mistake and would compound the harm.

A major goal of the Commission in promulgating its net neutrality rules initially was to “provide greater predictability.”  It clearly has not achieved that goal.  Starting yet another proceeding, and keeping the Title II docket open, will create even more uncertainty for the entire Internet ecosystem.

Unleashing the Potential of Mobile Broadband: What Julius Missed

Thursday, March 7th, 2013

In yesterday’s Wall Street Journal op-ed, FCC Chairman Genachowski correctly focuses on the innovation potential of mobile broadband.  For that potential to be realized, he points out, the U.S. needs to make more spectrum available.  A spectrum price index developed by my colleague, Scott Wallsten, demonstrates what most observers believe – that spectrum has become increasingly scarce over the last few years.

The Chairman’s op-ed highlights three new policy initiatives the FCC and the Obama Administration are taking in an attempt to address the spectrum scarcity:  (1) the incentive auctions designed to reclaim as much as 120 MHz of high-quality broadcast spectrum for flexibly licensed – presumably, mobile broadband – uses;   (2) freeing up the TV white spaces for unlicensed uses; and (3) facilitating sharing of government spectrum by private users.

There are two notable omissions from the Chairman’s list.  First, he does not mention the 150 MHz of mobile satellite service (MSS) spectrum, which has been virtually unused for over twenty years due to gross government mismanagement.  A major portion of this spectrum, now licensed to three firms – LightSquared, Globalstar, and Dish – could quickly be made available for mobile broadband uses.  The FCC is now considering a proposal from LightSquared that would enable at least some of its spectrum to be productively used.  That proposal should be approved ASAP.  The MSS spectrum truly represents the low-hanging fruit, and making it available should be given the same priority as the other items on the Chairman’s list.

Second, if the FCC and NTIA truly want to be innovative with respect to government spectrum, they should focus on the elusive task of developing a system that requires government users to face the opportunity cost of the spectrum they use.  This is currently not the case, which is a major reason why it is so difficult to get government users to relinquish virtually any of the spectrum they control.  To introduce opportunity cost into government decision making, Larry White and I have proposed the establishment of a Government Spectrum Ownership Corporation (GSOC). A GSOC would operate similarly to the General Services Administration (GSA).  Government agencies would pay a market-based “rent” for spectrum to the GSOC, just as they do now to the GSA for the office space and other real estate they use.  Importantly, the GSOC could then sell surplus spectrum to the private sector (as the GSA does with real estate). The GSOC would hopefully give government agencies appropriate incentives to use spectrum efficiently, just as they now have that incentive with real estate.  This would be a true innovation.

In the short run, administrative mechanisms are probably a more feasible way to make more government spectrum available.  For example, White and I also proposed cash prizes for government employees who devise ways their agency can economize on its use of spectrum.  This would be consistent with other government bonuses that reward outstanding performance.

Sharing of government spectrum is a second-best solution.  It would be far better if government used its spectrum more efficiently and more of it was then made exclusively available to private sector users.  This is, admittedly, a difficult task, but worth the Administration’s efforts.

Lessons from the Federal Trade Commission’s $22.5 million Google fine

Wednesday, August 15th, 2012

Those who favor expanding the FTC’s role with respect to privacy should take a close look at what the agency does with the authority it already has. The most recent exhibit is the FTC’s imposition of a $22.5 million penalty on Google for bypassing the privacy settings on Apple’s Safari browser and thereby violating the terms of Google’s 2011 consent decree with the FTC. Since this is the largest fine the FTC has ever imposed, one would think Google must have committed a pretty serious violation that resulted in substantial harm to consumers. But there is no evidence that consumers have been harmed at all. (Dan Castro has written a nice blog post on this). Instead, the FTC has uncovered just enough of a technical violation to be able to say to Google “gotcha again.”

The issue is difficult to explain briefly, but essentially what happened is this: Google’s social network, Google+, has a “+1” button that, like Facebook’s “Like” button, gives users a way to indicate content they like. This feature doesn’t work with Apple’s Safari browser, which blocks third-party cookies by default, so Google developed a workaround that made the Safari browser work like other browsers.

Following research by a Stanford graduate student, which was reported in the Wall Street Journal, the FTC began investigating and discovered two sentences in a 2009 Google help center page that the FTC claims misrepresent what Google is doing. That language dated from a year before Apple adopted its current cookie policy, two years before the 2011 consent decree, and two years before the +1 button was introduced.

Whether or not Google technically violated its consent decree, it is difficult to see how the Commission’s action will benefit consumers. Paradoxically, the action is likely to undermine one of the Commission’s principal recommendations: “greater transparency” concerning information collection and use practices. The $22.5 million fine sends exactly the opposite message to Google as well as to other firms subject to FTC jurisdiction. The more transparent a company is about how it collects and uses data, the greater the risk of making a mistake and getting in trouble with the FTC. So, companies will find it in their interest to give users less information about website privacy practices.

In addition, there is a cost to the +1 users the FTC is supposedly protecting. Now that Google has “corrected” the problem, Safari users who want to use +1 need to manually log in to their Google account, which equates to submitting a form, which then allows additional Google cookies to be installed anyway. This is quite a cumbersome process. Moreover, the pre-correction Google workaround meant that only additional cookies from Google’s DoubleClick network could be installed, while blocking cookies from any other third party. The current fix forces users who want to use the +1 function to change the cookie settings for the entire browser, opening their phones to cookies from any website, unless they take the trouble to switch settings back to “never accept” cookies after they have successfully “+1’ed” the content they set out to share.

That FTC privacy-related enforcement is not based on demonstrable consumer benefits should not come as a surprise to those who have been following the agency’s work in this area. In the past two years, the Commission has released two privacy reports (here and here) that contain no evidence of consumer harm from current privacy practices. In fact, the Commission explicitly rejects the harm-based approach to privacy. This, of course, makes analysis of the benefits of proposed measures difficult, since if there are benefits they will consist of reduced harms.

So, what are the broader lessons from this episode? First, we should be wary of privacy legislation that gives the FTC additional authority to write new rules and enforce them (which virtually all privacy legislative proposals would do). If new legislation is enacted, it should only be with a strict mandate that any new regulations address significant harms and pass a cost-benefit test.

Another lesson may be for companies like Google, who understandably are anxious to avoid protracted litigation and get on with their businesses. These companies probably need to reassess the cost-benefit calculation that induced them to settle in the first place.

Hope the FTC reads the Wall Street Journal

Monday, July 9th, 2012

This morning’s Wall Street Journal reported on the pending IPO of the travel website Kayak, which has been doing quite well.  Kayak’s revenue increased 39%, to $73 million, during the first quarter of 2012 compared to the same period a year earlier.  Over that period net income increased to $4 million from a loss of $7 million.

The Kayak IPO follows on the heels of a successful Yelp IPO in March.  The review site also has been prospering.  Just last week, the Journal reported that Apple was planning to incorporate Yelp into its new mapping application.

The Yelp IPO was preceded by the spinoff of travel site TripAdvisor from Expedia last December.  Since that time, TripAdvisor’s share price has increased by two-thirds.

What do all these companies have in common, in addition to the fact that they’ve been succeeding?  They have all been complaining to the Congress and the FTC about Google’s supposedly anticompetitive practices.

Now it’s possible that they could be doing well in spite of anticompetitive behavior on the part of Google.  It’s also possible that they could have done poorly despite lack of anticompetitive behavior.  But the fact is, the companies have been succeeding, even in these tough economic times.  Perhaps they’re the ones who, by lobbying the government, are competing unfairly.

Should Google Be a Public Utility?

Friday, June 8th, 2012

Jeffrey Katz, the CEO of price-comparison site Nextag, is an exception to the virtually unanimous view that the Internet should remain unregulated.  In an op-ed in today’s Wall Street Journal, Mr. Katz takes the position that Google should be turned into a public utility, although he doesn’t use that terminology.

The op-ed is aimed at European Competition Commissioner Joaquin Almunia, who has set a July 2 deadline for Google to respond to the EU’s antitrust concerns. Commissioner Almunia will make a big mistake and risk serious damage to the Internet if he follows any part of Mr. Katz’s advice.

Mr. Katz is nostalgic for the old days.  Maybe he should get into a different, slower-moving industry.  He laments the fact that Google doesn’t work the way it “used to work.”  It now promotes its own (according to Mr. Katz) “less relevant and inferior” products.  Google used to highlight Nextag’s services, because they “were better – and they still are.  But Google’s latest changes are clearly no longer about helping users.”

In the U.S., antitrust authorities are skeptical about complaints from competitors and, hopefully, Mr. Almunia will be as well.  Indeed, there is no evidence that Google has engaged in the type of exclusionary practices that were the focus of the Microsoft case, for example.  It is true that both Google and Bing sometimes favor their own specialized search results.  Understandably, Mr. Katz doesn’t like this.  But both search engines have discovered this is a service their users do like.

The scope of Mr. Katz’s proposed remedy is astounding:

  • “Google needs to be transparent about how its search engine operates.”  Presumably that means making Google’s algorithm, and the changes that occur continually, public.  Perhaps Mr. Katz would like a forum where Nextag could express its views on Google’s algorithm changes before they are implemented.  That would certainly speed innovation along.
  • “When a competitor’s service is the best response for the user, Google should highlight it instead of its own service.”  Who determines the “best response”?  Does Mr. Katz want a say?
  • “Google should provide consumers with access to unbiased search results.”  Who determines what is “unbiased” and how is it even defined?
  • “Google should grant all companies equal access to advertising opportunities regardless of whether they are considered a competitor.”  “Equal access” is a defining feature of public utility regulation.  It has no meaning in the absence of price regulation.  Is Mr. Katz suggesting price regulation for advertising on Google?

There is a large literature on public utility regulation that people tend to forget.  Suffice it to say, the experience overall was not beneficial for consumers.  That is why there has been a worldwide movement toward regulatory liberalization over the last few decades.  If regulating traditional industries was difficult, regulating an Internet company like Google, and a product like a search engine, in a pro-efficiency, pro-consumer manner would be far more complex – basically, impossible.

In the U.S., public officials and various other stakeholders are in the process of preparing for the international telecommunications negotiations at the December ITU meeting in Dubai, with the goal of keeping the Internet unregulated.  That case becomes more difficult to make if we are in the process of doing the opposite at home.

Fundamentally, Mr. Katz wants Google to work “the way it used to work.”  That is not a recipe for innovation.  Hopefully, the authorities will see his recommendations for what they are – the self-interested proposals of a competitor – and discount them accordingly.

Observations on Senate Privacy Hearing

Thursday, May 10th, 2012

The Senate Commerce Committee held a privacy hearing yesterday with three government witnesses from the agencies responsible for this issue:  Federal Trade Commission Chairman Jon Leibowitz and Commissioner Maureen Ohlhausen, and Commerce Department General Counsel Cameron Kerry.  The Senators and witnesses went over a lot of familiar ground.  A few takeaways from the hearing:

- Perhaps because of sparse attendance on the part of Committee members, the privacy issue appeared to be more partisan than it used to be.  The two skeptics about the need for legislation were Senator Pat Toomey (the only Republican to show up) and newly confirmed Commissioner Ohlhausen.  Senator Toomey stressed the need for evidence of market failure, harms to consumers, and cost-benefit analysis (a position I agree with and have argued before this committee).  Senator Kerry, on the other hand, stated that the record is clear on the need for a privacy law, even suggesting that Senator Toomey’s concerns have been addressed at previous hearings (they have not).  Commissioner Ohlhausen expressed “concerns about the ability of legislative or regulatory efforts to keep up with the innovations and advances of the Internet without imposing unintended chilling effects on many of the enormous benefits consumers have gained from these advances.”  Senator Rockefeller acknowledged that a consensus doesn’t yet exist on legislation, but indicated after the hearing, “I really don’t see it as that complicated a subject.”  In fact, it is a complicated subject.

- The issue is viewed as a consumer protection issue (which it is), but it is perhaps more importantly an innovation issue, as suggested by Commissioner Ohlhausen.  This is because virtually all innovation on the Internet depends in one way or another on the use of information – to develop the product itself and/or the financial resources for it.  Thus, privacy regulation, which necessarily limits the collection and use of information, can have a profound effect on both the magnitude and direction of innovation on the Internet.  The legislation proponents do not acknowledge these tradeoffs.  They simply assume that regulations can be adopted without any adverse effect on innovation.

- There remains substantial confusion about the anonymity of data.  Much of the discussion conflated data from social networks – clearly not anonymous – with data used anonymously for a variety of commercial purposes on the Internet.  Individuals understandably get upset when personal information posted on social networking sites that was previously available to one group of people becomes unexpectedly available to a wider group.  This is the type of information at issue in the recent FTC consent decrees with Facebook and Google.  In these instances, both companies were forced by their users to stop the questionable practices as soon as they became known, long before the consent decrees were entered into.  In any event, some combination of consumer unhappiness and the FTC’s existing statutory authority was sufficient to stop the questionable practices.  But information on social networking sites is different from the vast amount of data collected and used for behavioral advertising or to refine search engines, to take two examples.  These data are “known” to computers, not to individuals.  No one is sitting around saying, “What can I sell Tom Lenard today?”  Rather, computers are using algorithms to serve advertisements to consumers who have certain interests.

- There is a lot of confusion about the market for privacy and whether firms compete on the basis of privacy.  The two government reports did not do a good job of illuminating this central issue.  Senator Toomey suggested that companies are competing on privacy, while the pro-legislation group at the hearing argued that companies always lose profits by providing more privacy (i.e., sacrificing some data), so they will never want to do it.  But companies do things like this all the time – i.e., provide better service, which costs money, in order to attract more customers, which makes them more money.  What the pro-legislation camp seems to be arguing is that companies won’t be able to attract consumers by offering more privacy, even though consumers are unhappy with the privacy protections they’re currently receiving.  This is not a compelling argument.  In fact, we really don’t know that consumers are, on the whole, unhappy with current privacy protections, which gets us back to Senator Toomey’s opening remark:  “Seems to me neither this committee nor the FTC nor the Commerce Department fully understands what consumers’ expectations are when it comes to their online privacy.”  Given all these reports, we should know more than we do.

Observations on the White House Privacy Report

Monday, February 27th, 2012

Last week, the Administration released its long-awaited privacy report.  The new privacy framework includes a Consumer Privacy Bill of Rights and a Multistakeholder (MSH) process to develop “enforceable codes of conduct” that put those rights into practice.

The inclusion of this “Bill of Rights” raises some serious concerns.  In adopting the language of “rights,” the Administration is moving toward the European approach, which also discusses privacy in terms of rights.  This sends the wrong signal.  The U.S. has created an environment that is much more conducive to IT innovation, partly as a result of our less regulatory privacy regime.  It is not an accident that the U.S. has spawned virtually all the great IT companies of the last couple of decades.  Google, Facebook, Amazon, Microsoft and others all depend on personal information in one way or another.  So, why we would want to move in the direction of Europe is a bit of a mystery.

Adopting the language of rights also provides a rationale for not subjecting privacy proposals to any kind of regulatory analysis.  Rights are absolute.  Once we label something a right, we’re saying we’re beyond the point of considering its costs and benefits.  But privacy regulation involves major tradeoffs that we would be better off considering explicitly.  The White House report does not do that and suggests there is no intention to do so in the future.

In the report, the Administration also voices its support for legislation.  However, this seems somewhat inconsistent with the MSH approach described in the report.  A key advantage of the MSH approach, if structured properly, should be greater flexibility relative to regulation that would typically result from legislation.  This flexibility is vital for the tech sector, which is constantly changing.  We should give the MSH process a chance to work before trying to adopt something more formal.  Therefore, Congress should put efforts to enact privacy legislation on hold.

Raising the Cost of Innovation

Thursday, February 9th, 2012

Google stirred up a hornet’s nest when it announced its new privacy policy, including questions from Congress, a request from the EU for Google to delay implementing the new policy pending an investigation and, yesterday, a Complaint for Injunctive Relief filed by EPIC alleging that the new policy violates the FTC’s Consent Order.

Google’s new privacy policy appears to represent a relatively small change that is also pro-consumer.  The company is proposing to consolidate privacy policies across its various products, such as Gmail, Maps and YouTube.  Google says it is not collecting any new or additional data, is not changing the visibility of any information it stores (i.e., private information remains private), and is leaving users’ existing privacy settings as they are now.

Google has indicated it will merge user data from its various products, and this is what has riled up critics, who apparently believe that combining information on users, even within a company, is harmful.  Yet, combining the data Google already has will increase the value of those data, both for the company and its users.  As its understanding of users increases, Google will be able to provide more personalized services, such as more relevant search results.  And, of course, if it can serve users more useful ads then it can charge advertisers more for those ads.

It is important to note that the new policy has not actually been implemented.  No actual users of Google products have experienced how the policy will affect their user experience or had a chance to react to it.  If users feel the change negatively impacts their experience, they will presumably let Google know.

Not being a lawyer, I’m not going to opine on whether this policy is or is not consistent with the FTC Consent Order.  But the episode is troubling if one thinks about its potential effect on innovation on the Internet, which largely depends on the use of information—either to develop and improve products or to fund them.  It seems now that the cost of making even a modest innovation has ratcheted up.