Archive for the ‘Privacy and Security’ Category

Chairman Rockefeller and Data Brokers

Thursday, September 26th, 2013

Chairman Rockefeller recently sent letters to a dozen different companies seeking details on how they share consumer information with third parties.  The letters are an extension of previous requests sent to “data brokers” asking for clarification of the companies’ “data collection, use and sharing practices.”  In the letters, the Chairman opines that the privacy policies on many websites “appear to leave room for sharing a consumer’s information with data brokers or other third parties who in turn may share with data brokers.”  He also stresses the importance of transparent privacy practices for consumers.

While a call for more information and data is certainly commendable, one should ask, “Where is this all going?”    Is the Chairman suddenly seeing the need for some data to inform policy making in this area?

While we would hope so, the Chairman’s letter rests on the assumption that there is something inherently harmful about data collection and sharing, although this harm is never explicitly described.  He also posits that consumers may not be aware that their information is being collected or how it is being used.  Again, no information is offered on how this conclusion was reached.

Overall, more data to inform privacy policy-making would be a good thing.  As Tom Lenard has pointed out in filings, Congressional testimony, and a recent book chapter, the last comprehensive survey of privacy policies was back in 2001, a lifetime ago in the technology industry.  Ideally, any privacy proposals from Congress or the FTC should be based on a survey of actual current practices on the ground, as opposed to opinions and assumptions.  Only with relevant data can policies be drafted that are targeted at specific harms.  Additionally, data-driven policymaking can be evaluated to ensure that a specific policy is performing as intended, and that the benefits derived outweigh the costs of the regulation.

Data collection is burdensome and time consuming for the companies involved. Any other government entity (besides Congress) would be required under the Paperwork Reduction Act to have such a proposal assessed, since agencies are required to “reduce information collection burdens on the public.” Since it doesn’t appear that Rockefeller’s recent requests for information are part of any systematic study or plan, it is understandable why some companies would bristle at the thought of spending time and resources on answering a list of questions.

The FTC recently conducted its own query in preparation for a study on “big data” and the privacy practices of data brokers.  One hopes the study, expected to be out by the end of the year, is well-designed and an objective look at the industry without a predetermination of results. Such a study would be useful going forward.

Lessons from the Federal Trade Commission’s $22.5 million Google fine

Wednesday, August 15th, 2012

Those who favor expanding the FTC’s role with respect to privacy should take a close look at what the agency does with the authority it already has. The most recent exhibit is the FTC’s imposition of a $22.5 million penalty on Google for bypassing the privacy settings on Apple’s Safari browser and thereby violating the terms of Google’s 2011 consent decree with the FTC. Since this is the largest fine the FTC has ever imposed, one would think Google must have committed a pretty serious violation that resulted in substantial harm to consumers. But there is no evidence that consumers have been harmed at all. (Dan Castro has written a nice blog post on this). Instead, the FTC has uncovered just enough of a technical violation to be able to say to Google “gotcha again.”

The issue is difficult to explain briefly, but essentially what happened is this: Google’s social network, Google+, has a “+1” button that, like Facebook’s “Like” button, gives users a way to indicate content they like. This feature doesn’t work with Apple’s Safari browser, which blocks third-party cookies by default, so Google developed a workaround that made the Safari browser behave like other browsers.

Following research by a Stanford graduate student, reported in the Wall Street Journal, the FTC began investigating and discovered two sentences in a 2009 Google help center page that the FTC claims misrepresent what Google is doing. That language dated from a year before Apple adopted its current cookie policy, two years before the 2011 consent decree, and two years before the +1 button was introduced.

Whether or not Google technically violated its consent decree, it is difficult to see how the Commission’s action will benefit consumers. Paradoxically, the action is likely to undermine one of the Commission’s principal recommendations: “greater transparency” concerning information collection and use practices. The $22.5 million fine sends exactly the opposite message to Google as well as other firms subject to FTC jurisdiction. The more transparent a company is about how it collects and uses data, the greater the risk of making a mistake and getting in trouble with the FTC. So, companies will find it in their interest to give users less information about web site privacy practices.

In addition, there is a cost to the +1 users the FTC is supposedly protecting. Now that Google has “corrected” the problem, Safari users who want to use +1 must manually log in to their Google account, which equates to submitting a form and thereby allows additional Google cookies to be installed anyway. This is quite a cumbersome process. Moreover, the pre-correction Google workaround meant that only additional cookies from Google’s DoubleClick network could be installed, while cookies from any other third party remained blocked. The current fix forces users who want the +1 function to change the cookie settings for the entire browser, opening their phones to cookies from any website, unless they take the trouble to switch the setting back to “never accept” cookies after they have successfully “+1”ed the content they set out to share.

That FTC privacy-related enforcement is not based on demonstrable consumer benefits should not come as a surprise to those who have been following the agency’s work in this area. In the past two years, the Commission has released two privacy reports (here and here) that contain no evidence of consumer harm from current privacy practices. In fact, the Commission explicitly rejects the harm-based approach to privacy. This, of course, makes analysis of the benefits of proposed measures difficult, since if there are benefits they will consist of reduced harms.

So, what are the broader lessons from this episode? First, we should be wary of privacy legislation that gives the FTC additional authority to write new rules and enforce them (which virtually all privacy legislative proposals would do). If new legislation is enacted, it should only be with a strict mandate that any new regulations address significant harms and pass a cost-benefit test.

Another lesson may be for companies like Google, who understandably are anxious to avoid protracted litigation and get on with their businesses. These companies probably need to reassess the cost-benefit calculation that induced them to settle in the first place.

New Technology in Europe

Tuesday, May 22nd, 2012

Last week the New York Times ran an article, “Building the Next Facebook a Tough Task in Europe,” by Eric Pfanner, discussing the lack of major high-tech innovation in Europe. Pfanner discusses the importance of such investment, and then speculates on the reasons for the lack of such innovation. His ultimate conclusion is that there is a lack of venture capital in Europe for various cultural and historical reasons. This explanation of course makes no sense. Capital is geographically mobile, and if European tech start-ups were a profitable investment that Europeans were afraid to bankroll, American investors would be on the next plane.

Here is a better explanation. In the name of “privacy,” the EU greatly restricts the use of consumer online information. Josh Lerner has a recent paper, “The Impact of Privacy Policy Changes on Venture Capital Investment in Online Advertising Companies” (based in part on the work of Avi Goldfarb and Catherine E. Tucker, “Privacy Regulation and Online Advertising”), finding that this restriction on the use of information is a large part of the explanation for the lack of tech investment in Europe. Tom Lenard and I have written extensively about the costs of privacy regulation (for example, here), and this is just another example of these costs, although the costs are much greater in Europe than they are here (so far).

Observations on Senate Privacy Hearing

Thursday, May 10th, 2012

The Senate Commerce Committee held a privacy hearing yesterday with three government witnesses from the agencies responsible for this issue:  Federal Trade Commission Chairman Jon Leibowitz and Commissioner Maureen Ohlhausen, and Commerce Department General Counsel Cameron Kerry.  The Senators and witnesses went over a lot of familiar ground.  A few takeaways from the hearing:

- Perhaps because of sparse attendance on the part of Committee members, the privacy issue appeared to be more partisan than it used to be.  The two skeptics about the need for legislation were Senator Pat Toomey (the only Republican to show up) and newly-confirmed Commissioner Ohlhausen.  Senator Toomey stressed the need for evidence of market failure, harms to consumers and cost-benefit analysis (a position I share and have argued before this Committee).   Senator Kerry, on the other hand, stated that the record is clear on the need for a privacy law, even suggesting that Senator Toomey’s concerns have been addressed at previous hearings (they have not).  Commissioner Ohlhausen expressed “concerns about the ability of legislative or regulatory efforts to keep up with the innovations and advances of the Internet without imposing unintended chilling effects on many of the enormous benefits consumers have gained from these advances.”  Senator Rockefeller acknowledged that a consensus doesn’t yet exist on legislation, but indicated after the hearing, “I really don’t see it as that complicated a subject.”  In fact, it is a complicated subject.

- The issue is viewed as a consumer protection issue (which it is), but it is perhaps more importantly an innovation issue, as suggested by Commissioner Ohlhausen.  This is because virtually all innovation on the Internet depends in one way or another on the use of information – to develop the product itself and/or the financial resources for it.  Thus, privacy regulation, which necessarily limits the collection and use of information, can have a profound effect on both the magnitude and direction of innovation on the Internet.  The legislation proponents do not acknowledge these tradeoffs.  They simply assume that regulations can be adopted without any adverse effect on innovation.

- There remains substantial confusion about the anonymity of data.  Much of the discussion conflated data from social networks – clearly not anonymous – with data used anonymously for a variety of commercial purposes on the Internet.  Individuals understandably get upset when personal information posted on social networking sites which was previously available to one group of people becomes unexpectedly available to a wider group.  This is the type of information at issue in the recent FTC consent decrees with Facebook and Google.  In these instances, both companies were forced by their users to stop the questionable practices as soon as they became known, long before the consent decrees were entered into.  In any event, some combination of consumer unhappiness and the FTC’s existing statutory authority was sufficient to stop the questionable practices.  But information on social networking sites is different from the vast amount of data collected and used for behavioral advertising or to refine search engines, to take two examples.  These data are “known” to computers, not to individuals.  No one is sitting around saying, “What can I sell Tom Lenard today?”  Rather, computers are using algorithms to serve advertisements to consumers who have certain interests.

- There is a lot of confusion about the market for privacy and whether firms compete on the basis of privacy.  The two government reports did not do a good job of illuminating this central issue.  Senator Toomey suggested  companies are competing on privacy, while the pro-legislation group at the hearing argued that companies always lose profits by providing more privacy (i.e., sacrificing some data), so they will never want to do it.  But companies do things like this all the time – i.e., provide better service, which costs money, in order to attract more customers, which makes them more money.  What the pro-legislation camp seems to be arguing is that companies won’t be able to attract consumers by offering more privacy, even though consumers are unhappy with the privacy protections they’re currently receiving.  This is not a compelling argument.  In fact, we really don’t know that consumers are, on the whole, unhappy with current privacy protections, which gets us back to Senator Toomey’s opening remark:  “Seems to me neither this committee nor the FTC nor the Commerce Department fully understands what consumers’ expectations are when it comes to their online privacy.”  We should know more with all these reports.

    Lenard to NTIA: Cost-Benefit Analysis can Ensure all Internet Users are Represented in Privacy Code of Conduct

    Wednesday, April 4th, 2012

    On Monday, Tom Lenard filed comments with the National Telecommunications and Information Administration (NTIA) regarding the proposed multistakeholder (MSH) process for developing a code of conduct.

    Among the 80 comments filed with NTIA, many referenced the need to ensure both firms and Internet users were represented in the process.  In his comments, Tom identified one way to ensure the needs of all involved parties are taken into account: requiring a cost-benefit analysis of any proposed code of conduct.

    Since the code will apply to many more consumers and firms than can be directly involved in the process, code provisions should be analyzed in much the same way as a regulation in order to assure that they produce benefits in excess of costs.  Tom also described the proposed code of conduct as similar to agency guidance, which is subjected to the regulatory review requirements of Executive Order 12866, including “a reasoned determination that its benefits justify its costs.”

    In addition to urging a cost-benefit analysis of any proposed codes, Tom also warns of the need to try to protect against anticompetitive behavior.  NTIA and the MSH process should ensure any privacy code is neutral with respect to technology, business models and organizational structures.  In addition, procedures should guard against the process and resulting code being dominated by incumbents, which could raise the costs of entry and inhibit innovation in the Internet space.

    Read more of Tom’s comments here.

    Observations on the White House Privacy Report

    Monday, February 27th, 2012

    Last week, the Administration released its long-awaited privacy report.  The new privacy framework includes a Consumer Privacy Bill of Rights and a Multistakeholder (MSH) process to develop “enforceable codes of conduct” that put those rights into practice.

    The inclusion of this “Bill of Rights” raises some serious concerns. In adopting the language of “rights,” the Administration is moving toward the European approach, which also discusses privacy in terms of rights.  This sends the wrong signal.  The U.S. has created an environment that is much more conducive to IT innovation, partly as a result of our less regulatory privacy regime.  It is not an accident that the U.S. has spawned virtually all the great IT companies of the last couple of decades.  Google, Facebook, Amazon, Microsoft and others all depend on personal information in one way or another.  So why we would want to move in the direction of Europe is a bit of a mystery.

    Adopting the language of rights also provides a rationale for not subjecting privacy proposals to any kind of regulatory analysis.  Rights are absolute.  Once we label something a right, we are saying we are beyond the point of considering its costs and benefits.  But privacy regulation involves major tradeoffs that we would do better to consider explicitly.  The White House report does not do that, and it suggests there is no intention to do so in the future.

    In the report, the Administration also voices its support for legislation.  However, this seems somewhat inconsistent with the MSH approach described in the report.  A key advantage of the MSH approach, if structured properly, should be greater flexibility relative to regulation that would typically result from legislation.  This flexibility is vital for the tech sector, which is constantly changing.  We should give the MSH process a chance to work before trying to adopt something more formal.   Therefore, Congress should put efforts to enact privacy legislation on hold.

    Raising the Cost of Innovation

    Thursday, February 9th, 2012

    Google stirred up a hornet’s nest when it announced its new privacy policy, including questions from Congress, a request from the EU for Google to delay implementing the new policy pending an investigation and, yesterday, a Complaint for Injunctive Relief filed by EPIC alleging that the new policy violates the FTC’s Consent Order.

    Google’s new privacy policy appears to represent a relatively small change that is also pro-consumer.  The company is proposing to consolidate privacy policies across its various products, such as Gmail, Maps and YouTube.  Google says it is not collecting any new or additional data, is not changing the visibility of any information it stores (i.e., private information remains private), and is leaving users’ existing privacy settings as they are now.

    Google has indicated it will merge user data from its various products, and this is what has riled up critics, who apparently believe that combining information on users, even within a company, is harmful. Yet, combining the data Google already has will increase the value of those data, both for the company and its users.  As its understanding of users increases, Google will be able to provide more personalized services, such as more relevant search results. And, of course, if it can serve users more useful ads then it can charge advertisers more for those ads.

    It is important to note that the new policy has not actually been implemented.  No actual users of Google products have experienced how the policy will affect their user experience or had a chance to react to it. If users feel the change negatively impacts their experience, they will presumably let Google know.

    Not being a lawyer, I’m not going to opine on whether this policy is or is not consistent with the FTC Consent Order.  But the episode is troubling if one thinks about its potential effect on innovation on the Internet, which largely depends on the use of information—either to develop and improve products or to fund them.  It seems now that the cost of making even a modest innovation has ratcheted up.

    Privacy in Europe

    Friday, January 27th, 2012

    The EU is apparently considering the adoption of common and highly restrictive privacy standards that would make firms’ use of information much more difficult and would require, for example, that data be retained only as long as necessary. This is touted as pro-consumer legislation. However, the effects would be profoundly anti-consumer. For one thing, ads would be much less targeted, so consumers would get less valuable ads and would not learn as much about valuable products and services aimed at their interests. For another, fraud and identity theft would become more common, as sellers could not use stored information to verify identity. Finally, the costs of doing business would increase, so we would expect to see fewer innovations aimed at the European market, and some sellers might avoid that market entirely.

    (Cross-posted from the Truth on the Market blog.)

    Internet Hysteria – Are We Losing Our Edge?

    Thursday, December 15th, 2011

    Scott Wallsten and Amy Smorodin

    From Anthony Weiner’s wiener to the FCC’s brave stand on Americans’ shameful inability to turn down the damn volume by themselves, 2011 has been a big year for tech and communications policy. But how has one of the Washington tech crowd’s most important products—Internet hype—fared this year?  In this post, we seek to answer this crucial question.

    The Internet Hysteria Index

    The Internet is without doubt the most powerful inspiration for hyperbole in the history of mankind. Some extol the Internet’s greatness, like Howard Dean, who called the Internet “the most important tool for re-democratizing the world since Gutenberg invented the printing press.”[1] Others fret about the future, like Canada’s Office of Privacy Commissioner, who claimed, “Nothing in society poses as grave a threat to privacy as the Internet Service Provider.”[2]

    Sometimes the hyperbole is justified. For example, thanks to Twitter, attendees at this past summer’s TPI Aspen Summit were privy to a steady stream of misinformation even before the DC-area earthquake stopped.[3]

    In the same spirit, we present the Internet Hysteria Index (IHI). The IHI, which the DOJ and FCC should take care not to confuse with the HHI, is the most rigorous and flexible tool ever conceived for gauging the Internet’s “worry zeitgeist”. It’s rigorous[4] because it uses numbers and flexible[5] because you can interpret it in so many different ways that it won’t threaten your preconceived ideas no matter what you believe.

    The IHI has two components. The first tracks fears of an unrecognizable, but certainly Terminator-esque, future Internet. We count the number of times the exact phrases “the end of the internet as we know it” and “break the internet” appear in Nexis news searches each year since 2000.
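    The tallying behind this component is simple enough to sketch. The snippet below is a hypothetical stand-in (we obviously cannot bundle Nexis access here): given (year, article text) records, it counts how many articles in each year contain at least one of the target phrases. The sample records are invented for illustration.

```python
from collections import Counter

# Target phrases for the first IHI component, matched case-insensitively.
PHRASES = ["the end of the internet as we know it", "break the internet"]

def hysteria_counts(records):
    """Count, per year, how many articles contain at least one target phrase.

    `records` is an iterable of (year, article_text) pairs -- a hypothetical
    stand-in for the results of a Nexis news search.
    """
    counts = Counter()
    for year, text in records:
        lowered = text.lower()
        if any(phrase in lowered for phrase in PHRASES):
            counts[year] += 1
    return counts

# Invented sample records, for illustration only.
sample = [
    (2006, "Critics warn net neutrality rules could break the Internet."),
    (2006, "A quiet year for broadband deployment."),
    (2011, "SOPA opponents say the bill would break the Internet."),
    (2011, "Some fear the end of the Internet as we know it."),
]
print(hysteria_counts(sample).most_common())
```

    The same routine, pointed at a different phrase list, drives the second component described below; only the search terms change.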

    Figure 1: The End of the Internet as we Know It!


    Figure 1 shows that 2011 produced a bumper crop of “break the internet” stories, mostly related to the Stop Online Piracy Act and the Protect IP Act. The spike in 2006 reflects a wave of Net Neutrality stories after AT&T’s then-CEO proclaimed that “what they [content providers] would like to do is use my pipes free, and I ain’t going to let them do that because we have spent this capital and we have to have a return on it.”

    As our research illustrates, the “End of the Internet” hyperbole shows a healthy, generally upward trend, reflecting the effectiveness of our collective fretting and hand-wringing. Our data do not allow us to identify[6] whether the trend is due to clever Washington PR, lazy hacks retreading old lines, real concerns, or collusion among interest groups simply ensuring they can all stay in business by responding to each other.

    The second component of our index measures the incidence of hand-wringing regarding the state of broadband in the U.S. In particular, this measure counts the number of times phrases suggesting lagging U.S. broadband performance show up in Nexis since 2000.[7] Figure 2 shows the results of our analysis.

    Figure 2: The Grass is So Much Greener on the Other Side of the Pond: U.S. Broadband Sucks


    The big spike in 2010 is related to release of the National Broadband Plan. The prior high, in 2007, saw stories focusing on the OECD rankings, broadband mapping, and the beginnings of broadband plan discussions.

    Unfortunately, 2011 was not a good year for misinterpreting shoddily-gathered statistics. Figure 2 shows a dramatic drop-off in bemoaning the dire state of U.S. broadband, possibly after everyone just got really, really tired of talking about the National Broadband Plan. We’re extremely concerned that as a result, the U.S. may have fallen dramatically in the OECD worry rankings. In fact, in a warning shot across our bow, on December 14 the BBC reported that “the UK remains in danger of falling behind when it comes to next-generation mobile services” and superfast broadband.[8] We’re hopeful American fretting will pick up once analysts actually read the FCC’s USF order that was promulgated under the cover of 23 days between approval and publication. On the other hand, there is a risk that the sheer volume of the Order—the equivalent of more than 4 million tweets—might dissuade people from talking about it ever again.

    For generations, Americans have taken a back seat to nobody on the important issue of Internet hyperbole. Let’s hope the inside-the-beltway crowd pulls itself together and breathes some life back into the speech economy. Happy New Year.


    [1] http://motherjones.com/politics/2007/06/interview-howard-dean-chairman-democratic-national-committee

    [2] http://dpi.priv.gc.ca/index.php/essays/the-greatest-threat-to-privacy/

    [3] Picture from Funny Potato, http://www.funny-potato.com/blog/august-23rd-2011-east-coast-quake.

    [4] It’s not.

    [5] In other words, “probably pretty meaningless.”

    [6] Actually, they do, but we don’t want to do the work.

    [7] Specifically, the search is ((“U.S. falling behind” OR “U.S. lagging”) AND broadband) OR ((“United States falling behind” OR “United States lagging”) AND broadband).

    [8] http://www.bbc.co.uk/news/technology-16174745

    Carrier IQ: Another Silly Privacy Panic

    Friday, December 2nd, 2011

    By now everyone is probably aware of the “tracking” of certain cellphones (Sprint, iPhone, T-Mobile, AT&T, and perhaps others) by a company called Carrier IQ.  There are lots of discussions available; a good summary is on one of my favorite websites, Lifehacker; also here from CNET. Apparently the program gathers lots of anonymous data, mainly for the purpose of helping carriers improve their service. Nonetheless, there are lawsuits and calls for the FTC to investigate.

    Aside from the fact that the data is used only to improve service, it is useful to ask just what people are afraid of.  Clearly the phone companies already have access to SMS messages if they want it, since these go through the phone system anyway.  Moreover, of course, no person would see the data even if it were somehow collected.  The fear is perhaps that “… marketers can use that data to sell you more stuff or send targeted ads…” (from the Lifehacker site), but even if so, so what?  If apps are using data to try to sell you stuff they think you want, what is the harm? If you do want it, then the app has done you a service.  If you don’t want it, then you don’t buy it.  Ads tailored to your behavior are likely to be more useful than ads randomly assigned.

    The Lifehacker story does use phrases like “freak people out” and “scary” and “creepy.”  But except for the possibility of being sold stuff, the story never explains what is harmful about the behavior.  As I have said before, I think the basic problem is that people cannot understand the notion that something is known but no person knows it.  If some server somewhere knows where your phone has been, so what?

    The end result of this episode will probably be somewhat worse phone service.

    (Cross posted from the Truth on the Market blog)