Archive for the ‘Privacy and Security’ Category

2014 TPI Aspen Forum has Ended, but the Videos Live On…

Friday, August 22nd, 2014

Did you miss the Aspen Forum this year?  Or, do you just want to watch some of the panels again?  Videos of the panels and keynotes from the 2014 event are now up on the TPI website.

Some highlights from Monday night and Tuesday:

Comcast’s David Cohen was the Monday night dinner speaker.  In front of a packed room, Cohen spoke about the benefits of the Comcast/TWC deal, vertical and horizontal integration in the industry in general, and even revealed what keeps him up at night (hint: it’s not the communications industry).  His speech can be viewed here.

First up on Tuesday morning was a panel on copyright moderated by Mike Smith, TPI Senior Adjunct Fellow and Professor at Carnegie Mellon.  "Copyright Protection: Government vs. Voluntary Arrangements" featured Robert Brauneis from GW Law School, the Center for Copyright Information's Jill Lesser, Jeff Lowenstein from the Office of Congressman Schiff, Shira Perlmutter from the USPTO, and NYU's Chris Sprigman.  Panelists discussed the copyright alert system, the state of the creative market in general, and the perennial question of what can be done to reduce piracy.  Video of the spirited panel can be viewed here.

Next up was the panel "Internet Governance in Transition: What's the Destination?" moderated by Amb. David Gross.  The impressive group of speakers discussed issues surrounding the transition of ICANN away from the loose oversight provided by the U.S. Department of Commerce.  Participants were ICANN Chair Steve Crocker, Reinhard Wieck from Deutsche Telekom, Shane Tews from AEI, Amb. Daniel Sepulveda, the U.S. Coordinator for International Communications and Information Policy, and NYU's Lawrence White.  Video is here.

Finally, the Forum concluded with a panel on "Data and Trade," moderated by TPI's Scott Wallsten.  The panelists discussed how cybersecurity, local privacy laws, and national security issues are barriers to digital trade.  Speakers were USITC Chairman Meredith Broadbent, Anupam Chander from the University of California, Davis, PPI's Michael Mandel, Joshua Meltzer from Brookings, and Facebook's Matthew Perault.  Video of the discussion is here.

We hope all attendees and participants at the TPI Aspen Forum found it interesting, educational, and enjoyable.  We hope to see you next year!

Takeaways from the White House Big Data Reports

Monday, May 5th, 2014

On May 1, the White House released its two eagerly awaited reports on "big data" resulting from the 90-day study President Obama announced on January 17: one by a team led by Presidential Counselor John Podesta, and a complementary study by the President's Council of Advisors on Science and Technology (PCAST).  The reports contain valuable detail about the uses of big data in both the public and private sectors.  At the risk of oversimplifying, I see three major takeaways from the reports.

First, the reports recognize big data’s enormous benefits and potential.  Indeed, the Podesta report starts out by observing that “properly implemented, big data will become an historic driver of progress.”  It adds, “Unprecedented computational power and sophistication make possible unexpected discoveries, innovations, and advancements in our quality of life.”  The report is filled with examples of the value of big data in medical research and health care delivery, education, homeland security, fraud detection, improving efficiency and reducing costs across the economy, as well as in providing targeted information to consumers and the raw material for the advertising-supported internet ecosystem.  The report states that the “Administration remains committed to supporting the digital economy and the free flow of data that drives its innovation.”

Second, neither report provides any actual evidence of harms from big data.  While the reports provide concrete examples of beneficial uses of big data, the harmful uses are hypothetical.  Perhaps the most publicized conclusion of the Podesta report concerns the possibility of discrimination—that “big data analytics have the potential to [italics added] eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace.”  However, the two examples of discrimination cited turn out to be almost non-examples.

The first example involves StreetBump, a mobile application developed to collect information about potholes and other road conditions in Boston.  Even before its launch, the city recognized that this app, by itself, would be biased toward identifying problems in wealthier neighborhoods, because wealthier individuals would be more likely to own smartphones and make use of the app.  As a result, the city adjusted its data collection accordingly to ensure that reporting of road conditions was accurate and consistent throughout the city.

The second example involves the E-Verify program used by employers to check the eligibility of employees to work legally in the United States.  The report cites a study that "found the rate at which U.S. citizens have their authorization to work be initially erroneously unconfirmed by the system was 0.3 percent, compared to 2.1 percent for non-citizens.  However, after a few days many of these workers' status was confirmed."  It seems almost inevitable that the error rate for citizens would be lower, since citizens automatically are eligible to work, whereas additional information is needed to confirm eligibility for non-citizens (i.e., evidence of some sort of work permit).  Hence, it is not clear this is an example of discrimination.

It is notable that both these examples are of government activities.  The reports do not present examples of commercial uses of big data that discriminate against particular groups.  To the contrary, the PCAST report notes the private-sector use of big data to help underserved individuals with loan and credit-building alternatives.

Finally, and perhaps most importantly, both reports indicate that the Fair Information Practice Principles (FIPPs) that focus on limiting data collection are increasingly irrelevant and, indeed, harmful in a big data world.  The Podesta report observes that “these trends may require us to look closely at the notice and consent framework that has been a central pillar of how privacy practices have been organized for more than four decades.”  The PCAST report notes, “The beneficial uses of near-ubiquitous data collection are large, and they fuel an increasingly important set of economic activities.  Taken together, these considerations suggest that a policy focus on limiting data collection will not be a broadly applicable or scalable strategy—nor one likely to achieve the right balance between beneficial results and unintended negative consequences (such as inhibiting economic growth).”  The Podesta report suggests examining “whether a greater focus on how data is used and reused would be a more productive basis for managing privacy rights in a big data environment.”  The PCAST report is even clearer:

Policy attention should focus more on the actual uses of big data and less on its collection and analysis.  By actual uses, we mean the specific events where something happens that can cause an adverse consequence or harm to an individual or class of individuals….By contrast, PCAST judges that policies focused on the regulation of data collection, storage, retention, a priori limitations on applications, and analysis…are unlikely to yield effective strategies for improving privacy.  Such policies would be unlikely to be scalable over time, or to be enforceable by other than severe and economically damaging measures.

In sum, the two reports have much to like: their acknowledgement of the importance and widespread use of big data, and their attempt, particularly in the PCAST report, to refocus the policy discussion in a more productive direction.  The reports also, however, suffer from a lack of evidence to substantiate their claims of harm.

Chairman Rockefeller and Data Brokers

Thursday, September 26th, 2013

Chairman Rockefeller recently sent letters to a dozen companies seeking details on how they share consumer information with third parties.  The letters are an extension of previous requests sent to "data brokers" asking for clarification of the companies' "data collection, use and sharing practices."  In the letters, the Chairman opines that the privacy policies on many websites "appear to leave room for sharing a consumer's information with data brokers or other third parties who in turn may share with data brokers."  He also stresses the importance of transparent privacy practices for consumers.

While a call for more information and data is certainly commendable, one should ask, "Where is this all going?"  Is the Chairman suddenly seeing the need for data to inform policymaking in this area?

While we would hope so, the Chairman's letter implies that there is something inherently harmful about data collection and sharing, although this harm is never explicitly described.  He also posits that consumers may not be aware that their information is being collected or how it is being used.  Again, no information is offered on how this conclusion was reached.

Overall, more data to inform privacy policymaking would be a good thing.  As Tom Lenard has pointed out in filings, Congressional testimony, and a recent book chapter submission, the last comprehensive survey of privacy policies was conducted in 2001, a lifetime ago in the technology industry.  Ideally, any privacy proposals from Congress or the FTC should be based on a survey of actual current practices on the ground, as opposed to opinions and assumptions.  Only with relevant data can policies be drafted that are targeted toward specific harms.  Additionally, data-driven policymaking can be evaluated to ensure that a specific policy is performing as intended and that the benefits derived outweigh the costs of the regulation.

Data collection is burdensome and time-consuming for the companies involved. Any other government entity (besides Congress) would be required under the Paperwork Reduction Act to have such a proposal assessed, as agencies are required to "reduce information collection burdens on the public." Since it doesn't appear that Rockefeller's recent requests for information are part of any systematic study or plan, it is understandable that some companies would bristle at the thought of spending time and resources answering a list of questions.

The FTC recently conducted its own inquiry in preparation for a study on "big data" and the privacy practices of data brokers.  One hopes the study, expected out by the end of the year, will be well designed and will take an objective look at the industry without predetermined results. Such a study would be useful going forward.

Lessons from the Federal Trade Commission’s $22.5 million Google fine

Wednesday, August 15th, 2012

Those who favor expanding the FTC’s role with respect to privacy should take a close look at what the agency does with the authority it already has. The most recent exhibit is the FTC’s imposition of a $22.5 million penalty on Google for bypassing the privacy settings on Apple’s Safari browser and thereby violating the terms of Google’s 2011 consent decree with the FTC. Since this is the largest fine the FTC has ever imposed, one would think Google must have committed a pretty serious violation that resulted in substantial harm to consumers. But there is no evidence that consumers have been harmed at all. (Dan Castro has written a nice blog post on this). Instead, the FTC has uncovered just enough of a technical violation to be able to say to Google “gotcha again.”

The issue is difficult to explain briefly, but essentially what happened is this: Google's social network, Google+, has a "+1" button that, like Facebook's "Like" button, gives users a way to indicate content they like. This feature doesn't work with Apple's Safari browser, which blocks third-party cookies by default, so Google developed a workaround that made Safari behave like other browsers.

Following research by a Stanford graduate student, which was reported in the Wall Street Journal, the FTC began investigating and discovered two sentences on a 2009 Google help center page that the FTC claims misrepresented what Google was doing. That language dated from a year before Apple adopted its current cookie policy, two years before the 2011 consent decree, and two years before the +1 button was introduced.

Whether or not Google technically violated its consent decree, it is difficult to see how the Commission's action will benefit consumers. Paradoxically, the action is likely to undermine one of the Commission's principal recommendations: "greater transparency" concerning information collection and use practices. The $22.5 million fine sends exactly the opposite message to Google as well as to other firms subject to FTC jurisdiction. The more transparent a company is about how it collects and uses data, the greater the risk of making a mistake and getting in trouble with the FTC. So companies will find it in their interest to give users less information about website privacy practices.

In addition, there is a cost to the +1 users the FTC is supposedly protecting. Now that Google has "corrected" the problem, Safari users who want to use +1 need to manually log in to their Google account, which equates to submitting a form, which in turn allows additional Google cookies to be installed anyway. This is quite a cumbersome process. Moreover, the pre-correction Google workaround meant that only additional cookies from Google's DoubleClick network could be installed, while cookies from any other third party remained blocked. The current fix forces users who want to use the +1 function to change the cookie settings for the entire browser, opening their phones to cookies from any website, unless they take the trouble to switch the setting back to 'never accept' cookies after they have successfully +1'd the content they set out to share.

That FTC privacy-related enforcement is not based on demonstrable consumer benefits should not come as a surprise to those who have been following the agency’s work in this area. In the past two years, the Commission has released two privacy reports (here and here) that contain no evidence of consumer harm from current privacy practices. In fact, the Commission explicitly rejects the harm-based approach to privacy. This, of course, makes analysis of the benefits of proposed measures difficult, since if there are benefits they will consist of reduced harms.

So, what are the broader lessons from this episode? First, we should be wary of privacy legislation that gives the FTC additional authority to write new rules and enforce them (which virtually all privacy legislative proposals would do). If new legislation is enacted, it should only be with a strict mandate that any new regulations address significant harms and pass a cost-benefit test.

Another lesson may be for companies like Google, which understandably are anxious to avoid protracted litigation and get on with their businesses. These companies probably need to reassess the cost-benefit calculation that induced them to settle in the first place.

New Technology in Europe

Tuesday, May 22nd, 2012

Last week the New York Times ran an article, "Building the Next Facebook a Tough Task in Europe," by Eric Pfanner, discussing the lack of major high-tech innovation in Europe. Pfanner discusses the importance of such investment and then speculates on the reasons for the lack of such innovation. His ultimate conclusion is that there is a lack of venture capital in Europe for various cultural and historical reasons. This explanation, of course, makes no sense. Capital is geographically mobile, and if European tech start-ups were a profitable investment that Europeans were afraid to bankroll, American investors would be on the next plane.

Here is a better explanation. In the name of "privacy," the EU greatly restricts the use of consumer online information. Josh Lerner has a recent paper, "The Impact of Privacy Policy Changes on Venture Capital Investment in Online Advertising Companies" (based in part on the work of Avi Goldfarb and Catherine E. Tucker, "Privacy Regulation and Online Advertising"), finding that this restriction on the use of information is a large part of the explanation for the lack of tech investment in Europe. Tom Lenard and I have written extensively about the costs of privacy regulation (for example, here), and this is just another example of these costs, although the costs are much greater in Europe than they are here (so far).

Observations on Senate Privacy Hearing

Thursday, May 10th, 2012

The Senate Commerce Committee held a privacy hearing yesterday with three government witnesses from the agencies responsible for this issue:  Federal Trade Commission Chairman Jon Leibowitz and Commissioner Maureen Ohlhausen, and Commerce Department General Counsel Cameron Kerry.  The Senators and witnesses went over a lot of familiar ground.  A few takeaways from the hearing:

- Perhaps because of sparse attendance on the part of Committee members, the privacy issue appeared more partisan than it used to be.  The two skeptics about the need for legislation were Senator Pat Toomey (the only Republican to show up) and newly confirmed Commissioner Ohlhausen.  Senator Toomey stressed the need for evidence of market failure, harms to consumers, and cost-benefit analysis (a position with which I agree and which I have argued before this committee).  Senator Kerry, on the other hand, stated that the record is clear on the need for a privacy law, even suggesting that Senator Toomey's concerns have been addressed at previous hearings (they have not).  Commissioner Ohlhausen expressed "concerns about the ability of legislative or regulatory efforts to keep up with the innovations and advances of the Internet without imposing unintended chilling effects on many of the enormous benefits consumers have gained from these advances."  Senator Rockefeller acknowledged that a consensus doesn't yet exist on legislation, but indicated after the hearing, "I really don't see it as that complicated a subject."  In fact, it is a complicated subject.

- The issue is viewed as a consumer protection issue (which it is), but it is perhaps more importantly an innovation issue, as Commissioner Ohlhausen suggested.  This is because virtually all innovation on the Internet depends in one way or another on the use of information – to develop the product itself, the financial resources to support it, or both.  Thus, privacy regulation, which necessarily limits the collection and use of information, can have a profound effect on both the magnitude and direction of innovation on the Internet.  Proponents of legislation do not acknowledge these tradeoffs.  They simply assume that regulations can be adopted without any adverse effect on innovation.

- There remains substantial confusion about the anonymity of data.  Much of the discussion conflated data from social networks – clearly not anonymous – with data used anonymously for a variety of commercial purposes on the Internet.  Individuals understandably get upset when personal information posted on social networking sites that was previously available to one group of people becomes unexpectedly available to a wider group.  This is the type of information at issue in the recent FTC consent decrees with Facebook and Google.  In those instances, both companies were forced by their users to stop the questionable practices as soon as they became known, long before the consent decrees were entered into.  In any event, some combination of consumer unhappiness and the FTC's existing statutory authority was sufficient to end those practices.  But information on social networking sites is different from the vast amount of data collected and used for behavioral advertising or to refine search engines, to take two examples.  These data are "known" to computers, not to individuals.  No one is sitting around asking, "What can I sell Tom Lenard today?"  Rather, computers are using algorithms to serve advertisements to consumers who have certain interests.

- There is a lot of confusion about the market for privacy and whether firms compete on the basis of privacy.  The two government reports did not do a good job of illuminating this central issue.  Senator Toomey suggested companies are competing on privacy, while the pro-legislation group at the hearing argued that companies always lose profits by providing more privacy (i.e., sacrificing some data), so they will never want to do it.  But companies do things like this all the time – i.e., provide better service, which costs money, in order to attract more customers, which makes them more money.  What the pro-legislation camp seems to be arguing is that companies won't be able to attract consumers by offering more privacy, even though consumers are unhappy with the privacy protections they're currently receiving.  This is not a compelling argument.  In fact, we really don't know whether consumers are, on the whole, unhappy with current privacy protections, which gets us back to Senator Toomey's opening remark:  "Seems to me neither this committee nor the FTC nor the Commerce Department fully understands what consumers' expectations are when it comes to their online privacy."  With all these reports, we should know more than we do.

Lenard to NTIA: Cost-Benefit Analysis can Ensure all Internet Users are Represented in Privacy Code of Conduct

Wednesday, April 4th, 2012

On Monday, Tom Lenard filed comments with the National Telecommunications and Information Administration (NTIA) regarding the proposed multistakeholder (MSH) process for developing a code of conduct.

Among the 80 comments filed with NTIA, many referenced the need to ensure both firms and Internet users were represented in the process.  In his comments, Tom identified one way to ensure the needs of all involved parties are taken into account: requiring a cost-benefit analysis of any proposed code of conduct.

Since the code will apply to many more consumers and firms than can be directly involved in the process, code provisions should be analyzed in much the same way as a regulation in order to ensure that they produce benefits in excess of costs.  Tom also described the proposed code of conduct as similar to agency guidance, which is subject to the regulatory review requirements of Executive Order 12866, including "a reasoned determination that its benefits justify its costs."

In addition to urging a cost-benefit analysis of any proposed codes, Tom warned of the need to protect against anticompetitive behavior.  NTIA and the MSH process should ensure any privacy code is neutral with respect to technology, business models, and organizational structures.  In addition, procedures should guard against the process and resulting code being dominated by incumbents, which could raise the costs of entry and inhibit innovation in the Internet space.

Read more of Tom's comments here.

Observations on the White House Privacy Report

Monday, February 27th, 2012

Last week, the Administration released its long-awaited privacy report.  The new privacy framework includes a Consumer Privacy Bill of Rights and a multistakeholder (MSH) process to develop "enforceable codes of conduct" that put those rights into practice.

The inclusion of this "Bill of Rights" raises some serious concerns.  In adopting the language of "rights," the Administration is moving toward the European approach, which also discusses privacy in terms of rights.  This sends the wrong signal.  The U.S. has created an environment that is much more conducive to IT innovation, partly as a result of our less regulatory privacy regime.  It is not an accident that the U.S. has spawned virtually all the great IT companies of the last couple of decades.  Google, Facebook, Amazon, Microsoft, and others all depend on personal information in one way or another.  So why we would want to move in the direction of Europe is a bit of a mystery.

Adopting the language of rights also provides a rationale for not subjecting privacy proposals to any kind of regulatory analysis.  Rights are absolute.  Once we label something a right, we're saying we're beyond the point of considering its costs and benefits.  But privacy regulation involves major tradeoffs that we would be better off considering explicitly.  The White House report does not do that, and it suggests there is no intention to do so in the future.

In the report, the Administration also voices its support for legislation.  However, this seems somewhat inconsistent with the MSH approach described in the report.  A key advantage of the MSH approach, if structured properly, should be greater flexibility relative to the regulation that would typically result from legislation.  This flexibility is vital for the tech sector, which is constantly changing.  We should give the MSH process a chance to work before trying to adopt something more formal.  Therefore, Congress should put efforts to enact privacy legislation on hold.

Raising the Cost of Innovation

Thursday, February 9th, 2012

Google stirred up a hornet's nest when it announced its new privacy policy, prompting questions from Congress, a request from the EU that Google delay implementing the new policy pending an investigation and, yesterday, a Complaint for Injunctive Relief filed by EPIC alleging that the new policy violates the FTC's Consent Order.

Google's new privacy policy appears to represent a relatively small change that is also pro-consumer.  The company is proposing to consolidate privacy policies across its various products, such as Gmail, Maps, and YouTube.  Google says it is not collecting any new or additional data, is not changing the visibility of any information it stores (i.e., private information remains private), and is leaving users' existing privacy settings as they are now.

Google has indicated it will merge user data from its various products, and this is what has riled up critics, who apparently believe that combining information on users, even within a company, is harmful.  Yet combining the data Google already has will increase the value of those data, both for the company and its users.  As its understanding of users increases, Google will be able to provide more personalized services, such as more relevant search results.  And, of course, if it can serve users more useful ads, it can charge advertisers more for those ads.

It is important to note that the new policy has not actually been implemented.  No users of Google products have yet experienced how the policy will affect them or had a chance to react to it.  If users feel the change negatively affects their experience, they will presumably let Google know.

Not being a lawyer, I'm not going to opine on whether this policy is or is not consistent with the FTC Consent Order.  But the episode is troubling if one thinks about its potential effect on innovation on the Internet, which largely depends on the use of information, either to develop and improve products or to fund them.  It seems now that the cost of making even a modest innovation has ratcheted up.

Privacy in Europe

Friday, January 27th, 2012

The EU is apparently considering the adoption of common and highly restrictive privacy standards, which would make firms' use of information much more difficult and would require, for example, that data be retained only as long as necessary.  This is touted as pro-consumer legislation.  However, the effects would be profoundly anti-consumer.  For one thing, ads would be much less targeted, so consumers would get less valuable ads and would not learn as much about products and services aimed at their interests.  For another, fraud and identity theft would become more common, since sellers could not use stored information to verify identity.  Finally, the costs of doing business would increase, so we would expect to see fewer innovations aimed at the European market, and some sellers might avoid that market entirely.

(Cross-posted from the Truth on the Market blog.)