Mergers, Acquisitions and Combining Data

An article in PC World tells us of renewed interest in privacy issues that sometimes arise in mergers and acquisitions. The particular subject of that interest is pooling personal information after the merger or acquisition – combining the personal information that was in the possession of the parties before the merger or acquisition.

The issue was raised in a February 6 letter to Commissioner Edith Ramirez, the Chairwoman of the Federal Trade Commission (the “Letter”).  The Letter was sent by the Center for Digital Democracy, Public Citizen, the United States Public Interest Research Group and Consumer Watchdog. (PC World calls them privacy groups, but it’s likely those groups don’t see their work as limited to privacy issues.)

Given that these groups are well-known and influential, we should probably pay some attention to what they’ve said in the Letter.  The Letter is a little rambling, but the essential point seems to be:

Big Data is Scary – Bigger Data is Scarier

The Letter says that “privacy and consumer protection online” will be adversely affected by: 

  • increasing consolidation in the data-driven consumer-marketing sector, with companies amassing vast holdings of the key element that drives much of online commerce – information on or about individuals
  • merging of far-reaching and powerful datasets
  • merging of even more powerful datasets on U.S. consumers
  • numerous new alliances and special relationships among companies that enable the “pooling” of data

Big Data can be used to compromise privacy and harm consumers.  Bigger Data can compromise privacy and harm consumers even more.

Fair enough.  But:

What Do They Want?

The groups’ list of demands includes:

  • the Commission should be more proactively involved in consumer-information-oriented mergers and acquisitions
  • an investigation into the impact on the American public of the growth in data-oriented mergers and acquisitions and the consolidation of datasets and targeted marketing applications
  • a more effective approach to identifying new problems and threats to competition and consumer protection in the Big Data era

The FTC doesn’t seem to need much prodding to start investigations, though.  Also, as we’ve previously discussed here at Big Data and the Law, the FTC has already shown particular interest in Big Data.  And last year the FTC looked into privacy issues in the context of mergers and acquisitions, when it got involved in the Facebook – WhatsApp acquisition.

In spite of this:

There’s Something New Here

To begin with, the data-combining issue was not a concern in the FTC’s review of the Facebook – WhatsApp acquisition.  That’s important.  In mergers and acquisitions, privacy is not the only legal question arising from combining data after the transaction is complete. The Letter raises another one – whether the aggregation of consumer data may have an anti-competitive effect.

The Letter specifically references “effective competition in the digital marketplace.”   In addition, the Letter calls for the involvement of the Department of Justice in the review of “data-oriented mergers and acquisitions.”  In fact, the Letter states that “the FTC should have urged the Department of Justice to engage in extra scrutiny” of the Oracle – Datalogix transaction, the event that triggered the Letter.  What’s new then is the possibility of competition law becoming a tool in the regulation of Big Data. 




Posted in Big Data, Data Blending, Federal Trade Commission, Privacy

Big Data, Elections and the Law – Does Big Data Matter in Elections?

At least people with money seem to think so – people inside both the Democratic Party and the Republican Party, and people working independently of the parties.

The 2012 Election

The success of the 2012 Obama re-election campaign has been attributed, at least in part, to the campaign’s use of Big Data.  An overview of the campaign’s Big Data operation can be found here, here and especially here.

Investment in Democratic data efforts, and the influence of interested and wealthy individuals, is ongoing.  According to Datanami:

A lot has been written about the data savvy of President Obama’s re-election campaign and how it changed the game from gut-level guesswork to data-driven campaigning. Much of the direction of this effort was reportedly originally based on the counsel of Google’s Eric Schmidt, who it was revealed this week has made an investment to keep the Obama Campaign’s analytics group, previously known as “the Cave,” together.

Republican Efforts

After the 2012 election, the Republican National Committee publicly acknowledged the edge that President Obama’s reelection campaign gained from its use of Big Data.

Some wondered how the Republicans could overcome the perceived Republican technology deficit. But the Republican Party began investing in Big Data before the 2012 election.

For example, there’s Data Trust. The Huffington Post tells us that Data Trust is a:

…private-sector company set up to house and maintain the RNC’s voter file while avoiding campaign finance rules that apply to federal committees….

Data Trust was formed in 2011. In a January 16, 2014 article, National Review Online  said that Data Trust was “created in 2011 to shoulder the cost of building and managing the GOP’s voter file” and is “effectively a subsidiary of the RNC.”

The Themis database is a Big Data effort of certain Republican Party supporters. According to Reuters:

People with direct knowledge of the group as well as political technology industry veterans say it is backed by the Koch brothers, although their names do not appear on an annual regulatory filing and Koch Industries spokespeople did not respond to requests for comment.

i360 is a for-profit business that describes itself as “the leading data and technology resource for the pro-free-market political and advocacy community.” There is, reportedly, a relationship between Themis and i360, but it isn’t clear to us what that relationship is. On the other hand, there is a clear relationship between i360 and Data Trust. It’s clear because i360 announced it in this press release.

Why It Matters


Evidently, Big Data makes campaign spending more efficient.  Using your campaign funds more efficiently gives you an advantage over less efficient campaigns. You get more votes for each dollar.

A general description of how this is supposed to work can be found here, in a blog post summarizing Political Campaigns and Big Data, an article that appeared in the Spring 2014 issue of the Journal of Economic Perspectives. The essential thing is that campaigns use data to “target individual voters” and thereby “concentrate their resources where they will be most effective.” There are similar discussions here and here, among many other places.

In the past, some financially disadvantaged campaigns compensated by using comparatively larger groups of volunteer campaign workers. But volunteers guided by Big Data are more effective than less data-focused volunteers.  So money wins again.

Finally, if Big Data campaigns are more effective in targeting voters, it seems reasonable to conclude that Big Data campaigns can more effectively target campaign contributors.  The rich get richer.

Not Everyone Agrees

After the 2014 election, Big Data took some hits.  In How Election 2014 humbled the high priests of American politics, the Christian Science Monitor said:

Well, the political class bet big on big data this fall, and it didn’t turn out so hot.

The title of a Wall Street Journal article by Patrick O’Connor and Dante Chinni tells us: Results Show Limits of Big Data.

But the Big Data investment continues.

Campaign Law Issues 

The investment of money in Big Data isn’t the only indication of the importance of Big Data in politics.  The collection and use of Big Data in elections has also now become the subject of election law disputes.   

In this complaint filed with the Federal Election Commission before the 2014 election, there are allegations of wrongdoing that involve i360, Data Trust, Themis and others. The allegations relate directly to the manner in which campaign data is collected and shared.

After the election we had another data-related issue.  As described in a CNN report:

Republicans and outside groups used anonymous Twitter accounts to share internal polling data ahead of the midterm elections, CNN has learned, a practice that raises questions about whether they violated campaign finance laws that prohibit coordination.

The Twitter accounts were hidden in plain sight. The profiles were publicly available but meaningless without knowledge of how to find them and decode the information, according to a source with knowledge of the activities.

Other Legal Issues?

We’re talking about elections here.  The winners make the law.


Posted in Big Data, Elections, Politics

The Federal Trade Commission Wants In On Big Data Regulation

The FTC has a tendency to see a role anywhere it wants to see a role.

As described on its website, the FTC has a very broad mandate. Among other things, the FTC has the authority to investigate and prosecute cases of “unfair or deceptive acts or practices in or affecting commerce.”  As a practical matter, “unfair or deceptive acts or practices” means whatever the FTC says it means.

Apparently the FTC is now looking for “unfair or deceptive acts or practices” in the Big Data world.  

The FTC gave clues about where it is looking in its recent event – “Big Data: A Tool for Inclusion or Exclusion?”

The FTC Chairwoman explained the purpose of the event as follows:

“A growing number of companies are increasingly using big data analytics techniques to categorize consumers and make predictions about their behavior,” said FTC Chairwoman Edith Ramirez. “As part of the FTC’s ongoing work to shed light on the full scope of big data practices, our workshop will examine the potentially positive and negative effects of big data on low income and underserved populations.”

It’s not clear what “underserved” means in this context, but certainly Big Data can be used to discriminate (unfairly) in a number of different ways. Example discussions of this are here and here.  On the other hand, as suggested in the quote above (and as discussed here), Big Data might be useful in combating discrimination.

These are real concerns, and it’s great that the FTC wants to keep up with changing technology. Unfortunately, if this event is any indication, the FTC has a lot of catching up to do.

Consider this from the FTC’s event notice:

The FTC has found that, in some cases, companies are targeting ads based on racial or other assumptions, said Latanya Sweeney, the agency’s CTO. At a website for members of Omega Psi Phi, an African-American fraternity, the agency found ads for defense lawyers and for users to check their own criminal backgrounds, she said. The site also had a large number of ads for poorly rated credit cards, she said.

This is not a particularly current issue. Targeted advertising has been talked about for a long time – certainly long before we started talking about Big Data. 

That said, it is true that the FTC has a role to play in privacy law and in some data-related matters. To the extent that role extends to Big Data when the data includes personal information, the FTC has a role to play in Big Data.

However – the FTC is not very good at recognizing the limits of its jurisdiction. Consider data security breach issues.

FTC Chairwoman Edith Ramirez stated in testimony before Congress:

Under current laws, the FTC only has the authority to seek civil penalties for data security violations involving companies that fail to protect children’s information provided online in violation of the COPPA Rule or credit report information in violation of the FCRA. The Commission also recommends data security legislation that would provide the agency with jurisdiction over non-profits, which have been the source of a substantial number of breaches.

In light of this statement, it’s difficult to understand how the FTC decided it has the authority to prosecute enforcement actions with respect to data breaches.

(Note also that the FTC’s self-defined jurisdiction in this regard, as well as the manner in which the FTC presumes to exercise that jurisdiction, has been challenged in currently active litigation, though so far without success.)

Obviously, it’s good that the FTC wants to increase its understanding of relevant issues. But a government agency should not be looking for a role in the latest cool thing just because it wants one.

Posted in Big Data, Federal Trade Commission, Regulation

Crossing Borders with Big Data in the Cloud – and the Law

An article in Gigaom lists some of the geographical issues you might want to consider when choosing a Cloud service provider.  The focus of the article is on technical issues – latency and redundancy – and how locating data centers in different countries might affect the significance of those issues.

What about data location and the law?

Although it’s not really called one, there is a reference in the article to one legal issue. Specifically, the article states:

Data protection: Different types of data have different locality requirements, e.g. requiring personal data to remain within the EU.

That is true.  Bringing your data across a border might be a privacy law problem. 

But there are others of course.  Export control regulation is a particularly important example.  (We touched on this in a post here at Big Data and the Law when we talked about the guy who made guns by printing them.)

Export Control – the Basics

The essential thing is that, under U.S. law, some data can’t be exported from the U.S. without the permission of the U.S. government.  U.S. law also provides that some data can’t be moved outside the U.S. at all, and pretty much nothing can be moved to certain countries.

Certainly not all data gives you an export control problem, and not all export destinations give you an export control problem.  You’re probably OK moving your data to Canada.  (That’s probably – not definitely.  See disclaimer of legal advice.)  Don’t plan to send anything to North Korea though.

The application of U.S. export control is complicated in some cases and a little counter-intuitive in some cases as well.  For example, sometimes you can have a U.S. export control problem with data you bring into the country and then move it out again.  That’s right.  You can receive data from someone who is outside the U.S. and not be permitted to send the same data back to the same person you got it from.  This has been an issue with encryption technology.

While we’re at it, you should know that under export control law location isn’t just a question of geography.  Location also means a place where certain people have access to your data.  Even if your data is located in a nice secure facility in the United States, you have a potential problem if people who are not U.S. citizens can access your data in that nice secure facility in the United States.

Export Control and the Cloud

There are two basic facts you need to know to determine whether, in your case, moving your data to the Cloud is a problem under U.S. export control regulations: what is the data that you’re putting in the Cloud, and where is the data going to be stored?

Easier said than known.
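As a toy illustration of that two-fact determination – and only an illustration; the data classifications and country lists below are invented for this sketch, not actual export-control categories, and a real determination requires counsel and the actual regulations:

```python
# Toy sketch of the two-fact export check: what is the data, and where
# is it going? The classifications and country lists are invented for
# illustration only.
EMBARGOED = {"North Korea", "Iran"}  # hypothetical full-embargo list
CONTROLLED_DATA = {"encryption-source", "defense-technical"}  # hypothetical classes

def needs_export_review(data_class: str, destination: str) -> bool:
    """Flag a proposed Cloud data placement for export-control review."""
    if destination in EMBARGOED:
        # Pretty much nothing can be moved to embargoed countries.
        return True
    # Controlled data needs review for any foreign destination.
    return data_class in CONTROLLED_DATA

print(needs_export_review("marketing-lists", "Canada"))     # False
print(needs_export_review("encryption-source", "Germany"))  # True
```

You’re probably OK moving your marketing lists to Canada, in other words, but the answer changes with either fact – which is why the changes discussed below matter.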

Your first problem is – how do you know where your data is going?  Unless you get a commitment from the service provider to keep your data in the countries you designate, you don’t know.  Vendors have their reasons for choosing where they locate their data centers, and those reasons might not be consistent with your expectations or assumptions.

You also have the problem of change. 

For starters, in all likelihood the substance of your data will change.  Today’s data might not present you with an export control problem but tomorrow’s data might be a different matter entirely. Yet another reason for good data management practices.

Your data might also get moved – or replicated to systems in other data centers – whether for disaster recovery, archiving or other reasons.  Those data centers might be in places the law doesn’t want your data to be.  And don’t forget the thing about who has access to your data.

Export Control – the Risks

A number of different things can happen to you if you violate U.S. export control law.  You can be fined.  You can be barred from doing business with the U.S. government.  And you can be sent to prison.  Here’s an example of a criminal prosecution from the U.S. Department of Justice March 2014 Summary of Major U.S. Export Enforcement, Economic Espionage, Trade Secret and Embargo-Related Criminal Cases:

Hwa obtained contracts to supply circuit boards to the U.S. Navy, by falsely claiming the boards would be manufactured in the United States. Instead, Hwa illegally sent restricted information to a company in Taiwan for the boards to be manufactured there.

That’s data. 

Export Control in Other Countries

Finally, don’t assume that the United States is the only country with export control law. The Federal Republic of Germany, among others, has one.  You can find some information about that here in the 2013 Brief Outline on Export Controls prepared by the Bundesamt für Wirtschaft und Ausfuhrkontrolle.  Note this snippet from that document:

The provision of software and technology in the companies’ intranets or in the internet is also subject to licensing if the access to software and technology is possible from third states. Please note that a licensing requirement does not presuppose that the access took place or not.

Very much like U.S. export control law.

You can find links to the export control laws of other countries here at the U.S. Department of State website. 

Posted in Big Data, Cloud, Export Control

Follow-up on Data Collection and Personal Responsibility – More about the Google Glass Debate

As we discussed in our last post, people are collecting lots and lots of the personal information that ends up being publicly disclosed.  A couple of interesting articles have been published since then that shed further light on the problem.

In this article in Business Insider, we see more evidence that Google Glass is more than just a potential annoyance; it’s a potential vehicle for significant disclosure of personal data.  The article notes, for example, some of the things that can be recorded by Google Glass that are scarier than your behavior at that party the other night.  Standing behind you at an ATM, for example, gives the Google Glass wearer a chance to record your PIN.

The key point of the article, however, is less about the kinds of data that can be collected with Google Glass than the fact that Google Glass can be hacked – with the potential result that data collected with no bad intent can be accessed by people who do have bad intent.  So merely collecting data with Google Glass is a matter of personal responsibility, no matter the circumstances.  And yes, that is true of any connected device.  That doesn’t let the Google Glass user off the hook; rather, it should put the Google Glass user on notice of the risks.

One more thing I haven’t seen in the Google Glass debate –  if you believe that the NSA is collecting all data that you send through any network, and if you don’t like that, think about what happens to all those images you’re collecting.  Think about NSA facial recognition programs.

Maybe you’re collecting data for the Man.

On the other side of the issue, an article in Mashable discusses the possibility of jamming Google Glass network access:

The technology, called, detects Glass devices on a Wi-Fi network by their media access control (MAC) addresses and blocks their access. The program, which runs on Raspberry Pi and BeagleBone mini computers, can also “emit a beep to signal the Glass-wearer’s presence to anyone nearby,” according to Wired. The program works via a USB network antenna. It uses Aircrack-NG to impersonate the network and send a deauthorization command, which blocks Glass’s Wi-Fi connection.
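The detection half of that description (not the jamming) is straightforward in principle: the first three octets of a MAC address are a vendor prefix (an OUI), so spotting a class of device on a network is a matter of prefix matching. A minimal sketch – the prefix used below is invented for illustration, not Glass’s actual OUI:

```python
# Hedged sketch: flag devices on a network by MAC-address vendor
# prefix (OUI), the basic trick described in the quote above.
# The prefix below is hypothetical, not Glass's real OUI.
WATCHED_OUI_PREFIXES = {"F8:8F:CA"}

def normalize_mac(mac: str) -> str:
    """Uppercase a MAC address and unify separators to colons."""
    return mac.upper().replace("-", ":")

def looks_like_watched_device(mac: str) -> bool:
    """Return True if the MAC's first three octets match a watched OUI."""
    octets = normalize_mac(mac).split(":")
    return ":".join(octets[:3]) in WATCHED_OUI_PREFIXES

print(looks_like_watched_device("f8-8f-ca-12-34-56"))  # True
print(looks_like_watched_device("00:11:22:33:44:55"))  # False
```

Everything after that point in the quoted description – the deauthorization step – is network interference, which is exactly why the legality of such tools is part of the debate.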

One cultural note about the Mashable article.  This is the first sentence:

A Berlin artist may have escalated the persecution of Google Glass enthusiasts with a program that lets you jam their Wi-Fi reception.


An article on TweakTown also deals with the Google Glass jamming issue.

The TweakTown article will point you to Julian Oliver of Stop the Cyborgs.  Whatever your perspective on the Google Glass debate and the general discussion of privacy and data collection, Stop the Cyborgs is a place to find thoughtful and informative reading.  If you are at all interested in the subject, you should go there.

Considering these issues in the abstract is easier than considering them in the context of real events.  Here’s an example: 

A summary of the facts as reported in this article in the StarTribune:

    • Two paramedics examined a man outside his apartment building. We’ll call him the “Patient.”
    • Andrew J. Henderson happened to be there and happened to have a camera with him.
    • Mr. Henderson filmed the paramedics as they went about their work.
    • At some point, the paramedics asked the Patient about his medical history.
    • One of the paramedics asked Mr. Henderson to stop filming.
    • The paramedic says he was concerned for the Patient’s privacy.
    • Deputy Sheriff Jacqueline Muellner was also present at the scene.
    • Deputy Muellner took Mr. Henderson’s camera.
    • The camera was returned three weeks later.
    • Mr. Henderson says his film of the incident was gone when the camera was returned.

So – what’s right in this case?

    • What’s more important – privacy or freedom to film?
    • Does it make a difference that personal medical information was involved?
    • Does it matter that, in this situation, the Patient wasn’t in a public space voluntarily?
    • Should Mr. Henderson have asked for permission to film?
    • If so, who should he have asked?
    • Would the answers to any of these questions be different depending on whether the film was posted online?

Perhaps most importantly, who gets to decide what’s right?





Posted in Big Data, Confidentiality, Data Collection, Ethics, NSA

Data Collection, Personal Responsibility and the Law

In a recent article in the Huffington Post, John Whitehead asks this question:

What would happen if the most powerful technology company in the world and the largest clandestine spying agency in the world joined forces?

Similarly, a recent article in Politico asks – Who watches the watchers?

It’s perfectly reasonable to be concerned about the perceived Google – NSA axis of evil. However, our discussions about privacy seem to overlook the role of the individual.

Mr. Whitehead notes that: 

 … we’re helping Corporate America build a dossier for its government counterparts on who we know, what we think, how we spend our money, and how we spend our time.

In saying this, he’s really referring to the information we volunteer about ourselves.

What about information that we collect and disclose – information about other people? We’re collecting and disclosing a lot of that – in a lot of ways. You do it when you post a picture of your friends. You do it when you post a video that you take in public. Intentional or not, in many such cases we’re sharing personal information about those other people. (Remember, we don’t say personally identifiable information here at Big Data and the Law.)

Geo-location metadata is one way we might disclose people’s personal information – specifically, where people are when their images are captured in a photo or video. This is how John McAfee (of the security software company McAfee) was found when he was a fugitive a few years ago.  He let his picture be taken and posted online – with metadata that gave away his whereabouts.  You would think he would know better.

Apparently he does now. McAfee (the company) later blogged about the risks of geo-location metadata. They say this:

For instance, you may not mind sharing your exact location with friends and family, but what about with people you don’t know? When your location is broadcast on social networks such as Facebook and Twitter, you lose control of the information. Anyone can see it. Say you check into a hotel while on vacation. A thief could see your check-in, do an online search for your home address and rob you while you’re away.

Or you could get arrested.
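For the curious, the geo-location metadata in question typically lives in a photo’s EXIF GPS tags, which store latitude and longitude as degree/minute/second rationals plus a hemisphere reference. A minimal sketch of the conversion to the decimal degrees a mapping site would use – the coordinate values here are made up for illustration:

```python
from fractions import Fraction

# Hedged sketch: EXIF GPS tags store a coordinate as three rationals
# (degrees, minutes, seconds) plus a hemisphere reference ("N"/"S" or
# "E"/"W"). This converts that form to signed decimal degrees.
def dms_to_decimal(dms, ref):
    """Convert EXIF-style (deg, min, sec) rationals to decimal degrees."""
    degrees, minutes, seconds = (float(Fraction(x)) for x in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative.
    return -decimal if ref in ("S", "W") else decimal

# A hypothetical GPSLatitude / GPSLongitude pair from a photo's EXIF block:
lat = dms_to_decimal((Fraction(40), Fraction(44), Fraction(54, 100)), "N")
lon = dms_to_decimal((Fraction(73), Fraction(59), Fraction(9, 10)), "W")
print(round(lat, 4), round(lon, 4))  # 40.7335 -73.9836
```

The point is how little it takes: anyone with the file can recover a street-level location from a handful of numbers the camera wrote without the photographer noticing.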

Face recognition technology provides the means to connect names and faces, and possibly places and times as well. It seems that the NSA is up on this technology. But then, so is Facebook.

Now consider the many ways people are collecting images.  More than just with our mobile devices and cameras – Google Glass for example. 

In an article in The New York Times, Dr. Joseph Atick gets right at the consequences of individuals collecting people’s pictures and the use of face recognition technology:

Dr. Atick sees convenience in these kinds of uses as well. But he provides a cautionary counterexample to make his case. Just a few months back, he heard about NameTag, an app that, according to its news release, was available in an early form to people trying out Google Glass. Users had only to glance at a stranger and NameTag would instantly return a match complete with that stranger’s name, occupation and public Facebook profile information. “We are basically allowing our fellow citizens to surveil us,” Dr. Atick told me on the trade-show floor.

On the other hand, there is at least one Google Glass app that you might conclude has an intended use that, on balance, justifies the potential privacy loss.  You can get it here.

You can download the source code for face recognition software from CNET, by the way.  A face recognition app for Google Glass is also under development, although that app is reportedly unauthorized.

People are using their own personal drones now as well. (Drone lawyers are now a thing by the way – one lawyer has a Drone Law Journal.)

With this in mind, it’s important to know what information we collect and disclose. Not everyone knows if or when their phones or cameras collect geo-location metadata. And not everyone knows that geo-location metadata sometimes remains attached to their pictures when those pictures are posted or otherwise shared.

Some people don’t want their image posted on Facebook or YouTube. Do we get consent from everyone before we do that?

In the end, the problem with relying on personal responsibility is that there are so many different perspectives about what is right – as we have discussed here in the past.

So that’s the personal responsibility part. What about the law?

When it comes to people collecting and disclosing personal information, the law is pretty spotty. For the most part, the law regarding privacy is focused on businesses and their collection, disclosure and use practices. One reason for this is jurisdiction. For example, the Federal Trade Commission is the principal regulator of data privacy issues in the U.S. Federal Government. The FTC’s jurisdiction is very broad in some ways, but it is limited to business practices, so the FTC can’t take on issues with respect to individuals.

So the law in this regard has developed on an issue-by-issue basis. Online bullying has received attention by lawmakers, as has revenge porn and some other things. But there is no broad – hey you can’t do that – kind of regulation of individual conduct.

People can be as angry as they want about corporate collection, use and disclosure – but this is where Mr. Whitehead is correct – that information comes from somewhere.

There is no reason why individual collection, use and disclosure of personal information can’t be regulated in some way.  Intellectual property law applies to individual conduct and intellectual property certainly gets enforced.

When people, legislatures and regulators figure that out, there could be some movement on this issue.  But it’s a reasonable assumption that it’s not going to happen. That kind of thing gets dealt with on an issue-by-issue basis because it typically takes a lot of issue-specific public outrage for anything to get done. So we’ll probably be waiting forever for the behavior of individual people to become subject to anything close to the scrutiny of the Googles of this world.

In the meantime, the next time you see someone wearing Google Glass – ask that person to disclose their privacy policy.



Posted in Big Data, Data Collection, Internet of Things, NSA, Privacy

The Federal Trade Commission Wants Transparency in Data Practices – But FTC Commissioners Are Not Transparent about Their Activities

It appears that we’ve got a problem with FTC Commissioners having undisclosed meetings with business representatives and business groups.  At a time when the FTC is seeking to expand its authority over data and privacy matters this is particularly troubling.

On its website, the Electronic Privacy Information Center describes one specific instance of the problem:

FTC Commissioner Wright Meets with Industry Lobbyists, Not Consumer Representatives: Through a Freedom of Information Act request, EPIC obtained the appointment calendar of FTC Commissioner Wright. The Commissioner’s calendar reveals many meetings with corporate presentatives [sic] but no meetings with public interest organizations representing consumers. One of FTC’s primary missions is to protect consumers from unfair and deceptive business practices. Commissioner Wright became an FTC Commissioner in January 2013. Since then he has met with representatives from Apple, Microsoft, Verizon, Qualcomm, the Network Advertising Initiative, and the Consumer Data Industry Association. He has attended industry conferences and given talks at trade association meetings.

Commissioner Wright isn’t the only Commissioner who has not been forthcoming about meetings with corporate interests.  Big Data and the Law has stumbled across another one.

It appears that Commissioner Julie Brill has had at least two meetings with industry representatives that do not appear on the published list of her “Speeches, Articles, and Statements” found on the FTC website. 

Commissioner Brill was scheduled to take part in presentations to the Centre for Information Policy Leadership Annual Membership Retreat on June 12-13, 2012 and June 13, 2013.  If you believe in transparency in government, the Centre is a particularly troubling group.

To begin with, Centre membership is expensive enough to be pretty exclusive.  Depending upon the level of membership, the Centre charges either $50,000 or $30,000 annually.  Not very many companies have the budget for that, which is probably why there are only 36 Centre members, and why they are companies like American Express, Apple, Bank of America, Boeing, Facebook, Google, MasterCard, Oracle, Verizon, Visa, Wal-Mart and Yahoo.

The Centre also seems to think that transparency is a bad thing.

In 2012 the National Telecommunications and Information Administration proposed a “Multistakeholder Process to Develop Consumer Data Privacy Codes of Conduct.” In a letter to the NTIA about that process, the previous President of the Centre had this to say:

When applied to the actual development of enforceable best practices, however, the Centre believes an effective process requires that the business community serve as the primary drafter of codes and be allowed a closed forum in which to probe sensitive questions and test the workability of proposed approaches.

The Centre is concerned that the Administration’s multi-stakeholder process…  While any process to create industry codes or guidance should be open and inclusive, it must also include ample opportunity for testing ideas, candidly airing points of concern and disagreement, and discussing matters that may involve proprietary information or controversial data practices.  Such robust debate would be discouraged by the presence of media or the possibility of a written transcript of discussions.  This is particularly important when best practices will likely have a direct effect on the internal processes of companies, and will be enforceable against companies that voluntarily adopt them.

This does not mean that policymakers, experts, advocates and the public have no role in the development of best practices.  The Centre urges their full engagement.  Once the initial draft of best practices is developed by industry….

The Centre’s position, then, is that industry should create the Codes of Conduct in secret; otherwise industry might have to publicly disclose its “controversial data practices.”  Policymakers (presumably regulators and legislators) can say what they think after the secret drafting is done. 

If that’s the Centre’s view of things, how is it appropriate for Commissioner Brill to meet with the Centre’s members in private?  At a minimum it looks bad.  It looks worse when you see the Centre’s description of the 2013 presentation in which Commissioner Brill is listed as a participant:

Legitimacy, Fairness and Big Data

In the big data context, how should companies reconcile requirements of legitimate processing in data protection regimes with fair processing concerns in the U.S.? Have legitimate process requirements in places like Europe and fair processing concerns in the U.S. begun to coalesce over time? How should regulators approach enforcement in this context?

Let’s summarize. 

We have an industry group with a very exclusive membership. In the recent past, the principal executive of that group has expressed hostility to transparency in the development of enforceable codes of business practices that presumably would apply to its members.  We have a regulator meeting in private with that group, notwithstanding that the group’s members are subject to the authority of that regulator’s agency.  And it appears that the topic of at least one such meeting was how regulators should “approach enforcement” within their jurisdiction. 

One more thing – providing a detailed and lengthy list of appearances and presentations gives the impression that there aren’t any other appearances or presentations to list.  EPIC had to resort to a Freedom of Information Act request to get full disclosure of Commissioner Wright’s meetings with industry.  Big Data and the Law learned about Commissioner Brill’s meetings with the Centre by happenstance.  We have no idea whether there are other such undisclosed meetings.

This all seems inconsistent with Commissioner Brill’s stated views on enforcement and transparency, which we here at Big Data and the Law have no reason to doubt.  

However, at a time when the FTC wants more authority over data privacy and security matters, it is more important than ever that we know about the relationships between the FTC and those subject to regulation by the FTC.  It’s clear we haven’t been told everything we should know.

Posted in Big Data, Federal Trade Commission, Policy, Privacy, Regulation

Paper is the New Black

It’s Friday, it’s April and it’s snowing – this is no time for serious stuff.  We’ll do the Big Data thing next week.  Today let’s talk more about paper instead.

As we said earlier this week, paper has many advantages not found in the digital world – security being one. Now we find that it’s possible to sync paper to the cloud. All you need is the Mod. It’s an old-school notebook that lets you think analog and store digital.

If you need instruction in pencil sharpening, go here and order – How To Sharpen Pencils: A Practical & Theoretical Treatise on the Artisanal Craft of Pencil Sharpening for Writers, Artists, Contractors, Flange Turners, Anglesmiths, & Civil Servants.

Yes Fast Company, paper is awesome.

Posted in Big Data, Friday Fun, Fun Facts, Old School

For the Ultimate in Bitcoin Security – Use Paper

You can’t make up this stuff.

From an Ars Technica article:

Carlson continues to store bitcoins both on hosted platforms like Coinbase and on his own hardware. He also uses paper wallets stored in a bank vault as a sort of low-tech backup. Bitcoin paper wallets contain a wallet’s Bitcoin address and private key, and, if secured properly, they can be one of the safest ways to store bitcoins.

From a site self-described as “Bitcoin’s most popular bitcoin wallet and block explorer”:

Paper Wallet Advantages

  • Protection from malware and keyloggers.
  • Maintain 100% ownership of your private keys. You own the coins, not a 3rd party service.
  • No dependence on the security of any website.
  • Keeping a piece of paper safe is easier than keeping your computer secure.

That’s right – keeping a piece of paper safe is easier than keeping your computer secure.
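For the technically curious, the private-key half of a paper wallet is just a short string you can print. A minimal sketch of Wallet Import Format (WIF) encoding, using only the Python standard library (deriving the matching Bitcoin address would additionally require secp256k1 elliptic-curve math, which we omit; the example key is a well-known test value, never to be used for real funds):

```python
import hashlib

# Base58 alphabet used by Bitcoin (omits 0, O, I and l to avoid confusion).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # Each leading zero byte is encoded as a leading '1'.
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def to_wif(private_key: bytes) -> str:
    """Encode a 32-byte private key in Wallet Import Format (mainnet, uncompressed)."""
    payload = b"\x80" + private_key  # 0x80 = mainnet version prefix
    # Checksum is the first 4 bytes of a double SHA-256 of the payload.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return base58_encode(payload + checksum)

# Throwaway example key -- a predictable key like this must never hold real coins.
key = bytes.fromhex("0c28fca386c7a227600b2fe50b7cae11ec86d3bf1fbe471be89827e19d72aa1d")
print(to_wif(key))  # a 51-character string starting with '5'
```

That 51-character string, written or printed on paper, is the “wallet” – anyone holding it (and only someone holding it) can spend the coins, which is exactly why the security of the paper matters.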

Here’s another idea:

Want to communicate with no risk of interception by the NSA?  Use the mail – the one with stamps and envelopes. The only code you’ll need is a zip code.  Plus – it’s open source technology.  Get some stationery and ask someone over 50 how it works.

Neither snow nor rain nor heat nor gloom of night can stop it!

This is fun.

Posted in Big Data, Bitcoin, Data Security, Fun Facts, Privacy, Technology

Data Governance and the Law – Facebook’s Acquisition of WhatsApp Might Bring Some Needed Clarity

By now it should be clear to everyone that when you have a privacy policy you are expected to abide by the terms of that policy.  So, for example, if your privacy policy says you will not disclose any personal information that you collect, you should not disclose any personal information that you collect. 

But what if you change your privacy policy?  What if your new privacy policy says that you have the right to disclose personal information that you collect after, let’s say, December 31, 2014?  Presumably that means that you can disclose any personal information that is collected after December 31, 2014.  But that should not give you the right to disclose personal information that you collected before January 1, 2015. 

This raises a data governance problem.  How do you separate (and keep separate) two bodies of information collected at two different times under two different rules?

Here at Big Data and the Law we assume this happens more frequently than we hear about.  Perhaps in some cases the changes in privacy policy aren’t significant and don’t require a change in information practices.  In other cases, the collecting parties might have data governance practices that can handle any problems that result from privacy policy changes.  It’s likely, however, that in some cases the issue is simply ignored and no one notices. 
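One common approach – a minimal sketch of our own, not anything mandated by any regulator – is to tag each record at collection time with the date (or policy version) under which it was collected, and gate every disclosure on that tag. The cutoff date below is the hypothetical one from our example:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical cutoff: the new policy permits disclosure only of data
# collected after December 31, 2014.
NEW_POLICY_EFFECTIVE = date(2015, 1, 1)

@dataclass(frozen=True)
class Record:
    user_id: str
    collected_on: date  # stamped once, at collection time

def may_disclose(record: Record) -> bool:
    """Only records collected under the new policy may be disclosed."""
    return record.collected_on >= NEW_POLICY_EFFECTIVE

old = Record("alice", date(2014, 6, 1))   # pre-cutoff: stays insulated
new = Record("bob", date(2015, 3, 15))    # post-cutoff: disclosable
print(may_disclose(old), may_disclose(new))
```

The hard part, of course, is not the date comparison but making sure the tag survives every merge, export and acquisition – which is precisely the governance question the WhatsApp complaint raises.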

In this case, people noticed.  The Electronic Privacy Information Center and the Center for Digital Democracy filed a complaint with the Federal Trade Commission (FTC) in which they assert:

  • Facebook routinely incorporates data from companies it has acquired.
  • WhatsApp’s privacy policies and official blog posts reflect a strong commitment to user privacy.
  • WhatsApp’s messaging service regularly collects and stores virtually all available user data.
  • The Commission has previously found that a company may not repurpose user data for a use other than the one for which the user’s data was collected without first obtaining the user’s “express affirmative consent.”
  • By failing to make special provisions to protect user data in the event of an acquisition, WhatsApp “unreasonably creates or takes advantage of an obstacle to the free exercise of consumer decisionmaking.”
  • Specifically, WhatsApp users could not reasonably have anticipated that by selecting a pro-privacy messaging service, they would subject their data to Facebook’s data collection practices.
  • Therefore, WhatsApp’s inadequate disclosures constitute unfair acts or practices in violation of Section 5 of the FTC Act, 15 U.S.C. § 45(n).

We’re skipping around a bit here, but we invoke the blogger’s right to summarize and generalize for the sake of brevity.

The complaint makes these requests for FTC action:

EPIC urges the Commission to investigate WhatsApp, Inc., and enjoin its unfair and deceptive data collection practices for any future changes to its privacy policy.

Specifically, EPIC requests the Commission to:

a. Initiate an investigation of the proposed acquisition of WhatsApp by Facebook, specifically with regard to the ability of Facebook to access WhatsApp’s store of user mobile phone numbers and metadata;

b. Until the issues identified in this Complaint are adequately resolved, use the Commission’s authority to review mergers to halt Facebook’s proposed acquisition of WhatsApp;

c. In the event that the acquisition proceeds, order Facebook to insulate WhatsApp users’ information from access by Facebook’s data collection practices; and

d. Provide such other relief as the Commission finds necessary and appropriate.

Which brings us to the point – what might we learn from the FTC addressing the EPIC/CDD complaint?  We’re hoping the FTC answers these questions:

1. Can information collected under the terms of a privacy policy be used in a manner that is inconsistent with the terms of that policy?

2. Can the FTC intervene in a situation where there is only a possibility or risk of commingling information collected under two or more different rules or assumptions?

3. What remedies can the FTC impose if it finds that possibility or risk?

Justice Oliver Wendell Holmes, Jr. said, “Great cases, like hard cases, make bad law.”  We’re hopeful for good law to come out of this case, because the facts of the case are clear – even if the issues are not. 

The complaint can be found here:

Posted in Big Data, Data Blending, Data Governance, Facebook, Federal Trade Commission, Privacy