If Big Data is supposed to be anything, it’s supposed to be a tool for gaining knowledge and insight. There are times when having knowledge and insight is non-optional. One of those times is when you make products that can hurt people.
As we have previously noted here at Big Data and the Law, it’s dangerous to make broad, general, simple statements about legal issues, but this is a blog – so here we go.
The treatise American Law of Products Liability 3d tells us this:
For the purpose of determining whether a product manufacturer has sufficient knowledge to give rise to a duty to warn, the manufacturer is held to the degree of knowledge and skill of an expert. In their capacity as experts, manufacturers must keep abreast of scientific knowledge, discoveries, and advances, and are presumed to know what is imparted thereby. They must be aware of all current information that may be gleaned from research, adverse reaction reports, scientific literature, and other available methods. This high standard ensures that the public will be protected from dangers as those dangers are discovered.
I give particular significance to “and other available methods.”
Take medical research. The work described in this New York Times article pretty much puts to rest any doubt that Big Data is now a legitimate available method – to be ignored at one’s own peril.
The short version:
A Stanford graduate student created an algorithm that, when run against medical records from the Stanford University Medical Center, identified a combination of two commonly prescribed drugs that causes a rise in blood sugar – a rise that neither drug causes on its own.
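The core of that kind of analysis can be sketched in miniature: group patients by drug exposure and compare an outcome measure across the groups. A minimal Python sketch – the drug names match the pair reported in the Stanford work, but every record and number below is invented for illustration:

```python
from statistics import mean

# Hypothetical patient records: drugs on file and a fasting glucose reading (mg/dL).
# All values are invented; a real analysis would use thousands of records.
records = [
    {"drugs": {"paroxetine", "pravastatin"}, "glucose": 128},
    {"drugs": {"paroxetine", "pravastatin"}, "glucose": 135},
    {"drugs": {"paroxetine"}, "glucose": 102},
    {"drugs": {"paroxetine"}, "glucose": 105},
    {"drugs": {"pravastatin"}, "glucose": 99},
    {"drugs": {"pravastatin"}, "glucose": 101},
    {"drugs": set(), "glucose": 97},
    {"drugs": set(), "glucose": 100},
]

def mean_glucose(records, required, excluded=frozenset()):
    """Average glucose among patients on all `required` drugs and none of `excluded`."""
    vals = [r["glucose"] for r in records
            if required <= r["drugs"] and not (excluded & r["drugs"])]
    return mean(vals)

both = mean_glucose(records, {"paroxetine", "pravastatin"})
a_only = mean_glucose(records, {"paroxetine"}, excluded={"pravastatin"})
b_only = mean_glucose(records, {"pravastatin"}, excluded={"paroxetine"})

# A combination "signal": glucose on both drugs exceeds glucose on either alone.
signal = both > max(a_only, b_only)
```

The real study involved far more sophistication (confounder adjustment, validation against other data sources), but the shape of the question – does the combination look worse than either drug alone? – is exactly this.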
And now courts are beginning to connect Big Data as a source of knowledge and insight with manufacturers’ obligation to, as described above:
… be aware of all current information that may be gleaned from research, adverse reaction reports, scientific literature, and other available methods
This past April, in a product liability case dealing with the drug Fosamax, a Federal judge ruled on the qualifications of an individual as an expert witness. That person is Dr. David Madigan, Professor and Chair of Statistics at Columbia University. In qualifying him as an expert [oversimplification alert], the court also endorsed the idea that data analysis like the work at Stanford is one way to determine what a manufacturer should know.
The court said:
In fact, “[p]harmaceutical companies, health authorities, and drug monitoring centers use SRS databases for global screening for signals of new adverse events or changes in the frequency, character, or severity of existing adverse events (AEs) after regulatory authorization for use in clinical practice.” “SRS systems provide the primary data for day-to-day drug safety surveillance by regulators and manufacturers worldwide.” In addition, the QScan software Dr. Madigan used in formulating his opinion is generally accepted by the scientific community because it “has been in widespread use for over 10 years and has been validated extensively.” Moreover, “[m]any peer-reviewed publications report results derived from QScan.”
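Screening an SRS database of the kind the court describes typically rests on disproportionality statistics such as the proportional reporting ratio (PRR): is a given adverse event reported disproportionately often with a given drug, relative to all other drugs? A toy computation, with invented counts (the threshold shown is a common rule of thumb, not a quote from the opinion):

```python
# Toy disproportionality screen over spontaneous-report counts.
# PRR = [a/(a+b)] / [c/(c+d)] where:
#   a = reports of the drug WITH the event of interest
#   b = reports of the drug with other events
#   c = reports of the event with all other drugs
#   d = all remaining reports
# All counts below are invented for illustration.
def prr(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

a, b, c, d = 30, 970, 200, 98800
value = prr(a, b, c, d)  # (30/1000) / (200/99000) = 14.85

# A common rule of thumb flags a signal when PRR >= 2 with at least 3 cases.
signal = value >= 2 and a >= 3
```

Commercial tools like the QScan software mentioned by the court wrap this kind of statistic (among others) in data management and visualization, but the underlying arithmetic is this simple – which is part of the point: the method is neither exotic nor inaccessible.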
When you read that quote, it seems obvious that the manufacturer wasn’t up to date, but I guess it wasn’t obvious to the manufacturer – even though the tool Dr. Madigan used “has been in widespread use for over 10 years.”
It makes you wonder how many businesses are doing it the same way they’ve always done it. They might be one curious grad student away from disaster.