The Neuroscience Revolution, Ethics, and the Law
Henry T. Greely
"There's no art to find the mind's construction in the face;
He was a gentleman on whom I built an absolute trust."1
The lament of Duncan, King of Scotland, for the treason of the Thane of Cawdor, his trusted nobleman, echoes through time as we continue to feel the sting of not knowing the minds of those people with whom we deal. From "we have a deal" to "will you still love me tomorrow?", we continue to live in fundamental uncertainty about the minds of others. Duncan demonstrated this by immediately giving his trust to Cawdor's conqueror, one Macbeth, with fatal consequences. But at least some of this uncertainty may be about to lift, for better or for worse.
Neuroscience is rapidly increasing our knowledge of the functioning, and malfunctioning, of that intricate three-pound organ, the human brain. When science expands our understanding of something so central to human existence, these advances will necessarily cause changes in both our society and its laws. This paper seeks to forecast and explore the social and legal changes that neuroscience might bring in four areas: prediction, litigation, confidentiality and privacy, and patents. It complements the paper in this volume written by Professor Stephen Morse, which covers issues of personhood and responsibility, informed consent, the reform of existing legal doctrines, enhancement of normal brain functions, and the admissibility of neuroscience evidence.
Two notes of caution are in order. First, this paper may appear to paint a gloomy picture of future threats and abuses. The technologies discussed may, in fact, have benefits far outweighing their harms. It is the job of people looking for ethical, legal, and social consequences of new technologies to look disproportionately for troublesome consequences — or, at least, that's the convention. Second, as Niels Bohr (probably) said, "It is always hard to predict things, especially the future."2 This paper builds on experience gained in studying the ethical, legal, and social implications of human genetics over the last decade. That experience, for me and for the whole field, has included both successes and failures. In neuroscience, as in genetics, accurately envisioning the future is particularly difficult as one must foresee successfully both what changes will occur in the science and how they will affect society. I am confident about only two things concerning this paper: first, it discusses at length some things that will never happen, and, second, it ignores what will prove to be some of the most important social and legal implications of neuroscience. Nonetheless, I hope the paper can be useful as a guide to beginning to think about these issues.
Advances in neuroscience may well improve our ability to make predictions about an individual's future. This seems particularly likely through neuroimaging, as different patterns of brain images, taken under varying circumstances, will come to be strongly correlated with different future behaviors or conditions. The images may reveal the structure of the living brain, through technologies such as computer-assisted tomography (CAT) scans or magnetic resonance imaging (MRI), or they may show how different parts of the brain function, through positron emission tomography (PET) scans, single photon emission computed tomography (SPECT) scans, or functional magnetic resonance imaging (fMRI).
Neuroscience might make many different kinds of predictions about people. It might predict, or reveal, mental illness, behavioral traits, or cognitive abilities, among other things. For the purposes of this paper, I have organized these predictive areas not by the nature of the prediction but by who might use the predictions: the health care system, the criminal justice system, schools, businesses, and parents.
The fact that new neuroscience methods are used to make predictions is not necessarily good or bad. Our society makes predictions about people all the time: from a doctor determining a patient's prognosis, to a judge (or a legislature) sentencing a criminal, to colleges using the Scholastic Aptitude Test, to automobile liability insurers setting rates. But although prediction is common, it is not always uncontroversial.
The Analogy to Genetic Predictions
The issues raised by predictions based on neuroscience are often similar to those raised by genetic predictions. Indeed, in some cases the two areas are the same — genetic analysis can powerfully predict several diseases of the brain, including Huntington disease and some cases of early-onset Alzheimer disease. Experience of genetic predictions teaches at least three important lessons.
First, a claimed ability to predict may not, in fact, exist. Many associations between genetic variations and various diseases have been claimed, only to fail the test of replication. Interestingly, many of these failures have involved two mental illnesses, schizophrenia and bipolar disorder.
Second, and more important, the strength of the predictions can vary enormously. For some genetic diseases, prediction is overwhelmingly powerful. As far as we know, the only way a person with the genetic variation that causes Huntington disease can avoid dying of that disease is to die first from something else. On the other hand, the widely heralded "breast cancer genes," BRCA 1 and BRCA 2, though they substantially increase the likelihood that a woman will be diagnosed with breast or ovarian cancer, are not close to determinative. Somewhere between 50 and 85 percent of women born with a pathogenic mutation in either of those genes will get breast cancer; 20 to 30 percent (well under half) will get ovarian cancer. Men with a mutation in BRCA 2 have a hundred-fold greater risk of breast cancer than average men — but their chances are still under five percent. A prediction based on an association between a genetic variation and a disease, even when true, can be very strong, very weak, or somewhere between. The popular perception of genes as extremely powerful is probably a result of ascertainment bias: the first disease-causing genetic variations to be identified had very powerful effects — because powerful associations were the easiest to find. If, as seems likely, the same holds true for predictions from neuroscience, such predictions will need to be used very carefully.
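The gap between relative and absolute risk in the BRCA 2 example can be made concrete with a little arithmetic. The baseline figure below is a hypothetical round number chosen for illustration, not a clinical estimate:

```python
# A large relative risk can still leave a small absolute risk.
# Hypothetical baseline: assume an average man's lifetime breast cancer
# risk is 0.04% (an illustrative figure, not a clinical one).
baseline_risk = 0.0004
relative_risk = 100          # the "hundred-fold greater risk" from the text

absolute_risk = baseline_risk * relative_risk
print(f"Absolute risk: {absolute_risk:.1%}")  # Absolute risk: 4.0%
```

Even a hundred-fold increase leaves the absolute chance "still under five percent," which is why a strikingly large relative risk, reported alone, can badly mislead.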
Finally, the use of genetic predictions has proven controversial, both in medical practice and in social settings. Much of the debate about the uses of human genetics has concerned its use to predict the future health or traits of patients, insureds, employees, fetuses, or embryos. Neuroscience seems likely to raise many similar issues.
Much of health care is about prediction — predicting the outcome of a disease, predicting the results of a treatment for a disease, predicting the risk of getting a disease. When medicine, through neuroscience, genetics, or other methods, makes an accurate prediction that leads to a useful intervention, the prediction is clearly valuable. But predictions also can cause problems when they are inaccurate (or are perceived inaccurately by patients). Even if the predictions are accurate, they still have uncertain value if no useful interventions are possible. These problems may justify regulation of predictive neuroscientific medical testing.
Some predictive tests are inaccurate, either because the scientific understanding behind them is wrong or because the test is poorly performed. In other cases the test may be accurate in the sense that it gives an accurate assessment of the probability of a certain result, but any individual patient may not have the most likely outcome. In addition, patients or others may misinterpret the test results. In genetic testing, for example, a woman who tests positive for a BRCA 1 mutation may believe that a fatal breast cancer is inevitable, when, in fact, her lifetime risk of breast cancer is between 50 and 85 percent and her chance of dying from a breast cancer is roughly one-third of the risk of diagnosis. Alternatively, a woman who tests negative for the mutation may falsely believe that she has no risk for breast cancer and could stop breast self-examinations or mammograms to her harm. Even very accurate tests may not be very useful. Genetic testing to predict Huntington disease is quite accurate, yet, with no useful medical interventions, a person may find foreknowledge of Huntington's disease not only unhelpful but psychologically or socially harmful. These concerns have led to widespread calls for regulation of genetic testing.3
The same issues can easily arise through neuroscience. Neuroimaging, for example, might easily lead to predictions, with greater or lesser accuracy, of a variety of neurodegenerative diseases. Such imaging tests may be inaccurate, may present information patients find difficult to evaluate, and may provide information of dubious value and some harm. One might want to regulate some such tests along the lines proposed for genetic tests: proof that the test was effective at predicting the condition in question, assessment of the competency of those performing the tests, required informed consent so that patients appreciate the test's possible consequences, and assurance of post-test counseling to ensure that patients understand the results.
The Food and Drug Administration (FDA) has statutory jurisdiction over the use of drugs, biologicals, or medical devices. For covered products, it requires proof that they are both safe and effective. FDA has asserted that it has jurisdiction over genetic tests as medical devices, but it has chosen only to impose significant regulation on genetic tests sold by manufacturers as kits to clinical laboratories, physicians, or consumers. Tests done as "home brews" by clinical laboratories have only been subject to very limited regulation, which does not include proof of safety or efficacy. Neuroscience tests might well be subject to even less FDA regulation. If the test used an existing, approved medical device, such as an MRI machine, no FDA approval of this additional use would be necessary. The test would be part of the "practice of medicine," expressly not regulated by the FDA.
The FDA also implements the Clinical Laboratory Improvement Amendments (CLIA), along with the Centers for Disease Control and Prevention and the Centers for Medicare and Medicaid Services. CLIA sets standards for the training and working conditions of clinical laboratory personnel and requires periodic testing of laboratories' proficiency at different tests. Unless the tests were done in a clinical laboratory, through, for example, pathological examination of brain tissue samples or analysis of chemicals from the brain, neuroscience testing would also seem to avoid regulation under CLIA.
At present, neuroscience-based testing, particularly through neuroimaging using existing (approved) devices, seems to be entirely unregulated except, to a very limited extent, by malpractice law. One important policy question should be whether to regulate such tests, through government action or by professional self-regulation.
The criminal justice system makes predictions about individuals' future behavior in sentencing, parole, and other decisions, such as civil commitment for sex offenders.4 The trend in recent years has been to limit the discretion of judges and parole boards to use predictions by setting stronger sentencing guidelines or mandatory sentences. Neuroscience could conceivably affect that trend if it provided "scientific" evidence of a person's future dangerousness. Such evidence might be used to increase sentencing discretion - or it might provide yet another way to limit such discretion.5
One can imagine neuroscience tests that show a convicted defendant was particularly likely to commit dangerous future crimes by showing that he has, for example, poor control over his anger, his aggressiveness, or his sexual urges. This kind of evidence has been used in the past; neuroscience may come up with ways that either are more accurate or that appear more accurate (or more impressive). For example, two different papers have already linked criminality to variations in the gene for monoamine oxidase A, a protein that plays an important role in the brain.6 Genetic tests may seem more scientific and more impressive to a judge, jury, or parole board than a psychologist's report. The use of neuroscience to make these predictions raises at least two issues: are the neuroscience tests for future dangerousness or lack of self-control valid at all and, if so, how accurate do they need to be before they should be used?
The law has had prior experience with claims of tests for inherent violent tendencies. The XYY syndrome was widely discussed, and accepted, in the literature, though not by the courts,7 in the late 1960s and early 1970s. Men born with an additional copy of the Y chromosome were said to be much more likely to become violent criminals. Further research revealed, about a decade later, that XYY men were somewhat more likely to have low intelligence and to have long arrest records, typically for petty or property offenses. They did not have any higher than average predisposition to violence.
If, unlike XYY syndrome, a tested condition were shown reliably to predict future dangerousness or lack of control, the question would then become how accurate the test must be in order for it to be used. A test of dangerousness or lack of control that was only slightly better than flipping coins should not be given much weight; a perfect test could be. At what accuracy level should the line be set?
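One way to frame that line-drawing question is statistical: the usefulness of a dangerousness test depends not only on its accuracy but on how rare the predicted behavior is in the tested population. A sketch of the arithmetic, with hypothetical numbers (none of these figures come from the text):

```python
# Bayes' rule: the probability that a person flagged "dangerous" truly is.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Share of positive results that are true positives."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A test that is right 90% of the time either way, applied to a group in
# which 10% would in fact reoffend (both figures are assumptions):
ppv = positive_predictive_value(sensitivity=0.9, specificity=0.9, base_rate=0.10)
print(f"{ppv:.0%}")  # 50% -- half of those flagged would never have offended
```

On these assumed numbers, a test far better than coin-flipping still mislabels one flagged person in two, which suggests that any accuracy threshold would have to be set with base rates, and not just raw accuracy, in mind.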
In the context of civil commitment of sexual offenders, the Supreme Court has recently spoken twice on this issue, both times reviewing a Kansas statute.8 The Kansas act authorizes civil commitment of a "sexually violent predator," defined as "any person who has been convicted of or charged with a sexually violent offense and who suffers from a mental abnormality or personality disorder which makes the person likely to engage in repeat acts of sexual violence."9 In Kansas v. Hendricks, the Court held the Act constitutional against a substantive due process claim because it required, in addition to proof of dangerousness, proof of the defendant's lack of control. "This admitted lack of volitional control, coupled with a prediction of future dangerousness, adequately distinguishes Hendricks from other dangerous persons who are perhaps more properly dealt with exclusively through criminal proceedings."10 It held Hendricks's commitment survived attack on ex post facto and double jeopardy grounds because the commitment procedure was neither criminal nor punitive.11
Five years later, the Court revisited this statute in Kansas v. Crane.12 It held that the Kansas statute could only be applied constitutionally if there were a determination of the defendant's lack of control and not just proof of the existence of a relevant "mental abnormality or personality disorder":
It is enough to say that there must be proof of serious difficulty in controlling behavior. And this, when viewed in light of such features of the case as the nature of the psychiatric diagnosis, and the severity of the mental abnormality itself, must be sufficient to distinguish the dangerous sexual offender whose serious mental illness, abnormality, or disorder subjects him to civil commitment from the dangerous but typical recidivist convicted in an ordinary criminal case.13
We know then that, at least in civil commitment cases related to prior sexually violent criminal offenses, proof that the particular defendant had limited power to control his actions is constitutionally necessary. There is no requirement that this evidence, or proof adduced in sentencing or parole hearings, convince the trier of fact beyond a reasonable doubt. The Court gives no indication of how strong that evidence must be or how its scientific basis would be established. Would any evidence that passed Daubert or Frye hearings be sufficient for civil commitment (or for enhancing sentencing or denying parole) or would some higher standard be required?
It is also interesting to speculate on how evidence of the accuracy of such tests would be collected. It is unlikely that a state or federal criminal justice system would allow a randomized double-blind trial, performing the neuroscientific dangerousness or volition tests on all convicted defendants at the time of their conviction and then releasing them to see which ones would commit future crimes. That judges, parole boards, or legislatures would insist on rigorous scientific proof of connections between neuroscience evidence and future mental states seems doubtful.
Schools commonly use predictions of individual cognitive abilities. Undergraduate and graduate admissions are powerfully influenced by applicants' scores on an alphabet's worth of tests: ACT, SAT, LSAT, MCAT, and GRE among others. Even those tests, such as the MCAT, that claim to test knowledge rather than aptitude use the applicant's tested knowledge as a predictor of her ability to function well in school, either because she has that background knowledge or because her acquisition of the knowledge demonstrates her abilities. American primary and secondary education uses aptitude tests less frequently, although some tracking does go on. And almost all of those schools use grading (after a certain level), which can be used to make predictions either within the school or by others — such as other schools, employers, and parents.
It is conceivable that neuroscience could provide other methods of testing ability or aptitude. Of course, the standard questions of the accuracy of those tests would apply. Tests that are highly inaccurate usually should not be used. But even assuming the tests are accurate, they would raise concerns. Those tests might be used only positively, as Dr. Binet intended his early intelligence test to be used: to identify children who needed special help. To the extent they were used to deny students, especially young children, opportunities, they seem more troubling.
It is not clear why a society that uses aptitude tests so commonly for admission into elite schools should worry about their neuroscience equivalents. The SAT and other similar aptitude tests claim that student preparation or effort will not substantially affect student results; presumably, short-term preparation would be at least as unlikely to alter the results of neuroscience tests of aptitude. The existing aptitude tests, though widely used, remain controversial. Neuroscience tests, particularly if given and acted upon at an early age, are likely to exacerbate the discomfort we already feel with predictive uses of aptitude tests in education.
Perhaps the most discussed social issue in human genetics has been the possible use — or abuse — of genetic data by businesses, particularly insurers and employers. Most, but not all, commentators have favored restrictions on the use of genetic information by health insurers and employers.14 And legislators have largely agreed. Over 45 states and, to some extent, the federal government restrict the use of genetic information in health insurance. Eleven states impose limits on the use of genetic information by life insurers, but those constraints are typically weak. About 30 states limit employer-ordered genetic testing or the use of genetic information in employment decisions, as does, to some very unclear extent, the federal government through the Americans with Disabilities Act.15 And 2004 may well mark the year when broad federal legislation against "genetic discrimination" is finally passed.16 Should similar legislation be passed to protect people against "neuroscience" discrimination?
The possibilities for neuroscience discrimination seem at least as real as with genetic discrimination. A predictive test showing that a person has a high likelihood of developing schizophrenia, bipolar disorder, early-onset Alzheimer disease, early-onset Parkinson disease, or Huntington disease could certainly provide insurers or employers with an incentive to avoid that person. To the extent one believes that health coverage should be universal or that employment should be denied or terminated only for good cause, banning "neuroscientific discrimination" might be justified as an incremental step toward this good end. Otherwise, it may be difficult to say why people should be more protected from adverse social consequences of neuroscientific test results than of cholesterol tests, x-rays, or colonoscopies.
Special protection for genetic tests has been urged on the ground that genes are more fundamental, more deterministic, and less the result of personal actions or chance than other influences on health. Others have argued against such "genetic exceptionalism," denying special power to genes and contending that special legislation about genetics only confirms the public in a false view of genetic determinism. Still others, including me, have argued that the public's particularly strong fear of genetic test results, even though exaggerated, justifies regulation in order to gain concrete benefits from reducing that fear. The same arguments could be played out with respect to predictive neuroscience tests. Although this is an open empirical question, it does seem likely that the public's perception of the fundamental or deterministic nature of genes does not exist with respect to neuroscience.
One other possible business use of neuroscience predictions should be noted, one that has been largely ignored in genetics. Neuroscience might be used in marketing. Firms might use neuroscience techniques on test subjects to enhance the appeal of their products or the effectiveness of their advertising. Individuals or focus groups could, in the future, be examined under fMRI. At least one firm, Brighthouse Institute for Thought Sciences, has embraced this technology, and, in a press release from 2002, "announced its intentions of revolutionizing the marketing industry."17
More alarmingly, if neuro-monitoring devices were perfected that could study a person's mental function without his knowledge, information to predict a consumer's preferences might be collected for marketing purposes. Privacy regulation seems appropriate for the undisclosed monitoring in the latter example. Regulating the former seems less likely, although it might prove attractive if such neuroscience-enhanced market research proved too effective an aid to selling.
The prenatal use of genetic tests to predict the future characteristics of fetuses, embryos, or as-yet unconceived offspring is one of the most controversial and interesting issues in human genetics. Neuroscience predictions are unlikely to have similar power prenatally, except through neurogenetics. It is possible that neuroimaging or other non-genetic neuroscience tests might be performed on a fetus during pregnancy. Structural MRI has been used as early as about 24 weeks to look for major brain malformations, following up on earlier suspicious sonograms. At this point, no one appears to have done fMRI on the brain of a fetus; the classic method of stimulating the subject and watching which brain regions react would be challenging in utero, though not necessarily impossible. In any event, fetal neuroimaging seems likely to give meaningful results only for serious brain problems, and even then at a fairly late stage of fetal development, so that the most plausible intervention, abortion, would be rarely used and only in the most extreme cases.18
Parents, however, like schools, might make use of predictive neuroscience tests during childhood to help plan, guide, or control their children's lives. Of course, parents already try to guide their children's lives, based on everything from good data to wishful thinking about a child's abilities. Would neuroscience change anything? It might be argued that parents would take neuroscience testing more seriously than other evidence of a child's abilities because of its scientific nature, and thus perhaps exaggerate its accuracy. More fundamentally, it could be argued that, even if the test predictions were powerfully accurate, too extreme parental control over a child's life is a bad thing. From this perspective, any procedures that are likely to add strength to parents' desire or ability to exercise that control should be discouraged. On the other hand, society vests parents with enormous control over their children's upbringing, intervening only in strong cases of abuse. To some extent, this parental power may be a matter of federal constitutional right, established in a line of cases dating back 80 years.19
This issue is perhaps too difficult to be tackled. It is worth noting, though, that government regulation is not the only way to approach it. Professional self-regulation, insurance coverage policies, and parental education might all be methods to discourage any perceived overuse of children's neuroscience tests by their parents.
II. LITIGATION USES
Predictions may themselves be relevant in some litigation, particularly the criminal cases discussed above, but other, non-predictive uses of neuroscience might also become central to litigated cases. Neuroscience might be able to provide relevant, and possibly determinative, evidence of a witness's mental state at the time of testimony, ways of eliciting or evaluating a witness's memories, or other evidence relevant to a litigant's claims. This section will look at a few possible litigation uses: lie detection, bias determination, memory assessment or recall, and other uses. Whether any of these uses is scientifically possible remains to be seen. It is also worth noting that the extent of the use of any of these methods will also depend on their cost and intrusiveness. A method of, for example, truth determination that required an intravenous infusion or examination inside a full-scale MRI machine would be used much less than a simple and portable headset.
The implications of any of these technologies for litigation seem to depend largely on four evidentiary issues. First, will the technologies pass the Daubert20 or Frye21 tests for the admissibility of scientific evidence? (I leave questions of Daubert and Frye entirely to Professor Morse.) Second, if they are held sufficiently scientifically reliable to pass Daubert or Frye, are there other reasons to forbid or to compel the admissibility of the results of such technologies when used voluntarily by a witness? Third, would the refusal — or the agreement — of a witness to use one of these technologies itself be admissible in evidence? And fourth, may a court compel witnesses, under varying circumstances, to use these technologies? The answers to these questions will vary with the setting (especially criminal or civil), with the technology, and with other circumstances of the case, but they provide a useful framework for analysis.
Detecting Lies or Compelling Truth
The concept behind current polygraph machines dates back to the early 20th century.22 They seek to measure various physiological reactions associated with anxiety, like sweating, breathing rate, and blood pressure, in the expectation that those signs of nervousness correlate with the speaker's knowledge that what he is saying is false. American courts have generally, but not universally, rejected them, although they are commonly used by the federal government for various security clearances and investigations.23 It has been estimated that their accuracy is about 85 to 90 percent.24
Now imagine that neuroscience leads to new ways to determine whether or not a witness is telling a lie or even to compel a witness to tell the truth. A brain imaging device might, for example, be able to detect patterns or locations of brain activity known from experiments to be highly correlated with the subject's consciousness of falsehood. (I will refer to this as "lie detection.") Alternatively, drugs or other stimuli might be administered that made it impossible for a witness to do anything but tell the truth — an effective truth serum. (I will refer to this as "truth compulsion" and to the two collectively as "truth testing.") Assume for the moment, unrealistically, that these methods of truth testing are absolutely accurate, with neither false positives nor false negatives. How would, and should, courts treat the results of such truth testing? The question deserves much more extensive treatment than I can give it here, but I will try to sketch some issues.
Consider first the non-scientific issues of admissibility. One argument against admissibility was made by four justices of the Supreme Court in United States v. Scheffer25, a case involving a blanket ban on the admissibility of polygraph evidence. Scheffer, an enlisted man in the Air Force working with military police as an informant in drug investigations, wanted to introduce the results of a polygraph examination at his court-martial for illegal drug use.26 The polygraph examination, performed by the military as a routine part of his work as an informant, showed that he denied illegal drug use during the same period that a urine test detected the presence of methamphetamine.27 Military Rule of Evidence 707, promulgated by President George H.W. Bush in 1991, provides that "Notwithstanding any other provision of law, the results of a polygraph examination, the opinion of a polygraph examiner, or any reference to an offer to take, failure to take, or taking of a polygraph examination, shall not be admitted into evidence."
The court-martial refused to admit Scheffer's evidence on the basis of Rule 707. His conviction was overturned by the Court of Appeals for the Armed Forces, which held that this per se exclusion of all polygraph evidence violated the Sixth Amendment.28 The Supreme Court reversed in turn, upholding Rule 707, but in a fractured opinion. Justice Thomas wrote the opinion announcing the decision of the Court and finding the rule constitutional on three grounds: continued questions about the reliability of polygraph evidence, the need to "preserve the jury's core function of making credibility determinations in criminal trials," and the avoidance of collateral litigation.29 Chief Justice Rehnquist and Justices Scalia and Souter joined the Thomas opinion in full. Justice Kennedy, joined by Justices O'Connor, Ginsburg, and Breyer, concurred in the section of the Thomas opinion based on the reliability of polygraph evidence. Those four justices did not agree with the other two grounds.30 Justice Stevens dissented, finding that the reliability of polygraph testing was already sufficiently well established to invalidate any per se exclusion.31
Our hypothesized perfect truth testing methods would not run afoul of the reliability issue. Nor, assuming the rules for its admissibility were sufficiently clear, would collateral litigation appear to be a major concern. It would seem, however, even more than the polygraph, to evoke the concerns of four justices about invading the sphere of the jury even when the witness had agreed to the use. Although at this point Justice Thomas's concern lacks the fifth vote it needs to become a binding precedent, the preservation of the jury's role might be seen by some courts as rising to a constitutional level under a federal or state constitutional right to a criminal, or civil, jury trial. It could certainly be used as a policy argument against allowing such evidence and, as an underlying concern of the judiciary, it might influence judicial findings under Daubert or Frye about the reliability of the methods.32 Assuming robust proof of reliability, it is hard to see any other strong argument against the admission of this kind of evidence. (Whether Justice Thomas's rationale, either as a constitutional or a policy matter, would apply to non-jury trials seems more doubtful.)
On the other hand, some defendants might have strong arguments for the admission of such evidence, at least in criminal cases. Courts have found in the Sixth Amendment, perhaps in combination with the Fifth Amendment, a constitutional right for criminal defendants to present evidence in their own defense. Scheffer made this very claim, that Rule 707, in the context of his case, violated his constitutional right to present a defense. The Supreme Court has two lines of cases dealing with this right. In Chambers v. Mississippi, the Court resolved the defendant's claim by balancing the importance of the evidence to the defendant's case with the reliability of the evidence.33 In Rock v. Arkansas, a criminal defendant alleged that she could remember the events only after having her memory "hypnotically refreshed."34 The Court struck down Arkansas's per se rule against hypnotically refreshed testimony on the ground that the rule, as a per se rule, was arbitrary and therefore violated the Sixth Amendment's rights to present a defense and to testify in her own defense. The Rock opinion also stressed that the Arkansas rule prevented the defendant from telling her own story in any meaningful way. That might argue in favor of the admissibility of a criminal defendant's own testimony, under truth compulsion, as opposed to an examiner giving his expert opinion about the truthfulness of the witness's statements based on the truth detector results. These constitutional arguments for the admission of such evidence would not seem to arise with the prosecution's case or with either the plaintiff's or defendant's case in a civil matter (unless some state constitutional provisions were relevant).35
Assuming "truth tested" testimony were admissible, should either a party's, or a witness's, offer or refusal to undergo truth testing be admissible in evidence as relevant to their honesty? Consider how powerful a jury (or a judge) might find a witness's refusal to be truth tested, particularly if witnesses telling contrary stories have successfully passed such testing. Such a refusal could well prove fatal to the witness's credibility.
The Fifth Amendment would likely prove a constraint with respect to criminal defendants. The fact that a defendant has invoked the Fifth Amendment's privilege against self-incrimination cannot normally be admitted into evidence or considered by the trier of fact. Otherwise, the courts have held, the defendant would be penalized for having invoked the privilege. A defendant who takes the stand might well be held to have waived that right and so might be impeached by his refusal to undergo truth testing. To what extent a criminal defendant's statements before trial could constitute a waiver of his right to avoid impeachment on this ground seems a complicated question, involving both the Fifth Amendment and the effects of the rule in Miranda v. Arizona.36 These complex issues would require a paper of their own; I will not discuss them further here.
Apart from a defendant in a criminal trial, it would seem that any other witnesses should be impeachable for their refusal to be truth tested; they might invoke the privilege against self-incrimination but the trier of fact, in weighing their credibility in this trial, would not be using that information against them. And this should be true for prosecution witnesses as well as defense witnesses. Both parties and non-party witnesses at civil trials would seem generally to be impeachable for their refusal to be truth-tested, except in some jurisdictions that hold that a civil party's invocation of the Fifth Amendment may not be commented upon even in a civil trial.
It seems unlikely that a witness's willingness to undergo truth testing would add anything to the results of a test in most cases. It might, however, be relevant, and presumably admissible, if for some reason the test did not work on that witness or, unbeknownst to the witness at the time she made the offer, the test results turned out to be inadmissible.
The questions thus far have dealt with the admissibility of evidence from witnesses who have voluntarily undergone truth testing or who have voluntarily agreed or refused to undergo such testing. Could, or should, either side have the power to compel a witness to undergo either method of truth testing? At its simplest, this might be a right to re-test a witness tested by the other side, a claim that could be quite compelling if the results of these methods, like the results of polygraphy, were believed to be significantly affected by the means by which they were administered — not just the scientific process but the substance and style of the questioning. More broadly, could either side compel a witness, in a criminal or a civil case, to undergo such truth testing as part of either a courtroom examination or in pretrial discovery?
Witnesses certainly can be compelled to testify, at trial or in deposition. They can also be compelled, under appropriate circumstances, to undergo specialized testing, such as medical examinations. (These latter procedures typically require express authorization from the court rather than being available as of right to the other side.) Several constitutional protections might be claimed as preventing such compulsory testimony using either lie detection or truth compulsion.
A witness might argue that the method of truth testing involved was so great an intrusion into the person's bodily (or mental) integrity as to "shock the conscience" and violate the Fifth or Fourteenth Amendment, as did the stomach pumping in Rochin v. California.37 A test method involving something like the wearing of headphones might seem quite different from one involving an intravenous infusion of a drug or envelopment in the coffin-like confines of a full-sized MRI machine. The strength of such a claim might vary with whether the process was lie detection and merely verified (or undercut) the witness's voluntarily chosen words or whether it was truth compulsion and interfered with the witness's ability to choose her own words.
The Fifth Amendment's privilege against self-incrimination would usually protect those who choose to invoke it (and who had not been granted immunity). As noted above, that would not necessarily protect either a party in a civil case or a non-defendant witness in a criminal case from impeachment for invoking the privilege.
Would a witness have a possible Fourth Amendment claim that such testing, compelled by court order, was an unreasonable search and seizure by the government? I know of no precedent for considering questioning itself as a search or seizure, but this form of questioning could be seen as close to searching the confines of the witness's mind. In that case, would a search warrant or other court order suffice to authorize the test against a Fourth Amendment claim? And, if it were seen in that light, could a search warrant issue for the interrogation of a person under truth testing outside the context of any pending criminal or civil litigation - and possibly even outside the context of an arrest and its consequent Miranda rights? If this seems implausible, consider what an attractive addition statutory authorization of such "mental searches" might seem to the Administration or the Congress in the next version of the USA PATRIOT Act.38
In some circumstances, First Amendment claims might be plausible. Truth compulsion might be held to violate in some respects the right not to speak, although the precedents on this point are quite distant, involving a right not to be forced to say, or to publish, specific statements. It also seems conceivable that some religious groups could object to these practices and might be able to make a free exercise clause argument against such compelled speech.
These constitutional questions are many and knotty. Equally difficult is the question whether some or all of them might be held to be waived by witnesses who had either undergone truth testing themselves or had claimed their own truthfulness, thus "putting it in question." And, of course, even if parties or witnesses have no constitutional rights against being ordered to undergo truth testing, that does not resolve the policy issue of whether such rights should exist as a matter of statute, rule, or judicial decision.
Parties and witnesses are not the only relevant actors in trials. Truth testing might also be used in voir dire. Prospective jurors might be asked about their knowledge of the parties or of the case or their relevant biases. Could a defendant claim that his right to an unbiased juror was infringed if such methods were not used and hence compel prospective jurors to undergo truth testing? Could one side or the other challenge for cause a prospective juror who was unwilling to undergo such testing? In capital cases, jurors are asked whether they could vote to convict in light of a possible death penalty; truth testing might be demanded by the prosecution to make sure the prospective jurors are being honest.
It is also worth considering how the existence of such methods might change the pretrial maneuvers of the parties. Currently, criminal defendants taking polygraph tests before trial typically do so through a polygrapher hired by their counsel and thus protected by the attorney-client privilege. Whatever rules are adopted concerning the admissibility of evidence from truth testing will undoubtedly affect the incentives of the parties, in civil and criminal cases, to undergo truth testing. This may, in turn, have substantial, and perhaps unexpected, repercussions for the practices of criminal plea bargaining and civil settlement. As the vast majority of criminal and civil cases are resolved before trial, the effects of truth testing could be substantial.
Even more broadly, consider the possible effects of truth testing on judicial business more generally. Certainly not every case depends on the honesty of witness testimony. Some hinge on conclusions about reasonableness or negligence; others are determined by questions of law. Even factual questions might be the focus of subjectively honest, but nevertheless contradictory, testimony from different witnesses. Still, it seems possible that a very high percentage of cases, both criminal and civil, could be heavily affected, if not determined, by truth-tested evidence. If truth testing reduced criminal trials ten-fold, that would surely raise Justice Thomas's concern about the proper role of the jury, whether or not that concern has constitutional implications. It would also have major effects on the workload of the judiciary and, perhaps, on the structure of the courts.
The questions raised by a perfect method of truth testing are numerous and complicated. They are also probably unrealistic given that no test will be perfect. Most of these questions would require reconsideration if truth testing turned out to be only 99.9% accurate, or 99% accurate, or 90% accurate. That reconsideration would have to consider not just overall "accuracy" but the rates of both false positives (the identification of a false statement as true) and false negatives (the identification of a true statement as false), as those may have different implications. Similarly, decisions on admissibility might differ if accuracy rates varied with a witness's age, sex, training in "beating" the machine, or other traits. And, of course, proving the accuracy of such methods as they are first introduced or as they are altered will be a major issue in court systems under the Daubert or Frye tests.
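The interaction between a test's headline "accuracy" and its real-world error rates can be made concrete with a little arithmetic. The sketch below is purely illustrative and uses hypothetical numbers not drawn from the article: it applies Bayes' rule to show that even a "99% accurate" test, applied where lies are relatively rare, would mislabel a substantial share of the statements it flags.

```python
# Hypothetical illustration: how a truth test's error rates interact with
# the base rate of lying among the statements actually tested.
# All figures below are assumed for the sake of the example.

def posterior_lie_probability(sensitivity, specificity, base_rate):
    """P(statement is a lie | test flags it as a lie), via Bayes' rule.

    sensitivity: P(flagged | lie)       = 1 - false-negative rate
    specificity: P(cleared | truthful)  = 1 - false-positive rate
    base_rate:   P(lie) among the statements tested
    """
    p_flag_and_lie = sensitivity * base_rate
    p_flag_and_truth = (1 - specificity) * (1 - base_rate)
    return p_flag_and_lie / (p_flag_and_lie + p_flag_and_truth)

# A test with 99% sensitivity and 99% specificity, applied where only
# 1 in 20 statements is a lie, flags many truthful witnesses: roughly
# 16% of "flagged" statements are in fact true.
print(round(posterior_lie_probability(0.99, 0.99, 0.05), 3))  # → 0.839
```

The point of the sketch is the one made in the text: a single "accuracy" figure obscures the separate false-positive and false-negative rates, and the practical meaning of a flagged statement depends heavily on how common lying is in the tested population.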
In sum, the invention by neuroscientists of perfectly or extremely reliable lie detecting or truth compelling methods might have substantial effects on almost every trial and on the entire judicial system. How those effects would play out in light of our current criminal justice system, including the constitutional protections of the Bill of Rights, is not obvious.
Evidence produced by neuroscience may play other significant roles in the courtroom. Consider the possibility of testing, through neuroimaging, whether a witness or a juror reacts negatively to particular groups. Already, neuroimaging work is going on that looks for — and finds — differences in a subject's brain's reaction to people of different races. If that research is able to associate certain patterns of activity with negative bias, its possible use in litigation could be widespread.
As with truth testing, courts would have to decide whether bias testing met Daubert or Frye, whether voluntary test results would be admissible, whether a party's or witness's refusal or agreement to take the test could be admitted into evidence, and whether the testing could ever be compelled. The analysis on these points seems similar to that for truth testing, with the possible exception of a lesser role for the privilege against self-incrimination.
If allowed, neuroscience testing for racial bias might be used where bias was a relevant fact in the case, as in claims of employment discrimination based on race. It might be used to test any witness for bias for or against a party of a particular race. It might be used to test jurors to ensure that they were not biased against the parties because of their race. One could even, barely, imagine it being used to test judges for bias, perhaps as part of a motion to disqualify for bias. And, of course, such bias testing need not be limited to bias based on race, nationality, sex, or other protected categories. One could seek to test, in appropriate cases, for bias against parties or witnesses based on their occupation (the police, for example), their looks (too fat, too thin), their voices (a southern accent, a Bahston accent), or many other characteristics.
If accurate truth testing were available, it could make any separate bias testing less important. Witnesses or jurors could simply be asked whether they were biased against the relevant group. On the other hand, it is possible that people might be able to answer honestly that they were not biased, when they were in fact biased. Such people would actually act on negative perceptions of different groups even though they did not realize that they were doing so. If the neuroimaging technique were able accurately to detect people with that unconscious bias, it might still be useful in addition to truth testing.
Bias testing might even force us to re-evaluate some truisms. We say that the parties to litigation are entitled to unbiased judges and juries, but we mean that they are entitled to judges and juries that are not demonstrably biased in a context where demonstrating bias is difficult. What if demonstrating bias becomes easy — and bias is ubiquitous? Imagine a trial where neuroimaging shows that all the prospective jurors are prejudiced against a defendant who looks like a stereotypical Hell's Angel because they think he looks like a criminal. Or what if the only potential jurors who didn't show bias were themselves members of quasi-criminal motorcycle gangs? What would his right to a fair trial mean in that context?
Evaluating or Eliciting Memory
The two methods discussed so far involve analyzing (or in the case of truth compulsion, creating) a present state of mind. It is conceivable that neuroscience might also provide courts with at least three relevant tools concerning memory. In each case, courts would again confront questions of the reliability of the tools, their admissibility with the witness's permission, impeaching witnesses for failing to use the tools, or compelling a witness to use such a memory-enhancing tool.
The first tool might be an intervention, pharmacological or otherwise, that improved a witness's ability to remember events. It is certainly conceivable that researchers studying memory-linked diseases might create drugs that help people retrieve old memories or retrieve them in more detail. This kind of intervention would not be new in litigation. The courts have seen great controversy in recent years over "repressed" or "recovered" memories, typically traumatic early childhood experiences brought back to adult witnesses by therapy or hypnosis. Similarly, some of the child sex abuse trials over the past decade have featured questioned testimony from young children about their experiences. In both cases, the validity of these memories has been questioned. We do know from research that people often will come to remember, in good faith, things that did not happen, particularly when those memories have been suggested to them.39 Similar problems might arise with "enhanced" memories.40
A second tool might be the power to assess the validity of a witness's memory. What if neuroscience could give us tools to distinguish between "true" and "false" memory? One could imagine different parts of a witness's brain being used while recounting a "true" memory, a "false" memory, or a creative fiction. Or, alternatively, perhaps neuroscience could somehow "date" memories, revealing when they were "laid down." These methods seem more speculative than either truth testing or bias testing, but, if either one (or some other method of testing memory) turned out to be feasible, courts would, after the Daubert or Frye hearings, again face questions of admitting testimony concerning their voluntary use, allowing comment on a witness's refusal to take the test, and possibly compelling their use.
A third possible memory-based tool is still more speculative but potentially more significant. There have long been reports that electrical stimulation can, sometimes, trigger a subject to have what appears to be an extremely detailed and vivid memory of a past scene, almost like reliving the experience. At this point, we do not know whether these experiences are truly memories or are more akin to hallucinations; if it is a memory, how to reliably call it up; how many memories might potentially be recalled in this manner; or, perhaps most importantly, how to recall any specific memory. Whatever filing system the brain uses for memories seems to be, at this point, a mystery. Assume that it proves possible to cause a witness to recall a specific memory in its entirety, perhaps by localizing the site of the memory first through neuroimaging the witness while she calls up her own existing memories of the event. A witness could then, perhaps, relive an event important to trial, either before trial or on the witness stand. One could even, just barely, imagine a technology that might be able to "read out" the witness's memories, intercepted as neuronal firings, and translate it directly into voice, text, or the equivalent of a movie for review by the finder of fact. Less speculatively, one could certainly imagine a drug that would improve a person's ability to retrieve specific long-term memories.
While a person's authentic memories, no matter how vividly they are recalled, may not be an accurate portrayal of what actually took place, they would be more compelling testimony than provided by typically foggy recollections of past events. Once again, if the validity of these methods were established, the key questions would seem to be whether to allow the admission of evidence from such a recall experience, voluntarily undertaken; whether to admit the fact of a party's or witness's refusal or agreement to use such method; and whether, under any circumstances, to compel the use of such a technique.41
Other Litigation-Related Uses
Neuroscience covers a wide range of brain-related activities. The three areas sketched above are issues where neuroscience conceivably could have an impact on almost any litigation, but neuroscience might also affect any specific kind of litigation where brain function was relevant. Consider four examples.
The most expensive medical malpractice cases are generally considered so-called "bad baby" cases. In these cases, children are born with profound brain damage. Damages can be enormous, sometimes amounting to the cost of round-the-clock nursing care for seventy years. Evidence of causation, however, is often very unclear. The plaintiff parents will allege that the defendants managed the delivery negligently, which led to a lack of oxygen that in turn caused the brain damage. Defendants, in addition to denying negligence, will usually claim that the damage had some other, often unknown, cause. Jurors are left with a family facing a catastrophic situation and no strong evidence about what caused it. Trial verdicts, and settlements, can be extremely high, accounting in part for the high price of malpractice insurance for obstetricians. If neuroscience could reliably distinguish between brain damage caused by oxygen deprivation near birth and that caused earlier, these cases would have more accurate results, in terms of compensating only families where the damage was caused around delivery. Similarly, if fetal neuroimaging could reveal serious brain damage before labor, those images could be evidence about the cause of the damage. (One can even imagine obstetricians insisting on prenatal brain scans before delivery in order to establish a baseline.) Making the determination of causation more certain should also lead to more settlements and less wasteful litigation. (Of course, in cases where neuroscience showed that the damage was consistent with lack of oxygen around delivery, the defendants' negligence would still be in question.)
In many personal injury cases, the existence of intractable pain may be an issue. In some of those cases there may be a question whether the plaintiff is exaggerating the extent of the pain. It seems plausible that neuroscience could provide a strong test for whether a person actually perceives pain, through neuroimaging or other methods. It might be able to show whether signals were being sent by the sensory nerves to the brain from the painful location on the plaintiff's body. Alternatively, it might locate a region of the brain that is always activated when a person feels pain or a pattern of brain activation that is always found during physically painful experiences. Again, by reducing uncertainty about a very subjective (and hence falsifiable) aspect of a case, neuroscience could improve the litigation system.
A person's competency is relevant in several legal settings, including disputed guardianships and competency to stand trial. Neuroscience might be able to establish some more objective measures that could be considered relevant to competency. (It might also reveal that what the law seems pleased to regard as a general, undifferentiated competency does not, in fact, exist.) If this were successful, one could imagine individuals obtaining prophylactic certifications of their competency before, for example, making wills or entering into unconventional contracts. The degree of mental ability is also relevant in capital punishment, where the Supreme Court has recently held that executing the mentally retarded violates the Eighth Amendment.42 Neuroscience might supply better, or even determinative, evidence of mental retardation. Or, again, it may be that neuroscience would force the courts to recognize that "mental retardation" is not a discrete condition.
Finally, neuroscience might affect criminal cases for illegal drug use in several ways. Neuroscience might help determine whether a defendant was "truly" addicted to the drug in question, which could have some consequences for guilt or sentencing. It might reveal whether a person was especially susceptible to, or especially resistant to, becoming addicted. Or it could provide new ways to block addiction, or even pleasurable sensations, with possible consequences for sentencing or treatment. Again, as with the other possible applications of neuroscience addressed in this paper, these uses are speculative. It would be wrong to count on neuroscience to solve, deus ex machina, our drug problems. It does not seem irresponsible, however, to consider the possible implications of neuroscience breakthroughs in this area.43
III. CONFIDENTIALITY AND PRIVACY
I am using these two often conflated terms to mean different things. I am using "confidentiality" to refer to the obligation of a professional or an entity to limit appropriately the availability of information about people (in this context, usually patients or research subjects). "Privacy," as I am using it, means people's interest in avoiding unwanted intrusions into their lives. The first focuses on limiting the distribution of information appropriately gathered; the second concerns avoiding intrusions, including the inappropriate gathering of information. Neuroscience will raise challenges concerning both concepts.
Maintaining —and Breaking — Confidentiality
Neuroscience may lead to the generation of sensitive information about individual patients or research subjects, information whose distribution they may wish to see restricted. Personal health information is everywhere protected in the United States, by varying theories under state law, by new federal privacy regulations under the Health Insurance Portability and Accountability Act (HIPAA),44 and by codes of professional ethics. Personal information about research subjects must also be appropriately protected under the Common Rule, the federal regulation governing most (but not all) biomedical research in the United States.45 The special issue with neuroscience-derived information is whether some or all of it requires additional protection.
Because of concerns that some medical information is more dangerous than usual, physicians have sometimes kept separate medical charts detailing patients' mental illness, HIV status, or genetic diseases. Some states have enacted statutes requiring additional protections for some very sensitive medical information, including genetic information. Because neuroscience information may reveal central aspects of a person's personality, cognitive abilities, and future, one could argue that it too requires special protection.
Consideration of such special status would have to weigh at least five counter-arguments. First, any additional recordkeeping or data protection requirements both increase costs and risk making important information unavailable to physicians or patients who need it. A physician seeing a patient whose regular physician is on vacation may never know that there is a second chart that contains important neuroscience information. Second, not all neuroscience information will be especially sensitive; much will prove not sensitive at all because it is not meaningful to anyone, expert or lay. Third, defining "neuroscience information" will prove difficult. Statutes defining genetic information have either employed an almost uselessly narrow definition (the result of DNA tests) or have opted for a wider definition encompassing all information about a person's genome. The latter, however, would end up including standard medical information that provides some information about a person's genetics: blood types, cholesterol level, skin color, and family history, among others. Fourth, mandating special protection for a class of information sends the message that the information is especially important even if it is not. In genetics, it is argued that legislation based on such "genetic exceptionalism" increases a false and harmful public sense of "genetic determinism." Similar arguments might apply to neuroscience. Finally, given the many legitimate and often unpredictable needs for access to medical information, confidentiality provisions will often prove ineffective at keeping neuroscience information private, especially from the health insurers and employers who are paying for the medical care. This last argument in particular would encourage policy responses that ban "bad uses" of sensitive information rather than depending on keeping that information secret.
Laws and policies on confidentiality also need to consider the limits on confidentiality. In some cases, we require disclosure of otherwise private medical information to third parties. Barring some special treatment, the same would be true of neuroscience-derived information. A physician (including, perhaps, a physician-researcher) may have an obligation to report to a county health agency or the Centers for Disease Control neuroscience-derived information about a patient that is linked to a reportable disease (an MRI scan showing, for example, a case of new variant Creutzfeldt-Jakob disease, the human version of "mad cow disease"); to a motor vehicle department information linked to loss-of-consciousness disorders; and to a variety of governmental bodies information leading to a suspicion of child abuse, elder abuse, pesticide poisoning, or other topics as specified by statute. In some cases, it might be argued, as it has been in genetics, that a physician has a responsibility to disclose a patient's condition to a family member if the family member is at higher risk of the same condition as a result. Finally, neuroscience information showing an imminent and serious threat from a patient to a third party might have to be reported under the Tarasoff doctrine.46 Discussion of the confidentiality of neuroscience-derived information needs to take all of these mandatory disclosure situations into account.
Privacy Protections Against Mental Intrusions
Privacy issues, as I am using the term in this paper, would arise as a result of neuroscience through unconsented and inappropriate intrusions into a person's life. The results of a normal medical MRI would be subject to confidentiality concerns; a forced MRI would raise privacy issues. Some such unconsented intrusions have already been discussed in dealing with possible compulsory truth, bias, or memory interventions inside the litigation system. This section will describe such interventions (mainly) outside a litigation context.
Intrusions by the government are subject to the Constitution and its protections of privacy, contained in and emanating from the penumbra of the Bill of Rights. Whether or not interventions were permitted in the courtroom, under judicial supervision, the government might use them in other contexts, just as polygraphs are used in security clearance investigations. All of these non-litigation governmental uses share a greater possibility of abuse than the use of such a technology in a court-supervised setting.
Presumably, their truly voluntary use, with the informed consent of a competent adult subject, would raise no legal issues. Situations where agreement to take the test could be viewed as less than wholly voluntary would raise their own set of sticky problems about the degree of coercion. Consider the possibility of truth tests for those seeking government jobs, benefits, or licenses. Admission to a state college (or eligibility for government-provided scholarships or government-guaranteed loans) might, for example, be conditioned on passing a lie detection examination on illegal drug use.
Frankly compelled uses might also occur, although they would raise constitutional questions under the Fourth and Fifth Amendments. One could imagine law enforcement officials deciding to interrogate one member of a criminal gang under truth compulsion in violation of Miranda and of the Fifth Amendment (and hence to forego bringing him to trial) in order to get information about his colleagues. Even if a person had been given a sufficiently broad grant of immunity to avoid any Fifth Amendment issues, would that really protect the interests of a person forced to undergo a truth compulsion process? Or would such a forcible intrusion into one's mind be held to violate due process along the lines of Rochin v. California?47
Of course, even if the interrogated party could bring a constitutional tort claim against the police, how often would such a claim be brought? And would we — or our courts — always find such interrogations improper? Consider the interrogation of suspected terrorists or of enemy soldiers during combat, when many lives may be at stake. (This also raises the interesting question of how the U.S. could protect its soldiers or agents from similar questioning.)
Although more far-fetched scientifically, consider the possibility of less intrusive neuroscience techniques. What if the government developed a neuroimaging device that could be used at a distance from a moving subject or one that could fit into the arch of an airport metal detector? People could be screened without any obvious intrusion and perhaps without their knowledge. Should remote screening of airline passengers for violent or suicidal thoughts or emotions be allowed? Would it matter whether the airport had signs saying that all travelers, by their presence, consented to such screening?
Private parties have less ability than the government to compel someone to undergo a neuroscience intervention, at least without risking arrest for assault. Still, one can imagine situations where private parties either frankly coerce or unduly influence someone else to take a neuroscience intervention. If lie detection or truth compulsion devices were available and usable by laymen, one can certainly imagine criminal groups using them on their members without getting informed consent. Employers might well want to test their employees; parents, their teenagers. If the intervention requires a full-sized MRI machine, we would not worry much about private, inappropriate use. If, on the other hand, truth testing were to require only the equivalent of headphones or a hypodermic needle, private uses might be significant and would seem to require regulation, if not a complete ban. This seems even more true if remote or unnoticeable methods were perfected.
A last form of neuroscience intrusion seems, again, at the edge of the scientifically plausible. Imagine an intervention that allowed an outsider to control the actions or motions, and possibly even the speech, emotions, or thoughts, of a person. Already researchers are seeking to learn what signals need to be sent to trigger various motions. Dr. Miguel Nicolelis of Duke University has been working to determine what neural activity triggers particular motions in rats and in monkeys, and he hopes to be able to stimulate it artificially.48 One goal is to trigger the implanted electrodes and have the monkey's arm move in a predictable and controlled fashion. The potential benefits of this research are enormous, particularly to people with spinal cord injuries or other interruptions in their motor neurons. On the other hand, it opens the nightmarish possibility of someone else controlling one's body — a real version of the Imperius Curse from Harry Potter's world.
Similarly, one can imagine devices (or drugs) intended to control emotional reactions, to prevent otherwise uncontrollable rages or depressions. One could imagine a court ordering implantation of such a device in sexual offenders to prevent the emotions that give rise to their crimes or, perhaps more plausibly, offering such treatment as an option, in place of a long prison term. Castration, either surgical or chemical, an old-fashioned method of accomplishing a similar result, is already a possibility for convicted sex offenders in some states. Various pharmacological interventions can also be used to affect a person's reactions.
These kinds of interventions may never become more than the ravings of victims of paranoia, though it is at least interesting that the Defense Advanced Research Projects Agency (DARPA) is providing $26 million in support of Nicolelis's research through its "Brain-Machine Interfaces" program.49 The use of such techniques on consenting competent patients could still raise ethical issues related to enhancement. Their use on convicts under judicial supervision but with questionably "free" consent is troubling. Their possible use on unconsenting victims is terrifying. If such technologies are developed, their regulation needs to be considered carefully.
Patents
Advances in neuroscience will certainly raise legal and policy questions in intellectual property law, particularly in patent law.50 Fortunately, few of those questions seem novel, as most seem likely to parallel issues already raised in genetics. In some important respects, however, the issues seem less likely to be charged than those encountered in genetics.
Two kinds of neuroscience patents seem likely. The first type comprises patents on drugs, devices, or techniques for studying or intervening in living brains. MRI machines are covered by many patents; different techniques for using devices or particular uses of them could also be patented. So, for example, the first person to use an MRI machine to search for a particular atom or molecule might be able to patent that use, unless it were an obvious extension of existing practice. Similarly, someone using an MRI machine, or a drug, for the purpose of determining whether the subject was telling the truth could patent that use of that machine or drug, even if she did not own a patent on the machine or drug itself.
The second type would be a patent on a particular pattern of activity in the brain. (I will refer to these as "neural pattern patents.") The claims could be that this pattern could be used to diagnose conditions, to predict future conditions, or as an opportunity for an intervention. This would parallel the common approach to patenting genes for diagnosis, for prediction, and for possible gene therapy. Neuroimaging results seem the obvious candidates for this kind of patent, although the patented pattern might show up, for example, as a set of gene expression results revealed by microarrays or gene chips.
I will discuss the likely issues these kinds of patents raise in three categories: standard bioscience patent issues, "owning thoughts," and medical treatments.
Standard Bioscience Patent Issues
Patents in the biological sciences, especially those relating to genetics, have raised a number of different concerns. Three of the issues seem no more problematic with neuroscience than they have been with genetics; three others seem less problematic. Whether this is troublesome, of course, depends largely on one's assessment of the current state of genetic patents. My own assessment is relatively sanguine; I believe we are muddling through the issues of genetic patents with research and treatment continuing to thrive. I am optimistic, therefore, that none of these standard patent issues will cause broad problems in neuroscience.
Two concerns are based on the fact of the patent monopoly. Some complain that patents allow the patent owner to restrict the use and increase the price of the patented invention, thus depriving some people of its benefits.51 This is, of course, true of all patents and is a core idea behind the patent system: the time-limited monopoly provides the economic returns that encourage inventors to invent. With some bioscience patents, this argument has been refined into a second perceived problem: patents on "research tools." Control over a tool essential to the future of a particular field could, some say, give the patent owner too much power over the field and could end up retarding research progress. This issue has been discussed widely, most notably in the 1998 Report of the National Institutes of Health (NIH) Working Group on Research Tools, which made extensive recommendations on the subject.52 Some neuroscience patents may raise concerns about monopolization of basic research tools, but it is not clear that those problems cannot be handled if and as they arise.
A third issue concerns the effects of patents on universities. Under the Bayh-Dole Act, passed in 1980, universities and other non-profit organizations where inventions were made using federal grant or contract funds can claim ownership of the resulting inventions, subject to certain conditions. Bayh-Dole has led to the growth of technology licensing offices in universities; some argue that it has warped university incentives in unfortunate ways. Neuroscience patents might expand the number of favored, money-making departments in universities, but seem unlikely to make a qualitative difference.
Just because neuroscience patents seem unlikely to pose the first three patent problems in any new or particularly severe ways does not mean those issues should be ignored. Individual neuroscience patents might cause substantial problems that call for intervention; the cumulative weight of neuroscience patents when added to other bioscience patents may make systemic reform of one kind or another more pressing. But the outlines of the problems are known.
Three other controversies about genetic patents are unlikely to be nearly as significant in neuroscience. They seem relevant, if at all, to neural pattern patents, not to device or process patents.
Two of the controversies grew out of patents on DNA sequences. In 1998 Rebecca Eisenberg and Michael Heller pointed out "the tragedy of the anti-commons," the concern that having too many different patents for DNA sequences under different ownership could increase transaction costs so greatly as to foreclose useful products or research.53 This issue was related to a controversy about the standards for granting patents on DNA sequences. Researchers were applying for tens of thousands of patents on small stretches of DNA without necessarily knowing what, if anything, the DNA did. Often these were "expressed sequence tags" or "ESTs," stretches of DNA that were known to be in genes and hence to play some role in the body's function because they were found in transcribed form as messenger RNA in cells. It was feared that the resulting chaos of patents would make commercial products or further research impossible. This concern eventually led the Patent and Trademark Office to issue revised guidelines tightening the utility requirement for gene patents.
However strong or weak these concerns may be in genetics, neither issue seems likely to be very important in neuroscience (except of course in neurogenetics). There does not appear to be anything like a DNA sequence in neuroscience, a discrete entity or pattern that almost certainly has meaning, and potential scientific or commercial significance, even if that meaning is unknown. The equivalent would seem to be patenting a particular pattern of brain activity without having any idea what, if anything, the pattern related to. That was plausible in genetics because the sequence could be used as a marker for the still unknown gene; nothing seems equivalent in neuroscience. Similarly, it seems unlikely that hundreds or thousands of different neural patterns, each patented by different entities, would need to be combined into one product or tool for commercial or research purposes.
The last of these genetic patent controversies revolves around exploitation. Some have argued that genetic patents have often stemmed from the alleged inventors' exploitation of individuals or indigenous peoples who provided access to or traditional knowledge about medicinal uses of living things, who had created and maintained various genetically varied strains of crops, or who had actually provided human DNA with which a valuable discovery was made. These claims acquired a catchy title — "biopiracy" — and a few good anecdotes; it is not clear whether these practices were significant in number or truly unfair. Neuroscience should face few if any such claims. The main patterns of the research will not involve seeking genetic variations from crops or other living things, nor does it seem likely (apart from neurogenetics) that searches for patterns found in unique individuals or distinct human populations will be common.
"Owning Thoughts"
Patents on human genes have been extremely controversial for a wide variety of reasons. Some have opposed them for religious reasons, others because they were thought not to involve true "inventions," others because they believed human genes should be "the common heritage of mankind," and still others because they believe such gene patents "commodify" humans. (Similar but slightly different arguments have raged over the patentability of other kinds of human biological materials or of non-human life-forms.) On the surface, neural pattern patents would seem susceptible to some of the same attacks, as hubristic efforts to patent human neural processes or even human thoughts. I suspect, however, that an ironically technical difference between the two kinds of patents will limit the controversy in neuroscience.
Patents on human genes — or, more accurately, patents on DNA or RNA molecules of specified nucleotide sequences — are typically written to claim a wide range of conceivable uses of those sequences. A gene patent, for example, might claim the use of a sequence to predict, to diagnose, or to treat a disease. But it will also claim the molecule itself as a "composition of matter." The composition of matter claim gives the owner rights over any other uses of the sequence even though he has not foreseen them. It also seems to give him credit for "inventing" a genetic sequence that exists naturally and that he merely isolated and identified. It is the composition of matter claims that have driven the controversy over gene patents. Few opponents claim that the researchers who, for example, discovered the gene linked to cystic fibrosis should not be able to patent beneficial uses of that gene, such as diagnosis or treatment. It is the assertion of ownership of the thing itself that rankles, even though that claim may add little value to the other "use" claims.
Neural pattern patents would differ from gene patents in that there is no composition of matter to be patented. The claim would be to certain patterns used for certain purposes. The pattern itself is not material — it is not a structure or a molecule — and so should not be claimable as a "composition of matter." Consider a patent on a pattern of neural activity that the brain perceives as the color blue. A researcher might patent the use of the pattern to tell if someone was seeing blue or perhaps to allow a person whose retina did not perceive blue to "see" blue. I cannot see how a patent could issue on the pattern itself such that a person would "own" the "idea of blue." Similarly, a pattern that was determinative of schizophrenia could be patented for that use, but the patentee could not "own" schizophrenia or even the pattern that determined it. If a researcher created a pattern by altering cells, then he could patent, as a composition of matter, the altered cells, perhaps defined in part by the pattern they created. Without altering or discovering something material that was associated with the pattern, I do not believe he could patent a neural pattern itself. The fact that neural pattern patents will be patents to uses of the patterns, not for the patterns themselves, may well prevent the kinds of controversies that have attended gene patents.
Patents and Medical Treatment
Neuroscience "pattern" patents might, or might not, run into a problem genetics patents have largely avoided: the Ganske-Frist Act. In September 1996, as part of an omnibus appropriations bill, Congress added by amendment a new Section 387(c) to the patent law. This section states that
With respect to a medical practitioner's performance of a medical activity that constitutes an infringement under section 271(a) or (b) of this title, the provisions of sections 281, 283, 284, and 285 of this title shall not apply against the medical practitioner or against a related health care entity with respect to such medical activity.54
This section exempts a physician and her hospital, clinic, HMO, or other "related health care entity" from liability for damages or an injunction for infringing a patent during the performance of a "medical activity." The amendment defines "medical activity" as "the performance of a medical or surgical procedure on a body," but it excludes from that definition "(i) the use of a patented machine, manufacture, or composition of matter in violation of such patent, (ii) the practice of a patented use of a composition of matter in violation of such patent, or (iii) the practice of a process in violation of a biotechnology patent."55 The statute does not define "a biotechnology patent."
Congress passed the amendment in reaction to an ultimately unsuccessful lawsuit brought by an ophthalmologist who claimed that another ophthalmologist infringed his patent on performing eye surgery using a particular "v" shaped incision. Medical procedure patents had been banned in many other countries and had been controversial in the United States for over a century; they had, however, clearly been allowed in the United States since 1954.56
Consider a neural pattern patent that claimed the use of a particular pattern of brain activity in the diagnosis or as a guide to the treatment of schizophrenia.57 A physician using that pattern without permission would not be using "a patented machine, manufacture, or composition of matter in violation of such patent." Nor would she be engaged in "the practice of a patented use of a composition of matter in violation of such patent." With no statutory definition, relevant legislative history, or judicial interpretation, it seems impossible to tell whether she would be engaged in the "practice of a process in violation of a biotechnology patent." Because molecules, including DNA, RNA, and proteins, can be the subjects of "composition of matter" patents, most genetic patents should not be affected by the Ganske-Frist Act.58 Neural pattern patents might be. It is, of course, quite unclear how significant an influence this exception from patent liability might have on neuroscience research or related medical practice.
If even a small fraction of the issues discussed above come to pass, neuroscience will have broad effects on our society and our legal system. The project to which this paper contributes can help in beginning to sift out the likely from the merely plausible, the unlikely, and the bizarre, both in the expected development of the science and in the social and legal consequences of that science. Truly effective prediction of upcoming problems — and suggestions for viable solutions — will require an extensive continuing effort. How to create a useful process for managing the social and legal challenges of neuroscience is not the least important of the many questions raised by neuroscience.
* C. Wendell and Edith M. Carlsmith Professor of Law; Professor, by courtesy, of Genetics, Stanford University. I want to thank particularly my colleagues John Barton, George Fisher, and Tino Cuellar for their helpful advice on intellectual property, evidentiary issues, and neuroscience predictions in the criminal justice system, respectively. I also want to thank my research assistant, Melanie Blunschi.
1. William Shakespeare, Macbeth, Act I, Scene 4 (1606).
2. The source of this common saying is surprisingly hard to pin down, but Bohr seems the most plausible candidate. See Henry T. Greely, Trusted Systems and Medical Records: Lowering Expectations, 52 STAN. L. REV. 1585, 1591 n. 9 (2001).
3. See, e.g., Secretary's Advisory Committee on Genetic Testing, Enhancing the Oversight of Genetic Tests: Recommendations of the SACGT, National Institutes of Health (July 2000), report available at http://www4.od.nih.gov/oba/sacgt/reports/oversight_report.htm; Holtzman, N.A.; Watson, M.S. (eds.), Promoting Safe and Effective Genetic Testing in the United States: Final Report of the Task Force on Genetic Testing, Baltimore: Johns Hopkins University Press (1997); and Barbara A. Koenig, Henry T. Greely, Laura McConnell, Heather Silverberg, and Thomas A. Raffin, PGES Recommendations on Genetic Testing for Breast Cancer Susceptibility, JOURNAL OF WOMEN'S HEALTH 7:531-545 (June 1998).
4. Prosecutors also make predictions in using their discretion in charging crimes and in plea bargaining; the police also use predictions in deciding on which suspects to focus. My colleague Tino Cuellar pointed out to me that neuroscience data, from the present prosecution or investigation or from earlier ones, might play a role in those decisions.
5. The implications of neuroscientific assessments of a person's state of mind at the time of the crime for criminal liability are discussed in Professor Morse's paper. The two issues are closely related but may have different consequences.
6. See Brunner, H.G., Nelen, M., Breakefield, X.O., Ropers, H.H., Oost, B.A. van, Abnormal Behavior Associated with a Point Mutation in the Structural Gene for Monoamine Oxidase A, SCIENCE 262:578-80 (October 22, 1993), discussed in Virginia Morell, Evidence Found for a Possible "Aggression" Gene, SCIENCE 260:1722-24 (June 18, 1993); and Avshalom Caspi, Joseph McClay, Terrie E. Moffitt, Jonathan Mill, Judy Martin, Ian W. Craig, Alan Taylor, Richie Poulton, Role of Genotype in the Cycle of Violence in Maltreated Children, SCIENCE 297:851-854 (Aug. 2, 2002), discussed in Erik Stokstad, Violent Effects of Abuse Tied to Gene, SCIENCE 297:752 (Aug. 2, 2002).
7. See the discussion of the four unsuccessful efforts to use XYY status as a defense in criminal cases in Deborah W. Denno, Human Biology and Criminal Responsibility: Free Will or Free Ride?, 137 U. PA. L. REV. 613, 620-22 (1988).
8. See two excellent recent discussions of these cases: Stephen J. Morse, Uncontrollable Urges and Irrational People, 88 VA. L. REV. 1025 (2002); and Peter C. Pfaffenroth, The Need for Coherence: States' Civil Commitment of Sex Offenders in the Wake of Kansas v. Crane, 55 STAN. L. REV. 2229 (2003).
9. Kan. Stat. Ann. §59-29a02(a) (2003).
10. 521 U.S. 346, 360 (1997).
11. Id.
12. 534 U.S. 407 (2002).
13. Id. at 413.
14. For a representative sample of views, see Kathy L. Hudson, Karen H. Rothenberg, Lori B. Andrews, Mary Jo Ellis Kahn, and Francis S. Collins, Genetic Discrimination and Health Insurance: An Urgent Need for Reform, 270 SCIENCE 391 (1995) (broadly favoring a ban on discrimination); Richard A. Epstein, The Legal Regulation of Genetic Discrimination: Old Responses to New Technology, 74 B.U. L. REV. 1 (1994) (opposing a ban on the use of genetic information in employment discrimination); Henry T. Greely, Genotype Discrimination: The Complex Case for Some Legislative Protection, 149 U. PA. L. REV. 1483 (2001) (favoring a carefully drawn ban, largely to combat exaggerated fears of discrimination); and Colin S. Diver and Jane M. Cohen, Genophobia: What Is Wrong with Genetic Discrimination?, 149 U. PA. L. REV. 1439 (2001) (opposing a ban on its use in health insurance).
15. For the most up-to-date information on state law in this area, see Ellen W. Clayton, Ethical, Legal, and Social Implications of Genomic Medicine, 349 NEW ENG. J. MED. 542 (2003).
16. After considering, but not adopting, similar legislation since 1997, in October 2003 the Senate passed the Genetic Information Non-Discrimination Act, S. 1053. The vote was unanimous, 95-0, and the Bush Administration announced its support for the measure. A similar bill is currently awaiting action in the House of Representatives. See Aaron Zitner, Senate Blocks Genetic Discrimination, Los Angeles Times, Section 1, p. 16 (Oct. 15, 2003).
17. Brighthouse Institute for Thought Sciences Launches First "Neuromarketing" Research Company, press release (June 22, 2002), found at http://www.prweb.com/releases/2002/6/prweb40936.php
18. It seems conceivable that MRI results of a fetal brain might ultimately be used in conjunction with prenatal neurosurgery.
19. See, e.g., Pierce v. Society of Sisters, 268 U.S. 510 (1925); Meyer v. Nebraska, 262 U.S. 390 (1923).
20. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
21. Frye v. United States, 54 App. D.C. 46, 293 F. 1013 (D.C. Cir. 1923).
22. A National Academy of Sciences panel examining polygraph evidence dated the birth of the polygraph machine to William Marston between 1915 and 1921. COMMITTEE TO REVIEW THE SCIENTIFIC EVIDENCE ON THE POLYGRAPH, NATIONAL RESEARCH COUNCIL, THE POLYGRAPH AND LIE DETECTION at 291-97 (Mark H. Moore and Anthony A. Braga, eds. 2003). Marston was the polygraph examiner whose testimony was excluded in Frye v. United States.
23. See the discussion in United States v. Scheffer, 523 U.S. 303, 310-11 (1998). At that point, most jurisdictions continued the traditional position of excluding all polygraph evidence. Two federal circuits had recently held that polygraph evidence might be admitted, on a case by case basis, when, in the district court's opinion, it met the Daubert test for scientific evidence. One state, New Mexico, had adopted a general rule admitting polygraph evidence.
24. There are a host of studies that place the reliability of polygraph tests at 85% to 90%. While critics of the polygraph argue that accuracy is much lower, even the studies cited by the critics place polygraph accuracy at 70%. Moreover, to the extent that the polygraph errs, studies have repeatedly shown that the polygraph is more likely to find innocent people guilty than vice versa. Thus, exculpatory polygraphs — like the one in this case — are likely to be more reliable than inculpatory ones.
United States v. Scheffer, 523 U.S. 303, 333 (1998) (Stevens, J., dissenting) (footnotes omitted).
A committee of the National Academy of Sciences has recently characterized the evidence as follows:
Notwithstanding the limitations of the quality of the empirical research and the limited ability to generalize to real-world settings, we conclude that in populations of examinees such as those represented in the polygraph research literature, untrained in countermeasures, specific-incident polygraph tests can discriminate lying from truth telling at rates well above chance, though well below perfection.
COMMITTEE TO REVIEW THE SCIENTIFIC EVIDENCE ON THE POLYGRAPH at 4.
25. 523 U.S. 303 (1998).
26. Id. at 305.
27. Id. at 306.
28. 44 M.J. 442 (1996).
29. 523 U.S. at 312-13.
30. Id. at 318.
31. Id. at 320.
32. I owe this useful insight to Professor Fisher.
33. 410 U.S. 284 (1973).
34. 483 U.S. 44 (1987).
35. A constitutional right to admit such evidence might also argue for a constitutional right for indigent defendants to have the government pay the cost of such truth testing, which might be small or might be great.
36. 396 U.S. 868 (1969).
37. 342 U.S. 165 (1952).
38. Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism ("USA Patriot Act") Act of 2001, Pub. L. No. 107-56 (2001).
39. As with bias detection, truth testing could limit the need for such memory assessment when the witness was conscious of the falsity of the memory. Memory assessment, however, could be useful in cases where the witness had actually come to believe in the accuracy of a questioned "false" memory.
40. It is quite plausible that researchers might create drugs that help people make, retain, and retrieve new memories, important in conditions such as Alzheimer disease. One can imagine giving such a drug in advance to someone whom you expected to witness an important event — although providing such a person with a video-recorder might be an easier option.
41. Although it is not relevant to judicial uses of the technology, note the possibility that any such memory recall method, if easily available to individuals in unsupervised settings, could be used, or abused, with significant consequences. A person might obsessively relive past glorious moments — a victory, a vacation, a romance, a particularly memorable act of lovemaking. A depressed person might dwell compulsively on bad memories. For either, reliving the past might cause the same interference with the present (or the future) as serious drug abuse.
42. Atkins v. Virginia, 536 U.S. 304 (2002).
43. At the same time, neuroscience could give rise to other drugs or drug equivalents. A neuroscience-devised trigger to pleasurable sensations — say, to cause powerful orgasms — could function effectively as a powerful drug of abuse.
44. 45 C.F.R. § 160.101 et seq. (2003).
45. Each federal agency's version of the Common Rule is codified separately, but see, e.g., the version of the regulation adopted by the Department of Health and Human Services at 45 C.F.R. §§ 46.101-46.409 (2003).
46. Tarasoff v. Regents of the University of California, 17 Cal. 3d 425, 551 P.2d 334, 131 Cal. Rptr. 14 (1976). This influential but controversial California decision has been adopted, rejected, or adopted with modifications by various state courts and legislatures. For a recent update, see Fillmore Buckner and Marvin Firestone, Where the Public Peril Begins: 25 Years After Tarasoff, 21 J. Legal Med. 187 (2000).
47. See the discussion supra at note 37.
48. See Nicolelis, M.A.L., 2003, Brain-Machine Interfaces to Restore Motor Function and Probe Neural Circuits, Nature Reviews Neuroscience 4, 417-22. For a broader discussion of Nicolelis's work, see Jose M. Carmena, Mikhail A. Lebedev, Roy E. Crist, Joseph E. O'Doherty, David M. Santucci, Dragan F. Dimitrov, Parag G. Patil, Craig S. Henriquez, Miguel A.L. Nicolelis, Learning to Control a Brain-Machine Interface for Reaching and Grasping by Primates, Public Library of Science Biology, Vol. 1, Issue 2 (November 2003).
49. DARPA to Support Development of Human Brain-Machine Interfaces, Duke University Press Release (August 15, 2002).
50. I cannot think of any plausible issues in copyright or trademark law arising from neuroscience (except, of course, to the extent that litigation in either field might be affected by some of the possible methods discussed in the litigation section above). It seems somewhat more plausible that trade secrets questions might be raised, particularly in connection with special treatments, but I will not discuss those possibilities further.
51. Jon F. Merz, Antigone G. Kriss, Debra G.B. Leonard, and Mildred K. Cho, Diagnostic Testing Fails the Test, Nature, 415:577-579 (2002).
52. Report of the National Institutes of Health (NIH) Working Group on Research Tools (June 4, 1998).
53. Michael A. Heller and Rebecca S. Eisenberg, Can Patents Deter Innovation? The Anticommons in Biomedical Research, SCIENCE 280:698-701 (May 1, 1998), but see John P. Walsh, Ashish Arora, and Wesley M. Cohen, Research Tool Patenting and Licensing and Biomedical Innovation, in PATENTS IN THE KNOWLEDGE-BASED ECONOMY (W. M. Cohen and S. Merrill, eds. National Academies Press 2003) (finding no evidence for such a problem).
54. 35 U.S.C. § 287(c) (2003).
55. 35 U.S.C. § 287(c)(2)(A) (2003).
56. See the discussion of the Ganske-Frist amendment in Richard P. Burgoon, Jr., Silk Purses, Sows Ears and Other Nuances Regarding 35 U.S.C. §287(c), 4 U. BALT. INTELL. PROP. J. 69 (1996), and Scott D. Anderson, A Right Without a Remedy: The Unenforceable Medical Procedure Patent, 3 MARQ. INTELL. PROP. L. REV. 117 (1999).
57. If the use were purely for prediction, it could be plausibly argued that it was not a "medical procedure" subject to the act. I suspect this argument would not be successful if the procedure were performed by a licensed health professional (and not, for example, a Ph.D. neuroscientist).
58. Procedures using gene expression results might be vulnerable unless the expression array or gene chip was itself a patented machine or manufacture the use of which was specified in the patent.
These remarks were made by Henry T. Greely for the Regan Lecture.