Markkula Center for Applied Ethics

Reflections on Research Ethics

This text is a transcript of a presentation by Donald Kennedy, editor-in-chief of Science magazine and president emeritus of Stanford University.

I want to talk about a number of respects in which science, as a community, as a discipline, as a way of approaching problems, confronts and contends with other kinds of institutions and other kinds of communities. It happens all the time, no matter whether you happen to be in supposed charge of a scientific journal, as I am now, or whether you're supposed to be in charge of a university, as I was for 12 years.

That list of problems and challenges runs all the way from trying to decide how the institution should treat researchers and the freedom of researchers to follow their own ethical imperatives and their convictions—it ranges from that to how referees treat authors and how authors treat referees, in the case of a scientific journal. It extends, also, to the way in which science is used by governments to interpret realities and thus to guide public policy. And I want to talk about every one of those collisions from both of the recent experiences I've had in wearing the two hats I mentioned.

When I was president at Stanford, I had had the experience of being a researcher in the Biology Department and a chairman of a department who occasionally had to resolve disputes among scientist authors and sometimes had to try to encourage my colleagues to take certain routes in approaching problems or to publish their findings in certain ways. I learned quite a lot from those experiences, but they did not always give me the kind of guidance I needed when, occasionally, disputes would rise to a level at which neither department chairs nor deans could resolve them and they found an uncomfortable place on the desk of the university's president.
About three years into that role, I discovered that there were enough contentions about the ownership of intellectual property, that is, the proprietorship of a certain result that was about to appear in a scientific journal, that I had to intervene in some way: get the protagonists to come in separately or, eventually, together and talk about their disagreements about who was responsible for what findings. And among the difficulties we frequently encountered, there was a debate that took roughly the following tone: "It was my idea." "Well, maybe it was your idea, but I supplied all of the equipment and consulted heavily on the research strategy, so who should be the author?" Usually those questions got happily resolved with the answer: both.

But sometimes there were difficulties, and later on I learned, as a journal editor, that sometimes these created so much tension that they had to be resolved by little daggers or asterisks beside the names of the authors, referring one to a footnote at the bottom of page one of the journal article saying something like "These authors contributed equally to this work." That tells you more than you really want to know about what led up to this paper. But sometimes they do.
Actually, I encountered a case late in my time in which two joint authors from two different laboratories were told by their sponsors, the principal investigators on both projects, that, in fairness, both ought to be co-lead authors and ought to be asterisked in that way, and that did not solve the problem; that is, the question was not co-recognition. The question was exclusivity. Everybody, it turns out, has disparity-detection devices on them and sometimes simply wants to be first.

Well, that's a short sampling of the authorship issue, and I found, as I looked at different disciplines, that the customs in those disciplines as to entitlement to authorship were a little bit difficult to untangle. For example, there were English journals in physiology that required authors to list their names alphabetically, leaving all questions of seniority and priority for the readers to guess their way through. In other laboratories within biology, particularly in molecular biology, the system decreed that the senior investigator in the laboratory, the person who wrote the grant and supplied most of the equipment, would be the last author on the paper regardless of whether or not he or she had made a significant intellectual contribution to the final product.
My own custom, in my own laboratory, was that any co-author had to be able to defend the paper in a public meeting and that everybody on the rather short author list had to have played a significant part, both in the execution of the experiments and in the writing up of the results. That limited the list, and it also limited the amount of contention that occurred during the preparation of that work.

Now I want to turn a little more directly to problems one encounters as the editor of a journal. I was a little surprised, comparing the problems I saw there with the problems I had seen in both of my earlier incarnations, because they indicated a stiffer and more serious level of competitiveness than I had been used to. For example, in the life sciences, some 35 to 40 percent of our submissions—we get 12,000 of those a year at Science—contain a notation in the cover letter listing several people the authors prefer not be asked to referee the manuscript. We actually allow authors to do that. I mean, after all, if somebody has had a deeply disturbing personal relationship with somebody—you can draw your own scenario for this, I won't trouble you with it—then it would clearly be inappropriate for that person to referee. And, in fact, a referee, approached to review a paper and comment on its quality, ought to recuse himself or herself on those very grounds.

But, of course, authors don't exactly expect that everybody will follow their conscience in that regard, so there are these lists. Well, we try to honor them, not always and fully, but we do try to honor them, unless, of course, a paper in a particular discipline attempts to exclude, to our definite knowledge, virtually everybody who is expert enough to review the paper. That seems to us a little bit extreme. But the number of authors who actually do attempt to exclude particular referees amounts to somewhere between 30 and 40 percent of the submitted papers in the life sciences. Incidentally, according to our rather small sample, the percentage is apparently significantly lower in physics and chemistry, you'll be glad to hear.

In fact, a quick digression here: We got sufficiently concerned with the level of competitiveness in the life sciences, particularly in biochemistry, molecular biology, human genetics, and genomics, that range of disciplines, that not long ago the American Society for Cell Biology actually induced a very distinguished Harvard labor economist named Richard Freeman to undertake a serious economist's look at competition among elite departments in quite distinguished universities. And he went around and asked a lot of questions. He was an expert on questionnaire design, as a labor economist ought to be. And he found out a lot about how much time people were spending in the lab and how graduate students and post-doctoral fellows felt about life in the laboratory. In the end, he and a group of colleagues from the American Society for Cell Biology wrote a paper, which we had urged him to submit to Science, and they did submit it, and we published it.

Richard Freeman described what he saw in those laboratories as a tournament economy. A tournament economy is an interesting term of art in economics, which I had to look up, of course. What it means is that there is so much difference between the rewards for being first and for being second that it constitutes an enormous incentive to overexploit others, to sacrifice one's family, to be rather nervous about one's status in what can only be described as a prestige economy. I found that analysis disturbing. I've circulated it to a number of people in the field and asked their view of it. Some of them say, "Yeah, they sort of got us," and some of them think it's just appalling nonsense and certainly not like anything that goes on in their laboratory. So you get an interesting reaction when you ask practitioners in that business how they feel about the quality of their own lives and the intensity of competition in it.

So I want to make a general point about special interests and possible conflicts of interest, which is a topic that arises time and again in considerations of ethical issues. My sense is that what we worry about most of the time is essentially financial conflict of interest. That is, if we see that a scientist who consults heavily for drug companies is on an advisory committee of the Food and Drug Administration deciding on the approval of those drugs, it is clear that that person might conceivably have a financial interest in the outcome of those discussions and probably ought to recuse himself or herself from those conversations.

Little attention, by comparison, is paid to another kind of conflict, which takes place not in the dollar economy but in the prestige economy. That is, what particular interest has that person taken in an issue that often involves strong divisions of opinion? Is the objectivity of a scientist who is approaching a particular problem, let's say as a reviewer, conditioned by positions that individual has previously announced? Should we require as much disclosure of potential conflicts of that prestige kind, or history-of-personal-interest kind, as we do of the financial kind? We work hard at Science to try to integrate that concept into the instructions we give to authors and reviewers about potential conflicts. But we have to emphasize, as we always do, not only the existence of real conflicts but also the existence of those that might be perceived as having an undue influence on the way that person is likely to react.

I think what I have described up to now, for the most part, is a set of issues that obtained in traditional times, almost running up to the present, as we used to know science in the period from, let's say, 1954 or '55 to almost 1980. Something happened in 1980 that drastically changed the ethical terrain on which science, particularly university science, is conducted. The issues I've just been talking about—who should be the author, conflicts of interest, ethical challenges to referees and to authors—are issues that are restricted and contained within the domain of science. What happens when science collides with a different set of institutions?

In 1980, the first year I became president of Stanford, something interesting, much more interesting than that, was happening in Washington, because Congress passed a set of amendments called the Bayh-Dole Amendments. What Congress had been worried about was that despite a lot of federal support for science, all of the National Institutes of Health and National Science Foundation grants, this really generous embodiment of public commitment to basic research was resulting in very little transfer of useful technology. What the senators were asking, essentially, was: What are we getting for this money? We would have supposed that there would be a lot of patents and that those patents would then be licensed to other kinds of institutions that would take the knowledge and develop it into things that were publicly helpful, useful to human service.

The argument was that because the yield had been so low, the government ought to give over any claim to intellectual property resulting from the funds it delivered to these science-rich institutions and say, instead, that the university can patent the invention and conceivably grant licenses to transfer the technology, or that individual faculty members can patent it, if that's the university's policy. Those amendments were greeted with quite a lot of enthusiasm among some of my colleagues at Stanford, I can assure you.

First of all, this is, after all, Silicon Valley, isn't it? You live here, too. We have a kind of culture that rewards entrepreneurship and good ideas and wants to see intellectual property developed into things that people use and buy and are glad to have. So what the Bayh-Dole Amendments did, suddenly, was to create at universities things called Offices of Technology Licensing. And ours, at Stanford, agreed to do the work of patenting faculty inventions. Actually, Stanford had, at that time and during the whole time I was president, a policy that faculty members were free to go out and patent things themselves that they had learned as a result of their research in university laboratories. But the Office of Technology Licensing was so good at that and did it so easily that our faculty members basically didn't want to engage in the hassle of tackling the U.S. Patent Office themselves. And so, basically, Stanford got most of those patents. The question, then, was: what happens now?

Stanford's got a patent. Let's say it's the first one it got, recombinant DNA. Pretty good innovation. Lots of people wanted to make use of that as a tool. And Stanford patented it on behalf of itself and the University of California, because it resulted from a collaboration between a professor at UCSF, Herb Boyer, and a professor at Stanford, Stan Cohen. So the University of California said, "Okay, Stanford, you go ahead and license that." I don't know why they thought we were better at that than they were, but we did, and I think we concluded something on the order of 275 licenses for that technology. The idea was to set the going-in price low, set the royalty low, get a lot of players, and hope that good things would happen. Well, good things happened. I think Stanford and the University of California split about $260 million from the licenses related to the Cohen-Boyer patent.

But now imagine the problem that set up, not only for the faculty members but for the university. And I'm thinking of the kind of problems I faced at the time. The first issue for the faculty member is: I've made this great discovery, but I'd like to do something good with it. And if I find somebody who's willing to help me fund a small company to develop it into something else, wouldn't that be nice? And wouldn't the university be glad of it? Well, the university would be glad of it in the sense that it wants to cheer its alumni on to doing good things and, if possible, to share in the results.

So in 1980, I found myself sitting in an office with Stanley Cohen, talking about how we would manage the royalties that flowed in from licenses to those patents. And that was not difficult to work out. We essentially said that after the Office of Technology Licensing's operating fees were deducted, we'd send a third of it to the school, a third to the department, and a third to the investigator's research program. That seemed reasonable. It didn't take long to negotiate, and it became a sort of standard in other places because people soon heard about it.
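As a rough illustration of that arithmetic, here is a minimal sketch in Python of the kind of split described above. The function name and the 15 percent operating-fee rate are hypothetical placeholders; the talk does not say what the Office of Technology Licensing actually charged.

```python
def split_royalty(gross_royalty, otl_fee_rate=0.15):
    """Split a royalty payment three ways after an operating-fee deduction.

    The equal thirds follow the arrangement described in the talk; the
    15 percent fee rate is a made-up placeholder, not Stanford's actual figure.
    """
    otl_fee = gross_royalty * otl_fee_rate   # Office of Technology Licensing overhead
    net = gross_royalty - otl_fee            # amount left to distribute
    share = net / 3                          # equal thirds
    return {
        "otl_fee": otl_fee,
        "school": share,
        "department": share,
        "investigator_research_program": share,
    }

# Example: a $90,000 royalty payment
print(split_royalty(90_000))
```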
But the more difficult question is, suppose a faculty member who has patented an idea through the university now meets with—now I'm going to make a scenario for you here—meets with a couple of trustees, let's say, and a venture capitalist. And they say, "We ought to have lunch together and think about starting a company, because you've got a great idea here. And what we'd like is for you to do that, and for the university, not just the venture capital firm whose representatives we're having lunch with here, to invest some of its endowment in the same little company. That way the university could not only realize income from the license, but also see the value of its equity share in that young company increase."

The president of Harvard suddenly found his picture on the front page of the New York Times, above the fold, one day, and discovered that Harvard, which he thought he had some responsibility for, was going to co-invest in a biotechnology firm with a member of its own faculty. And all of a sudden he had to wrestle with the question of whether this was going to be seen as a good thing and as an ethical thing within the university community. Well, after some consideration, Harvard decided that it really shouldn't do that. It didn't quite say that it never would, and a mischievous article in the New York Times describing this decision ran under the headline "Harvard Shuns Apple, But Fails to Step on Serpent." But the shunning of the apple was a pretty significant act.
And it set everybody thinking about what it would mean if the university picked winners among the possible faculty companies that might result from the intellectual work of its faculty. Suppose that the faculty members with the high-earning patents got much better treatment in terms of laboratory space, their own salary increases, and the rapidity of their own promotions. Would that not suggest that the university had worked its own set of standards to suggest that commercial applicability somehow was more important than intellectual quality?
So, in the early 1980s, I think almost every research university in this country was trying to confront the problem of how it ought to treat what looked like a new opportunity in terms of research quality and intellectual activity but, at the same time, risked taking the university into the position of paying special benefits to an activity that was not central to its mission.

During the time I was president, we had appeals, too. For example, would we accept a relationship between an off-shore company and a laboratory on campus that would involve the company helping to fund the laboratory? Or graduate students from the laboratory being asked by their major professor, "Wouldn't you like to drop the thesis for a year and come over to the company and work on your project there?" We referred that to the Committee on Graduate Studies, and they said no, never; that would be a bad idea.

So we set out, at a meeting in the mid-1980s, a set of guidelines that we intended to follow: that our obligation under Bayh-Dole was to do what we could for faculty members, not to discourage but rather to encourage them to find financing, form companies, and undertake licenses. We said it ought to be preferable to make those licenses non-exclusive, so that we wouldn't be seen to be favoring certain players in the field, particularly not our own, although exclusivity often turned out to be quite practically necessary, since no other organization showed the particular capacity in the particular area that was the subject of the license except the one the faculty member was involved in.

But we insisted that there be a kind of firewall between the activity undertaken off campus and the faculty member's basic science research operation undertaken on campus. In other words, development (D) is out there, research (R) is in here. They can't avoid some talking back and forth, but the activities are separate and distinct, and the people don't cross over those lines.

Finally, I want to talk for a little while, and it will be no more than ten minutes or so, about a different set of problems that arises in the context of the journal since Bayh-Dole, because increasingly the papers submitted among the 12,000 we get each year come not just from universities but from a mix of authors—some of whom are in universities; some of whom are in government laboratories; many of whom are in laboratories that exist within corporations. We've had to draw up a set of conflict-of-interest rules about what needs to be declared and committed to on the part of authors. So I'm going to go through a few of those, then return to the competitive character of the enterprise, particularly in biomedicine, and, finally, talk a little bit about the ethical challenges posed to us by that level of competitiveness and, particularly, about the existence of occasional fraud in scientific work, with which we have had a very unfortunate recent experience that I'm still a little pained about.
With respect to declarations, every author now needs to tell us what role he or she had in a publication in which he or she is one of a number of authors. That way we think we can simplify the task of untangling a problem if one occurs, and we also put the authors on notice that they are going to be held to some standard about the level of qualification for authorship. Now, by the way, when we get a paper in with a number of authors on it, we automatically send an e-mail to every one of the people listed, saying we've just gotten a very interesting paper with the title so-and-so, and it lists you as an author. Are you? And, of course, most every one of them says, "Yes, thank you very much for letting me know." And, occasionally, somebody says, "No, I never heard of this paper. How come they put my name on it?" And then we want to ask a little more about that relationship. Did somebody leave town and forget you, or, in fact, did they not really consider that you had made a contribution that qualified you for authorship? So that's part of it.
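A minimal sketch of what that confirmation step might look like follows. The function names, message wording, and addresses are hypothetical, invented only to illustrate the workflow described above, not Science's actual system.

```python
# Hypothetical sketch of the author-confirmation step described above.
# send_email() is a stand-in stub, not a real mail client.

def send_email(address: str, subject: str, body: str) -> None:
    # Stand-in for a real mail client; here we just print the message.
    print(f"To: {address}\nSubject: {subject}\n\n{body}\n")

def confirm_authorship(title: str, authors: dict[str, str]) -> list[str]:
    """Contact every listed author and return the names awaiting confirmation.

    `authors` maps author name to e-mail address. In a real system the replies
    would arrive asynchronously; this sketch only shows the outgoing side.
    """
    pending = []
    for name, address in authors.items():
        send_email(
            address,
            f"You are listed as an author on: {title}",
            f"Dear {name}, we have received a submission listing you as an "
            "author. Please confirm that you contributed to and approved "
            "this paper.",
        )
        pending.append(name)  # a "never heard of it" reply would trigger follow-up
    return pending

# Example usage with made-up names and addresses
print(confirm_authorship("A very interesting paper",
                         {"A. Author": "a.author@example.edu",
                          "B. Author": "b.author@example.edu"}))
```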

Second, every author needs to state any kind of financial or other commitment that might lead to a question of conflict of interest. Much of that is financial, especially since Bayh-Dole, but we would also want to know, for example, whether an author, in previously reviewing the field, had made some rather explicit statements that make it very likely he deeply wanted the result this paper actually reports. That's a little more difficult to evaluate, but we want to know it about authors.

We have also, as I said, asked the authors to name people who they think should not be asked to review the paper, and then, of course, we will send the paper out for technical review to in-depth reviewers. And we have questions for them as well. We want to know whether they, or our own in-house editors at Science, our employees, have had a previous relationship with one or another of the authors that would lead them to suppose they might be disqualified, or that would trouble others if they knew they had been involved in evaluating the paper.

So I want to suggest to you that there is a complicated thicket of ethical challenges involving relationships between authors, between authors and reviewers, and with editors at the journal, as well as the policies of the journals themselves, policies that need to be out there, transparent and public, if people are to understand the way the system works. It's been a constant challenge to us, and more of a challenge, I think, since the Bayh-Dole Amendments brought substantially greater involvement of corporate entities in basic research.

Now a whole new set of ethical questions arises when something goes seriously wrong. And we've had, not many, but a few cases of this kind. I had to write an editorial in 2001 about a paper that had to be retracted; it came from a distinguished molecular biology laboratory in Southern California, where the principal investigator had discovered that a post-doctoral fellow had actually falsified data. In that case, there was a very appropriate move on the part of the principal investigator who was responsible. He said, we have a problem. He investigated the problem, communicated it to the dean, communicated it to us, retracted the paper, and we accepted the retraction.

I wrote an editorial at the time because it was my first experience with this. And I said, look, this is going to make people ask what is wrong with the peer review process. Does it mean that smart people have failed to detect a serious flaw in this experiment, in the method, and so forth? I tried to point out that, in fact, peer reviewers don't generally ask the question, could this possibly be cooked? Their job is to ask whether the experiment is appropriately designed, whether the design is appropriate to test the hypothesis, whether the evaluation and the statistics are correct. They are not able to go into the laboratory and investigate whether something went wrong in the conduct of that experiment. In short, the scientific enterprise is built on a basis of trust.

I tried to make all those points. I don't think it convinced anybody that peer reviewers shouldn't be able, somehow, to detect fraud. And, sure enough, there was a series of papers published in Nature, in Science, and in Physical Review Letters, this time in physics, from a group at Bell Telephone Laboratories, no fewer than eight papers, so distinguished that they were heavily cited and even rumored as potential Nobel Prize candidates. There were no fewer than 26 authors involved in various combinations in these eight papers. And it all turned out to be due to a very clever and ingenious fraud committed by one of those authors.

An investigating committee, a very distinguished one appointed by Bell Laboratories, looked through the whole business, and that is what they concluded. If 25 co-authors, in addition to all those peer reviewers, can't find that something is wrong, then you have to ask some questions about the whole system. Well, that went away, and we had the satisfaction, if there is any satisfaction in it, of knowing that other journals were burned as badly as we were.
But then came the case of the stem cell papers from South Korea, from a very distinguished group of investigators who had presented a stunningly convincing case that they had created stem cells from a cloned blastocyst; the work turned out to be fraudulent. I spent probably two months talking to reporters, doing television interviews, and answering questions that tended to have the following form: How did you guys screw up? And I had to defend not only some of our editorial processes but also the peer review system itself, referring to some points I had made earlier, namely that this is too big a task to ask of the peer review system; that an occasional fraud may be part of the price we pay for having the scientific enterprise built on trust; and that the real way to confirm an experiment is to repeat it, not to review it the first time.

Well, it was a long process and a difficult one. We asked an outside committee, a quite distinguished group of scientists including people expert in the stem cell field, to look very hard at everything we had done. They got all the e-mails. They got all the telephone conversation memos. They got all the papers at every phase of review. And they said, basically, that we had done about what counts as excellent practice at high-status journals, and that they couldn't find anything wrong, except that it's a different world and we needed to be ready for it. In short, the message we got is that the level of competition now involved in this enterprise is so intense—it is such a tournament economy, is what they really said—that you need to apply some kind of risk assessment to the journal articles you're reviewing and say that certain ones need an extra burden of review.

We're trying to figure out what criteria to use in deciding that particular papers are high risk. We're certainly not, for example, going to engage in the kind of profiling that says, well, it comes from a developing country on the threshold of scientific excellence but not one of the great powers. We can't do that; it would be wrong. We can't say that any paper reaching a conclusion that contests conventional wisdom has to be given special scrutiny. We might say that in an area where there's a lot of political tension and expectations are high, we ought to apply a harder standard. But I'm a little reluctant to do that, because I value the place that trust has in this system.

And so right now, I think it's a challenging time for science as it works out the ethical structure it needs as it engages with different institutions and takes on a different character inside itself.

Donald Kennedy delivered this Regan Lecture May 24, 2007, at Santa Clara University. Funding for this lecture is provided by the New York Life Insurance Company in honor of William Regan III and a gift from Ann and William Regan.

