Workshop 3: Linsey McGoey, ‘Experimental dissidence: economies of credibility in drug regulation’

Linsey McGoey’s (Science and Technology Studies, Oxford) presentation can be read below in the original version:

In 2004, a former FDA medical officer named David Ross watched news coverage of one of the highest-profile pharmaceutical controversies in recent years: Merck’s withdrawal of Vioxx, its bestselling painkiller, from the global marketplace.

At the time, David Ross railed to his wife about the actions of David Graham, an associate director of drug safety at the FDA who drew international attention for testifying before the US Senate about the FDA’s handling of Vioxx. Graham said his supervisors had ignored warnings that Vioxx could lead to cardiac arrest, and had asked him to change his conclusions in an internal report on Vioxx’s risks.

Today, David Graham is still at the FDA. David Ross is not. But both share a concern that the FDA has not amended its policies much in the wake of the Vioxx controversy, and may, in some ways, be seeking to police internal criticism more strictly.

I interviewed Ross in Washington DC last November. He told me that when he first witnessed David Graham publicly criticizing the agency in 2004, he said to his wife, “How dare he!” – the implication being, how dare Graham publicize concerns better handled internally.

Two years later, Ross revised his perception of Graham’s actions after struggling himself to convince supervisors about the risks of Ketek, an antibiotic drug linked to liver failure. Ross’s efforts to flag Ketek’s risks embroiled him in a bureaucratic battle that impugned his authority and undermined his professional position the more his concerns proved correct and his warnings proved influential.

In this paper, I look at this paradox, exploring an obvious and yet underappreciated truism of organizational life: individuals who call attention to regulatory errors are typically vilified more than those who quietly perpetuate them. The strength of regulatory systems is often contingent on a refusal to articulate or address systematic weaknesses, something relevant to understandings of the politics of “testimonial injustice,” to borrow a term from Miranda Fricker.

The economies of credibility at play during the Vioxx and Ketek controversies offer examples of strategic ignorance in practice: the deliberate or unconscious mobilization of the unknowns and ambiguities of a situation in order to command more resources, deny liability in the aftermath of disaster, and elicit trust in the face of both unpredictable and foreseeable outcomes (see McGoey 2007).

Both cases also reveal examples of capitalized uncertainty, where the uncertain science of calculating a drug’s risk-benefit profile was harnessed by regulators and manufacturers in order to dismiss concerns from in-house medical reviewers who suggested the drugs were unsafe. As concerns over Ketek and Vioxx grew more persistent, manufacturers and regulators increasingly struggled, quite paradoxically, over who could demonstrate the least knowledge of the drugs’ effects, something that impeded public disclosure of adverse drug effects.

In this paper, I focus on the case of Ketek. My discussion draws on interviews carried out with regulators and health policymakers in the UK and US, in particular with David Graham, David Ross and other former and current FDA staff.

Ketek is an antibiotic drug manufactured by Sanofi-Aventis for the treatment of upper respiratory tract infections such as sinusitis and bronchitis. Aventis first sought an FDA license for the drug in 2000. At that time, FDA reviewers, including Ross, were alarmed when a patient treated with Ketek developed severe liver damage. In April 2001, an FDA advisory committee stipulated that the company should conduct a large clinical trial to help determine Ketek’s safety before a license could be granted. The company hired a contract research organization called Pharmaceutical Product Development (PPD) to carry out the study, which became known as Study 3014.

This study involved over a thousand physicians and thousands of patients across the US. One of those physicians was a woman named Anne Kirkman-Campbell, who is currently serving a nearly five-year federal prison term for fraud in connection with Study 3014. Early on during the clinical trial, Ann-Marie Cisneros, a staff member at PPD, became concerned that Kirkman-Campbell was enrolling patients who didn’t qualify for the trial, as well as underreporting adverse effects. Kirkman-Campbell, who was paid $400 by Aventis for each participant she signed up, had enrolled over 400 patients, 1 per cent of the total adult population of Gadsden, Alabama, where her clinic was based. Alarmed by the volume of participants, as well as the fact that no patients had withdrawn from the trial, something unusual for such high numbers, Cisneros began to investigate patient charts and found alarming irregularities. Most of the informed consent forms seemed to have been forged; most patients diagnosed with chronic bronchitis had no history of the ailment; and Kirkman-Campbell had enrolled her entire staff and most members of her family in the trial.[1]

Concerned, Cisneros emailed a summary of her findings to the head of quality assurance at PPD, copying Aventis personnel on the email. At the same time that Cisneros was investigating Kirkman-Campbell, a routine FDA inspection of the study uncovered the irregularities. The FDA alerted authorities, and Kirkman-Campbell was ultimately sentenced on one count of mail fraud and ordered to serve 57 months in a federal prison.

Meanwhile, the FDA investigations revealed violations of good clinical practice at nine other research sites for Study 3014. Despite knowing that data from the trial was compromised by these violations, the FDA presented results from the study to an advisory committee without mentioning that data had been deliberately distorted by trial investigators. Unaware of the unreliability of the data, the advisory group recommended approval of Ketek (Hundley 2007; Ross 2007).

The FDA awarded a licence for Ketek in 2004. Between 2004 and 2006, more than five million prescriptions were written for the drug in the US. During those years, as the New York Times reported, fourteen adult patients suffered liver failure after taking Ketek, at least four of whom died. Twenty-three others suffered serious liver injury. Each of the patients was an otherwise healthy individual (Harris 2006). In 2007, the FDA implemented label changes for the drug, banning its use for two of its three previously approved indications, and insisting on a black box warning for the third: community-acquired pneumonia (Soreth et al 2007).

Also in 2007, the US House of Representatives’ Committee on Energy and Commerce launched a series of hearings to determine whether a range of parties, from the FDA to Aventis, knew about the risks of Ketek and chose to ignore or conceal them.

Ann-Marie Cisneros, the PPD staff member who blew the whistle on Kirkman-Campbell, was called before the hearings. She testified that she had been told by fellow staff at PPD that an Aventis representative had coached Kirkman-Campbell on how to justify irregularities at her research site. “What brings me here today,” Cisneros told the government subcommittee, “is my disbelief at Aventis’s statement that it did not suspect fraud was being committed. Mr. Chairman, I knew it, PPD knew it, Aventis knew it.”

In the court’s ruling against Kirkman-Campbell, not only was Sanofi-Aventis not penalized; the court, somewhat ironically, declared the drug company to be a victim of her fraud, and ordered her to pay nearly one million dollars to the firm in restitution. Kirkman-Campbell appealed the restitution order, arguing that Aventis had “been made aware of the fraud at my site by PPD. At NO TIME did they attempt to stop my participation.” Her appeal was denied (Hundley 2007).

During the hearings, a member of the government subcommittee pressed Cisneros, as well as Henry Loveland, an FDA criminal investigator later assigned to investigate Aventis, about what exactly they felt Aventis knew or didn’t know about Kirkman-Campbell’s actions: “Do you believe Aventis intentionally ignored evidence of fraud?” the representative asked. “Or is it a matter that their processes and procedures for verifying fraud were faulty and couldn’t have detected it?”

Cisneros’ reply was clear: “I personally believe they ignored evidence of fraud. You had to have your head stuck in the sand to have missed this.” Although Loveland said that Aventis’s handling of the study was a “catastrophic failure,” his response was more equivocal. In his view, the company implemented an oversight system which, under the guise of seeking to examine Kirkman-Campbell’s practices, enabled it to avoid perceiving the fraud:

Mr. LOVELAND. The decision-making process that Aventis used to evaluate the warnings that Mrs. Cisneros and other PPD folks raised was illogical, ineffective…From start to finish, their process for analyzing information coming out of the trial was poor. When you get into a traffic accident, you call a traffic cop. These folks came in and they said, We have indicators of fraud, and they called a mathematician. A mathematician didn’t know what fraud looked like, and he couldn’t identify it. He looked at all the data, couldn’t figure out a rule to apply to the data set, came back and said, I don’t see fraud. They took that to convince themselves that two of the most serious allegations raised by Mrs. Cisneros and by other PPD folks weren’t indicators of fraud.

Aventis’s oversight system effectively made it impossible to determine whether the company’s earlier non-detection of Kirkman-Campbell’s actions was itself fraudulent. Under the guise of vigilance, Aventis managed to deflect the possibility of inconvenient findings, strategically solving the problem of how to remain convincingly ignorant of effects widely visible to many.

During debates over Ketek, and pharmaceutical safety in general, a common theme emerges: claimants often struggle over who can attest to the most ignorance of a drug’s effects, or of irregularities carried out during its testing. Such struggles emerge at every stage of a drug’s development and marketing, often in slightly ironic ways. The lead author of a 2003 article on the benefits of Vioxx published in the Annals of Internal Medicine, for example, sought to excuse himself from liability for underreporting the risks of heart attack by pointing out that the article was ghostwritten by Merck and not by him. The published article stated that there was no significant difference between Vioxx and the control group, despite a five-fold higher rate of heart attacks on the Vioxx arm. As Sergio Sismondo describes, the lead author’s justification for the underreporting was that “Merck designed the trial, paid for the trial, ran the trial…Merck came to me after the study was completed and said, ‘We want your help to work on the paper.’ The initial paper was written at Merck, and then it was sent to me for editing” (Sismondo 2009: 172).

Preemptive ignorance in drug regulation – where refusing to know or understand an effect helps to deflect blame – reached its pinnacle in a legal principle known as FDA preemption, institutionalized at the FDA during the administration of George W. Bush. In practice, the policy held that FDA approval of a product should immunize a manufacturer from liability suits by injured patients, the rationale being that if the FDA did not warn against particular risks, companies should not be held responsible for failing to do so (Annas 2009).

Consumer groups have long fought the policy, pointing out that companies have routinely withheld data from regulators, bringing into question the reliability of FDA warning labels. Preemption was effectively struck down last year in Wyeth v. Levine, a Supreme Court case in which the court ruled that a drug “manufacturer must carry responsibility for the content of its label at all times” (Annas 2009: 1207). Although the legal doctrine has suffered recently, the ethos behind it – the assumption that, to put it crudely, ‘what you don’t know can’t hurt you’ – remains a significant resource, perhaps the greatest resource, in drug regulation; a rather obvious point that is only surprising for how persistently its implications are ignored.

What’s interesting about ecologies of ignorance, to borrow a phrase from Niklas Luhmann, is that the greater the number of missed warning signs, the greater the likelihood that liability is deflected for any one party. The pervasiveness of ignored signals inoculates people from individual blame. In the case of Vioxx, for example, the sheer ubiquity of missed signals became useful to the groups that ignored them: the failure to act on evidence is rendered simultaneously more surprising in theory and more excusable in practice the larger the problem becomes and the larger the number of actors enrolled in the pattern of dismissing the same disquieting signs. A similar example appeared during the recent financial downturn, when investment banks such as Morgan Stanley and Goldman Sachs were, in some ways, protected by the very immensity of the risks they alternately exploited and ignored: a problem magnified is often a problem deferred.

Getting back to the case of Ketek: regardless of whether Aventis was at fault for failing to flag problems with Study 3014 to the FDA, the agency soon learned that trial data was compromised – but did little about it.

Beginning in 2000, David Ross, who is now National Director of Clinical Public Health for the US Department of Veterans Affairs, repeatedly raised concerns about Ketek with his supervisors, including questioning why the agency did not tell an advisory group that data from Study 3014 was fabricated. Throughout 2005 and 2006, as post-market reports of Ketek-related deaths emerged, Ross pleaded with supervisors to act on the risks. In June 2006, a number of FDA reviewers, including Ross, were summoned to a meeting with the FDA Commissioner at the time, Andrew von Eschenbach. During the meeting, von Eschenbach compared the FDA to a football team and told reviewers that if they publicly discussed problems with Ketek outside the agency, they would be, in his words, “traded from the team.” At that stage, Ross chose to resign from the agency.

I have spoken with a number of FDA officials who left the agency because of frustrations with how concerns over drug safety were treated.

“What really breaks my heart,” one of them said to me during a phone interview from the UK, “is that aside from David Graham and Andy Mosholder, they pushed us all out. They’re the real heroes because they stayed. The only way you can stay is to either go underground, like Andy Mosholder, or get really loud, like Graham, but it has consequences. They’re never going to let David Graham work on anything important.”

Andrew Mosholder was an FDA medical officer prevented from presenting evidence on the risk of SSRI antidepressant use in children and adolescents to an FDA advisory committee in 2004.

The individuals I spoke with identified a catch-22. Either you stayed at the agency and risked limited opportunities for career advancement, or you left the agency and risked sacrificing the personal credibility and authority needed to draw attention to regulatory shortcomings at your former place of work.

The difficulties of criticizing agency policies are compounded by the fact that individuals who speak out against supervisors or colleagues are often dismissed as disloyal or personally motivated, regardless of the veracity of their criticisms. One former FDA officer I spoke with, who left the agency after his concerns over clinical trial methodologies were ignored, said that in a way he feels he should simply try to move on and forget his own frustration at the agency’s politics and procedures, particularly as “we’re painted as disgruntled former employees and nothing we say matters.”

This reality – the way the sheer act of voicing concerns often impugns the credibility of those voicing dissent – can be viewed as an example of testimonial injustice in drug regulation, and in political life more generally.

The term “testimonial injustice” was coined by Miranda Fricker to describe the specific type of epistemic injustice that arises when an individual is denied credibility in asserting facts or opinions because of a personal characteristic that has nothing to do with the truth or falsity of a statement.

As she puts it bluntly, testimonial injustice occurs when the police do not believe the testimony of a black individual because he is black, or when a professional bioethicist tacitly discredits the philosophical musings of her patients because she assumes they have inadequate expertise or knowledge of a topic, and then revises her opinion when they mention they have PhDs.

Another example she offers is that of an Egyptian woman, working in Cairo, who told Fricker she had the habit of writing down policy suggestions during meetings, passing them to a male colleague, watching as the suggestions were well received, and then joining in the discussion – a strategy adopted out of frustration at seeing her ideas ridiculed by male colleagues whenever she presented them herself (Fricker 2007).

Prejudice is, of course, a very well-trodden topic in the social sciences, but Fricker’s work has the advantage of encapsulating the specific prejudice that arises when someone’s epistemic credibility – their ability to attest to their own knowledge of a phenomenon – is reduced by virtue of a personal characteristic irrelevant to their testimony.

Testimonial injustice stemming from gender or ethnic stereotypes is perhaps the most obvious form of epistemic prejudice in daily life. I wish to suggest another type, one evident in the experiences of those who voiced concerns with drug safety. That injustice is the type that occurs when the repeated act of voicing an opinion delegitimizes the credibility of the opinion expressed.

Recent controversies over drug safety illustrate this phenomenon. Often those who have been most instrumental and influential in raising concerns over licensed drugs are proportionately more resented by peers and supervisors the more their concerns prove prescient rather than misguided. The gravest risk in voicing dissent tends not to be being wrong, but being right.

Particularly in scientific arenas, where, as Nikolas Rose writes, “impersonality rather than status, wisdom or experiences” is called on to justify policies, the more persistently and relentlessly someone seeks to call attention to flawed policies or practices, the more they erode perceptions of their own impartiality – thereby sacrificing their credibility regardless of whether their warnings prove correct. As simple as this point might appear, it has useful implications for understandings of the correlation between political engagement and efficacy, in both medicine and politics more generally.

Despite the fact that social scientists have long illuminated how Merton’s norms of science are illusory in practice, many scientists, as work by Paul Rabinow and Kaushik Sunder Rajan illustrates, continue to perceive their disciplines through norms of how science should be rather than how scientific activities tend to unfold. The moral authority of objectivity – the trust in numbers which, as Porter writes, provides an answer to demands for impartiality and fairness in bureaucratic and political decision-making – generates an onus to appear aligned with Mertonian norms even when many doubt that scientific practice is as disinterested or community-minded as the norms imply.

The dream of how regulatory science should function limits the possibility of effective contestation by constraining the pool of legitimate dissenters to those who can convey a lack of personal commitment or interest in the stakes at hand, thereby enrolling themselves in a form of politics which succeeds most when actions are perceived as having as little effect as possible, when an action can’t be traced to a single instigator. It’s a politics that tends to succeed best at its own expense; a politics that reaches its pinnacle, to paraphrase an insight from Jacques Rancière, as its possibilities are diffused; a politics that reaches fulfilment only at its end (see Rancière 2007 [1992]). The problem is not that individuals are barred from voicing dissent, but that the sheer act of doing so impugns their impartiality, rendering their arguments proportionately more dismissible the louder and more persistently they are voiced.

The pacification of personal opinion is visible in numerous pharmaceutical controversies, such as the case of SSRI antidepressants, where suggestions of a suicide risk were first scoffed at, and then confirmed when a series of re-analyses of company-held clinical trial data revealed statistically significant associations between SSRIs and suicidal reactions that had previously been obscured or ignored during earlier regulatory inquiries (see Lenzer 2006).

The exemplar in this case is David Healy, a professor of psychiatry at Cardiff University who was one of the first to suggest SSRIs could lead to suicide in some users. Healy’s persistence in raising concerns seems, according to a number of practitioners I spoke with during a previous project on the SSRI controversy, to have indicted his professionalism regardless of the fact that his concerns were later confirmed.

A psychiatrist I spoke with suggested that Healy “has a particular view, which he’s sort of made a career out of.” The psychiatrist went on: “I don’t think he’s terribly scientific. I mean, he’s very good at linking into more sociological, pseudo-scientific” areas. The psychiatrist stopped himself and glanced over at me, a sociologist, with a sly grin. “Not saying the two are the same thing,” he added.

The comment is similar to something a professor of epidemiology at Oxford said to me about David Graham: “I’ve heard him say Vioxx deaths are ‘the equivalent of two jumbo jets going into the Twin Towers every day.’ I don’t think that’s helpful. I think that’s alarmist. I think it suggests he’s more interested in headlines than getting the science right.”

It’s an innocent comment, one you’ll often hear said of media-friendly scientists. But its implication – that attracting a lot of headlines and getting the science right are somehow obviously incommensurate things – has consequences that may be less innocent.

To conclude, the pacification of dissent – the way that one’s credibility or authority to pronounce on an experiment’s success or failure is eroded through the sheer act of repeatedly publicizing one’s views – is hardly a new story. But more attention to it may help address an equally persistent problem: the question of why calling attention to regulatory failures is often treated as more of an aberration of correct procedure than quietly perpetuating problems is.

The cynical view is that a functional bureaucracy is necessarily dependent on the suppression of any internal criticism or dissent that might impede the efficiency or impugn the reputation of the organization. Another perspective, one that takes on board the problem of testimonial injustice, is the less cynical, but also far more intractable, possibility that dissenting opinions are persistently ignored because of the assumption that they should be: because of a subtle distaste for the voicing of personal opinion or doubt in arenas where tools of quantification are revered for their democratic ability to decide without appearing to decide.


Linsey McGoey (Saïd Business School)

[Portions of this paper are drawn from Bad Faith and the Economy of Ignorance, a draft book manuscript. For permission to cite, please contact author.]


References

Annas G. 2009. Good Law from Tragic Facts: Congress, the FDA, and Preemption. New England Journal of Medicine 361:1206-11.

Fricker M. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.

Harris G. 2006. Approval of Antibiotic Worried Safety Officials. New York Times, July 18.

Hundley K. 2007. Drug's chilling path to market: How a broken FDA approved a cold antibiotic despite a wide trail of alarms. St. Petersburg Times, May 27.

Lenzer J. 2006. Manufacturer admits increase in suicidal behaviour in patients taking paroxetine. BMJ 332.

McGoey L. 2007. On the will to ignorance in bureaucracy. Economy and Society 36:212-35.

Rancière J. 2007 [1992]. On the shores of politics. London/New York: Verso.

Ross D. 2007. The FDA and the Case of Ketek. New England Journal of Medicine 356:1601-4.

Sismondo S. 2009. Ghosts in the Machine: Publication Planning in the Medical Sciences. Social Studies of Science 39:171-98.

Soreth J, Cox E, Kweder S, Jenkins J, Galson S. 2007. Ketek — The FDA Perspective (correspondence). New England Journal of Medicine 356:1675-6.



[1] Testimony of Ann Marie Cisneros before the Subcommittee on Oversight and Investigations of the Committee on Energy and Commerce, US House of Representatives, Feb 12, 2008, p. 15. http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=110_house_hear...