
Antidepressant Drugs: Accountability and Open Data

Figure 1. Antidepressant drug effects (red line) are flat across the severity of patient depression (x-axis). In contrast, the placebo effect (blue line) decreases as the severity of depression increases. From Figure 3 in “Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration” (published February 2008 in PLoS Medicine). The article is being displayed on an iPad using JournaLink version 1.5.

Current controversy surrounding the research, marketing, and United States Food and Drug Administration (FDA) approval of prescription drugs points toward some of the benefits of open data (see the note below for some general references). One piece of the controversy revolves around whether the FDA approval process is working. That is, does the FDA approval process protect the consumer from ineffective or potentially harmful products?

Note: The principal investigator of the research paper discussed here published a book in 2010 titled “The Emperor’s New Drugs: Exploding the Antidepressant Myth.” In the same year, a psychiatrist published “Unhinged: The Trouble with Psychiatry – A Doctor’s Revelations about a Profession in Crisis” and an investigative reporter published “Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America.”

A paper published a few years ago presented a reanalysis of the data submitted to the FDA in support of the approval of six widely prescribed antidepressant drugs. The authors of “Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration” (published February 2008 in PLoS Medicine) had to use the Freedom of Information Act to get “all publicly releasable information about the clinical trials” for fluoxetine (Prozac), venlafaxine (Effexor), nefazodone (Serzone), paroxetine (Paxil), sertraline (Zoloft), and citalopram (Celexa). All six act on serotonin signaling; most are selective serotonin reuptake inhibitors (SSRIs), which are thought to work by making more serotonin available to postsynaptic neurons.

Interestingly, the data disclosed by the FDA were not sufficient to analyze the effects of sertraline (Zoloft) and citalopram (Celexa), and the authors were unable to fill the gap with data from the pharmaceutical companies or the published literature. Results of the meta-analysis were therefore presented for the four remaining drugs: fluoxetine (Prozac), venlafaxine (Effexor), nefazodone (Serzone), and paroxetine (Paxil).

The authors ran across other problems with the data. Sometimes discrepancies appeared between published versions of the data and the data provided by the FDA. In addition, they found that a pharmaceutical company would occasionally publish a trial more than once “with slight discrepancies in the data between publications.” When these discrepancies appeared, the authors used the data submitted to the FDA.

The meta-analysis found that the overall effect of these antidepressant medications fell below recommended criteria for clinical significance. It also found that the clinically significant effect seen in the most severely depressed patients was due to a decrease in the response to placebo rather than an increase in the response to medication (see Figure 1 above).
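
To make the clinical significance criterion concrete: the authors compared the drug-placebo difference against the threshold recommended by the UK's National Institute for Health and Clinical Excellence (NICE), roughly a three-point drug-placebo difference on the Hamilton Rating Scale for Depression or a standardized mean difference of 0.5. Below is a minimal Python sketch of that comparison; the improvement scores and standard deviation are invented for illustration and are not values from the paper.

    # Minimal sketch of a clinical-significance check, assuming the NICE
    # thresholds described above. The HRSD numbers are invented, NOT data
    # from the paper or the FDA trials.
    NICE_HRSD_DIFFERENCE = 3.0   # drug-placebo difference, HRSD points
    NICE_EFFECT_SIZE = 0.5       # standardized mean difference (Cohen's d)

    def clinically_significant(drug_change, placebo_change, pooled_sd):
        """True if the drug-placebo difference meets either NICE criterion."""
        difference = drug_change - placebo_change
        d = difference / pooled_sd
        return difference >= NICE_HRSD_DIFFERENCE or d >= NICE_EFFECT_SIZE

    # Hypothetical trial: drug group improves 9.6 HRSD points, placebo 7.8,
    # with a pooled standard deviation of 8.0 points.
    print(clinically_significant(9.6, 7.8, 8.0))  # -> False under both criteria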

Clearly, data relevant to the health and well-being of millions of people should be readily available so that third parties can reanalyze them. More eyes and brains working on and reviewing those data will help everyone understand them better. Our understanding of how many of the body’s processes work is incomplete at best and often provisional. This is especially the case with our understanding of the brain. Let’s speed our understanding by making clinical trial data publicly available. Everyone will benefit, including pharmaceutical companies.

Large-Scale Neural Tissue Simulations

Figure 1. A basket cell interneuron in simulated cerebral cortical tissue, from Figure 11 in “An ultrascalable solution to large-scale neural tissue simulation” (published September 19, 2011 in Frontiers in Neuroinformatics). Below the basket cell, traces of electrical activity from its dendrites are displayed. The article is being displayed on an iPad using JournaLink version 1.4.

The authors of the recent paper “An ultrascalable solution to large-scale neural tissue simulation” (published September 19, 2011 in Frontiers in Neuroinformatics) define neural tissue simulations as having the following characteristics:

  • use multi-compartment Hodgkin-Huxley models of neurons derived from anatomical reconstructions of real neurons (a minimal single-compartment sketch follows this list)
  • support synaptic coupling between compartments that attempts to match synaptic distributions in real tissue
  • incorporate the three-dimensional coordinate system of neural tissue
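
To make the first characteristic concrete, here is a minimal single-compartment Hodgkin-Huxley sketch in Python. It uses the textbook squid-axon parameters and simple forward-Euler integration, so it illustrates the membrane formalism only; it is not a reproduction of the paper's multi-compartment cortical neuron models.

    # Single-compartment Hodgkin-Huxley neuron, forward-Euler integration.
    # Classic squid-axon parameters; purely illustrative.
    import math

    C_m = 1.0                            # membrane capacitance, uF/cm^2
    g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # start near the resting state
    dt, I_inj = 0.01, 10.0               # time step (ms), injected current (uA/cm^2)

    for step in range(int(50.0 / dt)):   # simulate 50 ms
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_inj - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        if step % 500 == 0:              # print the membrane potential every 5 ms
            print(f"t = {step * dt:5.1f} ms, V = {V:7.2f} mV")

A full tissue simulation replicates this kind of membrane dynamics across roughly a thousand coupled compartments per neuron, with currents also flowing between neighboring compartments.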

The incorporation of structural constraints is a critical factor in neural tissue simulations. They guide the spatial arrangement and synaptic connectivity of the simulated compartments that make up the component neurons and, ultimately, the neural tissue; these are the same constraints placed on real neural tissue. Simulations that adhere to these real-world constraints have the potential to provide insights into the functioning of real brain tissue.
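
As a rough illustration of how such spatial constraints can shape connectivity, the sketch below places candidate synapses only where compartments of two different neurons pass within a threshold distance of each other. This generic proximity rule is a stand-in for the paper's tissue-construction machinery; the coordinates and threshold are invented.

    # Hypothetical proximity rule: propose a synapse wherever compartments of
    # two different neurons come within `touch_distance` of one another.
    # Not the algorithm used by the Neural Tissue Simulator.
    import math

    # Each compartment: (neuron_id, x, y, z) in micrometers (made-up values).
    compartments = [
        (0, 0.0, 0.0, 0.0), (0, 5.0, 0.0, 0.0), (0, 10.0, 0.0, 0.0),
        (1, 9.0, 1.5, 0.0), (1, 14.0, 1.5, 0.0), (1, 19.0, 1.5, 0.0),
    ]
    touch_distance = 2.0  # um, arbitrary threshold for this sketch

    def distance(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in range(1, 4)))

    synapses = [
        (i, j)
        for i, pre in enumerate(compartments)
        for j, post in enumerate(compartments)
        if i < j and pre[0] != post[0] and distance(pre, post) <= touch_distance
    ]
    print(synapses)  # -> [(2, 3)]: the only compartment pair close enough to touch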

The paper reports large-scale simulations of cerebral cortex comprising one million neurons built from one billion compartments and connected through ten billion conductance-based synapses and gap junctions, or roughly a thousand compartments and ten thousand synapses per neuron on average. The neuron models were derived from the morphology data of real neurons obtained from the public repository NeuroMorpho.org.
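
NeuroMorpho.org distributes reconstructions in the plain-text SWC format, in which each line describes one point of the morphology (point id, structure type, x, y, z, radius, parent id). The sketch below tallies points per structure type in a downloaded file; the filename is a placeholder, and this is only a plausible preprocessing step, not the paper's actual pipeline.

    # Count SWC morphology points per structure type.
    # SWC columns: id, structure type, x, y, z, radius, parent id.
    from collections import Counter

    SWC_TYPES = {1: "soma", 2: "axon", 3: "basal dendrite", 4: "apical dendrite"}

    def count_swc_points(path):
        counts = Counter()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):  # skip comments and blanks
                    continue
                structure = int(line.split()[1])
                counts[SWC_TYPES.get(structure, "other")] += 1
        return counts

    print(count_swc_points("example_neuron.swc"))  # placeholder filename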

A significant feature of the reported neural tissue simulations is the use of complete compartment models of axons. Electrical currents and action potentials are simulated not only across the dendrites and cell bodies but also along axon branches, which enables realistic modeling of action potential propagation failures and conduction times.
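
A toy calculation shows why axonal geometry matters for timing: summing compartment lengths along an axon and dividing by a conduction velocity already yields delays of a millisecond or more. The numbers below are invented, and the paper simulates propagation with full compartmental dynamics rather than a fixed velocity.

    # Toy estimate of conduction delay along a chain of axonal compartments.
    # Lengths and velocity are invented for illustration.
    compartment_lengths_um = [50.0] * 20   # 20 axonal compartments, 50 um each
    velocity_m_per_s = 0.5                 # order of magnitude for a thin unmyelinated axon

    total_length_m = sum(compartment_lengths_um) * 1e-6
    delay_ms = total_length_m / velocity_m_per_s * 1e3
    print(f"axon length {sum(compartment_lengths_um):.0f} um, "
          f"conduction delay {delay_ms:.2f} ms")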

Clearly, simulations of this detail and scale need special machinery. The research team is from IBM’s T. J. Watson Research Center so, not surprisingly, they used the second-generation Blue Gene supercomputer known as Blue Gene/P. Nevertheless, these simulations demanded computational ingenuity, which takes up much of the discussion in the paper. The Neural Tissue Simulator relies on what appear to be proprietary technologies: a model definition language (MDL) and a graph specification language (GSL).

Note: The authors provide MDL and GSL scripts and other files for creating and running the largest simulation reported in the paper. You may download them from the supplemental data link available at the paper’s website. The authors also say they’d like to share the Neural Tissue Simulator software and source code.

The current paper is the first report of neural tissue simulations of more than one million neurons. The work demonstrates the computational feasibility of human-brain-scale neural tissue simulations within the next decade or so. To actually accomplish the feat, of course, requirements beyond the purely computational will need to be met. For one thing, knowledge of connectivity in the human brain is far from complete. What will the relevant question or questions be when we do run these very large-scale brain simulations?