Functional Imaging in Clinical Assessment? The Rise of Neurodiagnostics with fMRI

Oxford Handbook of Personality Assessment

James N. Butcher (ed.)


https://doi-org.libproxy.ucl.ac.uk/10.1093/oxfordhb/9780195366877.001.0001

Published: 2009

Online ISBN: 9780199940592

Print ISBN: 9780195366877


Angus W. Macdonald, III, Department of Psychology, University of Minnesota

Jessica A. H. Jones, Department of Psychology, University of Minnesota

https://doi-org.libproxy.ucl.ac.uk/10.1093/oxfordhb/9780195366877.013.0019

Pages: 364–374

Published: 18 September 2012

Cite: Macdonald, Angus W., III, and Jessica A. H. Jones, 'Functional Imaging in Clinical Assessment? The Rise of Neurodiagnostics with fMRI', in James N. Butcher (ed.), Oxford Handbook of Personality Assessment, Oxford Library of Psychology (2009; online edn, Oxford Academic, 18 Sept. 2012), https://doi-org.libproxy.ucl.ac.uk/10.1093/oxfordhb/9780195366877.013.0019, accessed 30 Apr. 2024.

Abstract

This article provides an update on progress in the use of neuroimaging for predicting clinical states, with particular attention to diagnosis. It discusses the underpinnings of the blood oxygenation level-dependent response used in fMRI, as well as issues involved in measuring this signal reliably. The article then considers the logic underpinning the development of models based on brain data to examine latent states, such as deception, and latent traits, such as the diagnosis of schizophrenia. It concludes that neuroimaging, while not currently a practical tool for clinical assessment, is likely to provide an important avenue of new ideas. Biomarkers, such as those derived from neuroimaging, are likely to have a role in understanding dimensionality and the common origins of certain disorders (for example, depression and anxiety) by providing biological principles around which to organize thinking in these areas.

Keywords: diagnosis, fMRI, latent states, biomarkers, schizophrenia, blood oxygenation level, latent traits, deception, clinical assessment

Subject: Psychological Assessment and Testing; Psychology

Series: Oxford Library of Psychology

Collection: Oxford Handbooks Online

The science fiction scenario that has provoked you into reading this chapter is this: Your client sits down across from you looking quite nervous, and reluctantly asks, “How bad is it, Doc?”

You mouse through a brief file sent over yesterday from radiology featuring brain images with colorful circles and lines. Soon you arrive at a familiar table at the bottom of the computer-generated readout.

“The good news is that your brain scan rules out a schizophrenia spectrum disorder, as well as any indication of pathological appetitive processes.” Then, after a pause, “The bad news is that your default network and primary sensory components suggest very dominant avoidance processes … here and here,” you add, circling on the screen the highlighted pictures of the amygdala and medial orbitofrontal cortex.

She looks back, bewildered, “But, Doc, what does it all mean?”

You shuffle in your seat. You have hardly met the woman and yet you know the tests have placed her well within the confidence intervals of three different internalizing disorders. How can you tell her that if she's not depressed already she soon will be? Or, that the scan shows she is unlikely to be a good candidate for medication? “Well, it means that we know what to work on.”

Imagining such a scenario is easy. We have had tests that function in this manner long before there were electronic medical records in which to record them. What takes a little more work is imagining how such a scenario might come about. What are all the little pieces that need to be in place to probe the brain's psychological and psychopathological states in a manner useful for clinical assessment? In this chapter, we will check in with current efforts to evaluate the little pieces that underpin the logic of clinical assessment, including personality assessment, with functional imaging. We will first describe the increasingly well-understood physiological basis of functional magnetic resonance imaging (fMRI). Next, we will address a topic close to the hearts of psychometrists, which is the reliability of functional imaging. We will then explore current efforts to understand implicit brain states, using studies of lie detection as an illustration. Finally, we will examine recent efforts to diagnose schizophrenia and other psychiatric illnesses using neuroimaging, and reflect on their relevance for the future of clinical assessment, including personality assessment.

For many, the prospect of viewing states of the brain directly, and from this inferring some latent truth about a person's internal state, is as mysterious as it is awe-inspiring. Neuroimaging technology is indeed awe-inspiring, but throughout this chapter seasoned clinical scientists will recognize familiar statistical problems refracted through this new lens. In addition to test-retest reliability, issues such as sensitivity, specificity, and generalizability must be addressed. We have not escaped from these familiar testing criteria simply because we are dealing with images of biological phenomena rather than behaviors. Despite their promise, imaging approaches still have a long way to go to catch up with established interview and paper-and-pencil assessment practices.

Neural Basis of the BOLD Response

The BOLD response is the “blood oxygenation level-dependent” response that is measured in 99% of fMRI studies. (The other 1% measures the actual level of blood flow itself, but this is tricky business indeed, and no more will be said of it in this chapter.) Since very few people actually care whether the blood in the brain is oxygenated or deoxygenated, it is instructive to unpack this signal further to understand why it is a useful way to understand regional brain function. However, the following discussion is not for everyone. Readers interested in the practicalities of fMRI for clinical assessment, with no interest in its theoretical basis, should feel welcome to jump to the next section.

Now, it is a misunderstanding to think of an MR scanner as just a big magnet. It is in fact a number of magnets working in tandem. The biggest magnet, B0, is a supercooled monstrosity that produces a magnetic field many thousands of times greater than the Earth's, with a strength measured in T, or tesla. Most functional neuroimaging today occurs on magnets whose B0 strength is 1.5–3 T. Such fields are strong enough to accelerate a hammer through a block of concrete. For non-ferromagnetic people, however, MRI scanners are harmless and are found by many to be very soothing. B0 needs to be this strong because it aligns all the protons within its magnetic field in more or less the same orientation. Why “more or less”? Because protons in a magnetic field cannot help but wobble as they spin, which is known more elegantly as their precession. Luckily, the rate at which protons wobble depends on the strength of the magnetic field. This is where a collection of other magnets, known as gradients, comes into play. The gradient magnets alter the main B0 field ever so slightly. As a result, all those protons precess at an ever so slightly different frequency across space. This is the necessary setup for obtaining pretty pictures.
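
To make the spatial encoding concrete, here is a minimal numerical sketch of the idea that precession frequency tracks the local field. The field strength and gradient values are illustrative assumptions rather than figures from this chapter; the only physical constant used is the gyromagnetic ratio of hydrogen, roughly 42.58 MHz per tesla.

GAMMA_MHZ_PER_T = 42.58    # gyromagnetic ratio of 1H, MHz per tesla
B0_T = 3.0                 # main field strength (illustrative), tesla
GRADIENT_T_PER_M = 0.01    # example gradient: 10 mT per meter

def larmor_frequency_mhz(position_m):
    """Precession frequency of hydrogen at a position along the gradient."""
    local_field_t = B0_T + GRADIENT_T_PER_M * position_m
    return GAMMA_MHZ_PER_T * local_field_t

for x_cm in (-10, 0, 10):
    freq = larmor_frequency_mhz(x_cm / 100.0)
    print(f"x = {x_cm:+3d} cm -> {freq:.4f} MHz")

# Protons 20 cm apart differ by about 0.085 MHz here, so listening on a
# particular frequency singles out protons at a particular location.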

It is not enough to simply have a number of magnets working in tandem. MRI scanners must also have the capacity to transmit and receive radio signals. (Scanner rooms are surrounded with thick shielding to prevent outside radio signals from interfering with these local broadcasts.) The transmission and reception of signals is carefully orchestrated by a pulse sequence. A pulse sequence begins with a radio frequency (RF) pulse sent from a coil close to a person's head. This RF pulse knocks the protons from their alignment with B0 into some other, predetermined alignment. This also synchronizes their precessions, which is like setting the hands of a clock to 12:00. As each proton finds its way back into its own precession, a very slight signal is emitted, which can be picked up by a very sensitive antenna close to the source. By no coincidence at all (remember those gradients?), each location in space broadcasts at a slightly different frequency, so by listening for a particular frequency, one can learn something about the protons' journey back to their original precession in that location. The broadcasts that are heard most readily are those from the hydrogen protons in H2O.

Yes, this is a simplification of a number of complicated equations and technical marvels that have consumed physicists and engineers for more than half a century. But it is not so gross a simplification as to obscure the following insight: a proton's journey back to its own precession reveals important information about its immediate environment. One aspect of that environment is a useful magnetic property of the nearby hemoglobin. Dispatched from the lungs, and propelled by the heart, hemoglobin carries oxygen to all tissue in the body. When hemoglobin is on its outward, oxygenated path, it is diamagnetic like most tissue. That is, it has no magnetic properties to speak of. After the hemoglobin has made its oxygen delivery, it becomes paramagnetic. That means that, within a larger magnetic field, deoxygenated hemoglobin has the capacity to generate its own small magnetic field. This small magnetic field then affects the local protons as they desynchronize in the milliseconds after the RF pulse. Voilà. Just by going about its business, hemoglobin provides a remote means for measuring where blood in the body is oxygenated and where it is not.

While the contrast between oxygenated and deoxygenated hemoglobin was seized upon in the early 1990s as a marker of cognitive functioning, it took a decade before experimental evidence provided a more thorough account of the relationship between the BOLD response and neuronal activity. This account, by Logothetis, Pauls, Augath, Trinath, and Oeltermann (2001), utilized simultaneous BOLD response imaging and electrophysiological recording from neurons in visual cortex. These electrodes were capable of recording action potentials in single neurons, across multiple neurons, and the local field potentials. Local field potentials are thought to reflect an average of inputs to dendrites and the propagation of signals from dendrites down to the soma. The researchers' principal discovery was that whereas spiking activity returned to baseline regardless of how long a visual stimulus lasted, both the BOLD response and the local field potentials were sensitive to stimulus duration. This suggested that what the BOLD response was tied to was the input and local processing within that brain region, rather than its output or spiking activity. Although at first a surprise, these findings are easily reconciled with what is known about glucose metabolism in the brain. Excitatory pyramidal glutamate neurons use 80–90% of the brain's energy, and by implication, a similar proportion of oxygen. While axonal firing does account for some portion of this, the majority of this energy is expended on the propagation of excitatory potentials from the dendrites to the soma. Given these considerations, Logothetis's observation of the relationship between local processing and the MR signal is an intuitive way to think about the link between the BOLD response and cognitive functioning. Readers further interested in this linkage are referred to several helpful sources (Heeger & Ress, 2002; Mintun et al., 2001).
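
The key contrast here, sustained local field potentials versus transient spiking, can be illustrated with a toy simulation: convolve each signal with a hemodynamic response and see which one tracks stimulus duration the way the BOLD response did. The double-gamma response shape and all signal parameters below are conventional assumptions, not Logothetis's data.

import numpy as np
from scipy.stats import gamma

dt = 0.1                                   # seconds per sample
t = np.arange(0, 30, dt)

def hrf(time):
    """Double-gamma hemodynamic response shape (a conventional assumption)."""
    return gamma.pdf(time, 6) - 0.35 * gamma.pdf(time, 16)

for duration_s in (1.0, 4.0, 12.0):
    lfp = (t < duration_s).astype(float)   # sustained, input-like activity
    spikes = (t < 1.0).astype(float)       # transient burst, whatever the duration
    bold_lfp = np.convolve(lfp, hrf(t))[:t.size].sum() * dt
    bold_spk = np.convolve(spikes, hrf(t))[:t.size].sum() * dt
    print(f"{duration_s:4.1f} s stimulus: LFP-driven BOLD {bold_lfp:5.2f}, "
          f"spike-driven BOLD {bold_spk:5.2f}")

Only the LFP-like signal grows with stimulus duration, which is the pattern Logothetis and colleagues observed for the measured BOLD response.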

Even though the BOLD response is not a direct measure of neural activity, it is demonstrably linked to activity in localizable populations of neurons. Thus, to the extent that brain functioning underlies psychopathology and functional imaging can measure something about brain functioning, it is at least principled to ask whether such imaging modalities can assist in clinical assessment. However, there are many sources of noise when making measurements using fMRI. Thus, the next question we will turn to is whether these imaging modalities provide a consistent answer. That is, how reliable is fMRI?

Reliability

A large number of factors affect the repeatability of BOLD measurements. One of the most common sources of disruption is physiological “noise,” which includes movement and individual differences such as cardiac variability (Liston et al., 2006; Shmueli et al., 2007) and rate of breathing (Thomason, Burrows, Gabrieli, & Glover, 2005). But noise is not limited to the basic functions that maintain life; the signal is also sensitive to whatever else is coursing through the bloodstream. Substances such as caffeine (Laurienti et al., 2002; Liu et al., 2004), nicotine (Thiel & Fink, 2007), ethanol (Seifritz et al., 2000), and glucose (Anderson et al., 2006) change the signal from session to session. Especially for single-subject clinical assessments, it will be imperative to understand how these factors reduce the precision of fMRI measurement.
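
One common way to handle such physiological confounds, assuming the cardiac and respiratory traces have been recorded, is to include them as nuisance regressors in a general linear model. The sketch below uses synthetic signals throughout; it illustrates the regression logic, not any particular study's pipeline.

import numpy as np

rng = np.random.default_rng(0)
n_vols = 200
t = np.arange(n_vols) * 2.0            # one volume every 2 s

task = (np.sin(2 * np.pi * t / 40) > 0).astype(float)  # toy block design
cardiac = np.sin(2 * np.pi * 1.1 * t)                  # aliased cardiac trace
resp = np.sin(2 * np.pi * 0.3 * t)                     # respiratory trace
bold = 1.0 * task + 0.8 * cardiac + 0.5 * resp + rng.normal(0, 1, n_vols)

# Design matrix: intercept, task regressor, and the two nuisance regressors.
X = np.column_stack([np.ones(n_vols), task, cardiac, resp])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X[:, 2:] @ beta[2:]   # subtract only the nuisance fit

print(f"estimated task effect: {beta[1]:.2f} (true value 1.0)")
print(f"variance removed by nuisance regressors: {bold.var() - cleaned.var():.2f}")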

Given the potential for changes in noise, how do these affect the overall level of reliability? McGonigle et al. (2000) investigated the extent to which one session for a single subject “typifies” that subject's response across multiple sessions, using three different paradigms. In each paradigm, the context of the various sessions was found to have a significant effect on the subject's activation. The authors concluded that assessments of subjects based on a single session should be interpreted with a great deal of caution. This finding was controversial, even outright depressing, as many of the hopes for fMRI lay in the realm of using single-session scans for both assessment and treatment aims. But the story soon became more nuanced. Upon conducting secondary analyses, Smith et al. (2005) concluded that while intersession variability played a significant role in a subject's activation, this variability was not significantly larger than within-session variability. That is, the tools (and power) necessary to cope with within-session variability—which the field has had to deal with since the outset—were likely to be those needed to deal with between-session variability. This conclusion deflected much of the skepticism about the reliability of fMRI methods that McGonigle et al. (2000) had raised and restored belief that the activation of a subject from a single session might be a reasonable representation of that subject's brain activity over multiple time points.

Following the publication of McGonigle et al. (2000), but before the reanalysis by Smith et al. (2005), a second study investigated the test-retest reliability of fMRI activation using a checkerboard visual task with three conditions of varied attentional load (Specht, Willmes, Shah, & Jancke, 2003). Both individual and group contrast analyses of the intraclass correlation coefficient (ICC) were conducted. For all three conditions, the range of intraclass correlation coefficients observed was .4–1, with each condition showing a significant number of voxels in primary visual areas with ICCs above .8. Thus it would appear that primary visual areas can be reliably activated across testing sessions. Moreover, significantly more voxels were consistently activated in the two conditions that required greater attentional load.
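
For readers who want the computation spelled out, the sketch below estimates a two-way intraclass correlation, often written ICC(2,1), for a single voxel measured across repeated sessions. The data are synthetic, and the choice of ICC(2,1) is our assumption; published studies vary in which ICC variant they report.

import numpy as np

rng = np.random.default_rng(1)
n_subj, k_sess = 20, 2
stable = rng.normal(0, 1, (n_subj, 1))                 # stable subject effect
data = stable + rng.normal(0, 0.5, (n_subj, k_sess))   # plus session noise

grand = data.mean()
ms_rows = k_sess * ((data.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
ms_cols = n_subj * ((data.mean(axis=0) - grand) ** 2).sum() / (k_sess - 1)
resid = data - data.mean(1, keepdims=True) - data.mean(0, keepdims=True) + grand
ms_err = (resid ** 2).sum() / ((n_subj - 1) * (k_sess - 1))

icc = (ms_rows - ms_err) / (
    ms_rows + (k_sess - 1) * ms_err + k_sess * (ms_cols - ms_err) / n_subj)
print(f"ICC(2,1) = {icc:.2f}")   # should land near 0.8 for this noise level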

Unfortunately, many of the brain regions of interest for clinical assessment are farther forward in the brain than primary visual areas and are only remotely linked to any given stimulus. Work by Manoach and colleagues (2001) has addressed this directly. They investigated test-retest reliability of fMRI activation using a working memory task known to activate prefrontal cortex. In addition, the researchers separately examined reliability in healthy controls and patients with schizophrenia. In regions associated with working memory function, such as dorsolateral prefrontal cortex, retest reliability among controls was ICC = .81, which was even stronger than the retest reliability in the primary motor areas associated with responding, which was ICC = .50. In contrast to the high level of reliability in controls, patients showed very low reliability in activating dorsolateral prefrontal cortex (ICC = .20), whereas primary motor cortex activation was similar to that of controls (ICC = .46). Many factors may play a role in this decreased reliability, including greater within-subject variability in behavioral performance or more motion. It is also possible that some aspect of the disorder itself conveys a higher level of variability. Overall, the data suggested that, in controls anyway, regions outside of visual cortex might be reliably measured in single subjects.

These studies have all used fMRI in a rather “traditional” manner. That is, they have manipulated an independent variable (the stimulus) and evaluated the reliability of its effect on the dependent variable (the BOLD signal). Another approach to fMRI analysis that, as we shall see, is gaining currency is the use of independent components analysis (ICA). In this case, there is no independent variable per se; instead, the relationships over time between different voxels are subject to analysis. Thus, brain maps can be calculated that reflect the extent to which different regions are linked through activity or inactivity at the same times. To anticipate the uses of ICA in neurodiagnostics, it is useful to consider the reliability of ICA brain maps. In the only test-retest study of ICA to our knowledge, Lim et al. (2007) evaluated the resting state networks of seven healthy participants tested 6 weeks apart. The resting state networks identified included the posterior cingulate, ventral anterior cingulate, medial prefrontal, and superior and inferior parietal cortices. The mean cross-correlation of the networks identified at times 1 and 2 was r = .49, suggesting that while a consistent signal may be present, there is much room for improvement.
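
A minimal sketch of this kind of test-retest check follows: estimate spatial components with ICA in two simulated "sessions" and correlate the resulting maps. The synthetic data and the use of scikit-learn's FastICA are illustrative assumptions, not the pipeline of Lim et al.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_time, n_voxels, n_comp = 150, 500, 3
true_maps = rng.normal(0, 1, (n_comp, n_voxels))   # "true" spatial networks

def simulate_session():
    time_courses = rng.normal(0, 1, (n_time, n_comp))
    return time_courses @ true_maps + rng.normal(0, 0.5, (n_time, n_voxels))

session_maps = []
for _ in range(2):
    ica = FastICA(n_components=n_comp, random_state=0)
    ica.fit(simulate_session())
    session_maps.append(ica.mixing_.T)   # rows = estimated spatial maps

# Match each session-1 map to its best session-2 partner by absolute r.
corr = np.abs(np.corrcoef(np.vstack(session_maps))[:n_comp, n_comp:])
print("best cross-session map correlations:", corr.max(axis=1).round(2))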

Functional MRI has faced some skepticism about its reliability. Many factors do impact the precision of fMRI both during and between sessions. However, the studies described above illustrate that adequate levels of reliability can be found. At the same time, it is increasingly clear that MRI researchers have not been adept at translating the high-dimensional data they are obtaining into reliability coefficients that are either consistent within the field or interpretable to interested observers. The field still has a long way to go before a definitive answer about the reliability of fMRI is known.

Inferring Latent States: The Case of Lie Detection

Assuming reliability will in time be demonstrated in a consistent and interpretable manner, one can turn to validity. In this regard, one of the great challenges in assessment, which is generally not present in fMRI research, is the use of measured variables to infer the presence of a latent factor—a personality profile, or a diagnosis. In fMRI research, the independent variables are generally known—the task condition, the patient groups—and the pattern of brain activity is the topic of investigation. But in the case of clinical assessment, that approach is off the table. That is, the brain activity is thrust into the role of the independent variable and is used to infer something that could not otherwise be measured. As we shall see in the next section, the capacity to infer a latent trait, such as a diagnosis, involves building a model to study the differences between people. Before we take that leap, it is informative to examine an intermediate case, which is the use of models to study state changes within the same person. In this case, the state in question is whether the person is telling the truth or not.

While the study of deception in our fellow citizens goes back to time immemorial, the capacity for using fMRI to do this emerged only recently. The first study of deception used a traditional approach in which periods of deception were established as a behavioral variable, the pattern of brain activity was the dependent variable, and subjects were studied as a group (Lee et al., 2002). Subjects were instructed “to fake well, do it with skill, and avoid detection” (p. 159). When the investigators compared brain activity during a recall condition with activity during the deception condition, they found that large regions of the brain, including bilateral prefrontal, frontal, parietal, temporal, and subcortical regions, were associated with the cognitive demands of lying. This study showed that, on average, people use particular parts of their brain more when lying, but it did not detect which events were lies per se.

Subsequent work has argued that deception events themselves can be measured in individual subjects (Langleben et al., 2005). This study used a forced-choice paradigm in which healthy undergraduate males were given two cards and were instructed to acknowledge possession of one card and deny possession of the other. At the beginning of the study, participants were given $20 and were instructed they could keep the money at the end of the study only if they successfully concealed their deception. In addition to the two cards given to the participant, distracter cards were presented to measure the activation associated with varying stimulus salience. The investigators found that, on a group level, activation during the truth condition was increased in prefrontal and parietal areas compared to the lie condition. Most importantly, the authors examined the accuracy of using activation to distinguish truth from deception on a single-event level. The probability of their computational model correctly separating a pair of trials, with one trial as truth and one as deception, was 85%. While these results are impressive, and a necessary step in determining whether lie detection is feasible using fMRI, the presence or absence of a lie remained a measured rather than an inferred variable. Thus, the next step was to use brain activation alone to determine whether a subject was being deceitful.

To test the possibility of using fMRI as a lie detector in a manner analogous to, but hopefully better than, the polygraph, Kozel and colleagues (2005) asked subjects to take part in a mock ‘crime.’ This crime was to take a ring or watch from a specific room to the locker where the subject's personal belongings were kept, while being watched by an investigator. Money was used as an incentive to encourage skillful lying; the participants were instructed they would be given an additional $50 at the end of the experiment if a blind investigator could not tell when they were lying. The investigators started by collecting data on 30 participants and using the data to build a model for detecting deception. In building the model, deception events were known to the algorithm. The deception detection model then focused on three clusters of activation: right anterior cingulate, right orbitofrontal and inferior frontal areas, and right middle frontal areas. It was observed that relative increases in activity in these regions were positively correlated with the likelihood of lying. The researchers then tested their model on an independent group of 31 participants run through the same protocol. Using a combination of the regions associated with deception, the investigators were able to infer when the participants were being deceptive with a 90% accuracy rate.
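
The essential logic, build a model on one sample with deception labels known, then test it on an independent sample, can be sketched in a few lines. The regions, effect sizes, and choice of logistic regression below are invented placeholders under that logic, not the published Kozel et al. model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical features in the spirit of the reported clusters: right
# anterior cingulate, right orbitofrontal/inferior frontal, right middle
# frontal activation on each trial.

def simulate_group(n_trials):
    lying = rng.integers(0, 2, n_trials)
    # Assume lying trials show relatively greater activity in all regions.
    activity = rng.normal(0, 1, (n_trials, 3)) + 0.9 * lying[:, None]
    return activity, lying

X_build, y_build = simulate_group(300)   # trials from the model-building group
X_new, y_new = simulate_group(310)       # trials from an independent group

model = LogisticRegression().fit(X_build, y_build)
print(f"out-of-sample accuracy: {model.score(X_new, y_new):.0%}")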

There are a number of questions that this body of work raises specific to the problem of lie detection— does the model of Kozel and colleagues generalize to other crimes or just this one? Is 85–90% accuracy high enough to take action or convict? But beyond this work's capacity to increase the sale of MRI technology to law enforcement, such efforts suggest that the technique of building models based on fMRI data to predict unknown aspects of subsequently scanned subjects may be possible. In this case, the model was built to detect a change in state, from that of telling the truth to lying, within individuals. The challenge for the clinical assessment of personality or diagnosis is to build models that can distinguish between individuals.

Neurodiagnostics

Is there a pattern of activity in the brain that is so closely associated with, for example, an impulse control disorder that simply by knowing someone has that pattern of activity one can be reasonably certain they have the behavioral features of an impulse control disorder? A strictly theoretical approach to such a question might involve determining brain regions associated with reward-related activity, perhaps the ventral striatum, and brain regions associated with executive control, perhaps orbital and dorsolateral prefrontal cortices. One might then examine brain activity in these regions while performing a task that has been shown to be related to individual differences in impulse control. One could calculate the extent to which activity in these regions of interest predicts some external measure of impulse control. Indeed, a growing number of studies show correlations between brain activity and symptoms (e.g., MacDonald et al., 2005). However, very few of these studies have conceptualized this as a question of prediction, and therefore we have almost no knowledge of how good such approaches are for classification or prediction.
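
As a concrete, entirely hypothetical version of the theory-driven approach just described, one could regress an external impulse-control score on activity in the a priori regions and ask how much variance is explained. The region names, effect sizes, and data below are illustrative.

import numpy as np

rng = np.random.default_rng(6)
n = 40
ventral_striatum = rng.normal(0, 1, n)       # reward-related activity
dlpfc = rng.normal(0, 1, n)                  # executive-control activity
impulsivity = 0.6 * ventral_striatum - 0.4 * dlpfc + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), ventral_striatum, dlpfc])
beta, *_ = np.linalg.lstsq(X, impulsivity, rcond=None)
pred = X @ beta
ss_res = ((impulsivity - pred) ** 2).sum()
ss_tot = ((impulsivity - impulsivity.mean()) ** 2).sum()
print(f"variance explained by ROI activity: R^2 = {1 - ss_res / ss_tot:.2f}")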

The most advanced efforts in using brain measurements to provide clinical information are theory-poor and data-driven. That is, as opposed to building up a clinical presentation based on an understanding of basic mechanisms, neurodiagnostics have used sophisticated statistical approaches applied to a lot of data. Machine learning tools are then used to sort through the data to pick out the data points that might be most relevant to differentiating between groups. The process would be recognizable to dust-bowl empiricists like Starke Hathaway who selected items for the Minnesota Multiphasic Personality Inventory (MMPI) based on their ability to discriminate groups. In this case, though, the data reflect patterns of brain activity rather than overt responses.

Apostolos Georgopoulos and colleagues have published the strongest proof of principle to date for the potential of brain data combined with modern computational algorithms to make differential diagnoses (Georgopoulos et al., 2007). In this case, their study used magnetoencephalography, or MEG, rather than fMRI. Like fMRI, MEG is a “noninvasive” method for detecting brain activity that measures changes in the local magnetic field. But there the similarities end. Whereas fMRI measures blood oxygenation as discussed above, MEG measures the magnetic field disturbances associated with the electrical activity of neurons. To do this, MEG relies on a number of sensors placed around the skull. Like electroencephalography, or EEG, MEG has very fast temporal resolution. Thus investigators are able to measure 1,000 data points every second from each sensor (in contrast, fMRI acquires a whole-brain image every 1.5–3 s). MEG is most sensitive to the activity of neurons in a particular orientation that are close to the sensors. Therefore, its strength lies neither in producing uniform data about the brain nor in measuring activity deep in the brain. Finally, MEG machines worldwide number only in the tens, so it is currently an uncommon technology.

As many investigators had suggested before, Georgopoulos recognized that neurological and psychiatric disorders might occur when neurons stop communicating normally. For him, all those MEG sensors measuring activity on a millisecond timescale were ideal for measuring such communication problems. To this, Georgopoulos added two insights. First, he intuited that communication problems might be visible in the background chatter of neurons while participants simply stared at a point of light for 45 s. This solved the problems that come from asking participants with different ability levels to perform a task. The second insight addressed the problem that no one knew which signal would indicate the presence or absence of a disorder. Georgopoulos realized that he and his colleagues did not need to know the answer: computational algorithms can answer just this kind of question. “You know there is gold in America, but all you have is a map of the United States. The computer algorithm is there to show you where the gold is” (Georgopoulos, personal communication, October 2007).

To demonstrate the usefulness of this new formula, the investigators measured 52 participants with five different diagnoses—schizophrenia, alcoholism, Alzheimer's disease, multiple sclerosis, and Sjogren's syndrome, which is an autoimmune disorder that affects the glands that produce tears and saliva. They also included healthy control participants. After preprocessing the data, they computed correlations between the neural activity at each of the 271 sensors, and then identified sensor pairs that had the highest correlations. These steps served to reduce the data from about 271 (sensors) × 45 (s) × 1,000 (measurements per second) raw data points to a still very large, but somewhat more manageable, number of cross-correlations. About 18% or 5,500 of these cross-correlations differed significantly across the six groups. To sift through this still very large corpus of information, the investigators employed an evolutionary, or genetic, algorithm. Thus their algorithm for distinguishing diagnostic groups started with a random, and therefore nonoptimal, formula for separation. The algorithm then mutated and improved over many, many “generations” to better discriminate between groups. While the procedure employed was computationally intensive and the details somewhat arcane, the crucial point is this: a river of MEG information was available; through a series of principled decisions, this river was sifted for a small quantity of gold.
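
A toy version of this sifting pipeline is sketched below: compute cross-correlations between sensor time series, then let a simple evolutionary search pick out the sensor pairs that best separate two groups. The group sizes, mutation scheme, and fitness function are our own simplifications and should not be read as the published Georgopoulos et al. algorithm.

import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_samples = 12, 1000
pairs = [(i, j) for i in range(n_sensors) for j in range(i + 1, n_sensors)]

def subject_features(is_patient):
    ts = rng.normal(0, 1, (n_sensors, n_samples))
    if is_patient:                      # "patients": sensors 0 and 1 co-fluctuate
        ts[1] = 0.6 * ts[0] + 0.8 * ts[1]
    return np.array([np.corrcoef(ts[i], ts[j])[0, 1] for i, j in pairs])

X = np.array([subject_features(p) for p in [0] * 20 + [1] * 20])
y = np.array([0] * 20 + [1] * 20)

def fitness(mask):                      # group separation on the chosen pairs
    if not mask.any():
        return -np.inf
    gap = X[y == 1][:, mask].mean() - X[y == 0][:, mask].mean()
    return abs(gap) / (X[:, mask].std() + 1e-9)

pop = rng.random((40, len(pairs))) < 0.1        # random starting "formulas"
for _ in range(60):                             # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)][-20:]     # keep the fittest half
    children = parents[rng.integers(0, 20, 20)].copy()
    flips = rng.random(children.shape) < 0.02   # mutation
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected sensor pairs:", [pairs[i] for i in np.flatnonzero(best)[:5]])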

The discriminant function classification that emerged from these procedures has a number of properties of great interest to readers. First, there were many combinations of sensor pair correlations that correctly classified all of the 52 subjects, far more than would be expected by chance. An example of such a combination is illustrated in Figure 19.1. But the true test came when they used the algorithm derived from the initial sample in an independent sample of 46 subjects. Of the classification schemes that had given 100% correct classification in the first sample, many provided excellent classification in the new sample (hundreds provided better than 95% correct classification, and thousands more provided better than 90% correct classification). The cross-validation approach used in these analyses is quite powerful; however, the relative scarcity of MEG devices around the world and an unproven ability to repeat the results at another location will slow the adoption of this approach in clinical practice. Thus, the question can well be asked whether the data available from functional MRI are capable of a similar trick.

Fig. 19.1

Classification results from 52 participants from six different groups included in Georgopoulos et al. (2007) using the correlations between MEG sensors. The six groups are plotted on two out of five canonical discriminant functions to illustrate the separation between them that occurred for this particular iteration of their analysis path. Reprinted with kind permission of the author and publisher.

The use of fMRI to distinguish between diagnostic categories remains in its early stages, but a growing body of work suggests explosive growth in this area may be just around the corner. The work of Calhoun and colleagues (Calhoun, 2005; Calhoun, Maciejewski, Pearlson, & Kiehl, 2007) has made the greatest contributions we know of using this kind of data. Calhoun shares with Georgopoulos a theory-poor, machine-driven approach to data analysis and classification, one that likewise exploits the incidental correlations that arise across the brain. Whereas Georgopoulos used the very high temporal resolution of MEG to observe correlations as they occurred in time, Calhoun's method relies on ICA, described above.

To study whether such ICA brain maps were indicative of psychopathology, Calhoun evaluated two components that emerged when 21 chronic schizophrenia patients, 14 bipolar patients, and 26 healthy controls were asked to perform an auditory oddball task during fMRI. The oddball task is simply a series of stimuli (one every 2 s) that establish a dominant pattern. Occasionally (25% of the time) something different from the dominant tone occurs, an oddball, and participants responded with a button press. Many studies of the oddball task have been conducted, and these have focused on brain regions associated with detecting and responding to the oddball stimuli. Analyzing the independent components that could be found within brains engaged in this task, however, was something quite different from these more traditional analyses. In this case, the researchers calculated maps of the default mode and temporal lobe networks from all of the participants. These networks are so named because they represent two different canonical sets of regions: the default mode network is typified by activity in the posterior cingulate, precuneus, and frontopolar cortex; the temporal lobe network is characterized by activity in the inferior temporal lobe and temporal poles. The researchers then left out one participant from each group during the training of an algorithm to distinguish people in each group. That is, they allowed the algorithm to learn the important differences between the different diagnoses in their default and temporal networks. (And conversely, the algorithm learned to ignore between-individual differences that were irrelevant to classification.) To test the resulting classification algorithm, they calculated how frequently it correctly identified the participants left out of that training. As illustrated in Figure 19.2, the average sensitivity of the models was 90%, whereas the average specificity was 95%. Among the different diagnostic groups, controls were classified correctly 95% of the time and bipolar patients were classified correctly 83% of the time, with schizophrenia patients in between. Interestingly, the inclusion of more ICA networks did not appreciably increase sensitivity or specificity, and indeed risked overfitting the data and thereby losing the ability to generalize to the untrained cases, much less cases from other scanners.
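
The leave-one-out logic and the sensitivity/specificity bookkeeping can be sketched as follows. The two "network" features stand in for summaries of the default mode and temporal lobe component maps, and the group separations are synthetic; this is not Calhoun's actual classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
groups = {"control": 26, "schizophrenia": 21, "bipolar": 14}
offsets = {"control": 0.0, "schizophrenia": 1.0, "bipolar": 2.0}  # assumed separation

X = np.vstack([rng.normal(offsets[g], 1.0, (n, 2)) for g, n in groups.items()])
y = np.repeat(np.arange(3), list(groups.values()))

pred = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    pred[test] = LogisticRegression().fit(X[train], y[train]).predict(X[test])

for label, name in enumerate(groups):
    tp = np.sum((pred == label) & (y == label))
    tn = np.sum((pred != label) & (y != label))
    print(f"{name:13s} sensitivity {tp / np.sum(y == label):.0%}, "
          f"specificity {tn / np.sum(y != label):.0%}")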

Fig. 19.2

Classification results from 21 chronic schizophrenia patients, 14 bipolar patients, and 26 healthy controls included in Calhoun et al. (2007). Each case is represented with a dot, with the gray scale indicating diagnosis. (a) Lower left shaded region shows the a priori decision region for controls versus noncontrols; (b) lower right shaded region shows the a priori decision region for schizophrenia versus nonschizophrenia patients; (c) upper right shaded region shows the decision region for bipolar disorder versus nonbipolar patients. Reprinted with the kind permission of the author and publisher.

It is useful to remind readers before going on that this level of sensitivity and specificity is at least on a par with clinical ratings. For example, Jakobsen and colleagues (2005) reported the agreement between the hospital record-based OPCRIT (operationalized criteria) diagnosis of schizophrenia and that of an experienced psychiatrist was 85%, with 93% sensitivity and 62% specificity. Comparing DSM-IV emergency room diagnoses to inpatient discharge diagnoses from Woo, Sevilla, and Obrocea (2006), there was 86% agreement for all patients eventually diagnosed with either schizophrenia or bipolar disorder, with 88% sensitivity and 85% specificity for the diagnosis of schizophrenia or schizoaffective disorder. Compared to these efforts to apply the same phenotypic framework across different settings, the classification algorithm based on brain activity alone appears to be rather promising.

It is also useful to reflect on the importance of the tasks used in these two studies: they were largely irrelevant. That is, the cognitive demands of the tasks were not the basis of the diagnostic information. While Calhoun's participants were doing an oddball task, it would seem in principle that they may as well have been counting monkeys in the jungle. This is quite surprising for someone steeped in clinical cognitive neuroscience. One might think that tasks that tapped the deficits associated with schizophrenia would be those most useful in making the diagnosis. But the implicit or explicit premise that the investigators began with is that schizophrenia is largely a disorder of connectivity; given enough machine power, functional data will reveal the nature of that disconnectivity in the absence of any particular task demand. This is important for many reasons, but two appear to be most directly relevant to the future of clinical assessment. First, this kind of approach ameliorates the problems and confounds that arise from matching the performance of patients and controls and from individual differences in performance. Second, schizophrenia may be an unusual disorder in this regard. Clinical assessment in domains other than psychosis, such as internalizing or externalizing pathology, may still require task-related activity rather than resting state activity to observe clinically relevant brain activity. That is, there may be no basal patterns of activity or connectivity that distinguish someone with generalized anxiety disorder or a gambling addiction from controls. Thus, although task demands and performance were not at issue in this case, they may be important considerations for understanding some forms of psychopathology.

Conclusions

What are we to make of these developments at the edge of science fiction? On the one hand, the evidence for reliability as traditionally understood is poor. The test-retest reliability of voxels within an independent component, for example, averages an unimpressive .49. On the other hand, there is something of the feel of a large locomotive coming through a tunnel. At this point we can only feel the rumble and the wind being pushed in front of it; but soon, perhaps, it may change the landscape of clinical assessment. The preliminary steps in this domain give rise to many questions. We will address four: the necessary next steps for neurodiagnosis to be validated; the potential role for neurodiagnosis in the clinic; the role of neurodiagnostic biomarkers in refining models of personality and diagnoses; and the shift toward theory-poor dust-bowl empiricism that this approach embraces.

First, there are a number of practical next steps that neurodiagnostics researchers need to address. The body of work to date is limited in scope: to studies that take place on individual scanners, to medicated patients, and to a small number of diagnoses. Watch for developments in this domain in the near future, with increasing numbers of medication-naive patients being compared to other psychiatric groups. The importance of task demands and performance will also have to be evaluated to determine whether some tasks provide activation or correlation maps that yield better or worse classifications. Task performance and the issue of medication will also play into concerns about the repeatability of classification of the same individuals over time. In this chapter we examined test-retest reliability on a voxel-wise basis. It remains to be seen how important individual voxels are compared to the larger patterns of activity and correlations that have so far proved successful in neurodiagnostics. Furthermore, the use of neurodiagnostics will be limited to a few locations in major university medical centers until researchers develop classification algorithms that perform well on any scanner, not just the one on which they were developed. Finally, the cost-effectiveness of neurodiagnostics will have to be addressed. How much incremental information can neurodiagnostics provide over and above traditional, less resource-intensive (and less intimidating) practices such as interviews and self-report scales?

The issue of cost-effectiveness is closely tied to whether a neurodiagnostic assessment will help in treatment planning. Knowing that the patient sitting in your office has abnormal brain connectivity may come as a great relief to both the patient and his or her family. But it is only a variant of the current “chemical imbalances” explanation. Here the hope for neurodiagnostics, with fMRI or any other modality, is to go beyond brain maps and what is currently available within the assessment armamentarium. That is, these data will be most helpful in providing predictive information, such as the likelihood of future (comorbid) diagnoses that might be prevented, the likelihood of responding to different medications, or other ancillary risk factors such as activation patterns linked to the likelihood of suicide. This remains pure science fiction at this point, beyond the grasp of the current body of work. However, such value-added information is likely to be necessary before neuroimaging becomes a common aspect of clinical assessment.

If the impact of neurodiagnostics and other forms of imaging assessment on day-to-day clinical practice is still some ways away, the impact of imaging on how we conceptualize personality, and psychiatric diagnosis in particular, is likely to be much closer to hand. Since time immemorial, the distinction between neurology and psychiatry has been the putative likelihood of a biological explanation for the former and a psychological explanation for the latter. While biological explanations for psychiatric phenomena have been growing since the introduction of chlorpromazine in the 1950s for the treatment of schizophrenia, there have never been definitive biomarkers indicating the presence or absence of a diagnosable condition. The power of biomarkers over the psyche should not be underestimated. While much of this excitement about biomarkers may be tangential to the actual treatment of mental disorders, such markers may be very helpful in the process of refining diagnostic categories, which may eventually have treatment implications. While the DSM-V criteria already under development are unlikely to include imaging findings as any component of diagnosis, future editions may feature more meaningful subtypes, algorithms for dealing with intermediate cases, or a means for understanding the dimensionality of psychopathology (perhaps collapsing Axis I and Axis II) based on neurodiagnostic information. That is, biomarkers may have a role in understanding dimensionality and the common origins of certain disorders (e.g., depression and anxiety) by providing biological principles around which to organize thinking in these areas.

Finally, it is instructive to reflect on the machine-driven approach to classification as a variant of the dust-bowl empiricism of mid-twentieth-century psychology. Like much item-driven personality assessment of the past, the watchword is again “Let the algorithms sort it out.” So, whereas much of clinical cognitive neuroscience is struggling to forge links between the basic cognitive and affective sciences and how individual differences in these mechanisms translate into psychopathology, this algorithm-based work represents something of a step back to the future. In this respect, the work of basic psychologists and neuroscientists is not as important as the work of signal-processing scholars and statisticians.

This is doubtless overstating the dichotomy between empirical and theory-driven science for dramatic effect. While the emphasis at this phase in the development of assessment and neurodiagnostics is definitely on the empirical side of the balance, this is an iterative process that will in turn give rise to an understanding of the meaning of the differences detected. This is because the data derived from neuroimaging have biological meaning beyond that used for the purpose of the diagnosis itself (in contrast, for instance, with some questions on the MMPI, which have no real purpose beyond obtaining a scaled score). There is a great reservoir of knowledge about the brain regions that differentiate cases and controls. This reservoir can be tapped to further understand the illness itself, which may, in turn, suggest new interventions.

It is still too early to say whether neurodiagnostics will revolutionize clinical assessment or be an atrocious waste of time and treasure. As we have shown, there are now the first buds of promise in this neglected approach to neuroimaging data. The development of the field has been slow because it required the marriage of modern marvels of physics—MRI, or in one case MEG—to computationally intensive algorithms with the capacity to sift through the immense amounts of data such machines produce. As with any newlyweds, there remains a great deal of work to do while this tech-heavy couple gets its house in order. But there is reason to pay attention. This may be the avenue from which comes the next generation of big ideas in personality and clinical assessment.

References

Anderson, A. W., Heptulla, R. A., Driesen, N., Flanagan, D., Goldberg, P. A., Jones, T. W., et al. (2006). Effects of hypoglycemia on human brain activation measured with fMRI. Magnetic Resonance Imaging, 24, 693–697.

Calhoun, V. D., Adali, T., Giuliani, N. R., Pekar, J. J., Kiehl, K. A., & Pearlson, G. D. (2005). Method for multimodal analysis of independent source differences in schizophrenia: Combining gray matter structural and auditory oddball functional data. Human Brain Mapping, 27(1), 47–62.

Calhoun, V. D., Maciejewski, P. K., Pearlson, G. D., & Kiehl, K. A. (2007). Temporal lobe and 'default' hemodynamic brain modes discriminate between schizophrenia and bipolar disorder. Human Brain Mapping, 29(11), 1265–1275.

Georgopoulos, A. P., Karageorgiou, E., Leuthold, A. C., Lewis, S. M., Lynch, J. K., Alonso, A. A., et al. (2007). Synchronous neural interactions assessed by magnetoencephalography: A functional biomarker for brain disorders. Journal of Neural Engineering, 4, 349.

Heeger, D. J., & Ress, D. (2002). What does fMRI tell us about neuronal activity? Nature Reviews Neuroscience, 3, 142–151.

Jakobsen, K. D., Frederiksen, J. N., Hansen, T., Jansson, L. B., Parnas, J., & Werge, T. (2005). Reliability of clinical ICD-10 schizophrenia diagnoses. Nordic Journal of Psychiatry, 59, 209–212.

Kozel, F. A., Johnson, K. A., Mu, Q., Grenesko, E. L., Laken, S. J., & George, M. S. (2005). Detecting deception using functional magnetic resonance imaging. Biological Psychiatry, 58, 605–613.

Langleben, D. D., Loughead, J. W., Bilker, W. B., Ruparel, K., Childress, A. R., Busch, S. I., et al. (2005). Telling truth from lie in individual subjects with fast event-related fMRI. Human Brain Mapping, 26, 262–272.

Laurienti, P. J., Field, A. S., Burdette, J. H., Maldjian, J. A., Yen, Y.-F., & Moody, D. M. (2002). Dietary caffeine consumption modulates fMRI measures. NeuroImage, 17, 751–757.

Lee, T. M. C., Liu, H.-L., Tan, L.-H., Chan, C. C. H., Mahankali, S., Feng, C.-M., et al. (2002). Lie detection by functional magnetic resonance imaging. Human Brain Mapping, 15, 157–164.

Lim, K. O., Camchong, J., Bell, C. J., Fried, P., Mueller, B. A., & MacDonald, A. W., III. (2007). Consistency of the resting state network within individual healthy participants across time. Paper presented at the 46th Annual Meeting of the American College of Neuropsychopharmacology.

Liston, A. D., Lund, T. E., Salek-Haddadi, A., Hamandi, K., Friston, K. J., & Lemieux, L. (2006). Modelling cardiac signal as a confound in EEG-fMRI and its application in focal epilepsy studies. NeuroImage, 30, 827–834.

Liu, T. T., Behzadi, Y., Restom, K., Uludag, K., Lu, K., Buracas, G. T., et al. (2004). Caffeine alters the temporal dynamics of the visual BOLD response. NeuroImage, 23, 1402–1413.

Logothetis, N. K., Pauls, J., Augath, M., Trinath, T., & Oeltermann, A. (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature, 412(6843), 150–157.

MacDonald, A. W., III, Carter, C. S., Kerns, J. G., Ursu, S., Barch, D. M., Holmes, A., et al. (2005). Specificity of prefrontal dysfunction and context processing deficits to schizophrenia in a never-medicated first-episode sample. American Journal of Psychiatry, 162, 475–484.

Manoach, D. S., Halpern, E. F., Kramer, T. S., Chang, Y., Goff, D. C., Rauch, S. L., et al. (2001). Test-retest reliability of a functional MRI working memory paradigm in normal and schizophrenic subjects. American Journal of Psychiatry, 158, 955–958.

McGonigle, D. J., Howseman, A. M., Athwal, B. S., Friston, K. J., Frackowiak, R. S. J., & Holmes, A. P. (2000). Variability in fMRI: An examination of intersession differences. NeuroImage, 11, 708–734.

Mintun, M., Lundstrom, B., Snyder, A. Z., Vlassenko, A. G., Shulman, G. L., & Raichle, M. E. (2001). Blood flow and oxygen delivery to human brain during functional activity: Theoretical modeling and experimental data. Proceedings of the National Academy of Sciences of the United States of America, 98, 6859–6864.

Seifritz, E., Bilecen, D., Hanggi, D., Haselhorst, R., Radu, E. W., Wetzel, S., et al. (2000). Effect of ethanol on BOLD response to acoustic stimulation: Implications for neuropharmacological fMRI. Psychiatry Research: Neuroimaging, 99, 1–13.

Shmueli, K., van Gelderen, P., de Zwart, J. A., Horovitz, S. G., Fukunaga, M., Jansma, J. M., et al. (2007). Low-frequency fluctuations in the cardiac rate as a source of variance in the resting-state fMRI BOLD signal. NeuroImage, 38, 306–320.

Smith, S. M., Beckmann, C. F., Ramnani, N., Woolrich, M. W., Bannister, P. R., Jenkinson, M., et al. (2005). Variability in fMRI: A re-examination of intersession differences. Human Brain Mapping, 24, 248–257.

Specht, K., Willmes, K., Shah, N. J., & Jancke, L. (2003). Assessment of reliability in functional imaging studies. Journal of Magnetic Resonance Imaging, 17, 463–471.

Thiel, C. M., & Fink, G. R. (2007). Visual and auditory alertness: Modality-specific and supramodal neural mechanisms and their modulation by nicotine. Journal of Neurophysiology, 97, 2758–2768.

Thomason, M. E., Burrows, B. E., Gabrieli, J. D. E., & Glover, G. H. (2005). Breath holding reveals differences in fMRI BOLD signal in children and adults. NeuroImage, 25, 824–837.

Woo, K. P., Sevilla, C. C., & Obrocea, G. V. (2006). Factors influencing the stability of psychiatric diagnoses in the emergency setting: Review of 934 consecutive inpatient admissions. General Hospital Psychiatry, 28, 434–436.
