The purpose of preprocessing is to reduce, as much as possible, the impact of head movement on the fMRI signal.
The purpose of doing preprocessing separately from your statistical analysis is that it saves time, by separating a one-time step from the analysis portion of the study, which is likely to be done and redone as you refine your post-hoc (of course!) analyses.
The single most important thing that happens in pre-processing is the spatial smoothing (1.9.1), a decision with far-reaching implications for your findings.
Open FSL and select FEAT to begin.
Step 1: Analysis level and Stats Buttons (at very top of GUI)
Open FEAT and locate the top left button.
Choosing an analysis level
This button, at the top left, allows you to specify whether you are doing a) an individual analysis (called 'first-level') or b) a within-subject or between-subject (called 'higher-level') analysis. Because group analyses can be within-subject (e.g., comparing a subject's response to a Stroop task pre- and post- some condition, like receiving a medication or getting sad) or between-subjects (e.g., comparing all subjects on a Stroop task), this tab calls group analyses 'higher level' rather than 'group'. For an individual subject's single run on a task, select 'First-level analysis.'
Choosing an analysis stage
This second button, to the right of the first, specifies whether you are pre-processing, applying regressors to your matrix, or determining cluster size. Depending on what you select, different tabs will be available in the GUI. You can play around with various settings to see how this works.
1. Pre-stats = pre-processing. You don't specify regressors or obtain contrasts here, you just do motion correction and spatial smoothing. As explained above, by keeping this stage separate from stats, you save time in the long run by pre-processing your data only once, with the almost certain knowledge that you will try several different regressor combinations to model your data.
2. Stats = applying regressors to your model, and generating contrasts. This is what you are most interested in.
3. Post-stats = specifying what a) voxel intensity and b) cluster size you are interested in. In general you will be minimally specific in FEAT, and then manipulate these variables in FSLView, explained several chapters from now.
Since you are doing pre-processing at this step, choose ‘pre-stats’ only. The stats tab should immediately grey out.
Step 2: Misc Tab
Balloon Help: Mostly annoying, but sometimes helpful. If ‘Balloon help’ is selected, holding the cursor over a GUI button displays a green information box containing a conveniently unintelligible explanation of the button’s function.
Featwatcher: When this is selected (yellow check), once you've pressed Go a window will pop up that will inform you of a) any errors that occur, b) progress, and c) when the whole thing is done. Please note you only need to keep the first window open; each time FEAT progresses to a new analysis (something that happens at the individual run level, but not during preprocessing) a new window opens, which you can close immediately to reduce clutter.
Delay: Normally you don’t want to delay analysis (hours=0), but if you will be processing several scans at once, you may want to stagger them using this function. We generally allow six to run at any given time.
Brain/background threshold: At the preprocessing level, choose 10%. Some people (e.g., Grinband) believe that, beyond this initial thresholding at the preprocessing stage, brains should not be thresholded further.
Step 3: Data Tab
First-level analysis & Stats Tab buttons (at the top, seen on all screens)
First-level analysis. Second level means groups. This is one person = first level. The reason it isn’t called ‘individual’ and ‘group’ is that within-subject designs are essentially group analyses on individuals.
Pre-stats only. The benefit of this is that it saves you time in the future, if you ever analyze your first-level stats differently. At the individual level you will run stats + post-stats.
Select 4-D data tab
Here you are going to select the .hdr.gz header file (e.g., for run 1, r1.hdr.gz) that describes the .img file (of the same name) that you are going to analyze. This should be kept in the scan folder under the subject that you are interested in, e.g., scan 1 = s1 folder. If you've done 3 runs, you should have 3 such files: r1.hdr.gz, r2.hdr.gz, r3.hdr.gz.
It may be that yours is not zipped, in which case it won't end in .gz.
Once you press OK, the ‘total volumes’ and the ‘TR’ boxes should be filled in correctly. This will be discussed below, but if they aren’t, you have a problem and need to stop, think, and tinker your way towards happiness.
Output Directory/file name
You want to call this file 'preproc' followed by 1, 2, 3, specifying which scan is involved; e.g., for run 1, call it preproc1. The directory into which preproc1 (or 2, or 3, etc.) is deposited should be s1, the one containing r1, your original data, or s2 containing r2, etc. (easy, huh?)
Total Volumes: This is filled in automatically. You should know how many volumes the scan should have. For example, if you acquired 100 volumes, your number is 100! This number should appear once you have selected your 4D data. PLEASE BE FOREWARNED that if your expected number does not appear here, you are facing a serious problem that needs to be resolved before you move forward. [See box on AVWsize]
Deleting volumes: You always need to delete 6 seconds' worth – 2 volumes if TR = 3, 3 volumes if TR = 2. This is the industry standard. Why do this? Practically, this is because the first 6 seconds of your experiment were – if you designed it right – fixation, as the subject waited for things to begin. Conceptually, the reason you designed things this way can be seen by opening any functional scan's .hdr file before deleting the first volumes. Look at the difference between the first 3 (if TR = 2) or 2 (if TR = 3) acquisitions and the rest. See how much variation there is in signal early on? This is a product of the machine's magnetic field first lining up the protons in the water molecules in your head, before they have reached a steady state. This data is useless to you.
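The 6-second rule is simple arithmetic, but it is worth sanity-checking in the terminal before you type the number into the GUI. A minimal sketch (the TR value here is a hypothetical example; use your own scan's TR):

```shell
# Volumes to delete = 6 seconds of dummy time divided by the TR.
# TR=2 is a hypothetical example; substitute your own scan's TR.
TR=2
echo $((6 / TR))    # prints 3 (with TR=3 it would print 2)
```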
TR: This should load automatically. Check your plan sheet to make sure it's right!
High pass filter cutoff:
Set to 100 s most of the time. Consult balloon help to learn more about the theory behind this.
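The basic idea behind the 100 s default: the cutoff must be comfortably longer than your task's repetition period, or the filter will remove your signal along with the scanner drift. A quick check, using a hypothetical block design:

```shell
# Hypothetical block design: 30 s of task, 30 s of rest, repeating.
# The high-pass cutoff should exceed this period so task signal survives.
ON=30; OFF=30
echo $((ON + OFF))    # task period in seconds: 60, safely below a 100 s cutoff
```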
AVWsize, split and merge: ensuring proper file size!
The great catastrophe of running an analysis on the wrong data matrix – akin to trying to squeeze blood from a stone – can be avoided very simply: check that your matrix contains the correct number of volumes, and trim it if it does not.
1. Know how many volumes you expect your preprocessed data to have. Then check. You can check by:
a. In a Unix terminal, in the folder in which the preprocessed data resides, typing 'avwsize preproc1.feat/filtered_func_data'
b. Loading the file into the FEAT GUI – it will automatically tell you how many volumes there are.
2. Is this the size you want? If so, bueno. But if it is too long, you may have to trim it. For example, someone named Steve may have entered on the day of the scan that you were going to acquire 130 volumes instead of 103, leaving you with over 25 extra volumes… in which case, to avoid dampening your results with all that noise at the tail end, you will need to remove them from the analysis.
a. Let’s say your file is named preproc1.feat, locate the r1 file on which it is based. It should be in the same folder.
b. Make a temporary folder in which to manipulate it – ‘mkdir r1temp’
c. Move it to that folder (after checking that the folder exists!) – 'mv r1.* r1temp'. DON'T COPY IT! Move it – so when you bring r1 back, it has a hole to fill.
d. Unzip it: 'gunzip r1.*'.
i. NOTE that you won't see any evidence that unzipping has occurred – except that avwsplit (the next step) won't work otherwise.
e. ‘avwsplit r1’ splits it. You will see a little ticker in the terminal window telling you as each volume is pared off the matrix. Now each volume is separate.
i. NOTE that the original r1 is kept, and the split-off volumes are copies. Remember this when you name the merged new result!
f. Remove the volumes you don't want, e.g., 'rm vol011?.*'
i. NOTE that you can also remove volumes at the front should you want to.
ii. Use the up-arrow key to quickly recall and edit the remove command. Note that ? matches a single character and * matches any number of characters, to make this faster.
g. Merge what’s left. Type avwmerge [enter] for usage. But you will probably end up using this command:
avwmerge -t r1new vol0???.img
PLEASE NOTE: It will be a catastrophe if you type 'vol0???.*' instead of 'vol0???.img'. Why? The * will concatenate the .hdr files in with the .img files – catastrophe.
PLEASE NOTE: you MUST call it r1new, not r1. You don't want to obliterate the original r1.
h. Now copy these new r1 files to the directory you removed r1 from: 'cp r1new.* ../'
i. Then move up to that directory and rename them to take r1's place, e.g., 'mv r1new.img.gz r1.img.gz' (do the same for the .hdr file, and drop the .gz if your files are unzipped).
j. VOILA! You have replaced your bad old r1 with a good new r1.
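Before running the removal step on real data, it can be worth rehearsing the wildcard logic on dummy files, since a wrong glob here deletes the wrong volumes. The sketch below mimics avwsplit's output naming (vol0000, vol0001, …) with empty files; the counts (130 acquired, keep the first 103) are the hypothetical example from above:

```shell
# Rehearse the volume-removal globs on dummy files before touching real data.
# 130 volumes and the keep-count of 103 are hypothetical examples;
# avwsplit names its outputs vol0000, vol0001, and so on.
mkdir -p r1temp && cd r1temp
for i in $(seq 0 129); do touch $(printf 'vol%04d.img vol%04d.hdr' "$i" "$i"); done
# Drop vol0103 ... vol0129 (everything past the 103 volumes you want to keep):
rm vol010[3-9].* vol011?.* vol012?.*
ls vol0???.img | wc -l    # should report 103
cd .. && rm -r r1temp     # clean up the rehearsal folder
```

Once the globs behave as expected, run the real sequence (gunzip, avwsplit, rm, then 'avwmerge -t r1new vol0???.img') exactly as in the lettered steps above.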
Step 4: Pre-Stats Tab
Slice timing correction
= Interleaved (0, 2, 4 … 1, 3, 5)
This is done because the scanner will take images from the bottom up, but will skip every other slice and make two passes through the entire brain. Now we compile these slices in the right anatomical order. Note that if you have a spiral sequence this GUI option may not apply; consult your lab director about how to set this tab.
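The interleaved order in the GUI label (0, 2, 4 … 1, 3, 5) is easy to reproduce for yourself. For a hypothetical 8-slice volume:

```shell
# Interleaved acquisition order for a hypothetical 8-slice volume:
# even-numbered slices bottom-up first, then the odd-numbered slices.
NSLICES=8
{ seq 0 2 $((NSLICES - 1)); seq 1 2 $((NSLICES - 1)); } | tr '\n' ' '; echo
# prints: 0 2 4 6 1 3 5 7
```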
BET Brain extraction
= yellow (on). PLEASE make a conceptual note: your highres image is of course already brain extracted (you've done that elsewhere). Your functional image has, to start with, very little skull signal, because the BOLD change doesn't really affect the skull and scalp. Nevertheless, what you are extracting here is the skull from the functional image, as well as randomly activated air voxels – the notorious ESPs (extra-skullar perceptions) that dog so many studies. That's why it may seem like 'hey, I already extracted the brain' – but you haven't, not from the functional image.
Spatial smoothing
WARNING!! THIS IS A MAJOR DECISION! DISCUSS THIS THOUGHTFULLY WITH YOUR SUPERVISOR AND COLLEAGUES! THERE IS NO DEFAULT – YOU HAVE TO THINK.
Spatial smoothing does to voxel intensities what Robin Hood did to the rich: it redistributes the ‘wealth’ to make a more uniform picture of brain activation. From a biological perspective, spatial smoothing is necessary because adjacent voxels in a single tissue type may have markedly different values when, biologically, their values should be relatively close. Spatial smoothing essentially ‘redistributes’ every voxel’s activity to some of its neighbors, with near neighbors getting significant amounts and distant ones getting very little.
The most significant decision that the investigator must make is what size 'smoothing kernel' to use. The kernel size is the full width at half maximum (FWHM) of a Gaussian, as explained below – large kernels imply wide Gaussian distributions and redistribute a great deal of wealth over a large number of neighboring voxels, while small kernels imply narrow Gaussian distributions and redistribute less over fewer.
In a general way, you want to be very careful about choosing too large a kernel. In particular, if your kernel size is larger than your structure of interest (e.g., a 20mm kernel applied to the seventh cranial nerve nucleus, which is only ~5mm wide), any activations in that area on your final image may in fact be caused by activations in tissue outside of CN VII! As a rule of thumb, always use a kernel smaller than the smallest structure you anticipate making claims about in your discussion. E.g., if you want to talk about the amygdala, don't use a 20mm kernel.
A prudent, conservative kernel is 5mm. It is very common for people to use kernels as large as 12mm.
How spatial smoothing works:
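The kernel size you type into the GUI is the full width at half maximum (FWHM) of the smoothing Gaussian, in millimetres. The corresponding standard deviation is FWHM / (2 · √(2 · ln 2)) ≈ FWHM / 2.3548, so a hypothetical 5mm kernel is a Gaussian with sigma of roughly 2.12mm:

```shell
# Convert a smoothing kernel's FWHM (mm) to the Gaussian's sigma (mm).
# sigma = FWHM / (2 * sqrt(2 * ln 2)), i.e. roughly FWHM / 2.3548.
FWHM=5    # hypothetical 5mm kernel
awk -v f="$FWHM" 'BEGIN { printf "%.2f\n", f / 2.3548 }'   # prints 2.12
```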
MELODIC ICA
= NOT yellow, i.e., unchecked.
MELODIC ICA is, in a sense, the opposite of how science should work. It looks for patterns, and then you see if anything it finds matches something that makes sense to you. You really should avoid temptation and just predict activity the old-fashioned way, unless you are becoming an advanced methodologist.
Step 5: Stats Tab
De nada. There is nothing to do in stats at the preprocessing stage. That’s the whole point of pre-processing: you are assuming that since you may use different regressor combinations down the line, preprocessing first, without running any stats, will save you some serious time.
Step 6: Post-Stats Tab
De nada. Without stats there can be no post-stats. You will do this stuff at the individual level, next.
Step 7: Registration
When you pre-process your data you do not register it. No yellow checks here. Preprocessing is all about internal consistency, not mapping anything onto any external references. You’ll do that at the individual level.
Double-check everything – particularly that your results are going where you want them – and then click go.
So, the whole point of this was to deal with head movement. Want to see your results? Click on the FEAT report in the Featwatcher window that came up. FSLView will be reviewed later.
FAQs and How-To
View a design file in the terminal window (e.g., if the GUI shows something funny and you want to check the code): enter the .feat directory of interest, then type 'pico design.fsf' and review the text. Directions for how to operate in pico are given at the bottom of the screen, including how to get back to the terminal.
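If you only want a few settings rather than the whole file, grep works too. The keys below (tr, npts, ndelete, smooth) are standard design.fsf variable names, but verify them against your own file. The sketch writes a two-line stand-in design.fsf so it is self-contained; in real use, just run the grep inside your .feat directory:

```shell
# Pull a few key settings out of a design file without opening an editor.
# The stand-in file below makes the example self-contained; on real data,
# run only the grep, from inside the .feat directory.
printf 'set fmri(tr) 2.0\nset fmri(npts) 100\n' > design.fsf
grep -E 'set fmri\((tr|npts|ndelete|smooth)\)' design.fsf   # prints both lines
rm design.fsf
```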
Blur Problems: Can the subject be included?
1: Checking for sinus artifact problems
Sinuses below the temporal cortex (above the ears) and below the orbitofrontal cortex (behind the nose) are one of the great tragedies in the neuroimager’s professional life, as they obliterate two regions fundamentally involved in reward-processing, the engine of emotional life and the centerpiece of psychiatric research.
Intellectual honesty requires that, before activation images indicating activity in one of these regions are published, the researcher ensure that each individual subject's functional scan is not plagued by sinus artifact. To do this, follow the directions in 'Sources of Blur.'
2: Checking for scanner drift
You can’t. This is not a problem because it is filtered out using a temporal filter; make sure you are satisfied with the one you used.
3: Checking for head motion
Deciding if head motion is too severe requires an eyeball test; there are no established standards for determining whether head motion, objectively, requires a subject to be thrown out of the study. This decision depends on several factors including:
1. How large the head movement was
2. When it occurred
3. Whether the subject returned to neutral position
4. How abrupt it was (gradual is better than abrupt).