
Search Results


  • Step 4: Developing a Coding Framework

    This step is EXTREMELY important when conducting a meta-analysis. The coding framework determines what data you will and will not collect from each study that you read. It is always better to collect more information (rather than less) from each study, to prevent the potential tragedy of completing your data collection only to discover that you have to re-read every study and extract some other piece of information you previously missed. To begin creating the coding framework, create a spreadsheet (Google Sheets works well for this so multiple people can code simultaneously). Click HERE for tips on making your spreadsheet the best it can be. In this spreadsheet, the columns will correspond to your moderators--anything that could potentially affect your variable of interest--and each row will become a study (also known as an effect). It is helpful to enter your column titles in an SPSS-friendly way from the start: replace every potential space between words with an underscore. Some essential columns that every coding framework should include are:
    1. LastName_firstauthor
    2. year_published
    3. sample_size
    4. Number_effects (the importance of this column will be further explained in Step 5-Code)
    5. A column for your variable of interest, as well as one for its standard deviation and one for its standard error.
    If your variable of interest is a pre-/post- measurement or changes over time, you will need to create the columns included in #5 for both the before and after values. The remainder of your columns should be created based on your variable of interest and the potential factors that could moderate it. For example, if you are studying something involving humans, BMI, age, gender, race, and disease/health status are all essential factors to add to the coding framework, with a column for standard deviation and standard error when applicable.
    Other questions to ask yourself and potentially code for:
    * Is some sort of treatment being administered? Does this treatment vary between studies? What information is necessary to describe the treatment?
    * Are characteristics changing over time? Do we need a BMI pre- and a BMI post- column?
    * Is the variable of interest measured in multiple different ways? Are there disagreements on methodology? What information is necessary to distinguish one measurement technique from another?
    * What is the study design? If you are only looking at randomized controls, you will need columns describing the control group.
    * Are there potential psychological or sociological factors to take into consideration regarding your variable of interest?
    It is helpful to create a column for both the raw data and a code. Codes are numbers greater than zero arbitrarily assigned to various characteristics that will become important later on in the analysis phase. For example, you could create a code for BMI in which normal weight = 1, overweight = 2, obese = 3, and not reported = 4. The thing to keep in mind when assigning codes is that everyone working on your meta-analysis MUST use the numbers in the code to represent the same values. Once you have established your coding framework and feel confident that you have included columns for any/all potential moderators, you may BEGIN CODING!
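The BMI code described above can be applied mechanically once your team agrees on the rules. Below is an illustrative sketch in Python; the column names and BMI cutoffs are assumptions for the example, not a required part of any framework:

```python
# Illustrative only: turning raw BMI values into the codes described above
# (normal weight = 1, overweight = 2, obese = 3, not reported = 4).
# The cutoffs (25 and 30 kg/m^2) are assumed; agree on exact rules first.

def code_bmi(raw_bmi):
    """Return the agreed-upon code for a raw mean BMI value."""
    if raw_bmi is None:   # not reported
        return 4
    if raw_bmi < 25:      # normal weight
        return 1
    if raw_bmi < 30:      # overweight
        return 2
    return 3              # obese

# Each row is one effect; column names mirror the framework above.
studies = [
    {"LastName_firstauthor": "Smith", "year_published": 2015, "BMI": 23.4},
    {"LastName_firstauthor": "Jones", "year_published": 2018, "BMI": 31.0},
    {"LastName_firstauthor": "Lee",   "year_published": 2020, "BMI": None},
]
for row in studies:
    row["BMI_code"] = code_bmi(row["BMI"])
```

Keeping both the raw column (BMI) and the coded column (BMI_code) preserves the original data in case the category boundaries ever change.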

  • Data Analysis - Opening the File

    Overview
    The purpose of data analysis is twofold: Statistical Parametric Mapping (SPM) levels 1 and 2. Level 1 consists of preprocessing the data and estimating beta values for the hemodynamic response functions (HRF) of individual participants, whilst level 2 is where the HRF beta values from multiple participants are compared.
    Opening a file
    To begin level 1, open “nirsLAB” from the desktop icon. After clicking the icon, a MATLAB window will open and start the program. With nirsLAB running, click “load data” as shown in the image. Navigate to the subject data by clicking [OS Disk (C:)], then opening the [nirX folder] and selecting [data folder]. Once in the data folder, the folders will be named by date. Pick the date when the trial was run, choose the appropriate participant file, and select the file ending in “config.txt”. This file should be located at the bottom of the subject’s folder and may have a name similar to the following: “NIRS-2016-05-27_002_config.txt”. (Image: configure text image) Once the config.txt file has been selected, the file will load and a message will be displayed at the bottom of the screen within the "Running Status Box" indicating that the file has loaded.

  • HRV Data Collection Steps

    ECG data collection using Physio16
    1. Cords are in the brown box in the cabinet under the fridge. Take one blue and one white cord and plug them into the 2 ports labeled “1” on the Physio16 box.
    2. Using BIOPAC electrode pads, apply the first cord to the right side of the chest under the collarbone and the second cord to the left side of the body on the rib cage.
    3. Open Net Station Acquisition and input the participant ID.
    4. On the right toolbar, go to the “Hardware Settings” tab. Under “PNS Set” select “Physio16 HRV ECG Set Up”, which will have all of the proper ECG settings already programmed.
    * Sampling rate should be at 250 samples/second.
    5. Hit the “On” button and check the quality of the data before recording.
    * Screen for any strange line noise, etc.
    6. After recording is complete, simply save the data into the corresponding participant folder.
    Chest strap data collection using Polar H10
    1. There are two chest strap sizes; choose whichever you think will fit.
    2. Wet and place the strap on the participant, ensuring that the device is snapped into the band.
    * Band should fall around the bottom of the sternum.
    * Recording/device detection does not occur until skin contact is made.
    3. To connect the watch to the H10, go to Settings > General settings > Pair and Sync > Pair Sensor or other device.
    4. Bring the watch close to the heart rate monitor, check to make sure it is the correct ID for the H10, and select the check mark.
    5. Exit the main menu, select Start training > swipe until you reach Walking > make sure the HR sensor has a blue circle to confirm it is paired properly, and select Walking to start the session.
    6. To end the session, press the knob once to pause, then hold for 3 seconds to stop.

  • Step 6: Analyze

    The analysis process may seem complicated at first, but do not despair! With practice you will soon become a pro at it. Before beginning the process of analysis, it is important to remind yourself about the all-important effect size. This effect size is the number that you will use to represent the findings from your meta-analysis. It indicates both the magnitude and the direction of your findings. There are 7 major components to analysis (click on a component for more information):
    1. Choose an effect size calculation
    2. Compute the effect size, SEM, W, and CI for each effect
    3. Add in a code column for each uncoded moderator
    4. Run MeanES on each moderator
    5. Graph your results
    6. Assign contrast weights
    7. Run the univariate analysis and the meta-regression
    Step 1: Choosing an effect size statistic
    When choosing an effect size statistic, the first aspect to consider is whether your data is standardized or unstandardized. The answer to this question is determined primarily by the method(s) of measurement each study uses to report the variable of interest. If all the data is reported using the same scale, then you can use the unstandardized formulas. However, if you need to compare data across scales (for example, if some studies report depression levels based on a therapist's diagnosis, and others use a self-reported questionnaire), your effect size calculation must be standardized to account for these differences. Once you have decided whether your data needs to be standardized or can remain unstandardized, you must choose which model best represents the effect sizes produced by your studies' designs. There are a total of seven options, but keep in mind that you must use the same effect size statistic for your entire data set. This page will discuss the two most common statistics for two-variable relationships. Pre-Post Contrasts (Mean Gain): This effect size statistic is selected if the variable of interest is primarily being compared across time to examine change.
    Studies will report the same variable, measured in the same way, at two or more time points. Studies that should be analyzed using the Mean-Gain model do not require a control group because the participants' baseline values function as their own control. Example: Does eating a high-carbohydrate meal affect FMD? In this case, the primary variable of interest is the participant's FMD before and after carbohydrate ingestion. The comparison occurs between the pre- and post-meal values. Group Contrasts (Mean Difference): Select this effect size statistic if your variable of interest is measured on two or more groups and compared across groups. The need to compare an experimental group to a control group is typically an indicator that the Mean Difference model should be used. This effect size statistic can even be used for studies that follow a specific variable across time IF the change in that variable for group 1 is being compared to the change in the same variable for group 2. Example: Does exercise decrease depression levels? In this case, the variable of interest is participants' depression levels, but in order to understand whether exercise has a beneficial effect on depression, the exercise group's results MUST be compared to the results of a control group. If your data does not fit either of these models, please consult Chapter 3 in Practical Meta-Analysis by Lipsey and Wilson for more information. Step 2: Computing the effect size and other numbers Whichever effect size statistic you end up choosing, there are specific formulas used in meta-analysis to calculate (1) the effect size, (2) the standard error, and (3) the inverse variance (w). You will need to calculate each of these numbers for each effect in your data set. The best way to do this is to enter the equations in Microsoft Excel and then copy-and-paste or drag the formula down the entire column. The equations are as follows: For more information on the variables in these equations, click HERE.
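As a concrete example of the Step 2 calculations, the standardized mean difference (Group Contrast) formulas from Lipsey and Wilson can be sketched in code. The same numbers can just as easily be computed in Excel as described above; this is only a sketch:

```python
import math

def mean_difference_es(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (group contrast) effect size,
    its standard error, and its inverse-variance weight, following
    the formulas in Lipsey & Wilson's Practical Meta-Analysis."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    es = (m1 - m2) / sd_pooled
    # Standard error of the standardized mean difference
    se = math.sqrt((n1 + n2) / (n1 * n2) + es**2 / (2 * (n1 + n2)))
    # Inverse-variance weight used by the macros later on
    w = 1 / se**2
    return es, se, w
```

For example, two groups of 20 with means 10 and 8 and SDs of 2 give an effect size of 1.0 with a standard error of about 0.335.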
    Step 3: Adding in codes for each moderator In order to successfully accomplish steps 4-7, it is important to carefully complete this step of adding in codes for each moderator. Some of your moderators may already be in code form thanks to the completion of the coding process. However, you must now assign codes for those moderators that were previously entered into your data set as raw data. For example, age, which you most likely entered into the data set as the mean of participants for each effect, can be coded into intervals of 10 years so that every effect with a mean age between 10 and 20 years gets coded as the same number, etc. As you read in the section on how to develop a coding scheme, the codes assigned to each category are arbitrary as long as your research team agrees on them. Essentially, these codes will be used in the following steps to divide your data and analyze it in SPSS by moderator category. It is a good idea to wait and not add these codes into your spreadsheet until you have imported your data into SPSS. To learn how you can easily create a column of codes in SPSS, click HERE and watch the "Contrast Weights and Codes" video. Step 4: Running MeanES MeanES is the first of the SPSS macros you will be using. Briefly, a macro is a saved block of code that automatically performs a specific function each time you run it. This specific macro will soon become your best friend in the analysis process. MeanES calculates the average effect size of all the selected effects each time you run it. You will be using the values from this macro to create graphs of your moderators by selecting all the effects with a specific characteristic, running MeanES, and recording the value the macro produces, before repeating the process with a new group of studies. It is important to record the confidence intervals, standard error, and K (number of effects) that the macro will produce as well.
    For example, using SPSS capabilities explained HERE, you will run MeanES on all the studies with normal weight participants, then select only those with overweight participants to run the macro on, and finally run it on the studies with obese participants. These results will tell you the differences (or lack thereof) in the effects of BMI on your variable of interest. The MeanES macro should first be run on all cases to get the overall effect size. Then, the macro should be run on selected cases to calculate the mean effect size for each category of each moderator. Examples of the output from the MeanES macro for all cases and selected cases, respectively, are shown below. The highlighted values are the numbers that you need. Step 5: Graphing your results Once you have the average effect sizes for your various moderators, you can begin graphing. This is an extremely rewarding phase of the meta-analysis process because you finally get to see a product from all your hours of reading and coding. The best tool to use for making quick and helpful graphs is Excel. Make sure that you clearly label each graph and its axes so as to avoid confusion later on. Once you are ready to create your final graphs, however, SigmaPlot is the way to go. Excel requires that you average your standard error across the categories in your graph, whereas SigmaPlot allows you to customize error bars. In addition, SigmaPlot has the capability to create forest plots, a meta-analysis-specific type of graph. Click HERE for tutorials on how to use SigmaPlot. Below is an example of a graph made using Excel or Google Sheets (contrast weights, which are discussed in the next step, are already included in this screenshot). Step 6: Assigning contrast weights In order to assign contrast weights, you must first complete your graphs.
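For intuition about what MeanES is computing, the core quantities it reports (mean effect size, standard error, 95% confidence interval, and k) come from an inverse-variance weighted average. The sketch below is a minimal fixed-effect version, not the macro itself, which also reports homogeneity statistics:

```python
import math

def mean_es(effect_sizes, weights):
    """Inverse-variance weighted mean effect size with its SE, 95% CI,
    and k (number of effects) -- a fixed-effect sketch of the core
    quantities the MeanES macro reports."""
    sum_w = sum(weights)
    es_bar = sum(w * es for es, w in zip(effect_sizes, weights)) / sum_w
    se = math.sqrt(1 / sum_w)                 # SE of the weighted mean
    ci = (es_bar - 1.96 * se, es_bar + 1.96 * se)
    return es_bar, se, ci, len(effect_sizes)
```

Running it on a subset of effects (e.g. only the normal-weight studies) mirrors selecting cases in SPSS before running the macro.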
    Then, you will need to decide which moderators to add into the overall meta-regression as factors that potentially explain a part of the variance found between effect sizes. Once you have compiled these graphs, you may start assigning contrast weights. These weights should be entered into your SPSS spreadsheet in a column next to their corresponding code column. The concept of contrast weights is a somewhat technical and confusing aspect of meta-analysis. However, the basics are fairly easy to understand. Essentially, these weights are necessary because they tell the meta-regression macro how to interpret the moderators and factor their influence into the overall model. One set of contrast weights should correspond to each of your moderators. In a set, the weights must sum to zero, and cannot be greater than 1 or less than -1. A contrast weight of zero essentially tells the meta-regression to ignore all effects with that particular weight. In addition, the sign of the contrast weight does not necessarily need to correspond with the sign of the effect size it is representing. For example, if the effect of carbohydrate ingestion on men versus women is -2.05 and -0.20 respectively, the contrast weight for women would be +1 and -1 for men because the weights are describing the two groups in relation to each other. Step 7: Running the meta-regression Running the meta-regression and the univariate analysis are the final steps in the analysis process. The macro used is the MetaReg macro. This macro is run in a similar way to the MeanES macro (click HERE for an explanation). However, they produce different levels of information. This macro takes the contrast weights into account. Therefore, you should put new variables in the spreadsheet for each moderator and the corresponding contrast weight for that study. If a study is not included in any classification for the moderator (i.e. does not have a contrast weight), make sure to leave that cell blank.
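The two rules above (the weights in a set must sum to zero, and each must lie between -1 and 1) are easy to sanity-check before entering the weights into SPSS. A small illustrative check, where blank spreadsheet cells are represented as None:

```python
def check_contrast_weights(weights):
    """Verify a set of contrast weights follows the rules above:
    they must sum to zero and each lie between -1 and 1.
    None stands for a blank cell (a study outside every category
    for this moderator) and is ignored."""
    used = [w for w in weights if w is not None]
    if abs(sum(used)) > 1e-9:      # must sum to zero
        return False
    return all(-1 <= w <= 1 for w in used)
```

For the sex example above, [+1, -1] for women versus men passes the check, while [2, -2] would not because weights cannot exceed 1 in magnitude.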
    Call this new variable variable_cw (such as bmi_cw if BMI was the moderator). When calling this macro, there are a few more variables than for the MeanES macro that need to be defined. IVS is either one "variable_cw" or a list of them; if you are running it on multiple moderators, separate the names with spaces. Another variable is the model. The model you should use is multilevel, so the variable should be equal to "ML". The Univariate Analysis macro gives you information about your individual moderators. The only value you need from this macro is the p value, which will tell you whether there is a significant difference between the groups of a specific moderator. For example, if p < 0.05 for sex, that means the group containing only men has a significantly different effect size from the group of women only, as well as from the mixed-sex group. These p values will eventually be entered into Table 2 in your publication. To get the p value for each moderator, run the MetaReg macro on each moderator's contrast weight variable separately. For example, to get the p value for age, IVS should be equal to age_cw only. Below is an example of the output for age. The Meta-Regression itself tells you which moderators account for a significant part of the variation among effect sizes. It is highly likely that you will find some studies with a significantly larger effect size than others. The beauty of a meta-analysis is that the meta-regression can help suggest potential causes for these larger or smaller effect sizes. You can also add something called an interaction into the overall meta-regression model. Interactions occur when the combination of two moderators explains more variability than either moderator alone. Interactions can be chosen based on theoretical data, or by looking at graphs created using MeanES. In order for the meta-regression to read the interaction, you must assign contrast weights to the various groups.
    For example, you could find that BMI and sex interact, meaning that obese men could have a significantly greater effect than normal weight men, and obese women may have a smaller effect than normal weight men but still a greater effect than normal weight women, etc. To run the entire meta-regression, change the IVS variable to be a list of all the moderator contrast weights. If your meta-analysis does not have a lot of studies, you might get an error. If that happens, it is probably because not enough cases overlap between certain moderators. Try taking some moderators out and trying different combinations while paying attention to "k", the number of cases being analyzed in the meta-regression. Below is an example of an output with the important values highlighted. NOTE: A moderator may be significant according to the meta-regression but not the univariate analysis, and vice versa. If this happens, do not despair!! Simply be prepared to offer some potential explanations for these findings and to look for interactions.
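At its core, the meta-regression fits a weighted regression of effect sizes on the contrast-weight variables. The sketch below is a rough single-moderator, fixed-effect illustration of that idea, not the MetaReg macro itself, which also reports standard errors, z and p values, and model-fit statistics:

```python
def weighted_meta_regression(es, x, w):
    """Weighted least squares of effect sizes (es) on one moderator's
    contrast weights (x), using inverse-variance weights (w).
    Returns the intercept and slope -- a sketch of the regression
    underlying the MetaReg macro for a single moderator."""
    xbar = sum(wi * xi for xi, wi in zip(x, w)) / sum(w)
    ybar = sum(wi * yi for yi, wi in zip(es, w)) / sum(w)
    b1 = (sum(wi * (xi - xbar) * (yi - ybar)
              for xi, yi, wi in zip(x, es, w))
          / sum(wi * (xi - xbar) ** 2 for xi, wi in zip(x, w)))
    b0 = ybar - b1 * xbar
    return b0, b1
```

A nonzero slope here corresponds to the moderator explaining part of the variation among effect sizes; in the macro, its p value tells you whether that contribution is significant.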

  • Data Analysis - Prepping the File

    To begin prepping the file for level 2, click on the “check” option in the “data processing” box (image: check raw data). This will open the “Raw data checking” window seen below. In this window, check the “Gain Setting and CV” boxes, enter the desired settings, and click “check”. This will label certain channels as good or bad depending on the values determined for Gain Setting and CV (%); the number of good and bad channels will be displayed in boxes within this window. After the channels have been checked, select “save and close”. Once the raw data has been checked, click on “set markers” in the “Experimental data and conditions” window. This will prompt you to load the appropriate event marker, which is classified as a .evt file. Once the .evt file has been opened, you will be asked to input the stimulus duration in the appropriate box. The duration will be determined by the trial performed (e.g. motor tapping trial = 30 seconds). With the duration selected, click “save and close”. There will be a text update in the Running status box indicating that the event file has been edited and saved.

  • Data Analysis - Preprocessing the File

    The purpose of the preprocessing stage is to remove artifacts or unwanted segments from the data, as well as apply a filter. To begin preprocessing, click the “truncate” icon in the “Data Processing” box. This option will open a window which allows you to remove specific sections of the data as you see fit. This can be done by selecting the portion of the data you wish to remove (either by mouse or by entering the time points) and clicking “cut”. Once finished, click “save and close”. The “Remove discontinuities” option allows discontinuities in the data set (e.g. periods of empty data resulting from removing sections via truncating) to be removed; this is done by clicking the “remove” option. The use of these options is determined by how the data collection process went, and they may not always be needed.

  • How to Export an ECG Channel to a .txt File

    Steps
    1. Open EEGLAB in MATLAB (type “eeglab” in MATLAB and hit enter)
    2. In EEGLAB, go to “file” → “import” → “using EEGLAB functions and plugins” → “import Philips .mff file”
    3. Select the .mff file of the ECG recording you want to import
    4. For trigger/event type, select “code”
    5. Go to “plot” → “channel data (scroll)” and check which channel number corresponds to the ECG channel (usually channel 66). You should be looking for a QRS wave signal
    6. Go to “edit” → “select data” and enter the ECG channel number into “channel range”
    7. Save the new data set
    8. Confirm the right channels were removed by navigating to “plot” → “channel data (scroll)”
    9. Go to “file” → “export” → “data and ica activity to text file”
    10. Make sure the only boxes checked are “transpose matrix (elec → cols)” and “use comma separator (csv) instead of tabs”. All other boxes should be unchecked.
    11. Enter the file name with a proper title (should end in .txt) and select the proper folder destination
    12. Wait until the file is finished writing before closing MATLAB
    [Insert Video]

  • How to Filter Data and Correct Artifacts in AcqKnowledge

    Steps for an inverted QRS wave:
    1. Open AcqKnowledge
    2. Select “analyze only” and then click “ok” on the pop-up menu
    3. Change the file type to txt file, select your ECG file, and then click “open”
    4. Enter your sampling rate in milliseconds/sample for the sampling rate interval (250 Hz corresponds to 4 ms/sample) and click “ok”
    5. Go to “transform” → “expression”, enter “-CH0” into the box, and click “ok”
    6. Minimize CH0 by clicking on the drop-down arrow in the left-hand corner of the channel
    7. Click on CH1 and resize the channel by entering ctrl+y. Relabel the left side of the channel with “ECG” and the right-hand side with “mV”
    8. To filter the data, go to “transform” → “digital filters” → “comb band stop” and click “ok”
    Steps for a normal QRS wave:
    1. Open AcqKnowledge
    2. Select “analyze only” and then click “ok” on the pop-up menu
    3. Change the file type to txt file, select your ECG file, and then click “open”
    4. Enter your sampling rate in milliseconds/sample for the sampling rate interval (250 Hz corresponds to 4 ms/sample) and click “ok”
    5. Relabel the left side of the channel with “ECG” and the right-hand side with “mV”
    6. To filter the data, go to “transform” → “digital filters” → “comb band stop” and click “ok”
    After filtering:
    1. Resize the data set by going to “auto scale” and then entering ctrl+y
    2. Scan for artifacts, zooming in on them using the zoom tool
    3. To correct a missed QRS peak, take the I-beam tool and highlight from one end of the peak to the next. Go to “transform” → “waveform math”. Make sure that Source 1 = the channel you are on, Addition is selected, Source 2 = K (a constant), and Destination = the channel you are on. Enter a constant to add to the waveform and click “ok”
    4. To correct noise that could interfere with QRS detection, highlight from the end of one QRS peak to the start of the next QRS peak using the I-beam tool. Go to “transform” → “math functions” → “connect endpoints”
    5. Repeat this process for any artifacts. Be sure to scan the entire waveform for artifacts before exporting.
    Remove any extraneous channels (otherwise these will be exported with the corrected channel) by selecting the channel you want to remove and going to “edit” → “remove waveform”. Then, once you are ready to export, go to “file” → “save as” and select txt file from the file type menu. Name your file, then click “save” and then “ok”.
    Tutorial - Part 1
    Tutorial - Part 2
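Conceptually, the “connect endpoints” transform replaces the highlighted artifact with a straight line joining its two ends. A rough Python sketch of that idea (AcqKnowledge's exact endpoint handling may differ):

```python
def connect_endpoints(signal, start, end):
    """Replace the samples strictly between indices start and end with
    a straight line joining signal[start] and signal[end] -- roughly
    what AcqKnowledge's "connect endpoints" does over a highlighted
    artifact segment."""
    out = list(signal)
    span = end - start
    for i in range(1, span):
        frac = i / span  # fraction of the way from start to end
        out[start + i] = (1 - frac) * signal[start] + frac * signal[end]
    return out
```

For example, a noisy segment [0.0, 9.0, 9.0, 3.0] with endpoints at indices 0 and 3 is flattened to the line [0.0, 1.0, 2.0, 3.0].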

  • Step 7: Drawing Conclusions

    Once you have completed the analysis of your data, you can begin drawing conclusions and summarizing your findings in a presentable (and publishable) manner. There are several key components to include once you start drafting a manuscript:
    Table 1: This table contains all of your effects, listed in ascending order of effect size, along with the number of participants, the 95% confidence interval, and a few other key components that characterize each study.
    Figure 1: This is a flowchart of the studies you included, beginning with the total number of articles found and diagramming the exclusion process.
    Table 2: This table includes each of the moderators included in your model, broken down into their respective groups. Each group is listed along with its number of effects, mean effect size, 95% confidence interval, and contrast weight, as well as the between-groups p value attained from the univariate macro.
    Forest Plot: This will most likely be the only graph included in your manuscript (all others will be placed in supplemental materials). You can create forest plots in SigmaPlot. Visit the "How to use SigmaPlot" page for further direction.
    Discussion: The discussion section of your manuscript should include a paragraph for each of the significant moderators, as well as one for each of those that were not significant but should have been. These paragraphs should cite articles included in your meta-analysis that support your findings, as well as any outside research that may help explain the underlying mechanisms driving your findings.
    You can also create a poster to summarize the main findings from the meta-analysis (mostly used at conferences). The poster should include an abstract, the aims of the investigation, the methods, a results section and a moderator effect sizes section with graphs, conclusions, and any references. An example of a poster is below. REMEMBER: Meta-analysis is an iterative process.
    Thus, you may have completed your final analysis, but new questions may arise, or new articles may be published, at which point you will need to revisit the coding and analysis steps.

  • Data Analysis - Level 1

    Analyzing the Data
    There are two levels of analysis that are carried out when analyzing participant results: level 1 and level 2. The first level can be thought of as a preparation stage for individual data sets, while level 2 is where comparisons between participants take place.
    Conducting Level 1 (GLM analysis)
    To begin level 1 analysis of an individual participant’s data, select “SPM level 1” from the “Data Analysis” box. This will open the “Statistical Parametric Mapping” window. To begin GLM analysis, first select the desired hemoglobin data (oxyHb, deoxyHb, or totalHb) from the “Hemoglobin data” drop-down box. Each state will be analyzed individually; for this tutorial, oxyHb will be used as an example. Load in the oxyHb file by clicking: [load]—[NIRS-date_subject number_nirsInfo]—[NIRS-date_subjectname_detector_OxyHb]. The pathway to access this file can be seen in the image (Image: detector location). Once this file is selected, click “save”. At this point, a Help dialog box will appear notifying you that the file has been loaded. (Image: Help-loaded box). Once the file is loaded, click “set parameters” in the “GLM analysis” box (shown above). This will open the “Parameter Set up for GLM Analysis” window. Whilst in this window, specify the basis function (number 2 in model specification): select “hrf (with time and dispersion derivatives)”. Next, under number 3, select “nirsLAB condition file”. (Image: Parameter setup for GLM Analysis). With these specifications selected, click “confirm”. This will prompt you to save the file. Be sure that the file is saved in the folder associated with the correct participant. Once the file is saved, a statistical analysis design will be generated. This design shows the conditions of the test. In the image of a motor tapping test, the canonical, temporal, and dispersion components are shown (light to dark colors, respectively), with the vertical axis representing time and the horizontal axis representing event types.
    After confirming that this statistical analysis image matches your conditions, click “Estimate GLM coefficients” in the “GLM analysis” box. This will generate a Help dialog box notifying you that the estimated results have been saved and showing you the location of the saved file. This Help box marks the end of the analysis process for oxyHb; repeat the above steps for deoxyHb and totalHb.
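For intuition about what the GLM is fitting: each event regressor in the design is the stimulus time course convolved with a canonical HRF (plus the time and dispersion derivatives selected above, which are omitted here). The sketch below uses a common double-gamma form; the exact nirsLAB/SPM parameterization may differ:

```python
import math

def gamma_pdf(t, shape):
    """Gamma density with unit scale, used to build the canonical HRF."""
    if t <= 0:
        return 0.0
    return t ** (shape - 1) * math.exp(-t) / math.gamma(shape)

def canonical_hrf(t):
    """Double-gamma canonical HRF: a peak around 5 s followed by a
    small undershoot around 15 s (a common simplified form)."""
    return gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6

def hrf_regressor(stimulus, dt=1.0):
    """Convolve a 0/1 stimulus time course with the canonical HRF to
    get the model column whose beta value the GLM estimates."""
    kernel = [canonical_hrf(i * dt) for i in range(int(32 / dt))]
    return [sum(stimulus[i - j] * kernel[j]
                for j in range(min(i + 1, len(kernel))))
            for i in range(len(stimulus))]
```

The beta value estimated for such a column measures how strongly the measured hemoglobin signal follows this predicted response shape.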

  • Data Analysis - Level 2

    Conducting Level 2 Analysis
    To begin level 2 data analysis, click on the “SPM level 2” option in the “Data Analysis” box. This will open “Statistical Parametric Mapping for level 2”. In the “Multiple data specification” box, you are able to set up the parameters for your analysis by selecting the number of groups as well as the number of subjects in each group. For example, if there were two groups with 8 subjects in each group, the notation for number of subjects would be [8,8]. After the number of groups and subjects have been selected, begin loading the hemoglobin SPM files of your desired subjects. Note that when the level two load option is clicked, this will bring up the most recently opened window; be sure to confirm that the folder that was opened contains the desired SPM subject file. An example of a pathway to an SPM file can be seen in the image. Navigate to and load all of the desired SPM files that are specific to the desired subjects and hemoglobin characteristic. Once all the subjects have been loaded in, the load window will close itself, returning to the SPM level two menu. The next step is to specify the contrast for the data files selected. Click “Specify contrast” in the “Contrast specification, result visualization” box (the design matrix may look different depending on the group or subject numbers). This will open the SPM contrast manager window. From this window it is possible to select t-contrast or F-contrast. This selection will depend on the statistical needs of a specific analysis. To begin, enter the name of the test in the first box below the contrast selection box. The next step is to define the contrast; to do this, click “define new contrast” in the lower portion of the window. In the new window, enter a name in the designated box and check which type of contrast is desired. In the image, the specified contrast is examining a motor task with a t-contrast.
    R>L means that when right tapping occurs, it is expected that there will be a greater amount of oxyHb in the left hemisphere motor cortex compared to the right hemisphere motor cortex. The contrast weight values tell the program how to weight and compare events during the test. The notation in which this is written can be described as follows: events with equal weight will be considered part of the same ‘category’ and compared against events weighted with equal but opposite sign. In the image, the values “1” indicate components of the right finger tap event, specifically the canonical, temporal, and dispersion waveforms taken together. The “-1” values indicate the same waveforms but for the left finger tap event. The zeros at the end of the notation indicate constants, or baseline. When the submit button is clicked, the program will determine whether the notation is valid. This will be indicated by a green copy of the notation followed by the word “ok”. After inputting the desired contrast parameters, click “ok”. (Image: contrast manager) After entering the contrast parameters, choose the desired contrast and click “Done”. In the level 2 SPM window, choose the “view” option from the “Contrast specification, result visualization” box. This will open the “SPM result Viewer” window (Image: SPM result viewer). From this window it is possible to choose the statistical context in which the results are viewed, as well as load previous results. To view results, simply choose a statistic from the drop-down window and click “view”. The beta values for the hemoglobin type you have specified have now been compared across trials according to the parameters you have set. This concludes level 2 analysis of the data.
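Applying a contrast vector to the estimated beta values amounts to a weighted sum, c'·beta. The beta values below are made up purely to illustrate the R>L example; only the contrast vector layout mirrors the notation described above:

```python
def contrast_value(betas, contrast):
    """Apply a contrast vector to estimated beta values (c' * beta).
    For the R>L example, the 1s pick out the right-tap regressors
    (canonical, temporal, dispersion), the -1s the left-tap ones,
    and the 0s ignore the constant/baseline terms."""
    assert len(betas) == len(contrast)
    return sum(c * b for c, b in zip(contrast, betas))

# Hypothetical betas for [R_can, R_tmp, R_dsp, L_can, L_tmp, L_dsp, const]
betas = [2.0, 0.1, 0.0, 1.0, 0.05, 0.0, 5.0]
c = [1, 1, 1, -1, -1, -1, 0]
effect = contrast_value(betas, c)  # positive -> more oxyHb for right taps
```

A positive contrast value supports the R>L hypothesis; the t- or F-statistic then tests whether it differs reliably from zero across subjects.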

  • Meta Analysis in R

    Setting up R
    Preliminary Meta
    Meta Regression
    Data/Output Filtering
    Moderator Analysis
    Aggregating Data
    Plotting
    R Meta Poster
