Time-course correlation of functional connectivity and power

Hello, Mike and list mates,

I read the paper “Effects of time lag and frequency matching on phase-based connectivity”.

The paper mentions that volume conduction predicts that the time courses of connectivity and power will be nearly perfectly correlated if one’s functional connectivity measure is affected by volume conduction. To address this concern, I computed the band power for the same frequency bands and time segments as my functional connectivity measure. This results in two band-power vectors, one for each electrode on which the functional connectivity metric is based (the functional connectivity is always computed between two electrodes). I then applied a multiple linear regression to see how much of the variance (R²) in my functional connectivity time course (y) can be explained by the band-power time courses (X). I performed this procedure for all frequency bands, electrode pairs of interest, and subjects.
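In case a concrete version helps, here is a minimal sketch of that regression step, assuming the FC and band-power time courses already exist as NumPy arrays (the function and variable names are mine):

```python
import numpy as np
import statsmodels.api as sm

# fc:       (n_segments,) functional connectivity per segment (y)
# bp1, bp2: (n_segments,) band power at the two electrodes (X)
def power_fc_r2(fc, bp1, bp2):
    X = sm.add_constant(np.column_stack([bp1, bp2]))  # intercept + two predictors
    fit = sm.OLS(fc, X).fit()
    return fit.rsquared, fit.f_pvalue  # R² and overall F-test (ANOVA) p-value
```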

However, the question I am asking myself is: what counts as a nearly perfect correlation? In other words, are there thresholds that are commonly used and accepted? In addition to the R², I also obtained a P-value for the respective multiple linear regression (based on the ANOVA F statistic), if this is of help.

Furthermore, I performed 10,000 permutations of the functional connectivity measure (y) along the time course and carried out the same multiple linear regression each time to obtain a permutation distribution of R². From it I computed a P-value (P = sum(R²_perm >= R²_obs)/10000) to get a feeling for the probability of observing an R² of R²_obs or higher. However, P-values are not a measure of how strongly two variables are correlated (i.e., effect size ≠ P-value).
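Here is a minimal sketch of that permutation loop, reusing power_fc_r2 from above (again, the names are mine):

```python
import numpy as np

def perm_r2_pvalue(fc, bp1, bp2, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    r2_obs, _ = power_fc_r2(fc, bp1, bp2)
    r2_perm = np.empty(n_perm)
    for i in range(n_perm):
        # shuffle y only; X stays fixed, which breaks the X-y dependency
        r2_perm[i], _ = power_fc_r2(rng.permutation(fc), bp1, bp2)
    p = np.sum(r2_perm >= r2_obs) / n_perm  # as in the formula above
    return r2_obs, r2_perm, p
```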

Cheers,
Sebastian

Hi Sebastian. Interesting approach! How did you permute the time series – was it randomization of timepoints or cut-and-swap?

I suppose I should have gone into more detail about that claim in the paper. If there were exactly one source in the brain, then volume conduction would lead to a correlation of r = 1. But “one active source” is obviously not physiologically plausible. There are multiple sources, and noise. The noise will decrease the correlation, as will the additional sources (to the extent that they project differently to the two electrodes).

So a more accurate description would be to say that the total covariance between electrodes can be expressed as the sum of (1) true connectivity + (2) volume conduction artifact + (3) other sources + (4) noise.

Then the question is how to separate #1 from the other sources, so you can test that component against zero (or against another condition or other electrode pair, etc). Typical approaches are either to use spatial filtering or a non-zero-phase-lag method. The difficulty is that the data are non-stationary, so the contributions of #2-4 vary over time. I don’t have the answer :frowning:

As for quantifying the effect size, I think R² is a good one. The p-value would tell you whether to “believe” that R² value, and the Z-score, (R²_obs − mean(R²_perm))/std(R²_perm), would tell you how it compares to the shuffled distribution.

Thanks for the answer! In a way I am glad that my problem is not trivial.

I permuted the segments of the computed functional connectivity (FC) measure (sampling without replacement, i.e., cutting the time course into segments and swapping them around). Example: a 5-minute recording and 2-second segments for which I compute my FC. This results in 150 segments, so my y is a vector with 150 entries. I shuffled/permuted this vector in order to break the “dependency” between X and y while keeping the underlying data identical.
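In code, the difference between a full segment shuffle and a single cut-and-swap would look roughly like this (a toy example with names of my own):

```python
import numpy as np

rng = np.random.default_rng(0)
fc = np.arange(150.0)  # stand-in for 150 two-second FC segments (5-min recording)

# Full shuffle of the segments (what I describe above):
fc_shuffled = rng.permutation(fc)

# Cut-and-swap (cut at one random point and swap the two halves), which
# preserves the ordering of segments within each half:
cut = rng.integers(1, fc.size)
fc_cutswap = np.concatenate([fc[cut:], fc[:cut]])
```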

I applied a common average reference to the raw EEG, but obviously this does not get rid of volume-conduction effects. In addition to the FC measure that is not robust to volume conduction, I also employed the wSMI and wPLI, which are robust to it. However, I cannot/should not compute the wPLI for individual 2-second segments, since it needs averaging over many segments in order to be accurate.
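For reference, this is roughly how I compute the wPLI across segments (a sketch with assumed array shapes; the averaging is exactly why single 2-second segments give unstable estimates):

```python
import numpy as np
from scipy.signal import hilbert

def wpli(x, y):
    # x, y: (n_segments, n_samples) band-pass-filtered signals from the
    # two electrodes; returns one wPLI value for the whole set of segments
    cross = hilbert(x, axis=-1) * np.conj(hilbert(y, axis=-1))
    im = np.imag(cross)
    # |mean(Im(S_xy))| / mean(|Im(S_xy)|), averaged over segments and samples
    return np.abs(im.mean()) / np.abs(im).mean()
```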

I agree; I would also have gone with R² and left it at a qualitative heatmap. Additionally, I would provide the phase-difference probability density plots for the respective frequency bands and electrode pairs.

Yeah, nothing connectivity-related is trivial :wink:

I don’t recommend using average reference for connectivity analyses. Laplacian is a good spatial filter for electrode-level analyses.

Are the concerns of Nunez et al. (1997), Thatcher (2012), and Jian et al. (2017) regarding spatial filtering (the Laplacian), namely that it may distort EEG signals by mixing phase differences between electrodes, negligible? For completeness: Thatcher (2012) stated that the same applies to the average reference.

The specific concerns for me are:

  • The unjustified elimination of true zero-lag connectivity. This is also the reason why I used one functional connectivity measure that does not eliminate (near-)zero-lag interactions.

  • The potential loss of phase information due to spatial filtering

If a Laplacian should still be used (even though I only have 16 channels, and the electrodes of interest are in some cases at the edges of the cap, e.g., FC3), which one should it be? A nearest-neighbor Laplacian (small/large) or a Perrin spline?

You can read my (now almost exactly 5 years old!) thoughts about Thatcher’s arguments here.

Ultimately, scalp-level connectivity is difficult; there is no way around it. You need to be willing to sacrifice something in order to get something. I would not argue that the Laplacian is the only way to go, but it seems to be the best option from my experience with simulations and real data. Phase-lag-based measures are also fine – they have their own set of advantages and limitations.

On the other hand, 16 channels is pretty low for the Laplacian. I recommend having a close look at the time series before vs. after filtering to see whether it looks reasonable. In the end, it might be better to go with average-referenced data and the PLI (or any other non-zero-lag measure).

“strongly opinionated loud-mouths”

Love it :smile:. I read through a lot of the papers by Kayser and colleagues describing and showing, through simulated and real data, the advantages of the Laplacian. Unfortunately, what I could not find anywhere is an approach that describes the parameter selection for the (Perrin spline surface) Laplacian. There are essentially 3 variables one can tune, and they are seldom reported to begin with. I don’t want to take the parameters that get me the best results, because that would be, for lack of a better word, Laplacian-hacking.

Or is it common practice to pick the parameters within a reasonable range by using the RMSE and the absolute correlation coefficient (EEG_SL vs. EEG_raw) in a cross-validation approach? The parameters and ranges I have in mind are (see the sketch after this list):

  • lambda…smoothing/regularization parameter: [1e-8 to 0.1]

  • m…interpolation order/flexibility: [2 to 8]

  • n…maximum Legendre polynomial order (leg_order): [7 to 50]
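Here is a sketch of the grid evaluation behind that idea (the cross-validation splits are omitted, and surface_laplacian is a hypothetical stand-in for an actual Perrin-spline implementation):

```python
import numpy as np
from itertools import product

def score_laplacian_params(eeg_raw, lams, ms, ns, surface_laplacian):
    # eeg_raw: (n_channels, n_samples); surface_laplacian(eeg, lam, m, n)
    # is a hypothetical user-supplied Perrin-spline Laplacian function
    results = []
    for lam, m, n in product(lams, ms, ns):
        eeg_sl = surface_laplacian(eeg_raw, lam, m, n)
        rmse = np.sqrt(np.mean((eeg_sl - eeg_raw) ** 2))
        # mean absolute channel-wise correlation, EEG_SL vs. EEG_raw
        abs_r = np.mean([np.abs(np.corrcoef(a, b)[0, 1])
                         for a, b in zip(eeg_sl, eeg_raw)])
        results.append({"lambda": lam, "m": m, "n": n,
                        "rmse": rmse, "abs_r": abs_r})
    return results
```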

Edit:
What I have learned so far from Kayser and Tenke (2015):

  • A more flexible spline (i.e., smaller m) should be used when studying EEG coherence in the frequency domain (m = 2 or 3). However, a low-density EEG montage such as mine will itself act as a spatial low-pass filter.

  • A more flexible spline requires greater smoothing (i.e., greater lambda [previously: smaller, which is incorrect]), and lambda also acts as a spatial filter.

  • Greater Legendre polynomial orders (i.e., greater n) will provide more accurate results.

I discuss these parameters a bit in my ANTS book, but I don’t know optimal settings for 16 channels. My recommendation is to try several different parameter settings on sample data, and then apply those parameters to the rest of the dataset.

I read through ANTS and found some more approaches for gaining insight into the FC measure and whether it may reflect volume-conduction artifacts:

  • In addition to the R² values of the multiple linear regression, which describe the amount of variance accounted for by the two band powers (see first post), I can also investigate the two regression coefficients for the band power at electrode 1 and electrode 2 when X and y are based on the segment-to-segment change in connectivity and band power. If the two regression coefficients are negative, an increase in connectivity is predicted by a decrease in band power, which cannot be due to volume conduction (see the sketch after this list).

  • Spearman’s rank correlation between the FC measures and electrode distance. The squared rho would then describe the explained variance in the ranked variables (FC measure and distance). A large rho² would mean that much of the strength of the FC measure is simply due to the proximity of the electrodes.

  • Finally, I can compute all FC measures based on the surface Laplacian and compare the results.
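A rough sketch of the first two checks; fc, bp1, and bp2 are the arrays from my first post, and fc_per_pair and dist_per_pair are assumed arrays with one value per electrode pair:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

# First bullet: regress segment-to-segment *changes* in FC on the changes
# in band power; two negative coefficients argue against volume conduction
d_fc, d_bp1, d_bp2 = np.diff(fc), np.diff(bp1), np.diff(bp2)
fit = sm.OLS(d_fc, sm.add_constant(np.column_stack([d_bp1, d_bp2]))).fit()
both_negative = np.all(fit.params[1:] < 0)  # params[0] is the intercept

# Second bullet: rank correlation of FC strength with electrode distance
rho, p = spearmanr(fc_per_pair, dist_per_pair)
explained_rank_variance = rho ** 2
```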

Now to the surface Laplacian selection: I thought that correlating the wPLI computed from the common-average-referenced data with the wPLI computed from the surface-Laplacian data may give a good indication of whether the chosen parameters are appropriate. Based on this approach, m = 2 or 3 and lambda = 0.1 would be valid. This would make sense interpreted through the lens of Kayser and Tenke (2015), as EEG coherence in the frequency domain should be studied using a more flexible spline (smaller m), and a flexible spline requires greater smoothing (greater lambda). While a lambda of 0.1 seems awfully large, decreasing it reduces the correlation between wPLI_car and wPLI_SL drastically.
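In code, that selection step looks roughly like this (the names are mine: wpli_car holds one wPLI per electrode pair from the common-average-referenced data, and wpli_sl_by_params maps each parameter set to the corresponding wPLI vector from the Laplacian-filtered data):

```python
import numpy as np

def rank_laplacian_params(wpli_car, wpli_sl_by_params):
    # correlate the wPLI patterns across electrode pairs and rank the
    # (lambda, m, n) parameter sets by agreement with the CAR-based wPLI
    scored = [(np.corrcoef(wpli_car, sl)[0, 1], params)
              for params, sl in wpli_sl_by_params.items()]
    return sorted(scored, reverse=True)  # highest agreement first
```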

btw, keep in mind that I’m one of the “strongly opinionated loud-mouths.” I have thought a lot about these issues over the past decade, but I don’t know the “right” answers (if there are any); I only know what has worked well for me (which might not generalize) and what my intuition tells me (which might be wrong).

I don’t think people have really systematically evaluated the Laplacian parameters. As for Lap-hacking: that’s not an issue if you start from sample data. It’s only “hacking” if you do the full analysis with all of your data, varying the parameters, and picking the one that gives the nicest results without reporting all the other results. Everything before that is piloting.

On the other hand, systematically varying the parameters and reporting all the results would be great, but that gets pretty methods-heavy, and I’m not sure if that’s the primary goal.

Don’t worry, I am aware, and I had a chuckle when I read the “including yours truly” remark in that post :smile:

With regard to systematically varying the parameters and reporting all the results: I am afraid I don’t have the knowledge needed to provide a sufficient and sound interpretation, and I feel that simply presenting the results without interpretation would not help anyone. Especially since this is not a controlled environment like one has in a simulation study. In other words, I don’t know where, or even if, true connectivity occurs, and therefore I can’t make a statement about which parameters provide (more) accurate results. The wPLI seems reasonably robust, and that is why I used it on the sample data for the correlation to get “optimal” parameters. However, it could be that the Laplacian I applied had little effect, and that is why wPLI_SL came out similar.

Finally, the only thing I can do is look at my results, check whether and to what extent they could be explained by volume conduction using the aforementioned methods, and be humble about it. All the preprocessing and additional analyses in the world cannot magically increase the number of physical channels. That said, only 2 electrode pairs of interest have a distance of 4 cm between them (beeline), which I will keep in mind when interpreting the results. The rest are all between 8 and 14.5 cm apart.

That all sounds very reasonable. Good luck!

Cheers and thank you for letting me pick your brain. It was a pleasure discussing this topic with you!