wPLI, consistent time points, dwPLI, sampling rate

A couple of questions on wPLI and dwPLI

Does one need a consistent number of time points between conditions when calculating wPLI?
Does debiasing wPLI effectively account for this?

Is it generally regarded as necessary to use dwPLI as opposed to wPLI nowadays (for this reason)?

What is an adequate amount of continuous time or time points for calculating wPLI up to 30 Hz? I have been arbitrarily selecting 10+ second continuous chunks of data.

How do you deal with edge artifacts? Is chopping off two cycles (relative to the frequency band of interest) on either end of a time series adequate?

What is the appropriate sampling rate to downsample to? I see 200 Hz is common, while 100 Hz should suffice for my maximum frequency of interest of 30 Hz. However, interestingly, I find slightly different results when downsampling to the two rates…

Hi Josh. See below.

Yes, that’s the idea of the debiased wPLI. Still, it’s a good idea to try to have conditions with a roughly comparable number of time points/trials.
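For concreteness, here is a minimal NumPy sketch of the two estimators (from Vinck et al., 2011). It assumes you already have the complex cross-spectra for one channel pair at one frequency, one value per trial or time point (e.g., csd = sig_a * np.conj(sig_b) after wavelet convolution); the function name is just a placeholder.

```python
import numpy as np

def wpli_and_dwpli(csd):
    """wPLI and debiased wPLI-square from complex cross-spectra.

    csd : 1-D complex array of cross-spectral values (one per trial
          or time point) for a single channel pair and frequency.
    """
    im = np.imag(csd)
    sum_im     = np.sum(im)          # sum of imaginary parts
    sum_abs_im = np.sum(np.abs(im))  # sum of their magnitudes
    sum_sq_im  = np.sum(im**2)       # sum of their squares

    # wPLI: biased upward when the number of samples is small
    wpli = np.abs(sum_im) / sum_abs_im

    # debiased wPLI-square: removes that sample-size bias
    dwpli = (sum_im**2 - sum_sq_im) / (sum_abs_im**2 - sum_sq_im)

    return wpli, dwpli
```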

I wouldn’t say necessary, but yes. What I’ve seen in my own analyses is that ISPC, PLI, wPLI, and dwPLI generally all show qualitatively similar results, except for neighboring electrodes or when no spatial filter is applied. And even then, all the *PLI results look pretty similar, just with varying degrees of smoothness.
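For reference, ISPC and PLI can be computed from the same per-sample cross-spectra as in the sketch above, which makes side-by-side comparisons easy; again, the names are placeholders.

```python
import numpy as np

def ispc_and_pli(csd):
    """ISPC and PLI from the same complex cross-spectra as above."""
    phase_diff = np.angle(csd)                       # phase-angle differences
    ispc = np.abs(np.mean(np.exp(1j * phase_diff)))  # phase clustering
    pli  = np.abs(np.mean(np.sign(np.imag(csd))))    # phase-lag index
    return ispc, pli
```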

Impossible to say, because it depends on the effect size, the noise level and its characteristics, and so on. I think a continuous 10-second segment is probably too long for neural data because of non-stationarities, unless this is in-vitro or sleep/anesthetized data where you expect little variability. It might be better to cut the data into 1- or 2-second segments, compute dwPLI within each segment, and then average the results together.
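A rough sketch of that segment-and-average approach, reusing the hypothetical wpli_and_dwpli function from above. The sampling rate, segment length, and random stand-in signals are all assumptions; with real data, sig_a and sig_b would come from band-pass filtering each channel first.

```python
import numpy as np
from scipy.signal import hilbert

srate   = 1000                           # assumed sampling rate (Hz)
seg_len = 2 * srate                      # 2-second segments

# Stand-ins for two band-pass-filtered channels (noise, just to run)
rng   = np.random.default_rng(0)
sig_a = hilbert(rng.standard_normal(10 * srate))  # analytic signals
sig_b = hilbert(rng.standard_normal(10 * srate))

dwplis = []
for s in range(len(sig_a) // seg_len):
    idx = slice(s * seg_len, (s + 1) * seg_len)
    csd = sig_a[idx] * np.conj(sig_b[idx])  # per-sample cross-spectra
    dwplis.append(wpli_and_dwpli(csd)[1])   # dwPLI for this segment

dwpli_avg = np.mean(dwplis)                 # average across segments
```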

There are a few ways to deal with edge effects, which I discuss in my book. But basically you would just exclude those time points from the analysis.
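To illustrate the two-cycle trimming from the question (whether two cycles is enough depends on the impulse-response length of your filter or wavelet), here is a sketch with an assumed sampling rate and lower band edge:

```python
import numpy as np

srate    = 1000                          # assumed sampling rate (Hz)
low_freq = 4                             # assumed lowest frequency in the band
filtered = np.random.randn(10 * srate)   # stand-in for a filtered signal

# Two cycles at the lowest frequency, converted to samples
buffer_n = int(np.ceil(2 * srate / low_freq))
interior = filtered[buffer_n:-buffer_n]  # drop the contaminated edges
```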

Theoretically, your highest frequency just needs to be below the Nyquist frequency (half the sampling rate). But in practice, the SNR starts degrading as you approach Nyquist, even from well below it (again, this depends on the effect size and noise). I generally recommend having a sampling rate of at least 5 times the highest frequency you want to analyze. But unless you have a huge amount of data and/or 1980s-style computer limitations, I’d leave the sampling rate at its original value, or downsample to 1 kHz if it’s super-high resolution.
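If you do downsample, the anti-aliasing low-pass filter matters; scipy.signal.decimate applies one before downsampling. A sketch (the rates and data are placeholders):

```python
import numpy as np
from scipy.signal import decimate

srate_in = 1000                        # assumed original rate (Hz)
x = np.random.randn(10 * srate_in)     # stand-in for one channel

# 1000 Hz -> 200 Hz keeps >5x the 30 Hz upper frequency of interest;
# decimate low-pass filters the signal before taking every 5th sample
x_200 = decimate(x, q=5, ftype='fir')
```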

If you are getting different results between the two sampling rates, then you’re probably too close to Nyquist, and the signal quality is degrading.

Hey, just want to say thank you. This helped a lot.