Trial by Trial Granger Causality

Hello Mike,

Thank you for taking the time to answer this question.
I noticed that in your book and the online code accompanying Chapter 28, you neither discuss nor show how to calculate Granger prediction in the frequency domain at the trial level.

Looking at your online code for Figure 28.5, you first reshape a 3D matrix (electrode by time by trial) into a 2D matrix (electrode by time*trial). After this, you run the armorf() function to get your autoregression error values.
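As a side note for other readers, that concatenation step can be sketched in NumPy (a Python sketch with made-up dimensions, not the book's actual MATLAB code; order='F' mimics MATLAB's column-major reshape):

```python
import numpy as np

# hypothetical dimensions: 2 electrodes, 300 time points, 40 trials
n_elec, n_time, n_trials = 2, 300, 40
eegdata = np.random.randn(n_elec, n_time, n_trials)

# concatenate trials end to end: (elec, time, trial) -> (elec, time*trial)
# order='F' stacks whole trials one after another, matching MATLAB's reshape
tempdata = eegdata.reshape(n_elec, n_time * n_trials, order='F')

print(tempdata.shape)  # (2, 12000)
```

With this ordering, columns 0 to 299 of tempdata are trial 1, columns 300 to 599 are trial 2, and so on, which is what the concatenated armorf() call assumes.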

I have slightly modified your code below. It is essentially unchanged, except that I loop through trials rather than concatenating them into a single long trial. Your book is so comprehensive that I am curious whether there is a particular reason this was left out. As far as you can tell, is the script below methodologically appropriate?

tempdata = eegdata(1:2, times, trialn); % the two channels of interest

for triali = 1:size(tempdata,3)
    % fit AR models per trial (model estimation from bsmart toolbox);
    % the second input is 1 because each call now sees a single trial
    [Ax,Ex] = armorf(tempdata(1,:,triali), 1, window_length, order_points);
    [Ay,Ey] = armorf(tempdata(2,:,triali), 1, window_length, order_points);
    [Axy,E] = armorf(tempdata(:,:,triali), 1, window_length, order_points);

    % this is the original code, to be run on reshaped (concatenated) data:
    %[Ax,Ex] = armorf(tempdata(1,:), trialn, window_length, order_points);
    %[Ay,Ey] = armorf(tempdata(2,:), trialn, window_length, order_points);
    %[Axy,E] = armorf(tempdata,      trialn, window_length, order_points);

    % code below is adapted from bsmart toolbox function pwcausal.m
    % corrected covariance
    eyx = E(2,2) - E(1,2)^2/E(1,1);
    exy = E(1,1) - E(2,1)^2/E(2,2);
    N = size(E,1);

    for fi = 1:length(frequencies)

        % transfer matrix (note the similarity to the Fourier transform)
        H = eye(N);
        for m = 1:order_points
            H = H + Axy(:,(m-1)*N+1:m*N)*exp(-1i*m*2*pi*frequencies(fi)/srate);
        end

        Hi = inv(H);
        S  = H\E*Hi'/srate;

        % Granger prediction per frequency
        % (mini is the sliding time window index at which I store these results)
        tf_granger(1,fi,mini,triali) = log( abs(S(2,2))/abs(S(2,2)-(Hi(2,1)*exy*conj(Hi(2,1)))/srate) );
        tf_granger(2,fi,mini,triali) = log( abs(S(1,1))/abs(S(1,1)-(Hi(1,2)*eyx*conj(Hi(1,2)))/srate) );
    end

end

Thank you so much!

Hiro

Hi Hiro. Unless you have really amazingly clean data, I would not recommend single-trial spectral Granger analyses. Estimating autoregression models is tricky business, and requires either a lot of data or noiseless data. Even with dozens of trials, I’m still sometimes concerned that the model fits aren’t good enough.

Otherwise, your code looks fine from a quick glance. It’s more a matter of whether the single-trial data have sufficient SNR.

Indeed, the resulting trial-by-trial GC maps appeared to be quite noisy. What you wrote makes a lot of sense. Thank you for clarifying!

If I may ask a related question. I am recording neural signals during a button-pressing task which involves a visual cue prior to movement selection.

Let’s say that I align my data to this visual cue. When I calculate Granger causality, I can then baseline-normalize to the same -1.5 s to -1 s time period prior to the cue, while the subject is resting. This analysis would be well complemented by a movement-locked analysis. For phase-synchrony or power analyses, I can align each trial to movement onset after baseline normalization, because I have trial-level data. This lets me compare cue-locked and movement-locked data against the exact same baseline period. Without trial-by-trial Granger prediction, however, I cannot use this method to compare my cue-locked and movement-locked analyses to the same baseline period.

One solution that appears to have worked well was calculating three time-frequency Granger maps:

  1. in the baseline period using cue-locked data,
  2. in the trial period using cue-locked data,
  3. in the trial period using movement-locked data.

I would then normalize the latter two matrices using the mean (or mean and standard deviation) of the first TF matrix. The major issue I see with this is that when I concatenate trials to calculate GC, matrices 1 and 2 will have exactly the same end-to-end discontinuities at the trial boundaries, whereas the 3rd matrix will have different trial-to-trial shifts during data concatenation. This should only affect GC values at the edges of TF2 and TF3, though. And if I shorten TF2 and TF3 by at least one window length on either side, I would suppose that the baseline-normalized values would be comparable. Are there any issues with this baseline normalization method that I haven’t considered?
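For concreteness, the normalization I have in mind looks something like this (a NumPy sketch with hypothetical array names and sizes; tf_base, tf_cue, and tf_move stand for the three frequency-by-time-window Granger maps, and the trim width is an assumption):

```python
import numpy as np

# hypothetical Granger maps: frequencies x time windows
n_freq, n_time = 30, 50
rng = np.random.default_rng(0)
tf_base = rng.random((n_freq, 10))      # 1. baseline period, cue-locked
tf_cue  = rng.random((n_freq, n_time))  # 2. trial period, cue-locked
tf_move = rng.random((n_freq, n_time))  # 3. trial period, movement-locked

# per-frequency baseline mean and std from the cue-locked baseline map
base_mean = tf_base.mean(axis=1, keepdims=True)
base_std  = tf_base.std(axis=1, keepdims=True)

# z-score both trial-period maps against the same baseline
tf_cue_z  = (tf_cue  - base_mean) / base_std
tf_move_z = (tf_move - base_mean) / base_std

# trim the edges by at least one window length to discard
# concatenation edge artifacts (trim width is an assumption)
trim = 5  # assumed number of time windows spanned by one analysis window
tf_cue_z  = tf_cue_z[:, trim:-trim]
tf_move_z = tf_move_z[:, trim:-trim]
```

The point is simply that both trial-period maps are referenced to the identical pre-cue baseline statistics, and the edge columns most affected by concatenation are discarded.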

Again, thank you!

Yup, that sounds right: Use the same pre-cue baseline for the cue and response epochs.

Great, thanks a lot, Mike!