# Experience with spatial filtering (RESS)

Hi,
when calculating the component maps (via `COV_signal * eigenvectors / (eigenvectors' * COV_signal * eigenvectors)`), I often receive this warning from MATLAB:

> Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = [some value]

Has anybody had the same issue? And how can one handle it (if that is possible at all)?

My take is that it probably relates to very small, near-zero values in the eigenvector matrix.
When calculating the eigenvector matrix, I tried regularizing the noise covariance matrix COV_noise. I.e., instead of

[eigenvecs, eigenvals] = eig(COV_signal, COV_noise);

I tried with

[eigenvecs, eigenvals] = eig(COV_signal, COV_noise+lambda*eye(size(COV_noise)));

where lambda is a somewhat arbitrary value like

lambda = 0.1 * trace(COV_noise)/size(COV_noise,1);
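In NumPy/SciPy terms, this regularized GED would look roughly like the sketch below (a non-authoritative translation: `scipy.linalg.eigh` is assumed for the generalized eigenproblem, and the covariance matrices are toy stand-ins for the real COV_signal and COV_noise):

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-ins for the covariance matrices (the real COV_signal and
# COV_noise would come from the filtered EEG data)
rng = np.random.default_rng(0)
cov_signal = np.cov(rng.standard_normal((8, 200)))
cov_noise = np.cov(rng.standard_normal((8, 200)))

# Tikhonov-style regularization: add a scaled identity to the noise covariance
lam = 0.1 * np.trace(cov_noise) / cov_noise.shape[0]
evals, evecs = eigh(cov_signal, cov_noise + lam * np.eye(cov_noise.shape[0]))

# eigh returns eigenvalues in ascending order; the strongest component is last
w = evecs[:, -1]
```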

To make the long story short: It does not seem to help.

Any insights would be much appreciated
All the best
Norman

Hi Norman. You can try shrinkage regularization of the R matrix, which would look something like this:

``````
gam = .01; % shrinkage amount (i.e., 1% regularization)
Rr = R*(1-gam) + mean(eig(R))*gam*eye(length(R));
[evecs,evals] = eig(S,Rr);
``````
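A quick NumPy sketch of why this helps (illustrative only, with a toy rank-deficient covariance standing in for R): shrinkage toward the scaled identity restores full rank and improves the conditioning.

```python
import numpy as np

rng = np.random.default_rng(1)
# A rank-deficient "noise" covariance, as after removing ICs: 8 channels, rank 5
A = rng.standard_normal((8, 5))
R = A @ A.T

gam = 0.01  # 1% shrinkage
# mean(eig(R)) equals trace(R)/n, so the eig() call can be avoided
Rr = R * (1 - gam) + (np.trace(R) / R.shape[0]) * gam * np.eye(R.shape[0])

# Shrinkage restores full rank and improves the conditioning
full_rank = np.linalg.matrix_rank(Rr) == Rr.shape[0]
better_conditioned = np.linalg.cond(Rr) < np.linalg.cond(R)
```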

If that doesn’t work, then there might be a more serious problem. Can you check the rank of the covariance matrices vs. their size? Also try making images of the matrices to make sure they look like EEG covariance matrices. For example, are there any channels with zero covariance with all other channels? NaNs in the data?

Feel free to email me off-list with your matrices in attachment and I can take a quick look.

Hey Mike,
thanks for the super fast answer! I will try that out. I forgot to mention that the data is indeed rank deficient because of removing artifactual ICs. So this could be a problem here as well, even though there is no explicit matrix inversion with inv()? Would reducing dimensionality via PCA help here?

You could also do PCA compression to a full-rank dataset. But then the GED vectors need to be transformed through the PCA to get back to channel space. In my experience, that doesn’t end up giving a big advantage.

How do the ranks compare to the number of channels? For 64-channel data, a rank of ~60+ should be fine. If the rank is, e.g., 14, then something needs to be fixed.

Hm, I see. I don’t have the data with me right now but data rank should be around 40 to 50 for these files.

I guess this issue won’t be fixed with different regularization schemes?

Btw the decompositions actually look sensible, and experimental effects are comparable to sensor-space data, although not improving on them. The frequency time courses are shakier, and error bars vary quite a bit between frequencies.

I’m sorry, Norman, I’m just now re-reading your first post and I see the problem is with computing the forward models, not the GED. I still stand by my previous suggestion to apply some regularization. I like shrinkage because it works well, is easy to implement, and is a closed-form solution (that is, it doesn’t require iteratively sampling from a distribution to fit some optimization function). Other regularization methods will probably do just as well.

But the actual problem is from trying to invert reduced-rank matrices. Instead, you can get the spatial map for one component by multiplying the covariance matrix by the eigenvector, thus, `eigenvectors(:,1)'*COV_signal`.
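As a NumPy sanity check of this shortcut (a sketch with toy covariances, assuming `scipy.linalg.eigh` for the GED): the inversion-free map S*w agrees with the corresponding column of the full formula up to a scalar, because the "denominator" W'SW is diagonal for GED eigenvectors.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
S = np.cov(rng.standard_normal((6, 300)))  # stand-in for COV_signal
R = np.cov(rng.standard_normal((6, 300)))  # stand-in for COV_noise

evals, W = eigh(S, R)  # generalized eigendecomposition (eigenvalues ascending)
w = W[:, -1]           # eigenvector of the strongest component

# Spatial map without any inversion: just S times the eigenvector
map_simple = S @ w

# The full formula with the normalization term, for comparison
map_full = (S @ W @ np.linalg.inv(W.T @ S @ W))[:, -1]

# The two maps agree up to a scalar (the eigenvalue normalization)
assert np.allclose(map_full * evals[-1], map_simple)
```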

I’m glad to hear the component time series looks good. Keep in mind that the component is simply a weighted combination of the electrodes, and so cannot produce anything that isn’t already there in the electrode data. If the component looks basically just as good as the electrode data, it means (1) your finding is fairly localized to one electrode, and (2) your data are fairly clean.

Thanks for clarifying things, Mike!
I tried the shrinkage regularization and it seems comparable to the previous one. It improves SNR of the evoked spectrum by 0.001 (at least for the dataset that I checked). Btw, it looks a bit like ridge regularization. Is it related to that?
Anyway, the warning, as you guessed, did not disappear.

Regarding your approach to compute the spatial maps, I did the following:

``````
compmaps = eigenvectors(:,:)'*COV_signal;
[~,idx] = max(abs(compmaps(:,comp2plot)));
compmaps_n = compmaps * sign(compmaps(idx,comp2plot)); % forcing a positive sign
``````

However, this results in some awkward-looking maps (also without the sign flip). Why do you think that is?

Funny thing: I compared

``````
inv(evecs')
``````

with

``````
COV_signal * eigenvectors / (eigenvectors' * COV_signal * eigenvectors)
``````

Although the data is rank deficient and the resulting maps are identical, only the latter produces the warning mentioned in my first post.

You are right that the data is pretty clean and standard maps are quite localized. I also did not really expect RESS to find anything new. The idea was to have a data-driven solution to avoid electrode selection for every participant of the sample. Although spectral peaks are fairly localized, individual peak electrodes vary quite a bit across the sample. To account for this, I currently average over a broader electrode cluster. But I thought it might be more straightforward to get a single time course for each participant without the need to decide between two ends of a dipole.

These are SSVEP datasets, so I am thinking canonical correlation analysis between the original sinusoid (the visual flicker) and the evoked (and narrow-band filtered) time series might be a neat approach as well. I guess there are some pitfalls, too, especially when using rank-deficient data. Have you tried that approach?

Lots of good points here.

Yes, most regularization methods are similar and boil down to replacing C with (C+aI), where a is some small constant and I is the identity matrix. The question is how to choose a. Now that I look back at your code, I see you were already implementing shrinkage regularization, because the trace of a matrix equals the sum of its eigenvalues. Damn, this is the second time I didn’t read your post carefully enough. Shame on me! That said, I think 10% regularization is a bit much; my preference is to use as little regularization as possible that still gives a reasonable result.
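The trace/eigenvalue equivalence is quick to check numerically (a NumPy sketch with a toy covariance standing in for the real noise covariance):

```python
import numpy as np

rng = np.random.default_rng(3)
R = np.cov(rng.standard_normal((8, 200)))  # stand-in for the noise covariance

# trace(R) equals the sum of the eigenvalues of R, so the earlier
# 0.1*trace(R)/n and 0.1*mean(eig(R)) are the same scaling factor
assert np.isclose(np.trace(R), np.sum(np.linalg.eigvalsh(R)))
assert np.isclose(0.1 * np.trace(R) / R.shape[0],
                  0.1 * np.mean(np.linalg.eigvalsh(R)))
```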

For the entire eigenvectors matrix, drop the transpose. That’s just for a single column vector. Unfortunately, the code still works because it’s a square matrix.

Yes, this is equivalent to the longer line with the division. If you have some linear algebra background, you can show this by writing out the formula for the following line of code, assuming that all matrices are full rank, and then just reducing and reducing and reducing.

``````
COV_signal * eigenvectors / (eigenvectors' * COV_signal * eigenvectors)
``````

Note that the “denominator” above is actually just a normalization factor. It looks like division, but the / is actually interpreted by MATLAB as A*inv(B).

For linear algebra reasons I won’t go into here, the eigenvectors matrix is generally full rank, even when the covariance matrix is reduced rank. The “extra” eigenvectors span the null space of the matrix. But multiplying a full-rank matrix by a reduced-rank matrix produces a reduced-rank product, and it’s that product matrix that’s causing problems. That’s why you can simply use the numerator and ignore the denominator. You can consider z-normalizing to pool the maps across subjects, although I don’t think that will be necessary.
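This rank behavior is easy to reproduce in NumPy (a sketch with a synthetic rank-deficient signal covariance; `scipy.linalg.eigh` assumed for the GED):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
# Rank-deficient signal covariance (8 channels, rank 5), full-rank noise covariance
A = rng.standard_normal((8, 5))
S = A @ A.T
R = np.cov(rng.standard_normal((8, 200)))

evals, W = eigh(S, R)  # generalized eigendecomposition

rank_W = np.linalg.matrix_rank(W)
# use a modest tolerance to classify the numerically-zero singular values
rank_WSW = np.linalg.matrix_rank(W.T @ S @ W, tol=1e-10)
rank_SW = np.linalg.matrix_rank(S @ W, tol=1e-10)
# rank_W is 8 (full), but rank_WSW and rank_SW are 5: inverting W'SW is what
# triggers the warning, while the numerator SW needs no inverse at all
```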

I haven’t tried it, but I think it should also work fine. Our reasoning in the RESS paper was that we want to assume that the SSVEP is narrowband but not necessarily sinusoidal (or square-shaped, or whatever is the temporal shape of the stimulus). But that was more an a priori assumption as opposed to something we tested rigorously.

Never mind, it is still super helpful and instructive to talk to you!

Oh, I did not use 10% regularization; that was a typo, mea culpa this time.
So the “improvement” in fact was a rounding error.

> For the entire eigenvectors matrix, drop the transpose. That’s just for a single column vector. Unfortunately, the code still works because it’s a square matrix.

Then the component with maximum eigenvalue is here:

``````
compmaps(comp2plot,:)
``````

and not here:

``````
compmaps(:,comp2plot)
``````

Correct?

For the full matrix multiplication, you still need SW, in other words, `COV_signal*eigenvectors`. Matrix multiplication is non-commutative, so SW != WS. It works with the vector w (in other words, w'S = (Sw)') because S is a symmetric matrix, so the difference between pre- and post-multiplying by the vector is just the orientation (row vs. column). But SW and WS cannot be interchanged.

Thus, you should keep the same order SW as in the original equation, and then the component map is still in the columns, thus compmaps(:,1). I’m sorry I didn’t make that more clear in my previous comment.
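Both facts are quick to verify numerically (a NumPy sketch with toy matrices; S is symmetric like a covariance matrix, W is a stand-in eigenvector matrix):

```python
import numpy as np

rng = np.random.default_rng(5)
S = np.cov(rng.standard_normal((5, 100)))  # symmetric, like a covariance matrix
W = rng.standard_normal((5, 5))            # stand-in eigenvector matrix
w = W[:, 0]

# For a single vector, pre- vs post-multiplying symmetric S only flips
# orientation: w'S equals (Sw)' (both come out as the same 1-D array here)
assert np.allclose(w @ S, S @ w)

# For full matrices, order matters: SW and WS differ in general
assert not np.allclose(S @ W, W @ S)
```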

Sure, you're right. Somehow I got confused because the maps really looked awkward. But this was because I blindly dropped the transpose while leaving the matrices in the same positions (i.e., `eigenvectors(:,:)*COV_signal`).
Now it makes perfect sense. Thanks a lot again!