Can we filter ERP EEG data using a band-pass filter?

I want to classify normal vs. schizophrenia subjects using EEG, and I have ERP data.
After feature extraction, when I do classification my accuracy is 70%, but if I filter the data with a band-pass filter I get an accuracy of 93%. I am a bit confused about whether this is the right approach.
Or should the band-pass filter be applied only to raw data, not to ERP data?
Thanks

Hi Talha. I think your result indicates that the predictive power is in one frequency range, and thus including the other frequency ranges is only adding irrelevant features, which decreases the predictive power. One thing you can try is repeating your analysis for a range of narrowband frequencies, and then making a plot of classification accuracy as a function of the filter frequency.
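A minimal sketch of that frequency scan, assuming the single-trial data sit in a NumPy array X_trials of shape (trials, samples) with class labels y, and using simple band power as a stand-in for whatever features are actually being extracted (X_trials, y, the band width, and the SVM are all placeholders, not details from this thread):

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, sosfiltfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 1024  # sampling rate mentioned later in this thread

def bandpass(data, lo, hi, fs, order=4):
    # second-order sections are numerically safer than (b, a) coefficients
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)

centers = np.arange(3, 58, 2)   # narrowband center frequencies to scan
accuracies = []
for fc in centers:
    Xf = bandpass(X_trials, fc - 2, fc + 2, fs)            # 4 Hz wide band
    feats = (Xf ** 2).mean(axis=-1).reshape(len(y), -1)    # band power as a placeholder feature
    accuracies.append(cross_val_score(SVC(), feats, y, cv=10).mean())

plt.plot(centers, accuracies, "o-")
plt.xlabel("Filter center frequency (Hz)")
plt.ylabel("10-fold CV accuracy")
plt.show()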

Because computing the ERP is a linear operation (sum the trials and divide by N), it doesn’t actually matter whether you filter the raw data or the ERP. The result will be the same (except possibly at the edges, where there might be some edge effects that differ between the ERP and the raw signal).
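A quick way to see this with simulated data (the trial count, epoch length, and filter band below are arbitrary):

import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)
fs = 1024
trials = rng.standard_normal((50, 2 * fs))   # 50 fake trials of 2 seconds each

sos = butter(4, [1, 40], btype="bandpass", fs=fs, output="sos")

erp_then_filter = sosfiltfilt(sos, trials.mean(axis=0))            # average first, then filter
filter_then_erp = sosfiltfilt(sos, trials, axis=-1).mean(axis=0)   # filter each trial, then average

# prints True: averaging and filtering are both linear, so the order doesn't matter
print(np.allclose(erp_then_filter, filter_then_erp))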

Mike

I am confused that just changing the filter order from 3 to 4 destroys the signal.
low cutoff frequency = 0.1 Hz
high cutoff frequency = 60 Hz
sampling frequency = 1024 Hz

With order 3, the machine learning algorithm gives 70%, but when the signal is lost (with order 4), I get 93% accuracy. I am curious what my features are learning from the data obtained by applying the order-4 filter, and what my algorithm is actually classifying.

I have also tried changing the low cutoff to 1 Hz instead of 0.1 Hz.

IIR filters are less stable than FIR filters. I suggest either using an FIR filter, or filtering in two separate steps (first high-pass at 0.1 Hz, then low-pass at 60 Hz). Always check your filters before applying them to data.
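One way to do such a check with SciPy, sketched here for the settings mentioned above (0.1 to 60 Hz at 1024 Hz); the specific orders and FIR kernel length are only illustrative:

import numpy as np
from scipy.signal import butter, tf2zpk, firwin, freqz

fs, lo, hi = 1024, 0.1, 60

# inspect the poles of the one-step band-pass design at both orders
for order in (3, 4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    _, poles, _ = tf2zpk(b, a)
    print(f"order {order}: max pole magnitude = {np.abs(poles).max():.6f}")
    # a magnitude at or very near 1 is a warning sign that the filter can blow up

# alternative 1: two separate IIR filters, kept in second-order-section form
# (apply them in sequence with scipy.signal.sosfiltfilt)
sos_hp = butter(4, lo, btype="highpass", fs=fs, output="sos")
sos_lp = butter(4, hi, btype="lowpass", fs=fs, output="sos")

# alternative 2: an FIR band-pass (always stable, but a 0.1 Hz cutoff needs a long kernel)
fir = firwin(10 * fs + 1, [lo, hi], pass_zero=False, fs=fs)
w, h = freqz(fir, worN=8192, fs=fs)   # inspect the magnitude response before filtering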

Sir, I am trying to understand: after applying an unstable filter, my data lose all their information. So what sort of information are my features extracting? I think they are total garbage. Then how can my model classify them so accurately?

Classifiers can be a lot trickier than they are often presented to be. For example, they can misinterpret artifacts as real features. This is part of the reason why cross-validation is important. Without knowing your data and which classifier you used (there are many), I cannot say what went wrong. But you can see how important it is to inspect your data carefully and to be critical of results!

I absolutely agree with you, Sir. I am trying to understand: if the classifier is treating artifacts as features, then how can it classify normal vs. diseased subjects so well?

I have never seen such strange behaviour before.

I have tried a decision tree with 10-fold cross-validation. Here is the result:
array([1. , 1. , 1. , 1. , 1. , 0.625, 0.375, 1. , 1. , 0.42857143])
If my model is learning nothing, then it should not give low accuracy on some splits, should it?

Here is the result for the SVM:
array([1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. , 0.875, 0.57142857])
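One way to probe whether scores like these reflect real class information rather than an artifact is a label-permutation test: shuffle the labels many times and compare the real cross-validated accuracy against the shuffled-label scores. A sketch with scikit-learn, again assuming a feature matrix X and labels y:

from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, permutation_test_score

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier()), ("SVM", SVC())]:
    score, perm_scores, pvalue = permutation_test_score(
        clf, X, y, cv=cv, n_permutations=200, scoring="accuracy"
    )
    print(f"{name}: accuracy = {score:.3f}, "
          f"permuted-label mean = {perm_scores.mean():.3f}, p = {pvalue:.3f}")
# if the real accuracy is not clearly above the permuted-label scores,
# the classifier is most likely latching onto something spurious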