Site is back :) Convert to integer type in Python

Site is back, thanks :wave: :raised_hand_with_fingers_splayed:
In “Master Python for scientific programming by solving projects” and your other Python courses, I noticed that you often convert expressions like (N/2 + 1) to an integer, e.g. for use in range or as an index, using

int(N/2 + 1)

But there is a shorter way: the floor-division operator //.

N//2 + 1

If N is an integer, // produces an integer as well. (If N is a float, however, // returns a float.)
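A quick sketch of the difference (toy values, just for illustration):

```python
N = 7
print(int(N/2 + 1))   # 4
print(N//2 + 1)       # 4

# with a float operand, // returns a float rather than an int
print(7.0 // 2)       # 3.0
print(int(7.0 / 2))   # 3
```

Note that // floors toward negative infinity, while int() truncates toward zero, so the two can differ for negative values.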

Yeah, apologies for the site being down. There was an automatic update that didn’t install properly so I had to rebuild the container, and curiously enough, my internet at home was out for about 4 days at the same time (cosmic coincidence…).

Anyway, thanks for posting that solution. That also works, and there are indeed many solutions to the same problem in Python. Sometimes I show particular solutions that I think have educational value, sometimes I show particular solutions to match with something elsewhere in the code, and sometimes it’s just a random choice.

I believe I explained // vs / earlier in that course, perhaps in Part 1.


Yes, you mention it there, but I encountered int() in so many places that the // shortcut seemed worth pointing out.

By the way, in the “Project Clustering PCA, t-SNE, and k-means” project, I noticed that the normalization factors should be swapped:

# covariances for features
cov_features  = cloud_demean.T@cloud_demean / cloud_demean.shape[1]
cov_featuresZ = cloudz.T@cloudz / cloud.shape[1]

# covariances for observations
cov_observations  = cloud_demean@cloud_demean.T   / cloud.shape[0]
cov_observationsZ = cloudz@cloudz.T / cloud.shape[0]

whereas they should be:

# covariances for features
cov_features  = cloud_demean.T@cloud_demean / cloud_demean.shape[0]
cov_featuresZ = cloudz.T@cloudz / cloud.shape[0]
# covariances for observations
cov_observations  = cloud_demean@cloud_demean.T   / cloud.shape[1]
cov_observationsZ = cloudz@cloudz.T / cloud.shape[1]

I checked cov_featuresZ with pca.get_covariance() result.
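An alternative check with plain NumPy (a minimal sketch on toy data; the shape and the assumption that rows are observations are mine, not the course's): the corrected feature covariance matches NumPy's biased covariance estimate.

```python
import numpy as np

# toy data cloud: rows are observations, columns are features (assumed layout)
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
cloud_demean = cloud - cloud.mean(axis=0)

# feature covariance: normalize by the number of observations, shape[0]
cov_features = cloud_demean.T @ cloud_demean / cloud.shape[0]

# same result as numpy's biased covariance estimate
print(np.allclose(cov_features, np.cov(cloud, rowvar=False, bias=True)))  # True
```

(np.cov with bias=True divides by N rather than N-1, matching the normalization above.)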

Ah, indeed, the number of multiplications/summations corresponds to the “inner” dimensions of the matrix multiplication, which is the number of features for observation-covariance, and the number of observations for the feature-covariance. So I did indeed have those swapped. Good catch!

Fortunately it doesn’t matter, because it’s simply a normalization factor that doesn’t affect the eigenvectors. I’ve put a comment in the code about that.
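A quick numerical sketch of that point (toy data, assumed shapes): rescaling the covariance by a different constant rescales the eigenvalues but leaves the eigenvectors, i.e. the PCA directions, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X -= X.mean(axis=0)

C1 = X.T @ X / X.shape[0]   # normalize by number of observations
C2 = X.T @ X / X.shape[1]   # normalize by number of features (the "swapped" version)

w1, V1 = np.linalg.eigh(C1)
w2, V2 = np.linalg.eigh(C2)

# eigenvalues scale by the ratio of the two normalizers...
print(np.allclose(w1 * X.shape[0], w2 * X.shape[1]))   # True
# ...but the eigenvectors are unchanged (up to sign)
print(np.allclose(np.abs(V1), np.abs(V2)))             # True
```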


Thanks, yes, it’s just a multiplicative effect.

Maybe you will like this catch more :slight_smile: In the interpolation project, I noticed that you interpolated the time vector as well as the signal:
# downsample to 1/2 the original rate

ts_ds = signal.resample(ts,int(N/2))
timevec_ds = signal.resample(timevec,int(N/2))

# plt.xlim([.5,.56])

Actually, according to the documentation, the resampled signal is already evenly sampled, so you don’t have to run the time vector through resample: “The resampled signal starts at the same value as x but is sampled with a spacing of len(x) / num * (spacing of x).” You only need to rebuild the time vector with linspace using the new number of points.

Hence the new code should be:

ts_ds = signal.resample(ts,int(N/2))
timevec_ds = np.linspace(timevec[0], timevec[-1], int(N/2), endpoint=True)
# plt.xlim([.5,.56])

And yes, the problem you mentioned in the video about resampling the time vector is also gone…

ts_us = signal.resample(ts,int(N*3))
# timevec_us = signal.resample(timevec,int(N*3))
timevec_us = np.linspace(timevec[0], timevec[-1], int(N*3), endpoint=True)
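A minimal sketch of the downsampling step on a toy sine (the 5 Hz frequency, N, and dt here are my own assumptions, not the course’s values): the signal goes through signal.resample, while the time axis is rebuilt from the documented spacing len(x) / num * (original spacing) instead of being resampled itself.

```python
import numpy as np
from scipy import signal

# toy signal (assumed values)
N = 1000
dt = 0.001
timevec = np.arange(N) * dt              # original time axis, spacing dt
ts = np.sin(2 * np.pi * 5 * timevec)     # 5 Hz sine

# downsample to 1/2 the original rate
num = int(N / 2)
ts_ds = signal.resample(ts, num)

# rebuild the time axis from the documented spacing:
# new spacing = len(x) / num * (spacing of x)
dt_ds = len(ts) / num * dt
timevec_ds = timevec[0] + np.arange(num) * dt_ds

print(ts_ds.shape)                       # (500,)
```

The linspace version above gives an almost identical axis; its spacing is (N−1)·dt/(num−1) versus the documented len(x)/num·dt, so the two differ only in the endpoint convention.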