How to visualize the learned features? #741

Open

jiangye-git opened this issue Apr 17, 2023 · 6 comments

@jiangye-git

Hi, I want to qualitatively observe the learned data features and compare the classification results of different models, but I have no idea how to get the learned data features. I would appreciate any guidance. Thanks in advance!

@vrodriguezf
Contributor

What do you mean by "learned data features"? If you mean analysing the importance of each feature in the model, I believe there's something like feature_importance. If you want to analyse the internal activations of the network, there are a couple of functions in the explainability module that can help you with that.
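
For the second option, here is a rough sketch of the idea (it uses plain fastai hooks and scikit-learn rather than the tsai explainability helpers, and it assumes a trained tsai Learner called `learn`; the layer selection is just an example):

```python
# Rough sketch: capture activations from a late layer and project them with t-SNE.
# Assumes a trained tsai Learner `learn`; adapt the layer selection to your model.
import torch
import matplotlib.pyplot as plt
from fastai.callback.hook import hook_outputs
from sklearn.manifold import TSNE

xb, yb = learn.dls.one_batch()                    # one batch of (inputs, targets)
layer = list(learn.model.children())[-2]          # e.g. the layer just before the head

with hook_outputs([layer]) as hooks:
    with torch.no_grad():
        learn.model.eval()(xb)
acts = hooks.stored[0]                            # activations captured by the hook
feats = acts.flatten(1).cpu().numpy()             # one feature vector per sample

emb = TSNE(n_components=2).fit_transform(feats)   # 2D embedding of the learned features
plt.scatter(emb[:, 0], emb[:, 1], c=yb.cpu().numpy(), s=10)
plt.title("t-SNE of penultimate-layer activations")
plt.show()
```

Colouring the points by the true class gives a qualitative picture of how well each model separates the classes.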

@jiangye-git
Author

I use t-SNE to plot the time series in a figure, and I want to compare how well different models separate the classes. The input to the model is multivariate time series, so will the feature_importance function work when the input is not extracted features?

Also, I'm confused about the functions in the explainability module; could you give me some guidance or an example?

Thank you so much for the reply!

@vrodriguezf
Contributor

> will the feature_importance function work when the input is not extracted features?

feature_importance is a method of the Learner class, so it will use the inputs the learner was trained on.
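
Once the learner is trained you can just call it directly; a minimal sketch (argument names and output may differ between tsai versions, so check the docs):

```python
# Permutation feature importance computed by the trained Learner itself.
# Assumes a trained tsai Learner `learn`.
learn.feature_importance()  # permutes each input variable and reports the drop in the metric
```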

@Victordmz

Victordmz commented Apr 18, 2023

If explainability is very important for you, you can try the XCM model, which focuses on that: it provides explanations for both its 1D and 2D CNNs, which means a granularity of, at most, one time series element per variable. Alternatively, this repository has modified some popular models for greater explainability, or you can try to do so yourself (e.g. visualise the 1D CNNs of InceptionTime with Grad-CAM, which I happen to have done). Of course, you can also calculate permutation or ablation feature importances for all models in tsai with the method mentioned above.
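
For illustration, a generic Grad-CAM for a 1D CNN looks roughly like this (plain PyTorch, not the code referenced above; `model`, `target_layer` and `x` are placeholders for your own network, one of its convolutional blocks, and a single input of shape `(1, n_vars, seq_len)`):

```python
# Generic Grad-CAM sketch for a 1D CNN classifier (placeholders: model, target_layer, x).
import torch
import torch.nn.functional as F

def grad_cam_1d(model, target_layer, x, class_idx=None):
    acts, grads = {}, {}

    def fwd_hook(module, inp, out):
        acts["a"] = out.detach()                      # activations of the target layer

    def bwd_hook(module, grad_in, grad_out):
        grads["g"] = grad_out[0].detach()             # gradients w.r.t. those activations

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.eval()
        logits = model(x)                             # (1, n_classes)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()               # backprop the chosen class score
    finally:
        h1.remove()
        h2.remove()

    a, g = acts["a"], grads["g"]                      # both (1, channels, seq_len')
    weights = g.mean(dim=2, keepdim=True)             # channel-wise importance weights
    cam = F.relu((weights * a).sum(dim=1))            # (1, seq_len')
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-1],
                        mode="linear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The normalised curve can then be plotted on top of the original series to see which time steps drove the prediction.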

@jiangye-git
Author

I tried the feature_importance function and got 5 features (var_0 to var_4). The dataset has 5 types of data, but var_2 and var_3 got 0 in the permutation. I guess the 5 features are what I want.

Also, how can I get the inputs the learner was trained on? I can't find the function definition in the files.

Thank you both so much for the support again!

@oguiza
Contributor

oguiza commented May 4, 2023

Hi @jiangye-git,
I'm sorry, but I don't understand. You provide the input (X) and the targets (y), but you are asking how to get the inputs ...
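
To illustrate, in a typical tsai workflow the inputs are the arrays you build and pass to the dataloaders yourself; a minimal sketch (the shapes and data are made up, and the function names follow the tsai tutorials, so double-check them against your version):

```python
# Typical tsai classification workflow; X and y are arrays you provide, so the
# "inputs the learner was trained on" are simply X[splits[0]].
import numpy as np
from tsai.all import *

X = np.random.randn(200, 5, 100).astype(np.float32)  # (samples, variables, time steps)
y = np.random.randint(0, 3, size=200)                 # class labels

splits = get_splits(y, valid_size=0.2)                # train/valid indices
dls = get_ts_dls(X, y, splits=splits, tfms=[None, TSClassification()])
learn = ts_learner(dls, InceptionTime, metrics=accuracy)
learn.fit_one_cycle(5)
```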

oguiza added the "under review" (Waiting for clarification, confirmation, etc) label on May 4, 2023