My 15 GB GPU crashes when making predictions. After further reading (link below), I found that the function get_X_preds reloads gigabytes of data to make predictions, when this could be done one batch at a time at a fraction of the memory cost. This may already have been fixed in fastai, but the fix hasn't made it into tsai.
Please read the attached forum for details.
Hello @lesego94,
I have resolved another problem (#695) that could potentially be responsible for the GPU memory spike. It would be great if you could install tsai from GitHub using:
pip install git+https://github.com/timeseriesAI/tsai.git
Hi Oguiza, I appreciate you looking into this. Unfortunately, it did not work. I also realized I made a mistake earlier: the memory spike is occurring in my CPU RAM, not on the GPU.
Let me give you some information about what I'm running. I'm running the PatchTST model notebook 15_PatchTST_a_new_transformer_for_LTSF.ipynb with my own dataset, and the model has 288,670 total parameters. The spike only occurs when I call get_X_preds.
https://forums.fast.ai/t/learn-get-preds-memory-inefficiency-quick-fix/84029
Does anyone know how to get around this issue, or how I could run predictions in batches instead?
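Until this is fixed upstream, one possible workaround is to bypass get_X_preds and run inference in fixed-size chunks yourself, so only one batch ever sits in memory at a time. Below is a minimal sketch in plain PyTorch; `predict_in_batches`, the stand-in model, and the tensor shapes are illustrative assumptions, not tsai's actual API:

```python
import torch

def predict_in_batches(model, X, batch_size=64, device="cpu"):
    """Run inference chunk by chunk so only one batch is resident at a time."""
    model.eval()
    outputs = []
    with torch.no_grad():
        for start in range(0, len(X), batch_size):
            batch = X[start:start + batch_size].to(device)
            # Move results back to CPU immediately to free device memory
            outputs.append(model(batch).cpu())
    return torch.cat(outputs)

# Tiny stand-in model for a [batch, n_vars, seq_len] time-series input
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(7 * 96, 1))
X = torch.randn(1000, 7, 96)  # 1000 samples, 7 variables, 96 time steps
preds = predict_in_batches(model, X, batch_size=128)
print(preds.shape)  # torch.Size([1000, 1])
```

With a real tsai Learner you would pass `learn.model` in place of the stand-in, and your prepared input tensor in place of `X`; the key point is that memory use is bounded by `batch_size` rather than by the size of the whole dataset.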