Laptop GPUs: tries to use the integrated GPU and, as expected, fails to load the model #54

Open
ledrose opened this issue Dec 29, 2022 · 3 comments

Comments


ledrose commented Dec 29, 2022

Problem:
When trying to generate an image on a laptop, Python picks the integrated GPU and fails to load the model.
Workaround found:
Open Windows Settings (Win+I), go to the Graphics settings, add the python.exe from Data/py to the list, and set it to use the discrete (high-performance) GPU by default.
Used device:

Windows 10
Laptop: MSI Alpha 15
Processor: AMD Ryzen™ 5 5600H
Integrated GPU: AMD Radeon Graphics, 512 MB VRAM
Discrete GPU: AMD RX 6600M, 8 GB VRAM

Exception from sd.txt log:

[00000077] [12-29-2022 11:06:25] Traceback (most recent call last):
[00000078] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\repo\sd_onnx\sd_onnx.py", line 64, in <module>
[00000079] [12-29-2022 11:06:25] pipe = OnnxStableDiffusionPipeline.from_pretrained(opt.mdlpath, provider=prov, safety_checker=None)
[00000080] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\venv\lib\site-packages\diffusers\pipeline_utils.py", line 709, in from_pretrained
[00000081] [12-29-2022 11:06:25] loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
[00000082] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\venv\lib\site-packages\diffusers\onnx_utils.py", line 206, in from_pretrained
[00000083] [12-29-2022 11:06:25] return cls._from_pretrained(
[00000084] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\venv\lib\site-packages\diffusers\onnx_utils.py", line 173, in _from_pretrained
[00000085] [12-29-2022 11:06:25] model = OnnxRuntimeModel.load_model(
[00000086] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\venv\lib\site-packages\diffusers\onnx_utils.py", line 78, in load_model
[00000087] [12-29-2022 11:06:25] return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
[00000088] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
[00000089] [12-29-2022 11:06:25] self._create_inference_session(providers, provider_options, disabled_optimizers)
[00000090] [12-29-2022 11:06:25] File "C:\SD-GUI-1.8.1\Data\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
[00000091] [12-29-2022 11:06:25] sess.initialize_session(providers, provider_options, disabled_optimizers)
[00000092] [12-29-2022 11:06:25] onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException
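
The RuntimeException is raised inside ort.InferenceSession, i.e. while the execution provider (presumably DirectML, given the AMD setup) initializes against whichever adapter it picks. A minimal diagnostic sketch, assuming a DirectML build of onnxruntime and using a placeholder model path, for checking what the installed build reports before loading the model:

import onnxruntime as ort

# Report the device type and the execution providers this build ships with;
# a DirectML build should list "DmlExecutionProvider".
print(ort.get_device())
print(ort.get_available_providers())

# Reproduce the call the traceback fails in, with the provider spelled out.
# "unet/model.onnx" is a placeholder, not the actual SD-GUI path.
sess = ort.InferenceSession(
    "unet/model.onnx",
    providers=["DmlExecutionProvider"],
)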

@n00mkrad
Owner

I have not yet figured out how to switch GPUs with ONNX... I'll look into it for the next update
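
For reference, onnxruntime's DirectML execution provider accepts a device_id provider option, so in principle the discrete adapter could be requested in code rather than through the Windows graphics settings. A hedged sketch, assuming a DirectML build, a placeholder model path, and that index 1 is the RX 6600M:

import onnxruntime as ort

# Ask the DirectML provider for a specific adapter instead of its default.
# The index (1 here) is an assumption; it depends on how Windows enumerates the GPUs.
sess = ort.InferenceSession(
    "unet/model.onnx",  # placeholder path
    providers=[("DmlExecutionProvider", {"device_id": 1})],
)

The diffusers version shown in the traceback only forwards a bare provider name through OnnxRuntimeModel.load_model, so wiring this into the GUI would mean patching onnx_utils.py (or equivalent) so provider options get passed down to InferenceSession.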


iga2iga commented Jan 14, 2023

I have both a 2070S and a 5700 XT installed together. Thanks for the solution! I had tried assigning StableDiffusionGui.exe to a different GPU in the Windows UI, and that did not work.


ForserX commented Jan 29, 2023

+1
