Issues: InternLM/xtuner
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
#772 · opened Jun 15, 2024 by Yanllan · updated Jun 15, 2024
"<image>" is absent in "llava_instruct_150k_zh.jsonl"
#771 · opened Jun 15, 2024 by wusize · updated Jun 15, 2024
Citation for OpenRLHF in relation to the XTuner RLHF code and architecture
#770 · opened Jun 14, 2024 by hijkzzz · updated Jun 15, 2024
How to perform validation during the fine-tune training process on llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune?
#749 · opened Jun 6, 2024 by J0eky · updated Jun 12, 2024
[Bug] Map with add_length will cause memory OOM
#758 · opened Jun 10, 2024 by tonysy · updated Jun 12, 2024
Errors of llava pretrain for phi3_mini_4k_instruct_clip_vit_large_p14_336
#713 · opened May 23, 2024 by JiamingLv · updated Jun 7, 2024
Sequence parallel is enabled even though I didn't turn it on.
#669 · opened May 9, 2024 by amulil · updated Jun 7, 2024
ImportError: Failed to import AutoModelForCausalLM from xtuner.model.transformers_models in None
#740 · opened Jun 2, 2024 by ldh127 · updated Jun 5, 2024
TypeError: object of type 'NoneType' has no len()
#741 · opened Jun 3, 2024 by puffanddmx · updated Jun 3, 2024
Running the command NPROC_PER_NODE=2 xtuner train /root/StableDiffusionGPT/config/internlm2_1_8b_qlora_alpaca_e3_copy.py --work-dir /root/test/ft/train --deepspeed deepspeed_zero2 reports an error
#719 · opened May 26, 2024 by LTtt456c · updated May 30, 2024