LazyMergeKit - Tensor model.final_layernorm.weight required but not present in model ... #49
This problem might come from the fact that Microsoft changed the architecture after Phi-2's release. The models that were fine-tuned earlier still use the old one. It might work if you find a copy of the old base model. See the difference in mergekit:
Thanks, I'll try it with the old version. But I somehow got the same error when attempting a passthrough merge between Phi-2 and DeepSeek, except there the error was for the DeepSeek model. Is it possible in general to merge LLMs with different architectures using passthrough? Is there a blog post where you already go into this that I haven't seen?
I just tried it again with amgadhasan/phi-2 as the base model, which should be the old Phi-2, but now I got this error: Do I need to change a setting in LazyMergeKit so it pulls the configuration for the old Phi-2?
Looks like you still don't have the same tensors in all of your models. You can quickly check the names of your layers on the model card by clicking on the arrow next to "safetensors".
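If you'd rather check tensor names from a script than from the model card widget, here is a minimal sketch using only the standard library. It relies on the safetensors on-disk layout (an 8-byte little-endian header length followed by a JSON header keyed by tensor name); the file it builds below is a tiny stand-in, not a real checkpoint, and the tensor name in it is just an example.

```python
import json, os, struct, tempfile

def read_safetensors_names(path):
    """Return the tensor names stored in a .safetensors file.

    The format begins with an 8-byte little-endian header length,
    followed by that many bytes of JSON mapping tensor names to
    metadata (dtype, shape, data offsets)."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

# Build a tiny stand-in file with one fp16 tensor so the helper can run
# without downloading a real checkpoint (the name here is illustrative).
header = {
    "model.final_layernorm.weight": {
        "dtype": "F16", "shape": [4], "data_offsets": [0, 8]
    }
}
blob = json.dumps(header).encode()
path = os.path.join(tempfile.mkdtemp(), "demo.safetensors")
with open(path, "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 8)

print(read_safetensors_names(path))
# → ['model.final_layernorm.weight']
```

Running this against each model you want to merge makes prefix mismatches (e.g. `model.` vs `transformer.`) obvious before mergekit fails.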
Thanks for the hint, I'll check that on my next attempt :)
I got it to work with another Phi-2 model. I think you were right: my merge models had different tensor names (transformer...) than Microsoft's Phi-2 (model...). I got it to merge, and now I'm trying to create GGUF files using your notebook. Do you see what the problem is here? I used the Colab from this blog article of yours:
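The prefix mismatch described above can also be worked around in code. This is a hypothetical sketch (the helper name and the sample dict are mine, not from the thread): remap keys that start with `transformer.` to the `model.` prefix that microsoft/phi-2 uses, so all checkpoints expose the same tensor names.

```python
def remap_keys(state_dict, old="transformer.", new="model."):
    """Rename tensors whose names start with `old` to use `new` instead,
    leaving all other keys (e.g. lm_head.weight) untouched."""
    return {
        (new + k[len(old):]) if k.startswith(old) else k: v
        for k, v in state_dict.items()
    }

# Toy weights standing in for a loaded state dict:
weights = {"transformer.final_layernorm.weight": [0.1], "lm_head.weight": [0.2]}
print(remap_keys(weights))
# → {'model.final_layernorm.weight': [0.1], 'lm_head.weight': [0.2]}
```

In practice you would load the real state dict, remap it, and save it back before pointing mergekit at the result.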
Cool! You should be able to make GGUF versions of the model. Once again, maybe a problem with the old architecture? I can't really help you with that, unfortunately.
Oh okay, I guess I'll try my luck by asking the people who made the GGUFs for DolphinPhi and PhiOrange how they did it. Thank you a lot for helping me troubleshoot :) Information for these kinds of tasks is still hard to find, so I really appreciate you answering my rookie questions :)
I got it working now thanks to some help from another friendly Hugging Face user. I had to use an older version of llama.cpp, run convert-hf-to-gguf.py first, and then quantize the result with quantize (https://huggingface.co/brittlewis12/phi-2-orange-GGUF/discussions/1). Thanks again for all your help. I finally have my first working merge now thanks to you :) I gave it a test run and so far I'm quite satisfied with the results. At least it doesn't seem to perform worse than the base models. Would you kindly do me the honor and run an eval on it for YALL?
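For anyone hitting the same wall, the workaround described above roughly amounts to the command sequence below. This is a sketch, not tested commands: the llama.cpp revision and all paths are placeholders, and the exact checkout that still handles the old Phi-2 layout is the part you have to find yourself (see the linked discussion).

```shell
# Sketch of the GGUF workaround (revision tag and paths are placeholders):
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout <older-revision-that-handles-old-phi-2>   # pin an older llama.cpp
make quantize                                          # build the quantize tool
# Convert the merged HF checkpoint to an f16 GGUF first...
python convert-hf-to-gguf.py /path/to/merged-model --outfile merge-f16.gguf
# ...then quantize it.
./quantize merge-f16.gguf merge-Q4_K_M.gguf Q4_K_M
```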
Haha, well done! Sure, running the eval now :)
Thank you :) I'm curious how it will score.
Congrats, new SOTA in terms of phi-2 fine-tunes: https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard 🎉 I just use LLM AutoEval. It took 2 hours and 18 minutes to evaluate Phiter on an RTX 3090.
Damn, I had a feeling it was good, but I didn't think it would outsmart both base models on all benchmarks and even outperform the phixtral models. Oh okay, so it doesn't take that much compute. Maybe I'll try running it myself sometime. I'm gonna close this issue now 😁
I credited you on my model card for helping me troubleshoot, I hope that's okay :)
Hi there, I'm trying to merge Phi-2 models using the following config:
```
MODEL_NAME = "..."
yaml_config = """
models:
  - model: microsoft/phi-2
    # no parameters necessary for base model
  - model: ...
    parameters:
      density: 0.5
      weight: 0.5
  - model: ...
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: microsoft/phi-2
parameters:
  normalize: true
dtype: float16
"""
```
but I get the following error:
RuntimeError: Tensor model.final_layernorm.weight required but not present in model rhysjones/phi-2-orange
I tried with lxuechen/phi-2-dpo before instead of phi-2-orange but got the same error.
I'm executing on Google Colab with a CPU runtime and trust_remote_code set to true.
Can someone help and tell me if I'm doing something wrong, or if it just doesn't work with Phi?
Here is the full log:
```
mergekit-yaml config.yaml merge --copy-tokenizer --allow-crimes --out-shard-size 1B --lazy-unpickle --trust-remote-code
Warmup loader cache: 0% 0/3 [00:00<?, ?it/s]
Fetching 10 files: 100% 10/10 [00:00<00:00, 9925.00it/s]
Warmup loader cache: 33% 1/3 [00:00<00:00, 5.18it/s]
Fetching 11 files: 100% 11/11 [00:00<00:00, 71977.14it/s]
Warmup loader cache: 67% 2/3 [00:00<00:00, 5.58it/s]
Fetching 10 files: 100% 10/10 [00:00<00:00, 31583.61it/s]
Warmup loader cache: 100% 3/3 [00:00<00:00, 5.69it/s]
0% 1/2720 [00:00<00:02, 1276.42it/s]
Traceback (most recent call last):
  File "/usr/local/bin/mergekit-yaml", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/content/mergekit/mergekit/options.py", line 76, in wrapper
    f(*args, **kwargs)
  File "/content/mergekit/mergekit/scripts/run_yaml.py", line 47, in main
    run_merge(
  File "/content/mergekit/mergekit/merge.py", line 90, in run_merge
    for _task, value in exec.run():
  File "/content/mergekit/mergekit/graph.py", line 191, in run
    res = task.execute(**arguments)
  File "/content/mergekit/mergekit/io/tasks.py", line 73, in execute
    raise RuntimeError(
RuntimeError: Tensor model.final_layernorm.weight required but not present in model rhysjones/phi-2-orange
```