Description
What happened?
When trying to convert this GGML model from Hugging Face to GGUF, the script encountered an error in this function, but while raising the ValueError it hit a second exception.
How I called the Python script:
python convert_llama_ggml_to_gguf.py --input models/bigtrans-13b.ggmlv3.q6_K --output q6_K
As the traceback shows, an input with the wrong data type (an int instead of GGMLQuantizationType) was passed to this function, so accessing quant_type.name while building the error message raised an AttributeError. I fixed this issue in #8928
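A minimal sketch of the failure mode, with a simplified stand-in for gguf's GGMLQuantizationType enum and an assumed, hypothetical type size (the enum value and size here are illustrative, not the library's actual definitions): when a raw int is passed instead of the enum member, the ValueError cannot even be constructed because int has no .name attribute.

```python
from enum import IntEnum

# Simplified stand-in for the gguf enum (assumed, not the real definition)
class GGMLQuantizationType(IntEnum):
    Q6_K = 18

def quant_shape_from_byte_shape(shape, quant_type):
    # Hypothetical block size used only for illustration
    type_size = 210
    if shape[-1] % type_size != 0:
        # With a plain int, quant_type.name raises AttributeError here,
        # masking the ValueError that was meant to be raised
        raise ValueError(
            f"Quantized tensor bytes per row ({shape[-1]}) is not a multiple of "
            f"{quant_type.name} type size ({type_size})"
        )
    return shape

# Fix: coerce the raw int to the enum member before passing it along
quant_type = GGMLQuantizationType(18)
assert quant_type.name == "Q6_K"  # a bare int(18) has no .name attribute
```

Coercing with GGMLQuantizationType(value) both validates the integer against the known quantization types and restores the .name attribute the error message depends on.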
Name and Version
version: 3535 (1e6f655)
What operating system are you seeing the problem on?
Linux
Relevant log output
line 22, in quant_shape_from_byte_shape
raise ValueError(f"Quantized tensor bytes per row ({shape[-1]}) is not a multiple of {quant_type.name} type size ({type_size})")
^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'name'