
Bug: exception while raising another exception in convert_llama_ggml_to_gguf script #8929

Closed
@farbodbj

Description


What happened?

When trying to convert this GGML model from Hugging Face to GGUF, the script hit an error in this function, but while raising the ValueError it encountered a second exception.

How I called the Python script:

python convert_llama_ggml_to_gguf.py --input models/bigtrans-13b.ggmlv3.q6_K --output q6_K

As the traceback shows, a value of the wrong type (an int instead of a GGMLQuantizationType) was passed to this function, so the f-string in the error message fails when it accesses .name. I fixed this issue in #8928.
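For illustration, here is a minimal sketch of the failure mode. QuantType and describe are hypothetical stand-ins for gguf's GGMLQuantizationType enum and the failing error-message f-string, not code from the repository:

```python
from enum import IntEnum

class QuantType(IntEnum):
    # Illustrative member; the real enum lives in gguf-py.
    Q6_K = 14

def describe(quant_type):
    # Mirrors the failing line: the f-string accesses .name on its argument.
    return f"type size for {quant_type.name}"

# Passing the enum member works, because IntEnum members have a .name attribute.
print(describe(QuantType.Q6_K))

# Passing a bare int (as happened in the bug) raises AttributeError
# *while building the error message*, masking the original ValueError.
try:
    describe(14)
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'name'
```

This is why the traceback below ends in AttributeError rather than the intended ValueError.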

Name and Version

version: 3535 (1e6f655)

What operating system are you seeing the problem on?

Linux

Relevant log output

line 22, in quant_shape_from_byte_shape
    raise ValueError(f"Quantized tensor bytes per row ({shape[-1]}) is not a multiple of {quant_type.name} type size ({type_size})")
                                                                                          ^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'name'
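One way to guard against this (a sketch only; safe_name and QuantType are hypothetical names, and this is not necessarily the approach taken in #8928) is to coerce a raw int back into the enum before formatting the message:

```python
from enum import IntEnum

class QuantType(IntEnum):
    # Illustrative stand-in for gguf's GGMLQuantizationType.
    Q6_K = 14

def safe_name(qt):
    # Coerce a raw int to the enum so .name is always available.
    if not isinstance(qt, QuantType):
        qt = QuantType(qt)
    return qt.name

print(safe_name(14))             # coerced from int
print(safe_name(QuantType.Q6_K)) # already an enum member
```

With the coercion in place, the intended ValueError message can be formatted regardless of whether the caller passed the enum or its integer value.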

Labels

bug-unconfirmed, low severity
