Description
We have released and open-sourced the Aquila 7B series, including AquilaChat-7B (https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila/Aquila-chat) and Aquila-7B (https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila/Aquila-pretrain), which support both Chinese and English.
The model architecture is almost the same as LLaMA's, except that a GPT-2-like BPE tokenizer is used instead of SentencePiece.
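
For context, here is a minimal sketch of the tokenizer difference, assuming the model is published on Hugging Face under the id `BAAI/Aquila-7B` and loads via `AutoTokenizer` (both assumptions on our side):

```python
# Minimal sketch: inspect the Aquila tokenizer's output to see why llama.cpp's
# SentencePiece-based vocab loading would not apply directly.
from transformers import AutoTokenizer

# trust_remote_code may be needed if the repo ships a custom tokenizer class
# (an assumption).
tok = AutoTokenizer.from_pretrained("BAAI/Aquila-7B", trust_remote_code=True)

text = "Hello world 你好"
ids = tok.encode(text)
print(ids)
print(tok.convert_ids_to_tokens(ids))
# A GPT-2-like byte-level BPE emits tokens such as "Ġworld" (leading space
# encoded as a byte marker), whereas LLaMA's SentencePiece emits pieces like
# "▁world", so llama.cpp would need a separate code path for this vocabulary.
```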
Could the llama.cpp repo add support for our Aquila 7B models, and how should it be adapted for the BPE tokenizer? Thanks very much.