Description
Ever since #3228, completion requests to the server example occasionally return a long run of consecutive colons before the readable response, and sometimes the output is almost exclusively colons. For example:
{"content": "::::::::::::::::::::::::: Hello, I'm an AI created by ChatBot. How can I assist you today?"}
{"content": "::::::::::::::::?"}
I've tested on a range of models (Mythomax 13B, Mythomax Kimiko 13B, Luna 7B, MlewdBoros 13B, Synthia 7B) and get the same results. I can reproduce it by repeatedly sending this body to the server:
{"n_predict":256,"prompt":"Text transcript of a never-ending conversation between User and Assistant.\n\n#User: hi there\n#Assistant:", "stop":["\n#","\nUser:","\nuser:","\n["]}
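For reference, here is a small Python sketch of the reproduction loop I use; the `http://localhost:8080/completion` URL reflects my local setup and is an assumption, so adjust it for yours:

```python
import json
import urllib.request

# The exact request body from above, as a Python dict:
payload = {
    "n_predict": 256,
    "prompt": ("Text transcript of a never-ending conversation between "
               "User and Assistant.\n\n#User: hi there\n#Assistant:"),
    "stop": ["\n#", "\nUser:", "\nuser:", "\n["],
}

def send_once(url="http://localhost:8080/completion"):
    # POST the body once; URL and port are assumptions, not from the report.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `send_once()` in a loop and checking each response's `content` for a leading run of colons is how I measured the rough 1-in-5-to-10 failure rate below.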
It does not happen on every response (roughly 1 in 5 to 10 responses is affected), but often enough to be distracting and to make me wonder whether I'm doing something wrong. I know the `repeat_penalty` and `logit_bias` fields should help here, but in my testing neither has any effect on the problem, and neither was explicitly needed before the aforementioned PR.
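For completeness, this is the kind of body I tried when testing those two fields. The values are just what I experimented with, not recommendations, and the token id 29901 is what the LLaMA-family tokenizer gives me for ":"; verify it for your model before relying on it:

```python
import json

# The reproduction body with the two sampling fields added.
# repeat_penalty value and the ":" token id (29901) are assumptions
# from my own testing, not values taken from any documentation.
body = {
    "n_predict": 256,
    "prompt": ("Text transcript of a never-ending conversation between "
               "User and Assistant.\n\n#User: hi there\n#Assistant:"),
    "stop": ["\n#", "\nUser:", "\nuser:", "\n["],
    "repeat_penalty": 1.1,
    "logit_bias": [[29901, -100]],  # strongly discourage ":" (token id assumed)
}
print(json.dumps(body))
```

Even with variations on these values, the colon runs still appeared at about the same rate for me.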
I'm running on an M1 Max chip and writing this as of commit 9f6ede1.
Does anyone have any insights into how I could fix this or if this is perhaps a bug in the server example?