After making the change to fix the missing default for llama_cpp per the PR:
In packages/llm/llama_cpp/config.py, append default=True to the llama_cpp declaration:
llama_cpp('0.3.8', default=True),
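For context, that declaration sits inside the package list in config.py; the surrounding lines below are approximate (the only actual change is the added default=True argument):

# packages/llm/llama_cpp/config.py -- surrounding structure approximate
package = [
    llama_cpp('0.3.8', default=True),  # added default=True so a default variant exists
    # ...any other pinned versions in the file stay as they are...
]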
I performed the following steps to try again for a working text-generation-webui:
$ sudo docker rmi $(sudo docker image ls -aq)
$ git clone https://github.com/dusty-nv/jetson-containers
$ bash jetson-containers/install.sh
$ jetson-containers build text-generation-webui
The build progressed noticeably further this time, but it ultimately failed with the following error:
RuntimeError: operator torchvision::nms does not exist
[22:14:25] Failed building: text-generation-webui
Traceback (most recent call last):
  File "/ssd/projects/jetson-containers/jetson_containers/build.py", line 129, in <module>
    build_container(**vars(args))
  File "/ssd/projects/jetson-containers/jetson_containers/container.py", line 244, in build_container
    test_container(container_name, pkg, simulate)
  File "/ssd/projects/jetson-containers/jetson_containers/container.py", line 431, in test_container
    status = subprocess.run(cmd.replace(_NEWLINE_, ' '), executable='/bin/bash', shell=True, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'docker run -t --rm --network=host --runtime=nvidia --volume /ssd/projects/jetson-containers/packages/pytorch/torchvision:/test --volume /ssd/projects/jetson-containers/data:/data --workdir /test text-generation-webui:r36.4-cu126-22.04-torchvision /bin/bash -c 'python3 test.py' 2>&1 | tee /ssd/projects/jetson-containers/logs/20250511_220203/test/text-generation-webui_r36.4-cu126-22.04-torchvision_test.py.txt; exit ${PIPESTATUS[0]}' returned non-zero exit status 1.
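In case it points anyone in the right direction: "operator torchvision::nms does not exist" usually means torchvision's compiled C++ ops were built against a different torch than the one installed in the image, so the extension never registers. Here is a small diagnostic I could run inside the failing image to confirm that (a sketch, not part of the repo; the image tag is taken from the log above):

# torchvision_check.py -- hypothetical diagnostic, not part of jetson-containers.
# Prints the installed versions, then exercises the compiled nms op directly;
# the call raises the same "operator torchvision::nms does not exist"
# RuntimeError when torchvision was built against a different torch.
import torch
import torchvision
from torchvision.ops import nms

print('torch:', torch.__version__, 'cuda:', torch.version.cuda)
print('torchvision:', torchvision.__version__)

boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]])
scores = torch.tensor([0.9, 0.8])
print('nms keep indices:', nms(boxes, scores, iou_threshold=0.5))

It could be run against the same image the test used, e.g.:

$ sudo docker run -it --rm --runtime=nvidia --volume $(pwd):/test --workdir /test text-generation-webui:r36.4-cu126-22.04-torchvision python3 torchvision_check.py

If the printed versions are not a compatible pair, the fix is presumably getting torchvision rebuilt against the torch in the base image rather than anything in the webui layer itself.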
Looking for further guidance on getting a working version of text-generation-webui on my Nano. Thanks.
Regards.