Description
I can run a CPU LLM on a server with llama.cpp,
and, with oobabooga, one or two models on the GPUs, each on a different OpenAI-compatible port
such as :5000, :5001, :5002.
Would it be possible to add more API bots to the configuration? Today I use the ChatGPT bot with the API configuration and change it to point to a local API. This could also be a locally running GPT4All instance.
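For reference, this is roughly what that workaround looks like from the client side. A minimal sketch, assuming a local llama.cpp or oobabooga server that exposes an OpenAI-compatible endpoint; the IP, port, and model name are placeholders, not values from this project:

```python
# Minimal sketch: pointing the standard OpenAI client at a local
# OpenAI-compatible server (e.g. llama.cpp's server or oobabooga's
# OpenAI extension). IP, port, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.10:5000/v1",  # local server instead of api.openai.com
    api_key="sk-not-needed",                 # most local servers ignore the key
)

reply = client.chat.completions.create(
    model="local-model",  # llama.cpp often ignores this; oobabooga serves the loaded model
    messages=[{"role": "user", "content": "Hello from a local bot"}],
)
print(reply.choices[0].message.content)
```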
More API Bots 👍
BOT-A http://Local-IP:Port/v1
BOT-B http://Local-IP:Port/v1
BOT-C http://Local-IP:Port/v1
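A hypothetical sketch of what such a multi-bot configuration could map to: the bot names, addresses, and the one-client-per-endpoint approach are assumptions for illustration, not the project's actual design or config format.

```python
# Hypothetical sketch of the requested feature: several API bots, each
# pointing at a different local OpenAI-compatible endpoint. Names and
# addresses below are illustrative only.
from openai import OpenAI

bots = {
    "BOT-A": "http://192.168.1.10:5000/v1",  # llama.cpp on CPU
    "BOT-B": "http://192.168.1.10:5001/v1",  # oobabooga model on GPU 0
    "BOT-C": "http://192.168.1.10:5002/v1",  # oobabooga model on GPU 1
}

# One client per bot; the api_key is a dummy value since local servers rarely check it.
clients = {name: OpenAI(base_url=url, api_key="none") for name, url in bots.items()}
```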