add more API LLMs #36

@zwilch

Description

I can run a CPU LLM on my server with llama.cpp, and with oobabooga possibly one or two more models on the GPUs, each on its own OpenAI-compatible port,
such as :5000, :5001, :5002.
Would it be possible to add several API bots in the configuration? Today I use the ChatGPT bots with the API configuration and point them at a local API instead. This could also run GPT4All locally.

More API Bots 👍
BOT-A http://Local-IP:Port/v1
BOT-B http://Local-IP:Port/v1
BOT-C http://Local-IP:Port/v1
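A request against any of these endpoints could be sketched as below. This is a minimal illustration using only the Python standard library; the bot names, loopback ports, and model name are assumptions for the example, not values from the actual configuration (most local OpenAI-compatible servers accept an arbitrary `model` field):

```python
import json

# Hypothetical bot registry: each bot maps to a local
# OpenAI-compatible base URL (ports are examples only).
BOTS = {
    "BOT-A": "http://127.0.0.1:5000/v1",
    "BOT-B": "http://127.0.0.1:5001/v1",
    "BOT-C": "http://127.0.0.1:5002/v1",
}

def chat_request(bot: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a /chat/completions
    call against the selected bot's base URL."""
    url = BOTS[bot] + "/chat/completions"
    body = json.dumps({
        # Local servers such as llama.cpp typically accept any
        # model name here; "local-model" is a placeholder.
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

url, body = chat_request("BOT-A", "Hello")
print(url)  # http://127.0.0.1:5000/v1/chat/completions
```

Switching bots then only means selecting a different base URL, which is exactly what a multi-bot API configuration would make possible without editing the single ChatGPT API entry each time.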
