Prerequisites
Before you begin, make sure you have Docker installed and a running Bytebase instance.
Get Llama3 running in Docker
Run the following command in the terminal to get a Docker container running:
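For example, with the official ollama/ollama image (the container name and model tag below are illustrative defaults), you can start Ollama and pull Llama3 like this:

```bash
# Start the Ollama server in Docker, exposing its default port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and run the Llama3 model inside the running container
docker exec -it ollama ollama run llama3
```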
In this tutorial, Llama3 is renamed to gpt-3.5-turbo (mapped in One API). After renaming, the model name is gpt-3.5-turbo, but it is still Llama3 underneath.
Configure One API
Choose a directory with read and write permissions (replace YOUR_PATH in the following command) to save data and logs. For example, you can use the pwd command in the macOS terminal to view the current path and replace YOUR_PATH with it.
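A typical command, assuming the justsong/one-api Docker image from the One API project and its default port 3000, looks like this:

```bash
# Run One API, persisting its data and logs under YOUR_PATH
docker run --name one-api -d --restart always \
  -p 3000:3000 \
  -v YOUR_PATH/one-api:/data \
  justsong/one-api
```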

Visit localhost:3000 to log in to the One API dashboard. The initial account username is root, and the password is 123456.
Configure Channel
Enter the channel page and select Add a new channel. Fill in the model information:
- Type: ollama
- Name: Llama3
- Group: default
- Model: gpt-3.5-turbo
- Key: anything (for example SSSS|sssss|1111) in the format APPID|APISecret|APIKey if ollama has not been configured with a key
- Proxy: the address of the ollama container, http://host.docker.internal:11434
Configure API Keys
In the API keys page, click Add New Token, and fill in the Name (for example Llama3) and Model scope (for example gpt-3.5-turbo).
After clicking Submit, you will see the new API key in the My keys list on the API keys page. Click Copy to get a token starting with sk-, with which you can replace YOUR_TOKEN in the code below. If the code runs successfully in your terminal, the One API configuration is complete.
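For example, assuming One API exposes the standard OpenAI-compatible chat completions endpoint at localhost:3000, a quick test could look like this:

```bash
# Replace YOUR_TOKEN with the sk- token copied from One API
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, who are you?"}]
  }'
```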
Configure Bytebase and run
In Bytebase Workspace, go to Settings -> General and scroll down to the AI Assistant section. Fill the YOUR_TOKEN we generated in One API into the OpenAI API Key field, fill the OpenAI API Endpoint field with http://localhost:3000, and click Update.

