Local LLM

You can run VisioPilot against a local LLM server at no cost, which is an easy way to get started. This page explains how to set up a local LLM server and configure VisioPilot to use it.

1. Install a local LLM server, such as Ollama* or LM Studio.
2. Sign in to VisioPilot, then click the user account icon in the header of the VisioPilot widget.
3. Click the "Configure services" button.
4. Enter the URL of your local LLM server in the "LLM Local API" field. For example:
http://localhost:1234/v1
5. You can now select your local models from the model dropdown menus. (A quick way to confirm that VisioPilot can reach the server is shown after this list.)
6. Click the "Save" button to save your configuration.
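
If the model dropdowns stay empty, you can check whether the local server is reachable by querying its models endpoint. The command below is a minimal sketch and assumes the LM Studio default URL from the example above; Ollama's OpenAI-compatible endpoint usually listens on http://localhost:11434/v1 instead.

curl http://localhost:1234/v1/models

The response lists the models installed on the server.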

From now on, VisioPilot will use your local LLM server during the 'CompleteChat' step. Note that if your server becomes unavailable, VisioPilot will not automatically fall back to the cloud LLM server; you will need to switch back manually in the 'Configure services' section.
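
For reference, both LM Studio and Ollama expose an OpenAI-compatible chat completions endpoint, so a request of the kind sent during the 'CompleteChat' step looks roughly like the sketch below. The model name is a placeholder; use whichever model appears in your dropdown, and adjust the URL if your server runs on a different port.

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.1-8b-instruct", "messages": [{"role": "user", "content": "Hello"}]}'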

* Ollama should be started with the OLLAMA_ORIGINS environment variable set so that it accepts requests from the VisioPilot browser extension. For example: OLLAMA_ORIGINS=* ollama serve
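
If Ollama runs as a background service or desktop app rather than from a terminal, the variable has to be set where that process can see it. A rough sketch, with the exact commands depending on your platform and how Ollama was installed:

# Linux/macOS, started from a terminal
OLLAMA_ORIGINS=* ollama serve
# macOS, when using the Ollama desktop app (then restart Ollama)
launchctl setenv OLLAMA_ORIGINS "*"
# Windows (then restart Ollama)
setx OLLAMA_ORIGINS "*"

The wildcard allows requests from any origin; you can set a more restrictive value if you prefer.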