Bring Your Own LLM Model

By default, when you ask Navie a question, your code editor interacts with the AppMap hosted proxy for OpenAI. If you need to bring your own key or otherwise use your own OpenAI account, you can specify your own OpenAI API key; Navie will then connect to OpenAI directly, without the AppMap proxy acting as an intermediary.

For use with Navie AI, AppMap recommends avoiding models that do not support chat mode.

| Navie AI Backend LLM | Ease of Setup | Quality of Responses |
| --- | --- | --- |
| AppMap OpenAI Proxy (default) | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Self Managed OpenAI API Key | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Azure Hosted OpenAI | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Anyscale Hosted Mixtral-8x7B | ⭐⭐⭐ | ⭐⭐⭐ |
| Locally Hosted Mixtral-8x7B-Instruct-v0.1 | ⭐⭐ | ⭐⭐⭐ |
| Codellama/Codeqwen | ❌ Not Supported | ❌ Not Supported |

Bring Your Own OpenAI API Key (BYOK)

By default, Navie AI uses the AppMap hosted proxy with an AppMap managed OpenAI API key. If you need to use your own OpenAI API key, you can configure it within AppMap; all Navie requests will then interact with your own OpenAI account.

Configuring Your OpenAI Key

In your code editor, open the Navie Chat window. If the model displays (default), Navie is configured to use the AppMap hosted OpenAI proxy. Click the gear icon at the top of the Navie Chat window to change the model.

[Screenshot: Navie configuration gear]

In the modal, select the option to Use your own OpenAI API key.

[Screenshot: Use your own key modal]

After you enter your OpenAI API key, hit Enter; your code editor will prompt you to reload.

In VS Code: [Screenshot: VS Code popup to store API Key]

In JetBrains: [Screenshot: JetBrains popup to store API Key]

NOTE: Instead of using the gear icon in the Navie Chat window, you can also store your API key in an environment variable, as described in the Configuration section below.

After your code editor reloads, open the Navie Chat window to confirm that your requests are being routed directly to OpenAI: it will list the model OpenAI and the location, in this case via OpenAI.

[Screenshot: OpenAI location]

Modify which OpenAI Model to use

AppMap generally uses the latest OpenAI models by default. If you want to use an alternative model such as gpt-3.5, or a preview model such as gpt-4-vision-preview, set the APPMAP_NAVIE_MODEL environment variable after configuring your own OpenAI API key.
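
If you launch your code editor from a terminal, one way to set the variable is to export it in that shell and then start the editor from the same shell so it inherits the value. A minimal sketch for macOS/Linux, with placeholder values:

    # Placeholder values: substitute your own key and any OpenAI model you have access to
    export OPENAI_API_KEY=your-api-key
    export APPMAP_NAVIE_MODEL=gpt-4o

Alternatively, set the variable in your editor's AppMap environment settings, as described in the configuration sections below.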

After setting APPMAP_NAVIE_MODEL to your chosen model, reload or restart your code editor, then confirm the configuration by opening a new Navie Chat window. In this example, I've configured the model to be gpt-4o with my personal OpenAI API key.

[Screenshot: JetBrains OpenAI key modal]

Reset Navie AI to use Default Navie Backend

At any time, you can unset your OpenAI API key and revert to using the AppMap hosted OpenAI proxy. Select the gear icon in the Navie Chat window and select Use Navie Backend in the modal.

Bring Your Own Model (BYOM)

This feature is in early access. We currently recommend GPT-4 Turbo from OpenAI (via OpenAI or Microsoft Azure) and Mixtral-8x7B-Instruct-v0.1. Refer to the AppMap Recommended Models documentation for more information.

Another option is to use a different LLM entirely; you can use any OpenAI-compatible model running either locally or remotely. When configured like this, as in the BYOK case, Navie won’t contact the AppMap hosted proxy and your conversations will stay private between you and the model.

Configuration

To configure Navie to use your own LLM, you need to set certain environment variables for the AppMap services.

You can use the following variables to direct Navie to use any LLM with an OpenAI-compatible API. If only the API key is set, Navie will connect to OpenAI.com by default.

  • OPENAI_API_KEY — API key to use with OpenAI API.
  • OPENAI_BASE_URL — base URL for OpenAI API (defaults to the OpenAI.com endpoint).
  • APPMAP_NAVIE_MODEL — name of the model to use (the default is GPT-4).
  • APPMAP_NAVIE_TOKEN_LIMIT — maximum context size in tokens (default 8000).
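
For example, to point Navie at a locally hosted model served through an OpenAI-compatible API, you might set the variables as follows. This is a sketch assuming an Ollama-style server on localhost; the endpoint, model name, and token limit are illustrative and must match your own setup:

    # Illustrative values for a local OpenAI-compatible server
    export OPENAI_API_KEY=dummy                       # many local servers accept any non-empty key
    export OPENAI_BASE_URL=http://localhost:11434/v1  # your server's OpenAI-compatible endpoint
    export APPMAP_NAVIE_MODEL=mixtral                 # a model your server actually serves
    export APPMAP_NAVIE_TOKEN_LIMIT=16000             # match the model's context window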

For Azure OpenAI, you need to create a deployment and use these variables instead:

  • AZURE_OPENAI_API_KEY — API key to use with Azure OpenAI API.
  • AZURE_OPENAI_API_VERSION — API version to use when communicating with Azure OpenAI, e.g. 2024-02-01.
  • AZURE_OPENAI_API_INSTANCE_NAME — Azure OpenAI instance name (i.e. the part of the URL before openai.azure.com).
  • AZURE_OPENAI_API_DEPLOYMENT_NAME — Azure OpenAI deployment name.
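
A corresponding sketch for Azure; every value is a placeholder for your own deployment details:

    # Placeholder values for an Azure OpenAI deployment
    export AZURE_OPENAI_API_KEY=your-azure-key
    export AZURE_OPENAI_API_VERSION=2024-02-01
    export AZURE_OPENAI_API_INSTANCE_NAME=my-instance            # as in https://my-instance.openai.azure.com
    export AZURE_OPENAI_API_DEPLOYMENT_NAME=my-gpt4-deployment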


Configuring in JetBrains

In JetBrains, go to settings.

[Screenshot: the JetBrains menu]

Go to Tools → AppMap.

[Screenshot: the AppMap settings in JetBrains]

Enter the environment editor.

[Screenshot: entering the AppMap environment editor in JetBrains]

Use the editor to define the relevant environment variables according to the BYOM documentation.

[Screenshot: the environment editor in JetBrains]
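
For example, for the direct-OpenAI case you would enter name/value pairs like these (both values are placeholders):

    OPENAI_API_KEY       your-api-key
    APPMAP_NAVIE_MODEL   gpt-4o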

Reload your IDE for the changes to take effect.

After reloading you can confirm the model is configured correctly in the Navie Chat window.

Configuring in VS Code

Editing AppMap services environment

In VS Code, go to settings.

[Screenshot: the Visual Studio Code menu]

Search for “appmap environment” to reveal the “AppMap: Command Line Environment” setting.

[Screenshot: the AppMap: Command Line Environment settings section]

Use Add Item to define the relevant environment variables according to the BYOM documentation.

[Screenshot: an example of a bring-your-own-model key/value entry]
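
Each variable is one key/value entry. For example, to use the hypothetical local server from the Configuration section above (values are illustrative):

    Item: OPENAI_API_KEY       Value: dummy
    Item: OPENAI_BASE_URL      Value: http://localhost:11434/v1
    Item: APPMAP_NAVIE_MODEL   Value: mixtral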

Reload VS Code for the changes to take effect.

After reloading you can confirm the model is configured correctly in the Navie Chat window.

Examples

Refer to the Navie Reference Guide for detailed examples of using Navie with your own LLM backend.

