## Setting up local development

### Set up your local GitLab instance
- Configure GDK to set up Duo features in the local environment.
- For GitLab Rails, enable the `ai_custom_model` feature flag: `Feature.enable(:ai_custom_model)`.
- For AI Gateway:
  - Set `AIGW_CUSTOM_MODELS__ENABLED=True`.
  - Set `AIGW_AUTH__BYPASS_EXTERNAL=False` or `AIGW_GITLAB_URL=<your-gitlab-instance>`.
- Run the `gitlab:duo:verify_self_hosted_setup` task to verify the setup. A quick Rails console check is also sketched after this list.
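To confirm the Rails side is configured, a minimal console sketch (it relies only on the standard `Feature` API and the `AI_GATEWAY_URL` variable referenced later on this page) looks like this:

```ruby
# From the GitLab Rails console (`bundle exec rails console` in the gitlab directory)
Feature.enabled?(:ai_custom_model) # => true once the flag is enabled
ENV['AI_GATEWAY_URL']              # should point at your local AI Gateway, for example http://localhost:5052
```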
### Configure self-hosted models
- Follow the instructions to configure self-hosted models.
- Follow the instructions to configure features to use the models.

AI-powered features are now powered by self-hosted models.
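For local experimentation, the same outcome can be sketched from the Rails console. The attribute names and enum values below are assumptions about the current schema and may differ in your GitLab version; the configuration flow described in the instructions above is the supported path.

```ruby
# Hypothetical console sketch: attribute names and enum values are assumptions,
# verify them against your GitLab version before relying on this.
model = Ai::SelfHostedModel.create!(
  name: 'codestral-local',
  model: :codestral,
  endpoint: 'http://localhost:4000' # for example, a local LiteLLM proxy
)

# Point a Duo feature at the self-hosted model instead of the AI vendor.
Ai::FeatureSetting.create!(
  feature: :duo_chat,
  provider: :self_hosted,
  self_hosted_model: model
)
```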
### Configure features to use AI vendor models
Now that features can be configured to use either self-hosted models or AI vendor models, customers no longer need to set `CLOUD_CONNECTOR_SELF_SIGN_TOKENS`. However, this makes it harder for developers to configure features to use AI vendor models, because in development we still want to send all requests to the local AI Gateway instead of going through Cloud Connector. Setting `CLOUD_CONNECTOR_BASE_URL` is not sufficient, because the `/ai` suffix is appended to it.
Currently, there are the following workarounds:

- Verify that `CLOUD_CONNECTOR_SELF_SIGN_TOKENS=1` is set.
- Remove the `ai_feature_settings` record responsible for the configuration, to fall back to using `AI_GATEWAY_URL` as the Cloud Connector URL:

  ```ruby
  Ai::FeatureSetting.find_by(feature: :duo_chat).destroy!
  ```
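To confirm the fallback took effect, a short console check (a sketch reusing only the calls and variables already mentioned above) looks like this:

```ruby
# With the record removed, requests for Duo Chat fall back to AI_GATEWAY_URL.
Ai::FeatureSetting.find_by(feature: :duo_chat) # => nil
ENV['CLOUD_CONNECTOR_SELF_SIGN_TOKENS']        # => "1"
```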
### Testing
To comprehensively test that a feature using custom models works as expected, you must write system specs. This is required because, unlike unit tests, system specs exercise all the components involved in the custom models stack: for example, Puma, Workhorse, and AI Gateway with an LLM mock server.

To write a new system test and run it successfully, the following prerequisites apply:
- AI Gateway must be running (usually on port `5052`), and you must configure the `AI_GATEWAY_URL` environment variable:

  ```shell
  export AI_GATEWAY_URL="http://localhost:5052"
  ```
- We use the LiteLLM proxy to return mock responses. You must configure LiteLLM to return mock responses using a configuration file:

  ```yaml
  # config.yaml
  model_list:
    - model_name: codestral
      litellm_params:
        model: ollama/codestral
        mock_response: "Mock response from codestral"
  ```
- The LiteLLM proxy must be running (usually on port `4000`), and you must configure the `LITELLM_PROXY_URL` environment variable:

  ```shell
  litellm --config config.yaml
  export LITELLM_PROXY_URL="http://localhost:4000"
  ```
- You must tag the RSpec file with `requires_custom_models_setup`.
For an example, see `ee/spec/features/custom_models/code_suggestions_spec.rb`. In this file, we test that the code completions feature uses a self-hosted `codestral` model.
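The overall shape of such a spec is sketched below. Everything except the `requires_custom_models_setup` tag (the describe text, factory, and assertions) is an illustrative assumption rather than the contents of the real spec file:

```ruby
# Illustrative skeleton of a custom models system spec; names other than the
# :requires_custom_models_setup tag are assumptions, not the real spec contents.
require 'spec_helper'

RSpec.describe 'Code completions with a self-hosted model', :js, :requires_custom_models_setup do
  let_it_be(:user) { create(:user) }

  before do
    sign_in(user)
    # Configure the feature to use the mocked codestral model served by LiteLLM.
  end

  it 'returns the mocked completion' do
    # Trigger a code completion request and expect the LiteLLM mock response,
    # for example "Mock response from codestral".
  end
end
```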
#### Testing on CI
On CI, AI Gateway and the LiteLLM proxy are already configured to run for all tests tagged with `requires_custom_models_setup`.

However, if your specs test features that use models LiteLLM is not yet configured to mock, you must also update the LiteLLM configuration. The configuration for LiteLLM is in `.gitlab/ci/global.gitlab-ci.yml`.