- Instructions for setting up GitLab Duo features in the local development environment
  - Required: Install AI Gateway
  - Required: Set up licenses in GitLab-Rails
  - Required: Enable feature flags in GitLab-Rails
  - Recommended: Run the `gitlab:duo:setup` Rake task to prepare the environment
  - Recommended: Set the `CLOUD_CONNECTOR_SELF_SIGN_TOKENS` environment variable
  - Option A: Run GDK in SaaS mode and enable AI features for a test group
  - Option B: Run GDK in Self-Managed mode and enable AI features for the instance
  - Recommended: Test clients in Rails console
  - Optional: Enable authentication and authorization in AI Gateway
  - Help
- Tips for local development
- Feature development (Abstraction Layer)
- How to implement a new action
- How to migrate an existing action to the AI Gateway
- Monitoring
- Security
# AI features based on 3rd-party integrations

## Instructions for setting up GitLab Duo features in the local development environment

### Required: Install AI Gateway
Why: Certain AI operations are provided by the AI Gateway only, such as text completion, embeddings, and semantic search.

How: Follow these instructions to install the AI Gateway with GDK. We recommend this route for most users.

You can also install the AI Gateway by other means, but we only recommend this for users who know what they are doing.
### Required: Set up licenses in GitLab-Rails

Why: GitLab Duo is available only to Premium and Ultimate customers. You likely want an Ultimate license for your GDK, because Ultimate gives you access to all GitLab Duo features.

How:

1. Follow the process to obtain an EE license for your local instance and upload the license.
2. To verify that the license is applied, go to Admin area > Subscription and check the subscription plan.
### Required: Enable feature flags in GitLab-Rails

Why: Some GitLab Duo functionality is behind feature flags.

How: Enable all feature flags maintained by `group::ai framework` by running this command in your `/gitlab` directory:

```shell
bundle exec rake gitlab:duo:enable_feature_flags
```
### Recommended: Run the `gitlab:duo:setup` Rake task to prepare the environment

This Rake task ensures that the local environment is ready to run GitLab Duo. The task can be run in either SaaS or Self-Managed mode, depending on which installation you currently imitate in GDK.

If you currently run your local GDK as SaaS (imitating GitLab.com), you need to pass an argument to the task:

```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup[<test-group-name>]'
```

Replace `<test-group-name>` with the name of any top-level group; Duo is configured for that group. If the group doesn't exist, the task creates a new one. Make sure the script succeeds: it prints error messages with links on how to resolve each error, and you can re-run it until it succeeds.

If you currently run your local GDK as Self-Managed (the default for GDK), the Rake task expects no arguments:

```shell
GITLAB_SIMULATE_SAAS=0 bundle exec 'rake gitlab:duo:setup'
```

It's recommended to run `gdk restart` after the task succeeds.
If you need to use the evaluation framework (as described here), you can run a special Rake task:

```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup_evaluation[<test-group-name>]'
```

It repeats the steps from the original setup Rake task, and also imports specially prepared groups and projects. Because we use the `Setup` class (under `ee/lib/gitlab/duo/developments/setup.rb`), which requires "saas" mode to create a group (necessary for importing subgroups), you need to set `GITLAB_SIMULATE_SAAS=1` if it's currently `GITLAB_SIMULATE_SAAS=0`. This is just to complete the import successfully; you can then switch back to `GITLAB_SIMULATE_SAAS=0`.

To run this task, your GDK server must be running. After you run this Rake task, the import process for said groups and projects will be in progress.
### Recommended: Set the `CLOUD_CONNECTOR_SELF_SIGN_TOKENS` environment variable

If you plan to run your local GDK as Self-Managed, it is recommended to set this environment variable. It has no effect if you run your local GDK as SaaS, so you can always keep it set.

Setting this environment variable allows the local GitLab instance to issue tokens itself, without syncing with CustomersDot first. This is similar to how GitLab.com operates, and we allow it for development purposes to simplify the setup. With it, you can skip the CustomersDot setup. This can be done by either:

- setting it in the `env.runit` file in your GDK root, or
- executing `export CLOUD_CONNECTOR_SELF_SIGN_TOKENS=1` in your shell (but you need to repeat it for every new session).

You need to restart GDK to apply the change.
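For example, the `env.runit` route looks like this:

```shell
# <GDK-root>/env.runit
export CLOUD_CONNECTOR_SELF_SIGN_TOKENS=1
```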
If you plan to use local CustomersDot or test cross-service integration, you may want to unset this variable.
### Option A: Run GDK in SaaS mode and enable AI features for a test group

This is automatically set up when you install the AI Gateway with GDK. To set it explicitly, add the following to the `env.runit` file in your GDK root:

```shell
# <GDK-root>/env.runit
export GITLAB_SIMULATE_SAAS=1
```

or, just for the current session:

```shell
export GITLAB_SIMULATE_SAAS=1 && gdk restart
```
Make sure you run the `gitlab:duo:setup` Rake task in the `/gitlab` directory:

```shell
GITLAB_SIMULATE_SAAS=1 RAILS_ENV=development bundle exec rake 'gitlab:duo:setup[<test-group-name>]'
```

Membership in a group with Duo features enabled is what enables many AI features. To enable AI feature access locally, make sure that your test user is a member of the group with Duo features enabled (`<test-group-name>`) and (for some features) has a seat assigned.
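To sanity-check this from the Rails console, a lookup like the following can help (a minimal sketch; `root` is the default GDK admin user, and `<test-group-name>` is the group you passed to the setup task):

```ruby
# gdk rails console: confirm the test user is a member of the Duo test group
group = Group.find_by_full_path('<test-group-name>')
user  = User.find_by_username('root')
group.member?(user) # expect true
```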
Finally, you must clear the GitLab-Rails Redis cache. User access to GitLab Duo features in SaaS mode is cached in Redis, and this cache expires every 60 minutes. Clearing the cache manually ensures that you can use Duo features immediately:

```shell
bundle exec rake cache:clear
```

Troubleshooting: If you have problems with your setup at this point, double-check your admin settings. When GDK is running, go to the Admin area (Navigation > Admin), then go to the general settings (Settings > General) and expand the "Account and limit" section. Scroll to the bottom of this section and make sure the setting "Allow use of licensed EE features" is toggled on.
### Option B: Run GDK in Self-Managed mode and enable AI features for the instance

How: This is the default for GDK. To set it explicitly, add the following to the `env.runit` file in your GDK root:

```shell
# <GDK-root>/env.runit
export GITLAB_SIMULATE_SAAS=0
```

or, just for the current session:

```shell
export GITLAB_SIMULATE_SAAS=0 && gdk restart
```
Make sure you executed the `gitlab:duo:setup` Rake task in the `/gitlab` directory and that it succeeded:

```shell
GITLAB_SIMULATE_SAAS=0 RAILS_ENV=development bundle exec rake 'gitlab:duo:setup'
```

Some AI features require a seat to be assigned to a user for that user to have access. If you use `CLOUD_CONNECTOR_SELF_SIGN_TOKENS=1`, you need to assign the `root`/`admin` user a seat to receive a "Code completion test was successful" notification from the health check on the http://localhost:3000/admin/code_suggestions page. Our customers (production environments) do not need to do this to run a Code Suggestions health check.
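If you prefer to assign the seat from the Rails console instead of the admin UI, a sketch along these lines is possible. The model and association names here are assumptions about GitLab's subscription internals and may differ between versions; the `gitlab:duo:setup` task normally takes care of this for you:

```ruby
# gdk rails console: assign a Duo seat to the root user (sketch; names are assumptions)
user = User.find_by_username('root')
purchase = GitlabSubscriptions::AddOnPurchase.last # assumes the setup task created one
purchase.assigned_users.create!(user: user)
```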
### Recommended: Test clients in Rails console

Why: You've completed all of the setup steps; now it's time to confirm that GitLab Duo is actually working.

How: After the setup is complete, you can test the clients in GitLab-Rails to see if they can correctly reach the AI Gateway:

- Run `gdk start`.
- Log in to the Rails console with `gdk rails console`.
- Talk to a model:

```ruby
# Talk to Anthropic model
Gitlab::Llm::Anthropic::Client.new(User.first, unit_primitive: 'duo_chat').complete(prompt: "\n\nHuman: Hi, How are you?\n\nAssistant:")

# Talk to Vertex AI model
Gitlab::Llm::VertexAi::Client.new(User.first, unit_primitive: 'documentation_search').text_embeddings(content: "How can I create an issue?")

# Test `/v1/chat/agent` endpoint
Gitlab::Llm::Chain::Requests::AiGateway.new(User.first).request(prompt: [{role: "user", content: "Hi, how are you?"}])
```
### Optional: Enable authentication and authorization in AI Gateway

Why: The AI Gateway has an authentication and authorization flow to verify whether clients have permission to access its features. Auth is enforced in any live environment hosted by the GitLab infrastructure team. You may want to test this flow in your local development environment.

To enable authorization checks, set `AIGW_AUTH__BYPASS_EXTERNAL` to `false` in the application setting file (`<GDK-root>/gitlab-ai-gateway/.env`) of the AI Gateway.
#### Option 1: Use your GitLab instance as a provider

Why: This is the simplest method of testing authentication, and it reflects our setup on GitLab.com.

How: Assuming that you are running the AI Gateway with GDK, apply the following configuration to GDK:

```shell
# <GDK-root>/env.runit
export GITLAB_SIMULATE_SAAS=1
```

Update the application settings file in the AI Gateway:

```shell
# <GDK-root>/gitlab-ai-gateway/.env
AIGW_AUTH__BYPASS_EXTERNAL=false
AIGW_GITLAB_URL=<your-gdk-url>
```

Then run `gdk restart`.
#### Option 2: Use your CustomersDot instance as a provider

Why: A CustomersDot setup is required when you want to test or update functionality related to cloud licensing, or when you are running GDK in non-SaaS mode.

If you need to get CustomersDot working for your local GitLab Rails instance for any reason, reach out to `#s_fulfillment_engineering` in Slack. For questions about the integration of CustomersDot with other systems to deliver AI use cases, reach out to `#g_cloud_connector` for assistance.
### Help

- Here's how to reach us!
- View guidelines for working with GitLab Duo Chat.
## Tips for local development

- When responses are taking too long to appear in the user interface, consider restarting Sidekiq by running `gdk restart rails-background-jobs`. If that doesn't work, try `gdk kill` and then `gdk start`.
- Alternatively, bypass Sidekiq entirely and run the service synchronously. This can help with debugging errors, as GraphQL errors then become available in the network inspector instead of the Sidekiq logs. To do that, temporarily alter the `perform_for` method in the `Llm::CompletionWorker` class by changing `perform_async` to `perform_inline`, as sketched below.
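A minimal sketch of that temporary change (the signature and arguments below are placeholders, not the real method; the only point is swapping the asynchronous Sidekiq call for its synchronous counterpart, both of which are standard Sidekiq worker class methods):

```ruby
# ee/app/workers/llm/completion_worker.rb (temporary local change; do not commit)
def self.perform_for(message, options = {}) # placeholder signature
  # perform_inline runs the job synchronously in the Rails process, so GraphQL
  # errors surface in the network inspector instead of the Sidekiq logs.
  perform_inline(message, options) # was: perform_async(message, options)
end
```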
## Feature development (Abstraction Layer)

### Feature flags

Apply the following feature flags to any AI feature work:

- A general flag (`ai_duo_chat_switch`) that applies to all GitLab Duo Chat features. It's enabled by default.
- A general flag (`ai_global_switch`) that applies to all other AI features. It's enabled by default.
- A flag specific to that feature. The feature flag name must be different from the licensed feature name.

See the feature flag tracker epic for the list of all feature flags and how to use them.
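For local testing, these flags can also be toggled individually from the Rails console with the standard `Feature` helpers (the feature-specific flag name below is hypothetical):

```ruby
# gdk rails console
Feature.enable(:ai_duo_chat_switch)  # all GitLab Duo Chat features
Feature.enable(:ai_global_switch)    # all other AI features
Feature.enable(:rewrite_description) # your feature-specific flag (hypothetical name)
```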
### GraphQL API

To connect to the AI provider API using the Abstraction Layer, use an extendable GraphQL API called `aiAction`.

The `input` accepts key/value pairs, where the `key` is the action that needs to be performed. We only allow one AI action per mutation request.

Example of a mutation:

```graphql
mutation {
  aiAction(input: { summarizeComments: { resourceId: "gid://gitlab/Issue/52" } }) {
    clientMutationId
  }
}
```
As an example, assume we want to build an "explain code" action. To do this, we extend the `input` with a new key, `explainCode`. The mutation would look like this:

```graphql
mutation {
  aiAction(
    input: {
      explainCode: { resourceId: "gid://gitlab/MergeRequest/52", code: "foo() { console.log() }" }
    }
  ) {
    clientMutationId
  }
}
```

The GraphQL API then uses the Anthropic Client to send the response.
#### How to receive a response

The API requests to AI providers are handled in a background job. We therefore do not keep the request alive, and the frontend needs to match the request to the response from the subscription.

Matching a response to its request can cause problems when only `userId` and `resourceId` are used: when two AI features use the same `userId` and `resourceId`, both subscriptions will receive the response from each other. To prevent this interference, we introduced the `clientSubscriptionId`. To match a response on the `aiCompletionResponse` subscription, you can provide a `clientSubscriptionId` to the `aiAction` mutation.

- The `clientSubscriptionId` should be unique per feature and within a page, so that it does not interfere with other AI features. We recommend using a `UUID`.
- Only when the `clientSubscriptionId` is provided as part of the `aiAction` mutation will it be used for broadcasting the `aiCompletionResponse`.
- If the `clientSubscriptionId` is not provided, only `userId` and `resourceId` are used for the `aiCompletionResponse`.
As an example mutation for summarizing comments, we provide a `randomId` as part of the mutation:

```graphql
mutation {
  aiAction(
    input: {
      summarizeComments: { resourceId: "gid://gitlab/Issue/52" }
      clientSubscriptionId: "randomId"
    }
  ) {
    clientMutationId
  }
}
```

In our component, we then listen on the `aiCompletionResponse` using the `userId`, `resourceId`, and `clientSubscriptionId` (`"randomId"`):
```graphql
subscription aiCompletionResponse(
  $userId: UserID
  $resourceId: AiModelID
  $clientSubscriptionId: String
) {
  aiCompletionResponse(
    userId: $userId
    resourceId: $resourceId
    clientSubscriptionId: $clientSubscriptionId
  ) {
    content
    errors
  }
}
```

The subscription for chat behaves differently. To avoid many concurrent subscriptions, you should also subscribe only once the mutation is sent, by using `skip()`.
### Current abstraction layer flow

The following graph uses VertexAI as an example. You can use different providers.
## How to implement a new action

Implementing a new AI action requires changes in the GitLab monolith as well as in the AI Gateway. We'll use the example of implementing an action that allows users to rewrite issue descriptions according to a given prompt.

### 1. Add your action to the Cloud Connector feature list

The Cloud Connector configuration stores the permissions needed to access your service, as well as additional metadata. For more information, see Cloud Connector: Configuration.

```yaml
# ee/config/cloud_connector/access_data.yml

services:
  # ...
  rewrite_description:
    backend: 'gitlab-ai-gateway'
    bundled_with:
      duo_enterprise:
        unit_primitives:
          - rewrite_issue_description
```
### 2. Create an Agent definition in the AI Gateway

In the AI Gateway project, create a new agent definition under `ai_gateway/agents/definitions`. Create a new subfolder corresponding to the name of your AI action, and a new YAML file for your agent. Specify the model and provider you wish to use, and the prompts that will be fed to the model. You can specify inputs to be plugged into the prompt by using `{}`.

```yaml
# ai_gateway/agents/definitions/rewrite_description/base.yml

name: Description rewriter
model:
  name: claude-3-sonnet-20240229
  params:
    model_class_provider: anthropic
prompt_template:
  system: |
    You are a helpful assistant that rewrites the description of resources. You'll be given the current description, and a prompt on how you should rewrite it. Reply only with your rewritten description.

    <description>{description}</description>

    <prompt>{prompt}</prompt>
```
If your AI action is part of a broader feature, the definitions can be organized in a tree structure:

```yaml
# ai_gateway/agents/definitions/code_suggestions/generations/base.yml

name: Code generations
model:
  name: claude-3-sonnet-20240229
  params:
    model_class_provider: anthropic
...
```

To specify prompts for multiple models, use the name of the model as the filename for the definition:

```yaml
# ai_gateway/agents/definitions/code_suggestions/generations/mistral.yml

name: Code generations
model:
  name: mistral
  params:
    model_class_provider: litellm
...
```
### 3. Create a Completion class

Create a new completion under `ee/lib/gitlab/llm/ai_gateway/completions/` and inherit it from the `Base` AI Gateway Completion.

```ruby
# ee/lib/gitlab/llm/ai_gateway/completions/rewrite_description.rb

module Gitlab
  module Llm
    module AiGateway
      module Completions
        class RewriteDescription < Base
          def agent_name
            'base' # Must match the name of the agent you defined on the AI Gateway
          end

          def inputs
            { description: resource.description, prompt: prompt_message.content }
          end
        end
      end
    end
  end
end
```
### 4. Create a Service

- Create a new service under `ee/app/services/llm/` and inherit it from the `BaseService`.
- The `resource` is the object we want to act on. It can be any object that includes the `Ai::Model` concern. For example, it could be a `Project`, `MergeRequest`, or `Issue`.

```ruby
# ee/app/services/llm/rewrite_description_service.rb

module Llm
  class RewriteDescriptionService < BaseService
    extend ::Gitlab::Utils::Override

    override :valid?
    def valid?
      super &&
        # You can restrict which type of resources your service applies to
        resource.to_ability_name == "issue" &&
        # Always check that the user is allowed to perform this action on the resource
        Ability.allowed?(user, :rewrite_description, resource)
    end

    private

    def perform
      schedule_completion_worker
    end
  end
end
```
### 5. Register the feature in the catalogue

Go to `Gitlab::Llm::Utils::AiFeaturesCatalogue` and add a new entry for your AI action.

```ruby
class AiFeaturesCatalogue
  LIST = {
    # ...
    rewrite_description: {
      service_class: ::Gitlab::Llm::AiGateway::Completions::RewriteDescription,
      feature_category: :ai_abstraction_layer,
      execute_method: ::Llm::RewriteDescriptionService,
      maturity: :experimental,
      self_managed: false,
      internal: false
    }
  }.freeze
end
```
## How to migrate an existing action to the AI Gateway

AI actions were initially implemented inside the GitLab monolith. As part of our AI Gateway as the Sole Access Point for Monolith to Access Models epic, we're migrating prompts, model selection, and model parameters into the AI Gateway. This will increase the speed at which we can deliver improvements to self-managed users by decoupling prompt and model changes from monolith releases. To migrate an existing action:

- Follow steps 1 through 3 of How to implement a new action.
- Modify the entry for your AI action in the catalogue to list the new completion class as the `aigw_service_class`:
```ruby
class AiFeaturesCatalogue
  LIST = {
    # ...
    generate_description: {
      service_class: ::Gitlab::Llm::Anthropic::Completions::GenerateDescription,
      aigw_service_class: ::Gitlab::Llm::AiGateway::Completions::GenerateDescription,
      prompt_class: ::Gitlab::Llm::Templates::GenerateDescription,
      feature_category: :ai_abstraction_layer,
      execute_method: ::Llm::GenerateDescriptionService,
      maturity: :experimental,
      self_managed: false,
      internal: false
    },
    # ...
  }.freeze
end
```
When the feature flag `ai_gateway_agents` is enabled, the `aigw_service_class` is used to process the AI action.
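For local testing, the flag can be enabled from the Rails console with the standard `Feature` helper:

```ruby
# gdk rails console
Feature.enable(:ai_gateway_agents)
```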
Once you've validated the correct functioning of your action, you can remove the `aigw_service_class` key and replace the `service_class` with the new `AiGateway::Completions` class to make it the permanent provider.
For a complete example of the changes needed to migrate an AI action, see the following MRs:
### Authorization in GitLab-Rails

We recommend using policies to deal with authorization for a feature. Currently, we need to make sure to cover the following checks:

- For the GitLab Duo Chat feature, `ai_duo_chat_switch` is enabled.
- For other general AI features, `ai_global_switch` is enabled.
- The feature-specific feature flag is enabled.
- The namespace has the required license for the feature.
- The user is a member of the group/project.
- The `experiment_features_enabled` settings are set on the `Namespace`.

For our example, we need to implement the `allowed?(:rewrite_description)` call. As an example, you can look at the Issue Policy for the summarize comments feature. In our example case, we want to implement the feature for issues as well:
```ruby
# ee/app/policies/ee/issue_policy.rb

module EE
  module IssuePolicy
    extend ActiveSupport::Concern

    prepended do
      with_scope :global
      condition(:ai_available) do
        ::Feature.enabled?(:ai_global_switch, type: :ops)
      end

      with_scope :subject
      condition(:rewrite_description_enabled) do
        ::Feature.enabled?(:rewrite_description, subject_container) &&
          subject_container.licensed_feature_available?(:rewrite_description)
      end

      rule do
        ai_available & rewrite_description_enabled & is_project_member
      end.enable :rewrite_description
    end
  end
end
```
### Pairing requests with responses

Because multiple users' requests can be processed in parallel, it can be difficult to pair a response with its original request. The `requestId` field can be used for this purpose, because both the request and the response are assured to have the same `requestId` UUID.
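For illustration, both the mutation payload and the subscription can select `requestId`, so the client can compare the two values (a sketch; we assume here that the `aiAction` payload exposes the field, consistent with the description above):

```graphql
mutation {
  aiAction(input: { summarizeComments: { resourceId: "gid://gitlab/Issue/52" } }) {
    requestId # pair this with aiCompletionResponse.requestId
    errors
  }
}
```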
### Caching

AI requests and responses can be cached. The cached conversation is used to display the user's interaction with AI features. In the current implementation, this cache is not used to skip consecutive calls to the AI service when a user repeats their requests. Cached messages can be listed with the `aiMessages` query:

```graphql
query {
  aiMessages {
    nodes {
      id
      requestId
      content
      role
      errors
      timestamp
    }
  }
}
```

This cache is especially useful for chat functionality. For other services, caching is disabled. You can enable caching for a service by using the `cache_response: true` option.
Caching has the following limitations:

- Messages are stored in a Redis stream.
- There is a single stream of messages per user. This means that all services currently share the same cache. If needed, this could be extended to multiple streams per user (after checking with the infrastructure team that Redis can handle the estimated number of messages).
- Only the last 50 messages (requests + responses) are kept.
- The expiration time of the stream is 3 days after the last message is added.
- Users can access only their own messages. There is no authorization at the caching level; any authorization (such as when a message is accessed by someone other than the current user) is expected at the service layer.
### Check if feature is allowed for this resource based on namespace settings

There is one setting allowed on the root namespace level that restricts the use of AI features: `experiment_features_enabled`.

To check if a feature is allowed for a given namespace, call:

```ruby
Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
```

Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are arrays there that differentiate between experimental and beta features. This way we are ready for the following different cases:

- If the feature is not in any array, the check returns `true`. For example, the feature was moved to GA.

To move a feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
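Registering our running example as an experimental feature would then look roughly like this (a sketch; only the class and array names are taken from the text above, and the file layout is an assumption):

```ruby
# ee/lib/gitlab/llm/stage_check.rb (sketch)
module Gitlab
  module Llm
    class StageCheck
      EXPERIMENTAL_FEATURES = [
        :rewrite_description, # our new feature stays here while experimental
        # ...
      ].freeze

      BETA_FEATURES = [
        # move :rewrite_description here when it reaches beta
        # ...
      ].freeze
    end
  end
end
```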
### Implement calls to AI APIs and the prompts

The `CompletionWorker` calls the `Completions::Factory`, which initializes the Service and executes the actual call to the API. In our example, we will use VertexAI and implement two new classes:

```ruby
# ee/lib/gitlab/llm/vertex_ai/completions/rewrite_description.rb

module Gitlab
  module Llm
    module VertexAi
      module Completions
        class RewriteDescription < Gitlab::Llm::Completions::Base
          def execute
            prompt = ai_prompt_class.new(options[:user_input]).to_prompt

            response = Gitlab::Llm::VertexAi::Client.new(user, unit_primitive: 'rewrite_description').text(content: prompt)

            response_modifier = ::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions.new(response)

            ::Gitlab::Llm::GraphqlSubscriptionResponseService.new(
              user, nil, response_modifier, options: response_options
            ).execute
          end
        end
      end
    end
  end
end
```
```ruby
# ee/lib/gitlab/llm/vertex_ai/templates/rewrite_description.rb

module Gitlab
  module Llm
    module VertexAi
      module Templates
        class RewriteDescription
          attr_reader :user_input

          def initialize(user_input)
            @user_input = user_input
          end

          def to_prompt
            <<~PROMPT
              You are an assistant that writes code for the following context:

              context: #{user_input}
            PROMPT
          end
        end
      end
    end
  end
end
```
Because we support multiple AI providers, you may also use those providers for the same example:

```ruby
Gitlab::Llm::VertexAi::Client.new(user, unit_primitive: 'your_feature')
Gitlab::Llm::Anthropic::Client.new(user, unit_primitive: 'your_feature')
```
### Add AI Action to GraphQL

TODO
## Monitoring

- Error ratio and response latency apdex for each AI action can be found on the Sidekiq Service dashboard under SLI Detail: `llm_completion`.
- Spent tokens, usage of each AI feature, and other statistics can be found on the periscope dashboard.
- AI Gateway logs.
- AI Gateway metrics.
- Feature usage dashboard via proxy.
## Security

Refer to the secure coding guidelines for Artificial Intelligence (AI) features.