## Recommendations

- When creating an uploader, make it a subclass of `AttachmentUploader`
- Add your uploader to the tables in this document
- Do not add new object storage buckets
- Implement direct upload
- If you need to process your uploads, decide where to do that
## Where should you store your files?

CarrierWave Uploaders determine where files get stored. When you create a new Uploader class you are deciding where to store the files of your new feature.

First of all, ask yourself whether you need a new Uploader class at all. It is OK to use the same Uploader class for different mountpoints or different models.
If you do want or need your own Uploader class, make it a subclass of `AttachmentUploader`. You then inherit the storage location and directory scheme from that class. The directory scheme is:

```ruby
File.join(model.class.underscore, mounted_as.to_s, model.id.to_s)
```
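As a concrete illustration, a new uploader reusing this scheme could look like the following sketch. `FooScreenshotUploader` and its allowlist are hypothetical, not code that exists in GitLab:

```ruby
# Hypothetical uploader: inherits AttachmentUploader's storage location
# and directory scheme. The class name and the allowlist below are
# illustrative only.
class FooScreenshotUploader < AttachmentUploader
  # CarrierWave hook restricting which file extensions may be uploaded
  def extension_allowlist
    %w[png jpg jpeg]
  end
end
```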
If you look around in the GitLab code base you find quite a few Uploaders that have their own storage location. For object storage, this means Uploaders have their own buckets. We now discourage adding new buckets, for the following reasons:

- Using a new bucket adds to development time because you need to make downstream changes in GDK, Omnibus GitLab, and CNG.
- Using a new bucket requires GitLab.com infrastructure changes, which slows down the roll-out of your new feature.
- Using a new bucket slows down adoption of your new feature for self-managed GitLab installations: people cannot start using your new feature until their local GitLab administrator has configured the new bucket.
By using an existing bucket you avoid all this extra work and friction. The `Gitlab.config.uploads` storage location, which is what `AttachmentUploader` uses, is guaranteed to already be configured.
## Implementing Direct Upload support

Below we outline how to implement direct upload support.

Using direct upload is not always necessary, but it is usually a good idea. Unless the uploads handled by your feature are both infrequent and small, you probably want to implement direct upload. An example of a feature with small and infrequent uploads is project avatars: these rarely change and the application imposes strict size limits on them.

If your feature handles uploads that are not both infrequent and small, then not implementing direct upload support means that you are taking on technical debt. At the very least, you should make sure that you can add direct upload support later.
To support Direct Upload you need two things:

- A pre-authorization endpoint in Rails
- A Workhorse routing rule

Workhorse does not know where to store your upload. To find out, it makes a pre-authorization request. It also does not know whether or where to make a pre-authorization request; for that you need the routing rule.

A note for those of us who remember when Workhorse was a separate project: it is no longer necessary to split these two steps into separate merge requests. In fact, it is probably easier to do both in one merge request.
### Adding a new Workhorse routing rule

Routing rules are defined in `workhorse/internal/upstream/routes.go`. They consist of:

- An HTTP verb (usually `POST` or `PUT`)
- A path regular expression
- An upload type: MIME multipart or "full request body"
- Optionally, a match on HTTP headers such as `Content-Type`

For example:

```go
u.route("PUT", apiProjectPattern+`packages/nuget/`, mimeMultipartUploader),
```
You should add a test for your routing rule to `workhorse/internal/upstream/routes_test.go`.
You should also manually verify that when you perform an upload request for your new feature, Workhorse makes a pre-authorization request. You can check this by looking at the Rails access logs. This verification is necessary because a mistake in your routing rule does not produce a hard failure: you just end up silently using the less efficient default upload path.
### Adding a pre-authorization endpoint

We distinguish three cases: Rails controllers, Grape API endpoints, and GraphQL resources.

To start with the bad news: direct upload for GraphQL is currently not supported. The reason for this is that Workhorse does not parse GraphQL queries; see also issue #280819. Consider accepting your file upload via Grape instead.
For Grape pre-authorization endpoints, look for existing examples that implement `/authorize` routes. One example is the `POST :id/uploads/authorize` endpoint. This particular example uses `FileUploader`, which means that the upload is stored in the storage location (bucket) of that Uploader class.
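As a rough sketch, a Grape pre-authorization endpoint could look like the following. The route and size limit are hypothetical; `require_gitlab_workhorse!` and `workhorse_authorize` are the helpers used by similar existing endpoints:

```ruby
# Hypothetical Grape route ('foo_uploads' is illustrative). The endpoint
# only tells Workhorse where to put the file, so it must be protected
# such that only Workhorse can call it.
post ':id/foo_uploads/authorize' do
  require_gitlab_workhorse!

  status 200
  content_type Gitlab::Workhorse::INTERNAL_API_CONTENT_TYPE

  # Returns the temporary upload location (local path or pre-signed URLs)
  FileUploader.workhorse_authorize(has_length: false, maximum_size: 10.megabytes)
end
```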
For Rails endpoints you can use the `WorkhorseAuthorization` concern.
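A minimal sketch of a controller using the concern follows. The controller name is hypothetical, and the hook methods (`uploader_class`, `maximum_size`) are assumptions about how the concern is wired up:

```ruby
# Hypothetical controller. WorkhorseAuthorization provides the
# `authorize` action; the including controller supplies the uploader
# and the size limit (method names are assumptions).
class FooUploadsController < ApplicationController
  include WorkhorseAuthorization

  private

  def uploader_class
    AttachmentUploader
  end

  def maximum_size
    Gitlab::CurrentSettings.max_attachment_size.megabytes
  end
end
```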
## Processing uploads

Some features require us to process uploads, for example to extract metadata from the uploaded file. There are a couple of different ways you can implement this. The main choice is where to implement the processing, or in other words, "who is the processor".

| Processor | Direct Upload possible? | Can reject HTTP request? | Implementation |
|-----------|-------------------------|--------------------------|----------------|
| Sidekiq   | Yes                     | No                       | Straightforward |
| Workhorse | Yes                     | Yes                      | Complex         |
| Rails     | No                      | Yes                      | Easy            |

Processing in Rails looks appealing, but it tends to lead to scaling problems down the road because you cannot use direct upload. You are then forced to rebuild your feature with processing in Workhorse. So if the requirements of your feature allow it, doing the processing in Sidekiq strikes a good balance between complexity and the ability to scale.
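For instance, Sidekiq-based processing could be kicked off after the upload has been stored, along the lines of this sketch (the worker name and the metadata step are hypothetical):

```ruby
# Hypothetical post-processing worker. Because processing happens after
# the HTTP request has completed, it cannot reject the upload request,
# but it is fully compatible with direct upload.
class FooExtractMetadataWorker
  include ApplicationWorker

  idempotent!

  def perform(upload_id)
    upload = Upload.find_by_id(upload_id)
    return unless upload

    # Download the file from its storage location, extract the
    # metadata, and persist it (details omitted).
  end
end
```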
## CarrierWave Uploaders

GitLab uses a modified version of CarrierWave to manage uploads. Below we describe how we use CarrierWave and how we modified it.
The central concept of CarrierWave is the Uploader class. The Uploader defines where files get stored, and optionally contains validation and processing logic. To use an Uploader you must associate it with a text column on an ActiveRecord model. This is called "mounting", and the column is called the `mountpoint`. For example:

```ruby
class Project < ApplicationRecord
  mount_uploader :avatar, AttachmentUploader
end
```
Now if you upload an avatar called `tanuki.png`, the idea is that in the `projects.avatar` column for your project, CarrierWave stores the string `tanuki.png`, and that the `AttachmentUploader` class contains the configuration data and directory scheme. For example, if the project ID is 123, the actual file may be in `/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/tanuki.png`. That directory was chosen by the Uploader using, among other things, the configured storage location (`/var/opt/gitlab/gitlab-rails/uploads`), the model name (`project`), the model ID (`123`), and the mountpoint (`avatar`).
The Uploader determines the individual storage directory of your upload; the `mountpoint` column in your model contains only the filename. You never access the `mountpoint` column directly, because CarrierWave defines a getter and setter on your model that operate on file handle objects.
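To make the distinction concrete, here is a rough illustration; the values shown are what the `Project`/`AttachmentUploader` mount from the example above would produce:

```ruby
project = Project.find(123)

# The setter caches the file through the uploader, not the raw column:
project.avatar = File.open('/tmp/tanuki.png')

# The getter returns an uploader (a file handle object), not a string:
project.avatar          # => #<AttachmentUploader ...>
project.avatar.filename # => "tanuki.png"

# The raw column only holds the filename (not normally accessed directly):
project.read_attribute(:avatar) # => "tanuki.png"
```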
Besides determining the storage directory for your upload, a CarrierWave Uploader can implement several other behaviors via callbacks. Not all of these behaviors are usable in GitLab. In particular, you currently cannot use the `version` mechanism of CarrierWave. Things you can do include:

- Filename validation
- Incompatible with direct upload: one-time pre-processing of file contents, for example image resizing
- Incompatible with direct upload: encryption at rest
CarrierWave pre-processing behaviors such as image resizing or encryption require local access to the uploaded file. This forces you to upload the processed file from Ruby, which defeats the purpose of direct upload: not doing the upload in Ruby. If you use direct upload with an Uploader that has pre-processing behaviors, the pre-processing behaviors are silently skipped.
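For example, a classic CarrierWave resize step looks like the sketch below (the uploader name is hypothetical); with direct upload enabled, this `process` step would be silently skipped:

```ruby
# Hypothetical uploader with a pre-processing step. `process` is
# implemented as a `before :cache` hook, so it only runs when the file
# passes through Ruby -- never for direct uploads.
class FooAvatarUploader < AttachmentUploader
  include CarrierWave::MiniMagick

  process resize_to_fit: [200, 200]
end
```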
CarrierWave has two storage engines:

| CarrierWave class | GitLab name | Description |
|-------------------|-------------|-------------|
| `CarrierWave::Storage::File` | `ObjectStorage::Store::LOCAL` | Local files, accessed through the Ruby `File` class |
| `CarrierWave::Storage::Fog` | `ObjectStorage::Store::REMOTE` | Cloud files, accessed through the Fog gem |

GitLab uses both of these engines, depending on configuration.
The typical way to choose a storage engine in CarrierWave is to use the `Uploader.storage` class method. In GitLab we do not do this; we override the `Uploader#storage` instance method instead. This allows us to vary the storage engine file by file.
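The idea, reduced to a sketch, is that each record carries a store column that the instance method consults. This is a simplification, not GitLab's actual implementation:

```ruby
# Simplified sketch of a per-file storage engine choice. The real code
# lives in app/uploaders/object_storage.rb and is more involved.
def storage
  @storage ||=
    if object_store == ObjectStorage::Store::REMOTE
      CarrierWave::Storage::Fog.new(self)   # cloud file
    else
      CarrierWave::Storage::File.new(self)  # local file
    end
end
```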
An Uploader is associated with two storage areas: regular storage and cache storage. Each has its own storage engine. If you assign a file to a mountpoint setter (`project.avatar = File.open('/tmp/tanuki.png')`), the file is copied or moved to cache storage as a side effect, via the `cache!` method. To persist the file you must somehow call the `store!` method. This either happens via ActiveRecord callbacks or by calling `store!` on an Uploader instance.
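In everyday use the two phases are hidden behind normal ActiveRecord operations, roughly like this:

```ruby
project = Project.find(123)

project.avatar = File.open('/tmp/tanuki.png') # cache! runs as a side effect
project.save!                                 # store! runs via model callbacks
```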
Typically you do not need to interact with `cache!` and `store!` directly, but if you need to debug the GitLab CarrierWave modifications it is useful to know that they are there and that they always get called. Specifically, it is good to know that CarrierWave pre-processing behaviors (`process` and so on) are implemented as `before :cache` hooks, and in the case of direct upload, these hooks are ignored and do not run.

Direct upload skips all CarrierWave `before :cache` hooks.
## GitLab modifications to CarrierWave

GitLab uses a modified version of CarrierWave to make a number of things possible.

In `app/uploaders/object_storage.rb` there is code for migrating user data between local storage and object storage. This code exists because for a long time, GitLab.com stored uploads on local storage via NFS. This changed when, as part of an infrastructure migration, we had to move the uploads to object storage.

This is why the CarrierWave `storage` engine varies from upload to upload in GitLab, and why we have database columns such as `uploads.store` or `ci_job_artifacts.file_store`.
Workhorse direct upload is a mechanism that lets us accept large uploads without spending a lot of Ruby CPU time. Workhorse is written in Go, and goroutines have a much lower resource footprint than Ruby threads.

Direct upload works as follows:

1. Workhorse accepts a user upload request.
1. Workhorse pre-authenticates the request with Rails, and receives a temporary upload location.
1. Workhorse stores the file upload in the user's request to the temporary upload location.
1. Workhorse propagates the request to Rails.
1. Rails issues a remote copy operation to copy the uploaded file from its temporary location to the final location.
1. Rails deletes the temporary upload.
1. Workhorse deletes the temporary upload a second time in case Rails timed out.
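To give a feel for step 2, the pre-authorization response contains roughly the following information, shown here as a Ruby hash. The field names follow Workhorse's API types, but the shape is abbreviated and illustrative rather than authoritative:

```ruby
# Illustrative shape of a pre-authorization response for object storage.
{
  'TempPath' => nil, # set instead of RemoteObject when using local storage
  'RemoteObject' => {
    'ID' => 'opaque-temporary-upload-id',
    'StoreURL' => 'https://objectstorage.example.com/bucket/tmp/...', # pre-signed PUT
    'GetURL' => 'https://objectstorage.example.com/bucket/tmp/...',
    'DeleteURL' => 'https://objectstorage.example.com/bucket/tmp/...'
  }
}
```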
Normally, `cache!` returns an instance of `CarrierWave::SanitizedFile`, and `store!` then uploads that file using Fog.

In the case of object storage, with the modifications specific to GitLab, the copying from the temporary location to the final location is implemented by Rails fooling CarrierWave. When CarrierWave tries to `cache!` the upload, we feed it a `CarrierWave::Storage::Fog::File` file handle which points to the temporary file. During the `store!` phase, CarrierWave then copies this file to its intended location.
## Tables

The Scalability::Frameworks team is making object storage and uploads easier to use and more robust. If you add or change uploaders, it helps us if you update this table too. This helps us keep an overview of where and how uploaders are used.
| Feature | Upload technology | Uploader | Bucket structure |
|---------|-------------------|----------|------------------|
| Live job traces | | | |
| Job traces archive | | | |
| Autoscale runner caching | Not applicable | | |
| Design management files | | | |
| Design management thumbnails | | | |
| Generic file uploads | | | |
| Generic file uploads - personal snippets | | | |
| Global appearance settings | | | |
| Package manager assets (except for NPM) | | | |
| NPM Package manager assets | | | |
| Debian Package manager assets | | | |
| Dependency Proxy cache | | | |
| Terraform state files | | | |
| Pages content archives | | | |