Git LFS development guidelines
This page contains developer-centric information for GitLab team members. For the user documentation, see Git Large File Storage.
Two diagrams (not reproduced here) give a high-level explanation of a Git push and a Git pull when Git LFS is in use.
Controllers and Services
Repositories::GitHttpClientController
The methods for authentication defined here are inherited by all the other LFS controllers.
Repositories::LfsApiController
#batch
After authentication, the `batch` action is the first action called by the Git LFS client during downloads and uploads (such as pull, push, and clone).
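For illustration, a batch request from a Git LFS client could look like the following minimal Python sketch. The repository URL, credentials, and object metadata are placeholders, and the request shape follows the Git LFS batch API specification rather than any GitLab-internal interface:

```python
import requests

# Placeholder repository URL; a real client derives this from the Git remote.
LFS_BATCH_URL = "https://gitlab.example.com/group/project.git/info/lfs/objects/batch"

payload = {
    "operation": "download",  # "upload" during a push
    "transfers": ["basic"],
    "objects": [
        # oid is the SHA-256 checksum of the file; size is its byte size.
        {"oid": "0" * 64, "size": 123},
    ],
}

response = requests.post(
    LFS_BATCH_URL,
    json=payload,
    headers={
        "Accept": "application/vnd.git-lfs+json",
        "Content-Type": "application/vnd.git-lfs+json",
    },
    auth=("user", "personal-access-token"),  # placeholder credentials
)

# Per object, the response lists the actions (download or upload) along with the
# href and headers the client should use for the actual transfer.
print(response.json())
```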
Repositories::LfsStorageController
#upload_authorize
Provides a payload to Workhorse that includes a path for Workhorse to save the file to. The path could point to remote object storage.
#upload_finalize
Handles requests from Workhorse that contain information on a file that Workhorse has already uploaded (see this middleware) so that `gitlab` can either:

- Create an `LfsObject`.
- Connect an existing `LfsObject` to a project with an `LfsObjectsProject`.
LfsObject and LfsObjectsProject
- Only one `LfsObject` is created for a file with a given `oid` (a SHA-256 checksum of the file) and file size; a sketch of how the `oid` is computed follows this list.
- `LfsObjectsProject` associates `LfsObject`s with `Project`s. These records determine if a file can be accessed through a project.
- These objects are also used for calculating the amount of LFS storage a given project is using. For more information, see `ProjectStatistics#update_lfs_objects_size`.
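As a minimal sketch of what the `oid` represents, the following Python snippet computes the SHA-256 checksum that Git LFS uses; the file path is a placeholder:

```python
import hashlib

def lfs_oid(path):
    """Return the SHA-256 checksum that Git LFS uses as an object's oid."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha.update(chunk)
    return sha.hexdigest()

# Two projects containing the same file share a single LfsObject row for this
# oid and size; only their LfsObjectsProject join records differ.
print(lfs_oid("example.bin"))
```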
Repositories::LfsLocksApiController
Handles the lock API for LFS. Delegates mostly to corresponding services:

- `Lfs::LockFileService`
- `Lfs::UnlockFileService`
- `Lfs::LocksFinderService`

These services create and delete `LfsFileLock`.
#verify
- This endpoint responds with a payload that allows a client to check whether any files being pushed have locks that belong to another user; a sketch of a verify request follows this list.
- A client-side `lfs.locksverify` configuration can be set so that the client aborts the push if locks exist that belong to another user.
- The existence of locks belonging to other users is also validated on the server side.
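As a rough sketch (URL and credentials are placeholders), a verify call follows the Git LFS file locking API, which splits locks into `ours` and `theirs`:

```python
import requests

# Placeholder endpoint; the real URL is derived from the repository remote.
VERIFY_URL = "https://gitlab.example.com/group/project.git/info/lfs/locks/verify"

response = requests.post(
    VERIFY_URL,
    json={"ref": {"name": "refs/heads/main"}},
    headers={
        "Accept": "application/vnd.git-lfs+json",
        "Content-Type": "application/vnd.git-lfs+json",
    },
    auth=("user", "personal-access-token"),  # placeholder credentials
)

locks = response.json()
# "ours" are locks held by the requesting user, "theirs" by other users.
# With lfs.locksverify enabled, the client aborts the push when "theirs" is non-empty.
for lock in locks.get("theirs", []):
    print(lock["path"], lock["owner"]["name"])
```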
Example authentication
- Clients can be configured to store credentials in a few different ways. See the Git LFS documentation on authentication.
- Running `gitlab-lfs-authenticate` on `gitlab-shell`. See the Git LFS documentation concerning `gitlab-lfs-authenticate` (a sketch of this handshake follows this list).
- `gitlab-shell` makes a request to the GitLab API.
- GitLab responds to `gitlab-shell` with a token, which is used in subsequent requests. See the Git LFS documentation concerning authentication.
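A minimal sketch of the SSH handshake, assuming a placeholder host and repository path; the response fields follow the Git LFS server discovery documentation:

```python
import json
import subprocess

# Placeholder host and repository path, for illustration only.
result = subprocess.run(
    ["ssh", "git@gitlab.example.com", "git-lfs-authenticate",
     "group/project.git", "download"],
    capture_output=True, text=True, check=True,
)

auth = json.loads(result.stdout)
# Typical fields: "href" (the LFS API endpoint for this repository) and
# "header" (an Authorization value used on subsequent batch and transfer requests).
print(auth["href"])
print(auth["header"]["Authorization"])
```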
Example clone
- Git LFS requests the ability to download files, using the authorization header obtained during authentication.
- `gitlab` responds with the list of objects and where to find them. See `LfsApiController#batch`.
- Git LFS makes a request for each file to the `href` in the previous response (a download sketch follows this list). See how downloads are handled with the basic transfer mode.
- `gitlab` redirects to the remote URL if remote object storage is enabled. See `SendFileUpload`.
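A minimal download sketch, assuming a placeholder `href` and header taken from a batch response:

```python
import requests

# Placeholder download action for one object, copied from a batch response.
download_action = {
    "href": "https://gitlab.example.com/group/project.git/gitlab-lfs/objects/<oid>",
    "header": {"Authorization": "Basic <token>"},
}

response = requests.get(
    download_action["href"],
    headers=download_action.get("header", {}),
    # With remote object storage enabled, GitLab redirects to the object store
    # instead of streaming the file itself, so redirects must be followed.
    allow_redirects=True,
)
response.raise_for_status()

with open("example.bin", "wb") as f:
    f.write(response.content)
```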
Example push
- Git LFS requests the ability to upload files.
- `gitlab` responds with the list of objects and where to upload them. See `LfsApiController#batch`.
- Git LFS makes a request for each file to the `href` in the previous response (an upload sketch follows this list). See how uploads are handled with the basic transfer mode.
- `gitlab` responds with a payload including a path for Workhorse to save the file to, which could be in remote object storage. See `LfsStorageController#upload_authorize`.
- Workhorse does the work of saving the file.
- Workhorse makes a request to `gitlab` with information on the uploaded file so that `gitlab` can create an `LfsObject`. See `LfsStorageController#upload_finalize`.
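A minimal upload sketch, assuming a placeholder upload action from a batch response:

```python
import requests

# Placeholder upload action for one object, copied from a batch response.
upload_action = {
    "href": "https://gitlab.example.com/group/project.git/gitlab-lfs/objects/<oid>/<size>",
    "header": {"Authorization": "Basic <token>"},
}

with open("example.bin", "rb") as f:
    response = requests.put(
        upload_action["href"],
        data=f,  # stream the raw file contents
        headers={
            **upload_action.get("header", {}),
            "Content-Type": "application/octet-stream",
        },
    )
response.raise_for_status()
# Behind this request, Workhorse saves the file (possibly to object storage) and
# then calls upload_finalize so that gitlab can create or link the LfsObject.
```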
Deep Dive
In April 2019, Francisco Javier López hosted a Deep Dive (GitLab team members only: https://gitlab.com/gitlab-org/create-stage/-/issues/1)
on the GitLab Git LFS implementation to share domain-specific
knowledge with anyone who may work in this part of the codebase in the future.
You can find the recording on YouTube,
and the slides on Google Slides
and in PDF.
This deep dive was accurate as of GitLab 11.10, and while specific
details may have changed, it should still serve as a good introduction.
Including LFS blobs in project archives
The following steps describe how GitLab resolves LFS files for project archives:
1. The user requests the project archive from the UI.
2. Workhorse forwards this request to Rails.
3. If the user is authorized to download the archive, Rails replies with an HTTP header of `Gitlab-Workhorse-Send-Data` with a base64-encoded JSON payload prefaced with `git-archive`. This payload includes the `SendArchiveRequest` binary message, which is encoded again in base64.
4. Workhorse decodes the `Gitlab-Workhorse-Send-Data` payload (a decoding sketch follows this list). If the archive already exists in the archive cache, Workhorse sends that file. Otherwise, Workhorse sends the `SendArchiveRequest` to the appropriate Gitaly server.
5. The Gitaly server calls `git archive <ref>` to begin generating the Git archive on-the-fly. If the `include_lfs_blobs` flag is enabled, Gitaly enables a custom LFS smudge filter with the `-c filter.lfs.smudge=/path/to/gitaly-lfs-smudge` Git option.
6. When `git` identifies a possible LFS pointer using the `.gitattributes` file, `git` calls `gitaly-lfs-smudge` and provides the LFS pointer via the standard input. Gitaly provides `GL_PROJECT_PATH` and `GL_INTERNAL_CONFIG` as environment variables to enable lookup of the LFS object.
7. If a valid LFS pointer is decoded, `gitaly-lfs-smudge` makes an internal API call to Workhorse to download the LFS object from GitLab.
8. Workhorse forwards this request to Rails. If the LFS object exists and is associated with the project, Rails sends `ArchivePath` with either a path where the LFS object resides (for local disk) or a pre-signed URL (when object storage is enabled), using the `Gitlab-Workhorse-Send-Data` HTTP header with a payload prefaced with `send-url`.
9. Workhorse retrieves the file and sends it to the `gitaly-lfs-smudge` process, which writes the contents to the standard output.
10. `git` reads this output and sends it back to the Gitaly process.
11. Gitaly sends the data back to Rails.
12. The archive data is sent back to the client.
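The following Python sketch mimics what steps 3 and 4 describe: Rails builds a `Gitlab-Workhorse-Send-Data` value whose base64-encoded JSON payload embeds a base64-encoded `SendArchiveRequest`, and Workhorse decodes it. The field names and values here are illustrative assumptions, not the exact payload GitLab produces:

```python
import base64
import json

# Assumed, illustrative payload; the real one is produced by Rails in step 3.
params = {
    "ArchivePath": "path/in/archive/cache/archive.tar.gz",
    "GitalyRequest": base64.b64encode(b"<SendArchiveRequest protobuf bytes>").decode(),
}
header_value = "git-archive:" + base64.b64encode(json.dumps(params).encode()).decode()

# Conceptually what Workhorse does in step 4: strip the "git-archive" prefix,
# base64-decode the JSON parameters, then decode the embedded SendArchiveRequest.
send_type, encoded = header_value.split(":", 1)
decoded = json.loads(base64.b64decode(encoded))
archive_request = base64.b64decode(decoded["GitalyRequest"])

print(send_type)        # git-archive
print(sorted(decoded))  # the parameter names Workhorse reads
```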
In step 7, the `gitaly-lfs-smudge` filter must talk to Workhorse, not to Rails, or an invalid LFS blob is saved. To support this, GitLab changed the default Omnibus configuration to have Gitaly talk to Workhorse instead of Rails.
One side effect of this change: the correlation ID of the original
request is not preserved for the internal API requests made by Gitaly
(or `gitaly-lfs-smudge`), such as the one made in step 8. The
correlation IDs for those API requests are random values until
this Workhorse issue is
resolved.
Related topics
- Blog post: Getting started with Git LFS
- User documentation: Git Large File Storage (LFS)
- GitLab Git Large File Storage (LFS) Administration for self-managed instances