note
Due to our focus on improving the overall availability of GitLab.com and reducing tech debt, we do not have capacity to act on this blueprint. We will re-evaluate in Q1-FY23.

Composable GitLab codebase - using Rails Engines

One of the major risks of a single codebase is the unbounded growth of the whole application. Adding more code results not only in ever-increasing resource requirements for running the application, but also in tighter application coupling and an explosion of complexity.

Executive summary

This blueprint discusses the impact of introducing Application Layers as a way to reduce complexity and improve the application codebase. It describes the positive and negative outcomes of the proposed solution and tries to estimate the impact on GitLab.com and smaller installations.

Application Layers split the GitLab Rails codebase horizontally, following the pattern of how we actually run GitLab, rather than vertically. This reflects the fact that a single feature needs to run in many different ways (CI, for example, has a Web interface, uses the API, and performs background processing), and that coupling prevents us from easily running a single feature (like CI) separately from the rest of the application.

The proposal itself allows us to disconnect some aspects of the features. These aspects could be treated as components that run separately from the rest of the stack while still sharing a large portion of the core. This model could be implemented to provide an API interface for external tooling (Runners API, Packages API, Feature Flags Unleash API) and would give us much better resiliency and a much easier way to scale the application in the future.

The actual split was tested using Rails Engines, implemented as separate gems in a single repository. Rails Engines allowed us to describe each component together with its dependencies and to run an application composed of many engines.

The blueprint aims to retain all key aspects of GitLab's success: a single, monolithic codebase (with a single data-store), while allowing us to better model the application and make our codebase more composable.

Challenges of the Monolith (a current state)

Today, the monolith proves to be challenging in many cases. A single big monolithic codebase without clear boundaries results in a number of problems and inefficiencies, among them:

  • Deep coupling makes the application harder to develop in the long term, as it leads to a spaghetti implementation instead of a more interface-based architecture
  • Deep coupling between parts of the codebase makes it harder to test. To test only a small portion of the application we usually need to run the whole test suite to confidently know which parts are affected. This can to some extent be improved by building a heuristic to aid the process, but such a heuristic is prone to errors and hard to keep accurate at all times
  • All components need to be loaded at all times in order to run only parts of the application
  • Increased resource usage, as we load parts of the application that are rarely used in a given context
  • High memory usage slows down the whole application, as longer GC cycles significantly increase request latency and worsen CPU cache usage
  • Increased application boot-up times, as we need to load and parse significantly more files
  • Longer boot-up times slow down development, as running the application or tests takes significantly longer, reducing velocity and the number of iterations

Composable codebase dimensions

In general, we can think about two ways in which a codebase can be modeled:

  • vertically in Bounded Contexts, each representing a domain of the application, ex.: All features related to CI are in a given context
  • horizontally in Application Layers: Sidekiq, GraphQL, REST API, Web Controllers, all Domain Models and Services that interface with DB directly

This blueprint explicitly talks about horizontal split and Application Layers.

Current state of Bounded Contexts (vertical split)

Bounded Contexts are a topic that has been discussed extensively a number of times over the past couple of years, as reflected in a number of issues:

We are partially executing a Bounded Contexts idea:

  • Make each team own their own namespace, where a namespace is defined as a module in the codebase
  • Make each team own their own tests, as namespaces define clear boundaries
  • Since we use namespaces, an individual contributor or reviewer knows which domain experts to ask for help with a given context

Module namespaces are actively used today to model the codebase around team boundaries. Currently, the most prominent namespaces are Ci:: and Packages::. They provide a good way to contain the code owned by a group in a well-defined structure.
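As a sketch of how such a namespace groups a team's code under one module (the service names here are hypothetical; only the Ci:: and Packages:: namespaces mirror the text above):

```ruby
# Illustrative sketch of namespace-based ownership; not actual GitLab code.
module Ci
  # Code owned by the CI group lives under Ci::.
  class CreatePipelineService
    def execute
      "pipeline created"
    end
  end
end

module Packages
  # Code owned by the Package group lives under Packages::.
  class CreatePackageService
    def execute
      "package created"
    end
  end
end
```

Note that the boundary is purely logical: nothing in this structure prevents code in Packages:: from calling into Ci:: directly.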

However, while Bounded Contexts help development, they do not help with the goals stated above. This is a purely logical split of the code and does not prevent deep coupling. It is still possible to create a circular dependency (and it often happens) between, for example, the background processing of a CI pipeline and the Runner API interface: the API can call a Sidekiq worker, and Sidekiq can use the API to create an endpoint path.
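A minimal sketch of such a cycle (all names here are hypothetical; this is not actual GitLab code):

```ruby
# The API layer schedules background work...
module RunnersApi
  def self.request_job(runner_id)
    PipelineWorker.perform(runner_id)
  end

  def self.runner_path(runner_id)
    "/api/v4/runners/#{runner_id}"
  end
end

# ...while the background layer reaches back into the API layer to build a
# path, creating a circular dependency between the two layers.
class PipelineWorker
  def self.perform(runner_id)
    RunnersApi.runner_path(runner_id)
  end
end
```

Because both directions compile and load fine, nothing in a namespace-only split flags this cycle; each layer still requires the whole application to run.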

Bounded Contexts do not make our codebase aware of what depends on what, as the whole codebase is still treated as a single package that needs to be loaded and executed.

Additional disadvantages of Bounded Contexts:

  • It can lead to tribal knowledge and duplicate code
  • The deep coupling can make it difficult to iterate and make minimal changes
  • Changes may have cascading effects that are difficult to isolate due to the vertical split

The Application Layers (horizontal split)

While we continue leveraging Bounded Contexts in the form of namespace separation, which aids the development and review process, Application Layers can provide a clean separation between different functional parts of the application.

Our main codebase (GitLab Rails, so named because GitLab runs on Ruby on Rails) consists of many implicit Application Layers. There are no clear boundaries between the layers, which results in deep coupling.

The concept of Application Layers looks at the application from the perspective of how we run it, rather than from the perspective of individual features (like CI or Packages). The GitLab application today can be decomposed into the following application layers. The list is not exhaustive, but shows the general parts of the single monolithic codebase:

  • Web Controllers: process Web requests coming from users visiting web interface
  • Web API: API calls coming from the automated tooling, in some cases also users visiting web interface
  • Web Runners API: API calls from Runners that allow a Runner to fetch new jobs or update the trace log
  • Web GraphQL: provide a flexible API interface, allowing the Web frontend to fetch only the data needed thereby reducing the amount of compute and data transfer
  • Web ActionCable: provide bi-directional connection to enable real-time features for Users visiting web interface
  • Web Feature Flags Unleash Backend: provide an Unleash-compatible Server that uses GitLab API
  • Web Packages API: provide a REST API compatible with the packaging tools: Debian, Maven, Container Registry Proxy, etc.
  • Git nodes: all code required to authorize git pull/push over SSH or HTTPS
  • Sidekiq: run background jobs
  • Services/Models/DB: all code required to maintain our database structure, data validation, business logic, and policy models that need to be shared with other components

The actual GitLab Rails split is best described as a satellite model: a single core shared across all satellite components. This design implies that satellite components have limited means of communicating with each other. In a monolithic application, components would in most cases communicate directly through code; in a satellite model, communication has to be performed externally to the component, via the database, Redis, or a well-defined exposed API.

```mermaid
flowchart TD
  subgraph Data Store
    D[Database]
    R[Redis]
  end
  subgraph Rails Engines
    subgraph Data Access Layer
      C[Core]
    end
    subgraph Web Processing
      W[Web]
    end
    subgraph Background Processing
      S[Sidekiq]
    end
  end
  C --> D & R
  W & S -- using application models --> C
  R -- push background job --> S
  W -- via async schedule --> S
  S -- via Web API --> W
```

Application Layers for on-premise installations

On-premise installations are significantly smaller, and they usually run GitLab Rails in two main flavors:

```mermaid
graph LR
  gitlab_node[GitLab Node with Load Balancer]
  gitlab_node_web[Web running Puma]
  gitlab_node_sidekiq[Background jobs running Sidekiq]
  gitlab_node_git[Git running Puma and SSH]
  subgraph GitLab Rails
    gitlab_rails_web_controllers[Controllers]
    gitlab_rails_api[API]
    gitlab_rails_api_runners[API Runner]
    gitlab_rails_graphql[GraphQL]
    gitlab_rails_actioncable[ActionCable]
    gitlab_rails_services[Services]
    gitlab_rails_models[Models]
    gitlab_rails_sidekiq[Sidekiq Workers]
  end
  postgresql_db[(PostgreSQL Database)]
  redis_db[(Redis Database)]
  gitlab_node --> gitlab_node_web
  gitlab_node --> gitlab_node_sidekiq
  gitlab_node --> gitlab_node_git
  gitlab_node_web --> gitlab_rails_web_controllers
  gitlab_node_web --> gitlab_rails_api
  gitlab_node_web --> gitlab_rails_api_runners
  gitlab_node_web --> gitlab_rails_graphql
  gitlab_node_web --> gitlab_rails_actioncable
  gitlab_node_git --> gitlab_rails_api
  gitlab_node_sidekiq --> gitlab_rails_sidekiq
  gitlab_rails_web_controllers --> gitlab_rails_services
  gitlab_rails_api --> gitlab_rails_services
  gitlab_rails_api_runners --> gitlab_rails_services
  gitlab_rails_graphql --> gitlab_rails_services
  gitlab_rails_actioncable --> gitlab_rails_services
  gitlab_rails_sidekiq --> gitlab_rails_services
  gitlab_rails_services --> gitlab_rails_models
  gitlab_rails_models --> postgresql_db
  gitlab_rails_models --> redis_db
```

Application Layers on GitLab.com

Due to its scale, GitLab.com requires much more attention to run. This is needed in order to better manage resources and provide SLAs for different functional parts. The chart below provides a simplified view of the GitLab.com application layers. It does not include all components, such as Object Storage or Gitaly nodes, but shows the GitLab Rails dependencies between the different components and how they are configured on GitLab.com today:

```mermaid
graph LR
  gitlab_com_lb[GitLab.com Load Balancer]
  gitlab_com_web[Web Nodes running Puma]
  gitlab_com_api[API Nodes running Puma]
  gitlab_com_websockets[WebSockets Nodes running Puma]
  gitlab_com_sidekiq[Background Jobs running Sidekiq]
  gitlab_com_git[Git Nodes running Puma and SSH]
  subgraph GitLab Rails
    gitlab_rails_web_controllers[Controllers]
    gitlab_rails_api[API]
    gitlab_rails_api_runners[API Runner]
    gitlab_rails_graphql[GraphQL]
    gitlab_rails_actioncable[ActionCable]
    gitlab_rails_services[Services]
    gitlab_rails_models[Models]
    gitlab_rails_sidekiq[Sidekiq Workers]
  end
  postgresql_db[(PostgreSQL Database)]
  redis_db[(Redis Database)]
  gitlab_com_lb --> gitlab_com_web
  gitlab_com_lb --> gitlab_com_api
  gitlab_com_lb --> gitlab_com_websockets
  gitlab_com_lb --> gitlab_com_git
  gitlab_com_web --> gitlab_rails_web_controllers
  gitlab_com_api --> gitlab_rails_api
  gitlab_com_api --> gitlab_rails_api_runners
  gitlab_com_api --> gitlab_rails_graphql
  gitlab_com_websockets --> gitlab_rails_actioncable
  gitlab_com_git --> gitlab_rails_api
  gitlab_com_sidekiq --> gitlab_rails_sidekiq
  gitlab_rails_web_controllers --> gitlab_rails_services
  gitlab_rails_api --> gitlab_rails_services
  gitlab_rails_api_runners --> gitlab_rails_services
  gitlab_rails_graphql --> gitlab_rails_services
  gitlab_rails_actioncable --> gitlab_rails_services
  gitlab_rails_sidekiq --> gitlab_rails_services
  gitlab_rails_services --> gitlab_rails_models
  gitlab_rails_models --> postgresql_db
  gitlab_rails_models --> redis_db
```

Layer dependencies

The differences between how GitLab is run on-premise and how we run GitLab.com reveal the main division lines in GitLab Rails:

  • Web: containing all API, all Controllers, all GraphQL and ActionCable functionality
  • Sidekiq: containing all background processing jobs
  • Core: containing all database, models, and services that need to be shared between Web and Sidekiq

Each of these top-level application layers depends on only a fraction of the codebase, with all relevant dependencies:

  • In all cases we need the underlying database structure and application models
  • In some cases we need dependent services
  • We only need a part of the application common library
  • We need gems to support the requested functionality
  • Individual layers should not use another sibling layer (tight coupling), rather connect via API, Redis or DB to share data (loose coupling)
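The loose-coupling rule in the last point can be sketched as follows: the Web layer hands work to the background layer through a shared data store instead of calling the worker class directly. Here an in-memory Array stands in for Redis, and all class names are illustrative:

```ruby
require "json"

# A shared queue; in production this would be a Redis list, not an Array.
QUEUE = []

module Web
  # The Web layer only serializes a job description onto the queue; it holds
  # no reference to the worker class itself (loose coupling).
  def self.schedule_pipeline(pipeline_id)
    QUEUE.push(JSON.generate("class" => "PipelineProcessWorker", "args" => [pipeline_id]))
  end
end

module Background
  # The background layer pops jobs off the queue and dispatches them by name;
  # it never calls into the Web layer's code directly.
  def self.work_off
    job = JSON.parse(QUEUE.shift)
    "#{job["class"]} ran with args #{job["args"].inspect}"
  end
end
```

Because the only contract between the two layers is the serialized job format, either side can be loaded, deployed, and scaled without the other's code being present.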

Proposal

The Memory group conducted a Proof of Concept (PoC) to understand the impact of introducing Application Layers. We did this to understand the complexity, the impact, and the iterations needed to execute this proposal.

The proposals here should be treated as an evaluation of the impact of this blueprint, not as a final solution to be implemented. The PoC as defined is not something that should be merged, but it serves as a basis for future work.

PoC using Rails Engines

We decided to use Rails Engines to model a Web Application Layer. The Web Engine contained Controllers, API, and GraphQL. This allowed us to run Web nodes with all dependencies, while measuring the impact on Sidekiq of not having these components loaded.

All work can be found in these merge requests:

What was done?

  • We used Rails Engines
  • 99% of the changes visible in the above MRs consist of moving files as-is
  • We moved all GraphQL code and specs into engines/web_engine/ as-is
  • We moved all API and Controllers code and specs into engines/web_engine
  • We adapted CI to test engines/web_engine/ as a self-sufficient component of the stack
  • We configured GitLab to load the web_engine gem when running Web nodes (Puma web server)
  • We disabled loading web_engine when running background processing nodes (Sidekiq)
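Conditional loading of the engine can be sketched along these lines; the gem group name, path, and environment variable are illustrative assumptions, not the actual mechanism used in the MRs:

```ruby
# Gemfile (sketch): the engine is an ordinary path-sourced gem in an
# optional group, so nodes that never serve web traffic can skip it.
group :web_engine do
  gem "web_engine", path: "engines/web_engine"
end

# config/application.rb (sketch): require the optional group only when the
# node is meant to serve web traffic.
Bundler.require(*Rails.groups)
Bundler.require(:web_engine) unless ENV["SKIP_WEB_ENGINE"]
```

With the engine left unloaded, a Sidekiq node never parses the Controller, API, or GraphQL code, which is where the memory and boot-time savings come from.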

Implementation details for proposed solution

  1. Introduce new Rails Engine for each application layer.

    We created an engines folder, which could contain different engines for each application layer we introduce in the future.

    In the above PoCs we introduced the new Web Application Layer, located in engines/web_engine folder.
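A Rails Engine in that folder needs little more than an engine class. This sketch assumes the engines/web_engine layout described above; the eager-load paths are illustrative:

```ruby
# engines/web_engine/lib/web_engine/engine.rb (sketch)
module WebEngine
  class Engine < ::Rails::Engine
    # Teach the engine where its code lives so Rails eager-loads it
    # like any other part of the application.
    config.paths.add "app/controllers", eager_load: true
    config.paths.add "app/graphql",     eager_load: true
    config.paths.add "lib",             eager_load: true
  end
end
```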

  2. Move all code and specs into engines/web_engine/

    • We moved all GraphQL code and specs into engines/web_engine/ without changing the files themselves
    • We moved all Grape API and Controllers code into engines/web_engine/ without changing the files themselves
  3. Move gems to the engines/web_engine/

    • We moved all GraphQL gems to the web_engine Gemfile
    • We moved the Grape API gem to the web_engine Gemfile
    ```ruby
    Gem::Specification.new do |spec|
      spec.
    ```
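The gemspec snippet above is truncated in the source. A minimal engine gemspec might look roughly as follows; all field values and the exact dependency list are illustrative, not taken from the MRs:

```ruby
# engines/web_engine/web_engine.gemspec (illustrative sketch)
Gem::Specification.new do |spec|
  spec.name    = "web_engine"
  spec.version = "0.1.0"
  spec.authors = ["GitLab"]
  spec.summary = "GitLab Web application layer packaged as a Rails Engine"
  spec.files   = Dir["{app,config,lib}/**/*"]

  # Web-only dependencies move out of the main Gemfile into the engine,
  # e.g. the Grape and GraphQL gems mentioned above.
  spec.add_dependency "grape"
  spec.add_dependency "graphql"
end
```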