Getting Started
The cache server is available as a Docker image and can be deployed via Docker Compose or Kubernetes.
1. Deploying the Cache Server
Using Docker Compose
services:
  cache-server:
    image: ghcr.io/falcondev-oss/github-actions-cache-server:latest
    ports:
      - '3000:3000'
    environment:
      API_BASE_URL: http://localhost:3000
    volumes:
      - cache-data:/app/.data

volumes:
  cache-data:
Using Kubernetes with Helm
You can deploy the cache server in Kubernetes using the Helm chart hosted in an OCI repository.
Prerequisites
- Helm version 3.8.0 or later (required for OCI support).
- A running Kubernetes cluster.
Steps
- Install the Helm chart:
helm install <release-name> oci://ghcr.io/falcondev-oss/charts/github-actions-cache-server
Replace <release-name> with your desired release name (e.g., cache-server). This will deploy the cache server with all default values.
- Verify the deployment:
kubectl get deployments
kubectl get svc
Ensure the deployment <release-name>-github-actions-cache-server is running and the service is accessible.
Customizing the Deployment
To customize the deployment, you can override the default values by creating a values.yaml file.
For all possible configuration options, refer to the values.yaml file.
For more details on customizing Helm charts, see Customizing the Chart Before Installing in the Helm documentation.
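For example, a minimal values.yaml overriding a couple of settings could look like the sketch below. The keys shown (replicaCount, env) are illustrative placeholders, not confirmed chart options; check the chart's values.yaml for the names it actually exposes.

# values.yaml (sketch; key names are hypothetical placeholders)
replicaCount: 1
env:
  API_BASE_URL: http://cache.example.com   # example URL, must be reachable from your runners
  STORAGE_DRIVER: filesystem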
Then install the chart with your custom values:
helm install <release-name> oci://ghcr.io/falcondev-oss/charts/github-actions-cache-server -f values.yaml
Environment Variables
API_BASE_URL
- Example:
http://localhost:3000
The base URL of your cache server. This needs to be accessible by your runners as it is used for making API requests and downloading cached files.
STORAGE_DRIVER
- Default:
filesystem
The storage driver to use for storing cache data. For more information, see Storage Drivers.
DB_DRIVER
- Default:
sqlite
The database driver to use for storing cache metadata. For more information, see Database Drivers.
ENABLE_DIRECT_DOWNLOADS
- Default:
false
If set to true, the server will send a signed URL to the runner. The runner can then download the cache directly from the storage provider. This is useful if you have a large cache and don't want to proxy the download through the cache server.
CACHE_CLEANUP_OLDER_THAN_DAYS
- Default:
90
The number of days to keep stale cache data and metadata before deleting it. Set to 0 to disable cache cleanup.
CACHE_CLEANUP_CRON
- Default:
0 0 * * *
The cron schedule for running the cache cleanup job.
UPLOAD_CLEANUP_CRON
- Default:
*/10 * * * *
The cron schedule for running the upload cleanup job. This job will delete any dangling (failed or incomplete) uploads.
NITRO_PORT
- Default:
3000
The port the server should listen on.
TEMP_DIR
- Default: the OS temp directory
The directory to use for temporary files.
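As a rough sketch, several of these variables could be set together in the environment section of the Docker Compose file from step 1; the values below are illustrative examples, not recommendations:

# inside the cache-server service definition (values are examples only)
environment:
  API_BASE_URL: http://cache.example.com:3000   # must be reachable from your runners
  STORAGE_DRIVER: filesystem
  DB_DRIVER: sqlite
  ENABLE_DIRECT_DOWNLOADS: 'true'
  CACHE_CLEANUP_OLDER_THAN_DAYS: '30'
  CACHE_CLEANUP_CRON: '0 3 * * *'
  UPLOAD_CLEANUP_CRON: '*/10 * * * *'
  NITRO_PORT: '3000'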
2. Setup with Self-Hosted Runners
Set the ACTIONS_RESULTS_URL environment variable on your runner to your cache server's API base URL (with a trailing slash).
Runner Configuration
For Docker:
FROM ghcr.io/actions/actions-runner:latest
# Modify runner binary to retain custom ACTIONS_RESULTS_URL
RUN sed -i 's/\x41\x00\x43\x00\x54\x00\x49\x00\x4F\x00\x4E\x00\x53\x00\x5F\x00\x52\x00\x45\x00\x53\x00\x55\x00\x4C\x00\x54\x00\x53\x00\x5F\x00\x55\x00\x52\x00\x4C\x00/\x41\x00\x43\x00\x54\x00\x49\x00\x4F\x00\x4E\x00\x53\x00\x5F\x00\x52\x00\x45\x00\x53\x00\x55\x00\x4C\x00\x54\x00\x53\x00\x5F\x00\x4F\x00\x52\x00\x4C\x00/g' /home/runner/bin/Runner.Worker.dll
For bare-metal runners, the same patch applies to the runner installation:
sed -i 's/\x41\x00\x43\x00\x54\x00\x49\x00\x4F\x00\x4E\x00\x53\x00\x5F\x00\x52\x00\x45\x00\x53\x00\x55\x00\x4C\x00\x54\x00\x53\x00\x5F\x00\x55\x00\x52\x00\x4C\x00/\x41\x00\x43\x00\x54\x00\x49\x00\x4F\x00\x4E\x00\x53\x00\x5F\x00\x52\x00\x45\x00\x53\x00\x55\x00\x4C\x00\x54\x00\x53\x00\x5F\x00\x4F\x00\x52\x00\x4C\x00/g' /path_to_your_runner/bin/Runner.Worker.dll
This patch prevents the runner from overwriting your custom ACTIONS_RESULTS_URL.
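As a sketch, a Docker Compose service for a runner built from the Dockerfile above could pass the URL like this; the build context, service names, and URL are assumptions for illustration, and note the trailing slash:

services:
  runner:
    build: .   # Dockerfile containing the sed patch above
    environment:
      ACTIONS_RESULTS_URL: http://cache-server:3000/   # trailing slash required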
3. Using the Cache Server
There is no need to change any of your workflows! 🔥
If you've set up your self-hosted runners correctly, they will automatically use the cache server for caching.
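For example, a job like the following minimal sketch (the cached path and key are illustrative) will read and write through your cache server as soon as it runs on a patched self-hosted runner, using the standard actions/cache action:

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      # the usual cache action works unchanged against the cache server
      - uses: actions/cache@v4
        with:
          path: ~/.npm                                     # example path to cache
          key: npm-${{ hashFiles('package-lock.json') }}   # example cache key
      - run: npm ci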