Doing dumb stuff with docker and saving containers for offline installs
Once I started using Docker extensively, I wanted to always have offline copies of the base container images, in case they were ever pulled down. This post lays out the two ways I archive containers: one in a local Forgejo registry and the other as raw container files using docker save.
Using the internet over the last 20 years has taught me that if I find something useful, or if it is important to me, I need to make a local copy. Time and again I get burned counting on something digital being available forever.
I also want local copies because I’m a prepper and a data hoarder. Having local copies means the internet can go down, most likely from a personal financial catastrophe, and I have a snapshot of the things that were the most important to me. This is also why I run a local copy of an apt repo and make offline copies of flatpaks, on top of a lot of other stuff.
Running a container registry
About this time last year I started learning git. When I did, I set up a self-hosted instance of Forgejo on my homelab so I had a safe place for testing. I didn’t want to start uploading things to Github and expose more data than I intended. Plus, I’m an avid self-hoster, so spinning up my own instance before using Github is expected behavior from me.
Recently, I learned that Forgejo has a built-in container registry. Using standard Docker tools, you can re-tag images and then push them to your own container registry, making it so you can put your own registry location in compose files and they will pull down like normal, just without ever leaving your own infrastructure.
I’m not a dev and I’m sure this feature is designed for people who are building their own containers and are constantly push/pulling them in their work. All I want to do is save certain Docker images on my LAN so I can pull them even without the internet and don’t lose them if a project changes direction, touches grass, or pulls a Mullenweg.
Personally, I think Forgejo’s container registry is pretty fucking rad and it’s easy to set up. In fact, you don’t need to do anything in Forgejo itself. All you need to do is get a container image, re-tag it, and then push it to your Forgejo instance.
How-to push a container to Forgejo
First, login:
docker login example.com
Then, find an existing container image and create a new tag. Here is how I did it with an existing image on my laptop, replacing the URL with my Forgejo URL and username:
docker tag lscr.io/linuxserver/nginx:latest example.com/USER/nginx:latest
Now we can push it to Forgejo:
docker push example.com/USER/nginx:latest
We can tag this any way we want. I used latest here, but x86_20250111 also works as a tag. This is important if I have multiple versions of containers and later want to purge out certain dates.
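As a sketch of that dated-tag idea, here is how a tag string can be built in a shell script; the architecture prefix, registry, and image name are all placeholders, not real endpoints:

```shell
#!/bin/bash
# Build a dated tag like x86_20250111 so old versions can be purged by date.
# example.com, USER, and nginx below are placeholders for your own values.
arch="x86"
today=$(date +%Y%m%d)
tag="${arch}_${today}"
echo "example.com/USER/nginx:${tag}"
```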
To use it, pull like normal but reference the right registry:
docker pull example.com/USER/nginx
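The same reference works in a compose file. This is a made-up sketch of a service; only the image: line matters, pointed at the private registry instead of Docker Hub:

```yaml
services:
  nginx:
    # Pulled from the self-hosted Forgejo registry instead of Docker Hub
    image: example.com/USER/nginx:latest
    ports:
      - "8080:80"
```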
Custom containers
Obviously you can also use this for managing your own custom containers. Whether you modify an existing container for your personal needs or build one from scratch, this system works for any OCI container.
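As a hypothetical sketch, a custom container can be as small as extending an existing image; the base image is real, but the copied directory and image name are made up:

```dockerfile
# Extend an existing image with a personal tweak, then tag and push it
# to the Forgejo registry exactly like the nginx example above.
FROM lscr.io/linuxserver/nginx:latest
COPY my-site/ /config/www/
```

Build it with the registry reference already in the tag, something like docker build -t example.com/USER/my-site:latest . followed by the same docker push as before.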
Automating
Let me say up front, I am not a fan of A.I. and I have tried it multiple times with awful results. Recently, I read a post from a dev talking about using A.I. to help them finish more side projects. These aren’t their main projects or anything they code for a living. I’m talking creating things for fun or only for personal use. So, I decided to give Gemini a try with helping me make a bash script to automate grabbing containers, re-tagging them, and then pushing to my Forgejo instance.
I am a hobbyist and a landscape contractor by trade. I never went to college and I am completely self-taught for everything with computers, which is why homelabbing has been so fun for me. I get to learn so much, on my own time, all as a hobby. This also means I know fuck all about programming, development, or system administration.
I also don’t have any local friends or family who know anything about what I enjoy doing in my free time. I can see how using a tool like Gemini is helpful as a project partner. I don’t take anything it gives me as truth. For this project I used it to get started and to give me things to research, then modified what it gave me.
In the end, I was able to create this fairly basic script to automate pushing to my registry. If you want to use this script:
- Replace "example-container-#" with the remote container you wish to save.
- Replace YOUR_PRIVATE_REGISTRY with your Forgejo/Gitea instance domain. Leave off the https://.
- The end of the script will remove the images from the local system. Up to you what you want to do with them.
- Add as many containers as you wish. My list is 32 containers I manage.
#!/bin/bash
# Define the list of containers to pull
containers=(
"example-container-1"
"example-container-2"
"example-container-3"
)
# Get today's date in YYYYMMDD format
today=$(date +%Y%m%d)
# Prompt for registry credentials so they never live in the script itself
read -p "Username for YOUR_PRIVATE_REGISTRY: " username
read -sp "Password for YOUR_PRIVATE_REGISTRY: " password
echo
# Log in, passing the password on stdin so it is not exposed in the process list
echo "$password" | docker login -u "$username" --password-stdin YOUR_PRIVATE_REGISTRY
# Pull, re-tag, and push each container
for container in "${containers[@]}"; do
    echo "Pulling container: $container"
    if docker pull "$container"; then
        echo "Successfully pulled $container"
        # Strip the registry path and any existing tag from the image name
        image_name="${container##*/}"
        image_name="${image_name%%:*}"
        # Re-tag with the architecture and today's date as the version
        new_tag="YOUR_PRIVATE_REGISTRY/${image_name}:x86-${today}"
        echo "Re-tagging $container as $new_tag"
        docker tag "$container" "$new_tag"
        echo "Pushing $new_tag to registry"
        docker push "$new_tag"
        # Remove the original image
        docker rmi "$container"
        # Remove the intermediate re-tagged image
        docker rmi "$new_tag"
    else
        echo "Error pulling $container"
    fi
done
echo "All container pull, re-tag, push, and removal operations completed."
Using this script I can automate the process and even schedule it with cron.
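One caveat for scheduling: the script prompts for a password, so a cron job needs the login handled ahead of time (after a manual docker login, Docker keeps the credentials in ~/.docker/config.json). With that done, a crontab entry could look like this; the script path and log path are assumptions:

```shell
# Hypothetical crontab entry: mirror containers every Sunday at 03:00
0 3 * * 0  /home/USER/bin/mirror-containers.sh >> /home/USER/logs/mirror.log 2>&1
```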
Saving without a registry
Although the above way is super awesome, I prefer to simply use the docker save command to export container images and then use docker load on another system. Doing it this way means I can use a portable drive to move the files between systems without needing a server.
This is helpful for two reasons:
- I can carry them with me in my EDC bag and install on my laptop, or any other device I wish.
- If I’m in an offline situation, I doubt I will have my homelab running.
Having the containers on Forgejo makes working on my LAN easier. But, for a data hoarder who puts offline functionality first, having all of the containers just as tar files on a drive is a much simpler (and less likely to fail) setup.
How-to save a container
Once you’ve identified the container you want to save and share via sneakernet, save it like this:
docker save -o output/path/filename.tar name:tag
Here is an example from my archive:
docker save -o /mnt/usb_storage/libreoffice_$(date +%Y-%m-%d).tar lscr.io/linuxserver/libreoffice:latest
In this command I am saving to /mnt/usb_storage, with the date appended to the file name so I know when it was saved. The last argument is the image to save.
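That single command extends naturally to a loop over everything I archive. This is a minimal sketch, assuming the image list and the /mnt/usb_storage mount point are swapped for your own:

```shell
#!/bin/bash
# Save a list of images to a portable drive, one dated tar per image.
# The image list and mount point are placeholders for your own.
images=(
  "lscr.io/linuxserver/libreoffice:latest"
  "lscr.io/linuxserver/nginx:latest"
)
dest="/mnt/usb_storage"
today=$(date +%Y-%m-%d)

for image in "${images[@]}"; do
  name="${image##*/}"   # drop the registry path
  name="${name%%:*}"    # drop the tag
  file="${dest}/${name}_${today}.tar"
  echo "Saving ${image} to ${file}"
  if ! docker save -o "${file}" "${image}"; then
    echo "Failed to save ${image}" >&2
  fi
done
```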
How to load a container
Getting it onto the new system is simple. Obviously, you are gonna need Docker installed and configured on the new system. Then, you can load the image onto the system like this:
docker load -i /path/to/file/filename.tar
Once this completes, you can run docker images and see it on the system. Now you can use it like you would normally, either with docker run or in docker compose files.
Doing this can obviously be scripted and automated with cron.
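A hedged sketch of the load side, assuming the tars live on a drive mounted at /mnt/usb_storage:

```shell
#!/bin/bash
# Load every saved image tar found on the portable drive.
# The mount point is a placeholder for wherever your drive lives.
src="/mnt/usb_storage"

for tarball in "${src}"/*.tar; do
  [ -e "$tarball" ] || continue   # skip if the glob matched nothing
  echo "Loading ${tarball}"
  if ! docker load -i "$tarball"; then
    echo "Failed to load ${tarball}" >&2
  fi
done
```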
Cool, but why?
As someone who lived in a time before the internet, I never take having always on and fast internet for granted. This is one of the reasons why I dislike “cloud native” approaches, like Nix or immutable distros like Silverblue. They completely depend on always having access to the internet. I want all my stuff to be available offline.
This became more important as Docker, the company, started messing with Docker Hub. For lack of a better term, it is only a matter of time before Docker and Github enshittify their platforms.
Last reason: Why not? This has been a reason to better learn Docker, bash scripting, and Forgejo. Even if I don’t keep the registry or I delete all my personal archives, it was still a useful project and I learned a lot about all these systems and all it cost me was some data bits stored on a hard drive.
- - - - -
Did you like this post? Give it an upvote by clicking on the arrows below! Sending me an upvote is like you and I giving each other a high five.
🙏 😎
Thank you for reading! If you would like to comment on this post you can start a conversation on the Fediverse. Message me on Mastodon at @cinimodev@masto.ctms.me. Or, you may email me at blog.discourse904@8alias.com. This is an intentionally masked email address that will be forwarded to the correct inbox. If you enjoy the random stuff I write here, post to Mastodon, or watch on YouTube, and are feeling generous, I am open to tips on Ko-fi.