Introduction to Immutable Linux Systems




    > 4.3. Facts §
    >  - NixOS / Guix are doing it right in my opinion

Neither one is as accessible, imo; Fedora has a long history of distro-making and it shows.


NixOS predates Fedora by a few months.


NixOS has a desktop installer now. If they had an easy way to set up Flathub and turn on auto-updates, it would be as user-friendly as Silverblue. I actually have several family members on it, since I can do that first post-install setup.


Flathub is only so-so on NixOS, because a lot of flatpaks escape their sandbox and call out to outside programs. Sometimes you need to compile a NixOS-specific version, since the paths are not the same as on a generic Linux distro. I've asked the developers to take a look at it; one of them closed the issue, and the other is going to try to make it work with steam-run.

Basically, nixpkgs is still the best way to run something on NixOS


nixpkgs is still the best way to run something, period.

Flatpak is an ugly hack that mixes together the completely unrelated tasks of packaging and sandboxing, while not being particularly good at any.


I just thought it was funny that their opinion was in their facts section.


hey, that that's their opinion is an indisputable fact!




Another really nice immutable Linux system that I'm using is VyOS. It's targeted primarily as a router OS, but you can run containers on it now, which makes it pretty versatile.

Basically, it's an image based OS that configures everything from a single config file on boot.


Same thing for OpenWrt I think. I believe it works by using a squashfs and tmpfs and using overlayfs to overlay the tmpfs on top of the r/o filesystem. But I'm not sure that fits the definition used here for immutable OS:

> We could say that a Linux LIVE-CD is immutable, because every time you boot it, you get the exact same programs running, and you can't change anything as the disk media is read only. But while the LIVE-CD is running, you can make changes to it, you can create files and directories, install packages, it's not stuck in an immutable state.
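The squashfs + tmpfs + overlayfs stacking described above can be sketched with plain mount commands. This is a dry-run illustration (it only prints the commands, since actually running them needs root), and the device and mount-point names are made up, not OpenWrt's real layout:

```shell
# Dry-run sketch of squashfs (read-only lower layer) + tmpfs (volatile
# upper layer) merged with overlayfs. Paths/devices are illustrative only.
overlay_root_sketch() {
    echo "mount -t squashfs -o ro /dev/mtdblock5 /rom"
    echo "mount -t tmpfs tmpfs /overlay"
    echo "mkdir -p /overlay/upper /overlay/work"
    echo "mount -t overlay overlay -o lowerdir=/rom,upperdir=/overlay/upper,workdir=/overlay/work /mnt"
}
overlay_root_sketch
```

Writes land in the tmpfs upper layer and vanish on reboot, while the squashfs base stays untouched, which is exactly the live-CD-like behavior the quote is describing.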


I don't understand what the big deal is. To make the system immutable, isn't it enough during boot to just:

    1. create a ram-backed filesystem
    2. copy `/`'s contents to that new filesystem
    3. optionally, unmount `/`
    4. mount the ram-backed filesystem on `/`

You can easily do that from the initramfs during boot. I'm applying this patch to the rootfs created by debootstrapping Debian Buster. You can use the system just fine and make changes as you please, but when you shut it down, it's all lost. Everything I want to keep (like the permanent storage this system makes available over sshfs/NFS) is on separate disks anyway. Sure, you need enough RAM to hold the entire rootfs (1.2G in the case of Debian Buster) and it increases boot time a bit. For server applications, I don't care at all.

    --- a/usr/share/initramfs-tools/scripts/local   2021-11-05 12:50:23.541088057 +0100
    +++ b/usr/share/initramfs-tools/scripts/local   2021-11-05 13:02:14.483203576 +0100
    @@ -180,9 +180,20 @@
        # Mount root
        # shellcheck disable=SC2086
    -   if ! mount ${roflag} ${FSTYPE:+-t "${FSTYPE}"} ${ROOTFLAGS} "${ROOT}" "${rootmnt?}"; then
    -       panic "Failed to mount ${ROOT} as root file system."
    -   fi
    +   #if ! mount ${roflag} ${FSTYPE:+-t "${FSTYPE}"} ${ROOTFLAGS} "${ROOT}" "${rootmnt?}"; then
    +   #   panic "Failed to mount ${ROOT} as root file system."
    +   #fi
    +   mkdir --parents /tmp/diskroot
    +   mount -t ${FSTYPE} ${roflag} ${ROOTFLAGS} ${ROOT} /tmp/diskroot
    +   mount -t tmpfs -o size=6G none ${rootmnt?}
    +   chmod 755 ${rootmnt}
    +   cp --force --archive --verbose /tmp/diskroot/* ${rootmnt}
    +   umount /tmp/diskroot
    +   rm -r --force /tmp/diskroot

The pros section feels a little light. The reason an immutable OS is attractive is to be able to cleanly remove files/folders from the system.

[deleted by user]

Relatedly, does anyone know if the security guarantees around distrobox have gotten any stronger? Last I looked, they promised nothing, but curious if there has been any movement there.

I would love if there was a seamless way to launch a distrobox os with a separate user home that could not touch my host system.

Likely a Real Hard Problem, but even some isolation would probably be an improvement over running everything under the same user account.


Last I looked at Distrobox, DNS didn't work on Ubuntu LTS or Debian Bookworm. It was literally useless.

How anyone can adopt an unpolished hobby project that only seems to be tested on the dev's Arch box is beyond me.


Distrobox and toolbox are developer tools, they explicitly are there as a convenience wrapper for podman/docker for development.

If you want hard separation you'd need a VM. It'd be awesome if you could just pass --firecracker to distrobox create and get that. :D


Since we’re here, does anyone have a good solution for compiling PyInstaller binaries for different Linux distros? Would it be sufficient, say, to use an Ubuntu 16 compiled binary on Ubuntu 18, 20, etc.? Presently we use different Docker images for each target distribution, but this is very resource-intensive. Our target distros are several versions of Ubuntu, CentOS, Rocky Linux, and Debian.


While I am glad Silverblue is on this list, not having Fedora CoreOS on it too is a shame. FCOS is an amazing OS to run in production and it has come a very long way since the CoreOS acquisition. I find that FCOS is a good middle ground of being usable and easy to learn while still being immutable compared to Nix.

The FCOS devs introduced a new feature called CoreOS Layering which lets you define your system in a Dockerfile and FCOS will rebase to that state and all you have to do is reboot to configure your server. It is super powerful.
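The layering workflow described above boils down to an ordinary container build plus a rebase. A minimal sketch of the pattern; the package names and registry path are illustrative, not an official example:

```dockerfile
# Start from the stock Fedora CoreOS image.
FROM quay.io/fedora/fedora-coreos:stable

# Bake extra packages into the image, then commit the result so the
# layer can be booted via rpm-ostree.
RUN rpm-ostree install htop vim && \
    ostree container commit
```

After pushing the image to a registry, the host switches to it with something like `rpm-ostree rebase ostree-unverified-registry:registry.example.com/my-fcos:latest` and a reboot.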

Anyway, if your next project needs a VM, give it a shot. I made a Python-based CLI tool to help you develop locally on a Linux workstation and create a Butane file to fit your needs. Below is the GitHub for Bupy and a good example of running an app (Paperless-ngx) on FCOS with the CoreOS Layering features.


Oh, CoreOS Layering looks really useful! I'm using openSUSE MicroOS today on some Raspberry Pis and an x86_64 server. One reason I picked MicroOS was that it was quite simple to install on a Raspberry Pi.

How hard is it to install CoreOS on a Raspberry Pi? Some installation guides on the Internet look quite complex...?


FCOS has support for RPi 4s, which works well.


For Raspberry Pi projects, I've used which has some rough edges but is great for running some podman containers as systemd services. It (and all of Fedora, IIRC) requires a Pi 2 or greater (armv7 or v8/aarch64) though.

But I haven't actually tried CoreOS on a pi yet, could be interesting.


I actually use Fedora IOT in my datacenter rack to access my switches over LTE in an oh shit scenario. My uptime on my RPi3 B+ has been fantastic with this flavor of Fedora. I plan on switching over to FCOS on a RPi 4 soon tho.


I'm currently running Fedora IoT on a pi4B, it works very well for its purpose (running containerized services on Podman+Nomad). What does CoreOS offer over IoT?


Thanks for making this tool and showing how to get started with the layering!

Do you have any thoughts you’d like to share on flatcar as the other project with CoreOS lineage?

As for me my main difficulties have been figuring out what to do with these projects in a bare metal environment. Building VM images is cool, but much of the time I want to do things like install to an existing drive or even onto a ZFS pool underneath.


I think Flatcar is alive and well. I haven't used it personally so I can't really comment much on it.

As for building VM images, I don't actually do that in my setup. I just use the base FCOS image, boot it with a barebones Butane to configure disks and then use the CoreOS Layering features to setup my workload.

If you want to use ZFS on your setup, check out which has an example of building the ZFS on Linux module so you can setup your ZFS pools.


The problem I had with flatpak and the immutable approach in general is that I can't modify them in ways that aren't supported by the developer. For example, I use DecSync to sync my calendars, but as far as I can tell, it's impossible to add the DecSync plugin to the Evolution flatpak.

Until these immutable systems support the stacking of custom overlay filesystems as a first class feature^1, people will continue to run mutable systems.

1: to account for use cases the developer can't or won't support.


> stacking of custom overlay filesystems as a first class feature

At that point, why not just use a mutable system? This reminds me of how a decade ago everybody rushed to move to NoSQL and then immediately reinvented schemata in their projects.


You'd want a mutable system that still supports snapshots. And also that uses mutation very sparingly.

Compare Haskell: both Haskell and C support both mutable variables and constant ones. But the ecosystems and idioms are very different.


You can add plugins to applications in flatpak; package these plugins as extensions.

How it is done can be seen with OBS: flathub has several OBS plugins available (com.obsproject.Studio.Plugin.*).


On the audio production side of things it's actually pretty good. The selection is pretty comprehensive and the jsons on github are simple and clear enough that you can build your own for any they missed. They're versioned with the runtime so there's the usual September problem of program A having updated while program B hasn't yet, but that's the nature of the beast.

A problem that's less well-solved is situations like Cantor, where a single application is a front-end for multiple executables. To get it to actually work you'd have to package Scilab, Sage, Maxima, Octave, R, and Julia in the flatpak itself, and these are non-trivial programs to package. There are workarounds with flatpak-spawn, but at that point why not just install the application natively? (I think the "right" answer is to set up a dbus service for like "system-octave" or whatever and have a separately flatpak'ed interpreter register it, but as aesthetically pleasing as this solution is it doesn't seem to have induced me or anybody else to actually do it...)


> add the decsync plugin to the evolution flatpak

This sort of thing is very common with GUI software on Linux. Outside of this bubble, most software comes with batteries included. E.g. SolidWorks never makes me download optional dependencies, but FreeCAD made me do it literally every 15 minutes as I moved to the next step in a CAD/CAM/sim/render workflow.

Also see

> it’s never the same 20%. Everybody uses a different set of features. In the last 10 years I have probably heard of dozens of companies who, determined not to learn from each other, tried to release “lite” word processors that only implement 20% of the features. This story is as old as the PC. Most of the time, what happens is that they give their program to a journalist to review, and the journalist reviews it by writing their review using the new word processor, and then the journalist tries to find the “word count” feature which they need because most journalists have precise word count requirements, and it’s not there, because it’s in the “80% that nobody uses,” and the journalist ends up writing a story that attempts to claim simultaneously that lite programs are good, bloat is bad, and I can’t use this damn thing ’cause it won’t count my words. If I had a dollar for every time this has happened I would be very happy.


Great link. However, I don’t see how, on Joel's account, a newcomer could ever challenge an incumbent, short of reimplementing all its features.

Certainly, Google Sheets didn’t do that. Nor has airtable (valuation considerations aside, it has seen some adoption).

In a new category, or a new market, there is no benchmark. So that’s the easy case. But sometimes new products do displace old products, and sometimes just by competing on distribution or pricing at the cost of features.

So how do you reconcile that, Joel?!


Fair point, but Sheets hasn't replaced Excel. Sheets only competes in the in-browser market and Sheets by itself does not make money for Google. If you go back a few years, Microsoft was the 80%-er that lacked features (notably real-time collaboration) whereas Google offered an entire suite of software with free real-time collaboration. People pay $6/mo to get the entire suite for their company. Gsuite is not comparable to the "lite" word processors that Joel was talking about or the open-source tools like FreeCAD that I was talking about. Both are objectively worse than their paid counterparts and extremely limited in functionality.

Also worth mentioning that Sheets has competition in this space now that in-browser 365 Excel is free. Tho neither is feature-complete vs desktop Excel


Well, it's never the same 20%, which is precisely why we have plugins and extensions even outside of the Linux GUI bubble. This includes IDEs, browsers, office suites, media production software (e.g. VSTs), and of course engineering software. For a complex software product, it is never possible to include batteries for everyone.

FreeCAD might not come with enough batteries to power a full workflow, but I have also seen entire engineering firms unable to work without a particular AutoCAD plugin.


Outside of this bubble, you see vim plugins, VSCode plugins, JetBrains IDE plugins… Photoshop plugins, Premiere plugins…


"What the hell, Steve, you had all week, now the fucking Bugle has scooped us."

"Sorry boss, been sorting out my init.el"


NixOS gives you precisely this kind of control- through a few different mechanisms. Some packages in nixpkgs, and most NixOS and home manager modules, expose a lot of configuration options where you can configure various plugins, add extra packages, etc. Nix also lets you provide overlays and overrides to add custom packages (new packages, or customized versions of existing packages) and gives you the option of changing parts of a package. If none of that is sufficient, you can even supply your own patches to the code, or build the package from your own fork of the upstream repository.
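As a concrete sketch of the overlay/override mechanism described above: this is roughly what swapping in a patched qemu looks like, where the patch filename is a hypothetical placeholder:

```nix
# An overlay that replaces qemu with a patched build, without copying
# and maintaining its whole derivation. ./my-qemu-fix.patch is a
# placeholder for your local patch file.
final: prev: {
  qemu = prev.qemu.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [ ./my-qemu-fix.patch ];
  });
}
```

Added to `nixpkgs.overlays` in a NixOS configuration (or exposed as a flake overlay), everything that depends on qemu is rebuilt against the patched version.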

In practice this is one of my favorite things about nix. I find that I contribute much more to open source in general because it’s so easy to say “replace this dependency with my version when you build this package”.


Would you have any examples of this to hand? I've been trying to find a clean way to share an altered qemu derivation that isn't just me copying and maintaining the qemu/default.nix but so far that has been the easiest.




This post is a bit old now, and might not be entirely what you were looking for, but I wrote about getting my personal blog building with nix, including an example of using an overlay and patching an upstream dependency:


Solene's blog is great overall


Glad to see EndlessOS included!

Silverblue/Sericea gets all the attention these days, but Endless is the oldest OSTree-based user distro by a long shot, and it’s still actively developed by the Endless Foundation.

It’s also the one most suitable for non-technical users. Definitely worth considering for that use case, particularly for very young users since it now includes plenty of tutorial content intended for that audience.


If you ran root on ZFS with a single / mount, and snapshotted it immediately after install, would that count as "immutable" by this logic?


If you did it before and after every system alteration, I'd count that. That's tedious and error-prone to do by hand, though, so you'd want a set of tools to do it automatically. At which point you've invented a new immutable distro.
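The "set of tools to do it automatically" can be as small as a wrapper that brackets every package transaction with snapshots. A dry-run sketch that only prints the commands; the dataset name `rpool/ROOT/default` and the use of apt-get are assumptions:

```shell
# Dry-run sketch: bracket a package transaction with ZFS snapshots.
# "rpool/ROOT/default" is a hypothetical root dataset; apt-get is
# stand-in for whatever package manager the system uses.
snapshot_wrap() {
    stamp=$(date +%Y%m%d-%H%M%S)
    echo "zfs snapshot rpool/ROOT/default@pre-$stamp"
    echo "apt-get $*"
    echo "zfs snapshot rpool/ROOT/default@post-$stamp"
}
snapshot_wrap install htop
```

If the transaction goes wrong, `zfs rollback` to the `@pre-` snapshot undoes it, which is essentially what snapper-style integrations automate for you.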


I'd say yes, to an extent, although you don't get all of the benefits of immutable Linux this way. A good immutable Linux distro does some work to make sure that

  - upgrades/changes are atomic
  - it's very clear whose responsibility it is to track what state, i.e., what parts of the filesystem are snapshotted and when
  - it's easy to revert to any snapshot at boot time
  - you're unlikely to have 'gaps' where you wish there was a snapshot but there isn't one
  - snapshotting applies to configuration changes as well as package (un)installation

You can get most of those benefits out of the box on openSUSE or other distros where the default filesystem is copy-on-write and the package manager is configured to take snapshots for you. You can also get something like that for configuration management via etckeeper, which gives you version control for system-wide config files. A distro which only takes these steps may not always be considered 'immutable Linux', as immutable distros typically go further, either in limiting what changes are possible outside the blessed methods (which generate new snapshots) or in what kinds of configuration/persistence they manage. But at some point it's just a question of degree.

As it happens, snapshotting with automatic reversion is an approach some take with NixOS to better ensure that its configuration management is comprehensive. You can force all of your persistence requirements to be explicit by reverting to an old snapshot on boot.

Blog post which (afaik) first presented this idea to the community:

Common implementation for NixOS:

Presentation on that implementation from NixCon 2023 (just a couple weeks ago!):


Wait, I thought the reason SIP on macOS was so terrible was that you couldn’t overwrite whatever system files you wanted without jumping through hoops… why would you bring an anti-feature like that to Linux?


With this, you have a structured, transactional way of writing to whatever system files you want, with rollbacks and all. On macOS, only Apple gets that.


Been using Fedora Silverblue since its release and it's absolutely the future. ostree is what everyone should be using.


Don't represent everyone.


> ostree is what everyone should be using

I currently use Ubuntu for my server, and I see there's ostree in my repos, I haven't gotten around to trying it yet but I'd like to just start versioning my system as is with it. If that's too difficult/not possible to do, then I intend to switch my server over to Silverblue at some point because I really like the idea behind ostree.


Funnily enough, an update once rendered my Silverblue install unbootable. I had to boot a live image to fix a GRUB misconfiguration. So now I know it's not invincible, but it does work well otherwise.


For what reasons?


It's a combination of two things. First, open source packages have matured to a very stable level. Fedora uses very recent packages and it just works, because they've all become very good. So to keep a desktop system stable, there is no longer any reason to stick to "sta(b)le" packages.

The other reason is snapshots, because no system is perfect. An operating system should in fact be designed around the fact that something will go wrong. Windows has had this for years, and some linux users have done it with btrfs.

But Fedora Silverblue does it seamlessly with ostree and grub, so if something does go wrong in a newly applied update, you simply choose the last working one in the grub menu and go on with your day until you have time to resolve it.

For me, this is crucial to using Fedora as my daily driver.


I suppose snapshots, so you can return to a previous state in case of trouble.


Introducing, BTRFS + Snapper.


I like the idea of ostree, but having glanced at it as a casual/intermediate user, it didn't seem as user-friendly as, say, Docker (which a home user can learn in an afternoon). It wasn't obvious how to get to "Debian distro deployed as an ostree snapshot".

Is this one of those things designed for career sysadmins/system builders only?


You don't consume ostree directly like that. What happens is that someone makes Debian ostree-enabled OCI images for users to consume and adapt. is one such effort to bring ostree to Debian.


EndlessOS (mentioned in the article) is a Debian derivative based on OSTree, in development since around 2016 (maybe earlier).

You might find their forum (intended for end users, not much about development) or some of their repos useful:


I'm familiar with endless (and their excellent team!). The current issue is that it doesn't support layering like Silverblue does via rpm-ostree.

It'd be awesome if there was a community effort around bringing the layering functionality and bootc enablement to all of Debian; then we could have our cake and eat it too!


Have you tried Nix? If so, how does it compare? I haven't tried Silverblue, but Nix also feels like it's the future.


I'm one year in and hell yes. I wonder if Red Hat realizes what they have here, not just in Silverblue but in Fedora.

I read a quora answer that estimated the Windows OS development budget at around 18 billion dollars, based on salaries. Imagine if Red Hat invested 2bn into Fedora to make it the Firefox of the desktop OS world. Just a 10% share is very significant against Microsoft.

They've come so far with so little, on the back of thousands of open source packages. That money could be used to keep those projects alive, and to sponsor them while they're being developed. Red Hat employees are already involved in a lot of them.


I'm not sure any company, Microsoft included, really cares about the personal desktop/laptop OS market at this point. Maybe Apple, but only because it's bundled with the hardware, which is what they really care about. Microsoft and IBM care far more about cloud platforms and enterprise users. Until Fedora comes out with something like Active Directory, SCCM, Sharepoint, and the Office Suite, businesses will continue to overwhelmingly use Windows.

And most of those people will use Windows at home, too, if for no other reason than to not have to learn how to use two different desktop systems.


You're conflating development budgets and marketing budgets.

Red Hat could invest 200bn into Fedora or any other project, and still people won't switch to it, because even if it's "better" by some arbitrary metric, it's not what people are used to.

It's definitely possible, but it looks like Linux is more likely to gain popularity via WSL than via Fedora or similar.


I mean, where do you think Ubuntu came from?


The day Debian will make this immutable stuff mandatory will be the day I will ditch Linux from my computer.


I found Silverblue not flexible enough for my personal computers. Maybe I use Linux in a hacky way, but not having write access to /usr or /bin or other folders drove me crazy about once every two weeks.

For example, I was using a script written by an Ubuntu user that was looking for a library with the name/location Ubuntu puts it in. Fedora uses a different name for the library. My instinct here is to create a symbolic link with the Ubuntu name that points to the Fedora rpm-managed library. Instead, to get it to work I forked the script, got it to build locally, made it look for either library name, ran local test cases, submitted the code as a PR upstream, etc. It turned something that would normally cost me one line of shell into something that took 90 minutes.


> not having write access to /usr or /bin or other folders drove me crazy about once every two weeks.

How do you keep track of your custom modifications to /usr or /bin without using the package manager? Do you record them somewhere for reference?


There are 3 ways to get what you want more easily:

1. With Fedora Silverblue 39, the most straightforward way is to add a Dockerfile layer which makes your changes directly to the base image.

2. You can also create your own RPM and make the changes there.

3. Best is to actually run your script in an Ubuntu container (distrobox, toolbx, or podman).


>1. With Fedora Silverblue 39, the most straightforward way is to add a Dockerfile layer which makes your changes directly to the base image.

Does that put the onus on you to rebuild the container image in order to receive system updates?


Or just run the script in an ubuntu environment in toolbox/distrobox?


Use toolbox and do all your CLI scripting there.


> My instinct here is to create a symbolic link with the Ubuntu name that points to the Fedora rpm managed library.

I sympathize, but also AIUI this is exactly the sort of monkey patching that these systems are trying hard to avoid; yes, it fixes your immediate problem, but it leaves an undocumented, unmanaged change in how your system finds libraries. In my personal experience, this is the kind of change that leads to machines growing weird behavior, which ends with me giving up and doing a clean reinstall.


Wow you're using Silverblue all wrong, you're even using regular Fedora wrong.

Even before I switched to Silverblue I was known to create Ubuntu containers to run tools from there.

That's what you should have done, run it in a container. If you're not comfortable with containers then the workflow in Silverblue will feel very strange.


Wouldn't it have been easier in this case to edit the script?


There is also this:

Fully statically linked.


> system upgrades aren't done on the live system
> packages changes are applied on the next boot
> you can roll back a change
>
> Depending on the implementation, a system may offer more features. But this list is what a Linux distribution should have to be labelled "immutable" at the moment.

Immutable. I do not think it means what you think it means.


Did you finish the article? The author brought up that point, with the caveat that they couldn’t think of a better term.


Why are we still talking about broken notions of immutable systems when we actually have trusted execution environments, secure boot, and cryptographically sealed strong assurances of what is executing at any point, and can do secure upgrades, even with remotely attached secure storage, across a huge fleet of machines at very large scales?


I think the definition should be:

Installing any number of packages, then removing them in any order, at any future point(s) in time, is equivalent to never having installed them at all.

This leaves some distros out, but I feel like it’s the important part of the concept.
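Put differently, the system state should be a pure function of the set of installed packages, independent of the order in which they were installed or removed. A toy model (not any real package manager) that has this property by construction:

```python
# Toy model: the visible system state is derived purely from the set of
# installed packages, so install/remove histories that end at the same
# set yield identical systems. Package names/contents are made up.

def build_state(installed):
    """Derive the filesystem contents from the installed set alone."""
    packages = {
        "foo": {"/bin/foo": "foo-v1"},
        "bar": {"/bin/bar": "bar-v1"},
    }
    state = {}
    for name in sorted(installed):
        state.update(packages[name])
    return state

# Install foo and bar, then remove them in a different order:
history = [("add", "foo"), ("add", "bar"), ("del", "foo"), ("del", "bar")]
installed = set()
for op, pkg in history:
    installed.add(pkg) if op == "add" else installed.discard(pkg)

assert build_state(installed) == build_state(set())  # back to pristine
```

Real package managers break this invariant whenever packages mutate shared state (caches, generated config, post-install scripts), which is exactly what the proposed definition would rule out.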


This is called "reproducibility", meaning the same config always results in the same system state.


How deep do you want that property to apply?

Have a look at 'uniquely represented datastructures' and 'History Independent Data Structures'.

You'd need to take special care to make sure that your block device (eg SSD) allocates blocks independent of history.


I like that. Maybe you could say the package collection forms a lattice, where there is only one state for any subset of packages no matter how you got there.


As far as I can tell, her point is that this is effectively impossible, at least for a user-facing system. Do you want all the files you wrote in your word processor or text editor gone when you uninstall those programs? All the files you downloaded from your browser gone if you uninstall it? If not, there is no reliable way to tell which files were created by a program automatically and which were created by a human using that program. You can easily enough remove everything that was created during the installation process, but not all future changes.

Beyond that, consider other changes, like say you decide to change DNS implementations and then later decide to change your default DNS server. If you uninstall the provider to change back to previous one, do you also want to change back to the old server you were using or do you want to retain that change? Personally, I'd want to only change the provider but keep the new server.

Then consider difficulties with directories shared across multiple machines. Let's say you have /home/${USER} set up as an NFS or Samba mount so you can keep the same files across multiple workstations. If a program respects XDG config dirs and stores stuff in there and you uninstall on one workstation, should it remove the files from all of them? Do you want all your devices to be identical or do you only want the home directory to be identical? There is no possible way for a package manager on a single system to know this.


I'm saying that installing and removing packages is equivalent to not installing them at all, not that if your boyfriend leaves you, uninstalling LibreOffice should undo it.


> immutability is a lie, many parts of the systems are mutable, although I don't know how to describe this family with a different word (transactional something?).

In the case of Nix, it sounds like it's more focused on reproducibility? It sounds like I should be able to take the Nix configuration file, plop it on another computer, and get the same system (except, perhaps, for /home).

Some of the others sound more like existing tools that provide snapshot/rollback capability, just with different implementations.


"Immutable" is a strange term for this:

* system upgrades aren't done on the live system

* packages changes are applied on the next boot

* you can roll back a change

That's an atomic transaction, like a database. Although having to shut down the system to do a commit is a bit much.

Microsoft put atomic transactions into their file system years ago, but file system transactions were never used much. You'd like to have an install system where all changes commit all at once, and if anything goes wrong during install, nothing commits and you roll back to the previous state. In theory a transactional file system could do that. In practice, there's probably too much other non file systems state involved.


Is anyone running diskless Alpine in production? It seems optimal to me but extremely uncommon.

In the past I’ve tried running a small USB drive in rented bare metal w/ diskless alpine, but the machine seemed to reboot randomly IIRC.


I am using this for my home lab. Extremely useful.


I did for a while on a Raspberry Pi. It was my home server for some months, but I ended up not using it.


Alpine is awesome, but as the article says, it's terribly documented. On my todo list is contributing some better RPi install docs and a more sensible A/B boot partition process.


I have been using Fedora Sericea since it came out (it's basically Fedora Silverblue, but uses Sway instead of GNOME). The system is actually pretty usable, and you don't really need to reboot after each rpm-ostree install command (`rpm-ostree apply-live` takes care of it via a systemd-based overlay).


But you still need to reboot for the new kernel to go live right? Or is the kernel switched via kexec or something?


Yes, there's nothing special about the kernel upgrade. You either reboot or kexec.


In the last two weeks I've been using Fedora Workstation. I hadn't used Linux in 20 years, and I have to say this is an incredibly improved experience.

I haven't yet needed to boot back into windows! If it stays like that for the next 6 months I'll cut over to Linux permanently and wipe the windows partition.


You may already know this but you can play most Windows games using Proton via Steam and it works really, really well.


But what about Office, Adobe, Affinity?


I don't believe anyone needs such mega-bad bloatware


your belief is not very grounded in reality unfortunately


yeah, Office 365 is a problem. If I really need that, I'll either run up a VM for Windows or use work's Citrix session (or the work laptop).


Lutris has installers for those too, and Flatpak makes Lutris' installation a breeze. Those are proprietary, but you have Krita, OnlyOffice, Inkscape, LibreOffice, Blender...


> 4.1. Pros §

> 4.1.1 you can roll back changes if something went wrong.

> 4.1.2 transactional-updates allows you to keep the system running correctly during packages changes.

Last time I needed to roll back was OpenSSL in Ubuntu 18.04 and that was on one system.

I don't think I've ever had a problem that 4.1.2 solves.

I don't want to have yet another Linux OS to solve a problem that happens once a decade.

If I wanted an immutable OS, I'd use a container OS on which I'd run apps as containers. Oh, wait, I already have that.


I agree.

Just like Wayland, they're pushing it for other reasons (breaking existing stuff, locking you in to the GNOME desktop, etc), and using spurious reasons to promote it.


Depends on what kind of machine you're running.

I purchased a 7900XTX on release day, with the Linux drivers being in a fairly rough state and new fixes being added daily. So, for the next 3-4 months, I was running Fedora Rawhide with the Koji repo added - about as bleeding edge as it gets, short of building locally from source. Rolling back definitely came in handy once or twice.

Once things stabilized, I rebased back to non-Rawhide Fedora 37 and stayed there. Then, a few weeks ago, I found out that AMD had been working on ROCM for the 7xxx series, so I've rebased to Rawhide again to play around with AI tools on the 6.6 kernel.

I've also occasionally encountered non-critical bugs, suspected it might have been fixed upstream, and temporarily rebased to Rawhide just to check out if that was the case before reporting it. Pretty nice.


There is Ubuntu Core too.


What these sorts of introductions to immutable systems always fail to consider is the other side of the coin: image-based systems. That's what I'm working on, along with many people much more skilled than me. We build OCI container images on top of vanilla Fedora Silverblue & many other editions with different desktops. Those images can then be booted into (or rather, rebased to) using rpm-ostree. This is a more robust way of extending the system than layering, and the same changes can easily be shared with and inherited by anyone. You can even make your own image really easily!

I think that VanillaOS and SUSE are working on similar things, but we're not an OS project, just a downstream from Fedora. Full support from Fedora is still underway, but with what already works, our methods are IME already some of the most robust and easy ways of delivering Nvidia drivers, for example.


Tangential, but I had my mind blown in about 2009 by a big hypervisor running Windows remote desktop hosts. I believe it was Citrix.

The VMs booted from images. The image and the mutable differencing disks were entirely in RAM (although user profiles were on spinning rust). A desktop host for 25 users would boot to accepting remote logins in about 4 seconds.

Least painful Windows system to patch.


> spinning rust

Weird phrasing. Haven't seen that before.


If you Google it, it's a pretty common phrase referring to magnetic disk as opposed to SSD. Could even be applied to drum, I suppose, if you could still find any...


Can't be that common since I've never come across it in 20+ years.

[deleted by user]

As annoying as it is irrelevant.


100% relevant and I was going to point it out to you myself.

It is OK to not know stuff. It is not OK to argue that because you didn't know something lots of other people do, the thing is obscure or nonstandard or weird.


No, it isn't relevant.

Just because you've been exposed to that term doesn't mean it's as widespread as you seem to think it is.

You remind me of people I would meet in one state, who would use certain slang or have certain customs particular to that state and insisted it was a USA wide custom, despite having never left their state.

I very quickly get the impression you are one of those "need to have the last word" types, so I won't be responding to you further.



Very common in the communities I frequent where filesystems like ZFS are discussed, such as and formerly the ZFS reddit sub, now


I can definitely see it being popular in specific groups as a kind of in-group slang. I don't think it's particularly popular as a general term though.


I'm sorry, but you are dead wrong.

I would go so far as to say this is the standard informal/slang way to distinguish rotating magnetic media from optical/solid-state media, and off the cuff I'd say it has been since the advent of consumer optical media about 30 years ago.

A definition from a decade and a half ago:

From 8Y ago:

It's in Wiktionary:

And Urban Dictionary:


> I'm sorry, but you are dead wrong.

I'm not, but I appreciate your confidence in asserting so.

Finding evidence of that term being used has no bearing on how widespread that term was. Just because you've encountered it doesn't mean it is as widespread as you seem to assume.

You remind me of people I would meet in one state, who would use certain slang or have certain customs particular to that state and insisted it was a USA wide custom, despite having never left their state.

I very quickly get the impression you are one of those "need to have the last word" types, so I won't be responding to you further.



They mean hard drives. I found Torvalds saying it:


I figured, but it's still a weird phrase.


I see that UBlue uses GitHub Actions to rebuild the images regularly to roll in package updates. Who foots the bill? Are the images also served from GitHub? Does GitHub charge for egress? What happens when a lot of users want to download the same image?


GitHub's free tier served us well. For a while we had a paid tier for GH orgs to get better builders, but I think we use something else now; I'm not too familiar with this aspect, though.

I think the bills are paid by Jorge (the kind-of "founder" of the project), though some of the other top members with jobs in the Linux/cloud world might be helping. Donation paths and such have been considered, but the bills aren't too huge, so nothing has been rushed.

For the registry, GHCR serves us entirely for free. No egress costs, no ingress costs, nothing. No plans to change providers, and I don't think they have plans to raise pricing either. We could probably find an alternative host pretty easily, though, through the cloud contacts and knowledge some of the devs here have.


I'm confused, isn't Fedora Silverblue also image-based? I thought the default installation doesn't use layering, and that layering only comes when you want to install extra RPM packages.


Yes, Fedora Silverblue is image-based. We just extend those stock images as an alternative to layering, and as an easy way of shipping the same system configuration to many people.


Isn’t an OCI image essentially layers + metadata?

How is it different from what you call “layering”?

Legit question, just trying to understand why you feel it’s an advantage.


An OCI image is pretty simple, yes, and so is the sort of image that lives in an OSTree repository. The difference is that when using `rpm-ostree`, packages installed with `rpm-ostree install` are "layered" on top of the base image, while packages in the "base image" (be it OCI or OSTree) are part of the system and thus not "layered".

Adding packages in an image has the benefit of pseudo-reproducibility (have the same image on multiple computers) and the added robustness of your base system being built elsewhere daily. Your computer just pulls the diffs. For example, there have been issues with rpmfusion on Fedora that ublue users completely avoided. Codecs & other essential rpmfusion packages are included in the images, and the rpmfusion repository is removed after they are installed. This way, if something package-related breaks, it breaks at the image-build stage, and an ordinary user won't even notice it before it is fixed.

The most noticeable benefit IMO, though, is being able to ship the same changes on top of a base image every day to multiple machines. This is not only packages but also, for example, udev rules and other QoL things like our `justfile` configuration, which has some useful scripts for adding the kargs necessary for Nvidia drivers to work, and `just update` for updating the system, flatpaks & distroboxes.
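As a sketch of what this looks like in practice, a minimal custom image is just a Containerfile on top of a stock OSTree base image (the base image tag and package names here are illustrative, not any project's actual setup):

```dockerfile
# Start from a stock Fedora Silverblue base image (tag is illustrative)
FROM quay.io/fedora-ostree-desktops/silverblue:39

# Bake packages into the image instead of layering them per-machine,
# then commit the changes so they become part of the OSTree image
RUN rpm-ostree install htop distrobox && \
    ostree container commit
```

A machine then rebases to the published image with something like `rpm-ostree rebase ostree-unverified-registry:<registry>/<image>:latest`, and every update afterward pulls only the diffs.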


On the server-side, there's Bottlerocket OS [1] (Amazon). They use A/B partitions for upgrades, and the idea is that you just run containers for anything non-base. Boot containers are used to do custom configuration at boot, and host-container (or DaemonSet, if you run K8S) is used for long-running services.



I'm less interested in immutable systems and more in pre-configured systems. NixOS with Home Manager is the one that stands out here, but the configuration is just awful. I want to have my full config in source control and know that it is the state of my current system, with anything else being wiped on reboot. Anything that changed before a reboot should be highlighted.

In my (limited) experience with something like Silverblue, the base system can be configured but when you start adding applications (like say, Firefox), it is lacking when it comes to configuring that because you're using Flatpak and I don't know of a way to tell it to both install all Flatpaks I want along with all of the configuration.

I guess there's some way of installing Flatpaks en masse, and then dotfiles can take care of the rest?
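For the "en masse" part, a plain-text list plus `flatpak install` gets you most of the way (a sketch; `flatpaks.txt` is a hypothetical file kept in source control, and `--noninteractive` assumes a reasonably recent Flatpak):

```shell
# flatpaks.txt holds one application ID per line;
# xargs appends them all to a single flatpak install invocation
xargs -a flatpaks.txt flatpak install --noninteractive flathub
```

Per-app configuration mostly lives under `~/.var/app/<app-id>/`, so dotfile tooling can manage that directory like any other.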


Have you tried the nix package manager on silverblue?

This helps keep the (I agree, awful) configuration portion of nix minimal.


> anything else being wiped on reboot

Have you tried Impermanence?
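For reference, the nix-community impermanence module lets you keep / on tmpfs and opt specific paths into persistence, which is pretty much "anything else being wiped on reboot" (a sketch; the `/persist` mount point and the path list are just examples):

```nix
# configuration.nix (fragment) — assumes the impermanence module is imported
environment.persistence."/persist" = {
  directories = [
    "/var/log"
    "/var/lib/nixos"
  ];
  files = [
    "/etc/machine-id"
  ];
};
```

Everything not listed lives on the tmpfs root and disappears on reboot.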


How does working with Docker work on immutable systems like Fedora Silverblue? E.g., developing an application in a devcontainer (like Toolbox, to avoid having to install all the devtools on the ostree base) and then building and debugging a Docker container from within the devcontainer? Or am I thinking about this the wrong way?

Any good blogposts on developer workflows on Silverblue?


VSCode with the devcontainer and docker is great on Silverblue.

I've been prototyping some developer workflows with friends here:

So far the major patterns are vscode with distrobox, vscode with devcontainers, vscode with devpod, jetbrains toolbox thing (which just runs everything out of the home directory, the OS doesn't care).

And then devbox/nix and homebrew in ~ is also an option if you're into that.
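For the devcontainer route specifically, the development environment is described by a small JSON file in the repo, so the host OS never needs the toolchain (a sketch; the image and extension names are illustrative):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "my-project",
  // Container image providing the whole toolchain (illustrative tag)
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "customizations": {
    "vscode": {
      // Extensions installed inside the container, not on the host
      "extensions": ["ms-python.python"]
    }
  }
}
```

VSCode (or devpod) reads this file, builds/pulls the container, and runs the editor backend inside it, which is why the immutable host stays untouched.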


This sounds painful for no reason. Especially the debugging part. Why would you want that? Is it really such a strain to install the tools you need for development on your computer?

I can understand wanting an immutable system for a server, as it will likely cut down on maintenance, but for personal use... that just sends shivers down my spine... As someone having to support other (especially not very savvy) programmers when it comes to tool usage and their environment I hate to imagine having to deal with someone who'd want that kind of setup.


You don't know anyone who develops in containers? It's a pretty common pattern these days.


Yes, of course I do, but there's always some degree of mutability. Eg. typically you'd mount some local volumes into container for example...

But even with "escape hatches" programming in container is very painful and uncomfortable. I've only ever seen this done by people who chose to work on a very restrictive system (perhaps for its appeal to their aesthetic feelings rather than any practical concerns). Or, maybe, their employer has bad IT, which both strictly enforces the rules and creates rules that acutely inconvenience the employees. In either case, it's a big hit to productivity. But, in some cases, there weren't much in terms of productivity to begin with (the programmer was bad with or without good programming environment), so the losses are imperceptible.


Good timing, I'm minutes away from installing openSUSE Aeon.

MicroOS Desktop turned into Kalpa (KDE) and Aeon (GNOME), but the latter has all the momentum.


> MicroOS Desktop turned into Kalpa (KDE) and Aeon (GNOME), but the latter has all the momentum.

Honestly tying the OS to the desktop environment is the only reason I haven't installed microos. Making a derivative with my preferred DE is on my "maybe someday" list, but that's a long list...


As someone who refuses to use anything except a heavily-customized AwesomeWM setup, I wholeheartedly agree ^_^


I am currently on Mint, but will soon switch to Fedora Silverblue again, and then I'll encapsulate things for dev with Distrobox, which can also encapsulate the home directory and export apps to the host system, see

That's basically what I want, encapsulation and buttons :)
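Roughly, the Distrobox flow looks like this (a sketch; the container name, image tag, and exported app are just examples):

```shell
# Create a mutable Fedora container that shares your home directory
distrobox create --name dev --image registry.fedoraproject.org/fedora:40

# Get a shell inside it and install whatever you like with dnf
distrobox enter dev

# From inside the container: put an installed GUI app in the host's app menu
distrobox-export --app firefox
```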


I feel pretty strongly against the idea of immutable infrastructure when you're running "infrastructure" (shared systems, running other people's software), but this article isn't about that.

In my observation (and in datasets that I have access to), computer systems tend to follow the "infant-mortality" curve. This means that if they run for a little bit, they're likely to run for a long time (and in addition, if you have many of them, they tend to die around the same time). My conjecture is that many computer systems have initialization routines which are not as thoroughly tested as the normal operating state of the system. Because of this, we tend to run into more issues with "immutable" systems than with "mutable" ones.


This could be used as an argument in favour of immutability. If you effectively spin up a "new" computer every time you restart a box, then it forcibly surfaces any issues with your deployment or first-run.

It's not like you don't have all the same issues with deploying your long-lived mutable systems -- you're just feeling the pain less frequently, and postponing all the work to debug them until you're in e.g. a disaster recovery scenario.


No mention of proprietary drivers such as Nvidia's?


They work perfectly fine on NixOS at least, can't speak for the others.


NixOS is in some ways the best distro for NVIDIA, because it just holds your upgrade back if there's a compatibility problem between the NVIDIA proprietary driver and the kernel, instead of breaking your system or installing a non-usable kernel alongside your working ones.
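On the configuration side, it's a couple of declarative options rather than a driver installer (a sketch of a NixOS config fragment; option names shift a bit between releases, e.g. `hardware.opengl` was renamed `hardware.graphics`):

```nix
# configuration.nix (fragment); also needs nixpkgs.config.allowUnfree = true
services.xserver.videoDrivers = [ "nvidia" ];
hardware.nvidia = {
  modesetting.enable = true;
  # Pin the driver branch; NixOS refuses to build incompatible kernel/driver pairs
  package = config.boot.kernelPackages.nvidiaPackages.stable;
};
hardware.opengl.enable = true;
```

Because the driver is built against `boot.kernelPackages`, an incompatible combination fails at evaluation/build time instead of producing an unbootable system.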


I'm really, really happy with my current setup of Fedora immutable + toolbox [0]. This tool lets you create rootless containers that are fully integrated with the system, so you have access to a regular mutable system, can install whatever without layering on the base system, run graphical apps, etc. while still having everything inside a container in your home directory. That means no Flatpak required. Highly recommended.
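A sketch of that workflow (the container name and packages are just examples):

```shell
# Create and enter a rootless Fedora container integrated with $HOME
toolbox create dev
toolbox enter dev

# Inside the toolbox: a normal, mutable, dnf-managed userland
sudo dnf install -y gcc make

# Graphical apps, networking and your home directory all carry over from the host
```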



Our team from Triton DataCenter & SmartOS[1] is also working on an immutable Linux distribution[2] based on Debian + ZFS + LXC.

Currently this is supported on Triton DataCenter only, but our internal roadmap has us building a standalone version similar to how folks use SmartOS standalone.




Does MX Linux Frugal count as immutable?


How would you approach making an immutable, live-CD-like Linux? No persistence at all; just boot it and run some app. Think some kind of presentation panel that shows a predefined program/URL. Ideally net-booted, to avoid having storage at all.


The more extreme form of that is to have the OS run from read-only memory. Some embedded systems work that way. Reset, and you're back to the cold start state. QNX can be built to run that way, for systems with no disk.


Incidentally, Kali is meant to be used this way, so you can use it to perform forensics on a potentially compromised system without inadvertently changing anything and destroying evidence. You can trivially do this with any live CD by putting it on write-once, read-many media. You typically need at least /var and /tmp to be writable, but that can be accomplished via tmpfs, so they only write to memory and not disk. You don't really need to do anything to enable this at the distro level, other than maybe make those mounts the default. In practice, live CDs tend to mount the root filesystem as SquashFS, which is also read-only at the filesystem level, and then use OverlayFS for paths that need to be writable for software to work, not retaining the writable layer on shutdown.

If you mean how you would do it yourself, you can use the tooling used by real distros. I'm not sure what tools they all provide, but Archiso is probably the simplest to understand and modify because it's purely shell scripts.
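The SquashFS + tmpfs + OverlayFS arrangement described above boils down to a few mounts, usually done in the initramfs (a sketch that needs root; the paths are illustrative):

```shell
# Read-only system image shipped on the boot media
mount -t squashfs /cdrom/filesystem.squashfs /mnt/ro

# RAM-backed scratch space for the writable layer
mount -t tmpfs tmpfs /mnt/rw
mkdir -p /mnt/rw/upper /mnt/rw/work

# Overlay: reads fall through to squashfs, writes land in tmpfs
# and vanish on shutdown
mount -t overlay overlay \
    -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
    /mnt/root
```

The initramfs then pivots into /mnt/root and boots normally; since the upper layer is RAM-only, every boot starts from the pristine image.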


Thanks I'll look into the arch links.


The initramfs-overlay package became a trivial install once OverlayFS was accepted into the kernel a few years back.

This meant the mess systemd creates each boot could be dropped into the RAM drive with zero impact on the OS image, effectively turning any Debian-based system into a read-only backing image while retaining the ability to boot into a normal writable system with a single boot flag.

This trick is a lot less finicky these days. =)


I got into Tinycore this summer. Useful complement to the security philosophy "One OS, one function" which is kinda the thing behind Qubes, Tails and Whonix we talked about here a few days ago.

It's so light, you can spin up VMs, one for a mail-server, one for a database, one for a firewall/router, each in a couple of seconds.

Tinycore is itself immutable, so you add a vdisk with a "package" and some config, mark it read-only, and job done. A single Virsh script handles the startup and shutdown of "services" - each being a Tinycore instance. Fun, and robust so far, but not sure if I'd put it into anyone's production just yet.


Yeah TinyCore is always missing from these intros to immutable Linux… it’s been around a while and has a great design (though the implementation leaves some to be desired, and probably has no corporate backing to explain people not knowing about it…)

It’s solid and simple, unlike these other immutable Linux distros


Off topic, but does anyone know how to find out where mutable data for NixOS modules are stored at (e.g. the data directory for a database) without reading the source? Occasionally, it's mildly annoying, and would be comforting to know with certainty where all my state is.


If it’s a systemd service (which it usually is), you can simply run `systemctl cat <service in question>`, which pretty much always will have the WorkingDirectory property set, or RuntimeDirectory or similar. (RuntimeDirectory you’ll have to prefix with /var/run, which you just sort of have to know, but that’s not NixOS-specific.)


It's rather unlikely the RuntimeDirectory= contains state as it's wiped on service stop unless RuntimeDirectoryPreserve= is set. NB: these days, /var/run/ is a symlink to /run/


I’ve found that’s usually in /var or ~. Can’t think of any instance where it wasn’t one of those.


Unfortunately, that's not possible. However, as NixOS modules usually are systemd services, the StateDirectory= of the service is a good starting point (`systemctl cat <service>`).
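Concretely, the systemd directory directives map to fixed locations, so the unit file usually tells you where everything lives (a fragment for a hypothetical service):

```ini
# /etc/systemd/system/mydb.service (fragment, hypothetical service)
[Service]
StateDirectory=mydb      ; persistent state -> /var/lib/mydb
CacheDirectory=mydb      ; cache            -> /var/cache/mydb
RuntimeDirectory=mydb    ; wiped on stop    -> /run/mydb
```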


It's interesting to watch these immutable images being adopted in the wider computing community. This has been a thing forever in the embedded world with Yocto/PetaLinux/Buildroot images. The image is usually a straight disk image and is read-only in operation. This doesn't inherently fix all the insecure IoT stuff, as you usually need some way to reimage the device with updates, and it takes at least some skill to do the bootloader signature verification right. It does help, though, as well as keep things deterministic.


It has not been possible/practical to use immutable system images until recent advances like ostree/flatpak/systemd-homed.

We didn't "just discover" some secret only known to the embedded community, lots of people have been working toward this exact goal for a long time, because we already knew for a long time that it has a lot of advantages.


"It has not been possible/practical to use immutable system images until recent advances like ostree/flatpak/systemd-homed."

No, you can just mount overlayfs with a RAM backing, for example.
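Concretely (a sketch that needs root; the mount points are illustrative):

```shell
# tmpfs-backed upper layer over a read-only lower: all writes stay in RAM
mount -t tmpfs tmpfs /rw && mkdir -p /rw/upper /rw/work
mount -t overlay overlay \
    -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /merged
```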