I acquired a fairly powerful Dell Latitude 5520 laptop a while ago:

I stuck with integrated Intel graphics since it wasn’t meant for leisure. Rather, it was intended to be something I could work on while away from my desk. Battery life was important; gaming performance was irrelevant. The 15.6″ display was the best compromise between usability and portability. I can’t remember whether 4K resolution was an option, but given my aversion towards tiny text on tiny screens, I only wanted 1080p (and, in days of yore, would even have preferred 1680×1050).

Once purchased, the laptop languished in disuse for a long time because the actual idea of working while out of town was unappealing. This May, however, with two longer trips on the horizon, I decided that notwithstanding what I might want or not want, I needed access. It was becoming a significant problem to rely entirely on my phone. I have too much work to ignore it for the duration of a trip and spend days catching up later. In addition, my beloved behemoth of a PC is both a single point of failure and a chain binding me to my desk.

The time of the laptop had come.

More power, at a price

The first order of business was to reinforce the hardware further with:

I also needed to replace the battery, which had discharged completely and ceased functioning. The whole affair took me two hours:

  1. Go to the first Dell store I see (in a market focused on technology and computers) to get the dock and upgrade the parts, my phone battery at 33%.
  2. Find out they don’t have the dock: that’s only available at the combination Dell-Alienware sister shop 20 steps down, while the upgrades are available at the Dell service centre some way away.
  3. Go to the sister store and ask for the dock.
  4. Get the price and make a booking with a token payment. It’ll take 10–15 minutes to arrive from their warehouse, so go do the rest in the meantime.
  5. Trudge 10 minutes through the heat and find my way up to the mostly unmarked service centre for a variety of brands.
  6. Get my ticket and eye the large queue with trepidation.
  7. Wait for a few minutes before concluding it’s not worth it.[1]
  8. Trudge 10 more minutes back through the heat, to a building opposite the first one I went to.
  9. Head for the prominent Logitech sign on the first floor in search of the MX Anywhere 3S mouse (cousin of my desktop’s MX Master 3) only to discover it’s a warehouse or something and be redirected to the actual shop on the ground floor.
  10. Go to the actual shop.
  11. Buy the mouse, forgetting how to pay with my debit card at first, and turn around to leave.
  12. Have my attention forcibly captured by the K580 keyboard (non-ChromeOS edition) and decide to get that too since I detest laptop keyboards.
  13. Wait an inexplicably long time for this second bill.
  14. Finish paying and leave with my two items, 15 minutes after entering the shop, which has had no other customers.
  15. Go to the place selling various peripherals next door thinking I’ll pick up a headset as well but find nothing that fulfills my criteria: wireless, portable, and with a microphone.
  16. Go to another shop, also next door, also selling peripherals, but find nothing again, because everything is apparently too big.
  17. Change tack and ask about folding headphones instead of just small, compact, or portable headphones.
  18. Successfully find and buy a JBL headset that’s just right.[2]
  19. Go back to the Dell-Alienware shop.
  20. Be informed that the dock isn’t in stock; ask if they can get it from somewhere because I have to travel the next day.
  21. Wait five minutes to learn it’s in stock but the earlier price didn’t include taxes, requiring me to confirm once more that I want it and… start waiting.
  22. Lament the long line at the service centre when asked whether I was able to get my work done.
  23. Explain that I wanted RAM and a battery only to hear, ‘But we can do that here!’, much to my combined irritation and relief.
  24. Wait an hour and a half as various parts are sourced from around the market, realizing along the way that I might as well also upgrade my smaller SSD right there, and marvel at just how ugly the overpriced flagship Alienware desktop on display in front is.
  25. Watch my phone battery soon drop to 15%, leaving me with nothing to do except stare out the window unless I’m willing to completely deplete it in short order.
  26. Hear more than once that something isn’t in stock; ask every time if they can’t find some way to source it since I’m going out of town and won’t get another chance before that.
  27. Eventually, last of all, see the 512 GB Hynix SSD replaced with a 2 TB WD SN350, which isn’t ideal but will suffice, my phone by then at 6% thanks to a host of desperate power-saving measures.
  28. Double-check all my purchases.
  29. Forget to have the tiny booking fee deducted from the bill.[3]
  30. Leave the market two hours after arriving, backpack stuffed with loot.

My upgraded specs were as follows (ignoring the unchanged bits):

I would have preferred a 6-cell battery, as offered on the Latitude E5520 in America. Indeed, the engineer at the shop sheepishly pointed out there was space for one and offered to install it, but Dell simply doesn’t seem to support it on this non-E version even though the battery itself is available online, and I was disinclined to gamble.

I must admit that the dock has proven to be a bit of a waste of money so far: I have neither peripherals nor screens to connect to it, so I’ve barely even taken it out of my bag after the first day, preferring to charge the laptop using my original high-speed charger. I’ll keep lugging the thing everywhere anyway on the off-chance that it comes in handy somewhere.

Breaking free of Windows

I got my start with computers when I was around three years old, on a succession of PCs belonging to my father, all dual-booting Windows and Linux. I’ve been comfortable with both ever since, but have exclusively used Windows as my desktop OS for about 20 years, since the first machine I could call my own. While I’ve maintained my familiarity with Linux, it was always in virtual machines, containers, and so on.

Now, although Windows is a practical choice, I’m not in any way a fan. It’s frustrating, flawed, and problematic software built and controlled by a corporation with a decades-long history of user-hostile behaviour. (And I simply refuse to upgrade from Windows 10 to the disastrous 11.) Therefore, when planning this second system, I made the decision to escape the monoculture by installing Linux.

As I got things ready, I discovered Dell enables a long list of misfeatures by default: RAID via Intel RST, some sort of persistent locator from Absolute, Intel’s pre-boot network setup, and so on. I disabled all of it before installing Kubuntu (I’m most familiar with Ubuntu, so it was the easiest choice). I was impressed by how well the desktop mimicked the general appearance of Windows, though that wasn’t something I needed or wanted. More importantly, I had omitted to enable LVM and ended up with unencrypted disks.

Given that the installation only took 10 minutes in the first place, I elected to simply repeat it—normal Ubuntu this time, giving me a GNOME-based desktop[4]—and made sure to set up Full Disk Encryption. Afterwards, it needed a little extra configuration to mount the secondary drive at startup:

  1. Create a keyfile for use with LUKS:

    sudo dd if=/dev/urandom of=/root/crypttab.key bs=1024 count=4
    sudo chmod 400 /root/crypttab.key
  2. Add the key (using the UUID I got from GNOME’s Disks view):

    sudo cryptsetup luksAddKey UUID=$UUID /root/crypttab.key
  3. Open the drive in the current session (the second argument is the device-mapper name to expose it under, not a mount path):

    sudo cryptsetup -v luksOpen UUID=$UUID $MAPPER_NAME
  4. Add an entry to /etc/crypttab (where the first column is that device-mapper name, which I believe should match the label shown when unlocking the disk manually in Disks):

    echo "luks-1234-5678 UUID=$UUID /root/crypttab.key luks" | sudo tee -a /etc/crypttab

I’m told it isn’t necessary to do this with a keyfile: the single password, held in memory, can unlock the second disk just as it does the first, provided the disk is formatted and encrypted from inside the OS. Then all it should require is the /etc/crypttab entry.
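
For reference, a minimal sketch of what that keyfile-free route might leave in /etc/crypttab, assuming systemd is doing the unlocking (it caches the passphrase in the kernel keyring and retries it on later volumes); the name and UUID are placeholders:

    # hypothetical entry: 'none' means no keyfile, so the cached
    # passphrase (or a prompt) is used instead
    data2 UUID=<uuid-of-second-disk> none luks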

And then I was done. When powering on the laptop, I’m presented with a well-designed password prompt to unlock the disks. A couple of seconds after I enter the password, it transitions neatly into the Ubuntu login screen. That’s a level of polish I’ve never seen on Windows, and from a computer that wasn’t designed to run Linux in the first place! (In all fairness, my Windows machine must unlock four large hard disks alongside a few SSDs, so a longer delay is only to be expected. I can’t say the same for its more primitive approach to booting.)

The return of desktop Linux

Input has been a mixed bag. I’m relying on the Compose key to produce some of the special characters I get through my custom keyboard layouts on Windows. Keyboard Layout Converter can supposedly transform those layouts into a Linux-compatible format; I haven’t tried it. I also don’t understand how to set up the third-level chooser (a separate mechanism from Compose), which apparently provides many additional characters. Nor could I swap Ctrl & Caps Lock without GNOME Tweaks, and even then I could only make Caps Lock a second Ctrl rather than swap them entirely. On the other hand, the nifty Solaar utility lets me connect my mouse through the same wireless receiver as the keyboard instead of needing a separate one for each.
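
For what it’s worth, there appears to be an xkb option for the full swap; a sketch of how I understand it would be applied under GNOME (untested on my part):

    # 'ctrl:swapcaps' should swap the two keys outright; 'ctrl:nocaps'
    # is the second-Ctrl behaviour that Tweaks gave me
    gsettings set org.gnome.desktop.input-sources xkb-options "['ctrl:swapcaps']"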

Syncthing is an excellent tool for automatically keeping files synchronized across my systems and my phone: smart, flexible, capable, and reliable in a way that software rarely is. I was confused by what looked like ineffectual negations until I understood order is significant for includes, meaning negations must come before exclusions. The biggest issue so far is that on Windows it doesn’t seem to be able to watch continuously for file changes, so it only sends those at intervals. It also, unfortunately, can’t support symlinks there.
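
To illustrate the ordering rule mentioned above with a hypothetical .stignore (the paths are invented): the negation has to come before the exclusion it carves a hole in, or it does nothing:

    // keep this one subdirectory...
    !/Archive/keep-this
    // ...while excluding the rest of its parent; reversing these two
    // patterns would silently drop the include
    /Archive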

Sharing Git repositories is a bit tricky. I exclude the .git directory in the (one-way) mirrors on my laptop. Any work in those repositories happens in a full local clone. However, I keep tripping myself up by making changes in the latter and forgetting to push them. This isn’t a major issue when going from the desktop to the laptop, thanks to the mirrors, but I haven’t yet made up my mind whether and how to go the other way, so I can end up with forgotten changes on the laptop. I suppose the obvious answer is to just set up the inverse mirrors too, stop excluding the .git directory, and get used to pushing my changes often.
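
In the meantime, a one-liner I can run on the laptop before switching machines should catch those forgotten changes (assuming ordinary remotes and tracking branches):

    # list local commits on any branch that haven't reached any remote
    git log --branches --not --remotes --oneline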

When it comes to gaming—which was, of course, never the point—Steam just worked, though it has amusing font-related issues during updates. I plugged in the PlayStation 5 DualSense controller I use on my desktop; that worked too. I picked up right where I left off last year in Teenage Mutant Ninja Turtles: Shredder’s Revenge with the magic of cloud saves. I was genuinely impressed by how seamless that was. The much more heavyweight Aperture Desk Job refused to run: I heard the audio but the screen was black. I expected no more from the Intel Iris Xe GPU.

I was disappointed to see how few games Steam said I could run on Linux until I looked up a guide to Proton and realized those were only the ones with native support; I can force any game to use Proton, with varying compatibility documented at ProtonDB. There’s also a tool called Bottles that manages Wine prefixes for individual Windows applications and games. Given that Proton itself is based on Wine, it’s unclear what Bottles would add for Steam games specifically.

I built Emacs from source like on my desktop. It was a piece of cake under Linux, as one might expect.
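
For the record, the build amounted to little more than the usual routine, roughly as below, assuming deb-src entries are enabled for apt and leaving the configure flags to taste:

    sudo apt build-dep emacs                  # needs source repositories enabled
    git clone https://git.savannah.gnu.org/git/emacs.git
    cd emacs
    ./autogen.sh && ./configure && make -j"$(nproc)"
    sudo make install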

Safe browsing, anywhere

I wanted to use a VPN to get into my home network from wherever I may be in order to avoid exposing my traffic to insecure wireless networks, but I didn’t have any systems running 24×7 that I could use as an exit node. I compromised by creating a dedicated DigitalOcean server. My research turned up many convoluted discussions of different kinds of VPNs and their limitations. All of it boils down to two practical options:

Having settled on the second option, I came across a set of Ansible roles, Algo, that sounded almost too good to be true: near-instant WireGuard server initialization with almost no configuration required. I’m delighted to report it worked perfectly to set up the server and produce the client configuration.
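
The whole flow was roughly the following; details vary between Algo versions, so treat this as a sketch rather than a recipe:

    git clone https://github.com/trailofbits/algo && cd algo
    python3 -m venv .env && .env/bin/pip install -r requirements.txt
    # fill in users and DigitalOcean credentials in config.cfg, then:
    ./algo        # provisions the server and writes the client configs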

In my testing at home, the VPN adds 15–20ms of latency and cuts maybe 20 Mbps from my 150–200 Mbps symmetric connection over Wi-fi. I’ve left it on permanently, which causes issues with a very few websites; perhaps recreating the Droplet would help there, but it’s a minor concern. I noticed that the promising VPN button in Ubuntu’s settings panel doesn’t work correctly: if I use it to turn off the VPN, I lose all connectivity and have to restart the WireGuard systemd service, at which point the button also stops reflecting the true state.
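
The recovery itself is a one-liner, assuming the conventional wg0 interface name (the actual name depends on the client configuration):

    # bring the tunnel back after the settings toggle wedges it
    sudo systemctl restart wg-quick@wg0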

I can’t have the VPN enabled when signing into networks that have sign-in pages, as many public Wi-fi networks do. It’s usually not a problem to be outside it for that one minute, but it does bother me. wg-captive-browser is a clever solution from the author of the age encryption tool and the Go cryptography library. I’ll give it a whirl next time.

Something I still need to investigate is split tunnelling. Perhaps it would be a good idea to send all Steam traffic outside the VPN, for instance, to avoid the additional latency. I’m wary of how much configuration it might require, though, and don’t know how reliable and flexible it would be. Playing something like Shredder’s Revenge with others over the VPN is already a tolerably good experience in any case. I might just forget about it for now.
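
As far as I can tell, WireGuard’s split tunnelling is governed by AllowedIPs in the client configuration, which operates on destinations rather than applications, so steering Steam specifically would need policy routing on top. A hypothetical sketch of the distinction:

    [Peer]
    # current full tunnel: route everything through the VPN
    AllowedIPs = 0.0.0.0/0, ::/0
    # hypothetical split tunnel: only traffic for the VPN's own subnet
    # AllowedIPs = 10.49.0.0/16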

It’s an open question how I’ll deal with the VPN server malfunctioning while I’m away and relying on it. One silly temporary workaround would be a CronJob in my Kubernetes cluster that runs Terraform and other tools every n minutes to make sure the instance is up. I could use a Floating IP to ensure it has a static address, too.
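
The workaround might look something like this hypothetical manifest; the image, schedule, and Terraform wiring are stand-ins, and credentials and state storage are omitted entirely:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: ensure-vpn-server
    spec:
      schedule: "*/15 * * * *"              # every 15 minutes
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: terraform
                  image: hashicorp/terraform:latest   # assumption
                  args: ["apply", "-auto-approve"]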

Come to think of it, when I set up my NAS box, I might add a Raspberry Pi dedicated to WireGuard. I’ll need static IPs and will have to set up clients to try both external IPs (from my primary and backup connections) somehow. I could potentially use a dynamic DNS service. And I would probably add PiHole & unbound. I wonder, if I have a WireGuard server inside my network, does that mean the laptop can get in through the VPN and connect to the NAS and the workstation without either of those needing to connect to the VPN? I think so, but I’m not entirely sure.
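
My tentative answer is yes: if the Pi forwards packets for the tunnel and the laptop routes the home subnet through it, the NAS and workstation never need WireGuard themselves. A sketch under assumed interface and subnet names:

    # on the Pi's [Interface] section: forward and NAT for the LAN
    PostUp = sysctl -w net.ipv4.ip_forward=1
    PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # on the laptop's [Peer] section: send the home subnet via the tunnel
    AllowedIPs = 192.168.1.0/24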

Hook me up on the go-go

I’d like to be able to access my media library remotely, maybe with the familiar VLC media player. (My favourite player, MPC-HC, is only available for Windows.) VLC supports DLNA, which apparently means the Jellyfin ‘Media System’ is an option too.[5] Alternatives include Kodi and the official Jellyfin player. As an aside, I can’t tell whether on-the-fly HDR-to-SDR conversion can be done on the server without a GPU, or how it works with an integrated GPU on the client.

At any rate, hypothetically, to serve remote media with Jellyfin, I’d need to deploy it in Kubernetes and add a DNS route, restricted to specific IPs for safety. I could store the media quite cheaply in Backblaze B2 given that I’d only want it there temporarily, while I was away. (I haven’t taken into account the cost of uploading, downloading, or listing media, however.) Whether remote or local, I must remember to add the Playback Statistics plugin and the reports plugin for proper tracking.
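
The IP restriction, assuming the cluster uses an ingress-nginx controller, could be as small as one annotation (the range is a placeholder):

    metadata:
      annotations:
        nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"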

Ultimately, I really want that NAS box. It hopefully won’t be too long now. In the interim, I’ve settled for simply copying a subset of media onto the large internal disks whenever I’ll be away. At home, I can merrily access the full library over a network share.

The untethered life

Much of this was accomplished in a single rushed week before the first trip, and then only in my spare time. I steeled myself to encounter numerous rough edges and kept my expectations low. I never dreamed it would work so smoothly and easily. In a matter of days, I went from being a prisoner of my desk to sitting on a patio in the mountains and enjoying the bright sunlight while I chatted, surfed, played, watched, and worked. The laptop proved its worth almost immediately, as I resolved two separate work crises and handled a complex finance-related transition in my personal life during that single trip, leaving practically nothing to catch up on back at home.

Using desktop Linux for the first time in 20-odd years has been a wonderful experience. For one thing, it’s fast. I’m so used to the slow and resource-hungry Windows that I simply wasn’t prepared for how quick and responsive this laptop with one quarter the cores (or one-eighth the logical processors), a quarter as much memory, and a puny little integrated GPU would be with Linux. Snaps are a notable blemish on that experience; replacing the snap-based Firefox with the native version is on my list of things to do. Still, I’m awed by its overall fluidity and rapidity.
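
When I get around to it, the de-snapping will presumably follow the well-trodden Mozilla Team PPA route, roughly as below; the pinning details vary by guide and release:

    sudo snap remove firefox
    sudo add-apt-repository ppa:mozillateam/ppa
    # pin the PPA above the transitional snap package
    printf 'Package: firefox*\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001\n' |
      sudo tee /etc/apt/preferences.d/mozilla-firefox
    sudo apt install firefox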

The only real issues are odd behaviour related to processes being killed randomly (aspell started via Emacs being a particular offender) and something I haven’t pinpointed about how playing videos, on rare occasion, causes the X session—and even more rarely the entire system—to crash.
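
My first suspect for the random kills is the kernel’s OOM killer, which at least is easy to check for:

    # look for out-of-memory kills in the kernel log
    journalctl -k | grep -iE 'out of memory|oom'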

One perpetual annoyance is that the Tab Session Manager extension can’t seem to synchronize Firefox sessions between the desktop and laptop via Google Drive. Manually starting a sync on either machine has at various times shown messages about uploading and downloading sessions but, with one exception early on, neither instance of Firefox ever shows sessions from the other. I can only share a session by exporting it as JSON from the settings and importing said JSON on the other machine.

Going from desktop to laptop is certainly displeasing, especially the tiny screen and small keyboard, but I’m satisfied that I’ve minimized the downsides. Being able to step away from the desk is hardly the only benefit, either. This second machine has been invaluable on those occasions when my desktop was unusable thanks to malfunctioning hardware or a Windows reinstallation in progress. Slowly but surely, I’m shrinking the part of my life that’s confined to four fragile hard disks in one temperamental (if powerful) computer.

What lies ahead

This certainly wasn’t the final word in my Linux experience, only the starting point. After the first trip, I asked for advice on the DSI Discord:

So… setting up a secondary machine with Linux after nearly 20 years away from desktop Linux has gone really well so far, especially considering that I just picked Ubuntu as the easiest choice, didn’t plan ahead, and tried a bunch of things I’d never worked with before. Now I’m interested in setting up something more carefully considered with a view towards the long term, and I’m also planning to very slowly migrate my development work on the Windows side to WSL for the sake of parity.

I’d love to get people’s thoughts on a good distro for someone like me who’s reasonably well versed with Linux on servers, in containers, etc., but hasn’t been using it as a daily driver. I know that Ubuntu itself made things easier for me, so I’m not expecting everything to be smooth sailing. I just want to avoid Canonical, and I’m ready to put in the effort. I also want to do as much as I can with Nix once I have the base system set up. (I’ve used Nix as a build tool before; never like this.)

This is a fairly powerful 2022 model Dell laptop with integrated graphics, so I’d hope drivers won’t be an issue. I need to use it for everything, particularly while traveling: surfing, media, development, (light) gaming, etc. Working with creative content (photography, videos, etc.) is probably the only thing I’m not expecting to be able to do much of in Linux.

snuufix suggested Pop!_OS:

I could recommend just using good old reliable Mint for work but it’s not set up for gaming. So something like Pop_OS, maybe?

I'm using it at home myself, cos apt + optimized for Threadrippers, VM passthrough works perfectly, Steam works out of the box too, if you own a card, it has a version with included drivers, so you don't have to suffer setting that shit up. Default design seems too "trendy and pop-y" (if that's a word ) so obviously it needs tweaking. Thankfully it runs latest Gnome so basically almost everything on the internet works. Idk, I just like it.

I have to agree that I don’t much care for the æsthetics, but the tiling window manager looks exceedingly useful, as do the use of plain apt and the generalist approach to compatibility. I believe it’s the right choice for me. I just need to set aside time to migrate.

Apart from that, I need a declarative configuration for my development environment to maintain parity on Windows. I’d be interested in NixOS but I don’t think I can manage a fully Nix-based system. A better approach is to constrain its scope by using a different host OS (like Pop!_OS) on Linux and NixOS via WSL on Windows, with home-manager to manage my environment everywhere. Whenever the NAS box is up and running, I can add a Nix cache and possibly a remote builder for both to use. Maybe I can even serve the Nix store from there.
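
For the environment-management piece, the standalone home-manager bootstrap is pleasantly short; a sketch following its manual (the master channel here is an arbitrary choice):

    nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
    nix-channel --update
    nix-shell '<home-manager>' -A install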

Not really a shared configuration

As time passes and I see how good life can be under Linux, I often find myself musing on how feasible it would be to adopt it on my desktop too: I’m sure it would take a fair amount of money and a lot of tearing out my hair to set up a Windows virtual machine with passthrough so I could continue to use those applications that I can’t do without, but all the tears might be worth it to fully emancipate myself.

For the time being, a sensible middle ground would be to migrate my development work to WSL so that Windows is less of a factor. Sadly, the crucial flaw in that plan is that filesystem performance across the WSL boundary is debilitatingly slow and can never be otherwise in WSL 2:

In WSL1, file performance was slow compared to Linux because it had to emulate Linux behavior on top of the Windows IO subsystem, which has a very different design than Linux and is slow for many operations that are extremely fast in Linux and therefore used a lot by Linux tools. This was the case for both the "internal" Linux file system, and the Windows file systems mounted in /mnt, because they both use regular NTFS files.

With WSL2, the Linux file system is now an ext4 partition in a VHD. It cuts Windows out of the loop except for having to read/write from/to the VHD file, so that's much, much faster.

Windows files are now accessed across the VM boundary, however. In WSL1, we could just call the NT kernel from our lxcore.sys kernel driver, and that was that. In WSL2, every operation has to send data to the host, exit the VM, wait for the host to perform the operation (which still involves emulating Linux behavior and the cost of Windows IO operations), send data back to the VM, trigger an interrupt in the VM, schedule the virtual processor to run, and continue executing in the VM.

Essentially, Windows files are now a "remote" file system. To access them, we use the 9p file system protocol, a protocol also used by QEMU to provide host file access, and the same protocol we already used to provide Linux file access through \wsl$. Communication is done over Hyper-V sockets, which is faster than a real network, but it still has many of the same drawbacks as a network file system.

To make matter worse, to ensure the same behavior as WSL1, we don't use any caching. Typically network file system like NFS or SMB will cache both file data and attributes for some amount of time to avoid too many round trips to the server. We are in the unique position where the host and the guest are tightly integrated through things like interop. It's entirely possible that you have a script which runs a git command in WSL followed immediately by some Windows tool. In that case, any cache inconsistency between the host and the VM could cause incorrect behavior, or worse, data loss. Even the 1 second attribute cache used by NFS could be problematic for this reason.

I could imagine putting a few projects in the Linux filesystem, but with this design, there can never be transparent interoperability, mostly defeating the purpose of my shared Nix configuration. There are consequently few things that I can use NixOS for, and the middle ground I mentioned above is unattainable. Instead, I’m effectively maintaining parallel sets of configuration for Windows and Linux.

I’m not too worried. Who knows what miracles the future holds? Until seven years ago, WSL itself was an impossible idea. At the very least, I know my father would have been overjoyed to see me go from having embraced Emacs (so long ago now) to adopting Vim in Emacs to, after all these years, voluntarily setting up a Linux laptop. That I’m also still using Windows is hardly important.

On a more prosaic note, the Dell backpack developed a small tear as soon as I added all the new hardware, and lacked space as well as pockets. After three months of constant irritation, I marched into a Samsonite outlet just before my last trip, picked out their top-of-the-line backpack, liked how it felt, bought it on the spot, and had my initials embossed on it for free. It was a touch uncomfortable the first day I had it fully loaded; after I adjusted the straps, it became reasonably easy to carry, and the vastly superior layout and capacity allow me to properly organize everything.


  1. Covid was still a minor concern, for one thing.
  2. Also ask about ordering high-end Sennheiser headphones, get a business card, send a WhatsApp message later, receive no response.
  3. This was undoubtedly a deliberate omission on their part, but I suppose they earned it.
  4. Incidentally, touchpad scrolling didn’t seem to work under Kubuntu but does here.
  5. Which requires host networking if containerized.