Suranyami

Polyglot developer, geometric tessellation fan, ambient DJ.

Just before Christmas, I spent an hour or so setting up uncloud on my homelab, and I am stunned at how easy it was to get working.

The motivation: I've known for a long time that Swarmpit is basically abandoned. Disappointing, but true. The latest release of DietPi, my preferred distro for my Raspberry Pi and RockChip SBCs, included updates to docker and docker-compose that completely broke interoperability with Swarmpit. Cue panicked hunting for alternatives, and a fortuitous discovery of Uncloud.

Here's what I've done:

  • Added a wildcard DNS record pointing *.suranyami.com to my dynamic DNS address: suranyami.duckdns.org.
  • Installed tailscale on each of the machines (Installation Instructions), and connected them to my free tailnet (free tier allows up to 100 nodes). This gives me a stable URL for each individual machine that I can SSH into without needing to do NAT redirection on the router. For instance, my machine called node1 is available to me (and only me) at ssh dietpi@node1.tailxxxxx.ts.net.
  • Updated my ~/.ssh/config with entries for all the machines that look like this:
Host node1
  Hostname node1.tailxxxxx.ts.net
  User dietpi
  • Installed uncloud on my laptop: curl -fsS https://get.uncloud.run/install.sh | sh
  • Initialized the cluster by picking one of the above machines as a first server: uc machine init dietpi@node1.tailxxxxx.ts.net --name node1
  • Added other machines using uc machine add dietpi@node2.tailxxxxx.ts.net --name node2
  • Deployed services using uc deploy -f plex.yml, where plex.yml is a subset of a docker-compose file with minor changes. For instance, to deploy to a specific machine (which I have to do for Plex, because I need to redirect port 32400 from the router to one specific machine — Plex is annoying like that), I do this:
services:
  plex:
    image: linuxserver/plex:arm64v8-latest
    # ...
    x-machines:
      - node2
    x-ports:
      - 32400:32400@host
      - plex.suranyami.com:32400/https

And that's about it. No reverse-proxy configuration, no manual entry of IP addresses, everything is just automatically given a letsencrypt SSL certificate and load-balanced to wherever the servers are running.

This is honestly the easiest way to self-host anything I've found.

It's been 2 weeks or so now, and now that I've got the knack of the x-ports port-mapping syntax, I've managed to get all my other services running everywhere too.

Notable edge cases were:

Minecraft

x-ports:
  - 25565:25565@host

Plex

x-ports:
  - 32400:32400@host
  - plex.suranyami.com:32400/https

Needed 2 mappings: one on the internal subnet for use by the AppleTV, because of some idiosyncrasy of how the native Plex app behaves behind NAT versus over t'interwebz.

Jellyfin

x-ports:
  - 1900:1900@host
  - 7359:7359@host
  - jellyfin.suranyami.com:8096/https

The only outages I've had so far were purely hardware-related: the robo-vacuum somehow knocked out a power cord that was already loose… derp. That won't happen again. And the fan software wasn't installed on my RockPi 4 NAS box, so it overheated and shut down. Fixed that this morning.

Global deployment

I'm currently using Netdata to monitor my nodes. It's WAY overkill for what I'm running, but hey, whatever. For this we need to do a global deployment:

services:
  netdata:
    image: netdata/netdata:latest
    hostname: "{{.Node.Hostname}}"
# ...
    volumes:
# ...
      - /etc/hostname:/host/etc/hostname:ro
    deploy:
      mode: global

This is essentially the same as a normal docker-swarm compose file, but because it's not actually docker-swarm, the - /etc/hostname:/host/etc/hostname:ro volume mount is a hack to get the host's hostname into the container.

There is also a quirk that (hopefully) might be fixed in future versions of uncloud: the volumes don't get created automatically on each machine. For that I had to execute a bunch of uc volume create commands like this:

uc volume create netdataconfig -m node2
uc volume create netdataconfig -m node3
uc volume create netdataconfig -m node4
uc volume create netdatalib -m node2
uc volume create netdatalib -m node3
uc volume create netdatalib -m node4
uc volume create netdatacache -m node2
uc volume create netdatacache -m node3
uc volume create netdatacache -m node4
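
With this many volume/machine combinations, a small shell loop saves the repetition. This sketch just prints the commands (using the same volume and node names as above) so you can eyeball them; pipe the output to sh to actually run them:

```shell
# Generate the nine `uc volume create` commands with a nested loop.
for vol in netdataconfig netdatalib netdatacache; do
  for node in node2 node3 node4; do
    echo "uc volume create $vol -m $node"
  done
done
# Pipe the output to `sh` to actually execute the commands.
```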

Replicated deployment

One very nice feature is replicated deployment with automatic load balancing. There's not a lot of documentation about how it works at the moment, so I'm a bit suss on it, but essentially it looks like this in the compose file:

    deploy:
      mode: replicated
      replicas: 4

This will cause it to pick a random set of machines and deploy a container on each, and load-balance incoming requests.
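
As a concrete sketch, here's roughly what a full replicated service might look like. The domain is hypothetical, and traefik/whoami is just a handy stateless test image (it listens on port 80 and echoes request info), so there's no shared-state problem:

```yaml
services:
  whoami:
    image: traefik/whoami:latest   # stateless demo container
    x-ports:
      # hypothetical domain; same x-ports syntax as the examples above
      - whoami.suranyami.com:80/https
    deploy:
      mode: replicated
      replicas: 4
```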

There are caveats to this, of course. The service configuration will need to be on a shared volume, for instance, and some services do NOT behave well in this situation. Plex is the worst example of this… if you store its configuration, caches and DB on a shared volume, you are gonna have a very bad time indeed: race conditions, non-atomic writes, file corruption, etc.

Which is a shame, because Plex is the service I'd most like to see replicated. I dunno what the solution is. Using something other than Plex seems like the most obvious answer, but as far as I know the alternatives have the same issue.

Discuss...

  1. Oil, filtered from the previous deep fry, repackaged in a Nikka whisky bottle, because they’re cute, and Nikka whisky is frikkin’ fantastic, so of course we’ve got some old bottles lying around.

  2. Soya sauce, decanted daintily into a maple syrup bottle, which is totally not at all confusing sometimes.

  3. Salt and pepper shakers that are big and chunky and sit in a really crappy wooden base I made to stop them falling over all the time because I’m the clumsiest person I know.

  4. HomePod, playing bleepy noises. The other half of the stereo pair is on the other side of the kitchen, because stereo separation is important and don’t lecture me about how speakers are arranged. Tonight there was a decent selection featuring Wolfram Spyra, Space Frogs, International People’s Gang, Woob, Si Begg, Basement Jaxx, Tosca, and a fun Grimes track from before she went a bit mental after hanging out with mister nazi-baby-maker.

  5. The handle of my Chinese cleaver, the knife I use for literally everything.

  6. The handles of 2 quite nice Global brand knives that were a wedding present from a family member. These are very good Japanese knives. The other knives (and these) are all hanging from the magnetic knife holder I installed.

  7. Preserved lemons. Gotta start using them now, because it’s been long enough. Better decant some into smaller bottles to give away. Funny that I don’t have any Moroccan cook books. Probably still traumatized by “that event in Morocco” 25+ years ago.

  8. Garlic oil. Such an amazingly simple thing to make, and you end up with 2 x awesome things. Chop up garlic. Fry in oil till crispy. Scoop out crispy garlic and use as topping. Add garlic-infused oil to literally anything to make it taste amazing.

  9. Super-cheap oil spray thingy from Aldi. Think it was $10. So useful.

  10. Rosemary-salt. In a blender, combine rosemary, salt crystals, lemon rind, peppercorns. Whizz. Zero to hero.

  11. Left-over Sichuan pepper-salt. Grind Sichuan peppercorns with salt. Left over from making 白切鸡, bái qiē jī, “white-cooked chicken”, then shallow-frying half the chicken the following day for a lovely crispy-skin experience.

Discuss...

Sometimes, I just want to know “what's the IP address” of a machine, and I don't want to see every damn network adapter: loopback, virtual, wireless or wired…

Here's the wall of text that ip addr typically returns on a Raspberry Pi running various docker containers:

dietpi@eon:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e4:5f:01:67:d8:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.238/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2403:5811:6ed:0:e65f:1ff:fe67:d810/64 scope global dynamic mngtmpaddr 
       valid_lft 86212sec preferred_lft 14212sec
    inet6 fe80::e65f:1ff:fe67:d810/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:7f:98:38:dc brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:69:e3:50:d0 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
    inet6 fe80::42:69ff:fee3:50d0/64 scope link 
       valid_lft forever preferred_lft forever
1034: veth5012f18@if1033: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 0a:a4:4f:99:a3:ca brd ff:ff:ff:ff:ff:ff link-netnsid 21
    inet6 fe80::8a4:4fff:fe99:a3ca/64 scope link 
       valid_lft forever preferred_lft forever
1038: veth0a22673@if1037: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 5a:a6:f4:85:ad:01 brd ff:ff:ff:ff:ff:ff link-netnsid 22
    inet6 fe80::58a6:f4ff:fe85:ad01/64 scope link 
       valid_lft forever preferred_lft forever
1040: veth7324567@if1039: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 06:7a:fc:bc:ba:3b brd ff:ff:ff:ff:ff:ff link-netnsid 24
    inet6 fe80::47a:fcff:febc:ba3b/64 scope link 
       valid_lft forever preferred_lft forever
1062: vethf34c1d6@if1061: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether be:97:31:ce:d1:20 brd ff:ff:ff:ff:ff:ff link-netnsid 27
    inet6 fe80::bc97:31ff:fece:d120/64 scope link 
       valid_lft forever preferred_lft forever
1068: vethd64996a@if1067: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 2e:10:e5:36:24:cc brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::2c10:e5ff:fe36:24cc/64 scope link 
       valid_lft forever preferred_lft forever
976: veth5190b0e@if975: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 06:bb:39:60:81:26 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::4bb:39ff:fe60:8126/64 scope link 
       valid_lft forever preferred_lft forever
984: veth88f88fe@if983: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 66:cb:ec:91:57:68 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::64cb:ecff:fe91:5768/64 scope link 
       valid_lft forever preferred_lft forever
1007: veth49e1be0@if1006: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether fa:4d:cc:02:3c:6a brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::f84d:ccff:fe02:3c6a/64 scope link 
       valid_lft forever preferred_lft forever
1016: veth7a3a81d@if1015: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 36:75:87:35:42:7f brd ff:ff:ff:ff:ff:ff link-netnsid 18
    inet6 fe80::3475:87ff:fe35:427f/64 scope link 
       valid_lft forever preferred_lft forever
1022: vetha80dc19@if1021: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 72:d5:af:ce:a5:6c brd ff:ff:ff:ff:ff:ff link-netnsid 20
    inet6 fe80::70d5:afff:fece:a56c/64 scope link 
       valid_lft forever preferred_lft forever

So, what's the alternative?

These worked for me:

$ hostname -I | awk '{print $1}'
192.168.1.238

or by asking the routing table which source address would be used to reach a public IP (Google's DNS server, 8.8.8.8, here):

$ ip route get 8.8.8.8 | awk '{print $7}'
192.168.1.238
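
One caveat with the $7 approach: the position of the source address in ip route get output can shift between systems (extra fields like uid push it around), so field 7 isn't guaranteed everywhere. A slightly more robust version scans for the src keyword instead. Here's a demo against a sample output line (addresses are from the example above):

```shell
# Sample `ip route get 8.8.8.8` output line:
sample='8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.238 uid 1000'

# Print the field that follows the "src" keyword, wherever it appears:
echo "$sample" | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i + 1)}'
# prints 192.168.1.238
```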

Discuss...

These last few years, I've been trying out a Kamado Joe smoker.

It's been challenging.

There are a number of things that make them completely different to a normal barbecue. I previously had a simple Weber barbecue. It worked fine for hot and fast cooking and after a bit of burn-down and ash-over, you could easily do a few low & slow things as long as you kept your expectations within an hour or two. There was no thermal insulation to speak of, except for a dome of steel with an adjustable outlet.

A smoker of the sort similar to a Kamado Joe (there are plenty of them…) is a different animal altogether.

The first thing to know is this: refractory bricks stay hot for up to 24 hours.

The insides of these smokers are modelled on the best pizza ovens and metal refineries. Heat dissipates, and that's bad if you need to keep something hot: it means continually supplying fuel.

That's where refractory bricks come in: they reflect heat back where it came from.

Because of these bricks, a Kamado Joe needs to be running for quite some time before the outside even gets slightly hot.

This presents 2 problems:

  1. If I want something to cook low and slow for a long time, I need lots of fuel.
  2. If I ignite a lot of fuel, everything will get very hot and the food will generally end up with a texture like leather.

So what's the solution?

I found 2 helpful procedures that, after experimenting with them, I can verify make things a lot easier:

  1. The “minion method”. This entails making a semi-circular chain of charcoal with smoking chunks positioned above each section, such that when lit at one end, it slowly burns through the entire chain, a bit like a very slow fuse.

  2. A tray of water. This is so obvious in retrospect! Ideally, the internal temperature should be in the 120-150°C range. Water provides a beneficial feedback effect: too hot and the steam suppresses the burning; too cool and it doesn't do much except act as thermal inertia.

A combination of the 2 above hints, allowing the first burn to properly ash over, and not moving the air inlets/outlets too chaotically has led to much more predictable outcomes.

Today's bounty was:

  • Totally bodaciously tender pork shoulder with brown sugar, mustard and left-over Nong-Shim Ramyun spicy instant noodle powder rub.
  • Tender-as roast chicken from the last 2 hours of the above pork cook, prepped with lemon zest, smoked paprika and oregano rub with olive oil and smoked salt.
  • Purple Japanese sweet potatoes wrapped in foil with olive oil, pepper and salt.
  • Smoked long, sweet peppers
  • Baked and smoked aubergine.

One of the things that made all of the above much, much easier was having a decent temperature probe. I have a Meater probe, and it's worth every cent.

Discuss...

Create a network for nginx-proxy-manager to use:

docker network create nginx-proxy-manager-network

Then add this to all the compose files that want to be referenced by Nginx-Proxy-Manager:

networks:
  nginx-proxy-manager-network:
    external: true

Now you can use the service name as the host in the Nginx-Proxy-Manager GUI:

Source                   Destination
awwesome.suranyami.com   http://awwesome:8088
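
For reference, here's roughly what the compose file for such a service looks like (the image name is a placeholder; the service name awwesome and port 8088 match the table above):

```yaml
services:
  awwesome:
    image: example/awwesome:latest   # placeholder image
    networks:
      - nginx-proxy-manager-network  # join the shared external network

networks:
  nginx-proxy-manager-network:
    external: true
```

Because both containers share the network, Nginx-Proxy-Manager resolves awwesome by service name over Docker's internal DNS; no ports need to be published on the host.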

Discuss...

My current Homelab server rack:

  • Turing Pi 2 cluster, now with 4 x RK1 3588 8-core arm_64 CPUs with 32GB RAM each (not shown in pictures below), 6 TOPS NPU, in a 2U Silverstone rack-mount case, fans, 8-bay hot-swappable SSDs, 4TB SSDs x 4
  • a bunch of Raspberry Pi 4s (4GB x 2, 8GB x 1, 1TB NVMe SSDs in Argon One cases)
  • Radxa Rockchip 5B 16GB RAM, 4TB NVMe. 8-core arm_64, 6 TOPS NPU + case
  • Argon Eon, RasPi 4, 4GB RAM 4 x 4TB SSD
  • Radxa Penta NAS kit with RockChip 4 4GB + 4 x 4TB SATA SSDs
  • UPS + surge protection, because we had a power surge during a storm here a few months ago that destroyed one of the RasPi CM4s I had in the Turing Pi 2, a power supply, and a 15” portable LCD monitor… expensive power surge! It’s the 21st century… you’d think these things wouldn’t happen any more, but they do.

This is all orchestrated by a Portainer/Docker Swarm setup running:

  • Nginx-Proxy-Manager (Simple reverse proxy manager)
  • DuckDNS (Dynamic DNS)
  • Minecraft Server (suranyami.duckdns.org:25565)
  • Awwesome Self-Hosted Browser link (no login required)
  • Excalidraw (FOSS collaborative drawing webapp, no login required)
  • Homarr (Home page for all my services)
  • Home Assistant (IoT control for smart devices)
  • Jackett
  • Joplin Server (Knowledge Base)
  • Netdata monitoring on most nodes
  • Ollama + Ollama WebUI (really slow ATM… installing NPU drivers this weekend, coz the RockChip 3588s have 6 TOPS of neural processing)
  • Overseerr
  • Plex Media Server
  • Radarr
  • Sonarr
  • Tautulli
  • Tdarr
  • IT-Tools (No login needed). Check it out! It’s very useful!
  • Transmission (Torrents)
  • Uptime-Kuma (Uptime monitoring)
  • WG-Easy Wireguard VPN management
  • GlusterFS distributed File System with 2 x redundancy, 1 unified storage volume of 18TB in total

Discuss...

Today I needed two different layouts for public-facing and authorised pages in a LiveView app.

After an annoying amount of digging in documentation and forums, the following was the most elegant solution I found.

Assume we have these layouts in myapp/lib/myapp_web/components/layouts/:

authenticated.html.heex
public.html.heex

Also assume there's something different in each layout: stuff you can't use unless signed in.

In lib/myapp_web/router.ex, modify the authentication routes to add a layout: option to the live_session calls:


  scope "/", MyappWeb do
    pipe_through [:browser, :redirect_if_user_is_authenticated]

    live_session :redirect_if_user_is_authenticated,
      on_mount: [{MyappWeb.UserAuth, :redirect_if_user_is_authenticated}],
      layout: {MyappWeb.Layouts, :public} do
      live "/users/register", UserRegistrationLive, :new
      live "/users/log_in", UserLoginLive, :new
      live "/users/reset_password", UserForgotPasswordLive, :new
      live "/users/reset_password/:token", UserResetPasswordLive, :edit
    end

    post "/users/log_in", UserSessionController, :create
  end

  scope "/", MyappWeb do
    pipe_through [:browser, :require_authenticated_user]

    live_session :require_authenticated_user,
      on_mount: [{MyappWeb.UserAuth, :ensure_authenticated}],
      layout: {MyappWeb.Layouts, :authenticated} do
      live "/dashboard", DashboardLive.Show, :show
      live "/users/settings", UserSettingsLive, :edit
      live "/users/settings/confirm_email/:token", UserSettingsLive, :confirm_email
    end
  end

The important lines here are:

      layout: {MyappWeb.Layouts, :public} do

and

      layout: {MyappWeb.Layouts, :authenticated} do

Simple!

Discuss...

Here's a top tip — something to avoid:

Ironbark smoke is abso-fucking-lutely vile. Do not use a couple of ironbark logs in your smoker unless they have completely burnt down to embers and can be used to ignite something that doesn’t make everything taste like sadness and other people’s headaches.

I just managed to ruin an entire barbecue this way. Learn from my stupidity: just never do this.

Discuss...

Countries

Language

日本語 (にほんご, nihongo) — Japanese (language)

英語 (えいご, eigo) — English (language)

Nationality

日本人 (にほんじん, nihonjin) — Japanese (person)

Discuss...
