

Except k3s does not provide a deb, a flatpak, or an rpm.
Canonical’s snap uses a proprietary backend, and comes with a risk of vendor lock-in to their ecosystem.
The bash installer is fully open source.
You can make the bad decision of locking yourself into a closed ecosystem, but many sensible people recognize that snap is “of the devil” for a good reason.
I’ve tried snap, juju, and Canonical’s suite. They were uniquely frustrating and I’m not interested in interacting with them again.
The future of installing system components like k3s on generic distros is probably systemd sysexts, which are extension images that can be overlaid onto a base system. They’re designed for immutable distros, but they can be used on any standard-enough distro.
There is a k3s sysext, but it’s still in the “bakery”. Plus, sysext isn’t in stable-release distros anyway.
Until it’s out and stable, I’ll stick to the one-time bash script to install SUSE’s k3s.
I think that distributing general software via curl | sh is pretty bad, for all the usual reasons that piping an installer script from the internet into your shell is bad and frustrating.
But I do make an exception for “platforms” and package managers. The question I ask myself is: “Does this software enable me to install more software from a variety of programming languages?”
If the answer to that question is yes, which it is for k3s, then I think it’s an acceptable exception. curl | sh is okay for bootstrapping things like Nix on non-Nix systems, because it gets you a package manager that can install various versions of tools which would otherwise ask you to install themselves with curl | bash, and then you can use Nix instead.
K3s is pretty similar, because Kubernetes is a whole platform, with its own package manager (helm) and applications you can install. It’s especially difficult to get the latest versions of Kubernetes on stable-release distros, since they don’t package it at all, so getting it from the developers is kinda the only way to get it installed.
Relevant discussion on another thread: https://programming.dev/post/33626778/18025432
One of my frustrations that I express in the linked discussion is that it’s “developers” who are making bash scripts to install. But k3s is not just developers: it’s made by SUSE, who have their own distro, openSUSE, and it’s built using openSUSE tooling. It’s “packagers” making k3s and its install script, and that’s another reason why I find it more acceptable.
that all those CD tools were specifically tailored to run as workers in a deployment pipeline
That’s CI 🙃
Confusing terms, but yeah. With ArgoCD and FluxCD, they just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install “helmreleases” but argo has something similar.
garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.
Here is an example of authentik deployed using helm and fluxcd.
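For reference, a Flux HelmRelease for a chart like authentik looks roughly like the sketch below. The repo URL, namespaces, and values here are illustrative placeholders, not copies of my config:

```yaml
# Tell Flux where the chart's Helm repository lives (URL is a placeholder)
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: authentik
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.goauthentik.io
---
# Tell Flux to install/upgrade the chart and keep it reconciled
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: authentik
  namespace: authentik
spec:
  interval: 30m
  chart:
    spec:
      chart: authentik
      sourceRef:
        kind: HelmRepository
        name: authentik
        namespace: flux-system
  values:
    authentik:
      secret_key: "replace-me"   # in practice this would come from a SOPS or sealed secret
```

Flux notices changes to files like this in the git repo and runs the equivalent helm install/upgrade for you.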
Firstly, I want to say that I started with podman (an alternative to docker) and ansible, but I quickly ran into issues. The last issue I encountered, and the last straw, was that Ansible would not actually change a container’s configuration unless I used it to destroy and recreate the container.
Without quadlets, podman manages its own state, which has issues, and that was the entire reason I was looking into alternatives to podman for managing state.
More research: https://github.com/linux-system-roles/podman: I found an ansible role to generate podman quadlets, but I don’t really want to include ansible roles in my existing ansible roles. Also, it takes Kubernetes YAML as input, which is very complex for what I am trying to do. At that point, why not just use a single-node kubernetes cluster and let kubernetes manage state?
So I switched to Kubernetes.
To answer some of your questions:
Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible ??
So what I (and the industry) use is called “GitOps”. Essentially, you have a git repo, and software on the cluster automatically pulls that repo and applies the configs.
Here is my gitops repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options like Rancher’s Fleet or the most popular ArgoCD.
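The core of a Flux setup boils down to two objects: one that says which git repo to pull, and one that says which path in that repo to apply to the cluster. As a rough sketch (the repo URL, names, and paths below are illustrative placeholders, and flux bootstrap normally generates these for you):

```yaml
# Which git repo Flux should pull (URL and branch are placeholders)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-config
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/flux-config
  ref:
    branch: main
---
# Which path inside that repo to apply to the cluster, continuously
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-config
  path: ./apps
  prune: true   # remove things from the cluster when they're deleted from git
```

Flux reconciles these on the given intervals, so committing to the repo is how you “deploy”.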
As a tip, you can search GitHub for pieces of code to reuse. I usually search path:*.y*ml keyword1 keyword2 to find appropriate pieces of yaml.
I see little to no example on how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
So the first issue is that Kubernetes doesn’t really operate on “containers”. Instead, the smallest deployable unit in Kubernetes is a “pod”, which is a collection of containers that share a network namespace (and optionally storage). Of course, pods for selfhosted services like the type this community is interested in will rarely have more than one container in them.
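As a sketch of what that looks like, here is a minimal single-container pod for something like navidrome (the image, port, and paths are just an example, and in practice you’d usually wrap this in a Deployment rather than a bare Pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: navidrome
  labels:
    app: navidrome
spec:
  containers:
    - name: navidrome                   # usually the only container in the pod
      image: deluan/navidrome:latest    # the same image you'd point docker-compose at
      ports:
        - containerPort: 4533
      volumeMounts:
        - name: music
          mountPath: /music
  volumes:
    - name: music
      hostPath:
        path: /srv/music                # illustrative; a PersistentVolumeClaim is more typical
```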
There are ways to convert a docker-compose file to kubernetes manifests (kompose, for example).
But in general, Kubernetes doesn’t use compose files for premade services, but instead helm charts. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.
Even official doc seems broken. Am I really supposed to run many helm commands (some of them how just fails) and try and get ssl certs just to have Rancher and its dashboard
So what you’re supposed to do is deploy an “ingress” (k3s comes with traefik by default), and then use cert-manager to automatically get letsencrypt certs for ingress “objects”.
Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it is also compatible with other ingress software.
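As a rough sketch, once cert-manager is installed and a ClusterIssuer named something like letsencrypt exists, a single annotation on the ingress object is enough to get a certificate issued and renewed. Hostnames, namespaces, and service names below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # cert-manager watches for this annotation
spec:
  ingressClassName: traefik          # k3s ships traefik as the default ingress controller
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls          # cert-manager creates and renews this secret
```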
Although it seems complex, I’ve come to really, really love Kubernetes because of the features mentioned here. Especially the declarative part, where all my services are defined as code in a git repo.
Maybe nginx proxy manager can do this.
I took a look through the twitter, which someone mentioned in another thread.
Given the 4chan-like aesthetic of your twitter post, I decided to take a look through the boards, and it took me less than a minute to find the n word being used.
Oh, and all the accounts are truly anonymous, rather than pseudonymous, which must make moderation a nightmare. Moderation being technically possible doesn’t make it easy or practical to do.
I don’t want an unmoderated experience by default, either.
No, I’m good. I think I’ll stay far away from plebbit.
To be pedantic, lemmy is federated, rather than decentralized (e.g. a direct p2p architecture).
With decentralization, moderation is much harder than with federation, so many people aren’t a fan.
I’m not spotting it. “AI” is only mentioned once.
The key and secret in the docker compose don’t seem to be API keys, but keys for directus itself (which, upon a careful reread of the article, I realize is not FOSS, which might be another reason people don’t like it).
Directus does seem to have some integration with openai, but it requires at least an api key and this blog post doesn’t mention any of that.
The current setup they are using doesn’t seem to actually connect to openai at all.
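For context, the environment section of a typical directus compose file looks something like the sketch below (values are illustrative). KEY and SECRET are directus’s own instance key and token-signing secret, not credentials for any external AI service:

```yaml
services:
  directus:
    image: directus/directus:latest
    ports:
      - "8055:8055"
    environment:
      KEY: "some-random-uuid"        # directus's own instance key
      SECRET: "another-random-uuid"  # used by directus to sign its auth tokens
      # note: no OPENAI_API_KEY or similar anywhere here
```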
There are a few reasons why I really like it being public, even though it means I have to be careful not to share sensitive stuff.
This isn’t exactly what you want, but I use a static site generator called quarto, which has a full-text search engine that operates entirely locally (although there are other options).
Although I call it a “blog”, it really is more of a personal data dump for me, where I put all my notes down and also record all my processes as I work through projects. Whenever I am redoing something I know I did in an old project, or something I saved here (but disguised as a blogpost), I can just search for it.
Here is my site: https://moonpiedumplings.github.io/ . You can try search at the top right (requires javascript).
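If you want something similar, the local search is just part of quarto’s website config. A minimal _quarto.yml looks roughly like this (title and theme are placeholders):

```yaml
# Minimal _quarto.yml for a website with built-in client-side search
project:
  type: website

website:
  title: "My notes dump"
  search:
    location: navbar   # puts the search box in the top bar
    type: overlay      # search runs entirely in the browser, no server needed

format:
  html:
    theme: cosmo
```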
Lol I misread it too.
There is literally no way to do performant e2ee at large scale. e2ee works by encrypting every message for every recipient, on the user’s device.
At 1000 users, that’s basically a public room.
There is a source port of at least Portal 1.
https://github.com/AruMoon/source-engine
Here’s the active fork of the original project. Going through the issues of the original project, it seems to have support for building for 64-bit platforms.
No Portal 2 support though. Although mentioned in the issues of nileusr’s repo is this: https://github.com/EpicSentry/P2ASW, which is interesting.
This is only one half of the open source picture. Those scripts are not powershell or bash scripts, but instead something similar to Ansible playbooks, run through the Windows AME wizard.
Which I cannot find the source code for. Great!
I think this is the command-line-only version, but the GUI version appears to be closed source.
No, this one is different. It’s not an ISO you download (those are extremely sus and you would be right to be skeptical of them), but instead an open source set of scripts you apply to an existing Windows OS.
Edit: see my comment below, it seems to be partially closed source.
There are open source tools for analyzing if github stars are fake, and they work reliably.
The kind of people that fake reviews/stars target are not the kind of people that are going to be verifying things.
As long as Amazon doesn’t crack down, there isn’t really a need to game the system.
So instead you decided to go with Canonical’s snap and its proprietary backend, a non-standard deployment tool that was forced on the community.
Do you avoid all containers because they weren’t the standard way of deploying software for “decades” as well? (I know people that actually do do that though). And many of my issues about developers and vendoring, which I have mentioned in the other thread I linked earlier, apply to containers as well.
In fact, they also apply to snap, and to any custom packages distributed by the developer. Arch packages are little more than shell scripts; deb packages have pre/post hooks which run arbitrary bash or python code; rpm is similar. These “hooks” are almost always used for install-time tasks. It’s hypocritical to be against curl | bash but in favor of any form of packages distributed by the developers themselves, because all of the issues and problems with curl | bash apply to any form of non-distro-distributed packages, including snaps. You are willing to criticize a bash script because you can’t immediately know what it does to your machine, and I recognize those problems, but guess what snap is doing under the hood to install software: a bash script. Did you read that bash script before installing the microk8s snap? Did you read the tens of others in the repos used for tertiary tasks that the snap installer also calls?
The bash script used for installation doesn’t seem to be sandboxed, either, and it runs as root. I struggle to see any difference between this and a generic bash script used to install software.
Although, to be fair, almost all package managers make common use of pre/post install hooks (Nix/Guix being the exceptions), so it’s not really a valid criticism to put, say, deb on a pedestal while dogging on other package managers for using arbitrary bash (or python) hooks.
But back on topic: in addition to all this, you can’t even verify that the bash script in the repo is the one you’re getting, because the snap backend is proprietary. Snap is literally a bash installer, but worse in every way.