

The hard links aren’t between the source and backup, they’re between Friday’s backup and Saturday’s backup
If you want a “time travel” feature, your only option is to duplicate data.
Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.
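For anyone who hasn’t used it, here’s a minimal sketch of the --link-dest pattern (the paths and dates are made up):

```sh
# Use yesterday's snapshot as the hard-link reference: anything unchanged
# gets hard-linked into today's snapshot instead of being copied again.
rsync -a --delete \
  --link-dest=/backups/2024-06-01 \
  /data/ /backups/2024-06-02/
```

Every dated directory then browses like a full copy (your “time travel”), but unchanged files share inodes, so only the changed files take up extra space.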
There are two ways to maintain a persistent data store for Docker containers: bind mounts and docker-managed volumes.
A Docker-managed volume looks like:

```yaml
datavolume:/data
```

And then later on in the compose file you’ll have:

```yaml
volumes:
  datavolume:
```
When you start this container, Docker will create this volume for you in /var/lib/docker/volumes/ and will manage access and permissions. They’re a little easier in that Docker handles permissions for you, but they’re also kind of a PITA because now your compose file and your data are split apart in different locations and you have to spend time tracking down where the hell Docker decided to put the volumes for your service, especially when it comes to backups/migration.
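If you do need to track one down, the Docker CLI will at least tell you where a managed volume lives. Note that Compose normally prefixes the volume name with the project name, so list the volumes first (the “myproject_” prefix below is hypothetical):

```sh
# See what Compose actually named the volume, then print its on-disk path
docker volume ls
docker volume inspect --format '{{ .Mountpoint }}' myproject_datavolume
```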
A bind mount looks like:

```yaml
./datavolume:/data
```

When you start this container, if it doesn’t already exist, “datavolume” will be created in the same location as your compose file, and the data will be stored there. This is a little more manual, since some containers don’t set up permissions properly and, once the volume is created, you may have to shut down the container and chown the directory before the container can use it. But once it’s up and running, it makes things much more convenient, since all of the data needed by that service is in a directory right next to the compose file (or wherever else you decide to put it, since bind mounts let you put the directory anywhere you like).
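When the permissions problem does bite, the fix usually looks something like this (the 1000:1000 owner is just a placeholder; use whatever UID/GID the container actually runs as):

```sh
docker compose down
# Hand ownership of the bind mount to the user the container runs as
sudo chown -R 1000:1000 ./datavolume
docker compose up -d
```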
Also with Docker-managed volumes, you have to be VERY careful running your docker prune commands, since if you run “docker system prune --volumes” and you have any stopped containers, Docker will wipe out all of the persistent data for them. That’s not an issue with bind mounts.
Docker is far cleaner than native installs once you get used to it. Yes, native installs are nice at first, but they aren’t portable, and unless the software is built specifically for the distro you’re running you will very quickly run into dependency hell trying to set up your system to support multiple services that all want different versions of libraries. Plus, what if you want or need to move a service to another system, or restore a single service from a backup? Reinstalling a service from scratch and migrating over the libraries and config files in all of their separate locations can be a PITA.
With native installs it’s pretty much a requirement to start spinning up separate VMs for each service to keep them from interfering with each other and to allow backup and migration to other hosts, and managing 50 different VMs is much more involved and resource-intensive than managing 50 different containers on one machine.
Also, you said that native installs just need an apt update && apt upgrade, but that’s not true. Sure, that works for services that are in your package manager, but most services do not have pre-built packages for all distros. For the vast majority, you have to git clone the source, then build from scratch and install. Updating those services is not a simple apt update && apt upgrade: you have to cd into the repo, git pull, then recompile and reinstall, and pray to god that the dependencies haven’t changed.
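A typical from-source update ends up looking something like this (the paths and build steps are just an example of a generic make-based project; every service is a little different):

```sh
cd ~/src/someservice          # hypothetical checkout location
git pull                      # grab the new release
make                          # rebuild, and hope no new dependencies are required
sudo make install
sudo systemctl restart someservice
```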
docker compose pull/up/down is pretty much all you need; wrap it in a small shell script and you can bring up/down or update every service with a single command. Also, if you use bind mounts and place them in the directory for the service alongside the compose file, now your entire service is self-contained in one directory. To back it up you just “docker compose down”, rsync the directory to the backup location, then “docker compose up”. To restore, you do the exact same thing, just reverse the direction of the rsync. To move a service to a different host, you do the exact same thing, except the rsync and docker compose up are now run on another system.
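As a rough sketch of that workflow (the directory layout and host names here are placeholders, not a prescription):

```sh
# Update every service: one directory per stack, each with its own compose file
for dir in /srv/stacks/*/; do
  (cd "$dir" && docker compose pull && docker compose up -d)
done

# Back up a single service: down, copy the whole directory, up again
cd /srv/stacks/myservice
docker compose down
rsync -a --delete ./ backuphost:/backups/myservice/
docker compose up -d
```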
Docker lets you pack an entire service, with all of its dependencies, databases, config files, and data, into a single directory that can be backed up and/or moved to any other system with nothing more than a “down”, “copy”, and “up”, with zero interference with other services running on your system.
I have 158 containers running on my systems at home. With some wrapper scripts, management is trivial. The thought of trying to manage native installs on over a hundred individual VMs is frightening. The thought of trying to manage this setup with native installs on one machine, if that was even possible, is even more frightening.
Pretty much guaranteed you’ll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasion that they break. If you have a service that likes to throw out breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.
I haven’t tried it, but my understanding is it’s still somewhat of a beta feature
A lot of it depends on your distro. I use exclusively Mint and Debian (primarily Debian), and everything works fine on both of those. My laptop runs Debian 13 and has the iGPU and an RTX4070, and one of my servers has both an RTX A6000 and a T400, both being passed through Proxmox into two different Debian 13 VMs. Everything works without issue. Before Debian 13 on the laptop I had Mint 22, and before that Ubuntu 23.10, and both worked without issue as well. The laptop before this one had the iGPU and a GTX1060 I believe, it ran Mint 18, then 19, then 20, then 21 all without any problems either.
That’s how that user types all of their posts. It’s really fucking annoying. They get called out on it a lot, downvoted for it, and just keep doing it for some reason.
It’s got dual graphics cards, with the graphics an Nvidia one. I’ve heard that they are finicky with Linux…
Not really. I’ve been using Nvidia cards on Linux for decades; the complaints are blown way, way out of proportion. Just install the proprietary drivers from the distro’s repos and 99% of the time that’s all that’s needed. The people who complain usually screwed something up, like installing drivers from the wrong source or not installing the meta package for their kernel headers, which means the drivers can’t rebuild on kernel updates. Just follow the official instructions for your distro and that should be all you have to do. There’s a lot of bad advice floating around on forums and blogs, so just stick to the official docs.
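On Debian, for example, the whole thing is roughly the following (assuming the non-free repos are already enabled; package names differ on other distros):

```sh
# Header metapackage so DKMS can rebuild the module on every kernel update
sudo apt install linux-headers-amd64
# Proprietary driver and GPU firmware from the distro's own repos
sudo apt install nvidia-driver firmware-misc-nonfree
```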
It’s literally one checkbox in the settings to shut those external media sources off
Either a lifetime pass, or you actually configured local access correctly instead of botching it (or ignoring it entirely) and then coming to Lemmy to complain.
Marketing absolutely works on nerds, what a ridiculous statement. Just because certain types of marketing will push us away doesn’t mean all marketing is pointless. Be honest, let me know what your product does, give me a proper datasheet and a price, and I’ll explore it. Try to shove some hyperbolic BS down my throat while hiding the things I actually care about and I’ll never buy from your company.
There’s enough people who genuinely believe the company is worth that to keep the value high for a very long time.
I don’t think there are. I just think there are a lot of people who believe they’re going to be able to get in and out before the Tesla bubble pops. Actual, realistic value of the company has nothing to do with it.
it’s based on what people think the company will be worth in the future
Not a single person in their right mind thinks that Tesla will ever be worth its current $1.3T market cap. Stock price is based on whether the market movers (not you or I) think that the price will be higher or lower a few weeks/months from now, that’s it. The actual intrinsic value/worth of the company makes no difference.
Don’t stick your backups on a drive that’s plugged into the same machine as the primary copy, it defeats almost the entire purpose of having a backup.
I host my own via a Hetzner VPS and Mailcow. I use SMTP2GO as an outbound relay so I don’t have to worry about IP reputation issues. It’s all very straightforward, no issues to speak of. I use unique aliases for each account, so spam is a non-issue as well. If an alias gets leaked I just shut it down, no more spam.
As long as there’s a simple way to determine which containers use outdated images, I’m good
Yeah you can either have it update the containers itself, or just print out their names. With a custom plugin you can make it output the names of any containers that have available updates in whatever format you like. This discussion on the github page goes through some example scripts you can use to serve the list of containers with available updates over a REST API to be pulled into any other system you like (eg: Homepage dashboard).
I use node_exporter (for machines/VMs) and cAdvisor (for Docker containers) + VictoriaMetrics + AlertManager/Grafana for resource usage tracking, visualization, and alerts.
For updates, I use a combination of dockcheck.sh and OliveTin with some custom wrappers to dynamically build a page with a button for every stack that includes a container with an update. Clicking the button applies the update and cycles the container. Once the container is updated, its button disappears from the page. So just loading the page will tell you how many and which containers have available updates and you can update them whenever you like from anywhere, including your phone/tablet, with one button click. I also have apt updates for VMs and hosts integrated onto this page, so I can update the host machines as well in the same way.
What’s the point? Even if you pay extra for “4K” streaming, it’s compressed to hell and the quality is no better than 1080p. What are you going to even watch on an 8K TV?
Got a friend or family member willing to let you drop a miniPC at their place?
You could also go the offline route - buy two identical external drive setups, plug one into your machine and make regular backups to it, drop the other one in a drawer in your office at work. Then once a month or so swap them to keep the off-site one fresh.
Also there’s really nothing wrong with cloud storage as long as you encrypt before uploading so they never have access to your data.
Personally I do both. The off-site offline drive is for full backups of everything because space is cheap, while cloud storage is used for more of a “delta” style backup, just the stuff that changes frequently, because of the price. If the worst were to happen, I’d use the off-site drive to get the bulk infrastructure back up and running, and then the latest cloud copy for any recently added/modified files.
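Concretely, that combination can look something like this (the paths are placeholders, and restic is just one example of a tool that encrypts client-side before uploading):

```sh
# Full backup of everything to the external drive (mounted at /mnt/backup here)
rsync -a --delete /srv/ /mnt/backup/srv/

# Encrypted "delta" backup of the frequently changing stuff to cloud storage
# (S3 credentials go in the usual AWS_* environment variables)
export RESTIC_PASSWORD_FILE=~/.restic-pass
restic -r s3:s3.example.com/mybucket backup /srv/documents /srv/photos
```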