

Damn leechers. And doubly so. First they steal the books, and then they don’t even give back to the pirates. And it’s not like Anna’s Archive or Libgen weren’t struggling already. So Meta is just harming everyone involved.
A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.
Heheh. Sure. If I’m not at home, my sex-toys are up another stranger’s bum. It made no sense for us to own anything anymore. My brain implant keeps me indifferent and drugged. Consuming and obedient… I’m happy.
Thanks, and I happen to already be aware of it. It doesn’t have any of that. And it’s more complicated to hook it into other things, since good old postfix is the default case and well-trodden path. I think I’ll try Stalwart anyway. It’s a bit of a risk, though, since it’s a small project with few developers and the future isn’t 100% certain. And I have to learn all the glue in between the mailserver stuff, since there aren’t any tutorials out there. But both the frontend and the configuration and setup seem to make sense.
I’ve always been looking for an all-in-one mailserver with a few added features like mailing lists and something like AnonAddy (anonymous mail forwarding). Sadly, there doesn’t seem to be anything like that out there. So I have to configure postfix and dovecot myself, or make do with more basic features.
Maybe try McDonald’s workers for further research, if it’s the constant and annoying beeping of machines. Or any Japanese store where you get 3 songs blaring at the same time from different aisles, then there’s some offering on a separate stand, of course also blinking and begging for attention with additional sounds… I believe you can simulate 10 years of UK long-term exposure with a one-day trip to Japan.
Most backup software allows you to configure backup retention. I think I went with the pretty standard once per day for a week. After that they get deleted, and it keeps just one per week of the older ones, for one or two months. After that it’s down to monthly snapshots. I think that aligns well with what I need. Sometimes I find out something broke the day before yesterday. But I don’t think I ever needed a backup from exactly the 12th of December or something like that. So I’m fine if they get more sparse over time. And I don’t need full backups more often than necessary. An incremental backup will do unless there’s some technical reason to do full ones.
But it entirely depends on the use-case. Maybe for a server or stuff you work on, you don’t want to lose more than a day. While it can be perfectly alright to back up a laptop once a week. Especially if you save your documents in the cloud anyway. Or you’re busy during the week and just mess with your server configuration on weekends. In that case you might be alright with taking a snapshot on Fridays. Idk.
(And there are incremental backups, full backups and filesystem snapshots. On a desktop you could just use something like Time Machine… You can do different filesystems at different intervals…)
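The retention scheme described above (daily for a week, then weekly, then monthly) is what tools like borg or restic implement with their prune/forget options. As a rough sketch of the logic, assuming all names and the exact window lengths here are made up for illustration:

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, today, daily=7, weekly_days=35, monthly=6):
    """Thin out snapshots: keep one per day for `daily` days,
    then one per ISO week for `weekly_days` more days,
    then one per month for the newest `monthly` months."""
    keep = set()
    seen_weeks, seen_months = set(), set()
    for d in sorted(snapshot_dates, reverse=True):  # newest first
        age = (today - d).days
        if age < daily:
            keep.add(d)  # recent: keep everything
        elif age < daily + weekly_days:
            wk = d.isocalendar()[:2]  # (year, ISO week number)
            if wk not in seen_weeks:
                keep.add(d)
                seen_weeks.add(wk)
        else:
            mo = (d.year, d.month)
            if mo not in seen_months and len(seen_months) < monthly:
                keep.add(d)
                seen_months.add(mo)
    return keep
```

With daily snapshots over 90 days, this keeps the last 7 days in full, one per week for about a month, and monthly ones after that, so the total stays small while recent history stays dense.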
Seems it means all together. (5600MT/s / 1000) x 2 sticks simultaneously x 64bit / 8bits/Byte = 89.6 GB/s
or 2933/1000 x 4 x 64bit / 8 = 93.9 GB/s
so they calculated with double the DDR bus width in the one example, and 4 times the bus width in the other. That means dual or quad channel is already factored into those numbers. And yes, the old one seems to be slightly better than the new one, at least regarding memory throughput. I suppose everything else has been improved on. And you need to put in 4 matching RAM sticks to make use of it in the first place.
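The two calculations above are the same formula with different numbers plugged in; spelled out as a tiny helper (64 bits per channel is the standard DDR4/DDR5 figure, and this is the theoretical peak, not what you’ll measure):

```python
def memory_bandwidth_gbs(rate_mts, channels, bus_width_bits=64):
    """Theoretical peak memory bandwidth in GB/s.

    rate_mts: DDR transfer rate in MT/s (e.g. 5600)
    channels: populated memory channels (dual = 2, quad = 4)
    bus_width_bits: width of one DDR channel (64 bits)
    """
    return rate_mts / 1000 * channels * bus_width_bits / 8

print(memory_bandwidth_gbs(5600, 2))  # dual-channel DDR5-5600
print(memory_bandwidth_gbs(2933, 4))  # quad-channel DDR4-2933
```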
Well, the numbers I find on Google are: an Nvidia 4090 can transfer 1008 GB/s, and an i9 does something like 90 GB/s. So you’d expect the CPU to be roughly 11 times slower than that GPU at fetching an enormous amount of numbers from memory.
I think if you double the amount of DDR channels for your CPU, and if that also meant your transfer rate would double to 180 GB/s, you’d just be roughly 6 times slower than the 4090. I’m not sure if it works exactly like that. But I’d guess so. And there doesn’t seem to be a recent i9 with quad channel. So you’re stuck with a small fraction of the speed of a GPU if you’re set on an i9. That’s why I mentioned AMD Epyc or Apple processors. Those have a way higher memory throughput.
And a larger model also means more numbers to transfer. So if you now also use your larger memory to run a 70B parameter model instead of a 12B parameter model (or whatever fits on a GPU), your tokens will come in at a 65th of the speed in the end. Or phrased differently: you don’t wait 6 seconds, but 6 and a half minutes.
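That ~65× factor is just the bandwidth ratio times the model-size ratio. A back-of-the-envelope sketch, assuming memory-bound decoding where each generated token streams every weight through the bus once, and 8-bit weights (1 byte per parameter); real numbers vary with quantization, batching and overhead:

```python
def seconds_per_token(params_billions, bytes_per_param, bandwidth_gbs):
    """Rough lower bound for memory-bound inference: generating one
    token means reading all model weights from memory once."""
    model_gb = params_billions * bytes_per_param  # billions of params * bytes/param = GB
    return model_gb / bandwidth_gbs

# hypothetical comparison: 12B model on a 4090 (~1008 GB/s)
# vs. a 70B model on a dual-channel DDR5 CPU (~90 GB/s)
gpu = seconds_per_token(12, 1, 1008)
cpu = seconds_per_token(70, 1, 90)
print(round(cpu / gpu))  # roughly the factor of 65 mentioned above
```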
AI inference is memory-bound. So, memory bus width is the main bottleneck. I also do AI on an (old) CPU, but the CPU itself is mainly idle and waiting for the memory. I’d say it’ll likely be very slow, like waiting 10 minutes for a longer answer. I believe all the AI people use Apple silicon because of the unified memory and its bus width. Or some CPU with multiple memory channels. The CPU speed doesn’t really matter, you could choose a way slower one, because the actual multiplications aren’t what slows it down. But you seem to be doing the opposite: getting a very fast processor with just 2 memory channels.
I’d say this is the correct answer. If you’re actually using that much RAM, you probably want it connected to the processor with a wide (fast) bus. I rarely see people do it with desktop or gaming processors. It might be useful for some edge-cases, but usually you want an Epyc processor or something like that, or it’s way too much RAM that isn’t connected fast enough.
In my experience, idle power consumption mainly depends on the mainboard used. The processors all(?) clock down to some more or less energy-efficient level. But the specific design of the mainboard and the components on it can double or halve energy consumption.
In my experience, all the 3 big ones work just fine. Caddy, Traefik, Nginx. I use Nginx.
I think a lot comes down to usage. It just depends whether you connect 1 camera to Frigate, or 6. And if you enable some AI features. Whether you download a lot of TV series or a few and delete old stuff. Or use ZFS or other demanding things. I personally like to keep the amount of servers low. So I probably wouldn’t buy server 2 and would try to run those services on 1 as well. I’m not sure. You did a good job separating the stuff. And I think you got some good advice already. I’d add more harddisks, 6TB wouldn’t do it for me. And some space for backups. But you can always keep an eye on actual resource usage and just buy RAM and harddisks as needed, as long as your servers have some slots left for future upgrades. But I think you already got way more servers and RAM than you’d need. I’d probably run half of those services on a smaller server.
Couldn’t agree more. And a phone number is kind of important. I don’t want to hand that out to 50 random companies for “security”, tracking, and them to sell it to advertisers. Or lose it to hackers, which also happens regularly. And I really don’t like to pull down my pants for Discord (or whoever) to inspect my private parts.
Btw, the cross-post still leads to an error page for me.
Yeah, that just depends on what you’re trying to achieve. Depending on what kind of AI workload you have, you can scale it across 4 GPUs. Or it’ll become super slow if it needs to transfer a lot of data between these GPUs. And depending on what kind of maths is involved, a Pascal generation GPU might be perfectly fine, or it’ll lack support for some of the operations involved. So yes, of course you can build that rig. Whether it’s going to be useful in your scenario is a different question. But I’d argue, if you need 96GB of VRAM for more than just the sake of it, you should be able to tell… I’ve seen people discuss these rigs with several P40s or similar, on Reddit and in some forums and GitHub discussions of the software involved. You might just have to do some research and find out if your AI inference framework and the model do well on specific hardware.
Sure. And that’s the case for most big tech companies. I think the fact is a bit unrelated to this topic, though. The government already knows everyone’s birthday… At least for their own citizens, they don’t really need to ask Alphabet to provide that to them.
Certainly the correct answer. I mean their whole business model includes exactly these kinds of “algorithms”. Knowing people’s age, amongst other factors, is what Google is about, and the reason for the majority of their income.
I’d say it’s bound to be more difficult for Google, as every YouTube comment, for example, sounds like it was written by an 8 yo. Whereas Meta can just tell. You’re still on Facebook? Probably 50+… And on WhatsApp, the amount of boomer memes re-posted would be an immediate tell…
I’ve tested it (on NixOS). But just for two weeks. I’d say it’s pretty impressive. Certainly works. It was just missing some important (to me) feature (forwarding mail to external mailboxes). But they’ve added it since, so I would like to try again. It doesn’t seem to have all the bells and whistles (and I didn’t have a look at the program code) but the basic features of a mailserver seem to be solid. I can’t really comment on the sustainability of the project, quality of the documentation… I mean if your setup includes niche edge-cases, custom tweaking and hooking into other software, maybe stick with the popular choice. But if you just want a regular, more or less simple mailserver, I’d say go for it.
I don’t think the internet gave particularly good advice here. Sure, there are use-cases for both, and that’s why we have both approaches available. But you can’t say VMs are better than containers. They’re a different thing. They might even be worse in your case. But I mean in the end, all “simple truths” are wrong.