

I was going to do an origin character as a solo play-through and a custom character for a group play-through with my mates, but now I might do it the other way around… which means hours in the character creator! Ha.
Often the question marked as a duplicate isn’t actually a duplicate; it’s just that the person marking it didn’t spend the time to properly understand the question and realise how it differs. I also see lots of answers that misunderstand the question, or that try to force the asker down the answerer’s own preferred path, and they get tons of votes whilst doing it.
Don’t get me wrong, some questions are definitely useful - and some go above and beyond - but on average the quality isn’t great these days and hasn’t been for a while.
Google’s first-quarter 2023 report shows massive profits on vast revenue, most of it from advertising.
It is about control though. The thing that caught my eye is that they’re saying that only “approved” browsers will be able to access these WEI sites. So what does that mean for crawlers/scrapers? That the big tech companies on the approval board will be able to lock potential competitors out of accessing the web - new browsers, search engines, etc. but much more importantly… Machine Learning.
Google’s biggest fear right now is that ML systems will completely eliminate most people’s reason to use Google’s search, and therefore their main source of revenue will plummet. And they’re right to be scared, it’s already starting to happen and it’s showing us very quickly just how bad Google’s search results are.
So this seems to me like an attempt to control things from that side. It’s essentially the “big boys” trying to consolidate and firm up their hold on the industry and not let newcomers rival them, because with ML the barrier to entry has never been lower.
How do Linux distros deal with this? I feel like however that’s done, I’d like node packages to work in a similar way - “package distros”. You could have rolling-release, long-term support with security patches, an application and verification process for being included in a distro, etc.
It wouldn’t eliminate all problems, of course, but could help with several methods of attack, and also help focus communities and reduce duplication of effort.
If I’m okay with the software (not just trying it out), am I missing out by not using Docker?
No, I think in your use case you’re good. A lot of the key features of containers, such as immutability, reproducibility, scaling, portability, etc., don’t really apply to your use case.
If you reach a point where you find you want a standalone Linux server, or an auto-reconfiguring reverse proxy to map domains to your services, or something like that - then it starts to have some additional benefit and I’d recommend it.
In fact, using native builds of this software on Windows is probably much more performant.
Containers can be based on operating systems that are different from the one your computer runs.
Containers utilise the host’s kernel - which is why you have to jump through some hoops to run a Linux container on Windows (a VM or WSL).
That’s one of the key differences between VMs and containers. VMs virtualise all the hardware, so the guest and host operating systems can be totally different; whereas because a container uses the host kernel, it must use the same kind of operating system, and it accesses the host’s hardware through that kernel.
The big advantage of that approach over VMs is that containers are much more lightweight and performant, because they don’t have a virtual kernel/hardware/etc. I find it’s best to think of them as a process wrapper, kind of like chroot for a specific application - you’re just giving the application you’re running a box to run in - but the host OS is still doing the heavy lifting.
I was using file merging, but one issue I found was that arrays don’t get merged - and since switching to Traefik (which is great) there are a lot of arrays in the config! I’ve since started using labels for my own tooling too.
I was recently helping someone working on a mini-project to do a bit of parsing of docker compose files, when I discovered that the docker compose spec is published as JSON Schema here.
I converted that into TypeScript types using JSON Schema to TypeScript. So I can create docker compose config in code and then just export it as yaml - I have a build/deploy script that does this at the end.
But now the great thing is that I can export/import that config, share it between projects, extend configs, mix in other bits, and so on. I’ve just started doing it and it’s been really nice so far; when I get a chance and it’s stabilised a bit I’m going to tidy it up and share it. But there’s not much I’ve added beyond the above at the moment (just some bits to mix in arrays, which was what set me off on this whole thing!)
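To give a flavour of it, here’s a minimal sketch of the idea - the interface below is a hand-written stand-in for the far richer types that json-schema-to-typescript generates from the compose spec schema, js-yaml is just one way to do the export, and the service names, image, and Traefik labels are made-up examples:

```typescript
import { dump } from "js-yaml";
import { writeFileSync } from "node:fs";

// Hand-written stand-in for the generated compose spec types.
interface ComposeConfig {
  services: Record<string, {
    image: string;
    environment?: Record<string, string>;
    labels?: Record<string, string>;
  }>;
}

// A base config that can be imported and shared between projects.
const base: ComposeConfig = {
  services: {
    app: {
      image: "node:20-alpine",
      environment: { NODE_ENV: "production" },
    },
  },
};

// "Mix in" extra labels in code - the kind of merging that plain
// compose file merging doesn't handle well for arrays/maps.
const withTraefik: ComposeConfig = {
  services: {
    app: {
      ...base.services.app,
      labels: {
        "traefik.enable": "true",
        "traefik.http.routers.app.rule": "Host(`app.example.com`)",
      },
    },
  },
};

// Export as YAML at the end of the build/deploy step.
writeFileSync("docker-compose.yml", dump(withTraefik));
```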
I just have a static page that I randomly change - you can see mine here. In this case I was testing the idea of having text within an SVG for better scaling from mobile to desktop, and also I’m loving orange and purple at the moment for some reason! Oh, and I was testing automated deployments from CI/CD, so I always use my own base domain with those first tests!
This is a truly excellent pair of articles, brilliantly written.
It explains the problem, shows the solution iterating step by step so we start to build an intuition for it, and goes as far as most people actually need for their applications.
“Out of the frying pan, into the fire”
From a personal perspective, I absolutely agree - I only check my email when I’m specifically expecting something, which is rarely. But at work emails are still incredibly important.
Are there any protocols/services designed specifically for one-time codes? Receipts? I think something dedicated to those kinds of tasks would be great from an ease-of-use perspective - no more messing about waiting for delivery, searching through hordes of emails, checking the spam folder, etc.
Another problem we have is the rise of OAuth - the core idea is great, but the reality is that it ties a lot of people to these Big Tech services.
I’m new to it too. I’ve known about its existence, but have been thinking about adding support for it to a project I’m starting soon - really to learn more about it (I tend to learn best by doing!)
Its goal is for each of us to have personal ownership of all our data online, and full control over who can access what. That’s certainly something I can get behind! You do this by creating a “pod”, which is essentially a database of all your data (I think organised into groups, e.g. each organisation can have their own group of data), which you can self-host if you like, along with the ability to control access.
Its current impact, I would say, is near zero. But TBL is a person with a reasonable amount of pull, and he’s set up his own company providing commercial services (presumably consulting). My guess is they’re dealing with governments and mega-corps - there seems to be very little effort pushing it to “the masses” (i.e. application developers).
The theory sounds interesting but the practicalities of it seem to offer a lot of challenges, so I think the best way to get a real sense of whether it has legs or not is to build something!
He’s pushing for a decentralised web, specifically focussed on personally owned data through his Solid project. But it feels like maybe this month or so could be a tipping point, so it would be great to get his input and/or for him to see how we all work away at it!
Glad you sorted it though! It’s a nightmare when you get such an opaque error and there are so many moving parts that could be responsible!
Tim Berners-Lee would be interesting I think, given the direction he’s gone into personal ownership/control of data.
Assume nothing! Test every little assumption and you’ll find the problem. Some things to get you started:
While not a direct solution to your problem, I no longer manually configure my reverse proxies at all and use auto-configuring ones instead. The nginx-proxy image is great, along with its ACME companion image for automatic Let’s Encrypt SSL cert generation - you’ll be up and running in under 30 mins. I used that for a long time and it was great.
I’ve since moved to using Traefik as it’s more flexible and offers more features, but it’s a bit more involved to configure (simple, but the additional flexibility means everything requires more config).
That way you just bring up your container and the reverse proxy pulls metadata from it (e.g. the host to map, the email for the cert) and off it goes.
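To make that concrete (just a sketch - the image, hostname, email, and network name are placeholders): nginx-proxy watches the Docker socket and routes based on a VIRTUAL_HOST environment variable, and the ACME companion picks up LETSENCRYPT_HOST / LETSENCRYPT_EMAIL, so the per-service “config” is a few environment variables on the container. Shown here as a TypeScript object in the same shape as a compose service definition:

```typescript
// Sketch only: a service that nginx-proxy + acme-companion will pick up
// automatically. Placeholders throughout - swap in your own app and domain.
export const myApp = {
  image: "nginx:alpine",                   // whatever your actual app image is
  environment: {
    VIRTUAL_HOST: "app.example.com",       // nginx-proxy: route this hostname here
    VIRTUAL_PORT: "80",                    // nginx-proxy: container port to proxy to
    LETSENCRYPT_HOST: "app.example.com",   // acme-companion: request a cert for this host
    LETSENCRYPT_EMAIL: "you@example.com",  // acme-companion: Let's Encrypt account email
  },
  networks: ["proxy"],                     // typical setup: shared network with nginx-proxy
};
```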
That makes sense, thank you. Yes, it’s specifically “test quality” I’m looking to measure, as 100% coverage is effectively meaningless if the tests are poor.
I use coverage tools like nyc/c8, but I can easily get 100% coverage on buggy, exploitable, and unstable code. You can have two projects, both with 100% coverage, where one is a shit show and the other is rock solid - so I was wondering if there’s a way to measure the quality of tests, or to identify code that really needs extra attention (despite being 100% covered). Mutation testing has been suggested and that’s really interesting - I’m going to give it a go tomorrow and see what it throws up!
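To illustrate what I mean with a contrived example (not from a real project): the test below executes every line and both branches, so nyc/c8 will happily report 100%, but a mutation testing tool like Stryker would flip operators, rerun the tests, and show the mutants surviving:

```typescript
import assert from "node:assert";

export function applyDiscount(price: number, percent: number): number {
  if (percent > 100) throw new Error("invalid percent");
  return price - (price * percent) / 100;
}

// One valid call and one invalid call hit every line and both branches,
// so coverage tools report 100%...
function testApplyDiscount(): void {
  assert.equal(typeof applyDiscount(100, 10), "number"); // only checks the return type
  assert.throws(() => applyDiscount(100, 101));          // only checks that it throws
}

// ...but a mutation tool could change `-` to `+`, or `> 100` to `>= 100`,
// rerun these tests, and both mutants would survive - exposing the weak assertions.
testApplyDiscount();
```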
Definitely give Ruthless a go, I love it… reminds me of early ARPGs on higher difficulties. Positioning really matters, and you have to adapt based on what you get. It seems to have been the proving ground for PoE2’s new tempo.