cultural reviewer and dabbler in stylistic premonitions
deleted by creator
They have to know who the message needs to go to, granted. But they don’t have to know who the message comes from, which is why the sealed sender technique works. The recipient verifies the message using the previously exchanged keys if they have been communicating with that correspondent before; otherwise it arrives as a new message request.
So I don’t see how they can build social graphs if they don’t know who the sender of any message is; they can only plot recipients, which is not enough.
You need to identify yourself to receive your messages, and you send and receive messages from the same IP address, and there are typically not many if any other Signal users sharing the same IP address. So, the cryptography of “sealed sender” is just for show - the metadata privacy remains dependent on them keeping their promise not to correlate your receiving identity with the identities of the people you’re sending to. If you assume that they’ll keep that promise, then the sealed sender cryptography provides no benefit; if they don’t keep the promise, sealed sender doesn’t really help. They outsource the keeping of their promises to Amazon, btw (a major intelligence contractor).
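To make the correlation argument concrete, here is a toy sketch (not Signal’s actual schema or log format, just an illustration of the principle): even if send events carry no sender field, a server that also sees authenticated receive events from the same IP addresses can re-identify senders with a trivial join.

```python
# Toy model: "receive" events are authenticated (account + IP), while
# "sealed" send events carry only the recipient and the source IP.
# Joining on IP reconstructs the sender of each sealed message.

receives = [  # (timestamp, account, ip) - client authenticates to fetch messages
    (100, "alice", "203.0.113.7"),
    (105, "bob", "198.51.100.2"),
]
sends = [  # (timestamp, recipient, source_ip) - note: no sender field
    (101, "bob", "203.0.113.7"),
    (106, "alice", "198.51.100.2"),
]

# map each IP to the account that authenticated from it
ip_to_account = {ip: acct for _, acct, ip in receives}

# rebuild the social graph edges despite "sealed" senders
social_graph = [(ip_to_account.get(ip, "?"), rcpt) for _, rcpt, ip in sends]
print(social_graph)  # [('alice', 'bob'), ('bob', 'alice')]
```

In practice NAT and timing noise make this fuzzier, but with few users per IP and timestamps available, the join stays easy.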
Just in case sealed sender was actually making it inconvenient for the server to know who is talking to whom… Signal silently falls back to “unsealed sender” messages if the server returns a 401 when the client tries to send “sealed sender” messages, which the server actually does sometimes. As the current lead dev of Signal-for-Android explains: “Sealed sender is not a guarantee, but rather a best-effort sort of thing” so “I don’t think notifying the user of a unsealed send fallback is necessary”.
Given the above, don’t you think the fact that they’ve actually gone to the trouble of building sealed sender at all, which causes many people to espouse the belief you just did (that their cryptographic design renders them incapable of learning the social graph, not to mention learning which edges in the graph are most active, and when) puts them rather squarely in doth protest too much territory? 🤔
i bet you’re going to love to hate this wikipedia article https://en.wikipedia.org/wiki/Monochrome_painting 😂
because it’s stupid.
you were bamboozled
presumably you find value in some things that some other people think are stupid too; it’s OK
I see. What a mess.
The instructions at https://docs.searxng.org/admin/installation-docker.html mention that the docker image (which that page tells you to just pull and run) has its “sources hosted at” https://github.com/searxng/searxng-docker, and include instructions for running the image without docker-compose.
But the Dockerfile source for the image is actually in the main repo at https://github.com/searxng/searxng/blob/master/Dockerfile, and the searxng-docker repo actually contains a docker-compose.yaml and different instructions for running it under compose instead.
Anyway, in the docker-compose deployment, SEARXNG_BASE_URL (yet another name for this… neither SEARXNG_URL nor BASE_URL, but apparently it sets base_url from it) is constructed from SEARXNG_HOSTNAME on line 58 here: https://github.com/searxng/searxng-docker/blob/a899b72a507074d8618d32d82f5355e23ecbe477/docker-compose.yaml#L58
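From memory, the relevant compose excerpt looks something like this (illustrative only; the pinned commit linked above is authoritative, and the localhost fallback shown here is an assumption):

```yaml
services:
  searxng:
    environment:
      # compose interpolates ${SEARXNG_HOSTNAME} from the .env file;
      # SEARXNG_BASE_URL is what the container reads to set base_url
      - SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
```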
If I had a github account associated with this pseudonym I might open an issue or PR about this, but I don’t and it isn’t easy to make one anymore 😢
Changing SEARXNG_HOSTNAME in my .env file solved it.
nice. (but, i assume you actually mean SEARXNG_URL? either that or you’re deploying it under some environment other than one described in the official repo, because the string HOSTNAME does not appear anywhere in the searxng repo.)
https://docs.searxng.org/admin/settings/settings_server.html says you need to set base_url, and that by default it’s set to $SEARXNG_URL.
however, https://docs.searxng.org/admin/installation-docker.html#searxng-searxng says that if you are running it under docker, the environment variable which controls base_url in the config is actually BASE_URL rather than SEARXNG_URL.
(possibly whichever variable applies is currently empty, which might make it construct a URL based on the IP address it is configured to listen on.)
in my experience DeepL has the best results for some language pairs while Google is better for others (and has a lot more languages).
But, these days I’m happy to say Firefox translate is the first thing I try and it is often sufficient. I mostly only try the others now when the Firefox result doesn’t make sense or the language is unsupported.
Yeah, that would make sense - language detection is trivial and can be done with a small statistical model; nothing as complicated as a neural network is needed, i think just looking at bigram frequency is accurate enough when you have more than a few words.
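A toy sketch of the bigram-frequency idea, using a rank-based “out-of-place” distance (roughly in the style of the classic Cavnar & Trenkle n-gram method; the reference profiles here are built from a couple of sentences as an assumption for illustration — a real detector would use much larger corpora):

```python
from collections import Counter

def bigram_profile(text, k=50):
    """Return the k most common character bigrams, ranked by frequency."""
    text = "".join(c.lower() if c.isalpha() else " " for c in text)
    bigrams = Counter()
    for word in text.split():
        padded = f" {word} "  # include word-boundary bigrams
        for i in range(len(padded) - 1):
            bigrams[padded[i:i + 2]] += 1
    return [bg for bg, _ in bigrams.most_common(k)]

def distance(profile, reference):
    """Sum of rank differences; missing bigrams get the maximum penalty."""
    return sum(
        abs(i - reference.index(bg)) if bg in reference else len(reference)
        for i, bg in enumerate(profile)
    )

def detect(text, references):
    """Pick the reference language whose profile is closest to the text's."""
    profile = bigram_profile(text)
    return min(references, key=lambda lang: distance(profile, references[lang]))

# tiny toy reference profiles (real ones would be trained on large corpora)
refs = {
    "en": bigram_profile("the quick brown fox jumps over the lazy dog "
                         "and then the cat sat on the mat with the hat"),
    "fr": bigram_profile("le renard brun rapide saute par dessus le chien "
                         "paresseux et le chat est sur le tapis avec le chapeau"),
}

print(detect("the weather is nice and the birds are singing", refs))
print(detect("la maison est belle et le jardin est grand", refs))
```

Even with these minuscule reference texts, a dozen words is usually enough for the rank distance to separate the languages.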
If that is what is happening, and it is only leaking the language pair to the server the first time that pair is needed, that would be nice… I wish they made it clear if that is what is happening 😢
Probably that’s when it does online connection?
since the help says it is downloading “partial language files” automatically, and the button never changes from “Download” to “Remove” if you don’t click Download, logically it must sometimes need to download more of a language which you have previously downloaded a “partial language file” of.
i am curious if the choice of which parts of the “language file” (aka model) it is downloading really does not reveal anything about the text you’re translating; i suspect it most likely does reveal something about the input text to the server… but i’m not motivated enough to research it further at the moment.
Wow, thanks for the about:translations tip - I was wondering how to do that!
Besides “Translate page” there is also a “Translate selection” option in the right-click menu so you can translate part of a page.
However, unless you download languages in the “Translation” section of Firefox preferences, it doesn’t actually always work while offline:
As you pointed out, the help page explicitly says there is “no privacy risk of sending text to third parties for analysis because translation happens on your device, not externally”, but after I translate something into a new language I haven’t used before, it still doesn’t appear as downloaded (e.g. having a “Remove” button instead of a “Download” button) in the preferences.
The FAQ has a question Why do I need to install languages? with this answer:
Installing languages enables Firefox to perform translations locally within your browser, prioritizing your privacy and security. As you translate, Firefox downloads partial language files as you need them. To pre-install complete languages yourself, access the language settings in Firefox Settings, General panel, in the Language and Appearance section under Translations.
I wonder what the difference between the “partial” language files and the full download is, and if that is really not leaking any information about the text being translated. In doing a few experiments just now, I certainly can’t translate to new languages while offline, but after I’ve translated one paragraph in a language I do seem to be able to translate subsequent paragraphs while offline. 🤔
Anyway, it probably is a good idea to click “Download” on all the languages you want to be able to translate.
In finder you cannot cut files
I thought you could when I last used it, back when it was called Mac OS X, so I just searched and TIL they removed cmd-X for files in 2015, but, you actually can still cut files; it’s just another hidden keyboard shortcut now: after you copy a file with cmd-C you can retroactively make it a cut when pasting by typing cmd-option-V instead of cmd-V. Intuitive, no?
maybe showing him this would help?
thanks!
here is a side-by-side comparison of the neural network upscaling slop (left) versus a conventional zoom in on the original (right):
and then of course there is the text:
nice work!
selecting the sixth option in the base menu puts html in the output:
(this is in Tor Browser, which is based on the “ESR” release of Firefox)
deleted by creator
why bother opening a pathway in the first place
i’ve never had an IG account myself, but i think your mistake is in assuming that someone accepting your follow request on a restricted IG account is an indicator of desire for chatting with strangers. accepting your follow request might just mean they glanced at your profile and assessed that you aren’t a spammer or bot, not that they want to chat with you.
perhaps just need to find out somewhere in the real world where I could bond more easily with real people?
for sure that is a good idea 😂
but there are also many places online where it is much more reasonable to assume people are interested in chatting with strangers.
Having some distrust in Wikipedia is healthy; you certainly shouldn’t take it as the final word about facts you’re depending on the accuracy of. But, it is very often a good starting point for learning about a new subject.
Spending a minute or two reading that “source code” article (or another version of it which is likely available in your first language) would give you a much better understanding of the concept of source code (which is a prerequisite for understanding what “closed source” means) than any of the answers in this thread so far.
Good question.
I see that the file served from https://packages.mozilla.org/apt/repo-signing-key.gpg is the same as the file at https://packages.cloud.google.com/apt/doc/apt-key.gpg
Apparently Mozilla outsources the operation of the Firefox APT repo to the Google Cloud “Artifact Registry” service 😦