• 0 Posts
  • 49 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Technical summary: it seems OK against an observer who can see the network traffic but hasn’t infiltrated the phone of the source or the computer of the news organization.

    Any real message is stored locally on the smartphone by the CoverDrop module and sent as the next CoverDrop message, i.e. replacing the dummy message which would otherwise have been sent. Consequently a network observer cannot determine whether any communication is taking place and CoverDrop therefore provides the potential source with plausible deniability.

    The CoverNode and each journalist have their own public-private key pair. These keys are published by the news organization and available to the CoverDrop module directly, so the user does not need to know about them. When the CoverDrop module is used for the first time, it generates a new, random public-private key pair for the user.

    All real CoverDrop messages sent by the CoverDrop module to the CoverNode include the text written by the potential source as well as their own public key. The message is first encrypted using the public key of the journalist who will ultimately receive the message, then encrypted a second time using the public key of the CoverNode. All dummy CoverDrop messages are encrypted using the public key of the CoverNode. All messages, real or dummy, are arranged to be the same, fixed length. Encryption and length constraints ensure that only the CoverNode can distinguish between real and dummy messages.
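    Roughly, the layering works like this. Below is a minimal sketch using PyNaCl sealed boxes; the key handling, the 512-byte padded length and the 48-byte sealed-box overhead are my assumptions for illustration, not the actual CoverDrop message format:

```python
# Sketch of the double-encryption + fixed-length idea described above.
# Assumptions: PyNaCl sealed boxes, a 512-byte padded plaintext, and a
# 48-byte sealed-box overhead (32-byte ephemeral key + 16-byte MAC).
from nacl.public import PrivateKey, SealedBox

journalist_key = PrivateKey.generate()   # published by the news organization
covernode_key = PrivateKey.generate()    # published by the news organization
user_key = PrivateKey.generate()         # generated on first use of the module

PADDED_LEN = 512      # fixed plaintext length (assumed value)
SEAL_OVERHEAD = 48    # ciphertext growth per sealed-box layer

def pad(data: bytes, size: int) -> bytes:
    if len(data) > size:
        raise ValueError("message too long")
    return data + b"\x00" * (size - len(data))

def real_message(text: str) -> bytes:
    # Inner layer: the text plus the source's public key, readable only by the journalist.
    inner = SealedBox(journalist_key.public_key).encrypt(
        pad(text.encode() + bytes(user_key.public_key), PADDED_LEN))
    # Outer layer: only the CoverNode can peel this off.
    return SealedBox(covernode_key.public_key).encrypt(inner)

def dummy_message() -> bytes:
    # Single layer for the CoverNode, padded so the total ciphertext length matches.
    return SealedBox(covernode_key.public_key).encrypt(
        pad(b"", PADDED_LEN + SEAL_OVERHEAD))

# To a network observer, real and dummy messages look identical in length.
assert len(real_message("hello")) == len(dummy_message())
```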





  • I will use the opportunity to remind people that Signal is operated by a non-profit in the jurisdiction called “the US”. This could have implications.

    A somewhat more anarchist option might be TOX. There is no single client; TOX is a protocol, and you can choose from half a dozen clients. I personally use qTox.

    Upside: no phone number required. No questions asked.

    Downside: no servers to store and forward messages. You can only talk when both parties are online.


    • Not providing a platform for activities that harm society (e.g. scams, disinformation).
    • Not providing a platform for activities that will get you sued or prosecuted (e.g. piracy, child porn).
    • They had to pay a considerable amount for the service.

    On social media, putting the burden of blocking on a million users is naive because:

    • Blocks can be worked around with bots, someone has to actively fight circumvention.
    • Some users don’t have the time to block; they simply conclude “this is a hostile environment” and leave.
    • Some users fall for scams / believe the disinfo.

    I once helped others build an anonymous mix network (I2P). I’m also an anarchist. On Lemmy, however, I support decentralization, defederating from instances that have bad policies or corrupt management, and harsh moderation, because the operator of a Lemmy instance is fully exposed.

    Experience has shown that total freedom is a suitable policy for apps that support 1-to-1 conversations via short text messages. Everything else invites too much abuse. If it’s public, it will have rules. If it’s totally private, it can have total freedom.




  • how did you do it?

    In the BIOS options of that specific server (nothing fancy, a generic Dell with some Xeon processor), the option to enable/disable ME was plainly offered.

    Chipset features > Intel AMT (Active Management Technology) > disable (or something similar; my memory is a bit fuzzy). I researched the option, got worried about the consequences if someone learned to exploit it, and made it a policy to turn it off. That was about two years ago.

    P.S.

    I’m sure tools exist for the really security-conscious folks to verify whether ME is actually disabled, but I was installing a boring warehouse system, so I didn’t check.
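    For what it’s worth, on a Linux host one rough first check is whether the ME host interface (MEI) is still exposed to the OS. A quick sketch, assuming Linux and the standard mei_me driver - absence of the device is not proof that ME is gone, it’s only a sanity check:

```python
# Rough heuristic, assuming a Linux host: if the ME host interface (MEI) is
# still exposed, the ME firmware is at least partially active. The absence of
# /dev/mei* is not proof that ME is fully disabled; it is only a sanity check.
import glob
import pathlib

def me_interface_visible() -> bool:
    mei_devices = glob.glob("/dev/mei*")                      # MEI character devices
    mei_driver = pathlib.Path("/sys/bus/pci/drivers/mei_me")  # kernel driver registered?
    return bool(mei_devices) or mei_driver.exists()

if __name__ == "__main__":
    print("ME host interface visible:", me_interface_visible())
```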


  • please read up on intel management engine

    I’m already familiar with it. On the systems I buy and install, if they are Intel-based, ME gets disabled, since I haven’t found a reasonable use for it.

    Oh yeah, ARM also has something similar.

    Since this is more relevant to me (numerically, most of the systems I install are Raspberry Pi-based robots), I’m happy to report that TrustZone is not supported on the Pi 4 (I haven’t checked other models). I haven’t tested this myself, however - don’t take my word for it.

    Who would you buy from in this case?

    From the Raspberry Pi Foundation, who are doubtless ordering silicon from TSMC for the Pico series and ready-made CPUs for their bigger products, and various other services from other companies. If they didn’t exist, I would likely fall back on RockChip based products from China.

    https://www.cryptomuseum.com/covert/bugs/nsaant/firewalk/index.htm

    Wow. :) Neat trick. (It would be revealed in competent hands, though: snap an X-ray photo and you’d find the excess electronics in the socket.)

    However, a radio transceiver is an extremely poor candidate for embedding on a chip. It’s good for bugging boards, not chips.


  • The first and central provision of the bill is the requirement for tracking technology to be embedded in any high-end processor module or device that falls under the U.S. export restrictions.

    As a coder with some hardware awareness, I find the concept laughable.

    How does he think they (read: the Taiwanese, if they are willing to) would go about doing it?

    Add a GPS receiver onto every GPU? Add an inertial navigation module to every GPU? Add a radio to every GPU? :D

    The poor politician needs a technically competent advisor forced on him, to make him aware (preferably in the bluntest way) of what is actually possible in the real world.

    In the real world, you can prevent a chip from knowing where it’s running, and you can’t just add random shit onto a chip; if someone does, you can stop buying the bugged hardware or prevent the added circuitry from getting a reading.




  • From the article (emphasis mine):

    Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

    /…/

    “It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

    From elsewhere:

    Sycophancy in GPT-4o: What happened and what we’re doing about it

    We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

    I don’t know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

    Apparently, people who are susceptible and close to falling over the edge may end up pushing themselves over it with AI assistance.

    What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose on shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, a prophet, or a supportive follower.



  • The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If, after that, some eager redditors start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.




  • I’m not from the US, but I straight out recommend quickly educating oneself about military stuff at this point - about fiber guided drones (here in Eastern Europe we like them) and remote weapons stations (we like those too). Because the US is heading somewhere at a rapid pace. Let’s hope it won’t get there (the simplest and most civil obstacle would be lots of court cases and Trumpists losing midterm elections), but if it does, then strongly worded letters will not suffice.

    Trump’s administration:

    “Agency,” unless otherwise indicated, means any authority of the United States that is an “agency” under 44 U.S.C. 3502(1), and shall also include the Federal Election Commission.

    Vance, in his old interviews:

    “I think that what Trump should do, if I was giving him one piece of advice: Fire every single midlevel bureaucrat, every civil servant in the administrative state, replace them with our people.”

    Also Vance:

    “We are in a late republican period,” Vance said later, evoking the common New Right view of America as Rome awaiting its Caesar. “If we’re going to push back against it, we’re going to have to get pretty wild, and pretty far out there, and go in directions that a lot of conservatives right now are uncomfortable with.”

    Googling “how to remove a dictator?” when you already have one is doing it too late. On the day the self-admitted wannabe Caesar crosses his Rubicon, it better be so that some people already know what to aim at him.

    Tesla dealerships… nah. I would not advise spending energy on them. But people, being only people, get emotional and do that kind of thing.



  • As an exception to most regulations that we hear about from China, this approach actually seems well considered - something that might benefit people and work.

    Similar regulations should be considered by other countries. Labeling generated content at the source, hopefully without the metadata being too extensive (this is where China might go overboard), would help avoid at least two things (a rough sketch of what such labeling could look like follows the list):

    • casual deception
    • training AI on material generated by another AI, leading to a degradation of the ability to generate realistic content
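
    In practice, source-side labeling can be as simple as writing a machine-readable marker into the file at generation time. A minimal sketch with Pillow - the label keys and values here are hypothetical examples, not anything taken from the Chinese regulation:

```python
# Minimal sketch of labeling generated content at the source: embed a
# machine-readable marker in the image metadata when the file is written.
# The key names ("ai-generated", "generator") are hypothetical examples.
from PIL import Image, PngImagePlugin

def save_labeled(img: Image.Image, path: str, model_name: str) -> None:
    info = PngImagePlugin.PngInfo()
    info.add_text("ai-generated", "true")   # hypothetical label key
    info.add_text("generator", model_name)  # kept deliberately minimal
    img.save(path, "PNG", pnginfo=info)

save_labeled(Image.new("RGB", (64, 64)), "output.png", "example-model")
```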