• 0 Posts
  • 56 Comments
Joined 9 months ago
Cake day: May 29th, 2024

    1. You can host a webserver on a Raspberry Pi. I don’t know what you’re doing with your setup, but you absolutely do not need hundreds of watts to serve a few hundred KB worth of static webpage or PDF file. This website is powered by a 30 watt solar panel attached to a car battery on some guy’s apartment balcony. As of writing it’s at 71% charge.

    2. An Ampere Altra Max CPU has 128 ARM cores (the same architecture a Raspberry Pi uses), with a 250 watt max TDP. That works out to about 2 watts per core. Each of those cores is more than enough to serve a little static webpage on its own, but in reality, since a lot of these sites get fewer than 200 hits per day, the power cost can be amortized over thousands of them, and the individual cores can go to sleep if there’s still not enough work to do. Go ahead and multiply that number by 4 for failover if you want; it’s still not a lot. (Not that the restaurant knows or cares about any of this; it would all be decided by a team of people at a massive IT company the restaurant bought webpage hosting from.) The rough math is sketched below.
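    A rough back-of-the-envelope sketch: the 250 W TDP, 128 cores, and 200 hits/day figures come from the comment above, while the thousand-sites-per-core amortization is purely an illustrative assumption.

    ```python
    # Back-of-the-envelope numbers for static hosting on a many-core ARM server.
    # TDP and core count are the Ampere Altra Max figures mentioned above; the
    # sites-per-core amortization is an illustrative assumption.

    tdp_watts = 250
    cores = 128
    watts_per_core = tdp_watts / cores                   # ~1.95 W per core

    hits_per_day = 200                                   # small site, per the comment
    sites_per_core = 1_000                               # assumed amortization
    requests_per_sec = sites_per_core * hits_per_day / 86_400   # ~2.3 req/s per core
    watts_per_site = watts_per_core / sites_per_core     # ~2 mW per site

    print(f"{watts_per_core:.2f} W/core, {requests_per_sec:.1f} req/s, "
          f"{watts_per_site * 1000:.1f} mW per site")
    ```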


  • Don’t listen to the people who say it works by displacing oxygen. It would never be used as a general anesthetic if that was the mechanism of action.

    Xenon has been used as a general anesthetic, but it is more expensive than conventional anesthetics.

    Xenon is a high-affinity glycine-site NMDA receptor antagonist.[155] However, xenon is different from certain other NMDA receptor antagonists in that it is not neurotoxic and it inhibits the neurotoxicity of ketamine and nitrous oxide (N2O), while actually producing neuroprotective effects.[156][157] Unlike ketamine and nitrous oxide, xenon does not stimulate a dopamine efflux in the nucleus accumbens.[158]

    Xenon has a minimum alveolar concentration (MAC) of 72% at age 40, making it 44% more potent than N2O as an anesthetic.[164] Thus, it can be used with oxygen in concentrations that have a lower risk of hypoxia.

    https://en.wikipedia.org/wiki/Xenon
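    For context on the “44% more potent” figure in that quote: anesthetic potency scales inversely with MAC. Taking the commonly cited N2O MAC of about 104% (an assumption, not stated in the quote above):

    ```python
    # Potency of an inhaled anesthetic is inversely proportional to its MAC.
    # Xenon MAC ~72% is from the quote above; N2O MAC ~104% is a commonly cited figure.
    mac_xenon, mac_n2o = 72, 104
    print(f"xenon is ~{(mac_n2o / mac_xenon - 1) * 100:.0f}% more potent than N2O")  # ~44%
    ```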


  • So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are pretty commonly used in LiDAR systems already.

    The ones talked about in the article I linked collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that’s way worse than what’s presented in the phys.org article, but that’s also measuring from 3 km away while covering an area of 700 square km per hour (because these systems are used for wide-area terrain scanning from airplanes; the point rate that implies is sketched at the end of this comment). Given how LiDAR works, the system in the phys.org article could be scanning with a very narrow beam to get way more datapoints per square meter.

    Now, this doesn’t mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they’re using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn’t read the actual paper, and honestly I don’t know enough about LiDAR and photon detectors to really be able to compare those stats.

    But I do know enough to say that the range and single-photon capability of this system aren’t really the special parts of it, if it’s special at all.
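    For scale, the point rate implied by those airborne-survey numbers (both figures are the ones quoted above; this is just the arithmetic):

    ```python
    # Rough throughput implied by the airborne wide-area scanning figures above.
    points_per_m2 = 50
    area_m2_per_hour = 700 * 1_000_000        # 700 km^2 expressed in m^2
    points_per_second = points_per_m2 * area_m2_per_hour / 3600
    print(f"{points_per_second:,.0f} points per second")   # ~9.7 million
    ```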


  • At the end of the day it comes down to this:

    Is it cheaper to store steel stock in a warehouse or terawatt-hours of electricity in a battery farm?

    Is it cheaper to perform maintenance on 2 or 3x the number of smelters, or is it cheaper to maintain millions of battery or pumped-hydro facilities?

    I’m sure production companies would love it if governments or electrical companies bore the costs of evening out fluctuations in production, just like I’m sure farmers would love it if money got teleported into their bank account for free and they never had to worry about growing seasons. But I’m not sure that’s the best situation for society as a whole.

    EDIT: I guess there’s a third factor which is transmission. We could build transmission cables between the northern and southern hemispheres. So, is it cheaper to build and maintain enormous HVDC (or even superconducting) cables than it is to do either of the two things above? And how do governments feel about being made so dependent on each other?

    We can do a combination of all three of course, picking and choosing the optimal strategy for each situation, but like I said above I tend to think that one of those strategies will be disproportionately favorable over the others.



  • Specifically, they are completely incapable of unifying information into a self-consistent model.

    To use an analogy: you see a shadow and know it’s being cast by some object with a definite shape, even if you can’t be sure what that shape is. An LLM sees a shadow, and its idea of what’s casting it is as fuzzy and mutable as the shadow itself.

    Funnily enough, old-school AI from the 70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of robots like HAL 9000 and Star Trek’s Data comes from; a toy sketch of that kind of fact collation is below.) However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.

    With the rise of machine learning and large artificial neural networks we solved the problem of dealing with implicit, ambiguous, and paradoxical information, but in the process we completely removed the ability to reason logically.
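    A toy sketch of the fact-collation idea, nothing like how a real 1970s logic engine was implemented; all the fact and rule names here are made up for illustration:

    ```python
    # Toy forward-chaining rule engine: collate facts, then check consistency.
    # A human (or an LLM) can hold both of HAL's directives without noticing the
    # conflict; this engine flags it mechanically once the facts are collated.

    facts = {"hal_must_relay_accurate_information", "hal_must_conceal_the_mission"}
    rules = [
        ({"hal_must_relay_accurate_information"}, "hal_tells_the_crew"),
        ({"hal_must_conceal_the_mission"}, "not hal_tells_the_crew"),
    ]

    # Forward-chain until no rule adds anything new.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    # Consistency check: a statement and its negation cannot both be facts.
    contradictions = {f for f in facts if f.startswith("not ") and f[4:] in facts}
    print("derived facts:", facts)
    print("contradictions:", contradictions or "none")
    ```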


  • That sounds absolutely fine to me.

    Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning-disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.

    In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.

    If you need high R/W performance and huge capacity at the same time (like for editing gigantic high-resolution videos), you probably want some kind of RAID array.


  • That’s what Google was trying to do, yeah, but IMO they weren’t doing a very good job of it (really old Google search was good if you knew how to structure your queries, but then they tried to make it so you could ask plain-English questions instead of having to think about what keywords you were using, and that ruined it IMO). And you also weren’t able to run it against your own documents.

    LLMs on the other hand are so good at statistical correlation that they’re able to pass the Turing test. They know what words mean in context (inasmuch as they “know” anything) instead of just matching keywords and a short list of synonyms. So there’s reason to believe that if you were able to see which parts of the source text the LLM considered to be the most similar to a query, that could be pretty good.

    There is also the possibility of running one locally to search your own notes and documents. But like I said I’m not sure I want to max out my GPU to do a document search.



  • Being able to summarize and answer questions about a specific corpus of text was a use case I was excited for even knowing that LLMs can’t really answer general questions or logically reason.

    But if Google search summaries are any indication, they can’t even do that. And I’m not just talking about the screenshots people post; this is my own experience with it.

    Maybe if you could run the LLM in an entirely different way, such that you could enter a question and it tells you which part of the source text statistically correlates the most with the words you typed, instead of trying to generate new text (a rough sketch of what I mean is below). That way, in a worst-case scenario, it just points you to a part of the source text that’s irrelevant instead of giving you answers that are subtly wrong or misleading.

    Even then I’m not sure the huge computational requirements make it worth it over ctrl-f or a slightly more sophisticated search algorithm.
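    A minimal sketch of that “point me at the most relevant passage” idea: embed the document chunks and the query, then rank chunks by similarity instead of generating any new text. The sentence-transformers library, the model name, and the sample chunks are assumptions for illustration, not anything from the comments above.

    ```python
    # Rank document chunks by semantic similarity to a query; return a passage,
    # never generated text. Model and library choices are illustrative assumptions.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")   # small model, runs fine on CPU

    chunks = [
        "The invoice is due within 30 days of delivery.",
        "Backups run nightly and are kept for 90 days.",
        "Support tickets are answered within two business days.",
    ]
    query = "how long are backups retained?"

    chunk_emb = model.encode(chunks, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_emb, chunk_emb)[0]
    best = int(scores.argmax())
    print(f"most relevant passage (score {scores[best].item():.2f}): {chunks[best]}")
    ```

    An embedding model this size is a tiny fraction of the compute of running a full LLM, though whether it actually beats ctrl-f or a decent keyword index depends a lot on the documents.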


  • Are you misreading “preparing” as literally any writing

    “Prepare derivative works” means not just any writing, but literally anything creative. If you paint a picture of a character from a book, using specific details described in that book such as their appearance and name, you are creating a derivative work.

    https://law.stackexchange.com/questions/78442/what-is-considered-a-derivative-work

    Even that Wikipedia article goes into fair use.

    Fair use carves out an exception for parody, criticism, discussion, and education. “Entertainment” or “because I like the series and these characters” is not one of those reasons. Fan fiction might qualify as parody, though.

    What effect on the market can there be for a fan remaster of a 20 year old game that isn’t for sale anymore? Hard to argue that doesn’t fall under fair use.

    This is not how “the effect of the use upon the potential market for or the value of the copyrighted work” part of fair use works.

    A company can create a work, sit on it for literally 100 years doing nothing with it and making not a single cent from it, then sue you for making a nonprofit fan work of it. Steamboat Willie is 95 years old and until just this year you could have been sued for drawing him. Note that, in the eyes of the law, Steamboat Willie is effectively a different character than Mickey Mouse.

    Again, I cannot stress enough how it doesn’t matter at all whether you are personally profiting from something or whether you are affecting a market. The word “potential” in that quote above is doing a lot of work:

    A father in the UK wanted to put Spider-Man on the gravestone of his 4-year-old son, who loved the character. Disney said “no”. Disney does not make tombstones. You are not eating into their profits by putting Spider-Man on a tombstone. And yet, in the eyes of the law, Disney has every right to stop you, since they might decide to start up a tombstone business next week.

    Nothing I have written here is legal advice.

    EDIT: I am not a fan of any of this. I think you should be able to write nonprofit fanfiction without worrying that some corporation might sue you. I am on your side on this. But this is the reality we live in.