• 0 Posts
  • 43 Comments
Joined 8 months ago
Cake day: July 7th, 2024

  • Interesting that you get downvoted for this. I once mocked someone for claiming the opposite, that $0.5m was some enormous amount of money we shouldn’t be wasting, and I simply pointed out that we waste literally billions around the world on endless wars killing random people for no reason, so it is silly to go after quantum computing’s comparatively tiny budget if budgeting is your actual concern. People seemed to really hate me for saying that, or maybe they just actually like wasting money on bombs to drop on children and want to cut everything but that.






  • bunchberry@lemmy.world to Science Memes@mander.xyz · SHINY · 3 months ago (edited)

    It’s always funny seeing arguments like this as someone with a computer science education. A lot of people act like you can’t have anything complex unless some intelligent being deterministically writes a lot of if-else statements to implement it, which requires them to know and understand in detail what they are implementing at every step.

    But what people don’t realize is that this is not how it works at all. There are many problems that are just impractical to actually “know” how to solve, yet we solve them all the time, such as voice recognition. Nobody in human history has ever written a bunch of if-else statements that can accurately translate someone’s voice to text; the problem is too complicated, and no one on earth knows how to solve it that way.

    Yet, of course, your phone can do voice recognition just fine. That is because you can put together a generic class of algorithms that find solutions to problems on their own, without you even understanding how to solve the problem. These algorithms are known as metaheuristics. Metaheuristics cannot be fully deterministic; they require random noise to work properly, because a purely deterministic search will always greedily move in the direction of a more correct solution and never explore worse ones, even though an even better solution may lie beyond the horizon of many worse ones. But they cannot be fully random either: you need some greed, or else the random exploration would be aimless.

    A simple example of a metaheuristic is annealing. If you want to strengthen a sword, you can heat the metal until it is very hot and let it cool slowly. While it is hot, the atoms in the sword randomly explore different configurations; as it cools, they explore less and less, and the overall process leads them to rather optimal configurations that strengthen the crystalline structure of the metal.

    This simple process can actually be applied generally to solve pretty much any problem. For example, if you are trying to figure out the optimal route to deliver packages, you can simulate this annealing process, but rather than atoms searching for an optimal crystalline structure, you have different orders of stops on a graph searching for the shortest path. The “temperature” is a variable representing how much random exploration you are willing to accept, i.e. if you alter the configuration and it gets worse, how much worse it can be before you reject it. A higher temperature accepts worse solutions; at very low temperatures you only accept solutions that improve upon the route.
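    The annealing loop described above can be sketched in a few lines. This is only an illustrative toy (the function names, cooling schedule, and parameter values are all made up for the example), not production routing code:

```python
import math
import random

def tour_length(order, dist):
    """Total length of a closed tour visiting the stops in the given order."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def anneal_route(dist, steps=20000, t_start=10.0, t_end=0.01):
    """Toy simulated annealing for a delivery-route problem.

    dist is a symmetric matrix of pairwise distances. The temperature
    controls how willing we are to accept a *worse* route.
    """
    n = len(dist)
    order = list(range(n))
    random.shuffle(order)
    best = list(order)
    for step in range(steps):
        # Exponential cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        # Random exploration: propose swapping two stops.
        i, j = random.sample(range(n), 2)
        candidate = list(order)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = tour_length(candidate, dist) - tour_length(order, dist)
        # Greedy part: always accept improvements; accept a worse route
        # with probability exp(-delta / t), so a high temperature means
        # more wandering and a low temperature means pure greed.
        if delta < 0 or random.random() < math.exp(-delta / t):
            order = candidate
        if tour_length(order, dist) < tour_length(best, dist):
            best = list(order)
    return best
```

    Note that the acceptance rule is the whole trick: at high temperature even bad swaps get through, which is exactly the random exploration that lets the search escape local optima.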

    I once implemented this algorithm to solve sudoku puzzles, and it was very quick at doing so. The funny thing is, I’ve never even played sudoku! I do not know how to efficiently solve a sudoku puzzle, and I’ve honestly never even solved one by hand, but sudoku has the property that verifying a solution is trivially easy even when finding one is very difficult. So all I had to do was write the annealing algorithm so that the greedy aspect is based on verifying how many rows/columns are correct, and the exploration part just randomly moves numbers around.

    There are tons of metaheuristic algorithms, and many of them we learn from nature, like annealing; there are also genetic algorithms. The random exploration is done through random mutations in each generation, while the deterministic, greedy aspect is that only the most optimal individuals are chosen to produce the next generation. This too is a generic algorithm that can be applied to solve any problem. People have even used genetic algorithms to teach a computer how to fly a plane in a simulation.
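    A minimal genetic algorithm looks something like this. The problem here (maximize the number of 1 bits in a string, often called “OneMax”) and all the parameter values are just placeholders chosen to keep the sketch short:

```python
import random

def evolve(bits=32, pop_size=40, generations=60, mutation_rate=0.02):
    """Tiny genetic algorithm maximizing the number of 1s in a bitstring.

    Random exploration comes from mutation; the deterministic, greedy
    part is that only the fittest half survives to reproduce.
    """
    fitness = sum  # fitness of a bitstring = count of its 1 bits
    pop = [[random.randint(0, 1) for _ in range(bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Greedy selection: keep only the fittest half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, bits)          # single-point crossover
            child = a[:cut] + b[cut:]
            # Random exploration: flip each bit with small probability.
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

    The same loop works for any problem where you can score a candidate, which is the whole appeal: swap out the fitness function and the representation, and the algorithm itself never changes.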

    Modern AI is based on neural networks, whose greedy aspect is something called backpropagation. On its own, backpropagation is not a metaheuristic, but modern AI training arguably qualifies, because it does not actually work until you introduce random exploration, such as a method known as dropout, whereby you randomly remove neurons during training to discourage the neural network from overfitting. Backpropagation plus dropout forms a kind of metaheuristic with both a greedy and an exploratory aspect, and can be used to solve just about any generic problem. (Technically, ANNs are just function approximators, so if you want to think of this as a metaheuristic, the full metaheuristic would have to include all the steps of creating, training, and then applying the ANN in practice; a metaheuristic is a list of steps for solving a generic problem, whereas an ANN on its own is just a function approximator.)
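    Dropout itself is simple to sketch. This is an illustrative stand-alone version of “inverted dropout” (the function name and probability are arbitrary choices for the example), not code from any particular framework:

```python
import random

def dropout(activations, p=0.5, training=True):
    """Randomly zero each activation with probability p during training.

    Survivors are scaled by 1 / (1 - p) ("inverted dropout") so the
    expected activation is unchanged, and at inference time the layer
    is just the identity.
    """
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1 - p)
            for a in activations]
```

    During training the network effectively never sees the same subnetwork twice, which supplies the random-exploration half of the metaheuristic; backpropagation supplies the greedy half.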

    Indeed, that’s how we get phones to recognize speech and convert it to text. Nobody sat down and wrote a bunch of if-else statements to translate speech into text. Rather, we took a generic nature-inspired algorithm that can produce solutions for any problem, and just applied it to speech recognition, and kept increasing the amount of compute until it could solve the problem on its own. Once it solves it, the solution it spits out is kind of a black box. You can put in speech as an input, and it gives you text as an output, but nobody really even knows fully what is going on in between.

    People often act like computers could not solve problems unless humans could also solve them, but computers have already solved millions of problems that not only no human has ever solved, but whose solutions no human could even possibly understand. All we know from studying nature is that there are clever ways to combine random exploration and deterministic greed to form processes that can solve any arbitrary problem given enough time and resources, so we implement those processes in computers and keep throwing more time and resources at them until they spit out an answer.

    We already understand how nature can produce complex things without anyone “knowing” how they work, because we do the same thing ourselves all the time. You do not need a sentient being to tell the beetle how to evolve to fit its environment. There is random exploration caused by genetic mutations, and also a deterministic, greedy aspect caused by “survival of the fittest.” This causes living organisms to gradually develop, over many generations, into something fit for their environment. And life has had plenty of time and resources: it has been evolving for billions of years with the whole resources of the planet Earth and the Sun.




  • It is ultimately a philosophical choice, not demanded by the mathematics, to interpret reality as oscillating waves. Erwin Schrodinger, for example, argued against the notion that particles really “spread out” as waves, and instead argued that the particle just hops from interaction to interaction without having meaningful existence in between interactions. If you go this route, then the wave function doesn’t “describe” anything, but rather predicts where particles would hop to during an interaction.

    The reason Schrodinger argued in favor of this is that, as he said, treating particles as actually spreading out as waves contradicts the fact that we only ever measure particles, so you need an additional postulate saying these waves suddenly collapse back into particles the moment you try to measure them, and he did not see why “measurement” should play a fundamental role in the theory. This is sometimes called the “measurement problem,” and Heisenberg’s formulation and interpretation does not have this problem.

    If you mean, can you get rid of the wave function entirely, the answer is also yes. When quantum mechanics was first formulated, it was formulated using Heisenberg’s matrix mechanics, which make all the same predictions but does not use the wave function. The wave function is a result of a particular mathematical formalism. There is another formulation of quantum mechanics called the path integral formulation, and yet another called the ensemble in state space formulation.

    The probability of finding an electron or any other particle at one point or another can be imagined as a diffuse cloud, denser where the probability of seeing the particle is stronger. Sometimes it is useful to visualize this cloud as if it were a real thing. For instance, the cloud that represents an electron around its nucleus indicates where it is more likely that the electron appears if we look at it. Perhaps you encountered them at school: these are the atomic ‘orbitals’.

    This cloud is described by a mathematical object called wave function. The Austrian physicist Erwin Schrödinger has written an equation describing its evolution in time. Quantum mechanics is often mistakenly identified with this equation. Schrödinger had hopes that the ‘wave’ could be used to explain the oddities of quantum theory: from those of the sea to electromagnetic ones, waves are something we understand well. Even today, some physicists try to understand quantum mechanics by thinking that reality is the Schrödinger wave.

    But Heisenberg and Dirac understood at once that this would not do. To view Schrödinger’s wave as something real is to give it too much weight – it doesn’t help us to understand the theory; on the contrary, it leads to greater confusion. Except for special cases, the Schrödinger wave is not in physical space, and this divests it of all its intuitive character. But the main reason why Schrödinger’s wave is a bad image of reality is the fact that, when a particle collides with something else, it is always at a point: it is never spread out in space like a wave. If we conceive an electron as a wave, we get in trouble explaining how this wave instantly concentrates to a point at each collision.

    Schrödinger’s wave is not a useful representation of reality: it is an aid to calculation which permits us to predict with some degree of precision where the electron will reappear. The reality of the electron is not a wave: it is how it manifests itself in interactions, like the man who appeared in the pools of lamplight while the young Heisenberg wandered pensively in the Copenhagen night.

    — Carlo Rovelli, “Reality is Not what it Seems”

    Of course, you might say that this is still not “macroscopically similar to ours” because in our classical world we do not need to treat objects as if they only exist in the moment of interaction. There is always a tradeoff in quantum mechanics. It’s not a classical theory. There will always be some differences, so it really depends upon what differences you find the most intuitive/acceptable. If you find the oscillating wave picture to be too bizarre then you can think of them just as particles, with the tradeoff that they only exist relative to what they are interacting with in the moment.



  • Honestly, the random number generation on quantum computers is practically useless. Their speed will not get anywhere near that of a pseudorandom number generator, and there are very simple ones you can implement that are blazing fast, far faster than anything a quantum computer will spit out, and that produce numbers widely considered in the industry to be cryptographically secure. You can use AES as a PRNG, for example, and most modern CPUs, such as x86 processors, have hardware-level AES implementations. This is why modern computers let you encrypt your drive: you can have a terabyte file that is encrypted, and your CPU can decrypt it in the time it takes for the window to pop up after you double-click it.

    While a PRNG does require an entropy pool, the pool does not need to be large; you can spit out terabytes of cryptographically secure pseudorandom numbers from a fraction of a kilobyte of entropy data. And again, most modern CPUs include instructions to grab this entropy data; for example, Intel CPUs have an RDSEED instruction that lets you grab thermal noise from the CPU. To make potential exploits harder, most modern OSes mix other sources into this pool as well, like fluctuations in fan voltage.
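    The idea of stretching a small entropy pool into an effectively unlimited stream is easy to sketch. Python’s standard library has no AES, so this toy uses SHA-256 in counter mode instead; it illustrates the principle only, and is not how any real OS implements its generator:

```python
import hashlib
import os

def csprng_stream(seed: bytes, nbytes: int) -> bytes:
    """Stretch a small entropy seed into a long pseudorandom stream.

    This runs SHA-256 in counter mode: hash (seed || counter) for
    counter = 0, 1, 2, ... and concatenate the digests. Real systems
    use primitives like AES-CTR or ChaCha20 instead, but the principle
    is the same: a few hundred bits of entropy yield an effectively
    unlimited stream that is infeasible to tell apart from true noise.
    """
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out.extend(hashlib.sha256(seed + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:nbytes])

# A 32-byte seed (like the entropy a kernel collects at boot) is enough
# to drive as much output as you want.
boot_seed = os.urandom(32)
stream = csprng_stream(boot_seed, 1 << 20)  # a megabyte of pseudorandom bytes
```

    The same seed always yields the same stream, which is why the seed itself must come from a genuine entropy source and be kept secret.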

    Indeed, Linux used to offer a separate way to read random numbers directly from the entropy pool and another to read pseudorandom numbers: /dev/random and /dev/urandom. If you read from the entropy pool and it ran out, the program would freeze until more entropy could be collected, which is why some old Linux programs would freeze until you did things like move your mouse around.

    But you don’t see this anymore, because generating enormous amounts of cryptographically secure random numbers is so easy with modern algorithms that modern Linux just collects a little entropy at boot and uses it to seed all pseudorandom numbers afterwards, doing away with the need to read the pool directly; /dev/random and /dev/urandom now have the same behavior internally in the OS. Any time your PC needs a random number, it pulls from the pseudorandom number generator seeded at boot, and that short window of entropy collection at boot is enough to generate sufficient pseudorandom numbers basically forever; these are the numbers used for any cryptographic application you may choose to run.

    The point of all this is that random number generation is genuinely a solved problem; people don’t appreciate just how easy it is to produce practically infinite cryptographically secure pseudorandom numbers. While on paper quantum computers are “more secure” because their random numbers would be truly random, in practice you would literally never notice a difference. If you gave two PhD mathematicians or statisticians the same message, one encrypted using a quantum random number generator and one encrypted using a PRNG based on AES or ChaCha20, and asked them to decipher them, they would not be able to decipher either. In fact, I doubt they would even be able to identify which one was encrypted using the quantum random number generator. A string of random numbers looks just as “random” to any randomness test suite whether it came from a QRNG or from a high-quality PRNG (usually called a CSPRNG).

    I do think that, at least on paper, quantum computers could be a big deal if the engineering challenges can ever be overcome, but quantum cryptography, such as “the quantum internet,” is largely a scam. All the cryptographic aspects of quantum computers are practically the same as, if not worse than, traditional cryptography, with only theoretical benefits that are technically there on paper but that nobody would ever notice in practice.



  • Schrödinger was not “rejecting” quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.

    What you say here is a popular opinion, but it doesn’t hold up to any scrutiny, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory in which probability amplitudes are complex-valued, so an amplitude can be negative or even imaginary. You interpret what these amplitudes mean for physical reality based on how far they are from zero (the further from zero, the more probable), but the negative signs allow things to cancel out in ways that would not occur in normal probability theory. These cancellations are known as interference effects, and they are the hallmark of quantum mechanics.

    Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be both “smeared out” here and there simultaneously.

    Ironically, it was Schrödinger himself that spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning, it fully makes all the same predictions as traditional quantum mechanics. It is a mathematically equivalent theory. Yet, what is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

    Schrödinger did not like this on philosophical grounds due to the lack of continuity: there were discrete “gaps” between interactions. He criticized it, saying “I do not believe that the electron hops about like a flea,” and came up with his famous wave equation as a replacement. This wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and it makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

    However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it “jumps” into being a particle at random, which is sometimes called the “collapse of the wave function.” This made even less sense, because suddenly there is a special role for measurement. Take the cat example. Why doesn’t the cat’s observation of this wave cause it to “collapse,” while the person’s observation does? There is no special role for “measurement” in quantum mechanics, so it is unclear how to even answer this within the framework of the theory.

    Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.

    There is no reason you have to believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, carries an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem; but there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, this isn’t a problem. If I flip a coin and see that it landed heads, I would say it has a 100% chance of being heads, but if I covered it up so you did not see it, you would assign a 50% probability to heads or tails. If you believe the wave function represents a physical entity, however, then you can set up something similar in quantum mechanics whereby two different observers describe two different waves, and so the physical shape of the wave would have to differ based on the observer.

    There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there’s four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.
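    The exponential blow-up is easy to see by just enumerating the outcomes. A quick illustrative check (the function name is mine, nothing standard):

```python
from itertools import product

def outcomes(n_bits):
    """All bitstrings of length n_bits. A probability amplitude would be
    assigned to each one, so the state description has 2**n_bits entries."""
    return ["".join(bits) for bits in product("01", repeat=n_bits)]
```

    Ten bits already need 1024 amplitudes, and three hundred bits would need more amplitudes than there are atoms in the observable universe.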

    This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as a Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at ya. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.


  • It is weird that you start by criticizing the idea that our physical theories are descriptions of reality and then end by criticizing the Copenhagen interpretation, since that is the Copenhagen interpretation: it says that physics is not about describing nature but about describing what we can say about nature. It doesn’t make claims about underlying ontological reality; it specifically says we cannot make those claims from physics, and thus treats the maths in a more utilitarian fashion.

    The only interpretation of quantum mechanics that actually tries to interpret it at face value as a theory of the natural world is relational quantum mechanics, which isn’t that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality as something absolute and point-of-view independent, so most academics struggle to comprehend what it would even mean for reality to be relative all the way down; interpreting quantum mechanics at face value as a theory of nature is thus actually very unpopular.

    All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

    Since you talk about “free will,” I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the modern day.

    Without it, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles “know” how to correlate to one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construe the maths in such a way that the hidden variables would be Lorentz invariant.

    Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires more than just accounting for quantum mechanics: it must be done in a way that is also compatible with special relativity.


  • i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than knowing its biological background.

    Personally, no offense, but I think this is a contradiction in terms. If we cannot define “consciousness,” then you cannot say we don’t understand it. Don’t understand what? If you have not defined it, then saying we don’t understand it is like saying we don’t understand akokasdo. There is nothing to understand about akokasdo, because it doesn’t mean anything.

    In my opinion, “consciousness” is largely a buzzword, so there is just nothing to understand about it. When we actually talk about meaningful things like intelligence, self-awareness, experience, etc, I can at least have an idea of what is being talked about. But when people talk about “consciousness” it just becomes entirely unclear what the conversation is even about, and in none of these cases is it ever an additional substance that needs some sort of special explanation.

    I have never been convinced of panpsychism, IIT, idealism, dualism, or any of these philosophies or models because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness vaguely so that you can’t pin down what it is, which makes people think we need some sort of special theory of consciousness, but if you can’t pin down what consciousness is then we don’t need a theory of it at all as there is simply nothing of meaning being discussed.

    They cannot justify themselves in a vacuum. Take IIT for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but “consciousness” would just be defined as whatever IIT is quantifying. The issue is that IIT has not given me a reason why I should care about what it is quantifying. There is a reason, of course, but it is implicit: what IIT quantifies is supposed to be the same as the “special” consciousness that supposedly needs some sort of “special” explanation (i.e. the “hard problem”), and this implicit reason requires you to not treat IIT in a vacuum.


  • Bruh. We literally don’t even know what consciousness is.

    You are starting from the premise that there is this thing out there called “consciousness” that needs some sort of unique “explanation.” You have to justify that premise. I do agree there is difficulty in figuring out the precise algorithms and physical mechanics that the brain uses to learn so efficiently, but somehow I don’t think this is what you mean by that.

    We don’t know how anesthesia works either, so he looked into that and the best he got was it interrupts a quantom wave collapse in our brains

    There is no such thing as “wave function collapse.” The state vector is just a list of probability amplitudes, and you reduce that list to a definite outcome because you observed what the outcome is. If I flip a coin with a 50% chance of heads and a 50% chance of tails, and it lands on tails, I reduce the probability distribution to 100% for tails. There is no “collapse” going on here. Objectifying the state vector is a popular trend when talking about quantum mechanics, but it has never made any sense.

    So maybe Roger Penrose just wasted his retirement on this passion project?

    Depends on whether or not he is enjoying himself. If he’s having fun, then it isn’t a waste.


  • The only observer of the mind would be an outside observer looking at you. You yourself are not an observer of your own mind, nor could you ever be. I think it was Feuerbach who originally made the analogy that if your eyeballs evolved to look inwardly at themselves, they could not look outwardly at the outside world. We cannot observe our own brains, as they exist only to build models of reality; if a brain contained a model of itself, it would have no room left over to model the outside world.

    We can only assign an object to be what is “sensing” our thoughts through reflection. Reflection is ultimately still building models of the outside world but the outside world contains a piece of ourselves in a reflection, and this allows us to have some limited sense of what we are. If we lived in a universe where we somehow could never leave an impression upon the world, if we could not see our own hands or see our own faces in the reflection upon a still lake, we would never assign an entity to ourselves at all.

    We assign an entity to ourselves for the specific purpose of distinguishing ourselves as an object from other objects, but this is not an a priori notion (“I think therefore I am” is lazy sophistry). It is an a posteriori notion derived through reflection upon what we observe. We never actually observe ourselves, as such a thing is impossible. At best we can observe reflections of ourselves and derive some limited model of what “we” are, but there will always be a gap between what we really are and the reflection of what we are.

    Precisely what is “sensing your thoughts” is yourself derived through reflection which inherently derives from observation of the natural world. Without reflection, it is meaningless to even ask the question as to what is “behind” it. If we could not reflect, we would have no reason to assign anything there at all. If we do include reflection, then the answer to what is there is trivially obvious: what you see in a mirror.




  • Why are you isolating a single algorithm? There are tons of algorithms that speed up various aspects of linear algebra, not just that single one, and there have been many improvements to these algorithms since they were first introduced; the literature contains a lot more than what is in the popular consciousness.

    The point is not that it will speed up every major calculation, but these are calculations that could be made use of, and there will likely even be more similar algorithms discovered if quantum computers are more commonplace. There is a whole branch of research called quantum machine learning that is centered solely around figuring out how to make use of these algorithms to provide performance benefits for machine learning algorithms.

    If they would offer speed benefits, then why wouldn’t you want the chip that offers those benefits in your phone? Of course, in practical terms we likely will not have this, due to the difficulty and expense of quantum chips and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that if consumers somehow could have access to technology in their phone that would offer performance benefits to their software, they wouldn’t want it.

    That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.

    It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there’s no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don’t see why consumers wouldn’t want it.

    I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won’t outweigh the cost anytime soon.