There are many languages you can build anything with… Although I’ll agree the front-end side is more tedious


You call that a criticism? It’s a first impression.


Not a single part of your answer is about how the brain works.
Concepts are not things in your brain.
Consciousness is a concept. It doesn’t exist in your brain.
Thinking is how a human uses their brain.
I’m asking about how the brain itself functions to interpret natural language.


That doesn’t answer the question you quoted.


Of course the “understanding” of an LLM is limited. The entire technology is new, and it’s nowhere close to understanding at the level of a human.
But I disagree with your understanding of how an LLM works. At its lowest level, it’s a bunch of connected artificial neurons, not that different from a human brain. Now please don’t read this as me saying it’s as good as a human brain. It’s definitely not, but its inner workings are not so far off. As a matter of fact, there is active effort to make artificial neurons behave as close as possible to a human neuron.
If it were just statistics, it wouldn’t be so difficult to look at the trained model and identify what does what. But just like with the human brain, it is incredibly difficult to understand. We just have a general idea.
So it does understand, to a limited extent. Just like a human, it won’t understand what it hasn’t been exposed to. And unlike a human, it is exposed to a very limited set of data.
You’re putting the difference between a human’s “understanding” and an LLM’s “understanding” in the meaning of the word “understanding”, which is just a shortcut to say that they can’t be compared. The actual difference is in the scope of understanding.
A lot of the effort in the AI field revolves around imitating a human brain. Which makes sense, as it is the only thing we know of that is capable of doing what we want an AI to do. LLMs are no different, but their scope is limited.


They are talking at a technical level only on one side of the comparison. It makes the entire discussion pointless. If you’re going to compare the understanding of a neural network and the understanding of a human brain, you have to go into depth on both sides.
Mysticism? Lmao. Where? Do you know what the word means?


You’re entering a more philosophical debate than a technical one, because for this point to make any sense, you’d have to define what “understanding” language means for a human in a level as low as what you’re describing for an LLM.
Can you affirm that what a human brain does to understand language is so different from what an LLM does?
I’m not saying an LLM is smart, but saying that it doesn’t understand, when having computers “understand” natural language is the core of NLP, is meh.


That is actually incorrect. It is also a language understanding tool. You don’t have an LLM without NLP. NLP includes processing and understanding natural language.


Haha. Ha.
Now, your friend with the brilliant idea doesn’t need you anymore and can ask a chatbot to make his brilliant app all by himself!
That is definitely a great benefit of vibe coding: it’s an idiot magnet and frees up our brainspace.
Same here. I obviously don’t remember everything because I rarely if ever have to use them, but at least when the time finally comes that I need “git bisect”, I’ll know that “git bisect” exists and I’ll be able to go straight to the manual page that documents it.
No one expects anyone to read the manual and remember it all… But you will naturally remember the broad strokes and be able to refer to the right place when you need something.
I am using KDE’s Plasma 6 as a DE with Wayland. The compositor (window managers are an Xorg thing) is KWin.
The shortcuts I use are Meta+Up/Down/Left/Right. I can’t remember if they’re default or if I set them this way.
I prefer to switch down to the VD with the doc in fullscreen rather than moving my head to another monitor
When I discovered it can be arranged in a grid, it made VDs so much more useful.
’Cause a single row with the same number of VDs (9)… Ugh, not fun haha
Even though you can map each to a shortcut, it’s still tougher to use than a grid with directional shortcuts!
Maybe a cross setup would work for you if you ever need a 5th VD :)
Haha that’s fair
Although it’s a habit thing. Most of these are fixed, I never switch them to a different position. So the only ones I have to remember are A1-2 if I am using them; the rest is as easy as knowing where your glasses are stored in your cupboards.
Faster switching. Think of each column as 1-3 and each row as A-C.
B2 is my terminals, B3 is my IDE, B1 is a secondary IDE (for instance, DataGrip), C row is browser windows, A1-2 is temporary, not often used windows, A3 is communication apps. I mostly use A3, B2-3 and C2-3. It’s all mapped in my head so I can instantly switch to whichever VD I need.
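Just to illustrate the mental model: a minimal sketch of that grid mapping, assuming the nine desktops are numbered 1-9 row by row (the cell names like “B2” are my own convention, not anything KWin exposes):

```python
# Sketch of a 3x3 virtual-desktop grid, rows A-C top to bottom,
# columns 1-3 left to right, desktops numbered 1-9 row by row.
ROWS = "ABC"
COLS = (1, 2, 3)

def desktop_index(cell: str) -> int:
    """Map a cell name like 'B2' to a linear desktop number (1-9)."""
    row = ROWS.index(cell[0])       # 'B' -> 1 (second row)
    col = int(cell[1]) - 1          # '2' -> 1 (second column)
    return row * len(COLS) + col + 1

print(desktop_index("B2"))  # 5, the center of the grid
```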
Same. So now I am renting a place to work and it’s much better :)
People can joke about little problems of their life while knowing that there are much bigger problems in the world.
I would say your biggest issue here is needing precise decimal computations while using imprecise data types. Any software that requires precision in the decimals needs to use types made for exact decimal arithmetic. No floating-point error.
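For example, in Python the standard-library `decimal` module stores base-10 digits exactly, where binary floats accumulate representation error:

```python
from decimal import Decimal

# Binary floats can't represent 0.1 or 0.2 exactly, so the sum drifts:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal (constructed from strings, not floats) stays exact:
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Note the string constructor: `Decimal(0.1)` would inherit the float’s error, which defeats the purpose.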