To be honest, I feel like what you describe in the second part (the monkey analogy) is more of a genetic algorithm than a machine learning one, but I get your point.
Quick side note: I wasn’t including energy consumption in that discussion at all, and on that front ML-based algorithms, whatever form they take, will mostly consume more energy (assuming the “classical” alternatives aren’t completely inefficient). I admit I’m not sure how much more (especially after training), but LLMs at least, with their large vector/matrix based approaches, eat a lot (I mean the cost of cross-checking tokens against each other across vectors and such). Non-LLM ML may be much more power efficient.
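To put a rough number on that matrix-heavy part, here’s a back-of-the-envelope sketch of one scaled dot-product attention layer (the dimensions are made up, purely for illustration), showing the cost of comparing every token against every other:

```python
# Rough sketch: cost of one attention layer, with made-up dimensions.
import numpy as np

n, d = 2048, 4096                 # hypothetical context length and hidden size
Q = np.random.randn(n, d).astype(np.float32)
K = np.random.randn(n, d).astype(np.float32)
V = np.random.randn(n, d).astype(np.float32)

scores = Q @ K.T / np.sqrt(d)     # n*n*d multiply-adds: every token vs. every token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                 # another n*n*d multiply-adds

# ~2 * n^2 * d multiply-adds per layer per forward pass, and real models
# stack dozens of such layers.
print(f"~{2 * n * n * d / 1e9:.1f} GFLOPs for this single layer")
```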
My main point, however, was that people only remember AI from ~2022 onwards and forget about the things from before (e.g. non-LLM ML algorithms) that were actively used in code completion. Obviously, there are things like ruff and clang-tidy (as you rightfully mentioned) and more that can work without any machine learning. Although I didn’t check whether they literally contain none, I assume so.
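For the pre-2022 flavour of this, a toy sketch of what I mean by non-LLM ML completion: a bigram frequency model learned from code tokens. Entirely illustrative (real completers of that era were more elaborate), but it shows the idea of learning suggestions from data without anything transformer-sized:

```python
# Toy sketch of pre-LLM, ML-style code completion: a bigram frequency model.
# The corpus is a stand-in; real systems trained on far more code.
from collections import Counter, defaultdict

corpus = "for i in range ( n ) : print ( i )".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(token, k=3):
    """Suggest the k tokens most often seen after `token`."""
    return [t for t, _ in bigrams[token].most_common(k)]

print(complete("("))  # -> ['n', 'i'] with this tiny corpus
```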
On the point of game “AI”, as in AI opponents, I wasn’t talking about that at all (though since DeepMind, they have tended to be a bit more ML-based too, and better at games, see StarCraft II, instead of only cheating to get an advantage).
Yeah, my main point with all those examples was that “AI” has always been a marketing term.
Curve-fitting and data-point clustering are both pretty efficient if used for the thing they are made for.
But if you then start brute-forcing layer upon layer of those same primitives just to get a semblance of something they were never made for, of course you will end up using a lot of energy.
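For contrast with the attention sketch above, here’s roughly what the “used for what they’re made for” case looks like: a least-squares curve fit and a few iterations of plain k-means, both on synthetic data, both done in a handful of cheap passes:

```python
# Sketch: classical curve fitting and clustering on synthetic data.
import numpy as np

# Curve fitting: a least-squares polynomial fit is a direct linear solve.
x = np.linspace(0, 10, 200)
y = 3.0 * x**2 - 2.0 * x + np.random.randn(200)
coeffs = np.polyfit(x, y, deg=2)        # one pass, no training loop

# Clustering: plain k-means on two well-separated blobs.
pts = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
centers = np.array([pts[0], pts[100]])  # seed one center in each blob
for _ in range(10):
    labels = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([pts[labels == j].mean(axis=0) for j in range(2)])

print(coeffs)   # close to [3, -2, 0]
print(centers)  # close to the blob means (0, 0) and (5, 5)
```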
We humans have it pretty hard. Our brain is pretty illogical. We then build multiple layers of abstraction to form a world view, trying to match the world we live in. From those multiple layers emerges a semblance of logic.
Then we make machines.
We make machines to be inherently logical, and that makes them better at logical operations than us humans. Hence calculators.
Now someone comes along and says: let’s make an abstraction layer on top of the machine to represent illogical behaviour (kinda like our brains).
(┛`Д´)┛彡┻━┻
And then, on top of that, they want that illogical abstract machine to create abstractions inside itself, first to mimic human output and then to do logical stuff.
All of that, just so one can mindlessly feed data into it to “train” it, instead of thinking themselves and feeding it proper logic.
This is like saying they want to install an OS inside browser WASM and then install a web browser inside that OS, to do the same thing they would otherwise have done with the original browser.
In the monkey analogy, you can add that the monkeys are a simulation on a computer.