Does this specify the kinds of AI? Are none of these devs using code completion in their IDEs? Or refactoring tools? Because the bulk of them use AI these days.
I’m sure others have already explained this to you, given the number of downvotes, but algorithms aren’t the same thing as AI.
Ever since the rise of AI, people seem to have lost the ability to recall anything prior to 2019.
I mean, doesn’t it heavily depend on what you refer to as AI?
ML algorithms come very close to LLMs and were referred to as AI back in the day. They are also used in code completion.
Also, both of these rely, in principle, on algorithms. One just has an algorithm whose weights are defined by the input data.
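The distinction above (a fixed, hand-written algorithm versus one whose weights are defined by data) can be sketched with a toy perceptron. The threshold task and the numbers here are made up purely for illustration:

```python
# Two ways to get the same decision rule ("is x above ~5?"):
# 1) a hand-written algorithm, and
# 2) weights fit from labeled data by a one-feature perceptron update.
# The second is "just an algorithm" too -- its behavior is simply
# determined by the training data rather than written out by hand.

def handwritten_rule(x: float) -> int:
    """Fixed rule written directly by a human."""
    return 1 if x > 5.0 else 0

def fit_threshold(samples: list[tuple[float, int]], epochs: int = 100, lr: float = 0.1):
    """Learn weight and bias from (value, label) pairs via perceptron updates."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w * x + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w += lr * err * x           # nudge weights toward correct output
            b += lr * err
    return w, b

data = [(2.0, 0), (3.0, 0), (7.0, 1), (9.0, 1)]  # invented training set
w, b = fit_threshold(data)
learned = lambda x: 1 if w * x + b > 0 else 0    # same decision, weights from data
```

On this linearly separable toy data the perceptron update is guaranteed to converge, so both rules end up agreeing on the training points.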
No because AI replaces a human role.
Code completion does not replace a human role, that’s like saying that spell check is AI.
I am not talking about what it does, I am talking about what it is.
And all tools do tend to replace human labor. For example, tractors replaced many farmhands.
The thing we face nowadays, and this is by no means limited to AI, is that new tools create fewer jobs than the old ones they destroy (in my earlier simile, a tractor still needs mechanics and such).
The definition of something is, for the most part, disconnected from its usage.
And even though everyone now calls LLMs AI, there is plenty of scientific literature on other things that have been called AI before. When it boils down to it, all of these are algorithms.
The thing with machine learning is just that it is an algorithm that fine-tunes itself (which is often blackbox-ish, by the way). And strictly speaking, LLMs, commonly referred to as AI, are a subclass of ML built on newer technology.
I have made no statement about the value of that technology, or about my stance on it.
But these tools are not mere algorithms or ML products; they are LLM-backed.
Emmet has been around since 2015. So it was definitely not LLM backed.
My friend, nobody says all of them are LLM-backed, but some are.
Jesus fuck that’s some goal post moving.
The seal looks like this:
Code completion is probably a gray area.
Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.
You could also reasonably make a claim that the model is legally in the clear as far as licensing, if the training data was entirely open source (non-attribution, non-share-alike, and commercial-allowed) licensed code. (A big “if”)
All of that to say: I don’t think I would label code-completion-using anti-AI devs as hypocrites. I think the general sentiment is less “what the technology does” and more “who it does it to”. Code completion, for the most part, isn’t deskilling labor, or turning experts into chatbot-wrangling accountability sinks.
Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting against factories locking children inside for 18-hour shifts, getting maimed by the machines or dying trapped in a fire. It was never the technology itself, but the social order that was imposed through the technology.
Personally speaking I don’t care at all about dev tools, as they have always been used. Vibe coding does bother me though - if you don’t know HOW to code, you probably shouldn’t be doing it.
The real issue though is using AI generated assets. If you have a game that uses human made art, story, and music, no one is going to complain about you using AI. Even if you somehow managed to get there via vibe coding.
Here is a frog, please help me split its hairs
Even yesteryear’s code completion systems (that didn’t rely on LLMs) are technically speaking, AI systems.
While the term “AI” became the next “crypto” or “Blockchain”, in reality we’ve been using various AI products for the better part of the past 30 years.
“AI” has become synonymous with “Generative AI”
We used to call the code that determined NPC behaviour AI.
It wasn’t AI as we know it now but it was intended to give vaguely realistic behaviour (such as taking a sensible route from A to B).
Used to?
And honestly lightweight neural nets can make for some interesting enemy behavior as well. I’ve seen a couple games using that and wouldn’t be surprised if it caught on in the future.
You mean code completion that just parses a file into an AST and does fuzzy string matching against tokens used to build that AST? I would not personally classify that as AI. It’s code that was written by humans and is perfectly understandable by humans. There is no probabilistic component present, there is no generated matrix, there’s no training process, it’s just simple parsing and string matching.
It’s early and I’m tired and probably in a poor mood and being needlessly fussy, so I apologize if this completely misses the point of your comment. I agree that there’s other stuff we’ve been using for ages which could be reasonably classified as “AI,” but I don’t feel like traditional code completion systems fit there.
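The AST-plus-fuzzy-string-matching completion described above can be sketched in a few lines. This is a deliberately minimal illustration, not how any particular IDE actually does it (real engines also use scopes, types, and imports):

```python
# Non-ML code completion: parse the source into an AST, collect the
# identifiers that appear in it, then match a typed prefix against them.
# No training process, no weight matrices, no probabilities over tokens --
# just human-written parsing and string matching, fully inspectable.
import ast
import difflib

def collect_identifiers(source: str) -> set[str]:
    """Gather names defined or used in the source via its AST."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            names.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)
        elif isinstance(node, ast.arg):
            names.add(node.arg)
    return names

def complete(prefix: str, source: str, limit: int = 5) -> list[str]:
    """Suggest identifiers: exact-prefix hits first, then fuzzy matches for typos."""
    names = collect_identifiers(source)
    exact = sorted(n for n in names if n.startswith(prefix))
    fuzzy = difflib.get_close_matches(prefix, names, n=limit, cutoff=0.6)
    seen, out = set(), []
    for n in exact + fuzzy:
        if n not in seen:
            seen.add(n)
            out.append(n)
    return out[:limit]

sample = "def compute_total(items):\n    subtotal = sum(items)\n    return subtotal\n"
print(complete("comp", sample))  # -> ['compute_total']
```

Everything the suggester produces is derived deterministically from the file being edited, which is the distinction the comment above is drawing.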
AI doesn’t have to be probabilistic. A classical computer science definition of AI states that it has to be an agent that reacts to percepts according to some policy.
By that definition a calculator is AI.
Yes, we could definitely say that a calculator is, technically, an AI. But we usually don’t think of the calculator as an agent, and it doesn’t really make any decisions; it just displays the result when prompted.
I would primarily understand it as meaning free of generative AI (images and sound), which is what is most noticeable when actually playing a game. I’m personally not against using LLMs for coding if you actually know what you’re doing and properly review the output. However, at that point most will come to the conclusion that you could have written the code manually anyway and probably saved time.
Would using AI to generate samples as a framework for the product be permitted or not? Is placeholder generation allowed?
Since you would never see it, that’s pretty much irrelevant. Clearly this is about AI-generated art and AI-generated assets.
Whether or not you use AI to grey box something is a pointless distinction given the fact that there’s no way to prove it one way or the other.
But it still removes labor from the working class. My point is that the lines are blurry. You practically cannot draw a useful line based on the tooling used.
The AI label needs to be present if the finished product contains AI generated assets. So AI generated code, or AI generated art.
In the example above you grey boxed in AI but then replaced all the assets with ones that humans made. There is no distinction there between doing that and just having literal grey boxes.
You couldn’t require an AI label in that scenario because it would be utterly unenforceable. How would a developer prove if they did or did not use AI for temporary art?
So yes, you can draw a line: does the finished product contain AI-generated assets? You don’t like that definition because you’re being pedantic, but your pedantic interpretation isn’t enforceable, so it’s useless.
Is a scene arranged by AI not undesirable since it does not have artistic intent?
This is exactly my thought. You need to specify. Is a product “AI” when Windows is used to develop it? Windows itself is an “AI” product, in the sense that AI assisted in producing it.
Labels are meaningless without sensible rules and enforcement.