I feel like AI is a 5G language, in that we've moved on from writing code directly to writing md files that command the bots to write the code. It's a higher abstraction over the code. It does make you think less about the code itself and more about the bigger picture, but you still need those skills to check the bot's output.
Many people believe this, and it couldn’t be more wrong. It’s like saying that a product manager can code, if their tickets are detailed enough to give a general vision of a piece of software.
Implementation still matters. Context still matters. Vibe-coded projects all follow the same pattern: each change is a thousand lines of code out, two thousand in. And there's a breaking point where reading and understanding these changes is not only impractical but counterproductive.
But then there's the bigger question of language expressivity and determinism: even if LLMs could achieve a certain consistency of outputs for given inputs, how do we make a natural language like English expressive enough, and more importantly unambiguous enough, to work like an actual programming language?
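To make the ambiguity point concrete, here's a minimal illustrative sketch (my own example, not from the thread): a single plain-English instruction like "remove duplicates from the list" admits at least two reasonable implementations that produce different results.

```python
def dedupe_unordered(items):
    # Reading 1: duplicates removed, original order not guaranteed.
    return list(set(items))

def dedupe_ordered(items):
    # Reading 2: duplicates removed, first-seen order preserved.
    return list(dict.fromkeys(items))

data = [3, 1, 3, 2, 1]
print(dedupe_ordered(data))             # [3, 1, 2]
print(sorted(dedupe_unordered(data)))   # [1, 2, 3]
```

A programming language forces you to pick one reading up front; English leaves the choice to whoever (or whatever) interprets the spec.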
Don't know why you got downvoted; you're 100% right. It's just another layer of abstraction, a super-high-level, non-deterministic one.
If natural languages were just another level of abstraction, we would already have a successful English like programming language.
I would say it doesn't count as a "5G language" if you have to understand and check the underlying "assembly code" it outputs every time you use it.
It seems there are a lot of people on Lemmy who dislike anything AI. I have no choice at work, so I have to make the best of it; I'm not leaving my job in this economy.
You will learn to like something because you’re being extorted to use it.
Sounds about right.
This is a recipe for SQL injections, race conditions, memory leaks, and keys being placed directly in code.
Trust the output of an LLM at your peril. Literally.
Well, I did say you still need to use your skills to check the bot's code.
Unless you're checking every line and have a comprehensive enough understanding of the codebase to spot the subtle bugs it introduces that aren't caught by your tests, you're still opening yourself up to problems.