

I’m saying that code completion does not constitute AI and certainly isn’t LLMs.
I then provided an example of why that isn’t the case.
You decided to respond to this by pointing out that some LLM may be involved in some code completion. You didn't provide an example, though, so who knows if that's actually true. It also seems weird to use an LLM for code completion, since it's completely unnecessary and wildly inefficient, so I doubt it.
I just want to point this out for a minute, because it feels like you don't know it: code completion is basically autocomplete for programmers. It's doing basic string matching, so that if you type fnc it completes to function(). Hardly the stuff of AI.
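To make the point concrete, here's a minimal sketch of what that kind of completion amounts to: a lookup table plus prefix matching, no model anywhere. The snippet table and function name are made up for illustration, not taken from any real editor.

```python
# Hypothetical snippet table, like an editor's abbreviation expansions.
SNIPPETS = {
    "fnc": "function()",
    "ret": "return",
    "imp": "import",
}

def complete(typed: str) -> str:
    """Plain string matching: expand an abbreviation or match a prefix."""
    # Exact abbreviation hit: expand directly.
    if typed in SNIPPETS:
        return SNIPPETS[typed]
    # Otherwise fall back to simple prefix matching over known expansions.
    matches = [v for v in SNIPPETS.values() if v.startswith(typed)]
    return matches[0] if matches else typed

print(complete("fnc"))  # → function()
print(complete("fun"))  # → function()
```

That's the whole trick: dictionary lookups and `str.startswith`. Nothing here learns, predicts, or reasons.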



People who lived in the 1960s did not, by definition, live in the 21st century, so their definitions of what things may or may not be are immaterial.
We know what we mean by AI, and attempting to redefine it in the service of some kind of "all sides have a point" fence-sitting is a brainless argument and definitively unhelpful. Defining AI strictly as "a system that does a thing based on an input" is both overly broad and demonstrably unhelpful. It's like arguing that a building reduced to ash by a fire still contains the same constituent elements: intellectually correct, practically ridiculous.
Broadly, you are attempting to define AI as anything any computerised system does. How can you not see that this definition is so broad it skirts anything remotely close to the realm of helpfulness?