Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
In other words: please help us, use our AI.
This is one of the cases where AI is worse. LLMs will generate the tests based on how the code actually works, not how it is supposed to work. Granted, lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they are doing at some point.
I’d be interested in what you mean by this. Isn’t every unit test just freezing the result? A method is an algorithm: for certain inputs you expect certain outputs, so you unit test those inputs against their matching outputs, and you add coverage for edge cases because it’s cheap to do with unit tests. These tests “freeze the results”, or rather lock them in, so you know that piece of code always works as expected; it’s “frozen/locked in”.
It is pretty common to write unit tests for functionality that doesn’t exist (test driven development). It gets you to think about, and test, everything that needs to exist in the program before writing the program. This approach doesn’t always work, particularly in large code bases where you need to learn the structure of a module before you can even think about design.
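Rough sketch of the test-first flow, using a made-up `slugify` function (the name and behavior here are just for illustration) that doesn’t exist yet when the test is written:

```python
import re

# Step 1: write the test first. At this point slugify doesn't exist,
# so the test fails -- which is the point: it pins down the intended
# behavior before any implementation is written.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already  spaced  ") == "already-spaced"

# Step 2: write just enough implementation to make the test pass.
def slugify(text):
    # Lowercase, collapse runs of non-alphanumerics into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```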
‘Freezing the results’ is ok too, as long as you know the results are currently correct. The AI has no way of knowing this and poor programmers often don’t verify either.
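For example, here’s a made-up `word_count` with a bug in it. A “freeze the results” test happily locks the bug in as expected behavior, while a spec-based test catches it:

```python
def word_count(s):
    # Hypothetical buggy implementation: splits on single spaces,
    # so consecutive spaces produce empty "words".
    return len(s.split(" "))

# "Freeze the results" test: asserts whatever the code does today,
# so the bug gets frozen in as correct.
def test_word_count_frozen():
    assert word_count("a  b") == 3  # passes, but 3 is wrong

# Spec-based test: asserts what the function is supposed to do.
def test_word_count_spec():
    assert word_count("a  b") == 2  # fails until the bug is fixed
```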
It is very easy to write a shit test.
Oh yeah, for sure, I have seen tests that are pretty useless. The way I do it is I write the first one or two tests, then instruct Copilot to follow the patterns, and it does well. Of course I have to double-check it, but reading is easier than having to write it.
You could have it write unit tests as black box tests, where you only give it access to the function signature. Though even then, it still needs to understand what the test results should be, which will vary from case to case.
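Something like this, with a made-up `clamp` function where the test author (human or LLM) only sees the signature and docstring, never the body:

```python
# All the test author sees is the contract, not the implementation:
#   clamp(value: float, lo: float, hi: float) -> float
#   "Return value limited to the inclusive range [lo, hi]."

def clamp(value, lo, hi):
    return max(lo, min(value, hi))

# The cases below follow from the signature alone, but the expected
# values still require understanding what the function *should* do.
def test_clamp_black_box():
    assert clamp(5, 0, 10) == 5    # inside the range: unchanged
    assert clamp(-3, 0, 10) == 0   # below: pinned to lo
    assert clamp(42, 0, 10) == 10  # above: pinned to hi
```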
I think machine learning has vast potential in this area, specifically for things like running iterative tests in a laboratory or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don’t need to speak and understand bleep bloop and can tell it what I want in plain language. But beyond that, an LLM seems useless to me outside of fancy search uses. It should be the initial processing layer that figures out what type of actual AI (ML) to utilize to accomplish the task. I just want an automator that I can direct in plain language; why is that not what’s happening? I know that I don’t know enough to have an opinion, but I do anyway!
You can tell it to generate the tests based on how it’s supposed to work, you know.