Maybe we need a way to generate checksums during version creation (like file version history) and during test runs, submitted alongside the code as a sort of proof of work that an AI couldn't easily recreate. It would make code creation harder for actual developers too, but it might cut down on people quickly contributing whatever code the LLMs shit out.
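Something like a hash chain over work-in-progress snapshots could get you there. A minimal sketch (all names here are hypothetical, not an existing tool): each checkpoint folds the previous hash, the current file state, and the test output into a new digest, so fabricating a plausible history after the fact means redoing every intermediate step.

```python
import hashlib
import json
import time

def chain_checkpoint(prev_hash: str, file_contents: bytes, test_output: str) -> str:
    """Fold one snapshot into the running hash chain.

    Called on every save / test run; the resulting chain head is what
    gets submitted alongside the code as proof of work.
    """
    record = {
        "prev": prev_hash,                                          # links to prior checkpoint
        "file": hashlib.sha256(file_contents).hexdigest(),          # state of the code
        "tests": hashlib.sha256(test_output.encode()).hexdigest(),  # what the tests said
        "ts": time.time(),                                          # wall-clock spacing
    }
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
```

Because each digest depends on the previous one, you can't insert or reorder checkpoints retroactively; the timestamps also give a reviewer a rough sense of whether the work unfolded over hours or appeared all at once.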
A lightweight plugin that runs in your IDE, maybe. Any time you're writing and testing code, the plugin updates a validation file recording what you were doing and the results of your tests and debugging. You could then write an algorithm that assigns a confidence score to the validation file and either triggers manual review or auto-submits obviously bespoke code.
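The scoring could be simple heuristics over the event log. A rough sketch, assuming the plugin writes one JSON event per line (the event shapes, thresholds, and weights below are all invented for illustration):

```python
import json

# Hypothetical event log, one JSON object per line, e.g.:
#   {"kind": "edit", "chars_added": 14}
#   {"kind": "test_run", "passed": 3, "failed": 1}

def confidence_score(log_path: str) -> float:
    edits = test_runs = failures = big_pastes = 0
    with open(log_path) as f:
        for line in f:
            ev = json.loads(line)
            if ev["kind"] == "edit":
                edits += 1
                if ev.get("chars_added", 0) > 500:  # large block landing in one keystroke
                    big_pastes += 1
            elif ev["kind"] == "test_run":
                test_runs += 1
                failures += ev.get("failed", 0)
    # Heuristics: many small edits, repeated test runs, and *failed* runs
    # (i.e. actual debugging) all raise confidence that a human did the work;
    # huge one-shot pastes lower it.
    score = 0.0
    score += min(edits / 50, 1.0) * 0.4
    score += min(test_runs / 10, 1.0) * 0.3
    score += min(failures / 5, 1.0) * 0.3
    score -= min(big_pastes * 0.2, 0.5)
    return max(0.0, min(1.0, score))
```

Anything below some threshold gets kicked to manual review; high scorers pass straight through.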
This could, in theory, also be used by universities to validate submitted papers and weed out AI-written essays.