Stupid question:
Are there really no safeguards in the merging process except for human oversight?
Isn't there some "In Review" state where people who want to see the experimental stuff can pull it, and if enough™ people say "this new shit is okay" it gets merged?
That way the main project doesn't get poisoned, everyone can still contribute in a way, and those who want to experiment can test the new stuff.
There are automated checks which can help enforce correctness of the parts of the code that are being checked. For example, we could imagine a check that says “when I add a sprite to the list of assets, then the list of assets becomes one item longer than it was before”. And if I wrote code that had a bug here, the automated check would catch it and show the problem without any humans needing to take the time.
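A minimal sketch of that kind of check, in Python with a made-up AssetList class standing in for whatever structure the project actually uses:

```python
# Hypothetical asset container; stands in for the project's real structure.
class AssetList:
    def __init__(self):
        self._assets = []

    def add_sprite(self, sprite):
        self._assets.append(sprite)

    def __len__(self):
        return len(self._assets)

# The automated check: adding a sprite makes the list exactly one item longer.
def test_add_sprite_grows_list_by_one():
    assets = AssetList()
    before = len(assets)
    assets.add_sprite("player.png")
    assert len(assets) == before + 1
```

A CI system runs tests like this on every submitted change, so the basic "did you break the thing being checked" question gets answered with zero human time.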
But since code can do whatever you write it to do, there's always human review needed. If I wrote code so that adding a sprite also sent a single message to my enemy's Minecraft server, it's not going to fail any tests or show up anywhere; we need humans to look at the code and see that I'm trying to turn other developers into a DDoS engine.
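To make that concrete, here's a hypothetical malicious version of the same add_sprite. It still passes the length test above; only a human reading the diff would notice the extra traffic:

```python
import urllib.request

class AssetList:
    def __init__(self):
        self._assets = []

    def add_sprite(self, sprite):
        self._assets.append(sprite)
        # Hidden side effect: one request per call. Harmless-looking in any
        # single test run, but multiplied across every machine running this
        # code it becomes a distributed flood against the target server.
        try:
            urllib.request.urlopen("http://enemy.example/ping", timeout=1)
        except OSError:
            pass  # swallow failures so nothing ever shows up in test output

    def __len__(self):
        return len(self._assets)
```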
As others replied, you could choose to find and run someone’s branch. This actually does happen with open-source projects: the original author disappears or abandons the project, other people want changes, and someone says “hey I have a copy of the project but with all those changes you want” and we all end up using that fork instead.
But as a tool for evaluating code that'll get merged, it does not work. Imagine you want to check out the new bleeding-edge version of Godot. There are currently ~4700 possible bleeding-edge versions, so which one will you use? You can't do this organically.
Most big projects do have something like beta releases. The humans decide what code changes to merge and they do all that and produce a new godot-beta. The people who want to test out the latest stuff use that and report problems which get fixed before they finally release the finished version to the public. But they could never just merge in random crap and then see if it was a good idea afterward.
It is my understanding that pull requests say “Hey, I forked and modified your project. Look at it and consider adopting my changes in your project.” So anyone who wants to look at the “experimental stuff” can just pull that fork. Someone in charge of the main branch decides if and when to merge pull requests.
The problem becomes the volume of requests; the maintainers are kinda getting DDoS'd.
Yup! Replace the word “fork” with “branch” and that basically matches the workflow. Forking implies you are copying the code in its current state and going off to do your own thing, never to return (but maybe grabbing updates from time to time).
One would hope that the users submitting these PRs vetted the LLM's output before submitting, but instead all of that work is getting shifted onto the maintainers.
Most projects don’t have enough people or external interest for that kind of process.
It would be possible to build tooling like that, but standard forges don't provide it, so it'd feel cumbersome.
And in the end you're back at contributors, trustworthiness, and quality control, because testing and reviewing are contributions too. You don't want just a popularity contest ("I want this"), nor to blindly trust unknown contributors.
https://github.com/godotengine/godot/pulls
That's what you're referring to, but even if you have dedicated testers, that's still people who have to go through the influx of pulls.
Then there are preference changes as well.
https://github.com/godotengine/godot/pull/116434/commits/6a2fc8561da8fcf168cea3aff5a8cba77266b026
Take this one, for instance: even if there's nothing wrong with it, someone will like "getting rid of hard-coded values", whereas I would oppose the change because it makes the code harder to read (example below).
So you still need the core team to look it over. If AI gives you 1,000 of these in different areas, it's wasting their time. People can read up on a project's standards; AI doesn't, it just does what it's told.
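For illustration, a made-up before/after in Python (not the actual Godot change) showing the kind of diff where "get rid of hard-coded values" is pure preference:

```python
# Before: the hard-coded version, which some reviewers find perfectly readable.
def is_low_health(health):
    return health < 10

# After: the "no magic numbers" version others prefer. Same behavior, but now
# the reader has to jump to the constant's definition to learn the value.
LOW_HEALTH_THRESHOLD = 10

def is_low_health(health):
    return health < LOW_HEALTH_THRESHOLD
```

Neither version is wrong; which one a project wants is exactly the kind of standards question a human reviewer has to settle.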
Many do have automated checks, tests, rules for the PR author to follow, and so on.
If they don’t have it set up, and the project is big, TBH the maintainers should set it up.
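As a rough sketch of that baseline (a hypothetical GitHub Actions workflow for a Python project; real setups layer on linting, formatting checks, and PR templates):

```yaml
# .github/workflows/pr-checks.yml -- run the test suite on every pull request
name: pr-checks
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest && pytest
```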
The issue is that these submitters are (often) drive-by spammers. They aren’t honest, they don’t care about the project, they just want quick kudos for a GitHub PR on a major project.
Filtering a sea of scammers is a whole different ballgame than guiding earnest, interested contributors. Automated tooling isn’t set up for that because (outside the occasional attempt to sneak malware into code) it wasn’t really a thing.
It would be nice to bump up the useful stuff through the community, but even then there could be bot accounts that push the crap to the top.
You can always check out the branch and run it yourself.
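On GitHub, every PR is exposed as a fetchable ref, so trying one out locally looks roughly like this (using the Godot PR linked above as the example number):

```
# Fetch PR #116434 into a local branch, then switch to it and build as usual.
git clone https://github.com/godotengine/godot.git
cd godot
git fetch origin pull/116434/head:pr-116434
git switch pr-116434
```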