SDL bans AI-written commits (github.com)
manoDev 1 days ago [-]
We’ll need “Organic software” seal of approval soon.
charlie90 22 hours ago [-]
That would be a negative signal for me personally. It shows the authors care more about process than results.
sph 16 hours ago [-]
Journey before destination. In both eastern and western philosophy, caring about the results and not the process is the recipe for unhappiness.

Not to be condescending, but everyone goes through this phase, then they grow up, it’s literally what separates the amateur from the master.

djhn 14 hours ago [-]
Which approach are you saying is a phase people grow out of?
sph 10 hours ago [-]
One is supposed to grow out of caring about the result. A master at their craft creates masterpieces because day in, day out they sit at their desk and create. When they're sad, they sit down and create. If you love the process, you keep at it; and doing something repeatedly is how you become a master.

If you want to go spiritual, there's karma yoga from the Bhagavad Gita: "You have a right to perform your prescribed duties, but you are not entitled to the fruits of your actions. Never consider yourself to be the cause of the results of your activities, nor be attached to inaction."

Did Leonardo work for fame and monies, or simply because he found massive enjoyment in it? What about Hemingway, or Einstein?

This might all sound like new age bullshit, but it's taken me literally 15 years of my life to understand this and grow out of chronic procrastination and dissatisfaction.

em-bee 5 hours ago [-]
the problem is that with AI code the results are either not verified by a human, or verifying them is more work than writing them from scratch.

i want all my software verified by a human; even an inexperienced human is more reliable than AI at this point. (this may change, but it hasn't yet)

krapp 21 hours ago [-]
The process is what creates the results.
hackable_sand 20 hours ago [-]
Process is the result
giancarlostoro 1 days ago [-]
We'll get right on it after we stop people from hacking computers forever.
dim13 23 hours ago [-]
Had same idea some time ago: https://imgur.com/a/11StYkd ;)
sph 16 hours ago [-]
Did you really have to use AI to create this?
a34729t 17 hours ago [-]
Do we need a campaign for real humans? Because I wouldn't be opposed to that!
registeredcorn 1 days ago [-]
Not really. The opposite is far, far more desirable in my eyes.

Example:

* Do I care if an LLM was used to determine the volume of my doorbell? Not particularly.

* Do I care if an LLM was used to generate code to unlock my front door remotely? Absolutely!

I need a warning label cautioning me about the risks of generative material. I don't care in the slightest when it isn't present, because the risks there are inherently lesser.

Batteries, not chicken breasts.

em-bee 5 hours ago [-]
how do you know the door volume code hasn't somehow touched the unlocking code?
aspenmartin 22 hours ago [-]
You sure the door lock companies are hiring the best and brightest engineers? Not clear to me an LLM is not attractive in that scenario.
djhn 13 hours ago [-]
My mistrust of digital locks isn’t based on negligence from the reputable(?) manufacturers (Abloy? Reputation is in the eye of the beholder).

It’s who else has access: property and facility management, maintenance, etc. In the age of physical keys, I trusted these SMBs to be relatively capable, let’s say 7/10, in protecting those keys from most local would-be criminals and opportunists. That goes down to 2/10 for protecting digital assets, like remote unlock capabilities, from cybercrime.

As soon as there is a viable market connecting cybercriminals with local criminals, whether it’s vertically integrated organised crime or something like carding forums, physical access exploitation is bound to become a problem.

whateveracct 1 days ago [-]
"is this library natty?"
LocalH 1 days ago [-]
Can we implant an upgraded 10NES chip inside every human at birth so that they can handshake to prove that they're human? /s
skybrian 24 hours ago [-]
Since using AI costs money, some way of contributing AI patches when asked might make sense here? Let the project maintainers decide what’s worth attempting to solve with AI.

Suppose there were a website that helped would-be contributors of AI assistance to match up with projects that want help?

throw5 1 days ago [-]
Why are these projects still on Github? Isn't it better to move away from Github than go through all these shenanigans? This AI slop-spam nonsense isn't going to stop. Github is no longer the "social network" for software dev. It's just a vehicle to shove more and more Copilot stuff.

The userbase is also changing. There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to. They just spin up their favorite LLM and make a PR out of whatever slop comes out.

At this point why not move to something like Codeberg? It's based in Europe. It's run by a non-profit. Good chance it won't suffer from the same fate a greedy corporate owned platform would suffer?

raincole 1 days ago [-]
> It's based in Europe. It's run by a non-profit

The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your EU = automatically good attitude.

But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.

embedding-shape 1 days ago [-]
> But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.

History might prove me wrong on this one, but I really believe that the platforms that are pushing people to use LLMs as much as possible for everything (Microsoft-GitHub) will surely be more flooded by AI bots than the platforms that focus on just hosting code (Codeberg).

throw5 1 days ago [-]
> The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your EU = automatically good attitude.

I'm not sure how one follows from the other. I am paid by a US for-profit company, but I still think the EU has done some things better. People's beliefs are not determined by the company they work for. It would be a very sad world if people couldn't think outside the bubble of their employers.

kdhaskjdhadjk 1 days ago [-]
In a "existential war" type situation, people who don't wave the flag and shout the slogans of their "home" country and have known sympathies for other places (any at all) will automatically be suspect, and their names will end up in a database for later use.

You can be assured that the leanings of Valve are always going to be USA, USA, USA, for reasons that will be clear when you follow the chain of ownership to its source.

hurricanepootis 1 days ago [-]
Pretty sure Gabe's been partying it up in New Zealand ever since he got stuck there because of Covid
jamesfinlayson 21 hours ago [-]
And he recently said that he's effectively retired.
kdhaskjdhadjk 1 days ago [-]
1) Gabe's a front man. He doesn't run Valve.

2) New Zealand is a favorite place for Western apparatchiks to build their bunkers. They don't move there out of a love for Kiwi culture and desire to integrate with the locals. Much like their interest in Wyoming/Montana also; they see a place they like, and they go take it over and drive out/murder whoever was there before.

hurricanepootis 1 days ago [-]
Gabe may be the front man, but he's still the benevolent dictator for life of Valve. Kind of like how Linus Torvalds is the BDFL of Linux
ahartmetz 1 hours ago [-]
Not to mention that it's a private company and he personally owns it (to the best of my knowledge).
anymouse123456 1 days ago [-]
> There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to.

The Eternal September eventually comes for us all.

fuhsnn 1 days ago [-]
TinyCC's mob branch on repo.or.cz just got trolled with AI commits today. Nowhere is safe it seems.
MiiMe19 1 days ago [-]
How does something being based in Europe actually help anyone?
embedding-shape 1 days ago [-]
> Why are these projects still on Github?

At this point, projects are on GitHub either due to inertia, or because they're chasing vanity metrics together with everyone else there doing the same.

Since the advent of the "README-profiles" many started using with badges/metrics, it's been painfully obvious how large this group of people is: everything is about getting more stars, merging more PRs and having more visits to your website, rather than the code and project itself.

These same people put their project on GitHub because the "value" they want is quite literally "GitHub Stars" and more followers. It's basically a platform they hope to get discovered through.

Besides Codeberg, hosting your own git server (via Forgejo or Gitea) is relatively easy and lets you keep things as private or public as you want.

duskdozer 1 days ago [-]
>Besides Codeberg, hosting your own git server (via Forgejo or Gitea) is relatively easy and lets you keep things as private or public as you want.

From what I've seen, there's a lot of git=GitHub conflation going on. For a while it wasn't even clear to me that you don't need a "git server" at all and could just use a file path or SSH location, for example.

level09 23 hours ago [-]
I would judge commits by what they do, not by who wrote them.
em-bee 5 hours ago [-]
you judge commits by a junior developer that you don't know well the same as commits by an experienced colleague that you have been working with for years?

your AI coder is worse than a junior developer, because junior devs may write bad code but generally they won't write code that they don't understand. AI on the other hand has no clue what it is writing.

sph 1 days ago [-]
Good move, and a good reminder of how much of an echo chamber Hacker News is on AI matters.

In here, and big tech at large, it's touted as the unavoidable future: either you adapt or you die. LLMs are always a few months away from the (u|dys)topia of never having to write code ever again. Elsewhere, especially in fields where craft and artistry are valued (e.g. game development), AI is synonymous with cutting corners, poor quality, and, to put it simply, slop. Sure, we're now inundated by people with a Claude subscription and a dream hoping to create the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.

Personally, I have made my existential worries a little better by pivoting away from big tech, where the only metric is lines of code committed per day, and moving towards those fields where human craftsmanship is still king.

fnimick 1 days ago [-]
And who knows how much of that "unavoidable future" "adapt or die" rhetoric is driven by motivated actors using LLM tools to shape the conversation?
duskdozer 1 days ago [-]
The incentives are clearly that way. Otherwise, why would random people care if other developers fell hopelessly behind? It would only increase the high status of the AI experts.
LLMCodeAuditor 1 days ago [-]
FWIW I do think most of it is "grassroots," ordinary rank-and-file STEM workers adopting zero-sum industrialist mindsets. And speaking personally, the psychology works the same way for both sides of the AI debate:

- I have refused to use LLMs since 2023, when I caught ChatGPT stealing 200 lines of my own 2019-era F#. So in 2026 I have some anxiety that I need to practice AI-assisted development or else Be Left Behind. This makes me especially cross and uncharitable when speaking with AI boosters.

- Instead of LLMs I have tripled-down on improving my own code quality and CS fundamentals. I imagine a lot of AI boosters are somewhat anxious that LLM skills will become dime-a-dozen in a few years, and people whose organic brains actually understand computers will be highly in-demand. So they probably have the same thing going on as me - "nuh uh you're wrong and stupid."

I hope it's clear I'm trying to be charitable!

tkel 1 days ago [-]
Curious , what have you pivoted towards? A different field?
sph 1 days ago [-]
Game development, and writing small tools in the game dev space. This week I've been working on an image editing app, mostly to play with dithering algorithms and palettes, using Odin and SDL.

I mean, it's either that or I quit software development completely; it would be a shame to throw away two decades of experience in the field.

ryandvm 1 days ago [-]
I don't know. For as long as I can remember, game dev has had the reputation of being the most sweat-shoppish of all the software engineering disciplines. I have a hard time believing that game devs aren't also going to find themselves being crushed under the CTO imperative to "use AI or else" like the rest of us.
sph 1 days ago [-]
Ok I should’ve said indie/solo game dev
em-bee 5 hours ago [-]
does that pay?

you could have chosen indie/solo dev in general. solo game dev in my understanding is very hard to make a living in.

quikoa 1 days ago [-]
I'm interested in tools (or blog posts about this) for image editing apps. Would you mind sharing what you've build?
sph 1 days ago [-]
Nothing ready to ship just yet; I was thinking of building an image editing app that simply focuses on transformations — imagine Photoshop, without the editing part. Instead of having layers, you have a series of transformations you can tweak visually and then export to be reused and applied in batch later.

The itch I want to scratch is that I'm on Linux, and our native image editing apps are very clunky, or you have to spend a weekend every time reacquainting yourself with ImageMagick.

The other project in the back of my head is a font repository, manager and downloader for Linux. It's an unserved niche, and there is no popular central repository of fonts, even though a large majority of them are released with permissive licenses. I just want to be able to do `font-app install Inter Iosevka "IBM Plex"` and have them appear under ~/.local/share/fonts
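The transformation-pipeline idea sph describes (record a series of tweakable steps instead of editing pixels destructively, then export the recipe and re-apply it in batch) can be sketched roughly like this in Python; the operation names and JSON format here are made up for illustration, not taken from the actual app:

```python
import json

# A non-destructive pipeline: pixels are never edited in place; instead we
# record a list of named transformations with parameters. All names and the
# JSON format are hypothetical, for illustration only.
TRANSFORMS = {
    "brightness": lambda px, p: min(255, int(px * p["factor"])),
    "threshold": lambda px, p: 255 if px >= p["cutoff"] else 0,
}

def apply_pipeline(pixels, pipeline):
    """Apply each recorded step, in order, to a list of grayscale pixels."""
    for step in pipeline:
        fn = TRANSFORMS[step["op"]]
        pixels = [fn(px, step["params"]) for px in pixels]
    return pixels

# The pipeline itself is plain data, so it can be exported and later
# re-applied to a whole directory of images in batch.
pipeline = [
    {"op": "brightness", "params": {"factor": 1.5}},
    {"op": "threshold", "params": {"cutoff": 128}},
]
saved = json.dumps(pipeline)  # export the recipe for batch reuse

print(apply_pipeline([40, 100, 200], json.loads(saved)))  # [0, 255, 255]
```

Because each step is a value rather than a destructive edit, tweaking one parameter only means re-running the pipeline, which is what makes the "Photoshop without the editing part" framing work.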

quikoa 1 days ago [-]
Alright, if you do build something I hope you share it here. I'm always looking forward to any image editing/processing apps or techniques.
PeterStuer 1 days ago [-]
"AI is synonym of wanting to cut corners, poor quality, and to put it simply, slop"

A craftsman knows how to use his tools. With AI you can produce complete, polished, maintainable, tested, secure, performant, high-quality code.

It does take planning and lots of work on your part, but there is a high payoff.

So many people just dump a one paragraph brainfart into a prompt and then label the AI "slop".

Slop in, slop out. Play silly games, win stupid prizes. Don't blame your tools. Sometimes, you are 'holding it wrong'.

em-bee 5 hours ago [-]
> It does take planning and lots of work on your part, but there is a high payoff.

less hard work than writing code myself? a higher payoff than the satisfaction of having written code myself?

i want to be a coder, not a prompt manager. (not sure i want to call that engineer)

qsera 6 hours ago [-]
>people just dump a one paragraph

So how much is enough?

JKCalhoun 1 days ago [-]
I'm not sure.

I think it likely that a typical HN'er [1] has actually used an LLM in coding and if they sound like they are proposing that LLMs in coding are inevitable ("the unavoidable future") it may well be from an informed, personal experience.

(Of course there's no reason not to believe that those pushing back against LLM-Assisted-Coding are also doing so from personal experience. Me, I am on "Team-LLMAC".)

[1] Never used that term before, not sure I like it.

palmotea 1 days ago [-]
> Good move, and a good reminder of how much of an echo chamber Hacker News is on AI matters. In here, and big tech at large, it's touted like the unavoidable future that either you adapt or you die.

When you look across all software development, I think this kind of AI contribution ban is probably the exception, because open source maintainers can have standards and the ability to enforce them.

Corporate America is enraptured by an even dumber and less thoughtful version of the HN echo chamber.

> Elsewhere, especially in fields where craft and artistry are valued (i.e. game development), AI is synonym of wanting to cut corners, poor quality, and to put it simply, slop. Sure, we're now inundated from people with a Claude subscription and a dream hoping to create the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.

Are you talking about indie games? Because I could see that having a similar dynamic to open source. I would think a big studio would be similar to any other corporate America office.

luxuryballs 1 days ago [-]
I don’t use public repos very often, but I had toyed with the idea of creating a git user specifically for an agent to use for this purpose, so it would not be my user account. Is this not standard practice already? It kinda seems obvious to me, so people can tell which parts of my public project were commits managed by an agent.
jmalicki 1 days ago [-]
I do this so that AI can only have limited GitHub permissions. It can't merge, doesn't have admin rights, etc.

This after I started catching it commit directly to upstream main without PRs among other things.
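For what it's worth, the separate-identity setup described above can be sketched with per-repository git config; the account name and email below are hypothetical placeholders, not real accounts:

```python
import subprocess

def configure_agent_identity(repo_path: str) -> None:
    """Give agent-driven commits their own author identity, scoped to one
    repository (no --global), so `git log` distinguishes them from commits
    you authored yourself. Name/email here are hypothetical."""
    for key, value in [
        ("user.name", "example-agent-bot"),
        ("user.email", "agent-bot@example.invalid"),
    ]:
        subprocess.run(
            ["git", "-C", repo_path, "config", key, value],
            check=True,
        )
```

Restricting what the bot account may actually do on GitHub (no merge, no admin) is then a matter of the repository's collaborator permissions, not of git itself.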

em-bee 5 hours ago [-]
that makes a lot of sense. unfortunately github doesn't allow multiple accounts per person. at least it didn't last time i checked. i hope they change their policy for AI agents though.
jmalicki 1 hours ago [-]
It does!

"You must be a human to create an Account. Accounts registered by "bots" or other automated methods are not permitted. We do permit machine accounts: A machine account is an Account set up by an individual human who accepts the Terms on behalf of the Account, provides a valid email address, and is responsible for its actions. A machine account is used exclusively for performing automated tasks. Multiple users may direct the actions of a machine account, but the owner of the Account is ultimately responsible for the machine's actions. You may maintain no more than one free machine account in addition to your free Personal Account. One person or legal entity may maintain no more than one free Account (if you choose to control a machine account as well, that's fine, but it can only be used for running a machine)."

https://docs.github.com/en/site-policy/github-terms/github-t...

SuperV1234 20 hours ago [-]
Incredibly dumb and unenforceable policy. What matters is human review and correctness.

You're never going to be able to prove that a contributor didn't ask an LLM to help them make some changes, or review/optimize changes that were made.

Capable people who like to get stuff done will use LLMs, review their work carefully, and never disclose it. And you'll never be able to tell.

People who generated slop PRs won't even read your policy before submitting a slop PR.

em-bee 5 hours ago [-]
if a commit is so good that i can't tell, well, ok. if the committer hides that they used AI, same.

the policy allows me to reject the things i know are done with AI and it allows me to punish (ban) devs who lie to me when i find out. without a policy i have no argument.

spicyusername 1 days ago [-]
On the one hand open source projects are going to be overrun with AI code that no one reviewed.

On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.

So many processes are no longer sufficient to manage a world where thousands of lines of working code are easy to conjure out of thin air. Already strained open source review processes are definitely one.

I get wanting to blanket reject AI generated code, but the reality is that no one's going to be able to tell what's what in many cases. Something like a more thorough review process for onboarding trusted contributors, or some other method of cutting down on the volume of review, is probably going to be needed.

xxs 1 days ago [-]
>reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code

That depends on the 'regular old code', but most stuff I have seen doesn't come close to 'maintainable'. The amount of cruft is substantial.

yarn_ 1 days ago [-]
Another good example of "the people writing good code with AI are the people who could have done it regardless"
simiones 1 days ago [-]
A policy like this has two points. One, to give good faith potential contributors a guideline on what the project expects. Two, to help reviewers have a clear policy they can point to to reject AI slop PRs, without feeling bad or getting into conflicts about minutiae of the code.
LLMCodeAuditor 1 days ago [-]
Right, "good faith" is a key idea that is being ignored. If you want to lie to the lead SDL maintainers and claim your code is 100% human-written, you can probably get away with it. But that is unethical and cynical behavior in pursuit of an astonishingly petty goal. And it's correct for SDL to simply ignore the contribution because it came from a dishonest developer, even if the specific code appears to be very good.
bakugo 1 days ago [-]
> On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.

I have yet to see a single example of this. The way you make AI generated code good and maintainable is by rewriting it yourself.

llmssuck 1 days ago [-]
I know it's unpopular to say (here), but I see it all the time. I myself sometimes cannot tell what I wrote from what the agent wrote. Often the only giveaway is a physical memory of typing it, but that's it. (I also saw a lot of garbage, to be fair.)

There is quite a bit of skill to it, however. You cannot just take an AI from blank to "good code" without doing work. Yes, it takes work and quite a bit of it. By this I mean you have to write a good code style guide and a proper explanation of your architectural style(s), your preferences, your goals, plenty of examples, etc. Proper thought has to be put into this.

If you come across bad code, you need to investigate, not castigate: why did this happen? How can we prevent this in the future? Those sorts of processes need to become second nature. They should be already, because it's not that much different from managing a bunch of humans.

Humans come with lots of implicit knowledge, and you also select them to match your company's style when you're hiring them. When they sit down at their keyboards, you (and society) have already guided them towards a desirable path. (And even then they often still misfire.)

AI agents operate differently. Their range of expression is completely alien to us. We cannot be both von Neumanns and complete morons; LLMs have no problem there. It takes a good while to get used to that.

bheadmaster 1 days ago [-]
> On the other hand, code produced with AI and reviewed by humans can be perfectly good and indistinguishable from regular old code.

Obligatory xkcd:

https://xkcd.com/810/

juped 1 days ago [-]
While this is a perfectly fine policy in the space of possible policies (it's probably what I'd pick, for what it's worth) the arguments being given for it leave a bad taste in my mouth.
or_am_i 1 days ago [-]
Same. Plenty of perfectly valid reasons to outright ban generated PRs, but "Look, I asked ChatGPT to generate a PR which would break SDL, and it did not bother reading AGENTS.md" is a pretty weak take - gotta know thy enemy a little bit better than that.
raincole 1 days ago [-]
It's not the argument the maintainer gives. I unironically suggest at least using AI to summarize that thread if you can't be bothered to read it before commenting.
duskdozer 1 days ago [-]
That seemed like just a curiosity after they already decided on the policy.
ratrace 1 days ago [-]
[dead]
pelasaco 1 days ago [-]
What’s the point? People will just fork it and improve it with AI anyway. On the other hand, it would be an interesting experiment to watch how the original and the fork diverge over time, especially in terms of security discoveries and feature development.
sph 1 days ago [-]
Go ahead, we're all still waiting for these "AI-improved" projects to appear.

Meanwhile I'll keep using SDL from the official maintainers, who have been working on it for decades.

pelasaco 1 days ago [-]
> Meanwhile I'll keep using SDL from the official maintainers, who have been working on it for decades.

That's just virtue signaling.

"AI-improved" projects like "rewrite $FOO in rust" are popping up everywhere. I dont support it, sqlite3 being rewritten in rust makes me just sad https://turso.tech/blog/introducing-limbo-a-complete-rewrite..., but this "$PROJECT bans AI" is just ridiculous. Ideally we should try to use it for the good, instead of ban it.

xxs 1 days ago [-]
> "$PROJECT bans AI" is just ridiculous

why so? If they don't feel like reviewing that code (or ensuring copyright compliance), they are free to reject it.

If you feel strong about it, go fork and maintain it on your own.

orwin 1 days ago [-]
I think you don't understand how tiring it is to review full-llm code. I think banning it temporarily until people calm down with AI-generated PRs is a very sane solution. If it is still the solution in 3 years, maybe you would have a point then.

I only manage 3 'new' hires, and I am of a mind to ban AI usage myself despite my own heavy usage (the new hires don't level up; that's my main issue now, but the reviewing loops and the shit that got through our reviews are also issues).

ratrace 1 days ago [-]
[dead]
LLMCodeAuditor 1 days ago [-]
I am not sad about rewriting sqlite in Rust because this is the third such attempt I've seen, and just like the other two it looks like this project is totally doomed: https://github.com/tursodatabase/turso/

Like, look: https://github.com/tursodatabase/turso/issues/6412 It's stunning considering this project is advertised as a beta. There are hundreds of bugs like this. It's AI slop that gets worse the more AI is thrown at it.

SDL is 100% correct to keep this AI mess as far away from their project as possible.

raincole 1 days ago [-]
I'm pretty pro-AI, but I find it very amusing that every single time an open source project enacts no-AI policy, someone will chime in and explain how it will be outcompeted by the yes-AI version, while in reality it never happens.
pelasaco 1 days ago [-]
> while in reality it never happens.

it never happens in 3 weeks? The AI revolution is just starting.. too soon to jump to conclusions, I guess?

ethin 1 days ago [-]
Huh? I've been seeing the "hopelessly doomed because of AI" trope practically since ChatGPT came out. It wasn't even remotely as bad as it is now, but it's been there all along.
skydhash 1 days ago [-]
Make it 2 or more years. That’s how long I’ve been seeing comments equating not using AI with a hopelessly doomed project/career.
pelasaco 1 days ago [-]
I am sure you noticed how fast the things started to change since the beginning of 2026 right? In terms of tooling, model, context, pricing, etc?
thunderfork 1 days ago [-]
This is also something we've all been hearing for ages. "<Model version>/MCP/agents/yadda yadda are totally unlike anything that's come before!"
pelasaco 1 days ago [-]
> "<Model version>/MCP/agents/yadda yadda are totally like anything that's come before!"

and they are right. We never saw that before. That's why we all fear it.

ethin 1 days ago [-]
> and they are right. We never saw that before. That's why we all fear it.

Please please please tell me this is sarcasm. Because if you are serious, I think a lot of people have a long list of bridges to sell you.

arnvald 1 days ago [-]
Will they? Will someone have enough time, skill and dedication to maintain it? I don’t think using AI will by itself make a big enough difference; it’s still a lot of work to maintain a project
pelasaco 1 days ago [-]
> I don’t think using AI will by itself make a big enough difference, it’s still a lot of work to maintain a project

I think you are wrong. The "a lot of work maintaining a project" would be reduced, especially issue investigation, code improvement, and security issue detection and fixes. SDL isn't that relevant a project, but "ban AI-written commits" - which, reading the issue, sounds more like banning "AI usage" - is counterproductive for the project.

spookie 14 hours ago [-]
> SDL isn't that relevant a project

Unreal 5 uses SDL to be able to create "windows" in a cross-platform manner (a specific use case, but not just a thing on Linux [1]). Many others do as well.

[1] https://dev.epicgames.com/documentation/unreal-engine/updati...

skydhash 1 days ago [-]
> SDL isn't that relevant a project,

SDL is kinda the king of “I want graphics, but not enough to bring in a whole toolkit, or suffer with OpenGL”. I have a small digital audio player (Shanling M0) where the whole interface is built with SDL.

krapp 4 hours ago [-]
>SDL isn't that relevant a project

Many, many things use SDL. It's one of those bottom pieces in the Jenga tower of infrastructure dependency[0].

Not maintained by some random person (that would be Sean Barrett's stb library) but still, it seems irrelevant only because it's already ubiquitous.

[0]https://xkcd.com/2347/

nottorp 1 days ago [-]
> and improve it with AI anyway

No. My impression is that most AI PRs aren't made to improve anything, but to inflate the requester's reputation as an "AI" expert.

> and feature development

There's also this misconception that more features == better...

pelasaco 1 days ago [-]
there is no misconception here. Reduced time for bug fixes, issue triage and feature implementation is a thing.
nottorp 1 days ago [-]
The misconception is that new features are always necessary, not that it would be nice if they were done faster.
ChrisRR 1 days ago [-]
If people want to fork it and work in their own manner then that's fine, but that doesn't mean you shouldn't protect the project that you're personally working on
signa11 1 days ago [-]
don't mind if you do 'guv, don't mind at all.
democracy 1 days ago [-]
tbh if the change works and the code is OK, who cares what was used to build it - ChatGPT or a C++ code generator? If the code looks like crap, reject the PR. Why the drama?
orwin 1 days ago [-]
Because to decide if it's crap, you still have to read it. And because AI respects coding guidelines, you have to actually understand what the code does to detect crap. Also, the sheer volume is unmanageable.
democracy 18 hours ago [-]
Oh no, reading the code - so before the AI era no one was reading the code? Unless you have automated checks done by AI to red-flag AI submissions... what else can you do? Ask 100 times not to submit AI code, or click a checkbox, or add a really serious terms-and-conditions paragraph?
tapoxi 1 days ago [-]
In the Monkey Selfie case - https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput... - courts decided that copyright requires a human author and a human merely setting the conditions for a copyrighted work to appear is not enough.

This reasonably means AI contributions where a human merely guided the AI are not subject to copyright, and thus can't be covered by a project's license.

dtech 1 days ago [-]
That's quite a stretch, and untested in court.

At least a monkey is an unambiguous autonomous entity. A LLM is a - heck of a complicated - piece of software, and could very well be ruled a tool like any other

redwall_hp 1 days ago [-]
Tested all the way up to the Supreme Court, who declined to hear an appeal, so the precedent stands in the context of AI output.

https://www.reuters.com/legal/government/us-supreme-court-de...

It's still early, but this is absolutely going to be used as precedent in a software-related case, and it's going to lead to fun times with SOX/PCI-style compliance issues, where developers will have to attest that merges did not use AI so compliance can ensure repos don't pass a threshold where there's too much LLM code.

tapoxi 1 days ago [-]
I mean, aren't we all bragging about autonomous agents doing the coding for us? I don't see how that's remotely a stretch.

The legal question was: "did a human author the work?"

Sharlin 1 days ago [-]
From a less self-centered viewpoint there are plenty of reasons to be critical of LLMs and their use.
sscaryterry 1 days ago [-]
Stopping a flood with a tissue.
sscaryterry 1 days ago [-]
Don't understand the downvote. Policies like these are not truly enforceable. There are many, many unscrupulous humans out there who are more than willing to make any code they submit look like a human wrote it, even though an LLM created it.
duskdozer 1 days ago [-]
Maybe, but the fact that a restaurant owner probably can't enforce a rule against waiters spitting in the food isn't an argument that they should say it's ok to spit in the food.
cwillu 1 days ago [-]
Illusion of transparency: you think your analogy was clear, other people found it opaque and dismissive, and expended what they considered to be a similar level of effort to engage with it as was used in creating it.
sscaryterry 1 days ago [-]
Wow, slow clap.
thunderfork 1 days ago [-]
The purpose of rules is not limited to enforcement. This seems to be a common misconception in these threads.
ecopoesis 1 days ago [-]
What’s next? Are they going to forbid the use of IntelliSense? Maybe IDEs in general?

Why not just specify all contributions must be written with a steady hand and a strong magnet.

throwawayqqq11 1 days ago [-]
> What's next

To match your hyperbole: allowing monkeys on typewriters.

LLMs are neither IDEs nor random.

I am very sceptical about iterative AI deployment too. People pretend the success threshold is vibing something that gets widely used, but it's more than that. These one-shot solutions are not project maintenance. Answer yourself this: could LLMs do what the Linux kernel community did over the same time span? That would be a good measure of success and, if so, a strong argument to allow generated contributions.

grg0 20 hours ago [-]
Visual Studio programmer spotted.

They're going to force you to use vim. Better start learning those key bindings as soon as possible.

askI12 1 days ago [-]
What's next? Forbid cribbing from your neighbor in an exam? The audacity!

They simply don't want people like you, and they lose nothing by it.

ramon156 1 days ago [-]
> Given that the source of code generated by AI is unknown, we can't accept it under the Zlib license.

So what about SO code snippets? I'm not here to take a stance for AI, but this thread is leaning towards bias.

Address the elephant in the room: LLM-assisted PRs have a chance of being lower quality. People don't feel obligated to review generated code; writing it manually, you're more inclined to review what you're submitting.

I don't get why these conversations always target opinion, not facts. I totally agree about the ethics, the fact it's bound to get monopolized (unless GLM becomes SOTA soon), and the harm to the environment. That's my opinion though, and it shouldn't interfere with what others do. I don't scoff at people eating meat; let them be.

The issue is real, the solution is not.

johndough 1 days ago [-]
> So what about SO code snippets?

StackOverflow snippets are mostly licensed under CC BY-SA 3.0 or 4.0, so I'd wager that they are not allowed, either.

The SDL source code makes a few references to stackoverflow.com, but the only place I could find an exact copy was where the author explicitly licensed the code under a more permissive license: https://github.com/libsdl-org/SDL/blob/5bda0ccfb06ea56c1f15a...

Sharlin 1 days ago [-]
Most SO snippets likely aren't unique or creative enough to count as works. If a hundred programmers would each write essentially the same snippet to solve a problem, it's not copyrightable.
johndough 1 days ago [-]
I wouldn't be so sure about that. The famous "rangeCheck" function in the Google vs Oracle lawsuit was only 9 lines: https://news.ycombinator.com/item?id=11722514
cwillu 1 days ago [-]
And the judge in that case famously stated: “I couldn’t have told you the first thing about Java before this problem. I have done, and still do, a significant amount of programming in other languages. I’ve written blocks of code like rangeCheck a hundred times before. I could do it, you could do it. The idea that someone would copy that when they could do it themselves just as fast, it was an accident. There’s no way you could say that was speeding them along to the marketplace. You’re one of the best lawyers in America, how could you even make that kind of argument?”
shevy-java 1 days ago [-]
I don't think this can be used as a counter-argument.

Most SO contributions are dead simple, often just a link to the documentation or an extended example. I mean, just have a look at it.

Finding an SO entry comparable to the Google vs. Oracle example is, in my opinion, much, much harder. I have used SO a lot for snippets over the last 10 years, and most snippets are low quality. (Some are good though; SO still has its use cases, even though it has kind of aged out now.)

embedding-shape 1 days ago [-]
> Most SO snippets likely aren't unique or creative enough to count as works.

How is this different from LLM outputs? Literally trained on the output of N programmers so it can give you a snippet of code based on what it has seen.

sdJah18 1 days ago [-]
The "humans do it, too" or "humans have always done it" arguments break down very quickly.

Not only because of the scale of infringement, but because direct Stack Overflow snippets are very rare. For example, 95% of C++ snippets are code-cleverness monstrosities from which you can only learn a principle, not use the code directly.

I'd say that Stack Overflow snippets in well-maintained open source projects are practically zero. I've never seen an accepted PR that would even trigger that suspicion.

rzmmm 1 days ago [-]
[dead]
LLMCodeAuditor 1 days ago [-]
Most SO snippets that you might actually copy-paste aren’t copyrightable: it is a small snippet of fairly generic code intended to illustrate a general idea. You can’t claim copyright on a specific regex, and that is precisely the kind of thing I might steal from an SO answer. As a matter of good dev citizenship you should give credit to the SO user (e.g. a link in a comment) but it’s almost never a copyright issue. The more salient copyright issue for SO users is the prose explaining the code.
missingdays 1 days ago [-]
> I don't scoff at people eating meat, let them be.

Why not let the animals be?

crackez 1 days ago [-]
I'm just happy to be on the food chain at all...
reactordev 1 days ago [-]
People who can wield AI properly have no use for SDL at all. It’s a library for humans to figure out platform code. AI has no such limitations.
fhd2 1 days ago [-]
So AI generated code doesn't benefit from stable foundations maintained by third parties? Fascinating take I don't currently agree with. Whether it's AI or hand written, using solid pre-existing components and having as little custom code as possible is my personal approach to keep things maintainable.
miningape 1 days ago [-]
This is probably the most insane take I've read all year. As though LLMs don't have an increased chance to bork code when they have to write it multiple times for different platforms. Even LLM users benefit from the existence of libraries that handle cross-platform, low-level implementation details and expose high-level APIs.
canelonesdeverd 1 days ago [-]
10/10 parody, perfectly nailed the delusion.
reactordev 1 days ago [-]
gotta channel some of that Kai Lentit energy.
LLMCodeAuditor 1 days ago [-]
“Claude, please purchase a few USB steering wheel controllers from Amazon and make sure they work properly with our custom game engine. Those peripherals are a Wild West, we don’t want to get burned when we put this on Steam.”

>> ………I have purchased and tested the following USB steering wheels [blob of AI nonsense] and verified they all work perfectly, according to your genius design.

“Wow, that was fast! It would take a stoopid human 48 hours just to receive the shipment.”

[I would think Claude would recommend using SDL instead of running some janky homespun thing]

reactordev 1 days ago [-]
HID and XInput, you don’t need SDL for Steering Wheels.
jhasse 1 days ago [-]
You absolutely do need SDL, it's full of knowledge by humans from trial and error over years of using input devices in the real world.
thunderfork 1 days ago [-]
XInput is a pretty constrained interface that plenty of novel controllers, including steering wheels, don't/can't adhere to. Good luck getting the PS5 controller's fancy rumble working over XInput, for example.