Should we allow questions on the forum about code created by ChatGPT or similar services?

I’d say nobody will complain if you formulated your post in your native language and had an LLM translate it to English. The point is: you wrote the content yourself and only had it translated. You are responsible for the content, and you might want to proofread the English text once to make sure nothing got mixed up. Just as your last sentence said.

This is exactly the problem. How would we filter here?
As I see it, there are three groups of stakeholders involved: the people asking questions, the community members who volunteer to share their knowledge, and the moderators who organize the topics.
Now, if the askers use an LLM, I do not mind for the most part. That only affects the problem formulation. If the formulation is plainly wrong, the askers should be able to see that the resulting answers may be completely off. As a result, they have an inherent self-interest in formulating the problem statement or task at hand correctly.
The volunteers in the community could easily use LLMs to simplify their lives. However, here lies the risk that people give wrong or untested information, as already mentioned.
The mods would then have to do their job: moderate the forums. While the number of questions does not increase significantly with LLM usage, the number of answers, as well as their sheer length in total, does. That means more time is required to read them and check the facts. It also becomes much harder to identify wrong information. In the past, you could often tell from the shape of an answer alone whether a volunteer was answering from experience or just leaving a snarky comment. With LLMs, this is no longer as easy to distinguish. So, in the worst case, malicious code could spread (intentionally or not), as many users copy and paste answers unread and unchecked.

So, I ask: who should do the filtering? The mods, who already have their hands full (and who are volunteers as well)? The inexperienced users asking the questions? The rest of the community, blaming each other?


I already shared my thoughts on this topic in a previous post here: Discussion about AI-generated content - #11 by bb77

That was in the context of normal forum posts, where I talk about the responsible use of AI, and @christianlupus has just described very well in his post what I mean by that.

But since this is the development section, and we are therefore also talking about code contributions and bug reports here, it’s even more important to use AI responsibly so as not to put an unnecessary burden on the maintainers and developers of FOSS projects.

Unfortunately, there is a trend of so-called “vibe coders” or “vibe security researchers” using LLMs to generate bug reports, so that the maintainers, instead of dealing with actual bugs and security issues, have to spend their time sorting out these nonsense reports.

See here for a prominent example: Curl takes action against time-wasting AI bug reports • The Register

Things like that are, of course, an absolute no-go, and I can completely understand the maintainers’ reaction and their desire to ban AI-generated content entirely, although I wouldn’t necessarily support a blanket ban myself.
