Should we allow questions on the forum about code created by ChatGPT or similar services?

I personally have difficulty with people who ask developers for help with code generated by ChatGPT that does not work.

I think developers should not be spending their valuable time answering questions caused by such “code”.

Of course, in theory ChatGPT could be an entry point for people who are new to developing apps, but I don’t believe that such a service is helpful for newcomers at this point in time. Instead, I would prefer to motivate newcomers to invest in their knowledge and understanding by starting with the tutorials, looking at existing apps for inspiration, and trying to code something themselves.

Should we allow questions on the forum about code created by ChatGPT or similar services?

What do you think?

4 Likes

Speaking as a mod… No, because the volunteer dev who responds is the only one who actually has to apply themselves to understanding what was generated.

If the intention is to get something coded, all this situation offers is a new way to ask developers for help while still being unable to provide any sort of guidance, assistance, or understanding of whatever ChatGPT generated. The original poster has no interest in understanding it… which only wastes even more volunteer time.

Unless one day the devs themselves start requesting AI-generated code; then it would be fine.

3 Likes

Since ChatGPT is still a long way from developing anything beyond code snippets, the question does not even arise for the time being.
You really have to bring a lot of your own knowledge with you to develop something with the support of ChatGPT. It is hardly possible without your own knowledge, as the AI quickly begins to hallucinate. It then pretends to know, yet it is just the famous stochastic parrot.

Actually, this is exactly what has happened. So, that is why the question is being asked more broadly. :heart:

2 Likes

OK, I see. Sorry, I was not aware of that.

In my opinion, someone who gets involved with ChatGPT should also finish the whole thing with ChatGPT and treat it as an aid for learning. As I have seen again and again, ChatGPT stubbornly heads in a particular direction, which very often is NOT the right one.

But the handling of AI, the correct questioning technique, must also be learned, which is why I find working with it useful, if only as a sparring partner, but not for ready-made “solutions”. And no, I would also not dig deep into questions about non-working code, unless I can see immediately where a wrong turn was made.

2 Likes

I do think that ChatGPT is a highly risky approach to the development of apps. There is no human being involved to check for crazy code or whatever. This means, in the worst case, that a badly trained AI model might introduce security issues.

So, I think that apps should, at least for the moment, be developed by humans. As a consequence, code generated by AI would not be supportable. I would, however, make an exception if the author uses it solely for learning purposes or as a starting point/proof of concept/example, shows personal interest in the solution, and just stumbles over their own feet.

As a side remark, I would even suggest putting some rules into the terms of service for sharing your code in the org and the app store: either no AI code should be part of it (also for license reasons), or at least there should be a flag in the manifest that tells whether AI was used to generate the code base. This could be shown as a symbol on the app store.

3 Likes

At the risk of being a party pooper, the question is pretty moot. People using ChatGPT or even GitHub Pilot (or is it Copilot?) to produce something that resembles code can do so without telling us that is how it was done. Most of the time we will be unable to distinguish between a bad (non-)programmer’s “code” and something that was generated by the bot(s). The only time the question of this thread is actually relevant is when the OP actually tells us they used a bot to produce the code, which will probably be pretty rare, overall.

I agree that we should not spend valuable time debugging code written by code. At least certainly not at this juncture. If a rookie developer wants to conceal that they have done this… maybe that’s a situation that needs some firm discouragement.

Considering how ChatGPT has had all sorts of problems, such as political discrimination, and Bing’s copy of ChatGPT went completely insane and was threatening users, I can’t imagine why someone would want to have it write code and publish it under their own name. Who knows what these things will write…

That will become exceedingly dangerous when they start doing it with proprietary code that can’t be reviewed.

1 Like

Yes, it will be hard to distinguish bad code from what an AI produced.

This is also the reason I suggested the per-app flag. It would make users aware that they are responsible, under their own name, for the code they provide unless the flag is set. Just a small psychological trick to raise awareness.

As the models evolve, the code they produce will get better but also more complex, I guess. Helping with it will then be more like remote debugging with a complete newcomer. So, at some point we will have to tell users to learn basic programming skills before approaching Nextcloud apps. We might need an encouraging way to do just that.

I am not a Nextcloud community developer, nor am I developing in PHP anymore. However, through my work I am heavily involved with highly skilled, high-quality developers, and here they use ChatGPT to get a skeleton, which they then break apart, because any code they write and implement has rails. Not to be confused with Ruby on Rails.

This means that for any code to ever become “official”, it needs to be structured in certain ways. It needs to be unit tested with security in mind. And it MUST be reviewed by another developer.

It is cool that anyone can develop an app, sign it with occ and have their own homebrewed app in their own installation. This should always stay that way. However, to make it to the official apps.nextcloud.com marketplace, the above should be the standard way. To avoid having all community developers running away screaming, this could at least be relaxed to:
“And it MUST be reviewed by another developer if a preliminary bot or code validation shows any deviations from this.”

A disclaimer on the development section of the community forum, for example: “Questions regarding the development of Nextcloud apps or contributions to core are welcome. However, questions regarding bot- or AI-generated code examples that do not conform to the Nextcloud coding standards and deployment scheme, …” Whatever.

This is perhaps not directly related, but something else to consider is that ChatGPT has a certain potential for plagiarism. Some types of queries have resulted in it spitting out web content verbatim, and I don’t know if it takes any licensing or attribution into account when it does this.

2 Likes

Hi @Daphne :slight_smile:

In my opinion, ChatGPT can be an assistant for a developer who doesn’t understand the official developer documentation, or when the documentation does not cover what the developer is looking for.

For example: in the official developer documentation, I cannot find how to list a user’s files and folders. How can I manipulate them? And so on.

Even ChatGPT couldn’t answer me on this subject.
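
(As an aside, the server does expose this through its Files API, so a rough idea of what such code could look like may help here. The sketch below assumes a made-up service class in a hypothetical OCA\MyApp namespace, with dependency injection set up as usual; the IRootFolder, getUserFolder() and getDirectoryListing() calls come from the OCP\Files interfaces, but please double-check them against the current developer documentation.)

```php
<?php

namespace OCA\MyApp\Service;

use OCP\Files\IRootFolder;

// Hypothetical service class, for illustration only.
class FileListingService {
	public function __construct(
		private IRootFolder $rootFolder,
	) {
	}

	/**
	 * List the names of the files and folders directly inside
	 * the given user's home folder.
	 *
	 * @return string[]
	 */
	public function listUserFiles(string $userId): array {
		// The root folder gives access to each user's file tree.
		$userFolder = $this->rootFolder->getUserFolder($userId);

		$names = [];
		foreach ($userFolder->getDirectoryListing() as $node) {
			// Each $node is an OCP\Files\File or OCP\Files\Folder;
			// the Node interface also offers move(), copy() and delete()
			// for manipulating the entries.
			$names[] = $node->getName();
		}

		return $names;
	}
}
```

From there, manipulating the entries mostly comes down to the methods on the returned nodes, which might be a starting point for the “How can I manipulate them?” part.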

2 Likes

If they have already tried to debug the code themselves, with the error log or error message, does that make sense?

IMO if you are asking questions about code, it is not the source of the code that matters but that you fully understand the code that you have added.

Asking questions about the source code of Nextcloud makes sense because you didn’t write it. If you wrote new code by hand and were asking for help with it, the community would assume that you at least understand the stuff you wrote, because otherwise, why would you have written it?

If you use AI generated code, it seems logical that you would just be expected to have the same knowledge of that generated code as you would if you had hand written it. And if not, do not ask for help until you do.

One additional consideration is whether the programmer owns the rights to the code they generate. If AI-generated code made its way into Nextcloud as a FOSS program, and if the AI-generator had some legal claim to the ownership of the code, you could run into a license conflict that could jeopardize Nextcloud.

4 Likes

And that’s ChatGPT’s answer: :wink:
Should we allow questions on the NEXTCLOUD forum of code created by ChatGPT or similar services?
As an AI language model, I don’t have a personal opinion. However, regarding your question, it would depend on the purpose and context of the NEXTCLOUD forum.

If the forum is designed for discussing and troubleshooting code related to Nextcloud or other relevant software, then questions related to code generated by AI language models such as ChatGPT may not be appropriate. However, if the forum is open to broader discussions about artificial intelligence or programming in general, then questions related to code created by ChatGPT or similar services may be allowed.

Ultimately, it would be up to the moderators of the NEXTCLOUD forum to determine the scope and guidelines for the types of questions and discussions allowed on the platform.

5 Likes

I would say no. ChatGPT is a service provided by OpenAI. If anyone has problems with that service, they should ask OpenAI. They make money with it, and I don’t see the point in providing free support for their products.

Furthermore, generated code is just that: generated code. If someone does not even know how to create code for a specific purpose on their own, they will never understand the generated code either, and it does not matter whether the code generator is ChatGPT or some IDE that offers “wizards” to create code for specific tasks.

I reckon you cannot answer the question that easily.
First, we should appreciate that someone is trying to contribute to the community-driven Nextcloud suite despite lacking basic programming skills. They try to overcome this with the help of ChatGPT or other tools like GH Copilot.
In that case, help goes in another direction, as previously mentioned: prompt engineering.
I am not sure how to handle that, but it is something we should deal with, as AI help on such things will become more and more common, so it is better to try to grow with it from day one rather than be driven by it.

Just a general update: I have asked the code of conduct team to formulate a statement about the use of AI-related technologies, using the input of all of you in this thread.

Some people mentioned to me that when they help someone with their code, they want that person to grow and learn from mistakes, and they want that person to be willing to learn; these dynamics are different if the person is merely acting as a human intermediary for the bot. In other open source projects we have now also seen the rise of BS PRs generated by ChatGPT. On the other hand, it was also raised that it can be a new entry point for contributors.

I guess in the end it is a matter of respectful behaviour and taking responsibility as a contributor for the overall community health: not spamming, but trying to be useful for the overall good.

Thanks everyone for the insightful discussion here and for taking the time to share your thoughts with me.

4 Likes

Exactly, you are 100% right :smiley:

1 Like