I'd say nobody will complain if you formulate in your native language and have an LLM translate it to English. The point is: you wrote the content and only delegated the translation. You are still responsible for the content, and you might want to proofread the English text once to be sure nothing got mixed up. Just as your last sentence said.
This is exactly the problem. How would we filter here?
As I see it, we have three groups of stakeholders involved: the askers of questions, the community members who volunteer to share their knowledge, and the moderators who organize the topics etc.
Now, if the askers use an LLM, I do not mind for the most part. This is just the problem formulation. If the formulation is plainly wrong, the askers should be logical enough to understand that the resulting answers may be completely off. As a result, they have an inherent self-interest in formulating the problem statement/task at hand correctly.
The volunteers in the community could easily use LLMs to simplify their lives. However, here lies the risk that people tend to give wrong or untested information, as already mentioned.
The mods would now have to do their job: moderate the forums. While the number of questions is not increasing significantly with LLM usage, the number of answers, as well as their total size, increases significantly. That means more time is required to read them and check the facts. It also gets much harder to identify wrong information. In the past, it was relatively easy to tell just from the shape of an answer whether a volunteer was answering from experience or just posting some nasty comment. With LLMs this is no longer as clear to distinguish as before. So, in the worst case, malicious code could spread (intentionally or unintentionally), since many users copy & paste answers unseen and unchecked.
So, I ask: who should do the filtering? The mods, who already have their hands full (and who are volunteers as well)? The inexperienced users asking the questions? The rest of the community, blaming each other?