Hi everyone,
an idea has been floating around in my head for some months now, and I would like to ask you forum people for feedback on it.
Introduction: On the development side of the project we have a lot of automated testing available. It started with unit tests and now includes integration tests, acceptance tests, tests for the APIs, tests for the UI (especially files), static code analysis and so on. We have the basics covered quite well and get decent results there.
The problem: On the other side there are a lot of special cases that are not covered by the automated tests above and that would also be quite a lot of work to implement. That's why we call for testing and have alphas, betas, RCs and then final releases for this project instead of just releasing final packages. Currently those testing efforts are not very transparent - especially the positive side of them. If something is broken, the GitHub issue tracker reflects that quite fast, but the stuff that works is not visible anywhere.
The solution: My idea was to bring a little web page to life that makes it easy for random people to participate in this process with the lowest possible barrier. It would basically be a collection of user stories like:
As a user I want to be able to move a folder “Photos” (including its contents) into a folder “Personal” via drag and drop in the web UI.
Then you could go to a page listing many of these user stories and give feedback on whether each one works for you, optionally with a comment and links (to images/videos or GitHub/forum discussions) attached to the test case.
No login required: This does not require a login or any authentication, because it is basically for collecting basic information and should be as easy as possible to participate in. To avoid vandalism, the results are just the aggregate over all submissions (if the feedback was “works” 10 times and “does not work” once, the feature is likely to work).
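To make the aggregation rule concrete, here is a minimal sketch of how such anonymous votes could be tallied. Everything in it (the function name, the vote labels, the majority rule) is a hypothetical illustration, not a finished design:

```python
from collections import Counter

def verdict(votes):
    """Aggregate anonymous 'works' / 'broken' votes for one user story.

    Hypothetical rule: since there is no login, individual votes are
    untrusted, so we only report the majority verdict plus raw counts.
    """
    counts = Counter(votes)
    works, broken = counts["works"], counts["broken"]
    total = works + broken
    if total == 0:
        return ("untested", 0, 0)
    label = "likely works" if works > broken else "likely broken"
    return (label, works, total)

# 10 "works" votes and 1 "does not work" vote -> likely works
print(verdict(["works"] * 10 + ["broken"]))
```

A single vandal vote thus gets drowned out by honest ones, which is the whole point of averaging instead of trusting individual reports.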
Help needed: It also makes it possible to show which tests have already been done by others and which haven't. That makes it easy for people who want to test something to pick the pending tasks.
Versioned: Obviously the results are stored per version of Nextcloud or of the app.
This would then give the developers a feedback loop, showing when all the basic tests are done and the time is right for further polishing.
Give me feedback about this idea!
This is just a quick introduction to what has been in my head for quite some time. I want basic feedback on whether this would make sense at all and whether people would help gather this information - or whether nobody actually wants to test.
Disclaimer: I will add more details in the coming days - I have already collected a lot more thoughts on this, but I don't want to start bikeshedding about details right from the beginning. This should just be a rough draft to find out whether this is wanted or not - whether the idea makes sense or is complete nonsense.
@tflidd @JasonBayton @Andy @TobiasKaminsky @Soko @Schmu @anon99252149 @juliushaertl @MariusBluem @rullzer I picked you as some of the more active people here in the forums, who may already have a good overview of whether this is needed or not. Maybe you could mention other active people so that they see this, or you have already read about something similar and can link to the relevant forum posts. Thanks!