Hello,
I am trying to work around a problem with the DB connection during integration tests. Can you help me?
TL;DR
How can the complete NC core be restarted in order to enforce a new DB connection to be created?
Motivation
For integration testing, the DB is always a hard thing to deal with. This is all the more true as the DB is partly abstracted away from the app's code by the NC core, e.g. when accessing files in the NC instance through the files app. There is a more or less complete documentation of the API, but the internals might change at any time.
Therefore, it makes sense to use the fresh-fixture pattern (ref) so that all tests start from a clean and well-defined state. So, it is necessary to reset the complete DB content for each test.
General idea
As long as the DB has not been used yet, the server does not cache anything. So, one can think of a general structure of a test case like
// Namespace, use statements etc ...
class MyTestCase extends TestCase {
    protected function setUp(): void {
        parent::setUp();
        // Load fixture into DB
        exec('path/restore-system-to-fixture.sh');
        // Prepare rest of system
    }

    // ...
}
The file path/restore-system-to-fixture.sh is a shell script that allows resetting the complete system to a defined state (data folder, database dump for non-SQLite DBs).
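The actual script is not shown here; as a rough sketch only (all paths, the fixture layout, and the SQLite file name are assumptions, and the demo part at the bottom just makes the sketch executable standalone), it could look like:

```shell
#!/bin/sh
# Sketch of a restore-system-to-fixture.sh. For SQLite, restoring the DB is
# just copying a file inside the data folder; for server-based DBs one would
# replay a dump instead. Paths below are placeholders.
set -eu

# Reset a data folder (including the SQLite file inside it) to a snapshot.
restore_fixture() {
    fixture="$1"   # snapshot taken from a freshly installed instance
    live="$2"      # data folder of the instance under test
    rm -rf "$live"
    cp -R "$fixture" "$live"
}

# Throw-away demo environment so the sketch runs standalone:
tmp=$(mktemp -d)
mkdir -p "$tmp/fixture"
printf 'clean state' > "$tmp/fixture/owncloud.db"
mkdir -p "$tmp/live"
printf 'dirty state' > "$tmp/live/owncloud.db"

restore_fixture "$tmp/fixture" "$tmp/live"
cat "$tmp/live/owncloud.db"   # prints: clean state
```

In a real setup the fixture snapshot would be created once, right after a fresh installation plus the required test data, and the function would be pointed at the instance's real data folder.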
The alternative approach of coding the fixture manually in PHP was not pursued. The internals of the files app would have to be modeled accordingly, which is out of the scope of our app and might easily break the tests whenever something changes upstream.
Setup and wiring of the classes
Typically, there is a PHPUnit test suite that has a bootstrap.php configured. For the cookbook app, this file looks something like this:
<?php
require_once __DIR__ . '/../../../tests/bootstrap.php';
\OC_App::loadApp('cookbook');
The path __DIR__ . '/../../../tests/bootstrap.php' points to the bootstrap.php of the Nextcloud server. While evaluating this inclusion, some basic things are initialized, such as \OC::$server and the corresponding database connection.
So, the order of events for a single test is:
1. Startup code of PHPUnit
2. App bootstrap using the bootstrap.php shown above
3. Core bootstrap using the bootstrap.php from the server repository
4. Initialization of the server and opening of a connection to the DB
5. Some internals of PHPUnit to prepare the test run
6. Setup using the setUp() method of the test
7. Run of the main test function
8. Tear down, if configured
Non-SQLite database systems
For all database types except SQLite, this approach works. The reason is that all of these systems can handle multiple simultaneous incoming connections.
So, while the connection held by the server core is still open, the test class can trigger an external script that makes its own connection to the DB server. The DB might be rebuilt completely, and at this point that is fine.
Only once the core has started accessing the data may it begin caching, and then this approach no longer works. One way to work around that caching is PHPUnit's process isolation feature.
Usage of SQLite
In general, SQLite accepts only one connection at a time. One can easily back up the database by simply copying the single file to another location. When done manually/asynchronously, this works well.
Problem description
When taking the approach from above with SQLite, the following problem arises:
The test is started and the sequence above begins to run. At an early stage, the core opens a connection to the existing SQLite file. In setUp(), the old file is unlinked and replaced by a new one (from the fixture).
The problem is that the server already has the file open, and the inode/file descriptor is kept within the PHP process. So, the old DB content is used during the test run; the current fixture will only take effect in the next test run.
This shift by one test execution is anything but obvious when looking at the PHP test classes and can be very surprising.
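The stale-descriptor effect can be reproduced without Nextcloud at all. A minimal shell demonstration (using a plain text file in place of the SQLite DB, and a held file descriptor in place of the core's connection):

```shell
#!/bin/sh
# A process that opened a file before it was replaced keeps reading the old
# inode through its file descriptor, just like the NC core and its SQLite file.
set -eu
tmp=$(mktemp -d)

printf 'old fixture' > "$tmp/owncloud.db"

exec 3< "$tmp/owncloud.db"                  # the "core" opens the DB, holds fd 3

rm "$tmp/owncloud.db"                       # setUp() unlinks the old file ...
printf 'new fixture' > "$tmp/owncloud.db"   # ... and restores the fixture

seen=$(cat <&3)                             # the "core" still reads via fd 3
exec 3<&-                                   # close the descriptor again
echo "$seen"                                # prints: old fixture
```

On disk the new fixture is in place, but the held descriptor still delivers the old content, which is exactly the one-test shift described above.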
Ideas for a solution
There are a few ideas on how to tackle the problem, but none was found to be working, or it was unclear how to implement it.
Restart of the core
If it were possible to trigger/force a restart of the complete core, or at least of the database connection, one could do so after the SQLite file has been restored in the setUp() method.
The additional benefit of a complete restart would be that any cached data should be gone as well. That might allow avoiding the time-consuming process isolation.
Delaying of the bootstrap
Another approach is to not include the bootstrap.php of the NC core within the app's bootstrap.php. As the core's bootstrap file triggers the DB connection, it would instead have to be called within the setUp() method, so that the core starts only after the DB has been reset.
Usage of the PHP classes of the NC server
One could try to dump and restore the database content using IDBConnection and similar classes from within the NC process. This would avoid the problem of access from different processes, but introduce the problem that the complete fixture must be built in PHP. Any upstream change might break the complete test environment.
Usage of transactions
One idea is to use transactions to “undo” any DB change. While this might work well for queries against the data of the database (SELECT, INSERT, UPDATE, DELETE, i.e. CRUD operations), it will not necessarily work well with schema updates.
One of the reasons I want to run the tests with a real database is to test all migrations (including error handling). As far as I know, at least MySQL/MariaDB cannot roll back a schema update, since DDL statements trigger an implicit commit; PostgreSQL, in contrast, supports transactional DDL. For the other database engines I am not sure.
For testing the usual business logic, this might be a feasible solution. There is even a package available to realize this structure.
Dummy DB as stub/mock
According to some literature, the best one can do regarding the DB is to create a stub or mock in order to avoid the runtime penalty of storing things in the DB. As a side effect, the fixture would be pure PHP again.
I do not see how this could work here. All possible kinds of actions would have to be covered; in fact, I think one would have to rewrite a complete DB engine from scratch, which is obviously not the idea of that approach. Again, the problem remains that the internals of the core should keep working without being tested along with the device under test.