
turtle's Issues

Create a symlink to last used sandbox folder

@scott-gibson-sociomantic

Well, whenever debugging something and checking logs between runs, I'd usually just have a command set up with something like 'less build/devel/tmp/sandbox-xxxx/log/xxxx.log' and check it after each run. Now I'll have to update it each time. It's a minor inconvenience.

@andrej-mitrovic-sociomantic

Maybe turtle could do something clever, like create a symlink to the last-run test-suite directory and name the symlink devel/tmp/sandbox-latest.
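
A rough sketch of what that could look like, assuming POSIX and a hypothetical sandbox_path provided by the runner (the helper name and the sandbox-latest naming just follow the suggestion above):

import std.file : FileException, remove, symlink;
import std.path : buildPath, dirName;

// Hypothetical helper: point a stable "sandbox-latest" symlink at the sandbox
// directory of the most recent run, so log paths stay valid between runs.
void updateLatestSymlink ( string sandbox_path )
{
    auto link_path = buildPath(sandbox_path.dirName, "sandbox-latest");

    // remove any symlink left over from a previous run (it may be dangling)
    try
        remove(link_path);
    catch (FileException) {}

    symlink(sandbox_path, link_path);
}

The runner would call something like this right after creating the sandbox directory for a run.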

Deprecate specifying sandbox/test name manually to prevent sandbox clashing

Right now turtle defaults the sandbox name to the binary name. This is a problem if tests are running in parallel (make -j2) and test the same binary: the sandbox path will clash.

Perhaps it is better to deprecate manual sandbox naming and derive the name from a hash of the main test's path/class name, or similar.
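
A minimal sketch of such a derivation, assuming the runner's ClassInfo is available and using CRC-32 purely as an example hash (the function name and sandbox- prefix are illustrative):

import std.digest.crc;

// Hypothetical derivation of a per-test-runner sandbox name, so parallel
// make -jN runs that test the same binary don't clash on one sandbox path.
string deriveSandboxName ( ClassInfo runner_class )
{
    // CRC-32 of the fully qualified class name, rendered as hex digits
    auto hex = crcHexString(crc32Of(runner_class.name));
    return "sandbox-" ~ hex.idup;
}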

Irregular `Tested application kept running even after SIGABRT` test suite failures

With at least one of our applications we have been seeing test failures related to the tested app not shutting down correctly when SIGABRT is issued:

09:30:55 [turtle    ] Tested application kept running even after SIGABRT
09:30:56 core.exception.AssertError@./submodules/turtle/src/turtle/application/model/TestedApplicationBase.d(275): Assertion failure
09:30:56 ----------------
09:30:56 src/core/exception.d:434 _d_assert [0xe9d827]
09:30:56 ??:? void turtle.application.model.TestedApplicationBase.__assert(int) [0xdd92a4]
09:30:56 ./submodules/turtle/src/turtle/application/model/TestedApplicationBase.d:275 bool turtle.application.model.TestedApplicationBase.KillTimerEvent.nextSignal() [0xdd9114]

The two observed cases so far impacted D2 builds only, so it could be that this is a D2-only issue.

The app for which this was observed does have some functionality that uses ocean.task.util.Timer.wait to temporarily suspend tasks, but I can't see how this could reasonably interfere with SIGABRT: there isn't any custom-written signal handling, and it uses the DaemonApp TaskExt extension (so signal handling should be automatically set up under the hood).

Requesting a fresh test run usually results in success, so this is not a consistent failure.

Consider separating test finding facilities

Turtle currently contains several independent chunks of functionality. One of them is a framework for defining independent test cases as classes, plus a runner to find and execute them via reflection. However, this is currently bound to other functionality such as spawning the tested application. It could be useful to have just the test case definition and discovery functionality available separately.

Always clean tested application environment and add option to preserve.

Currently it is unclear exactly what the environment logic does.

If the test suite returns a null env, the test suite's own environment is passed on to the spawned process.
Setting any env variable in the configureTestedApplication method, however, causes the existing environment to be dropped entirely. I recently discovered this when adding stomping prevention for the D2 conversion tests: my PATH was wiped, causing a test failure.

The behaviour should be updated to always clean the environment by default, with an option to preserve it.
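
A minimal sketch of the proposed default, using illustrative names (buildTestedAppEnv, preserve_env and configured_env are not existing turtle APIs) and std.process purely for the example:

import std.process : environment;

// Hypothetical sketch: build the environment for the spawned tested
// application. Start from an empty environment by default; only inherit the
// test suite's own environment when the suite explicitly opts in.
string[string] buildTestedAppEnv ( bool preserve_env,
    string[string] configured_env )
{
    string[string] env;

    if (preserve_env)
        env = environment.toAA();

    // variables set via configureTestedApplication() are applied on top
    foreach (name, value; configured_env)
        env[name] = value;

    return env;
}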

Provide more information on early abort

@mathias-lang-sociomantic wrote:

Typical case:

   run build/devel/tmp/test-feed
[turtle    ]     Early termination from 'XXX', aborting
make: *** [XXX] Error 255

This isn't very informative. The app crashed with a segfault (signal 11), and turtle could detect that (although it requires support in ocean).

It would also be useful to have the arguments the program (and the test suite) were started with, so I can easily debug it.
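
A minimal sketch of how the signal could be reported, assuming turtle has access to the raw wait() status of the tested process (the helper name and the log wording are illustrative):

import core.sys.posix.sys.wait : WEXITSTATUS, WIFEXITED, WIFSIGNALED, WTERMSIG;
import std.stdio : writefln;

// Hypothetical helper: turn a raw wait() status into something more useful
// than "Error 255", e.g. naming the signal that killed the tested app.
void reportTermination ( int status )
{
    if (WIFSIGNALED(status))
        writefln("Tested application was terminated by signal %s", WTERMSIG(status));
    else if (WIFEXITED(status))
        writefln("Tested application exited with status %s", WEXITSTATUS(status));
}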

D2: Automatically convert tags and push them

To move forward with D2 we need to provide automatically converted libraries for projects that are built only for D2.

Every time a new tag is pushed, we have to convert it to D2, tag the converted version with +d2 build metadata appended (so tag v2.3.4 becomes v2.3.4+d2 after D2 conversion) and push it back (making sure the new tag is not converted again!).

This capability will probably be added to beaver, but at some point we will need to use that feature from beaver here.
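
A minimal sketch of the "don't convert twice" check, assuming the conversion and pushing steps are handled elsewhere (the helper name is illustrative):

import std.algorithm : canFind, endsWith, filter, map, splitter;
import std.array : array;
import std.process : execute;
import std.string : strip;

// Hypothetical helper: list the tags that still lack a +d2 counterpart,
// skipping tags that already carry the +d2 build metadata themselves.
string[] tagsNeedingConversion ( )
{
    auto tags = execute(["git", "tag", "--list"]).output
        .splitter('\n')
        .map!(t => t.strip)
        .filter!(t => t.length > 0)
        .array();

    return tags
        .filter!(t => !t.endsWith("+d2") && !tags.canFind(t ~ "+d2"))
        .array();
}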

Delay in checking status file makes RunTwiceCompareStats unreliable

RunTwiceCompareStats runs the test suite twice and verifies that the vsize of the tested application has not increased.

public bool runTwiceCompareStats ( ref RunnerConfig config,
    ref Context context, void delegate() reset, istring[] disabled )
{
    // run twice and compare peak stats
    auto app = cast(TestedDaemonApplication) context.app;
    enforce(app !is null);

    bool result1 = runAll(config, context, reset, disabled);
    if (!result1)
        return result1;

    auto vsize1 = app.getPeakStats().vsize;
    log.info("Peak virtual memory after first run: {}", vsize1);
    log.trace("");
    log.trace("-----------------------------------------");
    log.trace("----------- END OR FIRST RUN ------------");
    log.trace("-----------------------------------------");
    log.trace("");

    bool result2 = runAll(config, context, reset, disabled);
    auto vsize2 = app.getPeakStats().vsize;
    log.info("Peak virtual memory after second run: {}", vsize2);

    enforce!(">=")(vsize1, vsize2);
    return result2;
}

The problem is that vsize isn't read freshly when queried but is returned from a cache.

public PeakStats getPeakStats ( )
{
    return this.stats_grabber.peak_stats;
}

The cache is updated using a timer.

this.stats_grabber.set(0, 1, 0, 10);

If memory is allocated in the last test of the first run, then it is quite possible that the timer doesn't fire before (I am supposing) the epoll loop exits. That means vsize1 is assigned an out-of-date value.

auto vsize1 = app.getPeakStats().vsize;

That in turn means that the last allocation of the first run is thought to occur during the second run, and so the RunTwiceCompareStats test fails.

As we need "final" stats in the case of RunTwiceCompareStats, I don't think the cache should be used.

A fix that works for me is below. I'm not sure it is nice to call handle_, though, so I hope someone else will chime in with a correct fix. I'm also not sure how useful the cache is - how often does it get queried?

--- a/src/turtle/application/TestedDaemonApplication.d
+++ b/src/turtle/application/TestedDaemonApplication.d
@@ -164,13 +164,20 @@ class TestedDaemonApplication : TestedApplicationBase
 
     /***************************************************************************
 
+        Params:
+            use_cache = if stats should be returned from the cache or should
+                be queried live
         Returns:
             stats for peak resource usage by the application
 
     ***************************************************************************/
 
-    public PeakStats getPeakStats ( )
+    public PeakStats getPeakStats (bool use_cache = true )
     {
+        if (! use_cache)
+        {
+            this.stats_grabber.handle_(0);
+        }
         return this.stats_grabber.peak_stats;
     }
 }
diff --git a/src/turtle/runner/actions/RunTwiceCompareStats.d b/src/turtle/runner/actions/RunTwiceCompareStats.d
index 5f74bf6..a46b06a 100644
--- a/src/turtle/runner/actions/RunTwiceCompareStats.d
+++ b/src/turtle/runner/actions/RunTwiceCompareStats.d
@@ -45,7 +45,7 @@ public bool runTwiceCompareStats ( ref RunnerConfig config,
     if (!result1)
         return result1;
 
-    auto vsize1 = app.getPeakStats().vsize;
+    auto vsize1 = app.getPeakStats(false).vsize;
     log.info("Peak virtual memory after first run: {}", vsize1);
     log.trace("");
     log.trace("-----------------------------------------");
@@ -54,7 +54,7 @@ public bool runTwiceCompareStats ( ref RunnerConfig config,
     log.trace("");
 
     bool result2 = runAll(config, context, reset, disabled);
-    auto vsize2 = app.getPeakStats().vsize;
+    auto vsize2 = app.getPeakStats(false).vsize;
     log.info("Peak virtual memory after second run: {}", vsize2);
 
     enforce!(">=")(vsize1, vsize2);

Improve documentation in case of instantiating abstract test case class

If the test case Iterator fails to create a test case instance, it shows this error to the user:

log.error(
    "Found an invalid test case class '{}' which doesn't " ~
    "have the default constructor defined.",
    cinfo.name
);

Adding a default constructor to the abstract class is pointless since the create call will still fail. We could/should omit the default constructor part of the message if the test case class is abstract.
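
A sketch of how the message could distinguish the two cases, assuming the druntime in use exposes the isAbstract class flag on TypeInfo_Class (that flag and the wording are assumptions; cinfo and log are the same names as in the snippet above):

// Hypothetical refinement: report abstract classes separately instead of
// suggesting a default constructor that would not help.
// Assumes TypeInfo_Class.ClassFlags.isAbstract is available in this druntime.
bool is_abstract = (cinfo.m_flags & TypeInfo_Class.ClassFlags.isAbstract) != 0;

if (is_abstract)
    log.error("Found an abstract test case class '{}'; it cannot be " ~
        "instantiated and will be skipped.", cinfo.name);
else
    log.error("Found an invalid test case class '{}' which doesn't " ~
        "have the default constructor defined.", cinfo.name);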

Unclear documentation / naming in turtle.env.model.Registry

The module doc says:

Provides centralized registry where turtle env additions from other libraries can register themselves for the purpose of being notified about test suite shutdown.

The one location where the Registry.unregisterAll method is called says:

before killing tested app, unregister all know environment additions to avoid irrelevant errors being printed because of shutdown sequence

The documentation of ITurtleEnv says:

Turtle environment addition must implement this interface to be able to be notified about test suite shutdown to adjust own state accordingly.

The documented purposes of the registry and the unregistration facility (quoted above) don't really seem to tie together. From the point of view of a user implementing ITurtleEnv, it's not clear what unregister is supposed to do.

Ignore --progress when verbose output is requested

Currently, turtle enforces that the --progress option has not been given if any level of verbosity is requested. This results in the following error when running make test V=1 with --progress set in the Makefile:

sed -i '\|^ */project/build/devel/include/Version.d.*$|d' /project/build/devel/tmp/test-appname.mak 
/project/build/devel/tmp/test-appname  --progress=15 
terminated after throwing an uncaught instance of 'object.Exception' at ./submodules/turtle/src/turtle/runner/Runner.d:706
  toString():  enforcement has failed
submodules/makd/Makd.mak:651: recipe for target '/project/build/devel/tmp/test-appname.stamp' failed
make: *** [/project/build/devel/tmp/test-appname.stamp] Aborted (core dumped)

Setting any level of verbosity means that the output is meant for human consumption, so the --progress flag can be completely ignored in this case (even if it has been supplied).
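
A minimal sketch of that reconciliation, with illustrative names (verbosity, progress_interval) rather than turtle's actual config fields:

// Hypothetical reconciliation of the two options: when any verbosity is
// requested, drop the progress setting instead of aborting with an error.
size_t effectiveProgressInterval ( uint verbosity, size_t progress_interval )
{
    return verbosity > 0 ? 0 : progress_interval;
}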

Add network delay / packet loss functionality to turtle

@gavin-norman-sociomantic wrote:

Allowing a test suite to easily (i.e. by calling a single function, ideally) set up different degrees of network trouble, in order to test the application's robustness under these conditions.

The proposed approach was to use virtual network namespaces combined with iptables filters. However, this is untested and very far from what turtle currently does, so it keeps being delayed.
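
A rough sketch of what the single-function entry point could look like under that approach (the namespace handling, the use of the iptables statistic match, and the need for root privileges are all assumptions; nothing here exists in turtle today):

import std.conv : to;
import std.exception : enforce;
import std.process : execute;

// Hypothetical helper: drop a fraction of incoming packets inside a network
// namespace, simulating packet loss for the application under test.
void simulatePacketLoss ( string netns, double probability )
{
    auto result = execute(["ip", "netns", "exec", netns,
        "iptables", "-A", "INPUT",
        "-m", "statistic", "--mode", "random",
        "--probability", probability.to!string,
        "-j", "DROP"]);
    enforce(result.status == 0, result.output);
}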

Allow test suite to wait for a length of time with no writes to the Node.

Many applications are architected in such a way that some predefined input is fed into them, they process it, and they write the resulting profiles into, say, a DHT. It's often necessary to know that the application has finished all processing and that all writes have landed in the output node before verifying its contents.

One of the easiest ways, which may be good enough for most purposes, is to allow turtle to wait for a length of time during which no changes are made to whichever fake node is in use.
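
A minimal sketch of such a wait, assuming a hypothetical totalRecords() delegate that reports how many records the fake node holds, and using a plain sleep-based poll (a real implementation would more likely hook into turtle's task/epoll machinery):

import core.thread : Thread;
import core.time : Duration, msecs;

// Hypothetical helper: block until totalRecords() has not changed for
// quiet_period, i.e. the tested application appears to have stopped writing.
void waitForNoWrites ( size_t delegate ( ) totalRecords, Duration quiet_period,
    Duration poll_interval = 100.msecs )
{
    auto last = totalRecords();
    Duration quiet;

    while (quiet < quiet_period)
    {
        Thread.sleep(poll_interval);
        auto current = totalRecords();

        if (current == last)
            quiet += poll_interval;
        else
        {
            quiet = Duration.zero;
            last = current;
        }
    }
}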

Display which sandbox was used

While testing a change, it's not rare that one gets multiple failures in a row and does not run make clean in between. This leads to a situation where there are multiple sandbox-APP-XX______ folders in build/last/tmp and it's hard to tell which one is the most recent, and hence which application's logs to consult.
