Comments (33)
Perhaps the easiest way is if I add the ability to pick `.lvt` files in a config. One can then have two configs which differ only in `checkruns`: easier than trying to extend what is at present a simple syntax ...
from l3build.
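The two-config idea above might look something like the sketch below. Note that `checkruns` is the real variable under discussion, but per-config `.lvt` selection is only a proposal at this point, so the file name and the `lvtfiles` variable are purely illustrative:

```lua
-- config-reruns.lua: hypothetical second config, identical to the main
-- one except for checkruns and the (proposed, not yet existing) ability
-- to select which .lvt files it applies to.
checkruns = 3
lvtfiles  = {"l3test-rerun*.lvt"}  -- "lvtfiles" is an illustrative name
```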
@wspr Could just be an appropriately-named `.lua` file; it doesn't have to have an 'odd' extension.
from l3build.
@josephwright Sure; it makes little difference whether you have `l3test01.lvt` then `l3test01.l3b` or `l3test01-l3b.lua`, but having a separate extension kind of 'fits' nicely.
from l3build.
@FrankMittelbach At present, our configs are simple `.lua` files with no odd parsing. In the current case, `checkruns` is an integer. If we want to use the suggested syntax, we'd have to parse the entire config file separately to pick out the value. Assuming we don't want to do that, we might make it a string so we can parse just the value itself, but that still feels awkward to me ...
from l3build.
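The point about parsing can be seen from how a plain-Lua config behaves: the file is simply executed, so `checkruns` arrives as a native integer with no string handling needed. A minimal sketch (the loading mechanism shown here is an assumption for illustration, not l3build's actual code):

```lua
-- Execute a config snippet in a sandbox table: values come back as
-- native Lua types, so no separate parsing pass is needed.
local env = {}
local chunk = assert(load("checkruns = 2", "config", "t", env))
chunk()
assert(type(env.checkruns) == "number")  -- an integer, not a string
```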
@josephwright Hmm. I wonder if that approach is then valid (and not dangerous):

checkruns = <max-number>

runs the checks a maximum number of times, each time comparing the result. Stop if either the results match or we have tried <max-number> times. For `save`, always do the full number of runs.

Offhand I can't really see a case where that is going to fail, i.e. matching the first time but not the second or third. I know one can construct such cases, but that would have to be rather deliberate in my opinion.

But I think that @davidcarlisle's suggestion is even better. 0 doesn't quite work, as you may want to say how often to try.
from l3build.
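The proposed semantics can be sketched as a simple loop. This is illustrative only: `run_one_check` and `logs_match` are stand-ins for l3build internals, not real function names.

```lua
-- Run up to `checkruns` times, comparing after each run; break out as
-- soon as the log matches the saved .tlg, otherwise report failure
-- after the maximum number of attempts.
local function check_with_reruns(run_one_check, logs_match, checkruns)
  for _ = 1, checkruns do
    run_one_check()
    if logs_match() then
      return true  -- results match: stop early
    end
  end
  return false  -- still differing after checkruns attempts
end
```

A test that converges on the second run would then perform exactly two TeX runs rather than the full maximum.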
I'll have to adjust the code to let us 'break out' of the loop to get to the `.tlg` comparison stage. I'd like to do #50 first (it's a big-ish job), then I guess I can look at this and #6. I'll flag for TL'18.
from l3build.
Hmmmm, what if you were testing that a certain condition didn't converge after a number of runs? I'm thinking of something cross-ref related, where you might expect a value after two runs but, due to some misbehaving package, still have ??. Possibly too contrived an example...

Secondly, I guess this isn't a major problem, but this does slow down all legitimately failing tests, right? (Especially if not running `--halt`.) Unless you subdivide all tests that need multiple runs, but if you were willing to do that you wouldn't need the variable number of runs to start with.

I'm sorry to be contrary, but if it were up to a vote I think I'd prefer a simple mechanism to indicate how many runs a given test should use. (Even if it were embedded somehow in the `.lvt` file?)
from l3build.
I've added some code for this but not documented it yet. I wonder if we should just enable this approach all of the time: typically the TeX run is slower than the comparison, and we are normally running only one check run, so there would be no impact. That avoids the entire need for a specialised interface.

I also need to get it working with PDF-based tests, but to be honest that entire area needs re-doing, so I've not worried about it at present.
from l3build.
My approach would have been to use the max number of reruns at save time (i.e. no breaking out earlier).
from l3build.
On saving, I think the current logic should be OK. When you save, there are two cases:

- There is no existing `.tlg` file: the comparison will always fail and the maximum number of runs will be applied
- There is an existing `.tlg`, which will only match if we can safely break out of the loop

That said, I've not tried this out just yet: I thought I'd first see if the entire plan sounded any good. I can look at forcing 'no break out', but it's a bit tricky as the `runtest()` function doesn't actually know whether the required target is `check` or `save` ...
from l3build.
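One way to express the 'no break out on save' idea is to make the early exit conditional on the target. This is a sketch only; `runtest()`'s real signature and internals in l3build differ, and `logs_match` is a hypothetical stand-in:

```lua
-- Break out early only when checking; when saving, always complete the
-- full number of runs so the stored .tlg reflects a stabilised state.
local function runtest(logs_match, checkruns, target)
  local runs = 0
  for _ = 1, checkruns do
    runs = runs + 1      -- one engine run would happen here
    if target == "check" and logs_match() then
      break              -- checking: checkruns is only a maximum
    end
  end
  return runs
end
```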
I've done some (simple) tests and I'm reasonably confident that there should be no issue with the `save` target ...
from l3build.
@blefloch I'd not thought of that case: I guess I will need to work out how best to handle it. All doable, of course.

More general thoughts on leaving the interface unchanged? I wondered if we should have a boolean to turn this on and off: `optimisecheckruns` or similar?
from l3build.
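If such a switch were added, the config side might be as simple as the fragment below. This is hypothetical: `optimisecheckruns` is only the name floated in the comment above, not an existing l3build variable.

```lua
-- Hypothetical opt-out of the early break-out behaviour
optimisecheckruns = false  -- always perform all checkruns runs
checkruns = 3
```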
The latest change means that treating `checkruns` as a maximum only applies when checking, not when saving. If people could test it out, or at least have a look over the commits, that would be great! I'll document it once it's clear this is the desired behaviour.
from l3build.
I'm calling this fixed ...
from l3build.
I think so ... but my machine is still too slow :-(
from l3build.
@FrankMittelbach Comes down to how many tests one decides to run: we could only pick 'obvious' ones for multiple engines, but in the past it's been the non-obvious ones that have been problematic.
from l3build.
No, it comes down to this being an inexpensive machine that by now is really old, so ... it is simply slow. It is always the non-obvious tests that show errors, so there is a good reason to run them all (I'm not complaining).
from l3build.
@FrankMittelbach One reason people use a branch-and-pull-request workflow: you check stuff in on a branch, the CI does the tests, you only merge when they pass ;)
from l3build.