Comments (14)
But after doing some experiments I think it is doable. Island of TeX provides weekly updated Docker images for the latest full TeX Live, so we can run tests on them with GitHub Actions.
That's not an issue: we build a minimal TeX Live for our tests anyway - it's just a question of what to include. (This means the install is updated every day from CTAN.)
Step 0: Find all `.sty` or `.cls` files in the `tex/latex` folder and compile them with `pdflatex` on the current TeX Live.

```latex
% for somename.cls file
\documentclass{somename}
\begin{document}
TEST
\end{document}
```

```latex
% for somename.sty file
\documentclass{article}
\usepackage{somename}
\begin{document}
TEST
\end{document}
```

Then add every package file which fails the compilation to an `ignorelist`. (If you want to do more, you could also consider the `tex/xelatex` and `tex/lualatex` folders, as well as the `xelatex` and `lualatex` programs, in the future.)
The problems here are:

1. Testing package loading may or may not be useful - it's not really possible to know unless you are the package author (e.g. `siunitx` has about 30 test files, most with multiple tests inside)
2. If a package changes and causes a failure, the team then have to debug - and until that's done, any releases are blocked

Now, one might decide this is a reasonable balance, but at least when we looked before it felt fragile.
from latex2e.
> That's not an issue: we build a minimal TeX Live for our tests anyway - it's just a question of what to include. (This means the install is updated every day from CTAN.)
It is faster to install a Docker image than to manually install TeX Live and lots of packages. And the Docker image contains all the files needed.
Also, these tests could be run at the same time as `l3build` is running. We only need to add a new `.yml` file to the `workflows` folder.
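For illustration, a minimal workflow of that kind might look roughly as follows. The workflow name, the schedule, the driver script `pkgstatus.lua` and the Island of TeX image tag are all assumptions, not tested configuration:

```yaml
# .github/workflows/pkgtest.yml (hypothetical name)
name: package-tests
on:
  schedule:
    - cron: "0 3 * * 1"   # weekly, alongside the normal l3build runs
  workflow_dispatch:       # allow manual runs too
jobs:
  load-tests:
    runs-on: ubuntu-latest
    # weekly-updated full TeX Live image from Island of TeX (assumed tag)
    container: registry.gitlab.com/islandoftex/images/texlive:latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the package loading tests
        run: texlua pkgstatus.lua   # hypothetical driver script
```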
> Testing package loading may or may not be useful - it's not really possible to know unless you are the package author (e.g. `siunitx` has about 30 test files, most with multiple tests inside)
It is still useful even if only package loading is tested.
> If a package changes and causes a failure, the team then have to debug - and until that's done, any releases are blocked
Step 1 and Step 2 together mainly catch LaTeX bugs.
And the team could still decide not to consider these tests as release blockers and upload new releases to CTAN as before.
>> That's not an issue: we build a minimal TeX Live for our tests anyway - it's just a question of what to include. (This means the install is updated every day from CTAN.)
>
> It is faster to install a Docker image than manually install TeX Live and lots of packages. And the docker image contains all files needed.
Rather, it contains too many: you need to be selective :)
>> Testing package loading may or may not be useful - it's not really possible to know unless you are the package author (e.g. siunitx has about 30 test files, most with multiple tests inside)
>
> It is still useful even if package loading is tested only.
>
>> If a package changes and causes a failure, the team then have to debug - and until that's done, any releases are blocked
>
> Step 1 and Step 2 together mainly catch latex bugs.
I forgot to add (3): one has to decide for every package involved if the issue is a LaTeX or a package bug.
> And the team could still decide not to consider these tests as release blockers and upload new releases to CTAN as before.
Nope: releases can only go if the test suite passes.
>>> That's not an issue: we build a minimal TeX Live for our tests anyway - it's just a question of what to include. (This means the install is updated every day from CTAN.)
>
>> It is faster to install a Docker image than manually install TeX Live and lots of packages. And the docker image contains all files needed.
>
> Rather, it contains too many: you need to be selective :)
I.e. the current cache size is only ~240 MB.
Previous discussion was in
In a more general aspect, people want as smooth an experience as possible when updating (La)TeX packages, because the package managers of TeX Live and MiKTeX both lack per-project package and package-version locking.
Maybe latex3 packages could have a set of `-dev` releases, for instance two weeks ahead of the production releases. Then package maintainers could configure CI to test with the `-dev` releases on a schedule (like once a week).
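A scheduled maintainer-side check against the pre-release format could be sketched like this, assuming the `pdflatex-dev` launcher shipped with recent TeX Live, a hypothetical `test.tex`, and the Island of TeX image tag:

```yaml
# hypothetical scheduled job testing a package against the latex-dev format
name: test-against-dev
on:
  schedule:
    - cron: "0 4 * * 0"   # once a week, as suggested above
jobs:
  dev-check:
    runs-on: ubuntu-latest
    container: registry.gitlab.com/islandoftex/images/texlive:latest
    steps:
      - uses: actions/checkout@v4
      - name: Compile the test document with the pre-release format
        # pdflatex-dev runs the LaTeX-dev format shipped in TeX Live
        run: pdflatex-dev -interaction=batchmode test.tex
```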
> Maybe latex3 packages could have a set of `-dev` releases, for instance two weeks ahead of the production releases. Then package maintainers could config CI to test with `-dev` releases on schedule (like once a week).
The team have been discussing this: there are some downsides but obviously potential upsides too. We are likely to talk 'in person' about it next week before reaching a conclusion.
Another idea: technically it's possible to write an action which fetches the latest files in the `latex3/latex3` repo (or the latest commit which passed all the checks), installs them into the TDS tree, and updates the (dev-)formats. Then maintainers of packages hosted on GitHub and using GitHub Actions as their CI platform could use this action to test their packages against the latest latex3 components, on a schedule.
Just technically.
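Purely as an illustration of that idea, such a composite action might be sketched as below. The repository layout, the `l3build install` step and the texmf path are assumptions, not a published action:

```yaml
# action.yml for a hypothetical 'latex3-latest' composite action
name: Install latest latex3
runs:
  using: composite
  steps:
    - name: Fetch and install the latest latex3 sources into the TDS tree
      shell: bash
      run: |
        git clone --depth 1 https://github.com/latex3/latex3.git
        cd latex3
        # l3build can install built files into a given texmf tree
        l3build install --texmfhome "$HOME/texmf"
        mktexlsr "$HOME/texmf"
    - name: Rebuild the formats so the new kernel code is picked up
      shell: bash
      run: fmtutil-sys --all
```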
> Another idea: technically it's possible to write an action which fetches the latest files in `latex3/latex3` repo (or the latest commit which passed all the checks), installs them into TDS tree, and updates the (dev-)formats. Then maintainers of packages hosted on GitHub and using GitHub Actions as CI platform could use this action in testing their packages against the latest latex3 components, on schedule.
Sure, but that most likely helps devs who are already relatively 'involved' - ones with, for example, an existing testing setup. My take on the original question is that it's more aimed at people who are not in that position. (And yes, it's unfortunate that `l3debug` missed the issue that sparked this - now corrected, but that doesn't necessarily help.)
I created a new repository https://github.com/lvjr/pkgstatus and wrote some code for experiments.
First I removed `ignorelist.txt` and ran the tool. The result was:
number of ignored packages = 0
number of passed packages = 3953
number of failed packages = 944
Then I renamed `faillist.txt` to `ignorelist.txt` and ran the tool again. The result was:
number of ignored packages = 944
number of passed packages = 3953
number of failed packages = 0
> I created a new repository https://github.com/lvjr/pkgstatus and wrote some code for experiments.
Well, your own ignorelist shows the problem with this approach: it contains for example `citation-style-language` and `acro` (for different reasons), which means that your test would now not catch if we break these packages. You exclude packages which require a specific engine, packages which expect a specific class, packages which expect that some other package is loaded before them (like tagpdf, newpax or hypcap), etc. So to make such a test suite more meaningful (and once it is there, package authors will request to make it more meaningful) you need exceptions---you already started that by handling beamer and fontspec differently. This all requires manual maintenance, for which we don't have the bandwidth.
Naturally nothing prevents you (or someone else) from setting up such a test suite. You can pull in the newest LaTeX sources, run the tests and notify us if you think that there is a problem with the format.
> Well your own ignorelist shows the problem with this approach: It contains for example `citation-style-language` and `acro` (for different reasons), which means that your test would now not catch if we break these packages. You exclude packages which require a specific engine, you exclude packages which expect a specific class, you exclude packages which expect that some other package is loaded before (like tagpdf, newpax or hypcap) etc. So to make such a testsuite more meaningful (and once it is there package authors will request to make it more meaningful) you need exceptions---you already started that by handling beamer and fontspec differently. This all requires manual maintenance for which we don't have the bandwidth.
Could you please read the title of this issue again? I am not talking about testing ALL packages but MORE packages. Even testing 3953 packages is a large improvement, and the team could decide which packages must be included or excluded.
> Naturally nothing prevents you (or someone else) to setup such a testsuite. You can pull in the newest latex sources, run the tests and notify us if you think that there is a problem with the format.
I have heard this kind of sentence several times.
> Could you please read the title of this issue again? I am not talking about testing ALL packages but MORE packages. Even testing 3953 packages is a large improvement and the team could decide which packages must be included or excluded.
The issue comes in that if there's a breakage, one has to decide what to do. Much of the time, issues arise which are highlighted by a LaTeX change but are not (formally) 'caused' by it, i.e. something was already broken compared to the formal API, etc. One then has to decide how to handle things. Excluding a 'broken' package is easiest but doesn't help (other than as a record); asking a dev to fix it may or may not be successful; fixing via some firstaid approach is sometimes necessary but isn't something that scales well; etc.
> The issue comes in that if there's a breakage, one has to decide what to do. Much of the time, issues arise which are highlighted by a LaTeX change, but are not (formally) 'caused' by a LaTeX change, i.e. something was already broken compared to the formal API, etc. One then has to decide how to handle things. Excluding a 'broken' package is easiest but doesn't help (other than as a record), asking a dev to fix may or may not be successful, fixing via some firstaid approach is sometimes necessary but isn't something that scales well, etc.
I do not mean to put responsibility for broken packages on the team. My key point is: it is always an improvement to know about breakages sooner.
And I don't expect a lot of effort from the team in this direction. Minimal effort is enough:
- set up the test suite and run it only when a new release is out, or run it only manually;
- the test suite will only output the failed list and not return errors on every execution;
- the team can spend one minute glancing at the failed list before uploading a new release;
- the team need not debug the failed list, but can decide to do so if the failed list looks abnormal.
I think spending one more minute will not slow down the development of the project, and it is good for everyone.
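That one-minute glance could even be reduced to scanning just the regressions. A minimal sketch, assuming failed lists in the plain-text `faillist.txt` style mentioned above (the package names in the example are illustrative):

```python
def new_failures(old: str, new: str) -> list[str]:
    """Return packages failing in the new run that did not fail previously."""
    old_set = {line.strip() for line in old.splitlines() if line.strip()}
    return [line.strip() for line in new.splitlines()
            if line.strip() and line.strip() not in old_set]

# Example: only 'acro' is a regression worth a look before release
previous = "citation-style-language\nbeamer\n"
current = "citation-style-language\nacro\n"
print(new_failures(previous, current))  # -> ['acro']
```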
The team discussed the general issue at today's meeting. We have a few possible approaches in mind, each with positive arguments in favour. We are going to work on this a bit more before making a decision on changes for the future.