informalsystems / apalache-tests
Benchmarks for apalache
License: Apache License 2.0
Towards informalsystems/apalache#349
We want to run some benchmarks that will take upwards of 24 hours. We cannot do this on GitHub-hosted runners, but it looks easy to connect self-hosted runners. For a start, we are considering DigitalOcean droplets, as per the walkthrough here: https://digitaloceancode.com/deploying-self-hosted-runners-for-github-actions/ and we might graduate to our own hardware if it seems worth it.
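Once a droplet is registered as a runner, long-running jobs can target it via labels. A minimal sketch of what the workflow change might look like; the `benchmarks` label and the `make benchmarks` step are assumptions, not anything configured yet:

```yaml
# Sketch: route the long-running benchmark job to a self-hosted runner.
# The "benchmarks" label is hypothetical; it must match a label chosen
# when registering the droplet's runner.
jobs:
  benchmarks:
    runs-on: [self-hosted, benchmarks]
    timeout-minutes: 2880  # allow up to 48 hours (self-hosted runners permit this)
    steps:
      - uses: actions/checkout@v2
      - run: make benchmarks  # hypothetical entry point
```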
This repo hasn't seen updates in a while, and we keep running into maintenance burden (dependabot alerts and other breakage).
As discussed on Slack, we should probably archive it, to mark it as unsupported.
I don't have the necessary privileges; @konnov @shonfeder, maybe one of you?
Towards informalsystems/apalache#349
Probably the easiest way to do this is to run the benchmarks inside the apalache/mc:unstable image.
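A sketch of what invoking the image per spec could look like. Mounting the working directory at /var/apalache follows the Apalache Docker usage; the helper below only builds the command line, and the spec/invariant names are illustrative:

```python
import os

def docker_benchmark_cmd(spec: str, inv: str,
                         image: str = "apalache/mc:unstable") -> list:
    """Build a `docker run` command line for checking one spec inside
    the Apalache image. A sketch, not the repo's actual runner."""
    return [
        "docker", "run", "--rm",
        # Make the benchmark specs visible inside the container.
        "-v", f"{os.getcwd()}:/var/apalache",
        image,
        "check", f"--inv={inv}", spec,
    ]

print(" ".join(docker_benchmark_cmd("APAraft.tla", "OneLeader")))
```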
There seem to be a few issues with the Raft spec in this repo.
(1) The quorums in APAraft.tla should be subsets of the set of servers, but they are not.
apalache-tests/performance/raft/APAraft.tla
Lines 21 to 32 in 2760e1c
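The subset property is easy to state outside TLA+. A hypothetical sanity check of the intended relationship; the server and quorum values here are illustrative, not the ones from the spec:

```python
def quorums_are_subsets(servers: set, quorums: list) -> bool:
    # Every quorum must draw its members only from the set of servers.
    return all(q <= servers for q in quorums)

servers = {"s1", "s2", "s3"}
good = [{"s1", "s2"}, {"s2", "s3"}]
bad = [{"s1", "s4"}]  # s4 is not a server, so this violates the property

print(quorums_are_subsets(servers, good))  # True
print(quorums_are_subsets(servers, bad))   # False
```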
(2) The invariant OneLeader should be violated at some point: it is possible for multiple servers to believe themselves to be leaders, as long as they are in different terms.
apalache-tests/performance/raft/APAraft.tla
Line 571 in 2760e1c
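In other words, the property that actually holds in Raft is election safety per term, not a single leader globally. A small sketch of the distinction, with made-up states for illustration:

```python
from collections import Counter

def at_most_one_leader(states) -> bool:
    # The too-strong invariant: at most one leader across all servers.
    return sum(1 for s in states if s["role"] == "Leader") <= 1

def at_most_one_leader_per_term(states) -> bool:
    # The intended Raft safety property: at most one leader per term.
    leaders_by_term = Counter(s["term"] for s in states
                              if s["role"] == "Leader")
    return all(n <= 1 for n in leaders_by_term.values())

# A stale leader in term 1 coexists with the current leader in term 2.
states = [
    {"role": "Leader", "term": 1},
    {"role": "Leader", "term": 2},
    {"role": "Follower", "term": 2},
]
print(at_most_one_leader(states))           # False: OneLeader is violated
print(at_most_one_leader_per_term(states))  # True: per-term safety holds
```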
(3) Apalache (version: 0.22.1) will not check this spec without modification due to the following issue:
Assignment error: APAraft.tla:545:15-545:51: Manual assignment is spurious, votedFor is already assigned!
Once approved, propagate informalsystems/apalache-bench#111 to this repo.
We can't really get an accurate indication of relative running times if we don't track information about the compute resources given to the benchmark runs.
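A sketch of recording the host's compute characteristics alongside each run; the field names are an assumption, and the report format would need to adopt something like this:

```python
import json
import os
import platform

def host_fingerprint() -> dict:
    """Collect basic machine information to attach to benchmark reports,
    so run times from different machines can be compared fairly."""
    return {
        "machine": platform.machine(),      # e.g. x86_64
        "processor": platform.processor(),  # may be empty on some OSes
        "cpus": os.cpu_count(),
        "system": f"{platform.system()} {platform.release()}",
        "python": platform.python_version(),
    }

print(json.dumps(host_fingerprint(), indent=2))
```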
Currently, we have to manually place the source code for every tested version in a hard-coded location and manually add a new make target for each version under test. It would be helpful to set up the build along these lines
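One possible shape for that build setup: drive everything from a single declared list of versions instead of hand-written targets. The directory layout, script name, and version numbers below are all assumptions for illustration:

```python
# Sketch: generate per-version checkout locations and run targets from
# one list, instead of hard-coded paths and hand-written make targets.
VERSIONS = ["0.15.0", "0.16.1", "unstable"]  # illustrative version list

def checkout_dir(version: str) -> str:
    # Hypothetical layout: each tested version lives under _build/.
    return f"_build/apalache-{version}"

def make_rules(versions):
    """Emit Makefile-style rules, one per version in the list.
    scripts/run-benchmarks.sh is a hypothetical runner script."""
    for v in versions:
        yield f"bench-{v}:\n\tscripts/run-benchmarks.sh {checkout_dir(v)}"

print("\n\n".join(make_rules(VERSIONS)))
```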
Towards informalsystems/apalache#349
The CI task should present the report as an open PR that adds the new results to our past results.
Requires #18
But we should figure out the benchmark licenses first
Extend these benchmarks with the Informal specifications.
Transferred from informalsystems/apalache#349
We have discussed moving the build system (but not the specs) to the main apalache repo.
Benefits:
Drawbacks:
Requirements:
At some point we lost support for running TLC. We should restore it by fixing the TODO in the tool_cmd function in https://github.com/informalsystems/apalache-tests/tree/master/scripts/mk-run.py
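A hedged sketch of what a fixed tool_cmd could look like. The real function's signature in mk-run.py may differ and the flags are assumptions, except that tlc2.TLC is the standard TLC entry point:

```python
def tool_cmd(tool: str, jar: str, spec: str, cfg: str) -> str:
    """Return the command line for one benchmark run.
    Sketch only: argument names and apalache flags are assumptions."""
    if tool == "apalache":
        return f"java -jar {jar} check {spec}"
    if tool == "tlc":
        # Restore TLC support: invoke the tlc2.TLC entry point from
        # the TLA+ tools jar with the given config file.
        return f"java -cp {jar} tlc2.TLC -config {cfg} {spec}"
    raise ValueError(f"unknown tool: {tool}")

print(tool_cmd("tlc", "tla2tools.jar", "APAraft.tla", "APAraft.cfg"))
```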
Currently, all the tests are assumed to report NoError. Add configurations where errors are expected.
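A sketch of attaching an expected outcome to each test instead of assuming NoError everywhere; the result names and spec files are placeholders, not the repo's actual configuration format:

```python
# Sketch: each benchmark carries an expected outcome; a run only passes
# when the actual result matches. "NoError"/"Error" are placeholder names.
EXPECTED = {
    "APAraft.tla": "NoError",
    "Buggy.tla": "Error",  # hypothetical spec with a seeded bug
}

def check_result(spec: str, actual: str) -> bool:
    # Default to NoError for specs without an explicit expectation.
    expected = EXPECTED.get(spec, "NoError")
    return actual == expected

print(check_result("APAraft.tla", "NoError"))  # True
print(check_result("Buggy.tla", "Error"))      # True: the error was expected
print(check_result("Buggy.tla", "NoError"))    # False: we expected an error
```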