davidsantiago / perforate
Painless benchmarking with Leiningen 2
defcase doesn't appear to work with setup arguments as described in the docs.
e.g.
(defgoal simple "Simple." :setup (fn [] [1 2]))
(defcase simple :default [a b] (+ a b))
If I don't specify a :namespaces entry within an environment, I would like it to default to all benchmark namespaces (as done when there's no :environments at all).
Hi everybody,
I see this library was last updated a couple of years ago, and it uses outdated versions of Clojure and Criterium. Does anyone know the reason?
Cheers
I'm trying to performance test FUSE bindings to Clojure, and have a few questions about perforate:
I already have a with-mounted-fs macro, and would prefer to continue to use that; is there any way I can do so without including the mounting time inside the benchmark?
For example, I can write a plain criterium test (with qb being the quick-bench macro):
(with-mounted-fs (context (atom {"foo" (mmapped-file big-data-file)})) mountpoint
  (qb "reading 100MB through FUSE via mmapped file"
    (Files/readAllBytes (.toPath (File. mountpoint "foo")))))
Is it possible to have case-level setup/teardown fns?
Maybe both of these issues could be solved by introducing a symbol that specifies which subform you actually want benchmarked?
That way consumers could continue to use their own contextual macros (e.g. with-open).
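To make the proposal concrete, here is a hypothetical sketch. Nothing here exists in perforate today: the benchmark marker symbol, the goal name, and the FUSE helpers are all invented for illustration.

```clojure
;; Hypothetical sketch of the proposed marker-symbol syntax.
;; The idea: a defcase body could wrap any contextual macro, and a
;; marker form (here called `benchmark`) would tell perforate which
;; subform to actually time, excluding the setup code around it.
(defcase read-through-fuse :mmapped
  []
  (with-mounted-fs (context (atom {"foo" (mmapped-file big-data-file)})) mountpoint
    ;; Only the marked subform would be measured; mounting time is excluded.
    (benchmark
      (Files/readAllBytes (.toPath (File. mountpoint "foo"))))))
```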
I don't know the roadmap for perforate development, but I was wondering if you were interested in perforate doing some of these things. I'm interested in seeing the following exist as part of the testing ecosystem for Clojure and would try to help develop them.
Consider a map with a namespace, goal, then benchmark hierarchy, where each leaf is a map of the mean, std dev, and lower/upper quantiles. The first time perforate is run, it would create and write this map to something like .perforate-results. The next time the benchmarks are run, the results would be compared for each benchmark. If the results varied significantly, perforate would report which benchmarks regressed. The developer could then go back and fix the regression, or alternatively accept the new value as the baseline.
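As a sketch, the proposed .perforate-results file might hold EDN data shaped like the following. The exact keys and values are illustrative only; this is not an existing perforate format.

```clojure
;; Illustrative sketch of a possible .perforate-results file (EDN).
;; Hierarchy: namespace -> goal -> case -> summary statistics,
;; mirroring the mean / std dev / quantile numbers criterium reports.
{myproject.simple-bench
 {:simple-goal
  {:default {:mean           1.23e-6    ; seconds
             :std-dev        4.5e-8
             :lower-quantile 1.19e-6    ; 2.5%
             :upper-quantile 1.31e-6}}}} ; 97.5%
```

A flat EDN map like this would make the comparison step a straightforward walk over matching paths in the old and new maps.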
To fuel adoption and cut down on duplicated code, it would be useful if perforate could just find and use tests defined with deftest. This doesn't seem too complicated and would encourage developers to just add perforate as a dependency and get started.
If the two previous suggestions were developed, then the time it would take to benchmark even a small library like Titanium would be astronomical. It would be really awesome if we could throw money at speeding up testing and benchmarking. I've heard good things about using spot requests on cloud providers to farm out testing. Providing a minimum bid price, a maximum number of nodes, a node spec, and a configuration step should allow most developers to set up their libraries to use this. Additionally, standardizing the node specs would allow developers to easily compare performance results despite working on different machines.
I'm still pretty new to pallet and perforate so I'm unsure of how difficult all of the above would be to develop.
I'd like to benchmark my word counter (here: https://github.com/humanitiesNerd/stammtisch).
I'd like to pass the path to the file containing the words to be counted as an argument, and I can't figure out how to do that.
Can you help me?
Thanks
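One way this might work, based on perforate's documented :setup mechanism (the same defgoal/defcase pattern shown in the issue above): have :setup read the file, so its return values become the case's arguments. The path "words.txt" and the count-words function below are placeholders, not part of perforate.

```clojure
;; Sketch: feeding a file into a benchmark via :setup, assuming
;; perforate's documented behavior that the vector returned by :setup
;; is passed as arguments to each case.
(defgoal count-goal "Count the words in a file."
  ;; :setup runs before the cases, so reading the file is
  ;; not included in the measured time.
  :setup (fn [] [(slurp "words.txt")]))

(defcase count-goal :default
  [contents]                ; receives the file contents from :setup
  (count-words contents))   ; count-words is a placeholder for your counter
```

If you instead want the file I/O itself measured, pass the path from :setup and call slurp inside the case body.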