langtonhugh / asreview_irr
Code to automatically produce a report from ASReview on inter-rater reliability.
License: Apache License 2.0
Currently, there is no automated testing. We need to implement tests so that each time a change is made, automatic checks verify that the changes do not break anything. We need test files of different sizes (both small and large) so that all of the components, such as the waffle graphs, render correctly.
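The test setup described above could be sketched with `testthat` (the standard R testing framework). The helper below and its checks are illustrative only; the data frames stand in for real ASReview exports of different sizes:

```r
# Minimal testthat sketch. make_fake_export() is a hypothetical helper
# that fabricates an ASReview-like export of a given size.
library(testthat)

make_fake_export <- function(n) {
  data.frame(record_id = seq_len(n),
             included  = rbinom(n, 1, 0.5))
}

test_that("components handle both small and large inputs", {
  small <- make_fake_export(10)
  big   <- make_fake_export(6000)
  expect_equal(nrow(small), 10)
  expect_equal(nrow(big), 6000)
  expect_true(all(c("record_id", "included") %in% names(small)))
})
```

Hooking such tests into CI (e.g. GitHub Actions) would give the automatic check on every change that this issue asks for.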
Currently, the report will accept any input file. We need to add a limit on file size so the software doesn't break on very large inputs.
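A guard like the following could enforce such a limit before reading. The 100 MB threshold and the function name are placeholders, not decided values:

```r
# Sketch of a size guard before reading an input file.
# max_bytes is an illustrative threshold (100 MB), not a decided limit.
max_bytes <- 100 * 1024^2

read_asreview_csv <- function(path) {
  if (!file.exists(path)) stop("File not found: ", path)
  if (file.info(path)$size > max_bytes) {
    stop("Input file exceeds the ", max_bytes, "-byte limit.")
  }
  read.csv(path, stringsAsFactors = FALSE)
}
```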
Hey Sam! I'm the new colleague Rens hired to look at ways to calculate the IRR, and to see whether I can extend your package / do stuff with it.
I ran into some dependency problems; nothing I couldn't solve myself, but the setup could still be made a bit more user-friendly by using an R environment, which encapsulates the packages in an isolated folder. That way, people can just download and run the code without running into dependency trouble.
The report has now been updated and expanded to generate new information, including written explanations of the descriptive statistics. One major change remedies an error in the calculation of n relevant vs. unreviewed. If you ran the report prior to December 2023, please re-run it to check the impact.
For data containing more than roughly 5,000 abstracts, the waffle output is rendered rather useless. This could be fixed with ifelse statements that plot differently for different sample sizes.
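One possible shape for that branching, assuming the `waffle` package and a named vector of category counts; the 500-square cap is an illustrative choice, not a decided one:

```r
# For large samples, let each waffle square represent many records
# instead of one, so the grid stays readable.
squares_per_unit <- function(n_total, max_squares = 500) {
  if (n_total <= 5000) 1 else ceiling(n_total / max_squares)
}

plot_screening <- function(counts, n_total) {
  k <- squares_per_unit(n_total)
  if (k == 1) {
    waffle::waffle(counts)                     # one square per record
  } else {
    waffle::waffle(round(counts / k),
                   xlab = paste("1 square =", k, "records"))
  }
}
```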
Can you add a license to your work? As it currently is, no one can use your scripts, but I don't think this is intentional :-)
Currently, only the kappa statistic is calculated for the IRR. Implement different ways of calculating the IRR that are, for example, robust to missing data. Provide options for comparing the different approaches, along with statistics to compare them.
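For reference, here is a base-R sketch of Cohen's kappa that drops pairs with missing ratings; more robust alternatives such as Krippendorff's alpha, which tolerates missing data directly, are available in the `irr` package (`irr::kripp.alpha`) and could be reported alongside it for comparison:

```r
# Cohen's kappa for two raters' decisions, in base R.
# Pairs where either rating is NA are dropped before tabulation.
cohens_kappa <- function(r1, r2) {
  keep <- !is.na(r1) & !is.na(r2)
  tab  <- table(r1[keep], r2[keep])
  n    <- sum(tab)
  po   <- sum(diag(tab)) / n                       # observed agreement
  pe   <- sum(rowSums(tab) * colSums(tab)) / n^2   # chance agreement
  (po - pe) / (1 - pe)
}
```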
Add validation code to make sure that the two ASReview input files match. If the two files do not match, it should return an error (for now; later this can be extended so that small incongruities can still be handled).
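A strict version of that check might look like this; the `record_id` column name is an assumption taken from the ASReview export format and should be verified:

```r
# Sketch of a strict check that two ASReview exports describe the
# same set of records. Errors out on any mismatch (for now).
validate_match <- function(f1, f2) {
  if (nrow(f1) != nrow(f2)) {
    stop("Files differ in number of records: ", nrow(f1), " vs ", nrow(f2))
  }
  if (!identical(sort(f1$record_id), sort(f2$record_id))) {
    stop("record_id columns do not match between the two files.")
  }
  invisible(TRUE)
}
```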
Hi there,
I tried out your script, it works very well.
However, it turns out that the record_ids differ between me and my colleague.
Therefore, I cannot use the results.
Is there any automated way to update the record_ids so they match between the .csv files?
(e.g. by matching on title?)
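One possible automated approach, sketched here under the assumption that both files have a `title` column, is to re-align the `record_id`s by matching on a normalized title. The normalization below (lowercase, collapsed whitespace) is deliberately simple and may need refinement for real bibliographic data:

```r
# Normalize titles for matching: lowercase, trim, collapse whitespace.
norm_title <- function(x) tolower(trimws(gsub("\\s+", " ", x)))

# Overwrite `theirs`'s record_ids with the ids from `mine` whose
# normalized titles match; unmatched rows get NA.
align_ids <- function(mine, theirs) {
  idx <- match(norm_title(theirs$title), norm_title(mine$title))
  theirs$record_id <- mine$record_id[idx]
  theirs
}
```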
Hi,
First of all, thank you for the amazing tool! I think the column names have changed in ASReview since this tool was last updated. I personally use ASReview 1.2 on a Mac and tried to use your tool with datasets from simulation projects.
To make it work I had to change some lines of code in report_example.Rmd:
Currently, if the name of a column changes, the reports break. We need to handle the input files in such a way that this issue doesn't happen. I will do that by checking the ASReview codebase and discussing with Rens how to change the handling so that it functions without issues.
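One defensive pattern is to map whichever known column-name variant is present onto a canonical name. The variant lists below are purely illustrative and would need to be checked against the actual ASReview export formats:

```r
# Hypothetical mapping from canonical names to known variants.
canonical_cols <- list(
  included  = c("included", "label_included", "final_included"),
  record_id = c("record_id", "record id")
)

# Rename the first matching variant of each column to its canonical
# name; error out if no variant is present.
normalise_columns <- function(df) {
  for (canon in names(canonical_cols)) {
    hit <- intersect(canonical_cols[[canon]], names(df))
    if (length(hit) == 0) stop("No column found for '", canon, "'.")
    names(df)[names(df) == hit[1]] <- canon
  }
  df
}
```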
Currently, the report just prints the console output from the inter-rater reliability tests (e.g., kappa). It would be clearer and easier for users if this output were formatted into a basic table (similar to how broom works for regression outputs).
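A small tidying helper could do this without depending on broom. The field names below mirror what `irr::kappa2()` returns (an `irrlist` with `$irr.name`, `$value`, `$statistic`, `$p.value`, `$subjects`); they should be checked against the installed version of `irr`:

```r
# Collect the pieces of an IRR result into a one-row data frame that
# knitr::kable() can render, instead of printing console output.
tidy_irr <- function(res) {
  data.frame(
    statistic = res$irr.name,
    estimate  = res$value,
    z         = res$statistic,
    p.value   = res$p.value,
    n         = res$subjects
  )
}

# In the report, this would then be rendered as, e.g.:
# knitr::kable(tidy_irr(irr::kappa2(ratings)))
```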
Currently, the files given as input for the report are not validated to be identical. We need to implement a robust way to validate that the files match. There's one R script in one of the folders that we can work from.
Can you update your README with the information you provided in this discussion thread: asreview/asreview#975 (comment)? The information you provided there might be useful for users of your template. Thanks!
I added your extension to our new overview of extensions.