This repository contains a collection of experiments and analyses performed on the Modern Slavery Statements Dataset.
UN Sustainable Development Goal 8.7 states: "Take immediate and effective measures to eradicate forced labour, end modern slavery and human trafficking and secure the prohibition and elimination of the worst forms of child labour, including recruitment and use of child soldiers, and by 2025 end child labour in all its forms."
In 2018, the Global Slavery Index found that 40.3 million people were in modern slavery, of whom 25 million were in forced labour producing computers, clothing, agricultural products, raw materials, and other goods, and 15 million were in forced marriage.
The Future Society, an independent nonprofit think-and-do tank, launched a partnership with the Walk Free Initiative to automate the analysis of modern slavery statements produced by businesses, with the aim of boosting compliance and helping to combat and eradicate modern slavery. The team at The Future Society is curating an up-to-date repository of more than 16,000 modern slavery statements (and counting) to boost machine learning research in this area. The data is scraped from the collection of report links provided by modernslaveryregistry.org.
By sharing your analysis and contributing to this repository, you help the global community hold multi-national corporations accountable for how they treat their workforce and suppliers.
- Python 3.6+ installed on your system
- If you'd like to use the provided tutorials, you also need access to a Jupyter notebook
It's recommended that you use a virtual environment such as virtualenv, pipenv, or similar.
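For example, using the built-in `venv` module on a Unix-like shell (the `.venv` directory name is just a common convention):

```shell
# Create an isolated environment in the project directory
python3 -m venv .venv

# Activate it (on Windows use: .venv\Scripts\activate)
. .venv/bin/activate
```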
Copy this notebook and follow the instructions.
Install the package:

```shell
pip install modern-slavery-statements-research
```
Specify your AWS access credentials via the `-i` (AWS access key ID) and `-a` (AWS secret access key) arguments and run (without the curly brackets):

```shell
download-corpus -i {aws_access_key_id} -a {aws_secret_access_key}
```
The logs printed in the console will tell you the name of the data folder.
If you've set up your modern slavery project AWS CLI credentials as the default profile, you can simply run:

```shell
download-corpus
```
You can explore more options by running `download-corpus --help`.
The dataset includes the following columns:
| Column | Description |
| --- | --- |
| Company ID | Unique company identifier |
| Company | Company name |
| Is Publisher | Whether the company is a publisher |
| Statement ID | Unique statement identifier |
| URL | Original URL where the statement could be found |
| Override URL | Edited URL |
| Companies House Number | Company's registered number in companieshouse.gov.uk |
| Industry | Company's main area of activity |
| HQ | Country of the company's headquarters |
| Is Also Covered | |
| UK Modern Slavery Act | Whether the company is legislated by the UK Modern Slavery Act |
| California Transparency in Supply Chains Act | Whether the company is legislated by the California Transparency in Supply Chains Act |
| Australia Modern Slavery Act | Whether the company is legislated by the Australia Modern Slavery Act |
| Period Covered | Year that is being reported for |
| Text | Extracted statement text |
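Once downloaded, the corpus can be explored with standard tools such as pandas. The snippet below is a minimal sketch using a toy DataFrame with a few of the documented columns, since the real data requires IAM credentials; the company names and text are made up for illustration.

```python
import pandas as pd

# Toy rows mimicking part of the documented schema; the real corpus
# is obtained via the download-corpus command.
df = pd.DataFrame(
    {
        "Company ID": [1, 2],
        "Company": ["Acme Ltd", "Globex Pty"],
        "UK Modern Slavery Act": [True, False],
        "Australia Modern Slavery Act": [False, True],
        "Period Covered": [2019, 2019],
        "Text": ["We assessed our supply chain...", "Our policy covers..."],
    }
)

# Example question: how many statements fall under each act?
uk_count = int(df["UK Modern Slavery Act"].sum())
au_count = int(df["Australia Modern Slavery Act"].sum())
print(uk_count, au_count)  # prints: 1 1
```

The same pattern extends to the full column set, e.g. grouping by `Industry` or filtering by `Period Covered`.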
As the corpus is a work in progress, all feedback is welcome in the repository issues at present. If you'd like to work with this data, please send an email to [email protected] with a link to your social profile (LinkedIn, Facebook, or similar), and you will receive IAM user credentials allowing you to download and access the data as soon as possible.
If you'd like to get help with domain expertise or with technical requirements and implementations, get in touch with Adriana or Karyna, respectively.
Over the next few weeks and months, the following improvements are planned for the dataset and the repository:
- Provide a convenient one-command entry point to the data.
- Improve the dataset quality by continuously including more documents and improving the data cleaning pipeline.
- Provide examples of analysis.
- Provide manually annotated labels for a subset of the corpus to enable analyses using supervised methods.
- Open source the data and research for public access.
If you intend to share any form of public research or analysis based on the data from this repository and the `modern-slavery-dataset` bucket in AWS S3, please include the following citation in your publication:
The Future Society. (2020) Modern Slavery Statements Research. Retrieved from https://github.com/the-future-society/modern-slavery-statements-research.
If you'd like to contribute to the research then take a look at any of the issues or get in touch with Adriana or Karyna.
Take a look at Colab notebooks based on the modern slavery corpus:
- Rey Farhan's initial exploration Modern Slavery Statements NLP (rey farhan) v1.0.ipynb.
- Parth Shah's exploration of knowledge graphs based on subject-object syntactic relations
- Darin Plutchok's Clustering Analysis of Modern Slavery data
- Goutham Venkatesh's Clustering Analysis of Modern Slavery data
- Rey Farhan's Modern Slavery Statements NLP - Word2Vec w/ Bigrams (rey farhan) v1.2.ipynb
- Daniel Hilgart's Exploratory Data Analysis of the hackathon dataset