Work-set Clustering

A Python script to perform a clustering based on descriptive keys. It can be used to identify work clusters for manifestations according to the FRBR (IFLA-LRM) model.

This tool only performs the clustering. It needs a list of manifestation identifiers and their descriptive keys as input. If already computed cluster identifiers and descriptive keys from a previous run are provided, they can be reused.

Usage via the command line

Create and activate a Python virtual environment

# Create a new Python virtual environment
python3 -m venv py-clustering-env

# Activate the virtual environment
source py-clustering-env/bin/activate

# There are no dependencies to install

# Install the tool
pip install .

Available options:

usage: clustering.py [-h] -i INPUT_FILE -o OUTPUT_FILE --id-column ID_COLUMN --key-column KEY_COLUMN [--delimiter DELIMITER] [--existing-clusters EXISTING_CLUSTERS]
                     [--existing-clusters-keys EXISTING_CLUSTERS_KEYS]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_FILE, --input-file INPUT_FILE
                        The CSV file(s) with columns for elements and descriptive keys, one row is one element and descriptive key relationship
  -o OUTPUT_FILE, --output-file OUTPUT_FILE
                        The name of the output CSV file containing two columns: elementID and clusterID
  --id-column ID_COLUMN
                        The name of the column with element identifiers
  --key-column KEY_COLUMN
                        The name of the column that contains a descriptive key
  --delimiter DELIMITER
                        Optional delimiter of the input/output CSV, default is ','
  --existing-clusters EXISTING_CLUSTERS
                        Optional file with existing element-cluster mapping
  --existing-clusters-keys EXISTING_CLUSTERS_KEYS
                        Optional file with element-descriptive key mapping for existing clusters mapping

Clustering from scratch

Given a CSV file where each row contains the relationship between one manifestation identifier and one descriptive key, the tool can be invoked as follows to create cluster assignments.

python -m work_set_clustering.clustering \
  --input-file "descriptive-keys.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey"

Example CSV which should result in two clusters: one for book1 and book2 (due to a shared key) and one for book3:

elementID,descriptiveKey
book1,theTitle/author1
book1,isbnOfTheBook/author1
book2,isbnOfTheBook/author1
book3,otherBookTitle/author1
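The grouping in the example above can be sketched in plain Python. This is a hypothetical illustration of the key-based clustering idea (elements sharing at least one descriptive key end up in the same cluster), not the tool's actual code:

```python
# Illustrative sketch: union-find over elements, linked via shared keys.
from collections import defaultdict

def cluster(pairs):
    """pairs: iterable of (elementID, descriptiveKey) tuples."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    key_owner = {}
    for element, key in pairs:
        find(element)  # register the element even if its key is unique
        if key in key_owner:
            union(element, key_owner[key])  # shared key: same cluster
        else:
            key_owner[key] = element

    clusters = defaultdict(set)
    for element in parent:
        clusters[find(element)].add(element)
    return list(clusters.values())

pairs = [
    ("book1", "theTitle/author1"),
    ("book1", "isbnOfTheBook/author1"),
    ("book2", "isbnOfTheBook/author1"),
    ("book3", "otherBookTitle/author1"),
]
print(sorted(sorted(c) for c in cluster(pairs)))
# → [['book1', 'book2'], ['book3']]
```

book1 and book2 are merged because they share the key `isbnOfTheBook/author1`, while book3 stays in its own cluster.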

The script can also read descriptive keys that are distributed across several files: simply repeat the --input-file parameter. Please note that all of these input files must use the column names specified with --id-column and --key-column.

You can find more examples of cluster input in the test/resources directory.

Reuse existing clusters

You can reuse clusters created in an earlier run by providing the mapping between the previous elements and their clusters, and optionally their descriptive keys.

python -m work_set_clustering.clustering \
  --input-file "descriptive-keys.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey" \
  --existing-clusters "existing-clusters.csv" \
  --existing-clusters-keys "initial-descriptive-keys.csv"

Please note that the two parameters --existing-clusters and --existing-clusters-keys provide the data from a previous run.

Similar to the initial clustering, you can provide several input files.

Note

When existing descriptive keys are skipped, existing cluster identifiers and assignments are kept, even if their elements have overlapping descriptive keys. Additionally, none of the new elements can be mapped to the existing clusters, because no descriptive keys are provided (more info in #9).
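As a hypothetical illustration of this note (names and data invented): without the existing descriptive keys, no key-to-cluster lookup can be built, so a new element that shares a key with an old cluster still receives a fresh cluster identifier.

```python
# Previous run: book1 was assigned to cluster-A.
existing_clusters = {"book1": "cluster-A"}

# New input: book2 carries a key that book1 also had.
new_pairs = [("book2", "isbnOfTheBook/author1")]

# The existing descriptive keys were NOT provided, so the lookup is empty.
key_to_cluster = {}

assignments = dict(existing_clusters)
for element, key in new_pairs:
    # No match possible: book2 starts its own cluster.
    assignments[element] = key_to_cluster.get(key, f"new-cluster-for-{element}")

print(assignments)
# → {'book1': 'cluster-A', 'book2': 'new-cluster-for-book2'}
```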

Usage as a library

The tool can also be used as a library within another Python script or a Jupyter notebook.

from work_set_clustering.clustering import clusterFromScratch as clustering

clustering(
  inputFilename=["descriptive-keys.csv"],
  outputFilename="cluster-assignments.csv",
  idColumnName="elementID",
  keyColumnName="descriptiveKey",
  delimiter=',')

Or if you want to reuse existing clusters:

from work_set_clustering.clustering import updateClusters as clustering

clustering(
  inputFilename=["descriptive-keys.csv"],
  outputFilename="cluster-assignments.csv",
  idColumnName="elementID",
  keyColumnName="descriptiveKey",
  delimiter=',',
  existingClustersFilename="existing-clusters.csv",
  existingClusterKeysFilename="initial-descriptive-keys.csv")

Software Tests

  • You can execute the unit tests of the lib.py file with the following command: python -m work_set_clustering.lib.
  • You can execute the integration tests with the following command: python -m unittest discover -s test

Contact

Sven Lieber - [email protected] - Royal Library of Belgium (KBR) - https://www.kbr.be/en/

Issues

Make existing descriptive keys optional to support the reuse of multiple clusters with overlapping keys

When reusing existing clusters (#1) we also reuse the underlying descriptive keys of those clusters. This ensures that new data can be merged into existing clusters when a descriptive key matches.

This has the side effect that if elements of two existing clusters share a descriptive key, the two clusters are merged into a new cluster.
This logic follows the idea of the algorithm, but was unexpected for use cases where, despite overlapping keys, two elements were assigned to different clusters by a human.

We have this use case in the BELTRANS project, where we use the clustering algorithm for manifestations (kbrbe/beltrans-data-integration#193) and for finding correlations between manifestations across sources and between contributors across data sources (kbrbe/beltrans-data-integration#234). In BELTRANS, the "existing clusters" are based on human judgment and are needed because some of the data sources are not under our control; hence we cannot correct or adjust existing data to avoid the wrongly overlapping keys.

A possible solution is to make the parameter of existing descriptive keys optional. Then the user of the clustering script can choose whether to take existing descriptive keys into account or not.

Handle multiple input files

Currently one list of descriptive keys can be given via the argument -i / --input.
For data integration of the MetaBelgica project we want to cluster with descriptive key files from multiple data sources.

We could either allow multiple values for the --input parameter or we could provide the list of input files via positional arguments.

Improve documentation

Currently the description of the needed input is quite vague; we could add an example CSV with a few lines to the README.

Reuse existing clusters when possible instead of clustering from scratch

Based on descriptive keys, internal data structures are created and elements with overlapping descriptive keys are grouped together. Every time the clustering is performed on certain input data, new cluster identifiers are generated because UUIDs are used.

Currently there is no way to assign possibly new elements to already existing clusters and thus reuse existing cluster identifiers. The script should be equipped with new command line options to read existing clusters and their descriptive keys.

Technically we only have to prefill the internal data structures related to clusters and the rest of the clustering can stay the same.
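A simplified sketch of that prefilling idea (the function and variable names are illustrative, not the script's real internals; merging of new elements among themselves is omitted for brevity):

```python
import uuid

def update_clusters(new_pairs, existing_assignments, existing_keys):
    """
    new_pairs: (elementID, descriptiveKey) tuples to cluster
    existing_assignments: {elementID: clusterID} from a previous run
    existing_keys: (elementID, descriptiveKey) tuples of that previous run
    """
    # Prefill: build a key -> clusterID lookup from the previous run.
    key_to_cluster = {}
    for element, key in existing_keys:
        if element in existing_assignments:
            key_to_cluster[key] = existing_assignments[element]

    # New elements with a matching key join the existing cluster;
    # everything else gets a fresh UUID-based cluster identifier.
    assignments = dict(existing_assignments)
    for element, key in new_pairs:
        if key in key_to_cluster:
            assignments[element] = key_to_cluster[key]
        elif element not in assignments:
            assignments[element] = str(uuid.uuid4())
    return assignments

existing = {"book1": "cluster-A"}
existing_key_rows = [("book1", "isbnOfTheBook/author1")]
new_rows = [("book2", "isbnOfTheBook/author1"),
            ("book3", "otherBookTitle/author1")]
result = update_clusters(new_rows, existing, existing_key_rows)
# book2 joins cluster-A via the shared key; book3 gets a fresh UUID
```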

This feature is needed for the BELTRANS project. More concretely for kbrbe/beltrans-data-integration#204 and possibly for kbrbe/beltrans-data-integration#234
