stjude-biohackathon / kids23-team3

SCCRIP (Sickle Cell Clinical Research and Intervention Program), managed by St. Jude Clinical Hematology, established a longitudinal, multi-site cohort of patients with Sickle Cell Disease (SCD) in 2014. A new SCCRIP collaborator holds longitudinal data for 600 SCD patients in OMOP CDM format, and this effort is to convert that OMOP CDM data into the SCCRIP format.

Home Page: https://www.stjude.org/patient-referrals/seek-treatment/taking-part-in-clinical-research/sickle-cell-clinical-research-intervention-program-sccrip.html

License: MIT License

R 100.00%
hematology omop research sickle-cell

kids23-team3's Introduction

KIDS23-Team3

kids23-team3's People

Contributors: j-andrews7, lindsayevanslee

kids23-team3's Issues

OHDSI Tool - USAGI

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below

USAGI is a tool to aid the manual process of creating a code mapping. It can make suggested mappings based on the textual similarity of code descriptions. Usagi allows the user to search for the appropriate target concepts if the automated suggestion is not correct. Finally, the user can indicate which mappings are approved to be used in the ETL. Source codes that need mapping are loaded into Usagi (if the codes are not in English additional translations columns are needed). A term similarity approach is used to connect source codes to vocabulary concepts. However, these code connections need to be manually reviewed and Usagi provides an interface to facilitate that. Usagi will only propose concepts that are marked as standard concepts in the vocabulary.
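A minimal sketch of a Usagi input file may help here, assuming the common layout of a source code, a description, and a frequency column (all codes and values below are illustrative; Usagi lets you assign columns in its import dialog):

```
source_code,source_name,frequency
SCD01,sickle cell anemia with crisis,154
SCD02,acute chest syndrome,37
SCD03,vaso-occlusive pain episode,212
```

The term-similarity matching runs on the description column, and the frequency column helps prioritize manual review of the highest-impact source codes.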

USAGI LINKS
Documentation: Book of OHDSI
Installation Information: Click Here
Source Code: GitHub
“10-Minute Tutorial” Video: Click Here

OHDSI Tool - HADES (optional)

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below

HADES (previously the OHDSI METHODS LIBRARY) is a collection of open-source R packages that offer functions which can be used together to perform a complete observational study, starting from data in the CDM, and resulting in estimates and supporting statistics, figures, and tables. The packages interact directly with observational data in the CDM, and can be used simply to provide cross-platform compatibility to completely custom analyses, or can provide advanced standardized analytics for population characterization, population-level effect estimation, and patient-level prediction. HADES supports best practices for use of observational data and observational study design as learned from previous and ongoing research, such as transparency, reproducibility, as well as measuring the operating characteristics of methods in a particular context and subsequent empirical calibration of estimates produced by the methods.

HADES LINKS
Documentation: Book of OHDSI • HADES Web Site
Installation Information: Click Here
Source Code: GitHub

OHDSI Tool - ACHILLES

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below

ACHILLES is a software tool that provides for characterization and visualization of a database conforming to the CDM. It can also be a critical resource to evaluate the composition of CDM databases in a network. ACHILLES is an R package, and produces reports based on the summary data it generates in the “Data Sources” function of ATLAS.
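Since ACHILLES is an R package, a characterization run is only a few lines. A sketch, assuming a DatabaseConnector connectionDetails object and schema names for your environment (all values illustrative; check the Achilles package documentation for the current argument list):

```r
library(Achilles)

# connectionDetails created earlier with DatabaseConnector::createConnectionDetails()
achilles(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "cdm",         # schema holding the OMOP CDM tables
  resultsDatabaseSchema = "results", # schema where summary tables are written
  sourceName = "SCCRIP_OMOP",        # label shown in ATLAS "Data Sources"
  cdmVersion = "5.4"
)
```

The summary tables it writes are what the "Data Sources" function of ATLAS reads to produce its reports.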

ACHILLES LINKS
Documentation: Book of OHDSI – ACHILLES tool and ACHILLES in practice
Demo: Click Here
Installation Information: Click Here
Source Code: GitHub
“10-Minute Tutorial” Video: Click Here

OHDSI Tool - ATHENA

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below

ATHENA allows you to both search and load standardized vocabularies. It is a resource to be used, not a software tool to install. To download a zip file with all Standardized Vocabularies tables select all the vocabularies you need for your OMOP CDM. Vocabularies with standard concepts and very common usage are preselected. Add vocabularies that are used in your source data. Vocabularies that are proprietary have no select button. Click on the “License required” button to incorporate such a vocabulary into your list. The Vocabulary Team will contact you and request you demonstrate your license or help you connect to the right folks to obtain one.

ATHENA LINKS
Vocabularies: Click Here
Source Code: GitHub
“10-Minute Tutorial” Video: Click Here

Generate Sickle Cell Disease Patients from Synthea

Synthea supports the ability to filter the patient records that are exported, using a module built with the Generic Module Framework.

Generic Module Framework: States
"Simple" State in Synthea - https://github.com/synthetichealth/synthea/wiki/Generic-Module-Framework%3A-States#simple

Generic Module Framework: Basics
https://github.com/synthetichealth/synthea/wiki/Generic-Module-Framework%3A-Basics#snomed-codes
SNOMED ICD CODES FOR SCD.xlsx
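A minimal Generic Module Framework sketch for generating SCD patients: an Initial state transitions to a ConditionOnset state carrying a SNOMED code. The structure follows the GMF wiki pages linked above; the SNOMED code and display text below are illustrative, so substitute the codes from SNOMED ICD CODES FOR SCD.xlsx:

```json
{
  "name": "Sickle Cell Disease (sketch)",
  "states": {
    "Initial": {
      "type": "Initial",
      "direct_transition": "SCD_Onset"
    },
    "SCD_Onset": {
      "type": "ConditionOnset",
      "codes": [
        { "system": "SNOMED-CT", "code": "417357006", "display": "Sickle cell-hemoglobin SS disease" }
      ],
      "direct_transition": "Terminal"
    },
    "Terminal": {
      "type": "Terminal"
    }
  }
}
```

With a module like this in place, Synthea's export filtering can restrict the exported patient records to those carrying the condition.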

Related References:

https://github.com/synthetichealth/synthea

https://ohdsi.github.io/ETL-Synthea/

Welcome, Contact Information, Time Zones, Availability, Skill set

Please edit below and update. I have filled in the info I have.

Member Name, Contact Phone, Email, GitHub User name, Time Zones (IST/CST), Availability / Skill sets / Fun Facts

Ragha Srinivasan - 803-719-2600 (USA), [email protected], CST, available May 3 to 5 at Memphis, full time.

Ashish Pagare

Kasi Vegesana - 571-439-6299, [email protected], CST, May3-May5, kvegesan-stjude

Aastha Naik

Pratik Mishra

Shalini Gupta

Lokesh Chinthala

Lindsay Lee - 865-601-4755, [email protected], CST, available May 4-5 in the afternoon

Vocabulary from ATHENA for Synthea data converted into OMOP CDM

https://ohdsi.org/software-tools/

ATHENA allows you to both search and load standardized vocabularies; the full description and links appear in the "OHDSI Tool - ATHENA" section above.

devtools::install_github("OHDSI/CommonDataModel")

https://github.com/OHDSI/CommonDataModel/?tab=readme-ov-file#first-install-the-package-from-github

The install produced the following error:

Create DDL, Foreign Keys, Primary Keys, and Indexes from R
First, install the package from GitHub
install.packages("devtools")
devtools::install_github("OHDSI/CommonDataModel")

https://github.com/OHDSI/CommonDataModel/?tab=readme-ov-file#first-install-the-package-from-github

devtools::install_github("OHDSI/CommonDataModel")
Using github PAT from envvar GITHUB_PAT
Downloading GitHub repo OHDSI/CommonDataModel@HEAD
-- R CMD build ------------------------------------------------------------------------------
v checking for file 'C:\Users\rsrini86\AppData\Local\Temp\2\Rtmp0QaJGi\remotes38840f1542\OHDSI-CommonDataModel-43f6573/DESCRIPTION' (392ms)

  • preparing 'CommonDataModel': (1.2s)
    v checking DESCRIPTION meta-information ...
  • checking for LF line-endings in source and make files and shell scripts
  • checking for empty or unneeded directories
    Omitted 'LazyData' from DESCRIPTION
  • building 'CommonDataModel_0.2.0.tar.gz'

Installing package into ‘C:/Users/rsrini86/Documents/renv/library/R-4.2/x86_64-w64-mingw32’
(as ‘lib’ is unspecified)
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
Execution halted
Warning messages:
1: In untar2(tarfile, files, list, exdir, restore_times) :
skipping pax global extended headers
2: In untar2(tarfile, files, list, exdir, restore_times) :
skipping pax global extended headers
3: In i.p(...) :
installation of package ‘C:/Users/rsrini86/AppData/Local/Temp/2/Rtmp0QaJGi/file3887a6338f0/CommonDataModel_0.2.0.tar.gz’ had non-zero exit status

SOLUTION SUGGESTION - https://stackoverflow.com/questions/15170399/change-r-default-library-path-using-libpaths-in-rprofile-site-fails-to-work

https://www.accelebrate.com/library/how-to-articles/r-rstudio-library
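The suggestion in those links is to point R at a writable local library before installing packages. A sketch of what that looks like in a .Rprofile, under the assumption that the install failure was library-path related (the path below is illustrative; pick a local folder you control):

```r
# .Rprofile - prepend a writable local library so install.packages()
# and devtools::install_github() do not hit permission issues
local_lib <- "C:/R/library"  # illustrative path; adjust to your machine
dir.create(local_lib, recursive = TRUE, showWarnings = FALSE)
.libPaths(c(local_lib, .libPaths()))
```

After restarting R, .libPaths() should list the local folder first, and subsequent installs land there.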

Introduction to OHDSI & OMOP CDM ‘10-minute tutorials’

Build capacity and technical know-how among the SCCRIP data management team and investigators in adopting the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), an open community data standard designed to standardize the structure and content of observational data and to enable efficient analyses that can produce reliable evidence.

  • OHDSI (Observational Health Data Sciences and Informatics) is a global, multi-stakeholder, interdisciplinary collaborative with a shared mission to improve health by empowering a community to collaboratively generate real-world evidence that promotes better health decisions and better care. Browse through https://ohdsi.org/ and check out the video https://youtu.be/aSLLfbGhnGE

Dr. George Hripcsak shared the attached presentation slides (April 2021; you may find much wider and stronger adoption now) on Observational Health Data Sciences and Informatics, Interoperability, and Research: https://www.ohdsi.org/wp-content/uploads/2021/04/OHDSI-ONC-Hripcsak-2021.pdf

The Book of OHDSI
https://ohdsi.github.io/TheBookOfOhdsi/

The Observational Health Data Sciences and Informatics (or OHDSI, pronounced "Odyssey") program is a multi-stakeholder, interdisciplinary collaborative to bring out the value of health data through large-scale analytics. All our solutions are open-source.
https://ohdsi.org/

Standardized Data: The OMOP Common Data Model

The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) is an open community data standard, designed to standardize the structure and content of observational data and to enable efficient analyses that can produce reliable evidence. A central component of the OMOP CDM is the OHDSI standardized vocabularies. The OHDSI vocabularies allow organization and standardization of medical terms to be used across the various clinical domains of the OMOP common data model and enable standardized analytics that leverage the knowledge base when constructing exposure and outcome phenotypes and other features within characterization, population-level effect estimation, and patient-level prediction studies.
Read more about the OMOP Common Data Model
Read more about OHDSI's standardized vocabularies
GitHub: https://github.com/OHDSI

Self-Paced Free courses:

OHDSI partnered with the EHDEN Consortium to develop the EHDEN Academy, a set of free, on-demand training and development courses. These are open to anybody, but new OHDSI collaborators are always encouraged to use this resource to learn about best practices toward the mission of improving health by empowering a community to collaboratively generate evidence that promotes better health decisions and better care.

• Our OHDSI News & Updates page keeps you informed of recent publications, upcoming studies and more, while also profiling collaborators and providing any other updates about our global efforts.

Infrastructure set up - PostgreSQL, OHDSI Tools on Azure VM

Install PostgreSQL, JDK, Tomcat, WebAPI, OHDSI Tools -

OHDSI offers a wide range of open-source tools to support various data-analytics use cases on observational patient-level data. What these tools have in common is that they can all interact with one or more databases using the Common Data Model (CDM). For more info, check out https://ohdsi.org/software-tools/

https://github.com/OHDSI/Tutorial-ETL

Any issues - please review prior solutions at http://forums.ohdsi.org/ and https://forums.ohdsi.org/c/implementers

OHDSI Tools - WHITERABBIT and RABBIT-IN-A-HAT

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below

WHITERABBIT and RABBIT-IN-A-HAT are software tools to help prepare for ETLs of longitudinal healthcare databases into the OMOP CDM. WhiteRabbit scans your data and creates a report containing all the information necessary to begin designing the ETL. WhiteRabbit’s main function is to perform a scan of the source data, providing detailed information on the tables, fields, and values that appear in a field. The source data can be in comma-separated text files, or in a database (MySQL, SQL Server, Oracle, PostgreSQL, Microsoft APS, Microsoft Access, Amazon RedShift). The scan will generate a report that can be used as a reference when designing the ETL, for instance by using it in conjunction with the Rabbit-In-a-Hat tool. WhiteRabbit differs from standard data profiling tools in that it attempts to prevent the display of personally identifiable information (PII) data values in the generated output data file.

The Rabbit-in-a-Hat tools that come with the White Rabbit software are specifically designed to support a team of experts in these areas. In a typical setting, the ETL design team sits together in a room, while Rabbit-in-a-Hat is projected on a screen. In a first round, the table-to-table mappings can be collaboratively decided, after which field-to-field mappings can be designed while defining the logic by which values will be transformed.

Rabbit-In-a-Hat is designed to read and display a White Rabbit scan document. White Rabbit generates information about the source data while Rabbit-In-a-Hat uses that information and through a graphical user interface to allow a user to connect source data to tables and columns within the CDM. Rabbit-In-a-Hat generates documentation for the ETL process, it does not generate code to create an ETL.

WHITERABBIT and RABBIT-IN-A-HAT LINKS
Documentation: Book of OHDSI – WhiteRabbit and Rabbit-In-A-Hat • Web Sites – WhiteRabbit and Rabbit-In-A-Hat
Installation Information: WhiteRabbit and Rabbit-In-A-Hat
Source Code: GitHub

References - Standardized Data: The OMOP Common Data Model

The Book of OHDSI: https://ohdsi.github.io/TheBookOfOhdsi/
OHDSI: https://ohdsi.org/
GitHub: https://github.com/OHDSI

These resources are described in full in the earlier "Introduction to OHDSI & OMOP CDM" section.

Import OHDSI Standardized Vocabularies

Please download and load the Standardized Vocabularies as following:

  1. Click on this LINK from your email to download the zip file (do not share the link, per OHDSI). Typical file sizes, depending on the number of vocabularies selected, are between 30 and 250 MB.
  2. Unpack.
  3. Reconstitute CPT-4. See below for details.
  4. If needed, create the tables.
  5. Load the unpacked files into the tables.
    Important: All vocabularies are fully represented in the downloaded files with the exception of CPT-4: OHDSI does not have a distribution license to ship CPT-4 codes together with the descriptions. Therefore, they provide a utility that downloads the descriptions separately and merges them with everything else. After unpacking, open a command line in the directory you unpacked the files into and run "java -Dumls-apikey=xxx -jar cpt4.jar 5", replacing "xxx" with your UMLS API key.
    Scripts for importing the vocabulary CSV files into your OMOP CDM vocabulary tables can be found here. They are provided in folders for the supported SQL dialects, e.g. Oracle/, PostgreSQL/ and SQL Server/. The loading scripts are inside the VocabImport/ subfolder.
    If you hit problems, please use the OHDSI Forum pages, and somebody will help you.
    Please download the latest version from ATHENA and load it into your local database. ATHENA also allows faceted search of the Vocabularies.
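The loading step can be sketched in psql once the vocabulary tables exist. The ATHENA files are tab-delimited; the QUOTE E'\b' trick effectively disables quote handling so embedded quote characters load cleanly (table and file names below are the standard OMOP set, but verify against the scripts in VocabImport/):

```sql
-- run from psql, connected to the CDM database, in the unpacked ATHENA folder
\copy concept FROM 'CONCEPT.csv' WITH DELIMITER E'\t' CSV HEADER QUOTE E'\b'
\copy vocabulary FROM 'VOCABULARY.csv' WITH DELIMITER E'\t' CSV HEADER QUOTE E'\b'
\copy concept_relationship FROM 'CONCEPT_RELATIONSHIP.csv' WITH DELIMITER E'\t' CSV HEADER QUOTE E'\b'
\copy concept_ancestor FROM 'CONCEPT_ANCESTOR.csv' WITH DELIMITER E'\t' CSV HEADER QUOTE E'\b'
```

Loading before adding constraints and indexes (as the CDM DDL instructions elsewhere in this document recommend) keeps these bulk loads fast.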

To download a zip file with all Standardized Vocabularies tables select all the vocabularies you need for your OMOP CDM. Vocabularies with Standard Concepts (see Section 5.2.6) and very common usage are preselected. Add vocabularies that are used in your source data. Vocabularies that are proprietary have no select button. Click on the “License required” button to incorporate such a vocabulary into your list. The Vocabulary Team will contact you and request you demonstrate your license or help you connect to the right folks to obtain one.

http://athena.ohdsi.org/

http://13.66.39.25/hades/auth-sign-in

RStudio Sign In

uname: ohdsi password: mypass

OHDSI Tool - ATLAS

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below
ATLAS is a free, publicly available, web-based tool developed by the OHDSI community that facilitates the design and execution of analyses on standardized, patient-level, observational data in the CDM format. ATLAS is deployed as a web application in combination with the OHDSI WebAPI. Performing real-time analyses requires access to the patient-level data in the CDM and is therefore typically installed behind an organization’s firewall. However, there is a public ATLAS, and although this ATLAS instance only has access to a few small simulated datasets, it can still be used for many purposes including testing and training. It is even possible to fully define an effect estimation or prediction study using the public instance of ATLAS, and automatically generate the R code for executing the study. That code can then be run in any environment with an available CDM without needing to install ATLAS and the WebAPI.

ATLAS LINKS
Documentation: Book of OHDSI • ATLAS Wiki (includes YouTube tutorials)
Demo: Click Here
Installation Information: Click Here
Source Code: GitHub
“10-Minute Tutorial” Video on Creating Cohort Definitions in ATLAS: Click Here

DatabaseConnector - PostgreSQL JDBC Driver

DatabaseConnector - This R package provides functions for connecting to various DBMSs. Together with the SqlRender package, the main goal of DatabaseConnector is to provide a uniform interface across database platforms: the same code should run and produce equivalent results, regardless of the database back end.

https://github.com/ohdsi/DatabaseConnector?tab=readme-ov-file

STEP 4 (Optional): To use Windows Authentication for SQL Server, download the authentication DLL file as described at https://ohdsi.github.io/DatabaseConnector/articles/Connecting.html#obtaining-drivers

library(DatabaseConnector)
downloadJdbcDrivers("postgresql")

conn <- connect(
  dbms = "postgresql",
  connectionString = "jdbc:postgresql://localhost:5432/postgres",
  user = "ohdsi_admin_user",
  password = "123456X"
)

PostgreSQL JDBC Driver https://github.com/pgjdbc/pgjdbc

The query below originally used "SELECT TOP 3", which is SQL Server syntax and fails on PostgreSQL; PostgreSQL uses LIMIT:

querySql(conn, "SELECT * FROM person LIMIT 3")

Alternatively, renderTranslateQuerySql() accepts OHDSI SQL (which uses TOP) and translates it to the connected dialect:

renderTranslateQuerySql(conn, "SELECT TOP 3 * FROM person")

https://ohdsi.github.io/DatabaseConnector/reference/connect.html#windows-authentication-for-sql-server-1

User Documentation
Documentation can be found on the package website.

PDF versions of the documentation are also available:

Vignette: Connecting to a database
Vignette: Querying a database
Vignette: Using DatabaseConnector through DBI and dbplyr
Package manual: DatabaseConnector manual
Support
Developer questions/comments/feedback: OHDSI Forum
We use the GitHub issue tracker for all bugs/issues/enhancements

Install HADES (OHDSI Methods Library)

HADES (formerly known as the OHDSI Methods Library) is a set of open-source R packages for large-scale analytics, including population characterization, population-level causal effect estimation, and patient-level prediction.

The packages offer R functions that together can be used to perform an observational study from data to estimates and supporting statistics, figures, and tables. The packages interact directly with observational data in the Common Data Model (CDM), and are designed to support both large datasets and large numbers of analyses (e.g. for testing many hypotheses, including control hypotheses, and testing many analysis design variations). For this purpose, each Methods package includes functions for specifying and subsequently executing multiple analyses efficiently. HADES supports best practices for use of observational data as learned from previous and ongoing research, such as transparency and reproducibility, as well as measuring the operating characteristics of methods in a particular context and subsequent empirical calibration of the estimates produced by the methods. For more information about HADES' design considerations, please refer to the HADES paper.

HADES has already been used in many published clinical and methodological studies, as can be seen in the Publications section.

Installation

https://ohdsi.github.io/Hades/
https://ohdsi.github.io/Hades/rSetup.html
Learn how to use HADES to produce reliable evidence from real-world data with The Book of OHDSI. Read it online.

** Important ** https://ohdsi.github.io/Hades/installingHades.html

HADES-wide releases
At the end of quarter 1 and 3 of each year a HADES-wide release is created.

https://forums.ohdsi.org/t/picking-a-target-r-version-for-hades/18989

Not resolved as of 3/5/2024 (initial install), but may not be a showstopper:

renv 1.0.5 was loaded from project library, but this project is configured to use renv 1.0.3.

  • Use renv::record("renv@1.0.5") to record renv 1.0.5 in the lockfile.
  • Use renv::restore(packages = "renv") to install renv 1.0.3 into the project library.
  • Project '~' loaded. [renv 1.0.5]
  • One or more packages recorded in the lockfile are not installed.
  • Use renv::status() for more details.
    [Workspace loaded from ~/.RData]

VOCAB from ATHENA into OMOP CDM PostgreSQL

https://github.com/OHDSI/Vocabulary-v5.0/wiki/General-Structure,-Download-and-Use

Alternatively, the ETL-Synthea R package contains a function LoadVocabFromCsv which can be used for the same purpose.

March 12, 2024 - Vocabulary downloaded from ATHENA and the ZIP extracted to C:\ATHENA_OHDSI_VOCAB, later renamed to C:\CDMV5VOCAB to match CommonDataModel/PostgreSQL/VocabImport/OMOP CDM vocabulary load - PostgreSQL.sql

Once you choose Vocabularies to download from ATHENA, you will get an email. Here is a glimpse of the instructions:

Please download and load the Standardized Vocabularies as following:

Click on <>> to download the zip file. Typical file sizes, depending on the number of vocabularies selected, are between 30 and 250 MB.
Unpack.
Reconstitute CPT-4. See below for details.
If needed, create the tables.
Load the unpacked files into the tables.

Follow the special steps for the CPT-4 vocabulary from the email -- getting a UMLS account set up takes 1-2 days for approval initially, but the API key is easy to get once you have a UMLS account.

Scripts for importing the vocabulary csv files into your OMOP CDM vocabulary tables can be found <>. They are provided in the respective folders, e.g. Oracle/, PostgreSQL/ and SQL Server/ for supported SQL dialects. The loading scripts are inside the subfolder VocabImport/.

I had already created the OMOP CDM v5.3 table structures. However, if you need to, here is a glimpse of the how-to:

https://github.com/OHDSI/CommonDataModel/tree/v5.3.1/PostgreSQL#common-data-model--postgresql

This folder contains the SQL scripts for PostgreSQL.

In order to create your instantiation of the Common Data Model, we recommend following these steps:

Create an empty schema.

Execute the script OMOP CDM postgresql ddl.txt to create the tables and fields.

Load your data into the schema.

Execute the script OMOP CDM postgresql indexes required.txt to add the minimum set of indexes and primary keys we recommend.

Execute the script OMOP CDM postgresql constraints.txt to add the constraints (foreign keys).

Note: you could also apply the constraints and/or the indexes before loading the data, but this will slow down the insertion of the data considerably.
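The CommonDataModel R package can also script the DDL step. A sketch, assuming the package installed earlier in this document and a local PostgreSQL instance (argument names follow the package README; verify against your installed version, and note the connection values are illustrative):

```r
library(DatabaseConnector)
library(CommonDataModel)

connectionDetails <- createConnectionDetails(
  dbms = "postgresql",
  server = "localhost/postgres",  # host/database
  user = "ohdsi_admin_user",
  password = "..."                # your password
)

# Generates and executes the DDL for the chosen CDM version into the target schema;
# constraints and indexes can still be applied after loading, as recommended above
executeDdl(
  connectionDetails = connectionDetails,
  cdmVersion = "5.3",
  cdmDatabaseSchema = "cdm"
)
```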

OMOP CDM v5.4 Specs OVERVIEW

https://ohdsi.github.io/CommonDataModel/cdm54.html

OMOP CDM v5.4
This is the specification document for the OMOP Common Data Model, v5.4. This is the latest version of the OMOP CDM. Each table is represented with a high-level description and ETL conventions that should be followed. This is continued with a discussion of each field in each table, any conventions related to the field, and constraints that should be followed (like primary key, foreign key, etc). Should you have questions please feel free to visit the forums or the github issue page.

OMOP CDM v5.4 ERD
Late in 2022 we held a community contest to find the best entity-relationship diagram and we crowned two winners! Martijn Schuemie and Renske Los created the best printable version and Vishnu Chandrabalan created the best interactive version.

Changes by Table
CDM v5.3 -> CDM v5.4
https://ohdsi.github.io/CommonDataModel/cdm54Changes.html

OHDSI Tool - Data Quality Dashboard

Verify Install

Introductory Tutorials

https://ohdsi.org/ohdsi2022-tutorial/

https://ohdsi.org/open-source-tutorials/

Practice with use cases

Discuss the OHDSI tool among the team, with specific relevance to integrating SCCRIP data in OMOP format.

From https://ohdsi.org/software-tools/ below

DATA QUALITY DASHBOARD applies a harmonized data quality assessment terminology to data that has been standardized in the OMOP Common Data Model. Where ACHILLES runs characterization analyses to provide an overall visual understanding of a CDM instance, the DQD goes table by table and field by field to quantify the number of records in a CDM that do not conform to the given specifications. In all, over 1,500 checks are performed, each one organized into the Kahn framework. For each check, the result is compared to a threshold whereby a FAIL is considered to be any percentage of violating rows falling above that value.
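The checks are run from R via the DataQualityDashboard package. A sketch with illustrative schema names (see the package site for the full parameter list):

```r
library(DataQualityDashboard)

# connectionDetails from DatabaseConnector::createConnectionDetails()
executeDqChecks(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "cdm",
  resultsDatabaseSchema = "results",
  cdmSourceName = "SCCRIP_OMOP",  # label used in the output
  outputFolder = "dqd_output"     # JSON results land here for the dashboard viewer
)
```

Each check result in the output is compared against its threshold to produce the PASS/FAIL summary described above.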

DATA QUALITY DASHBOARD LINKS
Documentation: Book of OHDSI
Demo: Click Here
Installation Information: Click Here
Source Code: GitHub

Eunomia - Sample data for testing OMOP CDM

Eunomia is part of HADES.

Eunomia is a standard dataset in the OMOP (Observational Medical Outcomes Partnership) Common Data Model (CDM) for testing and demonstration purposes. Eunomia is used for many of the exercises in the Book of OHDSI.

https://github.com/OHDSI/Eunomia
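Because Eunomia ships a small bundled CDM, it is a quick way to verify the toolchain end-to-end without a database server. A sketch using DatabaseConnector (function names per the Eunomia README):

```r
library(Eunomia)
library(DatabaseConnector)

connectionDetails <- getEunomiaConnectionDetails()  # bundled sample CDM
conn <- connect(connectionDetails)
querySql(conn, "SELECT COUNT(*) AS n FROM person")  # row count of the sample person table
disconnect(conn)
```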

Step 2 in the instructions gave an error, so I used the solution from https://forums.ohdsi.org/t/installing-eunomia-problem/16355

Practiced with the R code located in the package manual: Eunomia.pdf

On the next day, March 6, 2023 - Eunomia failed, so I had to reinstall all of R and also used https://forums.ohdsi.org/t/how-do-you-eventually-resolve-cannot-open-file-renv-activate-r/15246/15
