
scoping-review's People

Contributors

danielibsen, hchats, lwjohnst86, mariogucbmr


scoping-review's Issues

Complete functions for extracting from PubMed

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/pubmed-search.R
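
As a starting point, here is a rough sketch of what such a function could look like, assuming we use the easyPubMed package. The function name, arguments, and search string below are placeholders rather than the final implementation.

# Minimal sketch using easyPubMed (assumed dependency); names are placeholders.
library(easyPubMed)

pubmed_search <- function(query, file_prefix = "open_collab_") {
  # Download the PubMed records matching the query as XML batch files.
  files <- batch_pubmed_download(
    pubmed_query_string = query,
    dest_file_prefix = file_prefix
  )
  # Convert each downloaded batch into a data frame of article metadata.
  records <- lapply(files, table_articles_byAuth, included_authors = "first")
  do.call(rbind, records)
}

# Example usage (hypothetical search string):
# results <- pubmed_search('"open science"[Title/Abstract] AND collaboration[Title/Abstract]')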

Complete functions for extracting from MedRxiv

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/medrxiv-search.R
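
One possible approach, sketched here under the assumption that we use the medrxivr package (not yet agreed on), is to pull a snapshot of the medRxiv metadata and search it locally. The function name and terms are illustrative only.

# Sketch using the medrxivr package (assumed dependency); names are placeholders.
library(medrxivr)

medrxiv_search <- function(query) {
  # Download a snapshot of medRxiv preprint metadata via the API.
  preprints <- mx_api_content(server = "medrxiv")
  # Search titles/abstracts of the snapshot for the query terms.
  mx_search(data = preprints, query = query)
}

# Example usage (hypothetical terms):
# hits <- medrxiv_search(c("open science", "open collaboration"))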

Complete functions for extracting from bioRxiv

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/biorxiv-search.R
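
If the medrxivr package works out for medRxiv, a sketch for bioRxiv could mirror it, assuming the package's bioRxiv support; again, names and terms below are placeholders.

# Sketch assuming medrxivr's bioRxiv support; mirrors the medRxiv sketch above.
library(medrxivr)

biorxiv_search <- function(query) {
  preprints <- mx_api_content(server = "biorxiv")  # snapshot of bioRxiv metadata
  mx_search(data = preprints, query = query)
}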

Scoping review: resources for conducting scoping reviews

This is a databank of useful resources on how to conduct scoping reviews. Please add to it as we go along.

This is based on the initial links provided by Hannah and additional searches.

Key papers describing scoping reviews:
Arksey & O'Malley 2005: https://www.tandfonline.com/doi/abs/10.1080/1364557032000119616

Levac, Colquhoun & O'Brien 2010: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-5-69

Seminar from Cochrane:
https://training.cochrane.org/resource/scoping-reviews-what-they-are-and-how-you-can-do-them

Guidelines for conducting scoping reviews:

Joanna Briggs Institute: https://jbi-global-wiki.refined.site/space/MANUAL/3283910770/Chapter+11%3A+Scoping+reviews

Guidelines for reporting scoping reviews:

PRISMA checklist: http://www.prisma-statement.org/Extensions/ScopingReviews

PRISMA description paper: https://www.acpjournals.org/doi/10.7326/M18-0850

Scoping review of scoping reviews:

Tricco et al. 2016: https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-016-0116-4

Complete functions for extracting from Zenodo

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/zenodo-search.R
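
One option, sketched here against Zenodo's public REST records endpoint (httr and jsonlite assumed as dependencies; the query string is illustrative):

# Sketch using Zenodo's public REST API (https://zenodo.org/api/records).
library(httr)
library(jsonlite)

zenodo_search <- function(query, size = 100) {
  resp <- GET(
    "https://zenodo.org/api/records",
    query = list(q = query, size = size)
  )
  stop_for_status(resp)
  parsed <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
  # The matching records are returned under hits$hits.
  parsed$hits$hits
}

# Example usage (hypothetical query):
# records <- zenodo_search('"open science" AND "collaboration"')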

Issues with batch_pubmed_download and make_df_from_pubmed functions

Hi all,

Since last month I have been unable to run pubmed-search.R. For some reason, the make_df_from_pubmed() function generates an empty data frame. Last month I figured out that this is solved when the encoding is not specified in the batch_pubmed_download() function. It is possible that this is just an issue with how my computer handles encodings, but please check it out just in case.

My current function looks like this:

pubmed_search <-
  batch_pubmed_download(
    pubmed_query_string = search_terms$pubmed,
    dest_file_prefix = "open_collab_"
  )
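
For reference, the variant that reportedly produced the empty data frame looked roughly like this; the exact encoding value used isn't recorded here, so the "ASCII" below is just an illustrative placeholder.

# Reported to fail: specifying the encoding argument led to an empty data frame
# downstream in make_df_from_pubmed(). The "ASCII" value is a placeholder.
pubmed_search <-
  batch_pubmed_download(
    pubmed_query_string = search_terms$pubmed,
    dest_file_prefix = "open_collab_",
    encoding = "ASCII"
  )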

Complete functions for extracting from arXiv

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/arxiv-search.R
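
A minimal sketch, assuming we use the aRxiv package; the wrapper name and query syntax are placeholders.

# Sketch using the aRxiv package (assumed dependency).
library(aRxiv)

arxiv_search_results <- function(query, limit = 200) {
  # Query the arXiv API and return a data frame of matching records.
  arxiv_search(query = query, limit = limit)
}

# Example usage (hypothetical query):
# hits <- arxiv_search_results('abs:"open science" AND abs:"collaboration"')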

General process for each scoping review topic

This is a huge topic and hence we want to break it down into smaller, more manageable chunks.

Suggestions for subtopics (science-collective/admin#5 (comment)) are:

  • Open science principles and open collaboration
  • Reproducible Research and data analysis
  • Open data
  • Open software and software design
  • Open access
  • Open peer review
  • Open outreach

For each of these topics we could outline a general process for how we want to work with them.

A suggestion could be:

  1. Scoping review (protocol -> search -> write -> preprint -> publish journal article)
  2. Event (when the paper is at the preprint stage, we could organize an event for researchers to come and discuss the topic of the review, talk about how they work with the topic, learn about our findings, talk about barriers and solutions to problems related to the topic. Such a discussion could add further perspectives to our review before we publish it)
  3. Brainstorm needed tools (this could be a session, after the review has been published where we think about the gaps identified based on 1. and 2. and which tool(s) could fill these gaps. This would feed into our other branch of building tools)
  4. Blog post on website (based on findings from the review and the event, we could write a blog on our website and even similar blog posts, but with slightly different angles, to other places)

This would give us time to dive into each subtopic and build this initiative brick-by-brick.

Feedback from meeting with research librarian

Research question:

  • We need to define our key topic (which we had planned to do anyway in terms of defining accessibility/transparency)
  • “Determine best practices” part of our research question is problematic, as this would require assessment of which practices are good and bad
  • Suggests that we change our research question to something like “Provide an overview of current practices”
  • Our aim should be to capture the most important material and describe everything once, rather than capturing all material and counting the number of times mentioned (as in a systematic review)

Searches:

  • We need to begin by testing our search terms and search strategy
  • We should start with 4-5 articles
  • Could get a bit of noise due to the data availability statements required by some journals (e.g., a document includes the search term “open data” because the authors have responded to a data availability statement saying that they can be contacted for the dataset, but the journal article itself might not be in any way related to open science/collaboration)

Data sources:

  • Primary: Journal articles and books
  • Supplementary searches: Websites, hand searching
  • Sending emails to subject specialists/major university libraries asking for guidelines on open science/collaboration is a search strategy in itself
  • Hannah's idea: contact Lex Bouter?

Screening:

  • Multiple co-authors test search strategy
  • Since we are likely to be working with so many websites, Lasse suggests that we have one main screener (first author) who does all title/abstract screening
  • Then could have multiple screeners for full text screening

Data synthesis:

  • Could use a descriptive analytical approach
  • Some other scoping reviews do some sort of thematic analysis using software like NVivo
  • Lasse uses a well-known method for data synthesis in scoping reviews, originally proposed by Arksey & O’Malley (2005) and now built on by Levac, Colquhoun, & O’Brien (2010)
  • ^^I’ll put these in the Discord chat

Resources:

ACTION POINTS:

  • Triple check that this scoping review isn’t already being done/registered on OSF
  • Choose 4-5 best articles to start with
  • Begin testing search terms/search strategy @lwjohnst86 @danielibsen @MarioGuCBMR
  • Hannah to contact Lex Bouter to ask for resources
  • Hannah to contact the SDU Library contact person for open science/FAIR principles to ask for resources

Complete functions for extracting from figshare

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/figshare-search.R
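
One option, sketched here under the assumption that the public figshare v2 API's article-search endpoint accepts a search_for term (httr and jsonlite assumed as dependencies; field names are illustrative):

# Sketch against the public figshare v2 API (assumed endpoint and field names).
library(httr)
library(jsonlite)

figshare_search <- function(query, page_size = 100) {
  resp <- POST(
    "https://api.figshare.com/v2/articles/search",
    body = list(search_for = query, page_size = page_size),
    encode = "json"
  )
  stop_for_status(resp)
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}

# Example usage (hypothetical query):
# articles <- figshare_search('"open science" AND "collaboration"')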

Minutes Discord Chat 21-02-2022

Here are the first minutes for the scoping-review Discord chat held on 21-02-2022. I love doing minutes, since it helps me understand what has been done and what I should do next. As @lwjohnst86 suggested, there is no need to have these for every Discord chat, since they are mostly informal, but I believe that today's chat had some nice things I would like to come back to.

Here are some of the highlights from after I joined:

  1. @danielibsen asked whether we should look for examples of best practices in open science, and @lwjohnst86 commented that that could be a second project, following this one.

  2. We discussed the objectives @lwjohnst86 wrote. From this I realized there are two sections: the introduction and the scoping review itself. In the introduction we should discuss the definition of science, and maybe we should open an issue for that.

  3. Shall we only include open science publications? We discussed how hypocritical not doing so might be. @hchats pointed out that if we use it as a filtering criterion, it will be reported in the methods, and to me that makes a strong statement, a "declaration of intentions".

  4. @hchats also commented about doing a pre-survey on which topics we should focus on in the scoping review.

  5. We still need to think about the best way of doing the reference list.

  6. I was wondering what the best way of collaborating and knowing what to do in a non-hierarchical structure is. We discussed it, and I will add some ideas on workflows in this issue: #3.

I think that sums it up! Please add in the comments anything I might have missed or misheard, or anything you want to discuss about it!

Ideas for strategy for searching web, literature, and any other resource for any given topic area

Linked to ideas here: science-collective/admin#5

Specifically comments:

What I've gotten from the discussion so far is to:

  1. Test out a few strategies (get some insight from the librarian)
  2. There are so many topics within this area, so maybe we should narrow down to a few. I suggest we focus first on open collaboration connected to open science practices, since that's what we're doing right now 😝
  3. Brainstorm our specific aim and questions to help focus our search
  4. Decide and agree on some strategies and the overall protocol. Here I'd suggest we write out a draft protocol that specifically focuses on how we'd go about doing it as a team

Test code

I have been doing some tests with R libraries such as easyPubMed and rscopus. It would be interesting to have a folder where we could put our code so that people can glance over it and help. What are your thoughts on the best way of doing this?

Complete functions for extracting from Scopus

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/scopus-search.R
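
A minimal sketch, assuming we use the rscopus package and an Elsevier API key (see the API-key issue below); the function name, query, and environment variable are placeholders.

# Sketch using the rscopus package (assumed dependency); requires an Elsevier API key.
library(rscopus)

scopus_search_results <- function(query, api_key, max_count = 200) {
  # Register the API key for this session.
  set_api_key(api_key)
  # Run the Scopus search and convert the returned entries into data-frame form.
  res <- scopus_search(query = query, max_count = max_count, count = 25)
  gen_entries_to_df(res$entries)
}

# Example usage (hypothetical query; key read from an environment variable):
# results <- scopus_search_results(
#   'TITLE-ABS-KEY("open science" AND "collaboration")',
#   api_key = Sys.getenv("ELSEVIER_API_KEY")
# )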

Can't get Scopus code to work, Elsevier API Key isn't working

I'm trying to run the Scopus code but I can't get the API key to work, and I've created several. Neither the rscopus package website nor Elsevier's website has much information on this. I'm close to suggesting we exclude any database that requires a substantial amount of work to access.

Thoughts?

Issues with search-terms.R

Hi all,

I have been trying to reproduce the code for arxiv and medrxiv, but an error occurs when the query is run. The problem lies in how the function search_terms from search-terms.R is used.

As you know, the function returns the search terms according to the "engine" you provide as input. Right now the arxiv and medrxiv code calls it like this:

query = search_terms$medrxiv # it looks like search_terms was once a data frame with the search terms, but it is now a function and, of course, this code fails.

To solve this issue we just need to properly call the function:

query = search_terms("medrxiv") # this solves the issue!!

If you give the thumbs up I will change the code accordingly!!
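
For anyone not familiar with search-terms.R, here is a minimal sketch of how such a search_terms() function could be structured; the engine names and terms below are purely illustrative, not the project's actual ones.

# Illustrative sketch only; the real search-terms.R defines the actual terms.
search_terms <- function(engine) {
  terms <- list(
    pubmed  = '"open science"[Title/Abstract] AND collaboration[Title/Abstract]',
    medrxiv = c("open science", "open collaboration"),
    arxiv   = 'abs:"open science" AND abs:"collaboration"'
  )
  if (!engine %in% names(terms)) {
    stop("Unknown engine: ", engine)
  }
  terms[[engine]]
}

# query <- search_terms("medrxiv")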

Noise in pubmed-search.R

Not the most important thing, but after going through the output of pubmed-search.R, I have noticed that with the current query we get lots of noise from medical and technological papers. Here are some examples:

"Oxygen vacancies in open-hollow microcapsule enable accelerated kinetics for stable Li-S battery."
"Predicting major complications in patients undergoing laparoscopic and open hysterectomy for benign indications."

Clearly, the word "open" is used for many more topics than we were expecting. Those two seem like exceptions, but the amount of noise is actually quite big and, though unavoidable to some extent, there might be ways of mitigating it. If you think it is necessary, we can use this issue to highlight terms that are found more than once and that can be used in the query.

For instance, I have seen the following terms more than twice:

"open fracture"
"open-label"

Tasks to do for completing project

Whatever task we work on, we should follow scoping review guidelines (e.g. #10). Everything is at this point in rough draft stage, but we should be working as much as possible to fill out the doc/review.md and doc/protocol.md files (and other supporting documents).

Team-wide general collaboration tasks

  • Contributing document to guide how we will contribute to the work
  • Workflow and process strategy for working collaboratively on the search strategy, described briefly in the Methods section of doc/protocol.md and in more detail in the Methods of doc/papers.md and in the CONTRIBUTING.md document
  • Update README with more relevant information about the project
  • Fill in the GitHub repo "description" details

Protocol tasks (in doc/protocol.md)

(Inspired from manual)

  • Draft of title
  • Draft of objectives/research questions/aims (see #11, #10)
  • Draft of overview/introduction, with basic terms and definitions (e.g. what is open, what is science)
  • Draft of inclusion criteria and exclusion criteria (in Methods)
  • Draft of search strategy in Methods (search terms used, databases used)
    • Including how data is extracted and saved (which folder, file naming, etc)
  • Draft of post-search source selection, like how we decided to keep sources for further inclusion (in Methods)
  • Draft of data extraction details in Methods
    • Including how we decide who does it, how the data is saved (which folder, file naming, etc)
  • Draft of how we will "analyze"/"synthesize" the data
    • Draft "charting form" (for data extraction)
  • Review protocol by at least two people
  • Upload protocol to OSF

Code and data extracted from sources

Paper (in doc/paper.md)

(More will be added here as we go on).

Book selection

Following the TODO added by @hchats in the protocol, maybe we should add books. A maximum of 4 books should be added after reaching consensus among the collaborators.

In this issue I suggest that we discuss the best strategies for finding books (if they differ from those used for other types of data) so that they can be added to the protocol accordingly.

Complete functions for extracting from Web of Science

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/wos-search.R
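
A rough sketch, assuming we use the wosr package and institutional Web of Science credentials (read here from environment variables); the function name and query are placeholders.

# Sketch using the wosr package (assumed dependency).
library(wosr)

wos_search <- function(query) {
  # Authenticate and obtain a session ID.
  sid <- auth(
    username = Sys.getenv("WOS_USERNAME"),
    password = Sys.getenv("WOS_PASSWORD")
  )
  # Pull the records matching the query.
  pull_wos(query, sid = sid)
}

# Example usage (hypothetical query):
# results <- wos_search('TS = ("open science" AND "collaboration")')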

Complete functions for extracting from Embase

Write functions that will extract records from the source and download them to the repository. To finish this issue, we don't need to download and save the data from the source; we only need the code.

Code should be placed in: R/embase-search.R

PRISMA-ScR Checklist (for both protocol and review itself)

Note: This checklist was generated with the script R/extract-prisma-scr-checklist.R.
The checklist can also be found at doc/instructions/prisma-scr-checklist.md.

Title:

  • Title: Identify the report as a scoping review.

Abstract:

  • Structured summary: Provide a structured summary that includes (as applicable): background, objectives, eligibility criteria, sources of evidence, charting methods, results, and conclusions that relate to the review questions and objectives.

Introduction:

  • Rationale: Describe the rationale for the review in the context of what is already known. Explain why the review questions/objectives lend themselves to a scoping review approach.
  • Objectives: Provide an explicit statement of the questions and objectives being addressed with reference to their key elements (e.g., population or participants, concepts, and context) or other relevant key elements used to conceptualize the review questions and/or objectives.

Methods:

  • Protocol and registration: Indicate whether a review protocol exists; state if and where it can be accessed (e.g., a Web address); and if available, provide registration information, including the registration number.
  • Eligibility criteria: Specify characteristics of the sources of evidence used as eligibility criteria (e.g., years considered, language, and publication status), and provide a rationale.
  • Information sources*: Describe all information sources in the search (e.g., databases with dates of coverage and contact with authors to identify additional sources), as well as the date the most recent search was executed.
  • Search: Present the full electronic search strategy for at least 1 database, including any limits used, such that it could be repeated.
  • Selection of sources of evidence†: State the process for selecting sources of evidence (i.e., screening and eligibility) included in the scoping review.
  • Data charting process‡: Describe the methods of charting data from the included sources of evidence (e.g., calibrated forms or forms that have been tested by the team before their use, and whether data charting was done independently or in duplicate) and any processes for obtaining and confirming data from investigators.
  • Data items: List and define all variables for which data were sought and any assumptions and simplifications made.
  • Critical appraisal of individual sources of evidence§: If done, provide a rationale for conducting a critical appraisal of included sources of evidence; describe the methods used and how this information was used in any data synthesis (if appropriate).
  • Synthesis of results: Describe the methods of handling and summarizing the data that were charted.

Results:

  • Selection of sources of evidence: Give numbers of sources of evidence screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally using a flow diagram.
  • Characteristics of sources of evidence: For each source of evidence, present characteristics for which data were charted and provide the citations.
  • Critical appraisal within sources of evidence: If done, present data on critical appraisal of included sources of evidence (see item 12).
  • Results of individual sources of evidence: For each included source of evidence, present the relevant data that were charted that relate to the review questions and objectives.
  • Synthesis of results: Summarize and/or present the charting results as they relate to the review questions and objectives.

Discussion:

  • Summary of evidence: Summarize the main results (including an overview of concepts, themes, and types of evidence available), link to the review questions and objectives, and consider the relevance to key groups.
  • Limitations: Discuss the limitations of the scoping review process.
  • Conclusions: Provide a general interpretation of the results with respect to the review questions and objectives, as well as potential implications and/or next steps.

Funding:

  • Funding: Describe sources of funding for the included sources of evidence, as well as sources of funding for the scoping review. Describe the role of the funders of the scoping review.
