
rplos's Introduction

Project Status: Abandoned

This package has been archived. The former README is now in README-not.

rplos's People

Contributors

bbolker, benda1997, cboettig, jeroen, jrnold, karthik, katrinleinweber, kbroman, maelle, sckott


rplos's Issues

Standard CRAN install failed (Ubuntu 11.10)

Hi - I tried installing from CRAN (sudo R; install.packages('rplos')) and got this:

[...]
installing to /usr/local/lib/R/site-library/RCurl/libs
** R
** data
** inst
** preparing package for lazy loading
Creating a new generic function for "close" in "RCurl"
** help
*** installing help indices
** building package indices ...
** testing if installed package can be loaded

* DONE (RCurl)
* installing *source* package ‘reshape’ ...
** R
** data
**  moving datasets to lazyload DB
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices ...
** testing if installed package can be loaded

* DONE (reshape)
* installing *source* package ‘googleVis’ ...
** R
** data
**  moving datasets to lazyload DB
** demo
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices ...
** testing if installed package can be loaded

* DONE (googleVis)
ERROR: dependency ‘stringr’ is not available for package ‘httr’
* removing ‘/usr/local/lib/R/site-library/httr’
ERROR: dependencies ‘ggplot2’, ‘stringr’, ‘httr’ are not available for package ‘rplos’
* removing ‘/usr/local/lib/R/site-library/rplos’

The downloaded packages are in
    ‘/tmp/RtmpInY9HC/downloaded_packages’
Warning messages:
1: In install.packages("rplos") :
  installation of package 'httr' had non-zero exit status
2: In install.packages("rplos") :
  installation of package 'rplos' had non-zero exit status

Any clues please?
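If the failures above are just missing dependencies (stringr, ggplot2, httr), one possible workaround is to install those explicitly first and then retry rplos; this is a hedged suggestion, not a confirmed fix:

# install the dependencies reported as unavailable, then retry rplos
install.packages(c("stringr", "ggplot2", "httr"))
install.packages("rplos")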

Make totals summary metrics the default output?

Martin suggested "summary" as the default output, but I disagree:

We could have "summary" as the default option, but it seems like the summary option doesn't give you any metrics at all, just metadata. Perhaps it would make more sense to return, by default, the summary metrics for each data provider. I think these are returned together with the very detailed by-day metrics in the output of the "detail" option. For example:

> out <- almplosallviews(doi='10.1371/journal.pone.0029797', info='detail')
> out[["metrics"]] # get metrics summary data.frame
                .id shares total citations  pdf  html comments likes groups
1         citeulike      1     1        NA   NA    NA       NA    NA     NA
2          crossref     NA     2         2   NA    NA       NA    NA     NA
3            nature     NA     3         3   NA    NA       NA    NA     NA
4            pubmed     NA     0         0   NA    NA       NA    NA     NA
5            scopus     NA     1         1   NA    NA       NA    NA     NA
6           counter     NA 23689        NA 2038 21553       NA    NA     NA
7  researchblogging     NA     1         1   NA    NA       NA    NA     NA
8              biod     NA    16        NA    0     0       NA    NA     NA
9               wos     NA     1         1   NA    NA       NA    NA     NA
10              pmc     NA   173        NA   33   140       NA    NA     NA
11         facebook    133   176        NA   NA    NA       18    25     NA
12         mendeley     29    30        NA   NA    NA       NA    NA      1
13          twitter     NA     5        NA   NA    NA        5    NA     NA
14        wikipedia    169   208        39   NA    NA       NA    NA     NA

Check for returned fields

Martin said something about the copyright and author roles fields not being documented, or not appearing in the output of function calls. Look into this.

ToDo

  • Making progress; writing the default search function now.
  • Need to figure out a compact way of displaying results, and the best way of doing so.
  • Include some graphics functions for summarizing results.
  • Can datasets be downloaded in addition to just the bibliographic information?

Possible bug or question from user

Direct quote from user's email:

I also had another question, and possible bug report. We've been trying to search all PLoS papers ever for data, using the code below.

f <- "id, results_and_discussion, journal, cross_published_journal_key, subject_level_1, publication_date"

num.results <- 250 # how many papers per search. I think it is best to keep this limited, to reduce strain on memory and the PLoS API. 250 seems fine.

for(start in seq(1, 300000, num.results)){

    print(paste("Starting from record", start))

    # get a list of 250 results (note that we don't want to keep these around)
    x <- searchplos("*:*", fields=f, toquery="doc_type:full", returndf = FALSE, start=start, limit=num.results) 

    # sleep for 1.5 seconds. Minimises chance of getting locked out of PLoS API.
    Sys.sleep(1.5)

    # [I have some code that goes here that processes the search hits (e.g. searching through the Results with a regular expression) in "x" and saves some of the data for later analysis]

}

I chose 300,000 since that is comfortably more than the number of PLOS papers that have been published.

The code kind of works, but it is clear that we are not getting all PLoS papers ever. For example, if you change the seq() call to seq(100000, 300000, num.results), it says "Error: Internal Server Error" and crashes, though with most other starting values it generally works. Another issue is this:

[figure: histogram of retrieved PLoS Biology papers by publication date, expressed as days before today]

The x axis shows the dates of papers in PLoS Biology that turned up in my search, expressed as days before today. It looks like there are some ranges that have not been sampled by my current search method. This could be because of issues with my code that I snipped from the above section (it crashes occasionally when "start" gets over 100,000, and I don't yet know why), but I wanted to check first that our general approach is sensible. Is that what you'd do if you wanted all the Results sections?

If not, is there a simpler way to get the list of fields "f" in my code for all papers on the API? I guess I am not really trying to do a "search" so much as accessing a set of fields in all entries - maybe the function is not built with this task in mind. But I feel that a great addition to rplos would be an easier way of getting, say, all the full texts of all PLoS papers into R for some text mining.
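One way to keep the loop above running past an occasional "Internal Server Error" is to wrap the searchplos() call in tryCatch() and skip the failing page. This is only a sketch, reusing the f and num.results objects defined in the code above:

for (start in seq(1, 300000, num.results)) {
    # same call as above, but a failed request no longer stops the loop
    x <- tryCatch(
        searchplos("*:*", fields = f, toquery = "doc_type:full",
                   returndf = FALSE, start = start, limit = num.results),
        error = function(e) {
            message("request starting at record ", start, " failed: ",
                    conditionMessage(e))
            NULL
        })
    if (is.null(x)) next   # skip this page and carry on
    Sys.sleep(1.5)
    # ... process x as before ...
}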

What now?

What other functions should we add to this package?

  • A useful function might be one that graphs results using IDs to label articles, so that one can look at the figure and then go get a specific article based on it. Probably over my head at the moment; just plotting results is a good start.

  • What sorts of text-mining functions would be interesting?

plot_throughtime is broken

None of the examples work, e.g.:

> plot_throughtime(list("reproducible research"), 500)
Error in `$<-.data.frame`(`*tmp*`, "V1", value = numeric(0)) :
  replacement has 0 rows, data has 500
Calls: plot_throughtime -> $<- -> $<-.data.frame

Fix documentation

Jennifer Lin was asking for clarification of parameters, and I noticed that at least the toquery documentation isn't accurate about what it does. Fix up the docs, add examples, etc.

Search limit workaround

We can only pull down up to 999 search results at a time from PLOS. Proposed solution:

Use the 'start' argument to begin at a particular record, then the 'rows' argument to pull down that many results from 'start' onward.

We should be able to grab the numFound value from the first set of results; if the number of results returned is less than numFound, we can then pull down the rest in increments of 999 until all are retrieved (see the sketch after this issue).

Sound reasonable?
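A rough sketch of that approach follows. It assumes searchplos() exposes the total hit count (numFound) and the matching records somewhere in its return value; the element names used here ($meta$numFound, $data) are placeholders and may not match the actual structure:

get_all_results <- function(query, page_size = 999) {
    first <- searchplos(query, start = 0, limit = page_size)
    total <- first$meta$numFound          # assumed location of the hit count
    pages <- list(first$data)             # assumed location of the records
    start <- page_size
    while (start < total) {
        nxt <- searchplos(query, start = start, limit = page_size)
        pages <- c(pages, list(nxt$data))
        start <- start + page_size
    }
    do.call(rbind, pages)                 # one data.frame with all records
}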

Is there any way to automagically detect DOI vs. pmid vs. pmcid vs. mendeley ID?

Right now the function alm (just renamed from almplosallviews) has four parameters, one for each of the four ID types, and you can only use one of them at a time. It would be nice to detect the ID type automatically, but as far as I can tell PubMed IDs (pmid) and PubMed Central IDs (pmcid) cannot be disambiguated.

Any thoughts anyone?
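A minimal heuristic sketch (not part of rplos) that guesses the type from the shape of the identifier; note that bare numbers stay ambiguous, which is exactly the pmid/pmcid problem described above:

guess_id_type <- function(x) {
    if (grepl("^10\\.\\d{4,}/", x)) return("doi")
    if (grepl("^PMC\\d+$", x, ignore.case = TRUE)) return("pmcid")
    if (grepl("^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$", x)) return("mendeley uuid")
    if (grepl("^\\d+$", x)) return("pmid or pmcid (ambiguous)")
    "unknown"
}
guess_id_type("10.1371/journal.pone.0029797")   # "doi"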

multiple package pages

What is our protocol on documentation pages and where they reside?

Currently we have: (1) the website (a bit out of date in terms of tutorials and examples); (2) some repos with gh-pages; and (3) others with neither.

Should we explore a way to keep this consistent? Update only the GitHub documents and have them mirrored on WordPress (via a plugin?)

Thoughts?

almplot doesn't work


At least the examples don't work, because they don't match the function definition.

Include example of how to get full article DOIs

Annotation DOIs are now returned along with full-paper DOIs when doing fq=doc_type:full, so we need to exclude annotation DOIs with fq=-article_type:Correction.

E.g., http://api.plos.org/search?q=*:*&fq=doc_type:full&fq=-article_type:Correction&fl=id,title_display,article_type%20desc&api_key=DEMO&wt=json
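For the docs, something like the following could illustrate the same filter from R. This is a hedged example that assumes the filter queries can be passed through searchplos() via its fq (formerly toquery) argument; argument names differ across rplos versions:

# full articles only, excluding corrections/annotations
res <- searchplos(q  = "*:*",
                  fl = "id,title",
                  fq = list("doc_type:full", "-article_type:Correction"),
                  limit = 10)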

.DS_Store

Any idea how to remove those .DS_Store files that are hidden on my local directory, but show up on github repo?

Thanks, S

Change arg names

toquery is misleading: the actual parameter it maps to is fq, which filters a query rather than changing the query itself.

Change:

  • terms -> q
  • fields -> fl
  • toquery -> fq

But leave the old parameters in place with a proper deprecation message (see the sketch below).
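One possible deprecation pattern (a sketch, not the actual rplos code): keep the old argument names as optional parameters, warn when they are used, and map them onto the new names:

searchplos <- function(q = NULL, fl = NULL, fq = NULL,
                       terms = NULL, fields = NULL, toquery = NULL, ...) {
    if (!is.null(terms)) {
        warning("'terms' is deprecated; use 'q' instead")
        q <- terms
    }
    if (!is.null(fields)) {
        warning("'fields' is deprecated; use 'fl' instead")
        fl <- fields
    }
    if (!is.null(toquery)) {
        warning("'toquery' is deprecated; use 'fq' instead")
        fq <- toquery
    }
    # ... the rest of the function works with q, fl, and fq only ...
}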

Aggregate records

William Gunn (@MrGunn) asked on Twitter: "off the top of your head, do you know a good one-liner to aggregate counts by day given timestamps including HH:MM:SS?"
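One possible base-R answer, assuming ts is a character or POSIXct vector of timestamps like "2012-05-01 13:45:22": truncate each timestamp to its date and tabulate.

counts_by_day <- table(as.Date(ts))
# or, as a data.frame:
counts_by_day <- aggregate(count ~ day,
                           data.frame(day = as.Date(ts), count = 1), sum)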

Default output as an S3 class?

@ropensci/owners Hey, curious what your thoughts are on the default output for functions in rplos.

I used to output a data.frame or list of the data, but recently thought it may be better to return a simple S3 object with a print method, so that what is returned looks like:

out <- searchplos('ecology', 'id,publication_date', limit = 2)

 Data
No. of records: 2
No. of variables: 2
Variables: id, publication_date

 Highlighting
No. of records: 0
No. of variables: 0
Variables: 

And then users can get the data as a data.frame with:

plos_todf(out)

                            id     publication_date
1 10.1371/journal.pone.0059813 2013-04-24T00:00:00Z
2 10.1371/journal.pone.0001248 2007-11-28T00:00:00Z

The S3 class seems appropriate since you can quickly get a lot of data back, which can be a bit overwhelming. I am also adding highlighting and faceting data to the output, so this approach makes sense to me. With highlighting and faceting in the output, one could do plos_todf(out, return='data'), plos_todf(out, return='highlighting'), plos_todf(out, return='facets'), or plos_todf(out, return='all').

Thoughts?
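A sketch of what such a print method might look like; the class name "plos" and the internal structure (a list with $data and $highlighting elements) are illustrative assumptions, not the final design:

print.plos <- function(x, ...) {
    cat(" Data\n")
    cat("No. of records:", nrow(x$data), "\n")
    cat("Variables:", paste(names(x$data), collapse = ", "), "\n\n")
    cat(" Highlighting\n")
    cat("No. of records:",
        if (is.null(x$highlighting)) 0 else nrow(x$highlighting), "\n")
    invisible(x)
}

plos_todf <- function(x, return = "data") x[[return]]   # hypothetical accessor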

Two errors with almevents function

Error in data.frame(issn = "0081-685X", journal_title = "Studia Psychologiczne", :
arguments imply differing number of rows: 1, 0
Error in data.frame(issn = "2191-0200; 0334-1763", journal_title = "Reviews in the Neurosciences", :
arguments imply differing number of rows: 1, 0
Error in x[[1]]$contributors : $ operator is invalid for atomic vectors
Error in data.frame(issn = "1868-1891; 1868-1883", journal_title = "Hormone Molecular Biology and Clinical Investigation", :
arguments imply differing number of rows: 1, 0
Error in x[[1]]$contributors : $ operator is invalid for atomic vectors
Error in x[[1]]$contributors : $ operator is invalid for atomic vectors

googleVis: add functionality for this

Add this to the plot_throughtime function as an additional option for visualizing search results. The output needs to be rendered in a browser, so it cannot be viewed in the local graphics device. We will probably need to change the data.frame format from the one used for the ggplot2 graphics. A sketch follows.
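A hedged sketch of the googleVis route, assuming plot_throughtime can produce a data.frame df with a date column and a count-of-articles column (the column names here are assumptions):

library(googleVis)
chart <- gvisLineChart(df, xvar = "date", yvar = "no_articles")
plot(chart)   # renders in a browser, as noted above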

Fix import issue

At load time, we get the following warnings:

library(rplos)
Loading required package: ggplot2
New to rplos? Tutorial at http://ropensci.org/tutorials/rplos_tutorial.html. Use suppressPackageStartupMessages() to suppress these startup messages in the future
Warning messages:
1: replacing previous import ‘rename’ when loading ‘reshape’
2: replacing previous import ‘round_any’ when loading ‘reshape’

Various errors from Martin

  • The options info=event and source=crossref are missing

  • Limit for ALM API is 50 articles at a time.

  • I also want to add the source=crossref option to almevents.

  • The example almevents(doi="10.1371/journal.pone.0029797", key = 1) is currently broken; it results in:

    Error in names(temp[[1]]) <- c("bloglines", "citeulike", "connotea", "crossref", :
    'names' attribute [18] must be the same length as the vector [0]

  • Enable the year=2012 option.

Clean up result text

Martin said there is leading and trailing whitespace in some of the data returned from either the search API or the ALM API.

There is also some HTML in titles and other fields; try to remove that as well (see the sketch below).
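A minimal cleanup sketch, not tied to either API: trim leading/trailing whitespace and strip simple HTML tags from returned text fields.

clean_text <- function(x) {
    x <- gsub("^\\s+|\\s+$", "", x)   # trim leading/trailing whitespace
    gsub("<[^>]+>", "", x)            # drop simple HTML tags
}
clean_text("  <i>Aedes aegypti</i> population dynamics  ")
# [1] "Aedes aegypti population dynamics"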

How to install/use this package?

Hi, I am a new R user.

Forgive me if my question is too naive.

I have tried to install this package, but clearly the zip file downloaded from the source is not actually a recognizable R package. I then unzipped the file, set the R working directory to it, and ran the code from the wiki, but R told me "Error: could not find function "searchplos"".

Could you tell me how to install it in R, or how to use it directly?

Thank you very much! ~
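For reference, the two usual routes for installing an R package like this were from CRAN or from the GitHub source; the GitHub repository path below is an assumption, and note that the package has since been archived (see the top of this page):

install.packages("rplos")                    # from CRAN, while it was available
# or from the GitHub source:
# install.packages("devtools")
devtools::install_github("ropensci/rplos")
library(rplos)                               # functions such as searchplos() are only found after this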

suppressPackageStartupMessages

Figure out how to suppress the googleVis startup message when loading rplos.

A user can do suppressPackageStartupMessages(library(googleVis)) themselves, but we need some way to do it within rplos (see the sketch below).

This may be helpful.
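One possible approach, sketched here and not taken from the actual rplos source: attach googleVis quietly from rplos's own .onAttach hook rather than relying on Depends:

.onAttach <- function(libname, pkgname) {
    suppressPackageStartupMessages(
        require("googleVis", quietly = TRUE, character.only = TRUE)
    )
}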
