geocompx / geocompr
Geocomputation with R: an open source book
Home Page: https://r.geocompx.org/
License: Other
Minor issue but it distracts from the text.
See here: http://robinlovelace.net/geocompr/attr.html#vector-attribute-joining
@Robinlovelace there is a mention of three packages - sp, rgdal, and rgeos - in the second chapter, but references are missing there. It looks like they are not in the geocompr package description, so there are no valid bib entries for them.
The question is: should we add them to the description, or should we add their bib entries to the ref.bib file?
After #56 is complete
https://twitter.com/mdsumner/status/860676991040733184
"Tip from a reader; if you are writing a book intended for print & use #knitr, turn off the background on code blocks; it looks shit. #rstats"
"Also, think carefully about using syntax highlighting & the scheme you use; #knitr default doesn't print well in BW #rstats"
Sources:
My idea is to mostly use the .gpkg format for vector examples. We could show one example with .shp at the beginning, and shortly inform that we will use .gpkg later in the book. We could also add an extended explanation why .gpkg is better in an appendix to the book. What do you think about it @Robinlovelace ?
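A minimal sketch of the proposed .shp-then-.gpkg switch, using the nc.shp file that ships with sf (the output file name is illustrative):

```r
library(sf)
# Read the Shapefile example bundled with sf:
nc = st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
# Write it out as a GeoPackage: a single file, no 10-character
# column-name limit, and no sidecar .dbf/.shx/.prj files to lose:
st_write(nc, "nc.gpkg", quiet = TRUE)
nc2 = st_read("nc.gpkg", quiet = TRUE)
```

The contrast between the one-file .gpkg and the multi-file .shp could motivate the extended appendix explanation.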
Will contain information on CRS transformations (raster and vector) as well as affine transforms, including transformations of raster datasets.
There is a minor issue with README.Rmd: if you use the Knit button, it doesn't produce a proper .md file. This is probably because we use "Project build tools: Website". I found a temporary workaround:
rmarkdown::render('README.Rmd', output_format = 'md_document', output_file = 'README.md')
I think Geocomputation with R is a better working title and that the repo would benefit from being called geocompR but let's decide after April 11th.
This would be well-suited to chapter 04 I think: http://r-spatial.org/r/2017/06/22/spatial-index.html
With reference to sfr1 vignette.
With a code example using osmdata by @mpadge
https://github.com/Robinlovelace/geocompr/blob/master/01-introduction.Rmd#introduction-intro - what should be there?
E.g. from http://r4ds.had.co.nz/transform.html
Current (25 June):
2 Geographic data in R
Prerequisites
This chapter requires the packages sf, and spData to be installed and loaded:
library(sf)
library(spData)
Should be:
devtools::install_github("nowosad/spData")
People working with geospatial data rarely work in a single projection; however, this topic is often neglected in books.
A brief intro and explanation of how to transform existing maps using various different projections would be great. But more importantly, it would be extremely interesting to read about what can go badly wrong with projection mismatches.
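As a sketch of what such a section might show, assuming sf is available (the target EPSG code is illustrative):

```r
library(sf)
nc = st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
st_crs(nc)                       # inspect the source CRS (NAD27 here)
nc_proj = st_transform(nc, 32119)  # reproject, e.g. to NAD83 / North Carolina
# What can go wrong: sf's binary geometric operations check that both
# layers share a CRS, so a projection mismatch (e.g. intersecting nc
# with nc_proj) should fail loudly rather than silently returning
# nonsense coordinates - a good hook for the "what goes wrong" text.
```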
Section 2.1.4 says: "So far, our geometry types have just included one feature. To represent multiple features in one object, we can use the “multi”-version of each geometry type:"
But I think (and a careful reading of the standard would confirm this) these things are still "single features". They're just single features with multiple (i.e. compound) geometries. You only get multiple features when you create an sfc object.
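The distinction can be shown in a couple of lines, assuming sf:

```r
library(sf)
# A MULTIPOINT is one (compound) geometry - still a single feature's geometry:
mp = st_multipoint(rbind(c(0, 0), c(1, 1)))
# Features are only counted at the sfc (geometry column) level:
n_single = length(st_sfc(mp))                  # one feature
n_multi = length(st_sfc(st_point(c(0, 0)),
                        st_point(c(1, 1))))    # two features
```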
I don't think this should be in the introduction:
https://bookdown.org/robinlovelace/geocompr/intro.html#why-simple-features
@Robinlovelace please take a look at the Licence file. What should be there?
As pointed out by @rsbivand this is an issue: too much focus given to large areas. A nice solution is setting opacity proportional to pop. density, as shown here: https://twitter.com/PhilPierdo/status/865953679840612353
Another solution is to apply a mask hiding everything but buildings as implemented by @oobr.
Installation of each package fails with the message: 'Don't know how to decompress files with extension - ', where '-' appears to be the version number of the particular package, e.g. 3-23 or 94-5.
I'm using RStudio 1.0.136 (R v3.2.0) on Mac OS X 10.10.5
Thanks for your help - looking forward to reading (contributing?)
Stu
See this - not sure why this fails: https://travis-ci.org/Robinlovelace/geocompr#L2289
Currently some have this:
---
knit: "bookdown::preview_chapter"
---
They should all have prerequisites...
@Nowosad note my use of pacman's p_load() for package loading - are you happy with that? I think it's the best solution and that the benefits justify the costs of having another (minor) dependency for the book.
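For readers unfamiliar with pacman, a sketch of what p_load() replaces (the packages listed are ones the book already uses):

```r
# p_load() checks whether each package is installed, installs any that
# are missing, then attaches them - one line instead of a pair of
# install.packages() + library() calls per package:
if (!requireNamespace("pacman", quietly = TRUE)) install.packages("pacman")
pacman::p_load(sf, spData)
```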
E.g. using a new dataset.
@Robinlovelace can you check if make build or make html work for you? I've got an error for lines 119-128 in 03-attr.Rmd:
Error in as(st_geometry(x), "Spatial") :
no method or default for coercing "sfc_MULTIPOLYGON" to "Spatial"
Calls: <Anonymous> ... eval -> eval -> set_units -> st_area ->
<Anonymous> -> as
Currently the data for chapter 10 lives here (in cycle-hire*.geojson): https://github.com/Robinlovelace/Creating-maps-in-R/tree/master/data
I think it would make sense to have these small datasets in spData for consistency.
That way we can have the data stored consistently and add another spatial dataset for people to use - one based on official data and one based on crowd-sourced data, which is interesting conceptually.
Probably on 1st mention is best plan.
Finding interesting datasets to showcase algorithms can be very difficult.
One option could be to use ECMWF and Copernicus data. If you are not familiar with them, ECMWF stands for the European Centre for Medium-Range Weather Forecasts; it is an inter-governmental organisation and stores the largest meteorological data archive in the world (global coverage). Copernicus is a European programme that provides satellite and in-situ observations for a number of domains (e.g. biodiversity, environmental protection, climate, public health, tourism, etc.).
There are countless uses for these datasets!
https://twitter.com/mdsumner/status/860482390702960641
The idea from https://twitter.com/clavitolo/status/865671631368105984
List of suggestions for the second edition. See below for various ideas; see #12 (comment) for an older version of this list that includes ideas already implemented.
st_convex_hull, st_triangulate, st_polygonize, st_segmentize, st_boundary and st_node - see c1d1c0e
geos package by @paleolimbot
3D mapping section / chapter - a section on that? A new part on visualizing geographic data with static/interactive/other chapters. Just thinking it could be usefully split up given its popularity (RL)
OGC standards such as WMS/WFS/WCS/... are becoming very common (hurray!) and many data providers serve layers via web services.
A section on how to assemble a typical request and visualise the response would be fantastic!
I think this should be the new chapter 10, after raster-vector.
Sound good? Please give it a bash if you find some spare time (p.s. there are more typos in there - try to spot them!) @Nowosad.
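A sketch of assembling such a request by hand in base R; the server URL and layer name below are placeholders, not a real endpoint:

```r
# Build a WFS 2.0 GetFeature request URL from its key=value parameters:
base_url = "https://example.org/geoserver/wfs"
params = c(service = "WFS",
           version = "2.0.0",
           request = "GetFeature",
           typeNames = "topp:states",
           outputFormat = "application/json")
url = paste0(base_url, "?",
             paste(names(params), params, sep = "=", collapse = "&"))
url
# A GeoJSON response from a live server could then be read directly,
# e.g. with sf::st_read(url)
```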
From Chapter 3
library(tidyverse)
library(sf) #0.4-3
library(spData) # includes spatial data and datasets
f = system.file("shapes/wrld.shp", package = "spData")
world = st_read(f)
# this works
world_few_rows = world[world$population > 1e9,]
#OR
world_few_rows = world %>%
filter(population > 1e9)
Error in filter_impl(.data, quo) : Result must have length 177, not 12180
and
world_orig = world # create copy of world dataset for future reference
world = select(world_orig, name_long, continent, population = pop)
#Error in select.sf(world_orig, name_long, continent, population = pop) : requires dplyr > 0.5.0: install that first, then reinstall sf
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
locale:
[1] LC_COLLATE=English_Canada.1252 LC_CTYPE=English_Canada.1252
[3] LC_MONETARY=English_Canada.1252 LC_NUMERIC=C
[5] LC_TIME=English_Canada.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] bindrcpp_0.1 spData_0.1-18 sf_0.4-3 dplyr_0.6.0
[5] purrr_0.2.2.2 readr_1.1.1 tidyr_0.6.3 tibble_1.3.1
[9] ggplot2_2.2.1.9000 tidyverse_1.1.1
loaded via a namespace (and not attached):
[1] Rcpp_0.12.10.1 bindr_0.1 cellranger_1.1.0 plyr_1.8.4
[5] forcats_0.2.0 tools_3.3.2 jsonlite_1.4 lubridate_1.6.0
[9] gtable_0.2.0 nlme_3.1-131 lattice_0.20-34 rlang_0.1.1
[13] psych_1.7.5 DBI_0.6-1 parallel_3.3.2 haven_1.0.0
[17] stringr_1.2.0 httr_1.2.1 knitr_1.16 xml2_1.1.1
[21] hms_0.3 grid_3.3.2 glue_1.0.0 R6_2.2.1
[25] readxl_1.0.0 foreign_0.8-68 udunits2_0.13 reshape2_1.4.2
[29] modelr_0.1.0 magrittr_1.5 units_0.4-4 scales_0.4.1
[33] assertthat_0.2.0 mnormt_1.5-5 rvest_0.3.2 colorspace_1.3-2
[37] stringi_1.1.5 lazyeval_0.2.0 munsell_0.4.3 broom_0.4.2
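The select.sf error message itself points at the likely fix: dplyr 0.6.0 is attached, but sf was presumably installed while an older dplyr was present, so sf's dplyr methods are out of sync. A sketch of the repair (run once, outside the book's code):

```r
# Update dplyr, then reinstall sf from source so it rebuilds against
# the new dplyr - this should clear the "requires dplyr > 0.5.0" error:
install.packages("dplyr")
install.packages("sf", type = "source")
```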
Hi there, your book already looks amazing!
It would be great to add a section dedicated to best practices for visualisation, e.g. use of color-blind friendly palettes. A package that provides some nice palettes is viridis. There are also tools to select good color schemes for maps and other graphics (e.g. colorbrewer2).
The use of these palettes throughout the book would also help strengthen the message.
I am trying to use dplyr verbs in reactive functions in a shiny app (and I am pretty new to shiny). The app is quite long, so I am not attaching it. Basically the issue is with an instruction within the server function which subsets one of the datasets I am loading:
# some code here
# load files (from the data folder) ---------------------------------------
studies <- readRDS("data/studies.rds")
samples <- readRDS("data/samples.rds")
edges <- readRDS("data/edges.rds")
taxa <- readRDS("data/taxa.rds")
version <- readRDS("data/version.rds")
# some more code here defining the ui
server <- function(input, output) {
# some code irrelevant to the issue (defines other outputs)
# samples table, view tab -------------------------------------------------
# Render selectInput for study, View tab
output$choose_study <- renderUI({
study_names <- studies$studyId
selectInput("view_study",
label = h5("Select one or more studies to explore samples"),
choices = study_names,
selected = "ST1",
multiple = TRUE)
})
# Reactively subset so that only the selected study is viewed
f_vsamples <- reactive({
subset(samples, studyId %in% input$view_study,
select = c(label_2, s_type, n_reads2, foodId, llabel, L1, description))
})
output$vsamples_table <- DT::renderDataTable(
f_vsamples(),
rownames = F,
escape = F,
colnames = c("label" = "label_2",
"sample type" = "s_type",
"reads" = "n_reads2",
"code" = "foodId",
"ext. code" = "llabel",
"food group" = "L1",
"description" = "description"),
options = list(pageLength = 5,
lengthMenu = c(5, 10, 25, 20),
columnDefs = list(
list(
targets = 5,
render = JS(
"function(data, type, row, meta) {",
"return type === 'display' && data.length > 30 ?",
"'<span title=\"' + data + '\">' + data.substr(0, 30) + '...</span>' : data;","}"
)
)
)
)
)
}
This works fine and the table is rendered correctly. However, when I replace it with
f_vsamples <- reactive({
dplyr::filter(samples, studyId %in% input$view_study) %>%
dplyr::select(label_2, s_type, n_reads2, foodId, llabel, L1, description)
})
the table is still generated correctly, but I get a warning in the R console
Listening on http://127.0.0.1:4980
Warning: Error in filter_impl: Result must have length 1723, not 0
Stack trace (innermost first):
99: <Anonymous>
98: stop
97: filter_impl
96: filter.tbl_df
95: dplyr::filter
94: <reactive:f_vsamples> [/Users/eugenio/Dropbox/incomune/Progetti attivi/FoodMicrobioNet/attività2017/filterApp/appv2.R#240]
83: f_vsamples
82: exprFunc
81: widgetFunc
80: func
79: origRenderFunc
78: renderFunc
77: origRenderFunc
76: output$vsamples_table
1: runApp
Am I doing something wrong?
Everything goes fine if I try this
require(shiny)
require(dplyr)
n <- 10
model.data0 <-
data.frame( "COURSE" = sample(LETTERS[1:3], n, replace=TRUE),
"VALUE" = sample(1:10, n, replace=TRUE),
"TODROP" = sample(1:10, n, replace=TRUE))
ui <- fluidPage(
sidebarLayout(
sidebarPanel(
uiOutput('choose_course')
),
mainPanel(
tableOutput('courseTable')
)
)
)
server <- function(input, output, session) {
# Build data, would be replaced by the csv loading in your case
model.data0 <-
data.frame( "COURSE" = sample(LETTERS[1:3], n, replace=TRUE),
"VALUE" = sample(1:10, n, replace=TRUE),
"TODROP" = sample(1:10, n, replace=TRUE))
# Render selectInput
output$choose_course <- renderUI({
course.names <- as.vector( unique(model.data0$COURSE) )
selectInput("courses","Choose courses", choices=course.names, multiple=TRUE)
})
# Subset so that only the selected rows are in model.data
model.data <- reactive({
# subset(model.data0(), COURSE %in% input$courses)
dplyr::filter(model.data0, COURSE %in% input$courses) %>%
dplyr::select(COURSE, VALUE)
})
output$courseTable <- renderTable({ model.data() })
}
runApp(shinyApp(ui,server))
which is taken from here: https://stackoverflow.com/questions/37887482/filtering-from-selectinput-in-r-shiny. In this example, using dplyr::filter or subset does not really matter, nor does the position of the code building model.data0. So the culprit is likely the way I am using dplyr verbs in the server part of my app. Please advise.
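Not a definitive diagnosis, but one guard worth trying: until the renderUI output exists, input$view_study is NULL, and dplyr's hybrid evaluation of %in% may handle a NULL right-hand side differently from base R's subset, producing the zero-length condition the error complains about. shiny's req() is the idiomatic way to halt a reactive until an input has a value (sketch of the reactive above with the guard added):

```r
f_vsamples <- reactive({
  # req() silently stops this reactive while input$view_study is NULL,
  # i.e. before the selectInput rendered by renderUI exists:
  req(input$view_study)
  dplyr::filter(samples, studyId %in% input$view_study) %>%
    dplyr::select(label_2, s_type, n_reads2, foodId, llabel, L1, description)
})
```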