

DASH

R Package Companion to the Drone Assisted Stream Habitat (DASH) Protocol. DASH is an R package to summarize habitat metrics from data generated using the DASH protocol (Carmichael et al. 2019). Initially, the DASH R package is being created to import, perform QA/QC on, and summarize habitat data collected by on-the-ground personnel. Summarized habitat information can then be paired with fish abundance and density information for various fish-habitat models, including quantile random forest (QRF) capacity models (See et al. 2020). The eventual goal of the DASH R package is also to join fish and habitat data to stream centerlines and to generate habitat metrics from drone-collected orthomosaics. Habitat metrics from DASH include data describing characteristics such as large woody debris, undercut banks, channel unit size, etc.

Getting Started

To install the current working version of this package to your computer, you can use Hadley Wickham's devtools package. To install and load devtools (or any R package on CRAN for that matter) use:

install.packages("devtools")
library(devtools)

Once devtools is successfully installed, use the following to install DASH:

devtools::install_github("BiomarkABS/DASH", build_vignettes = TRUE)

Be sure to include the build_vignettes = TRUE argument, as this ensures that vignettes included with the package are built during install and made available.

Vignettes

Because the R packages rmarkdown and knitr are used to build the vignettes to HTML output, you must have both installed to view the vignettes available from this package. Both packages can be installed and loaded using the directions above for devtools. Vignettes can then be accessed using:

browseVignettes("DASH")

Alternatively, vignettes can be viewed in the Help menu using, for example:

vignette("otg-import-qc", package = "DASH")

DASH currently includes the following vignettes:

otg-import-qc: Describes how to import and QC on-the-ground (OTG) DASH habitat data collected using the DASH protocol and ArcGIS Survey123 data collection forms. Also describes how to resolve some errors identified during the QC process and perform some "data cleaning" to generate a summary of channel-unit scale data that can be joined to stream centerlines.

Installed Files

When you install the DASH R package, it will be downloaded to your default R library, which is typically something like:

# on a Windows machine
"C:/Users/username/Documents/R/win-library/x.x"

You can also find your current (default) library tree using:

.libPaths()

If you navigate to the installed DASH package within that directory, you'll find some "extra" useful folders that we've included and that are downloaded during install. Folders include:

DASH_Protocol/: The latest-and-greatest working version of Carmichael et al. 2019.

scripts/: R scripts that folks, both internal and external, may find useful.

Developers Note

To use devtools you may also have to download and install Rtools. The latest version of Rtools can be found on CRAN at https://cran.r-project.org/bin/windows/Rtools/.

Making Contributions

If you are interested in making contributions to DASH, consider getting a GitHub account, forking this repository, cloning it to a local directory, making your modifications, and sending us a pull request. The authors can then review any changes and merge them.

Executive Summary from DASH Protocol

This Drone Assisted Stream Habitat (DASH) protocol outlines procedures to collect accurate habitat data in an efficient and cost-effective manner that can be implemented across large spatial scales. Habitat attributes are collected primarily at the channel-unit (i.e., pool, riffle, run, rapid +, side channel) scale and secondarily at the reach (e.g., 100m - 1km) scale. Channel-unit scale habitat data can then later be summarized at larger scales if desired. By integrating high-resolution drone imagery, and when available, bathymetric light detection and ranging (LiDAR) data with minimal ground crew data collection, this protocol provides robust and accurate habitat data to inform habitat status and trends as well as fish-habitat modeling efforts. Ground crews delineate channel units, collect habitat attributes that cannot be obtained from remote sensing data, and collect high-resolution GPS information so that on-the-ground data is spatially explicit and easily compatible with remote sensing (e.g., drone, LiDAR) data. Data collected by ground crews can also be used to cross-validate remotely sensed data, when desired.

This protocol builds on previously developed methods for habitat sampling, and improves upon them by leveraging: 1) sub-meter global navigation satellite system (GNSS) receivers; 2) cost-effective drone imagery collection, image stitching, and photogrammetry; and 3) semi-automated data post-processing. Many of the ground crew methods used here have been adapted and simplified from the Columbia Habitat Monitoring Program (CHaMP) in an effort to increase survey repeatability and to remove potential human error. All data collection efforts are georeferenced and topologically compatible to increase repeatability of methods and data collection locations, addressing a primary criticism of previous CHaMP survey efforts.

Another concern from previous habitat monitoring programs was the inability to extrapolate site-level data to larger (e.g., tributary, watershed) scales. With the DASH protocol, the intent is to circumvent the need to extrapolate data by collecting data for individual channel units in a rapid manner and using remote sensing technologies. During initial efforts, channel unit data will be collected at the reach scale (e.g., 3 km reaches); however, this protocol can easily be applied to larger (e.g., tributary, watershed) scales because of the speed and cost of drone imagery data collection and the minimal use of ground crew data collection. Habitat data acquired using this protocol can be paired with channel unit scale or larger scale fish abundance and density estimates to better elicit fish-habitat relationships. For example, estimates of capacity could be generated at any desired scale using available models (e.g., quantile regression forest [QRF] capacity models). The DASH protocol can be used for status and trends estimates of watershed health because of the ability to repeat measurements efficiently and effectively across large spatial scales. In addition, by enabling the use of drone and remote-sensing data, this protocol reduces labor, providing a cost-effective tool for habitat data collection that supports status and trend evaluation and model products to better inform habitat restoration prioritization and planning.

Literature Cited

Carmichael, R.A., M.W. Ackerman, K. See, B. Lott, T. Mackay, and C. Beasley. 2019. Drone Assisted Stream Habitat (DASH) Protocol, DRAFT.

See, K.E., M.W. Ackerman, R.A. Carmichael, S.L. Hoffmann, and C. Beasley. In Review. Estimating Carrying Capacity for Juvenile Salmon Habitat using Quantile Random Forest Models. Ecosphere.

Questions?

Please feel free to post an issue to this repository for requested features, bug fixes, errors in documentation, etc.

Licenses

Text and figures : CC-BY-4.0

Code : See the DESCRIPTION file

Data : CC-0 attribution requested in reuse

Cheers!


dash's Issues

Acceptable Undercut Location

What are the acceptable values for Location in the Undercut csv? In the 2019 data, almost all of them are either "Right_Bank" or "Left_Bank", which makes sense. However, there are a few weird ones, such as:

  • "Island": 2

  • "Island,Right_Bank": 1

  • "Right_Bank,Left_Bank": 3

I realize that if a surveyor finds an undercut on an island, there should be some way to identify where it is. But then I think it should either just be "Island" for somewhere on an island, or "Island,Left_Bank" / "Island,Right_Bank".

Currently, the QA/QC function flags anything other than "Right_Bank" and "Left_Bank".

Update Ocular Estimate and Pebble Count QCs and Protocol

Beginning in 2019, the desire is that ocular substrate estimates are recorded for ALL channel units (both slow and fast), and in addition, pebble counts are taken in the first 10 fast water channel units.

  1. Currently, the QC only checks for missing ocular substrate estimates in slow channel units and needs to be updated to reflect the above.
  2. The DASH protocol needs to be updated to reflect this.
  3. We should consider whether still-missing ocular estimates can be imputed in the rollup_cu() function. These would be the remaining CU records where field staff forgot to estimate in the field AND we can't reliably estimate from drone imagery.

@rcarmichael3 did, we believe, fill in all of the missing values he could in the 2018 data using drone imagery. Perhaps the same needs to be done to the 2019 and 2020 data?

Also, this brought up the idea of moving the master DASH document over to the repo to put it under version control. Can we move the .docx and .pdf versions over to the repo and delete them from SharePoint? And we'll consider moving to a .Rmd file in the future.

2018 braidedness metrics missing Large Side Channel data

The dash2018_rollup_v2.R script calculates sinuosity and braidedness based on 2 shapefiles, a centerline shapefile and a side channel shapefile, for each watershed. However, neither shapefile includes large side channels, which leaves the total length of any habitat reach with a large side channel smaller than it should be.

Sinuosity is only calculated using the main channel (centerline), but the wet_braid and wet_braid_sin metrics use the total length of the whole habitat reach, including side channels.

This oversight could also impact QRF capacity estimates, as the fish / m estimate will be multiplied by a total length that is too small, leading to total capacity estimates that are biased low for habitat reaches with large side channels. Currently the MRA_QRF repo uses the sum of all the channel unit lengths as the length for a habitat reach, and that does include large side channels.

@mackerman44 or @rcarmichael3, did I misread or misstate what's going on here?

"Spreading" Discharge Estimates Across CUs

The DASH package currently has the functionality to calculate discharge at channel units where station width, depth, and velocity measurements are made, and those calculated discharge estimates are attached to those channel units. That functionality is located in the calc_discharge() and rollup_cu_discharge() functions.

However, it seems we would want to "spread" those discharge estimates across all channel units, e.g., the discharge estimate at a channel unit is applied to all downstream or upstream CUs (depending on how CUs were laid out) until you reach the TOS, BOS, or another channel unit where discharge is estimated. We need to determine the best way to do that and add a function that is likely applied in or right after the rollup_cu_discharge() function.
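As a starting point, here is a minimal sketch of one possible fill-down approach, assuming a channel-unit data frame ordered from upstream to downstream within each site and a cu_q column that is NA except where discharge was measured (site_name, cu_number, and cu_q are assumed column names, not necessarily the package's schema):

library(dplyr)
library(tidyr)

spread_discharge <- function(cu_df) {
  cu_df %>%
    group_by(site_name) %>%
    arrange(cu_number, .by_group = TRUE) %>%
    # carry the most recent measured discharge forward to subsequent CUs
    fill(cu_q, .direction = "down") %>%
    # CUs above the first measurement inherit the first estimate
    fill(cu_q, .direction = "up") %>%
    ungroup()
}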

Replace one_of() with any_of() or all_of()

Low priority.

If I get a wild hair, consider replacing one_of() with any_of() or all_of(). First, need to determine if one_of() can universally be replaced with any_of(). What does all_of() replace?
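For reference, a rough sketch of how the tidyselect helpers differ (toy data, not package code): any_of() silently ignores names that are absent, all_of() errors on them, and one_of() is superseded and only warns.

library(dplyr)

df <- tibble(a = 1:3, b = 4:6)

df %>% select(any_of(c("a", "z")))    # keeps "a", silently skips the missing "z"
# df %>% select(all_of(c("a", "z")))  # would error: column "z" doesn't exist
df %>% select(one_of(c("a", "z")))    # superseded; keeps "a" but warns about the unknown column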

Little Springs 2019 Small Side Channel Data

Within the 2019 OTG DASH data, there's a miscellaneous Excel file within the Little Springs survey that appears to contain some small side channel data, presumably widths? At some point, need to figure out what to do with this...if anything.

S:/data/habitat/DASH/OTG/2019/lemhi/raw/Little Springs_Survey123_2019/Little Springs Small SC_2019.xlsx.

Perhaps wait to see if some of this information is missing during the QC/roll-up process.

2018 Pahsimeroi habitat reach #5

This habitat reach has a large side channel with many missing metrics. Is it possible that only the main channel was surveyed on the ground? However, the large side channel has channel units with aquatic veg cover and total cover metrics (they are the only non-NA metrics recorded).

In addition, the length needs to be sorted out. I see 3 values in the dash2018_hr object:

  • Length.x: 588.4
  • Length.y: 143.4
  • Length.y: 141.4

Part of that issue is that the centerline file contains 2 centerlines for this reach (one follows the main channel, one follows the large side channel), but the side channel shapefile doesn't contain anything for this reach.

Fix some issues with 2018 data

I recently came across some issues with the 2018 DASH data, including a duplicated habitat reach (#6 in the Pahsimeroi), and a realization that the script dash2018_rollup_v2.R contained an error that was not computing any side channel areas in the habitat rollup.

In the process of examining that, a few errors were found in the centerline shape files. Richie has fixed those and I'll be overwriting them on the NAS.

I will be working on that rollup script, and re-saving its output.

Eighteenmile Creek missing centerlines & OTG data

The dash_cu_points_1920.shp file contains collector points for Eighteenmile Creek, supposedly from 2020. However, I couldn't find centerlines, orthomosaics or OTG data for it. Should these points be deleted or ignored? Or is there missing data somewhere that needs to be brought into the NAS?

Grouse 2020 CU table has multiple parent global IDs

The first 34 channel units in the CU_1.csv file have the same parent global ID, which matches a global ID in surveyPoint_0.csv. After that, the parent global IDs start increasing by 1 in the last few digits with every channel unit. None of those have a match in the surveyPoint_0.csv file, so the channel unit rollup ends up with NAs in several columns for those channel units, including site name, survey date, etc.

@rcarmichael3, any thoughts about why this might happen? Should someone go into the CU_1.csv file and manually change all those parent global IDs to match the first 34?

Channel units in multiple habitat reaches

There are four channel units in the dash_cu_points_ shape files that have some points with one habitat reach listed, and other points with another. All points show the same channel unit type. The four are:

  1. Little Springs 2019 CU 209 (Riffle): habitat reaches 30 (2 pts) and 31 (1 pt)
  2. Lower Lemhi 3 2019 CU 3 (SSC): habitat reaches 1 (3 pts) and 2 (3 pts)
  3. Upper Lemhi 1 2019 CU 69 (SSC): habitat reaches 11 (3 pts) and 12 (3 pts)
  4. Lower Pahsimeroi MRA 2018 CU 9 (Riffle) habitat reaches 2 (1 pt) and 3 (2 pts)

Review QC results in "/1_formatted_csvs/" directory

At some point, somebody still needs to review each of the latest "qc_results_YYYYMMDD.csv" files in the "2019/lemhi", "2019/nf_salmon", and "2020/secesh" directories and resolve errors in the "/2_qcd_csvs/" data where possible. Further directions are provided in the "01_otg_import_qc.R" script and below:

# At this point, someone very familiar with the OTG data (preferably a field technician or field coordinator,
# secondarily a project leader) should likely intervene, review the remaining QC errors that we just wrote
# to file, attempt to resolve those, and ideally, make notes for those QC errors that can't be resolved.
# In addition, for the QC errors that are resolved, it is useful to provide notes on how they were resolved. Notes
# on how errors were or were not resolved can be useful towards improving data validation (e.g., during field
# collections) or quality control steps in the future.

# Above, we imported data from the "/1_formatted_csvs/" directory for 3 year x watershed combinations ("2019/lemhi",
# "2019/nf_salmon", and "2020/secesh"), and in the case of "2019/lemhi" resolved some common issues in ocular
# estimates using the rescale_values() function and fish cover estimates using fix_fish_cover(), and then wrote out all
# of the remaining identified errors to a file in each respective directory titled "qc_results_YYYYMMDD.csv" which
# is the file that should be reviewed.

# The next suggested step is to copy/paste all of the Survey123 data to a new directory for each year x watershed
# combination; we used "/2_qcd_csvs/". Then, only data in the "/2_qcd_csvs/" directories should be modified while
# reviewing the QC results. Doing so preserves the integrity of the raw data in "/1_formatted_csvs/", which is good
# general practice. However, it is fine to add notes in the "qc_results_YYYYMMDD.csv" about how errors were or were
# not resolved (those QC results can always be replicated in the future using this script).

# Note: the data in the "/1_formatted_csvs/" directory and "/0_raw_csvs/" directories only differ in that file
# formatting issues may have been identified during data import (i.e., no data has changed).

# Now that errors have been resolved (which, as of 20201209, they have not all been; some still need review), we can
# move on and re-import the "/2_qcd_csvs/" data in which some errors have been resolved.

QC for Pebble Values

Need to add a QC to qc_cu() to check that pebble values match the expected values. Should be able to get the expected values from Richie and/or the DASH protocol. Just a reminder.

Resolve Station Widths in 2020 Secesh Surveys

We need to review and resolve the station widths in the "2020/secesh" surveys which includes data for Grouse, Lake, and Summit creeks. Any changes should occur in the "/2_qcd_csvs/" folder. From Kevin's e-mail:

I also noticed that a number of the discharge metrics for 2020 are 0, which seems odd. Looking at the discharge measurements, it looks like maybe the station widths weren’t captured correctly? For example, in the Grouse Discharge_Measurements_6.csv file, all the station widths are either 0.25 or 0.64. Seems odd, no?

Might also be worth adding a QC to make sure that station widths are unique i.e., station widths, it seems, should occur in consecutive increments.

Flag NAs in `impute_cols`

At some point, I need to go back and ensure that NAs are being flagged during the QC process in any column that is later imputed during the rollup process. These should be the columns supplied to the impute_cols argument in rollup_cu_wood(), rollup_cu_jam(), and rollup_cu_undercut() (others?).

Update DASH Protocol

Reminder to review and update the DASH protocol from lessons learned developing the pipeline.

README - Clone, Build, and Install

In the README.txt file, under Getting Started, consider adding instructions on how to clone, build, and install DASH, especially for internal folks, and perhaps, also include brief instructions on making a pull request.

edit_date_2

Make sure edit_date_2 is eliminated during otg_to_cu() after re-processing data when back in "office"

Compare Column Names

I created a function check_col_names() to compare a list of column names from a data frame being imported to a list of expected column names. However, I see there is a function janitor::compare_df_cols() that does a similar thing. Consider seeing if I can use compare_df_cols() instead.
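For reference, a rough sketch of what janitor::compare_df_cols() returns (toy data frames, not the package's import code): one row per column name, with the class of that column in each supplied data frame (NA where absent), so missing or mismatched columns stand out.

library(janitor)

observed <- data.frame(site_name = "Grouse", cu_number = 1L)
expected <- data.frame(site_name = character(), cu_number = integer(), survey_date = character())

# compare the imported columns against the expected columns
compare_df_cols(observed, expected)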

Improve QC Error Messages

A reminder to myself to, at some point, review and improve my error messages in the QC process. In many cases, I say the value is outside the expected range, but I should also just provide the value.

Mismatch between OTG and centerline site names

Some of the site_names in the OTG survey data don't match with the centerline Site_ID. Things like LowerLemhi3_2019 vs. Lowerlemhi3_2019, or Lower Lemhi_2, or even "Lemhi River". We can fix this by hand when matching the two datasets up, but it would be good to establish a better process, maybe pre-loading site names into Survey123 or something?

2018 NAs for pool metrics

In the 2018 data, there are some habitat reaches (9 of them) that contain no pool channel units. Therefore, the metrics related to max pool depth are NAs. Is that appropriate, or should we set those to 0?

When used for QRF prediction of capacity, these could end up being imputed, or we'll need to flag them so they are not.

Discharge Function

We need to create a function that calculates discharge at any location where those measurements were made.

Here is a description of discharge (Q) that I grabbed, which I believe is from the CHaMP metric dictionary:

The sum of station discharge across all stations. Station discharge is calculated as depth x velocity x station increment for all stations except first and last. Station discharge for the first and last station is 0.5 x station width x depth x velocity.
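For reference, a minimal sketch of that calculation for a single location, assuming vectors of station widths, depths, and velocities ordered along the transect (and treating the station increment and station width as the same value, which is an assumption):

# station discharge is width x depth x velocity, halved for the first and last stations
calc_q_sketch <- function(station_width, depth, velocity) {
  n <- length(depth)
  wt <- rep(1, n)
  wt[c(1, n)] <- 0.5
  sum(wt * station_width * depth * velocity, na.rm = TRUE)
}

# example: five stations spaced 0.25 m apart
calc_q_sketch(station_width = rep(0.25, 5),
              depth = c(0.20, 0.35, 0.40, 0.30, 0.15),
              velocity = c(0.1, 0.4, 0.6, 0.3, 0.1))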

Little Springs Side Channel Data

There is a file within the Little Springs 2019 DASH survey data that appears to contain width measurements for small side channels there:

S:/data/habitat/DASH/OTG/2019/lemhi/0_raw_csvs/Little Springs Small SC_2019.xlsx.

Need to "resolve" that file. Perhaps that data just needs to be entered into the CU data in the 2_qcd_csvs?

Looking into it now...

Add QC to verify "site_name" is filled out

I need to add a QC step to verify that the "site_name" field in the otg_type = "surveyPoint_0.csv" data is filled out. Just identified that the site name for the LowerLemhi3_2019 survey is not filled in.

2018 SSC avg width of 0

In the 2018 data, there are 2 small side channels with an average width of 0. They are:
UpperLemhi_2018, hab_rch 3, cu_id 012
UpperSalmon_2018, hab_rch 10, cu_id 062

This seems incorrect.

2018 NAs for substrate metrics

In the 2018 data:

There are a number of habitat reaches (3 of them) that contain no riffle channel units, so no pebble counts were conducted and the D50 and D84 metrics are NA.

Similarly, there are 6 habitat reaches with no pools or runs, so it appears no percent fines/gravel/cobble/boulders was recorded in those reaches, and the percent substrate metrics are NA.

That is accurate, but if a QRF model uses one of those metrics, we'll end up imputing it. Are we OK with that?

Survey123 and Collector Templates

@rcarmichael3 is going to provide the data entry templates for both Survey123 (used to collect the OTG data) and Collector and/or information on how to log into our account and access that info so that we can change variable names or improve templates, if needed.

Also, might be worthwhile to include those templates in the repo, maybe inst/templates/ so that they are included during package install.

Start DASH Metric Dictionary

Reminder to start a metric dictionary for DASH metrics, likely to include metrics that are collected OTG and metrics that are summarized both to the channel unit (CU) and habitat reach (HR) scale. Something similar to the CHaMP metric dictionary, but better!

`rescale_percents()` to be removed

I just added the rescale_values() function as a simplified version of rescale_percents() that simply:

  1. sums the designated col_names to calculate a new column sum_values
  2. identifies records where sum_values does not fall within an expected min_value and max_value
  3. rescales those values to be equal to sum_to (default = 100)

And returns a corrected data frame. My intent is then to use rescale_values() within functions to remedy errors in the ocular estimates and fish cover data. For now, I left rescale_percents() in place, but am leaving this issue to remind myself to eventually remove rescale_percents() after we're certain all its functionality is used in the functions to fix that data.
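For reference, a minimal sketch of the three steps described above (this is not the package's rescale_values() source; the min_value/max_value defaults here are illustrative assumptions):

library(dplyr)

rescale_values_sketch <- function(df, col_names, min_value = 90, max_value = 110, sum_to = 100) {
  df %>%
    # 1. sum the designated columns into sum_values
    mutate(sum_values = rowSums(across(all_of(col_names)), na.rm = TRUE)) %>%
    # 2. identify records where sum_values falls outside the expected range, and
    # 3. rescale those records so the designated columns sum to sum_to
    mutate(across(all_of(col_names),
                  ~ if_else(sum_values > 0 & (sum_values < min_value | sum_values > max_value),
                            .x / sum_values * sum_to,
                            .x)))
}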

Clean ESRI Online Data

Just a reminder to organize the OTG, collector, and survey template forms on ESRI online.

QA/QC cover metrics

The sum of the various cover metrics can be more than 100%. However, I would suggest changing the qc_cu() function to flag the following issues related to cover (instead of the current "does it sum to 100?" check); a rough sketch follows the list:

  • Are some (but not all) of the cover metrics for a particular Global ID NA? These could be made 0, before moving on.

  • Are any of the cover measurements in decimals, rather than multiples of 10? If they are all less than 1, this could be fixed before moving on.

  • For multiples of 10, if we add all the cover metrics except "Total No Cover", it should be less than (100 - Total No Cover). If it's not, flag it.

  • Is "Total No Cover" more than 100?

  • If we add all the cover metrics including "Total No Cover", it should be at least 100. Flag any that are smaller
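A minimal sketch of what those checks might look like, assuming a channel-unit data frame with a total_no_cover column and the remaining cover columns named in a vector cover_cols (all names here are assumptions; this is not the package's qc_cu() code):

library(dplyr)

flag_cover_sketch <- function(cu_df, cover_cols) {
  cu_df %>%
    rowwise() %>%
    mutate(
      cover_sum  = sum(c_across(all_of(cover_cols)), na.rm = TRUE),
      n_cover_na = sum(is.na(c_across(all_of(cover_cols)))),
      # some, but not all, cover metrics are NA (candidates to set to 0)
      flag_some_na    = n_cover_na > 0 & n_cover_na < length(cover_cols),
      # values recorded as decimals rather than multiples of 10
      flag_decimals   = any(c_across(all_of(cover_cols)) %% 10 != 0, na.rm = TRUE),
      # cover metrics other than "Total No Cover" should sum to <= 100 - total_no_cover
      flag_too_much   = cover_sum > (100 - total_no_cover),
      # "Total No Cover" itself should not exceed 100
      flag_no_cover   = total_no_cover > 100,
      # all cover metrics plus "Total No Cover" should sum to at least 100
      flag_too_little = (cover_sum + total_no_cover) < 100
    ) %>%
    ungroup()
}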

Resolve Station Widths

According to @rcarmichael3 Ricardo, "station width refers to the distance between each depth and velocity measurement, so they should be the same across each measurement. So if all of them are 0.25 or 0.64 that would make sense to me. At least that is how it is supposed to be recorded."

This was in response to @KevinSee's earlier remarks to me "I also noticed that a number of the discharge metrics for 2020 are 0, which seems odd. Looking at the discharge measurements, it looks like maybe the station widths weren’t captured correctly? For example, in the Grouse Discharge_Measurements_6.csv file, all the station widths are either 0.25 or 0.64. Seems odd, no?"

So it seems we need to review how measurements are collected and how discharge is calculated. 1) It appears that station widths may have been collected incorrectly in 2018 & 2019, at least at first glance. However, 2) this doesn't necessarily explain Kevin's remark above where a number of the discharge metrics for 2020 are 0.

In any case, worth a review.

2018 Pahsimeroi Hab_Roll #9

In the 2018 data, Richie noticed a missing QRF capacity prediction in the MRA_QRF repo for, we believe, Pahsimeroi Hab_Roll 9 which Richie says is channel units 73-77. These channel units have data, but Hab_Roll 9 doesn't appear in the centerline shapefile used to calculate sinuosity and braidedness metrics. Why not?

Here's a case for SCOOBY DOO (or Mike or Kevin)!

datetime imports

Currently, datetime imports occurring in read_otg_csv() and with column specs provided in get_otg_col_specs() are a little problematic. Some are being read in with an AM/PM designation and in 12-hr format, whereas others are being read in as 24-hr format. I initially tried to remedy this using readr::col_datetime(format = "%m/%d/%Y %H:%M"), but to no avail.

For the moment, I'm simply reading in date time columns (e.g., Survey Date, EditDate, CreationDate) as readr::col_character(), which is maybe okay? i.e., is it up to DASH to determine datetime specifications? Or do we need to be better about date formatting in Survey123?
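If we do keep reading those columns in as character, one possible remedy is to parse them afterwards with lubridate, supplying both the 12-hr AM/PM and 24-hr orders (a sketch only, not a decision about where datetime handling should live):

library(lubridate)

x <- c("7/15/2019 2:30 PM", "7/15/2019 14:30", "8/02/2019 08:05")

# tries each order, so both AM/PM ("IMp") and 24-hr ("HM") values parse
parse_date_time(x, orders = c("mdY IMp", "mdY HM"))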

NAs in length_m, diameter_m, width_m, height_m, and estimated_number_of_pieces columns in wood & jam data

Related to Issue #15

We need to determine the best method for dealing with NAs in the length_m, diameter_m, width_m, height_m, and estimated_number_of_pieces columns in the otg_type = "Wood_2.csv" and otg_type = "Jam_3.csv" data. These are individual LWD and jam measurements that are missed in the field (or less likely, fall outside of expected QC values and we don't know how to resolve).

Currently, within rollup_cu_wood() and rollup_cu_jam(), I use the median among all other measured values within a channel unit to fill in the NA. However,

  1. This may not be the most appropriate solution, and
  2. This fails when no other pieces of wood or jams were measured within the channel unit.

For now, I'm proceeding with the above, temporary solution, but we should identify a better one and then add it to the above functions for imputing missing values during the rollup step.
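For reference, a rough sketch of the current, temporary approach (median imputation within a channel unit). The toy data and column names here are assumptions; this is not the rollup_cu_wood() source:

library(dplyr)

# toy wood data: two channel units (parent_global_id), with some missing measurements
wood_df <- tibble(
  parent_global_id = c("cu1", "cu1", "cu1", "cu2"),
  length_m   = c(2.0, NA, 3.0, NA),
  diameter_m = c(0.2, 0.3, 0.4, NA)
)

wood_df %>%
  group_by(parent_global_id) %>%   # i.e., within each channel unit
  mutate(across(c(length_m, diameter_m),
                ~ if_else(is.na(.x), median(.x, na.rm = TRUE), .x))) %>%
  ungroup()
# note: cu2 has no measured pieces, so its NAs remain NA --
# exactly the failure case described in point 2 above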

NAs in wet, channel_forming, and ballasted Wood Columns

We need to determine how to deal w/ NAs in the wet, channel_forming, and ballasted columns of the wood data. These are records where the field tech forgot or had an error when recording that information. We currently aren't using that information "downstream", but it could be useful at some point...

One idea is to calculate the proportion Yes/No for each column within a given channel unit OR survey (which is important), and then draw from those proportions to determine whether a given NA should be Yes/No (e.g., if 60% of all wood pieces in a survey are wet, draw from that to determine Yes/No with a 60% chance of being "Yes"). Stated that very inelegantly.
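A rough sketch of that idea, imputing the wet column by drawing from the observed survey-level proportion of "Yes" values (toy data; all names are assumptions, and whether to group by channel unit or survey is the open question above):

library(dplyr)

set.seed(42)  # reproducible draws

# toy wood data: one survey, with "wet" missing for two pieces
wood_df <- tibble(
  survey_id = "UpperLemhi_2019",
  wet = c("Yes", "Yes", "Yes", "No", NA, NA)
)

wood_df %>%
  group_by(survey_id) %>%
  mutate(
    p_yes = mean(wet == "Yes", na.rm = TRUE),  # observed proportion of "Yes"
    wet = if_else(is.na(wet),
                  sample(c("Yes", "No"), n(), replace = TRUE,
                         prob = c(p_yes[1], 1 - p_yes[1])),
                  wet)
  ) %>%
  select(-p_yes) %>%
  ungroup()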

2019 Station Widths

Station widths in the "DischargeMeasurements_6.csv" files in the /2_qcd_csvs/ folders are incorrect in the 2019 data. We need to identify the root of the problem and resolve it in the 01_otg_import_qc.R script, using values from the /1_formatted_csvs/ folders.

Duplicated CUs in centerline files

When pulling together the updated centerline files, I found 6 instances of duplicated channel unit numbers. They are:

  • Big Timber 1, 2019, CU 11
  • Grouse, 2020, CU 41
  • Hawley, 2019, CU 176
  • Kenney 2, 2019, CU 34
  • Kenney 2, 2019, CU 98
  • Upper Lemhi 3, 2019, CU 162

Some of these appear to be adjacent segments, but some of these appear to be distinct channel units (and the Big Timber 1 appears to have a segment far, far away from the rest of the centerline). These centerline files are meant to be all cleaned up before we start joining other data to them, so these need to be addressed.
