
ds-pipelines-1's Issues

Create a branch in your code repository

You'll be revising files in this repository shortly. To follow our team's standard git workflow, you should first clone this training repository to your local machine so that you can make file changes and commits there.

Open a Git Bash shell (Windows) or a terminal window (Mac) and change (cd) into the directory you work in for projects in R (for me, this is ~/Documents/R). There, clone the repository and set your working directory to the new project folder that was created:

git clone git@github.com:RAtshan/ds-pipelines-1.git
cd ds-pipelines-1

Now you should create a local branch called "structure" and push that branch up to the "remote" location (which is the GitHub host of your repository). We're naming this branch "structure" to represent concepts in this section of the lab. In the future you'll probably choose branch names according to the type of work they contain - for example, "pull-oxygen-data" or "fix-issue-17".

git checkout -b structure
git push -u origin structure

By using checkout, you have switched your local branch from "master" to "structure", and any changes you make from here on out to tracked files will not show up on the master branch. To take a look back at "master", you can always use git checkout master and return to "structure" with git checkout structure. We needed the -b flag initially because we wanted to combine two operations - creating a new branch (-b) and switching to that new branch (checkout).

While you are at it, this is a good time to invite a few collaborators to your repository, which will make it easier to assign them as reviewers in the future. In the ⚙️ Settings widget at the top of your repo, select "Manage access". Go ahead and invite your cohort coworkers, aappling-usgs, and jread-usgs. It should look something like this:
[screenshot: "add some friends" - repository access settings with collaborators invited]

Close this issue when you've successfully pushed your branch to remote and added some collaborators. (A successful push of the branch will result in a message that looks like this: "Branch 'structure' set up to track remote branch 'structure' from 'origin'.")


I'll send you to the next issue once you've closed this one.

Get started with USGS Data Science pipelines

Data analyses are often complex. Data pipelines are ways of managing that complexity. Our data pipelines have two foundational pieces:

  • Good organization of code scripts helps you quickly find the file you need, whether you or a teammate created it.

  • Dependency managers such as remake, scipiper, and drake formalize the relationships among datasets and functions to ensure reproducibility while also minimizing unnecessary runtime as you're creating or modifying parts of the pipeline.

⌨️ Activity: Assign yourself to this issue to get started.

💡 Tip: Throughout this course, I, the Learning Lab Bot, will reply and direct you to the next step each time you complete an activity. But sometimes I'm too fast when I ⏳ give you a reply, and occasionally you'll need to refresh the current GitHub page to see it. Please be patient, and let my humans know (jread-usgs or aappling-usgs) if I seem to have become completely stuck.


I'll sit patiently until you've assigned yourself to this one.

Organize your project files

You should organize your code into functions, targets, and conceptual "phases" of work.

Often we create temporary code or are sent scripts that look like my_work_R/my_happy_script.R in this repository. Take a minute to look through that file now.

This code has some major issues: it uses a directory that is specific to one user, it plots to a non-project file location, and its structure makes it hard to figure out what is happening. This simple example is a starting point for understanding the investments we make to move toward code that is more reproducible, shareable, and understandable. Additionally, we want to structure our code and our projects in a way that lets us build on top of them as the projects progress.
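For a sense of the patterns that cause those issues, here is an illustrative sketch in R (these lines are hypothetical, not the actual contents of my_happy_script.R):

# Hypothetical lines illustrating the problems named above:
setwd("C:/Users/jsmith/Documents/my_analysis")   # user-specific directory
data <- read.csv("mydata.csv")                   # only findable on jsmith's machine
png("C:/Users/jsmith/Desktop/plot.png")          # plots to a non-project location
plot(data$x, data$y)
dev.off()

None of this runs on a teammate's computer without edits, and nothing about the script's shape tells you which steps depend on which.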

Assign this issue to yourself and then we'll get started on code and project structures.


I'll sit patiently until you've assigned yourself to this one.

Why use a dependency manager?

We're asking everyone to invest in the concepts of reproducibility and efficiency of reproducibility, both of which are enabled via dependency management systems such as remake, scipiper, and drake.

Background

We hope that the case for reproducibility is clear - we work for a science agency, and science that can't be reproduced does little to advance knowledge or trust.

But the investment in efficiency of reproducibility is harder to boil down into a zingy one-liner. Many of us have embraced this need because we have been bitten by issues in our real-world collaborations, and found that data science practices and a reproducibility culture offer great solutions. Karl Broman, an advocate for reproducibility in science and a faculty member at UW-Madison, has given many talks on the subject, and we're going to ask you to watch part of one of them so you can be exposed to some of Karl's science challenges and solutions. Karl will be talking about GNU make, which is the inspiration for almost every modern dependency tool that we can think of. Click on the image to kick off the video.

[video: reproducible workflows with make]

💻 Activity: Watch the above video on make and reproducible workflows up to the 11-minute mark (you are welcome to watch more).

Use a GitHub comment on this issue to let us know, in no more than 300 words, what you thought was interesting about these pipeline concepts.


I'll respond once I spot your comment (refresh if you don't hear from me right away).

The anatomy of a remakefile

Most of our pipelines in R use a remakefile to "orchestrate" the connections among files, functions, and phases. In this issue, we're going to develop a basic understanding of how these files work, starting with the anatomy of a remakefile.

Background

A remakefile uses the YAML file format. A basic understanding of YAML is important for making use of remakefiles, including creating your own or editing existing files. Additionally, several of the tools and workflows common to USGS data science take advantage of a YAML file to do other things too, so you may bump into them elsewhere. YAML ("YAML ain't markup language") files are fairly simple, flexible, and readable, but there is a bit of a learning curve. We're not going to get too far into YAML here, but reading up on it or finding a good reference for future use might be a good idea.
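YAML documents map cleanly onto nested R lists, which is essentially how remake-style tools read them. A minimal sketch, assuming the yaml package is installed:

library(yaml)

# Parse a small YAML string into a nested R list
config <- yaml.load("
sources:
  - code.R
targets:
  all:
    depends: figure_1.png
")

str(config)
config$targets$all$depends  # "figure_1.png"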

Using remakefiles in data science pipelines

In addition to phases (which we covered in #3 (comment)), it is important to decompose high-level concepts (or existing scripts) into thoughtful functions and "targets" that form the building blocks of data processing pipelines. A target is a noun we use to describe a tangible output of a function, which is often a file or an R object. Targets can be used as an end product (like a summary map) or as input into another function to create another target.


The simplest version of a remakefile (adapted from the remake repo) might look something like this:

sources:
  - code.R

targets:
  all:
    depends: figure_1.png

  model_RMSEs.csv:
    command: download_data(out_filepath = "model_RMSEs.csv")

  plot_data:
    command: process_data(in_filepath = "model_RMSEs.csv")

  figure_1.png:
    command: myplot(out_filepath = "figure_1.png", data = plot_data)

This file defines the relationships between different "targets" (see how the target model_RMSEs.csv is an input to the command that creates the target plot_data?), tells us where to find any functions that are used to build targets (see that sources points you to code.R), and isolates the output(s) that must be created in order to complete the all target (in this case, it is only the figure_1.png file).

Even though this is a simple example, there is some new syntax that may be confusing. We'll explain a few of these quickly:

  • command is a field that specifies what function should be called in order to build each target.
  • depends is a field that explicitly specifies a dependency of a target. So when that dependency is considered "out of date" (more on that later), the target that lists it in depends is also considered "out of date". In this way, the "all" target can't be complete and up-to-date until figure_1.png is.
  • all is a special target that groups other targets. We'll cover group targets - like all - more later on.
  • Why does model_RMSEs.csv show up three times? model_RMSEs.csv is a file target that gets created when the command download_data is run, and it is also an input to that same function, telling the function what file name to write the data to. Then "model_RMSEs.csv" shows up as an input to another function, process_data, which looks like it reads in that file and changes (or "processes") it in some way. (A sketch of what code.R might contain follows this list.)
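The remakefile above names download_data, process_data, and myplot, but their bodies here are illustrative assumptions for demonstration, not the course's actual code:

# code.R -- illustrative implementations of the functions named in remake.yml

download_data <- function(out_filepath) {
  # fetch (or, for this sketch, fabricate) raw data and write the file target
  rmse <- data.frame(exper_n = 1:3, RMSE = c(0.42, 0.31, 0.55))
  write.csv(rmse, out_filepath, row.names = FALSE)
}

process_data <- function(in_filepath) {
  # read the file target and return an R-object target (a data.frame)
  read.csv(in_filepath)
}

myplot <- function(out_filepath, data) {
  # render the processed data to the figure file target
  png(out_filepath)
  plot(data$exper_n, data$RMSE, type = "b", xlab = "experiment", ylab = "RMSE")
  dev.off()
}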

We're going to start with this simple example, and modify it to match our pipeline structure. This will start by creating a new branch, creating a new file, adding that file to git tracking, and opening a new pull request that includes the file:

⌨️ Activity: get your code plugged into a remake file

First things first: We're going to want a new branch. You can delete your previous one, since that pull request was merged.

git checkout master
git pull
git branch -d structure
git checkout -b remakefile
git push -u origin remakefile 

Next, create the file with the contents we've given you by entering the following from your repo directory in terminal/command line:

cat > remake.yml
sources:
  - code.R

targets:
  all:
    depends: figure_1.png

  model_RMSEs.csv:
    command: download_data(out_filepath = "model_RMSEs.csv")

  plot_data:
    command: process_data(in_filepath = "model_RMSEs.csv")

  figure_1.png:
    command: myplot(out_filepath = "figure_1.png", data = plot_data)

Then press Ctrl+D to exit file-creation mode and return to the prompt.


Finally, create a pull request that includes this new file (the file should be called remake.yml).


When I see your pull request, I'll make some in-line suggestions for next steps.

What's next

You are doing a great job, @RAtshan! 🌟 💥 🐠

But you may be asking why we asked you to go through all of the hard work of connecting functions, files, and targets together using a YAML file. We don't blame you for wondering...


The real power of dependency management is when something changes - that's the EUREKA! moment, but we haven't put you in a situation where it would show up. That will come further down the road in later training activities and also in the project work you will be exposed to.

In the meantime, here are a few nice tricks now that you have a functional pipeline.

  • run scmake() again. What happens? Hopefully not much. I see this:
    make all is fresh
    Which means everything is up to date, so all targets are :OK:

  • now try making a change to one of your functions in your code. What happens after running scmake() then?

  • access the plot_data target by using plot_data <- scmake('plot_data'). (You may or may not have an R-object target named plot_data in your own repo at this point, so go ahead and try it with some target that you do have.) Here we are using the first argument of scmake(), which is target_names. Any vector of target names passed in will be built (the default, when no target names are specified, is to build the all target). We can access the output of a target by assigning the result to a variable. In this example, we have called that variable plot_data, and it receives output from scmake in the form of a data.frame because that's what our example function process_data() creates. If you assign the result of a file target, like file_name <- scmake(target_names = '1_fetch/out/model_RMSEs.csv'), the result is the path to that file.

  • now try making a change to the template_1 variable in your function that creates the .txt file. What happens after running scmake() then? Which targets get rebuilt and which do not?
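A hedged sketch of this rebuild behavior in an R session (the comments paraphrase what the text above describes; exact scipiper console output may differ):

library(scipiper)

scmake()                            # first run: builds every target in remake.yml
scmake()                            # re-run: nothing out of date, so nothing rebuilds

plot_data <- scmake("plot_data")    # builds (if needed) and returns an R-object target
head(plot_data)

# For a file target, the assigned result is the file's path, not its contents:
csv_path <- scmake(target_names = "model_RMSEs.csv")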


Lastly, imagine the following comment appeared on your pull request.

Oh shoot @RAtshan, I am using your results for FANCY BIG PROJECT and I have coded everything to assume your outputs use a character for the experiment number (the exper_n column), of the form "01", "02", etc. It looks like you are using numbers. Can you update your code accordingly?

Would your code be easy to adjust to satisfy this request? Would you need to re-run any steps that aren't associated with this naming choice? Did the use of a dependency management solution allow you to both make the change efficiently (i.e., by avoiding rebuilding any unnecessary parts of the pipeline) and increase your confidence in delivering the results?
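As one hedged sketch of the requested change (assuming the processing step returns a data.frame with a numeric exper_n column, as the comment describes):

# Inside the processing function: format experiment numbers as zero-padded strings
data$exper_n <- sprintf("%02d", data$exper_n)   # 1 -> "01", 2 -> "02"

Because the dependency manager knows which targets depend on the modified function, only those targets would rebuild; anything upstream, like the raw download, stays untouched.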


You have completed the introduction to pipelines I. Great work!

Below you will find some quick links that can help you review the content covered here.
