
shellwhat's Introduction

โš ๏ธ This repo has outdated tokens in its travisci config To make new releases for this project it needs to be moved to circleci

shellwhat


shellwhat enables you to write Submission Correctness Tests (SCTs) for interactive Shell exercises on DataCamp.
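As a rough flavour of what an SCT looks like, here is a minimal sketch using two checks that are discussed further down in this document (test_student_typed and test_output_contains); the exercise, the regular expression, and the expected output string are made up for illustration, and exact signatures may differ:

# Hypothetical exercise: the student should run `pwd` and see their home directory.
Ex().test_student_typed(r'.*pwd.*')
Ex().test_output_contains('/home/repl')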

Installing

pip install shellwhat

Development

Install everything needed for developing:

pip install -r requirements.txt
pip install -e .

By default, the DummyParser is used, which does not parse the shell code. Hence, you cannot run the tests that need a real parser:

pytest -m "not osh"

If you also want to run these 'parser tests', there is a Dockerfile to parse shell commands with the Oil parser:

# Look in Makefile for details
SHELLWHAT_PARSER='docker' make test


shellwhat's People

Contributors

ddmkr, filipsch, fossabot, hermansje, machow, timsangster


shellwhat's Issues

complete output tests

Currently there is test_output_contains; we still need tests that:

  1. run an expression and check whether its result appears in the student's output.
  2. provide a thin wrapper around the previous test that runs the solution code and then does the same check (see the sketch below).
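A rough sketch of what these two checks could look like; the helper names (has_expr_output, has_solution_output) and the State attributes other than stu_output are hypothetical, not existing shellwhat API:

def has_expr_output(state, expr):
    # Run the expression and check whether its result appears in the student's output.
    result = state.run_expression(expr)            # hypothetical helper
    if result.strip() not in state.stu_output:
        state.report("Expected to find the result of `{}` in your output.".format(expr))
    return state

def has_solution_output(state):
    # Thin wrapper: run the solution code and perform the same check.
    return has_expr_output(state, state.solution_code)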

Support desktop execution of unit tests

Running python -m pytest in the root directory of shellwhat on a Mac produces:

subprocess.CalledProcessError: Command '['python2', '-m', 'osh', 'echo a b c;']' returned non-zero exit status 1.

Variations on this invocation produce similar results. When debugging shellwhat_ext SCTs, I would really like to be able to run their unit tests (and those for shellwhat, naturally) natively on my desktop.

comparing what student has done to a target

In python and R, SCTs use a solution environment to allow exercise creators to specify what they want to test. (e.g. Ex().test_object('a') will compare the variable a across the student and solution environments). In shell, there is currently only one "environment" (the process running as user repl).

While we think about the right way to create a solution environment for shell exercises, it'd be useful to lay out how to perform SCTs comparing some result to a target. For example,

  1. that the student has created all the files in a folder
  2. that the student has created a file, matching some target file

Currently, this can be done using test_expr_error. For example, if the target file is at /somedir/file-a.txt and the student was supposed to create file-a.txt in their current directory, you could run:

# cmp returns non-zero exit code (error) when files don't match
Ex().test_expr_error('cmp file-a.txt /somedir/file-a.txt')
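The same trick could cover the "all files in a folder" case; the file names below are hypothetical:

# test returns a non-zero exit code (error) when a file is missing
Ex().test_expr_error('test -f file-a.txt && test -f file-b.txt')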

Let me know of cases where test_expr_error feels clunky, and we can implement new SCTs / modify it.

cc @gvwilson

allow penalty-free execution of whitelisted shell commands

@gvwilson commented on Sun May 27 2018

As a user doing a Unix shell course, in consoleExercises I want to be able to run commands like pwd and ls as often as I like without triggering execution of SCTs (influencing submission metrics). After discussion with @filipsch, it seems we could add a whitelist of commands that short-circuit execution of SCTs. These could be provided on a per-exercise, per-chapter, or per-course basis; the last of these three is probably sufficient for everything except the first few chapters of the "Intro to Shell" course (where we definitely don't want these commands short-circuited).
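A minimal sketch of how such a whitelist could short-circuit SCT execution; the names here (SCT_WHITELIST, should_run_sct) are hypothetical and not part of shellwhat today:

# Per-course whitelist of commands that never trigger SCT execution (hypothetical)
SCT_WHITELIST = {'pwd', 'ls', 'clear'}

def should_run_sct(command):
    # Skip the SCT when the first word of the submission is whitelisted.
    parts = command.strip().split()
    return bool(parts) and parts[0] not in SCT_WHITELIST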

test_output_contains does not include errors

See the last lines of this file; in the exercise, we want to check whether a certain command produces the correct error. It seems that this is not possible through test_output_contains(). I suspect that is because the error stream is not captured in the stu_output field of the State. Requires some more investigation; just wanted a placeholder. Not urgent.
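If the cause is indeed that stderr is not captured, one possible direction (a sketch, not the current shellwhat implementation; student_code is a placeholder variable) is to merge stderr into the captured output:

import subprocess

# Redirect stderr into stdout so error messages also land in stu_output.
result = subprocess.run(['bash', '-c', student_code],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stu_output = result.stdout.decode()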

Sub-exercise not reported in build error messages

@gvwilson commented on Tue Nov 07 2017

Summary of the issue

If an SCT contains a syntax error in a BulletConsoleExercise, the build system reports the exercise number but not the sub-exercise number, which makes tracking down the bug harder.

Minimally reproducible example

  1. Create a BulletConsoleExercise with two or more bullets in a shell course.
  2. Create a syntax error in the second or subsequent bullet's SCT.
  3. Build.

The build system will report the exercise number (e.g., "Exercise 4") but not which bullet.

Additional information

Not blocking, but slowing things down.


@rv2e commented on Sun Dec 03 2017

I'll move it to shellwhat and tag it as an enhancement.

remove dependency on sqlwhat

Create protowhat with...

  • State, which is basically sqlwhat's State class (see the sketch after this list)
  • a slightly more general dispatcher, for loading the correct code parser and coordinating AST search
  • a general process for the sct_syntax.py logic
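A very rough sketch of what that shared State could look like; only the stu_output name is taken from these issues, the rest is hypothetical:

class State:
    # Minimal shared state threaded through an SCT chain.
    def __init__(self, student_code, solution_code, stu_output, dispatcher):
        self.student_code = student_code
        self.solution_code = solution_code
        self.stu_output = stu_output
        self.dispatcher = dispatcher   # picks the parser and coordinates AST search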

repl.run_command should NOT echo commands to the shell

Learners are finding it very confusing that the pre-exercise commands executed via repl.run_command are echoed to their terminal. I can fake my way around this by using repl.run_command('clear') inside each pre block.
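For reference, the workaround currently looks roughly like this in the pre-exercise code; the setup command itself is just an example:

# Pre-exercise setup; the trailing clear hides the echoed commands from the learner.
repl.run_command('mkdir -p backups')   # example setup command
repl.run_command('clear')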

Must be able to run one command for testing solution and show another to user

We need to be able to lie to our learners in our shell courses (shell and Git). Here's the use case:

  1. We want the learner to run nano config.txt to create a new configuration file.

  2. If we put that in the solution block for the exercise, automated testing fails (because we go into the editor and never come out).

  3. So instead we use echo nano config.txt as the solution and check it with test_student_typed using the regular expression '.*nano\s+config\.txt.*' (see the sketch after this list).

  4. That makes the SCT pass, but when the learner asks to see the solution, they see the extra echo command rather than just the nano command.
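Concretely, the check for step 3 is something like the following (regular expression taken from the description above):

# Passes as long as `nano config.txt` appears somewhere in what the learner typed.
Ex().test_student_typed(r'.*nano\s+config\.txt.*')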

In other exercises, the solution is cp /solutions/config.txt ./config.txt, which copies a file with the correct data from the /solutions directory (created when the Docker image is built) into the right place. Again, this gets a file comparison SCT to pass, but when the learner looks at the solution, they don't see what we want them to see.

We run into something similar when we're teaching the man command. If the solution code is:

man cut

then automated testing fails, because man cut launches less, which hangs waiting for the user to type q. To prevent this, we use:

man cut | cat

as the solution, because piping the output of man to another command suppresses the launch of the pager. Again, when the learner asks for the solution, they see something different from what they should run.

Implement Git SCTs

There are a couple of approaches we could take:

  1. Implement a shellwhat-ext library, so development could move quickly (since shellwhat updates will need to pass on the exercise validator for all shell courses)

  2. Just put them in shellwhat. We could either require gitpython, or raise an error when a git SCT is run but gitpython isn't installed (comparable to a suggested dependency in R); see the sketch after this list.
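A sketch of option 2, using gitpython when it is available; the SCT name check_git_branch and the reporting call are hypothetical:

def check_git_branch(state, name):
    # Fail early with a clear message if gitpython is not installed.
    try:
        import git                      # gitpython
    except ImportError:
        raise ImportError("Git SCTs require the gitpython package.")
    repo = git.Repo('.')
    if repo.active_branch.name != name:
        state.report("Expected to be on branch '{}'.".format(name))   # hypothetical reporting helper
    return state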

CC @gvwilson

Research SCT problems with git and shell courses

NOTE: The content dashboard does not work properly for courses with sub-exercises. This should be fixed soon.

Currently, the SCTs are regex-based. Figure out to what extent the problems people are having are related to the SCT system being too limited.

A big part of frustration could also be explained by the difference between what the solution tells students to do, and what the instructions suggest, as discussed in https://github.com/datacamp/learn-features/issues/14.
