
gofer-grader's People

Contributors

choldgraf, gavrilm, gusennan, papajohn, rameshvs, vipasu, yanay1, yuvipanda


gofer-grader's Issues

Conflict with matplotlib 2.2.3

I was able to use this successfully for a few class exercises, but I have now run into an issue.

(I know you explicitly said not to rely on this, but I really like how the Okpy client works, and since this seemed to be working I wanted to give it a shot.)

If matplotlib is not installed:
(1) Tests run fine.

If matplotlib is installed:
(1) Tests run fine.
(2) After importing pandas, all tests result in the following error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-3-0157b017ef8a> in <module>()
----> 1 _ = ok.grade('q02')

/anaconda3/envs/grademe/lib/python3.6/site-packages/client/api/notebook.py in grade(self, question, global_env)
     56             # inspect trick to pass in its parents' global env.
     57             global_env = inspect.currentframe().f_back.f_globals
---> 58         result = check(path, global_env)
     59         # We display the output if we're in IPython.
     60         # This keeps backwards compatibility with okpy's grade method

/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in check(test_file_path, global_env)
    243         # inspect trick to pass in its parents' global env.
    244         global_env = inspect.currentframe().f_back.f_globals
--> 245     return tests.run(global_env, include_grade=False)

/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in run(self, global_environment, include_grade)
    138         failed_tests = []
    139         for t in self.tests:
--> 140             passed, hint = t.run(global_environment)
    141             if passed:
    142                 passed_tests.append(t)

/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in run(self, global_environment)
     81     def run(self, global_environment):
     82         for i, t in enumerate(self.tests):
---> 83             passed, result = run_doctest(self.name + ' ' + str(i), t, global_environment)
     84             if not passed:
     85                 return False, OKTest.result_fail_template.render(

/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in run_doctest(name, doctest_string, global_environment)
     39     runresults = io.StringIO()
     40     with redirect_stdout(runresults), redirect_stderr(runresults), hide_outputs():
---> 41         doctestrunner.run(test, clear_globs=False)
     42     with open('/dev/null', 'w') as f, redirect_stderr(f), redirect_stdout(f):
     43         result = doctestrunner.summarize(verbose=True)

/anaconda3/envs/grademe/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     86         if type is None:
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:
     90                 return False

/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/utils.py in hide_outputs()
     46         yield
     47     finally:
---> 48         flush_inline_matplotlib_plots()
     49         ipy.display_formatter.formatters = old_formatters

/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/utils.py in flush_inline_matplotlib_plots()
     21     try:
     22         import matplotlib as mpl
---> 23         from ipykernel.pylab.backend_inline import flush_figures
     24     except ImportError:
     25         return

/anaconda3/envs/grademe/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in <module>()
    167             ip.events.register('post_run_cell', configure_once)
    168 
--> 169 _enable_matplotlib_integration()
    170 
    171 def _fetch_figure_metadata(fig):

/anaconda3/envs/grademe/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
    158         try:
    159             activate_matplotlib(backend)
--> 160             configure_inline_support(ip, backend)
    161         except (ImportError, AttributeError):
    162             # bugs may cause a circular import on Python 2

/anaconda3/envs/grademe/lib/python3.6/site-packages/IPython/core/pylabtools.py in configure_inline_support(shell, backend)
    409     if new_backend_name != cur_backend:
    410         # Setup the default figure format
--> 411         select_figure_formats(shell, cfg.figure_formats, **cfg.print_figure_kwargs)
    412         configure_inline_support.current_backend = new_backend_name

/anaconda3/envs/grademe/lib/python3.6/site-packages/IPython/core/pylabtools.py in select_figure_formats(shell, formats, **kwargs)
    215     from matplotlib.figure import Figure
    216 
--> 217     svg_formatter = shell.display_formatter.formatters['image/svg+xml']
    218     png_formatter = shell.display_formatter.formatters['image/png']
    219     jpg_formatter = shell.display_formatter.formatters['image/jpeg']

KeyError: 'image/svg+xml'

Uninstalling matplotlib seems to work around the problem.
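
For what it's worth, the traceback points at gofer's flush_inline_matplotlib_plots helper: its try/except only catches ImportError, but importing ipykernel.pylab.backend_inline can also raise this KeyError while the display formatters are temporarily swapped out by hide_outputs. Below is a minimal, hedged sketch of a more defensive variant; it is an assumption about a possible mitigation, not the project's actual fix, and the backend check at the end is a guess at what the original helper does after the import.

def flush_inline_matplotlib_plots():
    # Hypothetical defensive variant of the helper shown in the traceback.
    # The original catches only ImportError; the KeyError above is raised
    # while backend_inline is being imported, because configuring inline
    # support consults display formatters that hide_outputs() has replaced.
    try:
        import matplotlib as mpl
        from ipykernel.pylab.backend_inline import flush_figures
    except (ImportError, KeyError):
        return
    # Only flush if the inline backend is actually active (assumed check).
    if mpl.get_backend() == 'module://ipykernel.pylab.backend_inline':
        flush_figures()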

Conflict with Pandas

I have a strange error that I've been able to reproduce on two machines, but it doesn't occur on others. For example, I hit the issue on my local laptop but not in other environments.

Gofer Grader runs fine until pandas is imported; after that, the error below occurs every time grading is run.

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-7-bdbe578848a5> in <module>()
----> 1 _ = ok.grade('q21')

~/anaconda3/envs/auto/lib/python3.6/site-packages/client/api/notebook.py in grade(self, question, global_env)
     56             # inspect trick to pass in its parents' global env.
     57             global_env = inspect.currentframe().f_back.f_globals
---> 58         result = check(path, global_env)
     59         # We display the output if we're in IPython.
     60         # This keeps backwards compatibility with okpy's grade method

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in check(test_file_path, global_env)
    294         # inspect trick to pass in its parents' global env.
    295         global_env = inspect.currentframe().f_back.f_globals
--> 296     return tests.run(global_env, include_grade=False)

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment, include_grade)
    143         failed_tests = []
    144         for t in self.tests:
--> 145             passed, hint = t.run(global_environment)
    146             if passed:
    147                 passed_tests.append(t)

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment)
     85     def run(self, global_environment):
     86         for i, t in enumerate(self.tests):
---> 87             passed, result = run_doctest(self.name + ' ' + str(i), t, global_environment)
     88             if not passed:
     89                 return False, OKTest.result_fail_template.render(

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run_doctest(name, doctest_string, global_environment)
     43     runresults = io.StringIO()
     44     with redirect_stdout(runresults), redirect_stderr(runresults), hide_outputs():
---> 45         doctestrunner.run(test, clear_globs=False)
     46     with open('/dev/null', 'w') as f, redirect_stderr(f), redirect_stdout(f):
     47         result = doctestrunner.summarize(verbose=True)

~/anaconda3/envs/auto/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     86         if type is None:
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:
     90                 return False

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in hide_outputs()
     46         yield
     47     finally:
---> 48         flush_inline_matplotlib_plots()
     49         ipy.display_formatter.formatters = old_formatters

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in flush_inline_matplotlib_plots()
     21     try:
     22         import matplotlib as mpl
---> 23         from ipykernel.pylab.backend_inline import flush_figures
     24     except ImportError:
     25         return

~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in <module>()
    167             ip.events.register('post_run_cell', configure_once)
    168 
--> 169 _enable_matplotlib_integration()
    170 
    171 def _fetch_figure_metadata(fig):

~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
    158         try:
    159             activate_matplotlib(backend)
--> 160             configure_inline_support(ip, backend)
    161         except (ImportError, AttributeError):
    162             # bugs may cause a circular import on Python 2

~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in configure_inline_support(shell, backend)
    409     if new_backend_name != cur_backend:
    410         # Setup the default figure format
--> 411         select_figure_formats(shell, cfg.figure_formats, **cfg.print_figure_kwargs)
    412         configure_inline_support.current_backend = new_backend_name

~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in select_figure_formats(shell, formats, **kwargs)
    215     from matplotlib.figure import Figure
    216 
--> 217     svg_formatter = shell.display_formatter.formatters['image/svg+xml']
    218     png_formatter = shell.display_formatter.formatters['image/png']
    219     jpg_formatter = shell.display_formatter.formatters['image/jpeg']

KeyError: 'image/svg+xml'

Create an alias to `grade` called, e.g., `feedback`

Many folks seem confused by the fact that interactive feedback is given via a function called grade. To my knowledge, when students call grade, no actual grading happens: the tests are simply run and feedback is shown.

Can we think of a function (maybe it'd just alias grade since that's used elsewhere) that more cleanly conveys what is happening when used in an interactive student session?
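
The simplest version of this could be a one-line alias, sketched below under the assumption that the interactive entry point is the grade method on the okpy client's Notebook class (as in the tracebacks elsewhere in this tracker); the name and placement are just suggestions.

from client.api.notebook import Notebook

# feedback runs exactly the same tests as grade; the name just makes it
# clearer to students that nothing is being recorded, only feedback shown.
Notebook.feedback = Notebook.grade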

Plain-text report of per-question scores

Right now, the only way to render the result of scoring is as HTML, via gofer.ok.OKTestsResult._repr_html_. A __repr__ method that generates an equivalent, nicely formatted plain-text version would be awesome. At the moment, if you run grade_notebook from the command line, the printed result isn't useful. E.g.,

Question 1:
<gofer.ok.OKTestsResult object at 0x107b3fe48>
Question 2:
<gofer.ok.OKTestsResult object at 0x107b3f390>
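
As a rough illustration, a plain-text renderer could look something like the sketch below. The attribute names (grade, passed_tests, failed_tests) are assumptions about what OKTestsResult holds, not its actual API, and the namedtuples only stand in for gofer's own objects.

from collections import namedtuple

# Hypothetical stand-ins for gofer's result and test objects.
Test = namedtuple('Test', ['name'])
Result = namedtuple('Result', ['grade', 'passed_tests', 'failed_tests'])

def plain_text_report(result):
    # Render a scoring result as plain text, the way a __repr__ could.
    lines = ['Grade: {:.0%}'.format(result.grade)]
    for t in result.passed_tests:
        lines.append('  PASSED  {}'.format(t.name))
    for t, hint in result.failed_tests:
        lines.append('  FAILED  {}: {}'.format(t.name, hint))
    return '\n'.join(lines)

print(plain_text_report(Result(
    grade=0.5,
    passed_tests=[Test('q1')],
    failed_tests=[(Test('q2'), 'expected 10, got 5')],
)))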

Support incremental checking

Individual 'check' calls run tests against the state of the environment at the moment the user ran the check. However, grade_notebook runs tests against the environment as it is after all of the user's code has been executed. This has caused a lot of subtle, hard-to-debug errors.

For example, the following code works ok:

a = 5
check('tests/q1.py')
a = 10
check('tests/q2.py')

This works, assuming q1.py tests whether a is 5 and q2.py tests whether a is 10.

However, if you run grade_notebook at the end, q1 will always fail, since a will be 10 by then.

Any re-assignment of a variable used in an earlier test will make that test fail. We could ask users to be careful about this, but that seems like an unnecessary restriction. Both nbgrader and okpy allow users to do incremental checking, so gradememaybe should allow it too.

okpy does this by using an object that collects grades. nbgrader does this by keeping tests directly in the notebook.

The approach I'd like to take here instead is to use AST rewriting to find all the 'check' calls and rewrite them to something else when grading!

The code above will be rewritten to:

a = 5
check_results_abcdefg.append(check_abcdefg('tests/q1.py'))
a = 10
check_results_abcdefg.append(check_abcdefg('tests/q2.py'))

check_results_abcdefg is a list (or other object) that collects the TestResult objects produced by check_abcdefg. A randomly generated unique suffix is added to the check names to make it harder (but not impossible!) for students to cheat by simply redefining a 'check' function. The check_results_abcdefg object and the check_abcdefg function are injected by the grading function as well. After all the code has been executed, check_results_abcdefg can be inspected to compute the final grade.
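
A minimal sketch of such a rewriter using Python's ast module is below; the suffix, the injected names, and the use of ast.unparse for display are illustrative assumptions, not gofer's implementation.

import ast

SUFFIX = 'abcdefg'  # in practice, a randomly generated suffix per grading run

class RewriteChecks(ast.NodeTransformer):
    # Rewrite check(...) into check_results_<suffix>.append(check_<suffix>(...)).
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == 'check':
            renamed = ast.Call(
                func=ast.Name(id='check_' + SUFFIX, ctx=ast.Load()),
                args=node.args,
                keywords=node.keywords,
            )
            return ast.Call(
                func=ast.Attribute(
                    value=ast.Name(id='check_results_' + SUFFIX, ctx=ast.Load()),
                    attr='append',
                    ctx=ast.Load(),
                ),
                args=[renamed],
                keywords=[],
            )
        return node

source = "a = 5\ncheck('tests/q1.py')\na = 10\ncheck('tests/q2.py')\n"
tree = ast.fix_missing_locations(RewriteChecks().visit(ast.parse(source)))
print(ast.unparse(tree))  # Python 3.9+; prints the rewritten source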

This imposes the following constraints on test notebook authors:

  1. Check cells contain nothing other than check calls (comments are allowed!). Code is executed cell by cell, and this ensures that check calls aren't skipped because of errors elsewhere in the cell.
  2. Check cells shouldn't be deleted by students. This should be enforced with a notebook extension outside the scope of gradememaybe.

There should be a linter that validates these.

Submit button should report correct status

I installed the submit extension and clicked submit in an example notebook. The pop-up told me that my notebook had been submitted, even though I wasn't running the autograder server. The button should wait for a positive response from the server before showing such a message.

Dockerfile and Grade lab shell script?

Where can I find the Dockerfile that's used to build the gofer container, and the shell script /srv/repo/grading/containergrade.bash referenced here?

command = [
    'docker', 'run',
    '--rm',
    '-m', '2G',
    '-i',
    '--net=none',
    grader_image,
    "/srv/repo/grading/containergrade.bash",
    lab_container_path_template.format(section=section, lab=lab),
]

I'm asking because I want to run the dockerized grading process as a job in the JupyterHub Kubernetes deployment.

Using okgrade with R in Jupyter

I would like to use okgrade with R in Jupyter. How should I start moving forward on making this possible?

Some naive ideas I have started playing with: use reticulate (an R package that lets you call Python from R) to import okgrade and call the grade function, and then use the rpy2 library to call R from Python for the tests in the .py files in the tests directory. This seems like a hacky, less-than-ideal workaround... Any help or ideas would be appreciated!
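
For the rpy2 direction, here is a hedged sketch of what a helper could look like. rpy2's robjects.r interface is real, but wiring it into the ok-test .py files this way is an untested idea.

import rpy2.robjects as robjects

def r_eval(expr):
    # Evaluate an R expression in R's global environment and return the
    # first element of the resulting vector (R results come back as vectors).
    return robjects.r(expr)[0]

# A doctest-style ok-test case could then assert on R state, e.g.:
# >>> r_eval('x == 5')
# True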

Add command-line tool

It would be convenient to have a command-line tool that calls gofer.ok.grade_notebook when passed a path to a notebook and a path to a tests directory. That way, during the build process for a course, we could easily verify that the instructor solution gets full credit. (It would also be handy for development.)
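
A hedged sketch of what that tool could look like, assuming grade_notebook accepts a notebook path plus a list of test files and returns a score (the real signature may differ):

import argparse
import glob
import os

from gofer.ok import grade_notebook

def main():
    parser = argparse.ArgumentParser(
        description='Grade a notebook against a directory of ok-tests.')
    parser.add_argument('notebook', help='path to the .ipynb file to grade')
    parser.add_argument('tests_dir', help='directory containing q*.py test files')
    args = parser.parse_args()

    # Collect the test files in a stable order and hand them to the grader.
    tests = sorted(glob.glob(os.path.join(args.tests_dir, 'q*.py')))
    print('Score:', grade_notebook(args.notebook, tests))

if __name__ == '__main__':
    main()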

Authoring questions

Not quite sure where to place this, so putting it here.

What is the current user story for authoring questions/ok-tests?

I think having an instructor version of the notebook that is then split into the ok-tests and a student notebook is nice. It would let you keep your questions and ok-tests together with the material. I can also imagine how that authoring tool would work.

One thing that I can't quite imagine yet: say I set a question that asks students to create a plot, and I have a standard set of ok-tests I want to apply to every plotting question. Is there an x-axis label? Is there a y-axis label? Is there a legend? And so on. As a user, I'd want a library (a Python file?) that contains all of these checks and a way to reference them from my instructor notebook; a sketch of what that could look like is at the end of this issue.

Is discussing this here the right place? Should we prototype the authoring tool?
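
To make the idea concrete, here is a hedged sketch. plot_checks is a hypothetical helper module that would ship alongside the tests, and the test dict follows the usual OK-format doctest cases that check() consumes; the exact keys gofer requires may differ.

# plot_checks.py (hypothetical shared library of plot checks)
def has_labels(ax):
    # True if the axes has non-empty x and y labels.
    return bool(ax.get_xlabel()) and bool(ax.get_ylabel())

# tests/q_plot.py, reusing the shared check in its doctest
test = {
    'name': 'q_plot',
    'points': 1,
    'suites': [{
        'type': 'doctest',
        'cases': [{
            'code': r'''
            >>> from plot_checks import has_labels
            >>> has_labels(plt.gca())
            True
            ''',
        }],
    }],
}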
