data-8 / gofer-grader
Small autograding library
Home Page: http://okgrade.readthedocs.io/en/latest/
License: BSD 3-Clause "New" or "Revised" License
I was able to successfully use this for a few class exercises, but now I have run into an issue.
(I know you explicitly said not to rely on this yet, but I really like how the okpy client works, and since this seemed to be working I wanted to give it a shot.)
If matplotlib is not installed:
(1) Tests run fine.
If matplotlib is installed:
(1) Tests run fine.
(2) After importing pandas, all tests result in the following error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-0157b017ef8a> in <module>()
----> 1 _ = ok.grade('q02')
/anaconda3/envs/grademe/lib/python3.6/site-packages/client/api/notebook.py in grade(self, question, global_env)
56 # inspect trick to pass in its parents' global env.
57 global_env = inspect.currentframe().f_back.f_globals
---> 58 result = check(path, global_env)
59 # We display the output if we're in IPython.
60 # This keeps backwards compatibility with okpy's grade method
/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in check(test_file_path, global_env)
243 # inspect trick to pass in its parents' global env.
244 global_env = inspect.currentframe().f_back.f_globals
--> 245 return tests.run(global_env, include_grade=False)
/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in run(self, global_environment, include_grade)
138 failed_tests = []
139 for t in self.tests:
--> 140 passed, hint = t.run(global_environment)
141 if passed:
142 passed_tests.append(t)
/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in run(self, global_environment)
81 def run(self, global_environment):
82 for i, t in enumerate(self.tests):
---> 83 passed, result = run_doctest(self.name + ' ' + str(i), t, global_environment)
84 if not passed:
85 return False, OKTest.result_fail_template.render(
/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/ok.py in run_doctest(name, doctest_string, global_environment)
39 runresults = io.StringIO()
40 with redirect_stdout(runresults), redirect_stderr(runresults), hide_outputs():
---> 41 doctestrunner.run(test, clear_globs=False)
42 with open('/dev/null', 'w') as f, redirect_stderr(f), redirect_stdout(f):
43 result = doctestrunner.summarize(verbose=True)
/anaconda3/envs/grademe/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
86 if type is None:
87 try:
---> 88 next(self.gen)
89 except StopIteration:
90 return False
/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/utils.py in hide_outputs()
46 yield
47 finally:
---> 48 flush_inline_matplotlib_plots()
49 ipy.display_formatter.formatters = old_formatters
/anaconda3/envs/grademe/lib/python3.6/site-packages/gradememaybe/utils.py in flush_inline_matplotlib_plots()
21 try:
22 import matplotlib as mpl
---> 23 from ipykernel.pylab.backend_inline import flush_figures
24 except ImportError:
25 return
/anaconda3/envs/grademe/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in <module>()
167 ip.events.register('post_run_cell', configure_once)
168
--> 169 _enable_matplotlib_integration()
170
171 def _fetch_figure_metadata(fig):
/anaconda3/envs/grademe/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
158 try:
159 activate_matplotlib(backend)
--> 160 configure_inline_support(ip, backend)
161 except (ImportError, AttributeError):
162 # bugs may cause a circular import on Python 2
/anaconda3/envs/grademe/lib/python3.6/site-packages/IPython/core/pylabtools.py in configure_inline_support(shell, backend)
409 if new_backend_name != cur_backend:
410 # Setup the default figure format
--> 411 select_figure_formats(shell, cfg.figure_formats, **cfg.print_figure_kwargs)
412 configure_inline_support.current_backend = new_backend_name
/anaconda3/envs/grademe/lib/python3.6/site-packages/IPython/core/pylabtools.py in select_figure_formats(shell, formats, **kwargs)
215 from matplotlib.figure import Figure
216
--> 217 svg_formatter = shell.display_formatter.formatters['image/svg+xml']
218 png_formatter = shell.display_formatter.formatters['image/png']
219 jpg_formatter = shell.display_formatter.formatters['image/jpeg']
KeyError: 'image/svg+xml'
Uninstalling matplotlib seems to work around the issue.
I have a strange error that I've been able to reproduce on two machines, but it doesn't occur on others. For example, the issue occurs on my local laptop but not in other environments.
Gofer Grader runs fine until pandas is imported. After that, the error below occurs every time grading runs.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-7-bdbe578848a5> in <module>()
----> 1 _ = ok.grade('q21')
~/anaconda3/envs/auto/lib/python3.6/site-packages/client/api/notebook.py in grade(self, question, global_env)
56 # inspect trick to pass in its parents' global env.
57 global_env = inspect.currentframe().f_back.f_globals
---> 58 result = check(path, global_env)
59 # We display the output if we're in IPython.
60 # This keeps backwards compatibility with okpy's grade method
~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in check(test_file_path, global_env)
294 # inspect trick to pass in its parents' global env.
295 global_env = inspect.currentframe().f_back.f_globals
--> 296 return tests.run(global_env, include_grade=False)
~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment, include_grade)
143 failed_tests = []
144 for t in self.tests:
--> 145 passed, hint = t.run(global_environment)
146 if passed:
147 passed_tests.append(t)
~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment)
85 def run(self, global_environment):
86 for i, t in enumerate(self.tests):
---> 87 passed, result = run_doctest(self.name + ' ' + str(i), t, global_environment)
88 if not passed:
89 return False, OKTest.result_fail_template.render(
~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run_doctest(name, doctest_string, global_environment)
43 runresults = io.StringIO()
44 with redirect_stdout(runresults), redirect_stderr(runresults), hide_outputs():
---> 45 doctestrunner.run(test, clear_globs=False)
46 with open('/dev/null', 'w') as f, redirect_stderr(f), redirect_stdout(f):
47 result = doctestrunner.summarize(verbose=True)
~/anaconda3/envs/auto/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
86 if type is None:
87 try:
---> 88 next(self.gen)
89 except StopIteration:
90 return False
~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in hide_outputs()
46 yield
47 finally:
---> 48 flush_inline_matplotlib_plots()
49 ipy.display_formatter.formatters = old_formatters
~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in flush_inline_matplotlib_plots()
21 try:
22 import matplotlib as mpl
---> 23 from ipykernel.pylab.backend_inline import flush_figures
24 except ImportError:
25 return
~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in <module>()
167 ip.events.register('post_run_cell', configure_once)
168
--> 169 _enable_matplotlib_integration()
170
171 def _fetch_figure_metadata(fig):
~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
158 try:
159 activate_matplotlib(backend)
--> 160 configure_inline_support(ip, backend)
161 except (ImportError, AttributeError):
162 # bugs may cause a circular import on Python 2
~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in configure_inline_support(shell, backend)
409 if new_backend_name != cur_backend:
410 # Setup the default figure format
--> 411 select_figure_formats(shell, cfg.figure_formats, **cfg.print_figure_kwargs)
412 configure_inline_support.current_backend = new_backend_name
~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in select_figure_formats(shell, formats, **kwargs)
215 from matplotlib.figure import Figure
216
--> 217 svg_formatter = shell.display_formatter.formatters['image/svg+xml']
218 png_formatter = shell.display_formatter.formatters['image/png']
219 jpg_formatter = shell.display_formatter.formatters['image/jpeg']
KeyError: 'image/svg+xml'
Many folks seem confused by the fact that interactive feedback is given via a function called grade. To my knowledge, when students call grade, there isn't actually any grading happening; the only thing that happens is that tests are run and feedback is given.
Can we think of a function name (maybe it would just alias grade, since that name is used elsewhere) that more cleanly conveys what is happening when it's used in an interactive student session?
Right now, the only way to render the result of scoring is as HTML, via gofer.ok.OKTestsResult._repr_html_. A __repr__ method that generates an equivalent, nicely formatted plain-text version would be awesome. Right now, if you run grade_notebook from the command line, the output isn't useful. E.g.,
Question 1:
<gofer.ok.OKTestsResult object at 0x107b3fe48>
Question 2:
<gofer.ok.OKTestsResult object at 0x107b3f390>
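Something like the sketch below might do. This is only a sketch; the attribute names (passed_tests, failed_tests, grade) are assumptions about OKTestsResult's internals, not its actual API:

import textwrap

def __repr__(self):
    # Hypothetical plain-text rendering of an OKTestsResult;
    # mirrors what _repr_html_ reports, one line per test.
    lines = []
    for test in getattr(self, 'passed_tests', []):
        lines.append('{}: passed'.format(test.name))
    for test, hint in getattr(self, 'failed_tests', []):
        lines.append('{}: FAILED'.format(test.name))
        lines.append(textwrap.indent(str(hint), '    '))
    if getattr(self, 'grade', None) is not None:
        lines.append('Grade: {:.0%}'.format(self.grade))
    return '\n'.join(lines)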
I have some existing assessments that use test setup, but I notice that the grading code disallows these at the moment:
https://github.com/data-8/Gofer-Grader/blob/master/gofer/ok.py#L126
Is there some structural reason for this? How difficult would it be to add support for these? (I'm happy to work on it, if it's feasible).
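For concreteness, this is the shape of test file I mean, using the okpy suite format in which a setup block of doctest lines runs before the cases. The specific test content is illustrative, and result stands in for a student-defined variable:

test = {
  'name': 'q1',
  'points': 1,
  'suites': [
    {
      'type': 'doctest',
      # This setup block is what the grading code currently rejects.
      'setup': r'''
      >>> import numpy as np
      ''',
      'cases': [
        {
          'code': r'''
          >>> np.isclose(result, 0.5)
          True
          ''',
        },
      ],
    }
  ]
}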
Individual 'check' functions run tests against the state of the environment as it was when the user ran the check code. However, grade_notebook runs tests against the environment as it is after all the user code has run. This has caused a lot of subtle, hard-to-debug errors.
For example, the following code works ok:
a = 5
check('tests/q1.py')
a = 10
check('tests/q2.py')
Assume q1.py tests that a is 5 and q2.py tests that a is 10.
However, if you add a grade_notebook call at the end, q1 will always fail, since a will be 10 by then.
Re-assignment of variables used in tests will always fail this way. We could ask users to be careful about it, but that seems like an unnecessary restriction. Both nbgrader and okpy allow users to do incremental checking, so gradememaybe should allow it too.
okpy does this by using an object that collects grades. nbgrader does this by keeping tests directly in the notebook.
The approach I'd like to take here instead is to use AST rewriting to find all the 'check' calls and rewrite them to something else when grading!
The code above will be rewritten to:
a = 5
check_results_abcdefg.append(check_abcdefg('tests/q1.py'))
a = 10
check_results_abcdefg.append(check_abcdefg('tests/q2.py'))
check_results_abcdefg is a list (or other object) that collects the TestResult objects generated by check_abcdefg. A randomly generated unique suffix is added to the check functions to make it harder (but not impossible!) for students to cheat by simply redefining a check function. The check_results_abcdefg object and the check_abcdefg function will be injected by the grading function as well. After all the code has been evaluated, check_results_abcdefg can be inspected to compute the final grade.
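A minimal sketch of that rewrite using Python's ast module; the suffix handling and the driver function are illustrative, not the actual gofer implementation:

import ast

class CheckRewriter(ast.NodeTransformer):
    # Rewrites check(...) into check_results_<suffix>.append(check_<suffix>(...)).
    def __init__(self, suffix):
        self.suffix = suffix

    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == 'check':
            inner = ast.Call(
                func=ast.Name(id='check_' + self.suffix, ctx=ast.Load()),
                args=node.args, keywords=node.keywords)
            outer = ast.Call(
                func=ast.Attribute(
                    value=ast.Name(id='check_results_' + self.suffix, ctx=ast.Load()),
                    attr='append', ctx=ast.Load()),
                args=[inner], keywords=[])
            return ast.copy_location(outer, node)
        return node

def run_with_collected_checks(source, suffix, check_fn, global_env):
    # Inject the renamed check function and the results collector,
    # then execute the rewritten student code in that environment.
    tree = CheckRewriter(suffix).visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    global_env['check_' + suffix] = check_fn
    global_env['check_results_' + suffix] = []
    exec(compile(tree, '<student-code>', 'exec'), global_env)
    return global_env['check_results_' + suffix]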
This imposes some constraints on test notebook authors; there should be a linter that validates them.
I installed the submit extension and clicked Submit in an example notebook. The pop-up told me that my notebook had been submitted, even though I wasn't running the autograder server. The button should await a positive response from the server before showing such a message.
Where can I find the Dockerfile that's used to build the gofer container and the shell script /srv/repo/grading/containergrade.bash (referenced in Gofer-Grader/gofer_service/grade_lab.py, lines 8 to 17, at commit 2027d43)?
I'm asking because I want to run the dockerized process in the JupyterHub k8s deployment as a job.
I would like to use okgrade with R in Jupyter. How do I start to move forward on this and make it possible?
Some naive ideas I have started to play with include using reticulate (an R package that lets you call Python from R) to import okgrade and call the grade function, and then using the rpy2 library to call R from Python for the tests in the .py files in the tests directory. This seems like a hacky workaround, which is less than ideal... Any help/ideas would be appreciated!
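For what it's worth, the rpy2 half of the idea is mechanically simple on its own. This is only a sketch of the mechanism (assuming R and rpy2 are installed), not a proposal for how okgrade should integrate it:

import rpy2.robjects as robjects

# Evaluate an R expression from Python and pull the result back.
mean_value = robjects.r('mean(c(1, 2, 3))')[0]
assert mean_value == 2.0

# A test could similarly run an R-side check and assert on the result.
passed = robjects.r('isTRUE(all.equal(2 + 2, 4))')[0]
assert passed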
It would be convenient to have a command-line tool that calls gofer.ok.grade_notebook when passed a path to a notebook and a path to a tests directory. That way, during the build process for a course, we could easily verify that the instructor solution gets full credit. (It would also be handy for development.)
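A rough sketch of such a tool. The grade_notebook call signature (a notebook path plus a list of test files) and whether it returns the grade directly or as a coroutine are assumptions to verify against gofer/ok.py:

import argparse
import asyncio
import glob
import inspect
import os

from gofer.ok import grade_notebook

def main():
    parser = argparse.ArgumentParser(
        description='Grade a notebook against a directory of ok-format tests.')
    parser.add_argument('notebook', help='path to the .ipynb file to grade')
    parser.add_argument('tests_dir', help='directory containing ok-format test files')
    args = parser.parse_args()

    tests = sorted(glob.glob(os.path.join(args.tests_dir, 'q*.py')))
    result = grade_notebook(args.notebook, tests)
    if inspect.iscoroutine(result):  # in case grade_notebook is async
        result = asyncio.get_event_loop().run_until_complete(result)
    print('Grade:', result)

if __name__ == '__main__':
    main()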
Not quite sure where to place this, so putting it here.
What is the current user story for authoring questions/ok-tests?
I think having an instructor version of the notebook that is then split into ok-tests and a student notebook is nice. It would allow you to keep your questions and ok-tests together with the material. I can also imagine how that authoring tool would work.
One thing that I can't quite imagine: say I set a question to create a plot. I have a standard set of ok-tests that I want to use for every question that produces a plot: is there an x-axis label? Is there a y-axis label? Is there a legend? Etc. As a user, I'd want to have a library (a Python file?) that contains all of them, and then in my instructor notebook I'd somehow reference that ok-test.
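As an illustration of what such a shared library could look like (plot_checks.py and its function names are hypothetical, not anything gofer provides), the ok-test doctests could import and call checks like these:

# plot_checks.py -- hypothetical shared checks for plot questions.
import matplotlib.pyplot as plt

def has_x_label(ax=None):
    # True if the (current) axes has a non-empty x-axis label.
    ax = ax or plt.gca()
    return bool(ax.get_xlabel().strip())

def has_y_label(ax=None):
    ax = ax or plt.gca()
    return bool(ax.get_ylabel().strip())

def has_legend(ax=None):
    ax = ax or plt.gca()
    return ax.get_legend() is not None

An individual ok-test case could then be as short as:
>>> from plot_checks import has_x_label
>>> has_x_label()
True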
Is discussing this here the right place? Should we prototype the authoring tool?