cereal-lab / evopie

Evolutionary Peer Instruction Environment

Languages: Python 94.20%, HTML 0.80%, Shell 0.46%, Dockerfile 0.01%, CSS 0.04%, PowerShell 0.02%, JavaScript 0.14%, C 1.60%, Cython 2.51%, C++ 0.18%, Fortran 0.02%, Smarty 0.02%, XSLT 0.01%, Jinja 0.01%

Topics: national-science-foundation, computer-aided-teaching, computer-aided-learning, evolutionary-computation, evolutionary-algorithms

evopie's Introduction

EvoPIE - Evolutionary Peer Instruction Environment

Synopsis

This web application supports asynchronous peer instruction. The server side is currently handled by a Python/Flask app, which also exposes a RESTful API for future development toward a single-page web app.

Acknowledgement

This material is based in part upon work supported by the National Science Foundation under award #2012967. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Repository structure:

Folder | Description
deployment | archive of scripts and Dockerfiles from previous field tests
docs | you will never guess
evopie | main application
nginx | Dockerfiles for the nginx container
testing | mix of scripts and other tools used to test the system

How to build / deploy the server

Check out the main branch of our GitHub repository:

git clone https://github.com/cereal-lab/EvoPIE.git

Edit the docker-compose.yml file to update the volumes for "web". Put the absolute path to the folder containing the database file where we have "REPLACE_ME" below:

version: '2.0'

services:
  web:
    build: ./evopie
    volumes:
      - /REPLACE_ME:/app/data
    environment:
      - EVOPIE_DATABASE_URI=sqlite:////app/data/db.sqlite
    expose:
      - 5000
    env_file:
      - ./evopie/.env.dev
    restart: always
  nginx:
    build: ./nginx
    ports:
      - "5000:5000"
    depends_on:
      - web
    restart: always
    volumes:
      - /etc/letsencrypt:/etc/nginx/certs

Build the docker containers and run them:

docker-compose up --build -d

evopie's People

Contributors: drpventura, dvitel, emjapo, neverdue, profgrumpy, rpwiegand

evopie's Issues

Problems with escaping quotes

We should not be using any form of eval if possible, see:

grading_details[i].justifications = ast.literal_eval(grades[i].justifications)

The underlying problem is that some characters (", ', and possibly others) do not seem to be properly escaped / unescaped when displaying justifications in the modals (but it might happen for question titles and other fields too).

Since they seem to be displayed correctly elsewhere in the code, the priority is to figure out what is done differently in the context of the modals, and to rely on the Flask libraries to clean the user input instead of rewriting our own approach.
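
A minimal sketch of that direction, assuming justifications are stored as a serialized structure (the storage format here is an assumption): persist them as JSON and parse them with json.loads instead of evaluating them, letting Jinja2's autoescaping handle the quotes at render time:

import json

# hypothetical replacement for the ast.literal_eval call above; assumes the
# justifications column holds a JSON string rather than a Python repr
grading_details[i].justifications = json.loads(grades[i].justifications)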

Add number of likes received

Both in the modals from quizGrader for:

  • The "grades for justification given"
  • The "likes received"

Both modals should show the justifications that we are talking about + the number of likes each of them received. This should make it easier for an instructor to double-check what the highly-liked justifications were for each student.

Update quiz editor based on the tunable parameters added to quiz grader

We have added to the model (issue #29) and to the quiz grader page the following tunable parameters:

  • Issue #12 (the threshold to receive participation points that are currently hardcoded)
  • Issue #13 (the weights for each grade component)
  • Issue #15 (limiting factor)

We should also add them to the quiz editor page so that an instructor can set the base values for these when the quiz is created / modified and save them in the model.

Improving information displayed in modal for initial / revised scores.

For the initial score and revised score, in the grading page, add 2 columns to the modal.
The end result should be something like:

Question | correct answer | check mark | answer selected by student

The new columns are the middle two: the first should be the correct answer to the question, and the second should be a check mark that shows whether the student got it right or not. Use the following for the check marks, since we're already using these icons in other parts of the UI:
https://icons.getbootstrap.com/icons/check-square/
https://icons.getbootstrap.com/icons/x-square/
(just suggestions, feel free to make it look better / color the icons)

DB Migration Scripts

We need some scripts to migrate the DB used during spring 2022 to the new schema, as soon as we merge the grading branch into master.

Keep partial selections saved before submit

If we close a quiz tab before submitting, all selections are lost. This makes it awkward for students who come back after thinking things through and have to re-enter their justifications + select answers again. This is made even messier by the shuffling of answers.

On selection of an alternative, or when a justification editor loses focus, save the student's submission to the model but mark it as "PENDING" somewhere. While the submission shows as pending, students are able to modify it. On submit, the pending status is removed, thus preventing students from resubmitting.
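
A minimal sketch of that autosave flow as a Flask route; the blueprint name (pages), the QuizAttempt model, and its responses / status fields are all hypothetical, not the current schema:

from flask import request, jsonify
from flask_login import login_required, current_user

@pages.route('/quizzes/<int:qid>/save-draft', methods=['POST'])
@login_required
def save_draft(qid):
    # save the student's in-progress selections and flag them as PENDING
    attempt = QuizAttempt.query.filter_by(quiz_id=qid, student_id=current_user.id).first()
    attempt.responses = request.get_json()['responses']
    attempt.status = 'PENDING'  # still editable; the real submit clears this flag
    DB.session.commit()
    return jsonify(status='PENDING')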

Duplicates of Liked Justifications Listed in "Likes Received" modal

In the grading page, the likes received modal shows duplicates when using the testing / Easy Labels Tests. This might indicate that the corresponding query should do a GROUP BY. While the number of likes is correct, we should not list the same justification multiple times in the modal just because it was liked several times.
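
A minimal sketch of the deduplicated query with SQLAlchemy; Likes4Justifications is the model quoted in the self-like issue below, but the exact columns the modal needs are an assumption:

from sqlalchemy import func

# one row per justification, with its like count, instead of one row per like
likes_received = (
    DB.session.query(Likes4Justifications.justification_id,
                     func.count(Likes4Justifications.student_id).label('n_likes'))
    .group_by(Likes4Justifications.justification_id)
    .all()
)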

Switch to new Bootstrap stylesheet caused misalignment of UI elements

Some UI elements, i.e., the right-most navbar elements in base.html and the action icons in all the data tables featuring such icons, are now left-aligned instead of right-aligned. This is due to switching to a new version of the Bootstrap stylesheet.

The Bootstrap 5 classes that are needed now are text-end for the action icons in the data tables (questions, quiz browser pages, users browser), and ms-auto for the navbar in base.html.
In the master branch, I tested switching from the form to another ul element tagged with navbar-nav and ms-auto, but didn't commit the changes. Make sure to apply these changes to the grading branch before submitting the pull request.

When you do so, make sure to first integrate the new users-browser feature that is in the latest master branch.

Likes Progress bar issues during step2

If a student opens a quiz twice in a row during step2, without submitting the first time, then the likes progress bar shows 0 likes given, even though the student already gave some and they even show on the page.

Also, the bar should be red to start with, since 0 likes is not going to grant the student any participation points.

Add MaxLikes to the quiz model

We need maxLikes for the progress bar in step 2, so that the width of each increment is based on the current rate.

Balancing grade components in quiz-grader page

On the quiz-grader page, we need a way to balance the grades received for the following:

  • justifications grade
  • correctness of initial responses
  • correctness of revised responses
  • participation grade

Typically, a single question is worth 1 pt for pre-correctness and 1 pt for post-correctness, but 3 pts for justifying the 3 alternatives that were not selected. We should be able to tune the weight of each of the 4 grade components on the page, so as to recompute all the grades on the page (and in the CSV).

The weight for each grade component should be saved in the DB as part of the Quiz model.

Compute "justifications score" based on likes

In the quiz-grader page, add a score column that shows the points earned by each student based on how many likes their justifications received from other students.

Formula for scoring justifications

score(s) = sum over g { Likes(g,s) * min( (MaxLikes * LimitingFactor) / LikesGiven(g), 1 ) }

where

  • s is the student whose justifications are being scored
  • g is any student who gave a like to one of s' justifications
  • Likes(g,s) is 1 if student g gave a like to a justification from student s, and 0 otherwise
  • LikesGiven(g) is the total # of likes given by student g
  • MaxLikes is the number of all alternatives for all questions in this quiz * the number of justifications from other students shown for each alternative.
  • LimitingFactor is in [0, 1] and represents the maximal percentage of all shown justifications that a student is allowed to like at full value. For instance, if MaxLikes is 50 in a quiz but the LimitingFactor is .5, then every like that a student gives is worth 1 point to the student who receives it, unless the giver gives more than 25 likes, in which case the value of each individual like decreases proportionally to how many likes they gave past MaxLikes * LimitingFactor.

All of the above are to be measured independently for each quiz.
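
A minimal sketch of this formula in Python; the data layout (one giver id per like received, plus a dict of per-student totals) is an assumption about how the query results would be shaped:

def justification_score(likes_received, likes_given, max_likes, limiting_factor):
    # likes_received: giver ids g, one entry per like received by student s
    # likes_given: dict mapping each student id to the total likes they gave
    cap = max_likes * limiting_factor
    # each like is worth at most 1 point, scaled down once the giver
    # exceeds MaxLikes * LimitingFactor likes
    return sum(min(cap / likes_given[g], 1) for g in likes_received)

# e.g., MaxLikes = 50, LimitingFactor = .5: a like from a student who gave
# 30 likes is worth 25 / 30, i.e. about 0.83 points
print(justification_score(['g1'], {'g1': 30}, 50, 0.5))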

Update downloadable CSV file

Once the following have been computed and displayed in the grading page, they need to also be dumped into our CSV file:

  • Participation Score (number of likes given)
  • Participation Grade --> see issue #12
  • Justifications Score (number of likes received)
  • Justifications Grade --> see issue #8
  • initial score
  • revised score

Discuss whether the following should also be part of the CSV, or whether we just dump the grades computed based on the scores and these weights (the latter is most likely the way to go):

  • weights for initial / revised / participation / justifications --> see issue #13
  • Limiting factor for MaxLikes --> see issues #16 and #15

Enable detailed configuration of justifications grading

The maximal grade for justifications is hardcoded to 10; e.g., routes_pages.py line 548 in bc60398.
Depending on which quartile students fall into, they get a variable number of points:

  • 1st quartile gets 1 pt
  • 2nd quartile gets 3 pts
  • 3rd quartile gets 5 pts
  • 4th quartile gets 10 pts

Instructors might want to change the point rewards for each quartile, which would in turn make the maximal justification grade variable. It should therefore be pulled from the model and we should have a UI to tune these values in the quiz editor and quiz grader.
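
A minimal sketch of pulling these values from the model; the quartile_points column and its JSON encoding are assumptions about how the Quiz model could store them:

import json

# hypothetical Quiz column holding the per-quartile rewards, 1st..4th quartile
quartile_points = json.loads(quiz.quartile_points or '[1, 3, 5, 10]')
max_justification_grade = max(quartile_points)  # replaces the hardcoded 10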

Multi-Instructors Support

In order to allow many instructors / students on the system, only show to instructors the quizzes / questions / distractors that they have authored, as per the author field.
For students, allow them to pick an instructor and see only that instructor's quizzes.
Problem: students could go take other instructors' quizzes and send bogus justifications... so we need the student to pick an instructor and the instructor to confirm the student. Maybe on a per-class basis?

At any time, allow a student to pick an instructor and add them to their list of instructors. Mark as pending until the instructor confirms them OR, even better, let the instructor make up a passphrase for the students and only allow students to add him or her if they know the passphrase. Changing the passphrase won't affect students already with that instructor.

Allow an instructor to de-register students, e.g., if they post unacceptable justifications.

Prevent user from liking one of their own justifications

Add a check that the author of a justification is not also the current user trying to like or unlike it right now.

EvoPIE/evopie/models.py

Lines 336 to 347 in 6fc9d2b

def like_justification(self, justification):
    if not self.has_liked_justification(justification):
        like = Likes4Justifications(student_id=self.id, justification_id=justification.id)
        DB.session.add(like)
        DB.session.commit()

def unlike_justification(self, justification):
    if self.has_liked_justification(justification):
        Likes4Justifications.query.filter_by(
            student_id=self.id,
            justification_id=justification.id).delete()
        DB.session.commit()
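
A minimal sketch of the requested guard, assuming the Justification model exposes its author through a student_id field (the attribute name is an assumption):

def like_justification(self, justification):
    if justification.student_id == self.id:
        return  # users must not like their own justifications
    if not self.has_liked_justification(justification):
        like = Likes4Justifications(student_id=self.id, justification_id=justification.id)
        DB.session.add(like)
        DB.session.commit()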

Tracking viewing of justifications

Follow up on issue #34

The idea is to track the students who actually take a look (or not) at their peers' justifications. This may be used for the purpose explained below in the original post, or simply to validate whether this feature is even used by students beyond asking them if they like it in surveys.

Original post:

We might want to collapse all peers' justifications by default and allow students to either not look at them at all (we need to track this in the model of the attempt) or to consult them and apply likes.
This might allow us to identify to what degree students are interested in peer instruction at all, and then to characterize the performance profiles of students who are or are not interested.

Quiz-grader should allow to tune Limiting Factor

The LimitingFactor should be modifiable on the grading page; e.g., "you can only like a maximum of 50% of the justifications you see".
This value should also be saved in the Quiz model and in the DB, just like the grade components from issue #13.

Update Quiz Model

Right now we have a few things that are hardcoded or re-tuned every time we load the grading page for a quiz.

  • Issue #12 (the threshold to receive participation points that are currently hardcoded)
  • Issue #13 (the weights for each grade component)
  • Issue #15 (already done)

We need to save these values in the Quiz model but let's do that AFTER we merge in the grading branch and deploy to test it out thoroughly. For the participation grade, that will entail adding some UI elements to the quiz editor.

Maximum grade

Display maximal number of points in the total column's header

Replace old JS alert() and non-formatted flash messages

We have ugly flash messages everywhere in the code, some of which simply display the JSON being returned, as well as some uses of the good old JS alert(...) popup.
Replace all of this with properly formatted Bootstrap 5 alerts. These are already implemented in a div for Flask's flash responses (at least some of them), but there's more left to do:

  • Verify that all existing flash responses are actually displaying more than just some JSON
  • Make sure to use danger / success / primary as the alert class, depending on the severity
  • Update any fetch or AJAX calls so that they dynamically insert a similar alert when relevant

Refactor Markup(...).unescape()

Right now, we use these Markup(...).unescape() calls throughout the server-side code in order to prepare the content to be rendered in the Jinja2 templates, using the | safe filter.
We should instead apply these transformations to the content as it is sent to the server (mostly in routes_pages.py or routes_mcq.py, I'd guess) and before it is inserted into the DB; the models might be a good target for this. This way, we would not have to constantly apply them when we're about to serve the content, and could send what's in the DB directly to the Jinja2 templates.

# NOTE this particular one works without doing the following pass on the data,
# probably bc it's using only the titles in the list
for q in all_questions:
    q.title = Markup(q.title).unescape()
    q.stem = Markup(q.stem).unescape()
    q.answer = Markup(q.answer).unescape()

# working on getting rid of the dump_as_dict and instead using Markup(...).unescape when appropriate
# all_quizzes = [q.dump_as_dict() for q in models.Quiz.query.all()]
all_quizzes = models.Quiz.query.all()

# we replace dump_as_dict with proper Markup(...).unescape of the objects' fields themselves
#ds = [d.dump_as_dict() for d in q.distractors]
#q = q.dump_as_dict()
q.title = Markup(q.title).unescape()
q.stem = Markup(q.stem).unescape()
q.answer = Markup(q.answer).unescape()
for d in q.distractors:
    d.answer = Markup(d.answer).unescape()

for d in qq.distractors:
    d.answer = Markup(d.answer).unescape()
# now edit the QuizQuestion

for q in questions:
    q.stem = Markup(q.stem).unescape()
    q.answer = Markup(q.answer).unescape()

question.stem = Markup(question.stem).unescape()
question.answer = Markup(question.answer).unescape()
for d in question.distractors:
    d.answer = Markup(d.answer).unescape()
return render_template('quiz-question-selector-2.html', quiz_id=quiz_id, question=question)

# we replace dump_as_dict with proper Markup(...).unescape of the objects' fields themselves
#q = q.dump_as_dict()
for qq in q.quiz_questions:
    qq.question.title = Markup(qq.question.title).unescape()
    qq.question.stem = Markup(qq.question.stem).unescape()
    qq.question.answer = Markup(qq.question.answer).unescape()
# NOTE we do not have to worry about unescaping the distractors because the quiz-editor
# does not render them. However, if we had to do so, remember that we need to add to
# each QuizQuestion a field named alternatives that has the answer + distractors unescaped.

qq.question.title = Markup(qq.question.title).unescape()
qq.question.stem = Markup(qq.question.stem).unescape()
qq.question.answer = Markup(qq.question.answer).unescape()
for d in qq.distractors:
    d.answer = Markup(d.answer).unescape()
# Preparing the list of alternatives for this question (these are the distractors + answer being displayed in the template)
# This comes straight from models.py dump_as_dict for QuizQuestion
tmp1 = []  # list of distractors IDs, -1 for right answer
tmp2 = []  # list of alternatives, including the right answer
tmp1.append(-1)
tmp2.append(Markup(qq.question.answer).unescape())
for d in qq.distractors:
    tmp1.append(Markup(d.id).unescape())  # FIXME not necessary
    tmp2.append(Markup(d.answer).unescape())
qq.alternatives = [list(tup) for tup in zip(tmp1, tmp2)]
shuffle(qq.alternatives)
# now, each QuizQuestion has an additional field "alternatives"
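
A minimal sketch of the proposed refactor, assuming the content arrives as a form/JSON payload in the routes: unescape once, right before the INSERT, so the templates can render DB content directly (the helper and field names are hypothetical):

from markupsafe import Markup

def unescape_question_fields(payload):
    # applied once at write time, instead of on every render
    return {field: Markup(payload[field]).unescape()
            for field in ('title', 'stem', 'answer')}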

Add "total percent grade" column to quiz grader

Right now we display things like 2.6 / 5.6 as the total grade. Let's keep this, but add another column, on the screen and in the CSV file, that provides the same grade as a percentage, e.g., 75.3%. One decimal is probably plenty at that level.

Basing test scripts on CSV comparisons

Modify the Easy Labels Tests scripts so that they end up downloading the CSV and comparing it (diff?) with a hand-verified reference CSV that we have already double-checked to contain the exact correct results.
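
A minimal sketch of the comparison step, assuming the script has already downloaded the CSV; both file names are hypothetical:

import csv

def csv_matches_reference(downloaded='grades.csv', reference='expected_grades.csv'):
    # parse both files so quoting / line-ending differences don't cause the
    # spurious mismatches a raw diff would report
    with open(downloaded, newline='') as f1, open(reference, newline='') as f2:
        return list(csv.reader(f1)) == list(csv.reader(f2))

assert csv_matches_reference(), 'downloaded CSV differs from the hand-verified reference'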

Double quotes and newlines rendering in Justification modals

The modals on the quiz-grader route render the following erroneously:

  • double quotes as \"
  • newlines as \n

In comparison, the question displayed to its left does render them properly. Check for differences along the chain the data followed.

This seems to happen at least in the justifications, likes received, and likes given modals.

Feature - Deadlines management and enforcement

Add the ability for instructors to set deadlines when releasing a quiz:

  • DL0 is for availability
  • DL1 is for completing step1
  • DL2 is for completing step2
  • DL3 is for availability of answers
  • DL4 is for when the quiz should be re-hidden

Update the quizzes/x/take route so as to enforce DL0, DL1, & DL2. Flash appropriate messages back, from Flask or JS fetch/AJAX.

For now, we should not handle students trying to take step1 while step2 is not over, but this might be something to consider adding in the future. Same for not allowing students to see the solutions if they didn't yet participate in step1 and step2. In these scenarios, we should mark the student attempt as "late" or something. A further option would be to prompt students for a justification and invite them to get in touch with their instructor.
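
A minimal sketch of enforcing DL0-DL2 in the take route, assuming the Quiz model gains one datetime column per deadline (the deadline0..deadline2 names are assumptions):

from datetime import datetime
from flask import flash

def deadline_ok(quiz, step):
    now = datetime.now()
    if quiz.deadline0 and now < quiz.deadline0:
        flash('This quiz is not available yet.', 'danger')
        return False
    if step == 1 and quiz.deadline1 and now > quiz.deadline1:
        flash('The deadline for step 1 has passed.', 'danger')
        return False
    if step == 2 and quiz.deadline2 and now > quiz.deadline2:
        flash('The deadline for step 2 has passed.', 'danger')
        return False
    return True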

Likes given / received should show corresponding alternative

It might be useful to list not only the justifications that received or were given likes, but also their corresponding alternative in the quiz (whether it's the solution or not), with its id shown somewhere. Along with the question and answer (already there), this will provide all the context necessary for by-hand grading, if desired.

Compute "participation score" and "participation grade"

We want to encourage students to use likes, especially on worthy justifications.

How do we translate the number of likes a student has given in a quiz into a "participation grade"?

Idea: take into account that if a student gives a like to more than 50% (a percentage based on MaxLikes, see above) of all encountered justifications, they get diminishing returns.
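
A minimal sketch of one way to implement this, mirroring the justification-score formula above so that the total saturates at MaxLikes * LimitingFactor; the 50% threshold is the one suggested in this issue:

def participation_score(likes_given, max_likes, limiting_factor=0.5):
    # each like is worth min((MaxLikes * LimitingFactor) / LikesGiven, 1) point,
    # so the total stops growing once the student passes the threshold
    if likes_given == 0:
        return 0
    cap = max_likes * limiting_factor
    return likes_given * min(cap / likes_given, 1)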

BUG when trying to grade a quiz that no-one took yet

The following internal server error is produced in the scenario detailed in the title:

127.0.0.1 - - [06/Mar/2022 18:27:10] "GET /quizzes-browser HTTP/1.1" 200 -
0 3 0
127.0.0.1 - - [06/Mar/2022 18:27:14] "GET /grades/4 HTTP/1.1" 500 -
Error on request:
Traceback (most recent call last):
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/werkzeug/serving.py", line 323, in run_wsgi
    execute(self.server.app)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/werkzeug/serving.py", line 312, in execute
    application_iter = app(environ, start_response)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app
    response = self.handle_exception(e)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask_login/utils.py", line 272, in decorated_view
    return func(*args, **kwargs)
  File "/home/tux/dev/github/EvoPIE/evopie/routes_pages.py", line 671, in quiz_grader
    q, grades, grading_details, distractors, questions, likes_given, likes_received, count_likes_received, like_scores, justification_grade = get_data(qid)
  File "/home/tux/.local/share/virtualenvs/EvoPIE-wWJma92r/lib/python3.8/site-packages/flask_login/utils.py", line 272, in decorated_view
    return func(*args, **kwargs)
  File "/home/tux/dev/github/EvoPIE/evopie/routes_pages.py", line 519, in get_data
    median, median_indices = find_median(sorted_scores)
  File "/home/tux/dev/github/EvoPIE/evopie/routes_pages.py", line 554, in find_median
    median = (sorted_list[indices[0]] + sorted_list[indices[1]]) / 2
IndexError: list index out of range
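
A minimal sketch of a guard for this crash: per the traceback, find_median in routes_pages.py returns (median, median_indices), so an empty score list should be short-circuited before indexing. This reimplementation is an assumption, not the actual code:

def find_median(sorted_list):
    if not sorted_list:
        return None, []  # no attempts yet; caller should render grades as ' - '
    n = len(sorted_list)
    if n % 2 == 1:
        return sorted_list[n // 2], [n // 2]
    i, j = n // 2 - 1, n // 2
    return (sorted_list[i] + sorted_list[j]) / 2, [i, j]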

Improving layout in all modals

The layout of the information in all modals needs to be improved. Here are some examples:

  • do not center the information but favor left-justification (when appropriate)
  • use padding between multi columns
  • use horizontal or vertical separator lines if this makes things easier to read
  • Provide a header to the modals indicating what they are showing; e.g., "Your justifications, for each question, that received at least a like".
    Instead of providing a header, I have clarified the column labels. This seems to be clear enough.

Improve JS-initiated bootstrap alerts notifications

The alerts triggered by Flask's flash are OK for now. However, we do have a few instances where a fetch or AJAX request triggers a JS alert() or directly updates the innerHTML of an element on the page.
Replace these by having the JS trigger a Bootstrap alert (or a Bootstrap toast, but that would make these non-uniform with the way the flash messages are displayed).

  • #25
  • student page should flash instead of using plain JS alert() for both step1 and step2 submits

Some additional details below, from an attempted duplicate issue I just closed:

  • #24
  • Make sure to use danger / success / primary as the alert class, depending on the severity
  • Update any fetch or AJAX calls so that they dynamically insert a similar alert when relevant
  • Do not use result.innerHTML in quiz-editor as a way to flash an update

Verify layout of modals

Current tests use very short justifications / stems / alternatives.
Try on a real DB to ensure the layouts are all readable with more data / structured data such as code fragments.

Incremental grading

The quiz grader loads all grades up front, which can be quite time-consuming with a non-minimal number of users.

  • Make sure that, on page load, we only query the database to get the minimal amount of information and leave the grade as ' - '
  • Add a way to recalculate the participation grades column, and trigger it any time we change the limiting factor as well
  • Add a way to recalculate the justification grades (this one should be the time-intensive one)
  • Add a way to recalculate the total grade every time we adjust the weights.

Exploit during step 2

Let's say I am a new student: I completed STEP 1, and the instructor now sets the quiz to STEP 2.

Now I go and see a few justifications that I can like.

I was under the assumption that the student would like the justifications and, once they hit the submit button, all these likes would be processed and added to the likes_4_justifications table.

However, I found out that if, as the student, I am on the page, like say three justifications, and then reload the page without submitting, those likes still get added to the likes_4_justifications table even though I never submitted them.

"Too many Likes" warning & participation score / grade display

During step 2 in student.html, we need a way to warn students when their number of likes is such that the value of each of their likes will be decreased. A warning popup could work, but it might be better to display this information somewhere in the page, with a warning shown once the maximal number of recommended likes is reached.

This could be wrapped up together with displaying the current user's participation score / grade for the quiz.

Provide instructor's reference justifications

  • Update the model so that we store instructor authored justifications on why each distractor is wrong
  • Optionally also store why the answer is right --> we are not going to do this
  • When quiz is released in SOLUTIONS mode, display the instructor-authored justifications for each distractor
