fate's Issues

Support syntax tab completion in python debugger

There is a way to adjust the local terminal settings before invoking nc so that
tab completion will be available in IPDB.

SAVED_STTY=$(stty -g)    # save the current terminal settings
stty -icanon -opost -echo -echoe -echok -echoctl -echoke   # raw-like mode so keystrokes reach the remote debugger
nc 127.0.0.1 4444
stty "$SAVED_STTY"       # restore the terminal afterwards

Retrieve stdout as log when OUTPUT_PATH is used

Currently stdout is not retrieved from the finished container when
OUTPUT_PATH is used for the algorithm results.

It should be pulled out in that case as well and reported as additional
logs from the container.
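A minimal sketch of that, assuming the container id is available in $cid and that docker logs is an acceptable way to read the captured stdout/stderr:

# Read whatever the finished container wrote to stdout/stderr and report it
# alongside the results, even when the result itself came from OUTPUT_PATH.
extra_logs=$(docker logs "$cid" 2>&1)
if [ -n "$extra_logs" ]; then
  printf '%s\n' "Additional logs from container $cid:" "$extra_logs"
fi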

fate/analyzer.sh: line 87: printf: -1: invalid option

Analyzer receives -1 as an input and considers the whole thing to be a shit show.
That's because when using printf I ignored the value argument and put the value directly into the format string.

The resolution: add a '%s\n' format string before the actual value.
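Roughly, assuming the offending line prints a single value stored in $value (the variable name is hypothetical):

printf "$value"          # broken: a value such as -1 is parsed as a printf option
printf '%s\n' "$value"   # fixed: the value is passed as data, not as the format string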

Implement wait function

The function should periodically check whether any containers are still running
and return when all of them have finished.

It should also forcibly stop any container that exceeds its running time
limit, despite being specifically told via a docker flag option to run
no more than a preset amount of time.
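A minimal sketch, assuming the started container ids are collected in a CIDS array and a MAX_RUNTIME limit in seconds is configured (both names are assumptions):

wait_for_containers() {
  local start=$SECONDS
  while true; do
    local running=""
    for cid in "${CIDS[@]}"; do
      # docker ps -q --filter id=... prints the id only while the container is running
      if [ -n "$(docker ps -q --filter "id=$cid")" ]; then
        running="yes"
        # Forcibly stop a container that exceeded the allowed running time
        if (( SECONDS - start > MAX_RUNTIME )); then
          docker stop "$cid" > /dev/null
        fi
      fi
    done
    [ -z "$running" ] && return 0
    sleep 1
  done
}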

Support AWS EC2

Just an idea. Not sure if I'll go with it:

  1. What if we spawn N virtuals, each with a docker container for each test case?
  2. What if we spawn 1 virtual with as many CPU cores and as much memory as required for all our test cases?
  3. What if we try to balance between parallel and sequential execution?
  4. Where are we going to store the EC2 image?

Support OUTPUT_PATH paradigm on HackerRank

HackerRank has, for some godforsaken reason, decided to use an environment variable to store the path to the output file, so solutions write to that file instead of writing to stdout.

It could be justified by the fact that not all users are able to print to stderr, and their debug printing to stdout messes up their test runs. Hence, HackerRank changes the system. Well, if that's the case, then a nicely written article about using stderr for debugging in every language present on HackerRank could suffice.

Anyway, we need to support this mess now.
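A minimal sketch on the fate side, assuming /tmp/result.txt is the path fate chooses to expose and $image / $workdir are placeholders for however fate starts the solution container:

# Tell the solution where to write its results, then copy the file back out.
cid=$(docker run -d -e OUTPUT_PATH=/tmp/result.txt "$image")
docker wait "$cid" > /dev/null
docker cp "$cid:/tmp/result.txt" "$workdir/actual_output.txt"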

Verify debugger is listening on port before connecting

Currently a fixed wait of 3 seconds is implemented before connecting to the
debugger via nc.

This has to be replaced with a proper check via lsof, netstat, or nmap inside
the container before trying to invoke nc.

Something along the lines of:

local start=$SECONDS
while true; do
  # The debugger is ready once something inside the container listens on 4444
  local result
  result=$(docker exec "$cid" lsof -i :4444 2>/dev/null)
  if [ -n "$result" ]; then
    break
  fi
  # Give up after roughly 10 seconds
  if (( SECONDS - start > 10 )); then
    return 1
  fi
  sleep 0.5
done

Collecting results from container - error reported outside log when path does not exist

[Sat Jan 11 20:50:45 MSK 2020] [DEBUG]   Collecting results from all containers...
Error: No such container:path: 9fafe7c79327ab319b85d231d29ea98b86e664efad3c3a78b9b411ddbb47cbd7:/tmp/result.txt

In this case the algorithm failed to write the file due to a runtime error in the algorithm.

This should not surface as a raw error from docker cp. Instead, the error must be caught and
reported together with the other errors.
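A minimal sketch of catching the failure, assuming $dest and a report_error helper already exist (both are assumptions):

# docker cp fails when the path does not exist inside the container;
# capture that failure instead of letting it leak into the output.
if ! cp_error=$(docker cp "$cid:/tmp/result.txt" "$dest" 2>&1); then
  report_error "Container $cid produced no result file: $cp_error"
fi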

Collect logs in main module rather than docker

Currently logs are collected in the docker module, after the docker container
has finished executing.

I want the docker module to return immediately and instead expose a
function that retrieves the logs based on the CID returned from the
initial execute_in_docker_container function invocation.
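A minimal sketch of such a function (the name get_container_logs is an assumption; execute_in_docker_container is assumed to print the CID):

# Retrieve logs for a container started earlier by execute_in_docker_container.
get_container_logs() {
  local cid="$1"
  docker logs "$cid" 2>&1
}

cid=$(execute_in_docker_container "$image" "$run_command")
# ... later, once the container has finished ...
logs=$(get_container_logs "$cid")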

Bind for 0.0.0.0:4576 failed: port is already allocated

When running without the --debug flag, fate reports that the debug port is already allocated.

This happens because fate launches all the test pairs simultaneously, in parallel docker containers.
Each launched pair maps the same host port to the debugger, which can't be done.

Actually, the same thing will happen when we launch with the --debug flag: as soon as we are finished
with the first container and its test pair we switch to the next one, and the next one will have a conflict
because something has already mapped itself to port 4576.

By the way, what if the user already has something running on that port? Wouldn't it be better to add some
heuristics and randomization to dance around such issues?
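One possible way around it, as a sketch: let Docker pick a free ephemeral host port and read the mapping back instead of hard-coding the host side of 4576 (the container-side port and $image are assumptions):

# Publish the debugger port to a random free host port chosen by Docker.
cid=$(docker run -d -p 127.0.0.1::4576 "$image")
# Ask Docker which host port was actually assigned, e.g. "127.0.0.1:49172".
host_port=$(docker port "$cid" 4576 | head -n1 | cut -d: -f2)
nc 127.0.0.1 "$host_port"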

Asynchronous execution of test cases

Be able to run a multitude of docker containers in parallel.

  1. For every test case
  2. Start a docker container
  3. Whichever of the two is easier (a rough sketch of the first option follows below):
    1. Wait for all containers to finish, then read their logs
    2. For every container that has finished, read its logs and report it immediately via diff (preferred)
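A rough sketch of the first option, assuming test inputs live under tests/*.in and that $image and $run_command describe how the solution is started (all of these are assumptions):

declare -a cids=()
for input in tests/*.in; do
  # One detached container per test case; the input file is mounted read-only.
  cid=$(docker run -d -v "$PWD/$input:/tmp/input.txt:ro" "$image" \
        sh -c "$run_command < /tmp/input.txt")
  cids+=("$cid")
done
for cid in "${cids[@]}"; do
  docker wait "$cid" > /dev/null               # blocks until this container exits
  docker logs "$cid" > "results/$cid.log" 2>&1
done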

Fate reports correct output as wrong (trailing cr)

Fate reported the exact same outputs from the algorithm and the expected output file as wrong.

Algorithm from my hackerrank-solutions repo: cats-and-a-mouse

The same 2 lines of output on both sides, yet an error is reported.

Turns out tuning diff to strip the trailing CR might solve the problem.
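GNU diff has a flag for exactly this; a sketch, with the file names being assumptions:

# Ignore a trailing carriage return at the end of each line when comparing.
diff --strip-trailing-cr actual_output.txt expected_output.txt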

Support remote python3 debugging

The idea is like this:

  1. If language is Python3
  2. For each discovered test case
  3. Start a docker container with a python debugger (a rough sketch follows after this list)
  4. Attach to debugger in that container
  5. Debug the code in the container using local machine
  6. Be able to see the output right away if it's printed
  7. If halted or finished, exit as normal
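A rough sketch of steps 3 and 4, assuming a remote debugger such as the remote-pdb package is used and that the solution calls breakpoint() where debugging should start; the image, paths, and environment variables below are all assumptions rather than anything fate already does:

cid=$(docker run -d -p 4444:4444 -v "$PWD:/src" python:3 \
      sh -c "pip install remote-pdb && \
             REMOTE_PDB_HOST=0.0.0.0 REMOTE_PDB_PORT=4444 \
             PYTHONBREAKPOINT=remote_pdb.set_trace \
             python3 /src/solution.py < /src/input.txt")
# ... once port 4444 is listening inside the container (see the lsof check above),
# attach from the local machine:
nc 127.0.0.1 4444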

Support multiple pairs of input/output files running sequentially

Currently only one pair of files, representing a single test case, is supported,
supplied via the -i and -o arguments.

As per the requirements in the README, multiple pairs must be supported via
autodiscovery of test case files.

In this feature only sequential execution of all test pairs must be
implemented. This is not the cloud-like run-all-test-cases-simultaneously
case.
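A minimal autodiscovery sketch, assuming test pairs follow a tests/NAME.in / tests/NAME.out naming convention and a run_test_case helper runs one pair (both the convention and the helper are assumptions):

for input in tests/*.in; do
  expected="${input%.in}.out"           # matching expected output for this input
  [ -f "$expected" ] || continue        # skip inputs without an expected output
  run_test_case "$input" "$expected"
done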

Cleanup after test run

Remove any created and stopped containers. Use --force and also delete any associated volumes via the --volumes flag.
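A minimal sketch, assuming fate labels its containers with fate=true when it starts them (the label is an assumption):

# Remove every container created by this run, together with its anonymous volumes.
cids=$(docker ps -aq --filter "label=fate=true")
if [ -n "$cids" ]; then
  docker rm --force --volumes $cids
fi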
