License: GNU General Public License v3.0

GENEA 2020 BVH Visualizer

example from visualization server
Example output from the visualization server

This repository provides scripts that can be used to visualize BVH files. The scripts were developed for the GENEA Challenge 2020 and enable reproducing the visualizations used for the challenge stimuli. The server consists of several containers which are launched together with the docker-compose command described below. The components are:

  • web: the HTTP server, which receives render requests and places them on a "celery" queue for processing.
  • worker: takes jobs from the "celery" queue and works on them. Each worker runs one Blender process, so increasing the number of workers adds more parallelism.
  • monitor: a monitoring tool for celery. The default username is user and the password is password (these can be changed by setting FLOWER_USER and FLOWER_PWD when starting the docker-compose command).
  • redis: the message broker required by celery.

GENEA Challenge 2022 BVH Visualizer

A newer version of the visualizer, used for the GENEA Challenge 2022, can be found in this fork.

Build and start visualization server

First you need to install docker-compose: sudo apt install docker-compose (on Ubuntu)

You might want to edit some of the default parameters, such as render resolution and fps, in the .env file.

Then to start the server run docker-compose up --build

To run several workers (for example 3), which parallelizes rendering across multiple Blender renderers, run docker-compose up --build --scale worker=3

The -d flag can also be passed in order to run the server in the background. Logs can then be accessed by running docker-compose logs -f. Additionally, it is possible to rebuild just the worker or API containers with minimal disruption to the running server by running, for example, docker-compose up -d --no-deps --scale worker=2 --build worker. This rebuilds the worker container, stops the old workers, and starts 2 new ones.

Use the visualization server

The server is HTTP-based and works by uploading a bvh file. You will then receive a "job id" which you can poll in order to see the progress of your rendering. When it is finished you will receive a URL to a video file that you can download. Below are some examples using curl, and the file example.py contains a complete Python (3.7) example of how the server can be used.

Since the server is available publicly online, a simple authentication system is included – just pass in the token j7HgTkwt24yKWfHPpFG3eoydJK6syAsz with each request. This can be changed by modifying USER_TOKEN in .env.

For a simple usage example, you can see a full python script in example.py.

Otherwise, you can follow the detailed instructions on how to use the visualization server provided below.

Depending on where you host the visualization server, SERVER_URL might differ. If you are just running it locally on your machine you can use 127.0.0.1; otherwise, use the IP address of the machine that is hosting the server.

curl -XPOST -H "Authorization:Bearer j7HgTkwt24yKWfHPpFG3eoydJK6syAsz" -F "file=@/path/to/bvh/file.bvh" http://SERVER_URL/render will return a URI to the current job /jobid/[JOB_ID].

curl -H "Authorization:Bearer j7HgTkwt24yKWfHPpFG3eoydJK6syAsz" http://SERVER_URL/jobid/[JOB_ID] will return the current job state, which might be any of:

  • {"result": {"jobs_in_queue": X}, "state": "PENDING"}: the job is in the queue and waiting to be rendered. The jobs_in_queue property is the total number of jobs waiting to be executed. The order of job execution is not guaranteed, so this number does not reflect how many jobs are ahead of the current job, but rather whether the server is currently busy.
  • {"result": null, "state": "PROCESSING"}: the job is currently being processed. Depending on the file size this might take a while, but it acknowledges that the server has started working on the request.
  • {"result": {"current": X, "total": Y}, "state": "RENDERING"}: the job is currently being rendered; this is the last stage of the process. current is the last rendered frame and total is how many frames this job will render in total.
  • {"result": FILE_URL, "state": "SUCCESS"}: the job finished successfully and the video is available at http://SERVER_URL/[FILE_URL].
  • {"result": ERROR_MSG, "state": "FAILURE"}: the job failed, and the error message is given in result.
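
When scripting against the server, the five states above can be folded into a small dispatch helper. The function below is a hypothetical convenience, not part of the repository:

```python
def describe_job_state(payload):
    """Map a /jobid/[JOB_ID] status payload to a one-line description."""
    state = payload["state"]
    result = payload.get("result")
    if state == "PENDING":
        return "queued ({} jobs in queue)".format(result["jobs_in_queue"])
    if state == "PROCESSING":
        return "server is processing the uploaded file"
    if state == "RENDERING":
        return "rendering frame {}/{}".format(result["current"], result["total"])
    if state == "SUCCESS":
        return "done, video at " + result
    if state == "FAILURE":
        return "failed: " + str(result)
    return "unknown state: " + state
```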

In order to retrieve the video, run curl -H "Authorization:Bearer j7HgTkwt24yKWfHPpFG3eoydJK6syAsz" http://SERVER_URL/[FILE_URL] -o result.mp4. Please note that the server will delete the file after you retrieve it, so you can only retrieve it once!
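
The whole upload, poll, and download flow from the curl examples above can be sketched in Python using only the standard library. Note that example.py in the repository is the authoritative client; SERVER_URL and the output filename here are placeholders:

```python
import io
import json
import time
import urllib.request
import uuid

SERVER_URL = "127.0.0.1"  # placeholder: the host running the visualization server
TOKEN = "j7HgTkwt24yKWfHPpFG3eoydJK6syAsz"  # default USER_TOKEN from .env

def auth_headers(token=TOKEN):
    return {"Authorization": "Bearer " + token}

def encode_multipart(field, filename, payload):
    """Encode one file as multipart/form-data without external dependencies."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write("--{}\r\n".format(boundary).encode())
    body.write(
        'Content-Disposition: form-data; name="{}"; filename="{}"\r\n\r\n'
        .format(field, filename).encode()
    )
    body.write(payload)
    body.write("\r\n--{}--\r\n".format(boundary).encode())
    return body.getvalue(), "multipart/form-data; boundary=" + boundary

def render(bvh_path, out_path="result.mp4"):
    # 1. Upload: POST /render returns a job URI such as /jobid/[JOB_ID].
    with open(bvh_path, "rb") as f:
        data, content_type = encode_multipart("file", bvh_path, f.read())
    headers = dict(auth_headers(), **{"Content-Type": content_type})
    req = urllib.request.Request(
        "http://{}/render".format(SERVER_URL), data=data, headers=headers)
    job_uri = urllib.request.urlopen(req).read().decode().strip()

    # 2. Poll the job state until it succeeds or fails.
    while True:
        req = urllib.request.Request(
            "http://{}/{}".format(SERVER_URL, job_uri.lstrip("/")),
            headers=auth_headers())
        status = json.loads(urllib.request.urlopen(req).read())
        if status["state"] == "SUCCESS":
            file_url = status["result"]
            break
        if status["state"] == "FAILURE":
            raise RuntimeError(status["result"])
        time.sleep(2)

    # 3. Download the video; remember the server deletes it after one retrieval.
    req = urllib.request.Request(
        "http://{}/{}".format(SERVER_URL, file_url.lstrip("/")),
        headers=auth_headers())
    with open(out_path, "wb") as out:
        out.write(urllib.request.urlopen(req).read())
    return out_path
```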

Replicating the GENEA Challenge 2020 visualizations

The parameters in the enclosed file docker-compose-genea.yml correspond to those that were used to render the final evaluation stimuli of the GENEA Challenge, for ease of replication.

If you use this code in your research please cite our IUI article:

@inproceedings{kucherenko2021large,
  author = {Kucherenko, Taras and Jonell, Patrik and Yoon, Youngwoo and Wolfert, Pieter and Henter, Gustav Eje},
  title = {A Large, Crowdsourced Evaluation of Gesture Generation Systems on Common Data: {T}he {GENEA} {C}hallenge 2020},
  year = {2021},
  isbn = {9781450380171},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3397481.3450692},
  doi = {10.1145/3397481.3450692},
  booktitle = {26th International Conference on Intelligent User Interfaces},
  pages = {11--21},
  numpages = {11},
  keywords = {evaluation paradigms, conversational agents, gesture generation},
  location = {College Station, TX, USA},
  series = {IUI '21}
}

genea_visualizer's People

Contributors

dependabot[bot], jonepatr, svito-zar, teonikolov


genea_visualizer's Issues

problem on server building

Hi Patrik, thanks for your work. I am using Ubuntu 16.04, and when I run the following command I run into a problem.

sudo /usr/local/bin/docker-compose up --build

WARNING: Dependency conflict: an older version of the 'docker-py' package may be polluting the namespace. If you're experiencing crashes, run the following command to remedy the issue:
pip uninstall docker-py; pip uninstall docker; pip install docker
Pulling redis (redis:latest)...
ERROR: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

Look forward to your help,

FYI about my docker-compose:
sudo /usr/local/bin/docker-compose --version
WARNING: Dependency conflict: an older version of the 'docker-py' package may be polluting the namespace. If you're experiencing crashes, run the following command to remedy the issue:
pip uninstall docker-py; pip uninstall docker; pip install docker
docker-compose version 1.18.0, build 8dd22a9

Kelvin

Using visualizer with non-GENEA data?

I tried using this visualizer to render a BVH file coming from another dataset (not the GENEA dataset), where the skeleton has different joints and the naming conventions differ. I changed a few parts of the code, such as the part that validates the BVH, but I can't seem to get a rendered result, since the server is stuck on "Processing the file (this can take a while depending on file size)". The BVH file is not large (90). I was wondering if I'm missing something and whether you could provide any pointers to the modifications that need to be made.

Where the blender folder should be put

I found that the Dockerfile downloads and uncompresses a file (blender). Because of my internet connection it downloaded quite slowly, so I downloaded it manually, but I don't know where to put the folder. I found that the Dockerfile creates the blender directory in the root folder, but I don't have permission for that on my lab machine, so I put it in my home directory. Is it OK to do so? If so, what should I modify to make it work? And if I put blender in /home/abc, what SERVER_URL should I use?

come from speech-driven-hand-gesture-generation

Sorry to disturb you again; I am very interested in your work.
From speech-driven-hand-gesture-generation I get a txt file. Could I turn the txt file into a video like the one below using this project, or can this project only convert bvh files into such videos?
Thanks!
[image]

Get "list index out of range" error

Thank you very much for sharing this visualization tool. I want to set it up to test our results.
I built the docker image according to the README. When I ran example.py with the bvh file you provided in "GENEA Challenge 2020 submitted BVH files", I got a "list index out of range" error. I wonder what the reason is?
Thanks

Failed with bvh file obtained from OpenPose joint

I used video2bvh to get a bvh file from a video. In video2bvh, the 2D joints are estimated by OpenPose and lifted to 3D joints by 3d-pose-baseline. Here is a sample bvh file I produced.

However, the visualizer doesn't seem to work with this format.
How can I convert the 3D coordinates of the 8 upper-body joints obtained from OpenPose into a bvh file that this visualizer can handle?

How to set video resolution?

Hi, I intend to modify the video resolution; however, when I set the new value in the .env file or in blender_render.py, it does not take effect.
