
dynalab's Introduction

What is Dynalab?

Imagine you developed a fancy model, living in /path/to/fancy_project, and you want to share your amazing results on the Dynabench model leaderboard. Dynalab makes it easy for you to do just that.

License

Dynalab is MIT-licensed.

Installation

Dynalab has been tested on Python 3.6+, on Mac OS and Ubuntu.

git clone https://github.com/facebookresearch/dynalab.git
cd dynalab
pip install -e .

You will also need to install docker. The Docker version we use is 20.10.5.

Model submission workflow

Step 1: Initialize the project folder

Run the following command to initialize your project folder for model upload:

$ cd /path/to/fancy_project # We will refer to this as the root path
$ dynalab-cli init -n <name_of_your_model>

Follow the prompts to configure this folder. You can find more information about the configuration fields by running dynalab-cli init -h. Make sure that all files you specified during initialization are physically inside this project folder, i.e. not soft-linked from elsewhere; otherwise you may encounter errors later and your model deployment may fail. You should also assume there is no external internet access from inside the docker container.

From now on, always run dynalab-cli from the root path; otherwise it may not resolve your files correctly and you may see confusing errors.
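For reference, dynalab-cli init stores your answers in a config file. The exact file location and field names may vary between versions, so treat the sketch below as illustrative only: it uses the entries discussed later in this README (model_files, requirements, setup, exclude) plus assumed fields such as task, checkpoint and handler.

{
    "task": "<your_task>",
    "checkpoint": "checkpoint.pt",
    "handler": "handler.py",
    "model_files": [],
    "requirements": false,
    "setup": false,
    "exclude": []
}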

Step 2: Complete the model handler

If you don't already have a handler file, we will have created a template for you at ./handler.py with instructions to fill in. The handler file defines how your model takes inputs, runs inference, and returns a response. Follow the instructions in the template file to complete the handler.

For the expected model I/O format of your task, check the definitions here.
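For orientation only, a handler roughly follows the TorchServe custom-handler pattern: load the model once, then turn each incoming request into a response that matches the task's I/O definition. The sketch below uses placeholder names (checkpoint.pt, the response keys) and is not the actual dynalab template, which defines the exact structure to fill in.

import json
import os

import torch


class Handler:
    # Illustrative TorchServe-style handler: load once, answer per request.

    def initialize(self, context):
        # context.system_properties holds the serving directory and device info
        model_dir = context.system_properties.get("model_dir")
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        # Load whatever you saved at init time; "checkpoint.pt" is a placeholder name
        self.model = torch.load(
            os.path.join(model_dir, "checkpoint.pt"), map_location=self.device
        )

    def handle(self, data, context):
        # data is a list of requests; each body should follow the task's I/O definition
        body = data[0].get("body") or data[0].get("data")
        example = json.loads(body) if isinstance(body, (bytes, str)) else body
        # Replace this with real preprocessing, inference and postprocessing;
        # the response keys below are placeholders, not a real task schema
        response = {"id": example.get("uid"), "label": "placeholder"}
        return [json.dumps(response)]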

Step 3: Quickly check correctness with a local test

Now that you have completed the handler file, run a local test to check that your code works correctly.

$ dynalab-cli test --local -n <name_of_your_model>

If your local test is successful, you'll see "Local test passed" on the prompt. You can then move on to the next step. Otherwise, fix your project according to the error prompt and re-run this step until the output is error free.

Exclude large files / folders You may get an error if your project folder is too big (e.g. more than 2GB). You can reduce its size by excluding files / folders that are not relevant to your model (e.g. unused checkpoints). To do this, add the paths to the files / folders that you want to exclude into the config by running dynalab-cli init -n <name_of_your_model> --amend and update the exclude entry, e.g.

{
    "exclude": ["checkpoints_folder", "config/irrelevant_config.txt"]
}

Remember not to exclude files / folders that are used by your model.

Step 4: Check dependencies with the integrated test

The integrated test will run the test inside a mock docker container to simulate the deployment environment, which is on Ubuntu 18.04 and uses Python 3.6 (see dockerfile for detailed version information). It may take some time to download dependencies in the docker.

$ dynalab-cli test -n <name_of_your_model>

If the integrated test is successful, you'll see "Integrated test passed" on the prompt. You can then proceed to the next step. Otherwise, please follow the on-screen instructions to check the log and fix your code / dependencies, and repeat this step until the output is error free.

If the integrated test is unsuccessful, it is possible that your machine lacks the resources to run the deployment environment in docker, or that you do not have sufficient resources allocated to docker. If this happens, your log file will show that workers crashed but will not include an error that references your handler.py. Uploading your model could result in fully functional behavior on our server in this scenario, even though the integrated test fails. However, we would strongly recommend running and passing the integrated test with more allocated resources prior to model upload.

Third party libraries If your code uses third-party libraries, you may specify them via either requirements.txt or setup.py. Then call dynalab-cli init -n <name_of_your_model> --amend to update the corresponding entry in the config file

{
    "requirements": true | false, # true if installing dependencies using requirements.txt
    "setup": true | false # true if installing dependencies using setup.py
}

Some common libraries are pre-installed so you do not need to include them in your requirements.txt or setup.py, unless you need a different version. Please check the dockerfile for the supported libraries. At the moment, supported libraries include

torch==1.7.1
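For instance, if your handler also needs packages that are not pre-installed, a minimal requirements.txt might look like the following (the package names and versions are purely illustrative):

# Only list packages that are not already in the docker image,
# or packages where you need a specific version.
sentencepiece==0.1.95
sacremoses==0.0.45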

Extra model files There may be a config file, or self-defined modules that you want to read or import in your handler. There are two ways to do this.

  1. Include these files in the dynalab config and read / import them directly without worrying about paths. This also means that the file structure will be flattened. First, run dynalab-cli init -n <name_of_your_model> --amend and fill the model_files entry with the list of file paths inside the root directory, e.g.
    {
        "model_files": ["configs/model_config.json", "src/my_model.py"]
    }
    
    Then, in the handler, you can read the config by its file name, i.e. no path needs to be specified:
    with open("model_config.json") as f:
        config = json.load(f)

    and import the module directly by its name, again with no path:
    import my_model

    We recommend using this method for reading files (e.g. configs, vocabularies), which are often flat-structured by nature.
  2. If you do not want to flatten the file structure (e.g. because there are too many dependencies involved), you do not need to add these files to the dynalab config. First of all, notice that there is a ROOTPATH variable available in your handler template. Supposing the file locations are the same as those specified above (configs/model_config.json and src/my_model.py), you can read the config with
    with open(os.path.join(ROOTPATH, "configs", "model_config.json")) as f:
        config = json.load(f)
    
    and import the module by
    import sys
    sys.path.append(ROOTPATH) # you can uncomment this line in the handler template
    from src import my_model
    
    We recommend using this method for importing self-defined modules.

Step 5: Submit your model

Make sure you pass the integrated test in Step 4 before submitting the model, otherwise your model deployment might fail. You will first need to log in by running

$ dynalab-cli login

You will be taken to the Dynabench website where you'll see an API token (you'll be asked to log in there if you haven't). Click the "Copy" button on the webpage and paste that back in the terminal prompt.

To upload your model, run

$ dynalab-cli upload -n <name_of_your_model>

Follow the on-screen instructions for uploading the model. After the model is uploaded, it will enter our deployment queue, and you will receive an email when the deployment is done. If deployment is successful, your model will be evaluated on the datasets for that task, and you will be able to see the results on your model page. You can then publish the model for the results to be shown in the leaderboard.

How do I get help if I run into trouble?

Please create an issue.

dynalab's People

Contributors

anmol1707, gwenzek, kokrui, ktirumalafb, maxbartolo, mazhiyi, tristanthrush


dynalab's Issues

Run local test in the same process

We currently do subprocess.run(['python', 'test_handler.py']) in the local test (https://github.com/fairinternal/dynalab/blob/master/dynalab_cli/test.py#L59), and there are a couple of inconveniences, e.g.

  1. The user's python may be different from the python under which dynalab is installed. @soniakris5398 noticed this issue when testing the code: she got a module import error saying that dynalab was not found, because dynalab was installed explicitly under python3 in her path. We should avoid cases where the user needs to worry about their python path (see the sketch after this list).
  2. We need to compose a test_handler file, and we currently do this in python code, which is prone to indentation errors and is hard to maintain. https://github.com/fairinternal/dynalab/blob/master/dynalab_cli/test.py#L66-L92
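A possible fix for point 1 (a sketch, not the current implementation) is to launch the test with the interpreter that is running dynalab-cli itself:

import subprocess
import sys

# sys.executable is the python running dynalab-cli, so the dynalab package
# installed alongside it is guaranteed to be importable in the child process.
subprocess.run([sys.executable, "test_handler.py"], check=True)

Running the test in the same process (importing the handler directly instead of generating a test_handler file) would additionally address point 2.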

The shown average score differs from the manually calculated mean of individual scores

Hi,

I found a bug in the leaderboard score. When we submit a model, we get the average score and the individual scores for each language pair, but they do not match, even allowing for rounding after the second decimal place. For my model (https://dynabench.org/models/250), the shown average score for "Leaderboard Datasets" is 27.59; however, when I calculate the average myself, it should be 28.21. The same holds for "Non-Leaderboard Datasets": the shown score is 27.89 and my calculation is 28.50. Could you check this for me? If this is indeed a bug, then all of the shown average scores on the leaderboard are misleading.

Or do you calculate some kind of weighted average?

[2/N] Dynalab init

  • Base setup_config template
  • Create base handlers for tasks to be inherited
  • Implement the workflow

Dynalab README

  • How to use dynalab

  • Should include a requirements file for dependencies

  • Path for transformers config (where to put extra files)

  • There are two ways to include dependencies

Taken Down on Dynabench for the Large Track

Hi,
I passed the local and integrated tests following the model submission workflow on GitHub and submitted my model. I received the notification "Your model t1 has been successfully deployed. You can find and publish the model at https://dynabench.org/models/119." (python handler.py, dynalab-cli test --local, and dynalab-cli test -n all passed on our local server.) However, the status now shows "Taken Down". Would you mind sending me the detailed log information so we can debug our code? Thanks very much!

Question about the dockerfile

When I execute the command "dynalab-cli test -n <name_of_your_model>", the docker build fails. The dockerfile contains:
WORKDIR /home/model-server/code
RUN if [ -f setup.py ] && [ ${setup} = True ]; then python -m pip install --no-cache-dir --force-reinstall -e code; fi
The error reports that it cannot find "code".
I think "RUN if [ -f setup.py ] && [ ${setup} = True ]; then python -m pip install --no-cache-dir --force-reinstall -e code; fi" should be "RUN if [ -f setup.py ] && [ ${setup} = True ]; then python -m pip install --no-cache-dir --force-reinstall -e .; fi", since we already have "WORKDIR /home/model-server/code".

Translate with ensemble models instead of one model

Hi,
I am taking part in the WMT21 FLORES competition. Is it possible for us to use an ensemble of multiple checkpoints instead of only the best one? This is quite common for translation tasks. If yes, how could I do that? I use this handler. When I change line 287 to
manifest = {"model": {"serializedFile": "model1.pt:model2.pt"}}, it doesn't work.

fix init - torchserve extra-files flat structure

The extra files can come from any subfolder - they don't need to be in the same directory as handler.py when we run torch-model-archiver.

However, when torchserve starts to serve a model, the extra files will be moved into the same tmp directory as handler.py - an example model_dir at serving time will look like this

list model dir:  ['__pycache__', 'checkpoint.pt', 'MAR-INF', 'handler.py', 'test_config.txt']

We shouldn't prevent users from adding extra files that do not sit in the same directory as handler.py, but we should warn them that these files will be moved at serving time. There are two ways to read a file in handler.py:

  1. If the file is added to extra-file in torch-model-archiver, one can read it directly using the filename, without any path prefix. For single files (e.g. text files) that do not have other dependencies, this is a recommended way.
  2. The file can also be read as with open('/home/model-server/code/<original_relative_path>') as f, since the whole project directory will be added to the docker. /home/model-server/code is just where we put the project folder in the docker image.

The same applies if one needs to import a module from the project folder

  1. If the file is added as an extra file in torch-model-archiver, importing it directly by its module name is enough.
  2. If not, call sys.path.append('/home/model-server/code') before importing, and then import the module as if you were working normally in the project folder. For modules that depend on other files, this is the recommended way.

Caveat: users will see inconsistent results between the local test and the integrated test whenever the file they read / import is not in the same directory as handler.py. This is caused by torchserve moving the modules and docker renaming the parent directory. The solutions above are all for passing the integrated test, which is equivalent to ensuring the model can be deployed successfully on SageMaker.
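To make the two options concrete, a handler fragment might look like the sketch below (file and folder names are hypothetical; '/home/model-server/code' is where the project folder lands in the docker image, as noted above):

import json
import sys

# Option 1: the file was passed to torch-model-archiver as an extra file,
# so at serving time it sits next to handler.py and can be opened by name.
with open("test_config.txt") as f:
    flat_config = f.read()

# Option 2: the file was not archived; read it via its original relative
# path under the project folder that was copied into the image.
with open("/home/model-server/code/configs/test_config.txt") as f:
    nested_config = f.read()

# The same two options apply to imports: either import the archived module
# by name, or extend sys.path with the project folder first.
sys.path.append("/home/model-server/code")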

Validate response format in integrated test

We need to figure out what dynabench expects for each task and check that a user-uploaded model returns the correct response. This includes:

  • correct keys
  • correct value type
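A minimal sketch of such a check (the keys and types per task would come from the task definitions; the ones used in the example call are placeholders):

def validate_response(response, expected_types):
    # Check that a model response contains the expected keys with the expected value types.
    missing = set(expected_types) - set(response)
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    for key, expected_type in expected_types.items():
        if not isinstance(response[key], expected_type):
            raise TypeError(
                f"key '{key}' should be {expected_type.__name__}, "
                f"got {type(response[key]).__name__}"
            )


# Placeholder schema; real tasks define their own keys and types.
validate_response(
    {"id": "1", "label": "entailment", "prob": 0.9},
    {"id": str, "label": str, "prob": float},
)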

Dynalab base handler

See if it's possible to write a base handler that applies to most cases, and if so provide a base handler for tasks.

Possible risks that may prevent us from developing a base handler, if these aspects vary a lot between specific models:

  • How to load model
  • Loading vocabulary?
  • How to parse the model prediction to our expected response?

Large model not evaluating on the devtest set

Hi! We're having trouble sending in our large model for the contest.

When we sent in our base model (Model 278), it evaluated correctly on both the dev set and the devtest set. However, when we sent in our large model (Model 283) with the exact same handler as the base model, it only evaluated on the dev set, not the devtest set. I was using batch size 128 for both of these submissions.

I tried reducing the batch size a little (Model 284), and the same problem occurred. I also tried increasing the batch size a little (Model 293), and it gave me a takendown status. I'd rather not trial-and-error batch sizes since there is a submission limit per day, so I decided to post an issue instead.

Can we get any advice on what to do here? Thank you in advance for the help!

Integrated test taking too much memory

One potential issue is that we ping the model server from inside the docker container, and this becomes more of an issue with multiple test examples (i.e. multiple pings).

It would be nice if we could just set up docker run as a server and ping it from the host.
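For example, if the container were started with the model server's inference port published, the host could ping it with something like this (the port, endpoint, and payload assume TorchServe defaults and a hypothetical model name):

import requests

# Assumes: docker run -p 8080:8080 <image>, with the model registered as "my_model".
resp = requests.post(
    "http://localhost:8080/predictions/my_model",
    json={"uid": "1", "context": "example input"},
)
print(resp.status_code, resp.text)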

Submission Limit Exceeded Warning: Provide Additional Details

Submission Limit Exceeded Warning should state what the submission limit is, and when it will get reset

Also, minor bug report: the first time you try to upload a model when you've exceeded the submission limit, you get a warning such as Failed to submit model electra-synqa due to submission limit exceeded. If you try again, you will get OverflowError: string longer than 2147483647 bytes.

If you remove the tar.gz file from your root directory, you go back to the original submission limit warning (and if you try again after this, you will get the OverflowError again).

Different model sizes between pretrained model and finetuned model

Hi,

when I check the model size of my finetuned model against your provided pretrained model, I notice they are different.

There are two major dicts in the checkpoint, i.e. "model" and "last_optimizer_state". If both are in fp32, the size of "last_optimizer_state" should be roughly twice as big as "model", since the adam optimizer stores first and second moments.
For the pretrained models you offer, the sizes are:

  • For pretrained MM100_175M, the size of "model": 336M, the size of "last_optimizer_state": 1.4G
  • For pretrained MM100_615M, the size of "model": 1.2G, the size of "last_optimizer_state": 4.7G

This makes sense, because the pretrained "model" is in fp16 and "last_optimizer_state" is in fp32, so the size of "last_optimizer_state" should be roughly four times that of "model".

However, when I finetune the pretrained model, I run into some problems.

  1. The "model" is saved in fp32 instead of fp16, even though I train with --fp16. My training config is as:
DATA=/path/to/data
TOOL=/path/to/fairseq/train.py
PRETRAINED_MODEL=/path/to/flores101_mm100_615M/model.pt
lang_pairs=/path/to/language_pairs.txt

python $TOOL \
    $DATA \
    --dataset-impl mmap \
    --arch transformer_wmt_en_de_big \
    --dropout 0.1 --attention-dropout 0.1 \
    --encoder-embed-dim 1024 --decoder-embed-dim 1024 \
    --encoder-attention-heads 16 --decoder-attention-heads 16 \
    --encoder-ffn-embed-dim 4096 --decoder-ffn-embed-dim 4096 \
    --encoder-normalize-before --decoder-normalize-before \
    --encoder-layers 12 --decoder-layers 12 \
    --share-all-embeddings \
    --restore-file $PRETRAINED_MODEL \
    --task translation_multi_simple_epoch \
    --encoder-langtok "src" --decoder-langtok \
    --lang-pairs $lang_pairs \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --optimizer adam --adam-eps 1e-08 --adam-betas '(0.9, 0.98)' \
    --fp16 --fp16-init-scale 128  --fp16-scale-tolerance 0.0  --memory-efficient-fp16 \
    --lr-scheduler inverse_sqrt --lr 8e-04 --warmup-init-lr 1e-07 --warmup-updates 2500 \
    --max-tokens 2048  \
    --save-interval 1  
  2. The sizes of "model" and "last_optimizer_state" are odd:
  • For finetuned MM100_175M, the size of "model" is 1.7G, the size of "last_optimizer_state" is 1.4G.
  • For finetuned MM100_615M, the size of "model" is 4.3G, the size of "last_optimizer_state" is 4.7G.

The sizes of "model" and "last_optimizer_state" are comparable, which is strange to me. Besides, even though I manually change the float of "model" to half, I can only obtain half size of the "model" that is still different with your pretrained "model". For your convenience, you can check my 615M model at https://dynabench.org/models/250

Do you have any ideas for this?
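For reference, one way to inspect the dtypes and in-memory size of the "model" dict in such a checkpoint is along these lines (a sketch; adjust the path, and note it assumes every entry in "model" is a tensor):

import torch

ckpt = torch.load("checkpoint.pt", map_location="cpu")
model_state = ckpt["model"]

total_bytes = 0
dtypes = set()
for name, tensor in model_state.items():
    dtypes.add(tensor.dtype)
    total_bytes += tensor.numel() * tensor.element_size()

print("dtypes in 'model':", dtypes)
print("size of 'model': %.2f GB" % (total_bytes / 1024 ** 3))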

Failed to submit large model

Hi, I used the command dynalab-cli upload -n flores-ft-v3 to upload my submission, but it failed.

Config file validated
Tarballing the project directory...
Uploading files to S3. For large submissions, the progress bar may hang a while even after uploading reaches 100%. Please do not kill it...
Uploading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.14G/8.14G [14:40<00:00, 9.93MB/s]
Failed to submit model due to: 504 Server Error: Gateway Time-out for url: https://api.dynabench.org/models/upload/s3
You can inspect the prepared model submission locally at .dynalab_submissions/Jul-28-2021-15-17-21-flores-ft-v3.tar.gz

This submission is 263 (https://dynabench.org/models/263) and the status is failed (The model could not be deployed.)

I have checked the related issues, but none of them solves my problem. My code is the latest version. Could you check it for me?

Thank you in advance.

Improved Reporting of Model Upload Status

The model upload process is pretty straightforward and user-friendly. That said, some feedback in addition to the status would be helpful. For example:

  1. A "publishable" status indicating the status of all the steps remaining for the model to appear on the leaderboard e.g. fairness testing, robustness testing, etc
  2. A per-dataset status (especially as the number of datasets scale, for example, model https://dynabench.org/models/109 took approx. 16hrs for all the non-leaderboard dataset evaluations to come in). This could include statuses such as "Queued", "Running", "Complete", "Failed", etc

tar --exclude does not work as expected for multiple exclude files

Noticed this issue when I was testing upload - when there are multiple files to exclude, the current code doesn't actually exclude anything https://github.com/fairinternal/dynalab/blob/master/dynalab_cli/upload.py#L36

(Also functionally we should exclude all other model folders + tmp folder of this model).

Possible solution:
create exclude.txt and use --exclude-from as suggested here https://unix.stackexchange.com/questions/419394/want-to-exclude-multiple-folders-and-files-when-extracting-using-tar
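A sketch of that fix (names are illustrative; GNU tar is assumed for --exclude-from):

import subprocess
import tempfile

exclude_paths = ["checkpoints_folder", "config/irrelevant_config.txt"]

# Write one exclude pattern per line, then hand the file to tar.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(exclude_paths) + "\n")
    exclude_file = f.name

subprocess.run(
    ["tar", "--exclude-from", exclude_file, "-czf", "model.tar.gz", "."],
    check=True,
)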

504 Server Error: Gateway Time-out

from #91
@TristanThrush can you please look into the timeout issue? cc @douwekiela

Original error log:

/miniconda3/envs/dynalab/bin/dynalab-cli upload -n t1

Config file validated
Tarballing the project directory...
Uploading files to S3. For large submissions, the progress bar may hang a while even after uploading reaches 100%. Please do not kill it...
Uploading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.22G/5.22G [09:39<00:00, 9.67MB/s]
Failed to submit model due to: 504 Server Error: Gateway Time-out for url: https://api.dynabench.org/models/upload/s3
You can inspect the prepared model submission locally at .dynalab_submissions/Jun-29-2021-11-27-13-t1.tar.gz

But I find that my model was actually submitted successfully despite this error, and I can find it on Dynabench. Can this error be ignored?

Large Size Model > 2GB

When I uploaded a large model with the command "dynalab-cli upload -n t1", I got the error below:

Config file validated
Tarballing the project directory...
Uploading file to S3...
Traceback (most recent call last):
File "/miniconda3/envs/amlt8/bin/dynalab-cli", line 33, in
sys.exit(load_entry_point('dynalab', 'console_scripts', 'dynalab-cli')())
File "/dynalab/dynalab_cli/main.py", line 35, in main
command_mapargs.option.run_command()
File "dynalab/dynalab_cli/upload.py", line 75, in run_command
url, files=files, data=data, headers=AccessToken().get_headers()
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "miniconda3/envs/amlt8/lib/python3.7/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "miniconda3/envs/amlt8/lib/python3.7/http/client.py", line 1277, in request
self._send_request(method, url, body, headers, encode_chunked)
File "miniconda3/envs/amlt8/lib/python3.7/http/client.py", line 1323, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "miniconda3/envs/amlt8/lib/python3.7/http/client.py", line 1272, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "miniconda3/envs/amlt8/lib/python3.7/http/client.py", line 1071, in _send_output
self.send(chunk)
File "miniconda3/envs/amlt8/lib/python3.7/http/client.py", line 993, in send
self.sock.sendall(data)
File "miniconda3/envs/amlt8/lib/python3.7/ssl.py", line 1034, in sendall
v = self.send(byte_view[count:])
File "miniconda3/envs/amlt8/lib/python3.7/ssl.py", line 1003, in send
return self._sslobj.write(data)
OverflowError: string longer than 2147483647 bytes

Better management of submitted files

Separated from #83. Specifically

"Being able to inspect the submitted file is extremely useful, but I don't think its presence should affect re-submission. If so, we should have a check for the .tar.gz file and ask the user if they would like to remove it and proceed or cancel and review. It's almost worse in the case that the model size is too small to be caught by the OverflowError because then we could potentially be re-uploading old model submissions, right?

We could also consider keeping them in a separate directory (e.g. previous_submissions) inside root and renamed to something like yyyymmdd-xxx.tar.gz and excluding these from tar to prevent the error occurring, and then informing users that they currently have X previous submission files that they might want to review."

[4/N] Dynalab-cli test-local

  • Check the setup_config includes all required keys and values. Check all the files exist.
  • Check that the handler file and all extra files are under the same root directory (a requirement from the torchserve archiver - is this sufficient? Note we don't use a model file here).
  • Check validity of file size and ask to remove large files.
  • Check that handler inherits from the correct task base handler.
  • Run preliminary checks to see if this looks like it might work (basically like our current testhandler.py). Call the testhandler of the specific task with example data to check that it’s expecting and returning the right things. This should print a message like “local test passed” or “something went wrong”.

Make local test support multiple test examples by default

This will change the data from a single json to a list of jsons, and change the functions in BaseTaskIO to run on multiple datasets.

As a warning, if any dataset uses character sets beyond UTF-8, it should provide such an example in the local test to avoid unexpected failures once deployed.

The status of the model is "takendown"

Hi,

I am a participant of the Flores task.

I passed the local and integrated tests following the model submission workflow on GitHub and submitted my model. Everything went smoothly, and I received a notification email, which said "Your model track2 has been successfully deployed. You can find and publish the model at https://dynabench.org/models/105."

But when I tried to check my results on the website, the status of our model was "takendown".

takendown

The model could not be evaluated on all of the datasets. The model could have bugs.

Meanwhile, the bottom of the page displays "No data available, the model is still evaluating."
I don't know what the problem is and I don't have any more details. What should I do to see my results?

The same model failed today, but worked before

Hi,

The submission system is much more informative than before, since it returns the error by email. However, the messages are not detailed enough, which still leaves me confused.

I got "Error in docker build" for my submission https://dynabench.org/models/268 and "Exception in fetching model folder and loading config" for my submission https://dynabench.org/models/267. However, both models worked before, e.g. https://dynabench.org/models/244. I didn't make any modification except replacing model.pt with a newly generated one. Could you check what's wrong with my submissions?

By the way, is it possible not to count failed submissions toward the limit? Otherwise, a limit of 3 submissions is not enough when errors happen, even though the model works on our local machines.

Submission shows "Takendown" status

Hi!

We're submitting for the FLORES Small Track 2 and after configuring the handler to work for our model, both the local and integrated tests passed. Model upload finished without any problems as well, but when the model finished processing on dynabench, it immediately gave us a "takendown" status.

Here are some details on our submission:

  • Submission name: wmt21-srph
  • Model number: 236
  • System used for submission: AWS p2.xlarge
  • I also checked out the other issue that had a "takendown" tag (#93) and added in our other files in model_files in the setup_config.json file as suggested.

Any ideas why the model failed to evaluate? Thanks in advance!

Non-Leaderboard Datasets Leaderboard

The non-leaderboard datasets functionality is super useful. So useful, in fact, that I think a "non-leaderboard leaderboard" would be quite handy for comparing models on these datasets as well.

It shouldn't be anywhere prominent, but it could perhaps be accessible through the models page.

In addition, sorting the non-leaderboard datasets such that the order is identical for all models of the same task would also help speed up model comparison.

[dynalab + dynabench] API to generate model response signature

Blockers: there is no official my_secret before the model is uploaded and written into our system, so local and cloud usage will be slightly different, but the way the secret is accessed must be the same, since the handler cannot be changed. This needs to go into the user's handler, so we have to make the API as much of a black box as possible.

Proposal:

  • Locally, upon model initialization, add a secret file in the .dynalab folder with a random secret written to it. Automatically add this file to torch-model-archiver as an extra file, but there is no need to upload it. In this way we can locally ensure that the generate-signature function is called properly in the handler.
  • The generate_signature function will be the same on dynalab and dynabench, and will read this secret file from the same path as the handler.
  • Upon upload, officially assign a secret in our database (e.g. the models table) and generate the secret file with the new secret. Similarly, add it as an extra file in the archive so it is put in the same path as the handler.

In this way, the user doesn't need to worry about what's actually in the signature or secret.
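One possible shape for such an API, sketched with an HMAC over the response payload (the function name, secret file name, and hashing choice are hypothetical, not the actual dynalab implementation):

import hashlib
import hmac
import json
import os


def generate_signature(response, secret_file="dynalab_secret.txt"):
    # The secret file is archived next to handler.py, so it can be read
    # relative to the handler's own directory on both dynalab and dynabench.
    secret_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), secret_file)
    with open(secret_path) as f:
        secret = f.read().strip()
    payload = json.dumps(response, sort_keys=True).encode("utf-8")
    return hmac.new(secret.encode("utf-8"), payload, hashlib.sha256).hexdigest()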
