
nfvdeep's People

Contributors

stwerner97


nfvdeep's Issues

How to use the experiment to make the figures?

Hello sir, it is my honor to study your project. I can follow the rough process, but I do not know how to use the experiment results to produce figures like these. Could you give me instructions on the experimental outputs (the logs and placements in script.py, and the Ray results in tune.py)? I am looking forward to your reply.

An error I cannot understand

I'm sorry to bother you, sir. Your project is very exciting, and I have learned a lot from working with it. The problem is that script.py trains normally, but tune.py runs into an error. I tried reinstalling Ray, but the error still appears. Can you help me with this?

Question to understand resulting placement

Hello, thank you for providing this great reproduction of the paper's code! After I run script.py, the generated placement.txt shows that the VNFs of the SFCs are placed only on node 6. I don't know what the problem is. Can you help me?

Need some help

Hello, thank you for this wonderful project. It is exactly what I was looking for for my graduate project.
I have a few questions:
- In the update function in network.py, when SFCs exceed their TTL, is the environment updated accordingly? For example, do those SFCs return their CPU, memory, and bandwidth resources to the nodes they were embedded on?
- About the results: I understand an episode to be the sequence of states from beginning to end (from the first SFC that joins the system until the last SFC leaves it), and in the following episodes the agent trains further based on the experience gained from the previous episodes. Do I understand that correctly?
Thanks in advance!
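For reference, the release-on-expiry behavior asked about above can be sketched in plain Python. This is an illustrative assumption about how such bookkeeping could work, not NFVdeep's actual data structures or API (the names `release_expired`, `placements`, etc. are hypothetical):

```python
def release_expired(sfcs, nodes, now):
    """Hypothetical sketch: when an SFC's TTL has expired, hand the CPU,
    memory, and bandwidth its VNFs occupied back to the substrate nodes,
    then drop the SFC from the active list."""
    expired = [s for s in sfcs if now >= s["arrival"] + s["ttl"]]
    for sfc in expired:
        for node_id, demand in sfc["placements"]:
            for resource in ("cpu", "memory", "bandwidth"):
                nodes[node_id][resource] += demand[resource]
        sfcs.remove(sfc)
    return expired
```

For example, a node with 2 CPU / 4 memory / 10 bandwidth remaining, holding one SFC that occupies 1 / 2 / 5, is restored to 3 / 6 / 15 once that SFC's TTL elapses.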

Ask for help regarding the value of reward

First of all, thank you for the great work you've done! The code of this reproduction is very clear.
Here's my problem.
I added a best_model_save_path parameter to the EvalCallback call in script.py, so I can retrieve the best model after training. But when I evaluate that model with evaluate_policy from stable_baselines3.common.evaluation, I get really confused: the reward from this evaluation is negative, far from the episode_reward in the logs, and even worse than the first evaluation result during training. Why is that? According to the stable-baselines3 docs, EvalCallback also uses evaluate_policy to compute its reward values, so the results should be close.

In my test code, I load the env just like script.py does, and here's my evaluation process.

import stable_baselines3
from stable_baselines3.common.evaluation import evaluate_policy

Agent = getattr(stable_baselines3, args.agent)
model = Agent.load("./testing/evaluation/model/best_model")
print(evaluate_policy(model, env))

Actually, I found this because I tried to tune the hyperparameters with Optuna, and the value Optuna reports is negative, while the episode reward is positive and fairly large. I am really confused by this result.
Thanks again!

Error running tune.py

Hi
I ran the first part of NFVDeep successfully (Experiments/script.py).
But when executing the second part (Hyperparameter Optimization/tune.py), I encounter the following error:

File "C:\Users\98913\Documents\Python\NFVdeep-main\tune.py", line 158, in
    search_alg=AxSearch(ax_client),
File "C:\Simulate\lib\site-packages\ray\tune\suggest\ax.py", line 173, in __init__
    self._setup_experiment()
File "C:\Simulate\lib\site-packages\ray\tune\suggest\ax.py", line 197, in _setup_experiment
    raise ValueError(
ValueError: Please specify the mode argument when initializing the AxSearch object or pass it to tune.run().

I spent many hours trying to fix it but did not succeed. Please help me fix this error.
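For reference, the error message itself points at the fix: AxSearch is missing an optimization direction. A minimal sketch of the configuration, assuming the metric name "episode_reward" (an assumption — use whatever name tune.py actually reports):

```python
from ray import tune
from ray.tune.suggest.ax import AxSearch

# Pass the metric name and direction explicitly when constructing AxSearch.
# "episode_reward" is an assumed metric name, not confirmed from the repo.
search_alg = AxSearch(ax_client=ax_client, metric="episode_reward", mode="max")

# Alternatively, as the error message suggests, pass them to tune.run itself:
# tune.run(trainable, search_alg=AxSearch(ax_client=ax_client),
#          metric="episode_reward", mode="max")
```

Which form applies depends on the Ray version installed; recent releases accept metric/mode in either place.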

I need some help to learn this project

Hello sir, it is my honor to study your project.
When I run the program, in the _on_step() function of EvalLogCallback, this line of code

eval_envs: List[StatsWrapper] = self.locals["callback"].eval_env.envs

fails because there is no "callback" key. I want to know whether the monitor file is missing some content.
I am looking forward to your reply.

Ask for help

Hello! This is a great program and I downloaded it, but it is missing "request.json". Could you share the file with me? My email address is [email protected]. Thank you very much for your consideration. I look forward to hearing from you.

Yours sincerely
