
Comments (4)

Root970103 commented on June 19, 2024

Specifically, I want to know how the performance of the algorithm can be evaluated. The performance of a reinforcement learning algorithm can be represented by the reward obtained from the environment; what is the analogue here? Taking Leduc poker as an example, how can we prove that the algorithm is effective? And after model training is completed, what do we expect to save, the RL model or the policy? I am not completely familiar with this. I hope you can give some advice, thank you!

lanctot commented on June 19, 2024

Hi @Root970103,

If you're using PSRO or some form of fictitious play, the thing you save is either the average strategy, or the entire set of policies coupled with the meta-strategy. The latter can be turned into one policy using the policy_aggregator (if the game is small enough).

A good place to start is this example: https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/psro_v2_example.py

Hope this helps, but please don't hesitate to ask more questions if it's not clear.
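For concreteness, here is a rough, untested sketch of that aggregation step, loosely following what the example script does; `policies` and `meta_probabilities` are placeholders for whatever your PSRO run actually produces:

```python
import pyspiel
from open_spiel.python.algorithms import exploitability, policy_aggregator

game = pyspiel.load_game("leduc_poker")

# Placeholders: the per-player lists of policies from PSRO, and the
# meta-strategy weights over them, e.g. (assuming a psro_v2 solver):
# policies = psro_solver.get_policies()
# meta_probabilities = psro_solver.get_meta_strategies()

aggregator = policy_aggregator.PolicyAggregator(game)
aggr_policy = aggregator.aggregate(
    range(game.num_players()), policies, meta_probabilities)

# NashConv of the aggregated policy: 0 at a Nash equilibrium, and smaller
# values mean the joint policy is harder to exploit.
print("NashConv:", exploitability.nash_conv(game, aggr_policy))
```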

Root970103 commented on June 19, 2024

Thank you for your reply! I have run this example script and observed the changes in nash_conv.

I also wonder if I can use the trained model (or policy) against other algorithms in the Leduc poker environment. For example, if I want to test the trained model against CFR, should the entire set of policies or the aggregated policy be used?

In addition, in an adversarial scenario, is it appropriate to use the Q-value to evaluate the algorithms?

It's very kind of you to give this advice.

lanctot commented on June 19, 2024

I also wonder if I can use the trained model (or policy) against other algorithms in the Leduc poker environment. For example, if I want to test the trained model against CFR, should the entire set of policies or the aggregated policy be used?

Yes, you can extract the policy (that is what the NashConv computation needs) and you can simulate the policy against CFR's policy.
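Roughly, a head-to-head simulation could look like the sketch below (untested; it assumes `aggr_policy` is an aggregated joint policy like the one above, and the episode loop is a generic hand-rolled simulation rather than a specific OpenSpiel helper):

```python
import numpy as np
import pyspiel
from open_spiel.python.algorithms import cfr

game = pyspiel.load_game("leduc_poker")

# Train CFR and keep its average policy (the iterate that converges
# towards equilibrium).
cfr_solver = cfr.CFRSolver(game)
for _ in range(1000):
  cfr_solver.evaluate_and_update_policy()
cfr_policy = cfr_solver.average_policy()

def play_episode(game, policies, rng):
  """Plays one episode, sampling each player's action from their policy."""
  state = game.new_initial_state()
  while not state.is_terminal():
    if state.is_chance_node():
      actions, probs = zip(*state.chance_outcomes())
    else:
      action_probs = policies[state.current_player()].action_probabilities(state)
      actions, probs = zip(*action_probs.items())
    state.apply_action(int(rng.choice(actions, p=probs)))
  return state.returns()

# Average return of the PSRO policy seated as player 0 against CFR.
rng = np.random.default_rng(0)
returns = [play_episode(game, [aggr_policy, cfr_policy], rng)
           for _ in range(10000)]
print("Mean return for the PSRO policy:", np.mean([r[0] for r in returns]))
```

Since positions matter in Leduc, you would also want to repeat this with the seats swapped and average the two.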

In addition, in an adversarial scenario, is it appropriate to use the Q-value to evaluate the algorithms?

Q-values are just estimates of the value of a state and action. You can turn them into a policy by choosing argmax_a Q(s, a), but the resulting policy will be deterministic. So if the environment requires any kind of mixing, you would lose that by taking the argmax over the Q-values.
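To illustrate with made-up numbers (the Q-values below are hypothetical, and the softmax is just one heuristic for keeping the policy stochastic, not something the Q-values by themselves justify):

```python
import numpy as np

# Hypothetical Q-estimates at a single Leduc information state.
q_values = {"fold": -0.30, "call": 0.10, "raise": 0.12}
actions = list(q_values)
q = np.array([q_values[a] for a in actions])

# Greedy policy: always "raise". Deterministic, so any mixing the
# equilibrium requires (e.g. occasional bluffing) is lost.
greedy_action = actions[int(np.argmax(q))]

# A softmax keeps the policy stochastic, but the mixture depends on the
# temperature and is not an equilibrium mixing.
temperature = 0.1
probs = np.exp(q / temperature)
probs /= probs.sum()
print(greedy_action, dict(zip(actions, np.round(probs, 3))))
```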
