
Comments (5)

jakkarn avatar jakkarn commented on May 24, 2024

I just found out that the state_dict contains the gradients. So they should at least be somewhat reset when loading the global state_dict (with new gradients) into the local nn.

From the PyTorch documentation: "torch.nn.Module.load_state_dict: Loads a model's parameter dictionary using a deserialized state_dict."

To me, that sounds like it loads a copy of the global parameters, meaning that the gradients will be added to the previous global gradients.

from pytorch-a3c.

Bear-kai avatar Bear-kai commented on May 24, 2024

I have a similar question about the gradient.

Actually, after lnet.load_state_dict(gnet.state_dict()) is executed, all the parameters in both lnet and gnet are shared. That is to say, opt.zero_grad() will set the gradients in both lnet and gnet to zero! And loss.backward() will make lnet and gnet have the same gradients! So after the 1st iteration, gp._grad = lp.grad is useless because they are already the same! I found another implementation here involving an if-return check (I guess it corresponds to my claim that the grad assignment is useless after the 1st iteration).

# copied from the continuous A3C example; consider the case after the 1st iteration
opt.zero_grad()         # zeros the gradients in both lnet and gnet
loss.backward()         # parameters in both lnet and gnet end up with the same gradients
for lp, gp in zip(lnet.parameters(), gnet.parameters()):     # so this loop is useless
    # if gp.grad is not None:
    #     return                           # this "if-return" check is copied from the link above
    gp._grad = lp.grad
opt.step()                                 # update gnet parameters (parameters in lnet do not change!)
lnet.load_state_dict(gnet.state_dict())    # update lnet parameters

This is confusing to me and might be a (serious) bug. What if worker A is updating gnet via opt.step() while worker B clears/modifies the gradients via opt.zero_grad()/loss.backward()? However, the code just works (see the episode reward curve and the visualization)!

BTW, the state_dict does not contain any gradient info! It is an OrderedDict of the parameters' weights and biases.
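
A quick check with a toy nn.Linear (a minimal sketch, not the actual A3C nets) illustrates that gradients live on the Parameter objects, not in the state_dict:

import torch
import torch.nn as nn

net = nn.Linear(2, 1)                      # toy stand-in for lnet/gnet
net(torch.ones(1, 2)).sum().backward()     # populate .grad on the parameters

print(list(net.state_dict().keys()))       # ['weight', 'bias'] -- parameters/buffers only
print(net.weight.grad is not None)         # True: the gradient lives on the Parameter itself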

from pytorch-a3c.

MorvanZhou avatar MorvanZhou commented on May 24, 2024

The lnet.load_state_dict() function is shown below:

def load_state_dict(self, state_dict):
    # deepcopy, to be consistent with module API
    state_dict = deepcopy(state_dict)
    # Validate the state_dict
    groups = self.param_groups
    saved_groups = state_dict['param_groups']

it uses deepcopy to isolate the parameters from gnet, so there is no memory sharing here.
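
A minimal sketch with toy nn.Linear modules (an illustration, not the repo's code) confirms that nn.Module.load_state_dict copies values rather than sharing storage:

import torch
import torch.nn as nn

gnet = nn.Linear(2, 1)
lnet = nn.Linear(2, 1)
lnet.load_state_dict(gnet.state_dict())    # copies values into lnet's own tensors

print(torch.equal(lnet.weight, gnet.weight))              # True: same values
print(lnet.weight.data_ptr() == gnet.weight.data_ptr())   # False: separate storage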

So after the 1st iteration, gp._grad = lp.grad is useless because they are already the same!

Once execution has moved on to another worker, the gp._grad = lp.grad assignment is necessary to switch gnet's gradients over to that worker's grad.
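
A minimal sketch of that point, with toy nn.Linear nets standing in for gnet and two workers' local nets (an illustration of the intent, not the repo's code):

import torch
import torch.nn as nn

gnet = nn.Linear(2, 1)
lnet_a = nn.Linear(2, 1)                   # worker A's local net
lnet_b = nn.Linear(2, 1)                   # worker B's local net

# worker A pushes its gradients: gnet's .grad now points at lnet_a's grad tensors
lnet_a(torch.ones(1, 2)).sum().backward()
for lp, gp in zip(lnet_a.parameters(), gnet.parameters()):
    gp._grad = lp.grad

# when worker B pushes later, the assignment has to be repeated,
# otherwise gnet would still point at worker A's grad tensors
lnet_b(torch.ones(1, 2)).sum().backward()
for lp, gp in zip(lnet_b.parameters(), gnet.parameters()):
    gp._grad = lp.grad

print(gnet.weight.grad is lnet_b.weight.grad)   # True: gnet now uses worker B's gradients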

from pytorch-a3c.

Bear-kai avatar Bear-kai commented on May 24, 2024

Thanks for your reply! @MorvanZhou

  1. Yes, load_state_dict() does not make the parameters shared. I found that out and have struck out the sentence above.
  2. It would be very kind of you to explain whether there might be conflicts between workers without locking the shared model.

What if worker A is updating gnet by opt.step and worker B just clears/modifies the gradients by opt.zero_grad()/loss.backward() ?

  1. Note that I made the following comments by stepping through the code with a debugger.
# copied from the continuous A3C example; consider the case after the 1st iteration
opt.zero_grad()         # zeros the gradients in both lnet and gnet
loss.backward()         # parameters in both lnet and gnet end up with the same gradients
for lp, gp in zip(lnet.parameters(), gnet.parameters()):     # so is this loop useless after the 1st iteration??
    # if gp.grad is not None:
    #     return                           # this "if-return" check is copied from the link above
    gp._grad = lp.grad
opt.step()                                 # update gnet parameters (parameters in lnet do not change!)
lnet.load_state_dict(gnet.state_dict())    # update lnet parameters

from pytorch-a3c.

MorvanZhou avatar MorvanZhou commented on May 24, 2024

It would be very kind of you to explain whether there might be conflicts between workers without locking the shared model.

A lock could be applied in this case, but take a look at HOGWILD! for an analysis of backprop without locking.
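
For reference, a minimal HOGWILD!-style sketch in PyTorch (patterned after the lock-free idea; the toy model, random data and hyperparameters are assumptions, and this is not the A3C code from this repo):

import torch
import torch.nn as nn
import torch.multiprocessing as mp

def worker(gnet):
    # each worker builds its own optimizer over the *shared* parameters
    opt = torch.optim.SGD(gnet.parameters(), lr=0.01)
    for _ in range(100):
        x, y = torch.randn(8, 2), torch.randn(8, 1)    # toy data (assumption)
        loss = ((gnet(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()                                     # lock-free update on the shared parameters

if __name__ == "__main__":
    gnet = nn.Linear(2, 1)
    gnet.share_memory()                                # put the parameters in shared memory
    procs = [mp.Process(target=worker, args=(gnet,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()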

from pytorch-a3c.
