
Gin Config

Authors: Dan Holtmann-Rice, Sergio Guadarrama, Nathan Silberman
Contributors: Oscar Ramirez, Marek Fiser

Gin provides a lightweight configuration framework for Python, based on dependency injection. Functions or classes can be decorated with @gin.configurable, allowing default parameter values to be supplied from a config file (or passed via the command line) using a simple but powerful syntax. This removes the need to define and maintain configuration objects (e.g. protos), or write boilerplate parameter plumbing and factory code, while often dramatically expanding a project's flexibility and configurability.

Gin is particularly well suited for machine learning experiments (e.g. using TensorFlow), which tend to have many parameters, often nested in complex ways.

This is not an official Google product.

Table of Contents

[TOC]

Basic usage

This section provides a high-level overview of Gin's main features, ordered roughly from "basic" to "advanced". More details on these and other features can be found in the user guide.

1. Setup

Install Gin with pip:

pip install gin-config

Install Gin from source:

git clone https://github.com/google/gin-config
cd gin-config
python setup.py install

Import Gin (without TensorFlow functionality):

import gin

Import additional TensorFlow-specific functionality via the gin.tf module:

import gin.tf

Import additional PyTorch-specific functionality via the gin.torch module:

import gin.torch

2. Configuring default values with Gin (@gin.configurable and "bindings")

At its most basic, Gin can be seen as a way of providing or changing default values for function or constructor parameters. To make a function's parameters "configurable", Gin provides the gin.configurable decorator:

@gin.configurable
def dnn(inputs,
        num_outputs,
        layer_sizes=(512, 512),
        activation_fn=tf.nn.relu):
  ...

This decorator registers the dnn function with Gin, and automatically makes all of its parameters configurable. To set ("bind") a value for the layer_sizes parameter above within a ".gin" configuration file:

# Inside "config.gin"
dnn.layer_sizes = (1024, 512, 128)

Bindings have syntax function_name.parameter_name = value. All Python literal values are supported as value (numbers, strings, lists, tuples, dicts). Once the config file has been parsed by Gin, any future calls to dnn will use the Gin-specified value for layer_sizes (unless a value is explicitly provided by the caller).
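
For example (a minimal sketch; inputs is a placeholder tensor), once the config has been parsed, the binding supplies layer_sizes unless the caller overrides it:

import gin

gin.parse_config_file('config.gin')

dnn(inputs, num_outputs=10)                     # layer_sizes == (1024, 512, 128)
dnn(inputs, num_outputs=10, layer_sizes=(64,))  # explicit arguments still win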

Classes can also be marked as configurable, in which case the configuration applies to constructor parameters:

@gin.configurable
class DNN(object):
  # Constructor parameters become configurable.
  def __init__(self,
               num_outputs,
               layer_sizes=(512, 512),
               activation_fn=tf.nn.relu):
    ...

  def __call__(self, inputs):
    ...

Within a config file, the class name is used when binding values to constructor parameters:

# Inside "config.gin"
DNN.layer_sizes = (1024, 512, 128)

Finally, after defining or importing all configurable classes or functions, parse your config file to bind your configurations (to also permit multiple config files and command line overrides, see gin.parse_config_files_and_bindings):

gin.parse_config_file('config.gin')

Note that no other changes are required to the Python code, beyond adding the gin.configurable decorator and a call to one of Gin's parsing functions.
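
For instance (a sketch; the file names are placeholders), multiple config files and command-line-style overrides can be combined in a single call:

gin.parse_config_files_and_bindings(
    config_files=['base.gin', 'experiment.gin'],
    bindings=['dnn.layer_sizes = (256, 256)'])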

3. Passing functions, classes, and instances ("configurable references")

In addition to accepting Python literal values, Gin also supports passing other Gin-configurable functions or classes. In the example above, we might want to change the activation_fn parameter. If we have registered, say, tf.nn.tanh with Gin (see registering external functions), we can pass it to activation_fn by referring to it as @tanh (or @tf.nn.tanh):

# Inside "config.gin"
dnn.activation_fn = @tf.nn.tanh

Gin refers to @name constructs as configurable references. Configurable references work for classes as well:

def train_fn(..., optimizer_cls, learning_rate):
  optimizer = optimizer_cls(learning_rate)
  ...

Then, within a config file:

# Inside "config.gin"
train_fn.optimizer_cls = @tf.train.GradientDescentOptimizer
...
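
Both references above assume the TensorFlow symbols have been registered with Gin; importing gin.tf.external_configurables registers many common ones. A hedged sketch of manual registration (the module argument to gin.external_configurable is an assumption here; it sets the dotted prefix usable in config files):

import gin
import tensorflow as tf

# Register existing library symbols without decorating their definitions.
gin.external_configurable(tf.nn.tanh, module='tf.nn')
gin.external_configurable(tf.train.GradientDescentOptimizer, module='tf.train')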

Sometimes it is necessary to pass the result of calling a specific function or class constructor. Gin supports "evaluating" configurable references via the @name() syntax. For example, say we wanted to use the class form of DNN from above (which implements __call__ to "behave" like a function) in the following Python code:

def build_model(inputs, network_fn, ...):
  logits = network_fn(inputs)
  ...

We could pass an instance of the DNN class to the network_fn parameter:

# Inside "config.gin"
build_model.network_fn = @DNN()

To use evaluated references, all of the referenced function or class's parameters must be provided via Gin. The call to the function or constructor takes place just before the call to the function to which the result is passed; in the above example, this would be just before build_model is called.

The result is not cached, so a new DNN instance will be constructed for each call to build_model.

4. Configuring the same function in different ways ("scopes")

What happens if we want to configure the same function in different ways? For instance, imagine we're building a GAN, where we might have a "generator" network and a "discriminator" network. We'd like to use the dnn function above to construct both, but with different parameters:

def build_model(inputs, generator_network_fn, discriminator_network_fn, ...):
  ...

To handle this case, Gin provides "scopes", which name a specific set of bindings for a given function or class. In both bindings and references, the "scope name" precedes the function name, separated by a "/" (i.e., scope_name/function_name):

# Inside "config.gin"
build_model.generator_network_fn = @generator/dnn
build_model.discriminator_network_fn = @discriminator/dnn

generator/dnn.layer_sizes = (128, 256)
generator/dnn.num_outputs = 784

discriminator/dnn.layer_sizes = (512, 256)
discriminator/dnn.num_outputs = 1

dnn.activation_fn = @tf.nn.tanh

In this example, the generator network has increasing layer widths and 784 outputs, while the discriminator network has decreasing layer widths and 1 output.

Any parameters set on the "root" (unscoped) function name are inherited by scoped variants (unless explicitly overridden), so in the above example both the generator and the discriminator use the tf.nn.tanh activation function.
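
Scopes can also be entered from Python code via gin.config_scope (a minimal sketch; noise is a placeholder input):

# Inside this block, the generator/dnn bindings apply:
# layer_sizes == (128, 256) and num_outputs == 784.
with gin.config_scope('generator'):
  fake_images = dnn(noise)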

5. Full hierarchical configuration

The greatest degree of flexibility and configurability in a project is achieved by writing small modular functions and "wiring them up" hierarchically via (possibly scoped) references. For example, this code sketches a generic training setup that could be used with the tf.estimator.Estimator API:

@gin.configurable
def build_model_fn(network_fn, loss_fn, optimize_loss_fn):
  def model_fn(features, labels):
    logits = network_fn(features)
    loss = loss_fn(labels, logits)
    train_op = optimize_loss_fn(loss)
    ...
  return model_fn

@gin.configurable
def optimize_loss(loss, optimizer_cls, learning_rate):
  optimizer = optimizer_cls(learning_rate)
  return optimizer.minimize(loss)

@gin.configurable
def input_fn(file_pattern, batch_size, ...):
  ...

@gin.configurable
def run_training(train_input_fn, eval_input_fn, estimator, steps=1000):
  estimator.train(train_input_fn, steps=steps)
  estimator.evaluate(eval_input_fn)
  ...

In conjunction with suitable external configurables to register TensorFlow functions/classes (e.g., Estimator and various optimizers), this could be configured as follows:

# Inside "config.gin"
run_training.train_input_fn = @train/input_fn
run_training.eval_input_fn = @eval/input_fn

input_fn.batch_size = 64  # Shared by both train and eval...
train/input_fn.file_pattern = ...
eval/input_fn.file_pattern = ...


run_training.estimator = @tf.estimator.Estimator()
tf.estimator.Estimator.model_fn = @build_model_fn()

build_model_fn.network_fn = @dnn
dnn.layer_sizes = (1024, 512, 256)

build_model_fn.loss_fn = @tf.losses.sparse_softmax_cross_entropy

build_model_fn.optimize_loss_fn = @optimize_loss

optimize_loss.optimizer_cls = @tf.train.MomentumOptimizer
MomentumOptimizer.momentum = 0.9

optimize_loss.learning_rate = 0.01

Note that it is straightforward to switch between different network functions, optimizers, datasets, loss functions, etc. via different config files.
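
For example, an alternative config file could swap in a different optimizer with no Python changes (an illustrative sketch; the file name is hypothetical):

# Inside "adam.gin", used in place of the optimizer bindings above
optimize_loss.optimizer_cls = @tf.train.AdamOptimizer
optimize_loss.learning_rate = 0.001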

6. Additional features

Additional features, described in more detail in the user guide, include macros, constants, and importing modules from within config files.

Best practices

At a high level, we recommend using the minimal feature set required to achieve your project's desired degree of configurability. Many projects may only require the features outlined in sections 2 or 3 above. Extreme configurability comes at some cost to understandability, and the tradeoff should be carefully evaluated for a given project.

Gin is still in alpha development and some corner-case behaviors may be changed in backwards-incompatible ways. We recommend the following best practices:

  • Minimize use of evaluated configurable references (@name()), especially when combined with macros (where the fact that the value is not cached may be surprising to new users).
  • Avoid nesting of scopes (i.e., scope1/scope2/function_name). While supported there is some ongoing debate around ordering and behavior.
  • When passing an unscoped reference (@name) as a parameter of a scoped function (some_scope/fn.param), the unscoped reference gets called in the scope of the function it is passed to... but don't rely on this behavior.
  • Wherever possible, prefer to use a function or class's name as its configurable name, instead of overriding it. In case of naming collisions, use module names (which are encouraged to be renamed to match common usage) for disambiguation.
  • In fact, to aid readability for complex config files, we gently suggest always including module names to help make it easier to find corresponding definitions in Python code.
  • When doing "full hierarchical configuration", structure the code to minimize the number of "top-level" functions that are configured without themselves being passed as parameters. In other words, the configuration tree should have only one root.

In short, use Gin responsibly :)

Syntax quick reference

A quick reference for syntax unique to Gin (which otherwise supports non-control-flow Python syntax, including literal values and line continuations). Note that where function and class names are used, these may include a dotted module name prefix (some.module.function_name).

| Syntax | Description |
| --- | --- |
| `@gin.configurable` | Decorator in Python code that registers a function or class with Gin, wrapping/replacing it with a "configurable" version that respects Gin parameter overrides. A function or class annotated with `@gin.configurable` will have its parameters overridden by any provided configs even when called directly from other Python code. |
| `@gin.register` | Decorator in Python code that only registers a function or class with Gin, but does *not* replace it with its "configurable" version. Functions or classes annotated with `@gin.register` will *not* have their parameters overridden by Gin configs when called directly from other Python code. However, any references in config strings or files to these functions (`@some_name` syntax, see below) will apply any provided configuration. |
| `name.param = value` | Basic syntax of a Gin binding. Once this is parsed, when the function or class named `name` is called, it will receive `value` as the value for `param`, unless a value is explicitly supplied by the caller. Any Python literal may be supplied as `value`. |
| `@some_name` | A reference to another function or class named `some_name`. This may be given as the value of a binding, to supply function- or class-valued parameters. |
| `@some_name()` | An evaluated reference. Instead of supplying the function or class directly, the result of calling `some_name` is passed instead. Note that the result is not cached; it is recomputed each time it is required. |
| `scope/name.param = value` | A scoped binding. The binding is only active when `name` is called within scope `scope`. |
| `@scope/some_name` | A scoped reference. When this is called, the call will be within scope `scope`, applying any relevant scoped bindings. |
| `MACRO_NAME = value` | A macro. This provides a shorthand name for the expression on the right-hand side. |
| `%MACRO_NAME` | A reference to the macro `MACRO_NAME`. This has the effect of textually replacing `%MACRO_NAME` with whatever expression it was associated with. Note in particular that the results of evaluated references are not cached. |
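
For instance, a macro can factor out a value shared by several bindings (an illustrative snippet; the names are placeholders):

# Inside "config.gin"
LEARNING_RATE = 0.001                         # define a macro
optimize_loss.learning_rate = %LEARNING_RATE  # reference it
finetune/optimize_loss.learning_rate = %LEARNING_RATE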


gin-config's Issues

[bug] gin.REQUIRED ignored when default value of a keyword arg

When a function decorated with @gin.configurable is passed gin.REQUIRED, not as an explicit value but as the default value of a keyword argument, it does not raise a RuntimeError, but happily uses the string value '__gin_required__'.

For instance, using a slight modification of an example on https://github.com/google/gin-config/blob/master/gin/gin_intro.ipynb:

gin.clear_config()

@gin.configurable
def say_hello(name=gin.REQUIRED):
  print("Hello %s!" % name)

try:
  say_hello()
except RuntimeError as e:
  print('Error message (gin.REQUIRED):', e)

Actual output:

Hello __gin_required__!

Expected output:

Error message (gin.REQUIRED): Required bindings for `say_hello` not provided in config: ['name']

Note that explicitly passing gin.REQUIRED works as expected:

try:
  say_hello(gin.REQUIRED)
except RuntimeError as e:
  print('Error message (gin.REQUIRED):', e)

produces the expected output:

Error message (gin.REQUIRED): Required bindings for `say_hello` not provided in config: ['name']

I understand that in such a simple case, I could just declare def say_hello(name) and let Python complain about the missing argument. However, it would be really useful to be able to essentially mark a keyword argument as required through gin. Moreover, it is really surprising to have different behaviors when the argument is passed explicitly vs. implicitly.

This happened with the latest development version.

Overwriting a value back to gin config file

My code automatically makes a folder when running an experiment. If I want to resume training, I have to manually specify the checkpoint folder in the gin config file. Is there any way to write this checkpoint path back into the gin config after the experiment runs? Thanks!
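
One possible approach (a sketch, not an official answer; train.checkpoint_dir is a hypothetical binding name): rebind the value at runtime under gin.unlock_config(), then write the config back out for the resumed run:

with gin.unlock_config():
  gin.bind_parameter('train.checkpoint_dir', checkpoint_dir)  # hypothetical binding

with open('resumed_config.gin', 'w') as f:
  f.write(gin.config_str())  # serialize the full parsed config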

Skip Unknown ignoring @gin.register

Hi! I've got a lot of functions decorated with @gin.register in ddsp, and I recently removed a kwarg from one of those functions (delta_delta_time_weight from spectral_ops.SpectralLoss). My operative_config for a pretrained model still has the old kwargs in its config:

.
.
.
# Parameters for SpectralLoss:
# ==============================================================================
SpectralLoss.delta_delta_freq_weight = 0.0
SpectralLoss.delta_delta_time_weight = 0.0
.
.
.

but when I try to load the gin config with the new codebase that doesn't have the kwarg, it throws an error even though I use skip_unknown=True.

Command:

with gin.unlock_config():
  gin.parse_config_file(gin_file, skip_unknown=True)

Error message:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-9-2893e878fb89> in <module>()
     55 # Parse gin config,
     56 with gin.unlock_config():
---> 57   gin.parse_config_file(gin_file, skip_unknown=True)
     58 
     59 # Assumes only one checkpoint in the folder, 'ckpt-[iter]`.

9 frames
/usr/local/lib/python3.6/dist-packages/gin/config.py in __new__(cls, binding_key)
    515     if not _might_have_parameter(configurable_.fn_or_cls, arg_name):
    516       err_str = "Configurable '{}' doesn't have a parameter named '{}'."
--> 517       raise ValueError(err_str.format(selector, arg_name))
    518 
    519     if configurable_.whitelist and arg_name not in configurable_.whitelist:

ValueError: Configurable 'SpectralLoss' doesn't have a parameter named 'delta_delta_freq_weight'.
  In file "/content/pretrained/operative_config-0.gin", line 92
    SpectralLoss.delta_delta_freq_weight = 0.0

As far as I understand, this is the use case of skip_unknown, correct? Is it missing it because SpectralLoss is an object wrapped in @gin.register instead of @gin.configurable? Is there a way to avoid this error without modifying the gin config file itself?

AttributeError: 'ModifiedTensorBoard' object has no attribute '_write_logs'

C:\CarlaProject\PythonAPI\examples>python Adonia5.py
Using TensorFlow backend.
WARNING:tensorflow:From C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:88: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1635: calling BaseResourceVariable.init (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
2020-01-13 08:39:25.890943: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "Adonia5.py", line 278, in train_in_loop
self.model.fit(X,y, verbose=False, batch_size=1)
File "C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 1209, in fit
if self._uses_dynamic_learning_phase():
File "C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 382, in _uses_dynamic_learning_phase
not isinstance(K.learning_phase(),int))
File "C:\Users\Hp\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py", line 76, in symbolic_fn_wrapper
if _SYMBOLIC_SCOPE.value:
AttributeError: '_thread._local' object has no attribute 'value'

0%| | 0/100 [00:00<?, ?episodes/s]ERROR: failed to destroy actor 93 : time-out of 2000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000
ERROR: failed to destroy actor 94 : time-out of 2000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000
Traceback (most recent call last):
File "Adonia5.py", line 379, in
agent.tensorboard.update_stats(reward_avg=average_reward, reward_min=min_reward, reward_max=max_reward, epsilon=epsilon)
File "Adonia5.py", line 102, in update_stats
self._write_logs(stats, self.step)
AttributeError: 'ModifiedTensorBoard' object has no attribute '_write_logs'
What is the cause of such an error during reinforcement learning using Carla simulator?

configured values sometimes ignored

I've been witnessing some inconsistent behaviour with basic configuration - some configs are ignored completely. I wish I could be more specific as to when they are ignored and when they aren't, but I'm at a loss as to any kind of pattern.

Minimal example here

Can't cloudpickle gin-configured class

Running this code...

import cloudpickle
import gin
import pickle


@gin.configurable
def msg_fn(msg):
    return msg


@gin.configurable
class Foo:
    def __init__(self, msg_fn):
        self.msg = msg_fn()


config = """
msg_fn.msg = "Hello World"
Foo.msg_fn = @msg_fn
"""
gin.parse_config(config)

try:
    pkl = pickle.dumps(Foo)
    foo = pickle.loads(pkl)()
    print("Can pickle: {}!".format(foo.msg))
except:
    print("Can't pickle!")

try:
    cpkl = cloudpickle.dumps(Foo)
    foo = cloudpickle.loads(cpkl)()
    print("Can cloudpickle: {}!".format(foo.msg))
except:
    print("Can't cloudpickle!")

...prints:

Can pickle: Hello World!!
Can't cloudpickle!

Cloudpickle raises an error on cloudpickle.dumps(): TypeError: can't pickle _thread.lock objects.

This is a problem, because Ray uses cloudpickle to serialize objects. We can't send gin configurable classes to Ray actors or Ray object store (with ray.put). Is it a bug or should it be done differently? I didn't know where to open this issue (in cloudpickle, ray or here). Can you help us with this?

Evaluate reference with kwargs?

Sometimes it is necessary to pass the result of calling a specific function or class constructor. Gin supports "evaluating" configurable references via the @name() syntax, but it does not support passing kwargs, which is not very flexible.

For example, say we wanted to use the result of calling test_func:

@gin.configurable
def test_func(a, b):
    ...

we would configure it as:

scope/test_func.a='a'
scope/test_func.b='b'
ref=scope/test_func()

Could it be simplified to the following?

ref=scope/test_func(a='a', b='b')

Save all parameters to a file

After loading all gin files and command line parameters via gin.parse_config_files_and_bindings, is it possible to save all loaded parameters to a single gin file? Thank you!
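
A sketch of one way to do this: gin.operative_config_str() serializes the bindings actually used by the configurables called so far, and can be written to a single file (assuming that is the goal):

# After parsing (and after the configured functions have run, so the
# operative set is populated):
with open('combined_config.gin', 'w') as f:
  f.write(gin.operative_config_str())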

gin-provided objects not picklable

Using gin configs with a multiprocessing Process that takes a class as input leads to a pickle error when using the spawn start method. The following minimal example probably explains the problem better:

Code
import multiprocessing as mp
import gin


@gin.configurable('example_process')
class Process(mp.Process):
  def __init__(self, input_class):
      super().__init__()
      self.input_class = input_class
      print("Init Process")

  def run(self):
      print("Hello World!")


@gin.configurable()
class A:
  pass


if __name__ == "__main__":
  mp.set_start_method('spawn')
  gin.parse_config('example_process.input_class = @A')

  process_1 = Process(A)
  process_1.start()
  process_1.join()

  process_2 = Process()
  process_2.start()
  process_2.join()

process_1 runs fine whereas process_2 fails with error
_pickle.PicklingError: Can't pickle <class '__main__.A'>: it's not the same object as __main__.A

Is there a way to solve this problem? I really need to use the spawn method; fork does not work for other reasons.

I am running python 3.7.4 with gin 0.2.1

Thanks and best,
besterma

Python 3.7 throws error "future feature google_type_annotations is not defined"

$ python3 -c 'import sys; print(sys.version); import gin'

3.7.7 (default, Mar 14 2020, 02:39:38) 
[Clang 11.0.0 (clang-1100.0.33.17)]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gin/__init__.py", line 17, in <module>
    from gin.config import add_config_file_search_path
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gin/config.py", line 87
    from __future__ import google_type_annotations
    ^
SyntaxError: future feature google_type_annotations is not defined

can't bind/rebind constant after parsing config

Consider a macro that is referenced in a config but not bound in that config. One may want to specify that constant in the program after parsing the config. For example, consider a configurable function:

@gin.configurable
def func(x):
    return x

and the configuration:

gin.parse_config('func.x = %num')
gin.constant('num', 20)

func() # error

Currently, the code above will cause No values supplied by Gin or caller for arguments: ['value']. On the other hand, the following code snippet works as expected:

gin.constant('num', 20)
gin.parse_config('func.x = %num')

func() # get 20

Note that if we bind the macro in the config and rebind it using gin.constant afterward:

gin.parse_config('''
func.x = %num
num = 10
''')
gin.constant('num', 20)

func() # get 10 instead of 20

It seems that gin.constant doesn't affect the config that is parsed before it. I think this behavior is a bug...?

RFC: (Optionally?) hide gin from tracebacks

Right now wrapping a function with gin adds two lines to every traceback. For projects which use gin heavily, this gets to be...kind of a lot, see example at the bottom of this bug report.

Would you be willing to take a patch which (optionally? always?) hides gin from stack traces?

(I'm no Python expert, but I think it's possible to do this, see e.g. https://stackoverflow.com/questions/23765707/python-2-and-3-compatible-way-of-hiding-a-decorator-from-stacktrace)

Example of highly-decorated stack trace:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/trainer.py", line 133, in <module>
    app.run(main)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/trainer.py", line 129, in main
    trax.train(output_dir=output_dir)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1073, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1050, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/trax.py", line 878, in train
    save_steps=save_steps, has_weights=has_weights)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1073, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1050, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/trax.py", line 540, in __init__
    inputs = inputs(n_devices)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1073, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1050, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/inputs.py", line 89, in inputs
    dataset_name, data_dir, input_name, n_devices)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/inputs.py", line 527, in _train_and_eval_batches
    dataset, data_dir)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1073, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/gin/config.py", line 1050, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/inputs.py", line 217, in train_and_eval_dataset
    return _train_and_eval_dataset_v1(dataset_name[4:], data_dir)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/trax/inputs.py", line 280, in _train_and_eval_dataset_v1
    train_dataset = problem.dataset(tf.estimator.ModeKeys.TRAIN, data_dir)
  File "/usr/local/google/home/jlebar/code/tensor2tensor/tensor2tensor/data_generators/problem.py", line 646, in dataset
    data_filepattern))
  File "/usr/local/google/home/jlebar/.local/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/data/parallel_reader.py", line 316, in get_data_files
    raise ValueError('No data files found in %s' % (data_sources,))

Gin-Config Best Practices

Hey all,

I'm wondering if folks have some design recommendations for how to use gin-config in both research and production projects. Gin is powerful and can be dangerous if used in excess. In particular, I'm curious about where people recommend storing function default values (for hyperparams specifically). In gin we have 2 popular options:

Option 1: Include default values in method signature, and only override specific hparams in .gin files if we are actively experimenting on them. Do not store defaults in .gin.

def preprocess(inputs, eps=.001, p=1.4):
    # do stuff

Option 2: Store all default values in a base.gin file or similar, and override them as necessary for different experiments. Do not store default hparam values in functions.

def preprocess(inputs, eps, p):
    # do stuff
preprocess.eps = .001
preprocess.p = 1.4

train/preprocess.eps = .0002

The key challenge I'm facing is that it's hard to manage default values in 2 places. It's much easier to store all defaults in a single place. Thoughts?

List of external configurables

Is it possible to pass a list of external configurables in the gin configuration?

Something on the lines of:

build_transforms.transforms_list = [
    @transform1,
    @transform2,
    @transform3
]

transform1.param = 1
transform3.other_param = 2

And if not, is there any way of achieving a similar result?

I need to be able to control the data transforms pipeline and set each transform's parameters individually

Unable to pickle objects bound by `.. = @SomeClass()`

I'm unable to pickle objects that were assigned to a function argument using the following syntax: ... = @SomeClass().

Consider the following script:

import pickle
import gin


@gin.configurable
class Example:
    def __init__(self, a):
        self.a = a

    def __call__(self):
        print("hello", self.a)
        print("Pickle:")
        print(pickle.dumps(self))


@gin.configurable
def say_hello(example):
    print("in function say_hello")
    example()


if __name__ == '__main__':
    gin.parse_config_files_and_bindings(config_files=[],
                                        bindings=['say_hello.example = @Example()',
                                                  'Example.a = 1'])
    e = Example()
    e()
    say_hello()

When I run it, I get the following error:

 python script.py
hello 1
Pickle:
b'\x80\x03c__main__\nExample\nq\x00)\x81q\x01}q\x02X\x01\x00\x00\x00aq\x03K\x01sb.'
in function say_hello
hello 1
Pickle:
Traceback (most recent call last):
  File "script.py", line 28, in <module>
    say_hello()
  File "/opt/modules/i12g/anaconda/3-5.0.1/lib/python3.6/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/opt/modules/i12g/anaconda/3-5.0.1/lib/python3.6/site-packages/gin/utils.py", line 48, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/opt/modules/i12g/anaconda/3-5.0.1/lib/python3.6/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "script.py", line 19, in say_hello
    example()
  File "script.py", line 13, in __call__
    print(pickle.dumps(self))
_pickle.PicklingError: Can't pickle <class '__main__.Example'>: it's not the same object as __main__.Example
  In call to configurable 'say_hello' (<function say_hello at 0x7f92c765de18>)

You can see that the first instance of Example could be correctly pickled, whereas the second one, bound to the say_hello.example argument, couldn't. Any ideas on how to solve this issue? I'm using the recent gin-config package from PyPI (0.1.2) on Python 3.6.8.

Ordinary reference as value?

Now, gin config supports configurable references, macros, basic types (str, bool, number), or nestings of them as values. It would be great if we could also refer to a local variable (module, class, or function):

  def testLocalVariableReference(self):
    config_str = """
          configurable2.non_kwarg = a
    """
    config.parse_config(config_str)

    a = 1
    instance, _ = configurable2()  # pylint: disable=no-value-for-parameter
    self.assertEqual(instance, a)

    a = 2
    instance, _ = configurable2()  # pylint: disable=no-value-for-parameter
    self.assertEqual(instance, a)

  def testUnregisteredFunction(self):
    config_str = """
             configurable2.non_kwarg = numpy.abs
    """
    config.parse_config(config_str)
    import numpy
    instance, _ = configurable2()  # pylint: disable=no-value-for-parameter
    self.assertEqual(instance, numpy.abs)

Without this, every function that might be referred to must be made configurable explicitly via gin.external_configurable (see gin.tf and gin.torch), which seems unreasonable.

Unsure how to parameterise class methods

I'm not sure if this is possible but I would like to gin-parametrise a class method:

import gin


@gin.configurable
def func_static(b):
    print("a", b)

@gin.configurable
class Test():
    @gin.configurable
    def func(self, b):
        print("a", b)

if __name__ == '__main__':
    # works
    gin.parse_config("func_static.b = 'b'")
    func_static()

    # doesnt work
    gin.parse_config("Test.func.b = 'b'")
    Test().func()

I get the same issue if I remove either of the gin.configurable statements from the class Test(). Is this supported?

gin-config 0.3.0

Load gin config to 1 function or as a dict

Hi. Thanks for your nice work.

I am using gin to configure my experiments. Next, while visualizing results in a Jupyter Notebook, I would like to be able to load only my model parameters from the .gin file.

Is there a way to load the .gin params for only one function? Or to load them as a dict?

Thanks :)

gin.operative_config_str() does not show values changed during unlock

I have a function that generates a random seed integer if None is provided.

Because I want to make my experiments reproducible, I need to dump the exact value of the random seed in the config. I am using gin.operative_config_str() to do so. However, this string logs None for the seed value instead of the overridden one.

Simple example:

import gin
import random
import numpy as np


def set_random_seed(seed=None):
    if seed is None:
        seed = random.randint(0, 1024)

    random.seed(seed)
    np.random.seed(seed)
    return seed


@gin.configurable('experiment')
def confgurable_fn(arg1, seed=None):
    seed = set_random_seed(seed)

    with gin.unlock_config():
        gin.bind_parameter("experiment.seed", seed)
    return {'arg1': arg1, 'seed': seed}


if __name__ == '__main__':
    gin.bind_parameter('experiment.arg1', 1)
    confgurable_fn()
    print(gin.operative_config_str())

Which prints

# Parameters for experiment:
# ==============================================================================
experiment.arg1 = 1
experiment.seed = None

Is this the expected behaviour? If so, is there any workaround?

configurable error with pytorch bottleneck profiler

Running PyTorch's bottleneck profiler on a script that has gin configurables causes an error:
RuntimeError: Attempted to add a new configurable after the config was locked. or ValueError: A configurable matching 'some.configurable.module' already exists.

It seems that the profiler runs the script twice using exec, once for cprofile and once for the autograd profiler (https://github.com/pytorch/pytorch/blob/1aae4b02df797f98239fde511575c96b8d6806da/torch/utils/bottleneck/__main__.py#L210) and on the second run, my guess is that gin tries to recreate configs that already exist, causing this error.

This is a pretty niche problem and it's possible for someone to just run cprofile and the autograd profiler separately but I was wondering if there could be a simple workaround. Is there a way to either reset the configurables created (which could be put at the end of the script), somehow ignore errors, or some other workaround you see? Thanks

Here's a minimal snippet

import gin


@gin.configurable
def main(word):
    print(f'hello {word}')

if __name__ == '__main__':
    main('world')

when run with python -m torch.utils.bottleneck script.py gives the output

Running environment analysis...
Running your script with cProfile
hello world
Running your script with the autograd profiler...
Traceback (most recent call last):
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/torch/utils/bottleneck/__main__.py", line 231, in <module>
    main()
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/torch/utils/bottleneck/__main__.py", line 210, in main
    autograd_prof_cpu, autograd_prof_cuda = run_autograd_prof(code, globs)
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/torch/utils/bottleneck/__main__.py", line 102, in run_autograd_prof
    result = [run_prof(use_cuda=False)]
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/torch/utils/bottleneck/__main__.py", line 98, in run_prof
    exec(code, globs, None)
  File "torchgin.py", line 4, in <module>
    @gin.configurable
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/gin/config.py", line 1129, in configurable
    return perform_decoration(decoration_target)
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/gin/config.py", line 1126, in perform_decoration
    return _make_configurable(fn_or_cls, name, module, whitelist, blacklist)
  File "/home/mnoukhov/miniconda3/envs/selfish/lib/python3.7/site-packages/gin/config.py", line 891, in _make_configurable
    raise ValueError(err_str.format(selector))
ValueError: A configurable matching '__main__.main' already exists.

I'm running pytorch 1.1.0 and gin-config 0.1.4

Query the value of a macro

Is it possible to query the value of a macro? I have tried:

gin.parse_config("my_macro = 3")
print(gin.query_parameter("my_macro"))

to which I get the message:
ValueError: No configurable matching 'my_macro'.

Same occurs for print(gin.query_parameter("gin.macro.my_macro")) or print(gin.query_parameter("%my_macro"))

Enum constants not reloaded after clear_config

I ran into a rather confusing issue related to using the constants_from_enum decorator and gin.clear_config(clear_constants=True). In my code there is a loop over multiple parameter sets that clears the configuration and constants before reparsing the input file.

My enum was defined in a separate file which is imported by the config file. The decorator is executed and the constants are added to _CONSTANTS. Once clear_config(clear_constants=True) is called, these constants are removed.
parse_config is called again, but this time mymodule has already been imported and is skipped. This means that the decorator is not run again and the enum constants are missing from _CONSTANTS and gin complains that a macro is missing a value.

This problem can be avoided by setting clear_constants=False, but perhaps gin could prevent this issue by treating enum constants as a special case and not clearing them out?

Below is an example recreating the issue

mymodule.py:

import enum
import gin

@gin.constants_from_enum
class MyEnum(enum.Enum):
    A = 1
    B = 2

main.py:

import gin
#mymodule can be imported here, with a similar effect

@gin.configurable
def myfunc(value):
    print(value)

for i in range(3):
	gin.clear_config(clear_constants=True)
	gin.parse_config([
		'import mymodule',
		'myfunc.value = %MyEnum.B'
	])

	myfunc()

Import module with alias

A project I am working on has many modules and significant module depth. I would like to configure parameters in a number of functions and classes throughout the project using a gin file located in the root directory.

For a function foo located in module a.b.c.d I would like to set a number of function arguments. For module names longer than single letters, it seems a bit overwhelming to write the whole module path for each argument. Instead, I would like to import the module using an alias.

# Found in gin file
import a.b.c.d as m
m.foo.x = 4
m.foo.y = 2

Doing so gives me this error:

                                                ^
SyntaxError: Expected newline.

I assume that this failure is because the parser does not currently support module aliases.

Instead of files, programmatically create gin config

In my use cases, I rarely set hyper parameters by hand. Instead I have an external logic that generates them (e.g. random search). Now, using gin it seems I would have to create config files programmatically, e.g. through templates.

That feels "wrong", even though I could go that route: have a process that creates config files from templates (which would be programmatically generated by inspecting the respective module using gin), then start the processes in a second go.

Nevertheless, I was wondering if I am missing an obvious path to get the same out of it. Something like setting the configuration through pure Python, instead of going through gin's own format, would do the job.
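
For what it's worth, gin.parse_config accepts a string or a list of binding strings, so a search process can generate bindings in pure Python with no files involved (a sketch; the parameter names are placeholders):

import gin

params = {'dnn.layer_sizes': (256, 128), 'optimize_loss.learning_rate': 0.003}
gin.parse_config(['{} = {!r}'.format(k, v) for k, v in params.items()])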

Keras callback to save gin config text file

It was once added in #17 but has since been removed. Currently, there's only a summary hook implementation that works with the Estimator API, which is not the most recommended TF 2.0 way. Would you consider bringing this feature back? :)

Unable to bind tf.keras.* classes

config.gin

build_model.optimizer = @tf.keras.optimizers.Adam()
build_model.loss_fn = @tf.keras.losses.sparse_categorical_crossentropy


@gin.configurable
def build_model(self, optimizer, loss_fn):
    self.model.compile(optimizer=optimizer,
                       loss=loss_fn,
                       metrics=['accuracy'])


I get the following error:

No configurable matching reference '@tf.keras.optimizers.Adam()'.
In file "../configs/config.gin", line 4
build_model.optimizer = @tf.keras.optimizers.Adam()


By the way it works fine if I replace @tf.keras.optimizers.Adam() with @tf.train.AdamOptimizer()

How can I get rid of this error without updating TF?

The CUDA version on my machine is old and cannot run the newest TF. I get the error:

ImportError: This version of Gin-Config requires TensorFlow version >= 1.12.0; Detected an installation of version 1.9.0.

Are there any workarounds to get rid of this error?

unable to leverage imports due to 'Failed to parse token' ; SyntaxError: Expected '='. ; ModuleNotFoundError: No module named ; SyntaxError: malformed node or string

We'd like to configure our modular project with gin but the imports always break:

import nature

  File "config.gin", line 3
    FC = nature.FC
        ^
SyntaxError: malformed node or string: <_ast.Name object at 0x7f5aeb65b0f0>
    Failed to parse token 'nature'

from nature import *

  File "config.gin", line 1
    from nature import *
        ^
SyntaxError: Expected '='.

import nature.bricks.built_in.fc.FC

ModuleNotFoundError: No module named 'nature.bricks.built_in.fc.FC'; 'nature.bricks.built_in.fc' is not a package
  In file "config.gin", line 1
    import nature.bricks.built_in.fc.FC

import nature.bricks.built_in.fc


  File "config.gin", line 23
    Layer.layer_fn = fc.FC
                    ^
SyntaxError: malformed node or string: <_ast.Name object at 0x7fa62799bba8>
    Failed to parse token 'fc'

FC is decorated with @gin.configurable and we import nature in the normal python file which invokes gin.parse_config_file

maybe if "gin" files were .py files, then the imports would work, but as it stands, gin doesn't work like normal python because it's a separate/reinvented format, and thus always fails on imports, aliases, functions

How to bind configurable references using `gin.bind_parameter`?

Currently I'm doing some hyperparameter optimization for some gin-configured object, and I need to rebind some configurable references at run-time. I can bind an object to some name using gin.bind_parameter, but when I save the config using gin.operative_config_str(), the run-time binding is not there. What's the best way to do this? Thanks

Using importlib and gin-config

Hello, how do I configure a parameter used in a class that is imported using importlib?

  1. Running in main.py:
import gin
import importlib

gin.parse_config_file('config.gin')
module = importlib.import_module('src.external.SomeModule')
_class = getattr(module, 'SomeClass')

  2. This is the externally defined module in src.external.SomeModule:
@gin.configurable
class SomeClass:
    def __init__(self, variable):
        self.variable = variable

  3. This is the 'config.gin' file:
SomeClass.variable=0

However if we define it this way, we get:

Traceback (most recent call last):
  File "main_test.py", line 291, in <module>
    gin.parse_config_file(args.config_path)
  File "/site-packages/gin/config.py", line 1599, in parse_config_file
    parse_config(f, skip_unknown=skip_unknown)
  File "/site-packages/gin/config.py", line 1517, in parse_config
    bind_parameter((scope, selector, arg_name), value)
  File "/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/site-packages/gin/utils.py", line 68, in try_with_location
    augment_exception_message_and_reraise(exception, _format_location(location))
  File "/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/site-packages/gin/utils.py", line 66, in try_with_location
    yield
  File "/site-packages/gin/config.py", line 1517, in parse_config
    bind_parameter((scope, selector, arg_name), value)
  File "/site-packages/gin/config.py", line 643, in bind_parameter
    pbk = ParsedBindingKey(binding_key)
  File "/site-packages/gin/config.py", line 508, in __new__
    raise ValueError("No configurable matching '{}'.".format(selector))
ValueError: No configurable matching 'src.data.SomeModule.SomeClass'.
  In file "config/cnn_test.gin", line 35
    src.external.SomeModule.SomeClass.variable=0.

inheriting configs with default parameters

Let's consider a simple two-class hierarchy:

@gin.configurable
class A:
  def __init__(self, a=1):
    self.a = a

@gin.configurable
class B(A):
  def __init__(self, a=1, b=2):
    super().__init__(a)
    self.b = b

Ideally I would like to replicate this structure in my .gin config files:

A.a = 3
B.b = 4

So that when I call obj = B() it would contain obj.a == 3 and obj.b == 4.
However, right now this does not happen - I get obj.a == 1 and obj.b == 4.
This is because A's constructor is called manually (from B's __init__) with a explicitly set to B's default, so Gin does not apply the A.a binding.

The current workaround is to define the full configuration for each class separately; however, that becomes quite cumbersome when configuring many classes that inherit from one base class with the majority of the parameters shared.


One possible fix would be to reuse existing macro syntax and explicitly replicate the hierarchy in .gin config, e.g.:

A.a = 3

B = %A
B.b = 4

Value as statement?

Hi. Is there any way to assign a statement to an argument in Gin? For example, I want to do something like this:

# inside config file
output_size = 10
network.hidden_size = %output_size / 2

Is this possible?
Otherwise I want to request this as a feature.

Example of using the command line functionality

Do you think you guys could provide an example of passing configurable parameters via command line?

Regardless, thank you to everyone involved with gin-config! I had been hacking something together along these lines but you guys came along and saved me a lot of effort producing something that does half as much. Great work!

TypeError: 'zip' object is not subscriptable

This is the error summary

---> 32 import gin.tf
33 import numpy as np
34 import replay_memory

~/anaconda3/lib/python3.6/site-packages/gin/tf/init.py in
66
67 # pylint: disable=unused-import
---> 68 from gin.tf.utils import GinConfigSaverHook
69
70 # pylint: enable=g-import-not-at-top

~/anaconda3/lib/python3.6/site-packages/gin/tf/utils.py in
32
33 # Register TF file reader for Gin's parse_config_file.
---> 34 config.register_file_reader(tf.gfile.Open, tf.gfile.Exists)
35
36

~/anaconda3/lib/python3.6/site-packages/gin/config.py in register_file_reader(*args)
1412 return functools.partial(do_registration, is_readable_fn=args[0])
1413 elif len(args) == 2:
-> 1414 do_registration(*args)
1415 else: # 0 or > 2 arguments supplied.
1416 err_str = 'register_file_reader() takes 1 or 2 arguments ({} given)'

~/anaconda3/lib/python3.6/site-packages/gin/config.py in do_registration(file_reader_fn, is_readable_fn)
1406 """
1407 def do_registration(file_reader_fn, is_readable_fn):
-> 1408 if file_reader_fn not in list(zip(*_FILE_READERS))[0]:
1409 _FILE_READERS.append((file_reader_fn, is_readable_fn))
1410

TypeError: 'zip' object is not subscriptable


How do I fix this?

AttributeError: module 'tensorflow._api.v1.random' has no attribute 'categorical'

Problem Definition

I receive the following error when running import gin.tf.external_configurables:
AttributeError: module 'tensorflow._api.v1.random' has no attribute 'categorical'

Install Process

I installed gin-config via GitHub, and installed an older version to avoid error #9:

git clone https://github.com/google/gin-config.git
git checkout 89e9c79d465263ce825e718d90413e2bcffadd64
python -m setup.py install

Traceback

Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 05:52:31) 
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import gin.tf.external_configurables
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
  File "/Users/user-name/rl/lib/python3.6/site-packages/gin_config-0.1.2-py3.6.egg/gin/tf/external_configurables.py", line 99, in <module>
AttributeError: module 'tensorflow._api.v1.random' has no attribute 'categorical'

Versions

tensorflow   1.12.0
gin-config   0.1.2

Any assistance is greatly appreciated!!

Gin Breaks Jupyter Module Reload

I'm using gin in a Jupyter notebook and I'm unable to reload a module when I've made changes to it. I'll get an error that the configurables already exist: ValueError: A configurable matching 'main.get_train_envs' already exists.

I tried running this before doing a module reload:

tf.reset_default_graph()
gin.config.clear_config()

But this doesn't work because it doesn't clear the gin registry.

This is a real pain because it requires me to restart my Jupyter kernel and re-run every cell in my notebook on any change to any dependency. Typically this is unnecessary in Jupyter notebook development. Modules can be automatically reloaded with:

%load_ext autoreload
%autoreload 2

Or manually with:

import importlib
importlib.reload(some_module)

Feature Request: Remove restriction to define all configurables before parsing config file?

I'm working in a Jupyter notebook. I would like to import gin and gin.parse_config_file('config.gin') at the top of the notebook. Then, inside the notebook, I would like to define a @gin.configurable function which is only needed inside this notebook. Currently, when I do this, I get an error at the point where I parse the config file (before defining the configurable function):

ValueError: No configurable matching 'my_function'.
  In file "config.gin", line 3
    my_function.my_arg = 'my_string'

Is this behavior intentional? If so, why? (I'm fairly new to python, so it could be that there's a great reason for this, but it's not obvious to me.)

I think this feature request would make gin more flexible and easier to use in this setting, and it seems like a reasonable behavior given that gin already requires configurable names to be unique. It seems like one of the benefits of gin is providing deep configuration of your python functions and classes, meaning that I can, for example, configure the constructor of a particular class without changing any of the code which instantiates that class to know about the configuration. So it seems counter-intuitive, then, that the same library would require me to define all of my configurable functions and classes in advance of loading my config file, which essentially means that I need an executable script which is completely independent of any (configurable) function or class definitions.

Without knowing the technical complexity of the change, it seems like the only trade-off in supporting this feature request is that we would lose the ability to give this kind of error message (shown above) upon parsing a config file, because we wouldn't know in advance whether a particular configurable will ever be defined.

AttributeError: module 'tensorflow._api.v1.io' has no attribute 'gfile'

Problem Definition

I receive the following error when running import gin.tf:
AttributeError: module 'tensorflow._api.v1.io' has no attribute 'gfile'

Install Process

I installed gin-config via pip inside an existing Anaconda environment:
pip install gin-config

Traceback

Python 3.6.7 |Anaconda custom (64-bit)| (default, Dec 10 2018, 20:35:02) [MSC v.1915 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import gym.tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'gym.tf'
>>> import gin.tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\18048\AppData\Local\Continuum\anaconda3\envs\base_3_6\lib\site-packages\gin_config-0.1.3-py3.6.egg\gin\tf\__init__.py", line 20, in <module>
    from gin.tf.utils import GinConfigSaverHook
  File "C:\Users\18048\AppData\Local\Continuum\anaconda3\envs\base_3_6\lib\site-packages\gin_config-0.1.3-py3.6.egg\gin\tf\utils.py", line 34, in <module>
    config.register_file_reader(tf.io.gfile.GFile, tf.io.gfile.exists)
AttributeError: module 'tensorflow._api.v1.io' has no attribute 'gfile'

Versions

tensorflow                1.12.0
gin-config                0.1.3               

Any assistance is greatly appreciated!

TF2.0 compatibility?

Does this work with TF 2.0 / is compatibility on the roadmap? Great work so far on this.

Getting "no module named gin.torch" when doing "import gin.torch"

I set up a new environment using this sequence of commands:

conda create -n test-gin python=3.7
conda activate test-gin
pip install torch
pip install gin-config
python -c "import gin.torch"

This results in the following:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'gin.torch'

Unsure what's going wrong.

Thanks in advance for the help! gin is quite cool and working well for me otherwise.

[Feature Requests] Support configuration aliases / names that persist through code refactoring

Hello project maintainers,

We started evaluating gin-config for wider use in some of our projects. One thing I noticed is that gin config files are tied to symbol names in code (i.e. class names and function names).

In repositories with high change velocity, refactoring class and function names tends to break their associated config files. Could you provide functionality to specify stable aliases for configuration names? For example, like this:

@gin.configurable(alias=model_training_config)
def train_model(learning_rate):
  pass
model_training_config.learning_rate=0.001

The goal is that when we refactor and change the function name, its associated configuration should not break. This functionality is usually available in dependency injection frameworks in general.

Thank you very much for your time.

-Yi

Feature Request: Parse within Python

I would like for the following to be valid:

gin.parse_line('foo.x=1')

So I can inject values programmatically.

Thoughts?

Edit: I see this is achievable with gin.parse_config, somehow I missed it!
