
clr's People

Contributors

bckenstler, carlthome, jeremyjordan

clr's Issues

Linking R implementation of CLR

Hi @bckenstler. This is great. I found your repo and figured the R implementation of keras could also benefit greatly from this. I translated your code into R more or less literally and put it into a new package I plan to develop. Would you consider linking my repo somewhere at the top of your README so that people looking for the R implementation can find it easily?

Plotting range of Learning Rate

Hi,
Thank you so much for your work.
I want to plot the learning rate schedule (base_lr=0.01, max_lr=0.1) for my method:

schedulers = torch.optim.lr_scheduler.CyclicLR(optim_backbone, base_lr=0.01, max_lr=0.1, step_size_up=2000, step_size_down=None, mode='triangular')

I'd like something like Figure 2(a) of this paper: https://arxiv.org/pdf/1708.07120.pdf
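For what it's worth, one way to get that plot is to step the scheduler without training and record the learning rate at each iteration. A minimal sketch, assuming PyTorch and matplotlib are installed (the dummy model, momentum value, and iteration count are placeholders):

import torch
import matplotlib.pyplot as plt

# Dummy model, only needed to construct an optimizer.
model = torch.nn.Linear(1, 1)
optim_backbone = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optim_backbone, base_lr=0.01, max_lr=0.1,
    step_size_up=2000, step_size_down=None, mode='triangular')

# Record the lr over e.g. two full cycles (2 * 2 * step_size_up iterations).
lrs = []
for _ in range(8000):
    lrs.append(optim_backbone.param_groups[0]['lr'])
    optim_backbone.step()   # step the optimizer first to avoid a warning
    scheduler.step()

plt.plot(lrs)
plt.xlabel('iteration')
plt.ylabel('learning rate')
plt.show()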

Clarification for step_size?

From the README: "step_size : number of training iterations per half cycle. Authors suggest setting step_size = (2-8) x (training iterations in epoch). Default 2000."
Does this mean step_size should be "np.ceil(x_train.shape[0]/batch_size/2)" or "2*np.ceil(x_train.shape[0]/batch_size)"?
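If it helps, the README's phrasing "(2-8) x (training iterations in epoch)" reads as a multiple of the per-epoch iteration count, i.e. the second option; with a factor of 2, one half cycle then spans two epochs. A sketch with placeholder numbers:

import numpy as np

# Placeholder dataset size and batch size.
n_train, batch_size = 50000, 100
iterations_per_epoch = int(np.ceil(n_train / batch_size))

# "step_size = 2 x training iterations in epoch": one half cycle = 2 epochs.
step_size = 2 * iterations_per_epoch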

PR to keras

Hi, this callback seems quite interesting. Do you plan to PR to the keras repo?

Order of learning rate augmentation

Note that the clr callback updates the learning rate prior to any further learning rate adjustments as called for in a given optimizer.

Hi @bckenstler, excellent work! I am still confused about what you said regarding the "order of learning rate augmentation". If the CLR callback is added and sets the learning rate after each training batch ends, will a given optimizer (e.g. Adam) still adjust the learning rate that CLR just set when updating the weights? Thanks!
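To illustrate the mechanism (a minimal sketch, not the repo's exact code; TinyCLR and schedule are made-up names): the callback only overwrites the optimizer's stored base lr, and an optimizer like Adam then applies its usual per-parameter scaling on top of whatever value is currently stored there. The two compose rather than conflict.

from tensorflow import keras
import tensorflow.keras.backend as K

class TinyCLR(keras.callbacks.Callback):
    """Illustration only: overwrite the optimizer's base lr after each batch."""

    def __init__(self, schedule):
        super().__init__()
        self.schedule = schedule  # function: iteration -> learning rate
        self.iteration = 0

    def on_batch_end(self, batch, logs=None):
        self.iteration += 1
        # Adam's update is roughly lr * m_hat / (sqrt(v_hat) + eps), so the
        # lr set here is still rescaled per parameter by Adam's moments.
        K.set_value(self.model.optimizer.lr, self.schedule(self.iteration))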

Have you considered submitting this to PyPI?

It's cool that you've implemented a cyclical learning rate for Keras, but have you considered adding it to PyPI? That way it would be much easier for others to incorporate CLR into their own projects.

CLR callback for R's keras

After using CLR for a while in models written in Python, I must say it makes a huge difference in my work.

Now that R is well served by the keras package, I wonder if you could also write a CLR callback for R's Keras (see its API here)? That would work wonders for people who, for one reason or another, already have models written in R.

Thanks!

AttributeError: 'CyclicLR' object has no attribute 'on_train_batch_begin'

Hi, I've tried to use your class in my training code, but I got the following error: AttributeError: 'CyclicLR' object has no attribute 'on_train_batch_begin'.

My code is the following:

import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
from clr_callback import CyclicLR  # assuming the callback module from this repo

base_model = ResNet50(weights='imagenet', include_top=False)
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(units=200, activation='relu'),
    tf.keras.layers.Dense(units=5, activation='softmax')
])

model.compile(optimizer=tf.keras.optimizers.Adam(0.001), metrics=['accuracy'],
              loss=tf.keras.losses.sparse_categorical_crossentropy)
clr = CyclicLR(base_lr=0.001, max_lr=0.006, step_size=240)
model.fit(train_dataset, epochs=30, steps_per_epoch=60, validation_data=val_dataset,
          validation_steps=1, callbacks=[clr])

By the way, train_dataset and val_dataset are tf.data.Datasets.

Versions: Python 3 and TF 1.10.

Any idea what causes this issue?
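One guess, offered as a sketch rather than a confirmed fix: this tf.keras version drives callbacks through on_train_batch_begin/on_train_batch_end hooks, which the standalone-Keras Callback base class used by CyclicLR does not define. Forwarding them to the existing batch hooks may help:

# Hypothetical workaround: forward the tf.keras batch hooks to the
# on_batch_* handlers that CyclicLR already implements (or inherits as no-ops).
class PatchedCyclicLR(CyclicLR):
    def on_train_batch_begin(self, batch, logs=None):
        self.on_batch_begin(batch, logs)

    def on_train_batch_end(self, batch, logs=None):
        self.on_batch_end(batch, logs)

clr = PatchedCyclicLR(base_lr=0.001, max_lr=0.006, step_size=240)

Alternatively, changing the class to inherit from tf.keras.callbacks.Callback (which defines these hooks) may achieve the same thing.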

On learning rate range test

Hi @bckenstler ,
Thanks a lot for sharing your implementation. I just read the paper on cyclical learning rates. I'd like to know how you dealt with the following:

  1. How do you choose the number of epochs to run the model for?
  2. When I run the LR range test and estimate accuracy on the validation set, I get an accuracy of about 0.5 for almost all learning rates. Since the model is hardly trained, it behaves like a random classifier, so a validation accuracy around 0.5 looks justified. But this is not what the accuracy vs. learning rate curve looks like in the paper. How did you deal with this? (See the sketch after this list.)

Thanks
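For reference, a common way to run the range test with this callback is to let the lr climb linearly from base_lr to max_lr exactly once, by setting step_size to the total number of training iterations, then plot the recorded history. A sketch with toy data (the clr_callback module name, the epoch count, and the metric key are assumptions):

import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from clr_callback import CyclicLR  # module name assumed from this repo

# Toy data and model, just to make the sketch runnable; substitute your own.
n_train, batch_size, epochs = 2000, 100, 3
x_train = np.random.rand(n_train, 10)
y_train = np.random.randint(0, 2, size=(n_train,))
model = keras.Sequential([keras.layers.Dense(1, activation='sigmoid', input_shape=(10,))])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

# step_size = total iterations, so the lr ramps from base_lr to max_lr once.
iterations = (n_train // batch_size) * epochs
clr = CyclicLR(base_lr=1e-5, max_lr=1e-1, step_size=iterations, mode='triangular')
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, callbacks=[clr])

# The callback records per-batch logs; the accuracy key may be 'acc' or
# 'accuracy' depending on the Keras version.
plt.plot(clr.history['lr'], clr.history['accuracy'])
plt.xscale('log')
plt.xlabel('learning rate')
plt.ylabel('training accuracy')
plt.show()

On the 0.5 plateau: per-batch accuracy on a barely-trained model is noisy, and the curve usually only becomes informative once the lr grows large enough for the loss to start moving, so a longer ramp or a smoothed plot may help.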

FYI - Trapezoid schedule implementation is ready

Thanks to your CLR implementation, I forked it and tailored another version for the trapezoid schedule introduced in this paper:

This is just for your information, you can find it here: https://github.com/daisukelab/TrapezoidalLR

I was thinking I could ask for a merge, but I just kept it as a separate version. I suspect the trapezoid schedule might only be a temporary solution, even though I implemented it...

May I use CLR with the Adam optimizer?

From the paper and your implementation, the examples only use the SGD optimizer. I am wondering if I can use CLR with Adam or other optimizers. Many thanks.
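As far as the mechanism goes, the callback only rewrites the optimizer's lr variable, which Adam exposes as well, so it should attach the same way as with SGD. A hedged sketch with stand-in data and model (clr_callback is assumed to be the repo's module):

import numpy as np
from tensorflow import keras
from clr_callback import CyclicLR  # module name assumed from this repo

# Stand-in data and model; replace with your own.
x = np.random.rand(512, 8)
y = np.random.randint(0, 2, size=(512,))
model = keras.Sequential([keras.layers.Dense(1, activation='sigmoid', input_shape=(8,))])

# Same pattern as with SGD, just with Adam as the optimizer.
model.compile(optimizer=keras.optimizers.Adam(0.001), loss='binary_crossentropy')
clr = CyclicLR(base_lr=1e-4, max_lr=6e-3, step_size=2000)
model.fit(x, y, batch_size=32, epochs=2, callbacks=[clr])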

LR vs Accuracy

Hi,

I am trying to plot LR vs. accuracy, but it is not showing a stable graph like the one on the page.

As shown in the paper:
[image]

Mine shows something like this:
[image]

Any suggestions?
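One possibility, sketched below under the assumption that clr is the fitted CyclicLR callback: per-batch accuracy is very noisy, so smoothing the recorded history with a moving average often recovers a curve closer to the paper's.

import numpy as np
import matplotlib.pyplot as plt

def smooth(values, window=20):
    # Simple moving average; the window size is an arbitrary choice.
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

lr = np.asarray(clr.history['lr'])    # clr: the fitted CyclicLR callback
acc = np.asarray(clr.history['acc'])  # key may be 'accuracy' in newer Keras
acc_smooth = smooth(acc)
plt.plot(lr[:len(acc_smooth)], acc_smooth)
plt.xlabel('learning rate')
plt.ylabel('accuracy (smoothed)')
plt.show()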

How to reset the lr cycle

Dear @bckenstler ,

recently I stumbled across an issue when calling CLR from keras 2.1.5:
I ran using

CyclicLR(base_lr=1e-5, max_lr=8e-4, mode='triangular2', step_size=trn_steps//10, scale_mode='iterations')

where trn_steps is equal to steps_per_epoch in model.fit_generator.

Now, my observation is that during the first epoch CLR goes through 10 cycles (as planned), but then the lr stays constant throughout the remaining epochs. How do I properly reset the lr cycle? I tried scale_mode='cycle' as well, but no luck. What am I doing wrong?
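Two hedged observations. First, 'triangular2' halves the cycle amplitude after every cycle, so after ten cycles in the first epoch the lr is pinned near base_lr by design; 'triangular' keeps the amplitude constant. Second, the callback in this repo appears to provide a _reset() helper (leading underscore as in the source) that restarts the cycle and optionally takes new bounds; calling it between fit runs is one option. A sketch, where model, train_gen, and trn_steps stand in for your own objects:

# Restart the schedule between two training runs (illustrative placeholders).
clr = CyclicLR(base_lr=1e-5, max_lr=8e-4, mode='triangular', step_size=trn_steps // 10)
model.fit_generator(train_gen, steps_per_epoch=trn_steps, epochs=5, callbacks=[clr])

# _reset() zeroes the internal iteration counter so the cycle starts over;
# new_base_lr / new_max_lr / new_step_size can optionally be passed.
clr._reset(new_base_lr=1e-5, new_max_lr=8e-4)
model.fit_generator(train_gen, steps_per_epoch=trn_steps, epochs=5, callbacks=[clr])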

Strange Error

Hello,
I am trying to run CLR as described, and it works very well with VGG16. But when training other networks, like DenseNet, I get the following error (TypeError: integer argument expected, got float):


Epoch 1/10
2/91 [..............................] - ETA: 37:11 - loss: 0.3528 - acc: 0.8500

TypeError Traceback (most recent call last)
<ipython-input> in <module>()
26 validation_steps = len(val_list) // batch_size + 1,
27 callbacks=[clr],
---> 28 verbose=1)

/opt/conda/lib/python3.6/site-packages/Keras-2.2.4-py3.6.egg/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/opt/conda/lib/python3.6/site-packages/Keras-2.2.4-py3.6.egg/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1434 use_multiprocessing=use_multiprocessing,
1435 shuffle=shuffle,
-> 1436 initial_epoch=initial_epoch)
1437
1438 @interfaces.legacy_generator_methods_support

/opt/conda/lib/python3.6/site-packages/Keras-2.2.4-py3.6.egg/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
217 batch_logs[l] = o
218
--> 219 callbacks._call_batch_hook('train', 'end', batch_index, batch_logs)
220
221 batch_index += 1

/opt/conda/lib/python3.6/site-packages/Keras-2.2.4-py3.6.egg/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
93 'Method (%s) is slow compared '
94 'to the batch update (%f). Check your callbacks.', hook_name,
---> 95 delta_t_median)
96 if hook == 'begin':
97 self._t_enter_batch = time.time()

TypeError: integer argument expected, got float

Does anybody have an idea what the reason for this error is? Thank you!

Dima S.
