
redo's Introduction

Redo - Utilities to retry Python callables

Introduction

Redo provides various means to add seamless retry ability to any Python callable. It includes plain functions (redo.retry, redo.retry_async), decorators (redo.retriable, redo.retriable_async), and a context manager (redo.retrying), so you can integrate it in whatever way best fits your project. As a bonus, a standalone command-line interface ("retry") is also included.

Installation

To install with pip, run the following command:

pip install redo

How To Use

Below is the list of available functions:

  • retrier
  • retry
  • retry_async
  • retriable
  • retriable_async
  • retrying (contextmanager)

retrier(attempts=5, sleeptime=10, max_sleeptime=300, sleepscale=1.5, jitter=1)

A generator function that sleeps between retries and handles exponential backoff and jitter. The action you are retrying is meant to run after retrier yields. At each iteration, we sleep for sleeptime + random.randint(-jitter, jitter). Afterwards, sleeptime is multiplied by sleepscale for the next iteration.

Arguments Detail:

  1. attempts (int): maximum number of times to try; defaults to 5
  2. sleeptime (float): how many seconds to sleep between tries; defaults to 10 seconds
  3. max_sleeptime (float): the longest we'll sleep, in seconds; defaults to 300s (five minutes)
  4. sleepscale (float): how much to multiply the sleep time by each iteration; defaults to 1.5
  5. jitter (int): random jitter to introduce to the sleep time each iteration; the amount is chosen at random from [-jitter, +jitter]; defaults to 1

Output: yields None, at most attempts times

Example:

>>> n = 0
>>> for _ in retrier(sleeptime=0, jitter=0):
...     if n == 3:
...         # We did the thing!
...         break
...     n += 1
>>> n
3
>>> n = 0
>>> for _ in retrier(sleeptime=0, jitter=0):
...     if n == 6:
...         # We did the thing!
...         break
...     n += 1
... else:
...     print("max tries hit")
max tries hit
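
A typical pattern is to run the action inside the loop and swallow only the exceptions you consider retryable, mirroring the for/else form above. A minimal, self-contained sketch (fetch below is a made-up stand-in for a flaky operation):

import random

from redo import retrier

def fetch():
    # made-up stand-in for a flaky operation: fails ~70% of the time
    if random.random() < 0.7:
        raise IOError("temporary failure")
    return "data"

result = None
for _ in retrier(attempts=5, sleeptime=0.1, jitter=0):
    try:
        result = fetch()
        break
    except IOError:
        continue
else:
    # the for/else branch runs only when all attempts were exhausted
    raise RuntimeError("all attempts failed")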

retry(action, attempts=5, sleeptime=60, max_sleeptime=5 * 60, sleepscale=1.5, jitter=1, retry_exceptions=(Exception,), cleanup=None, args=(), kwargs={})

Calls an action function until it succeeds, or we give up.

Arguments Detail:

  1. action (callable): the function to retry
  2. attempts (int): maximum number of times to try; defaults to 5
  3. sleeptime (float): how many seconds to sleep between tries; defaults to 60s (one minute)
  4. max_sleeptime (float): the longest we'll sleep, in seconds; defaults to 300s (five minutes)
  5. sleepscale (float): how much to multiply the sleep time by each iteration; defaults to 1.5
  6. jitter (int): random jitter to introduce to the sleep time each iteration; the amount is chosen at random from [-jitter, +jitter]; defaults to 1
  7. retry_exceptions (tuple): tuple of exceptions to be caught. If other exceptions are raised by action(), then these are immediately re-raised to the caller.
  8. cleanup (callable): optional; called if one of retry_exceptions is caught. No arguments are passed to the cleanup function; if your cleanup requires arguments, consider using functools.partial or a lambda function.
  9. args (tuple): positional arguments to call action with
  10. kwargs (dict): keyword arguments to call action with

Output: Whatever action(*args, **kwargs) returns

Raises: Whatever action(*args, **kwargs) raises. retry_exceptions are caught until the final attempt, at which point they are re-raised.

Example:

>>> count = 0
>>> def foo():
...     global count
...     count += 1
...     print(count)
...     if count < 3:
...         raise ValueError("count is too small!")
...     return "success!"
>>> retry(foo, sleeptime=0, jitter=0)
1
2
3
'success!'
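
For a fuller picture, here is a hedged sketch combining args, kwargs, retry_exceptions, and a cleanup hook built with functools.partial, as mentioned in the argument list above. download_file is a hypothetical callable; only redo.retry is assumed from the library:

import functools
import os

from redo import retry

def remove_partial_download(path):
    # cleanup hook: drop whatever a failed attempt left behind
    if os.path.exists(path):
        os.remove(path)

result = retry(
    download_file,                       # hypothetical: download_file(url, dest, timeout=...)
    attempts=4,
    sleeptime=2,
    jitter=1,
    retry_exceptions=(IOError,),         # anything else is re-raised immediately
    cleanup=functools.partial(remove_partial_download, "/tmp/big.tar.gz"),
    args=("https://example.com/big.tar.gz", "/tmp/big.tar.gz"),
    kwargs={"timeout": 30},
)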

retry_async(func, attempts=5, sleeptime_callback=calculate_sleep_time, retry_exceptions=Exception, args=(), kwargs={}, sleeptime_kwargs=None)

An asynchronous function that retries a given async callable.

Arguments Detail:

  1. func (function): an awaitable function to retry
  2. attempts (int): maximum number of attempts; defaults to 5
  3. sleeptime_callback (function): function to determine sleep time after each attempt; defaults to calculate_sleep_time
  4. retry_exceptions (list or exception): exceptions to retry on; defaults to Exception
  5. args (list): arguments to pass to func
  6. kwargs (dict): keyword arguments to pass to func
  7. sleeptime_kwargs (dict): keyword arguments to pass to sleeptime_callback

Output: The return value of a successful func call; once attempts are exhausted, the last exception is re-raised.

Example:

>>> async def async_action():
...     pass  # your async code here
>>> result = await retry_async(async_action)
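
A slightly fuller, runnable sketch (flaky_request is a made-up coroutine; only redo.retry_async is assumed from the library):

import asyncio
import random

from redo import retry_async

async def flaky_request():
    # made-up coroutine that fails roughly half the time
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return "payload"

async def main():
    # only ConnectionError is retried; other exceptions propagate immediately.
    # The default sleeptime_callback still applies backoff between attempts.
    return await retry_async(flaky_request, retry_exceptions=(ConnectionError,))

print(asyncio.run(main()))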

retriable(*retry_args, **retry_kwargs)

A decorator factory for retry(). Wrap your function in @retriable(...) to give it retry powers!

Arguments Detail: Same as for retry, with the exception of action, args, and kwargs, which are left to the normal function definition.

Output: A function decorator

Example:

>>> count = 0
>>> @retriable(sleeptime=0, jitter=0)
... def foo():
...     global count
...     count += 1
...     print(count)
...     if count < 3:
...         raise ValueError("count too small")
...     return "success!"
>>> foo()
1
2
3
'success!'

retriable_async(retry_exceptions=Exception, sleeptime_kwargs=None)

A decorator for asynchronously retrying a function.

Arguments Detail:

  1. retry_exceptions (list or exception): exceptions to retry on; defaults to Exception
  2. sleeptime_kwargs (dict): keyword arguments to pass to the sleeptime callback

Output: A function decorator that applies retry_async to the decorated function.

Example:

>>> @retriable_async()
... async def async_action():
...     pass  # your async code here
>>> result = await async_action()
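
A hedged sketch showing the decorator with a restricted retry_exceptions argument (fetch_data is a made-up coroutine):

import asyncio

from redo import retriable_async

@retriable_async(retry_exceptions=(ConnectionError,))
async def fetch_data():
    # made-up coroutine body; ConnectionError triggers a retry,
    # any other exception propagates immediately
    ...

result = asyncio.run(fetch_data())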

retrying(func, *retry_args, **retry_kwargs)

A context manager for wrapping functions with retry functionality.

Arguments Detail:

  1. func (callable): the function to wrap; other arguments are as per retry

Output: A context manager that returns retriable(func) on __enter__

Example:

>>> count = 0
>>> def foo():
...     global count
...     count += 1
...     print(count)
...     if count < 3:
...         raise ValueError("count too small")
...     return "success!"
>>> with retrying(foo, sleeptime=0, jitter=0) as f:
...     f()
1
2
3
'success!'

redo's People

Contributors

adammcmaster, ayrx, bhearsum, callek, dependabot[bot], dieresys, escapewindow, gabrielbusta, helfi92, hneiva, jonringer, mhsiddiqui, milescrabill, mozilla-github-standards, nolanlum, parkouss, patrickdepinguin, peterbe, samueldg, spacerat, tomprince, usize

redo's Issues

can't sleep for less than 1 second with jitter>0

It would be useful to me to be able to sleep for milliseconds with jitter, but this isn't possible because jitter is defined as an integer and must be less than the sleep time.

i.e. retrier(sleeptime=0.5, jitter=1) is not possible.

Ideally, jitter would also be a float.

Logging error when args contains "%"

I ran into this when retrying a call to requests.put where the URL contains a %, but here's a simpler example:

>>> retry(print, args=("%",))
DEBUG:redo:attempt 1/5
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/logging/__init__.py", line 992, in emit
    msg = self.format(record)
  File "/usr/local/lib/python3.6/logging/__init__.py", line 838, in format
    return fmt.format(record)
  File "/usr/local/lib/python3.6/logging/__init__.py", line 575, in format
    record.message = record.getMessage()
  File "/usr/local/lib/python3.6/logging/__init__.py", line 338, in getMessage
    msg = msg % self.args
ValueError: unsupported format character ''' (0x27) at index 35
Call stack:
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/redo/__init__.py", line 161, in retry
    logfn(log_attempt_format, n)
Message: "retry: calling print with args: ('%',), kwargs: {}, attempt #%d"
Arguments: (1,)
%
>>> 

My first thought is that this could be fixed by doing a replace here to escape the %. Or maybe it'd be safer to switch to using .format().

I'm happy to open a PR with a fix if someone could let me know what the preferred approach would be.
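
One possible shape of the fix, for illustration only (a sketch based on the names visible in the traceback above, not necessarily the maintainers' preferred approach):

# Sketch of one possible fix inside retry(); variable names follow the
# traceback above, and the real code may differ. Escape "%" in the parts
# interpolated from args/kwargs so only the intended %d placeholder is
# left for the logging call to fill in.
args_repr = str(args).replace("%", "%%")
kwargs_repr = str(kwargs).replace("%", "%%")
log_attempt_format = (
    "retry: calling %s with args: %s, kwargs: %s, attempt #%%d"
    % (action_name, args_repr, kwargs_repr)
)
logfn(log_attempt_format, n)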

Update to follow Best Practices

For the backend:

(Remove items that are not applicable to this project).

call cleanup hook only if we decide to have next attempt

What I mean is, change the original code from

            if cleanup:
                cleanup()
            if n == attempts:
                log.info("retry: Giving up on %s", action_name)
                raise

to

            if n == attempts:
                log.info("retry: Giving up on %s", action_name)
                raise
            if cleanup:
                cleanup()

Two benefits of this change:

  1. The cleanup may be costly. Doing cleanup when there will be no further attempts is a waste.
  2. If the last attempt failed, it is better to keep the environment as-is for developer investigation. Cleanup may be destructive and make debugging harder.

The original behavior (doing something after the final exception) could still be achieved inside action() using a normal try..except.

What do you think? If you think it is a good change, I can send a PR.

License and packaging

Hi team,

I want to provide this wonderful project as a package in conda-forge, which requires including the license file if the PyPI release itself does not include it. However, I cannot even find it in this repo's source. I have two requests, then:

  1. Specify and include the license in the repo source
  2. Make sure this is distributed in the next release

Thanks!

Redundant logging for the first attempt

The retry method will add an 'info' log on every attempt, even on the first one. When all goes well, this pollutes the log.
What do you think of:
a. removing the log on the first attempt completely, or
b. reducing the log level of the first message to 'debug'?

don't show tracebacks unless all attempts are exhausted

In automation at Mozilla, we do a lot of heuristic log parsing to find exceptions and highlight them. One scenario we've been running into a lot lately: a job that uses redo prints some tracebacks, a subsequent attempt succeeds, but then the job fails for an unrelated reason (e.g. a timeout in some non-retried code). This ends up with the traceback we successfully retried past being highlighted, and confusion ensues.

We should stop printing tracebacks at less verbose log levels unless we've exhausted all the attempts.

use function name in the log output if possible

Hey there, first thanks for the redo package!

So basically, when using the retry function, we get log lines like this:

INFO | retry: Calling <function _download at 0x7fad39b54b18> with args: (), kwargs: {}, attempt #2

It would be good to just print "Calling _download ..." - i.e. the name of the function (if available) instead of the function object.

What do you think @bhearsum ? I can create a PR for that.

Pass exceptions to the `cleanup` method

I'd like to see what's happening in the logs when a retry is necessary. The cleanup function is nice, but I'd like to access the exception object in order to print it in my warning message. I understand that I can put try...except inside my foo function, but I like the conciseness of letting redo deal with the inner exceptions.

        bar = retry(
            action=foo,
            attempts=5,
            sleeptime=5,
            retry_exceptions=(MyCustomError,),
            cleanup=lambda: log.warn('Trying the operation again. But why did it fail?'))

CODE_OF_CONDUCT.md file missing

As of January 1 2019, Mozilla requires that all GitHub projects include this CODE_OF_CONDUCT.md file in the project root. The file has two parts:

  1. Required Text - All text under the headings Community Participation Guidelines and How to Report is required and should not be altered.
  2. Optional Text - The Project Specific Etiquette heading provides a space to speak more specifically about ways people can work effectively and inclusively together. Some examples of those can be found on the Firefox Debugger project, and Common Voice. (The optional part is commented out in the raw template file, and will not be visible until you modify and uncomment that part.)

If you have any questions about this file, or Code of Conduct policies and procedures, please reach out to [email protected].

(Message COC001)

Asyncio

Is there any support for async functions in redo?

retry_exceptions lambda

Hello,

I need additional checks before retrying; for instance:

def is_sql_exception_for_retry(exc: t_sql_errors):
    # sqlalchemy.exc.OperationalError
    # (psycopg2.OperationalError) server closed the connection unexpectedly
    # (psycopg2.OperationalError) could not connect to server: Connection refused
    message = exc._message()
    if "server closed the connection unexpectedly" in message:
        return True
    if "could not connect to server: Connection refused" in message:
        return True
    return False

In my case, I can only do:

@redo.retriable(retry_exceptions=(sqlalchemy.exc.OperationalError, psycopg2.OperationalError), sleeptime=10, jitter=3)
def function():
    pass

It would be great to add support for a callable check (predicate) for deciding whether to retry.

Alternatively, I can define a specific retry exception, call is_sql_exception_for_retry inside the retried function, and raise that exception, like this:

class MyRetryException(Exception):
    pass

@redo.retriable(retry_exceptions=(MyRetryException,), sleeptime=10, jitter=3)
def function():
    try:
      ...
    except (sqlalchemy.exc.OperationalError, psycopg2.OperationalError) as exc:
        if is_sql_exception_for_retry(exc):
            raise MyRetryException()
        raise exc

but in this case I lose the convenience of the decorator and need additional code in each retriable function.
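
In the meantime, one workaround is a small wrapper built on redo.retrier (as documented above) that applies the predicate itself; retry_if below is a hypothetical helper, not part of redo:

from redo import retrier

def retry_if(action, should_retry, attempts=5, sleeptime=10, jitter=3):
    # hypothetical helper: retry action() only when should_retry(exc) is True
    last_exc = None
    for _ in retrier(attempts=attempts, sleeptime=sleeptime, jitter=jitter):
        try:
            return action()
        except Exception as exc:
            if not should_retry(exc):
                raise
            last_exc = exc
    raise last_exc

# usage with the predicate defined earlier:
# retry_if(function, is_sql_exception_for_retry)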

Set protected status on production branch

The production branch on this repository is not protected against force pushes. This setting is recommended as part of Mozilla's Guidelines for a Sensitive Repository.

Anyone with admin permissions for this repository can correct the setting using this URL.

If you have any questions, or believe this issue was opened in error, please contact us and mention SOGH0001 and this repository.

Thank you for your prompt attention to this issue.
--Firefox Security Operations team
