Decorators

Class Method / Function decorators

Copyright:

    +===================================================+
    |                 © 2019 Privex Inc.                |
    |               https://www.privex.io               |
    +===================================================+
    |                                                   |
    |        Originally Developed by Privex Inc.        |
    |        License: X11 / MIT                         |
    |                                                   |
    |        Core Developer(s):                         |
    |                                                   |
    |          (+)  Chris (@someguy123) [Privex]        |
    |          (+)  Kale (@kryogenic) [Privex]          |
    |                                                   |
    +===================================================+


Functions

async_retry([max_retries, delay])

AsyncIO coroutine compatible version of retry_on_err() - for painless automatic retry-on-exception for async code.

mock_decorator(*dec_args, **dec_kwargs)

This decorator is a pass-through decorator which does nothing other than be a decorator.

r_cache(cache_key[, cache_time, …])

This is a decorator which caches the result of the wrapped function with the global cache adapter from privex.helpers.cache using the key cache_key and with an expiry of cache_time seconds.

r_cache_async(cache_key[, cache_time, …])

Async function/method compatible version of r_cache() - see docs for r_cache()

retry_on_err([max_retries, delay])

Decorates a function or class method, wraps the function/method in a try/except block, and automatically re-runs the function with the same arguments up to max_retries times after any exception is raised, with a delay-second delay between retries.

Classes

FO

alias of privex.helpers.decorators.FormatOpt

FormatOpt(value)

This enum represents various options available for r_cache()'s format_opt parameter.

privex.helpers.decorators.FO

alias of privex.helpers.decorators.FormatOpt

class privex.helpers.decorators.FormatOpt(value)[source]

This enum represents various options available for r_cache()'s format_opt parameter.

To avoid bloating the PyDoc for r_cache too much, a description of each formatting option is available as a short PyDoc comment under each enum option.

Usage:

>>> @r_cache('mykey:{}', format_args=[0, 'x'], format_opt=FormatOpt.POS_AUTO)
... def my_func(x=1):
...     return x * 2

KWARG_ONLY = 'kwarg'

Only use kwargs for formatting the cache key - requires named format placeholders, e.g. mykey:{x}

MIX = 'mix'

Use both *args and **kwargs to format the cache_key (assuming mixed placeholders, e.g. mykey:{}:{y})

POS_AUTO = 'force_pos'

First, attempt to format using the *args whitelisted in format_args; if that raises a KeyError or IndexError, fall back to passing kwarg values in the order they're listed in format_args (only kwarg names listed in format_args are included)

# def func(x, y)
func('a', 'b')      # assuming 0 and 1 are in format_args, it would use .format('a', 'b')
func(y='b', x='a')  # assuming format_args = ['x', 'y'], it would use .format('a', 'b')

POS_ONLY = 'pos_only'

Only use positional args for formatting the cache key, kwargs will be ignored completely.
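
For illustration, here is a minimal sketch of the stricter KWARG_ONLY mode (the key and function below are made up for this example; POS_ONLY works the same way, but with positional {} placeholders):

>>> from privex.helpers import r_cache
>>> from privex.helpers.decorators import FormatOpt
>>>
>>> @r_cache('mykey:{x}', format_args=['x'], format_opt=FormatOpt.KWARG_ONLY)
... def my_func(x=1, y=2):
...     # only the kwarg 'x' is used to fill the {x} placeholder in the key
...     return x + y
>>>
>>> my_func(x=5)    # cached under the key 'mykey:5'
7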

privex.helpers.decorators.async_retry(max_retries: int = 3, delay: Union[int, float] = 3, **retry_conf)[source]

AsyncIO coroutine compatible version of retry_on_err() - for painless automatic retry-on-exception for async code.

Decorates an AsyncIO coroutine (async def) function or class method, wraps the function/method in a try/except block, and automatically re-runs the function with the same arguments up to max_retries times after any exception is raised, with a delay-second delay between retries.

If it still throws an exception after max_retries retries, it will log the exception details with fail_msg, and then re-raise it.

Usage (retry up to 5 times, 1 second between retries, stop immediately if IOError is detected):

>>> from privex.helpers import async_retry
>>>
>>> @async_retry(5, 1, fail_on=[IOError])
... async def my_func(some=None, args=None):
...     if some == 'io': raise IOError()
...     raise FileExistsError()
...

This will be re-run 5 times, 1 second apart after each exception is raised, before giving up:

>>> await my_func()

Whereas this one will immediately re-raise the caught IOError on the first attempt, as it's listed in fail_on:

>>> await my_func('io')

We can also use ignore to "ignore" certain exceptions. Ignored exceptions cause the function to be retried with a delay, as normal, but without incrementing the total retries counter.

>>> from privex.helpers import async_retry
>>> import random
>>>
>>> @async_retry(5, 1, fail_on=[IOError], ignore=[ConnectionResetError])
... async def my_func(some=None, args=None):
...     if random.randint(1,10) > 7: raise ConnectionResetError()
...     if some == 'io': raise IOError()
...     raise FileExistsError()
...

To show this at work, we've enabled debug logging so you can see what's happening:

>>> await my_func()
[INFO]    <class 'ConnectionResetError'> -
[INFO]    Exception while running 'my_func', will retry 5 more times.
[DEBUG]   >> (?) Ignoring exception '<class 'ConnectionResetError'>' as exception is in 'ignore' list.
          Ignore Count: 0 // Max Ignores: 100 // Instance Match: False

[INFO]    <class 'FileExistsError'> -
[INFO]    Exception while running 'my_func', will retry 5 more times.

[INFO]    <class 'ConnectionResetError'> -
[INFO]    Exception while running 'my_func', will retry 4 more times.
[DEBUG]   >> (?) Ignoring exception '<class 'ConnectionResetError'>' as exception is in 'ignore' list.
          Ignore Count: 1 // Max Ignores: 100 // Instance Match: False

[INFO]    <class 'FileExistsError'> -
[INFO]    Exception while running 'my_func', will retry 4 more times.

As you can see above, when an ignored exception (ConnectionResetError) occurs, the count of remaining retry attempts doesn't go down. Instead, only the "Ignore Count" goes up.

Attention

For safety reasons, by default max_ignore is set to 100. This means after 100 retries where an exception was ignored, the decorator will give up and raise the last exception.

This is to prevent the risk of infinite loops hanging your application. If you are 100% certain that the function you’ve wrapped, and/or the exceptions passed in ignore cannot cause an infinite retry loop, then you can pass max_ignore=False to the decorator to disable failure after max_ignore ignored exceptions.
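
If you're certain that's safe for your use case, a minimal sketch of disabling the limit (the coroutine below is illustrative):

>>> from privex.helpers import async_retry
>>>
>>> @async_retry(5, 1, ignore=[ConnectionResetError], max_ignore=False)
... async def stubborn_func():
...     # ConnectionResetError is now retried indefinitely, without ever
...     # counting towards max_retries - use with caution
...     ...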

Parameters
  • max_retries (int) – Maximum total retry attempts before giving up

  • delay (float) – Amount of time in seconds to sleep before re-trying the wrapped function

  • retry_conf – Less frequently used arguments, pass in as keyword args (see below)

Key list fail_on

A list() of Exception types that should result in immediate failure (don’t retry, raise)

Key list ignore

A list() of Exception types that should be ignored (will retry, but without incrementing the failure counter)

Key int|bool max_ignore

(Default: 100) If an exception is raised while retrying, and more than this many exceptions (listed in ignore) have been ignored during retry attempts, then give up and raise the last exception.

This feature is designed to prevent “ignored” exceptions causing an infinite retry loop. By default max_ignore is set to 100, but you can increase/decrease this as needed.

You can also set it to False to disable raising when too many exceptions are ignored - however, it’s strongly not recommended to disable max_ignore, especially if you have instance_match=True, as it could cause an infinite retry loop which hangs your application.

Key bool instance_match

(Default: False) If this is set to True, then the exception type comparisons for fail_on and ignore will compare using isinstance(e, x) instead of type(e) is x.

If this is enabled, then exceptions listed in fail_on and ignore will also match sub-classes of the listed exceptions, instead of exact matches.
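
For example, since instance_match compares with isinstance(), listing a base class such as OSError also matches its sub-classes - a hedged sketch (the coroutine is illustrative):

>>> from privex.helpers import async_retry
>>>
>>> @async_retry(3, 1, fail_on=[OSError], instance_match=True)
... async def fetch_data():
...     # ConnectionError subclasses OSError, so with instance_match=True it
...     # matches fail_on and is re-raised immediately instead of being retried
...     raise ConnectionError('connection refused')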

Key str retry_msg

Override the log message used for retry attempts. The first message parameter (%s) is the function name; the second (%d) is the number of retry attempts remaining.

Key str fail_msg

Override the log message used after all retry attempts are exhausted. The first message parameter (%s) is the function name; the second (%d) is the number of times the function was retried.

privex.helpers.decorators.mock_decorator(*dec_args, **dec_kwargs)[source]

This decorator is a pass-through decorator which does nothing other than be a decorator.

It's designed to be used with the privex.helpers.common.Mocker class when mocking classes/modules, allowing you to add fake decorators to the mock class/module which do nothing except act like decorators, without breaking your functions/methods.
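
A hedged sketch of one common pattern - falling back to mock_decorator when an optional dependency is missing, so decorated functions still import cleanly (pytest is just an illustrative example of such a dependency):

>>> try:
...     from pytest import mark
...     parametrize = mark.parametrize
... except ImportError:
...     from privex.helpers.decorators import mock_decorator
...     # mock_decorator swallows ('x', [1, 2]) and returns the function untouched
...     parametrize = mock_decorator
...
>>> @parametrize('x', [1, 2])
... def test_positive(x=1):
...     assert x > 0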

privex.helpers.decorators.r_cache(cache_key: Union[str, callable], cache_time=300, format_args: list = None, format_opt: privex.helpers.decorators.FormatOpt = <FormatOpt.POS_AUTO: 'force_pos'>, **opts) → Any[source]

This is a decorator which caches the result of the wrapped function with the global cache adapter from privex.helpers.cache using the key cache_key and with an expiry of cache_time seconds.

Future calls to the wrapped function will then load the data from the cache until the cache expires, after which it will re-run the original code and re-cache the result.

To bypass the cache, pass kwarg r_cache=False to the wrapped function. To override the cache key on demand, pass r_cache_key='mykey' to the wrapped function.

Example usage:

>>> from privex.helpers import r_cache
>>>
>>> @r_cache('mydata', cache_time=600)
... def my_func(*args, **kwargs):
...     time.sleep(60)
...     return "done"

This will run the function and take 60 seconds to return while it sleeps:

>>> my_func()
'done'

This will return instantly, because "done" is now cached for 600 seconds:

>>> my_func()
'done'

This will take another 60 seconds to run, because passing r_cache=False disables the cache:

>>> my_func(r_cache=False)
'done'

Using a dynamic cache_key:

Option 1. Simplest and most reliable - pass ``r_cache_key`` as an additional kwarg

If you don’t mind passing an additional kwarg to your function, then the most reliable method is to override the cache key by passing r_cache_key to your wrapped function.

Don’t worry, we remove both r_cache and r_cache_key from the kwargs that actually hit your function.

>>> my_func(r_cache_key='somekey')    # Use the cache key 'somekey' when caching data for this function

Option 2. Pass a callable which takes the same arguments as the wrapped function

In the example below, who takes two arguments: name and title. We then pass the function make_key, which takes the same arguments - r_cache detects that the cache key is callable, and calls it with the same (*args, **kwargs) that were passed to the wrapped function.

>>> from privex.helpers import r_cache
>>>
>>> def make_key(name, title):
...     return f"mycache:{name}"
...
>>> @r_cache(make_key)
... def who(name, title):
...     return "Their name is {title} {name}"
...

We can also obtain the same effect with a lambda passed directly as the cache_key:

>>> @r_cache(lambda name,title: f"mycache:{name}")
... def who(name, title):
...     return "Their name is {title} {name}"

Option 3. Can be finicky - using ``format_args`` to integrate with existing code

If you can’t change how your existing function/method is called, then you can use the format_args feature.

NOTE: Unless you're forcing the usage of kwargs with a function/method, it's strongly recommended that you keep the default POS_AUTO ('force_pos') format option, and specify both the positional argument ID and the kwarg name in format_args.

Basic Example:

>>> from privex.helpers import r_cache
>>> import time
>>>
>>> @r_cache('some_cache:{}:{}', cache_time=600, format_args=[0, 1, 'x', 'y'])
... def some_func(x=1, y=2):
...     time.sleep(5)
...     return 'x + y = {}'.format(x + y)
>>>

Using positional arguments, we can see from the debug log that it’s formatting the {}:{} in the key with x:y

>>> some_func(1, 2)
2019-08-21 06:58:29,823 lg  DEBUG    Trying to load "some_cache:1:2" from cache
2019-08-21 06:58:29,826 lg  DEBUG    Not found in cache, or "r_cache" set to false. Calling wrapped function.
'x + y = 3'
>>> some_func(2, 3)
2019-08-21 06:58:34,831 lg  DEBUG    Trying to load "some_cache:2:3" from cache
2019-08-21 06:58:34,832 lg  DEBUG    Not found in cache, or "r_cache" set to false. Calling wrapped function.
'x + y = 5'

When we passed (1, 2) and (2, 3), it had to re-run the function for each. But once we re-call it with the previously-run (1, 2), it's able to retrieve the cached result for those exact args.

>>> some_func(1, 2)
2019-08-21 06:58:41,752 lg  DEBUG    Trying to load "some_cache:1:2" from cache
'x + y = 3'

Be warned that the default format option POS_AUTO will pass kwarg values positionally, in the same order they're listed in format_args:

>>> some_func(y=1, x=2)   # ``format_args`` has the kwargs in the order ``['x', 'y']`` thus ``.format(x,y)``
2019-08-21 06:58:58,611 lg  DEBUG    Trying to load "some_cache:2:1" from cache
2019-08-21 06:58:58,611 lg  DEBUG    Not found in cache, or "r_cache" set to false. Calling wrapped function.
'x + y = 3'
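
If that positional re-ordering is undesirable, one way to sidestep it is to use named placeholders with FormatOpt.KWARG_ONLY, so the key is filled by kwarg name rather than position (callers must then pass x and y as kwargs) - a sketch re-using the illustrative function from above:

>>> from privex.helpers import r_cache
>>> from privex.helpers.decorators import FormatOpt
>>>
>>> @r_cache('some_cache:{x}:{y}', format_args=['x', 'y'], format_opt=FormatOpt.KWARG_ONLY)
... def some_func(x=1, y=2):
...     # 'some_cache:{x}:{y}' is filled by name, so argument order doesn't matter
...     return 'x + y = {}'.format(x + y)
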
Parameters
  • format_opt (FormatOpt) – (default: FormatOpt.POS_AUTO) "Format option" - how should args/kwargs be used when filling placeholders in the cache_key (see comments on FormatOpt)

  • format_args (list) – A list of positional argument numbers (e.g. [0, 1, 2]) and/or kwarg names (e.g. ['x', 'y', 'z']) that should be used to format the cache_key

  • cache_key (str) – The cache key to store the cached data into, e.g. mydata

  • cache_time (int) – The amount of time in seconds to cache the result for (default: 300 seconds)

  • whitelist (bool) – (default: True) If True, only use specified arg positions / kwarg keys when formatting cache_key placeholders. Otherwise, trust whatever args/kwargs were passed to the func.

Return Any res

The return result, either from the wrapped function, or from the cache.

privex.helpers.decorators.r_cache_async(cache_key: Union[str, callable], cache_time=300, format_args: list = None, format_opt: privex.helpers.decorators.FormatOpt = <FormatOpt.POS_AUTO: 'force_pos'>, **opts) → Any[source]

Async function/method compatible version of r_cache() - see docs for r_cache()

You can bypass caching by passing r_cache=False to the wrapped function.

Basic usage:

>>> from privex.helpers import r_cache_async
>>> @r_cache_async('my_cache_key')
... async def some_func(some: int, args: int = 2):
...     return some + args
...
>>> await some_func(5, 10)
15

>>> # If we await some_func a second time, we'll get '15' again because it was cached.
>>> await some_func(2, 3)
15
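
Assuming the r_cache=False bypass kwarg behaves the same as documented for r_cache() above, a sketch of forcing a fresh call:

>>> await some_func(2, 3, r_cache=False)    # bypass the cache and re-run the coroutine
5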

Async cache_key generation (you can also use normal synchronous functions/lambdas):

>>> from privex.helpers import r_cache_async
>>>
>>> async def make_key(name, title):
...     return f"mycache:{name}"
...
>>> @r_cache_async(make_key)
... async def who(name, title):
...     return "Their name is {title} {name}"
...
Parameters
  • format_opt (FormatOpt) – (default: FormatOpt.POS_AUTO) "Format option" - how should args/kwargs be used when filling placeholders in the cache_key (see comments on FormatOpt)

  • format_args (list) – A list of positional argument numbers (e.g. [0, 1, 2]) and/or kwarg names (e.g. ['x', 'y', 'z']) that should be used to format the cache_key

  • cache_key (str) – The cache key to store the cached data into, e.g. mydata

  • cache_time (int) – The amount of time in seconds to cache the result for (default: 300 seconds)

  • whitelist (bool) – (default: True) If True, only use specified arg positions / kwarg keys when formatting cache_key placeholders. Otherwise, trust whatever args/kwargs were passed to the func.

Return Any res

The return result, either from the wrapped function, or from the cache.

privex.helpers.decorators.retry_on_err(max_retries: int = 3, delay: Union[int, float] = 3, **retry_conf)[source]

Decorates a function or class method, wraps the function/method in a try/except block, and automatically re-runs the function with the same arguments up to max_retries times after any exception is raised, with a delay-second delay between retries.

If it still throws an exception after max_retries retries, it will log the exception details with fail_msg, and then re-raise it.

Usage (retry up to 5 times, 1 second between retries, stop immediately if IOError is detected):

>>> @retry_on_err(5, 1, fail_on=[IOError])
... def my_func(some=None, args=None):
...     if some == 'io': raise IOError()
...     raise FileExistsError()

This will be re-run 5 times, 1 second apart after each exception is raised, before giving up:

>>> my_func()

Whereas this one will immediately re-raise the caught IOError on the first attempt, as it's listed in fail_on:

>>> my_func('io')
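
Once all retries are exhausted, the last exception is re-raised, so it can be handled normally - a short sketch using the exception from the example above:

>>> try:
...     my_func()
... except FileExistsError:
...     print('gave up after 5 retries')
gave up after 5 retries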

Attention

For safety reasons, by default max_ignore is set to 100. This means after 100 retries where an exception was ignored, the decorator will give up and raise the last exception.

This is to prevent the risk of infinite loops hanging your application. If you are 100% certain that the function you’ve wrapped, and/or the exceptions passed in ignore cannot cause an infinite retry loop, then you can pass max_ignore=False to the decorator to disable failure after max_ignore ignored exceptions.

Parameters
  • max_retries (int) – Maximum total retry attempts before giving up

  • delay (float) – Amount of time in seconds to sleep before re-trying the wrapped function

  • retry_conf – Less frequently used arguments, pass in as keyword args (see below)

Key list fail_on

A list() of Exception types that should result in immediate failure (don’t retry, raise)

Key list ignore

A list() of Exception types that should be ignored (will retry, but without incrementing the failure counter)

Key int|bool max_ignore

(Default: 100) If an exception is raised while retrying, and more than this many exceptions (listed in ignore) have been ignored during retry attempts, then give up and raise the last exception.

This feature is designed to prevent “ignored” exceptions causing an infinite retry loop. By default max_ignore is set to 100, but you can increase/decrease this as needed.

You can also set it to False to disable raising when too many exceptions are ignored - however, it’s strongly not recommended to disable max_ignore, especially if you have instance_match=True, as it could cause an infinite retry loop which hangs your application.

Key bool instance_match

(Default: False) If this is set to True, then the exception type comparisons for fail_on and ignore will compare using isinstance(e, x) instead of type(e) is x.

If this is enabled, then exceptions listed in fail_on and ignore will also match sub-classes of the listed exceptions, instead of exact matches.

Key str retry_msg

Override the log message used for retry attempts. The first message parameter (%s) is the function name; the second (%d) is the number of retry attempts remaining.

Key str fail_msg

Override the log message used after all retry attempts are exhausted. The first message parameter (%s) is the function name; the second (%d) is the number of times the function was retried.
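
As a hedged sketch of overriding both messages (the function and message text are illustrative; note the %s / %d parameter order described above):

>>> from privex.helpers import retry_on_err
>>>
>>> @retry_on_err(
...     3, 2,
...     retry_msg="Function %s failed - %d attempts remaining...",
...     fail_msg="Function %s gave up after retrying %d times."
... )
... def flaky_task():
...     raise TimeoutError('upstream timed out')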