AsyncSqliteCache

class privex.helpers.cache.asyncx.AsyncSqliteCache.AsyncSqliteCache(db_file: str = None, memory_persist=False, use_pickle: bool = None, connection_kwargs: dict = None, *args, **kwargs)[source]

An SQLite3 backed implementation of AsyncCacheAdapter. Creates and uses a semi-global Sqlite instance via privex.helpers.plugin by default.

To allow for a wide variety of Python objects to be safely stored and retrieved from Sqlite, this class uses the pickle module for serialising + un-serialising values to/from Sqlite.

Basic Usage:

>>> from privex.helpers import AsyncSqliteCache
>>> rc = AsyncSqliteCache()
>>> await rc.set('hello', 'world')
>>> await rc.get('hello')
'world'
>>> rc['hello']
'world'

Disabling Pickling

In some cases, you may need caching that is interoperable with other languages. The pickle serialisation format is extremely specific to Python and is largely unsupported outside of it. Thus, if you need to share Sqlite cache data with applications written in other languages, you must disable pickling.

WARNING: If you disable pickling, then you must perform your own serialisation + de-serialization on complex objects such as dict, list, Decimal, or arbitrary classes/functions after getting or setting cache keys.

Disabling Pickle per instance

Pass use_pickle=False to the constructor, or set the attribute directly, to disable pickling for a single instance of AsyncSqliteCache (not globally):

>>> rc = AsyncSqliteCache(use_pickle=False)  # Opt 1. Disable pickle in constructor
>>> rc.use_pickle = False              # Opt 2. Disable pickle on an existing instance
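With pickling disabled, JSON is one interoperable alternative for complex values. A minimal sketch of manual serialisation around set() / get() - the serialise / deserialise helpers here are illustrative, not part of the library:

```python
import json
from decimal import Decimal

# Hypothetical helpers for use alongside use_pickle=False.
def serialise(value) -> str:
    # default=str converts non-JSON types such as Decimal into strings
    return json.dumps(value, default=str)

def deserialise(raw: str):
    return json.loads(raw)

data = {"balance": Decimal("12.50"), "items": [1, 2, 3]}
raw = serialise(data)        # pass this string to ``await rc.set('key', raw)``
restored = deserialise(raw)  # after ``raw = await rc.get('key')``
# Note: Decimal comes back as a plain string; the caller must convert it
```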

Disabling Pickle by default on any new instances

Change the static attribute pickle_default to False to disable the use of pickle by default across any new instances of AsyncSqliteCache:

>>> AsyncSqliteCache.pickle_default = False
__init__(db_file: str = None, memory_persist=False, use_pickle: bool = None, connection_kwargs: dict = None, *args, **kwargs)[source]

AsyncSqliteCache uses an auto-generated database filename / path by default, based on the name of the currently running script (retrieved from sys.argv[0]). This allows for persistent caching without any manual configuration of the adapter, and without the need for any running background services such as redis / memcached.

Parameters
  • db_file (str) – (Optional) Name of / path to Sqlite3 database file to create/use for the cache.

  • memory_persist (bool) – Use a shared in-memory database, which can be accessed by other instances of this class (in this process) - which is cleared after all memory connections are closed. Shortcut for db_file='file::memory:?cache=shared'

  • use_pickle (bool) – (Default: True) Use the built-in pickle to serialise values before storing in Sqlite3, and un-serialise when loading from Sqlite3

  • connection_kwargs (dict) – (Optional) Additional / overriding kwargs to pass to sqlite3.connect() when AsyncSqliteCacheManager initialises its sqlite3 connection.

  • purge_every (int) – (Default: 300) Expired + abandoned cache records are purged using the DB manager method AsyncSqliteCacheManager.purge_expired() during get() / set() calls. To avoid performance issues, the actual AsyncSqliteCacheManager.purge_expired() method is only called if at least purge_every seconds have passed since the last purge was triggered (last_purged_expired)
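The memory_persist shortcut maps onto SQLite's shared in-memory database URI, whose behaviour can be demonstrated with the standard sqlite3 module alone - a sketch, independent of this library:

```python
import sqlite3

# memory_persist=True is documented as a shortcut for this database "file":
uri = "file::memory:?cache=shared"

# Two separate connections opened with uri=True share the same in-memory DB
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)
a.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
a.execute("INSERT INTO kv VALUES ('hello', 'world')")
a.commit()
# The second connection sees data written by the first
row = b.execute("SELECT v FROM kv WHERE k = 'hello'").fetchone()
# The shared database is destroyed once the last connection is closed
a.close()
b.close()
```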

async close()[source]

Close any cache library connections, and destroy their local class instances by setting them to None.

async connect(db=None, *args, connection_kwargs=None, memory_persist=None, **kwargs)[source]

Create an instance of the library used to interact with the caching system, ensure its connection is open, and store the instance on this class instance - only if not already connected.

Should return the class instance which was created.

async get(key: str, default: Any = None, fail: bool = False, _auto_purge=True) → Any[source]

Return the value of cache key key. If the key wasn’t found, or it was expired, then default will be returned.

Optionally, you may choose to pass fail=True, which will cause this method to raise CacheNotFound instead of returning default when a key is non-existent / expired.

Parameters
  • key (str) – The cache key (as a string) to get the value for, e.g. example:test

  • default (Any) – If the cache key key isn’t found / is expired, return this value (Default: None)

  • fail (bool) – If set to True, will raise CacheNotFound instead of returning default when a key is non-existent / expired.

Raises

CacheNotFound – Raised when fail=True and key was not found in cache / expired.

Return Any value

The value of the cache key key, or default if it wasn’t found.
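The default / fail behaviour described above can be illustrated with a plain dict standing in for the cache backend - CacheNotFound below is a local stand-in for the library's exception, and cache_get is a hypothetical helper mirroring the documented semantics:

```python
# Local stand-in for privex.helpers.CacheNotFound (illustration only)
class CacheNotFound(Exception):
    pass

def cache_get(store: dict, key: str, default=None, fail: bool = False):
    # Mirrors the documented get() semantics with a plain dict backend
    if key not in store:
        if fail:
            raise CacheNotFound(f"Cache key {key!r} was not found / is expired")
        return default
    return store[key]

store = {"hello": "world"}
found = cache_get(store, "hello")                      # existing key
fallback = cache_get(store, "missing", default="n/a")  # default returned
try:
    cache_get(store, "missing", fail=True)             # raises instead
    raised = False
except CacheNotFound:
    raised = True
```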

pickle_default: bool = True

Change this to False to disable the use of pickle by default for any new instances of this class.

async remove(*key: str) → bool[source]

Remove one or more keys from the cache.

If all cache keys existed before removal, True will be returned. If some didn’t exist (and thus couldn’t remove), then False will be returned.

Parameters

key (str) – The cache key(s) to remove

Return bool removed

True if all keys existed and were removed; False if one or more keys didn't exist, in which case no action was taken for those keys.
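The all-or-nothing return value can be sketched with a plain dict standing in for the cache - cache_remove is a hypothetical helper mirroring the documented semantics, not library code:

```python
def cache_remove(store: dict, *keys: str) -> bool:
    # True only when every requested key existed before removal
    all_existed = all(k in store for k in keys)
    for k in keys:
        store.pop(k, None)  # remove whichever keys do exist
    return all_existed

store = {"a": 1, "b": 2}
first = cache_remove(store, "a", "b")   # both existed
second = cache_remove(store, "a")       # 'a' already removed
```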

async set(key: str, value: T, timeout: Optional[Union[decimal.Decimal, int, float]] = 300, _auto_purge=True) → T[source]

Set the cache key key to the value value, and automatically expire the key after timeout seconds from now.

If timeout is None, then the key will never expire (unless the cache implementation loses its persistence, e.g. memory caches with no disk writes).

Parameters
  • key (str) – The cache key (as a string) to set the value for, e.g. example:test

  • value (Any) – The value to store in the cache key key

  • timeout (int) – The amount of seconds to keep the data in cache. Pass None to disable expiration.

async update_timeout(key: str, timeout: Union[decimal.Decimal, int, float] = 300) → Any[source]

Update the timeout for a given key to datetime.utcnow() + timedelta(seconds=timeout)

This method should accept keys which are already expired, allowing expired cache keys to have their timeout extended after expiry.

Example:

>>> from asyncio import sleep
>>> c = AsyncSqliteCache()
>>> await c.set('example', 'test', timeout=60)
>>> await sleep(70)
>>> await c.update_timeout('example', timeout=60)   # Reset the timeout for ``'example'`` to ``now + 60 seconds``
>>> await c.get('example')
'test'

Parameters
  • key (str) – The cache key to update the timeout for

  • timeout (int) – Reset the timeout to this many seconds from datetime.utcnow()

Raises

CacheNotFound – Raised when key was not found in cache (thus cannot extend timeout)

Return Any value

The value of the cache key

use_pickle: bool

If True, will use pickle for serialising objects before inserting into Sqlite3, and un-serialising objects retrieved from Sqlite3. This attribute is set in __init__().

Change this to False to disable the use of pickle - instead values will be passed to / returned from Sqlite3 as-is, with no serialisation (this may require you to manually serialize complex types such as dict and Decimal before insertion, and un-serialise after retrieval).
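The reason pickling is the default can be shown with the standard pickle module alone: arbitrary Python objects survive the round trip intact, which plain-text storage cannot guarantee:

```python
import pickle
from decimal import Decimal

# Complex Python objects round-trip through pickle without loss of type
value = {"price": Decimal("9.99"), "tags": ["a", "b"]}
blob = pickle.dumps(value)      # the bytes that would be written to the cache
restored = pickle.loads(blob)   # the value recovered when reading back
# restored["price"] is still a Decimal, not a string
```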

Methods

__init__([db_file, memory_persist, …])

AsyncSqliteCache uses an auto-generated database filename / path by default, based on the name of the currently running script (retrieved from sys.argv[0]). This allows for persistent caching without any manual configuration of the adapter, and without the need for any running background services such as redis / memcached.

close()

Close any cache library connections, and destroy their local class instances by setting them to None.

connect([db, connection_kwargs, memory_persist])

Create an instance of the library used to interact with the caching system, ensure its connection is open, and store the instance on this class instance - only if not already connected.

get(key[, default, fail, _auto_purge])

Return the value of cache key key.

purge_expired([force])

remove(*key)

Remove one or more keys from the cache.

set(key, value[, timeout, _auto_purge])

Set the cache key key to the value value, and automatically expire the key after timeout seconds from now.

update_timeout(key[, timeout])

Update the timeout for a given key to datetime.utcnow() + timedelta(seconds=timeout)

Attributes

last_purged_expired

pickle_default

Change this to False to disable the use of pickle by default for any new instances of this class.

purge_due

wrapper