experimental/cuda-ubi9/: async-lru-2.0.4 metadata and description


Simple LRU cache for asyncio

classifiers
  • License :: OSI Approved :: MIT License
  • Intended Audience :: Developers
  • Programming Language :: Python
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3 :: Only
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: 3.12
  • Development Status :: 5 - Production/Stable
  • Framework :: AsyncIO
description_content_type text/x-rst
keywords asyncio,lru,lru_cache
license MIT License
maintainer aiohttp team <team@aiohttp.org>
maintainer_email team@aiohttp.org
project_urls
  • Chat: Matrix, https://matrix.to/#/#aio-libs:matrix.org
  • Chat: Matrix Space, https://matrix.to/#/#aio-libs-space:matrix.org
  • CI: GitHub Actions, https://github.com/aio-libs/async-lru/actions
  • GitHub: repo, https://github.com/aio-libs/async-lru
requires_dist
  • typing-extensions >=4.0.0 ; python_version < "3.11"
requires_python >=3.8
files
  • async_lru-2.0.4-py3-none-any.whl (Python Wheel, Python 3, 6 KB)

description

Simple LRU cache for asyncio

Badges: GitHub Actions CI/CD workflows status · async-lru @ PyPI · Codecov coverage (https://codecov.io/gh/aio-libs/async-lru/branch/master/graph/badge.svg) · Matrix Room #aio-libs:matrix.org · Matrix Space #aio-libs-space:matrix.org

Installation

pip install async-lru

Usage

This package is a port of Python’s built-in functools.lru_cache for asyncio. To better handle async behaviour, it also ensures that multiple concurrent calls with the same arguments result in only one call to the wrapped function, with all awaiting callers receiving the result of that call when it completes (a concurrency sketch follows the example below).

import asyncio

import aiohttp
from async_lru import alru_cache


# Cache up to 32 PEP bodies; repeated PEP numbers are served from the cache.
@alru_cache(maxsize=32)
async def get_pep(num):
    resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                return await s.read()
        except aiohttp.ClientError:
            return 'Not Found'


async def main():
    for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
        pep = await get_pep(n)
        print(n, len(pep))

    print(get_pep.cache_info())
    # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

    # closing is optional, but highly recommended
    await get_pep.cache_close()


asyncio.run(main())
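
Because concurrent calls with the same arguments are coalesced into a single execution, fanning many tasks out against the same key still calls the wrapped function once. A minimal sketch of this behaviour, assuming a hypothetical slow_fetch coroutine (not part of the library) and an illustrative delay:

import asyncio

from async_lru import alru_cache

calls = 0


@alru_cache(maxsize=32)
async def slow_fetch(key):
    # Count real executions so the coalescing is visible.
    global calls
    calls += 1
    await asyncio.sleep(0.1)  # simulate a slow backend
    return key.upper()


async def main():
    # Ten concurrent awaits with the same argument share one underlying call.
    results = await asyncio.gather(*(slow_fetch("pep") for _ in range(10)))
    print(results[0], calls)  # expected: PEP 1

    await slow_fetch.cache_close()


asyncio.run(main())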

TTL (time-to-live, i.e. expiration after a timeout) is supported via the ttl configuration parameter (off by default):

@alru_cache(ttl=5)
async def func(arg):
    return arg * 2
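
With ttl=5, a cached result is evicted roughly five seconds after it was stored, so a later call recomputes it. A minimal sketch of observing that expiry; the sleep duration and the expected counters are illustrative:

import asyncio

from async_lru import alru_cache


@alru_cache(ttl=5)
async def func(arg):
    return arg * 2


async def main():
    await func(3)             # miss: computed and stored
    await func(3)             # hit: served from the cache
    await asyncio.sleep(6)    # wait until the 5-second TTL has passed
    await func(3)             # miss: the cached entry has expired
    print(func.cache_info())  # expected: hits=1, misses=2
    await func.cache_close()


asyncio.run(main())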

The library supports explicit invalidation of a specific cached call via cache_invalidate():

@alru_cache(ttl=5)
async def func(arg1, arg2):
    return arg1 + arg2

func.cache_invalidate(1, arg2=2)

The method returns True if the corresponding set of arguments was already cached, and False otherwise.
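
A minimal sketch of using that return value; the expected True/False outcomes assume the preceding call populated the cache entry for (1, arg2=2):

import asyncio

from async_lru import alru_cache


@alru_cache(ttl=5)
async def func(arg1, arg2):
    return arg1 + arg2


async def main():
    await func(1, arg2=2)                    # populate the entry for (1, arg2=2)
    print(func.cache_invalidate(1, arg2=2))  # True: the entry existed and was removed
    print(func.cache_invalidate(1, arg2=2))  # False: nothing left to invalidate
    await func.cache_close()


asyncio.run(main())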

Python 3.8+ is required

Thanks

The library was donated by Ocean S.A.

Thanks to the company for its contribution.