Python __aiter__ Method
Last modified April 8, 2025
This comprehensive guide explores Python's __aiter__ method, the special method that enables asynchronous iteration. We'll cover basic usage, custom async iterators, practical examples, and common patterns.
Basic Definitions
The __aiter__ method defines an object's behavior when used in an async for loop. It returns an asynchronous iterator object that must implement __anext__.
Key characteristics: __aiter__ should be a regular method (defined with plain def) that returns an async iterator, and it works with the __anext__ coroutine to support asynchronous iteration. It's the async equivalent of __iter__. (Defining __aiter__ with async def was deprecated in Python 3.5.2 and became an error in Python 3.8.)
Basic Async Iterator
Here's a simple implementation showing how __aiter__ works with __anext__ to create an asynchronous iterator.
import asyncio

class AsyncCounter:
    def __init__(self, stop):
        self.stop = stop
        self.current = 0

    def __aiter__(self):
        # A regular method that returns the async iterator (here, self)
        return self

    async def __anext__(self):
        if self.current >= self.stop:
            raise StopAsyncIteration
        self.current += 1
        return self.current - 1

async def main():
    async for i in AsyncCounter(3):
        print(i)

asyncio.run(main())
This example creates an async counter that yields numbers from 0 to 2. The __aiter__ method returns self, and __anext__ produces values until reaching the stop value.
Note the use of StopAsyncIteration instead of StopIteration, and the async for loop, which requires an async context.
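You can also drive the protocol by hand, which makes the division of labor explicit. A minimal sketch, reusing the AsyncCounter class above (the aiter and anext builtins require Python 3.10+):

import asyncio

async def main():
    iterator = aiter(AsyncCounter(3))      # calls AsyncCounter.__aiter__()
    while True:
        try:
            value = await anext(iterator)  # calls iterator.__anext__()
        except StopAsyncIteration:
            break
        print(value)

asyncio.run(main())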
Async Data Fetcher
This example demonstrates a practical use case: fetching data from multiple URLs without blocking the event loop, while supporting async iteration.
import asyncio
import aiohttp

class AsyncDataFetcher:
    def __init__(self, urls):
        self.urls = urls

    def __aiter__(self):
        self.session = aiohttp.ClientSession()
        self.index = 0
        return self

    async def __anext__(self):
        if self.index >= len(self.urls):
            await self.session.close()
            raise StopAsyncIteration
        url = self.urls[self.index]
        self.index += 1
        async with self.session.get(url) as response:
            data = await response.text()
            return (url, len(data))

async def main():
    urls = [
        'https://python.org',
        'https://docs.python.org',
        'https://pypi.org',
    ]
    async for url, length in AsyncDataFetcher(urls):
        print(f"{url}: {length} bytes")

asyncio.run(main())
This fetcher downloads web pages without blocking the event loop: __aiter__ sets up the HTTP session, and __anext__ fetches each URL in sequence.
The iterator closes the session once the last URL has been fetched. Each iteration returns a tuple of the URL and the length of the response body.
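One caveat: the session is closed only when __anext__ raises StopAsyncIteration, so breaking out of the loop early would leak it. A sketch of one way to guard against that, assuming a hypothetical ClosableFetcher subclass that adds the aclose hook contextlib.aclosing (Python 3.10+) expects:

import asyncio
import contextlib

class ClosableFetcher(AsyncDataFetcher):
    # Hypothetical subclass: aclosing() awaits aclose() on exit
    async def aclose(self):
        if hasattr(self, 'session') and not self.session.closed:
            await self.session.close()

async def main():
    urls = ['https://python.org', 'https://pypi.org']
    async with contextlib.aclosing(ClosableFetcher(urls)) as fetcher:
        async for url, length in fetcher:
            if length > 100_000:
                break  # early exit; the session is still closed

asyncio.run(main())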
Async Generator as Iterator
Python 3.6+ allows using async generators, which automatically implement __aiter__ and __anext__.
import asyncio

async def async_counter(stop):
    for i in range(stop):
        await asyncio.sleep(0.1)  # Simulate async work
        yield i

async def main():
    async for num in async_counter(5):
        print(num)

asyncio.run(main())
This async generator simplifies creating async iterators. Under the hood, it implements the same protocol as classes with __aiter__ and __anext__.
The generator pauses at each yield and await, making it well suited to producing values asynchronously. No explicit StopAsyncIteration is needed; it is raised automatically when the generator function returns.
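You can confirm that the generator object satisfies the protocol by calling the special methods directly. A small sketch, assuming Python 3.10+ for the anext builtin:

import asyncio

async def async_counter(stop):
    for i in range(stop):
        await asyncio.sleep(0.1)
        yield i

async def main():
    gen = async_counter(3)
    print(gen.__aiter__() is gen)  # True: the generator is its own iterator
    print(await gen.__anext__())   # 0
    print(await anext(gen))        # 1
    await gen.aclose()             # finalize the suspended generator

asyncio.run(main())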
Buffered Async Reader
This example shows an async iterator that reads files in chunks, useful for processing large files without loading everything into memory.
import asyncio

class AsyncFileReader:
    def __init__(self, filename, chunk_size=1024):
        self.filename = filename
        self.chunk_size = chunk_size

    def __aiter__(self):
        self.file = open(self.filename, 'rb')
        return self

    async def __anext__(self):
        # Run the blocking read in a worker thread (Python 3.9+)
        data = await asyncio.to_thread(self.file.read, self.chunk_size)
        if not data:
            self.file.close()
            raise StopAsyncIteration
        return data

async def main():
    async for chunk in AsyncFileReader('large_file.dat'):
        print(f"Read {len(chunk)} bytes")

asyncio.run(main())
This reader processes files in chunks, using a thread pool for the blocking I/O operations. __aiter__ opens the file, and __anext__ reads each chunk.
asyncio.to_thread runs the blocking file operation in a separate thread, keeping the event loop responsive. The iterator closes the file once a read returns no more data.
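The same reader is arguably simpler as an async generator, where a try/finally block guarantees the file is closed even if the consumer stops iterating early. A sketch, with read_chunks as a hypothetical name:

import asyncio

async def read_chunks(filename, chunk_size=1024):
    f = open(filename, 'rb')
    try:
        while True:
            data = await asyncio.to_thread(f.read, chunk_size)
            if not data:
                break
            yield data
    finally:
        f.close()  # runs on normal exit, early abandonment, or error

async def main():
    async for chunk in read_chunks('large_file.dat'):
        print(f"Read {len(chunk)} bytes")

asyncio.run(main())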
Rate-Limited Async Producer
This example demonstrates an async iterator that produces items with rate limiting, useful for APIs with request limits.
import asyncio
import time

class RateLimitedProducer:
    def __init__(self, items, requests_per_second):
        self.items = items
        self.delay = 1.0 / requests_per_second

    def __aiter__(self):
        self.index = 0
        return self

    async def __anext__(self):
        if self.index >= len(self.items):
            raise StopAsyncIteration
        item = self.items[self.index]
        self.index += 1
        await asyncio.sleep(self.delay)
        return item

async def main():
    items = [f"item_{i}" for i in range(10)]
    async for item in RateLimitedProducer(items, 2):  # 2 items/sec
        print(f"Processed {item} at {time.time()}")

asyncio.run(main())
This producer yields items at a controlled rate: the __anext__ method sleeps between items to maintain the desired pace.
The sleep duration is calculated from the desired rate (items/second). This pattern is useful when interacting with rate-limited APIs or services.
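The same delay calculation generalizes to any source of items. A minimal async-generator sketch (rate_limited is a hypothetical helper, not part of the class above):

import asyncio

async def rate_limited(items, requests_per_second):
    delay = 1.0 / requests_per_second
    for item in items:
        yield item
        await asyncio.sleep(delay)  # pace the consumer between items

async def main():
    async for item in rate_limited([f"item_{i}" for i in range(5)], 2):
        print(item)

asyncio.run(main())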
Best Practices
- Resource management: Clean up resources in __anext__ (or a finally block) when iteration ends
- Error handling: Handle and properly signal errors during iteration
- Use async generators: Prefer them for simpler cases
- Document behavior: Clearly document iteration patterns
- Consider cancellation: Handle asyncio cancellation properly (see the sketch below)
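For the cancellation point, a try/finally block inside an async generator runs its cleanup whether the consumer finishes, breaks, or is cancelled. A minimal sketch (ticker and consume are hypothetical names):

import asyncio

async def ticker(interval):
    n = 0
    try:
        while True:
            await asyncio.sleep(interval)  # cancellation surfaces here
            yield n
            n += 1
    finally:
        print("ticker: cleaning up")  # runs on cancellation too

async def consume():
    async for n in ticker(0.1):
        print(f"tick {n}")

async def main():
    task = asyncio.create_task(consume())
    await asyncio.sleep(0.35)
    task.cancel()  # CancelledError is raised inside the generator's sleep
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())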