
Understanding Asyncio Internals: How Python Manages State Without Threads


Python · Concurrency · Under the hood


Published 28 Apr 2026 · ~10 min read · PyVerse · ByteForge

A common question from developers diving into async Python: “When an async function hits await, how does it pick up later with all its variables intact?” Let’s pop the hood – no fluff, just how it actually works.

asyncio · coroutines · event loop · Python · concurrency
TL;DR
  • An async def function is a stateful coroutine object, not a regular function.
  • When you await, the coroutine pauses, saves its locals + instruction pointer, and yields control to the event loop.
  • The event loop runs other tasks while I/O happens, then resumes the coroutine from the exact same line.
  • Key components: coroutine (state machine), task (scheduling wrapper), future (promise of a result), event loop (the traffic cop).
  • Asyncio gives you concurrency, not parallelism – single thread, cooperative multitasking.

1. The core idea: coroutines are resumable state machines

In Python, an async def function is not just a function – it’s a stateful coroutine object. When execution hits an await, the coroutine does not lose its state. Instead:

It pauses execution, stores its internal state (locals, instruction pointer, stack), and yields control back to the event loop.

This is handled internally via a frame object, similar to how Python manages generators – but extended for async workflows.
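This generator heritage is visible from plain Python. A coroutine object carries a frame (`cr_frame`) and can even be driven by hand with `send()`, exactly like a generator. A minimal sketch (CPython-specific introspection, shown outside a loop purely for illustration):

```python
import asyncio

async def demo():
    await asyncio.sleep(0)   # any await is a potential suspension point
    return "done"

coro = demo()
print(type(coro).__name__)         # coroutine
print(coro.cr_frame is not None)   # True – the frame holds locals + position
coro.send(None)                    # run manually to the first suspension point
try:
    coro.send(None)                # resume; the coroutine finishes
except StopIteration as e:
    print(e.value)                 # done
```

The result of a finished coroutine arrives as the value of a `StopIteration` – the same protocol generators use, which is exactly how the event loop collects results internally.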

2. What actually gets stored?

Each coroutine maintains:

  • Local variables (x, y, data, etc.)
  • Current execution position (the instruction pointer – the exact bytecode offset it resumes from)
  • Call stack (frame object)
  • The awaited dependency (Future or Task that will wake it up)

Why this matters: OS threads are preempted at arbitrary points by the scheduler, so shared state needs locks to stay consistent. Coroutines are cooperative – they switch only at explicit await points and keep their state in the frame object – so you always know exactly where a task can be interrupted, and nothing is lost across a suspension.
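You can actually watch this bookkeeping happen. Pausing a coroutine by hand and peeking at its frame shows the local variable still sitting there – a sketch using manual `send()` and the CPython-specific `cr_frame` attribute:

```python
import asyncio

async def worker():
    secret = 41              # a local we expect to survive suspension
    await asyncio.sleep(0)   # suspension point – state is saved here
    return secret + 1

coro = worker()
coro.send(None)              # run up to the suspension point
# the paused frame still holds the local variable:
print(coro.cr_frame.f_locals["secret"])   # 41
try:
    coro.send(None)          # resume from the exact same line
except StopIteration as e:
    print(e.value)           # 42
```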

3. Execution flow – step by step with a real example

Let’s walk through a small, runnable example:

import asyncio

async def fetch_data():
    await asyncio.sleep(1)   # simulate network I/O
    return 42

async def compute():
    a = 10
    b = await fetch_data()   # <-- pause happens here
    return a + b

async def main():
    result = await compute()
    print(result)   # 52

asyncio.run(main())

Runtime behavior (what the event loop sees)

  1. compute() starts and assigns a = 10.
  2. It hits await fetch_data() – execution flows into fetch_data(), which immediately hits await asyncio.sleep(1).
  3. At that point the whole coroutine chain saves its state (locals, instruction pointer) and hands the event loop a timer Future to wait on.
  4. The loop runs other ready tasks (if any) in the meantime.
  5. After 1 second the timer fires and the Future associated with the sleep becomes “done”.
  6. The loop resumes the chain; fetch_data() returns 42.
  7. That 42 is delivered to the await in compute(), so b = 42 is assigned.
  8. compute() returns 52, and main() prints it.

No threads. No magic. Just a resumable state machine and a loop that knows how to wake it up.
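Step 4 is the whole payoff: while one coroutine waits, others run. A quick way to see it – two one-second waits complete in roughly one second total, not two:

```python
import asyncio
import time

async def job(name, delay):
    await asyncio.sleep(delay)   # yields to the loop; other tasks run meanwhile
    return name

async def main():
    start = time.perf_counter()
    await asyncio.gather(job("A", 1), job("B", 1))
    print(f"total: {time.perf_counter() - start:.1f}s")  # ~1.0s, not 2.0s

asyncio.run(main())
```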

4. Visualising the flow (Mermaid diagram)

┌─────────────┐     ┌─────────────┐     ┌──────────────────┐
│  Coroutine  │     │    Event    │     │      Future      │
│  (compute)  │────▶│    Loop     │◀────│  (I/O result)    │
└─────────────┘     └─────────────┘     └──────────────────┘
       │                   │                       │
       │ await fetch_data  │                       │
       │──────────────────▶│                       │
       │                   │ registers the Future  │
       │                   │──────────────────────▶│
       │                   │                       │
       │                   │   (loop runs other    │
       │                   │    tasks while I/O)   │
       │                   │                       │
       │                   │   I/O completes       │
       │                   │◀──────────────────────│
       │                   │                       │
       │   resume compute  │                       │
       │◀──────────────────│                       │
       │                   │                       │
      

Mermaid version (if your platform renders it):

```mermaid
sequenceDiagram
    participant C as Coroutine
    participant L as Event Loop
    participant F as Future
    C->>L: await fetch_data()
    L->>F: register I/O wait
    L-->>L: run other tasks
    F-->>L: I/O done, set result
    L->>C: resume compute()
    C->>C: assign result & continue
```

5. The four pillars: Coroutine, Task, Future, Event Loop

These four objects are the heart of asyncio. Let’s define each one properly, with examples you can run.

Coroutine – the stateful work unit

Definition: An async def function that can be paused and resumed. It keeps its local variables, instruction pointer, and call stack across await points.

Key properties:

  • Created with async def – calling it returns a coroutine object; the body does not run yet.
  • Can only be run inside an event loop.
  • Pauses voluntarily at await (cooperative multitasking).
import asyncio

async def slow_double(x):
    await asyncio.sleep(1)
    return x * 2

async def main():
    coro = slow_double(5)    # coro is a coroutine object; nothing runs yet
    result = await coro      # now it runs, driven by the event loop
    print(result)            # 10

asyncio.run(main())

Task – the scheduler’s handle

Definition: A task wraps a coroutine and schedules it on the event loop. It represents a running or runnable coroutine.

Key properties:

  • Created with asyncio.create_task().
  • Runs concurrently with other tasks (interleaved, not parallel).
  • Can be cancelled (task.cancel()).
  • Subclass of Future – you can await task to wait for its result.
import asyncio

async def fetch_page(url):
    await asyncio.sleep(0.5)
    return f"data from {url}"

async def main():
    t1 = asyncio.create_task(fetch_page("example.com"))
    t2 = asyncio.create_task(fetch_page("python.org"))
    results = await asyncio.gather(t1, t2)
    print(results)

asyncio.run(main())

Future – the promise of a result

Definition: A low‑level awaitable that holds a value that will be available later. Starts with no result, becomes “done” when the result is set.

Key properties:

  • Created manually with asyncio.Future().
  • Not tied to a coroutine – can be completed from anywhere using set_result().
  • You can await future – the loop waits until it’s done.
  • Supports callbacks (add_done_callback()).
import asyncio

async def waiter(fut):
    print("Waiting for future...")
    result = await fut
    print(f"Got: {result}")

async def setter(fut):
    await asyncio.sleep(1)
    fut.set_result(42)   # this resumes the waiter

async def main():
    fut = asyncio.Future()
    await asyncio.gather(waiter(fut), setter(fut))

asyncio.run(main())

Event loop – the traffic cop

Definition: The core runtime engine. It keeps lists of tasks and futures, monitors which ones are ready, and executes them one by one.

Key properties:

  • One loop per thread (get it with asyncio.get_running_loop()).
  • Runs until no tasks remain.
  • Handles I/O readiness via selectors/epoll/kqueue.
  • You rarely interact with it directly – asyncio.run() creates and manages one for you.
import asyncio

async def say_after(delay, msg):
    await asyncio.sleep(delay)
    print(msg)

async def main():
    t1 = asyncio.create_task(say_after(2, "hello"))
    t2 = asyncio.create_task(say_after(1, "world"))
    await t1
    await t2

asyncio.run(main())   # loop orchestrates everything
| Concept | Created by | Awaitable? | Holds state? | Who schedules it? |
| --- | --- | --- | --- | --- |
| Coroutine | async def call | Yes | Yes (locals, IP) | Loop (via Task) |
| Task | asyncio.create_task() | Yes | Wraps coroutine | Loop |
| Future | asyncio.Future() | Yes | Result / exception | User or loop |
| Event loop | asyncio.new_event_loop() / get_running_loop() | No | Task queue, timers, I/O watchers | N/A |

6. Why this matters for real systems

This design enables:

  • High‑concurrency systems without thread overhead (each OS thread reserves roughly 8 MB of stack address space by default on Linux, plus kernel scheduling cost).
  • Efficient I/O‑bound workloads – one event loop can handle thousands of idle connections while barely touching the CPU.
  • Scalable architectures – frameworks like FastAPI, aiohttp, and async DB drivers rely on this model every day.

Common misconception: “asyncio means parallel execution.”

Not exactly. Asyncio provides concurrency (many tasks making progress), not parallelism (multiple things at the exact same time). It’s cooperative, single‑threaded, and preemption‑free. For CPU‑bound work, you’d still need multiprocessing or a thread pool.
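As a sketch of the escape hatch for CPU-bound work: `asyncio.to_thread` (Python 3.9+) runs a blocking function in a worker thread so the loop stays responsive. The `crunch` function here is just an illustrative stand-in:

```python
import asyncio

def crunch(n):
    # CPU-bound: calling this directly in a coroutine would block the loop
    return sum(i * i for i in range(n))

async def main():
    # offloaded to a worker thread; the event loop keeps serving other tasks
    result = await asyncio.to_thread(crunch, 10_000)
    print(result)

asyncio.run(main())
```

Note that the GIL still limits threads for pure-Python number crunching; for truly parallel CPU work, `loop.run_in_executor` with a `concurrent.futures.ProcessPoolExecutor` is the heavier option.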

7. Deeper dive: what happens inside the event loop during an await

Let’s simulate the loop’s internal mental model (simplified pseudocode – the real loop also juggles callbacks, timers, and exceptions):

ready_queue = [t1, t2]   # coroutines ready to run
waiting = {}             # maps each pending Future -> the coroutine parked on it

while ready_queue or waiting:
    if not ready_queue:
        # block on the selector (epoll/kqueue/select) until some I/O
        # completes and its Future is marked done
        fut = wait_for_io()
        ready_queue.append(waiting.pop(fut))
        continue

    current = ready_queue.pop(0)
    try:
        fut = current.send(None)   # run until the next await (yield)
    except StopIteration as e:
        handle_result(e.value)     # coroutine finished
    else:
        waiting[fut] = current     # park it until that Future is done

The real implementation (Python in asyncio/base_events.py, with C‑accelerated Task and Future objects) is far more involved, but the idea is exactly this: no hidden threads, just a loop that moves tasks between “ready” and “waiting” buckets.
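To make the bucket-moving concrete, here is a tiny runnable toy loop – purely illustrative, not asyncio’s real code – that drives plain generators with a timer heap instead of I/O futures:

```python
import collections
import heapq
import time

def toy_sleep(delay):
    yield time.monotonic() + delay   # "future": a wake-up deadline for the loop

def toy_run(*coros):
    ready = collections.deque(coros)   # runnable coroutines
    sleeping = []                      # heap of (deadline, seq, coroutine)
    seq = 0
    while ready or sleeping:
        if not ready:
            # nothing runnable: wait for the earliest deadline, then wake it
            deadline, _, coro = heapq.heappop(sleeping)
            time.sleep(max(0.0, deadline - time.monotonic()))
            ready.append(coro)
            continue
        coro = ready.popleft()
        try:
            deadline = next(coro)      # run until the next yield
        except StopIteration:
            continue                   # coroutine finished
        seq += 1
        heapq.heappush(sleeping, (deadline, seq, coro))

def task(name):
    for i in range(2):
        print(name, i)
        yield from toy_sleep(0.05)

toy_run(task("A"), task("B"))   # prints: A 0, B 0, A 1, B 1 – interleaved
```

Twenty-odd lines, and the essential shape of an event loop is there: a ready queue, a waiting structure, and a rule for moving coroutines between them.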

8. FAQ: asyncio in production

Q1: Does await asyncio.sleep(0) really yield control?
A: Yes. It’s a deliberate yield point, useful to allow other tasks to run in a tight loop.
Q2: What happens if I forget to await a coroutine?
A: You get a coroutine object that never runs, and Python emits a “coroutine ... was never awaited” RuntimeWarning. Always await it or wrap it in asyncio.create_task().
Q3: Can I use asyncio with CPU‑bound code?
A: Not directly – it blocks the event loop. Use asyncio.to_thread or concurrent.futures.ProcessPoolExecutor for CPU‑heavy tasks.
Q4: How many tasks can I create?
A: Thousands or even millions given enough RAM. Each coroutine’s stack is small (a few KB), unlike OS threads.
Q5: Why does asyncio.run() fail with RuntimeError: asyncio.run() cannot be called from a running event loop in Jupyter?
A: Jupyter already runs its own event loop, so you can’t start a second one with asyncio.run(). Use await directly in a cell instead.
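Q1’s yield point is easy to demonstrate: as soon as a tight loop awaits sleep(0), other tasks get a turn between iterations. A minimal sketch:

```python
import asyncio

async def hog(results):
    for i in range(3):
        results.append(f"hog {i}")
        await asyncio.sleep(0)   # explicit yield point – other tasks get a turn

async def other(results):
    results.append("other ran")

async def main():
    results = []
    await asyncio.gather(hog(results), other(results))
    print(results)   # 'other ran' lands right after 'hog 0'

asyncio.run(main())
```

Without the sleep(0), hog would run all three iterations before other ever started.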

Glossary (quick reference)

Coroutine
Stateful object created by async def; can be paused/resumed.
Task
Wraps a coroutine for scheduling; runs concurrently with others.
Future
Low‑level promise of a result that will be set later; awaitable.
Event loop
Core scheduler that multiplexes tasks and I/O readiness.
Awaitable
Any object that can be used with await (coroutine, task, future).
Frame object
Internal Python structure holding local vars and execution state.

Takeaway

Async functions in Python are resumable state machines. Every await is a checkpoint where execution is paused – but never lost. The event loop is just a traffic cop that decides who runs next based on who has work ready.

Keep this mental model, and you’ll never be surprised by “how does it remember my variables?” again.

💡 Found this useful? Let me know in the comments or connect with me for more Python internals and backend deep dives.

#asyncio #PythonInternals #EventLoop #Concurrency #BackendEngineering #SystemDesign #Coroutines #FastAPI #AsyncPython
