Python Multiprocessing.Pool lazy iteration

Let’s look at the end of the program first.

The multiprocessing module uses atexit to call multiprocessing.util._exit_function when your program ends.

If you remove that exit handler, your program ends quickly.
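To see the atexit mechanism in isolation, here is a small, hypothetical demonstration (not from the original post): a child interpreter registers an exit hook the same way multiprocessing.util registers _exit_function, and the hook runs only after the main script finishes.

```python
import subprocess
import sys

# A tiny stand-in program: register an exit hook with atexit,
# just as multiprocessing.util registers _exit_function at import time.
prog = (
    "import atexit\n"
    "def exit_hook():\n"
    "    print('exit hook ran')\n"
    "atexit.register(exit_hook)\n"
    "print('main done')\n"
)

# Run it in a fresh interpreter; the hook fires after 'main done'.
text = subprocess.check_output([sys.executable, "-c", prog]).decode()
print(text)
```

The hook's output appears after the main program's output, which is why cleanup work like _terminate_pool can still run after your last line of code.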

The _exit_function eventually calls Pool._terminate_pool. The main thread changes the state of pool._task_handler._state from RUN to TERMINATE. Meanwhile the pool._task_handler thread is looping in Pool._handle_tasks and bails out when it reaches the condition

            if thread._state:
                debug('task handler found thread._state != RUN')

(See /usr/lib/python2.6/multiprocessing/)
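The bail-out pattern above can be sketched in plain threading terms. This is a simplified, self-contained imitation (not the real pool code): in Python 2.6's multiprocessing.pool, RUN is the integer 0, so `if thread._state:` is true exactly when the state is no longer RUN.

```python
RUN, TERMINATE = 0, 2  # mirrors the integer state constants in multiprocessing.pool

class FakeHandler(object):
    # Stand-in for the task handler thread object; only _state matters here.
    def __init__(self):
        self._state = RUN

thread = FakeHandler()
handled = []

def gen():
    # A generator whose consumption we can observe; partway through,
    # we flip the state, as the main thread does in _terminate_pool.
    for i in range(100):
        if i == 5:
            thread._state = TERMINATE
        yield i

def handle_tasks(thread, taskseq, handled):
    # Simplified sketch of Pool._handle_tasks' loop: stop as soon as
    # _state is no longer RUN.
    for task in taskseq:
        if thread._state:
            break
        handled.append(task)

handle_tasks(thread, gen(), handled)
print(handled)  # only the items pulled before the state flipped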

This is what stops the task handler from fully consuming your generator, g(). If you look in Pool._handle_tasks you’ll see

        for i, task in enumerate(taskseq):
            try:
                put(task)
            except IOError:
                debug('could not put task on queue')
                break

This is the code which consumes your generator. (taskseq is not exactly your generator, but as taskseq is consumed, so is your generator.)

In contrast, when you call, the main thread blocks inside the result's get(), waiting when it reaches self._cond.wait(timeout).

Because the main thread is waiting instead of calling _exit_function, the task handler thread can run normally, which means it fully consumes the generator as it puts tasks into the workers' inqueue in the Pool._handle_tasks function.
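The interaction between the waiting main thread and the consuming handler thread can be sketched with plain threading primitives. This is an illustrative analogy (my construction, not pool code): the main thread blocks, standing in for self._cond.wait(timeout), while a background thread drains the task sequence.

```python
import threading

consumed = []
done = threading.Event()

def task_handler(taskseq):
    # Stand-in for Pool._handle_tasks: runs in its own thread
    # and drains the whole task sequence.
    for task in taskseq:
        consumed.append(task)
    done.set()

t = threading.Thread(target=task_handler, args=(iter(range(10)),))

# The main thread blocks here, analogous to self._cond.wait(timeout)
# inside result.get(); the handler thread is free to consume everything.
done.wait()
print(consumed)
```

Because the main thread parks itself instead of exiting, the handler reaches the end of the iterable every time.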

The bottom line is that all the Pool map functions consume the entire iterable they are given. If you'd like to consume the generator in chunks, you could do this instead:

import multiprocessing as mp
import itertools
import time

def g():
    for el in xrange(50):
        print el
        yield el

def f(x):
    return x * x

if __name__ == '__main__':
    pool = mp.Pool(processes=4)              # start 4 worker processes
    go = g()
    result = []
    N = 11
    while True:
        g2 =, itertools.islice(go, N))
        if g2:
            result.extend(g2)
            time.sleep(1)
        else:
            break
    print result