
Python Tornado Request Performance


Tags: #tornado #performance #async #requests


Imagine a Tornado server that, for each incoming request, makes three asynchronous HTTP requests using AsyncHTTPClient.

The AsyncHTTPClient is configured with max_clients=10:

`AsyncHTTPClient.configure(None, max_clients=10)`

This means Tornado can run at most 10 simultaneous fetch() operations in parallel on each IOLoop; any further fetches are queued until a slot frees up.

See the Tornado docs for full details.
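For concreteness, here’s a minimal sketch of that setup, assuming a hypothetical handler and placeholder internal URLs: one Tornado handler fans out three fetch() calls through the shared AsyncHTTPClient, all of which count against the max_clients=10 cap.

```python
import asyncio

import tornado.ioloop
import tornado.web
from tornado.httpclient import AsyncHTTPClient

# Cap the shared client at 10 concurrent fetches per IOLoop (as above).
AsyncHTTPClient.configure(None, max_clients=10)

# Placeholder downstream endpoints -- not the real internal API URLs.
UPSTREAM_URLS = [
    "http://internal-api.local/a",
    "http://internal-api.local/b",
    "http://internal-api.local/c",
]


class FanOutHandler(tornado.web.RequestHandler):
    async def get(self):
        client = AsyncHTTPClient()
        # The three fetches run concurrently, but each one occupies a
        # max_clients slot until its response arrives.
        responses = await asyncio.gather(
            *(client.fetch(url) for url in UPSTREAM_URLS)
        )
        self.write({"status_codes": [r.code for r in responses]})


if __name__ == "__main__":
    tornado.web.Application([(r"/", FanOutHandler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()
```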

The maths is then: each incoming request needs 3 of the 10 max_clients slots, and 10 ÷ 3 ≈ 3.

This means a single Tornado process should only be able to handle ~3 incoming requests at a time before subsequent fetches start queueing inside Tornado’s AsyncHTTPClient.

According to our metrics the internal API requests take ~250ms each, so a backlog of just 4 queued fetches (4 × 250ms = 1s) is enough to hit the Tornado application’s configured 1s timeout.
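As a sanity check, the arithmetic behind both claims, using the figures above (the 250ms latency is the observed value from our metrics):

```python
max_clients = 10          # concurrent fetches per IOLoop
fetches_per_request = 3   # outbound calls per incoming request
upstream_latency = 0.250  # seconds, observed internal API latency
app_timeout = 1.0         # seconds, the application's request timeout

# Incoming requests served concurrently before fetches start to queue:
print(max_clients // fetches_per_request)   # 3

# Queued fetches needed to push a request past the timeout:
print(app_timeout / upstream_latency)       # 4.0
```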

Where we’ve tested aiohttp and found better performance ‘out-of-the-box’, what’s most likely happening is that aiohttp isn’t throttling outbound concurrency the way our max_clients=10 configuration does. That appears useful in this specific case, but it’s ultimately not what we want: we don’t want the application to be able to DoS downstream dependencies. Granted, aiohttp might actually be more performant. But it doesn’t appear as though we’re resource starved, so further configuration tweaking should be investigated before switching.
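As a starting point for that tweaking, a hedged sketch of sizing max_clients from the fan-out factor rather than accepting the default of 10 (the target of 25 concurrent incoming requests is an illustrative figure, not a measured one):

```python
from tornado.httpclient import AsyncHTTPClient

FETCHES_PER_REQUEST = 3
TARGET_CONCURRENT_REQUESTS = 25  # hypothetical sizing target for one process

# Give each concurrently-served request its own set of fetch slots, while
# still keeping an explicit upper bound on pressure against downstream APIs.
AsyncHTTPClient.configure(
    None,
    max_clients=FETCHES_PER_REQUEST * TARGET_CONCURRENT_REQUESTS,  # 75
)
```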