Optimizing the construction of Django QuerySets
Django’s ORM is normally fast enough as-is, but if you’ve ever profiled a high-traffic view with a fairly complicated query, you might have found that constructing the `QuerySet` can take a noticeable portion of your request time. For example, I once found a query on the front page of the site I was working on that took 1ms to construct and 1ms for the database to answer. With a performance budget of 100ms, that was 1% gone on computing the exact same SQL every time.
Thankfully we don’t need to drop straight down to raw SQL to optimize such cases, as Django’s `QuerySet` API naturally lends itself to caching the intermediate objects. Each operation on a `QuerySet` returns a new object with the change applied, `QuerySet`s are always lazy about executing their SQL, and operations can (normally) be chained in any order. This means you can build the non-specific part of your `QuerySet` up as a cached object, then apply the final, specific filtering required at request time.
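To make the copy-on-operation idea concrete, here is a minimal plain-Python sketch of the pattern (`FakeQuerySet` is a hypothetical toy class, not anything from Django) showing why a cached intermediate object is safe to share:

```python
import copy


class FakeQuerySet:
    """Toy stand-in for a Django QuerySet (not a real Django class)."""

    def __init__(self):
        self.filters = []

    def filter(self, **kwargs):
        # Clone first, then modify the clone - the original stays untouched,
        # which is what makes sharing a cached base object safe.
        clone = copy.copy(self)
        clone.filters = self.filters + [kwargs]
        return clone


# The non-specific part, built once and cached:
base = FakeQuerySet().filter(is_active=True)

# The cheap, request-specific final step:
specific = base.filter(username__startswith="ad")

print(len(base.filters))      # 1 - the cached base was not mutated
print(len(specific.filters))  # 2
```

Because `filter()` never mutates the object it was called on, every request can branch off the same cached base without interfering with other requests.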
Just a note before we dive in: this should be one of the least reached-for tools in your optimization toolbox. Normally it’s enough to fix the basics, such as avoiding N+1 queries with `prefetch_related()` and caching data between requests with Django’s caching framework. On that old front page I was talking about, the reason the rest of the page fit in 98ms was that most of it came from a few cache keys, avoiding even some template rendering.
As an example, let’s say we’re building an autocomplete feature on Django’s `User` model. We might have a function that looks like this:

```python
from django.contrib.auth.models import User
from django.db.models.functions import Length


def full_construction_autocomplete(typed):
    return (
        User.objects.annotate(username_length=Length("username"))
        .order_by("username_length")
        .filter(username__startswith=typed)
    )
```
If we time it with IPython’s `%timeit` in `./manage.py shell`, we’ll measure just the construction time, since nothing is iterating the `QuerySet` and causing the lazy fetching of results from the database. It comes out at about a quarter of a millisecond on my machine:

```
In : %timeit full_construction_autocomplete('ad')
1000 loops, best of 3: 263 µs per loop
```
To cache most of the construction, we can define the non-specific part of the query as a module-level object and apply the `filter()` at the last step:

```python
cached_qs = User.objects.annotate(username_length=Length("username")).order_by(
    "username_length"
)


def cached_construction_autocomplete(typed):
    return cached_qs.filter(username__startswith=typed)
```
And just like that, we’ve sped the function calls up by more than 50%:

```
In : %timeit cached_construction_autocomplete('ad')
10000 loops, best of 3: 105 µs per loop
```
Obviously 160 microseconds on its own is hardly going to be noticeable to your end users, but if you find yourself looking at a complicated or frequently constructed `QuerySet`, this technique might help you make your performance budget. It’s also a simple optimization to apply.
You don’t necessarily have to cache with a module-level object. For example, in `ModelAdmin` classes you could cache partially constructed `QuerySet`s on the class itself. This will work as long as you don’t make per-request modifications in `get_queryset()`:

```python
class MyModelAdmin(ModelAdmin):
    def get_queryset(self, request):
        if not hasattr(self, "_queryset"):
            self._queryset = (
                super()
                .get_queryset(request=None)
                .annotate(
                    username_length=Length("username"),
                )
            )
        return self._queryset.all()
```
The `all()` here is important to make sure we hand a copy of the `QuerySet` to the caller and avoid caching the results once for the whole class!
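The danger being avoided is that a `QuerySet` caches its results the first time it is evaluated. A rough plain-Python analogy (`CachedResults` is a hypothetical toy class) of how handing out a clone sidesteps a shared result cache:

```python
import copy


class CachedResults:
    """Toy stand-in for a QuerySet's result caching (not a Django class)."""

    def __init__(self, fetch):
        self._fetch = fetch        # stands in for running the SQL
        self._result_cache = None  # filled on first evaluation, like a QuerySet

    def all(self):
        # Like QuerySet.all(): return a clone with an empty result cache.
        clone = copy.copy(self)
        clone._result_cache = None
        return clone

    def results(self):
        if self._result_cache is None:
            self._result_cache = self._fetch()
        return self._result_cache


shared = CachedResults(lambda: ["alice", "adam"])
first, second = shared.all(), shared.all()

print(first.results() is second.results())  # False - each caller gets fresh results
print(shared._result_cache)                 # None - the shared object is never evaluated
```

If `get_queryset()` returned `self._queryset` directly, the first request to iterate it would fill its result cache, and every later request would see those same stale rows.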
I’m sure you can come up with the right caching scheme for wherever it is you construct your `QuerySet`s, such as in your class-based views or custom managers.
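Whatever the location, the shape of such a scheme is the same; here it is sketched generically in plain Python, with all names hypothetical and a dict standing in for the expensive base `QuerySet`:

```python
class AutocompleteSource:
    """Hypothetical holder class; the caching pattern is what matters."""

    _base = None  # class-level cache of the non-specific part

    def build_base(self):
        # Stands in for the expensive annotate()/order_by() construction.
        return {"annotate": "username_length", "order_by": "username_length"}

    def get_base(self):
        cls = type(self)
        if cls._base is None:  # build once, on first use
            cls._base = self.build_base()
        return dict(cls._base)  # hand back a copy, like QuerySet.all()


a = AutocompleteSource().get_base()
b = AutocompleteSource().get_base()
print(a == b, a is b)  # True False - same content, separate copies
```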
The Django core team are aware of the time that can be wasted on `QuerySet` construction, and have looked at optimizing it. This would be particularly useful for some Django internals, such as the queries constructed during `Model.save()`. Anssi Kääriäinen created a patch in Ticket 20880 to add an option for `QuerySet`s not to clone themselves on operations, and Josh Smeaton opened an experimental Pull Request implementing the same idea as a (maybe) public API. I personally think it’s better kept as a private API for Django’s core, as there are many ways of optimizing user code, including this strategy of caching partially constructed `QuerySet`s.
If your Django project’s long test runs bore you, I wrote a book that can help.
Tags: django, python