Upgrading YPlan to Python 3 with Zero Downtime

2016-08-24
We recently upgraded our 160,000 lines of backend Python code from Python 2 to Python 3. We did it with zero downtime and no major errors! Here’s how we did it - hopefully it will help anyone else still stuck on Python 2!
This was basically a three-step process:
- Make the code compatible with both Python 2 and 3
- Deploy Python 3
- Clean up the Python 2 compatibility code, and use some Python 3 idioms
That makes it sound simple, but each step was a little more involved… I’ll expand on each in sections below.
Make the code compatible with both Python 2 and 3
Since we operate on a continuous delivery schedule, we couldn’t just stop everything, convert the code base over to Python 3, deploy, and continue. Rather, we made the code compatible with both Python versions whilst new features were being written, allowing us to roll over to Python 3 when ready, and even roll back to Python 2 when needed, without rolling back features.
Making our dependencies compatible
The first step to make our code base compatible was to ensure that all our dependencies were! We have nearly 200 of
them at this point, so auditing them all took some time. Luckily there’s the useful
caniusepython3 tool which checks for the appropriate PyPI classifiers
on your installed packages. We found some cases where packages were just missing the correct classifiers, or where the
Python 3 compatibility had only recently been released and we needed to upgrade, but there were about twenty
packages that were just not compatible and needed fixing.
We got to work making pull requests. Unfortunately, in a few cases the original libraries were totally abandoned; we
found that we were only using a fraction of each library, so porting it all ourselves didn’t look attractive,
and thus we wrote our own small packages to fill in the gaps.
The other thing to look out for is where forks already exist for Python 3, but the original package is abandoned and
hasn’t been updated to link to the fork. The main one for us was the abandoned
python-openid which has been forked to
python3-openid - it was only by an almost accidental web search that
we found out about this! Because of this, we ended up with two requirements files, one for each Python version, and a
base file that they both include using pip’s -r mechanism.
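Concretely, the layout looked something like this (filenames and package pins are illustrative, not our exact files - pip requirements files can include one another with the -r directive):

```
# requirements-base.txt - everything compatible with both versions
Django==1.9.9

# requirements-py2.txt
-r requirements-base.txt
python-openid==2.2.5

# requirements-py3.txt
-r requirements-base.txt
python3-openid==3.0.10
```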
After all the dependencies were confirmed compatible, we started adding backport packages on Python 2, to get some
Python 3 features and compatibility fixes early. For example,
backports.csv is very useful for getting the Python 3
Unicode-friendliness of the
csv module on Python 2 - we’d actually already written a wrapper around Python 2’s
csv module, and could delete that just by using this library.
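The win here is that Python 3’s csv (and backports.csv on Python 2) works directly with text streams, so Unicode needs no special handling - a small sketch:

```python
import csv
import io

# io.StringIO is a text stream, so csv reads and writes str directly -
# no manual encode/decode dance as with Python 2's csv module.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["café", "naïve"])

reader = csv.reader(io.StringIO(buf.getvalue()))
assert next(reader) == ["café", "naïve"]
```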
Making our code compatible
Once all that was done, we had to make sure our own code was Python 3 ready. For a long time we’d been adding all the
__future__ headers possible on Python 2 (
absolute_import, division, print_function, unicode_literals) as they all
helped with preventing bugs. We enforce this with the excellent
isort plus a little configuration.
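Concretely, every file began with the same import line (isort’s add_imports setting can insert it automatically; our exact config isn’t reproduced here):

```python
# Comments may precede __future__ imports, but no other statements can.
from __future__ import absolute_import, division, print_function, unicode_literals

# On Python 3 these are all no-ops, but on Python 2 they changed behaviour -
# e.g. the division import makes / mean true division rather than floor division:
assert 3 / 2 == 1.5
```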
We had also been forcing all files to be UTF-8 for a long time, using the encoding header
# -*- encoding:utf-8 -*-, enforced by the
flake8-coding plugin.
To get to fully cross-compatible code, we added another linter to our toolchain:
python-modernize. This uses the power of
2to3, but rather than just
convert your code once, it moves it to be compatible with both Pythons, via the compatibility library
six. We applied python-modernize one rule at a time, every few days, starting with the easier ones and working our way up. This
allowed everyone to be synchronized on the compatible style by learning a few rules at a time, e.g. “use
six.text_type instead of
unicode”, rather than getting them all at once.
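That particular rule works because six provides a single name for the native text type on both versions - roughly like this simplified sketch (not six’s actual source):

```python
import sys

# Simplified sketch of six's aliasing: six.text_type is the native text
# type on each version, so code can stop naming `unicode` directly.
if sys.version_info[0] >= 3:
    text_type = str
else:
    text_type = unicode  # noqa: F821 - only defined on Python 2

assert text_type("hello") == "hello"
assert isinstance("café", text_type)
```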
Another thing that we found important was ensuring that the
six.moves aliases and the backport packages we added were
used. To help with this, I added a new option to my
flake8-tidy-imports plugin, and then added some rules to the YPlan code
base, configured in our
setup.cfg.
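The option, banned-modules, pairs module names with the message to show when they’re imported; the entries looked along these lines (illustrative, not our exact list):

```
[flake8]
banned-modules = StringIO.StringIO = Use io.StringIO instead!
                 urlparse = Use six.moves.urllib.parse instead!
```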
The io.StringIO one is especially important; the
io module is a lot closer
to how Python 3 thinks about bytes versus
str, and allows you to detect errors a lot sooner. We found several cases where bytes and
str data were mixed with Python 2’s
StringIO, because it allows you to be sloppier.
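The sloppiness io.StringIO prevents is exactly that mixing: it rejects bytes at the write site, where Python 2’s StringIO.StringIO silently accepted both:

```python
import io

buf = io.StringIO()
buf.write("text is fine")

# Python 2's StringIO.StringIO accepted this mix silently; io.StringIO
# raises immediately, surfacing the bug where it happens.
try:
    buf.write(b"bytes are not")
    rejected = False
except TypeError:
    rejected = True

assert rejected
assert buf.getvalue() == "text is fine"
```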
Making our tests pass
Having added all these linting rules, there were still other compatibility problems, for example some small things
that python-modernize doesn’t check for. Luckily we have very high test coverage with about 5000 tests, so we could
be reasonably sure that things were working on Python 3 as long as all the tests passed.
Once we’d fixed some base failures, such as code that just wouldn’t import, we took the test suite and worked on it piece by piece to make it pass, mostly one Django app at a time. In a final four-day push, we got the test failures down from 1400 to 0, as tracked on our whiteboard.
One thing we found at this stage was that there were a few small bits that couldn’t be solved with
six. Instead, we
had to write code that ran differently on each Python version; for example, in one class we had version-specific method definitions.
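The exact snippet isn’t reproduced here, but the pattern looked something like the following (class and method names are hypothetical; in our code the check was six.PY2 rather than a direct sys check):

```python
import sys

PY2 = sys.version_info[0] == 2  # in our code this was six.PY2

class Money(object):
    def __init__(self, pence):
        self.pence = pence

    if PY2:
        # On Python 2, __str__ must return bytes; delegate to __unicode__.
        def __unicode__(self):
            return u"£{:.2f}".format(self.pence / 100)

        def __str__(self):
            return self.__unicode__().encode("utf-8")
    else:
        def __str__(self):
            return "£{:.2f}".format(self.pence / 100)

assert str(Money(150)) == "£1.50"
```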
This was still easy to clean up after the deployment of Python 3, as it continued to mention
six, so we’d easily
come across it.
After this stage, we modified our Ansible to always install a Python 3 virtualenv alongside the Python 2 one, on all development and testing machines. We then didn’t regress on our test failures - no commit was allowed to land that broke Python 3 compatibility, as enforced by Jenkins.
Deploying Python 3

To understand how we deployed, you’ll first have to understand a bit about our deploy process.
We use an ‘immutable infrastructure’ deployment pattern on AWS, with EC2 instances deployed by Cloudformation. This means that when we deploy, we create brand new instances running the new code and environment, all of which has been baked into a single Amazon Machine Image (AMI). When an instance boots, it runs a copy of this image, and gets some extra information (‘user data’) about what type of instance it should be (celery worker, web, etc.) and launches the appropriate services.
We expanded this user data to include a second flag alongside the instance type, indicating which Python version to run with. When set to Python 3, the boot process would just change virtualenv path in a few config files, such as the uWSGI ini file, before starting the relevant services.
We created a whole duplicate of our infrastructure where this flag was set to Python 3, but defaulting to 0 instances of each type. With a deployment running, we could then manually go in and scale up one of the types on Python 3, e.g. the web instances, to run Python 3 there alongside Python 2.
We planned to do this carefully - first adding a single web instance for just a few minutes, then an hour, then
repeating with each other instance type. Whilst this generally went smoothly, there was an initial problem when we
added that first web instance. That instance never passed the health check and so never served traffic, because of
pickle compatibility, something we simply hadn’t thought of.
Our memcached infrastructure stores values via
pickle, the Django standard. It turns out that whilst the
protocol doesn’t meaningfully change between the Python versions, there’s still the bytes versus
str confusion to deal
with on Python 3:
str objects that were pickled on Python 2 could unpickle to either bytes or
str on Python 3, so
you need to pass a hint to Python 3’s
pickle about this.
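The hint is the encoding argument to pickle.loads on Python 3. A small demonstration, with the Python 2 pickle bytes hand-assembled (since we can’t run Python 2 here) to match what its pickler produces for a short str at protocol 2:

```python
import pickle

# Hand-assembled: PROTO 2, SHORT_BINSTRING "abc", BINPUT 0, STOP -
# raw bytes with no marker saying whether they were text or binary data.
py2_data = b"\x80\x02U\x03abcq\x00."

# Python 3 must be told how to interpret Python 2 str objects:
assert pickle.loads(py2_data, encoding="bytes") == b"abc"
assert pickle.loads(py2_data, encoding="utf-8") == "abc"
```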
Thankfully this was only cached data, and rather than figure out whether it would be safe to always decode to
bytes on Python 3, we could just make Python 2 and 3 talk to different sections of the cache space by configuring
a different key prefix in our Django settings.
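Django’s cache framework supports this directly through the KEY_PREFIX setting; a sketch of the settings entry (backend and location are illustrative):

```python
import sys

# settings.py sketch: give each Python major version its own cache
# keyspace, so pickled values never cross between versions.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",
        "KEY_PREFIX": "py{}".format(sys.version_info[0]),
    }
}
```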
Aside from this there were no big surprises, apart from a couple of bits of poorly tested code failing in fairly simple ways, for which we could push fixes within an hour. After a few iterations solving these bugs, we rolled everything over to Python 3, and no one using the site could tell the difference.
Clean up

After all this, there was a lot of clean up to be done - pretty much the inverse of every step above. Thankfully it’s a lot more brainless than the initial writing!
The first thing we did, a week after our final cutover to Python 3, was to remove the ability to run Python 2 in
production by deleting the associated Ansible code. We then had to remove all the other things, such as the Python 2
virtualenv, the conditional usage of
__future__ headers and relevant lint rules, etc. We still did each
in individual commits to make code review easier - it was important to verify that we weren’t breaking anything by
removing the compatibility code!
We also made some general code base improvements by taking advantage of the nicer syntax on Python 3. For example, we
changed all the
super(Foo, self).bar() invocations to
super().bar() with a shell one-liner.
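The exact command isn’t shown here, but a one-liner in this spirit would do it (illustrative - the regex assumes the common `super(ClassName, self)` shape; try it on a branch first):

```shell
# Rewrite two-argument super() calls to the bare Python 3 form.
echo 'return super(Foo, self).bar()' \
  | sed -E 's/super\([A-Za-z_][A-Za-z0-9_]*, *self\)/super()/g'
# prints: return super().bar()
```

Across a whole repository, the same sed expression can be applied in place via something like `grep -rl 'super(' --include='*.py' . | xargs sed -i -E '…'`.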
Overall we’re glad we took our slow three-step route of compatibility, deployment, and clean up, as opposed to making this a ‘stop everything’ project. This also made it easier for everyone to adapt to writing Python 3.
If you’re (still) running a code base on Python 2, there’s never been a better time to switch to Python 3. The tooling continues to improve and loads of packages are ready for you! It’s also healthy for your code base to move off old unmaintained dependencies and on to newer ones.
Hope this post helps anyone else upgrading!