In the fast-paced world of globally scaled technology, every millisecond counts. I joined Uber in April of 2015, and if I'm being truly honest, I'm not sure whether this story happened at the end of 2015 or in 2016. It all sort of blurs together when you're working at a startup like Uber. For the sake of this story, I'm going to say it was late 2015. We were already a global company and were constantly under the immense pressure of scale.
One day, while working in the office at 555 Market Street in downtown San Francisco, I overheard a conversation that piqued my interest. If I'm being honest, it sort of pissed me off. A team was discussing the high latency our users in India were facing. The latency was so severe that the first fetch alone took over 900 milliseconds, and those delays stacked on top of each other, making the experience miserable. In hindsight, anger may have been a telling sign of my emotional headspace at the time, but that's a story for another day. I was angry for our users in India. I was angry they had to wait SECONDS just for the page to load. On top of that, most of India at the time was on 3G networks, often over remote or rural connectivity, and their experience would have been mind-numbing.
Think about that. When an app takes longer than 250 ms to fetch its initial data, most of us will usually do one of two things:
- Close the app and re-open it, hoping it's a transient issue.
- Close the app and never return.
The Science Behind the Speed
To understand the significance of this improvement, let's dive into the math. Our primary ingress at the time was located in California, with a secondary data center in Virginia. The distance from California to India is approximately 13,000 kilometers. Considering the speed of light (around 300,000 km/s) and the fact that India was primarily using 3G networks at the time (which reduces the effective speed to about 133,333 km/s), we can calculate the round-trip time (RTT) as follows:
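Plugging those numbers in, a single trip across the ocean takes roughly 100 milliseconds, so a full round trip costs about 200 milliseconds:

RTT ≈ (2 × 13,000 km) / 133,333 km/s ≈ 0.195 s, or roughly 200 ms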
Now, let's factor in connection setup. Establishing a TCP connection and then completing the TLS handshake takes roughly three round trips across the ocean, which translates to a p50 (median) latency of roughly 600 milliseconds just to establish a secure connection. Add in the final round trip to actually fetch data, and you get a total latency of around 800 milliseconds.
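Tallied up (assuming TLS 1.2 without session resumption, and generously ignoring DNS, server time, and last-mile 3G radio latency), the breakdown looks roughly like this:
- TCP handshake: 1 round trip, ~200 ms
- TLS handshake: 2 round trips, ~400 ms
- First request and response: 1 round trip, ~200 ms
- Total before the user sees any data: ~800 ms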
(Image credit: Halub3, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons)
The Power of Go and Edge Computing
Our cloud proxy leveraged Go's reverse proxy functionality to establish and reuse a secure TLS connection to Uber's frontend. By deploying the proxy on cloud providers right at the edge in India, users completed their TCP and TLS handshakes with a server nearby instead of across the ocean, which effectively eliminated the repeated transoceanic round trips. This simple yet powerful solution slashed nearly 600 milliseconds from each new request, resulting in a dramatically improved user experience.
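If you're curious what such an edge proxy can look like, here's a minimal sketch built on Go's standard net/http/httputil reverse proxy. The origin hostname, certificate paths, and connection-pool numbers are placeholders for illustration, not Uber's actual setup.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The distant origin, e.g. a US-based frontend (placeholder hostname).
	origin, err := url.Parse("https://frontend.example.com")
	if err != nil {
		log.Fatal(err)
	}

	// Forward every incoming request to the origin.
	proxy := httputil.NewSingleHostReverseProxy(origin)

	// Keep a pool of warm connections to the origin so the expensive
	// transoceanic TCP+TLS handshake is paid rarely, not on every request.
	proxy.Transport = &http.Transport{
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 100,
	}

	// Terminate the user's TLS locally at the edge (placeholder cert/key files).
	log.Fatal(http.ListenAndServeTLS(":443", "edge-cert.pem", "edge-key.pem", proxy))
}
```

Run something like this on a VM physically close to your users and point clients at it, and the handshakes happen over a few hundred kilometers instead of thirteen thousand.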
While I've since pivoted my career towards AI Risk and Security, I'm thankful for this experience (and many others) I had at Uber. This was just one of many examples of the ingenuity we developed there, and it will always be a fond memory for me.
I'm curious whether you've ever experienced insanely high latency, or come up against unreasonable roadblocks that you broke through to accomplish amazing engineering feats. Let me know on Twitter!
I want to thank Twitter user @gillarohith for suggesting I turn this thread into a blog post. Thank you!