Internet Latency and Multi-Master Database Transactions

There are two common misconceptions in engineering West Coast – East Coast data centers:

  1. that packets travel at the speed of light
  2. that database masters can be located anywhere (i.e., far apart)

What happens when we look at the actual latency numbers with ecommerce/advertising applications in mind?

Cross-USA Internet Packet Latency (One-Way)

Figure 1: SF-NY (4,700 km geographic distance)

Transmission Method | End-End Speed | Time SF-NY | Note
Light in vacuum | 299,792 km/s | 16 ms | similar speed in air
Microwave repeaters in air | 235,000 km/s | 20 ms | repeaters every 48 km (actually built in the 1950s in both the USA and Canada; HFT firms currently use 15+ microwave routes from Chicago to New York)
Light in silica fiber (theoretical) | 204,081 km/s | 22 ms | index of refraction is 1.45
Oceanic cable (for comparison) | 156,666 km/s | 30 ms | including amplification and switching
Google routing in silica fiber | 150,000 km/s | 31 ms | extrapolated from The Dalles to Ashburn (4,350 km) at 29 ms
AT&T routing in silica fiber | 150,000 km/s | 31 ms |
Public Internet packets in silica fiber | 137,000 km/s | 35 – 36 ms | public Internet already uses MPLS

From Figure 1 above, we see that light can travel from SF to NY in 16 ms, yet the public Internet averages 35 ms. That’s 2.2x longer than the vacuum speed-of-light limit. So no, packets don’t travel at the speed of light.
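That 2.2x figure is easy to verify. A quick sketch, using the distance and speeds from Figure 1 (results are unrounded, so they differ slightly from the table’s rounded entries):

```python
# One-way propagation delay for a given path length and signal speed.
# Distance and speeds are taken from Figure 1.

SF_NY_KM = 4700  # geographic distance, per Figure 1

def one_way_ms(distance_km, speed_km_s):
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

vacuum_ms = one_way_ms(SF_NY_KM, 299_792)  # ~15.7 ms
public_internet_ms = 35                    # measured public Internet average
slowdown = public_internet_ms / vacuum_ms  # ~2.2x
```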

Now that we’ve described the latency limits numerically, there are some very interesting things to investigate:

  1. A serious enterprise could again construct microwave towers across the USA for latency-sensitive traffic, achieving 20 ms latency in good weather, with fiber backup. (“It’s better to be fast 99% of the time than slow 99.999% of the time” – the HFT industry has done exactly that between Chicago and NY. 🙂 )
  2. If SF – NY is too ambitious (after all, SF is earthquake-prone), “pinch” the westernmost and easternmost locations closer together by using a central region. (See below.)

Figure 2: Instead of SF-NY data center locations (~31 ms today), “pinch” the network topology of the synchronous master database pair to LAS or SLC and ATL or ORD (~20 ms today). (Map of USA population centers according to major airport traffic.)

Figure 3: Another interesting topology: near speed-of-light microwave links from Chicago, the easternmost synchronous master database, to NY (8.5 ms). Instead of spending a few billion dollars on a nationwide microwave chain, one of the 15+ existing microwave providers can be leveraged for the 1,300 km Chicago–NY hop for low-bandwidth transaction traffic.

Wide-Area Multi-Master Database Transactions

So how does that help us with multi-master database latency?

  1. for 2-phase/synchronous commit, 31-35 ms for a medium-to-high volume of OLTP transactions isn’t workable, especially over the public Internet. But 17-20 ms of reliable latency is fundamentally different. (10 ms is the same as public Internet latency from San Jose to Las Vegas!) An optimized ecommerce application would work with a reliable latency near 20 ms. (Confirmed with Percona Consulting.)
  2. if that’s not workable, think beyond 2-phase commit. Lamport/vector-clock algorithms have been available since 1988, and have been implemented in Voldemort and, since 2018, in Redis (so you can delegate database session handling, etc. to Redis if you need cross-DC availability). Cassandra uses last-write-wins and is DC-aware. Or use NTP/GPS time synchronization like Google Spanner does.
  3. #1 can be modified by “pinching” the location of the database masters: instead of thinking SF and NY, locate the masters in Las Vegas or SLC and Atlanta or ORD, with read-slaves in SJC and Ashburn as required.
  4. Google and AT&T have virtually unlimited CONUS fiber, meaning unlimited bandwidth and known reliability at around 31 ms. A new algorithm could be built around those constraints. Think git, but for database transactions.
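To make item 2 concrete, here is a minimal vector-clock sketch (illustrative only; not the Voldemort or Redis implementation). Two masters that each accept a write while out of contact end up with clocks that compare as “concurrent” – exactly the conflict a multi-master system must then resolve:

```python
# Minimal vector clock: each replica keeps a counter per node. A local
# write bumps the writer's own counter; clocks travel with the data so
# conflicting writes can be detected on reconciliation.

def increment(clock, node):
    """Return a new clock with `node`'s counter bumped (a local write)."""
    c = dict(clock)
    c[node] = c.get(node, 0) + 1
    return c

def merge(a, b):
    """Element-wise max: the clock after both histories are known."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in a.keys() | b.keys()}

def compare(a, b):
    """Return 'before', 'after', 'equal', or 'concurrent' (a conflict)."""
    a_le_b = all(a.get(n, 0) <= b.get(n, 0) for n in a.keys() | b.keys())
    b_le_a = all(b.get(n, 0) <= a.get(n, 0) for n in a.keys() | b.keys())
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"
    if b_le_a:
        return "after"
    return "concurrent"
```

For example, `compare(increment({}, "SLC"), increment({}, "ATL"))` returns `"concurrent"`: neither write happened-before the other, so the application (or a last-write-wins rule, as in Cassandra) has to pick a winner.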

What Does a Reliable Network Mean?

Reliable for wide-area multi-master database transactions means:

  1. almost always partition-free – 5x9s or more during the most active shopping times (Google is emphasizing partition-free design in their networks, as it’s far easier than reducing latency and more predictable overall)
  2. zero packet loss
  3. maintenance windows known in advance
  4. good enough for your DBA Team to say “Yes, we can support this.”
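For a sense of what “5x9s” means in practice, a quick back-of-envelope sketch (assuming a flat 365-day year):

```python
# Allowed downtime per year for a given number of nines of availability.

MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes_per_year(nines):
    """Minutes of downtime per year at `nines` nines of availability."""
    unavailability = 10 ** -nines  # e.g. 1e-5 for five nines
    return MINUTES_PER_YEAR * unavailability

# Five nines allows about 5.3 minutes of downtime per year;
# three nines allows about 8.8 hours.
```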

At this time, that requires a dedicated network, either your own or a cloud provider’s (Spanner with SQL has been available since 2017).

What is the Low-Hanging Fruit?

From lowest-cost to highest-cost for making database transactions WAN-safe:

  1. wiki exercise – document, for your business applications:
    1. how internal and external SLAs are defined
    2. which applications connect to the databases (what connection options are used, whether connections are persistent, and how many round-trips result)
    3. how many database round-trips are needed per page
    4. how sessions and session failover work
    5. what percentage of operations are writes vs. reads
    6. whether transactions are as thin as possible, using row self-updates and removing read-before-write cases (aka race conditions)
    7. how it all should really work in edge cases (network partitions, slowdowns, etc.)
    8. what can be cached with Redis/ElastiCache, Memcached, DynamoDB, etc.
  2. data reduction/archiving (just active OLTP rows, please)
  3. use transaction group commit
  4. pinching the westernmost and easternmost locations closer together, i.e., putting one master in a central location. See Figures 2 and 3 above.
  5. algorithms like vector clocks, or newer/better
  6. reducing latency on existing routes (MPLS, direct optical routes)
  7. building a new private CONUS/Gulf of Mexico fiber route
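For item 3, the value of group commit is easy to see numerically: every transaction in a batch shares one synchronous cross-DC round trip, so commit throughput scales with batch size, even though each individual transaction still waits the full round trip. A sketch (the 20 ms one-way latency is the “pinched” Figure 2 number; the batch sizes are hypothetical):

```python
# Maximum synchronous commit throughput when `batch_size` transactions
# share a single cross-DC round trip (2 * one-way latency).

def max_commits_per_sec(one_way_ms, batch_size):
    """Upper bound on commits/second with group commit over a sync link."""
    round_trip_s = 2 * one_way_ms / 1000
    return batch_size / round_trip_s

# Serial commits over a 20 ms one-way link: 25 commits/s.
# Group commit with batches of 50: 1,250 commits/s over the same link.
```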

In my experience, most organizations never even get to step #1 above. 🙂

Fortunately, there is a half-measure: AWS multi-AZ uses different data centers in the same region with only 1-2 ms inter-DC latencies. James Hamilton of AWS calls using small data centers in the same region “limiting the blast radius.”

The Speed of Light – Depends on the Medium

The speed of light in a vacuum is 299,792,458 meters per second, or 186,282 miles per second. In any other medium, though, it’s generally a lot slower. In normal optical fibers (silica glass), light travels a full 31% slower.
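As a quick check of that 31% figure (using a refractive index of about 1.45 for silica glass):

```python
# Speed of light in a medium: v = c / n, where n is the refractive index.

C_VACUUM_M_S = 299_792_458  # speed of light in vacuum, m/s

def speed_in_medium(n):
    """Speed of light in a medium with refractive index n, in m/s."""
    return C_VACUUM_M_S / n

v_fiber = speed_in_medium(1.45)        # ~206,753 km/s
slowdown = 1 - v_fiber / C_VACUUM_M_S  # ~0.31, i.e. 31% slower than vacuum
```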

Exercises for the Reader

  • Fill in the wiki outline above.
  • What regions does my cloud provider support?
  • What is the lowest inter-master latency that can be provisioned?
  • How many TPS does my database do that is directly ecommerce-related (not DW or logging)?

The World’s First West-East MySQL Multi-master Cluster

Yahoo paid MySQL AB about $40,000 for the first replication feature (statement-based) to use on their leased fiber. Because MySQL classic replication is asynchronous, latency is not a big issue for most operations as long as the total throughput is adequate.

Google Spanner

Google has built a database that corresponds to what’s discussed in this blog post, called Spanner. SQL support was added in 2017.

Please leave a comment!

Please leave a comment (no registration required) if you have any experience implementing similar topologies, or have suggestions or corrections.


How Google Does It
With Multi-Region support in Cloud Spanner, have your cake and eat it too
Google Public NTP

Microwave WAN Transmission

The secret world of microwave networks
The Abandoned Microwave Towers That Once Linked the US
Trans Canada Microwave
Microwave Bandwidth at Extreme Low Latency
109 Microwave Towers Bring the Internet to Remote Alaska Villages

Fiber Optic

Calculating Optical Fiber Latency
$1.5 billion: The cost of cutting London-Tokyo latency by 60ms
Researchers create fiber network that operates at 99.7% speed of light, smashes speed and latency records (fiber optic waveguide)

Public Internet Latency Measurements

SO: How much network latency is “typical” for east – west coast USA?
AWS Inter-Region Latency

Research and Other News

netflix: Active-Active for Multi-Regional Resiliency
Network latency – how low can you go?
W: Multiprotocol Label Switching (MPLS)
Latency: The New Web Performance Bottleneck
Networking Concepts Primer on Latency and Bandwidth
Network performance: Links between latency, throughput and packet loss
Turning the Optical Fiber Network into a Giant Earthquake Sensor
Network latencies and speed of light
Einstein, Poincaré & Modernity: a Conversation

This entry was posted in Business, Cassandra, Linux, MySQL, MySQL Cluster, Open Source, Oracle, Tech. Bookmark the permalink.
