India Requests Additional MiG-29 Fighters – in 2019

Interesting how supposedly “obsolete” but great military airplanes never disappear:

  • The USA has chosen the B-52 (first flown 1952) to outlast the B-1 and B-2, due to the high maintenance costs and low dispatch rates of the newer bombers.
  • The USA relies on the F-5/T-38 (first flown 1959) for a large share of its “behind-the-scenes” training and testing operations – so much so that Cold War Military Assistance Program (MAP) airframes are being repatriated from overseas and overhauled for drone use.

    NASA’s two forward-swept X-29s combined an F-5A fuselage and F-16 landing gear and control systems with custom composite forward-swept wings. Unfortunately the wings were cut off with a titanium chainsaw to ship the aircraft to a museum, instead of going via the Panama Canal or a ferry flight, so they will never fly again.



    X-29 #049. Notice tufting on the wing surface, aft fuselage and aft control surfaces to visualize airflow, which is expected to flow from wingtip to inboard for a forward-swept wing. Each of the parties that funded the research is listed on the side.

    Swiss Air Force F-5E. USA is buying back 22 plus spares for Top Gun and other programs.

    “Perhaps the most interesting aviation item in the FY20 request is that for 22 Northrop F-5E/F Tiger II aircraft, to be divided equally between the Navy and Marine Corps. These aircraft will be acquired to improve and expand the adversary fleets of both services. The Navy bought 44 F-5E/Fs from Switzerland in the 2000s, refurbishing them as F-5Ns. The new batch of 22 is also coming from Switzerland, and the aircraft are due for refurbishment by Northrop Grumman at its St Augustine, Florida, facility. The value of this requested purchase is just under $40 million for all 22 aircraft and spares.”

  • India lost a MiG-21 (first flown 1959) in the Kashmir air battle in Feb. 2019
  • India is buying more MiG-29s (first flown 1977) for $40 million each

What those planes all have in common is low-maintenance, all-metal airframes – none is fly-by-wire or composite (the MiG-29 has minimal composites). The fighters can all operate from grass strips and be maintained without a hangar or special tools.

nytimes.com: After India Loses Dogfight to Pakistan, Questions Arise About Its ‘Vintage’ Military
reuters.com: Washington wants to know if Pakistan used U.S.-built jets to down Indian warplane
ainonline.com: India Requests Additional MiG-29 Fighters
Pentagon To Retire USS Truman Early, Shrinking Carrier Fleet To 10

W: MiG-21, MiG-29, F-5, B-52
Did Pakistan use its Chinese JF-17 jets to shoot down Indian planes?
Fighting Falcon puts off retirement: F-16 to fly for USAF through 2048
Dutch F-16 flies into its own bullets, scores self-inflicted hits
avweb.com: Sometimes Old Technology Is Appropriate

Posted in Tech

Comprehensive and Well-written Collection of Life and Business Development Topics

35 Hard Truths You Should Know Before Becoming “Successful” is a comprehensive and well-written collection of life and business development topics.

The list can be used in many ways:

  1. read in one sitting, combined with self-reflection
  2. as a sequence of items to study, one per day
  3. as topics to expand upon – for example, google each of the quotes and examples for more details
  4. as topics to discuss between mentor and mentee
  5. as prompts to explore more deeply, even when they seem intuitive

My favorites are 12, 16, 18 and 20.

How to be More Productive and Eliminate Time Wasting Activities by Using the “Eisenhower Box”
W: Koan
What I Learned From Learning How to Say No
“Be yourself” is terrible advice (HN)

Suggestions:

  • “Ask yourself periodically, is this who I really wanna be?”
  • “Just do it.”
  • “Be the best version of yourself you can possibly be.”
  • “Become your Platonic ideal.”
Posted in Tech

Postgres Performance on AWS EBS

AWS EBS is network-attached storage … in other words, S L O W, compared to local SSD for Postgres database use.

I’ve been seeing average disk latency of 0.55 – 0.80 milliseconds per block (when operating correctly, otherwise 3 ms to 5 ms), and IOPS and bandwidth are throttled by both the instance and the volume.

For m4.2xlarge, only 10,000 IOPS and 100 Mbps are available, no matter how many – or how beefy – your attached EBS volumes are. Not impressive for SSD at all:
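To put those latency numbers in perspective, here is a rough sketch of the effective single-threaded read rate (assuming 8 KB Postgres pages and one outstanding I/O at a time – real workloads overlap I/Os):

```python
# Effective serial read throughput at the measured EBS latency.
# Assumes 8 KB Postgres pages and one outstanding I/O at a time.
PAGE_SIZE_KB = 8
latency_ms = 0.55                       # best-case block latency measured above

pages_per_sec = 1000 / latency_ms       # ~1818 pages/s
throughput_mb = pages_per_sec * PAGE_SIZE_KB / 1024

print(f"{pages_per_sec:.0f} pages/s = {throughput_mb:.1f} MB/s")
```

So even at the best-case 0.55 ms per block, a query that reads one block at a time tops out around 14 MB/s – far below what a local SSD delivers.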


Figure 1: m4.2xlarge throttling IO from 4G EBS gp2 volume with 10,000 IOPS and 250 Mbps

Figure 2: 4G EBS gp2 unencrypted volume showing minimum read latency of 0.55 ms


In the above case, one thing you can do is to switch from m4.2xlarge to m5.2xlarge, which is cheaper and has double the IO performance.

But if you’re stuck using Postgres with EBS for large databases (bigger than RAM), there are workarounds that exploit the fact that shared_buffers can keep indexes cached in RAM:

  1. carefully configure shared_buffers to be as large as possible, and max_connections as small as possible
  2. run EXPLAIN to see if indexes are used (no SEQ SCAN) and pg_stat_statements extension to identify slow or frequent queries
  3. use covering indexes to read data from the index cache
  4. rewrite queries to do index scans from RAM instead of table scans across the network from EBS (e.g. HAVING => INTERSECT and EXCEPT, WHERE-splitting, etc.)
  5. remove ORDER BY if clause not indexed and your app doesn’t need sorting
  6. use Redis to cache repeated queries
  7. use io1 instead of gp2 volumes, but do your own benchmarks and latency measurements as they vary with both types
  8. use local-instance (“ephemeral”) SSD volumes, with replication/WAL shipping for HA.
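As a back-of-the-envelope sketch of workaround 1 (all numbers here are illustrative assumptions, not recommendations – size for your own instance and workload):

```python
# Hypothetical memory budget for a 32 GB instance: give shared_buffers
# whatever is left after the OS reserve and per-connection overhead.
ram_gb = 32                  # assumed instance RAM
work_mem_mb = 16             # assumed per-connection sort/hash memory
max_connections = 50         # kept small, per workaround 1
os_reserve_gb = 4            # assumed OS / page-cache reserve

conn_overhead_gb = max_connections * work_mem_mb / 1024
shared_buffers_gb = ram_gb - os_reserve_gb - conn_overhead_gb
print(f"shared_buffers ~ {shared_buffers_gb:.1f} GB")
```

Note that work_mem can be allocated more than once per query, so the real per-connection overhead varies; the point is simply that shrinking max_connections frees RAM for shared_buffers.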

It would be nice if Postgres had an explicit setting to mark storage as network-attached for the optimizer; the closest existing knobs are random_page_cost and effective_io_concurrency.

Percona has some advice for tuning operating system parameters.

Amazon EBS Volume Types
The most useful Postgres extension: pg_stat_statements
Amazon Postgres RDS pg_stat_statements not loaded
pg_hint_plan
PostgreSQL Workload Analyzer
docs.aws.amazon.com: Initializing Amazon EBS Volumes

Keywords: cloud, architecture, Postgresql

Posted in Linux, Postgresql, Tech

Cassandra vnodes Streaming Reliability Calculator

The Cassandra database has a setting in cassandra.yaml, num_tokens, that controls the number of vnodes – token ranges – per host, and thus the number of parallel streams used for data movement.

The default was 256 vnodes, but that led to a high probability of streaming failure, so “DataStax recommends using 8 vnodes (tokens)” now.

A Netflix paper agrees, saying “the Cassandra default of 256 virtual nodes per physical host is unwise,” as do the experienced DBAs on the Apache Cassandra Users List.

To calculate the impact of vnode count on cluster streaming reliability:

P(at least one failure) = 1 − (1 − P(streaming one failure)) ^ (num_tokens × nodes)

where P(streaming one failure) is the independent probability of a streaming failure of one connection during one week, possibly in the range of .0001 to .00001. (You could process your log files to get your exact failure count.)

I wrote a Javascript calculator to help visualize how vnodes increase the probability of streaming failures.

The calculator takes two inputs: the expected probability of a vnode stream failing (per week), and the number of nodes in the cluster.
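The calculation behind the calculator can be sketched in Python. The stream count here is an assumption – roughly num_tokens streams per node; your topology and replication factor change the exact number:

```python
# Probability that at least one of the cluster's streams fails in a week,
# given an independent per-stream failure probability p.
def cluster_failure_prob(p, nodes, num_tokens):
    streams = nodes * num_tokens     # assumed: one stream per vnode per node
    return 1 - (1 - p) ** streams

# 100-node cluster: old default of 256 vnodes vs. the recommended 8
print(cluster_failure_prob(1e-4, 100, 256))   # ~0.92
print(cluster_failure_prob(1e-4, 100, 8))     # ~0.077
```

With the same per-stream failure rate, dropping from 256 to 8 vnodes takes the weekly chance of at least one failed stream from near-certainty to under 8%.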

Note that changing num_tokens after a ring bootstraps is not a casual operation. The easiest way is to replicate to a new ring or DC with a different num_tokens setting, then fail over.

Examples of Streaming Errors

datastax: Streaming operations throw “java.lang.AssertionError: Memory was freed” error
SO: Can’t add a new Cassandra datacenter due to streaming errors
Cassandra Vnodes and token Ranges
Netflix: Cassandra Availability with vnodes Whitepaper

Posted in Cassandra, Open Source, Tech

Postgres Monitoring Script pg_glance.sh Available

During the Super Bowl on Sunday I wrote a small performance-monitoring script for Postgres called pg_glance.

You can download it from my github project pg_glance.

Getting Started with pg_glance

It’s easy to get started …

If you’re remotely monitoring postgres instances, using ssh keys to log in from your notebook or application server:

  1. download pg_glance.sh to your ~/.ssh directory
  2. update the hosts variable with a space-separated list of postgres servers to monitor
  3. if the linux postgres accounts have no passwords, just run:
    watch -n 15 "./pg_glance.sh | grep ':: '"

If you’re monitoring localhost, or using passwords with your postgres login, then you’ll need to spend a minute customizing the script.

Posted in Postgresql, Tech