Cisco ASR 920: A router with a fear of heights?

The Register has an article about a Cisco recall of PSUs in the ASR 920 Series Aggregation Services Routers.

I enjoy reading The Register, but this article is a little light on research.

Looking around online, it turns out that PSUs are often designed for certain altitudes when using air as a dielectric and for cooling, and 2,000 meters is common. However, the current ASR 920 datasheet specifies 4,000 meters, hence the recall.

The real story is:

  1. How did Cisco learn of the PSU problem? Device failure, or DC fire?
  2. How many other models have the wrong PSU?

Cisco ASR 920
Various Configurations of the Cisco ASR 920

How Does Altitude Affect AC-DC Power Supplies?
A router with a fear of heights? Yup. It’s a thing
Cisco ASR 920 Series Aggregation Services Routers: Low-Port-Density Models Data Sheet

Posted in Tech | Leave a comment

Recent Aviation News – Challenger Bizjet Upset, Appareo’s Stratus Power USB

A couple of interesting aviation news items this week …

The Challenger 604 bizjet that was upset by A380 wake turbulence on Jan. 7 appears to have been totalled.

Challenger 604 Bizjet
Challenger 604 Bizjet Similar to Damaged One

The Real Story – 1000′ RVSM is Unsafe

Even more important, the 1000′ Reduced Vertical Separation Minimums (RVSM) rule may have to be revisited because of this incident. Since it was predictable that heavy jets (747s or A380s) would overfly business jets, we need to look at how 1000′ RVSM was justified in the first place.

For safe flight, the Challenger needed to fly above the A380, or 15 miles laterally if below.

Just because you can (maintain vertical separation with advanced navigation like GPS) doesn’t mean you should (allow an A380 to overwhelm smaller jets).

And pilots need a reminder that in turbulence, the priority is to maintain attitude, not to chase altitude or roll excursions. It will likely turn out that the Challenger was damaged by over-controlling, though high-altitude upsets are not comparable to low-and-slow ones.

Also, Appareo’s Stratus Power USB-A Dual Charging Port is TSO-C71-certified for aviation use and only $349 plus installation, letting you charge all those shiny portable electronics from the panel.

Total output current is 1.9 Amps continuous, with a max of 2.1 Amps, at 5 volts: enough for 1 iPad or 2 iPhones. Check with Appareo if you need to charge an iPad plus another device at the same time.
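As a back-of-envelope check on those numbers (the per-device current draws below are typical values I’m assuming, not Appareo specs):

```python
# Back-of-envelope check of the Stratus Power numbers (5 V USB-A).
# Per-device current draws are typical values, not Appareo specs.

VOLTS = 5.0
MAX_AMPS = 2.1      # peak output
CONT_AMPS = 1.9     # continuous output

DEVICE_AMPS = {"ipad": 2.1, "iphone": 1.0}

def fits(devices, budget_amps=MAX_AMPS):
    """True if all listed devices can draw full charging current at once."""
    return sum(DEVICE_AMPS[d] for d in devices) <= budget_amps

print(MAX_AMPS * VOLTS)             # 10.5 (watts peak)
print(fits(["ipad"]))               # True: one iPad
print(fits(["iphone", "iphone"]))   # True: two iPhones
print(fits(["ipad", "iphone"]))     # False: hence "check with Appareo"
```

Which matches the datasheet claim: one iPad or two iPhones, but not both at once.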

StratusPower USB
Stratus Power TSO-certified USB-A Dual Charging Port
Wake Turbulence Writes Off Challenger Bizjet
Accident: Emirates A388 over Arabian Sea on Jan 7th 2017, wake turbulence sends business jet in uncontrolled descent

Appareo’s Stratus Power USB-A Charging Port

Posted in Tech, Toys | Leave a comment

AWS Loft Architecture Week – Databases

I attended 2 days of the AWS Loft SF Database Architecture Conference.

The software scalability work that Amazon has done on databases and caches, especially the Aurora distributed MySQL and Postgres databases, is very impressive. (DBA note: do careful acceptance testing of any distributed database.)

Executive Summary:

  1. AWS has gone far beyond “IaaS EC2 Classic hosting” and developed a complete HA database software stack.
  2. AWS Solution Architects are very knowledgeable, and available to all account holders.
  3. The free data migration tools, SCT (Schema Conversion Tool) followed by DMS (Data Migration Service), support sources and destinations both in AWS and on-premises, across several common database products.
  4. The new Intel chips make the new EC2 instance types 34% faster.
  5. Slides

Some of the talks I went to:

What’s New in Amazon Aurora for MySQL and PostgreSQL
by Kevin Jernigan, AWS Manager of Tech Product Management, DBS

– very impressive engineering work by AWS engineers – a complete internals modernization of MySQL and PostgreSQL
– split the Open Source MySQL and PG code into two components (SQL engine and SAN storage modules)
– rewrote algorithms (btree => Z-index), log replay (max. 1.5 seconds), and pushed locking code down for MySQL
– no checkpoints, so 3x less jitter (query latency variation): data is written to the network, so no disk stalls
– Aurora MySQL 5x faster or more
– Aurora PG 2x faster (already well-written internals)
– 6 nodes, 4 required for quorum
– my opinion as a DBA is that SANs are always a problem, so carefully evaluate this
– the story on edge cases is not well understood yet, which is critical for operating a large distributed database
– you still choose an instance size for IO/CPU, since that’s how their billing works

My co-attendee found the speaker authoritative on the technical details. I found the MySQL-to-Aurora comparisons a little contrived, as they assumed a worst-case MySQL configuration: I can fail over in 2-5 seconds with master-master replication and a load balancer, compared to the 30 seconds to a minute or more he described.

What’s New in Amazon RDS for Open-Source and Commercial Databases
by KD Singh, AWS Partner Solutions Architect:

– MariaDB and Oracle 12c now supported
– read slaves use regular async replication from the product, so they can lag; your app needs to handle it
– HIPAA, ITAR, US-gov, UK-gov, SG-gov, and PCI Level 1 seller approval
– during RDS failover, your app must be programmed to reconnect automatically; expected downtime is about a minute for the CNAME => IP address to update
– SQL Server limit is 4 TB; supports AD and .bak files
– 1-second monitoring now included in CloudWatch
– “pick the smallest you think will work, and migrate when you need a bigger instance size”
– local timezone now supported everywhere
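On the reconnect-during-failover point: the pattern is simply to retry the connection with backoff until the CNAME points at the new primary. A generic sketch (the `connect` callable and timings are placeholders, not an AWS API):

```python
import time

def with_retry(connect, attempts=10, base_delay=0.5, max_delay=8.0):
    """Call connect() until it succeeds, backing off exponentially.

    During an RDS failover the CNAME flips to the new primary, so a
    client that keeps retrying for about a minute will ride it out.
    """
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts:
                raise           # give up after the last attempt
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
```

Usage would be something like `conn = with_retry(lambda: driver.connect(host="mydb.cluster-xyz.rds.amazonaws.com"))`, where `driver` is whatever DB library your app uses.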

Migrating to Amazon RDS with Database Migration Service
by Dhanraj Pondicherry, Senior Manager of Solutions Architecture, AWS

– SCT (Schema Conversion Tool) first, then DMS (Data Migration Service)
– successfully used for Oracle => PG migrations by marquee clients
– may need careful VPC setup for the source or target
– SCT lists counts of tables and SPs, so it’s easy to eyeball for QC
– inbound bandwidth is free, so migration is very cheap within an AZ
– DMS requires a CDC method to be enabled on the source, like Oracle CDC or MySQL binlogs
– very impressive effort on these migration tools, with almost any combination of source and target possible now, including OnPrem, EC2 Classic, RDS, Aurora, and Redshift
– “for your migration tool, pick the smallest you think will work, and migrate when you need a bigger instance size”

Amazon Aurora and Amazon Database Migration Service
by Joyjeet Banerjee, Solutions Architect, AWS Migration Lab

– download and try SCT (the OLAP option is for Redshift)
– DMS Online Lab


Introduction to Amazon DynamoDB
by Sean Shriver, NoSQL Solutions Architect, AWS

– a key-value store; keys are strings up to 2 KB
– no longer closely related to the original Dynamo paper (most of the paper’s authors have since been promoted)
– 1 partition key, 5 GSIs, 5 LSIs per table
– GSIs and LSIs are separate tables internally
– GSIs need their own provisioned IOPS
– writes are charged per 1 KB, reads per 4 KB; reading from a GSI can reduce IO cost
– the partition key uses consistent hashing
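The billing bullet translates into simple arithmetic: a write capacity unit covers 1 KB and a (strongly consistent) read unit covers 4 KB, rounded up per item. This is my reading of the pricing model at the time, sketched:

```python
import math

WRITE_UNIT = 1024   # bytes covered by one write capacity unit
READ_UNIT = 4096    # bytes covered by one strongly consistent read unit

def write_units(item_bytes):
    """Write capacity units consumed by one item write."""
    return math.ceil(item_bytes / WRITE_UNIT)

def read_units(item_bytes, eventually_consistent=False):
    """Read capacity units consumed by one item read."""
    units = math.ceil(item_bytes / READ_UNIT)
    return units / 2 if eventually_consistent else units

print(write_units(3000))   # 3: a ~3 KB item costs three 1 KB write units
print(read_units(3000))    # 1: the same item fits in one 4 KB read unit
```

The asymmetry is why read-heavy workloads are so much cheaper to provision than write-heavy ones.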

Amazon ElastiCache
by Darin Briskman, Developer Evangelist, AWS

– used to be based on memcached, now Redis
– average operation: 480 microseconds for a 4 KB object, 240 microseconds for a 1 KB object
– max. 3.5 TB per server, in clusters of 15 servers; 20 million reads per second, 4.5 million writes
– 300,000 TPS
– HA is 1,000 little details; 999 doesn’t count
– “Fast Data”: sub-millisecond requests for IoT and mobile real-time info
– Alexa has a 1,500 ms budget, but 1,000 ms is the network trip, so 500 ms for calculation
– Alexa is DynamoDB + ElastiCache
memcached challenges:
1. no persistence
2. no HA
3. race conditions on threads
Thus Redis (“Remote Dictionary Server”)
Oracle, SQL Server, MySQL, then Redis
– AWS Redis persistence via snapshot to S3
– snapshots allowed up to 90% of RAM (even 95%), network-copied to an alternate node; allowed 20 snapshots per day
– replication for HA; usually 30 seconds to fail over
– “primary” and “replica” (they don’t like how “master-slave” sounds)
– 1 ms in the same AZ, 2-3 ms across AZs
– 55 DCs in an AZ in US-EAST
– new Intel chips 34% faster
– no cross-AZ data transfer costs, so similar cost to Classic EC2
– don’t use T2 for prod; use R or M
– keys are hashed with CRC16
– promotion is to the last-written replica, with a 15-second timeout in case of network problems
– string keys up to 512 MB, really just binary
– data types: hash, set, list, geo, hyperloglog
– could use Lambda to be notified of an OnPrem database update and invalidate the ElastiCache row
– IGA Works/Adpopcorn is a Korean mobile business platform: moment scoring on mobile users, including when to show ads
– Expedia’s real-time analytics with DynamoDB needed 35,000 writes, down to 3,500 with ElastiCache, a 6x cost savings; 200 million messages daily
– only a few airlines fly overseas, but there are lots of hotels at the destination; mom-and-pop agencies refresh Expedia as their backend, too; the 100 most popular routes are 50% of queries; TTL of 24 hours, updated 10 times per day
– one day of work and 5 days of testing
– beyond time-of-year caching, you may know the most popular teams/items
– or cache the whole database if small enough
– cannot yet add another node to go from, say, 5 shards to 6 shards, because data could be lost; maybe later
– “you don’t have to do anything. when you woke up later, it’ll be there.”
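The Expedia example is classic cache-aside with a TTL. A minimal in-process sketch of the pattern (a real deployment would use a Redis client; the route and fare names are illustrative):

```python
import time

class TTLCache:
    """Cache-aside with a per-key TTL: check cache, else fetch and store."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}            # key -> (expires_at, value)

    def get(self, key, fetch):
        now = time.monotonic()
        hit = self.store.get(key)
        if hit and hit[0] > now:
            return hit[1]                       # cache hit
        value = fetch(key)                      # miss: go to the database
        self.store[key] = (now + self.ttl, value)
        return value

db_calls = {"n": 0}
def query_db(route):
    db_calls["n"] += 1                          # stand-in for the expensive query
    return f"fares for {route}"

cache = TTLCache(ttl_seconds=24 * 3600)         # 24-hour TTL, as in the talk
cache.get("SFO-MNL", query_db)
cache.get("SFO-MNL", query_db)                  # second call served from cache
print(db_calls["n"])                            # 1
```

With the 100 most popular routes making up 50% of queries, even this naive policy cuts database load roughly in half.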

ElastiCache Best Practices

– set reserved-memory to 90% so writes can fit without eviction
– swap usage should be zero
– position a read replica in another AZ for HA
– a primary with 2 replicas gives 5×9’s
– avoid KEYS and other long-running commands
– not needed for tiny datasets, like 1 MB
– 50%-90% reduction in cost
– the speaker was a Solution Architect at IBM for 20 years; at AWS, he’s allowed to recommend ways to save money

Everything You Need for a Viral Game, Except the Game
by Darin Briskman, Developer Evangelist, AWS

– use DynamoDB and Redis
– WeChat runs on Redis
– Redis publish and subscribe commands: SUBSCRIBE to a channel, then PUBLISH to it
– Twitch offers hosted chat
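Redis pub/sub is fire-and-forget fan-out: every current subscriber of a channel receives each published message. Real code would use a Redis client’s subscribe/publish calls; this in-process model just illustrates the semantics:

```python
from collections import defaultdict

class MiniPubSub:
    """In-process model of Redis SUBSCRIBE/PUBLISH semantics."""
    def __init__(self):
        self.channels = defaultdict(list)   # channel -> subscriber inboxes

    def subscribe(self, channel):
        inbox = []
        self.channels[channel].append(inbox)
        return inbox

    def publish(self, channel, message):
        # Like Redis PUBLISH, returns how many subscribers received it.
        for inbox in self.channels[channel]:
            inbox.append(message)
        return len(self.channels[channel])

bus = MiniPubSub()
player1 = bus.subscribe("game:chat")
player2 = bus.subscribe("game:chat")
n = bus.publish("game:chat", "gg")
print(n)            # 2
print(player1)      # ['gg']
```

Note the fire-and-forget part: a message published to a channel with no subscribers is simply dropped, which is why game chat history usually also lands in a store like DynamoDB.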

– CloudTrail tracks every action, including DBA-level access to RDS, in JSON
– talk to your Solutions Architect, available to every AWS account holder

by Darin Briskman, Developer Evangelist, AWS

– the most downloaded Open Source app after the Linux kernel
– nice REST interface
– same code as Open Source, but managed in AWS
– AWS does green-blue deployments instead of red-black
– can resize
– AWS Answers
– Centralized Logging
– CloudSearch is Solr

Hands-on Labs: Amazon ElastiCache
by Darin Briskman, Developer Evangelist, AWS


AWS Talks link

Kudos to Kevin Jernigan and Darin Briskman for their excellent Aurora and ElastiCache talks – the best database talks I’ve ever seen.

This version of the AWS Loft is nice as far as “pop-up” conferences go – everything is hosted on the 2nd Floor, so no sprinting up and down stairs every 30 minutes.


– Windows 10 has OpenSSH support via Ubuntu (WSL). No PuTTY needed.

Getting there: 1446 Market St., SF. Take the Muni K or T line to Van Ness station, or take a Castro-bound bus on Market St.

Posted in Conferences, Linux, MySQL, Open Source, Oracle, Tech | Leave a comment

Advanced Swagger UI Techniques

The benefits of using Swagger/OpenAPI are consistent API specifications, validation, and documentation.

And the Swagger UI documentation tool initially looks very attractive cosmetically. However, there are no publishing or privacy-restriction features, because the Swagger API spec itself doesn’t support publishing controls.

That’s fine for Open source authors and most internal-only corporate users, but commercial sites will be unhappy without more control.

So it’s important to decide well before ship date whether Swagger UI will work for you, or whether you need to find another solution (i.e. you might find it to be “more hole than donut.”)

Swagger UI showing API endpoints (the colored bars) and Auth Dialog. Note the double Authorize buttons. Everything you see is live (auto-generated from the Swagger API spec file.)

Here’s a summary of the Swagger UI issues that I’ve observed when writing a non-trivial API:

Swagger UI Issues and Solutions:

1. Swagger UI makes your Swagger API spec file downloadable by end-users.
Solution: Not easily fixable. A half-step is to use Basic Auth for viewing it, but authenticated end-users can still save it to disk. A plan would be to render it server-side.
2. Swagger UI by default sends your Swagger API spec file to an external validation service.
Solution: Fixable with validatorUrl: null.
3. The “Try it out” buttons are active, letting anybody too easily send GET, DELETE, PUT, POST, and PATCH commands to your server, whether the request makes sense or not.
Solution: Fixable. See the supportedSubmitMethods parameter.
4. The default branding is for the Swagger project.
Solution: Fixable. Read the docs, or just use Firefox Inspector on the Swagger UI header and change the CSS/JS.
5. The UI tool is nice, but even nicer is professionally written text with detailed examples.
Solution: Not fixable. Outside the scope of Swagger UI.
6. The UI tool does not publish to a static file, and most commercial publishers want to provide a PDF file.
Solution: Not easily fixable, though you can load your Swagger spec file into the Swagger Editor, drag the left frame further left, “print as PDF”, and edit the text with LibreOffice.
7. No ability to restrict which paths or request methods are displayed.
Solution: Not easily fixable. You could export a minimal Swagger API spec file just for use with Swagger UI.
8. Add username and password auth.
Solution: Easy. Just add a securityDefinitions block in your Swagger spec file and define the key name in Swagger UI’s index.html. Note that multiple auth methods require showing and clicking multiple “Authorize” buttons (see screenshot above), since the Swagger specification treats multiple auth methods as a logical OR and your app has to sort them out.
9. Add API key auth.
Solution: Easy. Supported by current versions.
10. The URL bar shows the address of the Swagger spec file.
Solution: Fixable (hide the element with document. style.visibility= "hidden";), but end-users can still download your spec file.
11. Themes.
Solution: Some are available at swagger-ui-themes.
12. There can be at most one “body” parameter. The Swagger spec only allows one body element (formData or JSON), while you may want to allow both. Some parsers enforce that, and some don’t.

* by “not fixable” and “not easily fixable” I mean “total Swagger UI re-design and rewrite required.” :)

Getting Started with Swagger UI

  1. unzip or clone master to a public directory
  2. update dist/index.html with the location of your api.json file
  3. if you’re using a load balancer and see an error like “cannot call https from http”, change scheme from https to http in your Swagger API spec file.
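Putting fixes 2 and 3 from the table above together, the index.html configuration looks roughly like this. The option names `validatorUrl` and `supportedSubmitMethods` are real Swagger UI parameters, but the surrounding constructor differs between the 2.x and 3.x UIs, so treat this as a config sketch and check your version’s docs:

```javascript
// Swagger UI 3.x-style index.html configuration (sketch)
const ui = SwaggerUIBundle({
  url: "/spec/api.json",             // location of your Swagger API spec file
  dom_id: "#swagger-ui",
  validatorUrl: null,                // fix #2: don't send the spec to the external validator
  supportedSubmitMethods: ["get"]    // fix #3: disable "Try it out" for mutating verbs
});
```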

Customizing the Swagger UI

Test Parameter
Security Tokens
How to break swagger 2.0 JSON file into multiple modules
Tom Johnson’s Tutorial
Tuan’s Tip to Add Username and Password (Check spelling carefully)
A Visual Guide to What’s New in Swagger 3.0

Note: if you google or stackoverflow for help, ignore any bug reports about Swagger UI before 2016.

Swagger Editor Custom UI


Posted in API Programming, Open Source, Tech | Leave a comment

Free Mac OS X PDF Editors

When you need to edit a PDF, you really need to edit a PDF.

The free Preview app that comes with Mac OS X can annotate and do some simple operations on PDF files, but does not have a feature to edit the actual text.

I tried the following free or trial PDF editors on Mac OS X 10.8.5 for editing text in a 30-page Firefox “print to PDF.”

Product Recommendation Notes
Libre Office for Mac (Open Source) Recommended Does a great job of editing text
iSkysoft PDF Editor Trial Not Recommended Can edit text, but inserts large yellow watermark on save
Inkscape (Open Source) Not Recommended Can only edit first page “due to SVG spec” unless you install plugin

iSkySoft Watermark

Multiple page support for Inkscape

Posted in Tech | Leave a comment

Super Bowl LI 2017

Best Super Bowl that I can remember, with first Super Bowl Overtime.

The New England Patriots (QB Tom Brady) came from behind to win 34-28 over the Atlanta Falcons (QB Matt Ryan).

Atlanta scored 3 TDs in Q2:

  1. Freeman capped a drive with a flying landing in the end zone
  2. xxx ran into the end zone
  3. an 82-yard interception return

The Patriots kicked a 41-yard field goal for their 3 points.

Lady Gaga’s half-time performance was good, with surprising acrobatics.

Then the Patriots scored 31 unanswered points to win.

Commercials ($5 million for 30 seconds) weren’t too good, although John Malkovich trying to get his vanity domain from a domain squatter was pretty good.

Seemed like mostly cell phone, car and VR-related ads.

W: Super Bowl LI

Posted in Tech | Leave a comment

Odd but Handy URLs

Organization Link Purpose
Apple non-SSL ProbeURL for WiFi hand-shaking. Replaces success.html.
Google force plain English site (“no country redirect”)
IANA official domain for examples in documents
IANA official domain for examples in documents

How to fix “SuccessSuccess” Wi-Fi issue on MacOS X Mavericks temporally

Posted in Tech | Leave a comment

SpaceX Launch Begins Era of Space-Based ADS-B Tracking

The big news from the SpaceX launch of 10 second-generation Iridium NEXT satellites is that an Aireon ADS-B receiver was also included on each satellite.

ADS-B is used for tracking aircraft and for sending digital ATC information to airplanes. The FAA has mandated that almost all aircraft install ADS-B transceivers before 2020, at a cost of $5,000 to $1 million per airplane, plus downtime.

Since there are 150,000 registered US aircraft plus thousands of foreign airliners, that just isn’t going to happen with the existing number of Mx shops and the remaining 1,080 days. :)

These are fairly large satellites at 860 kg each:

SpaceX Launch Begins Era of Space-Based ADS-B Tracking
Iridium-1 Hosted Webcast
Layman HN Commentary
W: Automatic dependent surveillance – broadcast
ADS-B Frequently Asked Questions (FAQs)
New Satellites Promise Better ATC Coverage
Clocks fail on some Galileo satellites, backups working

Posted in Tech | Leave a comment

Perl and Monotonic Time Functions

Perl on Linux supports the POSIX C clock_gettime() function to get monotonic time values (an always-increasing system time, except for variable overflow):

Advantages of comparing monotonic time values:

  • avoids problems with NTP stepping the clock backwards for leap seconds, though the rate can still “warp”
  • avoids problems with VM time going backwards
  • can only be used locally, not compared across machines
  • can roll over on variable overflow

Disadvantages of clock_gettime() over time/gmtime:

  • rollover requires awareness and calculation
  • not supported on Mac OS X and buggy before RHEL 5.3
  • relative time, not actual time, so cannot be displayed for humans
  • for most programs, requires code change and re-QA
  • dichotomy still exists between system and database time
use strict;
use diagnostics;

use Time::HiRes qw(clock_gettime CLOCK_REALTIME CLOCK_MONOTONIC);

# CLOCK_REALTIME is wall-clock time; CLOCK_MONOTONIC only moves forward
my $realtime = clock_gettime(CLOCK_REALTIME);
my $mono     = clock_gettime(CLOCK_MONOTONIC);

print "realtime = $realtime, monotonic = $mono\n";

$ perl /tmp/
realtime = 1483451061.64625, monotonic = 4536159.37919642

Perl – Time::HiRes
clock_gettime(3) – Linux man page
Erlang – Postscript: Time Goes On The leap second of doom
SO: How do I get monotonic time durations in python?
SO: Linux clock_gettime(CLOCK_MONOTONIC) strange non-monotonic behavior
SO: Is CLOCK_MONOTONIC process (or thread) specific?
How the NYE leap second clocked Cloudflare – and how a single character fixed it
W: Swatch Internet Time (Beats)

Posted in API Programming, Linux, Open Source, Perl, Tech | Leave a comment

eBay Bucks Base Earnings Now 1%


“Changes to eBay Bucks Rewards Program starting January 1, 2017
Effective January 1, 2017 the Base earnings are changing from 2% to 1%.”

Guess I’ll be advising sellers to wait for 8% or 10% eBay Bucks days.

Last time I’ll see one of those.

Basic Economy Fares Don’t Lower Ticket Prices, They Increase Ticket Prices
The Champions of the 401(k) Lament the Revolution They Started

Posted in Tech | Leave a comment

Microservices: Java MicroProfile Links

Java Duke
From the MicroProfile FAQ:

“The MicroProfile is a baseline platform definition that optimizes Enterprise Java for a microservices architecture and delivers application portability across multiple MicroProfile runtimes.

The initially planned baseline is JAX-RS + CDI + JSON-P, with the intent of community having an active role in the MicroProfile definition and roadmap.”

Looks more like a REST bundle to me, one that avoids fixing Java’s inherent flaws:

  1. long GC pauses (seconds)
  2. bloated memory consumption (GBs)
  3. slow start-up time (seconds)
  4. crashes from exceeding pre-configured heap size
  5. licence confusion – is it Apache v2? EPL? “Copyrights are inconsistent at the moment”? what does Oracle say?

Home, FAQ
MicroServices-friendly Java lands on Eclipse
Eclipse MicroProfile

Posted in API Programming, Business, Cloud, GC Pauses, Java, Linux, Microservices, Open Source, Oracle, REST API Programming, Tech | Leave a comment

Aerodynamics: Rolling Gs

Avweb has an interesting article, ‘Extreme Maneuvering’, about practical applications of the FAA commercial aerobatic maneuvers (chandelles, lazy 8’s, steep spirals) that mentions “rolling G’s.”

A rolling G occurs when you maneuver an aircraft in more than one axis at a time, causing the airframe or wing to twist. The rolling G design limit is considered to be 2/3 of the normal G limit, according to FAR 23.

Although I performed the commercial maneuvers during my Commercial Airplane practical test and Citabria checkout, I wasn’t really aware of two things:

  1. airframe twisting from rolling G’s can more easily exceed a plane’s load limit. Those limits would be lower in older aircraft, possibly already damaged, than newer ones. It’s important to load the airplane one axis at a time.
  2. the commercial maneuvers can be used to reverse in a box canyon. I know a private pilot who crashed in a box canyon (he luckily survived) because he knew of no course reversal methods, so this is handy to know.

Chandelle (Climbing, Reversing Turn) Animation

Advanced Section

The asymmetric lift, resulting in a torque, caused by the ailerons travelling up and down simultaneously with yawing and pitching maneuvers is believed to have caused several airshow accidents in older airplanes, shearing the wing spar. Contributing factors are the acceleration rate of the control movement, airspeed above maneuvering speed, Va, and wing and fuselage harmonics. Sideslip also affects G limits.

It would be difficult to calculate actual rolling-G limits without destroying several aircraft to build a mathematical model. There are a number of reasons for that, but primarily the problem is that dynamic torque must be calculated for multiple types of members, including spars, fuselage skin, and especially attach points. The latter is tricky because attach point hardware may be very strong in one axis, and very weak when loaded off-axis (or corroded.)

Why is Rolling G dangerous?
Normal G limits vs Rolling G limits?
W: Chandelle
Ice and Tail Stalls

Posted in Tech | Leave a comment

Ecommerce Weather Report for Manila in 2016

Current consumer ecommerce weather report for Manila in Dec., 2016 …

US ecommerce sites – Just Say No

Manila sellers are wary of Facebook Pages commissions on retail listings, and “meh” on eBay for the same reason. Craigslist is free but doesn’t have any mindshare in Manila.

Southeast-Asian ecommerce sites – Just Say Yes

So they’re going with Lazada, probably #1 in Manila; Shopee, which is ad-supported; and Carousell.

Shopee is owned by the Garena Group of Singapore. They have registered country-specific top-level domains (TLDs) for each supported Asian country.

How Shopee works:

  • buyers and sellers download the iPhone or Android mobile app, or use the web site, to upload and view listings
  • Shopee Customer Support, local to each Asian country, approves photos
  • buyers and sellers can apply for free shipping
  • Shopee shows ad banners for $$$$.

I got a tour of the Shopee office. It’s similar to Silicon Valley start-up offices, but has a staffed reception area. :)

Car Hire

The most popular car apps are Uber and Grab. Riders use car apps because buses and the MRT (train) are inadequate for longer commutes, and unsafe due to petty criminal gangs. Drivers see car apps as a way to pay off their car loan, and to kill time, due to rampant underemployment.

Uber is rumored to have increased Manila traffic by the equivalent of 19,000 cars. Rush hour used to be 7 am to 9 am and 5 pm to 8 pm. Now it’s 6 am to midnight. (In the USA, there have been mixed reports of increased traffic: SF is reported to have problems, Phoenix fewer.)

Uber passengers used to cancel arriving sub-compact hatchbacks like the Toyota Wigo (MSRP USD$10,000) in favor of sedans, but the Wigo is getting more respect 2 years after market introduction.

Garena’s Shopee could be on its way to beating Carousell in Asia

Posted in Business, Tech | Leave a comment

Notes for Installing Percona Xtradb Cluster 5.7 on CentOS 5

Percona supports Percona Xtradb Cluster 5.7 on CentOS 6 and CentOS 7, but not CentOS 5.

You can install the RPMs or the tarball binary, but on start you will see various package dependencies that can’t be resolved, on openssl and others.

So your options are:

  1. upgrade your OS to CentOS 6 or 7 64-bit first (recommended)
  2. downgrade and install Percona Xtradb Cluster 5.6 with yum install Percona-XtraDB-Cluster-server-56
  3. not recommended, but if you’re stubborn about clinging to CentOS 5.x and you’re a programmer, you can build Percona Xtradb Cluster 5.7 from source. You will need (at least) cmake 2.8.2+ and boost 1.59+; gcc 4.4 or clang 3.3 is recommended.

Here are the build instructions that compiled for me with gcc 4.1.2:

1. Download and install cmake 2.8.2 or higher from source first:

yum remove cmake
wget --no-check-certificate
tar zxvf cmake-3.7.1.tar.gz
cd cmake-3.7.1
./bootstrap && make && make install
cd ..

2. Download and install boost 1.62 from source:

yum remove boost
wget --no-check-certificate
yum install p7zip
7za x boost_1_62_0.7z
cd boost_1_62_0
./ --prefix=/usr/local
./b2 install
cd ..

3. Build Percona-XtraDB-Cluster-5.7 source like this:

wget --no-check-certificate
cd Percona-XtraDB-Cluster-5.7.16-27.19
# remove new gcc 4.4 flag -Wvla:
# -Wvla
#    Warn if variable length array is used in the code. -Wno-vla will prevent the -pedantic warning of the variable length array. 
perl -i.orig -p -e 's/-Wvla//g' `find . -name maintainer.cmake`
cmake . -DMYSQL_DATADIR=/var/lib/mysql
mv boost_1_59_0 /tmp
# fix the boost and gcc version errors. Just replace with your versions.
vi cmake/os/Linux.cmake +27
vi cmake/boost.cmake +265
# insert 2 "out-of-scope" macros os_compare_and_swap_thread_id and os_compare_and_swap from storage/innobase/include/os0atomic.h into these 2 source files:
vi storage/innobase/lock/ +1904
vi storage/innobase/trx/ +204
# define os_compare_and_swap(ptr, old_val, new_val) \
        __sync_bool_compare_and_swap(ptr, old_val, new_val)

#  define os_compare_and_swap_thread_id(ptr, old_val, new_val) \
        os_compare_and_swap(ptr, old_val, new_val)
make -j 8
make test
# the new server is located at sql/mysqld
make install
# note that 5.7 has a new grants schema, so your old database won't work until upgraded
# in /etc/init.d/mysql, bindir=/usr/local
Posted in Linux, MySQL, MySQL Cluster, Open Source, Oracle, Tech | Leave a comment

Star Wars ‘Rogue One’ Review

I don’t often go to the movies, but saw ‘Rogue One’ with a date.

The first half seemed kind of slow and disconnected, dealing with various rebel assassination plots (!) on Jedha and Eadu. Good visuals but weak story-telling.

The protagonist, Jyn Erso, portrayed by Felicity Jones, manages to have her mother killed in front of her yet comes across as unsympathetic and uninvolved throughout most of the film. I’d rather watch paint dry.

However, the second half dealing with the invasion and destruction of the Imperial base at Scarif and testing of the Death Star was riveting.

Darth Vader’s brutal but ultimately futile light-saber fight scene at the end will please action fans.

And seeing a youthful Princess Leia at the end receiving the Death Star plans was a nice surprise.

The rebels’ companion robot, a re-programmed Imperial model named K-2SO, was intelligent and funny enough to be unsettling. Admiral “Fishlips” Raddus, a Mon Calamari, also provided comedic distraction.

Admiral “Fishlips” Raddus. Photo Credit: Lucasfilm

W: Rogue One
IMDB: Rogue One
Can we talk about that final Darth Vader scene in Rogue One?
Rogue One is Star Wars for Better and for Worse
Rogue One: Meet Admiral Raddus, the character inspired by Winston Churchill

The story behind Princess Leia’s hairstyle

Keywords: General Fishlips

Posted in Tech | Leave a comment

Cessna Skycatcher 162 Inventory Crushed

The sorry tale of the Cessna 162 Light Sport Aircraft (LSA) has finally concluded. The remaining 80 airplanes have been crushed with a backhoe outside the Chinese factory, including the installed engines and avionics.

It’s believed that liability insurance and parts support didn’t pencil out for the accountants. Crushing solved the liability problem, and also any agreements with suppliers like Continental and Garmin prohibiting resale.

Kind of a shame they couldn’t have sold them to the Chinese government for $1 for use in flight training in exchange for indemnity from lawsuits.

Of the original 1,000 projected orders, 200 were actually delivered and 80 crushed.

There were many problems with the 162:

  1. capabilities were Day and Night VFR only, not IMC
  2. 1,320 pounds gross weight only left room for one American after full fuel with this design
  3. flight schools required a separate check-out, even if you had 152 and 172 experience. This involved additional expense and searching for a slender CFI
  4. the price was high for flight schools given the above limitations. The 162 had teething problems, and some owners had to replace the ADAHRS twice
  5. assembled in China, unlike most trainers

Crushing C162 with a Backhoe

Cessna Scraps Unsold Skycatchers
Crushing More Than Airplanes
Skycatcher’s Demise: Barely a Ripple [2013]

Posted in Tech | Leave a comment

Mac OS X TextEdit Reads and Writes Microsoft Word Formats

TextEdit Icon
Wow. Who knew the little TextEdit application supports .doc, .docx and PDF formats?

This actually worked for me today:

  1. imported some Word 97/5.0 business documents
  2. updated and saved them
  3. exported them as PDF.

So you can do light but professional documentation, invoicing, etc. without installing additional software.

For multi-column and chart formatting, just use the Format => Table… menu option, similar to old-school HTML layout. You can set the table cell borders to 0 pixels to make them disappear.

Bonus tip: Preview, which is also included with Mac OS X, can be used to professionally annotate images. If you’re a manager or engineer, you will love the results. Just click on Tools … Annotate.

Opening DOCX Files on a Mac, Without Microsoft Office

Posted in Business, Tech, Toys | Leave a comment

Storage: Erasure Encoding Acceleration with Intel CPUs

There are basically 3 ways to store online data, where “storage” includes block and/or network locations:

  1. filesystems on top of blocks (zfs, xfs, ext4)
  2. object stores across storage (OpenStack Swift, Backblaze, S3)
  3. files erasure-encoded across storage networks (CleverSafe)

CleverSafe struggled along with VC funding until Oct. 2015, when it was bought by IBM for $1.3 billion.


An HN commenter has done us a favor by listing a handful of links to Intel CPU acceleration techniques useful for computing #3:

“At a glance, this seems like a clear explanation of using standard SIMD instructions to solve the problem, but I think the landscape has changed since this was written such that there are now better approaches.

In 2010, Intel released processors with a dedicated instruction for “packed carry-less multiplication.”

Unfortunately, the early implementations (through Sandy Bridge) were slow, and could be beaten by combining other SIMD operations as shown in this paper.

With the Haswell generation released in 2013, though, PCLMULQDQ got much faster. Instead of being able to complete one instruction every 8 cycles, it became possible to finish one every 2 cycles (inverse throughput went from 8 to 2). This 2015 paper “Faster 64-bit universal hashing using carry-less multiplications [PDF]” shows the difference this makes:

If you are looking for an explanation of how the problem could be solved with the basic building blocks of SIMD, the 2013 Plank, Greenan, Miller paper might be a good resource. But if you are hoping to implement a high-performance solution for modern processors, the 2015 Lemire and Kaser paper is probably a better starting point.

(This is with the caveat that I don’t actually understand the theory or terminology of Galois fields, and maybe there is something about applying it to Erasure Coding that makes the faster PCLMULQDQ approach inapplicable.)”
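The Galois-field math is beyond a quick demo, but the simplest erasure code, single-parity XOR (conceptually RAID 5), can be shown in a few lines of shell; the shard values are arbitrary examples:

```shell
#!/bin/sh
# Single-parity XOR, the simplest erasure code. Reed-Solomon generalizes
# this idea using Galois-field arithmetic instead of plain XOR.
d1=170                    # data shard 1 (0xAA), arbitrary example value
d2=85                     # data shard 2 (0x55)
p=$(( d1 ^ d2 ))          # parity shard: XOR of all data shards
recovered=$(( p ^ d2 ))   # "lose" shard 1, rebuild it from parity + survivor
echo "parity=$p recovered=$recovered"
```

Losing any single shard, data or parity, is recoverable; losing two is not, which is why Reed-Solomon adds more parity shards.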

FAST-2013: Screaming Fast Galois Field Arithmetic Using Intel SIMD Instructions (2013) [PDF]
OpenStack Swift
IBM Buys CleverSafe
Patent Troll Kills Open Source Project On Speeding Up The Computation Of Erasure Codes

Keywords: Reed, Solomon, Galois, GFC.

Posted in API Programming, Cloud, Linux, Open Source, Storage, Tech, Toys | Leave a comment

apis.json File AKA sitemap.xml for APIs

apis.json is a site discovery format like sitemap.xml, but for your APIs. It is an open project by some ambitious API evangelists to create a new standard.

Steps to create your own apis.json file:

  1. look at existing files featured on, or for a complete example,
  2. validate your apis.json file
  3. contact the maintainers to add a link to your apis.json file.
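For orientation, a minimal apis.json skeleton might look like the following. Every value here is a placeholder and the field set is abridged, so check the current specification and the validator for what is actually required:

```json
{
  "name": "Example.com",
  "description": "APIs offered by Example.com",
  "url": "http://example.com/apis.json",
  "apis": [
    {
      "name": "Example API",
      "description": "Read-only data API",
      "humanURL": "http://example.com/api",
      "baseURL": "http://api.example.com/v1"
    }
  ]
}
```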

apis.json: validator, Google Group, github, Proposed Intranet Properties

API Evangelist
github: OpenAPI Specification

Posted in API Programming, Tech | Leave a comment

Retro: GeoCities Cage Photos [1999]

I had a chance to see the GeoCities Exodus 1 cage in 2000 when I was at eBay Payments.

It used LaCie JBODs stacked to the ceiling as storage devices, and a 3′ diameter floor fan to move the hot air into other customers’ cages. :)

$50 Floor Fan Protecting $millions in GeoCities equipment from outside their colo cage. What could go wrong?

Their cage left an impression on me, and demonstrated how:

  1. ghetto colo can work
  2. cages can achieve very high densities
  3. devices can work at very high temperatures for extended time intervals
  4. to work your colo provider (there’s no way they got prior approval for that floor fan!)

Below are some photos of one of their cages with Sun and NetApp gear:

The GeoCities Cage at Exodus Communications [1999]
HN Comments

Note their use of Veritas Volume Manager.

Until around 1998, linux did not have a journalled filesystem. I started evaluating Reiserfs 3 on Suse Linux at that time on my personal machine. A Suse salesrep a decade later refused to believe that anybody in the USA could have been using Suse back then. :)

The other cage from 2000 that left an impression on me belonged to a startup with a Sun E10k server ($2 million each, fully populated). I don’t think they ever launched a product, yet they had the same equipment as eBay’s main cage.

I was talking to some other sysadmins with gear at Exodus 3 and 4, and they mentioned a lot of customers also built out their colo but never launched.

Dedicated Internet Access & Hosting Agreement between Exodus and GeoCities
W: Exodus Communications

Posted in San Jose Bay Area, Storage, Tech, Toys | Leave a comment

TransAsia Airways Shuts Down After Two Horrific Accidents


TransAsia Shuts Down Amid Safety, Financial Problems

Posted in Tech | Leave a comment

GitLab Validating Ceph in Production For Me

Spikes are Outages. OSD = Ceph Object Storage Daemon

  • It would be easy for me to criticize GitLab for using a distributed file system in production, especially Ceph, in AWS. I just wouldn’t roll that way.
  • And it would be easy for me to say, “I told you so.” again about AWS latency being a performance killer. It’s physics.

    After all, when you yoke a bunch of water buffalo together, your team is only as fast as the slowest buffalo.

    But I find it fascinating and convenient that they’re doing all that distributed file system testing for me. Thanks, guys! :)

    On the plus side, supporting a distributed file system is almost possible on homogeneous hardware …

    Here’s some free consulting from somebody who works on x,000 to xx,000-server data centers:

    1. buy hardware compatible with Ceph
    2. use 10 Gbps switch ports
    3. use cluster-dedicated switches
    4. hire somebody already doing it now
    5. don’t goof up your health-checks. Include all healthy servers, not just the healthiest one
    6. or instead of using Ceph or Gluster, do it right. Implement Backblaze’s object store design. Invert the problem from being “the network and OSD has to always work” to something tractable like “my HTTP API has to work most of the time”. And use a combination of Arista Clos network design and HAProxy as the mesh router to avoid network hotspots and SPOFs. Non-blocking and “Propah!” with multi-terabits per second sustained throughput! Now we’re talking! 😎
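As a rough sketch of item 6, fronting storage nodes with an HTTP API behind HAProxy could start from a haproxy.cfg fragment like this; the backend names and addresses are invented:

```
frontend object_api
    bind *:8080
    mode http
    default_backend object_nodes

backend object_nodes
    mode http
    balance roundrobin
    option redispatch
    server store1 10.0.0.11:8080 check
    server store2 10.0.0.12:8080 check
```

The point is that a failed node becomes an HTTP retry against another replica, not a cluster-wide filesystem event.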

    “There is a threshold of performance on the cloud and if you need more, you will have to pay a lot more, be punished with latencies, or leave the cloud.” How We Knew It Was Time to Leave the Cloud
    HN discussion (with Cloud Apologists)
    Proposed server purchase for HN

Posted in Cassandra, Cloud, Open Source, Storage, Tech, Toys | Leave a comment

    Congrats to Cirrus on Type Certificate for SF50 Jet

    Cirrus SF50 Vision Jet – almost actual size!

    Congrats to Cirrus Aircraft on receiving an FAA type certificate for their new SF50 Vision Jet.

    It’s a short-range single-engine 4-seat passenger jet for $1.5 million with a parachute that can be operated for $660/hour.

    I’ve been following the news on this during the last decade of development. General Aviation (GA) moves slowly, but the SF50 and HondaJet show that eventually small aircraft do get certified.

    The numbers:

    1. 300 KTAS
    2. 2036′ runway
    3. 67 knots stall speed
    4. FL280
    5. FIKI
    6. cabin height is only 4.1′.

    Obviously it’s intended for existing Cirrus owners who want to step up to a jet.

    It uses a Williams FJ33-5A jet engine, similar to the engine planned for the original $800,000 Eclipse 500 jet. Williams is a cruise-missile engine manufacturer, so it has lots of experience manufacturing small jet engines, but not much experience with passenger airplane maintenance.

    Boeing Business Jet (BBJ)
    The Cirrus SF50 is not like this, a Boeing Business Jet (BBJ)

    Cirrus was bought by a Chinese state company, AVIC, in 2011.

    China has bought most of the American GA manufacturing capacity out of bankruptcy in the past decade to position itself for growth in the emerging Chinese civilian market, including Continental Engines (2010), Superior Air Parts (2008), Mooney (2013) and Diamond Canada (2016.)

    So far, that has resulted in much-needed investment, though it’s unclear what the long-term implications are for the USA.

    Cirrus SF50 Vision Jet: Learning From the Past
    Hourly operating costs of 45 jets compared
    Mooney’s Fortunes Tied to China
    Luxury VIP jets: How the super-rich fly
    Checking The China Acquisition Score Card
    Cirrus Delivers First Vision Jet

    Posted in Tech | Leave a comment

    Linux HTTP Load Testing with httperf

    httperf is an easy-to-use but powerful GPL2 command line (CLI) stress and load testing tool for Linux.

    Installing httperf

    CentOS 6:

    yum install httperf

    CentOS 7:

    rpm -Uvh rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
    yum install httperf

    Running httperf

    1. Always get permission from the site owner first before doing load testing
    2. It’s important to start by calibrating your tool first. Send one request and check the response:
    $ httperf --server --uri /index.php --print-request --print-reply -d10

    If you see non-200 HTTP responses, like the 301 example response below, then you need to ensure you have the correct --uri parameter:

    httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
    httperf: maximum number of open descriptors = 1024
    SH0:GET /index.php HTTP/1.1
    SH0:User-Agent: httperf/0.9.0
    SS0: header 83 content 0
    RH0:HTTP/1.1 301 Moved Permanently

    You can ignore the open files warning – it’s a bug in httperf. Just keep the load under 200 connections, or compile your own version from source.

    Now we’re ready to do concurrent testing:

    $ httperf --server --uri /index.php --num-conns 20 --num-calls 10 --rate 2 --timeout 5
    httperf --timeout=5 --client=0/1 --port=80 --uri=/blog --rate=2 --send-buffer=4096 --recv-buffer=16384 --num-conns=20 --num-calls=10
    httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
    Maximum connect burst length: 1
    Total: connections 20 requests 200 replies 200 test-duration 10.675 s
    Connection rate: 1.9 conn/s (533.8 ms/conn, <=4 concurrent connections)
    Connection time [ms]: min 1175.2 avg 1266.2 max 1728.3 median 1179.5 stddev 179.3
    Connection time [ms]: connect 63.4
    Connection length [replies/conn]: 10.000
    Request rate: 18.7 req/s (53.4 ms/req)
    Request size [B]: 73.0
    Reply rate [replies/s]: min 18.2 avg 19.1 max 20.0 stddev 1.3 (2 samples)
    Reply time [ms]: response 120.3 transfer 0.0
    Reply size [B]: header 238.0 content 0.0 footer 0.0 (total 238.0)
    Reply status: 1xx=0 2xx=0 3xx=200 4xx=0 5xx=0
    CPU time [s]: user 2.36 system 8.30 (user 22.1% system 77.7% total 99.9%)
    Net I/O: 5.7 KB/s (0.0*10^6 bps)
    Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
    Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

    Always check for non-zero error counts.
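That check is easy to script. The report text in the here-doc below is pasted from the run above, so only the parsing is new:

```shell
#!/bin/sh
# Pull the total error count from httperf's "Errors: total" line and
# complain if it is non-zero. The here-doc is sample output from above.
report=$(cat <<'EOF'
Total: connections 20 requests 200 replies 200 test-duration 10.675 s
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
EOF
)
total=$(printf '%s\n' "$report" | awk '/^Errors: total/ {print $3}')
if [ "$total" -eq 0 ]; then
    echo "PASS: no httperf errors"
else
    echo "FAIL: $total httperf errors"
fi
```

In practice you would pipe real httperf output into the same awk pattern and fail the test run on a non-zero count.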

    Going Pro

    After you're comfortable using httperf, here's how to take it to the next level:

    1. use a dedicated physical machine separate from your subject under test to reduce intrusive latencies, and tail the server logs in separate terminal windows. Graph CPU and RAM consumption of the subject.
    2. build your own version of httperf with your preferred options. On CentOS 7:
      git clone
      cd httperf
      # read
      sudo yum install automake openssl-devel libtool
      libtoolize --force
      autoreconf -i
      ./configure
      make
      sudo make install
      read the links below and configure open files, port range and TCP timeout
    3. do runs 3 times at different times of the day and/or seasons
    4. again, always check for non-zero error counts
    5. add load and stress testing to your server and application deployment checklists. There's always some kind of surprise just waiting to be discovered. :)
    Advanced Notes
    1. test tools are one of those things where you really need the source code to get what you want
    2. Running strace httperf ..., we see that httperf does polling with the select() system call. Hmm ...
      select(4, [3], [], NULL, {0, 0})        = 0 (Timeout)
      select(4, [3], [], NULL, {0, 0})        = 0 (Timeout)
      select(4, [3], [], NULL, {0, 0})        = 0 (Timeout)
    stress test your web server with httperf
    SO: Changing the file descriptor size in httperf
    Increase "Open Files Limit"
    The USE Method

    Posted in API Programming, Cloud, Linux, Open Source, Tech | Leave a comment

    The first ever photograph of light as both a particle and wave

    Magnified image of electrons interacting with a standing photonic wave along a thin wire. The standing wave shows the wave nature of light, and the coloration measures the change in velocity as photons interact with electrons (particles)

    The first ever photograph of light as both a particle and wave [2015]

    Posted in Tech | Leave a comment

    Basic JMeter Load Testing of Web Sites and Rest APIs

    This is an intro to load testing with Apache JMeter, an Open Source load testing tool.

    As a developer, QA or Operations engineer, it’s important to be familiar with what load testing tools can do, and to know how to configure a few actual tools.

    I usually reach for a load testing tool in the following scenarios, which appear similar, but really are very different. You can divide stress and QA testing into 4 categories:

    1. I want to know what will happen when 100 requests are sent to a single or handful of endpoints in a brief time interval, usually one second (connection and configuration testing)
    2. I want to know what will happen under sustained load of 50 requests/second to a single or handful of endpoints, for typically 5 minutes (performance testing)
    3. I want to know that all of an application’s pages respond successfully, typically using 1 thread (application testing)
    4. I want to know with how many synthetic users an application’s pages still respond successfully under load, typically 20 – 100 threads (application load testing)

    Note that I don’t consider a simulated load to be meaningful for predicting human loads.

    For example, on one intranet project, 70,000 users were happy with a phone book web app that only load tested to 20 simultaneous users. The test was useful in indicating that nothing was misconfigured, but not useful in predicting how many people could actually use it.

    Load tests just tell me:

    1. if something is misconfigured or broken. If I get less than one response per second, or the server stops listening for a period of seconds, then we know there is a problem to investigate.
    2. numbers that I can compare to other runs over time.

    Why JMeter?

    JMeter is:

    • convenient (after the first time you learn it)
    • popular in the Java community, so worth being familiar with
    • Open Source (free)
    • extensible

    The disadvantages of JMeter are that:

    • it has a complex UI
    • Java GC pauses can affect results on longer test runs. You can mitigate that by setting up your tests in the UI, then running them from the command line as recommended.

    JMeter Installation

    1. check your Java version for 1.7 or 1.8 with java -version
    2. download and install JMeter
    3. read the JMeter Getting Started guide
    4. read the first 7 pages of the Basic Scripting with JMeter tutorial by Simon Knight
    5. setup an initial test using the JMeter UI. You must include a Response Assertion for a credible test. Then save it as a jmx file in bin/ (it’s an XML file with your settings.)


    Make 4 copies of your jmx file (Mysite.jmx here stands in for whatever name you saved):

     cp -p Mysite.jmx Mysite1.jmx
     cp -p Mysite.jmx Mysite2.jmx
     cp -p Mysite.jmx Mysite3.jmx
     cp -p Mysite.jmx Mysite4.jmx

    Edit each jmx file to customize the properties according to the 4 strategies I listed above:


    (If you want to invest some time, you can parameterize those as documented in the JMeter FAQ.)

    Create the following bash script so that you can run your test from the command line:

    # Program:
    rm -f mysite1.log
    ./jmeter -n -t Mysite1.jmx -l mysite1.samples.log -j mysite1.log
    grep "Thread Group" mysite1.samples.log | grep -v [O]K

    Running Tests

    1. Ensure you’re authorized before running any load test against a server you don’t own.
    2. bash
    3. Analyze the response codes and timings. Test samples will be in mysite1.samples.log, and reports in mysite1.log.
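For step 3, a one-liner can summarize the response codes. The log lines below are a fabricated CSV (JTL) sample with responseCode assumed to be the fourth column; real field order depends on your JMeter results configuration:

```shell
#!/bin/sh
# Count HTTP response codes in a JMeter samples log (CSV/JTL format
# assumed, responseCode in column 4). Sample lines are fabricated.
cat > mysite1.samples.log <<'EOF'
1484000000000,120,HTTP Request,200,OK,Thread Group 1-1,text,true,238,1
1484000000150,95,HTTP Request,200,OK,Thread Group 1-2,text,true,238,1
1484000000300,310,HTTP Request,500,Internal Server Error,Thread Group 1-3,text,false,512,1
EOF
awk -F, '{codes[$4]++} END {for (c in codes) print c, codes[c]}' mysite1.samples.log | sort
```

Anything other than 2xx (or an expected 3xx) showing up in the summary is worth investigating before trusting the timing numbers.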

    When to Run

    Every time a change is made to your environment, you should re-run the load tests. So include it as part of your release process checklist.

    Distributed Testing

    After you’re familiar with load testing using a single client with JMeter, you can learn about using multiple load test clients.

    Bonus – “Soak Testing”

    In the telco industry, historically new systems have undergone “soak testing.” This is operating test systems under a realistic load for one month or more to “provide a measure of a system’s stability over an extended period of time.”

    JMeter Too Difficult?

    There’s a couple options for people who want results without fussing with JMeter:

    1. Command Line Interface (CLI) – httperf
    2. Graphical User Interface (GUI) – Microsoft’s discontinued Web Application Stress Tool (WAST) aka “Homer” is an incredibly easy-to-use, distributed and powerful Windows graphical tool – “its ease of use means it actually gets used.” If you want to do load testing from Windows client machines, you can download it from here.

      WAST is so good that it has fans, which can’t be said for any other load test tool. It was replaced by Visual Studio Team System’s (VSTS) Test Manager.

    JMeter: FAQ, Best Practices
    SO: Load test with varying number of threads in JMeter

    Posted in API Programming, GC Pauses, Java, Open Source, REST API Programming, Tech | Leave a comment

    How to Build Linux rkt Container Manager on CentOS 6.7

    Installing the rkt container manager on CentOS 6.x with yum will give you this error:

    # yum -y install go rkt
    # rkt run
    rkt: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by rkt)

    glibc is not something you can easily upgrade yourself, but you can build rkt from source. On CentOS 6.6 and 6.7, this works:

    sudo yum install -y go squashfs-tools libacl-devel glibc-static trousers-devel
    wget &&
    tar zxvf - < v1.15.0.tar.gz &&
    cd rkt-1.15.0 &&
    echo "insert 'echo' at line 5667 to workaround old autogen bug"
    vi configure
    ./configure --disable-sdjournal --with-stage1-flavors=fly --disable-tpm &&
    echo 'readlink -f "$@"' > realpath &&
    chmod +x realpath &&
    export PATH=$PATH:. &&
    # now test by downloading a Docker Ubuntu image and running it (requires about 256 MB RAM)
    sudo ./build-rkt-1.15.0/target/bin/rkt run --interactive docker://ubuntu --insecure-options=image

    CoreOS Issue #1063: build with old glibc so rkt runs on CentOS 6?

    Posted in API Programming, Cloud, Open Source, Tech | Leave a comment

    Linux rkt on CentOS7 is Just Too Easy

    The rkt (pronounced “rocket”) container manager is just too easy to run on CentOS7!

    Here’s me running a Docker Ubuntu 16.04.1 LTS image on CentOS7 (Dell 1950 III with 8 GB RAM on 100 Mbps Internet connection) for the first time in under a minute. The Ubuntu Docker image actually starts in 3 seconds once downloaded.

    Download the rkt RPM then …

    # rpm -Uvh rkt-1.18.0-1.x86_64.rpm
    # cat /etc/redhat-release 
    CentOS Linux release 7.2.1511 (Core) 
    # uptime
    06:16:04 up 141 days, 51 min, 1 user,load average: 0.18, 0.26, 0.22
    # rkt run --interactive docker://ubuntu --insecure-options=image
    Downloading sha256:6bbedd9b76a [================] 49.9 MB / 49.9 MB
    Downloading sha256:fc19d60a83f [================]     824 B / 824 B
    Downloading sha256:668604fde02 [================]     160 B / 160 B
    Downloading sha256:de413bb911f [================]     444 B / 444 B
    Downloading sha256:2879a7ad314 [================]     678 B / 678 B
    root@rkt-2ee79be0-a70b-44be-90fd-1a1a54c17216:/# cat /etc/os-release 
    VERSION="16.04.1 LTS (Xenial Xerus)"
    PRETTY_NAME="Ubuntu 16.04.1 LTS"
    root@rkt-2ee79be0-a70b-44be-90fd-1a1a54c17216:/# uptime
    06:16:27 up 141 days, 52 min, 0 users,load average: 0.25, 0.27, 0.22
    root@rkt-2ee79be0-a70b-44be-90fd-1a1a54c17216:/# ps -ef
    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 06:16 ?        00:00:00 /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0
    root         3     1  0 06:16 ?        00:00:00 /usr/lib/systemd/systemd-journald
    root         5     1  0 06:16 console  00:00:00 /bin/bash
    root        13     5  0 06:16 console  00:00:00 ps -ef
    root@rkt-2ee79be0-a70b-44be-90fd-1a1a54c17216:/# exit
    # cat /etc/redhat-release 
    CentOS Linux release 7.2.1511 (Core) 
    # uptime
    06:16:58 up 141 days, 52 min, 1 user,load average: 0.15, 0.24, 0.22

    Is 0.0% memory usage light-weight enough? :)

    # ps aux | egrep -e "[U]SER|[r]kt"
    root      6711  0.4  0.0  41772  2244 pts/0    S+   07:22   0:01 stage1/rootfs/usr/lib/ stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal=try-guest --quiet --uuid=40660e8b-09c0-46c2-893e-53de6d4068ff --machine=rkt-40660e8b-09c0-46c2-893e-53de6d4068ff --directory=stage1/rootfs --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0

    rkt prepare
    Get Started with rkt Containers in Three Minutes
    build with old glibc so rkt runs on CentOS 6?

    Posted in Cloud, Linux, Open Source, Tech | Leave a comment

    Linux Graceful Service Shutdown Techniques

    When doing server upgrades with multiple servers, the ideal way is to:

    1. take one instance out of the pool
    2. drain connections on it
    3. upgrade it
    4. put it back into the pool
    5. back to #1.

    The various techniques can be categorized as:

    1. application-level
    2. load-balancer-level
    3. OS-level

    The most graceful method is to use an application-level feature, since the application knows what its worker status is.

    For example, with httpd on CentOS or Redhat, either use the apachectl command, or add the graceful-stop option to /etc/init.d/httpd:

    set -e
    echo "info: Draining connections ..."
    apachectl graceful-stop
    echo "info: You have 5 minutes to start and finish your upgrade."
    sleep 300
    apachectl start
    echo "info: httpd restarted!"
    exit 0

    If we didn’t have an application-specific way to do that, we could use iptables:

    set -e
    iptables -I INPUT -j DROP -p tcp --syn --destination-port 80
    echo "info: Draining connections ..."
    sleep 60
    echo "info: You have 5 minutes to start and finish your upgrade"
    sleep 300
    iptables -D INPUT -j DROP -p tcp --syn --destination-port 80
    echo "info: iptables allowing new incoming connections!"
    exit 0

    With HAProxy we can do this on the HAProxy host (do yum -y install socat first):

    set -e
    echo "set server application-backend/www0 state drain" | socat unix-connect:/var/run/haproxy.sock stdio
    echo "info: Draining connections ..."
    sleep 60
    echo "set server application-backend/www0 state maint" | socat unix-connect:/var/run/haproxy.sock stdio
    echo "info: You have 5 minutes to start and finish your upgrade"
    sleep 300
    echo "set server application-backend/www0 state ready" | socat unix-connect:/var/run/haproxy.sock stdio
    echo "info: haproxy allowing new incoming connections!"
    exit 0

    Sample HAProxy “show stat” output while www0 is draining (notice the “DRAIN” status):

    [root@gw ~]# echo "show info" | socat unix-connect:/var/run/haproxy.sock stdio
    Name: HAProxy
    Version: 1.5.10
    [root@gw ~]# echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio
    # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
    application-backend,www0,0,0,0,1,5000,3,583,1078,,0,,0,0,0,0,DRAIN,1,1,0,0,0,385,0,,1,4,1,,3,,2,0,,1,L7OK,301,0,0,2,1,0,0,0,0,,,,0,0,,,,,442,Moved Permanently,,0,0,0,1,
    application-backend,www1,0,0,0,1,5000,4,709,18640,,0,,0,0,0,0,UP,1,1,0,0,0,755,0,,1,4,2,,4,,2,0,,1,L7OK,301,0,0,3,1,0,0,0,0,,,,0,0,,,,,141,Moved Permanently,,0,1,8,8,

    For nginx:

    set -e
    echo "info: Draining connections ..."
    nginx -s quit
    echo "info: You have 5 minutes to start and finish your upgrade."
    sleep 300
    nginx   # note: there is no "-s start" signal; starting nginx is just running the binary
    echo "info: nginx restarted!"
    exit 0

    If you’re using a configuration management system, like puppet or Chef, you can remove the service from your load balancer pool. This works well in practice with only 2 or 3 servers, though draining is usually not considered.

    Note that when using the popular “reverse HAProxy” setup with application servers running HAProxy on localhost, and HAProxy forwarding localhost requests to the real servers (like httpd), then you want to stop or block the httpd services on the real server end. Otherwise you would have to make changes on multiple application servers.

    In a future post, I’ll discuss zero-downtime deploys.

    Drain connections on restart of NGINX process? (with iptables)
    Tomcat’s Graceful Shutdown with Daemons and Shutdown Hooks
    Get haproxy stats/informations via socat
    Go net/http: add built-in graceful shutdown support to Server #4674
    HAProxy Socket Commands

    Posted in API Programming, Business, Cloud, Java, Linux, Microservices, Open Source, Tech | Leave a comment

    Solving Java GC Pause Outages in Production

    Java Duke
    Just thinking about how to configure HAProxy with two backend Java servers to be HA, despite GC pauses.

    Java programs pause periodically so the runtime can reclaim memory from unused objects, a process known as garbage collection (GC). Each stop is called a “GC pause.”

    The description “Stop the World” (STW) illustrates their true severity – GC pauses are a slow-motion train wreck for incoming requests. They can last from hundreds of milliseconds to minutes, and require intense CPU activity.

    Executive Summary:

    • If you have a latency-sensitive requirement, don’t use Java – use C or Go 1.8+ [GC benchmarks]
    • If you want to use Java, follow the best programming practices listed below to reduce garbage collection pause time, or consider paying Azul $3,500/server
    • HAProxy can be used with option redispatch to load balance across multiple Java servers to maintain availability during GC pauses. You can either use the HAProxy drain feature for rolling deployments, or in more complex setups, iptables.
    • Bonus tip: Java GC pauses don’t only impact your application, they also affect their entire environment like a grenade – performance tools written in Java pause, tomcat pauses, even reflection APIs are paused.

    If you’re new to this topic, please read:

    Willy: “I work with people who use a lot of Java applications, and I’ve seen them spend as much time on tuning the JVM as they spend writing the code, and the result is really worth it.” Anybody have some extra time? 😐

    My operational requirements for Java in production are:

    1. understand GC pause activity for my application servers
    2. control GC pause activity to a reasonable and bounded extent
    3. configure HAProxy load balancer to not send requests to servers undergoing GC pauses (ie. don’t lose requests)
    4. use an affordable amount of RAM to accomplish the above, preferably 8 or 16 GB in a shared VM environment.

    1. Understand GC pause activity for my application servers

    Detailed GC logging and heap dump on OOM can be enabled with:

    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError

    and you can specify a separate GC log with:

    -verbose:gc -Xloggc:/tmp/gc.log

    See “Understanding Garbage Collection Logs.”
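Once logging is enabled, even grep can pull out pause durations. The log lines below are a fabricated Java 8 parallel-collector sample, so adjust the pattern to your JVM’s actual log format:

```shell
#!/bin/sh
# Extract stop-the-world pause durations from a -XX:+PrintGCDetails log.
# The log lines here are a fabricated Java 8 parallel-collector sample.
cat > /tmp/gc.log <<'EOF'
12.345: [GC (Allocation Failure) [PSYoungGen: 262144K->21504K(305664K)] 262144K->21560K(1005056K), 0.0312345 secs]
320.456: [Full GC (Ergonomics) [PSYoungGen: 21504K->0K(305664K)] [ParOldGen: 699392K->120832K(699392K)] 720896K->120832K(1004544K), 1.2345678 secs]
EOF
grep -oE '[0-9]+\.[0-9]+ secs' /tmp/gc.log
```

Feeding those durations into your metrics system turns “is GC hurting us?” from a guess into a graph.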

    2. Control GC pause activity to a reasonable and known extent

    One of the biggest challenges is to control the frequency, duration and intensity of GC pauses …

    Some Java configuration approaches:

    • set the heap size and compaction percent only somewhat above what you need. That will cause GCs to be more frequent, but also faster. Or the opposite …
    • set heap size to large amount and compaction to 100%, then trigger GC after hours
    • investigate alternate JVMs.

    An example of some of the tuning options:

    java -Xms512m -Xmx1152m -XX:MaxPermSize=256m -XX:MaxNewSize=256m

    JRockit JVM: Tuning For a Small Memory Footprint
    Tuning Java Virtual Machines (JVMs)
    Weblogic Tuning JVM Garbage Collection for Production Deployments

    Programming best practices to reduce GC pauses:

    • use streaming file IO with Files.lines() instead of reading into a String or hashmap, or use memory-mapped files
    • rewrite portions of your application to use StringBuilder (or StringBuffer where thread safety is required) instead of repeated String concatenation
    • Reduce object copies – if you do not have a problem with thread safety, then you don’t need immutable objects.
    • call dispose() method when available, such as SWT image class
    • for HashMaps, call clear() to re-use the memory later, but set to null to GC it
    • split java server into real-time and batch servers where possible with appropriate minimal heap sizes for each
    • preallocate array memory using the length parameter to potentially avoid re-copying the entire array for new elements
    • Note that Java debuggers change the lifetime of variables so that they can be viewed longer scope-wise. Caveat emptor.

    3. Configure HAProxy load balancer requests to not be sent to servers undergoing GC pause events

    The first thing to do is to read up on HAProxy’s option redispatch feature. Continue reading for more in-depth considerations.

    This is tricky for several reasons:

    • health checks can be passive or active. Both have check gaps that won’t notice a GC starting before a request is sent
    • even if GC notifications are enabled and the server health check is red, HAProxy will not know (see above)
    • even if GC notifications are enabled and the server health check is now green, HAProxy will not know (see above) :)
    • the HAProxy options log-health-checks and redispatch may be helpful

    a) Some things to think about:

    1. understand your GC pattern
    2. use HAProxy socket interface to drain, then disable one backend
    3. wait for zero connections
    4. force a GC (easier said than done in Oracle Java since System.gc() is only a request for GC), or restart the Java server
    5. use HAProxy socket interface to enable the Java server.

    This method would be risky with two Java servers, since during maintenance on one server, the other could GC pause. (facepalm)

    b) Another possible approach would be to handle MemoryPoolMXBean MEMORY_THRESHOLD_EXCEEDED events. If you reliably had advance notice, maybe that could update the health check on the server side, send a drain socket request to HAProxy, and then force a GC via the JVM Tool Interface’s ForceGarbageCollection().

    c) And another idea is to write a sentinel file every 250 ms, and if it reaches 750 ms, assume a GC is happening and drain HAProxy. Unfortunately the TI events GarbageCollectionStart() and GarbageCollectionEnd() are sent after the VM is stopped, so you’re limited in what you can do when you need the most flexibility.
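Idea (c) can be sketched entirely outside the JVM. The path and thresholds below are illustrative; in production the writer would be a high-priority JVM thread, not a shell loop:

```shell
#!/bin/sh
# Sketch of the sentinel-file heartbeat: a writer updates a timestamp
# every 250 ms; a checker treats staleness over 750 ms as a likely GC
# pause. The file path and thresholds are illustrative.
sentinel=/tmp/jvm.heartbeat
( for beat in 1 2 3 4; do date +%s%N > "$sentinel"; sleep 0.25; done ) &
writer=$!
sleep 0.5                       # let the writer produce a couple of beats
age_ms=$(( ( $(date +%s%N) - $(cat "$sentinel") ) / 1000000 ))
if [ "$age_ms" -gt 750 ]; then
    echo "stale heartbeat (${age_ms} ms): assume GC pause, drain HAProxy"
else
    echo "heartbeat fresh (${age_ms} ms)"
fi
wait "$writer"
```

The checker would issue the HAProxy drain/ready socket commands shown in the graceful-shutdown post above.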

    Some Java 8 Classes related to GC notifications:

    1. MemoryPoolMXBean – “The memory usage monitoring mechanism is intended for load-balancing or workload distribution use. For example, an application would stop receiving any new workload when its memory usage exceeds a certain threshold. It is not intended for an application to detect and recover from a low memory condition.”
    2. GarbageCollectionNotificationInfo
    3. GarbageCollectorMXBean

    Also, investigate mod_jk and AJP. tomcat uses the same heap as your application, so tuning is very important here too.

    4. Use an affordable amount of RAM to accomplish the above, preferably 8 or 16 GB in a shared VM environment

    If you work in a VM consolidation environment, it’s important to minimize the footprint of your applications. See above for rewriting applications to minimize heap and GCs.

    Garbage Collection JMX Notifications Example Code
    Blade: A Data Center Garbage Collector
    How to Tame Java GC Pauses? Surviving 16GiB Heap and Greater
    SO: Garbage Collection Notifications
    Letting the Garbage Collector Do Callbacks
    How to force garbage collection in Java?
    SSL Termination, Load Balancers & Java
    Github: Measuring Java Memory Consumption – sample code
    Java is not “angry” with you.
    Set State to DRAIN vs set weight 0
    Scalable web applications [with Java]
    Examples of forcing freeing of native memory direct ByteBuffer has allocated, using sun.misc.Unsafe?
    Lucene ByteBuffer sample code
    Improve availability in Java enterprise applications
    The Four Month Bug: JVM statistics cause garbage collection pauses
    Memory management when failure is not an option

    Making Garbage Collection faster
    The Complete Guide to Instrumentation: How to Measure Everything You Need Inside Your Application
    Java heap terminology: young, old and permanent generations?
    5 Coding Hacks to Reduce GC Overhead

    Java Debugger Changes Lifetime of Variables
    Objects Should Be Immutable
    Thread Safety and Immutability
    Azul Blog: So, G1 may become the default collector for Java 9?
    Java and Scala Type Systems are Unsound


    Golang: sub-millisecond GC pause on production 18gb heap HN
    Getting Past C
    Go GC: Prioritizing low latency and simplicity
    Sub-millisecond GC pauses in Go 1.8 Graphs


    CASSANDRA-5345: Potential problem with GarbageCollectorMXBean
    Java GC pauses, reality check

    Posted in Cassandra, GC Pauses, Java, Microservices, Open Source, Oracle, REST API Programming, Tech | Tagged | Leave a comment

    I found JMeter, however, really easy to use.

    So, let me get this straight

    • Java is not safe for use in servers because of GC pauses.
    • And it’s not safe for use in clients because of GC pauses.

    Doesn’t leave much left! :)

    Thanks to Greg Lindahl, founder of Blekko, for making my day. You’re The Man when it comes to performance!

    Another good one that made *me* pause:

    • Me at ApacheCon 2009: “So how do you like programming in Java?”
    • Random Attendee in Wifi tables area: “It’s great. Not sure why people gripe about memory consumption.”
    • Me: “Really. Show me your Java app.”
    • Random Attendee: “Well, my Macbook Air doesn’t have enough RAM.” :)


    Apache jMeter
    Distributed Testing with JMeter on EC2

    Analyzing JMeter Application Performance Results

    Dan Luu: HN comments are underrated HN comments

    Posted in API Programming, Conferences, GC Pauses, Java, Open Source, Tech | Leave a comment

    REST API Client Computer Languages and Frameworks Survey

    I recently wrote REST API client programs in several programming languages as a subproject of my Perl REST API Framework, and had some surprises, both good and bad.

    I would have gladly just linked to somebody else’s sample clients, but I couldn’t find any remotely complete or professional-grade code (complete working program with error-handling, Basic auth and timeout.)

    The closest to useful REST clients I found were Java tutorials such as RESTful Java client with Apache HttpClient

    Here are my notes:


    • tough to find a working HTTP class for Java 1.8 on CentOS 7. I couldn’t get the Apache HttpClient imports working, so I ended up using HttpURLConnection
    • first experience with immutable data collections – quite jarring to realize you have to copy anything returned from a library first to change it. And an int is not an Integer, and a String is not a StringBuffer. Hahaha, good one!
    • somewhat of a learning curve for the Java build process. See for a minimal build tool
    • lint: javac -Xlint:all
    • I wouldn’t be surprised if Java’s legendary slowness and memory bloat are from the above issues, obvious even from a 200-line program.
    • since Java uses block scope, when you add try/catch/finally blocks, variables referenced in catch/finally blocks must be moved outside the enclosing try scope. Makes code a lot messier. In Java 7, try-with-resources partially solves that.
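For what it's worth, the Java skeleton I converged on looked roughly like this — a hedged sketch, not my exact program (URL and credentials come from the command line): a minimal GET meeting the criteria above (error handling, Basic auth and timeout) via HttpURLConnection:

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class RestGet {
    // Build the Basic auth header value from user/password.
    static String basicAuth(String user, String pass) {
        return "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes());
    }

    public static void main(String[] args) {
        if (args.length < 3) {
            System.err.println("usage: RestGet <url> <user> <password>");
            return;
        }
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(args[0]).openConnection();
            conn.setConnectTimeout(5000);              // fail fast instead of hanging
            conn.setReadTimeout(10000);
            conn.setRequestProperty("Accept", "application/json");
            conn.setRequestProperty("Authorization", basicAuth(args[1], args[2]));

            int code = conn.getResponseCode();
            if (code >= 400) {                         // basic error handling
                System.err.println("HTTP error: " + code);
                return;
            }
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) System.out.println(line);
            }
        } catch (IOException e) {
            System.err.println("request failed: " + e.getMessage());
        }
    }
}
```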


    • overall, programming in Go is a pretty nice experience. The included net/http package has everything you need.
    • but “encoding/json” is overly-complicated. Some XML-head must have designed it.
    • not sure why Go treats unused imports and variables as fatal compiler errors
    • http.StatusNoContent appears to be missing from package net/http


    • used the requests HTTP module
    • felt comfortable until running Pylint and discovering how freaky the python community can get (scoring my working program -1.5/10, but getting 10/10 after whitespace-only changes. Really?)
    • nice indenting: -r -s 4 -e


    • used the httparty HTTP framework
    • elegant, beautiful OO code without even trying.


    • got it working the fastest, but then took longer to polish it
    • wish there was a lint for PHP


    • not bad – straight-forward to do various requests and get responses


    • inadequate for REST API programming, more for manually fetching files only


    • LWP is mature, well-documented and readily available and made Perl the easiest scripting language to work with overall
    • Perl’s built-in lint checking (strict and diagnostics) is much appreciated after its absence in PHP, Python and bash.

    RESTful Java client with Apache HttpClient
    Why Pylint is both useful and unusable, and how you can actually use it
    Notes on Managing Java in the Cloud
    Static typing will not save us from broken software
    OpenFeign Java HTTP Client Library
    Java: How To Read a Complete Text File into a String
    Golang’s Real-time GC in Theory and Practice

    Posted in API Programming, Java, Linux, Microservices, Open Source, REST API Programming, Tech | Leave a comment

    PagerDuty Summit Conference 2016 SF

    I went to the complimentary PagerDuty Summit Sept. 13 on Market Street in SF.

    The well-organized conference format was 2 tracks downstairs, with breaks and a small expo area upstairs.

    Andrew Fong of Dropbox had a very good talk on their struggle to go from four 9’s (“can use tactics”) to five 9’s (“has to be strategic”.) Their solution was to have a working group composed of anybody who wanted to contribute, across departments. (Not dedicated HA staff.)

    Andre Kelly of Google talked about having well-defined post-mortem processes in place now to capture outages in an organized manner and data mine the results over time later.

    Apparently there are some popular Open Source post-mortem systems for that. Please leave a comment if you have any experience with those.

    Sean Reilley of IBM discussed people issues in communicating agile across a large company with pockets of staff who were used to waiting for permission (ie. not inherently agile.)

    Upstairs, the mini-expo seemed to have a couple booths for security-related start-up Cloud products, Datadog, plus a booth for PagerDuty itself to do customer demos and get beta feedback.

    PagerDuty Incident Timeline

    Sketch of New PagerDuty Incident Timeline Visualization Tool

    The money shot was seeing their new beta graphical incident timeline, to be released in November, which made the trip worthwhile. Until then, you can enable HTML emails for a slightly richer experience.

    The “Village” historic venue was not my favorite: climbing up and down steep stairs with a backpack got old fast.

    Conference Videos

    Posted in Conferences, Tech | Leave a comment

    Eye of Hurricane Matthew

    Eye of Hurricane Matthew

    Posted in Tech | Leave a comment

    John Collins “The Paper Airplane Guy”

    CNN linked to a video of John Collins, “The Paper Airplane Guy.”

    John holds the world-record for paper airplane distance throwing.

    I had a chance to see John live recently when he gave a lecture and demo at my office in Silicon Valley.

    It was a unique experience:

    1. John is a fun lecturer who really knows aerodynamics and can explain it clearly to both kids and adults
    2. learning the art of making high-performance airplanes was great fun.

    I hold a commercial airplane licence and can say that he really knows his stuff. Highly recommended.

    Posted in San Jose Bay Area, Tech, Toys | Leave a comment

    Does Software Rot?

    Back in the day, Joel wrote an infamous post asserting that “software doesn’t rot” over time.

    I believe Joel was addressing the tendency of new programmers on a project to avoid learning the old codebase and write a new one instead, at great cost in terms of time and money.

    But let’s discuss the more interesting topic of whether software can actually rot.

    I would say that he was correct in a very narrow sense, namely a program written for a single version of Windows.

    But in the big picture, he was completely wrong. Even Windows software requires re-writes for “Certified for Windows” assurance for new shrink-wrapped versions to be shelved in US chain stores. (Stores were trying to reduce the rate of returns and customer support.)

    And how’s Silverlight, discontinued in 2012, working out for developers? :)

    When it comes to web software, total re-writes have been required for:

    • mobile
    • REST APIs
    • XML and JSON output
    • new Javascript frameworks.

    Apple could kill almost 200,000 apps with iOS 11

    Posted in API Programming, Open Source | Leave a comment

    Perl Petstore Enhanced REST API Framework

    I’ve been doing a lot of work with REST APIs and microservices, so I decided to write a complete REST API framework in Perl based on the Mojolicious and Swagger2 Petstore sample.

    You can git clone the repo and add a new API endpoint in about 5 minutes with automatic parameter validation and documentation:

    git clone
    cd perl-petstore-enhanced/pets
    less ../
    vi api.spec cgi-bin/pets.cgi ./lib/Pets/Controller/
    # add an Alias for cgi-bin/pets.cgi to httpd or nginx
    # point your browser at
    # Good job. Have a Modelo! :)

    or you can spend an hour to rename the files for your project and tweak it to requirements.

    This project serves as a convenient bridge for those who:

    1. can write simple CGI programs and want to write a best-practices Swagger (OpenAPI) REST API server without climbing a steep learning curve, or
    2. want to write a quick proof-of-concept API server to be re-implemented in other languages or frameworks later, as your Swagger spec file is 100% reusable
    3. are targeting a small VM. This will work in a 2 GB RAM VM just fine, or on an existing server running httpd or nginx.

    Also of note is the samples/ folder, which has non-trivial client programs in several languages (bash, Java, Perl, PHP and Ruby.)

    I learned the importance of Swagger2 and auto-generated API documentation and validation when I was programming with the old Rackspace Cloud v1 and v2 APIs.

    People asked me, “How did you get anything to work? You must have really wanted it!” since the Rackspace sample code, docs and live API didn’t match each other. My secret: I actually guessed URLs in the browser to find the endpoints I needed. Swagger prevents that headache.
    Swagger UI
    10 realizations as I was creating my Swagger spec and Swagger UI

    Are microservices for you? You might be asking the wrong question.
    List of default Accept values

    Posted in API Programming, Open Source, Perl, Tech | Leave a comment

    Hawaii Trip 2016 – What’s New in Waikiki

    Spent Labor Day weekend in Waikiki.

    I enjoy going there every few years and seeing what’s new.

    However, it’s been completely built out as a mall, so looks kind of corporate now. To combat that, plan to climb Diamondhead and go to the zoo.

    Also, who would fly a quadcopter drone at one of the most crowded beaches in the world? Not surprised, just saying.

    So what’s new in Waikiki?

    • Two hurricanes were approaching the Islands, but as usual did not make landfall on Oahu
    • Not very busy, likely because of the Hurricane news
    • International Marketplace is now a shiny mall that opened Aug. 25. It is anchored by Saks Fifth Avenue, and has the only public restrooms in Waikiki now. It has plaques to remember the mom-and-pop stores they bulldozed.
    • Kalakaua is also a giant hand-bag mall for Japanese tourists
    • Matteo’s Italian (and Seafood) at Seaside and Kuhio closed, and a Crackin Kitchen Seafood opened next door
    • 24hour Fitness is charging $25 for a day-pass on Kalakaua, but it does have a beach view
    • Free Kuhio Beach Hula Show (Waikiki) is 6:30 pm Tues/Thu/Sat – features two dozen performers! Bring your own towel or beach chair to sit on, practise your photography.
    • 100 Japanese people were lined up outside Marukame Udon on Kuhio one night at 9 pm. Must be pretty good. Next door is a souvenir shop with the most awesomely tacky items. If you need a hula dancer for your car, get shopping.
    • Princess Kaiulani Hotel buffet ($42/person) still has free Hawaiian music and hula show downstairs, and a very good Polynesian show/dinner upstairs. (They cancelled the downstairs show at least one evening because of Hurricane weather reports.)
    • McD still serves the free pineapple cup with combos, and also offers taro pie – very sweet. They charge $10 for a combo, but you can get a BOGO Big Mac on Mondays and they have a Pick Two special, and they do have drink refills and wifi
    • Duke’s Restaurant is still packed, but the Hula Grill ($60/person) upstairs doesn’t have a wait list. Has restrooms.
    • TheBus is $2.50 per trip now, or $35/4-day tourist pass available in ABC Stores. The Waikiki Trolley is only $2/trip between Waikiki and Ala Moana and the open air cars are good for photography and sight-seeing
    • Lots of hotel and residential construction cranes
    • Flew American Airlines there – they served biscuits instead of meals, and had no entertainment systems. Ran the APU for an hour while finding pilots. Dreadful experience, but this is a USA airline, so I’m being redundant.
    • Disney Aulani is not a theme park – it’s a time-share (ie. scam) with a few hotel room rentals for $450/nite in the middle of nowhere. ok if you’re a large family that wants to cocoon, maybe.
    • if you go on a boat tour of any kind and want to have fun, buy the cheapest tickets or you’ll be stuck with grandparents

    Waikiki photo vantage points:

    • beach sunsets
    • surfboard stands
    • rescue canoes
    • Kuhio Hula Show (Tues/Thurs/Sat at 6:30 pm)
    • street performers
    • Diamondhead
    • Honolulu Zoo

    If you’re from the mainland, remember that Hawaii is hot and humid. Stay hydrated, wear a hat, and don’t over-exert yourself – especially around noon-time.


    Posted in Travel | Leave a comment

    Re: Botched Go-around Appears To Have Led to Emirates 777 Crash

    As a commercially-rated airline pilot who reads accident reports, I always tingle when I fly on anything but a USA majors flight in less than perfect weather.

    The recent Emirates 777 crash in Dubai is a case in point.

    The airliner, with 300 people aboard, crashed into the runway with a sink rate of 900’/minute, and later the center-tank exploded, killing one firefighter. 22 pax and FAs were injured descending the slides (typically, several people are injured during a slide evacuation.)

    It’s important for pilots to always be mindful that a landing approach can end in two ways:

    1. landing
    2. go-around

    Though it would take a lot of painstaking research to say where this particular flight started going wrong, we do know some of the links in the “accident chain”:

    1. wind shear from 8 knots headwind to 16 knots tailwind. Depending on when the pilots learned this, their spidey-sense should have been off the scale – ie. either requesting a hold, a go-around or another airport. I also wouldn’t use the autopilot in wind-shear because judgment is needed to manage the throttle in that situation
    2. long landing – aim point in an airliner is 1000′, but they had a 1,100 meter (3,609′) warning. If they couldn’t start a normal landing at 1,000′, it was time to seriously think about a go-around
    3. late go-around – if you’re over the runway at idle and 5′ in a wind shear with your gear down, you probably should just land. What were the pilots thinking here? Were they blindly following ATC or book procedures when they really needed piloting skill?
    4. late TOGA power – jet engines take about 6 seconds to spool up and produce useful thrust; the pilots tried 3 seconds. Do the math.
    5. foreign airline and pilots – for some reason, they’re often not up to challenging weather. They seem to be more interested in epaulets than aerial mastery. I’d suggest making them fly this flight profile in the sim before graduation. Or is the extra $5,000 in fuel for a go-around a career-limiting problem?

    Taken together, obviously nobody with a clue was in the cockpit that day. I would rank this accident as bad as the TransAsia GE235 “Oops, I shutdown the good engine” accident in Taipei.

    Botched Go-around Appears To Have Led to Emirates 777 Crash

    Posted in Tech, Travel | Leave a comment

    Farewell to Prince

    Disbelief at the death of Prince at the relatively young age of 57.

    Prince was a musical genius, certainly one of the giants of this century – he wrote, sang, was a virtuoso of 2 dozen instruments, and played guitar at the level of Jimi Hendrix.

    He could perform with everybody, or nobody, yet chose to mentor female musicians, introducing them by name in his shows.

    I saw one of his shows, but wish I had gone to more.

    For business reasons, he never allowed his catalog on YouTube, but there’s a few links from TV performances that indicate his brilliance and show him “bringing the funk”:

    Prince & 3RDEYEGIRL Perform ‘She’s Always In My Hair’
    Prince Saturday Night Live Full Performance (2014)
    Prince playing piano over ‘Summertime’ at Soundcheck, Koshien, Hyogo Prefecture (1990)
    PRINCE BET Interview with Tavis Smiley (1998)
    “Stand Back” – Stevie Nicks (writer/vocals) with Prince (synths/drum machine), inspired by Little Red Corvette
    Prince’s vision for lifting up black youths: Get them to code
    Prince’s Death: Latest News
    W: Prince
    Nicole Scherzinger sings Purple Rain Tribute

    Posted in Tech | Leave a comment

    Weekend of Earthquakes

    There were a few major earthquakes this weekend:

    • Ueki, Japan – 6.2 (foreshock)
    • Kumamoto, Japan – 7.0
    • Ecuador – 7.8

    Hundreds of aftershocks have occurred in Japan.

    You would think that Californians, of all people, would be concerned with earthquake safety, but the LA Times has reported on a building safety cover-up involving thousands of older schools and office buildings which will pancake in a major quake.

    How Risky Are Older Concrete Buildings?
    LA Times FAQ: Concrete buildings, earthquake safety and you
    Non-ductile Concrete Buildings

    Posted in Tech | Leave a comment

    Congrats to SpaceX on Ocean Landing

    I used to write telemetry collection software for the Space Shuttle, rockets and balloons, but even I watched the SpaceX barge landing with disbelief as the rocket smoothly rotated in all six degrees of freedom at the same time – no hesitation or staging before the touchdown.

    It was like watching a really big lawn dart plant itself. :)


    twitter: SpaceX
    HN: SpaceX Launch Livestream: CRS-8 Dragon Hosted Webcast
    theRegister: SpaceX finally lands Falcon rocket on robo-barge in one piece
    SpaceX’s Musk: We’ll reuse today’s Falcon 9 rocket within 2 months

    Posted in Tech | Leave a comment

    MH370 Debris Illuminates Crash Reasons

    A few pieces of MH370, a Boeing 777-200ER, have recently been found on a Mozambique beach, and confirmed as authentic parts.

    Their excellent condition and relatively large sizes indicate that the accident wasn’t a high-speed impact with an obstacle or water.

    As a commercially-rated airplane pilot, my opinion is that leaves:

    1. explosion or decompression
    2. descent (or phugoid) into ocean at relatively low speed
    3. “graveyard spiral dive” pulled the wings off.

    An interesting question would be, “In modern airliners, especially Airbus, anti-phugoid software is deployed. How would that affect an uncontrolled airliner?” Sully said that anti-phugoid software in his Airbus prevented him from slowing descent before impacting the Hudson River.

    MH370 Debris Storm
    Tourist who found debris was searching for MH370
    Turbulence V-Speeds
    Australia Confirms Mozambique Debris Came from MH370
    MH370: Debris found in March ‘almost certainly’ from missing plane
    Investigators Report On MH370 Debris Analysis (2016)
    ATSB MH370 Report [pdf]

    ATSB Image

    Posted in Tech | Leave a comment

    Congrats to LIGO Team

    Congrats to the Laser Interferometer Gravitational-wave Observatory (LIGO) team for directly detecting gravitational waves for the first time.

    LIGO was the NSF’s most expensive project, and took scientists basically from the 1960s to 2015 to fully realize – initially nobody believed it was possible to actually build this instrument.

    Two detector locations with perpendicular 4 km 4-mirror laser interferometers were able to detect gravitational waves from a billion-year-old black hole collision:

    Direct Gravitational Wave Measurement of Two Black Holes Merging in 1/10 of a second!

    (The speed of light is a constant, while gravitational waves distort space, thus changing the interference pattern.)

    Basic science is always valuable, but just a few of the reasons why this experiment is important:

    1. confirm the equations for gravitation that Einstein originally proposed in 1915 in the general theory of relativity
    2. confirm experimentally that light and gravitational waves have different propagation characteristics
    3. develop the technology to make observations at the sub-proton level
    4. study large-scale cosmic events (black holes, colliding galaxies, supernovae, binary star systems)
    5. study the time of the Big Bang, as gravitational waves are not filtered like EM waves
    6. confirm or deny cosmic observations and theories made in the EM spectrum, and provide advance notification of occurring events for study in the EM spectrum.

    More generally, measurement tools are the highest form of technology, whether for time, space, EM, or gravity. Any investment of time or money in measurement tools is easily repaid 1000x. For example, the GPS system is the result of accurate time measurement using “atomic clocks.”

    This decade is an exciting time for science, as several major terrestrial and space instruments come online or are upgraded.

    It will be interesting to see if anybody develops a table-top model of LIGO. Experiments in the 60’s with non-laser methods were susceptible to ambient vibrations, but we’ll see.

    Gravitational Waves Detected 100 Years After Einstein’s Prediction
    W: LIGO
    Reddit AMA
    LIGO black hole echoes hint at general-relativity breakdown

    Posted in Tech | Leave a comment

    Superbowl 50

    I watched Superbowl 50 in Sunnyvale – a nice spring-like day with blue skies.

    Got a bonus show: I was just going inside as the Blue Angels did a low-altitude formation flyover, followed by a couple solo approaches, toward Levi’s Stadium.

    Denver Broncos over Carolina Panthers 24 – 10, with Denver leading the entire game.

    Cam Newton, QB for Carolina, got sacked, to varying degrees, 6 times. He sore-loser sulked during the post-game interviews, which generated a lot of controversy.

    Peyton Manning, Bronco’s QB, won MVP, amidst the usual narcissistic drama of whether he’d retire on top, or not.

    The turf came under scrutiny, as some linebackers were literally sliding across it.

    Halftime Show

    Beyonce, looking thick, Bruno Mars, nice moves in a rubber suit, and Coldplay (woefully) performed. Must have been a nostalgic Brit on the halftime committee I guess.

    According to the media, Beyonce was doing a Black Power protest, but the show wasn’t particularly different than anything MJ or Janet did. And frankly, I wouldn’t blame her if she did.


    Most of the ads were forgettable.

    The Amazon ad with Baldwin and Marino was ok.

    There were a few annoying prescription ads, though the cartoon intestines with feet one was more than weird.

    Municipal Sports Stadium Corruption

    I’m local to the Levi’s Stadium, so am aware of the endless tales of corruption (lack of accounting to City Council, failure to make public service reimbursements, destruction of meeting notes and emails, mis-appropriation of a kids soccer park, etc.)

    But even I was surprised that the local transit authorities “privatized” the Caltrain and VTA Light Rail for the day, requiring a Super Bowl 50 ticket and a special $40 ticket per passenger to use a taxpayer-funded system. Hmm.

    “Event Passengers Must Pre-Purchase VTA Fare Prior to Boarding

    All passengers traveling to the Super Bowl must use VTA’s mobile app, EventTIK to purchase a special VTA Super Bowl 50 Day Pass fare AND possess proof of a valid Super Bowl ticket in order to board the special Super Bowl trains.”

    Mr. York: next time, pay for your own damn stadium. You can afford it.
    Formation Flyover Photo

    Posted in San Jose Bay Area | Leave a comment

    Babbage’s Difference Engine at Computer History Museum

    Today was the last chance to see Babbage’s Difference Engine at the CHM in Mountain View before the owner makes it private again.

    The Computer History Museum has certainly matured into a world-class museum over the years.

    The docent talked for about 45 minutes. Unfortunately, it was displayed at the end of a hallway. So 100+ people with kids and strollers jostled to get a view.

    It’s very impressive in person: it consists of 8,000 parts, weighs five tons, and measures 11 feet long. It’s moderately noisy and mesmerizing to watch. The cranker used a moderately strong rowing motion.

    Babbage, in building the first computer, did not have the foresight to start with a smaller version first. Thus he never finished a working model, despite a decade of funding from the British government and spending the rest of his life working on it.

    CHM did a fantastic job on the DEC PDP-1 and IBM 1401 display rooms. Only about 50 PDP-1’s were made, so to have a working model is amazing.

    Posted in Tech | Leave a comment

    Congratulations on HondaJet USA Certification

    Congrats to Honda for earning FAA Production Certification for their first aircraft, the HondaJet HA-420 light business jet.

    I’ve been following the news of the HondaJet for over a decade as they progressed step-by-step towards certification.

    The HA-420 is the most technologically advanced, fastest (420 knots) and most efficient (by up to 20%) small business jet currently certified. Of interest to owner/operators, it may be flown single-pilot.

    The price is $4.5 million, which Honda can finance.

    The creation of the HondaJet is an epic story, starting with Honda’s founder dreaming of building an airplane several decades ago, and establishing design facilities 2 decades ago in the USA.

    A jet engine, the GE Honda HF120, was also certified for this plane.

    The total investment to certify both an airframe and an engine must have been staggering. Only a multinational manufacturer with support from top executives, like Honda, can pull that off in peacetime.

    Even so, aviation is a tough business to make money in, especially as a new entrant.

    Japanese companies have a long history of interesting work in aerodynamics. Both the Battleship Yamato and the Bullet Train used duck-bill-shaped leading airfoils for significant drag reduction. The HondaJet designers likewise used a laminar-flow nose (see top photo) and wings, plus winglets (see second photo.)

    According to a review by a friend of Philip Greenspun, the airplane has some issues: interior noise in the passenger compartment is 6 dB too high, only 573 pounds of useful load with full fuel, and a 4,000′ runway is needed. Also, a lot of pilot ergonomics that should have made it in, didn’t. And the high price is comparable to the next class up, which is much roomier and has more comfortable useful loads.

    yt: Kenny G Live at the HondaJet TC Event with Mr. Fujino
    HondaJet FAA Type Certification Celebration
    HondaJet Wins FAA Certification
    HondaJet Nominated for 2015 Collier Trophy
    W: HondaJet
    HondaJet Pilot Review

    Posted in Tech, Toys | Leave a comment

    TAP Plastics Mountain View

    Although I’ve walked by TAP Plastics on Castro St. in Mountain View a hundred times, today was the first time I went inside.

    Their motto “the fantastic plastic place” is accurate.

    They have specialized in plastics sales since 1952 and have 21 stores.

    • marketing, signs and displays
    • collectibles displays
    • marine
    • fiberglass laminate supplies
    • custom design (linear, not vacuum forming)

    Their web site is a gem, supporting 9 languages using Google Translate.

    TAP Plastics Inc.
    312 Castro Street
    Mountain View, CA 94041

    Posted in San Jose Bay Area | Leave a comment

    HOWTO: CentOS 7/Redhat 7 Firewalld Setup for Cassandra Server

    How to do initial firewalld configuration for Cassandra Server and Opscenter on CentOS/Redhat 7 with 2 network interfaces, in my case Dell 1950/2950.

    First: verify that your network interfaces are associated with a NetworkManager zone:

    # grep -i zone /etc/sysconfig/network-scripts/ifcfg-*
    # service network restart

    Second: add the Cassandra ports to the internal zone (private interface) and public zone (public interface):


    # add ports on internal interface for Cassandra server

    firewall-cmd --zone=internal --add-port=7000/tcp --add-port=7199/tcp --add-port=9042/tcp --add-port=9160/tcp --add-port=61619-61621/tcp --permanent

    # add ports on public interface for Cassandra server

    firewall-cmd --zone=public --add-port=80/tcp --add-port=8888/tcp --permanent

    firewall-cmd --reload

    Edit the files in /etc/firewalld/zones to remove the desktop helper services, then do

    service firewalld restart

    Third: verify the configuration:

    firewall-cmd --get-active-zones
    firewall-cmd --zone=public --list-ports
    firewall-cmd --zone=public --list-services
    firewall-cmd --zone=internal --list-ports
    firewall-cmd --zone=internal --list-services

    Output is:

    # firewall-cmd --get-active-zones
    interfaces: enp4s0
    interfaces: enp8s0

    # firewall-cmd --zone=internal --list-ports
    7000/tcp 7199/tcp 9042/tcp 9160/tcp 61619-61621/tcp

    # firewall-cmd --zone=internal --list-services

    # firewall-cmd --zone=public --list-ports
    80/tcp 8888/tcp

    # firewall-cmd --zone=public --list-services

    Fourth: verify the firewall rules with nmap:

    # nmap -sS

    Starting Nmap 5.51 ( ) at 2015-10-15 22:34 PDT
    Nmap scan report for
    Host is up (0.075s latency).
    Not shown: 997 filtered ports
    22/tcp open ssh
    80/tcp open http
    8888/tcp open opscenter

    Nice! :)


    As always, if you experience network issues on Linux, disable SELinux, firewalld and TCP wrappers first to verify whether those are the source of the problem:

    setenforce 0
    service firewalld stop
    cat /etc/hosts.*

    To boot into single-user mode, replace the “ro” item on the grub linux line with “rw init=/sysroot/bin/sh”.

    Fedora introduces Network Zones

    Posted in Cassandra, Linux, Open Source, Storage, Tech | Leave a comment

    Notes on Virtualbox 4.3.30 and OS X 10.8.5 for CentOS 7

    Virtualbox 4.3.30 on OS X 10.8.5 with CentOS 7 guest VMs works fine on my notebook for web development, but setup was a little fussy.

    I use VMs for:

    1. general web development and testing, to stay off the production environment
    2. destructive performance testing (intrusive changes to source code and configurations that require VM rollback to undo, most of which will never be committed.) This is great for work on profiling, i18n, caching, mod_rewrite rules, etc.
    3. accelerating automation testing, since a VM can boot in 10 seconds on my Mac with SSD, and VM creation is scriptable. This is a huge win.
    4. working offline (no-Wifi areas.)


    • “Host” is your Mac notebook. It runs Virtualbox under Mac OS X.
    • “Guest” is the VM running under Virtualbox. A guest can be any operating system, but in this case we’re using CentOS 7.x.

    Getting Started

    • check Internet for known software issues first
    • update to the latest version of Virtualbox

    Choose Network Topology

    I wanted to run my web site in a VM, viewable from the Mac browser, and have the VM be able to run ‘yum update’, so I needed host => guest and guest => Internet routing. There are two networking choices that match those requirements:

    1. Bridged – easiest and works best if a Mac network adapter is always connected, like in the office, or at home if your Wifi access point is always on
    2. NAT – always works, but you have to NAT from host => guest (ie. => You can use Mac’s ipfw or ipf firewalls to then NAT from 80 to 8000, making it seamless:

      sudo ipfw add 100 fwd,8080 tcp from any to any 80 in
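    On later OS X releases, ipfw was removed in favor of pf; a sketch of the equivalent redirect rule (assuming the same 80 => 8080 forward as above — run with admin rights, flush later with ‘sudo pfctl -F all’ or disable pf with ‘sudo pfctl -d’):

```shell
# Load an ad-hoc pf redirect rule from stdin and enable pf.
# Forwards incoming TCP port 80 to 8080, like the ipfw rule above.
echo "rdr pass inet proto tcp from any to any port 80 -> port 8080" \
    | sudo pfctl -ef -
```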


    Bridged Setup

    • under “Machine … Settings”, choose “Bridged Adapter”
    • guest IP address will come from Virtualbox DHCP server, usually the guest IP address is
    • on the host, you just use the guest’s real IP address from above
    • if you bridge to the Airport interface (en0), and the host Wifi is off, you lose your guest lease (ie. no routing inside or outside guest VM)
    • binds to a host’s physical interface (conceptually speaking)
    • no NAT needed or available in Virtualbox settings
    • the Virtualbox DHCP address is 192.168.x.100


    NAT Setup

    • under “Machine … Settings”, just choose NAT, not “NAT Network”
    • guest IP address will come from Virtualbox DHCP server, usually or
    • host IP address will be (NATTed to guest address above)
    • click on “Port Forwarding” button and use host ports above 1024 (usually 2222 for ssh and 8000 for HTTP)


    Notes

    • the Virtualbox manual is a reference, not a tutorial. After reading this blog post, the manual is useful to fill in details.
    • disable CentOS 7 firewall with ‘service firewalld stop’
    • view CentOS 7 interfaces with ‘ip a’
    • if one networking topology doesn’t work for you, try another. No need to reboot the VM.
    • if you spend more than an hour without success, try VMware Fusion. It covers my use case automatically.


    Experiments

    • do ‘tail -f /var/log/messages’, disable “Cable Connected”, click “OK”, and watch as the DHCP lease is lost. Then click on “Cable Connected”, click “OK” to restore
    • if using Bridged on en0, do ‘tail -f /var/log/messages’, do “Turn Wi-fi Off” on Mac, and watch as DHCP lease is lost. Then turn Wifi back on.

    Network Security

    • use strong passwords if you value what’s inside the VM
    • enable guest firewall with ‘service firewalld start’
    • TCP wrappers is an easy and effective filtering method



    For example, to deny everything by default in /etc/hosts.deny:

      ALL: ALL

    Simulating Production

    You can update /etc/hosts to have your browser access your web site in a VM:


    # NAT
    # Bridged

    But I find that Firefox gets less confused with permanent redirects, etc. by prefixing the hostname:


    # Virtualbox NAT Topology (don't forget to use ports 2222 and 8000 from host => guest!)
    # Virtualbox Bridged Topology
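    Put together, the host-side /etc/hosts entries look something like this (the addresses and hostname here are hypothetical placeholders):

```
# Virtualbox NAT topology (browse to, ssh -p 2222)
# Virtualbox Bridged topology (the guest's bridged DHCP lease)
```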


    Take advantage of Virtualbox’s clone and snapshot features.

    What does “Cable connected” checkbox change?
    Port Forwarding in Mac OSX Mavericks
    Port Forwarding in Mac OS Yosemite

    Posted in Linux, Open Source, Oracle, Tech | Leave a comment

    Percona Clustercheck Improved Error Handling Patch

    Here’s my Github pull request for improved error handling in Percona’s clustercheck utility, used by haproxy for health-checking a Percona XtraDB Cluster.

    It adds two features:

    1. 401 Unauthorized response for failed authentication
    2. 404 Not Found response if the mysql program can’t be found

    The error detection is done in a low-latency manner using PIPESTATUS, without an additional database connection. Here is colored diff output.
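    The PIPESTATUS idea can be sketched in a few lines of bash (a simplified illustration, not the actual clustercheck code; the nonexistent command name stands in for a missing mysql binary):

```shell
#!/bin/bash
# Each pipeline stage's exit code is captured from PIPESTATUS, so failure
# modes can be told apart without a second database connection.
echo "SELECT 1;" | nosuchmysql-12345 --batch 2>/dev/null | tail -n1 >/dev/null
status=("${PIPESTATUS[@]}")   # copy immediately; the next command overwrites it

if [ "${status[1]}" -eq 127 ]; then
    echo "HTTP/1.1 404 Not Found"      # client binary not found
elif [ "${status[1]}" -ne 0 ]; then
    echo "HTTP/1.1 401 Unauthorized"   # auth/connection failure
else
    echo "HTTP/1.1 200 OK"
fi
```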

    Posted in API Programming, Linux, MySQL, MySQL Cluster, Open Source, Tech | Leave a comment

    Percona MySQL Conference 2015

    Wed. Keynotes


    5 companies
    Facebook, Google, Alibaba, Twitter
    Percona and MariaDB
    Please use apache CLA CEO
    Multisource replication from Taobao
    Spider Sharding
    Atomic writes with Fusion IO/Sandisk
    18% faster with 1/4 writes
    Connect Storage Engine for Federation
    Galera integrated
    Encryption by Google – tablespace and table
    Amazon Aurora
    Maxscale proxy

    Tomas Ulin, Oracle
    MySQL 5.7
    – Optimizer improvements
    – JSON support in pipeline
    – SYS
    – GIS rewrite
    – Innodb improvements
    – native partitions. Bug fixes and transportable tablespaces
    – dynamic buffer pool size
    – group replication
    – Fabric 1.5
    – Workbench 6.3
    – MySQL Cluster 7.4 GA ???

    Robert Hodges, Continuent/VMware
    – “VMware is creating a new kind of hybrid cloud”
    – vSphere 6 FT – cpu/ram mirror over 10 Gb Ethernet up to 4 vcpus
    – but maintenance still needs continuent
    – information week 2014 db popularity
    – Tungsten replication to Vertica, Redshift, Oracle, Hadoop


    Lightning Talks


    Percona Acquires Tokutek!

    Posted in Conferences, MySQL, Open Source, Storage, Tech, Toys | Leave a comment

    SVLUG: Daniel Klopp on Docker

    At the Silicon Valley Linux Users Group (SVLUG) tonite, Daniel Klopp, Senior Technical Consultant at Taos Consulting, gave an intermediate talk on “Docker.”

    He had some really informative and detailed slides on using Docker, especially his cgroup command samples.

    Some of the interesting things he mentioned were:

    1. cgroups are nested
    2. Docker currently has a limit of 127 “layers”, with prior layers appearing to be read-only to the current layer
    3. Docker is high-level enough to run on multiple operating systems, including both linux and windows

    Daniel Klopp

    Daniel Klopp

    One attendee mentioned that a work-around for the insecure nature of Docker is to combine it with SELinux, though that will involve a fair amount of work.

    Over 400 people RSVPed on a related Meetup, and over 150 people attended, a record for this decade.

    Pasta Spread

    Great turnout!

    Pasta Spread

    Salad, meat lasagna, pasta alfredo, veggie lasagna from Taos!

    Thanks to Taos for providing food for all. Taos has job postings for sys admin, network admin, devops and help desk IT persons.

    Thanks to Symantec once again for hosting the event.

    Posted in API Programming, Cloud, Linux, Open Source, Tech, User Groups | Leave a comment

    Top Utility for Cassandra Clusters – cass_top

    DataStax’s OpsCenter is pretty, but sometimes you don’t want to chop holes in your firewall for the server and agents.

    So I wrote cass_top. It works like top, but colorizes the output of nodetool status. It also lets you build nodetool commands using menus, run and log the output.

    What’s especially nice is that it uses bash (no python required), and uses minimal screen real estate, so you can view all your clusters on one monitor using eterms.
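    The core loop boils down to something like this (a toy sketch of the idea, not the real cass_top code):

```shell
#!/bin/bash
# Colorize nodetool-status-style lines: green for UN (Up/Normal), red for
# DN (Down/Normal). cass_top wraps this idea in a refresh loop, roughly:
#   while true; do clear; nodetool status | colorize; sleep 5; done
green=$'\e[32m'; red=$'\e[31m'; reset=$'\e[0m'

colorize() {
    sed -e "s/^UN /${green}UN${reset} /" \
        -e "s/^DN /${red}DN${reset} /"
}

# demo with canned nodetool-like output:
out=$(printf 'UN  node1  71.87 KB\nDN  node2  9.90 KB\n' | colorize)
echo "$out"
```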

    $ cass_top

    cass_top Screenshot
    cass_top Help Screenshot

    Please leave a comment with your suggestions.

    github: Cassandra Top cass_top

    Posted in Cassandra, Linux, Storage, Tech, Toys | Leave a comment

    MariaDB Patch: CREATE [[NO] FORCE] VIEW Options

    Below is my patch that implements the CREATE [[NO] FORCE] VIEW options against MySQL/MariaDB 10.1.0.

    It adds two new options that look like this:

    1. CREATE NO FORCE VIEW v1 AS SELECT * FROM TABLE1; — base TABLE1 must exist, as before
    2. CREATE FORCE VIEW v1 AS SELECT * FROM TABLE1; — base TABLE1 doesn’t need to exist


    • these options follow the Oracle Enterprise options fairly closely. NO FORCE works like the old default – a user needs database, table, column access and CREATE VIEW grant to create a view (more or less). FORCE allows a user to create a view with only database access and CREATE VIEW grant and no underlying base table. At SELECT time, full access control and grant checking is performed, and an error will occur if those constraints are not met.
    • views are more complicated than one would expect, and can be composed of base tables, derived tables, INFORMATION_SCHEMA (IS), and other views. The only table object not allowed is a temporary table
    • CREATE FORCE VIEW is an important option when managing large sets of views when you don’t want to track the creation sequence, or when creating views via program. An example is mysqldump, which can be simplified by replacing the current temporary tables ordering workarounds with FORCE VIEW.
    • It’s a fairly solid patch. I think the best thing is to commit it to alpha and let it bake for a while.
    • One permutation that will need special handling is this: CREATE FORCE VIEW view1 AS SELECT * FROM table1; Since * is not resolved to column names by FORCE, currently “AS SELECT *” is generated without resolved column names, causing an error. So just use explicit column names like CREATE FORCE VIEW view1 AS SELECT id, col1, col2 FROM table1; See this bug.
    • it passes t/view.test:
      # ./mysql-test-run.pl view
      Logging: ./mysql-test-run.pl  view
      vardir: /usr/local/mariadb-10.1.0/mysql-test/var
      MariaDB Version 10.1.0-MariaDB-debug
      TEST                                  RESULT   TIME (ms) or COMMENT
      main.view                            [ pass ]   1896
      The servers were restarted 0 times
      Spent 1.896 of 7 seconds executing testcases
      Completed: All 1 tests were successful.
    • I also wrote a test script in tests/ that does 8,000+ test permutations. It passes. :)

    $ cat create_force_view.patch

    --- ../mariadb-10.1.0/sql/sql_view.h 2014-06-27 04:50:36.000000000 -0700
    +++ sql/sql_view.h 2014-09-02 02:35:42.000000000 -0700
    @@ -29,10 +29,10 @@
    /* Function declarations */

    bool create_view_precheck(THD *thd, TABLE_LIST *tables, TABLE_LIST *view,
    - enum_view_create_mode mode);
    + enum_view_create_mode mode, enum_view_create_force force);

    bool mysql_create_view(THD *thd, TABLE_LIST *view,
    - enum_view_create_mode mode);
    + enum_view_create_mode mode, enum_view_create_force force);

    bool mysql_make_view(THD *thd, File_parser *parser, TABLE_LIST *table,
    uint flags);
    --- ../mariadb-10.1.0/sql/sql_lex.h 2014-06-27 04:50:33.000000000 -0700
    +++ sql/sql_lex.h 2014-09-02 01:21:10.000000000 -0700
    @@ -170,6 +170,12 @@
    VIEW_CREATE_OR_REPLACE // check only that there are not such table

    +enum enum_view_create_force
    + VIEW_CREATE_NO_FORCE, // default - check that there are not such VIEW/table
    + VIEW_CREATE_FORCE, // check that there are not such VIEW/table, then ignore table object dependencies
    enum enum_drop_mode
    DROP_DEFAULT, // mode is not specified
    @@ -2442,6 +2448,7 @@
    enum enum_var_type option_type;
    enum enum_view_create_mode create_view_mode;
    + enum enum_view_create_force create_view_force;
    enum enum_drop_mode drop_mode;

    uint profile_query_id;
    --- ../mariadb-10.1.0/sql/ 2014-06-27 04:50:34.000000000 -0700
    +++ sql/ 2014-09-02 02:34:31.000000000 -0700
    @@ -4943,7 +4943,7 @@
    Note: SQLCOM_CREATE_VIEW also handles 'ALTER VIEW' commands
    as specified through the thd->lex->create_view_mode flag.
    - res= mysql_create_view(thd, first_table, thd->lex->create_view_mode);
    + res= mysql_create_view(thd, first_table, thd->lex->create_view_mode, thd->lex->create_view_force);
    --- ../mariadb-10.1.0/sql/sql_yacc.yy 2014-06-27 04:50:37.000000000 -0700
    +++ sql/sql_yacc.yy 2014-09-05 17:19:29.000000000 -0700
    @@ -1851,7 +1851,7 @@
    statement sp_suid
    sp_c_chistics sp_a_chistics sp_chistic sp_c_chistic xa
    opt_field_or_var_spec fields_or_vars opt_load_data_set_spec
    - view_algorithm view_or_trigger_or_sp_or_event
    + view_algorithm view_or_trigger_or_sp_or_event view_force_option
    definer_tail no_definer_tail
    view_suid view_tail view_list_opt view_list view_select
    view_check_option trigger_tail sp_tail sf_tail udf_tail event_tail
    @@ -2446,6 +2446,7 @@
    Lex->create_view_algorithm= DTYPE_ALGORITHM_UNDEFINED;
    Lex->create_view_suid= TRUE;
    + Lex->create_view_force= VIEW_CREATE_NO_FORCE; /* initialize just in case */
    @@ -15887,6 +15888,15 @@
    | event_tail

    + /* empty */ /* 411 - is there a cleaner way of initializing here? */
    + { Lex->create_view_force = VIEW_CREATE_NO_FORCE; }
    + { Lex->create_view_force = VIEW_CREATE_NO_FORCE; }
    + | FORCE_SYM
    + { Lex->create_view_force = VIEW_CREATE_FORCE; }
    + ;

    DEFINER clause support.
    @@ -15944,7 +15954,7 @@

    - view_suid VIEW_SYM table_ident
    + view_suid view_force_option VIEW_SYM table_ident
    LEX *lex= thd->lex;
    lex->sql_command= SQLCOM_CREATE_VIEW;
    --- ../mariadb-10.1.0/sql/ 2014-06-27 04:50:36.000000000 -0700
    +++ sql/ 2014-09-05 19:33:58.000000000 -0700
    @@ -248,7 +248,7 @@

    bool create_view_precheck(THD *thd, TABLE_LIST *tables, TABLE_LIST *view,
    - enum_view_create_mode mode)
    + enum_view_create_mode mode, enum_view_create_force force)
    LEX *lex= thd->lex;
    /* first table in list is target VIEW name => cut off it */
    @@ -259,7 +259,7 @@

    - Privilege check for view creation:
    + Privilege check for view creation with default (NO FORCE):
    - user has CREATE VIEW privilege on view table
    - user has DROP privilege in case of ALTER VIEW or CREATE OR REPLACE
    @@ -272,6 +272,7 @@
    checked that we have not more privileges on correspondent column of view
    table (i.e. user will not get some privileges by view creation)
    if ((check_access(thd, CREATE_VIEW_ACL, view->db,
    @@ -285,6 +286,11 @@
    check_grant(thd, DROP_ACL, view, FALSE, 1, FALSE))))
    goto err;

    + if (force) {
    + res = false;
    + DBUG_RETURN(res || thd->is_error());
    + }
    for (sl= select_lex; sl; sl= sl->next_select())
    for (tbl= sl->get_table_list(); tbl; tbl= tbl->next_local)
    @@ -369,7 +375,7 @@

    bool create_view_precheck(THD *thd, TABLE_LIST *tables, TABLE_LIST *view,
    - enum_view_create_mode mode)
    + enum_view_create_mode mode, enum_view_create_force force)
    return FALSE;
    @@ -391,7 +397,7 @@

    bool mysql_create_view(THD *thd, TABLE_LIST *views,
    - enum_view_create_mode mode)
    + enum_view_create_mode mode, enum_view_create_force force)
    LEX *lex= thd->lex;
    bool link_to_local;
    @@ -425,14 +431,13 @@
    goto err;

    - if ((res= create_view_precheck(thd, tables, view, mode)))
    + if (res= create_view_precheck(thd, tables, view, mode, force))
    goto err;

    lex->link_first_table_back(view, link_to_local);
    view->open_type= OT_BASE_ONLY;

    - if (open_temporary_tables(thd, lex->query_tables) ||
    - open_and_lock_tables(thd, lex->query_tables, TRUE, 0))
    + if (open_temporary_tables(thd, lex->query_tables) || (!force && open_and_lock_tables(thd, lex->query_tables, TRUE, 0)))
    view= lex->unlink_first_table(&link_to_local);
    res= TRUE;
    @@ -513,6 +518,7 @@

    +if (!force) {
    /* prepare select to resolve all fields */
    lex->context_analysis_only|= CONTEXT_ANALYSIS_ONLY_VIEW;
    if (unit->prepare(thd, 0, 0))
    @@ -612,6 +618,7 @@

    res= mysql_register_view(thd, view, mode);

    @@ -621,7 +628,7 @@
    meta-data changes after ALTER VIEW.

    - if (!res)
    + // if (!res)
    + if (!res && !force) /* 411 - solves segfault problems with CREATE FORCE VIEW option sometimes */
    tdc_remove_table(thd, TDC_RT_REMOVE_ALL, view->db, view->table_name, false);

    if (mysql_bin_log.is_open())
    @@ -908,6 +915,8 @@
    fn_format(path_buff, file.str, dir.str, "", MY_UNPACK_FILENAME);
    path.length= strlen(path_buff);

    if (ha_table_exists(thd, view->db, view->table_name, NULL))
    if (mode == VIEW_CREATE_NEW)
    --- ../mariadb-10.1.0/mysql-test/t/view.test 2014-06-27 04:50:30.000000000 -0700
    +++ mysql-test/t/view.test 2014-09-06 00:23:32.000000000 -0700
    @@ -5263,4 +5263,17 @@
    --echo # -----------------------------------------------------------------
    --echo # -- End of 10.0 tests.
    --echo # -----------------------------------------------------------------
    +create no force view v1 as select 1;
    +drop view if exists v1;
    +create force view v1 as select 1;
    +drop view if exists v1;
    +create force view v1 as select * from missing_base_table;
    +drop view if exists v1;
    +--echo # -----------------------------------------------------------------
    +--echo # -- End of 10.1 tests.
    +--echo # -----------------------------------------------------------------
    SET optimizer_switch=@save_optimizer_switch;

    Posted in API Programming, Linux, MySQL, Open Source, Oracle, Storage, Tech | Leave a comment

    Installing Datastax Cassandra and Python Driver on CentOS 5

    Cassandra Logo

    Cassandra can run on CentOS 5.x, but there is no yum repo support.

    If you can’t upgrade linux distros, here’s how to install Datastax Cassandra Community Edition and the python cassandra driver on CentOS 5.x.

    It’s not difficult, but there are several steps, including updating java.

    (The following steps would make a complete chef or puppet recipe for a non-SSL install with vnodes.)

    # setup environment
    groupadd -g 602 cassandra
    useradd -u 602 -g cassandra -m -s /sbin/nologin cassandra
    mkdir /var/lib/cassandra /var/log/cassandra /var/run/cassandra
    touch /var/log/cassandra/system.log
    chown -R cassandra:cassandra /var/lib/cassandra /var/log/cassandra /var/run/cassandra
    mkdir -p /opt && cd /opt

    cat >> /etc/security/limits.conf <<EOD
    cassandra soft memlock unlimited
    cassandra hard memlock unlimited
    cassandra soft nofile 8192
    cassandra hard nofile 10240
    EOD

    # upgrade java
    yum remove java
    # download, then install JDK 7.x from
    rpm -Uvh jdk-7u67-linux-x64.rpm
    # download, then install recent jna.jar from
    mv jna.jar /usr/share/java
    ln -s /usr/share/java/jna.jar /opt/cassandra/lib/
    # update environment variables
    cat >> /etc/profile <<"EOD"
    export JAVA_HOME=/usr/java/default
    export JRE_HOME=/usr/java/default/jre
    export CASSANDRA_HOME=/opt/cassandra
    EOD

    # get Datastax DCE
    curl -L >dsc-cassandra-2.0.9.tar.gz
    tar zxvf - < dsc-cassandra-2.0.9.tar.gz
    ln -s /opt/dsc-cassandra-2.0.9 /opt/cassandra
    chown -R root:root /opt/cassandra/
    bash cassandra/switch_snappy 1.0.4

    # open cassandra firewall ports if necessary (not needed if using internal interface on most servers)
    vi /etc/sysconfig/iptables
    -A INPUT -i eth0 -m state --state NEW -m multiport -p tcp --dport 7000,7199,9042,9160 -j ACCEPT
    service iptables restart
    # configure /opt/cassandra/conf/cassandra.yaml (at least listen_address, rpc_address, seeds and tokens) before starting the server. If you need a do-over, clean the cassandra data with: rm -fr /var/lib/cassandra/*
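    For reference, the minimum cassandra.yaml settings to review look something like this (a sketch; the addresses are placeholders, key names per Cassandra 2.0):

```
cluster_name: 'My Cluster'
num_tokens: 256                 # vnodes
listen_address:      # placeholder: this node's private IP
rpc_address:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: ","
```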

    # download startup script:
    wget -O /etc/init.d/cassandra
    chown root:root /etc/init.d/cassandra
    chmod 755 /etc/init.d/cassandra
    chkconfig --add cassandra

    # start cassandra server (if it is standalone, or a seed server. otherwise start after the seed servers):
    service cassandra start

    # cat /etc/redhat-release 
    CentOS release 5.10 (Final)
    [root@www1 conf]# nodetool status
    Datacenter: datacenter1
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address   Load       Tokens  Owns   Host ID                               Rack
    UN  71.87 KB   256     66.8%  8302c6d5-4c88-4695-bbf4-762bc7f24544  rack1
    UN  136.63 KB  256     69.9%  eddb03b2-98d3-46ff-be63-95435414a883  rack1
    UN  100.08 KB  256     63.3%  2a8dde5e-29b0-4a67-8204-40769376c44a  rack1

    If you only see the node on localhost, then you have a problem:

    • read and fix any errors in /var/log/cassandra/system.log until there are zero errors. snappy-related errors are from /tmp being noexec or not running the switch_snappy 1.0.4 command above.
    • disable iptables firewall, test and reenable later
    • in log4j-server.properties, increase log4j.rootLogger to DEBUG
    • if you have multiple NICs, JMX (ie. nodetool) can bind to the wrong interface. You likely need to configure the -Djava.rmi.server.hostname=[address] option in cassandra-env.sh to the address you want to listen on
    • public/private IP address problems in AWS EC2. You may need to set broadcast_address: [public_ec2_address]
    • normally rmiregistry is not needed unless you have some atypical firewalling or routing (NAT.)

    Datastax Opscenter 5.0

    You can install the binary from yum or tarball, but the important things to know are:

    • the monitoring agent will be installed on each cassandra node and uses port 61621. The init script is called datastax-agent.
    • the UI only needs to be installed once, but needs ports 61620, and 8888 for HTTP.
    • to allow Opscenter to remotely manage nodes with ssh, remove old ssh entries from .ssh/known_hosts first, connect manually to each node, then Opscenter should be happy
    • by default, Opscenter listens for agents on, phones home to each day, and does not require web authentication, so you likely want to change those.

    Python also needs to be upgraded if you want to use cqlsh or the python client cassandra driver.

    # install python 2.6 and dependencies
    yum install gcc python26 python26-devel libev libev-devel

    # install python's pip module
    curl --silent --show-error --retry 5 | python26

    # install cassandra driver for python
    pip install cassandra-driver

    # install blist (a cqlsh dependency)
    tar zxvf - < blist-1.3.6.tar.gz
    cd blist-1.3.6
    python26 setup.py install
    cd ..

    # - test installation
    from cassandra.cluster import Cluster
    cluster = Cluster([''])
    def dump(obj):
       for attr in dir(obj):
           if hasattr( obj, attr ):
               print( "obj.%s = %s" % (attr, getattr(obj, attr)))
    dump(cluster)
    # running it with python26 prints the cluster attributes, e.g.:
    obj.__class__ = <class 'cassandra.cluster.Cluster'>

    Troubleshooting connection problems in JConsole
    Storing OpsCenter Data in a Separate Cluster

    Posted in Cassandra, Cloud, Linux, Open Source, Tech | Leave a comment

    MySQL 5.6 Views and Stored Procedures Tips

    I recently tuned an existing application that used dozens of views and hundreds of stored procedures on MySQL 5.6.

    There seem to be three attitudes towards using views and stored procedures (SPs) with MySQL:

    1. don’t use them at all to increase portability
    2. just use SPs to reduce network traffic in large reporting queries (my choice)
    3. go crazy and use them everywhere like old-school Oracle Enterprise apps.

    Here are some notes on using views:

    • before creating views, review your schema to ensure keys have matching types and charsets for good performance. It’s much easier to spot schema problems in a text listing than to guess why a view is slower than expected at execution time. (This is doubly true for MySQL Cluster.)
    • MySQL currently doesn’t have CREATE VIEW FORCE, although MariaDB 10.1.0 alpha has my patch. The FORCE option will greatly simplify view administration and also mysqldump output, which currently creates temporary tables to ensure views can be created regardless of table/view ordering issues
    • When looking at the MariaDB source code, it’s apparent that some view options were never actually implemented, like RESTRICT/CASCADE

    And some notes on stored procedures (SPs):

    • if a SP makes a stateful session change, like set sql_log_bin=0, ensure that isn’t going to be a problem later if an exception condition doesn’t reset it
    • after running a SP, SHOW PROFILES will list all the queries executed with performance statistics
    • SPs that do non-essential SELECTs or INFORMATION SCHEMA queries probably need to be reviewed by a DBA for fundamental problems like non-atomic “reading before writing”
    • MySQL compiles SPs separately for each connection (thread); there is no shared SP cache.
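    For example, profiling a stored procedure from the mysql client looks like this (monthly_report is a hypothetical procedure name):

```sql
SET profiling = 1;
CALL monthly_report();      -- hypothetical SP
SHOW PROFILES;              -- one row per statement the SP executed, with durations
SHOW PROFILE FOR QUERY 3;   -- drill into a single statement's stages
```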

    Both views and SPs are relatively new MySQL features, so budget some extra development and testing time when using them, especially with replication.

    [MDEV-6365] CREATE VIEW Ignores RESTRICT/CASCADE Options
    Using MySQL triggers and views in Amazon RDS

    Posted in MySQL, MySQL Cluster, Open Source, Oracle, Tech | Leave a comment

    SVLUG: Devops and Release Canaries with Linux, CloudStack and MySQL Cluster

    I did a talk at the Silicon Valley Linux Users Group (SVLUG) tonite on “Devops and Release Canaries with Linux, CloudStack and MySQL Cluster.”

    Thanks again to Symantec for hosting.

    Ravello Arms Deutsche Telekom with On-Demand Cloud Flexibility
    Deutsche Telekom’s Enterprise DevOps Journey with VMware, AWS, Jenkins, Chef & Ravello, Slides

    Posted in API Programming, Cloud, Linux, MySQL, MySQL Cluster, Open Source, Oracle, Tech | Leave a comment

    Velocity Conference Santa Clara 2014 Tips Game Cards

    The O’Reilly Velocity Web Operations & Performance Conference is June 24-26 in Santa Clara.

    Next to the messages/jobs board was a Web Ops & Performance Tips board:

    – use source maps to debug compressed JS and CSS
    – use ::before to optimize font rendering
    – use local storage to persist markup and templates to reduce requests and payload
    – avoid CSS block rendering in chrome by not using screen media type until after. Then put screen back to element
    – use the gatling stress tool for load generation/perf testing (Apache License 2.0)
    – learn curl
    – learn POSIX before recreating another tool that already exists. Bill Joy (?)
    – “if you do it more than twice a week, automate”
    – it takes no skills to do NoOps! :)

    Posted in Cloud, Conferences, Open Source, Tech | Leave a comment

    AWS Pop-up Loft, San Francisco

    Amazon Web Services pop-up loft (Ask an Architect area, lecture hall, kitchen/lounge)
    Photo credit:

    I happened to be in SF today, so I went to the Amazon Web Services pop-up loft on Market St.

    Amazon rented an empty storefront for 4 weeks for lecture sessions upstairs, and a computer lab and an ‘Ask an Architect’ bar downstairs.

    One of the hosts said the loft was a shell in May, and they had to build out everything: the kitchen area, 2 bathrooms and various partitions.

    I asked the experts about new EBS and RDS features, and they had answers as well as a $100 AWS credit.

    The weather was sunny and warm in SF.

    Lots of street performers and hustlers, including a very smooth male R&B singer. A young rapper named Rap2K15 was selling hand-made CDs.

    Update 2014 06 23: Apparently a drawing was held, and I was one of 3 winners of a free general pass to the AWS:Reinvent Conference :)

    Update 2014 06 24:

    AWS Bootcamp

    Full-day AWS overview, including EC2, S3, RDS, VPC and IAM, with 2 labs.

    “Provisioning and Managing AWS Infrastructure with Chef” with special guest George Miranda, Chef Technical Consultant, Chef

    George talked about using Chef tools like chef metal, knife and chef zero and a minimal amount of ruby to make an AMI and provision a MySQL server and 5 Nginx web servers.


    @gmiranda23, chef-ami-factory

    Update 2014 06 26:

    Dealing With Obstacles at Scale, Bob Hagemann, Twilio

    To reduce pain:

    – UTC timezone
    – UTF8
    – use thin AMI and chef/puppet instead of thick AMI
    – wrote boxconfig a few years ago (like netflix asgard)
    – remote admin mainly
    – small teams 3-8
    – services should run in 3 AZs
    – monitoring with nagios, cron, pingdom
    – haproxy on each host as proxy
    – MySQL, MHA, LVM. Manual failover.
    – SQS DLQ
    – global low latency with route53
    – @bobzilla42
    – Uses freeswitch plus own telcom sw
    – billing system 100s QPS
    – Ops team is about 8 people
    – VPNs to HQ and carrier-approved colo
    – three founders, one came from Amazon.

    925 Market Street, SF
    June 4 – 27, 2014 (likely closed on the 27th for dismantling)
    Free registration, tshirts and lunch. Closes 5:30 pm, 6:00 pm or 8:00 pm daily.
    Muni 30 and 45 return from Market St. and 5th to Caltrain.

    @AWSstartups #AWSloft

    AWS Loft Returning in Fall 2014

    Posted in API Programming, Business, Cloud, Conferences, Linux, MySQL, Open Source, Oracle, San Jose Bay Area, Tech | Leave a comment

    Advanced Liquibase Techniques

    I recently did some work with liquibase. Here are some techniques for advanced users to work around its limitations and take query cost into account.

    Liquibase Introduction

    Liquibase is an Open Source (Apache 2.0 License) Java utility and API for specifying and versioning schema changes (DDL) for several popular databases. It is commonly introduced to projects by programmers, rather than DBAs.

    What liquibase can do:

    • allow “refactoring” of SQL schema changes to target multiple databases using XML by using a database-independent syntax, or raw SQL, depending on your preference
    • allow conditional execution and rollback of SQL based on database type or environment.

    What liquibase can’t do:

    • has no built-in provisions for operational concerns, like conditionally executing SQL based on time/cost. There’s an assumption that schema changes are online, often true on Oracle and SQL Server, less so on MySQL, especially prior to 5.6 (unless you do micro-sharding)
    • does not do intelligent merges to the same object across changesets, like adding multiple columns to the same table in one statement.

    How liquibase works:

    • the programmer specifies schema changes in Java, XML or JSON and runs the liquibase command
    • liquibase creates 2 tables in your database to store version, user and patch name information and to lock out other simultaneous liquibase runs.

    How to Make Liquibase Consider Cost for MySQL

    After some experimentation, there’s a couple liquibase features you can use to do more advanced things:

    1. create a savepoint using the tag and rollback options:
      • liquibase tag rel0; liquibase update …; liquibase rollback rel0
    2. prepend and append logic to each changeset to query information_schema profiling around the SQL DDL statement; on failure, exit with 1 (see the XML example below)


    <?xml version="1.0" encoding="UTF-8"?>
    <databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">

        <changeSet id="1" author="james">
           <sql>create table if not exists `profiling` ( `connection_id` int(11) not null default 0, `query_id` int(11) not null default '0', `state` varchar(40) default '', KEY (query_id));</sql>
           <sql>truncate table profiling;</sql>
           <sql>set profiling=1;</sql>
           <sql>alter table department add column test2 int default null;</sql>
           <sql>insert into profiling (connection_id, query_id, state) select connection_id(), query_id, state from information_schema.profiling where query_id=2;</sql>
           <rollback>
               <sql>alter table department drop column test2</sql>
           </rollback>
        </changeSet>

        <changeSet id="1-post" author="james">
          <preConditions onFail="HALT">
            <sqlCheck expectedResult="0">SELECT count(*) from profiling where state='copy to tmp table'</sqlCheck>
          </preConditions>
        </changeSet>

    </databaseChangeLog>

    There are two caveats:

    1. the changeset DDL statement will still have run, even if the precondition HALTs – they’re separate changesets, after all
    2. the rollback in “1” will not be executed, even if “1-post” HALTs.

    The workaround for those 2 issues is to combine the two techniques in a shell script:

    liquibase tag rel0
    liquibase update changeset.xml || {
        # fail the build pipeline to not propagate changeset to next stage
        # (ie. don't run in production)
        liquibase rollback rel0
        mysql -e 'alter table test.department drop column test2'
        exit 1
    }
    The above looks a little kludgy, but provides a stepping stone for the reader to customize in their particular environment. (The preConditions and bash script can be easily autogenerated with a Perl or Python script.)
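    For example, the repetitive “1-post” preCondition changesets could be stamped out by a small generator. A minimal sketch (the changeset IDs, author, and output filename are hypothetical placeholders):

    ```shell
    #!/bin/sh
    # Sketch: emit an "<id>-post" preConditions changeset for each DDL changeset,
    # halting the liquibase update if the ALTER fell back to a copy-to-tmp-table.
    # Changeset IDs, author, and output file are hypothetical placeholders.
    gen_postchecks() {
      for id in "$@"; do
        cat <<EOF
    <changeSet id="${id}-post" author="james">
      <preConditions onFail="HALT">
        <sqlCheck expectedResult="0">SELECT count(*) FROM profiling WHERE state='copy to tmp table'</sqlCheck>
      </preConditions>
    </changeSet>
    EOF
      done
    }

    gen_postchecks 1 2 3 > postchecks.xml
    ```

    The generated fragments can then be spliced into the changelog before running liquibase update.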

    An alternative to XML is using the Java API to set everything up.

    Please leave a comment if you have any suggestions or a Java API program.

    Posted in API Programming, MySQL, MySQL Cluster, Open Source, Oracle, Tech | Leave a comment

    Percona Live MySQL Conference Santa Clara 2014

    The Percona Live MySQL Conference was held once again in Santa Clara from April 1-4, 2014.

    Executive Summary:

    1. Percona hosted another excellent conference, with 1,150 attendees from 43 countries plus a vibrant exhibit hall.
    2. The overall themes that emerged this year were “What’s new in MySQL 5.6?” and “The rise of Galera Cluster.” Oracle delivered the 5.6 features it promised, but unfortunately didn’t ask production DBAs what they really needed (e.g. GTIDs require downtime to configure, and online ALTER doesn’t support throttling or background operation on slaves (SR 3-8856341908)).
    3. MySQL 5.7 promises about double the performance of 5.6, but note that the 5.7 micro-benchmarking effort hasn’t yet translated into a complete understanding of whole-database performance.
    4. The currently active branches are: Oracle 5.6/5.7, MariaDB 10.0/10.1, WebScaleSQL (Facebook, Google, LinkedIn and Twitter), Facebook 5.6 with deployable GTIDs, and Percona Server 5.6. (The version you want to migrate to is one based on MySQL 5.6.17 or later.)

    Severalnines booth. They create and support cluster and cloud database solutions. Photo credit: Steve Barker


    Wed. Keynotes

    Percona Live 2014 opening keynote with Percona CEO Peter Zaitsev
    Robert Hodges – Getting Serious about MySQL and Hadoop at Continuent
    (Continuent needs to pivot into another market as MySQL’s new built-in features displace their replication products.)
    ‘Raising the MySQL Bar’ with Oracle’s Tomas Ulin, VP of Engineering for MySQL, Oracle
    Adventures in MySQL at Dropbox, Renjish Abraham

    Wed. Talks

    Online schema changes for maximizing uptime, David Turner, Dropbox, Ben Black, Tango

    – MySQL 5.6 has online schema change capability, however there’s no way to throttle IO consumed during the operation and the single-threaded slave will lag
    – David has tested the ALTER ONLINE in MySQL 5.6.17 and will use it when ported to Percona Server
    – for now uses Percona Online Schema Change utility for its throttling feature.

    Be the hero of the day with the InnoDB Data recovery tool, Marco “The Grinch” Tusa and Aleksandr Kuzminsky, Percona Services

    – Percona has created tools to recover InnoDB data when you don’t have backups and would otherwise be out of business. Call them! :)

    Galera Cluster New Features, Seppo Jaakola, Codership

    – reviewed features in Galera Cluster versions 3 and 4
    – looking good.

    MySQL Cluster Performance Tuning, Johan Andersson

    - disable NUMA
    - echo 0 > /proc/sys/vm/swappiness
    - bind data node threads to CPUs
    - check interrupt distribution with cat /proc/interrupts
    - LDM threads = cores/2
    - TC threads = LDM/4
    - tune the redo log
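    The thread-sizing rules of thumb above reduce to simple arithmetic. A minimal sketch (the 16-core data node is a hypothetical example host):

    ```shell
    #!/bin/sh
    # Sketch of the data-node thread sizing rules above:
    #   LDM (Local Data Manager) threads = cores / 2
    #   TC  (Transaction Coordinator) threads = LDM / 4
    # A 16-core data node is assumed here purely for illustration.
    cores=16
    ldm=$((cores / 2))
    tc=$((ldm / 4))
    echo "LDM=$ldm TC=$tc"   # prints: LDM=8 TC=2
    ```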

    Practical sysbench, Peter Boros, Percona

    – prefers “latency” graph style with transparent dots vs. line charts
    – uses R and ggplot2 for graphing
    – attendees tried to guess SSD performance on Peter’s notebook for different block sizes, most were proven totally wrong by sysbench

    Birds of a Feather (BoF) Sessions

    “Meet MySQL Team (at Oracle)” BoF

    – discussion again this year about parallel query execution (same as at MariaDB BoF last year), with Peter Zaitsev also bringing it up again
    – discussion about raw partitions (belief is that they would be 20% more space-efficient and 30% faster, and would avoid endless Linux filesystem limitations and bugs)
    – internal “development roadmap” only extends about 12 months at a time, subject to customer demands
    – I griped about FK panic/data loss issues in MySQL Cluster 7.3.3. Tomas Ulin, Vice President, MySQL Engineering, said that was news to him. (See SR 3-8717994851 and SR 3-87646727311)
    – Mark Callaghan, Facebook, said he was working on MongoDB now, but requested named keys in flexible schema in MySQL.
    – Peter Zaitsev, Percona, said several clients are using GTIDs and they seem to work.
    – Oracle pleaded with users to drop MyISAM. I mentioned the main reason was that legacy systems used older compression methods, but InnoDB could be used since it has compression too
    – The Oracle MySQL Fabric project is an attempt to counter MongoDB’s automatic slave promotion.


    Thursday Keynotes

    ‘9 Things You Need to Know…’, Peter Zaitsev, Percona
    The Evolution of MySQL in the All-Flash Datacenter, Nisha Talagala, Fusion-IO
    MySQL, Private Cloud Infrastructure and OpenStack, Sean Chighizola, Big Fish Games
    Keynote Panel: The Future of Operating MySQL at Scale

    Thu. Talks

    Benchmarking Databases for Scale, Peter Boros and Kenny Gryp, Percona

    Question: “What is Percona’s secret to professional benchmarks?”
    Answer: “Benchmark absolutely everything multiple times, time permitting.”

    MySQL 5.7: Performance & Scalability Benchmarks, Dimitri KRAVTCHUK

    – comprehensive micro-benchmarking graphs of 5.7 to gain a deeper understanding of parts
    – the challenge remains: how to tune the whole database to perform well?

    Use Your MySQL Knowledge to Become an Instant Cassandra Guru, Robert Hodges, Continuent and Tim Callaghan, Tokutek

    – good comparison of relational data modelling and C* data modelling, lots of similarities
    – note that MariaDB has a Cassandra plugin

    RDS for MYSQL, Tips, Patterns and Common Pitfalls, Laine Campbell, Blackbird (formerly PalominoDB)

    Write Conflicts in Multi-Master Replication Topologies, Seppo Jaakola, Codership

    – it’s good to see that Codership is paying attention to the details of replication

    MySQL Community Awards

    Shlomi has a comprehensive post on this year’s winners.

    MySQL Lightning Talks (5 minutes each)

    Truncating Sub Optimal DBA Verbal Responses Vectors, David Stokes (Oracle)

    MySQL 5.6 Global Transaction IDs: Benefits and Limitations, Stephane Combaudon (Percona)


    Zero database downtime using the Federated storage engine and Replication, Prasad Mani (BBC)

    Scaling via adding a Table, Rick James (self)

    Rick knows some clever ways to optimize solutions with MySQL. He’s doing consulting now, so contact him.

    Extra Table Saves the Day: Slides

    No es ‘ano’, es ‘año’! A take on encoding in your DB, Ignacio Nin (Vivid Cortex)

    What Not to Say to the MySQL DBA, Gillian Gunson (Blackbird (formerly PalominoDB))
    “I’ll code around it.”
    “Stop micro-optimizing.”
    “Use the passive master for QA.”
    “MySQL is a toy database.”
    “This conference is a support group.”

    Hall of Shame, Shlomi Noach
    Triple active-replication in gaming anecdote: don’t do that.

    The bash slave-prefetch oneliner, Art van Scheppingen (Spil Games)

    Unsung Relay Log, Vishnu Rao, FlipKart
    Com_relaylog_dump for Tungsten and MySQL 5.5

    Unique User Count — Rollup, Rick James (self)

    Formula for user visit estimation by counting bits.
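    One way to read “counting bits” (my sketch of the general technique, not necessarily Rick’s exact formula): OR one bit per day into a per-user bitmask, then a popcount gives distinct visit days per user. The visit log below is hypothetical sample data:

    ```shell
    #!/bin/sh
    # Sketch: one bit per day of month in a per-user bitmask.
    # Duplicate visits on the same day set the same bit, so the
    # popcount yields DISTINCT visit days. Sample data is hypothetical.
    alice=0; bob=0
    setday() { eval "$1=\$(( $1 | (1 << $2) ))"; }   # set bit for day $2

    setday alice 1; setday alice 1; setday alice 2   # day 1 twice, day 2 once
    setday bob 5

    popcount() {  # count set bits in $1
      n=$1; c=0
      while [ "$n" -gt 0 ]; do c=$((c + (n & 1))); n=$((n >> 1)); done
      echo "$c"
    }
    echo "alice: $(popcount $alice) distinct days; bob: $(popcount $bob) distinct day(s)"
    ```

    The rollup table stores one small integer per user per month instead of one row per visit, which is the space win.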

    Logical Backups in the Cloud, Bill Karwin, Percona
    Backups for PHP designers
    PHP class Mysql/Dump

    How to Squat, Kyle Redinger (VividCortex, Inc)

    Iron DBA Replication Challenge, Attunity


    Friday Keynotes

    Percona CMO Terry Erisman opens the 3rd and final day of Percona Live 2014

    Keynote: OpenStack Co-Opetition, A View from Within, Boris Renski, Mirantis and OpenStack Board Member

    – one of the best conference keynotes ever, and a great primer on Open Source marketing … up there with the O’Reilly Open Source Conference keynote on the importance of Android – before it shipped.

    Friday Talks

    Global Transaction ID at Facebook, Evan Elias, Santosh Banda and Yoshinori Matsunobu, Facebook

    – just write your own MySQL branch if a feature is too hard to deploy :)

    R for MySQL DBAs, Ryan Lowe and Randy Wigginton, Percona

    – R has about 1,000 interesting sample datasets (demos included diamonds and cars)
    – good interface for quick graphing, not so great for complex programs
    – Percona uses R and the ggplot2 graphing package for most of the graphs you see now.

    MariaDB for Developers, Colin Charles, Chief Evangelist, MariaDB

    Closing Prize Drawing

    About 30 high-end gifts were handed out.

    Some nice prizes contributed by exhibitors, including Nexus 7 tablets, $250 AWS gift certificates, SQLyog and Monyog licenses, and a quad drone!


    The exhibits are one of my favorite things at the conference each year because of how strong the MySQL third-party community is.

    Some notable absences were Clustrix and Violin Memory, but those were offset by new exhibitors. Webyog was a sponsor, but I didn’t see a booth. PalominoDB changed their name to Blackbird, and appear to be offering DevOps as well as DBA services.

    And of course, as the organizers, Percona had a large, central spread. :)

    Thanks to the sponsors and exhibitors for making a conference like this financially possible.

    Facebook Debuts Web-Scale Variant Of MySQL

    Facebook’s Yoshinori Matsunobu on MySQL, WebScaleSQL & Percona Live
    Twitter’s Calvin Sun on WebScaleSQL, Percona Live
    Tweets about PerconaLive
    Percona Live MySQL Conference Highlights

    Posted in Cassandra, Cloud, Conferences, Linux, MySQL, MySQL Cluster, Open Source, Oracle, Perl, San Jose Bay Area, Storage, Tech | Leave a comment

    Cassandra Operations Checklist

    Most of the Cassandra rollouts I’ve heard about at conferences have been “Devopsed” – written by Dev and productionized by Dev, with hand-off to Operations long afterwards.

    That’s the opposite of how RDBMS projects are usually deployed in large companies.

    As Cassandra becomes more mature, this hand-off will occur earlier, soon after development ends.

    Here is a checklist for handing off a Cassandra database to Operations (I only consider non-trivial rings of 3 or more nodes in production with a full data set):

    Node impact is broken out in the last three columns (Performance, Space, Time/IOPs/BW).

    | Item | Comments | Performance | Space | Time/IOPs/BW |
    | --- | --- | --- | --- | --- |
    | Cassandra server version | should be exactly the same minor version across the cluster, except briefly during server updates | | | |
    | Tokens or vnodes? | needs to be configured before first start of the server | | | |
    | Cassandra client/connector version | Thrift or CQL? | | | |
    | Snitch name? Why? | several choices | | | |
    | Replication Factor (RF)? Why? | usually RF=3 for SoT* data; defined at keyspace level | | | |
    | Compaction method? Why? | Size or Level; defined at CF level | | | |
    | Read Consistency Level? Why? | Netflix recommends CL=ONE. ALL seldom makes sense. | | | |
    | Write Consistency Level? Why? | ALL seldom makes sense. | | | |
    | TTL? Why? | defined at row level | | | |
    | Expected average query latency | 10 ms is reasonable; 1 ms is tough | | | |
    | nodetool repair/scrub | needed weekly | yes | more space | more |
    | Bootstrapping a new node | | yes | | yes |
    | Java GC "stop the world" pauses | | yes | | yes |
    | Are there any wide columns? Do they get wider over time? | pathological case for Cassandra | yes | more space | more |
    | Backup | in case of application bug or a disaster; OpsCenter, Priam or custom | yes | slightly more for incremental backups, double for a local cold copy | more |
    | Restore | requires Cassandra node shutdown | yes | | |
    | If a storage volume fills, how to fix it? | especially a problem with multiple JBOD volumes, which fill unevenly | yes | less space | less |
    | If a storage volume fails, how to fix it? | | yes | less space | less |
    | What is the total data size now? Projected in 12 months? | affects most operations | yes | yes | yes |
    | What is the acceptable query latency? | affects network and hardware choices | | | |
    | What is the best maintenance window time each week? | | | | |
    | What are the business and practical SLAs? | | | | |
    | What training is needed for your Operations team? | Datastax Admin and Data Modelling classes (recommend most recent Cassandra version) | | | |
    | What partitioner is used? | OpsCenter only supports the random or Murmur3 partitioner for rebalancing | | | |
    | What procedures need to be written for your Operations team? | | | | |
    | What monitoring tools? | DSE or DCE/OpsCenter, nodetool, JConsole/jmxterm, Boundary, Nagios/Zabbix | | | |
    | What bugs have been encountered? Which ones still apply? | | | | |
    | What lessons can Devops share with the Operations team? | | | | |

    SoT = Source of Truth
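    On the consistency-level items in the checklist: a useful rule is that reads are strongly consistent whenever read replicas + write replicas > RF, since at least one replica then overlaps between every read and write. This is standard Cassandra quorum arithmetic, not something from the checklist itself; a minimal sketch assuming RF=3:

    ```shell
    #!/bin/sh
    # Sketch: R + W > RF guarantees at least one replica is common to
    # every read and write, giving strongly consistent reads. RF=3 assumed.
    rf=3
    check() {  # check <read replicas> <write replicas>
      if [ $(($1 + $2)) -gt "$rf" ]; then echo "R=$1 W=$2: consistent"
      else echo "R=$1 W=$2: may read stale data"; fi
    }
    check 2 2   # QUORUM reads + QUORUM writes
    check 1 1   # CL=ONE both ways (the Netflix-style latency trade-off)
    check 1 3   # ONE reads + ALL writes
    ```

    This is why CL=ONE is a deliberate trade of consistency for latency, per the Netflix recommendation above.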

    About Data Consistency in Cassandra
    ConstantContact techblog: Cassandra and Backups
    Do I absolutely need a minimum of 3 nodes/servers for a Cassandra cluster or will 2 suffice?

    Posted in Business, Cassandra, Cloud, Tech | Leave a comment

    Howto Add a New Command to the MySQL Server

    Adding a new statement or command to the MySQL server is not difficult.

    First, decide if you want to modify the server source code, or if a User-Defined Function (UDF) will meet your needs.

    Since I just added the SHUTDOWN server command, I thought it would be helpful to outline the steps needed to add a new command.


    You will need:

    1. some familiarity with C/C++ syntax and programming (like “The C Programming Language” by Kernighan and Ritchie)
    2. some familiarity with lex and yacc. (I read the Dragon Book a long time ago.)
    3. access to a linux account with cmake, gcc, make and bison packages.
    # CentOS
    yum install cmake gcc make bison
    # Ubuntu
    apt-get update
    apt-get install cmake gcc make bison

    # unpack the MySQL source code:

    tar zxvf - < mariadb-5.5.30.tar.gz

    # most of the files you need to modify are in this directory:

    cd mariadb-5.5.30/sql
    • sql_yacc.yy
    • sql_lex.h

    # add the token(s) (commands and arguments you think you will need) and verify the syntax:

    bison -v sql_yacc.yy

    # if you get warnings, fix %expect in sql_yacc.yy

    # cut-and-paste a code block from a command with similar syntax to implement your new command, and build a test version of MySQL

    # build your new server in a sandbox:

    cd mariadb-5.5.30
    cmake . -DCMAKE_INSTALL_PREFIX:PATH=/usr/local/mariadb-5.5.30 -DWITH_DEBUG=1
    make
    sudo make install

    # test your new server with 3 terminal windows:

    killall mysqld
    /usr/local/mariadb-5.5.30/bin/mysqld_safe --user=mysql --debug &
    tail -f  /tmp/mysqld.trace | grep Got &
    tail -f /var/log/mysqld.log &
    mysql -u root -p
    # login, then test your new command while watching the log and trace

    # read /var/log/mysqld.log and /tmp/mysqld.trace for errors and panics like this:

    Version: '5.5.30-MariaDB-debug'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
    mysqld: /home/james/mariadb-5.5.30/sql/ int mysql_execute_command(THD*): Assertion `0' failed.
    130515 11:25:19 [ERROR] mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.

    The above panic was caused by the SQLCOM_ switch falling through, because the new command was not defined yet.

    # When you’re done, write a test case:

    vi mysql-test/t/my_new_command.test

    # Create a patch file:

    mv mariadb-5.5.30 mariadb-5.5.30-new
    tar zxvf - < mariadb-5.5.30.tar.gz
    cd mariadb-5.5.30/sql
    for i in sql_yacc.yy sql_lex.h; do
       echo $i
       diff -u $i ../../mariadb-5.5.30-new/sql/$i >>patch.txt
    done
    # don't forget mysql-test/t/my_new_command.test

    # apply your patch file:

    patch -b < patch.txt

    # do a build and test your patch before distributing it.

    Easy peasy, right! :)

    Sergei Golubchik wrote on the MariaDB developers list: "Reserved words are keywords (listed in the sql/lex.h) that are
    not listed in the 'keyword' rule of sql_yacc.yy (and 'keyword_sp' rule, that 'keyword' rule includes)."

    How can I get the output of the DBUG_PRINT
    How to find shift/reduce conflict in this yacc file?
    MariaDB Contributor Agreement (MCA) Frequently Asked Questions
    wikipedia: diff

    MySQL Internals Manual
    XtraDB / InnoDB internals in drawing
    Overloading Procedures
    innodb_diagrams project
    Understanding MySQL Internals By Sasha Pachev (O'Reilly)
    DTrace can tell you what MySQL is doing
    MySQL C Client API programming tutorial
    MySQL 5.1 Class Index

    • IRC, #maria channel on Freenode
    • (ideas)
    • (search for unassigned tasks)

    Keywords: MariaDB, MySQL server programming, tutorial, patch.

    Posted in API Programming, Linux, MySQL, Open Source, Oracle, Tech, Toys | 3 Comments

    Patch to Add Shutdown Statement to MySQL MariaDB

    At the OSCON 2011 MariaDB Birds-of-a-Feather (BoF) session, I suggested adding a MySQL SHUTDOWN statement to Monty, which was written up as WL#232. Other databases have this feature, and it’s very handy when automating management of a cluster of MySQL servers.

    And at the Percona Live MySQL Conference 2013, Monty suggested to MariaDB BoF attendees that a good way to get a new feature added is to write a patch that a committer can start from.

    Phase 1

    So … I sat down last night and wrote the patch against MariaDB 5.5.30.

    Basically it meant teaching mysql’s lex/yacc files to parse “shutdown”, then calling the server’s existing kill_mysql() shutdown function.

    This code is released under the Open Source BSD-new License, according to the MariaDB Contributor Agreement.

    shutdown_0.1.patch.txt – MariaDB 5.5.30:

    ---	2013-03-11 03:29:13.000000000 -0700
    +++ /home/james/mariadb-5.5.30-new/sql/	2013-05-15 13:17:05.000000000 -0700
    @@ -1305,7 +1305,6 @@
       case COM_SHUTDOWN:
    @@ -1333,7 +1332,6 @@
       case COM_STATISTICS:
         STATUS_VAR *current_global_status_var;      // Big; Don't allocate on stack
    @@ -3736,6 +3734,31 @@
    +  case SQLCOM_SHUTDOWN:
    +  {
    +    // jeb - This code block is copied from COM_SHUTDOWN above. Since kill_mysql(void) {} doesn't take a level argument, the level code is pointless.
    +    // jeb - In fact, the level code should be removed and Oracle Database statements implemented: SHUTDOWN, SHUTDOWN IMMEDIATE and SHUTDOWN ABORT. See WL#232.
    +    status_var_increment(thd->status_var.com_other);
    +    if (check_global_access(thd,SHUTDOWN_ACL))
    +      break; /* purecov: inspected */
    +    enum mysql_enum_shutdown_level level;
    +    level= SHUTDOWN_DEFAULT;
    +    if (level == SHUTDOWN_DEFAULT)
    +      level= SHUTDOWN_WAIT_ALL_BUFFERS; // soon default will be configurable
    +    else if (level != SHUTDOWN_WAIT_ALL_BUFFERS)
    +    {
    +      my_error(ER_NOT_SUPPORTED_YET, MYF(0), "this shutdown level");
    +      break;
    +    }
    +    DBUG_PRINT("SQLCOM_SHUTDOWN",("Got shutdown command for level %u", level));
    +    my_eof(thd);
    +    kill_mysql();
    +    res=TRUE;
    +    break;
    +  }
    --- sql_yacc.yy	2013-03-11 03:29:19.000000000 -0700
    +++ /home/james/mariadb-5.5.30-new/sql/sql_yacc.yy	2013-05-15 11:12:03.000000000 -0700
    @@ -791,7 +791,7 @@
       Currently there are 174 shift/reduce conflicts.
       We should not introduce new conflicts any more.
    -%expect 174
    +%expect 196
        Comments for TOKENS.
    @@ -1645,6 +1645,7 @@
             definer_opt no_definer definer
             parse_vcol_expr vcol_opt_specifier vcol_opt_attribute
             vcol_opt_attribute_list vcol_attribute
    +        shutdown
     %type  call sp_proc_stmts sp_proc_stmts1 sp_proc_stmt
    @@ -1796,6 +1797,7 @@
             | savepoint
             | select
             | set
    +        | shutdown
             | signal_stmt
             | show
             | slave
    @@ -13715,6 +13717,17 @@
    +          SHUTDOWN
    +          {
    +            LEX *lex=Lex;
    +            lex->value_list.empty();
    +            lex->users_list.empty();
    +            lex->sql_command= SQLCOM_SHUTDOWN;
    +          }
    +        ;
               expr { $$=$1; }
             | DEFAULT { $$=0; }
    ---	2013-03-11 03:29:11.000000000 -0700
    +++ /home/james/mariadb-5.5.30-new/sql/	2013-05-15 03:07:00.000000000 -0700
    @@ -2173,6 +2173,7 @@
       case SQLCOM_GRANT:
       case SQLCOM_REVOKE:
       case SQLCOM_KILL:
    +  case SQLCOM_SHUTDOWN:
       case SQLCOM_PREPARE:
    ---	2013-03-11 03:29:14.000000000 -0700
    +++ /home/james/mariadb-5.5.30-new/sql/	2013-05-15 01:20:11.000000000 -0700
    @@ -3333,6 +3333,7 @@
       {"savepoint",            (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SAVEPOINT]), SHOW_LONG_STATUS},
       {"select",               (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SELECT]), SHOW_LONG_STATUS},
       {"set_option",           (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SET_OPTION]), SHOW_LONG_STATUS},
    +  {"shutdown",             (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHUTDOWN]), SHOW_LONG_STATUS},
       {"signal",               (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SIGNAL]), SHOW_LONG_STATUS},
       {"show_authors",         (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_AUTHORS]), SHOW_LONG_STATUS},
       {"show_binlog_events",   (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_BINLOG_EVENTS]), SHOW_LONG_STATUS},
    --- sql_lex.h	2013-03-11 03:29:13.000000000 -0700
    +++ /home/james/mariadb-5.5.30-new/sql/sql_lex.h	2013-05-15 01:19:17.000000000 -0700
    @@ -193,6 +193,7 @@
         When a command is added here, be sure it's also added in

    To apply:

    tar zxvf - < mariadb-5.5.30.tar.gz
    cd mariadb-5.5.30/sql
    patch -b < shutdown_0.1.patch.txt

    cd mariadb-5.5.30
    cmake . -DCMAKE_INSTALL_PREFIX:PATH=/usr/local/mariadb-5.5.30 -DWITH_DEBUG=1
    make
    sudo make install

    killall mysqld
    /usr/local/mariadb-5.5.30/bin/mysqld_safe --user=mysql --debug &
    tail -f  /tmp/mysqld.trace | grep Got &
    mysql -u root -p

    mysql client (with mysqld.log and mysql.trace entries overlaid):

    mysql> shutdown;
    ERROR 2013 (HY000): Lost connection to MySQL server during query
    mysql> 130515 13:20:38 mysqld_safe mysqld from pid file /var/run/mysqld/ ended


    T@4    : | | | >parse_sql
    T@4    : | | | <parse_sql
    T@4    : | | | >LEX::set_trg_event_type_for_tables
    T@4    : | | | <LEX::set_trg_event_type_for_tables
    T@4    : | | | >mysql_execute_command
    T@4    : | | | | >deny_updates_if_read_only_option
    T@4    : | | | | <deny_updates_if_read_only_option
    T@4    : | | | | >stmt_causes_implicit_commit
    T@4    : | | | | <stmt_causes_implicit_commit
    T@4    : | | | | SQLCOM_SHUTDOWN: Got shutdown command for level 16
    T@4    : | | | | >set_eof_status
    T@4    : | | | | <set_eof_status
    T@4    : | | | | >kill_mysql
    T@4    : | | | | | quit: After pthread_kill
    T@4    : | | | | <kill_mysql
    T@4    : | | | | proc_info: /home/james/mariadb-5.5.30/sql/  query end


    130515 13:20:08 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
    130515 13:20:08 InnoDB: !!!!!!!! UNIV_DEBUG switched on !!!!!!!!!
    130515 13:20:08 InnoDB: The InnoDB memory heap is disabled
    130515 13:20:08 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    130515 13:20:08 InnoDB: Compressed tables use zlib 1.2.3
    130515 13:20:08 InnoDB: Initializing buffer pool, size = 128.0M
    130515 13:20:08 InnoDB: Completed initialization of buffer pool
    130515 13:20:08 InnoDB: highest supported file format is Barracuda.
    130515 13:20:09  InnoDB: Waiting for the background threads to start
    130515 13:20:10 Percona XtraDB ( 5.5.30-MariaDB-30.1 started; log sequence number 1597945
    130515 13:20:10 [Note] Plugin 'FEEDBACK' is disabled.
    130515 13:20:10 [Note] Event Scheduler: Loaded 0 events
    130515 13:20:10 [Note] /usr/local/mariadb-5.5.30/bin/mysqld: ready for connections.
    Version: '5.5.30-MariaDB-debug'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
    130515 13:20:37 [Note] Got signal 15 to shutdown mysqld
    130515 13:20:37 [Note] /usr/local/mariadb-5.5.30/bin/mysqld: Normal shutdown
    130515 13:20:37 [Note] Event Scheduler: Purging the queue. 0 events
    130515 13:20:37  InnoDB: Starting shutdown...
    130515 13:20:38  InnoDB: Shutdown completed; log sequence number 1597945
    130515 13:20:38 [Note] /usr/local/mariadb-5.5.30/bin/mysqld: Shutdown complete
    130515 13:20:38 mysqld_safe mysqld from pid file /var/run/mysqld/ ended

    A possible test would be like this, but it would interfere with operation of the test mysqld instance:



    Phase 2

    My above patch applies cleanly within the existing MySQL shutdown framework, which implements a feature like Oracle Database's SHUTDOWN IMMEDIATE command.

    However, my patch is a Pyrrhic victory, since there's so much wrong with MySQL's existing shutdown framework that it will take an internals committer to sort it out.

    The shutdown framework is badly designed, if it was designed at all, since it fails the "does this feel programmed on purpose?" test, and in fact doesn't work reliably:

    1. Conceptually, there should be 3 Oracle Database-style SHUTDOWN options: WAIT, IMMEDIATE and ABORT. Implementing SHUTDOWN WAIT would mean intrusive changes to the MySQL source code, while SHUTDOWN ABORT would be easier to program, but at the risk of data integrity.
    2. the following bug reports describe a race condition between mysqld threads and the shutdown thread:

    I guess I'll have to pay myself the worklog bounty of $100. :)

    This is actually my second MySQL patch contribution. In 1997 or 1998 I submitted a patch for the installer, which was one of the most troublesome components at that time. Monty rewrote it, but I liked my version better.

    Update: Sergei Golubchik committed this patch to MariaDB 10.0.4 on 2013-06-25. Thanks, Sergei!

    MySQL's Missing Shutdown Statement
    Bug #63276: skip sleep in srv_master_thread when shutdown is in progress

    Posted in Linux, MySQL, Open Source, Oracle, OSCON, Tech | 1 Comment

    WWII Propaganda About Carrots and Night Vision

    The Smithsonian magazine has an interesting article on WWII propaganda about carrots and night vision.

    Besides discussing the cover story that credited carrots, rather than RADAR, for aerial combat success, I had no idea there was a World Carrot Museum. :)

    A WWII Propaganda Campaign Popularized the Myth That Carrots Help You See in the Dark

    Posted in Tech | Leave a comment