During Web 1.0, companies were very careful where they hosted their databases.
I worked at Netscape, and all databases were stored in the USA, even for international sites. Only partial caching was allowed offshore.
It was a simpler era: RDBMSs were centralized, with the occasional proxy, cache, or LDAP replica configured remotely.
Pre-9/11, that generally worked well as a default: the USA had some of the most permissive laws in the world, especially California, and followed the rule of law.
The reasons had to do with legal problems in other countries:
- legal jurisdiction
- taxation nexus
- local laws hostile to user-generated content (e.g. French and Austrian anti-Nazi rulings, Turkey's Article 301)
- privacy: regimes demanding complete copies of user databases
- corruption of court officials allowing competitors to access your databases locally
- overreaching trial discovery requests
Recently I had a conversation with some IT managers and a corporate lawyer at a Web 2.0 company, and was surprised to find them either ignorant of or dismissive of these issues.
Although the Feds generally don’t follow the rule of law post-9/11, an American company’s best choice for database jurisdiction is still the USA, along with a real-time slave hosted in Canada.
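In the MySQL terms of that era, such a cross-border replica is just standard master–slave replication pointed at a host in the other jurisdiction. A minimal sketch, run on the Canadian slave (the hostname, user, and binlog coordinates here are hypothetical placeholders):

```sql
-- On the Canadian slave, point replication at the US master.
-- master.example.com, the 'repl' user, and the binlog position
-- are assumptions; substitute your own values.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'replica-password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;

START SLAVE;
```

The slave then applies the master’s writes in near real time, giving you a current copy of the data under a second, friendlier jurisdiction.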
As we start to build globally distributed databases with tools like Cassandra, it’s worth considering the legal and privacy issues of multi-jurisdictional databases.
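With Cassandra, jurisdiction can be expressed directly in the replication strategy: `NetworkTopologyStrategy` lets you pin replica counts per datacenter, so data never lands in a datacenter (and hence a jurisdiction) you haven’t named. A sketch, where the keyspace and datacenter names (`user_data`, `us_east`, `ca_central`) are hypothetical and must match what your snitch reports:

```sql
-- Keep all replicas in US and Canadian datacenters only;
-- no copy of the data is placed in any third jurisdiction.
CREATE KEYSPACE user_data
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'us_east': 3,
    'ca_central': 2
  };
```

The flip side is that anything *not* listed gets zero replicas, so a legal review of where data may live translates fairly directly into this one statement.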
Further reading:
- Facebook’s Swedish data centre will be subject to Snoop Law
- Stoel.com: E-Commerce and Internet Law Topics
- wsj.com: Feds Can Get Twitter Users’ Data Without Warrant, Judge Says
- PATRIOT Act clouds picture for tech