The last session before lunch was by Robert Wiseman, CTO of Sabre Holdings — interesting that the two customer keynotes this morning were both by travel-related companies. They have the same problems as many other organizations, in that they have to deal with large numbers of transactions and compete on pricing, but they’re doing it at a much higher volume and with a much greater abandonment rate than most other companies: think about how many times you use Travelocity (one of their companies) to search for prices versus how often you actually book the travel.
Wiseman talked about how it’s necessary to focus on the differentiators: their business is differentiated by the quality and quantity of content that they provide to their customers, and the level of efficiency that enables the customers to locate and consume the content that’s right for them. Everything else, as he says, is just plumbing. In that plumbing, they push towards a “cookie cutter architecture”, where technology is commoditized, which in turn makes for easier build and test cycles, and improves both time to market and TCO.
They design for technology obsolescence by staying as vendor agnostic as possible, and abstracting the technology layers.
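To illustrate what abstracting the technology layers can look like in practice, here’s a minimal sketch (all names and data are hypothetical, not Sabre’s actual design): application code depends only on an abstract interface, so the vendor product behind it can be swapped out without touching the callers.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration of a vendor-agnostic abstraction layer:
# application code depends on this contract, never on a vendor's API.
class MessageQueue(ABC):
    @abstractmethod
    def publish(self, topic: str, payload: str) -> None: ...

    @abstractmethod
    def consume(self, topic: str) -> list[str]: ...

class InMemoryQueue(MessageQueue):
    """Stand-in implementation; a vendor-backed class would satisfy
    the same contract, making the exit strategy a one-line swap."""
    def __init__(self) -> None:
        self._topics: dict[str, list[str]] = {}

    def publish(self, topic: str, payload: str) -> None:
        self._topics.setdefault(topic, []).append(payload)

    def consume(self, topic: str) -> list[str]:
        return self._topics.pop(topic, [])

def book_trip(queue: MessageQueue) -> None:
    # Application code names only the abstraction, not the vendor.
    queue.publish("bookings", "PNR-12345")

queue = InMemoryQueue()
book_trip(queue)
print(queue.consume("bookings"))  # ['PNR-12345']
```

The point of the pattern is that obsolescence (or a vendor exit) is absorbed at one seam rather than rippling through the application layer.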
They also design for failure: since they run 20,000 transactions per second, 24×7, all technology will eventually fail, and the fail-overs must be obvious to the operational support staff but transparent to users. The goal, in the case of failure, is to continue limping along in some fashion rather than a total meltdown.
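The “limp along rather than melt down” idea can be sketched as a degraded-mode fallback: fail loudly toward operations, quietly toward the user. This is an invented example (the fare engine, routes, and prices are made up), not Sabre’s implementation.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("fares")

# Hypothetical stale-but-usable fallback data for degraded mode.
CACHED_FARES = {("YYZ", "LHR"): 850.00}

def query_live_fares(origin: str, dest: str) -> float:
    # Simulate the live fare engine being down.
    raise ConnectionError("fare engine unreachable")

def get_fare(origin: str, dest: str) -> float:
    try:
        return query_live_fares(origin, dest)
    except ConnectionError:
        # Obvious to operational support staff...
        log.warning("live lookup failed for %s-%s; serving cached fare",
                    origin, dest)
        fare = CACHED_FARES.get((origin, dest))
        if fare is None:
            raise  # nothing to degrade to; surface the failure
        # ...but transparent to the user, who still gets a price.
        return fare

print(get_fare("YYZ", "LHR"))  # 850.0
```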
They design for flexibility, using XML-based APIs and web services against some rather ancient mainframe technology; however, they’re now redesigning some of their web services to be less granular (so as to not invoke 30 million web service calls per day) based on tracking consumer behaviour on the website. This is an interesting concept for designing the right granularity of web services: implement your best guess, tending to be more granular, then watch the behaviour and rebuild them into less granular services.
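The consolidation step he describes — rolling fine-grained services into a coarser one once you’ve observed behaviour — amounts to a server-side facade. A minimal sketch, with invented service names standing in for the real web services:

```python
# Hypothetical fine-grained service stubs; in the real system each
# would be a separate web service call (a network round trip).
def get_flight(trip_id: str) -> dict:
    return {"flight": "AC858"}

def get_hotel(trip_id: str) -> dict:
    return {"hotel": "Airport Inn"}

def get_car(trip_id: str) -> dict:
    return {"car": "compact"}

def get_itinerary(trip_id: str) -> dict:
    """Coarse-grained facade: if tracking shows clients always call all
    three together, aggregate them server-side so the client makes one
    call instead of three."""
    result: dict = {}
    for part in (get_flight, get_hotel, get_car):
        result.update(part(trip_id))
    return result

print(get_itinerary("T-001"))
```

Starting granular and coarsening from observed usage, as he suggests, is safer than guessing coarse boundaries up front, since a too-coarse service is harder to split than fine ones are to merge.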
As you might imagine given their transaction volumes, they’re very focussed on performance testing, and on designing to facilitate performance testing by standardizing hardware (e.g., blade servers) and software (e.g., DBMS) so that it’s possible to test a full range of situations. He stressed how important it is to consider performance testing at design time, not just as an afterthought.
In the Q&A, he talked more about their standardization and reuse efforts: pretty thorough at the infrastructure and even the middleware layers, but not so easy at the application layers. They’ve standardized on Linux, which one audience question dismissed as “open source shareware”; that was a bit funny, since they’re running a supported version of Linux, not pirating software from the internet or something. He also made an interesting comment about how the best strategy for staying vendor independent when you have a single source is to have a really good exit strategy for that vendor’s products: don’t stay with a product just because it’s too hard to migrate away from it.