Shallow vs. Deep Knowledge

The EDS Fellows’ Next Big Thing blog today discusses how business applications continue moving towards less custom coding and more off-the-shelf reusable vendor components, and the impact that has on an integrator’s knowledge of those components. It’s interesting that some of the best minds at this large SI are pointing out that their portion of any particular job is likely to continue to shrink, something that I wrote about last week, although they don’t discuss how EDS or other large SIs are going to fill in the gaps in their past “build everything” business model.

The point of their post, however, is how someone working on a business application can gain sufficient knowledge to understand the strengths and weaknesses of a given vendor component when they haven’t seen the source code. They go on to provide a scientific method for gaining a deeper knowledge of a component without access to the source code, but their entire argument is based on old-style mainframe integration (which, to be fair, was/is EDS’ sweet spot), where it was fairly common to have access to vendors’ source code.

I have to say, welcome to the real world: I’ve been doing integration for over 15 years, have a very deep knowledge of a few vendors’ products, plus a shallower knowledge of a bunch of other products, and I’ve never seen a line of vendor source code. Personally, I can’t think of very many cases where access to the source code would have improved the end result; as any good software QA team can tell you, you don’t need to see the code in order to determine the behaviour and boundaries of a component.
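Here’s a minimal sketch of what that black-box approach can look like in practice. The vendor_component function and its limits are hypothetical stand-ins for whatever the real component exposes; the point is that you probe around suspected boundaries from the outside and record what you observe, no source code required:

```python
# A minimal sketch of black-box boundary probing: exercising a component's
# documented (and undocumented) limits from the outside. The component here
# is a stand-in; in practice it would be the vendor's API or adapter.

def vendor_component(batch_size: int) -> str:
    """Stand-in for an opaque vendor call with an undocumented upper limit."""
    if batch_size < 1 or batch_size > 1000:
        raise ValueError("batch size out of range")
    return f"processed {batch_size} items"

def probe_boundaries(candidate_limits: list[int]) -> dict[int, str]:
    """Record observed behaviour at and around suspected boundary values."""
    observations = {}
    for size in candidate_limits:
        try:
            observations[size] = vendor_component(size)
        except Exception as exc:          # note what fails, not just what works
            observations[size] = f"rejected: {exc}"
    return observations

if __name__ == "__main__":
    # Probe just below, at, and above the suspected limits.
    for size, outcome in probe_boundaries([0, 1, 999, 1000, 1001]).items():
        print(f"{size:>5}: {outcome}")
```

In practice, you run this sort of probe against the vendor’s actual interface, note where the behaviour changes, and treat those observed boundaries as constraints in your integration design.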

Furthermore, their scientific method doesn’t include a vital component: vendor relationships. If you’re building a significant business on a specific vendor’s products, you have to establish and maintain a relationship with them so as to have relatively easy access to their internal technical resources, the people further behind the customer support front line. Having done this with a couple of vendors in the past (and then being accused of being in bed with them for my efforts), I know that this is a key contributor to gaining the requisite deep knowledge for a successful integration.

Is anyone thinking about the users?

Every once in a while (okay, maybe more often than that), I’ll see some piece of crappy user interface design and have a private little rant about it. Since I’ve been designing UIs since before it was called “user experience”, and am a heavy user of a large number of systems with different UIs, I have some idea of what works and what doesn’t.

Often, this happens when I’m in a retail environment and the store employee is fighting through a variety of screens to achieve what should be a common and easily-accessible task. I really have the sense that the software designers didn’t bother to consider usability because they knew that the users would be trained on the software: it’s not web-based consumer software that can be abandoned in favour of a different communications channel, it’s part of the person’s job, and they have to learn to use it.

Occasionally, this does happen with web-based software that I use as a consumer, such as banking websites. When I used to travel almost full-time for business, I got into the habit of doing everything online: if a supplier (bank, phone, whatever) couldn’t give me a way to check and pay my account online, then I went elsewhere. That philosophy has served me well over the years, and I still stick by it.

I bank with one of the large Canadian banks, and I was commenting yesterday on how there are some really stupid, albeit minor, design flaws in their “account download” screens that cost me a few extra clicks every day when I download my account information. They might think that a few clicks don’t mean much, but I used to design UIs for transaction processing staff at financial institutions, and we squeezed out every keystroke in order to maximize their efficiency. Why can’t web UI designers accept that some (many?) of their users care about efficiency and want to use the fewest possible keystrokes (and even fewer mouse movements) to get through “chores” such as online banking? I just want to get in, download the transactions and get out as quickly as possible; I have no desire to linger on their site and check out some fancy UI widget.

Although I’m not picking on this particular bank, since every bank that I have dealt with has similar (or worse) problems, I have a few other bones to pick with their systems people. I applied for a registered investment account online with them recently, but because I was transferring in assets from another institution, I took the completed application form to the bank instead of mailing it in, so that I could get the account number right away and initiate the transfer. Although I could retrieve the application online by its unique ID, the person at the bank who assisted me had to key in all of the information over again, because there was no way for him to pull up the already-completed application form from their own systems and finish opening the account. That was a total waste of my time (if I had known that, I wouldn’t have spent the time completing the application online in the first place, then had to wait while he re-keyed it) and a total waste of the bank employee’s time, which costs them money and customer goodwill.

The latest in the continuing saga came this morning, in response to my request that they discontinue my monthly brokerage statements by snail mail since I’ve been downloading the PDF versions from their website for more than three years:

Regrettably, due to our current internal platform, we do not offer the option to discontinue paper-based statements. However, this feature is on our agenda for consideration for future system enhancements.

Waste of paper, waste of energy printing the statements, damage to the environment from the trucks used to ship all this paper around, waste of my time opening and shredding the statement, and again, loss of customer goodwill. All this from a bank that rakes in a couple of billion in profits every year.

What it all adds up to is IT departments that are not focussed on their customers’ needs, whether those customers are internal or external. With so many people using the systems created by these IT departments, how can they continue to justify the philosophy that the users will just put up with bad software? Most IT departments don’t even think about the fact that the business side of the organization is their customer, as well as potentially external customers, and if they don’t serve those customers adequately, they may find themselves outsourced out of existence.

More on ancient engineering

Good to see that I’m not the only blogger slacking off by taking European vacations these days, and trying to compensate by blogging about ancient engineering feats: John Reynolds ponders Pisa’s leaning tower by exploring the bond between Renaissance engineers and all of us implementing IT systems these days:

  • Renaissance engineers were asked (or forced) to build on foundations that they knew were flawed
  • Renaissance engineers had to deal with “legacy systems”
  • Renaissance engineers had to implement “quick-fixes” that made the original problem worse
  • Renaissance engineers had to expend great effort over many years to patch and maintain defective projects (instead of starting over)

He echoes my sentiment somewhat by hoping that something of his will become as significant as Pisa’s tower someday.

Think big, start small

I watched an SAP presentation today about their NetWeaver platform, and although the product was only of peripheral interest to me, I love their philosophy on how to get started on a project: think big, start small.

I can’t even count how many projects I’ve seen fail, or miss their targets significantly, due to over-reaching on the first phase. Usually, someone gets all excited about the technology, and before you know it, they’re trying to implement the 8th wonder of the world in Phase I. Schedules slip, but even worse, vision slips: often, no one is left with a clear idea of what is to be accomplished, or the path to getting there. [Of course, the converse is just as bad: the pilot (a.k.a. Project In Lots Of Trouble), where someone hacks together a system without proper design or testing and it becomes the cornerstone of a future legacy system, but that’s a story for another day.]

This type of scope creep is especially prevalent on BPM projects, since it’s so (conceptually) easy to just add another step to the initial process, then another, and another. What starts out as a simple process with 2 human touch-points using an out-of-the-box interface and 3 system touch-points using standard adapters becomes a morass of custom interfaces and extraneous exception paths. Without fail, the biggest argument that I ever have with anyone on a BPM project is about keeping the first phase small so as to get something into production sooner.
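To make “small” concrete, here’s a minimal sketch of the kind of first-phase scope I mean: two human touch-points on a stock work-item interface and three system touch-points through standard adapters. The step names and the toy engine are purely illustrative, not any particular BPM product’s API:

```python
# Rough sketch of a deliberately small first-phase process: two human
# touch-points (out-of-the-box work item UI) and three system touch-points
# (standard adapters). A real BPM engine adds routing, queues and audit;
# this only illustrates how little scope phase one needs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    kind: str                         # "human" or "system"
    action: Callable[[dict], dict]    # what the step does to the case data

def run_process(steps: list[Step], case: dict) -> dict:
    """Push the case through each step in order."""
    for step in steps:
        print(f"[{step.kind:<6}] {step.name}")
        case = step.action(case)
    return case

phase_one = [
    Step("Retrieve account record", "system", lambda c: {**c, "account": "on file"}),
    Step("Review request",          "human",  lambda c: {**c, "reviewed": True}),
    Step("Update core system",      "system", lambda c: {**c, "updated": True}),
    Step("Approve exceptions",      "human",  lambda c: {**c, "approved": True}),
    Step("Send confirmation",       "system", lambda c: {**c, "notified": True}),
]

if __name__ == "__main__":
    print(run_process(phase_one, {"case_id": "A-100"}))
```

Anything beyond a handful of steps like these, or any step that needs a custom interface, is a candidate for Phase II, not Phase I.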

Of course, as a designer, I believe in getting some amount of the design work done up front: you have to understand the overall scope and the required functionality to provide a framework for the work, but you also have to carve off a reasonable first phase that won’t take too long and will provide a useful system when implemented. In the case of BPM projects, if you can’t implement that first something inside of six months, there’s something wrong with what you’re doing.

Think big, start small.

Testing for real life

I watched the movie K-19: The Widowmaker on TV last night; it’s about a Russian nuclear submarine on its maiden voyage in 1961, where pretty much everything goes wrong. In the midst of watching reactor failures and other slightly less catastrophic mishaps, I started thinking about software testing. I’ve seen software that exhibited the functional equivalent of a reactor failure: a major point of failure that required immediate shutdown for repairs. Fortunately, since I have worked primarily on back-office BPM systems for financial services clients over the years, the impact of these catastrophic system failures is measured in lost efficiencies (time and money) from having to revert to paper-based processes, not in human lives.

When I owned a professional services company in the 90’s, I spent many years being directly responsible for the quality of the software that left our hands and was installed on our clients’ systems. In the early days, I did much of the design, although that was later spread over a team of designers, and I like to think that good design led to systems with a low “incident” rate. That’s only part of the equation, however. Without doubt, the single most important thing that I did to maximize the quality of our product was to create an autonomous quality assurance and testing team that was equivalent in rank (and capabilities) to the design and development teams, and had the power to stop the release of software to a client. Because of this, virtually all of our “showstopper” bugs occurred while the system was still in testing, saving our clients the expense of production downtime, and maintaining our own professional reputation. Although we always created emergency system failure plans that would allow our client to revert to a manual process, these plans were rarely executed due to faults in our software, although I did see them used in cases of hardware and environmental failures.
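The “power to stop the release” part can almost be stated mechanically. Here’s a minimal sketch of that rule as a release gate, assuming a hypothetical defect tracker with severity labels; the only rule is that an open showstopper blocks the release, with no override:

```python
# Sketch of a QA release gate: any open showstopper defect blocks the release.
# The defect list and severity labels are hypothetical stand-ins for whatever
# the team's tracker actually records.
from dataclasses import dataclass

@dataclass
class Defect:
    id: str
    severity: str      # e.g. "showstopper", "major", "minor"
    open: bool

def release_approved(defects: list[Defect]) -> bool:
    """Return False if any open showstopper exists; print the blockers."""
    blockers = [d for d in defects if d.open and d.severity == "showstopper"]
    for d in blockers:
        print(f"release blocked by {d.id} ({d.severity})")
    return not blockers

if __name__ == "__main__":
    found_in_test = [
        Defect("DEF-101", "minor", open=True),
        Defect("DEF-102", "showstopper", open=True),   # caught in testing, not production
    ]
    print("Release approved:", release_approved(found_in_test))
```

The real version of this lived in people and process rather than in code, but the rule was exactly that simple.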

When I watched Liam Neeson’s character in K-19 try to stop the sea trials of the sub because it wasn’t ready, and be overruled for political reasons, I heard echoes of so many software projects gone wrong, so many systems put into production with inadequate testing despite a QA team’s protests. But not on my watch.