Category Archives: design

AWD Advance14: The New Face Of Work

I’m spending the last session of the last day at DST’s AWD Advance conference with Arti Deshpande and Karla Floyd as they talk about how their more flexible user experience came to be. They looked at the new case management user experience, which is research-driven and requires very little training to use, and compared it to the processor workspace, which looks kind of like someone strapped the Windows API onto a green screen.

To start on the redesign of the processor workspace, they did quite a bit of usability evaluation, based on a number of different channels, and laid out design principles and specific goals that they were attempting to reach. They focused on 12 key screens and the navigation between them, then expanded to the conceptual redesign of 66 screens. They’re currently continuing to research and conceptualize, and doing iterative usability testing; they actively recruited usability testers from their customers in the audience during the presentation. They’ve worked with about 20 different clients on this, through active evaluations and visits but also through user forums of other sorts.

We saw a demo of the new screens, which started with a demo of the existing screens to highlight some of the problems with their usability, then moved on to the redesigned worklist grid view. The grid column order/presence is configurable by the user, and saved in their profile; the grid can be filtered by a few attributes such as how the work item was assigned to the worklist, and whether it is part of a case. Icons on the work items indicate whether there are comments or attachments, and if they are locked. For a selected work item, you can also display all relationships to that item as a tree structure, such as what cases and folders are associated with it. Reassigning work to another user allows adding a comment in the same action. Actions (such as suspending a work item) can be done from the worklist grid or from the banner of the open work item. The suspend work item action also allows adding a comment and specifying a time to reactivate it back to the worklist – combining actions into a single dialog like this is definitely a time-saver and something that they’ve obviously focused on cleaning up. Suspended items still appear in the worklist and searches but are in a lighter font until their suspense expires – this saves adding another icon or column to indicate suspense.

Comments can be previewed and pinned open by hovering over the work item icon in the worklist, and the comments for a work item can be sorted and filtered. Comments can be nested; this could cause issues for customers who are generating custom reports from the comments table in the database, at least one of whom was in the audience. (For those of you who have never worked with rigid legacy systems, know that generating reports from comment fields is actually quite common, with users being trained to enter some comments in a certain format in order to be picked up in the reports. I *know*.)

The workspace gains a movable vertical divider, allowing the space to be allocated completely to the worklist grid, or completely to the open work item; this is a significant enhancement since it allows the user to personalize their environment to optimize for what they’re working on at the time.

The delivery goal for all of this is Q4 2014, and they have future plans for more personalization and improved search. Some nice improvements here, but I predict that the comments thing is going to be a bit of a barrier for some customers.

That’s it for the conference; we’re all off to the Hard Rock Café for a private concert featuring the Barenaked Ladies, a personal favorite of mine. I’ll be quiet for a few days, then off to bpmNEXT in Monterey next week.

Upcoming Webinars with Progress Software

Blogging around here has been sporadic, to say the least. I have several half-finished posts about product reviews and some good BPM books that I’ve been reading, but I have that “problem” that independent consultants sometimes have: I’m too busy doing billable work to put up much of a public face, both with work with vendors and some interesting end-customer projects.

Today, I’ll be presenting the second in a series of three webinars for Progress Software, focused on how BPM fits with more traditional application development environments and existing custom applications. Progress continues to integrate the Savvion and Corticon acquisitions into their product set, and wanted to put forward a webinar series that would speak to their existing OpenEdge customers about how BPM can accelerate their application development without having to abandon their existing custom applications. I really enjoyed the first of the series, because Matt Cicciari (Progress product marketing manager) and I had a very conversational hour – except for the part where he lost his voice – and this time we’ll be joined by Ken Wilmer, their VP of technology, to dig into some of their technology a bit more. My portion will focus on generic aspects of combining BPM and traditional application development, not specific to the Progress product suite, so this may be of use even if you’re not using Progress products but want to understand how these seemingly disparate methodologies and technologies come together.

We’re doing today’s webinar twice: once at 11am Eastern to cover Europe and North America, then a repeat at 7pm ET (that’s 11AM tomorrow in Sydney) for the Asia Pacific region or those of you who just didn’t get enough in the first session. It will be live both times, so I will have the chance to think about what I said the first time around, and completely change it. ;-)

You can sign up for today’s session here, plus the next session on February 29th that will include more about business rules in this hybrid environment.

Salesforce’s Peter Coffee On The Cloud

I just found my notes from a Salesforce.com lunch event that I went to in Toronto back in April, where Peter Coffee spoke enthusiastically while we ate three lovingly-prepared courses at Bymark, and was going to just pitch them out but found that there was actually quite a bit of good material in there. Not sure how I managed to write so much while still eating everything in front of me.

This came just a few days after the SF.com acquisition of Radian6, a move that increased the Canadian staff to 600. SF has about 1,500 customers in Canada, a few of whom were in the room that day. Their big push with these and all their customers is on strategic IT in the cloud, rather than just cost savings. One of the ways that they’re doing this is by incorporating process throughout the platform, allowing it to become a global user portal rather than just a collection of silos of information.

Coffee discussed a range of cloud platform types:

  • Infrastructure as a service (IAAS) provides virtualization, but persists the old IT and application development models, combining the weaknesses of all of them. Although you’ve outsourced your hardware, you’re still stuck maintaining and upgrading operating systems and applications.
  • Basic cloud application development, such as Google apps and their add-ons.
  • SF.com, which provides a full application development environment including UI and application support.

The old model of customization, that most of us are familiar with in the IT world, has led to about 1/3 of all enterprise software running on the current version, and the rest stuck with a previous version, unable to do the upgrade because the customization has locked it in to a specific version. This is the primary reason that I am so anti-customization: you get stuck on that old version, and the cost of upgrading is not just the cost of upgrading the base software, but of regression testing (and, in the worst case, redeveloping) all the customization that was done on top of the old version. Any wonder that software maintenance ends up costing 10x the original purchase cost?

The SF.com model, however, is an untouchable core code base sitting on managed infrastructure (in fact, 23 physical instances with about 2,000 Dell servers), and the customization layer is just an abstraction of the database, business logic and UI so that it is actually metadata but appears to be a physical database and code. In other words, when you develop custom apps on the SF.com platform, you’re really just creating metadata that is fairly loosely coupled with the underlying platform, and resistant to changes therein. When security or any other function on the core SF.com platform is upgraded, it happens for all customers; virtualization or infrastructure-as-a-service doesn’t have that, but requires independent upgrades for each instance.
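
To make that a bit more concrete, here’s a rough sketch (entirely my own, not Salesforce’s actual implementation, and with made-up object and field names) of what metadata-driven customization looks like: the tenant’s “custom object” is nothing more than metadata interpreted at runtime, so upgrading the core engine never touches tenant-specific code.

```python
# Hypothetical sketch of metadata-driven customization (not Salesforce's actual
# implementation): a tenant's "custom object" is just rows of metadata,
# interpreted at runtime instead of altering the physical schema.

CUSTOM_OBJECT_DEFS = {
    # tenant -> object name -> field definitions (all metadata, no DDL)
    "acme": {
        "PolicyReview": {
            "fields": {"policy_id": "string", "review_date": "date", "score": "number"},
        }
    }
}

# One generic physical store holds every tenant's custom records.
GENERIC_DATA_STORE = []  # rows: {"tenant": ..., "object": ..., "values": {...}}

def insert_record(tenant, obj_name, values):
    """Validate against the tenant's metadata, then store in the shared table."""
    fields = CUSTOM_OBJECT_DEFS[tenant][obj_name]["fields"]
    unknown = set(values) - set(fields)
    if unknown:
        raise ValueError(f"fields not defined in metadata: {unknown}")
    GENERIC_DATA_STORE.append({"tenant": tenant, "object": obj_name, "values": values})

# Upgrading the core engine (insert_record, the shared store, etc.) touches no
# tenant code: the customization lives entirely in CUSTOM_OBJECT_DEFS.
insert_record("acme", "PolicyReview", {"policy_id": "P-123", "score": 4})
```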

Creating an SF.com app doesn’t restrict you to just your app or that platform, however: although SF.com is partitioned by customer, it allows linkages between partners through remapping of business objects, leveraging data and app sharing. Furthermore, you can integrate with other cloud platforms such as Google, Amazon or Facebook, and with on-premise systems using Cast Iron, Boomi and Informatica. A shared infrastructure, however, doesn’t compromise security: the ownership metadata is stored directly with the application data to ensure that direct database access by an administrator doesn’t allow complete access to the data; it’s these layers of abstraction that help make the shared infrastructure secure. Coffee did punt on a question from the (mostly Canadian financial services) audience about having Canadian financial data in the US: he suggested that it could be encrypted, possibly using an add-on such as CipherCloud. They currently have four US data centers and one in Singapore, with plans for Japan and the EU; as long as customers can select the data center country location that they wish (such as on Amazon), that will solve a lot of the problem, since the EU privacy laws are much closer to those in Canada. However, recent seizures of US-owned offshore servers bring that strategy into question, and he made some comments about fail-overs between sites that make me think that they are not necessarily segregating data by the country specified by the customer, but rather picking the one that optimizes performance. There are other options, such as putting the data on a location-specific Amazon instance, and using SF.com for just the process parts, although that’s obviously going to be a bit more work.
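
The ownership-metadata point is worth a small illustration. This is only my own toy sketch of the general pattern, not SF.com internals: every row carries its owning org and user, and every read path, administrative or not, goes through the same check.

```python
# Hedged illustration (my own sketch, not SF.com internals) of storing ownership
# metadata alongside each row, so that every read path -- even an administrative
# one -- must pass through the same access check.

rows = [
    {"owner_org": "org_a", "owner_user": "alice", "data": {"amount": 100}},
    {"owner_org": "org_b", "owner_user": "bob",   "data": {"amount": 250}},
]

def read_rows(requesting_org, requesting_user, is_admin=False):
    """Return only rows the caller is entitled to see; raw table access is never exposed."""
    visible = []
    for row in rows:
        if row["owner_org"] != requesting_org:
            continue  # tenant partition: other orgs' data is invisible
        if is_admin or row["owner_user"] == requesting_user:
            visible.append(row["data"])
    return visible

print(read_rows("org_a", "alice"))          # [{'amount': 100}]
print(read_rows("org_a", "mallory", True))  # an org_a admin still can't see org_b's row
```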

Although he was focused on using SF.com for enterprises, there are stories of their platform being used for consumer-facing applications, such as Groupon using the Force.com application development platform to power the entire deals cycle on their website. There’s a lot to be said for using an application development environment like this: in addition to availability and auto-upgrading, there’s also built-in support for multiple mobile devices without changing the application, using iTunes for provisioning, and adding Chatter for collaboration to any application. Add the new Radian6 capabilities to monitor social media and drive processes based on social media interactions and mentions, and you have a pretty large baseline functionality out of the box, before you even start writing code. There are native ERP system and desktop application connectors, and a large partner network offering add-ins and entire application suites.

I haven’t spent any time doing evaluation specifically of Salesforce or the Force.com application development platform (except for a briefing that I had over a year ago on their Visual Process Manager), but I’m a big fan of building applications in the cloud for many of the reasons that Coffee discussed. Yes, we still need to work out the data privacy issues; mostly due to the potential for US government intervention, not hackers. More importantly, we need to get over the notion that everything that we do within enterprises has to reside on our own servers, and be built from the metal up with fully customized code, because that way madness lies.

BPM and Application Composition Webinar This Week

I’m presenting a webinar tomorrow together with Sanjay Shah of Skelta – makers of one of the few Microsoft-centric BPM suites available – on Tuesday at noon Eastern time. The topic is BPM and application composition, an area that I’ve been following closely since I asked the question five years ago: who in the BPM space will jump on the enterprise mashup bandwagon first? Since then, I’ve attended some of the first Mashup Camps (1, 2 and 4) and watched the emerging space of composite applications collide with the world of BPM and SOA, to the point where both Gartner and Forrester consider this important, if not core, functionality in a BPM suite.

I’ll be talking about the current state of composite application development/assembly as it exists in BPM environments, the benefits you can expect, and where I see it going. You can register to attend the webinar here; there will be a white paper published following the webinar.

CASCON Keynote: 20th Anniversary, Big Data and a Smarter Planet

With the morning workshop (and lunch) behind us, the first part of the afternoon is the opening keynote, starting with Judy Huber, who oversees the 5,000 people at the IBM Canada software labs, which includes the Centre for Advanced Studies (CAS) technology incubation lab that spawned this conference. This is the 20th year of CASCON, and some of the attendees have been here since the beginning, but there are a lot of younger faces who were barely born when CASCON started.

To recognize the achievements over the years, Joanna Ng, head of research at CAS, presented awards for the high-impact papers from the first decade of CASCON, one each for 1991 to 2000 inclusive. Many of the authors of those papers were present to receive the award. Ng also presented an award to Hausi Müller from University of Victoria for driving this review and selection process. The theme of this year’s conference is smarter technology for a smarter planet – I’ve seen that theme at all three IBM conferences that I’ve attended this year – and Ng challenged the audience to step up to making the smarter planet vision into reality. Echoing the words of Brenda Dietrich that I heard last week, she stated that it’s a great time to be in this type of research because of the exciting things that are happening, and the benefits that are accruing.

Following the awards, Rod Smith, VP of IBM emerging internet technologies and an IBM fellow, gave the keynote address. His research group, although it hasn’t been around as long as CAS, has a 15-year history of looking at emerging technology, with a current focus on “big data” analytics, mobile, and browser application environments. Since they’re not a product group, they’re able to take their ideas out to customers 12-18 months in advance of marketplace adoption to test the waters and fine-tune the products that will result from this.

They see big data analytics as a new class of application on the horizon, since they’re hearing customers ask for the ability to search, filter, remix and analyze vast quantities of data from disparate sources: something that the customers thought of as Google’s domain. Part of IBM’s BigInsights project (which I heard about a bit last week at IOD) is BigSheets, an insight engine for enabling ad hoc discovery for business users, on a web scale. It’s like a spreadsheet view on the web, which is a metaphor easily understood by most business users. They’re using the Hadoop open source project to power all of the BigInsights projects.
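
BigSheets itself is an IBM product, so I can only sketch the generic Hadoop-style map/reduce pattern underneath it: here’s a toy Python version of the kind of term-frequency aggregation over unstructured documents that a business user would trigger from the spreadsheet-like front end.

```python
# A generic map/reduce sketch, not BigSheets itself: count term frequencies
# across a pile of unstructured documents, the sort of ad hoc aggregation that
# would be pushed down to Hadoop at web scale.
from collections import Counter
from itertools import chain

documents = [
    "new phone looks great, thinking of buying one",
    "battery life on this phone is terrible",
    "just bought the new phone and love it",
]

def map_phase(doc):
    """Emit (term, 1) pairs for one document -- the 'map' step."""
    return [(word.strip(",.").lower(), 1) for word in doc.split()]

def reduce_phase(pairs):
    """Sum the counts per term -- the 'reduce' step."""
    totals = Counter()
    for term, count in pairs:
        totals[term] += count
    return totals

term_counts = reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
print(term_counts.most_common(3))
```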

It wouldn’t be a technical conference in 2010 if someone didn’t mention Twitter, and this is no exception: Smith discussed using BigSheets to analyze and visualize Twitter streams related to specific products or companies. They also used IBM Content Analytics to create the analysis model, particularly to find tweets related to mobile phones with a “buy signal” in the message. They’ve also done work on a UK web archive for the British Library, automating the web page classification and making 128 TB of data available to researchers. In fact, any organization that has a lot of data, mostly unstructured, and wants to open it up for research and analysis is a target for these sort of big data solutions. It stands to reason that the more often you can generate business insights from the massive quantity of data constantly being generated, the greater the business value.
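
IBM Content Analytics builds far richer linguistic models than anything I could show here, but as a toy illustration of the idea, here’s a keyword-based version of flagging phone-related tweets that also contain a purchase-intent phrase; the terms and phrases are invented for the example.

```python
# Toy illustration only: IBM Content Analytics uses proper linguistic analysis,
# not a keyword match. This just shows the shape of a "buy signal" rule over
# tweets about mobile phones; all terms below are made up for the example.
import re

PRODUCT_TERMS = re.compile(r"\b(phone|smartphone|handset)\b", re.IGNORECASE)
BUY_SIGNALS = re.compile(r"\b(thinking of buying|about to buy|just bought|upgrading to)\b",
                         re.IGNORECASE)

def has_buy_signal(tweet):
    """True if the tweet mentions a phone and contains a purchase-intent phrase."""
    return bool(PRODUCT_TERMS.search(tweet) and BUY_SIGNALS.search(tweet))

tweets = [
    "thinking of buying the new phone this weekend",
    "my phone screen cracked again, ugh",
    "upgrading to a new handset as soon as my contract is up",
]
print([t for t in tweets if has_buy_signal(t)])
```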

Next up was Christian Couturier, co-chair of the conference and Director General of the Institute of Information Technology at Canada’s National Research Council. NRC provides some of the funding to IBM Canada CAS Research, driven by the government’s digital economy strategy which includes not just improving business productivity but creating high-paying jobs within Canada. He mentioned that Canadian businesses lag behind other countries in adoption of certain technologies, and I’m biting my tongue so that I don’t repeat my questions of two years ago at IT360 where I challenged the Director General of Industry Canada on what they were doing about the excessively high price of broadband and complete lack of net neutrality in Canada.

The program co-chairs presented the award for best paper at this show, on Testing Sequence Diagram to Colored Petri Nets Transformation, and the best student paper, on Integrating MapReduce and RDBMSs; I’ll check these out in the proceedings as well as a number of other interesting looking papers, even if I don’t get to the presentations.

Oh yeah, and in addition to being a great, free conference, there’s birthday cake to celebrate 20 years!

Fidelity Investments’ Evolution To Product-Focused Software Delivery

Darrell Fernandes, SVP of advisory solutions technology at Fidelity Investments, finished up the morning at Forrester’s BP&AD Forum with a discussion of their IT transformation: how they changed their software delivery process to become more like a software product company. They created “fences” around their projects in terms of centers of excellence and project management offices, with the idea that this would drive excellence on their projects; what they found is that the communication overhead started to bog them down, and that the silos of technology expertise became obsolete as technologies became more integrated. This is a really interesting counterpoint to Medco’s experience, where they leveraged the centers of excellence to create a more agile enterprise.

For Fidelity, the answer was to structure their software delivery to look more like that of a software product company, rather than focusing specifically on projects. They looked at and introduced best practices not just from other organizations like themselves, but also from software companies such as Microsoft. Taking a broader product portfolio view, they were able to look for synergies across projects and products, as well as take a longer-term, more disciplined view of the product portfolio development. A product vision maps to the product roadmap, then to the release plans, then ties into the project high-level plans. They’ve created an IT product maturity model, moving through initiation, emerging, defined, managed and optimizing; Fernandes admitted that they don’t have any in the optimizing category, but described how they’ve moved up the maturity scale significantly in the past few years. They also started as an IT-led initiative before coming around to a business focus, and he recommends involving the business from the start, since their biggest challenges came when they started the business engagement so far along in their process.

They’ve had some cultural shifts in moving to the concept of IT products, rather than IT providing services via projects to the business, and disengaged the project/product cycle from annual IT budgets. Also, they drove the view of business capabilities that span multiple IT products, rather than a siloed view of applications that tended to happen with a project and application-oriented view. Next up for them is to align the process owners and product owners; he didn’t have any answers yet about how to do that, since they’re just starting on the initiative. They’re a long way from being done, but are starting to shift from the mode of IT process transformation to that of it just being business as usual.

Interesting view of how to shift the paradigm for software development and delivery within large organizations.

Bridging Process Modeling and IT Solutions Design at adidas

Eduardo Gonzalez of the adidas Group talked about how they are implementing BPM within their organization, particularly the transition from business process models to designing a solution, which ties in nicely with the roundtable that I moderated yesterday. The key issue is that process models are created for the purpose of modeling the existing and future business processes, but the linkage between that and requirements documents – and therefore on to solution design – is tenuous at best. One problem is with traceability: there is no way to connect the process models to the thick stack of text-based requirements documents, and from the requirements documents to the solution modules; this means that when something changes in a process model, it’s difficult to propagate that through to the requirements and solution design. Also, the requirements leave a bit too much to the developers’ imaginations, so often the solution doesn’t really meet the requirements.

The question becomes how to insert the business process models into the software development lifecycle. Different levels of the process model are required, from high-level process flows to executable workflows; they wanted to tie this in to their V-cycle model of solution design and development, which appears to be a modified waterfall model with integrated testing. Increasingly granular process models are built as the solution design moves from requirements and architecture to design and implementation; the smaller and more granular process building blocks, translated into solution building blocks, are then reassembled into a complete solution that includes a BPMS, a rules engine, a portal, and several underlying databases and other operational systems that are being orchestrated by the BPMS.

adidas V-BPM Cycle Reference Model

Gonzalez has based some of their object-driven project decomposition methods on Martyn Ould’s Business Process Management: A Rigorous Approach, although he found some shortcomings to that approach and modified it to suit adidas’ needs. Their approach uses business and solution objects in an enterprise architecture sort of approach (not surprising when he mentioned at the end of the presentation that he is an enterprise architect), moving from purely conceptual object models to logical object models to physical object models. Once the solution objects have been identified, they model each object’s states through its lifecycle, and object handling cases (analogous to use cases) that describe how the system handles an object through its full lifecycle, including both system and human interaction. He made the point that you have to have the linkage to master data; this is becoming recognized as a critical part of process applications now, and some BPMS vendors are starting to consider MDM connectivity.
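
As a minimal sketch of the object lifecycle idea (my own example, not adidas’ actual models): the lifecycle is an explicit set of states and allowed transitions, and an object handling case would then walk an object through those states from creation to completion.

```python
# A minimal lifecycle-state sketch (my example, not adidas' models): states and
# allowed transitions are declared explicitly, and the object refuses any move
# the lifecycle model doesn't permit.
ORDER_LIFECYCLE = {
    "created":   {"validated", "rejected"},
    "validated": {"fulfilled", "cancelled"},
    "fulfilled": set(),   # terminal
    "rejected":  set(),   # terminal
    "cancelled": set(),   # terminal
}

class Order:
    def __init__(self, order_id):
        self.order_id = order_id
        self.state = "created"

    def transition(self, new_state):
        """Move to new_state only if the lifecycle model permits it."""
        if new_state not in ORDER_LIFECYCLE[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

order = Order("A-42")
order.transition("validated")
order.transition("fulfilled")    # ok
# order.transition("cancelled")  # would raise: fulfilled is a terminal state
```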

The end solution includes a portal, BPMS, BRMS, ESB, MDM, BI and back-end systems – a fairly typical implementation – and although the cycle for moving from process model to solution design isn’t automated, at least they have a methodology that they use to ensure that all the components are covered and in synchronization. Specific models at particular points in their cycle include models from multiple domains, including process and data. They did a proof of concept with this methodology last year, and are currently running a live project using it, further refining the techniques.

Their cycle currently includes the model and execute phases of a standard BPM implementation cycle; next, they want to take on the monitor and optimize phases, and add modeling techniques to derive KPIs from functional and non-functional requirements. They also plan to look at more complex object state modeling techniques, as well as how adaptive case management fits into some of their existing concepts.

I posed a question at the end of my roundtable yesterday: if a tool existed that allowed for the definition of the process model, user interface, business rules and data model, then generated an executable system from that, would there still be a need for written requirements? Once we got past the disbelief that such tools exist (BPMS vendors – you have a job to do here), the main issue identified was one of granularity: some participants in the process modeling and requirements definition cycle just don’t need to see the level of detail that will be present in these models at an executable level. Obviously, there are still many challenges in moving seamlessly from conceptual process models to an executable process application; although some current BPMS provide a partial solution for relatively simple processes, this typically breaks down as processes (and related integrations) become more complex.

Salesforce Releases Force.com Visual Process Manager

A couple of months back, there was a private discussion amongst the Enterprise Irregulars about who Salesforce.com was going to buy next, and there was a thought in the back of my mind that it might be a BPM vendor. Since that time, two BPM vendors have been acquired, but not by Salesforce: instead, they launched their own Force.com Visual Process Manager for designing and running processes in the cloud.

However, they seem determined to keep it a secret: first, the Visual Process Manager Demo video on YouTube has been made private (that’s just a screen snapshot of the cached video below), and second, I was unable to get a call back in response to the technical questions that I had during the demo.

Salesforce Visual Process Manager video: now missing from YouTube

For those of you unfamiliar with options for Salesforce application development (as I mostly was before this briefing), Force.com is the platform originally built for customizing the Salesforce CRM offering, which became a necessity for larger customers requiring customization of data, UI and business logic. Customers started using it as a general business application development and delivery platform, and there are now 135,000 custom applications on Force.com, ranging from end-user-created databases and analytics, to sophisticated order management and e-commerce systems that link directly to customers and trading partners, and can update data from other Salesforce applications. In the past four years, they’ve gone from offering transactional applications to entire custom websites, and are now adding collaboration with Chatter.

As you might guess, there are processes embedded in many applications; classic software development might view these as screen flows, that is, the process for a person to move from one screen to another within an application. Visual Process Manager came about for exactly that purpose: customers were building departmental enterprise applications with process (screen flow) logic, but were having to use a lot of code in order to make it happen.

Link between form and process map

Salesforce acquired Informavores for their process design and execution engine, and that became Visual Process Manager. This is primarily human-centric BPM; it’s not intended as a system-centric orchestration platform, since most customers already have extensive middleware for integration, usually on-premise and already integrated with their Force.com apps, so they don’t need that capability. That means that although a process step can call a web service or pretty much anything else within their existing Force.com platform, asynchronous web service calls are not supported; this would be expected to be done by that middleware layer.

The process designer allows you to create a process map, then create a form that is tied to each human-facing step in the process map. Actions are bound to the buttons on the forms, where a form may be a screen for internal use, or a web page for a public user to access. You can also add in automated steps and decisions, as well as calling subprocesses and sending emails. It uses a fairly simple flowchart presentation for the process map, without swimlanes. There isn’t a lot of event handling that I could see, such as handling an external event that cancels an insurance quote process. There’s a process simulator, although that wasn’t demonstrated.
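
Visual Process Manager is a graphical designer, so this is only a rough sketch of the screen-flow concept underneath it, with invented step and form names: each human-facing step points at a form, and each form button is bound to an action that determines the next step.

```python
# Rough sketch of the screen-flow idea only, not Visual Process Manager itself:
# each human-facing step points at a form, and each form button is bound to an
# action that chooses the next step. All names are invented for illustration.
SCREEN_FLOW = {
    "enter_quote":  {"form": "QuoteEntryForm",
                     "buttons": {"submit": "review_quote", "cancel": "end"}},
    "review_quote": {"form": "QuoteReviewForm",
                     "buttons": {"approve": "send_offer", "reject": "end"}},
    "send_offer":   {"form": "OfferForm",
                     "buttons": {"done": "end"}},
}

def run_flow(start, button_presses):
    """Walk the flow from a start step, following one button press per form."""
    step = start
    for press in button_presses:
        if step == "end":
            break
        node = SCREEN_FLOW[step]
        print(f"show {node['form']}, user clicks '{press}'")
        step = node["buttons"][press]
    return step

run_flow("enter_quote", ["submit", "approve", "done"])
```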

Visual Process Manager is priced at $50/user/month for Force.com Enterprise and Unlimited Edition customers, although it’s not clear if that’s just for the application developers, or if there’s a runtime licensing component as well.

Similar to what I said about SAP NetWeaver BPM, this isn’t the best BPMS around – in fact, in the case of Force.com, it’s little more than application screen flow – but it doesn’t have to be the best in class: it only has to be the best BPMS for Force.com customers.

Designing compelling customer-facing user experiences #BTF09

For the last breakout of the day before the final keynote, I attended Mike Gualtieri’s session on designing customer-facing user interfaces. He started with the idea that application developers have to be involved in user experience design, and not just leave it to the designers (which is, of course, exactly what we did in the bad old days of development when there was no such thing as a user experience designer). Forrester defines user experience as “users’ perceptions of the usefulness, usability, and desirability of a Web application based upon the sum of all their direct and indirect interactions with it”, and proposes that a great UX is useful, usable and desirable.

User experience impacts how your customers feel about you, and it’s also not just about the interfaces that the customer works with directly: a second-hand interface can also impact the customer experience, as you know if you’ve ever waited ages while a hotel desk clerk clicked their way through a complex interface in order to check you in. A good UX can increase purchases, retain customers and attract more customers; leaving it to chance hurts your conversion rates, alienates customers and increases your development costs due to redesign and redevelopment.

Gualtieri argues that UX design is Lean (although you could argue that only good UX design is Lean), and sets out best practices for good UX design:

  • Become your users, by listening to their needs, observing them in their natural habitat, creating personas, and empathizing with them. Users typically don’t articulate their needs fully or accurately so it’s not sufficient to just listen to them, but they will demonstrate them if you watch how they do their work. This type of user research is not the same as gathering requirements from business stakeholders; remember the Henry Ford quote: “If I asked people what they wanted, they would have said faster horses”. Forrester uses personas in their own materials – for example, representing an application development manager, complete with picture and name – and I’m seeing some companies such as Global 360 use these for BPMS user interface design.
  • Design first, and understand constraints and potential areas of change as well as the different personas that you discovered in your user research. Keep in mind that you have to serve business goals by serving user goals. Create rough prototypes first, and don’t rush into development or lock into a design too soon. There is some amount of art in UX design, so don’t assume that tools can do it for you. Keep the basic principles in mind: useful, usable and desirable.
  • Trust no one: test your designs. It doesn’t matter how many experts review the designs; for some features, there is no better review than testing the UX with a range of intended users. Remember that this is not just about usability, it’s also about usefulness and desirability.
  • Inject UX design into your software development life cycle. Everyone on the team should understand why UX design is important, and be incented to help create great UX. UX design should be part of your development process, and requires someone on the team to own the UX design efforts. You still need to use the same techniques discussed in the other best practices rather than doing the design in isolation from the users, but integrating UX design into the development team will improve the overall software design.

He finished with the ideas that your development efforts are essentially wasted if the user experience isn’t done right, but it doesn’t have to add a lot of time or money to your project. Good UX design is the mark of a great application development team.

Can packaged applications ever be Lean? #BTF09

Chip Gliedman, George Lawrie and John Rymer participated in a panel on packaged applications and Lean.

Rymer argued that packaged apps can never be Lean, since most are locked down, closed engines where the vendor controls the architecture, they’re expensive and difficult to upgrade, they include more functions than customers use, they provide a single general UI for all user personas, and each upgrade includes more crap that you don’t need. I tend to be on his side in this argument about some types of apps (as you might guess about someone who used to write code for a living), although I’m also a fan of buy over build because of that elusive promise of a lower TCO.

Gliedman argued the opposite side, pointing out that you just can’t build the level of functionality that a packaged application provides, and there can be data and integration issues once you abandon the wisdom of a single monolithic system that holds all your data and rules. I tend to agree with respect to functionality, such as process modeling: you really don’t want to build your own graphical process modeler, and the alternative is hacking your own process together using naked BPEL or some table-driven kludge. Custom coding also does not guarantee any sort of flexibility, since many changes may require significant development projects (if you write bad code, that is), rather than a packaged app that may be more configurable.

It’s never a 100% choice between packaged apps and custom development, however: you will always have some of each, and the key is finding the optimal mix. Lean packaged apps tend to be very fit-to-purpose, but that means that they become more like components or services than apps: I think that the key may be to look at composing apps from these Lean components rather than building Lean from scratch. Of course, that’s just service-oriented architecture, albeit with REST interfaces to SaaS services rather than SOAP interfaces to internal systems.
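
A minimal sketch of what that composition looks like in practice, with entirely hypothetical endpoints standing in for real SaaS APIs: the application is mostly glue between small REST services rather than new functionality built in-house.

```python
# Sketch of composing an app from Lean REST components. The endpoints below are
# made up for illustration; any real SaaS APIs would stand in for them.
import requests

def get_customer(customer_id):
    # hypothetical CRM-as-a-service endpoint
    return requests.get(f"https://crm.example.com/api/customers/{customer_id}").json()

def get_open_invoices(customer_id):
    # hypothetical billing-as-a-service endpoint
    return requests.get("https://billing.example.com/api/invoices",
                        params={"customer": customer_id, "status": "open"}).json()

def customer_summary(customer_id):
    """Compose two narrow services into one view, rather than building either in-house."""
    customer = get_customer(customer_id)
    invoices = get_open_invoices(customer_id)
    return {"name": customer.get("name"), "open_invoice_count": len(invoices)}
```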

There are cases where Lean apps are completely sufficient for purpose, and we’re seeing a lot of that in the consumer Web 2.0 space. Consider Gmail as an alternative to an Exchange server (regardless of whether you use Outlook as a desktop client, which you can do with either): less functionality, but for most of us, it’s completely sufficient, and no footprint within an organization. SaaS, however, doesn’t necessarily mean Lean. Also, there are a lot of Lean principles that can be applied to packaged application deployment, even if the app itself isn’t all that Lean: favoring modular applications; using open source; and using standards-based apps that fit into your architecture. Don’t build everything, just the things that provide your competitive differentiation where you can’t really do what you need in a packaged app; for those things where you are doing the same as every other company, suck it up and consider a packaged app, even if it’s bulky.

Clearly, Gliedman is either insane or a secret plant from [insert large enterprise vendor name here], and Rymer is an incurable coder who probably has a ponytail tucked into his shirt collar. :) Nonetheless, an entertaining discussion.