BPM Think Tank Day 1: Jim Adamczyk

The second keynote of the day was Jim Adamczyk of Accenture on how standards play a critical role in creating value with BPM. He said that they have about 40 current projects that are focussed on BPM — the discipline of creating process-centric business and IT architectures — in addition to those doing “low-level workflow”, although it’s not completely clear where the distinction lies. They’re early enough with all of these projects that he couldn’t even list client names, which means to me that Accenture is a bit late to the game here.

He moved on to talk about the value of standards, both notational and serialization, covering much the same territory as I did in a webinar recently: notational standards like BPMN allow users to move between different modelling tools more easily, and serialization/interchange standards make it easier to move processes from one system to another.

He made some great points about how changes are specified: the tendency is for business to actually specify the system change (e.g., add this function to this screen) rather than focus on their business process and KPIs — I struggle with this constantly with my customers, and have to keep reminding the business side to state their requirements, not try to design the system. The problem is that IT has been letting them do this for years, either because it’s easier not to have to learn enough about the business to do the specifications and design based on actual requirements, or because it effectively passes the buck for any mismatch between requirements (what the business needs) and specifications (what the system does) to the business side. But I digress.

Adamczyk stated that a client’s need for standards depends on their entry point to BPM, although I’m not clear what he meant by this. He said that IT almost always drives standards and that business rarely wants to implement standards; I completely disagree with this in the case of BPMN, since there are significant tangible benefits to the business side from having everyone use the same modelling notation, and I have few business-side clients who don’t recognize this. I agree that the business side doesn’t explicitly care about XPDL or BPDM or BPEL or whatever is being used for serialization, but they will start caring if it means that they can’t do round-tripping between the modelling and execution environments. However, he was deep in the weeds talking about WSDL and LU6.2, so I think that he and I have different views of BPM standards. Since he went on to talk about how Oracle Fusion is one of the most commonly used BPM platforms amongst their client base, I think that we have different views of BPM, too.

Then he made the comment that if you use one of the BPM suites (like Lombardi or Appian), you probably don’t care about BPM standards, which couldn’t be further from the truth. Many companies use a separate process modelling tool even though they use a BPMS, so both notational and interchange standards are critical. They’re even important between tools from the same vendor, such as Lombardi’s Blueprint process discovery tool and their TeamWorks BPM suite, which use BPDM for process model interchange. And there’s the advantage of hiring a new analyst who already knows BPMN, even if they don’t know the particular BPMS that you’re using: because that standard has become so widespread, the training requirements are reduced.

He does have a future view of a perfect world enabled by standards, one that includes federated orchestration, consistent policy and governance, dynamic and predictive infrastructure, and consistent methodology/training/tools, ranging from consistency on one platform to coverage of all platforms.

Although an engaging speaker, Adamczyk seemed to spend a lot of time apologizing for things that might be missing or inaccurate in his presentation: according to him, he doesn’t really know a lot about BPM standards, nor about the utilities vertical (the industry of his unnamed client example). He also said that he’s here as a proxy for CIOs (this is billed as a “CIO keynote”) rather than as an actual CIO. Enough, already: tell us what you do know, not what you don’t know.

BPM Think Tank Day 1: Paul Harmon

Phil Gilbert kicked off the morning with welcome and logistics before turning it over to Paul Harmon, who gave a keynote entitled “Does the OMG have any business getting involved in business process management?” I love a little controversy first thing in the morning.

He started out with a fairly standard view of the history of BPM and process improvement, from Rummler-Brache and TQM in the 80’s to BPR in the 90’s to BPM in the 00’s. He pointed out that BPM has become a somewhat meaningless term, since it means process improvement, the software used to automate processes, a management philosophy of organizing businesses around their processes (the most recent Gartner viewpoint) and a variety of other things.

He broke down BPM into enterprise level, process level and implementation level concerns (with a nice pyramid graphic), and gave some examples of each. For example, at the enterprise level, we have frameworks such as SCOR (for supply chain) and high-level organizational issues such as the Business Process Maturity Model (BPMM); Harmon questions whether OMG should be involved at this level since its primary focus is on technology standards. Process-level concerns are more about modelling, documenting and improving processes, and spreading that process culture throughout the organization. Implementation-level concerns include the automation of processes, including execution and monitoring, plus the training required to support these new processes.

He made an interesting distinction between stable processes, which need to be efficient and productive, and dynamic processes, which need to be flexible. Processes that are newer or need to be changed frequently are in the dynamic range; in my opinion, these tend to be the processes that are competitive differentiators for an organization. IBM has recently thrown the concept of “value nets” into the mix as an alternative to value chains, but Harmon feels that both are valid concepts: possibly using value chains for stable processes, which might even be candidates for outsourcing, and value nets for more dynamic processes.

He also made a distinction between process improvement, process redesign and process reengineering, a division that I find a bit false since it’s more of a spectrum than he shows.

There was an interesting bit on model-driven architecture (MDA) and how it moves from platform-independent models (in BPMN) to platform-specific models (also in BPMN) to implementation (e.g., J2EE); for example, there may be parts of a process modelled at the platform-independent level that are never automated, hence aren’t directly mapped to the platform-specific level.

He put forward the idea that process is where business managers and IT meet, and that different organizations may have the implementation level driven by either the business side or the IT side, and that there’s often poor coordination at this level.

He then discussed BPMS and came up with yet another taxonomy: integration-centric, employee-centric, document-centric, decision-centric and monitoring-centric. Do we need another way to categorize BPMS? Are these divisions all that meaningful, since the vendors all keep jostling for space in the segment that they think the analysts are presenting as most critical? More importantly, Harmon sees that although the BPM suite vendors (those that combine process execution/automation with modelling, BAM, rules and all the other shiny things) are leading the market now, the platform vendors (IBM, Microsoft, etc.) will grow to dominate it in years to come. I’m not sure that I agree with that unless those platform vendors seriously improve their offerings, which are currently disjointed and much less functional than the BPM suites.

Harmon’s slides will be available under OMG-BPM on the BPTrends site. There’s definitely some good stuff in here, particularly in the standards and practices that fit into each level of the pyramid.

Good thing that I’m blogging offline in Windows Live Writer, since the T-mobile connectivity keeps dropping, and isn’t smart enough to keep a cookie to stay logged in, but requires a new login each time that its crappy service cuts out. Posting may come in chunks since it will likely require me to dash out to the lobby to get a decent signal.

Why Facebook beats LinkedIn

I’ve been a big LinkedIn fan in the two years or so that I’ve been using it, but there are a couple of things that really bother me about it. First, until recently, you couldn’t remove someone from your list of contacts yourself; you had to email LinkedIn customer support to make it happen. It happened pretty quickly, but I wonder how many contacts were left languishing on lists because most people are too busy/lazy to email and have them taken off. The second thing is the completely opaque process that you have to go through to start a LinkedIn group. Members of the Toronto BarCamp community, also known as TorCamp, have very active online lives, and many of them are on my LinkedIn contact list. I thought that a LinkedIn TorCamp group would be a great idea, and when someone told me that they had tried and were turned down, I figured that I could do better, and I applied on the LinkedIn site to start a (free) TorCamp group (I point out the free part because I suspect that the paid groups might be given much better service). A long wait ensued; then I received an email telling me that some information was missing, and that I should reapply. I did, and now (months later), still nothing. I’ve given up on LinkedIn groups; they’re just too damned hard, and social networking should make it easy to connect, not hard.

Facebook, on the other hand, is being seen as a replacement for LinkedIn by many, and although I’ve primarily been using LinkedIn for professional/business contacts and Facebook for personal contacts, there’s been quite a bit of crossover with business contacts finding me and inviting me to be their Facebook friend.

Ever since Facebook opened up its platform a few weeks back for developers to create add-on applications, it looks like it might sweep the popularity contest, especially if LinkedIn stays with the walled garden/not-so-social network philosophy. In Toronto, which held the record for the biggest number of Facebook users until London overtook it last week, we’re even having a Facebook Camp.

Mashup Camp IV: Speed Geeking 2 and wrap-up

It’s taken me a couple of days to get my notes transcribed from Mashup Camp’s last speed geeking session, since I’ve spent the weekend frolicking around in San Francisco. I really have to learn to do this before I kill all my brain cells on a weekend…

I saw far fewer speed geeking sessions than the previous day, in part because some of them just didn’t bother to participate in the second round. I’d like to point out to the demonstrators that if you can’t manage to wrap up your predetermined 6-minute conversation even when a siren goes off in the room to signal the start of the next round, don’t expect me to believe that you could ever deliver anything on schedule.

Here’s what I saw:

QEDwiki: Not sure what the mashup was called, but it was built on IBM’s QEDwiki and combined Google maps with Upcoming.org (or should I say upcoming.yahoo.com) events, allowing you to search for events by subject/keyword, plot them on a map, then click through directly to the Upcoming event page. As with other QEDwiki sites that I’ve seen, this was multi-pane, with the top pane containing an RSS feed of events matching the search criteria, and the bottom pane containing the map. Future plans include adding Eventful and other event listings.

Cold Call Assistant: Another QEDwiki mashup, this one combining Salesforce.com data with a variety of other sources. You create a sales campaign on Salesforce as usual, which consolidates a list of contacts and their details, and this mashes it up with competitor information, news about the selected competitor or IBM (this was created by an IBM’er), local weather, restaurants and golf courses. The idea is that it provides context for a conversation that you’re about to have with the customer that you’re cold-calling for this sales campaign; why they picked golf courses rather than strip clubs, I’m not sure. 🙂 There’s no feedback to Salesforce.com, that is, if anything comes from this call, you’d have to go back to Salesforce.com manually and enter in the data.

Best Price Food Shopping: An interesting idea for a mashup, which combines per-product price feeds (such as for milk or bread), fetched and parsed with Javascript, with Google maps to plot out where to buy your groceries. There’s some colour coding for which store is cheapest and which is most expensive, but this whole idea relies on the stores providing feeds for the underlying data, which isn’t currently happening. Useful for people with lots of kids who care about the price of milk, which isn’t me — I do much of my shopping in Toronto’s Chinatown, where I buy fruits and vegetables for pocket change.
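
Since no stores actually publish these price feeds yet, here’s only a rough sketch of the comparison step described above, with the per-store prices stubbed in as plain dictionaries rather than parsed from RSS; the store names, products and prices are all made up for illustration.

```python
# Sketch of the cheapest/most-expensive comparison described above. The real
# mashup would parse one RSS feed per product; here the data is just stubbed.

price_feeds = {
    "milk": {"Store A": 3.99, "Store B": 3.49, "Store C": 4.29},
    "bread": {"Store A": 2.19, "Store B": 2.59, "Store C": 1.99},
}

def cheapest_and_priciest(prices):
    """Return the store names with the lowest and highest price."""
    cheapest = min(prices, key=prices.get)
    priciest = max(prices, key=prices.get)
    return cheapest, priciest

for product, prices in price_feeds.items():
    low, high = cheapest_and_priciest(prices)
    print(f"{product}: cheapest at {low} (${prices[low]:.2f}), "
          f"most expensive at {high} (${prices[high]:.2f})")
```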

Mashup Telephone: Probably the funniest (and most useless) mashup that I’ve seen, the mashup version of the telephone game: you know, the one where you whisper something in someone’s ear, they do the same to the person next to them, and so on until you’re all laughing about how scrambled it got along the way to the 10th person. In this case, a search term was passed through successive mashup APIs sequentially to see what came up.

Flickrball: Another useless but charming mashup: a game where you have to get from one word to another in six moves (the old Six Degrees of Kevin Bacon thing) using tags on Flickr photos. Some nice eye candy in the UI, but it wouldn’t otherwise captivate me.

Seegest: A social movie rating site, where you rate what you’ve seen, what you’d like to see, and what you own. If your friends want to see the same movie as you, it helps to facilitate a movie night. If you own movies that you’re willing to lend, it can match them up with your friends (to whom presumably you’d be willing to loan a movie). It uses Yahoo authentication, so you’ll need a Yahoo ID to use it, plus feeds from Amazon for movie/DVD information and video of trailers from YouTube.

The last breakout session of the day, scheduled for after the speed geeking, sort of didn’t happen; most people were just hanging around chatting or demonstrating, and no one was in the room scheduled for the session that I had planned to attend. Given the opportunity to get north of the valley towards San Francisco before the 3pm car pool lanes kicked in, I headed out, skipping the closing session and the awards. At 2:58, I crossed the point just south of San Francisco airport where the car pool lanes cease to exist, and continued north for my weekend in San Francisco before the start of BPM Think Tank on Monday.

This was my third Mashup Camp, and likely my last; in fact, if it hadn’t been that I was coming down for BPM Think Tank anyway, I probably wouldn’t have attended this one. I enjoy Mashup Camp, but I’ve seen a lot of the stuff already, or am tracking it online; since I don’t write code any more, much of this is a bit peripheral to what I do, making it difficult to justify both the travel expenses and the time away from paying clients. Maybe I’ll be back when Mashup Camp hits Toronto; there’s certainly a strong enough tech community and BarCamp community to support one there.

Jon Pyke joins Cordys

I don’t usually repeat the standard PR blurbs that are sent my way every day, but this one piqued my interest: Jon Pyke, formerly CTO of Staffware before it was acquired by TIBCO, and co-founder of WfMC (and CEO of the Process Factory, although that never went very far) is joining Cordys as Chief Strategy Officer.

I’m never really sure about the CSO title: that’s the same one that Jim Sinur took when he left Gartner for Global360. Is it the “VP Special Projects” of this age?

The rip-off that is hotel internet access

I was just about to start crowing over how I haven’t paid for internet access since I arrived in the Bay area last Tuesday — the ratty old Best Western in Mountain View had free wired access as well as being in the Google wifi zone, and the Hilton in San Francisco’s financial district doesn’t charge for wifi, the first Hilton that I’ve ever been in that didn’t — but I’ve arrived at the Hyatt Regency near San Francisco airport for the BPM Think Tank conference and find myself having to buy access through T-mobile.

When are the organizers of technology conferences going to start to insist on only booking at hotels with free internet access? When that starts becoming a competitive differentiator for their bread-and-butter conference bookings, the hotels will start to listen.

Enter a survey on technology business in Toronto, win an iPod

There’s a new survey for technology-related businesses in the Toronto area available from now through September 7th; fill it out and you’ll be entered in a draw for a 30GB iPod. If you’re involved in the Toronto BarCamp community, use “TorCamp” as your survey code.

Update: More from Mark Kuznicki on the importance of the survey:

In a first for an unincorporated-unmember-unorganization, Toronto’s Barcamp community is partnering with the big guns – the Board of Trade, MaRS, ITAC and others – to gather a profile of the needs and opportunities facing Toronto’s technology community.

We are at the table trying to communicate on behalf of the thousands of small/micro-businesses who tend not to be members of these larger established member-based organizations.  It is important to communicate to the policymakers that it is the small/micro businesses that drive innovation and business development in this town!

Mashup Camp IV: Speed Geeking 1

Taking a break from the sessions just before lunch; there’s nothing that I want to go to right now, and this gives me a chance to catch up on my notes from yesterday’s speed geeking before today’s session starts after lunch.

Keep in mind that each session was a maximum of six minutes long (shorter if the presenter didn’t actually restart when the siren went off), so there are likely to be errors and omissions here.

Here’s what I saw yesterday:

Voice + TWiki: This is an integration of LignUp’s VOIP service with TWiki to bring voice information into the wiki environment. Although the author claimed that this was for non-technical people (he actually used the phrase “over 40”) to be able to add content, TWiki is an easy enough environment that I don’t think that this is really necessary; but one of the use cases that he showed is a more compelling reason: someone is out in the field without internet access and wants to add a comment to a particular page related to that field location. For example, someone at a building site sending in a verbal report of what’s going on; I can also imagine insurance adjusters doing something similar. If you’re on the wiki and doing it interactively, clicking a button causes LignUp to call your phone and prompts you through the recording process. If you’re in the field, you can call a number, enter a PIN and the desired project (page) ID, and record. The next step for this type of integration would be to convert the voice to text, although I’m not sure if LignUp offers that sort of service; I suspect in any sort of inspection scenario, you wouldn’t want to trust an automated conversion but would still want the voice recording.

5thbar: Not the place that you go after the 4th bar, 5thbar is a mashup of information about mobile phones and accessories that provides a good one-stop site. It combines blogs and news via feeds, videos via the YouTube API, listings of the device for sale via the eBay API, and allows signed-in users to add tags and reviews. Although some carriers provide a subset of this information for the phones that they carry, 5thbar covers information for a wide range of phones across multiple US and Canadian carriers. It doesn’t yet provide comparison charts between devices, although the author sounded like he’s considering that.

Fast Mash: This is a demonstration coming from a university research project, showing how unstructured data can be extracted from sites and served up in a more structured format by the use of a reference data set in the same domain of knowledge. The example that he showed was taking “cars for sale” listings from Craigslist and creating a structured, searchable data set, more like a relational/tabular form, for further filtering and display (potentially as a data feed to other mashups). He did this in part by using Edmunds car buying guide as a reference data set in order to understand the particular tokens that might occur in the unstructured data; this is the key to the technique.
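
As a rough illustration of the reference-data-set technique described above (not the project’s actual code), here’s a sketch that matches tokens in a free-text listing against a tiny, made-up vocabulary of makes and models standing in for the Edmunds reference data.

```python
# Rough sketch of the reference-data-set idea: match tokens in an unstructured
# "car for sale" listing against a known vocabulary to build a structured record.
import re

REFERENCE = {
    "make": {"honda", "toyota", "ford"},       # toy stand-in for a real reference set
    "model": {"civic", "corolla", "focus"},
}

def extract_listing(text):
    """Turn an unstructured listing into a structured dict."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    record = {"make": None, "model": None, "year": None, "price": None}
    for tok in tokens:
        if tok in REFERENCE["make"]:
            record["make"] = tok
        elif tok in REFERENCE["model"]:
            record["model"] = tok
        elif re.fullmatch(r"(19|20)\d{2}", tok):
            record["year"] = int(tok)
    price = re.search(r"\$\s?([\d,]+)", text)
    if price:
        record["price"] = int(price.group(1).replace(",", ""))
    return record

print(extract_listing("2004 Honda Civic, low mileage, $6,500 obo"))
# {'make': 'honda', 'model': 'civic', 'year': 2004, 'price': 6500}
```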

ClubStumbler: Finally, someone’s doing something useful with Google Maps: a way to plot the optimal route between a number of different bars/clubs that you select in your area. 🙂 This uses the Eventful API to find out what’s going on at the clubs as well as Google and Flickr APIs, plus the author’s own API for finding the best route that he’s developing and commercializing. Seriously, though, this could be used in a number of commercial and business applications, such as optimizing delivery routes or real estate tours.

Plaxo Pulse: This mashup discovers information about all of your Plaxo contacts and feeds it back to you in a feed reader-type style (or it can be subscribed to in a regular feed reader). This includes any information found about the people from blogs, Flickr, Digg and other locations: sort of an automated identity aggregator based purely on feeds and RSS APIs from these sources. Although I don’t recall hearing the details, I suspect that the auto-discovery is based on some combination of the information in the Plaxo contact record plus searches on the various content sites by name and email address, since many allow the positive identification of a person if you know their email address. There’s also a new version of Plaxo out that looks less virus-like than its predecessors; I might give it a look.
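
Purely as a sketch of the aggregation approach I’m guessing at above, and definitely not Plaxo’s actual implementation, here’s how the idea might look with stubbed per-site lookups keyed on email address; the functions, contact data and feed URL are hypothetical placeholders.

```python
# Speculative sketch: for each contact, ask a set of stubbed per-site lookups
# for feed URLs keyed on email address, then merge whatever comes back.

def flickr_feed_for(email):
    """Stub lookup: pretend we found a photo feed for this email address."""
    return f"http://api.flickr.com/feed-for/{email}"   # placeholder, not a real endpoint

def digg_feed_for(email):
    """Stub lookup: pretend this contact has no Digg account."""
    return None

LOOKUPS = [flickr_feed_for, digg_feed_for]

def aggregate_feeds(contacts):
    feeds = {}
    for contact in contacts:
        found = [url for lookup in LOOKUPS if (url := lookup(contact["email"]))]
        feeds[contact["name"]] = found
    return feeds

print(aggregate_feeds([{"name": "Jane Doe", "email": "jane@example.com"}]))
```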

Chime.tv: This is essentially a playlist of web-based video extracted from multiple sources (e.g., YouTube, Google Video) using the media RSS format, and played as a continuous stream. There are several pre-made “channels” of content of certain types, or you can create your own channels for private use or sharing. The Find And Play feature allows you to enter either a website URL that contains video or search terms, and get back a continuous-playing playlist of matching videos so that you don’t have to click your way through each result sequentially.

Billhighway.net: This is a social money website that allows you to collect money from friends, family, roommates, etc. for specific events or purposes, such as “mom and dad’s anniversary present” or “this month’s electricity bill”. You organize a group around whatever the event is that requires payment, then send out invoices to people. They can pay by eCheck or credit card, even if they don’t have a PayPal account, and the person organizing the group doesn’t have to have a credit card merchant account to accept payment by credit card. There is a 3% fee, of course, since those transfers have to be paid for in some way through the various financial services organizations that billhighway uses to process payments behind the scenes. Payments to the group are organized and reported as such, so that you know how much money you received by group/event rather than just by person, and you can get RSS feeds of some of the data on the site such as transactions with your contacts. $US only at this time.

di.ngb.at: Although that domain’s not actually used, that name is on the site that was demonstrated so I’ve used it for naming purposes. I found this mashup particularly interesting since I’ve wanted to do something like this with Toronto’s library system ever since I started attending Mashup Camp last year. This uses an Amazon wish list to hold books that you want to read (since the Amazon API can retrieve the wish list using only your email address), then looks up the books on several different library sources (OCLC and LibraryThing were mentioned) to generalize the search by mapping the single ISBN from the Amazon wish list to all ISBNs for that item, such as paperback, hardcover, audio and video versions. Metadata is also extracted from Amazon and the other library sources for display, and the local library systems are then queried (in this case, the author lives at the intersection of four overlapping library systems, so it searches all of them) to present a list of what’s available at the branches and allow holds to be placed on items. I so want this for myself in Toronto!
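
To make the shape of that pipeline a bit more concrete, here’s a hedged sketch with every lookup stubbed out: the real mashup used the Amazon wish list API plus OCLC and LibraryThing lookups to expand editions, none of which are reproduced here, and all of the data below is placeholder only.

```python
# Shape of the wish-list-to-library pipeline described above, with stub lookups.

def fetch_wishlist_isbns(email):
    """Stub: return the ISBNs on the user's Amazon wish list."""
    return ["0596527241"]                      # placeholder data

def expand_editions(isbn):
    """Stub: map one ISBN to all related editions (paperback, hardcover, audio...)."""
    return {isbn, "0596554877"}                # placeholder data

def library_availability(isbn, library):
    """Stub: query one library catalogue for branch availability of an ISBN."""
    return [("Central Branch", "on shelf")] if library == "Library A" else []

def find_holds(email, libraries):
    results = {}
    for isbn in fetch_wishlist_isbns(email):
        for edition in expand_editions(isbn):
            for library in libraries:
                for branch, status in library_availability(edition, library):
                    results.setdefault(isbn, []).append((library, branch, status, edition))
    return results

print(find_holds("reader@example.com", ["Library A", "Library B"]))
```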

LignUp: I’ve already seen one mashup created using LignUp, and this was another one by someone from LignUp showing how to add voice alerts and SMS messages to system administration functionality mashed up with the Intel Web 2.0 TDK. For example, a part fails or storage space starts to run low, and the designated system administrator receives an automated call or voice message, then can enter their current location by zip code and receive an SMS message with the address of the closest location to buy replacement parts. I moved on to the next mashup and found that it was also LignUp, showing voice mashups within Facebook: voice annotation of pictures by calling in, or web calling out either for two- or three-person calls or an automated broadcast message to an entire group. Another application, ReachMeRules, handled inbound calls and voice mails through the use of a cut-out number that masks your real phone number, and can provide specific responses to specific inbound numbers. The latter sounds similar to some of the functionality of Iotum, except that LignUp is offering a development platform that allows this to be created rather than a specific fixed service.

SnapLogic: SnapLogic showed a mashup that highlighted the functionality of their data integration framework (available as open source under GPL), extracting data from Quickbooks and Salesforce.com, matching it up and displaying it as a combined table. Although the methods of extracting from these two data sources are quite different, the SnapDojo mashup demo showed how they can be wrapped and then consumed in a consistent fashion using SnapLogic’s APIs via JSON.
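
Here’s a generic illustration of that wrap-and-join pattern using plain Python generators rather than SnapLogic’s actual APIs (which I haven’t used); the field names and sample records are invented.

```python
# Generic wrap-and-join pattern: expose two very different data sources through
# the same record-iterator shape, then merge them into one table keyed on customer.

def quickbooks_source():
    """Stand-in for invoice data exported from Quickbooks."""
    yield {"customer": "Acme Corp", "open_invoices": 3, "balance": 12500.00}
    yield {"customer": "Globex", "open_invoices": 1, "balance": 800.00}

def salesforce_source():
    """Stand-in for account data pulled from Salesforce.com."""
    yield {"customer": "Acme Corp", "owner": "jsmith", "stage": "Negotiation"}
    yield {"customer": "Globex", "owner": "mlee", "stage": "Closed Won"}

def join_on(key, *sources):
    """Merge records from all sources into one row per key value."""
    combined = {}
    for source in sources:
        for record in source():
            combined.setdefault(record[key], {}).update(record)
    return list(combined.values())

for row in join_on("customer", quickbooks_source, salesforce_source):
    print(row)
```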

Where.com: The mashup by Where.com showed Google Street View on a mobile phone, using techniques originally discovered during WhereCamp. They have support for Sprint phones — I saw a Samsung demonstrated — for an extra $2.99 on your monthly bill, and unofficial support for the Blackberry. I believe that it’s all running in a mobile browser rather than downloading an app to the phone, although I’m not completely sure.

Twitterlicious and Bookmark Cleaner: Twitterlicious is a mobile app that allows you to capture your tweets (if you don’t know what Twitter or tweets are, move on) as del.icio.us or Ma.gnolia bookmarks with the tag “tweet” so that you can review them later, especially useful when a tweet contains a URL that you can’t open on your mobile device. They’re stored as private bookmarks, so I’m not sure if they’ll get picked up in the automatic Links blog posts that I create from my del.icio.us bookmarks; obviously, I wouldn’t want that to happen. The use of del.icio.us to store the tweets for later review then necessitated the Bookmark Cleaner, a utility to delete all bookmarks with a certain tag; he’s also extended this to do some other useful cleanups on your bookmarks, such as finding and deleting all bookmarks that point to a 404 (page not found), and finding all bookmarks that appear to point to a malware or phishing site, which can then be previewed using snap.com and deleted as required. In the future, he also wants to find permanent redirects (301s) and replace the bookmark URL with the redirected URL.
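
For the curious, here’s a minimal sketch of the cleanup logic (filter by tag, flag dead links) over an in-memory bookmark list; the real tool works against the del.icio.us API, which isn’t reproduced here, and the URLs and tags below are placeholders.

```python
# Minimal sketch of the Bookmark Cleaner logic: find bookmarks with a given tag,
# and flag bookmarks whose URL now returns 404.
import urllib.request
import urllib.error

bookmarks = [
    {"url": "http://example.com/", "tags": ["tweet"]},
    {"url": "http://example.com/missing-page", "tags": ["reference"]},
]

def with_tag(bookmarks, tag):
    """Return bookmarks carrying the given tag."""
    return [b for b in bookmarks if tag in b["tags"]]

def is_dead(url, timeout=5):
    """Return True only if the URL responds with 404; other failures are ignored."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return False
    except urllib.error.HTTPError as e:
        return e.code == 404
    except urllib.error.URLError:
        return False

print("tagged 'tweet':", with_tag(bookmarks, "tweet"))
print("dead links:", [b["url"] for b in bookmarks if is_dead(b["url"])])
```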

Today’s speed geeking is just about to start, I’m out of here…

Mashup Camp IV Day 2: Automating web services and mashups

I’m in a session on automating the discovery and consumption of web services to create mashups. Coming from the enterprise integration side, lots of this stuff is pretty familiar to me: using a directory service (e.g., UDDI) to discover service providers in both the corporate SOA and internet web services (SOAP, REST, AJAX, XML-RPC, JSON, etc.) and extract the service description (e.g., WSDL), but he’s talking about adding a lot more intelligence to the discovery stage.

They’re making some pretty esoteric points about RDF, OWL and the semantic web, and this moved for a while into a two-way conversation between the main presenter and someone from a vendor who is obviously deeply into these issues as well.

Then we get to an interesting echo of yesterday’s session on why DIY when there are APIs available, and someone stated that he was more likely to write something himself because it’s too difficult to find services/APIs on the web. Then the other side was considered: what if you put an API out there, and people use it in “stupid” (i.e., inefficient) ways that bring down your service? I think that the API developers need to put some checks and balances in place, like key limiting; someone from Google pointed out that if you don’t do that, people won’t take care to optimize their code to minimize the API calls. In fact, there are services available to handle API keys and their usage, such as Mashery.
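
As a sketch of what that key limiting might look like at its very simplest, here’s a per-key daily quota check; real services such as Mashery are far more sophisticated, and the limit and key below are made up.

```python
# Simplest form of per-key limiting: a daily call quota tracked per API key.
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 1000
_usage = defaultdict(int)          # (api_key, day) -> call count

def allow_call(api_key):
    """Return True if the key is still under today's quota, counting this call."""
    bucket = (api_key, date.today())
    if _usage[bucket] >= DAILY_LIMIT:
        return False
    _usage[bucket] += 1
    return True

if allow_call("demo-key-123"):
    print("call permitted")
else:
    print("quota exceeded, reject or throttle")
```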

Mashup Camp IV Day 2: Google Mashup Editor

Best Mashup Camp quote: “music is to mashups as porn was to the internet: it’s what drives it”.

I’m in the session on “Mashing” client-side mashup tools, where Jason Cooper from Google is demonstrating a mashup called Jookebox that he created using multiple Yahoo Pipes, such as one to retrieve album tracks from Amazon, assembled using the Google Mashup Editor (for which, by coincidence, I just received a closed beta account yesterday).

You can build the entire web page in the browser-based Mashup Editor in XML format; the big difference between Pipes and GME is that Pipes outputs an RSS feed whereas GME outputs a web page. We had a discussion about how mashup tools can often be categorized into enablers (which build the widgets, data feeds, etc. from underlying data sources, like Pipes) and builders (which assemble the components into a mashup, like GME).
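
To illustrate the enabler/builder split in the simplest possible terms: an enabler hands you a feed, a builder turns it into a page. The sketch below uses plain Python and a canned RSS snippet, not GME’s actual tag set or real Pipes output.

```python
# Toy illustration of the enabler/builder split: consume a feed (what an enabler
# like Pipes emits) and render a minimal HTML page (what a builder emits).
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss><channel>
  <item><title>Track one</title><link>http://example.com/1</link></item>
  <item><title>Track two</title><link>http://example.com/2</link></item>
</channel></rss>"""

def feed_items(rss_text):
    """Yield (title, link) pairs from a simple RSS document."""
    root = ET.fromstring(rss_text)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

def build_page(rss_text):
    """Render the feed items as a bare-bones HTML list."""
    rows = "\n".join(f'<li><a href="{link}">{title}</a></li>'
                     for title, link in feed_items(rss_text))
    return f"<html><body><ul>\n{rows}\n</ul></body></html>"

print(build_page(SAMPLE_FEED))
```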

Google’s goals in building their mashup editor were to remain standards-based and eventually to open it up for extensions. There’s a whole gallery of mashups built using GME here (weirdly, not searchable…), and if you go to any of these, there’s a “View Source” link in the top right that allows you to grab the source to learn how it was written. There are a lot of mapping applications there, obviously, but there are also things like a simple feed reader and a task list with nary a map in sight (or in site, if you like bad geek puns). You’ll also find a list of resources in the sidebar of the gallery page, such as how to get started, event handling, using the JavaScript API, etc.

We discussed a number of competitors, such as IBM’s QEDwiki, BEA’s Pages and Bungee Labs, although the Google guys say that these products play in a different space than GME does: GME is much more of a developer tool, since it’s basically a browser-based text editor that you drop code into rather than a drag-and-drop environment. They may decide to add nicer UI stuff in the future, such as a design view to accompany the code view.

Jason also talked about more complex mashups, such as using Dapper to parse a page into a more structured data source, feed it into Pipes for further slicing and dicing, then take the output feed from that and create the mashup using Google Mashup Editor.

We ended up with a discussion about the use of Google’s geocoding in a GME-created mashup; currently, all GME apps use a single geocoding API key so there’s no issue of going over your daily limit, although there may be changes to this in the future.

Product roadmap:

  • Open up the beta by the end of this quarter
  • Allow mashups to be hosted on other domains
  • Feeds from Google Calendar and other sources
  • New UI widgets

I’m looking forward to trying it out.