All posts by sandy

HoHoTO 2015: be a sponsor, or just come for the party

HoHoTO is a fundraiser event put on each year by Toronto’s digital community: a great party with dancing, raffles and a chance to catch up with your friends (at the top of your lungs to be heard over the dance tunes). Since its inception in 2008, HoHoTO has raised over $350,000 for the Daily Bread Food Bank – an awesome organization that helps to feed people in our community – but this year, HoHoTO has turned its eye to supporting “the next generation of founders, funders and tech professionals”. In particular, the focus will be on organizations that help to bring more women and minorities into technology and digital businesses. The event is on December 11 at the Mod Club, and early bird tickets are on sale here.

The primary focus is on the YWCA Toronto’s Girls’ Centre, with a 3-year goal to completely fund the Girls’ Centre and push for the opening of another one. This centre provides programs for girls aged 9 to 18 to allow them to try activities and develop skills, including “Miss Media” for designing online media such as blogs and websites. It’s located in Scarborough, the easternmost 1/3 of Toronto, serving a community that has upwards of 65% visible minorities (and the best ethnic food in the world, according to one economist), meaning that it is a great match with HoHoTO’s focus on promoting women and minorities in business and technology from an early age. HoHoTO is also bringing together professional women as mentors, including me.

The HoHoTO event, run by unpaid volunteers, is raising money through tickets and sponsorships. If you or your organization recognizes the value of diversity in business, and wants to support the success of women and minorities in digital and technology fields, consider becoming a sponsor of the event. Details are here, and most of your contribution is eligible for a tax receipt. You’ll get recognition on HoHoTO’s site and at the event, other promotional opportunities throughout the year, a handful of event and drink tickets to bring your team out to enjoy the evening, and a nice warm feeling in your heart.

Join the AIIM paper-free pledge

AIIM recently posted about the World Paper-Free Day on November 6th, and although I’m not sure that it’s recognized as a national holiday or anything, it’s certainly a good idea. I blogged almost three years ago about my mostly paperless office, and how to achieve such a thing yourself. Since that time, I’ve added an Epson DS-510 scanner, which has a nice small footprint and a sheet feeder; it sits right on my desk and there is never a backlog of scanning.

It’s not just about scanning and shredding, although those are pretty important activities: you have to have a proper retention plan that adheres to any regulatory requirements, and a secure offsite (cloud or otherwise) backup capability to ameliorate any physical site disasters.

You also need to consider how much backfile conversion you’ll do: I decided to back-scan everything except my financial records at the time that I started going completely paperless, then scan everything including financials from that date forward. Each year, another batch of old paper financial records reached its destruction date and was shredded, the last of them just last year, and I no longer have any paper files. If back-scanning is too time-consuming for you but you want to start scanning everything day-forward, then store your old paper files by destruction date so that you can easily shred the batch of expired files each year until there are none left.
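
Mechanically, the batching is simple: each document class has a retention period, and the destruction year is just the creation year plus that period. Here’s a minimal Python sketch of that idea; the document classes, retention periods and file names are all invented for illustration, and this is not retention advice.

from datetime import date

# Hypothetical retention schedule (years to keep each document class);
# classes and periods here are illustrative only.
RETENTION_YEARS = {
    "financial": 7,
    "contracts": 10,
    "correspondence": 2,
}

def destruction_year(doc_class: str, created: date) -> int:
    """Year in which a scanned (or paper) document becomes eligible for destruction."""
    return created.year + RETENTION_YEARS.get(doc_class, 7)

def batch_by_destruction_year(files: list[tuple[str, str, date]]) -> dict[int, list[str]]:
    """Group files (name, class, created date) into yearly destruction batches."""
    batches: dict[int, list[str]] = {}
    for name, doc_class, created in files:
        batches.setdefault(destruction_year(doc_class, created), []).append(name)
    return batches

if __name__ == "__main__":
    docs = [
        ("2012-tax-return.pdf", "financial", date(2012, 4, 30)),
        ("office-lease.pdf", "contracts", date(2013, 1, 15)),
    ]
    for year, names in sorted(batch_by_destruction_year(docs).items()):
        print(year, names)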

These things – scanning, document destruction, retention plan, secure backup, backfile conversion – are the same things that I’ve dealt with at large enterprise customers in the past on ECM projects, just on a small-office scale.

Avoiding a surfeit of conferences

This time of year, I’m usually flying back and forth to Las Vegas to engage in the fall conference season: software vendors hold their annual user conferences, and invite me to attend in exchange for covering most of my travel expenses. They don’t pay me to attend unless I give a presentation – in fact, many are not even my clients – and since I’m self-employed, that means I’m giving up billable days to attend. Usually, I consider that a fair trade, since it allows me to get a closer look at the products and talk to the vendor’s employees and customers, and I typically blog about what I see.

This year, however, I stepped away from most of the conferences, including the entire slate of fall events. A couple of family crises over the summer required a lot of my attention and energy, and when I started getting requests to attend fall conferences, I just didn’t feel that they were worth my time.

Many vendors have become overly focused on the amount of blogging that I do at their conference, rather than on strengthening our relationship. My conference blogging, described as “almost like being there”, is seen by some vendors as a savant party trick, and they consider themselves cheated in some way if I don’t publish enough content during the conference. What they forget is that by attending their conference, I’m gaining insights into their company and products that I can use in future discussions with enterprise clients, as well as in any future projects that I might do with the vendor. I generate revenue as a consultant and industry analyst; blogging is something that I do to analyze and solidify my observations, to discuss opinions with others in the field, and to expand my business reach, but I’m never paid for it, and it is never a condition of attending an event – at least in my mind.

Another factor is the race to the bottom in travel expenses. Many vendors require that they book my air travel, and when booking the one conference that I was going to attend this fall, I asked their travel group to pay the $20 fee to select a decent (economy) seat for the 5-hour tourist-class flight, but they refused. Many times in the past I’ve just paid for seat assignments and upgrades out of my own pocket, but this time it became about the principle: the vendor in question, who is not an active client of mine, placed that little value on my attendance.

So if you’re a vendor, here’s the deal. A paid client relationship with me is not a prerequisite of me attending your conference, and has never been in the past, but there has to be a mutual recognition of the value that we each bring to the table. I bring 25 years of experience and opinions as a systems implementer, consultant and industry analyst, and I offer those opinions freely in conversation: consider it free consulting while I’m at your conference. I expect to gain insights into your company, products and customers, through public conference sessions and private discussions. I may blog about what I see and hear (at least the parts not under non-disclosure), or use that information in future discussions with enterprise clients. Or I may not, if I don’t find it relevant or interesting. Lastly, when you ask me to fly somewhere, keep in mind that it is not a treat for me to travel to Las Vegas or Orlando, and at least make sure that I’m not in the middle seat at the back of a 50-row aircraft.

As always, everything after the bar opens is off the record.

Appian Around The World – Toronto

Appian was recently doing a round of road-show conferences, and when they landed in my backyard, I decided to stop in for the day and see what was new. I missed Appian World this year and was looking forward to a bit of a product update as well as some of the local customer stories.

The day started with Edward Hughes, SVP of sales, giving us a high-level overview of Appian and their BPM platform-as-a-service and case management products (for the non-customers in the audience), as well as their shift to becoming more of a broad application development platform rather than just a BPMS. I’ve been seeing this trend with many BPM vendors over the past few years, and Appian has been repositioning this way for a year or two already. Using Appian as an application development platform allows applications to be developed independently of the deployment platform, both on the server side (e.g., develop on the cloud version, deploy on premise) and for client interfaces on desktops or mobile devices. The messaging is that you can use their platform to create customer service applications “beyond CRM” to handle the entire customer journey, with a unified interface plus a consolidated view onto enterprise data using their Records function. He also talked about the Appian App Market, which is an expanded version of their Appian Forum, containing add-in components and complete applications from Appian and third parties.

Since it was a small room, the local customers introduced themselves and talked about their Appian experience and applications: 407 ETR with 10 apps integrated with their customer portals so that online actions (e.g., acquiring a new transponder) become Appian processes assigned to the 125 internal users; Manulife, the first Appian cloud customer back in 2008, now migrating their “legacy” Appian apps to the Tempo UI and serving 900 users for work/time tracking and records management in Marketing; and IESO with apps to register and manage information about energy companies participating in electricity markets. We also heard from some of the partners attending: TCS, Princeton Blue, and boutique contender Bits In Glass with 15+ Appian-trained people in Canada and the US. Bits In Glass used to do mostly code-level (Java) bespoke development, and have reduced their efforts and timelines to 1/3 of that using Appian’s model-driven development.

Next up was Michael Beckley, describing his new role as Chief Customer Officer (as well as CTO) and giving us a product update on the 7.11 quarterly release. Appian is seeing corporate IT budgets as 20% innovation and 80% maintenance, but they want to flip that ratio so that maintenance is much less expensive than the original build, freeing up time and energy for innovation. Most large enterprises aren’t going to get rid of custom applications, but they do need to make them faster to build and maintain, while enforcing strict security and providing a user-friendly interface for internal and external users. In theory, an integrated application development platform such as Appian provides all the pieces: user interface, reports, rules, collaboration, process, on-premise/cloud, mobile, social, data, content, security, identity, and integration; in practice, most organizations end up doing something outside the model-driven development environment, although it can definitely improve their custom development efforts. Appian’s focus, as with many of the other BPMS vendors pivoting to become app dev vendors, is on providing a platform to build process-centric applications that get things done via automation, with people injected into the process where required.

Beckley gave us a hint of their growth strategy: they tend to build rather than buy in order to keep their technology pure, since growth by acquisition inevitably requires a large (and underestimated) effort to integrate the new technology.

Here’s a quick list of the Appian 7.11 updates (some of these likely came before 7.11, but I haven’t had an update for a while):

  • Three UI styles for Appian apps: the Tempo social interface, Sites limited-function workspace/worklist for heads-down workers, and Embedded SAIL to embed Appian functionality within an existing portal for internal or external users. Sites have Action Forms for fit-for-purpose apps when a social feed UI isn’t appropriate, and Embedded SAIL has Action Forms for customer-facing apps within a third-party web portal. These latter two are critical for real-world enterprise applications: although I like the Tempo interface, many of my enterprise clients need a different sort of view for heads-down workers, which can be provided by Sites or using Embedded SAIL.
  • A number of improvements to the Tempo news feed and UI, including the Tempo Kudos view to promote collaboration and provide awareness of accomplishments, and dynamically-updating filters to better link and manage record data and underlying data sources.
  • Improvements to SAIL, including positioning it as a device-independent UI that provides a shared model experience (rather than an HTML5 gateway into an existing app as seen in some other mobile-enablement technology) that is natively rendered on each device. The rendering engine can be updated independently of the applications, making it easier to adapt to new OS versions and devices. Appian uses SAIL to build their own components and apps that become part of the product. From a developer functionality standpoint, SAIL has added placeholder text and tooltips on forms, auto first field focus to reduce clicks and improve efficiency, additional image sizes that are auto-scaled to the device, initially-collapsed form sections, “submit” links that can be placed on a graphic element instead of standard buttons, links in milestones and pickers, grid enhancements, and continuing speed improvements. There’s also a new barcode component, although on iOS only and requiring a Verifone device for capture.
  • Mobile offline actions use native encrypted data containers rather than HTML5 storage (some of this is iOS only although Android is planned for later this quarter), with the developer deciding which actions and data are available offline. Changes to the definition of a form while a user is offline will prompt the user to review and resubmit the form with the new/updated form field definitions, so application changes can continue even if there are active offline users. This does not (yet) allow existing records to be locked for offline updates, although tasks can be locked to a user before going offline.
  • For designers, the developer portal is being migrated to SAIL and enhanced with build processes; there’s a UI designer navigation tree to allow view/select/edit within a hierarchical tree view of an action form; the expression rule designer (“for those of you who are still writing expressions”, namely power developers) auto-suggests rule inputs and there is some level of expression rule testing; a process report designer can be used to create performance reports; impact analysis reports show where rules are invoked and other object relationships; bulk security updates can be made across objects.
  • For administrators, a big new thing is LDAP/SAML authentication with multiple LDAP servers and complex configurations.

They have frequent product update webinars, free introductory courses and tips & tricks sessions online; in fact, there is a product update webinar tomorrow if you want to hear more about what I’ve listed above.

We heard from Rew Dickinson, a solutions consultant, on what makes a great app — complete with a live demo to show us how it’s done. There were a lot of best practices here that I won’t repeat (better for you to check out one of their webinars), but here are a few key pointers:

  • Design applications to be omni-channel and easily adaptable.
  • Use Records to organize and model corporate data, regardless of source, for use in an application; bidirectional links between Records and process instances allow for a full view whether you’re coming from the process or data side of things.
  • Use Sites for fit-for-purpose applications, e.g., a worklist for heads-down task execution, as an alternative to the full Tempo environment. Effectively, this is a report that can be sorted and filtered, with links to tasks that take the user to the task form; it can include work management analytics for a manager/dispatcher user to monitor and reallocate task assignments. This made me think that Appian has just reinvented their per-application portal mode with Sites, albeit with better underlying technology, but that’s a discussion for another day.
  • Use Embedded SAIL for customer-facing portal environments, e.g., create service request from a customer order page.

Michael Beckley came back to talk to us about Appian Cloud, that is, their public cloud offering. It uses Amazon AWS/EC2/S3 in a single-tenant architecture, which allows each environment to be upgraded independently — more of a managed hosting model. The web tier is shared and handled by Appian, who also manages servers, load balancing, high availability and upgrades. There can be a VPN tunnel to on-premise data, and in fact the AWS instance does not have to be available on the public internet, but can be restricted to access only through the VPN from a corporate location. This configuration provides the elasticity and availability of the Amazon cloud, but allows private data to remain on premise — something that goes a long way to resolving geographic data location issues. They’ve obviously been working on the optics of US-owned data centers by listing their privacy chops, but it would have been even more reassuring to see a mention of any Canadian standards such as PIPEDA for this purely Canadian audience. There are tiers for development, medium, large and extra-large deployments, with a redeployment required to move between tiers (so not that elastic…), but it supposedly only takes a few minutes if planned. Uptime this year is mostly 5 9’s, with customer credits for missed uptime SLAs. You can also self-host Appian in other environments, e.g., Azure, although the Appian Cloud SaaS offering is currently Amazon only.

We finished up with Mike Cichy, an Appian consultant, discussing their center of excellence offerings and how customers can plug into a vast wealth of information, from checklists to migration guides to training, in order to embody best practices. There are a number of tools available such as the Appian Health Check and Deployment Automation in addition to these practices, with an overall goal to help achieve a large improvement in developer speed and quality within customer/partner organizations.

Altogether an informative day, and a great catch-up with some old friends.

Knowledge Work Incentives at EACBPM

June was a bit of a crazy month, with three conferences in a row (Orlando-London-DC) including two presentations at IRM’s BPM conference in London: a half-day workshop on the Future of Work, and a breakout session on Knowledge Work Incentives, which was a deep dive into one of the themes in the workshop. I put the slides for the breakout session up on the day of the presentation, but then went off for a couple of days of holidays in Brighton and completely forgot to post a link here:

Yesterday, I read a post on The Eloquent Woman called In a world of #allmalepanels, can we share pics of #eloquentwomen?, which is a riff on the Congrats, you have an all male panel Tumblr. This has been going on a long time: I wrote about the problem at Toronto’s mesh conference starting in 2007, and then just stopped attending it.

The recent TEW post made me think about the opportunities that I’ve had to present at conferences all over the world, and I decided to take them up on their challenge and post some of the pictures and videos of me presenting in the past. First, a few videos in a variety of speaking styles:

And some pictures taken and tweeted by audience members:

I speak primarily about technology and the impact that it has on business, and I’m recognized as an expert in my field, so I have to say that the common excuses for having no (paid) women speakers summarized here – no qualified women speakers; women only speak about “women stuff” [wtf?]; women are more likely to say no to speaking; women are more likely to cancel – are patently untrue in my case, and likely in the case of most women speakers.

There are some shining examples of companies that put a lot of women – internal and external – on the stage at their conferences, and we need to see more of this in the future. Otherwise, you’re just ignoring half of the IQ available as speakers, and starting to alienate the attendees.

HP Consulting’s Standards-Driven Requirements Method at BPMCM15

Tim Price from HP’s enterprise transformation consulting group presented in the last slot of day 2 of the BPM and case management summit (and what will be my last session, since I’m not staying for the workshops tomorrow) with a discussion on how to improve requirements management by applying standards. There are a lot of potential problems with requirements: inconsistency, not meeting the actual needs, not designed for change, and especially the short-term thinking of treating requirements as project rather than architecture assets. Price is pretty up-front about how you can’t take a “garden variety” business analyst and have them create BPMN diagrams without training, and that 50% of business analysts are unable to create lasting and valuable requirements.

Although I haven’t done any quantitative studies on this, I would tend to agree that the term “business analyst” covers a wide variety of skill levels, and you can’t just assume that anyone with that title can create reusable requirements models and assets. This becomes especially important when you move past written requirements — that need the written language skills that many BAs do have — to event-driven BPMN and other models; the main issue is that these models are actually code, albeit visual code, that may be beyond the technical analysis capabilities of most BAs.

Getting back to Price’s presentation, he established traceability as key to requirements: between BPMN or UML process models and UML use cases, for example; or upwards from processes to capabilities. Data needs to be modeled at the same time as processes, and processes should be modeled as soon as the high-level use case is defined. You can’t always create a one-to-one relationship between different types of elements: an atomic BPMN activity may translate to a use case (system or human), or to more than one use case, or to only a portion of a use case; lanes and pools may translate to use case actors, but not necessarily; events may represent states and implied state transitions, although also not necessarily. Use prose for descriptions, but not for control flow: that’s what you use process models for, with the prose just explaining the process model. Develop the use case and process models first, then write text to explain whatever is not obvious in the diagrams.
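
To make the traceability idea concrete, here is a small Python sketch of a requirements traceability register linking BPMN elements to use cases, actors and states. This is my own illustration, not HP’s method; all of the element names are invented.

from dataclasses import dataclass, field

# Illustrative traceability model: BPMN activities, lanes and events linked to
# UML use cases, actors and states. Element names are invented for this example.
@dataclass
class TraceLink:
    source: str        # e.g. a BPMN activity or lane
    target: str        # e.g. a UML use case or actor
    relation: str      # "realizes", "performs", "represents-state", ...

@dataclass
class RequirementsRegister:
    links: list[TraceLink] = field(default_factory=list)

    def add(self, source: str, target: str, relation: str) -> None:
        self.links.append(TraceLink(source, target, relation))

    def trace_from(self, source: str) -> list[TraceLink]:
        """All downstream elements for one model element (impact of changing it)."""
        return [link for link in self.links if link.source == source]

register = RequirementsRegister()
# An atomic BPMN activity may map to one use case, several, or part of one.
register.add("Activity: Assess benefit claim", "UC-12 Assess claim", "realizes")
register.add("Activity: Assess benefit claim", "UC-13 Request evidence", "realizes")
register.add("Lane: Case worker", "Actor: Case Worker", "performs")
register.add("Event: Claim approved", "State: Approved", "represents-state")

print([link.target for link in register.trace_from("Activity: Assess benefit claim")])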

He walked through a case study of a government welfare and benefits organization that went through multiple failed implementations, which were traced back to poor requirements: structural problems, consistency issues, and designs embedded in the specification. Price and his team spent 12 months getting their analysts back on track by establishing standards for creating requirements — with a few of the analysts not making the transition — that led to CMMI recognition of their new techniques. Another case study applied BPMN process models and UML use cases for a code modernization process: basically, their SDLC was the process being improved. A third case study used BPMN to document as-is and to-be processes, then created use case models with complete traceability from the to-be processes to the use cases, with UML domain class models being developed in parallel.

The lessons learned from HP’s experiences:

  • Apply existing standards consistently, including BPMN, CMMN, DMN and UML
  • Use graph-structured languages for structure and logic, and prose for description
  • Use repository-based modeling tools to allow for reusability and collaboration
  • Be concise, be precise, be consistent
  • Create requirements models that are architecture assets, not just short-term project assets

Some good lessons for requirements analysis; although this was developed for complex, more waterfall-y SDLCs, some of these can definitely be adapted for more agile implementations.

The Enterprise Digital Genome with Quantiply at BPMCM15

“An operating system for a self-aware quantifiable predictive enterprise” definitely gets the prize for the most intriguing presentation subtitle, for an afternoon session that I went to with Surendra Reddy and David Chaney from Quantiply (a stealth startup that has just publicly launched), and their customer, a discount brokerage service whose name I have been requested to remove from this post.

Said customer has some significant event data challenges, with a million customers and 100,000 customer interactions per day across a variety of channels, and five billion log messages generated every day across all of their product systems and platforms. Having this data exist in silos with no good aggregation tools means fragmented and poor customer support, and also significant challenges in system and internal support.

To address these types of heterogeneous data analysis problems, Quantiply has a two-layer tool: Edge Cloud for the actual data analysis, which can then be exposed to different roles based on access control (business users, operational users, data scientists, etc.); and Pulse for connecting to various data sources including data warehouses, transactional databases, BPM systems and more. It appears that they’re using some sort of dimensional fact models, which are fairly standard data warehouse analytical constructs, but their Pulse connectors allow them to pour in data on a near-real-time basis, then make the connections between capabilities and services to be able to do fast problem resolution on their critical trading platforms. Because of the nature of the graph connectivity that they’re deriving from the data sources, they’re able to not only resolve the problem by drilling down, but also determine what customers were impacted by the problem in order to follow up. In response to a question, the customer said that they had used Splunk and other log analytics tools, but that this was “not Splunk”, in terms of both the real-time nature and the front-end user experience, plus deeper analytical capabilities such as long-term interaction trending. In some cases, the Quantiply representation is sufficient analysis; in other cases, it’s a starting point for a data scientist to dig in and figure out some of the more complex correlations in the data.
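
Quantiply didn’t show any internals, but the “which customers were impacted” question is essentially a reachability query over a dependency graph. Here’s a toy sketch of that idea using networkx; all of the node names and edges are invented, and this is not Quantiply’s API.

import networkx as nx

# Toy dependency graph: platforms support services, services handle customer
# interactions. Node names and edges are invented for illustration only.
g = nx.DiGraph()
g.add_edge("trading-platform-A", "order-entry-service")
g.add_edge("order-entry-service", "interaction:web-order-123")
g.add_edge("order-entry-service", "interaction:mobile-order-456")
g.add_edge("interaction:web-order-123", "customer:1001")
g.add_edge("interaction:mobile-order-456", "customer:1002")

def impacted_customers(graph: nx.DiGraph, failed_component: str) -> set[str]:
    """Everything reachable downstream from the failed component that is a customer."""
    reachable = nx.descendants(graph, failed_component)
    return {n for n in reachable if n.startswith("customer:")}

print(impacted_customers(g, "trading-platform-A"))
# expected: {'customer:1001', 'customer:1002'}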

There was a lot of detail in the presentation about the capabilities of the platform and what the customer is doing with it, and the benefits that they’re seeing; there’s not a lot of information on the Quantiply website since they’re just publicly launching.

Update: The original version of this post included the name of the customer and their representative. Since this was a presentation at a public conference with no NDA or confidentiality agreements in place, not even a verbal request at any time during the session, I live-blogged as usual. A day later, the vendor, under pressure from the customer’s PR group, admitted that they did not have clearance to have this customer speak publicly, which is a pretty rookie mistake on their part, although it lines up with my general opinion on their social media skills. As a favor to the conference organizers, who put a lot of effort into making a great experience for all of us, I’ve decided to remove the customer’s name from this post. I’m sure that those of you who really want to know it won’t have any trouble finding it, because of this thing called “the internet”.

The Digital Enterprise Graph with @denisgagne at BPMCM15

Yesterday, Denis Gagné demonstrated the modeling tools in the Trisotech Digital Enterprise Suite, and today he showed us the Digital Enterprise Graph, the semantic layer that underlies the modeling tools and allows for analysis of relationships between them. There are many stakeholders involved in defining and implementing a digital enterprise, including enterprise architects, business architects and process analysts; each of these roles has a different view on transformation of the enterprise and different goals for their work. He sees a need for a more emergent enterprise architecture rather than a structured top-down architecture effort: certainly, architects need to create the basic structure, but rather than trying to build out every artifact that might exist in the architecture before making use of it, a more pragmatic approach is for a “just-in-time” architecture that is a bit more self-organizing.

A graph, in general, is a powerful but simple construct: it consists only of nodes and links, but can provide meaningful connections of loosely-coupled business entities that can be easily modified. Think about a social graph, such as Facebook’s social graph: it’s just people and their connections, but it’s a rich context for analyzing the relationships between nodes (people) in the graph depending on the nature of the links (friends, likes, etc.) between them. Trisotech’s Digital Enterprise Graph links the who, what, when, where, why and how of an organization by mapping every model that is added to the Graph onto those types of nodes and links, whether the model originates with one of their own modelers (BPMN, CMMN, DMN) or an external EA modeling tool (Casewise, SAP PowerDesigner, Sparx). This provides an intelligent fabric for automated reasoning about the current relationships between parts of the organization, but also allows estimation of the impact of changes in one area on other parts of the organization. Their Insight Analyzer tool allows you to introspect the graph, providing views such as interconnectivity between nodes as part of impact analysis, or tracing responsibility for a capability up through the organizational structure. The analysis isn’t automated, but provides visualization tools for analysts and planners, based on a single integrated scheme that allows for straightforward queries.
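
As a rough illustration of what such a graph makes possible (my own sketch, not Trisotech’s implementation), here is a toy typed graph with responsibility-tracing and impact-analysis queries; all node and relation names are invented.

import networkx as nx

# Toy "enterprise graph": nodes are typed (who/what/how), links are typed too.
# All names here are invented; the real Graph is populated from BPMN/CMMN/DMN
# and EA models.
eg = nx.MultiDiGraph()
eg.add_node("Role: Underwriter", kind="who")
eg.add_node("Capability: Risk assessment", kind="what")
eg.add_node("Process: Underwrite policy", kind="how")
eg.add_node("Decision: Accept risk?", kind="how")
eg.add_edge("Role: Underwriter", "Process: Underwrite policy", relation="performs")
eg.add_edge("Process: Underwrite policy", "Capability: Risk assessment", relation="realizes")
eg.add_edge("Process: Underwrite policy", "Decision: Accept risk?", relation="invokes")

def who_is_responsible(graph: nx.MultiDiGraph, capability: str) -> set[str]:
    """Trace responsibility: roles linked (directly or indirectly) to a capability."""
    reverse = graph.reverse(copy=True)
    return {n for n in nx.descendants(reverse, capability)
            if graph.nodes[n].get("kind") == "who"}

def impacted_by_change(graph: nx.MultiDiGraph, element: str) -> set[str]:
    """Everything that depends on (leads to) a changed element."""
    return nx.ancestors(graph, element)

print(who_is_responsible(eg, "Capability: Risk assessment"))   # {'Role: Underwriter'}
print(impacted_by_change(eg, "Decision: Accept risk?"))        # process and role upstream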

He gave us a demo of the Graph in action, starting with a BPMN model that uses the Sparx EA accelerator for SOA architecture artifacts, and tracing through that loose coupling to the architectural components in the EA framework, with similar linkages for roles from a Casewise business architecture framework and definitions of contracts from the Financial Industry Business Ontology (FIBO). The idea is that the Graph provides an underlying mesh of semantic linkages from elements in a model to other frameworks, ontologies and models while still retaining business understandability at the model level. In the Insight Analyzer, we saw how to explore linkages between different types of elements, such as RACI-type relationships between roles and activities, as well as a more detailed introspection that allows drilling down on any node to see what other nodes and models it is linked to, and traversing those links.

Interesting ideas about how to bring together all of the architecture, process, case and decision models and frameworks into a single graph for analysis of your digital enterprise.

Wearable Workflow by @wareFLO at BPMCM15

Charles Webster gave a breakout session on wearable workflow, looking at some practical examples of combining wearables — smart glasses, watches and even socks — with enterprise processes, allowing people wearing these devices to have device events integrated directly into their work without having to break to consult a computer (or at least a device that self-identifies as a computer). Webster is a doctor, and has a lot of great case studies in healthcare, such as detecting when a healthcare worker hasn’t washed their hands before approaching a patient by instrumenting the soap dispenser and the worker. Interestingly, the technology for the hand hygiene project came from smart dog collars, and we’re now seeing devices such as Intel’s Curie that are making this much more accessible by combining sensors and connectivity as we commercialize the internet of things (IoT).
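
As a sketch of how such instrumentation might feed a process, here is a toy Python event handler that raises a reminder task when a patient-approach event isn’t preceded by a recent hand-wash event; the event types, time window and IDs are all invented, not any specific product’s API.

from dataclasses import dataclass
from datetime import datetime, timedelta

# Invented event stream for illustration: badge proximity and soap-dispenser
# events, keyed by worker.
@dataclass
class Event:
    worker_id: str
    kind: str            # "hand_wash" or "patient_approach"
    timestamp: datetime

WASH_VALID_FOR = timedelta(minutes=5)

def hygiene_alerts(events: list[Event]) -> list[str]:
    """Return a human task/alert for each patient approach without a recent hand wash."""
    last_wash: dict[str, datetime] = {}
    alerts = []
    for e in sorted(events, key=lambda ev: ev.timestamp):
        if e.kind == "hand_wash":
            last_wash[e.worker_id] = e.timestamp
        elif e.kind == "patient_approach":
            washed = last_wash.get(e.worker_id)
            if washed is None or e.timestamp - washed > WASH_VALID_FOR:
                alerts.append(f"Remind {e.worker_id} to wash hands before patient contact")
    return alerts

now = datetime(2015, 6, 23, 10, 0)
print(hygiene_alerts([
    Event("nurse-7", "patient_approach", now),
    Event("nurse-7", "hand_wash", now + timedelta(minutes=1)),
    Event("nurse-7", "patient_approach", now + timedelta(minutes=2)),
]))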

He was an early adopter of Google Glass, and talked to us about the experience of having a wearable integrated into his lifestyle, such as for voice-controlled email and photography, plus some of the ideas for Google Glass that he has for healthcare workflows where electronic health records (EHR) and other device information can be integrated with work patterns. Google Glass, however, was not a commercial success since it was too bulky and geeky-looking, as well as requiring frequent recharging with heavy use. It requires more miniaturization to be considered as a possibility for most people, but that’s a matter of time, and probably a short amount of time, especially if the electronics are integrated directly into eyeglass frames that likely have a lot of unused volume that could be filled with components.

Webster talked about a university curriculum for healthcare technology and IoT that he designed, which would include the following courses:

  • Wearable human factors and workflow ergonomics
  • Data and process mining wearable data, since wearables generate so much more interesting data that needs to be analyzed and correlated
  • Designing and prototyping wearable products

He is working on a prototype for a 3D-printed, Arduino-based wearable interactive robot, MrRIMP, intended to be used by pediatric healthcare professionals to amuse and distract their young patients during medical examinations and procedures. He showed us a video of a demo of him and MrRIMP interacting, and the different versions that he’s created. Great ideas about IoT, wearables and healthcare.

Day 2 Keynote at BPMCM15

Second day at the BPM and Case Management summit in DC, and our morning keynote started with Jim Sinur — former Gartner BPM analyst — discussing opportunities in BPM and case management. He pointed out the proven benefits of process and case management, in terms of improving revenue, costs, time to market, innovation and visibility, while paving a path to digital transformation. However, these tried-and-true ROI measures just aren’t enough these days: we also need to consider customer loyalty, IoT, disruptive companies and business models, and in general, maintaining competitive differentiation in whatever way necessary to thrive in the emerging marketplace. In order to accommodate this, as well as attract good workers, it’s necessary to break the specialist mindset and allow people to become knowledge workers. I gave a workshop last week at the IRM BPM conference on the future of work, and I agree that this is a key part of it: more of the routine work is being automated, leaving the knowledge work for the people in the process; this requires a work environment that allows people to do the right thing at the right time to achieve a goal, not just work at a pre-defined task in a pre-defined way. Sinur cited a number of examples of processes that are leveraging emerging technologies, including knowledge workers’ workbenches that incorporate smart automated agents and predictive analytics; and IoT applications in healthcare and farming. The idea is to create goal-driven and proactive “swarming” processes that figure out on their own how to accomplish a goal through both human and automated intelligence, then assemble the resources to do it. Instead of pre-defining processes, you provide goals, constraints, analytics and contexts; the agents — including people, services, bots and sensors — create each process instance on the fly to best meet the situation. Although his case studies included a number of other technologies, he finished with a comment on how BPM and case management can be used to coordinate and orchestrate these processes as we move to a new world of digital transformation of the customer experience.
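
To make the goal-driven idea a bit more concrete, here is a deliberately tiny sketch of assembling a process instance on the fly: agents advertise capabilities, and a planner greedily picks whichever agents cover the remaining goals within a cost constraint. Everything here (agents, goals, the greedy strategy) is my own toy illustration, not Sinur’s architecture.

# Toy goal-driven assembly: instead of a pre-defined flow, pick whichever
# agents (people, bots, services, sensors) can satisfy the remaining goals
# within the stated constraints. Entirely illustrative.
agents = [
    {"name": "claims-bot", "provides": {"classify_claim"}, "cost": 1},
    {"name": "adjuster-Jane", "provides": {"assess_damage", "approve_payment"}, "cost": 10},
    {"name": "payment-service", "provides": {"issue_payment"}, "cost": 2},
]

def assemble_instance(goals: set[str], budget: int) -> list[str]:
    """Greedily choose agents until all goals are covered or the budget runs out."""
    plan, remaining, spent = [], set(goals), 0
    for agent in sorted(agents, key=lambda a: a["cost"]):
        useful = remaining & agent["provides"]
        if useful and spent + agent["cost"] <= budget:
            plan.append(agent["name"])
            remaining -= useful
            spent += agent["cost"]
    if remaining:
        raise RuntimeError(f"Unmet goals: {remaining}")
    return plan

print(assemble_instance({"classify_claim", "assess_damage", "issue_payment", "approve_payment"}, budget=20))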

Next up was Tom Debevoise, now with Signavio to help promote their recently-released DMN modeler; we had a sneak peek of the DMN modeler at bpmNEXT. He talked about three levels of decisions — strategic (e.g., should we change our business model), tactical (e.g., which customers to target) and operational (e.g., which discount to apply to this transaction) — and how these tend to be embedded within process models and business application logic, rather than externalized into decision models where they can be explicitly managed. Most organizations manage their decisions, both human and automated, very poorly, resulting in inconsistent or just plain wrong decisions being made. In other words, our business decisions are at the same point now as business processes were a decade or more ago, before BPM systems became widespread, and the path to improving this is to consider decision management as a discipline as well as the systems to model and automate decisions. We now have a decision modeling standard, DMN 1.0; this is expected to drive the adoption of decision modeling in organizations in the same way that BPMN did for process modeling. He proposed a decision management lifecycle similar to a BPM lifecycle, starting with decision discovery that allows modeling using the DMN-standard elements of a decision, input data, knowledge sources, information requirements, authority requirements and knowledge requirements. He wrapped up with the linkage between process and decision models, particularly using the Signavio BPMN and DMN modelers: how decisions that are defined external to a process can be used to assign process activity participants, decide on next steps, select the process pathway, define data access control, or detect and respond to events. We saw yesterday how Trisotech’s tools combine BPMN, CMMN and DMN, and today how Signavio combines BPMN and DMN; as more process modeling vendors expand to include decision modeling, we are going to see more implementations of these modeling standards integrated.
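
As a minimal illustration of externalizing an operational decision, here is a toy “which discount applies” decision implemented as a first-hit rule table that a process activity could invoke instead of embedding the logic; the rules and values are invented, and this is the idea rather than actual DMN syntax.

# Illustrative decision table for an operational decision ("which discount
# applies to this transaction"); rows and values are invented.
# First matching rule wins, in the spirit of a DMN FIRST hit policy.
DISCOUNT_RULES = [
    # (predicate over inputs, output)
    (lambda d: d["segment"] == "gold" and d["order_total"] >= 1000, 0.15),
    (lambda d: d["segment"] == "gold",                              0.10),
    (lambda d: d["order_total"] >= 1000,                            0.05),
    (lambda d: True,                                                0.00),  # default rule
]

def decide_discount(inputs: dict) -> float:
    """Evaluate the decision independently of any process that invokes it."""
    for predicate, discount in DISCOUNT_RULES:
        if predicate(inputs):
            return discount
    return 0.0

# A process activity would call the decision rather than embedding the logic:
print(decide_discount({"segment": "gold", "order_total": 1200}))   # 0.15
print(decide_discount({"segment": "silver", "order_total": 300}))  # 0.0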

The last speaker in the keynote was Lloyd Dugan, on how business architecture and BPM work together, in response to a paper that he wrote last year with Neal McWhorter. Although dense (I recommend checking out the paper at the link), his presentation discussed some of the issues with reconciling business architecture and BPM, such as reconciling value stream, balanced scorecard and other BA models with activities within a process model. He reviewed a number of definitions and model types, cutting a wide swath through pretty much everything even remotely related to process and architecture, and highlighting some of the failures of mapping enterprise architecture frameworks to BPMN. He finished with a spectrum from the business model perspective (what the business is doing) to the operational model perspective (how the business is doing it), and how the business architecture versus BPM viewpoints differ, but can still both use BPMN as a modeling language. Pretty sure of two things from this: 1) I missed a lot of the detail, and 2) Dugan has never heard that you’re supposed to have fewer than 500 words on each PowerPoint slide.