Nicholas Doyle of DST gave a presentation on crowdfunding: an interesting topic to cover at a conference attended primarily by old-school financial services companies, who are the ones most likely to be radically disrupted by crowdfunding but likely don’t see it coming. He started with a video from CraftFund — crowdsourced capital investment focused on the craft beer and food industry — then talked about the state of the market, the different business models, and the US securities regulations that apply to private securities, including crowdfunding. He also included a good timeline of crowdfunding from the 1983 startup of Grameen Bank’s microfinancing operations, plus some of the regulations that govern microlending and microequity in the US:
I covered a bit about crowdfunding at the 2012 Technicity event, including the UK crowdfunding platform CrowdCube and a discussion on the legality of equity crowdfunding in Ontario (where I live). Equity crowdfunding really only started in the US in 2011 with MicroVentures, and the recently passed JOBS Act includes a number of regulations that apply to crowdfunding and other small-scale equity investments, including who can participate and how it can be promoted and sold. In particular, Title III of the JOBS Act applies to crowdfunding; it’s not finalized yet but Doyle was able to give us a review of what is expected there, as well as some of the state regulations that will impact crowdfunding.
The landscape positions crowdfunding platforms between issuers and investors; the platform needs to include compliance, distribution, reporting and enabling technologies. Crowdfunding is only 0.6% of the world’s capital markets, but that’s still $1.6B; the industry has grown 1,000% in the last five years and will undoubtedly continue to grow. Debt financing is growing at a much higher rate than equity investing, in part due to the limits placed by the applicable titles of the JOBS Act. Donation models are also growing, and the biggest growth is in reward models, which are typical on sites such as Kickstarter. There are obvious challenges to work out with crowdfunding, such as secondary market liquidity and investor accreditation, but it’s safe to say that it’s here to stay and will continue to grow.
DST does not currently offer any products in this area, but it’s interesting to see that they are keeping a close eye on it to see how they can fit in the market, whether as a recordkeeper, clearinghouse or other role.
The sun is high and I’m all done with my presentation and videography commitments, so this will be the end of my blogging from DST ADVANCE 2015 as I head out to enjoy a bit of the Phoenix weather.
Day 2 at DST ADVANCE 2015, and I’m attending a panel of three people from AXA on their journey to becoming a digital insurance business. They define digital business as new ways of engaging with their customers: customers who are increasingly demanding with respect to online and mobile modes of interaction. This is also driven by their need to reduce and simplify paper requirements, internally in their operations and field sales force, and with their customers. The mandate for their digital enterprise transformation came from top management as an initiative for both better customer engagement and operational efficiencies.
There was a big cultural and change management component here to encourage their field agents and advisor channel to take advantage of the new digital tools, which in turn improves back office effectiveness by, for example, reducing NIGO (not in good order) rates because of rules-driven application forms. In their operations center, this resulted in shifts in resources, and changes to the type of people that they needed to hire and train: less heads-down data entry, and more tech-savvy knowledge workers. They also needed to effect internal cultural changes to become more flexible, and to have closer collaboration between business and IT.
Becoming a digital insurance business has changed a lot about how AXA’s products are created and rolled out, and also about their IT operations: they introduced the role of chief data scientist, and shifted from a waterfall software development methodology to Agile development and integrated business-technical Scrum teams. Like many insurance and financial services companies, they have a lot of legacy systems that run their business, and a big challenge ahead of them is to upgrade those to more agile platforms, such as their upcoming migration to AWD 10. They’re using Salesforce in some areas, and want to leverage that further in order to reduce reliance on internal legacy CRM, as well as introducing emerging technologies such as speech analytics, which are piloted in a regional center before being rolled out across the broader enterprise. Within IT, they are changing their methods to more of a DevOps model, with a particular focus on quality engineering. They have created some entirely new teams, such as mobile testing, to accommodate emerging technologies and be proactive with external forces such as mobile OS upgrades.
One area where they have seen success is in offering incentives to drive adoption by the advisors, such as competitions between regions on adoption levels; some of the incentives for adoption and suggesting new digital enterprise ideas include financial and travel benefits. New advisors are required to use the digital services, and existing advisors are becoming sold on the benefits of using the new tools; in the future, they are considering a negative financial incentive for continuing to use paper in order to further drive adoption. In rolling out a new version of an advisor portal, they included a feedback option, then gave priority to implementing the feedback suggested by the advisors; when the advisors realized that they were directly impacting the development of their day-to-day tools, their participation increased even more.
Audience members in the insurance industry also talked about a shift to digital enterprise causing an increase in top-line revenue by expanding markets, not just retaining existing customers and reducing costs. The AXA team echoed this, and the need to envision and evangelize completely new business models rather than just working on incremental improvement.
Key success factors that AXA identified include the merging of business and IT, and engaging the field sales force in defining and developing the digital services in order to create the right things at the right time. It took about a year from the point of their first rollout to widespread adoption, but now the new capabilities and tools are adopted more quickly since the advisors know that this is going to help them sell more and reduce problems in the sales and policy issue cycle.
To finish off the first morning at DST ADVANCE 2015, I attended the session on customer and work experience, which was presented as a case study of background investigations in a security-sensitive hiring process, such as for a government immigration and border control agency. This is a relatively straightforward case management scenario: a case is created by uploading and indexing the initiating documents using a form; then the case is managed from a case worker’s viewpoint, including tasks assigned to them or other people on the team, and an activity stream view of all case activity. They demonstrated a number of the new widget capabilities, including grid views of case tasks and investigation team members, and Google Maps integration with case data overlaid on the map. We also saw a field investigator’s portal view that limits the view to that user’s active case progress and the details of their assignments. The data entry forms regarding the person being investigated are reused from other parts of the process, plus forms specific to the investigator such as travel expenses.
This shows quite different interfaces depending on the user persona: the simple forms-based view for the case initiator; the full case management interface for the knowledge worker; a worklist-oriented case portal view for the field investigator; and a traditional internal worklist view for internal workers who are assigned specific tasks without visibility onto the entire case.
We didn’t see anything on how these interfaces are built, although there was some discussion of that; I think that there’s a more technical session on building interfaces using the widgets tomorrow.
Unfortunately, this session was in conflict with the Solutions for Tomorrow’s Workforce presentation about goal-driven design and some of the customer research that they’ve done; difficult to get to all of the sessions of interest here.
Roy Brackett and Mike Lovell from DST’s BPS (Business Process Solutions) product management gave us a review of what happened in 2014 and an update on their 2015 product strategy, following on from the bits that we heard from John Vaughn in the opening session.
DST has a ton of experience with the back office, since they run a huge outsourcing operation, but their current push is to also improve front office and customer-facing functionality. They are accelerating their release cycles, and providing detailed information via web conferences and technical briefings with customers. They’ve also mapped out a value journey for their customers for complex implementations: from defining outcomes and performance metrics to a solution design, proof of concept and production implementation.
With a large portfolio of products, including quite a bit of legacy still running at client locations, they have some product management challenges both in refactoring and modernizing platforms, and implementing newer technologies to keep up with the competition. Over three releases from June 2014 to January 2015, they added a number of capabilities:
UI widgets that use their RESTful services, including messaging between the widgets and with other applications
Advanced comments that allow threading and an activity feed view, bringing a more collaborative, social feel to authenticated comments within AWD
Variable timers that can be set at runtime
Multiple recipients on outbound letters, such as sending a broker or advisor a copy of a letter sent to a client
Separating out batch actions to improve performance
With their next releases, scheduled for April and beyond, they are adding or enhancing the following functionality:
Consume SAML-based web services
Communications editor for creating outbound correspondence
Updated platform support for WebSphere, JBoss, WebLogic, Oracle, WinSQL and IE 11
Updates to case management functionality including date functions, such as basing a date on the completion of dependencies, and adding case end dates
Early release of their Processing Workspace, which will mark the beginning of the end of their current portal UI; this improves screen real estate and navigation, and adds personalization for the primary workspace, worklists, search and attachment handling
Advanced workgroup management, rather than just a simple supervisor-worker hierarchy
Quality metrics and related analytics
Enhanced data transfer from AWD to the BI warehouse — there will be a session following this on the entire BI roadmap
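Among the case management updates above, date functions based on the completion of dependencies are the most conceptually interesting. The idea can be sketched as a derived date that only resolves once all dependent tasks are done; this is purely a hypothetical illustration, not DST’s actual API:

```python
from datetime import date, timedelta

def derive_case_date(dependency_completions, offset_days=0):
    """Derive a case date from the completion dates of its dependencies:
    the date is based on the latest dependency completion, plus an
    optional offset (e.g. 'due 5 days after all checks complete')."""
    if not dependency_completions:
        return None  # dependencies still open; the date cannot be set yet
    return max(dependency_completions) + timedelta(days=offset_days)

# A case end date that is 5 days after the last dependent task completes
completions = [date(2015, 4, 1), date(2015, 4, 7), date(2015, 4, 3)]
print(derive_case_date(completions, offset_days=5))  # 2015-04-12
```

The key design point is that the date is computed, not entered: as dependencies complete later than planned, the derived case date moves with them automatically.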
There’s also some work being done on creating robust data centers under their Project Rainforest, which will be covered in sessions later today.
We pulled back from the details to look at the business problems that are front of mind for organizations: enhanced customer experience, process efficiency, targeted marketing (via analytics) and cost reduction top the list. Broken down by industry, DST’s big three customer verticals of banking/investments, insurance and healthcare are definitely focused on enhanced customer experience, but also concerned with risk and compliance more than the overall average. To address this, DST is doing significant product development in the vertical application solutions and accelerators that can help their target customers achieve value sooner.
DST has never been seen as an innovator in the BPMS market in terms of features, and their current roadmap isn’t going to change that view. However, what they do provide is a deep pool of domain expertise in their core markets, and solid products that solve the real business problems for those customers. This has allowed them to create extremely strong relationships with their customers, who rely on them to support existing practices while modernizing their technology.
Conference season always brings some decisions and conflicts, and this year’s first one (for me) came down to a decision between DST’s ADVANCE in Phoenix, and IBM InterConnect in Las Vegas. DST had booked me to speak at this year’s conference right after last year, and when IBM combined Impact into InterConnect, the date moved from their usual March or April event to February, directly in conflict with DST. Also: Phoenix over Vegas? No competition.
DST has combined some of their smaller financial services events into ADVANCE this year, giving this a stronger than ever focus on financial services, especially in the area of asset/fund management. This is a big chunk of their customer base, along with insurance, although many of the solutions that they offer — including their BPMS — can apply to other verticals. I’ll be presenting later today about the challenges of onboarding, and how BPM, case management, smart process applications and other technologies can be applied to solve some of the business problems associated with these unpredictable, complex and risk-laden processes.
There were hands-on labs for customers yesterday, but the main conference started this morning with an opening session hosted by John Vaughn and featuring three of the senior management team, covering the four key areas where DST has focused in the past year to help their customers with business growth and retention: transforming the customer experience, optimizing distribution, staying ahead of compliance risks, and providing smarter business process solutions. We heard a quick overview about their advances in customer experience; advanced analytics applied to distribution solutions leveraged by their acquisition of kasina; GRC solutions that extend into the distribution chain; and business process solutions including their onboarding, claims processing and AML/KYC frameworks, plus their upcoming AltServe offering for managing alternative investments.
Since DST has an outsourcing operation that processes a huge portion of the mutual fund and similar transactions in the US, and they are their own first and best customer, they know what they’re doing in creating technology solutions for managing the tough problems in financial services. As Vaughn pointed out during the keynote, a lot of the “easy” transactions are now being done outside your back office, either further up in the distribution channel or through customer self-service, meaning that the work being done internally includes more of the difficult, unpredictable business problems; their frameworks and solution accelerators are focused on many of those.
Lots of great sessions on the agenda today; I’m going to head to the BPM product strategy breakout to see what’s coming up.
I caught up with Jakob Freund and Daniel Meyer of camunda last week in advance of their 7.2 release; with 1,700 person-days of work invested in the April-to-November release cycle, it includes a new tasklist application, an initial implementation of the Case Management Model and Notation (CMMN) standard, developer accelerators particularly for non-Java developers, and performance and stability improvements. You can hear more about the new release and see a demo in their webinar on Wednesday this week, and read their blog post about it. [Update: you can see the webinar replay here and the slides here, no registration required.]
It’s been interesting to track the progress of camunda and Activiti after camunda forked from the Activiti project in early 2013, since they are targeting slightly different markets but still offer competitive solutions in the open source BPM space. Last week, I wrote about Activiti’s recent release of their BPM Suite, which includes end-user task list and forms interfaces and tools for “citizen developers” (read: non-hard-core-Java developers); we are seeing some similar themes in the new camunda release, although camunda sticks closer to a true open source model by releasing pretty much all of the code as part of the open source project, while Activiti is making everything except the core engine part of their commercial product.
Filters can be created using expressions based on drop-down lists of instance properties, and can be shared for use by other users or groups. Filters can also specify process variables so that those variables are visible in the task list and can be used for searching and sorting, which makes it easier to locate a specific task without having to click on each task to see the details. If permissions allow, other users’ tasks can be included in the task list. During our demo, they pointed out the ability to use keyboard controls to navigate through the task list, something that was suggested by the users: having seen many keyboard-centric users slowed down by having to use a mouse to control their screens, I wasn’t surprised, but I think that many software developers don’t think about the needs of old-school keyboard users.
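The filter behavior described above can be sketched with a minimal data model: a filter combines property criteria with a list of process variables to expose as task list columns, and carries a sharing list. This is a hypothetical illustration of the concept, not camunda’s actual API:

```python
class Filter:
    """A tasklist filter: property criteria, exposed variables, sharing."""

    def __init__(self, name, criteria, show_variables=(), shared_with=()):
        self.name = name
        self.criteria = criteria              # task property -> required value
        self.show_variables = show_variables  # process variables shown as columns
        self.shared_with = set(shared_with)   # users/groups who may use the filter

    def apply(self, tasks):
        """Return matching tasks, each with the exposed variables attached."""
        matches = []
        for task in tasks:
            if all(task.get(prop) == value for prop, value in self.criteria.items()):
                columns = {v: task.get("variables", {}).get(v) for v in self.show_variables}
                matches.append({**task, "columns": columns})
        return matches

tasks = [
    {"id": "t1", "assignee": "mary", "process": "onboarding",
     "variables": {"customerName": "Acme", "amount": 5000}},
    {"id": "t2", "assignee": "john", "process": "claims",
     "variables": {"customerName": "Globex", "amount": 120}},
]

f = Filter("My onboarding tasks",
           criteria={"assignee": "mary", "process": "onboarding"},
           show_variables=("customerName",),
           shared_with=("accounting",))
print([t["id"] for t in f.apply(tasks)])  # ['t1']
```

Exposing selected process variables as columns is what lets a user search and sort without opening each task, which is the usability point made above.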
The task detail pane of the UI contains more than just the task form: it also has a history tab showing events, due dates and comments; a diagram tab highlighting the current task within the BPMN diagram; and a description tab that I didn’t see, but I assume can contain task instructions.
The other major new user-facing functionality is support for CMMN; 2015 is definitely going to be the year when we see the BPM vendors pile on here, and camunda is out in front. Like those of many other BPM vendors, camunda’s BPMN implementation does not support ad hoc activities – arguably, support for events and ad hoc activities provides most of what is required for case management – so they are using the Trisotech CMMN modeler instead of their own modeler, but executing on the same core camunda engine, which exposes standard engine features such as REST APIs and scripting. Cases can be instantiated through the API directly by a web form or other event, or through a call from a structured process. In turn, cases can instantiate structured processes through a call from an activity. This covers all of the use cases along the structured/unstructured spectrum: completely pre-defined processes, pre-defined with ad hoc exceptions, ad hoc with pre-defined fragments, and completely ad hoc. Calls from BPMN that instantiate a CMMN case can be synchronous (i.e., wait for completion) or asynchronous.
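A web form or external event instantiating a case through the engine’s API might look something like the sketch below, which just builds the request rather than sending it. The endpoint path follows camunda’s REST conventions for creating a case instance by definition key, but treat the exact URL and payload shape as assumptions; variable typing and business keys are omitted for brevity:

```python
import json

def build_create_case_request(base_url, case_definition_key, variables):
    """Build the URL and JSON body for creating a case instance by key."""
    url = f"{base_url}/case-definition/key/{case_definition_key}/create"
    body = {
        "variables": {
            name: {"value": value} for name, value in variables.items()
        }
    }
    return url, json.dumps(body)

# A web form submitting an application could trigger a case like this
url, body = build_create_case_request(
    "http://localhost:8080/engine-rest", "loanApplicationCase",
    {"applicant": "Acme Corp", "amount": 50000})
print(url)  # http://localhost:8080/engine-rest/case-definition/key/loanApplicationCase/create
```

Because BPMN processes and CMMN cases run on the same engine, the same kind of call can come from a BPMN activity, which is what enables the structured/unstructured spectrum described above.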
Case tasks, once instantiated, will appear in the tasklist along with those from BPMN processes; however, note that ad hoc tasks will require some sort of custom UI to allow a user to instantiate them if they are not triggered automatically by events or other tasks. There is nothing in the out-of-the-box tasklist app to do this, although I can envision that they might extend it in the future to allow a case owner or participant to see and trigger ad hoc case tasks.
CMMN is still pretty new, although the concepts of case management have been around for a long time; camunda has some customers testing and providing feedback on their CMMN implementation, and they are expecting requirements and capabilities to emerge as they get more practical experience with it. By providing an open source engine that supports CMMN, they also hope to contribute to the CMMN standard and its use cases in general since others can use their engine to test the standard.
On the execution performance side, camunda likes to distinguish their product within the open source BPM field as being for high-load straight-through processing rather than just manual activities, where they define “high-load” as 10 process instances per second: they’ve been steadily improving execution engine performance since the split and are even working on implementing alternative storage/computing strategies such as in-memory grids (I jokingly asked if we would see camunda on HANA any time soon). In tests on an earlier 7.x release, they were already 10-30x (that’s times, not percent) faster than Red Hat’s jBPM; 7.2 improves the caching and load balancing, and enhances the asynchronous history logging to allow the level of data logged to be configured by process class and activity. This helps provide the level of scalability that they need for their highest-volume customers, such as telcos, that may be executing more than 1,000 process instances per second.
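The asynchronous history logging mentioned above can be sketched as follows: the execution thread drops history events onto a queue and continues immediately, while a background writer persists them, and a per-activity configuration controls what gets logged at all. This is a simplified stand-in for camunda’s actual implementation, which configures logged detail by process class and activity:

```python
import queue
import threading

class AsyncHistoryLogger:
    """History events are queued and persisted off the execution thread."""

    def __init__(self, level_for_activity):
        self.level_for_activity = level_for_activity  # activity -> "full" | "none"
        self.queue = queue.Queue()
        self.persisted = []  # stand-in for the history database
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def log(self, activity, event):
        if self.level_for_activity.get(activity, "full") == "none":
            return  # logging disabled for this activity: near-zero overhead
        self.queue.put((activity, event))  # returns immediately to the caller

    def _drain(self):
        while True:
            item = self.queue.get()
            if item is None:
                break  # shutdown sentinel
            self.persisted.append(item)  # stand-in for an async database write
            self.queue.task_done()

    def close(self):
        self.queue.put(None)
        self.worker.join()

logger = AsyncHistoryLogger({"noisyLoop": "none"})
logger.log("approveOrder", "task completed")
logger.log("noisyLoop", "iteration 1")  # filtered out by configuration
logger.queue.join()  # wait for the writer to catch up (for the demo only)
logger.close()
print(logger.persisted)  # [('approveOrder', 'task completed')]
```

Decoupling history writes from execution is one of the standard ways an engine sustains high instance throughput: the hot path never waits on the history store, and the configurable level lets high-volume activities skip logging entirely.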
The camunda open source community edition is available for download and they will be pushing out updates to their Enterprise subscription customers. Check out their upcoming webinar, or sign up for the webinar and watch the recording later.
Disclosure: camunda has been my customer during 2014 for services including a webinar, white paper and the keynote at their user conference. However, I have not been compensated in any way for researching and writing this review.
There are definitely changes afoot in the open source BPM market, with both Alfresco’s Activiti and camunda releasing out-of-the-box end-user interfaces and model-driven development tools to augment their usual [Java] developer-friendly approach. In both cases, they are targeting “citizen developers”: people who have technical skills and do some amount of development, but in languages lighter weight than Java. There are a lot of people who fall into this category, including those (like me) who used to be hard-core developers but fell out of practice, and those who have little formal training in software development but have some other form of scientific or technical background.
Prior to this year, Activiti BPM was not available as a standalone commercial product from Alfresco, only bundled with Alfresco or as the community open source edition; as I discussed last year, their main push was to position Activiti as the human-centric workflow within their ECM platform. However, Activiti sports a solid BPMN engine that can be used for more than just document routing and lifecycle management, and in May Alfresco released a commercially-supported Alfresco Activiti product, although focused on the human-centric BPM market. This provides them with opportunities to monetize the existing Activiti community, as well as evolving the BPM platform independently of their ECM platform, such as providing cloud and hybrid services; however, it may have some impact on their partners who were relying on support revenue for the community version.
The open source community engine remains the core of the commercial product – in fact, the enterprise release of the engine lags behind the community release, as it should – but the commercial offering adds all of the UI tools for design, administration and end-user interface, plus cluster configuration for the execution engine.
The Activiti Administrator is an on-premise web application for managing clusters, deploying process models from local packages or the Activiti Editor, and technical monitoring and administration of in-flight processes. There’s a nice setup wizard for new clusters – the open source version requires manual configuration of each node – and it allows nodes within the cluster to be auto-discovered and monitored. The monitoring of process instances allows drilling into processes to see variables, the in-flight process model, and more. Not a business monitoring tool, but seems like a solid technical monitoring tool for on-premise Activiti Enterprise servers.
The Activiti Suite is a web application that brings together several applications into a single portal:
Kickstart is their citizen development environment, providing a simple step editor that generates BPMN 2.0 – which can then be refined further using the full BPMN Visual Editor or imported into the Eclipse-based Activiti Designer – plus a reusable forms library and the ability to bundle processes into a single process application for publishing within the Suite. In the SaaS version, it will integrate with cloud services including Google Drive, Alfresco, Salesforce, Dropbox and Box.
Tasks is the end-user interface for starting, tracking and participating in processes. It provides an inbox and other task lists, and provides for task collaboration by allowing a task recipient to add others who can then view and comment on the task. Written in AngularJS.
Profile Management, for user profiles and administration
Analytics, for process statistics and reports.
The Suite is not fully responsive and doesn’t have a mobile version, although apparently there are mobile solutions on the way. Since BP3 is an Activiti partner, some of the Brazos tooling is available already, and I suspect that more mobile support may be on the way from BP3 or Alfresco directly.
They have also partnered with Fluxicon to integrate process mining, allowing for introspection of the Activiti BPM history logs; I think that this is still a bit ahead of the market for most process analysts but will make it easy when they are ready to start doing process discovery for bottlenecks and outliers.
I played around with the cloud version; it was pretty easy to use (I even found a few bugs), and would be usable by someone with some process modeling and lightweight development skills to build apps. The Step Editor provides a non-BPMN flowcharting style that includes a limited number of functions, but certainly enough to build functional human-centric apps: implicit process instance data definition via graphical forms design; step types for human, email, “choice” (gateway), sub-process and publishing to Alfresco Cloud; a large variety of form field types; and timeouts on human tasks (although timers based on business days, rather than calendar days, are not there yet). The BPMN Editor has a pretty complete palette of BPMN objects if you want to do a more technical model that includes service tasks and a large variety of events.
Although initially launched in a public cloud version, everything is also available on premise as of the end of November. They have pricing for departmental (single-server up to four cores with a limit on active processes) and enterprise (eight cores over any number of servers, with additional core licensing available) configurations, and subscription licensing for the on-premise versions of Kickstart and Administrator. The cloud version is all subscription pricing. It seems that the target is really for hybrid BPM usage, with processes living on premise or in the cloud depending on the access and security requirements. Also, with the focus on integration with content and human-centric processes, they are well-positioned to make a play in the content-centric case management space.
Instead of just being an accelerator for adding process management to Java development projects, we’re now seeing open source BPM tools like Activiti being positioned as accelerators for lighter-weight development of situational applications. This is going to open up an entire new market for them: an opportunity, but also some serious new competition.
Roberto Mercadante, SVP of operations and technology at Citibank Brazil, presented a session on their journey with AMX BPM. I also had a chance to talk to him yesterday about their projects, so have a bit of additional information beyond what he covered in the presentation. They are applying AMX BPM to their commercial account opening/onboarding processes for “mid-sized” companies (between $500M and $1B in annual revenue), where there is a very competitive market in Brazil that requires fast turnaround, especially for establishing credit. As a global company in 160 countries, they are accustomed to dealing with very large multi-national organizations; unfortunately, some of those very robust features manifest in delays when handling smaller single-country transactions, such as their need to have a unique customer ID generated in their Philippines operation for any account opening. Even for functions performed completely within Brazil, they found that processes created for handling large corporate customers were just too slow and cumbersome for the mid-size market.
Prior to BPM implementation, the process was very paper-intensive, with 300+ steps to open an account, requiring as many as 15 signatures by the customer’s executives. Because it took so long, the commercial banking salespeople would try to bypass the process by collecting the paperwork and walking it through the operations center personally; this is obviously not a sustainable method for expediting processes, and wasn’t available to those people far from their processing center in Sao Paulo. Salespeople were spending as much as 50% of their time on operations, rather than building customer relationships.
They use an Oracle ERP, but found that it really only handled about 70% of their processes and was not, in the opinion of the business heads, a good fit for the remainder; they brought in AMX BPM to help fill that gap, typically representing localized processes due to unique market needs or regulations. In fact, they really consider AMX BPM to be their application development environment for building agile, flexible, localized apps around the centralized ERP.
When Citi implemented AMX BPM last year — for which they won an award — they were seeking to standardize and automate processes with the primary target to reduce the cycle time, which could be as long as 40 days. Interestingly, instead of reengineering the entire process, they did some overall modeling and process improvement (e.g., removing or parallelizing steps), but only did a complete rework on activities that would impact their goal of reducing cycle time, while enforcing their regulatory and compliance standards.
A key contributor to reducing cycle time, not surprisingly, was to remove the paper documents as early as possible in the process, which meant scanning documents in the branches and pushing them directly into their IBM FileNet repository, then kicking off the related AMX BPM processes. The custom scanning application included a checklist so that the branch-based salespeople could immediately know which documents they were missing. Because they had some very remote branches with low communications bandwidth, they also had to create some custom store-and-forward mechanisms to defer document transmission to times of low bandwidth usage, although that was eventually retired as their telecom infrastructure was upgraded. I’ve seen similar challenges with some of my Canadian banking customers regarding branch capture, with solutions ranging from using existing multifunction printers to actually faxing in documents to a central operational facility; paper capture still represents some of the hairiest problems in business processes, in spite of the fact that we’re all supposed to be paperless.
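The store-and-forward idea described above can be sketched as a local queue in the branch that only transmits during a low-usage window (for example, overnight), so scanned documents never saturate a slow branch link during business hours. This is hypothetical logic for illustration; the actual Citi mechanism was custom-built and has since been retired:

```python
from datetime import time

class StoreAndForwardQueue:
    """Queue scanned documents locally; transmit only in a low-usage window."""

    def __init__(self, window_start=time(20, 0), window_end=time(6, 0)):
        self.window_start = window_start  # start of the low-bandwidth-usage window
        self.window_end = window_end
        self.pending = []

    def enqueue(self, document):
        self.pending.append(document)  # store locally; don't transmit yet

    def in_window(self, now):
        # The window wraps past midnight: 20:00 -> 06:00
        return now >= self.window_start or now <= self.window_end

    def flush(self, now, transmit):
        """Transmit all pending documents if inside the window; return count sent."""
        if not self.in_window(now):
            return 0
        sent = len(self.pending)
        for doc in self.pending:
            transmit(doc)
        self.pending.clear()
        return sent

q = StoreAndForwardQueue()
q.enqueue("articles_of_incorporation.tif")
q.enqueue("signature_card.tif")
print(q.flush(time(14, 30), transmit=print))  # prints 0 (still business hours)
print(q.flush(time(22, 0), transmit=print))   # transmits both, then prints 2
```

The BPM process can still be started immediately from the document metadata; only the bulky image transmission is deferred, which is what makes this compatible with the fast cycle times described in the case study.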
They built BPM analytics in Spotfire (this was prior to the Jaspersoft acquisition, which might have been a better fit for some parts of this) to display a real-time dashboard to identify operational bottlenecks — they felt strongly about including this from the start since they needed to be able to show real benefits in order to prove the value of BPM and justify future development. The result: 70% reduction in their onboarding cycle time within 3 months of implementation, from as much as 40 days down to a best time of about 3 days; it’s likely that they will not be able to reduce it further since some of that time is waiting for the customers to provide necessary documentation, although they do all the steps possible even in the absence of some documents so that the process can complete quickly as soon as the documents arrive. They also saw a 90% reduction in standard deviation, since no one was skewing the results by personally escorting documents through the operations center. Their customer rejection rate was reduced by 58%, so they captured a much larger portion of the companies that applied.
The benefits, however, extended beyond operational efficiency: it allowed decentralization of some front-office functions and relocation of some back-office operations. This opens the door to leveraging shared services in other Citibank offices, relocating operations to less expensive locations, and even outsourcing some operations completely.
They’re now looking at implementing additional functionality in the onboarding process, including FATCA compliance, mobile analytics, more legacy integration, and ongoing process improvement. They’re also looking at related problems that they can solve in order to achieve the same level of productivity, and considering how they can expand the BPMS implementation practices to support other regions. For this, they need to implement better BPM governance on a global basis, possibly through some center of excellence practices. They plan to do a survey of Citibank worldwide to identify the critical processes not handled by the ERP, and try to leverage some coordinated efforts for development as well as sharing experiences and best practices.
There’s one more breakout slot but nothing catches my eye, so I’m going to call it quits for TIBCO NOW 2014, and head out to enjoy a bit of San Francisco before I head home tomorrow morning. This is my last conference for the year, but I have a backlog of half-written product reviews that I will try to get up here before too long.
Michael O’Connell, TIBCO’s chief data scientist, and Hayden Schultz, a TIBCO architect, discussed and demonstrated an event-handling example using remote sensor data with Spotfire and Streambase. One oil company may have thousands of submersible pumps moving oil up from the well, and these modern pumps include sensors and telemetry to allow them to be monitored and controlled remotely. One of their oil and gas customers said that through active monitoring and control such as this, they are avoiding downtime worth $1,000/day/well, adding up to an additional $100M in revenue each year. In addition to production monitoring, they can also use remote monitoring in drilling operations to detect conditions that might pose a physical risk. They use standards for sensor data format, and a variety of data sources including SAP HANA.
For the production monitoring, the submersible pumps emit a lot of data about their current state: monitoring for changes to temperature, pressure and current shows patterns that can be correlated with specific pre-failure conditions. By developing models of these pre-failure patterns using Spotfire’s data discovery capabilities on historical failure data, data pushed into Streambase can be monitored for the patterns, with Spotfire then used to trigger a notification and provide visualization and analytics for whoever is monitoring the pumps.
We saw a demonstration of how the pre-failure patterns are modeled in Spotfire, then how the rules are implemented in Streambase for real-time monitoring and response using visual modeling and some XML snippets generated by Spotfire. We saw the result in Streambase LiveView, which provides visualization of streaming data and highlights those data points that are exhibiting the pre-failure condition. The engineers monitoring the pumps can change some of the configuration of the failure conditions, allowing them to fine-tune to reduce false positives without missing actual failure events. Events can kick off notification emails, generate Spotfire root cause analysis reports, or invoke other applications such as instantiating a BPM process.
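To make the idea concrete, here is a minimal Python sketch of this kind of streaming pre-failure detection. Everything here is illustrative, not the actual Streambase rules from the demo: I’ve assumed a simple pattern (rising temperature with falling pressure over a sliding window) and made up the thresholds, which correspond to the configuration the engineers would tune to reduce false positives.

```python
from collections import deque

class PreFailureDetector:
    """Illustrative detector: a sliding window over (temperature, pressure,
    current) readings flags a pump when temperature rises and pressure falls
    across the window. Window size and thresholds are hypothetical and would
    be tuned against historical failure data."""

    def __init__(self, window=5, temp_rise=2.0, pressure_drop=1.5):
        self.readings = deque(maxlen=window)
        self.temp_rise = temp_rise          # minimum temperature increase to flag
        self.pressure_drop = pressure_drop  # minimum pressure decrease to flag

    def observe(self, temperature, pressure, current):
        """Add one sensor reading; return True if the pre-failure pattern is present."""
        self.readings.append((temperature, pressure, current))
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough history in the window yet
        first, last = self.readings[0], self.readings[-1]
        temp_trend = last[0] - first[0]       # positive when temperature is rising
        pressure_trend = first[1] - last[1]   # positive when pressure is falling
        return temp_trend >= self.temp_rise and pressure_trend >= self.pressure_drop
```

In the architecture described above, a True result is where the downstream actions hang off: a notification email, a Spotfire root cause report, or instantiating a BPM process.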
There are a number of similar industrial applications, such as in mining: wherever there are a large number of remote devices that require monitoring and control.
Nicolas Marzin, from the TIBCO BPM field group, presented a breakout session on the benefits of combining BPM and analytics — I’m not sure that anyone really needs to be convinced of the benefits, although plenty of organizations don’t implement this very well (or at all), so it obviously isn’t given a high priority in some situations.
BPM analytics have a number of different audiences — end users, team leaders, line-of-business managers, and customer service managers — and each of them is interested in different things, from operational performance to customer satisfaction measures. Since we’re talking about BPM analytics, most of these are focused on the work being processed, but offer different views and aspects of that process-related information. Regardless of the information that they seek, the analytics need to be easy to use as well as informative, keeping in mind that analytics is driven more by questions than static reporting is.
There are some key BPM metrics regardless of industry:
Work backlog breakdown, including by priority, segment and skillset (required to determine resourcing requirements) or SLA status (required to calculate risk)
Resource pool and capacity
Aggregate process performance
Business data-specific measures, e.g., troublesome products or top customers
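As a quick illustration of the first two metrics, the computations involved are simple once the work items carry the right attributes. This Python sketch is a hypothetical example (the item fields and the four-hour risk horizon are my own assumptions, not tied to any particular BPMS): it breaks down the backlog by an attribute such as priority or skillset, and pulls out the items whose SLA deadline is approaching.

```python
from collections import Counter
from datetime import datetime, timedelta

def backlog_breakdown(work_items, key="priority"):
    """Count open work items by a given attribute (priority, segment, skillset)."""
    return Counter(item[key] for item in work_items)

def sla_at_risk(work_items, now, horizon=timedelta(hours=4)):
    """Items whose SLA deadline falls within the horizon: the at-risk pool."""
    return [item for item in work_items if item["sla_deadline"] - now <= horizon]

# Hypothetical work queue snapshot
now = datetime(2014, 12, 1, 9, 0)
items = [
    {"priority": "high", "skillset": "kyc", "sla_deadline": now + timedelta(hours=2)},
    {"priority": "low", "skillset": "docs", "sla_deadline": now + timedelta(hours=8)},
    {"priority": "high", "skillset": "kyc", "sla_deadline": now + timedelta(hours=1)},
]
by_priority = backlog_breakdown(items)
at_risk = sla_at_risk(items, now)
```

The breakdown by skillset is what feeds resourcing decisions, while the at-risk pool is the input to the reprioritization and reallocation actions discussed below.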
Monitoring and analytics are important not just for managing daily operations, but also to feed back into process improvement: actions taken based on the analytics can include work reprioritization, resource reallocation, or a request for process improvement. Some of these actions can be automated, particularly the first two; there’s also value in doing an in situ simulation to predict the impacts of these actions on the SLAs or costs.
By appropriately combining BPM and analytics, you can improve productivity, improve visibility, reduce time to action and improve the user experience. A good summary of the benefits; as I mentioned earlier, this is likely not news to the customers in the audience, but I am guessing that many of them are not yet using analytics to their full extent in their BPM implementations, and this information might help them justify it.
In AMX BPM, Spotfire was previously positioned for analytics and visualization, but TIBCO’s acquisition of Jaspersoft means that they are now bundling Jaspersoft with AMX BPM. You can use either (or both), and I think that TIBCO needs to get on top of identifying the use cases for each so that customers are not confused by two apparently overlapping BPM analytics solutions. Spotfire allows for very rich interactive visualizations of data from multiple sources, including drill-downs and what-if scenarios, especially when the analysis is more ad hoc and exploratory; Jaspersoft is better suited for pre-defined dashboards for monitoring well-understood KPIs.