Positioning Business Modeling panel at bpmNEXT

We had a panel of Clay Richardson of Forrester, Kramer Reeves of Sapiens and Denis Gagne of Trisotech, moderated by Bruce Silver, discussing the current state of business modeling in the face of digital transformation, where we need to consider modeling processes, cases, content, decisions, data and events in an integrated fashion rather than as separate activities. The emergence of the CMMN and DMN standards, joining BPMN, is driving modeling platforms that not only include all three, but provide seamless integration between them in the modeling environment: a decision task in a BPMN or CMMN model links directly to the DMN model that represents that decision; a predefined process snippet in a CMMN model links directly to the BPMN model; and an ad hoc task in a BPMN model links directly to the CMMN model. The resulting models may be translated to (or even created in) a low-code executable environment, or may be purely for understanding and optimizing the business.
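
To make the linkage concrete, here is a minimal sketch of how such cross-model references might be represented, using hypothetical model and element names rather than any particular vendor's format:

```python
# Minimal sketch (hypothetical names, not any vendor's format) of cross-model
# linking: a BPMN decision task references the DMN model behind it, and an
# ad hoc BPMN task points into a CMMN case model.
from dataclasses import dataclass, field

@dataclass
class ModelRef:
    model_type: str   # "BPMN", "CMMN" or "DMN"
    model_id: str

@dataclass
class Element:
    name: str
    element_type: str                 # e.g., "decisionTask", "adHocTask"
    links: list = field(default_factory=list)

# A decision task in a claims process links straight to its DMN decision model
approve = Element("Approve Claim", "decisionTask",
                  links=[ModelRef("DMN", "claim-approval-decision")])

# An ad hoc task in the same BPMN model hands off to a CMMN case model
investigate = Element("Investigate Exception", "adHocTask",
                      links=[ModelRef("CMMN", "fraud-investigation-case")])

for el in (approve, investigate):
    for ref in el.links:
        print(f"{el.name} -> {ref.model_type} model '{ref.model_id}'")
```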

Some of the points covered on the panel:

  • The people creating these models are often in a business architecture role if they are being created top down, although bottom-up modeling is often done by business analysts embedded within business areas. There is a large increase in interest in modeling within architecture groups.
  • One of the challenges is how to justify the time required to create these models. A potential positioning is that business models are essential to capturing knowledge and understanding the business even if they are not directly executable, and as organizations’ use of modeling matures and gains visibility with executives, it will be easier to justify without having to show an immediate tangible ROI. Executable models are easier to justify since they are an integrated part of an application development lifecycle.
  • Models may be non-executable because they model across multiple implementation systems, or are used to model activities in systems that do not have modeling capabilities, such as many ERP, CRM and other core operational systems, or are at higher levels of abstraction. These models have strategic value in understanding complexity and interrelationships.
  • Modeling can be jump-started with a model derived from process/data mining, reducing the time required to get started.
  • Modeling vendors aren’t competing against each other, they’re competing against old methods of text-based business requirements.
  • Many models are persistent, not created just for a specific point in time and discarded after use.

A panel including two vendors and an analyst made for some lively conversation, and not a small amount of finger-pointing. 🙂

bpmNEXT 2016

It’s back! My favorite conference of the year, where industry insiders get together to exchange stories and show the cool stuff they’re working on, bpmNEXT is taking place this week in Santa Barbara. This morning is a special session on the Business of BPM, looking forward at what’s coming in the next few years, with an analyst panel just after lunch that I’ll be participating in. After that, we’ll start on the demos: each presenter has a 5-minute Ignite-style presentation as an intro (20 auto-advancing slides of 15 seconds each) followed by a live demo.

After a brief intro by Bruce Silver, the morning kicked off with Nathaniel Palmer providing an outlook on the next five years of BPM, starting with what we can learn from other areas of digital disruption, where new companies are leveraging infrastructure built by the long-time industry players. He discussed how the nature of work (and processes) is becoming data-driven, goal-oriented and adaptive, with intelligent automation built in. His take on what will drive BPM in the next five years is the three R’s: robots (and other smart things), rules, and relationships (really, the data about the relationships). The modern BPMS framework is much more than just process: it includes goal-seeking optimization, event processing, decision management and process management working on events captured from systems and smart devices. We need to redefine work and how we manage tasks, moving away from (or at least redefining) the worklist paradigm. He also suggests moving away from the monolithic integrated BPMS platform in favor of assembling best-of-breed components, although there was some discussion as to whether this changes the definition of a BPMS, steering away from the recent trend that has turned most BPMSs into full-fledged application development platforms.

Up next was Neil Ward-Dutton, providing insights into how the CxO focus and influences are changing. Although many companies have a separate perspective and separate teams working on digital business strategy based on their focus — people and knowledge versus processes and things, internal versus external — these are actually all interconnected. The companies most successful at digital transformation recognize this, and create integrated experiences across what other companies may think of as separate parts of their organization, such as breaking down the barriers between employee engagement and external engagement. Smart connected things fill in the gaps of digital transformation, allowing us to not only create digital representations of physical experiences, but also create physical representations of digital experiences. Neil also looked at the issue of changing how we define work and how it gets done: automation, collaboration, making customers full participants in processes, and embracing new interfaces. Companies are also changing how they think about what they do and where their value lies: in the past 40 years, the S&P 500’s market value has shifted from primarily tangible assets to primarily intangible assets, with a focus on optimizing customer experiences. Yet there is high employee turnover in the call centers responsible for some of those customer experiences, driving the need for new ways to serve and collaborate with customers. He finished with five imperatives for digital work environments: openness, agility, measurability, collaboration and augmentation. Successful implementation of these digital transformation imperatives may allow breaking the rules of corporate strategy, allowing an organization to show excellence in products, customer engagement and operations rather than just along a single axis.

Great start to the conference, with lots of ideas and themes that I’m sure we’ll see echoed in the presentations over the next couple of days.

BPM and IoT in Home and Hospice Healthcare with @PNMSoft

I listened in on a webinar by Vasileios Kospanos of PNMSoft today about business process management (BPM) and the internet of things (IoT). He started with some basic definitions and origins of IoT – I had no idea that the term was coined back in 1999, about the same time that the term BPM came into use – as part of controls engineering that relied on a lot of smart devices and sensors producing data and responding to remote commands. There are some great examples of IoT in use, including environmental monitoring, manufacturing, energy management, and medical systems, in addition to the better-known consumerized applications such as home automation and smart cars. Gartner claims that there will be 26B devices on the internet by 2020, which is probably not a bad estimate (and is also driving adoption of the new IPv6 addressing standard).

Dominik Mazur from Amedar Consulting Group (a Polish business and technology consulting firm) joined to discuss a case study from one of their healthcare projects, helping to improve the flow of medical and operational information for home care and hospices – parts of the medical system that are often orphaned from an information gathering standpoint – tied into Poland’s National Health Fund systems. This included integrating the information from various devices used to measure patients’ vital signs, and supported processes for admission and discharge from medical care facilities. The six types of special-purpose devices communicate over mobile networks, and can store data for later forwarding if there is no signal at the point of collection. Doctors and other healthcare professionals can view the data and participate in remote diagnosis activities or schedule patient visits.
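
That store-and-forward behavior is a classic pattern for intermittently-connected devices. Here is a minimal sketch of the idea, assuming an in-memory buffer and a hypothetical send_reading transport call (the real devices no doubt do this in firmware):

```python
import time
from collections import deque

pending = deque()  # readings buffered while the mobile network is unavailable

def send_reading(reading) -> bool:
    """Hypothetical transport call; returns False when there is no signal."""
    return False  # a real device would attempt delivery to the collection service

def record_vital_sign(reading):
    """Buffer the reading, then try to flush everything queued so far."""
    pending.append(reading)
    while pending:
        if not send_reading(pending[0]):
            break          # still offline: keep buffering for later forwarding
        pending.popleft()  # delivered, so remove it from the buffer

record_vital_sign({"patient": "p-001", "pulse": 72, "ts": time.time()})
print(f"{len(pending)} reading(s) queued for later forwarding")
```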

Mazur showed the screens used by healthcare providers (with English annotations, since the system is in Polish) as well as some of the underlying architecture and process models implemented in PNMSoft, such as the admitting interview and specialist referral process for patients, coordination of physician and specialist visits, and home medical equipment rental, including remote configuration via the remote monitoring capabilities. He also showed a live demo of the system, highlighting features such as alarms that appear when patient data falls outside of normal boundaries; they are integrating third-party and open-source tools, such as Google charting tools, directly into their dashboards. He also discussed how other devices can be paired to the system using Bluetooth; I assume that this means that a consumer healthcare device could be used as an auxiliary measurement device, although manufacturers of those devices are quick to point out that they are not certified healthcare devices in order to absolve themselves of responsibility for bad data.
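
The alarms he demonstrated are essentially boundary checks on each incoming reading. A minimal sketch, with made-up normal ranges:

```python
# Made-up normal ranges per vital sign, for illustration only
NORMAL_RANGES = {
    "pulse": (50, 110),           # beats per minute
    "spo2": (94, 100),            # % oxygen saturation
    "temperature": (35.5, 38.0),  # degrees Celsius
}

def check_reading(vital, value):
    """Return an alarm message if the value falls outside its normal range."""
    low, high = NORMAL_RANGES[vital]
    if value < low or value > high:
        return f"ALARM: {vital}={value} outside normal range {low}-{high}"
    return None

alarm = check_reading("spo2", 91)
if alarm:
    print(alarm)  # a real system would raise a dashboard alert or start a process
```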

He wrapped up with lessons learned from the project, which sound much like those from many other BPM projects: use model-driven Agile development (with PNMSoft, in their case), and work closely with key stakeholders. However, the IoT aspect adds complexity, and they learned some key lessons around that, too: start device integration sooner, and allow 20-30% of project time for testing. They developed a list of best practices for similar projects, including extending business applications to mobile devices, and working in parallel on applications, device integration and reporting.

We wrapped up with an audience Q&A, although there were many more questions than we had time for. One of the more interesting ones was around automated decisioning: they are not doing any of that now, just alerting that allows people to make decisions or kick off processes, but this work lays the foundation for learning what can be automated without risk in the future. Both patients and healthcare providers are accepting the new technology, and the healthcare providers in particular find that it is making their processes more efficient (reducing administration) and transparent.

Great webinar. It will be available on demand from the resources section on PNMSoft’s website within a few days.

Update: PNMSoft published the recording on their YouTube channel within a couple of hours. No registration required!

When Lack Of System Integration Incurs Costs – And Embarrassment

BPM systems are often used as integrating mechanisms for disparate systems, passing information from one to another to ensure that they stay in sync. They aren’t the only type of system used for integration and orchestration – there’s everything from the consumer-focused IFTTT and Zapier to full-on server-side orchestration – but that’s often presented as a primary use case for a BPMS.

What happens, however, when you don’t integrate systems, and rely on “swivel chair integration”, where people have to enter the same information twice in two different systems? In many cases, that integration just doesn’t happen on a consistent basis, and that can cost organizations a lot of money. The news headlines here are all about how lawyers were overpaid (really? that’s news? 😉), but for me, the real story is buried further down:

[Lawyers’] time-off recorded in a scheduling system known as iCase was not always properly recorded in a parallel payroll system, known as PeopleSoft. Lawyers themselves were supposed to update both systems, but for various reasons did not.

In short, an organization that employs highly-paid professionals expected those people to enter their time (reasonable) – twice, in two different systems (unreasonable). And for some reason, they are surprised that the lawyers didn’t always do this.
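
For illustration, here is a minimal sketch of what event-driven propagation could look like, with entirely hypothetical stand-ins for the scheduling and payroll systems (neither reflects a real API): enter the time off once, and an integration handler keeps the other system in sync.

```python
# Entirely hypothetical stand-ins; neither reflects iCase's or PeopleSoft's APIs.
class SchedulingSystem:
    def on_time_off_recorded(self, handler):
        self._handler = handler      # register a callback for new time-off entries

    def record_time_off(self, entry):
        print(f"scheduling: recorded {entry}")
        self._handler(entry)         # fire the integration hook

class PayrollSystem:
    def record_time_off(self, employee_id, date, hours):
        print(f"payroll: {employee_id} off {date} ({hours}h)")

scheduling = SchedulingSystem()
payroll = PayrollSystem()

# The integration layer: enter once in scheduling, payroll stays in sync.
scheduling.on_time_off_recorded(
    lambda e: payroll.record_time_off(e["employee"], e["date"], e["hours"])
)

scheduling.record_time_off({"employee": "L-1234", "date": "2016-01-15", "hours": 7.5})
```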

Smarter Mobile Apps Webinar with Me and @jamet123

I wrote a paper last year with James Taylor on smarter mobile apps that leverage process and decision management technologies, and we’re giving a webinar on the topic next Tuesday, January 19, at 1pm ET. You can read James’ more detailed post on this, or just head over and sign up for the webinar. We will be releasing the paper after the webinar.

Appian Around The World – Toronto

Appian was recently doing a round of road-show conferences, and when they landed in my backyard, I decided to stop in for the day and see what was new. I missed Appian World this year and was looking forward to a bit of a product update as well as some of the local customer stories.

The day started with Edward Hughes, SVP of sales, giving us a high-level overview of Appian and their BPM platform-as-a-service and case management products (for the non-customers in the audience), as well as their shift to becoming more of a broad application development platform rather than just a BPMS. I’ve been seeing this trend with many BPM vendors over the past few years, and Appian has been repositioning this way for a year or two already. Using Appian as an application development platform allows applications to be developed independently of the deployment platform, both on server side (e.g., develop on the cloud version, deploy on premise) and for client interfaces on desktops or mobile devices. The messaging is that you can use their platform to create customer service applications “beyond CRM” to handle the entire customer journey, with a unified interface plus a consolidated view onto enterprise data using their Records function. He also talked about the Appian App Market, which is an expanded version of their Appian Forum, containing add-in components and complete applications from Appian and third parties.

Since it was a small room, the local customers introduced themselves and talked about their Appian experience and applications: 407 ETR, with 10 apps integrated with their customer portals so that online actions (e.g., acquiring a new transponder) become Appian processes assigned to the 125 internal users; Manulife, the first Appian cloud customer back in 2008, now migrating their “legacy” Appian apps to the Tempo UI and serving 900 users for work/time tracking and records management in Marketing; and IESO, with apps to register and manage information about energy companies participating in electricity markets. We also heard from some of the partners attending: TCS, Princeton Blue, and boutique consultancy Bits In Glass with 15+ Appian-trained people in Canada and the US. Bits In Glass used to do mostly code-level (Java) bespoke development, and using Appian’s model-driven development have cut their effort and timelines to one-third of what they were.

Next up was Michael Beckley, describing his new role as Chief Customer Officer (as well as CTO), and giving us a product update on the 7.11 quarterly release. Appian sees corporate IT budgets as 20% innovation and 80% maintenance, but wants to flip that ratio so that maintenance is much less expensive than the original build, freeing up time and energy for innovation. Most large enterprises aren’t going to get rid of custom applications, but they do need to make them faster to build and maintain, while enforcing strict security and providing a user-friendly interface for internal and external users. In theory, an integrated application development platform such as Appian provides all the pieces: user interface, reports, rules, collaboration, process, on-premise and cloud deployment, mobile, social, data, content, security, identity, and integration; in practice, most organizations end up doing something outside the model-driven development environment, although it can definitely improve their custom development efforts. Appian’s focus, as with many of the other BPMS vendors pivoting to become app dev vendors, is on providing a platform to build process-centric applications that get things done via automation, with people injected into the process where required.

Beckley gave us a hint of their growth strategy: they tend to build rather than buy, both to keep their technology pure and because growth by acquisition inevitably requires a large (and usually underestimated) effort to integrate the new technology.

Here’s a quick list of the Appian 7.11 updates (some of these likely came before 7.11, but I haven’t had an update for a while):

  • Three UI styles for Appian apps: the Tempo social interface, Sites limited-function workspace/worklist for heads-down workers, and Embedded SAIL to embed Appian functionality within an existing portal for internal or external users. Sites have Action Forms for fit-for-purpose apps when a social feed UI isn’t appropriate, and Embedded SAIL has Action Forms for customer-facing apps within a third-party web portal. These latter two are critical for real-world enterprise applications: although I like the Tempo interface, many of my enterprise clients need a different sort of view for heads-down workers, which can be provided by Sites or using Embedded SAIL.
  • A number of improvements to the Tempo news feed and UI, including the Tempo Kudos view to promote collaboration and provide awareness of accomplishments, and dynamically-updating filters to better link and manage record data and underlying data sources.
  • Improvements to SAIL, including positioning it as a device-independent UI that provides a shared model experience (rather than an HTML5 gateway into an existing app, as seen in some other mobile-enablement technologies) and is natively rendered on each device. The rendering engine can be updated independently of the applications, making it easier to adapt to new OS versions and devices. Appian uses SAIL to build their own components and apps that become part of the product. From a developer functionality standpoint, SAIL has added placeholder text and tooltips on forms, auto first field focus to reduce clicks and improve efficiency, additional image sizes that are auto-scaled to the device, initially-collapsed form sections, “submit” links that can be placed on a graphic element instead of standard buttons, links in milestones and pickers, grid enhancements, and continuing speed improvements. There’s also a new barcode component, although on iOS only and requiring a Verifone device for capture.
  • Mobile offline actions use native encrypted data containers rather than HTML5 storage (some of this is iOS-only, although Android is planned for later this quarter), with the developer deciding which actions and data are available offline. Changes to the definition of a form while a user is offline will prompt the user to review and resubmit the form with the new/updated form field definitions, so application changes can continue even if there are active offline users (see the sketch after this list). This does not (yet) allow existing records to be locked for offline updates, although tasks can be locked to a user before going offline.
  • For designers, the developer portal is being migrated to SAIL and enhanced with build processes; there’s a UI designer navigation tree to allow view/select/edit within a hierarchical tree view of an action form; the expression rule designer (“for those of you who are still writing expressions”, namely power developers) auto-suggests rule inputs and there is some level of expression rule testing; a process report designer can be used to create performance reports; impact analysis reports show where rules are invoked and other object relationships; bulk security updates can be made across objects.
  • For administrators, a big new thing is LDAP/SAML authentication with multiple LDAP servers and complex configurations.
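
On the offline form-versioning point in the list above, here is my own illustration of the version check that this behavior implies; it is not Appian's implementation, just the shape of the idea:

```python
# My own illustration, not Appian's implementation: an offline submission
# carries the form version it was filled in against, and the server asks the
# user to review and resubmit if the form definition has since changed.

CURRENT_FORM_VERSION = 7  # server-side version of the form definition

def accept_offline_submission(submission):
    if submission["form_version"] != CURRENT_FORM_VERSION:
        return "review-and-resubmit"  # re-render with the new/updated fields
    return "accepted"

print(accept_offline_submission({"form_version": 5, "fields": {"priority": "high"}}))
```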

They have frequent product update webinars, free introductory courses and tips & tricks sessions online; in fact, there is a product update webinar tomorrow if you want to hear more about what I’ve listed above.

We heard from Rew Dickinson, a solutions consultant, on what makes a great app — complete with a live demo to show us how it’s done. There were a lot of best practices here that I won’t repeat (better for you to check out one of their webinars), but here are a few key pointers:

  • Design applications to be omni-channel and easily adaptable.
  • Use Records to organize and model corporate data, regardless of source, for use in an application; bidirectional links between Records and process instances allow for a full view whether you’re coming from the process or data side of things.
  • Use Sites for fit-for-purpose applications, e.g., a worklist for heads-down task execution, as an alternative to the full Tempo environment. Effectively, this is a report that can be sorted and filtered, with links that take the user to the task form; it can include work management analytics for a manager/dispatcher user to monitor and reallocate task assignments. This made me think that Appian has just reinvented their per-application portal mode with Sites, albeit with better underlying technology, but that’s a discussion for another day.
  • Use Embedded SAIL for customer-facing portal environments, e.g., create service request from a customer order page.

Michael Beckley came back to talk to us about Appian Cloud, that is, their public cloud offering. It uses Amazon AWS/EC2/S3 in a single-tenant architecture, which allows each environment to be upgraded independently — more of a managed hosting model. The web tier is shared and handled by Appian, who also manages servers, load balancing, high availability and upgrades. There can be a VPN tunnel to on-premise data, and in fact the AWS instance does not have to be available on the public internet, but can be restricted to access only through the VPN from a corporate location. This configuration provides the elasticity and availability of the Amazon cloud, but allows private data to remain on premise — something that goes a long way to resolving geographic data location issues. They’ve obviously been working on the optics of US-owned data centers by listing their privacy chops, but it would have been even more reassuring to see a mention of any Canadian standards such as PIPEDA for this purely Canadian audience. There are tiers for development, medium, large and extra-large deployments, with a redeployment required to move between tiers (so not that elastic…), although it supposedly takes only a few minutes if planned. Uptime this year is mostly five 9s, with customer credits for missed uptime SLAs. You can also self-host Appian in other environments, e.g., Azure, although the Appian Cloud SaaS offering is currently Amazon-only.

We finished up with Mike Cichy, an Appian consultant, discussing their center of excellence offerings and how customers can plug into a vast wealth of information, from checklists to migration guides to training, in order to embody best practices. There are a number of tools available, such as the Appian Health Check and Deployment Automation, in addition to these practices, with the overall goal of helping customer and partner organizations achieve a large improvement in developer speed and quality.

Altogether an informative day, and a great catch-up with some old friends.

Knowledge Work Incentives at EACBPM

June was a bit of a crazy month, with three conferences in a row (Orlando-London-DC) including two presentations at IRM’s BPM conference in London: a half-day workshop on the Future of Work, and a breakout session on Knowledge Work Incentives, which was a deep dive into one of the themes in the workshop. I put the slides for the breakout session up on the day of the presentation, but then went off for a couple of days of holidays in Brighton and completely forgot to post a link here.

Yesterday, I read a post on The Eloquent Woman called In a world of #allmalepanels, can we share pics of #eloquentwomen?, which is a riff on the Congrats, you have an all male panel Tumblr. This has been going on a long time: I wrote about the problem at Toronto’s mesh conference starting in 2007, and then just stopped attending it.

The recent TEW post got me thinking about the opportunities that I’ve had to present at conferences all over the world, and I decided to take them up on their challenge and post some pictures and videos of me presenting in the past. First, a few videos in a variety of speaking styles:

And some pictures taken and tweeted by audience members:

I speak primarily about technology and the impact that it has on business, and I’m recognized as an expert in my field, so I have to say that the common excuses for having no (paid) women speakers summarized here – no qualified women speakers; women only speak about “women stuff” [wtf?]; women are more likely to say no to speaking; women are more likely to cancel – are patently untrue in my case, and likely in the case of most women speakers.

There are some shining examples of companies that put a lot of women – internal and external – on the stage at their conferences, and we need to see more of this in the future. Otherwise, you’re just ignoring half of the IQ available as speakers, and starting to alienate the attendees.

HP Consulting’s Standards-Driven Requirements Method at BPMCM15

Tim Price from HP’s enterprise transformation consulting group presented in the last slot of day 2 of the BPM and case management summit (and what will be my last session, since I’m not staying for the workshops tomorrow) with a discussion on how to improve requirements management by applying standards. There are a lot of potential problems with requirements: inconsistency, not meeting the actual needs, not designed for change, and especially the short-term thinking of treating requirements as project rather than architecture assets. Price is pretty up-front about how you can’t take a “garden variety” business analyst and have them create BPMN diagrams without training, and that 50% of business analysts are unable to create lasting and valuable requirements.

Although I haven’t done any quantitative studies on this, I would tend to agree that the term “business analyst” covers a wide variety of skill levels, and you can’t just assume that anyone with that title can create reusable requirements models and assets. This becomes especially important when you move past written requirements — that need the written language skills that many BAs do have — to event-driven BPMN and other models; the main issue is that these models are actually code, albeit visual code, that may be beyond the technical analysis capabilities of most BAs.

Getting back to Price’s presentation, he established traceability as key to requirements: between BPMN or UML process models and UML use cases, for example; or upwards from processes to capabilities. Data needs to be modeled at the same time as processes, and processes should be modeled as soon as the high-level use case is defined. You can’t always create a one-to-one relationship between different types of elements: an atomic BPMN activity may translate to a use case (system or human), to more than one use case, or to only a portion of a use case; lanes and pools may translate to use case actors, but not necessarily; events may represent states and implied state transitions, although also not necessarily. Use prose for descriptions, but not for control flow: that’s what process models are for, with the prose just explaining the process model. Develop the use case and process models first, then write text to explain whatever is not obvious in the diagrams.
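
A minimal sketch of what machine-checkable traceability between process activities and use cases could look like, with hypothetical element names rather than HP's tooling:

```python
# Hypothetical traceability index, not HP's tooling: map each atomic BPMN
# activity to the use case(s) it traces to, then flag anything left untraced.

trace = {
    "Validate Application": ["UC-01 Submit Claim"],
    "Assess Eligibility":   ["UC-02 Assess Eligibility", "UC-03 Notify Applicant"],
    "Archive Case":         [],  # no use case yet: a traceability gap
}

for activity, use_cases in trace.items():
    if not use_cases:
        print(f"traceability gap: '{activity}' maps to no use case")
```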

He walked through a case study of a government welfare and benefits organization that went through multiple failed implementations, which were traced back to poor requirements: structural problems, consistency issues, and designs embedded in the specification. Price and his team spent 12 months getting the analysts back on track by establishing standards for creating requirements — with a few of the analysts not making the transition — that led to CMMI recognition of their new techniques. Another case study applied BPMN process models and UML use cases to a code modernization process: basically, their SDLC was the process being improved. A third case study used BPMN to document as-is and to-be processes, then built use case models with complete traceability from the to-be processes to the use cases, with UML domain class models developed in parallel.

The lessons learned from HP’s experiences:

  • Apply existing standards consistently, including BPMN, CMMN, DMN, UML
  • Use graph-structured languages for structure and logic, and prose for description
  • Use repository-based modeling tools to allow for reusability and collaboration
  • Be concise, be precise, be consistent
  • Create requirements models that are architecture assets, not just short-term project assets

Some good lessons for requirements analysis; although this was developed for complex, more waterfall-y SDLCs, some of these can definitely be adapted for more agile implementations.

The Digital Enterprise Graph with @denisgagne at BPMCM15

Yesterday, Denis Gagné demonstrated the modeling tools in the Trisotech Digital Enterprise Suite, and today he showed us the Digital Enterprise Graph, the semantic layer that underlies the modeling tools and allows for analysis of relationships between them. There are many stakeholders involved in defining and implementing a digital enterprise, including enterprise architects, business architects and process analysts; each of these roles has a different view on transformation of the enterprise and different goals for their work. He sees a need for a more emergent enterprise architecture rather than a structured top-down architecture effort: certainly, architects need to create the basic structure, but rather than trying to build out every artifact that might exist in the architecture before making use of it, a more pragmatic approach is a “just-in-time” architecture that is a bit more self-organizing.

A graph, in general, is a powerful but simple construct: it consists only of nodes and links, but can provide meaningful connections of loosely-coupled business entities that can be easily modified. Think about a social graph, such as Facebook’s: it’s just people and their connections, but it’s a rich context for analyzing the relationships between nodes (people) in the graph depending on the nature of the links (friends, likes, etc.) between them. Trisotech’s Digital Enterprise Graph links the who, what, when, where, why and how of an organization by mapping every model that is added to the Graph onto those types of nodes and links, whether the model originates with one of their own modelers (BPMN, CMMN, DMN) or an external EA modeling tool (Casewise, SAP PowerDesigner, Sparx). This provides an intelligent fabric for automated reasoning about the current relationships between parts of the organization, and also allows estimation of the impact of changes in one area on other parts of the organization. Their Insight Analyzer tool allows you to introspect the graph, providing views such as interconnectivity between nodes as part of impact analysis, or tracing responsibility for a capability up through the organizational structure. The analysis isn’t automated, but provides visualization tools for analysts and planners, based on a single integrated schema that allows for straightforward queries.
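
A minimal sketch of the idea: typed nodes and links, with a naive traversal for impact analysis. The node and link types here are my own shorthand, not Trisotech's schema.

```python
from collections import defaultdict

# Typed nodes and typed links; the who/what/how types are my own shorthand,
# not Trisotech's actual schema.
links = defaultdict(list)

def link(src, link_type, dst):
    links[src].append((link_type, dst))

link(("who", "Claims Manager"), "responsible-for", ("what", "Claim Approval"))
link(("what", "Claim Approval"), "realized-by", ("how", "Approve Claim Process"))
link(("how", "Approve Claim Process"), "uses", ("how", "Eligibility Decision"))

def reachable_from(node, seen=None):
    """Naive impact analysis: everything transitively linked from a node."""
    seen = set() if seen is None else seen
    for _, dst in links.get(node, []):
        if dst not in seen:
            seen.add(dst)
            reachable_from(dst, seen)
    return seen

for n in reachable_from(("who", "Claims Manager")):
    print(n)
```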

He gave us a demo of the Graph in action, starting with a BPMN model that uses the Sparx EA accelerator for SOA architecture artifacts, and tracing through that loose coupling to the architectural components in the EA framework, with similar linkages for roles from a Casewise business architecture framework and definitions of contracts from the Financial Industry Business Ontology (FIBO). The idea is that the Graph provides an underlying mesh of semantic linkages from elements in a model to other frameworks, ontologies and models, while still retaining business understandability at the model level. In the Insight Analyzer, we saw how to explore linkages between different types of elements, such as RACI-type relationships between roles and activities, as well as a more detailed introspection that allows drilling down on any node to see what other nodes and models it is linked to, and traversing those links.

Interesting ideas about how to bring together all of the architecture, process, case and decision models and frameworks into a single graph for analysis of your digital enterprise.

Wearable Workflow by @wareFLO at BPMCM15

Charles Webster gave a breakout session on wearable workflow, looking at some practical examples of combining wearables — smart glasses, watches and even socks — with enterprise processes, allowing people wearing these devices to have device events integrated directly into their work without having to break to consult a computer (or at least a device that self-identifies as a computer). Webster is a doctor, and has a lot of great case studies in healthcare, such as detecting when a healthcare worker hasn’t washed their hands before approaching a patient by instrumenting the soap dispenser and the worker. Interestingly, the technology for the hand hygiene project came from smart dog collars, and we’re now seeing devices such as Intel’s Curie that are making this much more accessible by combining sensors and connectivity as we commercialize the internet of things (IoT).

He was an early adopter of Google Glass, and talked to us about the experience of having a wearable integrated into his lifestyle, such as for voice-controlled email and photography, plus some of the ideas that he has for healthcare workflows where electronic health records (EHR) and other device information can be integrated with work patterns. Google Glass, however, was not a commercial success: it was too bulky and geeky-looking, and required frequent recharging under heavy use. It needs more miniaturization to be a possibility for most people, but that’s a matter of time, and probably a short amount of time, especially if the electronics are integrated directly into eyeglass frames, which likely have a lot of unused volume that could be filled with electronic components.

Webster talked about a university curriculum for healthcare technology and IoT that he designed, which would include the following courses:

  • Wearable human factors and workflow ergonomics
  • Data and process mining of wearable data, since wearables generate so much more interesting data that needs to be analyzed and correlated
  • Designing and prototyping wearable products

He is working on a prototype for a 3D-printed, Arduino-based wearable interactive robot, MrRIMP, intended to be used by pediatric healthcare professionals to amuse and distract their young patients during medical examinations and procedures. He showed us a demo video of him and MrRIMP interacting, and the different versions that he’s created. Great ideas about IoT, wearables and healthcare.