CamundaCon Live 2020 – Day 2: Microservices Orchestration, new stuff from Camunda, and legacy BPM migration

Day 2 of CamundaCon Live kicked off with Camunda co-founder Bernd Rücker talking about microservices orchestration and integration using workflow automation. This is a common theme for him, and I’ve seen earlier versions of this presentation, but he always brings something fresh to the discussion. He discussed reactive applications that are responsive, resilient, elastic and message-driven, then covered different styles of event-driven architecture.

He gave a (live) demo of autonomous services communicating using Kafka, and showed the issue with peer-to-peer choreography: there is no end-to-end view to ensure that all services that should have run actually did run. He created an event-based process in Camunda Optimize that modeled the expected end-to-end process, and by connecting that to the Kafka messages, he had a visualization of the workflow that showed what happens when one service isn’t running: effectively, the virtual workflow is stuck at the previous service, since it never receives a message indicating that the (stopped) service has picked up its messages.

One solution is to extract the end-to-end responsibility into its own service: really, this implies some level of orchestration via commands rather than purely reacting to events, even if it’s not a completely tightly-coupled workflow. If you use an engine like Camunda to do that top-level orchestration, then you can move the monitoring of the process into that engine (Cockpit rather than Optimize), although it’s likely that anyone using an event-based architecture is going to be looking at an event monitoring system like Optimize as well. You can see his slides below, and the video will be available on the CamundaCon Live hub, probably by the time that I publish this post.
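
To make the commands-versus-events distinction concrete, here’s a rough Java sketch (my own illustration, not from Bernd’s demo) of an orchestrating service that issues an explicit command to a Kafka topic and then resumes the waiting process instance when the reply arrives. The topic name, message name and JSON payload are all made up for the example.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.camunda.bpm.engine.RuntimeService;

import java.util.Properties;

public class PaymentCommandSender {

    private final KafkaProducer<String, String> producer;
    private final RuntimeService runtimeService;

    public PaymentCommandSender(RuntimeService runtimeService) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
        this.runtimeService = runtimeService;
    }

    // Called from a service task in the orchestrating process: issue an explicit command
    public void sendRetrievePaymentCommand(String orderId) {
        producer.send(new ProducerRecord<>("payment-commands", orderId,
                "{\"command\":\"RetrievePayment\",\"orderId\":\"" + orderId + "\"}"));
    }

    // Called by a Kafka consumer when the payment service replies: resume the waiting instance
    public void onPaymentCompleted(String orderId) {
        runtimeService.createMessageCorrelation("PaymentCompleted") // message event in the BPMN model
                .processInstanceBusinessKey(orderId)
                .correlate();
    }
}
```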

The morning session continued with CTO Daniel Meyer on some of the new product capabilities. Camunda’s focus has shifted from being just a BPM engine for Java developers to a much broader orchestration platform that can integrate any technology stack and any endpoints.

He introduced a new distribution called Camunda Run (or Lil’ Camboot, as Niall Deehan calls it), which provides a lightweight package (50MB) that includes the BPMN and DMN workflow and decision engines, Cockpit, Tasklist and the REST API. It can even be run in headless mode, which disables the web apps, if you just want the engines. It’s Open API enabled, CORS enabled, and SSL enabled out of the box. He gave a quick demo of downloading, starting and running Camunda Run: it’s pretty familiar if you’ve spent any time with Camunda, and it starts fast; there’s a quick REST example after the list below. From the blog post announcement, the target audience for Run is if at least one of the following is true:

  • You need a standalone process engine accessible via REST API
  • You don’t have extensive Java knowledge (or none at all) but still want to use Camunda BPM
  • You don’t want to configure an application server yourself
  • You want to configure everything in one place
  • You just want to Run Camunda BPM
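
Since Run is essentially the engine plus its REST API, trying it out mostly means hitting those endpoints. Here’s a minimal sketch, assuming Run’s default port of 8080 and an already-deployed process with the (made-up) key “invoice”, that starts a process instance using Java 11’s built-in HTTP client:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StartInstanceViaRest {
    public static void main(String[] args) throws Exception {
        // Assumes Camunda Run on localhost:8080 with a deployed process whose key is "invoice"
        String body = "{\"variables\": {\"amount\": {\"value\": 1000, \"type\": \"Integer\"}}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/engine-rest/process-definition/key/invoice/start"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + ": " + response.body()); // returns the new instance details
    }
}
```

The same call works against any Camunda engine that exposes the REST API; Run just makes that the default way in.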

Meyer also talked about Camunda Optimize, specifically the event-based process monitoring. We saw a bit of that yesterday in Felix Müller’s presentation, and I had a more complete view of the event-based features of Optimize a few weeks ago on the 3.0 release webinar. Basically, you add the event source to Optimize (such as Kafka), and Optimize exposes messages and allows them to be attached to the entry/exit points of elements on a BPMN diagram that represents the event-driven process. They are offering a 30-day free trial for Optimize now if you want to try it out.
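
For systems that don’t publish to one of the supported sources, Optimize 3.0 also has an event ingestion REST API that accepts CloudEvents-style JSON. The sketch below is my rough recollection of it: the endpoint path, port, content type, token handling and the traceid/group extension fields are all assumptions to be checked against the Optimize documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IngestEventIntoOptimize {
    public static void main(String[] args) throws Exception {
        // Assumption: endpoint path, auth header and extension field names are from memory,
        // so verify them against the Optimize event ingestion docs before relying on this.
        String events = "[{"
                + "\"specversion\": \"1.0\","
                + "\"id\": \"evt-0001\","
                + "\"source\": \"order-service\","
                + "\"type\": \"OrderReceived\","
                + "\"time\": \"2020-04-28T14:00:00.000Z\","
                + "\"traceid\": \"order-4711\","   // correlates events belonging to one process instance
                + "\"group\": \"onboarding\","
                + "\"data\": {\"channel\": \"web\"}"
                + "}]";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8090/api/ingestion/event/batch"))
                .header("Content-Type", "application/cloudevents-batch+json")
                .header("Authorization", "Bearer <ingestion-access-token>")
                .POST(HttpRequest.BodyPublishers.ofString(events))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```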

Meyer’s third topic was process automation as a service via Camunda Cloud, which is powered by Zeebe (rather than Camunda BPM). Having cloud-native Zeebe behind the scenes means that it’s highly scalable and fault-tolerant, and uses pub-sub orchestration to let you include endpoints from anywhere. He demonstrated how to spin up a new Zeebe cluster, deploy a BPMN model created in the Zeebe Modeler, and start instances of the process using the zbctl command-line tool. These instances were then visible in Camunda Operate (the Zeebe process monitoring tool), and he ran JavaScript workers and published messages to complete tasks in the process and show the instance progressing through the process model. There’s a free trial for Camunda Cloud, and an early access version for $699/month that includes access to larger clusters and technical support.
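
zbctl is the quickest way to poke at a cluster, but the same operations are available from the Zeebe client libraries. Here’s a rough Java equivalent of what Meyer demonstrated, with a placeholder process id, job type and connection (a local plaintext gateway; Camunda Cloud would need the cluster’s OAuth credentials instead), and note that the client package names have moved around between Zeebe versions:

```java
import io.zeebe.client.ZeebeClient;

import java.util.Map;

public class OrderProcessDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection: a local gateway in plaintext mode
        try (ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("127.0.0.1:26500")
                .usePlaintext()
                .build()) {

            // Start an instance of a previously deployed BPMN model (what zbctl create instance does)
            client.newCreateInstanceCommand()
                    .bpmnProcessId("order-process")   // placeholder process id
                    .latestVersion()
                    .variables(Map.of("orderId", "4711"))
                    .send()
                    .join();

            // A worker that completes jobs of a given type, like the JavaScript workers in the demo
            client.newWorker()
                    .jobType("payment-service")       // placeholder job type
                    .handler((jobClient, job) ->
                            jobClient.newCompleteCommand(job.getKey()).send().join())
                    .open();

            Thread.sleep(10_000); // keep the worker polling for a few seconds in this sketch
        }
    }
}
```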

He fielded some questions that came up on the Slack workspace during his talk. Moving from an existing Camunda BPM implementation to Camunda Run is apparently as easy as just redirecting to the new application server. You can’t use Java delegates, however, so you’ll have to switch those out for external tasks. There was a question about Camunda BPM versus Zeebe, which I think is a question that a lot of Camunda customers have: although most are likely familiar with the technical and functional differences, there is an open question of whether Camunda will continue to support two workflow engines in the future, and whether they are going to shift focus more towards Zeebe use cases.
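
On that delegate-to-external-task point: for anyone wondering what the switch looks like in practice, here’s a minimal sketch using Camunda’s Java external task client, with a placeholder base URL and topic name. The logic that used to live in a JavaDelegate moves into the handler, which talks to the engine purely over REST, so it works the same against Camunda Run as against a full distribution.

```java
import org.camunda.bpm.client.ExternalTaskClient;

import java.util.Collections;

public class CreditCheckWorker {
    public static void main(String[] args) {
        // Placeholder base URL: Camunda Run's REST API on the default port
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(10_000)   // long polling
                .build();

        // Subscribe to the topic that the external service task in the BPMN model references
        client.subscribe("creditCheck")          // placeholder topic name
                .lockDuration(20_000)
                .handler((externalTask, externalTaskService) -> {
                    // ... the logic that used to live in a JavaDelegate goes here ...
                    int score = 42; // placeholder result
                    externalTaskService.complete(externalTask,
                            Collections.singletonMap("creditScore", score));
                })
                .open();
    }
}
```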

The morning finished by breaking out into two tracks; I stayed with the customer presentations rather than the technical breakout to hear some of the case studies. The one that I was most interested in was Fareed Saeed, head of Product and Tools for Advanced Process Solutions at Fidelity Investments, talking about migrating their monolithic legacy BPM to Camunda. I was interested in part because I did some early technical architecture consulting with them on their digital process automation platform over a year ago, although I’m not involved at this time. For those of you who know me mostly through this blog and as an independent industry analyst, you may not be aware that the other half of my business is consulting to large enterprises, mostly in financial services and insurance, on technical architecture and strategy, or anything else to help make their process-centric implementation projects a success.

James Watson of Doculabs, who advised Fidelity on migration strategies, joined him for the discussion. Saeed talked about their current home-built workflow system, which runs thousands of different processes for most of their back-office operations, and the need to move away from a monolithic architecture and fragile, non-agile systems to a more flexible platform. This talk was not about the architecture or platform, but about migration planning and execution: a key subject for any large enterprise moving off a legacy platform, but one that is often not fully considered during a new digital automation platform implementation.

There are a few different strategies for migrating process-based applications, and it’s not the same story for each process. Watson shared his thoughts on this (see the slide at right), but this is my take on it:

  • High-volume processes, that usually represent a smaller number of process models but most of the transaction volume, are usually rewritten from scratch while incorporating some degree of re-engineering and process improvement along the way. These are the core business processes that need to be done right, and will most benefit from the more agile and scalable new platform.
  • Lower-volume processes can be reviewed to see if they’re still required, possibly combined with similar processes, and then given a straightforward “lift and shift” rewrite that just duplicates the functionality as is. In short, these aren’t worth the time to re-engineer unless there are obvious wins, since the volume is relatively low. These are also candidates for low-code business-led development if that’s available on the automation platform, rather than the professional development teams required for the high-volume transactional processes.
  • Very low volume processes can be retired, especially if their functionality can be rolled into processes in one of the first two categories.

Although they are looking at a “factory model” for some level of automation around the migration, Saeed believes that this is an opportunity to re-engineer the processes rather than just rewriting the same (broken) process on a new platform. They want to have smaller, distributed groups for developing and delivering new applications, which means that they need to have the right governance and standards in place to support a distributed model. He sees the need for early pilots and successes to allow everyone to see how this can work, and learn how to make it successful. A strong diverse team of business leaders is also a plus, since there will be some degree of pain in the business units as the migration happens.

That’s it for the morning of Day 2; they must have read my comments yesterday and actually made sure that we finished on time so that we get our 15-minute lunch break. 🙂 I’ll be back for the afternoon to finish off CamundaCon Live 2020.

CamundaCon Live 2020 – Day 1: Optimize, RPA, and how 24 Hour Fitness executes 5B process nodes per month

We continued the first day of CamundaCon Live (virtual) 2020 with Felix Mueller, senior product manager, presenting on how to use Camunda Optimize for driving continuous improvement in processes. I attended the Optimize 3.0 release webinar a couple of weeks ago, and saw some of the new things that they’re doing with monitoring and optimization of event-based processes — this allows processes that are not part of Camunda to be included in Optimize. The CamundaCon session started with a broader view of Optimize functionality, showing how it collects information then can be used for root cause analysis of process bottlenecks as well as displaying realtime metrics. They have some good case studies for Optimize, including insurance provider Visana Group.

He then moved on to show the event-based process monitoring, and how Optimize can ingest and aggregate information from any external system with a connector, such as the RabbitMQ connector that they have already built. His demo showed a customer onboarding process that could be triggered either by an online form that directly instantiates a Camunda process, or by a mailed-in form that was scanned into another system, which emitted an event to trigger the process.

It was very obvious that this was a live presentation, because Mueller was scrambling against the clock since the previous session went a bit long, having to speed through his demo and take a couple of shortcuts. Although you might think of this as a logistical “bug”, I maintain that it’s an interactivity “feature”, and made the experience much closer to an in-person conference than a set of pre-recorded presentations that were just queued up in sequence.

This was followed by a presentation by Kris Barczynski of Nokia Bell Labs about a really interesting use case: they are using Camunda to guide visiting groups on tours through the Nokia Campus customer experience spaces, and to interact with devices including the guests’ wearables, drones and robots. Visitors are welcomed and guided by a robot, and they can interact with voice-controlled drones; Camunda is orchestrating the processes behind the scenes. He talked about some of their design decisions, such as using Camunda JavaScript workers to call external services, and building a custom Android app. Really interesting combination of physical and virtual processes.

Next was a panel discussion on the future of RPA, with Vittorio Dal Bianco of Nokia, Marco Einacker of Deutsche Telekom, Paul Jones of NatWest Group, and Camunda CEO Jakob Freund, moderated by Jason Bloomberg of Intellyx Research. The three customer presenters are involved with the RPA initiatives at their own organizations, and are also looking at how to integrate those with their Camunda processes. Panels are always a challenge to live-blog, but here are some of the points discussed (attributed where I remembered):

  • The customer panelists agreed that RPA has allowed people to move to more interesting/valuable work, rather than doing routine tasks such as copying and pasting between application screens. Task automation through RPA reduces resources/costs, decreases cycle time, and also improves quality/compliance.
  • RPA is a “short-term bandaid” driven from outside the IT organization in order to get some immediate efficiency benefits. It’s maintenance-intensive, since any changes to the applications being integrated mean that the bots need to be reprogrammed. Deutsche Telekom is moving from RPA front-end integration/automation to the more strategic BPMS/API automation, so sees RPA as an important step on the strategic journey but not the endpoint. NatWest recognizes RPA as a key automation tool, but sees it as a short-term tactical one; they classify RPA as part of their technical debt, and it is not part of their long-term architecture. Nokia thinks that RPA will remain in niche pockets for applications that will never have a proper API, such as Excel-based applications.
  • Nokia uses Blue Prism for RPA. NatWest uses UIPath RPA, and has a group that is building the integration for having Camunda execute a UIPath task — although I would have thought this would be a relatively simple service call or external task. Deutsche Telekom is using seven different RPA platforms, three of which are commercial, including Another Monday and Kryon; they are just starting to look at the integration between Camunda and RPA, with a plan to have Camunda orchestrate the steps and a single “microbot” performing an atomic task at each step. As their core systems offer APIs for those tasks, the RPA bots will be replaced with direct API calls. This last approach is definitely aligned with Camunda’s vision of how their BPM can work with RPA bots as well as any other “task performers”.
  • More discussion on the role of RPA in digital transformation: recommendations to go ahead and use it, but consider it as a stop-gap measure to get a quick win before you can get the APIs built out in the systems that are being integrated. It’s considered technical debt because it will be replaced in the future as the APIs of the core systems become available. It’s a painkiller, not a cure.
  • Although some of the companies are using business people to build their own bots, that has had mixed success, and other companies do not classify RPA as citizen developer technology. This is pretty much the same as we’re seeing with other low-code environments, where they are often sold as application development platforms for non-professional developers, but the reality is that many applications require a professional developer because of the technical complexity of the systems being integrated.
  • Cost and effort of RPA bot maintenance can be significant, in some cases more than back-end integration. Bot fixes may be fairly quick, but are required much more frequently such as when a password changes: bots require babysitting.
  • The customers had a few Camunda product requests, such as better connectors to more of the RPA tools. In general, however, they don’t want Camunda to build/acquire their own RPA offering, but just see it as another example of where you can pick a best-of-breed RPA tool and use it for task automation at individual steps within a Camunda process.
  • Best practices/lessons learned:
    • Separate the process orchestration layer from the bot execution layer from the beginning, with the process orchestration being done by Camunda and the bot task execution being done by the RPA tool.
    • Use process mining first to objectively identify what should be automated; of course, this would also require that you mine the user interaction processes that would be automated with bots, not just the system logs.
    • Have a centralized control center for bot control.
    • Develop bot templates that can be more quickly modified and deployed.

Looking at how the panel worked, there are definitely aspects of online panels that work better than in-person panels, specifically how they respond to audience questions. Some people don’t want to speak up in front of an audience, while others get up and bloviate without actually asking a question. With online-only questions, the moderator can browse through and aggregate them, then select the ones that are best suited to the panel. With video on each of the presenters (except for one who lost his connection and had to dial in), it was still possible to see reactions and have a sense of the live nature of the panel.

The last session of the day was Jimmy Floyd of 24 Hour Fitness on their massive Camunda implementation of five billion (with a “B”) process node executions per month. You can see his presentation from CamundaCon Berlin 2018 as a point of comparison with today’s numbers. Pretty much everything that happens at 24 Hour Fitness is controlled by a Camunda process, from their internal processes to customer-facing activities such as a member swiping their card to gain access to a club. It hasn’t been without hiccups along the way: they had to turn off process history logging to attain this volume of data, and can’t easily drill down into processes that call a lot of other processes, but the use of BPMN and DMN has greatly improved the interactions between product owners and developers, sometimes allowing business people to make a rule change without involving developers.
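
For context on what “turning off history” means in Camunda terms: the engine’s history level is configurable, and at this kind of volume it’s the history writes that hurt. Below is a rough sketch of an embedded-engine configuration with history disabled (in a Spring Boot setup, the camunda.bpm.history-level property does the same thing); the in-memory database is just to keep the example self-contained.

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

public class NoHistoryEngine {
    public static ProcessEngine build() {
        // History level NONE skips writing audit data entirely, trading Cockpit/Optimize
        // visibility for raw runtime throughput
        return ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration()
                .setHistory(ProcessEngineConfiguration.HISTORY_NONE)
                .setJobExecutorActivate(true)
                .buildProcessEngine();
    }
}
```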

He had a lot of technical information on how they built this and their overall architecture. Their use is definitely custom code, but using Camunda with BPMN and DMN gave them a huge step-up versus just writing code. Even logic inside of microservices is implemented with Camunda, not written in code. Their entire architecture is based on Camunda, so it’s not a matter of deciding whether or not to use it for a new application or to integrate in a new external solution. They are taking a look at Zeebe to decide if it’s the right choice for them moving forward, but it’s early days on that: it would be a significant migration for them, they would likely lose functionality (for BPMN elements not yet implemented in Zeebe, among other things), and Zeebe has only just achieved production readiness.

Camunda is changing how they handle history data relative to the transactional data, in part likely due to input from high-throughput customers, and this may allow 24 Hour Fitness to turn history logging back on. They’re starting to work with Optimize via Kafka to gain insights into their processes.

Day 1 finished with a quick wrapup from Jakob Freund; in spite of the fact that it’s probably been a really long day for him, he seemed pretty happy about how well things went today. Tomorrow will cover more on microservices orchestration, and have customer case studies from Cox Automotive, Capital One and Goldman Sachs.

As you probably gather from my posts today, I’m finding the CamundaCon online format to be very engaging. This is due to most of the presentations being performed live (not pre-recorded as is seen with most of the online conferences these days) and the use of Slack as a persistent chat platform, actively monitored by all Camunda participants from the CEO on down. They do need a little bit more slack in the schedule, however: from 10am to 3:45pm there was only one 15-minute break scheduled mid-way, and it didn’t happen because the morning sessions ran overtime. If you’re attending tomorrow, be prepared to carry your computer to the kitchen and bathroom with you if you don’t want to miss a minute of the presentations.

As I finish off my day at the virtual CamundaCon, I notice that the videos of presentations from earlier today are already available — including the panel session that only happened an hour ago. Go to the CamundaCon hub, then change the selection from “Upcoming” to “On Demand” above the Type/Day/Track selectors.

CamundaCon Live 2020 – Day 1: Jakob Freund keynote and customer presentations

Every conference organizer has had to deal with either cancelling their event or moving it to some type of online version as most of us work from home during the COVID-19 pandemic. Some of these have been pretty lacklustre, using only pre-recorded sessions and no live chat/Q&A, but I expected Camunda to do this in a more “live” manner that, while not completely replacing an in-person event, has a similar feel to it. They did not disappoint: although a few of the CamundaCon presentations were pre-recorded, most were done live, and speakers were available for live Q&A. They also hosted a Slack workspace for live chat, which is much better than the Q&A/chat features on the webinar broadcast platform: it’s fundamentally more feature-rich, and also allows the conversations to continue after a particular presentation completes.

Very capably hosted by Director of Developer Relations Mary Thengvall, presentations were all done from the speaker’s individual locations, starting with CEO Jakob Freund’s keynote. He covered a bit of Camunda’s history and direction, and discussed their main focus of providing end-to-end process orchestration using the example of Camunda together with RPA, then gradually migrating the RPA bots (widely used as a stop-gap process automation measure) to more robust API integrations. He also shared some news on new and timely product offerings, including a starter package for work-from-home human workflow, and an early adopter package for Camunda Cloud. I’ve shared a few of his slides below, but you can also go and see the recording: they are getting the videos and slides up within about an hour after each presentation, directly on the conference hub.

Next up was Simon Letort, Chief Digital Officer at Société Générale, on how they implemented their corporate investment banking’s core process automation platform using Camunda. They use Camunda as the core of their managed workflow platform, with 500+ processes deployed throughout their operations worldwide. They also use bpmn.io and form.io as their built-in process and forms modelers. Letort responded to an audience question about why not use another large BPMS product that was already in use; they wanted a best-of-breed solution rather than a proprietary walled garden, and also wanted to leverage open source tools as part of that so that they weren’t building everything from scratch. They transitioned from some of the proprietary tools by first replacing the underlying engine with Camunda, then trading out other components such as form.io as a more flexible UI was required.

Interestingly, about half of their workflows are created by 30 expert modelers within centers of expertise, and half by 1200 “amateur” modelers, or citizen developers. This really points out the potential for companies to mix together the experts (focused on core processes) and amateurs (focused on tactical or administrative processes) using the same engine, although they likely use quite different tools for the full development cycle. The SG Workflow “product” offers three main features to support these different modeler/developer types: the (Camunda) process engine, a workflow aggregator for grouping tasks and cases from multiple systems, and UI web components and apps. Their platform also auto-generates process documentation. The core product is created and maintained by a team of about 10, distributed between France and Canada.

He shared some good information on their architecture and roadmap: I did a few presentations last year (one of them at CamundaCon in Berlin) and wrote a paper on building your own process-centric platform using a BPMS and an assembly of other tools, inspired in part by companies like Société Générale that have done this to create a much more flexible application development environment for their large enterprises.

We moved from the main stage to the track sessions, and I sat in on a presentation by Jeremy Warren of Keller Williams Realty (a Camunda customer that works with integrator BP3) about their “SmartPlans” dynamic processes, which allow real estate agents to create their own plans and tasks. These aren’t actually dynamic at runtime: they use a “flower” process model that loops back so that any task can lead to any other task.

This is a great example of automating some of the processes that real estate agents use to drive new business, such as contacting prospects on a regular schedule, which would normally be done (or not done) manually. Agents can decide what tasks to do in what order; the branching logic in the model then executes the plan as specified. He also shared some of their experiences in rolling out and debugging applications on this scale.
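
Purely to illustrate the “flower” shape, here’s a toy sketch (entirely my own, with made-up task names, not Keller Williams’ actual model) using Camunda’s fluent model builder: every task routes back to a central gateway, which dispatches to whatever task the plan says comes next, or ends the plan.

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

public class FlowerModel {
    public static BpmnModelInstance build() {
        // Made-up process id, task ids and variable names, for illustration only
        return Bpmn.createExecutableProcess("smartPlanDemo")
                .startEvent()
                .exclusiveGateway("hub")                              // the centre of the flower
                    .condition("call prospect", "${nextTask == 'call'}")
                    .userTask("callProspect")
                    .connectTo("hub")                                 // every petal loops back to the hub
                .moveToNode("hub")
                    .condition("send email", "${nextTask == 'email'}")
                    .userTask("sendEmail")
                    .connectTo("hub")
                .moveToNode("hub")
                    .condition("plan complete", "${nextTask == 'done'}")
                    .endEvent("planComplete")
                .done();
    }
}
```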

The second track session featured Derek Hunter and Uzma Khan of Ontario Teachers’ Pension Plan (who have been an occasional client of mine over a number of years, including introducing them to startup Camunda back in 2013). They have a number of case management-style processes to handle requests from members (teachers) regarding their pensions. They have 144 BPMN templates, and execute 70,000 process instances per year with up to 20,000 active instances at any time, since these are generally long-running workflows. Some of the extremely long-running processes are actually terminated after a specific stage, then a scheduler restarts a corresponding instance when new work needs to be done. Other processes may be suspended in the workflow engine, making them invisible to a user’s worklist until work needs to be done.

Camunda is really just an engine buried within the OTPP workflow system, completely hidden from calling applications by a workflow intermediary. This was essential during their migration off other platforms: at one time, they had three different workflow engines running simultaneously, and could migrate everything to Camunda without having to retool the higher-level applications. In particular, end users are never aware of the specific workflow engine, but work within applications that integrate business data from multiple systems.

They take advantage of in-flight instance migration due to the long-running nature of their processes, which is something that Camunda offers that is missing from many other BPMS products. Because of the large number of process templates and the complex architecture with many other systems and components, they have implemented automated testing practices including modeling user interactions through their workflow interface service (that sits above the workflow intermediary and the Camunda engine), and handling work-arounds for emulating external task processing in their core services.
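
On the in-flight migration point: Camunda’s Java API for this builds a migration plan between two process definition versions and then applies it to running instances, roughly as sketched below (the definition ids and the instance query are placeholders).

```java
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.migration.MigrationPlan;
import org.camunda.bpm.engine.runtime.ProcessInstanceQuery;

public class InstanceMigration {
    public static void migrateAll(RuntimeService runtimeService,
                                  String sourceDefinitionId,
                                  String targetDefinitionId) {
        // Map activities with the same id between the two versions; explicit mapActivities()
        // calls would handle renamed or restructured activities
        MigrationPlan plan = runtimeService
                .createMigrationPlan(sourceDefinitionId, targetDefinitionId)
                .mapEqualActivities()
                .build();

        // Apply the plan to every running instance of the old version (placeholder query)
        ProcessInstanceQuery query = runtimeService.createProcessInstanceQuery()
                .processDefinitionId(sourceDefinitionId);

        runtimeService.newMigration(plan)
                .processInstanceQuery(query)
                .execute();   // executeAsync() would run it as a batch instead
    }
}
```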

They’ve developed a lot of best practices for automated testing, and built tools such as a BPMN navigation tool to use during unit testing. Another of their colleagues, Zain Esmail, will be presenting more about this on the technical track tomorrow. They have also developed tools for administrative monitoring and reporting on external tasks, to allow these to be integrated with the internal Camunda workflow metrics in Prometheus.
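
As a flavour of what this looks like at the unit-test level, here’s a generic sketch using the camunda-bpm-assert library, with a made-up process key and activity id rather than OTPP’s actual models or tooling; it assumes a camunda.cfg.xml test configuration on the classpath.

```java
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.test.Deployment;
import org.camunda.bpm.engine.test.ProcessEngineRule;
import org.junit.Rule;
import org.junit.Test;

import static org.camunda.bpm.engine.test.assertions.bpmn.BpmnAwareTests.*;

public class MemberRequestProcessTest {

    @Rule
    public ProcessEngineRule processEngineRule = new ProcessEngineRule();

    @Test
    @Deployment(resources = "member-request.bpmn") // made-up model for illustration
    public void shouldReachReviewStepAndComplete() {
        ProcessInstance instance = runtimeService()
                .startProcessInstanceByKey("memberRequest");

        // The instance should be waiting at the (made-up) review user task
        assertThat(instance).isWaitingAt("Task_ReviewRequest");

        // Complete the task as a user would, then expect the process to finish
        complete(task(instance));
        assertThat(instance).isEnded();
    }
}
```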

We’re taking a short break between the morning and afternoon sessions, so I’ll close this out now and be back in another post as things progress, either this afternoon or tomorrow.

My post on @Trisotech blog: The Changing Nature of Work – 2020 Version

I’ve been interested for a long time in how the work that people do changes when we introduce new types of technology. In 2011, I gave a keynote at the academic BPM conference in Clermont-Ferrand called “The Changing Nature of Work”, and I’ve written and presented on the topic many times since then.

The current pandemic crisis has me thinking about how work is changing, not due to the disruptive forces of technology, but due to the societal disruption. Technology is enabling that change, and I have some ideas on how that’s going to work and what that will mean going forward even when things are “back to [new] normal”. I believe there’s a big part for process to play in this, including process mining and automation, and you can find my thoughts about this in a guest post over on the Trisotech blog.

Free BPM (and other) textbooks and journals from SpringerLink

Springer, a publisher of academic journals and books, is offering many textbooks and papers for free right now; normally these come at a hefty price. Although they state that they are offering these “to support learning and teaching at higher education institutions worldwide”, there are no restrictions for anyone to download them. The first of these that I found of particular interest is Fundamentals of Business Process Management (2018) by Marlon Dumas, Marcello La Rosa, Jan Mendling and Hajo Reijers. This is a classic textbook used for teaching BPM at the university level, although useful to practitioners looking to dig a bit deeper into subjects.

The second one is Business Process Management Cases (2018) by Jan vom Brocke and Jan Mendling. This is a collection of real-world case studies highlighting best practices and lessons learned, hence aimed at a professional practitioner audience as well as students. Each case study chapter is written by a different author, and they are grouped into sections focused on Strategy and Governance, Methods, Information Technology, and People and Culture.

If you check out the SpringerLink search page, you’ll find a wide variety of books and papers available to download as PDF and/or EPUBs. To see the ones that are free, uncheck the “Include Preview-Only content” box near the top left. You can filter to see only certain types of content (e.g., books), as well as by discipline and language.

Thanks to Marlon Dumas and Jan Mendling for giving the shout out on Twitter about this.

My upcoming webinar on Financial Services and the need for flexible processes to weather a crisis

It’s a busy time for webinars, and I’ll be doing two more in the next two months sponsored by Signavio. On April 15, I’ll be talking about financial services, and how flexible processes are even more important now because of the current pandemic situation:

The current pandemic crisis and resultant market downturn has customers scrambling for safe investments, and financial services companies being forced to support remote working, all while legacy platforms creak precariously under the strain. Now, more than ever, it is critical for companies to understand their end-to-end processes and apply intelligent automation just to survive. Market leaders with flexible processes will be able to adjust quickly to the rapidly-changing environment, and even offer new business models and products suited to the uncertain times ahead.

I just had someone in financial services reach out to me and say “new automation strategy initiatives are the furthest thing from my leadership’s mind right now”, but I agree with other analysts that this is the perfect time to be pushing innovation in your organization. Regardless of your industry, you’re in the middle of a massive disruption; you can either respond to that disruption and innovate to stay ahead of your competition, or you’re going to be one of the many economic casualties in the months ahead.

Interested in hearing more about this, and participating in the conversation? Register for the webinar at the link above, and feel free to send me your ideas and experiences in advance to discuss during the webinar.

DIY face mask for techies

I’m calling this a DIY Mask for Techies because the basic materials are something that every techie has in their drawer: a t-shirt, a couple of conference lanyards, and a paperclip. Of course, you don’t have to be a techie to make one. 🙂

I realize that this is way, way off topic for this blog, and for me in general, but unusual times lead to unusual solutions. This is a long, DIY instructional post, and if you prefer to watch it instead of read it, I’ve recorded the same information as a video. Click on any of the photos in this post to see it at full resolution.

There’s a variety of recommendations on whether or not to wear a mask if you are not exhibiting any symptoms of illness. In some countries that have had success in reducing infection rates, they have a general policy of wearing a mask in public regardless of symptoms, since carriers can be asymptomatic.

I’m choosing to wear a mask when I leave home, or when I greet people who come to the door. Since it’s very difficult to buy medical-grade masks now, I’ve opted to make my own. I’ve also made a few for friends, especially those who have to walk their dogs, or work in an essential service that requires them to leave home.

I’m going to take you through the method that I used, with photos of each step, so that you can make your own. Although I use a sewing machine here, you can sew these by hand, or you could even use fabric glue or staples if you don’t have a needle and thread, or are in a hurry.

Design considerations

I did a lot of research before I started making masks, but my final design was based on these two references.

First, a Taiwanese doctor posted on Facebook about how to make a three-layer fabric mask, with two layers of fabric and an inside pocket where you can insert a filter layer. This is the basic premise of my design.

Second, there was a study published that compared different readily-available household fabrics against medical-grade masks to see what worked best. Believe it or not, a double layer of t-shirt material works pretty well.

Materials

I went through several iterations to try and make the materials something that a lot of people would already have, since it’s hard to get out shopping these days. Based on the suggested materials, I started with a large t-shirt from my husband’s collection of the many that I have brought home to him from conferences. To all of you vendors who provided me with t-shirts in the past, we thank you!

Next, I needed something to make ties, since these work better than elastic, and I didn’t have a lot of elastic on hand. Remember all of those conferences I went to? Yup, I still had a lot of the conference lanyards hanging in a closet. I provide a hack at the end of these instructions to use the bottom hem of the t-shirt if you don’t have conference lanyards, or you can use other types of ties such as shoelaces.

The paperclip was added in the third-to-last design iteration after I went out for a walk and found that my glasses steamed up due to a gap between the mask and the sides of my nose. It’s sewn into the top edge of the mask to create a bendable nose clip that can be adjusted for each wearer.

Caveats

Just a few caveats, since these are NOT medical-grade masks and I make no specific claims about their effectiveness:

  • They do not form a tight seal with your face, although they hug pretty closely.
  • They do not use medical-grade materials, and are not going to be as effective at filtering out the bad stuff.
  • In general, these masks may not provide complete protection. If you have medical-grade or N95 masks at home, you could use those and they would be better than these, but I recommend that you donate the medical-grade and N95 masks to your local healthcare organization so that front-line doctors and nurses are protected.

All in all, not perfect, but I believe that wearing a DIY fabric mask is better than wearing no mask at all.

Getting started and installing the nose clip

Let’s get started with the basic measuring and installing the nose clip.

Here’s the fabric pattern: it’s a 20cm by 37cm rectangle cut from a t-shirt. Depending on your t-shirt size, you may get four or five of these out of a single shirt.

If you are using a double-knit material like a t-shirt, then you don’t need to hem the edges because it doesn’t ravel at the edges very much. If you are using a different fabric that will ravel, then cut slightly larger and hem the edges. I like to optimize the process so opted for the t-shirt with no hemming.

Next is our basic standard-sized paperclip. I don’t have a lot to say about this, except that the first paperclip version used a larger paperclip and the wire was too stiff to easily bend while adjusting.

Next thing is to straighten the paperclip. Mine ended up about 10cm long, but plus or minus a centimetre isn’t going to matter.

Putting the paperclip aside for a moment, here’s how to fold the fabric to prepare for sewing. The 20cm length is the width of the mask from side to side on your face, and the 37cm length allows you to fold it so that the two ends overlap by about 1cm. In this case, I’ve overlapped by about 1.5cm, which means that the total height of the mask is 17cm, or 37/2 – 1.5.

I used these measurements because they fit both myself and my husband, so likely work for most adults. If you’re making a mask for a child, measure across their face from cheekbone to cheekbone to replace the 20cm measurement, then measure from the top of their nose to well under their chin, double it and add a centimeter to replace the 37cm measurement.

This next part is a bit tricky, and hard to see in the photos.

This is where we sew the paperclip into the top fold of the mask to create a bendable nose clip. What you’re seeing on the right is the fold at the top of the mask with the straightened paperclip rolled right up into the fold.

To prepare for sewing, I pinned the fabric below the paperclip, pushing the paperclip right into the inside of the fold. In the photo on the left, the paperclip is inside the folded material above the pins.

Now we move to the sewing machine, although this could be done by hand-stitching through the edge of the fabric and around the paperclip. In fact, after having done this a few times, I think that hand-sewing may be easier, since the feed mechanism on most home machines doesn’t work well when you have something stiff like a paperclip inside your fabric.

If you’re using a sewing machine, put on the zigzag foot and set the width of the stitch to as wide as it will go, so that the two sides of the zigzag stitches will go on either side of the paperclip. Start sewing and guide it through so that the fabric-covered paperclip tucked into the fold is in the centre, and the zigzag stitches go to either side of it: first to the left on the main part of the fabric, and then to the right where there is no fabric but the stitch will close around it.

That may not have been the best explanation, but here’s what you end up with. The straightened paperclip is inside the fold at the top of the fabric, and the zigzag stitches go to either side of it, which completely encloses the paperclip with fabric.

If you have fabric glue, you could definitely try that to hold the paperclip in place instead, although I haven’t tried that. You could also, as I mentioned before, hand-sew it into place.

And here’s why we went to all that work: a bendable nose clip. This is looking from the top of the mask, so you can see that the paperclip is completely sewn into the fold of the fabric, and when you bend the paperclip, it’s going to let you mold it to fit your own nose.

Now here’s what you have, and the hard part is over. You have the folded fabric like we saw earlier, with a straightened paperclip sewn inside the top fold. Lay out your fabric like this again for the next steps.

Adding ties and stitching sides

We’re now going to add ties and stitch the sides to create the mask. I used a sewing machine, but you could do all of this with hand sewing, or you could use some type of craft adhesive such as a hot glue gun or fabric glue. You could even staple it together, although if you opt for that, make sure that the smooth (top) side of the staples is facing inwards so that they don’t scratch your face.

Here’s where the conference lanyards come in: who doesn’t have a couple of these hanging around? You’ll need two of them, and I’ve selected two from past vendor clients of mine where I’ve also attended their conferences: Camunda and ABBYY. Thanks guys!

Cut off all that cool stuff at the end of the lanyard and throw it away. Cut each lanyard in half. These will be the four ties that attach to each corner of the mask, and tie behind your head. If one set is slightly longer than the other, use it at the top of the mask since it’s a bit further around the back of your head than around your neck where the other one ties.

To prepare for sewing the edges I’ve started with the right side, and you’re looking at it from the back side, that is, the side that will touch your face. Slide about 1 or 1.5cm of the tie into the fold of the fabric (that is, between the layers) at the top and bottom, and pin in place. I made mine so that the logos on the ties are facing out and right-side up when the mask is on, but the choice is yours.

Also put a pin where the fabric overlaps in the middle of that edge to hold it in place while sewing.

Now, it’s just a straight shot of sewing from top to bottom. I went back and forth over the ties a couple of times to make sure that they’re secure, then just stitched the rest of the way.

Once it’s sewn, if you flip it over, it will look like the photo on the right. This is the outside of the mask, now with the ties sewn in place and the entire edge stitched closed.

Now, pin the ties in place on the second edge, and pin the fabric overlap at the centre to prepare for sewing, just like you did with the first edge.

Sew that one just like you did the other, and you now have an almost completed mask. The photo to the right shows the side of the mask that faces outwards, with the nose clip in the edge at the top.

Some of the fabric designs that I’ve seen online stop with a simple version like this, but I find it leaves large gaps at the sides, so I wanted to tighten it up like what the Taiwanese doctor did by adding tucks to his.

Adding side tucks

There are other ways to do this rather than the tucks that I’m going to show you. I did a couple of masks using elastic, but I don’t have a lot of elastic on hand and thought that most people wouldn’t unless they do a lot of sewing or crafts. If you have a shirring foot on your sewing machine, you can definitely use that. If you don’t know what a shirring foot is, then you probably don’t have one. I recall a hand-shirring technique that I learned in Home Economics class in junior high, but that part of my memory was overwritten when I learned my 4th programming language.

Basically, I wanted to reduce the 17cm height of the mask that is required to stretch from nose to chin down to about 10cm at the edges. I added two large-ish tucks/pleats, angling them slightly in, and pinned them in place. This is shown from the inside of the mask, since I want the tucks to push out.

You’ll see what I mean when we flip the mask over, still just pinned: the two tucks cause the mask to pleat outwards, away from your face, towards the centre.

Sew across the two tucks to hold them in place. There’s not going to be a lot of strain on these, so hand-sew them if that’s easier. The photo on the right shows what it looks like on the inside of the mask after stitching the tucks.

And when we flip it over, the photo on the left is what it looks like from the outside after stitching.

Do the same on the other side, and the mask is essentially done. This is the completed mask with ties at each corner, and a bendable nose clip at the top:

Filter insert

We’re not quite done. Remember that open fold at the back of the mask? We’re going to insert an extra layer of filter material inside the mask, between the two layers of t-shirt fabric.

The mask would work just fine as it is, but will work better with an additional layer of a non-woven material inside to stop transmission of aerosolized particles. The doctor from the original design said that you could use a few layers of tissue that had been wet and then dried, so that it melded together. I was also talking with a friend about using a paper coffee filter. Use your imagination here, as long as it isn’t embedded with any chemicals and air can pass through it sufficiently for breathing.

I found a great suggestion online, however…a piece of a Swiffer Sweeper cloth, cut to fit. It’s unscented and likely contains little in the way of harmful chemicals, although I might try out a few other things here.

With the inside of the mask facing up, open the pocket formed by the overlapping edges of the fabric that we left across the middle of the mask. This is where the filter is going to go, and then the fabric will close around it.

Push the filter into the opening, flattening it out so that you’re only breathing through a single layer of it. It’s going to be a bit harder than usual to breathe through the mask anyway, so you don’t want to make it worse.

Now the filter is all the way inside the mask. Flatten it out and push it up into the corners for best coverage. If it’s too big, take it out and trim it rather than having rolls of extra filter material inside the mask.

If you just tug gently at the sides of the mask, the opening closes over the filter, and you’re ready to go. I did one model that put a snap in the middle so that the opening was held closed, but it’s not necessary and the snap pressed up against my nose in an annoying fashion.

Adjusting and wearing

Time to finally put the mask on. Before tying it on the first time, put the nose clip up to the top of your nose and mold it to fit over your nose. Once you’ve done this once, you probably only need to make minor adjustments, if any, when you put it on again.

On the left, you can see how I’ve bent the nose clip down the sides of my nose, then flattened the ends of it to follow the edge of my cheek. This closes most of the gap between the mask and my face at the top edge, which reduces the opportunity for aerosol particles to get in, and also means that my glasses don’t fog up every time I exhale.

Next, tie the top tie around your head. Make it nice and high so that it won’t slip down. This should be fairly snug but not uncomfortably so. Readjust the nose clip, since tying the top tie will usually pull it up a bit.

The bottom tie goes around and ties at the back of your neck, and can be fairly loose. Notice how the tucks conform to the side of my face so that the mask fits closely there. I’m thinking of adding a tuck right in the middle of the bottom edge in the next version to have it hug the chin closer, but this is pretty good.

General wearing instructions

A few tips about wearing, then I’ll show you a final hack in case you don’t have a conference lanyard.

  • Always put a clean mask on with clean hands.
  • Try not to adjust the mask once your hands may no longer be clean. In general, put the mask on before you leave home, and take it off when you come back.
  • Since the outside of the mask may have picked up something, be sure to wash your hands after you take it off. If you have to take it off while outside, you can untie the top tie and let the mask hang forward over your chest while it’s still tied around your neck, then wash your hands or use hand sanitizer before touching your face. To put it back on, just pull up and tie.
  • There are a few different ways to clean the mask.
    • If it’s not dirty but may have contacted the virus, you can just leave it for 24 hours. I’m basing that recommendation on the time that it takes for the virus to deteriorate on a porous material such as cardboard.
    • You can wash it in warm soapy water while you’re washing your hands. Rinse in clear water, squeeze out and hang to dry.
    • You can also do the equivalent of a medical autoclave, and steam it over boiling water for 10 minutes. Obviously, don’t put it in the microwave because of the metal paperclip.

Here’s the final hack. If you don’t have a conference lanyard, or a pair of bootlaces or the drawstring from your pajamas, you can create ties by trimming the bottom (hemmed) edge off the t-shirt and cutting it to the right length. If you trim it a little bit away from the stitching, as you can see at the bottom, it won’t even need to be hemmed.

You can also fold and sew strips into ties, but probably you can find something already made to use instead.

That’s it for my DIY mask for techies, made from a t-shirt, 2 conference lanyards and a paperclip. I don’t plan to start any series of DIY coronavirus supplies, but if you’re interested in business process automation and how our world of work is changing in these changing times, check out some of my other work.

Stay safe!

Catch me on a @SoftwareAG webinar next week talking about operational excellence

With so many cancelled conferences and people working from home, this is a great time to tune up your skills through online courses and webinars. I’ll be presenting with Eric Roovers, business transformation leader at Software AG, on a webinar about operational excellence next Wednesday, April 1, at 10:00 Eastern (16:00 in Central Europe, 07:00 Pacific).

We are doing this as a conversation rather than a presentation, so be prepared for Eric and me to riff on our opinions about the key issues today in achieving and maintaining operational excellence. This is the first of a series of webinars that Software AG will be hosting on operational excellence, although the only one where I’ll be speaking.

The webinar is sponsored by Software AG, and you’ll need to register at the link above. If you can’t make it at that time, go ahead and register and you’ll be sent a link to the on-demand version later.

Free COVID-19 apps from @Trisotech @Appian and @Pega

Yesterday, I passed on a link to The Master Channel’s free e-learning courses that you can use to start skilling up if you’re on the bench right now due to COVID-19. I’m also aware of a few companies in our industry who are offering free apps — some just to customers, some to everyone — that can help to fight COVID-19 in different ways.

The ability to build apps quickly is a cornerstone in our industry of model-driven development and low-code, and it’s encouraging to see some good offerings on the table already in response to our current situation.

Appian was first out of the blocks with a COVID-19 Response Management application for collecting and managing employee health status, travel history and more in a HIPAA-compliant cloud. You can read about it on their blog, and sign up for it online. Their blog post says that it’s free to any enterprise or government agency, although the signup page says that it’s free to organizations with over 1,000 employees — not sure which is accurate, since the latter seems to exclude non-customers under 1,000 employees. It’s free only for six months at this point.

Pegasystems followed closely behind with a COVID-19 Employee Safety and Business Continuity Tracker, which seems to have similar functionality to the Appian application. It’s an accelerator, so you download it and configure it for your own needs, a familiar process if you’re an existing Pega customer — which you will have to be, because it’s only available for Pega customers. The page linked above has a link to get the app from the Pega Marketplace, where it will be free through December 31, 2020.

Trisotech is going in a different direction by offering several free online COVID-19 assessment tools based on clinical guidelines: some for the general public, and some to be used by healthcare professionals.

As a founding member of OMG’s BPM+ Health community, Trisotech has been involved in developing shareable clinical pathways for other medical conditions (using visual models in BPMN, CMMN and/or DMN), and I imagine that these new tools might be the first bits of new shareable clinical pathways targeted at COVID-19, possibly packaged as consumable microservices. You can click on the tools and try them out without any type of registration or preparation: they ask a series of questions and provide an assessment based on the underlying business rules, and you can also upload files containing data and download the results.

My personal view is that making these apps available to non-customers is sure to be a benefit, since they will get a chance to work with your company’s platform and you’ll gain some goodwill all around.

Free online digital transformation courses from @TheMasterChnnl

E-learning platform The Master Channel (which inexplicably has only 8 Twitter followers after I followed them, so get over there and connect) is offering free courses, exams, certificates and downloads to anyone affected by COVID-19. That is pretty much everyone on the planet by now. You can find out more details at the link above and in a recent LinkedIn post by Jan Moons, and he writes in more detail about e-learning in the time of the current pandemic in another post.

There’s a very real possibility that a lot of people will be “on the bench” in the near future: either their work requires travel, or their company has to make tough decisions about staffing. This is a great time to consider skilling up, and The Master Channel has courses on process and decision modeling, business analysis, analytics and more. I have never taken one of their courses so can’t vouch for the quality, and I am not being compensated in any way for writing this post, but probably worth checking out what they have to offer.

Their current offer is only until April 5th, although it’s clear to most people that our period of confinement is going to last much longer than that. Get them while you can.

If you know of other e-learning companies making similar offers, please add them in the comments of this post (including a link if you have one). I know of several universities that offer free online courses for related topics although they tend to be longer and much more detailed — I had to dedicate four weeks and relearn a lot of forgotten graph theory to get through the Eindhoven University of Technology’s course in process mining, which is more than a lot of people have time (or patience) for.