DSTAdvance16 Keynote with @KevinMitnick

Hacker and security consultant Kevin Mitnick gave today’s opening keynote at DST’s ADVANCE 2016 conference. Mitnick became famous for hacking into a lot of places that he shouldn’t have been, starting as a phone-phreaking teenager, and spending some time behind bars for his efforts; these days, he hacks for good, being paid by companies to penetrate their security and identify the weaknesses. A lot of his attacks used social engineering in addition to technical exploits, and that was a key focus of his talk today, starting with the story of how Stanley Rifkin defrauded the bank where he worked of $10.2M by conning the necessary passwords and codes out of employees.

Hacking into systems using social engineering is often undetectable until it’s too late, because the hacker gets in using valid credentials. People are strangely willing to give up their passwords and other security information to complete strangers with a good story, or unintentionally expose confidential information on peer-to-peer networks, or even throw out corporate paperwork without shredding. Not surprisingly, Mitnick’s company has a 100% success rate at hacking into systems when permitted to use social engineering in addition to technical hacks; the combination of insider information and technical vulnerabilities is deadly. He walked us through how this could be done by looking just at metadata about a company, its users and their computers in order to build a target list and a likely attack vector. He also discussed hacks that can be done using a USB stick, such as installing a rootkit or keylogger, reminding me of a message exchange that I had a couple of days ago with a security-conscious friend:

[image: screenshot of the message exchange]

Mitnick demonstrated how to create a malicious wifi hotspot using a WiFi Pineapple to hijack a connection and capture information such as login credentials, or to trigger an update prompt (such as for Adobe Flash Player) that actually installs malware instead, gaining complete access to the computer. He pointed out that you can avoid these types of attacks by using a VPN every time you connect to a non-trusted wifi hotspot.

He demonstrated an HID access card reader that can read a card from three feet away, capturing the card and site IDs so that they can be played back to gain physical access to a building as if he had the original card. Even high-security HID cards can be read with a newer device that they’ve created.

He described how phishing attacks can be used in conjunction with cloned IVR systems and man-in-the-middle attacks, where an unsuspecting consumer calls what they think is their credit card company’s number, but that call is routed via a malicious system that tracks any information entered on the keypad, such as credit card number and zip code.

Next, he showed the impact of opening a PDF with a malicious payload, where an Acrobat vulnerability can be exploited to install malware on your computer. Java applets can use the same type of approach, making you think that the applet is signed by a trusted source.

Using an audience volunteer, he showed how online tracing sites can be used to search for a person, retrieving their SSN, date of birth, address, phone numbers and mother’s maiden name: more than enough information to be able to call in to any call center and impersonate that person.

Although he demonstrated a lot of technical exploits, the message was that many of these can be avoided by educating people, and testing their compliance with the procedures necessary to thwart social engineering attacks. He referred to this as the “human firewall”, and had a lot of good advice on how to strengthen it, such as advising people to open untrusted attachments in Google Docs, and using technology to hide information from internal people who don’t need to see it.

Lots of great — and scary — demos of ways that you can be hacked.

This is the last day for ADVANCE 2016; I might make it to a couple of sessions later today, then we have a private concert with Heart tonight.

DSTAdvance16 Day 1 Keynote with @PeterGSheahan

I’m back at DST’s annual AWD ADVANCE user conference, where I’ll be speaking this afternoon on microservices and component architectures. First, however, I’m sitting in on the opening keynote, where John Vaughn kicked things off before handing over to Steve Hooley for a market overview. Hooley pointed out that we’re in a low-growth environment now, with uncertain markets, making it necessary to look at cash conservation and business efficiencies as survival mechanisms. Since most of DST’s AWD customers are in financial services, he talked specifically about the disruption coming to that industry, and how incumbent companies have to drive down costs to be positioned to compete in the new landscape. Only a few minutes into his talk, Hooley mentioned blockchain, and how decentralized trust and transactions have the potential to turn financial services on its ear: in other words, the disruptions are technological as well as cultural.

He turned things over to the main keynote guest speaker, Peter Sheahan, author of several business innovation books and head of Karrikins Group. Sheahan talked about finding opportunity in disruption rather than fighting it, and presented four strategies for turning the challenge of disruption into opportunity: move towards the disruption; focus on higher-order opportunities; question assumptions; and partner like you mean it. These all depend on looking beyond the status quo to identify where the disruption is happening in order to recognize the opportunities, rather than just doing the same thing that you’re doing now, only better and faster. He gave some good case studies, such as Burberry — where the physical stores’ biggest competition is their own online shopping site, forcing them to create unique in-store experiences — with a focus on how the convergence of a number of disruptive forces can result in a cornucopia of opportunities. It’s necessary to look at the higher-order opportunities, orienting around outcomes rather than processes, and not spend too much time optimizing lower-level activities without looking at how the entire business model could be disrupted.

A dynamic and inspiring talk to kick off the conference. Not sure I’ll be attending many more sessions before my own presentation this afternoon since I’m doing some last-minute preparations, although there are some pretty interesting ones tempting me.

BPM and IoT in Home and Hospice Healthcare with @PNMSoft

I listened in on a webinar by Vasileios Kospanos of PNMSoft today about business process management (BPM) and the internet of things (IoT). They started with some basic definitions and origins of IoT as a part of controls engineering that relied on a lot of smart devices and sensors producing data and responding to remote commands – I had no idea that the term was coined back in 1999, about the same time that the term BPM came into use. There are some great examples of IoT in use, including environmental monitoring, manufacturing, energy management, and medical systems, in addition to the more well-known consumerized applications such as home automation and smart cars. Gartner claims that there will be 26B devices on the internet by 2020, which is probably not a bad estimate (and is also a driver for the new IPv6 addressing standard).
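As a back-of-envelope check on why device counts at that scale push IPv6 adoption (my own arithmetic, not from the webinar): IPv4’s 32-bit address space tops out at about 4.3 billion addresses, far fewer than 26 billion devices.

```python
# Back-of-envelope: IPv4's address space versus Gartner's device projection.
ipv4_addresses = 2 ** 32        # ~4.3 billion addresses
ipv6_addresses = 2 ** 128       # ~3.4e38 addresses
projected_devices_2020 = 26e9   # Gartner's estimate cited above

print(f"IPv4 space: {ipv4_addresses:,} addresses")
print(f"Projected devices: {projected_devices_2020:,.0f} "
      f"({projected_devices_2020 / ipv4_addresses:.1f}x the IPv4 space)")
print(f"IPv6 space: {ipv6_addresses:.3e} addresses")
```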

Dominik Mazur from Amedar Consulting Group (a Polish business and technology consulting firm) joined to discuss a case study from one of their healthcare projects: improving the flow of medical information and operations across home care and hospices – parts of the medical system that are often orphaned from an information-gathering standpoint – tied into Poland’s National Health Fund systems. This included integrating the information from the various devices used to measure patients’ vital signs, and supporting processes for admission and discharge from medical care facilities. The six types of special-purpose devices communicate over mobile networks, and can store data for later forwarding if there is no signal at the point of collection. Doctors and other health care professionals can view the data and participate in remote diagnosis activities or schedule patient visits.
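That store-and-forward behaviour is a common pattern for intermittently connected devices: always buffer locally, then flush when the network comes back. A minimal sketch of the device-side logic, assuming a transmit callable that raises on network failure (my own illustration; Amedar’s actual device firmware wasn’t shown):

```python
import time
from collections import deque

class StoreAndForwardSender:
    """Buffer readings locally; forward them when the mobile network is available."""

    def __init__(self, transmit, max_buffer=10_000):
        self.transmit = transmit            # sends one reading; raises ConnectionError on failure
        self.buffer = deque(maxlen=max_buffer)

    def record(self, reading):
        self.buffer.append(reading)         # store first, so nothing is lost while offline
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.transmit(self.buffer[0])   # forward the oldest reading first
            except ConnectionError:
                return                          # no signal: keep buffering, retry later
            self.buffer.popleft()               # discard only after a successful send

# Readings here are (timestamp, vital_sign, value) tuples, purely for illustration.
sender = StoreAndForwardSender(transmit=lambda r: print("sent", r))
sender.record((time.time(), "heart_rate", 72))
```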

Mazur showed the screens used by healthcare providers (with English annotations, since the system is in Polish) as well as some of the underlying architecture and process models implemented in PNMSoft, such as the admitting interview and specialist referral process for patients, coordination of physician and specialist visits, and home medical equipment rental, which even supports remote configuration through the monitoring capabilities. He also showed a live demo of the system, highlighting features such as alarms that appear when patient data falls outside of normal boundaries; they are integrating third-party and open-source tools, such as Google charting tools, directly into their dashboards. He also discussed how other devices can be paired with the system using Bluetooth; I assume that this means that a consumer healthcare device could be used as an auxiliary measurement device, although manufacturers of those devices are quick to point out that they are not certified medical devices, in order to absolve themselves of responsibility for bad data.
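Those boundary alarms amount to simple range checks on each incoming reading; a sketch of the idea (the vital signs and ranges below are illustrative values of mine, not Amedar’s clinical settings, which would be clinician-configured and likely per-patient):

```python
# Hypothetical normal ranges per vital sign, for illustration only.
NORMAL_RANGES = {
    "heart_rate": (50, 110),      # beats per minute
    "spo2": (94, 100),            # blood oxygen saturation, %
    "temperature": (36.0, 38.0),  # degrees Celsius
}

def check_reading(vital_sign, value):
    """Return an alarm message if a reading falls outside its normal range."""
    low, high = NORMAL_RANGES[vital_sign]
    if not low <= value <= high:
        return f"ALARM: {vital_sign}={value} outside normal range [{low}, {high}]"
    return None

print(check_reading("heart_rate", 135))  # triggers an alarm
print(check_reading("spo2", 97))         # None: within range
```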

He wrapped up with lessons learned from the project, which sound much like those from many other BPM projects: use model-driven Agile development (with PNMSoft, in their case), and work closely with key stakeholders. However, the IoT aspect adds complexity, and they learned some key lessons around that, too: start device integration sooner, and allow 20-30% of project time for testing. They developed a list of best practices for similar projects, including extending business applications to mobile devices, and working in parallel on applications, device integration and reporting.

We wrapped up with an audience Q&A, although there were many more questions than we had time for. One of the more interesting ones was around automated decisioning: they are not doing any of that now, just alerting that allows people to make decisions or kick off processes, but this work lays the foundation for learning what can be automated without risk in the future. Both patients and healthcare providers are accepting the new technology, and the healthcare providers in particular find that it is making their processes more efficient (reducing administration) and transparent.

Great webinar. It will be available on demand from the resources section on PNMSoft’s website within a few days.


Update: PNMSoft published the recording on their YouTube channel within a couple of hours. No registration required!

When Lack Of System Integration Incurs Costs – And Embarrassment

BPM systems are often used as integrating mechanisms for disparate systems, passing information from one to another to ensure that they stay in sync. They aren’t the only type of system used for integration and orchestration – there’s everything from the consumer-focused IFTTT and Zapier to full-on server-side orchestration – but this is often presented as a primary use case for a BPMS.

What happens, however, when you don’t integrate systems, and rely instead on “swivel chair integration”, where people have to enter the same information twice, in two different systems? In many cases, that second entry just doesn’t happen on a consistent basis, and that can cost organizations a lot of money. The news headlines here are all about how lawyers were overpaid (really? that’s news? ;) ), but for me, the real story is buried further down:

[Lawyers’] time-off recorded in a scheduling system known as iCase was not always properly recorded in a parallel payroll system, known as PeopleSoft. Lawyers themselves were supposed to update both systems, but for various reasons did not.

In short, an organization that employs highly-paid professionals expected those people to enter their time (reasonable) – twice, in two different systems (unreasonable). And for some reason, they are surprised that the lawyers didn’t always do this.
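The obvious fix is exactly the kind of integration described at the top of this post: let people enter the data once, then propagate it between systems automatically. A rough sketch of a one-way sync from scheduling to payroll (the client classes and record format here are hypothetical stand-ins, not the actual iCase or PeopleSoft interfaces):

```python
def sync_time_off(scheduling_client, payroll_client):
    """Copy time-off entries from the scheduling system into payroll, so that
    scheduling is the single place where people enter their time off."""
    scheduled = scheduling_client.get_time_off()   # set of (employee_id, date)
    recorded = payroll_client.get_time_off()
    for entry in sorted(scheduled - recorded):     # only the entries payroll is missing
        payroll_client.add_time_off(*entry)

# Stand-in for whatever API each real system exposes.
class FakeSystem:
    def __init__(self, entries):
        self.entries = set(entries)
    def get_time_off(self):
        return set(self.entries)
    def add_time_off(self, employee_id, day):
        self.entries.add((employee_id, day))

scheduling = FakeSystem({("emp1", "2016-01-04"), ("emp2", "2016-01-05")})
payroll = FakeSystem({("emp1", "2016-01-04")})
sync_time_off(scheduling, payroll)
print(payroll.entries)  # emp2's day off is now recorded in payroll too
```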

Bruce Silver Now Stylish With DMN As Well As BPMN

I thought that Bruce Silver’s blog had been quiet for a while: turns out that he moved to a new, more representative domain name, and my feed reader wasn’t updating from there. He’s rebranding his business, including his blog, under Method & Style, mirroring the title of his popular book and training, BPMN Method and Style, and now his new book and training options for DMN: DMN Method and Style: The Practitioner’s Guide to Decision Modeling with Business Rules.

His blog has a ton of new content on DMN, starting with a great piece that compares the path of the DMN standard with that of BPMN, which is considerably more mature. He discusses the five key elements of DMN, then goes into each of those in detail in the next five posts: Decision Requirements Diagrams, Decision Tables, FEEL (a new expression language developed for DMN), Boxed Expressions and the Metamodel and Schema. It’s really interesting to read his analysis comparing the evolution of the two standards: there was a time when everyone thought that BPMN was just about the visual notation, but to make it really useful, the interchange format and execution semantics have to come along at some point. Still, it’s useful to get started in DMN now with DRDs and decision tables, since that at least makes the decision models explicit instead of being buried in text requirements.
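To make the decision table idea concrete: a decision table maps combinations of input conditions to an output, with a hit policy governing how many rules may match. A toy Python rendering of a unique-hit table (my own invented example, not one of Bruce’s):

```python
# Toy DMN-style decision table with a "unique" hit policy: at most one rule
# may match any given set of inputs. Inputs, rules and outputs are invented.
RULES = [
    # (condition on (customer_type, order_size), discount)
    (lambda c, n: c == "business" and n >= 10, 0.15),
    (lambda c, n: c == "business" and n < 10,  0.10),
    (lambda c, n: c == "private",              0.05),
]

def decide_discount(customer_type, order_size):
    """Evaluate the decision table for one set of inputs."""
    matches = [output for condition, output in RULES
               if condition(customer_type, order_size)]
    assert len(matches) <= 1, "unique hit policy violated: overlapping rules"
    return matches[0] if matches else 0.0   # no matching rule: no discount

print(decide_discount("business", 12))  # 0.15
print(decide_discount("private", 3))    # 0.05
```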

Once you’ve brushed up on his posts covering the five key elements, you can also read about the conformance levels that vendors can choose to implement, and what didn’t make it into DMN 1.1, which is the first real version of the standard.

He doesn’t pull any punches in his discussion, and is not very complimentary about some aspects of the standard and how some vendors choose to implement it. Just as he is with BPMN. :)

Smarter Mobile Apps Webinar with Me and @jamet123

I wrote a paper last year with James Taylor on smarter mobile apps that leverage process and decision management technologies, and we’re giving a webinar on the topic next Tuesday, January 19, at 1pm ET. You can read James’ more detailed post on this, or just head over and sign up for the webinar. We will be releasing the paper after the webinar.

HoHoTO 2015: be a sponsor, or just come for the party

HoHoTO is a fundraiser event put on each year by Toronto’s digital community: a great party with dancing, raffles and a chance to catch up with your friends (at the top of your lungs to be heard over the dance tunes). Since its inception in 2008, HoHoTO has raised over $350,000 for the Daily Bread Food Bank – an awesome organization that helps to feed people in our community – but this year, HoHoTO has turned its eye to supporting “the next generation of founders, funders and tech professionals”. In particular, the focus will be on organizations that help to bring more women and minorities into technology and digital businesses. The event is on December 11 at the Mod Club, and early bird tickets are on sale here.

The primary focus is on YWCA Toronto’s Girls’ Centre, with a 3-year goal to completely fund the centre and push for the opening of another one. This centre provides programs for girls aged 9-18 to let them try activities and develop skills, including “Miss Media” for designing online media such as blogs and websites. It’s located in Scarborough, the easternmost third of Toronto, serving a community that has upwards of 65% visible minorities (and the best ethnic food in the world, according to one economist), making it a great match with HoHoTO’s focus on promoting women and minorities in business and technology from an early age. HoHoTO is also bringing together professional women as mentors, including me.

The HoHoTO event, run by unpaid volunteers, is raising money through tickets and sponsorships. If you or your organization recognizes the value of diversity in business, and wants to support the success of women and minorities in digital and technology fields, consider becoming a sponsor of the event. Details are here, and most of your contribution is eligible for a tax receipt. You’ll get recognition on HoHoTO’s site and at the event, other promotional opportunities throughout the year, a handful of event and drink tickets to bring your team out to enjoy the evening, and a nice warm feeling in your heart.

Join the AIIM paper-free pledge

AIIM recently posted about World Paper-Free Day on November 6th, and although I’m not sure that it’s recognized as a national holiday or anything, it’s certainly a good idea. I blogged almost three years ago about my mostly paperless office, and how to achieve such a thing yourself. Since that time, I’ve added an Epson DS-510 scanner, which has a nice small footprint and a sheet feeder; it sits right on my desk and there is never a backlog of scanning.

It’s not just about scanning and shredding, although those are pretty important activities: you need a proper retention plan that adheres to any regulatory requirements, and a secure offsite (cloud or otherwise) backup capability to mitigate any physical site disasters.

You also need to consider how much backfile conversion you’ll do: I decided to back-scan everything except my financial records at the time that I started going completely paperless, then scan everything, including financials, from that date forward. Each year, another batch of old paper financial records reached its destruction date and was shredded – the last of them just last year – and I no longer have any paper files. If back-scanning is too time-consuming for you but you want to start scanning everything day-forward, then store your old paper files by destruction date so that you can easily shred the batch of expired files each year until there are none left.
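If you want to script that filing scheme, computing a destruction date from a retention schedule is trivial; a sketch, assuming per-class retention periods in years (example values only, not retention advice):

```python
from datetime import date

# Example retention periods per document class; the right values depend
# entirely on the regulatory requirements that apply to your records.
RETENTION_YEARS = {"financial": 7, "contracts": 10, "correspondence": 2}

def destruction_date(doc_class, created):
    """Earliest date a document may be shredded (naive about Feb 29)."""
    return created.replace(year=created.year + RETENTION_YEARS[doc_class])

print(destruction_date("financial", date(2015, 4, 30)))  # 2022-04-30
```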

These things – scanning, document destruction, retention plan, secure backup, backfile conversion – are the same things that I’ve dealt with at large enterprise customers in the past on ECM projects, just on a small-office scale.

Avoiding a surfeit of conferences

This time of year, I’m usually flying back and forth to Las Vegas to engage in the fall conference season: software vendors hold their annual user conferences, and invite me to attend in exchange for covering most of my travel expenses. They don’t pay me to attend unless I give a presentation – in fact, many are not even my clients – and since I’m self-employed, that means I’m giving up billable days to attend. Usually, I consider that a fair trade, since it allows me to get a closer look at the products and talk to the vendor’s employees and customers, and I typically blog about what I see.

This year, however, I stepped away from most of the conferences, including the entire slate of fall events. A couple of family crises over the summer required a lot of my attention and energy, and when I started getting requests to attend fall conferences, I just didn’t feel that they were worth my time.

Many vendors have become overly focused on the amount of blogging that I do at their conference, rather than on strengthening our relationship. My conference blogging, described as “almost like being there”, is seen by some vendors as a savant party trick, and they consider themselves cheated in some way if I don’t publish enough content during the conference. What they forget is that by attending their conference, I’m gaining insights into their company and products that I can use in future discussions with enterprise clients, as well as in any future projects that I might do with the vendor. I generate revenue as a consultant and industry analyst; blogging is something that I do to analyze and solidify my observations, to discuss opinions with others in the field, and to expand my business reach, but I’m never paid for it, and it is never a condition of attending an event – at least in my mind.

Another factor is the race to the bottom in travel expenses. Many vendors require that they book my air travel, and when booking the one conference that I was going to attend this fall, I asked their travel group to pay the $20 fee to select a decent (economy) seat for the 5-hour tourist-class flight, but they refused. Many times in the past I’ve just paid for seat assignments and upgrades out of my own pocket, but this time it became about the principle: the vendor in question, who is not an active client of mine, placed that little value on my attendance.

So if you’re a vendor, here’s the deal. A paid client relationship with me is not a prerequisite of me attending your conference, and has never been in the past, but there has to be a mutual recognition of the value that we each bring to the table. I bring 25 years of experience and opinions as a systems implementer, consultant and industry analyst, and I offer those opinions freely in conversation: consider it free consulting while I’m at your conference. I expect to gain insights into your company, products and customers, through public conference sessions and private discussions. I may blog about what I see and hear (at least the parts not under non-disclosure), or use that information in future discussions with enterprise clients. Or I may not, if I don’t find it relevant or interesting. Lastly, when you ask me to fly somewhere, keep in mind that it is not a treat for me to travel to Las Vegas or Orlando, and at least make sure that I’m not in the middle seat at the back of a 50-row aircraft.

As always, everything after the bar opens is off the record.

Appian Around The World – Toronto

Appian was recently doing a round of road-show conferences, and when they landed in my backyard, I decided to stop in for the day and see what was new. I missed Appian World this year and was looking forward to a bit of a product update as well as some of the local customer stories.

The day started with Edward Hughes, SVP of sales, giving us a high-level overview of Appian and their BPM platform-as-a-service and case management products (for the non-customers in the audience), as well as their shift towards becoming a broad application development platform rather than just a BPMS. I’ve been seeing this trend with many BPM vendors over the past few years, and Appian has been repositioning this way for a year or two already. Using Appian as an application development platform allows applications to be developed independently of the deployment platform, both on the server side (e.g., develop on the cloud version, deploy on premise) and for client interfaces on desktops or mobile devices. The messaging is that you can use their platform to create customer service applications “beyond CRM” that handle the entire customer journey, with a unified interface plus a consolidated view onto enterprise data using their Records function. He also talked about the Appian App Market, an expanded version of their Appian Forum containing add-in components and complete applications from Appian and third parties.

Since it was a small room, the local customers introduced themselves and talked about their Appian experience and applications: 407 ETR, with 10 apps integrated with their customer portals so that online actions (e.g., acquiring a new transponder) become Appian processes assigned to the 125 internal users; Manulife, the first Appian cloud customer back in 2008, now migrating their “legacy” Appian apps to the Tempo UI and serving 900 users for work/time tracking and records management in Marketing; and IESO, with apps to register and manage information about energy companies participating in electricity markets. We also heard from some of the partners attending: TCS, Princeton Blue, and boutique contender Bits In Glass, with 15+ Appian-trained people in Canada and the US. Bits In Glass used to do mostly code-level (Java) bespoke development, and using Appian’s model-driven development have cut their efforts and timelines to one-third of what bespoke development required.

Next up was Michael Beckley, describing his new role as Chief Customer Officer (in addition to CTO) and giving us a product update on the 7.11 quarterly release. Appian sees corporate IT budgets as 20% innovation and 80% maintenance, but wants to flip that ratio so that maintenance is much less expensive than the original build, freeing up time and energy for innovation. Most large enterprises aren’t going to get rid of custom applications, but they do need to make them faster to build and maintain, while enforcing strict security and providing a user-friendly interface for internal and external users. In theory, an integrated application development platform such as Appian provides all the pieces: user interface, reports, rules, collaboration, process, on-premise/cloud, mobile, social, data, content, security, identity, and integration; in practice, most organizations end up doing something outside the model-driven development environment, although it can definitely improve their custom development efforts. Appian’s focus, as with many of the other BPMS vendors pivoting to become app dev vendors, is on providing a platform to build process-centric applications that get things done via automation, with people injected into the process where required.

Beckley gave us a hint of their growth strategy: they tend to build rather than buy in order to keep their technology pure, since growth by acquisition inevitably requires a large (and usually underestimated) effort to integrate the new technology.

Here’s a quick list of the Appian 7.11 updates (some of these likely came before 7.11, but I haven’t had an update for a while):

  • Three UI styles for Appian apps: the Tempo social interface, Sites limited-function workspace/worklist for heads-down workers, and Embedded SAIL to embed Appian functionality within an existing portal for internal or external users. Sites have Action Forms for fit-for-purpose apps when a social feed UI isn’t appropriate, and Embedded SAIL has Action Forms for customer-facing apps within a third-party web portal. These latter two are critical for real-world enterprise applications: although I like the Tempo interface, many of my enterprise clients need a different sort of view for heads-down workers, which can be provided by Sites or using Embedded SAIL.
  • A number of improvements to the Tempo news feed and UI, including the Tempo Kudos view to promote collaboration and provide awareness of accomplishments, and dynamically-updating filters to better link and manage record data and underlying data sources.
  • Improvements to SAIL, including positioning it as a device-independent UI that provides a shared model experience (rather than an HTML5 gateway into an existing app, as seen in some other mobile-enablement technologies) and is natively rendered on each device. The rendering engine can be updated independently of the applications, making it easier to adapt to new OS versions and devices. Appian uses SAIL to build their own components and apps that become part of the product. From a developer functionality standpoint, SAIL has added placeholder text and tooltips on forms, auto-focus on the first field to reduce clicks and improve efficiency, additional image sizes that are auto-scaled to the device, initially-collapsed form sections, “submit” links that can be placed on a graphic element instead of standard buttons, links in milestones and pickers, grid enhancements, and continuing speed improvements. There’s also a new barcode component, although it’s iOS-only and requires a Verifone device for capture.
  • Mobile offline actions use native encrypted data containers rather than HTML5 storage (some of this is iOS-only, although Android is planned for later this quarter), with the developer deciding which actions and data are available offline. If the definition of a form changes while a user is offline, the user will be prompted to review and resubmit the form with the new/updated form field definitions, so application changes can continue even if there are active offline users. This does not (yet) allow existing records to be locked for offline updates, although tasks can be locked to a user before going offline.
  • For designers, the developer portal is being migrated to SAIL and enhanced with build processes; there’s a UI designer navigation tree to allow view/select/edit within a hierarchical tree view of an action form; the expression rule designer (“for those of you who are still writing expressions”, namely power developers) auto-suggests rule inputs and there is some level of expression rule testing; a process report designer can be used to create performance reports; impact analysis reports show where rules are invoked and other object relationships; bulk security updates can be made across objects.
  • For administrators, a big new thing is LDAP/SAML authentication with multiple LDAP servers and complex configurations.

They have frequent product update webinars, free introductory courses and tips & tricks sessions online; in fact, there is a product update webinar tomorrow if you want to hear more about what I’ve listed above.

We heard from Rew Dickinson, a solutions consultant, on what makes a great app — complete with a live demo to show us how it’s done. There were a lot of best practices here that I won’t repeat (better for you to check out one of their webinars), but a few key pointers:

  • Design applications to be omni-channel and easily adaptable.
  • Use Records to organize and model corporate data, regardless of source, for use in an application; bidirectional links between Records and process instances allow for a full view whether you’re coming from the process or data side of things.
  • Use Sites for fit-for-purpose applications, e.g., a worklist for heads-down task execution, as an alternative to the full Tempo environment. Effectively, this is a report that can be sorted and filtered, with links that take the user to the task form; it can include work management analytics for a manager/dispatcher to monitor and reallocate task assignments. This made me think that Appian has just reinvented their per-application portal mode with Sites, albeit with better underlying technology, but that’s a discussion for another day.
  • Use Embedded SAIL for customer-facing portal environments, e.g., create service request from a customer order page.

Michael Beckley came back to talk to us about Appian Cloud, that is, their public cloud offering. It uses Amazon AWS (EC2/S3) in a single-tenant architecture, which allows each environment to be upgraded independently — more of a managed hosting model. The web tier is shared and handled by Appian, who also manages servers, load balancing, high availability and upgrades. There can be a VPN tunnel to on-premise data, and in fact the AWS instance does not have to be available on the public internet, but can be restricted to access only through the VPN from a corporate location. This configuration provides the elasticity and availability of the Amazon cloud, but allows private data to remain on premise — something that goes a long way towards resolving geographic data location issues. They’ve obviously been working on the optics of US-owned data centers by listing their privacy credentials, but it would have been even more reassuring to see a mention of any Canadian standards, such as PIPEDA, for this purely Canadian audience. There are tiers for development, medium, large and extra-large deployments, with a redeployment required to move between tiers (so not all that elastic…), although it supposedly takes only a few minutes if planned. Uptime this year has mostly been five 9s, with customer credits for missed uptime SLAs. You can also self-host Appian in other environments, e.g., Azure, although the Appian Cloud SaaS offering is currently Amazon-only.

We finished up with Mike Cichy, an Appian consultant, discussing their center of excellence offerings, and how customers can plug into a vast wealth of information, from checklists to migration guides to training, in order to embody best practices. There are a number of tools available, such as the Appian Health Check and Deployment Automation, in addition to these practices, with an overall goal of helping customer/partner organizations achieve a large improvement in developer speed and quality.

Altogether an informative day, and a great catch-up with some old friends.