Integrating Inbound and Outbound Correspondence

In the previous session, we heard a lot about the ISIS Papyrus correspondence generation capabilities, but it’s also important to look at response management, that is, closing the loop between inbound and outbound mail. This has to be considered in two different scenarios: an inbound customer interaction (paper, email, telephone, social media) triggers an internal process that results in a response interaction; or an outbound document solicits a response from a customer that must be matched to the original outbound request. Response processes can be fully automated, manual, or automated with user intervention, depending on the degree of classification and content extraction that can be done on an inbound document. The platform not only handles the entire response management process, it also provides analytics, such as analysis of campaign responses.

Their capture components include classification based on layout, keywords and/or barcodes, and data extraction from both fixed forms and freeform documents. I’m not sure whether this is their own recognition software or whether they OEM someone else’s (I suspect the former, since they seem to do a lot of innovation in-house), or how the accuracy and throughput compare with market leaders such as Kofax and IBM Datacap.

Once a document has been captured and classified and its content extracted, a response letter type can be selected automatically based on business rules or manually by a user, then completed either automatically from the content of the inbound letter or with user assistance.
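To make that rule-driven selection a bit more concrete, here is a minimal sketch of how business rules might map an inbound document’s classification and extracted fields to a response letter type; the rule structure, field names and template IDs are my own illustrative assumptions, not the Papyrus rule engine or its API.

```python
# Illustrative only: a toy rule table for choosing a response letter type.
# Rule conditions, field names and template IDs are hypothetical.

RULES = [
    # (predicate on extracted data, response template, fully automatic?)
    (lambda d: d["doc_class"] == "address_change" and d.get("customer_id"),
     "LTR_ADDRESS_CONFIRMATION", True),
    (lambda d: d["doc_class"] == "complaint",
     "LTR_COMPLAINT_ACK", False),     # needs CSR completion before sending
]

def select_response(extracted: dict) -> tuple:
    """Return (template_id, auto_complete) for the first matching rule."""
    for predicate, template, auto in RULES:
        if predicate(extracted):
            return template, auto
    return "LTR_GENERAL_ACK", False   # no rule matched: route to manual handling

print(select_response({"doc_class": "address_change", "customer_id": "C-1042"}))
# -> ('LTR_ADDRESS_CONFIRMATION', True)
```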

There are specific social media capture functions, such as the ability to track a Twitter hashtag, then analyze the sentiment and open a case directly from a tweet. If the user is identifiable, it can cross-reference to customer information in a CRM system, then present all of the information to a user to follow up on the request or complaint. This is exactly the sort of scenario that I imagined happening internally at Zipcar during the interaction that I described when talking about linking external social presence to core business processes.
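As a rough sketch of that flow – monitor a hashtag, score sentiment, look up the customer, open a case – the logic might look something like the following; the data shapes, thresholds and stub functions are all my own assumptions, not the Papyrus or Twitter APIs.

```python
from dataclasses import dataclass
from typing import Optional

# Conceptual pipeline only: the thresholds, stub scorer and CRM lookup are
# illustrative assumptions, not real Papyrus or Twitter calls.

@dataclass
class Tweet:
    id: str
    author: str
    text: str

def score_sentiment(text: str) -> float:
    """Stand-in scorer; a real system would use an NLP sentiment model."""
    return -0.8 if "terrible" in text.lower() else 0.2

def crm_lookup(handle: str) -> Optional[dict]:
    """Stand-in CRM lookup keyed on the social handle."""
    return {"customer_id": "C-1042"} if handle == "@jane" else None

def handle_mention(tweet: Tweet) -> Optional[dict]:
    sentiment = score_sentiment(tweet.text)
    if sentiment > -0.3:
        return None                      # only escalate negative mentions
    return {                             # a "case" ready for CSR follow-up
        "channel": "twitter",
        "source_id": tweet.id,
        "customer": crm_lookup(tweet.author),
        "priority": "high" if sentiment < -0.7 else "normal",
    }

print(handle_mention(Tweet("1", "@jane", "Terrible service today #acmebank")))
```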

If you consider the business scenario, it’s a real winner for handling inbound customer correspondence. First, an inbound document is received, analyzed and routed based on the content, including things such as looking up or extracting customer information from other systems of record. If some manual intervention is required for the response letter, a CSR is presented with the inbound letter, the generated outbound letter, customer information from other systems for context, and instructions for completing the letter. Inbound correspondence can be anything from a paper letter to a tweet, while the outbound can be the channel of choice for that customer, whether paper, fax, email or others.

Business Communication Platform for Correspondence Generation

Annemarie Pucher, CEO of ISIS Papyrus, discussed having a business communication platform for handling personalized content. Rather than having multiple systems that deal with content – ingestion, analysis, processing and generation, multiplied by the number of interaction channels – a single platform can reduce the internal effort to develop content-centric processes, while presenting a more seamless customer experience across multiple channels. This is particularly important for personalized outbound documents, whether transactional (e.g., statements), ad hoc (online presentment), or the result of a business interaction (e.g., contracts), since the customer needs to be able to receive the same information regardless of which interaction channel they choose.

She discussed how their platform provides fully personalized outbound correspondence based on composing documents from reusable elements rather than a simple mail merge approach. The building blocks may be sections of text, graphics, 2-D bar codes and other components, which can then be assembled in different ways based on rules that consider the customer information, the geographic region and other factors. Different language versions of each component can be created. The document assembly can access business information from other systems through a variety of adapters, and interface with different input and output channels. Business people can create and modify the building blocks and the overall document template, allowing them to change the layout and the dynamic content without IT intervention, although IT would be involved to set up the adapters and interfaces to other systems and services.
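Here’s a minimal sketch of what assembling a letter from reusable, language-aware building blocks under customer-specific rules might look like; the block structure, rule format and fallback-to-English behaviour are illustrative assumptions on my part, not the Papyrus building-block model.

```python
# Minimal sketch of rules-driven assembly from reusable building blocks.
# The block keys, rule shape and customer fields are my own assumptions.

BLOCKS = {
    ("greeting", "en"): "Dear {name},",
    ("greeting", "de"): "Sehr geehrte(r) {name},",
    ("eu_privacy_notice", "en"): "Your data is processed under EU rules.",
    ("closing", "en"): "Sincerely,\nAcme Insurance",
}

def assemble(template: list, customer: dict) -> str:
    lang = customer.get("language", "en")
    parts = []
    for block_id, condition in template:
        if condition and not condition(customer):
            continue                                   # rule excludes this block
        text = BLOCKS.get((block_id, lang)) or BLOCKS[(block_id, "en")]
        parts.append(text.format(**customer))          # fill in customer data
    return "\n\n".join(parts)

letter_template = [
    ("greeting", None),
    ("eu_privacy_notice", lambda c: c["region"] == "EU"),  # region-based rule
    ("closing", None),
]

print(assemble(letter_template, {"name": "Anna", "language": "de", "region": "EU"}))
```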

For interactive correspondence generation, such as what would be done by a customer service representative in response to an inbound call, they provide in-document editing of the dynamically-created letter. This allows the user to type in information directly and select which sections of the document to include, while ensuring that the predefined content and rules are included in the format required. There can be a complete workflow around interactive correspondence generation, where certain changes to the content require approval before the document is sent to the customer.

Regardless of whether the document is created interactively or in batch, that single document can be rendered to multiple output channels as required, including hardcopy and a large variety of online formats. This can include functions such as pooling for combined enveloping (something that I wish my brokerage could learn, rather than sending me multiple confirmations in multiple envelopes on the same day), confirming that a printed document was sent, and handling returned paper and electronic mail. Supporting CMIS allows them to store the documents in other content repositories, not just within their own repository.

We finished up with a demonstration of creating building blocks in their Document Workplace, then assembling them into a document template. The workplace is similar in appearance to Microsoft Word from a text formatting standpoint, making it easy for business people to get started, although there’s quite a bit of complex functionality that would require training. Although no technical skills are required, designing reusable components, handling variables and understanding document assembly parameters do require some degree of analytical skill, so this may not be something that the average business user takes on.

I haven’t spent a lot of time reviewing correspondence generation capabilities, but it’s something that comes up in many of the BPM implementations that I’m involved in, and typically isn’t handled all that well (if at all) by those systems. In many cases, it’s a poorly implemented afterthought, performed in a non-integrated fashion in another system, or becomes one of those things that the users ask for but just never receive.

Conference Season Begins

I attended one conference back in January, but the season really starts to ramp up from about now through June, and I kicked it off with the Kofax conference in San Diego earlier this week. Just a few disclaimers about my participation in conferences, in case I forget to mention them each time:

  • Conference organizers provide me with a free conference pass under the category of press, analyst or blogger. In exchange, they receive publicity when I blog or tweet about the conference. That publicity may or may not be favorable to them: giving me a conference pass does not guarantee that I will like the content.
  • For vendor conferences, the vendor always reimburses my travel expenses. This does not give them the right to review or edit any of the blog posts that I publish; in fact, it does not even guarantee that I will publish much while there if I’m really busy investigating their products and talking with their customers. However, it does guarantee that I understand their products and market much better afterwards.
  • If I’m giving a formal presentation at a vendor conference, it’s safe to assume that they paid me a fee to do so; at some other conferences (such as academic or non-profit ones), I may waive my fee. Paying me to speak at a conference does not give a vendor any additional coverage or editorial rights with respect to my blog posts.
  • Everything is on the record during the day, and off the record once the bar opens in the evening, unless otherwise requested. I created the “Kemsley rule” after noting that people tend to spill exciting upcoming news after a few drinks, then follow with a slightly horrified “don’t blog that”. I’m almost always invited to briefings under embargo or NDA at vendor conferences too, which I don’t blog either.

In April, I’m scheduled to give a keynote at Appian World in DC, and possibly sit on a panel (and definitely attend) IBM Impact in Las Vegas. May and June will be pretty busy, and I even have something scheduled in July, which is usually quiet for conferences. More to come on all of these as they get closer.

Kofax and MFPs

Lots of interesting content this afternoon; I had my eye on integrating BPM and Kofax Capture, but ended up at the session on using MFPs (multi-function printers, aka MFDs or multi-function devices) for point-of-origination document capture with Kofax Front Office Server (KFS). Rather than collecting documents at remote offices or branches and shipping them to central locations, KFS puts scanning capabilities on the MFP that already exists in many offices, getting documents captured hours or days earlier and eliminating some of the paper movement and handling. This isn’t just about scanning, however: it’s necessary to extract metadata from the documents in order to make them actionable.

They presented several scenarios, starting with the first simple touchless pattern:

  • Branch user authenticates at MFP using login, which can be synchronized with Active Directory
  • Branch user scans batch of documents using a button on the panel corresponding to the workflow that these documents belong to; these buttons are configured on Kofax Front Office Server to correspond to specific scan batch classes
  • VRS (VirtualReScan) and KTM (Kofax Transformation Modules) process the documents, doing image correction and auto-indexing where possible
  • The documents are committed to a content repository
  • The user can receive a confirmation email when the batch is created, or a link to the document in the repository after completion

Different buttons/options can be presented on the MFP panel for different users, depending on which shortcuts are set up for them during KFS configuration; this means that the MFP panel doesn’t have to be filled up with a bunch of buttons that are used by only a few users, but is effectively tailored for each user role. There are also global shortcuts that can be seen on the MFP panel before login, and are available to all logged-in users.
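Conceptually, the shortcut configuration boils down to a per-role mapping from panel buttons to Kofax Capture batch classes; the sketch below is only to illustrate the idea – the real configuration lives in the KFS administration console, and every name here is hypothetical.

```python
# Conceptual mapping only; this is not the actual KFS configuration format.

GLOBAL_SHORTCUTS = ["Scan to Email (searchable PDF)"]   # visible before login

ROLE_SHORTCUTS = {
    "branch_teller": ["New Account Documents", "Deposit Slips"],
    "loan_officer":  ["Loan Application", "Supporting Income Documents"],
}

SHORTCUT_TO_BATCH_CLASS = {                             # KC batch class per button
    "New Account Documents":       "NewAccount",
    "Deposit Slips":               "Deposits",
    "Loan Application":            "LoanApp",
    "Supporting Income Documents": "LoanApp",
}

def panel_buttons(role: str) -> list:
    """Buttons the MFP panel would show for a logged-in user in this role."""
    return GLOBAL_SHORTCUTS + ROLE_SHORTCUTS.get(role, [])

print(panel_buttons("loan_officer"))
print(SHORTCUT_TO_BATCH_CLASS[panel_buttons("loan_officer")[1]])   # -> LoanApp
```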

A more complex scenario had them scan at the MFD, then return to their computer and use a web client to do any validation and correction required before committing to the repository; this is the thin client version of the KTM validation rather than a function of KFS, I believe. This has the advantage of not requiring any software to be installed on the desktop clients, but it is still fundamentally a data entry function, not something that you want a lot of branch users to be doing regardless of how slick the interface is.

The speaker stated that KFS is designed for ad hoc capture, not batch capture, so there are definite limitations on the volume passing through a single KFS server. In particular, it does not appear to be suitable (or optimized) for large batches, but really for scanning a small set of documents quickly, such as a handful of documents related to a particular customer file. Also, the images need to pass to KFS in color or greyscale mode for processing, then are thresholded by VRS to pure black and white before passing on to KTM, so it may be better to locate KFS at the branches where there are multiple MFPs in order to reduce bandwidth requirements. Fundamentally, KFS is a glorified scanning module; it doesn’t do any of the recognition or auto-indexing, although you can use it to capture manual index values at the MFD.

It’s also possible to do some controlled ad hoc scanning: instead of uncontrolled scan to email (which many MFPs support natively, but which often ends up being turned off by organizations that are nervous about it), you can scan to an email through KFS, with the advantage that KFS can convert the document to a searchable PDF rather than just an image format. However, it’s not clear that you can restrict the recipients – only the originators, since the user has to have this function in their profile – so organizations that don’t currently allow scan to email (if that function exists on the MFP) may not enable this either.

There is also a KFS web client for managing documents after MFP scanning, before they go to Capture and KTM; it allows pages to be reviewed, (permanently) redacted and reordered, documents to be split and merged, text notes to be burned into the document, and other functions. Since this allows for document editing – changing the actual images before committal to the content management system – you couldn’t enable this functionality in scenarios that are concerned with legal admissibility of the documents. The web client has some additional functions, such as generating a cover page that you pre-fill (on the screen) with the batch index fields, then print and use as a batch cover page that will be recognized by KTM. It also supports thin client scanning with a desktop scanner, which is pretty cool – as long as Windows recognizes the scanner (TWAIN), the web client can control it.

As he pointed out, all of the documentation is available online without having to register – you can find the latest KFS documentation here and support notes here. I wish all vendors did this.

They finished up with some configuration information; there appear to be two different configuration options that correspond to some of their scenarios:

  • The “Capture to Process” scenario, where you’re using the MFP as just another scanner for your existing capture process, has KFS, VRS and KC (Kofax Capture) on the KFS server. KTM, the KTM add-ons and Kofax Monitor are on the centralized server, into which dedicated KC workstations presumably also feed.
  • The “Touchless Processing” scenario moves KTM from the centralized server to the KFS server, so that the images are completely processed by the time they leave that server. I think.

I need to get a bit more clarity on the configuration alternatives, but one key point for distributed capture via MFP is that documents scanned in greyscale/color at the MFP move to KFS in that form (hence much larger images); the VRS module that is co-located with KFS does the image enhancement and thresholding to a binary image. That means that you want to ensure fast/cheap connectivity between the MFP and the KFS server, but the bandwidth can be considerably lower for the link from KFS to KTM.
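To put rough numbers on that, here’s a back-of-the-envelope calculation for a single letter-size page at 300 dpi; the compression ratios are typical assumptions on my part, not Kofax-published figures.

```python
# Rough, back-of-the-envelope image sizes for one letter-size page at 300 dpi.
# Compression ratios are assumed typical values, not vendor specifications.
width_px, height_px = int(8.5 * 300), int(11 * 300)   # 2550 x 3300 pixels
pixels = width_px * height_px

grey_uncompressed_mb = pixels * 1 / 1e6               # 8 bits/pixel -> ~8.4 MB
grey_jpeg_mb = grey_uncompressed_mb / 10              # assume ~10:1 JPEG compression
bitonal_uncompressed_mb = pixels / 8 / 1e6            # 1 bit/pixel -> ~1.05 MB
bitonal_g4_mb = bitonal_uncompressed_mb / 15          # assume ~15:1 CCITT G4 compression

print(f"MFP -> KFS (greyscale JPEG): ~{grey_jpeg_mb:.2f} MB per page")
print(f"KFS -> KTM (bitonal G4):     ~{bitonal_g4_mb:.2f} MB per page")
```

Even with generous assumptions, the thresholded image leaving KFS is roughly an order of magnitude smaller than what arrives from the MFP, which is why the MFP-to-KFS hop is the one that needs the cheap, fast bandwidth.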

Kofax Capture Technical Session

It’s been a long time since I looked closely at the Kofax technology, so I took the opportunity of their Transform conference to attend a two-hour advanced technical session with Bret Hassler, previously the Capture product manager but now responsible for BPM, and Bruce Orcutt from product marketing. They started by asking the attendees about areas of interest so that they could tailor the session, and thereby rescue us from the PowerPoint deck that would otherwise have been the default. This session contained a lot more technical detail than I will ever use (such as the actual code used to perform some of the functions), but that part went by fairly quickly and overall it was a useful session for me. I captured some of the capabilities and highlights below.

Document routing allows large scan batches to be broken up into sub-batches that can be tracked and routed independently, with documents and pages moved between the child batches. This makes sense both for splitting work into manageable sizes for human processing, and so that there doesn’t need to be as much presorting of documents prior to scanning. For my customers who are considering scanning at the point of origination, this can make a lot of sense where, for example, a batch loaded on an MFD in a regional office may contain multiple types of transactions that go to different types of users in the back office. Child batch classes can be changed independently of the main batch, so that the properties and rules applied are based on the child batch class rather than the original class. A reference batch ID, which can be exported to an ECM repository as metadata on the resulting documents, can be used to recreate the original batch and the child batch that a document belonged to during capture. Batch splitting, together with the ability to change routing and permissions on the child batch, makes particular sense for processing that is done in the Capture workflow, so that the child batches follow a specific processing path and are available to specific roles. This will also fit well when they start to integrate TotalAgility (the Singularity product that they acquired last year) for full process management, as described in this morning’s keynote. Integrating TotalAgility for capture workflow will also, as Hassler pointed out, bring in a graphical process modeler; currently, this is all done in code.
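Here’s a rough sketch of the parent/child batch relationship and the reference batch ID, using my own illustrative data structures rather than the actual Kofax Capture object model.

```python
from dataclasses import dataclass, field
from itertools import count

# Illustrative data structures only, not the Kofax Capture object model.

_child_ids = count(1)

@dataclass
class Document:
    doc_id: str
    doc_type: str                     # e.g., "LoanApp", "AddressChange"

@dataclass
class Batch:
    batch_id: str
    batch_class: str
    documents: list = field(default_factory=list)
    reference_batch_id: str = None    # points back to the original scan batch

def split_by_type(parent: Batch, class_for_type: dict) -> list:
    """Split a scanned batch into child batches routed by document type."""
    children = {}
    for doc in parent.documents:
        child_class = class_for_type[doc.doc_type]
        child = children.setdefault(
            child_class,
            Batch(f"{parent.batch_id}-{next(_child_ids)}", child_class,
                  reference_batch_id=parent.batch_id),
        )
        child.documents.append(doc)
    return list(children.values())

parent = Batch("B100", "BranchInbound",
               [Document("D1", "LoanApp"), Document("D2", "AddressChange")])
for child in split_by_type(parent, {"LoanApp": "Lending",
                                    "AddressChange": "CustomerService"}):
    print(child.batch_id, child.batch_class, [d.doc_id for d in child.documents])
```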

Disaster recovery allows remote capture sites connected to a centralized server to fail over to a DR site with no administrative intervention. In addition to supporting ongoing operations, batches in flight are replicated between the central sites (using, in part, third-party replication software) and held at remote capture locations until replication is confirmed, so that batches can be resumed on the DR server. The primary site manages the batch classes and designates/manages the alternate sites. There’s some manual cleanup to do after a failure, but that’s to be expected.

Kofax has just released a Pega connector; like other custom connectors, they ship it with source code so that you can make changes to it (that, of course, is not necessarily a good idea since it might compromise your upgradability). The Kofax Export Connector for PRPC does not send the images to Pega, since Pega is not a content repository; instead, it exports the document to an IBM FileNet, EMC Documentum or SharePoint repository, gets the CMIS ID back again, then creates a Pega work object that has that document ID as an attachment. Within Pega, a user can then open the document directly from that link attachment. You have to configure Pega to create a web service method that allows a work object to be created for a specific work class (which will be invoked from Kofax), and create the attribute that will hold the CMIS document ID (which will be specified in the invocation method parameters). There are some technicalities around the data transformation and mapping, but it looks fairly straightforward. The advantage of doing this rather than pushing documents into Pega directly as embedded attachments is that the chain of custody of documents is preserved and the documents are immediately available to other users of the ECM system.
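The overall flow is easy to sketch; the functions below are hypothetical stand-ins for the CMIS commit and the Pega web service call – I haven’t seen the connector’s actual code, even though it ships with source.

```python
# Hypothetical sketch of the export flow; store_in_ecm() and create_work_object()
# stand in for the real CMIS and Pega web service calls, which I have not seen.

def store_in_ecm(image_path: str, doc_class: str, index_fields: dict) -> str:
    """Commit the document to the ECM repository and return its CMIS object ID."""
    return "cmis-doc-id-placeholder"   # a real call would return the repository ID

def create_work_object(work_class: str, cmis_doc_id: str, fields: dict) -> str:
    """Invoke the Pega web service that creates a work object of the given class,
    passing the CMIS document ID so it can be attached as a link."""
    return "W-1234"                    # a real call would return the work object ID

def export_document(image_path: str, doc_class: str, index_fields: dict) -> str:
    cmis_id = store_in_ecm(image_path, doc_class, index_fields)  # document of record
    return create_work_object("ACME-Claims-Work", cmis_id, index_fields)

print(export_document("claim_0001.tif", "ClaimForm", {"policy_no": "P-9987"}))
```

The design point is the same one made above: the image lives in the ECM system as the document of record, and Pega only ever holds a link to it.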

Good updates, although I admit to doing some extracurricular reading during the parts with too much detail.

Recording a “Hello World” Podcast with @PBearne at #pcto2012

I’ve been blogging a long time, and participate in webinars with some of my vendor clients, but I don’t do any podcasting (yet). Here at PodCamp Toronto 2012, I had the opportunity to sit through a short session with Paul Bearne on doing a simple podcast: record, edit and post to WordPress.

In addition to a headset and microphone – you can start with a minimal $30 headset/mic combo such as the USB Skype headset that he showed us, with decent audio conversion built in, or move up to a more expensive microphone for better quality – he also recommends at least a basic two-channel audio mixer.

He walked us through what we need from a software standpoint:

  • Audacity – Free open source audio recording software. We recorded a short podcast using Audacity based on a script that Bearne provided, checked the playback for distortion and other quality issues, trimmed out the unwanted portions, and adjusted the gain. I’ve used Audacity a bit before to edit audio, so this wasn’t completely unfamiliar, but I saw a few new tricks. Unfortunately, he wasn’t completely familiar with the tool when it came to some features, since it appears that he actually uses another tool for this, so there was a bit of fumbling around when it came to inserting a pre-recorded piece of intro music and converting the mono voice recording to stereo. There’s also the option to add metadata for the recording, such as title and artist.
  • Levelator – After exporting the Audacity project as a WAV or AIFF file, we could read it into Levelator to fix the recording levels. It’s not clear if there are any settings for Levelator or if it just equalizes the levels, but the result was a total mess the first time, not as expected. He ran it again (using an AIFF export instead of WAV) and the result was much better, although it’s not clear what caused the difference. After fixing the levels with Levelator and importing back into Audacity, the final podcast was exported in MP3 format.
  • WordPress – As he pointed out, the difference between a podcast and a regular audio recording is the ability to subscribe to it, and using WordPress for publishing podcasts allows anyone to subscribe to them using an RSS reader or podcatcher. You may not want to host the media files on your WordPress site, since you may not have the storage or bandwidth there, but we used WordPress in this case to set up the site that provides the links and feed to the podcasts.
  • Filezilla FTP – For transferring the resulting MP3 files to the destination.
  • PowerPress – A WordPress plugin from Blubrry that allows you to do things such as creating the link to iTunes so that the podcast appears in the podcast directory there, and publishing the podcast directly into a proper podcast post that has its own unique media RSS feed, allowing you to mix podcasts and regular posts on the same WordPress site (a minimal sketch of the resulting feed item follows).
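For reference, what makes an RSS item a podcast episode is the enclosure element pointing at the media file; this is a minimal, hand-built illustration of that structure (URL and file size made up), not PowerPress’s actual output, which also adds iTunes-specific tags.

```python
import xml.etree.ElementTree as ET

# Minimal illustration of a podcast feed item; the URL and size are made up.

item = ET.Element("item")
ET.SubElement(item, "title").text = "Hello World podcast"
ET.SubElement(item, "link").text = "http://example.com/hello-world/"
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/media/hello-world.mp3",   # the hosted MP3
    "length": "1048576",                                  # file size in bytes
    "type": "audio/mpeg",
})

print(ET.tostring(item, encoding="unicode"))
```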

He also discussed the format of the content, such as the inclusion of intro music, title and sponsorship information before the actual content begins.

There was definitely value in this session, although if I wasn’t already familiar with a lot of these concepts, it would have been a lot less useful. Still not sure that I’m going to be podcasting any time soon, but interesting to know how to make it work.

TIBCO Spotfire 4.0

I had a briefing with TIBCO on their Spotfire 4.0 release, announced today and due to be released by the end of November. Spotfire is the analytics platform that TIBCO acquired a few years back, and provides end-user tools for dimensional analysis of data. This includes both visualization and mashups with other data sources, such as ERP systems.

In 4.0, they have focused on two main areas:

  • Analytic dashboards for monitoring and interactive drilldowns. This seems more like the traditional BI dashboards market, whereas Spotfire is known for its multidimensional visualizations, but I expect that business customers find that just a bit too unstructured at times.
  • Social collaboration around data analysis, both in terms of finding contributors and publishing results, by allowing Spotfire analysis to be embedded in SharePoint or shared with tibbr, and by including tibbr context in an analysis.

I did get a brief demo, starting with the dashboards. This starts out like a pretty standard dashboard, but does show some nice tools for the user to change the views, apparently including custom controls that can be created without development. The dynamic visualization is good, as you would expect if you have ever seen Spotfire in full flight: highlighting parts of one visualization object (graph or table) causes the corresponding bits in the other visualizations on the screen to be highlighted, for example.

Spotfire 4.0 - tibbr in sidebar of dashboard

There’s also some built-in collaboration: a chart on the Spotfire dashboard can be shared on tibbr, which shows a static snapshot of the chart in a discussion thread but links back directly to the live visualization (insert obligatory iPad demo here), whereas sharing in SharePoint embeds the live visualization rather than a static shot. Embedding a tibbr discussion as context within an analysis is really less of an integration than side-by-side complementary viewing: you can have a tibbr discussion thread viewed on the same page as part of the analysis, although the tibbr thread is not itself being analyzed.

I found that the integration/collaboration was a bit lightweight, some of it no more than screen sharing (like the difference between a portal and a composite application). However, the push into the realm of more traditional dashboards will allow Spotfire to take on the more traditional BI vendors, particularly for data related to other TIBCO products, such as ActiveMatrix BPM.

[Update: All screenshots from briefing; for some reason, Flickr doesn’t want to show them as an embedded slideshow]

Taking Time To Remember

Today is Remembrance Day in Canada (Veterans’ Day if you are in the US), which marks the anniversary of the signing of the armistice in World War I on November 11, 1918. The day now serves to honor soldiers of all wars.

I started a little project last year, after finding my grandfather’s WWI journals and my father’s WWII journal: I have been blogging the journals on a daily basis, with each day’s entry posted on the same day, just shifted by 94 years and 67 years, respectively. My grandfather’s journal covers the entire period from the day he shipped out in 1916 until he arrived home in 1919; we’re at November 1917 right now, so there’s a year and a half to go. My father’s journal, unfortunately, only covers the period from January to September 1944, so it is already finished, but considering that he was in the navy, and took troops into the beaches of Normandy during the invasion, there’s some pretty interesting reading.

For the most part, these are just a few sentences each day written by small-town boys who were not recognized war heroes: not usually the kind of stories that we read about the wars. My grandfather’s sense of melancholy and my father’s sense of adventure are interesting contrasts. Feel free to follow along, and help out with the handwriting when I can’t decipher the journal myself.

SAP NetWeaver Business Warehouse with HANA

Continuing at the SAP World Tour in Toronto today, I went to a breakout innovation session on NetWeaver Business Warehouse (BW) and HANA, with Steve Holder from their BusinessObjects center of excellence. HANA, in case you’ve been hiding from all SAP press releases for the past two years, is an analytic appliance (High-performance ANalytic Appliance, in fact) that includes hardware and in-memory software for real-time analysis of non-aggregated information (i.e., not complex event processing). Previously, you would have had to move your BW data (which had probably already been ETL’d from your ERP to BW) over to HANA in order to take advantage of that processing power; now, you can actually make HANA the persistence layer for BW instead of a relational database such as Oracle or DB2, so that the database behind BW becomes HANA. All the features of BW (such as cubes and analytic metadata) can be used just as they always could be, and any customizations such as custom extractors already done on BW by customers are supported, but moving to an in-memory database provides a big uplift in speed.

Previously, BW provided data modeling, an analytical/planning engine, and data management, with the data stored in a relational database. Now, BW provides only the data modeling, and everything else is pushed into HANA for in-memory performance. What sort of performance increases? Early customer pilots are seeing 10x faster data loading, 30x faster reporting (3x faster than BW Accelerator, another SAP in-memory analytics option), and a 20% reduction in administration and maintenance (no more RDBMS admins and servers). This is before the analytics have been optimized for in-memory: it’s just a straight-up conversion of existing data into HANA’s in-memory columnar storage. Once you turn on in-memory InfoCubes, you can eliminate physical cubes in favor of virtual cubes; there are many other optimizations that can be done by eventually refactoring to take advantage of HANA’s capabilities, allowing for things such as interfacing to predictive analytics and providing linear scaling of data, users and analysis.

This is not going to deprecate BW Accelerator, but it provides options for moving forward that include a migration path from BWA to BW on HANA. BWA, however, provides performance increases for only a subset of BW data, so you can be sure that SAP will be encouraging people to move from BWA to BW on HANA.

A key message is that customers’ BW investments are completely preserved (although not time spent on BWA), since this is really just a back-end database conversion. Eventually, the entire Business Suite ERP system will run on top of HANA, so that there will be no ETL delay in moving operational data over to HANA for analysis; presumably, this will have the same sort of transparency to the front-end applications as does BW on HANA.

SAP World Tour Toronto: Morning Keynotes

There was a big crowd out for SAP’s only Canadian stop on its World Tour today: about 900 people in the keynote as Mark Aboud took the stage to discuss how SAP helps companies run their business, and to look at the business trends in Canada right now: focus on the customer to create an experience; improve employee engagement by providing better tools and information to do the job; and increase speed in operations, information management and information distribution. He moved on to talk about three technology trends, which echo what I heard at CASCON earlier this week: big data, cloud and mobility. No surprises there. He then spoke about what SAP is doing about these business and technology trends, which is really the reason that we’re all here today: cloud, analytics and mobility. Combined with their core ERP business, these “new SAP” products are where SAP is seeing market growth, and where they seem to be focusing their strategy.

He then invited CBC business correspondent Amanda Lang to the stage to talk further about productivity and innovation. It’s not just about getting better – it’s about getting better faster. This was very much a Canadian perspective, which means a bit of an inferiority complex in comparing ourselves to the Americans, but also some good insights into the need to change corporate culture in order to foster an atmosphere of innovation, including leaving room for failure. Aboud also provided some good insights into how SAP is transforming itself, in addition to what their customers are doing. SAP realized that they needed to bring game-changing technology to market, and now see HANA as being as big for SAP as R/3 was back in the day. As Lang pointed out, service innovation is as important as (or even more important than) product innovation in Canada, and SAP is supporting service businesses such as banking in addition to its more traditional position in product manufacturing companies.

Next up was Gary Hamel, recently named by the Wall Street Journal as the world’s most influential business thinker. Obviously, I’m just not up on my business thinkers, because I’ve never heard of him; certainly, he was a pro at business-related sound bites. He started off by asking what makes us inefficient, and talking about how we’re at an inflection point in terms of the rate of change required by business today. Not surprisingly, he sees management as the biggest impediment to efficiency and innovation, and listed three problematic characteristics that many companies have today:

  • Inertial (not very adaptable)
  • Incremental (not very innovative)
  • Insipid (not very inspiring)

He believes that companies need to foster initiative, creativity and passion in their employees, not obedience, diligence and intellect. I’m not sure that a lot of companies would survive without intellect, but I agree with his push from feudal “Management 1.0” systems to more flexible organizations that empower employees. Management 1.0 is based on standardization, specialization, hierarchy, alignment, conformance, predictability and extrinsic rewards. Management 2.0 is about transparency (giving people the information that they need to do their job), disaggregation (breaking down the corporate power structures to give people responsibility and authority), natural hierarchies (recognizing people’s influence as measured by how much value they add), internal markets (providing resources inside companies based on market-driven principles rather than hierarchies, allowing ideas to come from anyone), communities of passion (allowing people to work on the things for which they have passion in order to foster innovation), self-determination (allowing freedom to move within corporate control structures based on value added), and openness (external crowdsourcing). Lots of great ideas here, although they are guaranteed to shake up most companies today.

The only bad note of the morning (aside from having to get up early, rent a Zipcar and drive through morning rush hour to an airport-area conference center far from downtown) was the Women’s Leadership Forum breakfast. Moderated by a Deloitte partner, the panel included a VP of Marketing from Bell and the Director of Legal for Medtronic. Where are the women in technology? Where are the women entrepreneurs? The woman from Bell, when asked about lessons that she could share, started with “work harder, every day – just that extra half hour or so”. That is so wrong. We need to be working smarter, not longer hours, and we need to take time away from work so that we’re not focused on it every day of our lives if we expect to show true innovative leadership. About 20 minutes into the conversation, when the moderator turned the talk away from business and started asking about their children, horseback riding and the dreaded “work-life balance”, I left. What other business leadership forum that didn’t have the word “women” in the title would have included such topics? Quite frankly, this was an embarrassment.