Aligning BPM and EA Tutorial at BBCCon11

I reworked my presentation on BPM in an enterprise architecture context (a.k.a., “why this blog is called ‘Column 2’”) that I originally did at the IRM BPM conference in London in June, and presented it at the Building Business Capability conference in Fort Lauderdale last week. I removed much of the detailed information on BPMN, refined some of the slides, and added in some material from Michael zur Muehlen’s paper on primitives in BPM and EA. Some nice improvements, I thought, and it came in right on time at 3 hours without having to skip over some material as I did in London.

I used a number of invaluable references in creating this presentation, which should give you plenty of follow-on reading if you find my slides to be too sparse on their own.

Improving Process Quality with @TJOlbrich

My last session at Building Business Capability before heading home, and I just had to sit in on Thomas Olbrich’s session on some of the insights into process quality that he has gained through the Process TestLab. Just before the session, he decided to retitle it as “How to avoid being mentioned by Roger Burlton”, namely, not being one of the process horror stories that Roger loves to share.

According to many analyst studies, only 18% of business process projects achieve their scope and objectives while staying on time and on budget, making process quality more of an exception than the rule. In the Process TestLab, they see a lot of different types of process quality errors:

  • 92% have logical errors
  • 62% have business errors
  • 95% have dynamic defects that only manifest when multiple processes run simultaneously and have to adapt to changing conditions
  • 30% are unsuited to the real-world business situation

Looking at their statistics for 2011 to date, about half of the process defects are due to discrepancies between models and the verbal/written description – what would typically be considered “requirements” – with the remainder spread across a variety of defects in the process models themselves. The process model defects may manifest as endless loops, disappearing process instances, missing data and a variety of other undesired results.

He presented four approaches for improving process quality:

  • Check for process defects at the earliest possible point in the design phase
  • Validate the process before implementing, either through manual reenactment, simulation, the TestLab approach, which simulates the end-user experience as well as the flow, or a BPMS environment such as IBM BPM (formerly Lombardi) that allows playback of models and UI very early in the design phase
  • Check for practicability to determine if the process will work in real life
  • Understand the limits of the process to know when it will cease to deliver when circumstances change

Olbrich’s approach is based on the separation of business-based modeling of processes from IT implementation: he sees these sorts of process quality checks being done “before you send the process over to IT for implementation”, which is where their service fits in. Although that’s still the norm in many cases, as model-driven development becomes more business-friendly, the line between business modeling and implementation is getting fuzzier in some situations. However, in most complex line-of-business processes, especially those that use quite a bit of automation and have a complex user experience, this separation is still prevalent.

Some of his case studies certainly bear this out: a fragment of the process models sent to them by a telecom customer filled an entire slide, even though the activities in the processes were only slightly bigger than individual pixels. The customer had “tested” the process themselves already, but using the typical method of showing the process, encouraging people to walk through it as quickly as possible, and sign off on it. In the Process TestLab, they found 120 defects in process logic alone, meaning that the processes would never have executed as modeled, plus 20 process integration defects in how the different processes relate to each other. Sure, IT would have worked around those defects during implementation, but then the process as implemented would be significantly different from the process as modeled by the business. That means that the business’ understanding and documentation of their processes would be flawed, and that IT would have to make changes to the processes – possibly without signoff from the business – that might actually change the business intention of the processes.

It’s necessary to use context when analyzing and optimizing processes in order to avoid verschlimmbesserung, roughly translated as “improvements that make things worse”, since the interaction between processes is critical: change is seldom limited to a single process. This is where process architecture can help, since it can show the relations between processes as well as the processes themselves.

Testing process models by actually experiencing them, as if they were already live, allows business users and analysts to detect flaws while they are still in the model stage by standing in for the users of the intended process and seeing if they could do the assigned business task given the user interface and information at that point in the process. Process TestLab is certainly one way to do that, although a sufficiently agile model-driven BPMS could probably do something similar if it were used that way (which most aren’t). In addition to this type of live testing, they also do more classic simulation, highlighting bottlenecks and other timing-related problems across process variations.
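
Even a toy simulation can surface the sort of timing problems that only show up when a process runs at volume. Here’s a minimal sketch of my own (nothing to do with the Process TestLab’s actual tooling; the process, arrival rates and capacities are all invented) showing how a constrained review step quietly builds up a queue that a diagram walkthrough would never reveal:

```python
import random

def simulate(n_instances=500, review_capacity=2, seed=42):
    """Toy model of a two-step process: a fast intake step feeding a review
    step with limited capacity. Queue time at review is the kind of dynamic
    defect that never shows up in a walkthrough of the diagram."""
    random.seed(seed)
    clock = 0.0
    review_free_at = [0.0] * review_capacity  # when each reviewer is next free
    waits = []
    for _ in range(n_instances):
        clock += random.expovariate(1.0)           # new instance roughly every time unit
        intake_done = clock + random.uniform(0.1, 0.3)
        idx = min(range(review_capacity), key=lambda i: review_free_at[i])
        start = max(intake_done, review_free_at[idx])
        waits.append(start - intake_done)          # time queued waiting for a reviewer
        review_free_at[idx] = start + random.uniform(1.5, 2.5)
    return sum(waits) / len(waits), max(waits)

avg_wait, worst_wait = simulate()
print(f"average wait for review: {avg_wait:.1f} time units, worst case: {worst_wait:.1f}")
```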

The key message: process quality starts at the very beginning of the process lifecycle, so test your processes before you implement, rather than trying to catch them during system testing. The later that errors are identified, the more expensive it is to fix them.

What Analysts Need to Understand About Business Events

Paul Vincent, CTO of Business Rules and CEP at TIBCO (and possibly the only person at Building Business Capability sporting a bow tie), presented a less technical view of events than you would normally see in one of his presentations, intended to help the business analysts here at Building Business Capability understand what events are, how they impact business processes, and how to model them. He started with a basic definition of events – an observation, a change in state, or a message – and why we should care about them. I cover events in the context of processes in many of the presentations that I give (including the BPM in EA tutorial that I did here on Monday), and his message is the same: life is event-driven, and our business processes need to learn to deal with that fact. Events are one of the fundamentals of business and business systems, but many systems do not handle external events well. Furthermore, many process analysts don’t understand events or how to model them, and can end up creating massive spaghetti process models to try to capture the results of events, since they don’t understand how to model events explicitly.

He went through several different model types that allow for events to be captured and modeled explicitly, and compared the pros and cons of each: state models, event process chain models, resources events agents (REA) models, and BPMN models. The BPMN model is the only one that really models events in the context of business processes, and relates events as drivers of process tasks, but is really only appropriate for fairly structured processes. It does, however, allow for modeling 63 different types of events, meaning that there’s probably nothing that can happen that can’t be modeled by a BPMN event. The heavy use of events in BPMN models can make sense for heavily automated processes, and can make the process models much more succinct. Once the event notation is understood, it’s fairly easy to trace through them, but events are the one thing in BPMN that probably won’t be immediately obvious to the novice process analyst.

In many cases, individual events are not the interesting part, but rather the correlation between many events; for example, fraud events may be detected only after many small related transactions have occurred. This is the heart of complex event processing (CEP), which can be applied to a wide variety of business situations that rely on large volumes of events, and which distinguishes it from simple process patterns and business rules that can be applied to individual transactions.
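
To make the correlation idea concrete, here’s a minimal sketch of my own – the field names, thresholds and the specific fraud pattern are all invented, and a real CEP engine would typically express this as declarative event patterns rather than hand-rolled code:

```python
from collections import defaultdict, deque

# Toy complex-event detector: no single transaction is suspicious on its
# own, but many small transactions on one account inside a short window
# correlate into a "possible fraud" event. Thresholds are purely illustrative.
WINDOW_SECONDS = 300
MAX_SMALL_TXNS = 5
SMALL_AMOUNT = 20.00

windows = defaultdict(deque)  # account id -> timestamps of recent small transactions

def on_transaction(account, amount, timestamp):
    if amount >= SMALL_AMOUNT:
        return None
    recent = windows[account]
    recent.append(timestamp)
    # drop events that have fallen out of the correlation window
    while recent and timestamp - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_SMALL_TXNS:
        return {"event": "possible_fraud", "account": account,
                "count": len(recent), "at": timestamp}
    return None

# feed a stream of simple events; only the correlated pattern raises an alert
for t in range(0, 120, 20):
    alert = on_transaction("acct-1", 9.99, t)
    if alert:
        print(alert)
```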

Looking at events from an analyst’s view, it’s necessary to identify actors and roles, just as in most use cases, then identify what they do and (more importantly) when they do it in order to drive out the events, their sources and destinations. Events can be classified as positive (e.g., something that you are expecting to happen actually happened), negative (e.g., something that you are expecting to happen didn’t happen within a specific time interval) or sets (e.g., the percentage of a particular type of event is exceeding an SLA). In many cases, the more complex events that we start to see in sets are the ones that you’re really interested in from a business standpoint: fraud, missed SLAs, gradual equipment failure, or customer churn.

He presented the EPTS event reference architecture for complex events, then discussed how the different components are developed during analysis:

  • Event production and consumption, namely, where events come from and where they go
  • Event preparation, or what selection operations need to be performed to extract the events, such as monitoring, identification and filtering
  • Event analysis, or the computations that need to be performed on the individual events
  • Complex event detection, that is, the event correlations and patterns that need to be performed in order to determine if the complex event of interest has occurred
  • Event reaction, or what event actions need to be performed in reaction to the detected complex event; this can overlap to some degree with predictive analytics in order to predict and learn the appropriate reactions
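
To make those stages a bit more concrete, here’s a skeletal sketch of my own – the stage names follow his list, but the generator-pipeline structure, the event fields and the SLA example are entirely my own invention, not part of the EPTS reference architecture:

```python
# A skeletal rendering of the five stages as a pipeline of generator functions.
# The event shapes and the SLA-breach example are invented for illustration.

def produce(raw_feed):
    """Event production/consumption: where events come from."""
    yield from raw_feed

def prepare(events):
    """Event preparation: monitor, identify and filter the relevant events."""
    return (e for e in events if e.get("source") == "order-service")

def analyze(events):
    """Event analysis: per-event computation, here tagging failures."""
    for e in events:
        e["failed"] = e["status"] != "ok"
        yield e

def detect(events, window=10, max_failure_rate=0.2):
    """Complex event detection: correlate many simple events over a window."""
    buffer = []
    for e in events:
        buffer.append(e)
        if len(buffer) == window:
            rate = sum(ev["failed"] for ev in buffer) / window
            if rate > max_failure_rate:
                yield {"complex_event": "sla_breach", "failure_rate": rate}
            buffer.clear()

def react(complex_events):
    """Event reaction: what to do when the complex event is detected."""
    for ce in complex_events:
        print("escalating:", ce)

feed = [{"source": "order-service", "status": "ok" if i % 3 else "error"}
        for i in range(20)]
react(detect(analyze(prepare(produce(feed)))))
```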

He discussed event dependency models, which show event orderings and relate events together as meaningful facts that can then be used in rules. Although not a common practice, this model type does show the relationships between events as well as their links to business rules.

He finished with some customer case studies that include CEP and event decision-making: FedEx achieving zero latency in determining where a package is right now; and Allstate using CEP to adjust their rules on a daily basis, resulting in a 15% increase in closing rates.

A final thought that he left us with: we want agile processes and agile decisions; process changes and rule changes are just events. Analyzing business events is good, but exploiting business events is even better.

Process and Information Architectures

Last day of the Building Business Capability conference, and I attended Louise Harris’ session on process and information architectures as the missing link to improving enterprise performance. She was on the panel on business versus IT architecture that I moderated yesterday, and had a lot of great insight into business architecture and enterprise architecture.

Today’s session highlighted how business processes and information are tightly interconnected – business processes create and maintain information, and information informs and guides business processes – but that different types of processes use information differently. This is a good distinction: looking at what she called “transactional” (structured) versus “creative” (case management) versus “social” (ad hoc) processes, transactional processes require exact data, while creative and social processes may require interpretation of a variety of information sources that may not be known at design time. She showed the Burlton Hexagon to illustrate how information is not just input to be processed into output, but is also used to guide processes, inform decisions and measure process results.

This led to Harris’ definition of a business process architecture as “defining the business processes delivering results to stakeholders and supported by the organization/enterprise showing how they are related to each other and to the strategic goals of the organization/enterprise”. (whew) This includes four levels of process models:

  • Business capability models, also called business service models or end-to-end business process models, which form the top level of the work hierarchy and define what the business processes are, but not how they are performed. Louise related this to a classic EA standpoint as being row 1 of Zachman (in column 2).
  • Business process models, which provide a deeper decomposition of the end-to-end models and tie them to KPIs/goals. This has the effect of building process governance directly into the architecture.
  • Business process flow models, showing the flow of business processes at the level of logistical flow, such as value chains or asset lifecycles, depending on the type of process.
  • Business process scope models (IGOEs, that is, Inputs, Guides, Outputs, Enablers), identifying the resources involved in the process, including information, people and systems.

She moved on to discuss information architecture, and its value in defining information assets as well as content and usage standards. This includes three models:

  • Information concept model, capturing the top level of information related to the business, often organized into domains such as finance or HR. For example, in the finance domain, we might have information subject areas (concepts) of invoicing, capital assets, budget, etc.
  • Information relationship model, which defines the relationships between the concepts identified in the information concept model, and which can span different subject areas. This can look like an ERD, but the objects being connected are higher-level business objects rather than database objects: this makes it fairly tightly tied to the processes that those business objects undergo.
  • Information governance model, which defines what has to be done to maintain information integrity: governance structure, roles responsible, and policy and business standards.

Next was bringing together the process and information architectures, which is where the IGOEs (business process scope models) come into play, since they align information subject areas with top-level business processes or business capabilities, allowing identification of gaps between process and information. This creates a framework for ensuring alignment at the design and operational levels, but does not map information subject areas to business functions, since that is too dependent on the organizational structure.

Harris presented these models as being the business architecture, corresponding to rows 1 and 2 of Zachman (for example), which can then be used to provide context for the remainder of the enterprise architecture and into design. For example, once these models are established, the detailed process design can be integrated with logical data models.

She finished up by looking at how process and information architectures need to be developed in lock step, since business process ensures information quality, while information ensures process effectiveness.

Assessing BPM Maturity with @RogerBurlton

Roger Burlton held a joint session across several of the tracks on assessing BPM maturity, starting with the BPTrends pyramid of process maturity, which ranges from a wide base of the implementation level, to the middle tier of the business process level, up to the enterprise level that includes strategy and process architecture. He also showed his own “Burlton Hexagon” of the disciplines that form around business process and performance: policy and rules, human capital, enabling technologies, supporting infrastructure, organizational structure, and intent and strategy. His point is that not everyone is ready for maturity in all the areas that impact BPM (such as organizational structure), although they may be doing process transformation projects that require greater maturity in many of these other areas. At some level, these efforts must be able to be traced back to corporate strategy.

He presented a process maturity model based on the SEI capability maturity model, showing the following levels:

  1. Initial – zero process organizations
  2. Repeatable – departmental process improvement projects, some cross-functional process definition
  3. Defined – business processes delivered and measurements defined
  4. Managed – governance system implemented
  5. Optimizing – ongoing process improvement

Moving from level 2 to 3 is a pretty straightforward progression that you will see in many BPM “project to program” initiatives, but the jump to level 4 requires getting the high-level management on board and starting to make some cultural shifts. Organizations have to be ready to accept a certain level of change and maturity: in fact, organizational readiness will always constrain achievement of greater maturity, and may even end up getting the process maturity team in trouble.

He presented a worksheet for assessing your enterprise BPM gap, with several different factors on which you are intended to mark the current state, the desired future state, and the organizational readiness (labeled as “how far will management let you go?”). The factors include enterprise context, value chain models, alignment of resources with business processes, process performance measurement system, direct management responsibility for value chains, and a process CoE. By marking the three states (as is, to be, and what can we get away with) on each of these as a starting point, you can see not just the spread between where you are and where you need to be, but also that extra dimension of organizational readiness for moving to a certain level of process maturity.
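
Just to illustrate the mechanics of the worksheet, here’s how I might capture it in a few lines of code – the factor names follow his list, but the 1-to-5 scores and the way that I compute the gap and readiness ceiling are entirely my own invention:

```python
# A minimal, invented rendering of the gap worksheet: for each factor, record
# where you are, where you want to be, and how far management will let you go.
factors = {
    "enterprise context":                         {"as_is": 2, "to_be": 4, "readiness": 3},
    "value chain models":                         {"as_is": 1, "to_be": 4, "readiness": 2},
    "resource/process alignment":                 {"as_is": 2, "to_be": 3, "readiness": 3},
    "process performance measurement":            {"as_is": 1, "to_be": 5, "readiness": 3},
    "management responsibility for value chains": {"as_is": 1, "to_be": 3, "readiness": 1},
    "process centre of excellence":               {"as_is": 2, "to_be": 4, "readiness": 4},
}

for name, f in factors.items():
    gap = f["to_be"] - f["as_is"]
    reachable = min(f["to_be"], f["readiness"])   # readiness caps what you can achieve now
    at_risk = f["to_be"] > f["readiness"]         # ambition beyond what management will allow
    print(f"{name:45s} gap={gap}  reachable now={reachable}  at risk={at_risk}")
```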

Depending on whether your organization is ready to crawl, walk or run (as defined by your organizational readiness relative to the as-is and to-be states), there are different techniques for getting to the desired maturity state: for those with low organizational readiness, for example, you need to focus on increasing that first, then evolve the process capabilities together with readiness as it increases. Organizational readiness at the executive level manifests as understanding, willingness and ability to do their work differently: in many cases, executives don’t want to change how they do their work, although they do want to reap the benefits of increased process maturity.

He showed a more detailed spreadsheet of a maturity and readiness assessment for a large technology company, color-coded based on which factors contribute most to an increase in maturity, and which hold the most risk since they represent the biggest jump in maturity without necessarily having the readiness.

With such a focus on readiness, change management is definitely a big issue with increasing process maturity. In order to address this, there are a number of steps in a communication plan: understand the stakeholders’ concerns, determine the messages, identify the media for delivering the messages, identify timetables for communication and change, identify the messengers, create/identify change agents (who are sometimes the biggest detractors at the start), and deliver the message and handle the feedback. In looking at stakeholder concerns as part of the communication plan, you need to look at levels from informational (“what is it”), personal (“how will it impact my job”), management (“how will the change happen”), consequences (“what are the benefits”), and on into collaboration, where the buy-in really starts to happen.

Ultimately, you’re not trying to sell business process change (or BPMS) within the organization: you’re trying to sell improvements in business performance, particularly for processes that are currently painful. Focus on the business goals, and use a model of the customer experience to illustrate how process improvements can improve that experience and therefore help meet overall business goals.

Finishing up with the maturity model, if you’re at level 1 or 2, get an initial process steering committee and CoE in place for governance, and plan a simple process architecture targeted at change initiatives rather than governance. Get standards for tools and templates in place, and start promoting the process project successes via the CoE. This is really about getting some lightweight governance in place, showing some initial successes, and educating all stakeholders on what process can do for them.

If you’re at level 3 or 4, you need to create your robust process architecture in collaboration with the business, and socialize it across the enterprise. With the Process Council (steering committee) in place, make sure that the process stewards/owners report up to the council. Put process measurements in place, and ensure that the business is being managed relative to those KPIs. Expand process improvement out to the related areas across the enterprise architecture, and create tools and methods within the CoE that make it easy to plan, justify and execute process initiatives.

Accepting The Business Architecture Challenge with @logicalleap

Forrester analyst Jeff Scott presented at Building Business Capability on what business architecture is and what business architects do. According to their current research, interest in business architecture is very high – more than half of organizations consider it “very important”, and all organizations surveyed showed some interest – and more than half also have an active business architecture initiative. This hasn’t changed all that much since their last survey on this in 2008, although the numbers have crept up slightly. Surprisingly, less than half of the business architecture activities are on an enterprise-wide level, although if you combine that with those that have business architecture spanning multiple lines of business, it hits about 85%. When you look at where these organizations plan to take their business architecture programs, over 80% are planning for them to be enterprise-wide, but that hasn’t changed in three years, meaning that although the intention is there, it may not actually be happening with any speed.

He moved on to a definition of business architecture, and how it has changed in the past 15 years. In the past, it used to be more like business analysis and requirements, but now it’s considered an initiative (either by business, EA or IT) to improve business performance and business/IT alignment. The problem is, in my opinion, that the term “business/IT alignment” has become a bit meaningless in the past few years as every vendor uses it in their marketing literature. Process models are considered a part of business architecture by a large majority of organizations with a business architecture initiative, as are business capability models and business strategy, application portfolio assessments, organizational models and even IT strategy.

Business architecture has become the hot new professional area to get into, whether you’re a business analyst or an enterprise architect, which means that it’s necessary to have a better common understanding of what business architecture actually is and who the business architects are. I’m moderating a panel on this topic with three business/IT/enterprise architects today at 4:30pm, and plan to explore this further with them. Scott showed their research on what people did before they became (or labeled themselves as) business architects: most held a business analyst role, although many were also enterprise architects, application architects or held other positions. Less than half of the business architects are doing it full time, so they may still be fulfilling some of those other roles in addition. Many of them reside in the EA group, and more than half of organizations consider EA to be responsible for the outcomes of business architecture.

Figuring out what business architects do really involves a complex set of factors: some of them are working on enterprise-wide business transformation, while others are looking at efficiency within a business unit or project. The background of the business architect – that is, what they did before they became a business architect – can (obviously) hugely impact the scope and focus of their work as a business architect. In fact, is business architecture a function performed by many players, or is it a distinct role? Who is really involved in business architecture besides those who hold the title, and where do they fit in the organization? As Scott pointed out, these are unanswered questions that will plague business architecture for a few years yet to come.

He presented several shifts to make in thinking:

  • Give up your old paradigms (think different; act different to get different results)
  • Start with “why” before you settle on the how and what
  • “Should” shouldn’t matter when mapping from “what is” to “what can be”
  • Exploration, not standardization, since enterprise architecture is still undergoing innovation on its way to maturity
  • Business architecture, not technology architecture, is what provides insight, risk management and leadership (rather than engineering, knowledge and management)
  • Stress on “business” in business architecture, not “architecture”, which may not fit into the EA frameworks that are more focused on knowledge
  • Focus on opportunity rather than efficiency, which is aligned with the shift in focus for BPM benefits that I’ve been seeing in the past few years
  • Complex problems need different solutions, including looking at the problems in context rather than just functional decomposition
  • Solve the hard “soft” problems of building business architect skills and credibility, leveraging local successes to gain support and sponsorship, and overcoming resistance to change
  • Think like the business before applying architectural thinking to the business problems and processes

He finished up with encouragement to become more business savvy: not just the details of business, but innovation and strategy. This can be done via some good reading resources, finding a business mentor and building relationships, while keeping in mind that business architecture should be an approach to clarify and illuminate the organization’s business model.

He wrote a blog post on some of the challenges facing business architects back in July, definitely worth a read as well.

Agile Predictive Process Platforms for Business Agility with @jameskobielus

James Kobielus of Forrester brought the concepts of predictive analytics to processes, discussing how to optimize customer-facing processes using the Next Best Action (NBA): using analytics and predictive models to figure out what you should do next in a process.

As we heard in this morning’s keynote, agility is mandatory not just for competitive differentiation, but for basic business survival. This is especially true for customer-facing processes: since customer relationships are fragile and customer satisfaction is dynamic, the processes need to be highly agile. Customer happiness metrics need to be built into process design, since customer (un)happiness can be broadcast via social media in a heartbeat. According to Kobielus, if you have the right data and can analyze it appropriately, you can figure out what a customer needs to experience in order to maximize their satisfaction and maximize your profits.

Business agility is all about converging process, data, rules and analytics. Instead of static business processes, historical business intelligence and business rules silos, we need to have real-time business intelligence, dynamic processes, and advanced analytics and rules that guide and automate processes. It’s all about business processes, but processes infused with agile intelligence. This has become a huge field of study (and implementation) in customer-facing scenarios, where data mining and behavioral studies are used to create predictive models on what the next best action is for a specific customer, given their past behavior as your customer, and even social media sentiment analysis.

He walked through a number of NBA case studies, including auto-generating offers based on a customer’s portal behavior in retail; tying together multichannel customer communications in telecom; and personalizing cross-channel customer interactions in financial services. These are based on coupling front and back-office processes with predictive analytics and rules, while automating the creation of the predictive models so that they are constantly fine-tuned without human intervention.
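
To make the basic next-best-action mechanics concrete, here’s a minimal sketch of my own – the candidate actions, their values and the stand-in propensity model are all invented, and a real deployment would plug in continuously retrained predictive models plus business rules rather than hand-written scoring:

```python
# Toy next-best-action scorer: given what we know about a customer, score each
# candidate action by (likelihood of a response) x (value of that response),
# and pick the highest-scoring one. Everything here is invented for illustration.
CANDIDATE_ACTIONS = {
    "retention_offer": {"value": 120.0},
    "upsell_premium":  {"value": 300.0},
    "do_nothing":      {"value": 0.0},
}

def propensity(customer, action):
    """Stand-in for a predictive model: likelihood that the customer responds."""
    if action == "retention_offer":
        return 0.6 if customer["churn_risk"] > 0.5 else 0.1
    if action == "upsell_premium":
        return 0.3 if customer["tenure_years"] > 2 else 0.05
    return 1.0  # doing nothing always "works", it just earns nothing

def next_best_action(customer):
    scored = {action: propensity(customer, action) * meta["value"]
              for action, meta in CANDIDATE_ACTIONS.items()}
    return max(scored, key=scored.get), scored

action, scores = next_best_action({"churn_risk": 0.7, "tenure_years": 1})
print(action, scores)   # retention_offer wins for this at-risk, newer customer
```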

Process Excellence at Elevations Credit Union

Following the opening keynote at Building Business Capability, I attended the session about Elevations Credit Union’s journey to process excellence. Rather than a formal presentation, this was done as a sit-down discussion with Carla Wolfe, senior business analyst at Elevations CU, being interviewed by Mihnea Galateanu, Chief Storyteller for Blueworks Live at IBM. Elevations obviously has a pretty interesting culture, because they publicly state – on their Facebook page, no less – that achieving the Malcolm Baldrige National Quality Award is their big hairy audacious goal (BHAG). To get there, they first had to get their process house in order.

They had a lot of confusion about what business processes even are, and how to discover the business processes that they had and wanted to improve. They used the APQC framework as a starting point, and went out to all of their business areas to see who “Got Process?”. As they found out, about 80% didn’t have any idea of their business processes, and certainly didn’t have them documented or managed in any coherent manner. As they went through process discovery, they pushed towards “enterprise process maps”: namely, their end-to-end processes, or value streams.

Elevations is a relatively small company, only 260 employees; they went from having 60 people involved in process management (which is an amazingly high percentage to begin with) to a “much higher” number now. By publicly stating the Baldrige award – which is essentially about business process quality – as a BHAG, they couldn’t back away from this; it was a key motivator that kept people involved in the process improvement efforts. As they started to look at how processes needed to work, there was a lot of pain, particularly as they looked at some of the seriously broken processes (like when the marketing department created a promotion using a coupon to bring in new customers, but didn’t inform operations about the expected bump in new business, nor tell the front-line tellers how to redeem the coupons). Even processes that are perceived as being dead simple – such as cashing a $100 bill at a branch – ended up involving many more steps and people than anyone had anticipated.

What I found particularly interesting about their experience was how they really made this about business processes (using value stream terminology, but processes nonetheless), so that everything that they looked at had to relate to a value stream. “Processes are the keys to the kingdom”, said Wolfe, when asked why they focused on process rather than, for example, customers. As she pointed out, if you get your processes in order, everything else falls into place. Awesome.

It was a major shift in thinking for people to see how they fit into these processes, and how they supported the overall value stream: most people (not just those at Elevations) think only about their own silo, and don’t think beyond their immediate process neighbors. Now, they think about process first, transforming the entire organization into process thinking mode. As they document their processes (using, in part, a Six Sigma SIPOC model), they add a picture of the process owner to each of the processes or major subprocesses, which really drives home the concept of process ownership. I should point out that most of the pictures that she showed of this were of paper flow diagrams pasted on walls; although they are a Blueworks Live customer, the focus here was really on their process discovery and management. She did, however, talk about the limitations of paper-based process maps (repository management, collaboration, ease of use), and how they used Blueworks Live once they had stabilized their enterprise process maps in order to allow better collaboration around the process details. They developed the SIPOCs of the end-to-end processes first on paper, then recreated them in Blueworks Live to serve as a framework for collaboration; anyone creating a new process had to link it to one of those existing value streams.

It’s important to realize that this was about documenting and managing manual processes, not implementing them in an automated fashion using a BPMS execution engine. Process improvement isn’t (necessarily) about technology, as they have proved, although the process discovery uses a technology tool, and the processes include steps that interact with their core enterprise systems. Fundamentally, these are manual processes that include system interaction. Which means, of course, that there may be a whole new level of improvement that they could consider by adding some process automation to link together their systems and possibly automate some manual steps, plus automate some of the metrics and controls.

So where are they in achieving their BHAG? One year after launching their process improvement initiative, they won the Timberline level of the Colorado Performance Excellence (CPEx) Award, and they continue to have their sights set on the Baldrige in the long term. Big, hairy and audacious, indeed.

Building Business Capability Keynote with @Ronald_G_Ross, @KathleenBarret and @RogerBurlton

After a short (and entertaining) introduction by Gladys Lam, we heard the opening keynote with conference chairs Ron Ross, Kathleen Barret and Roger Burlton. These three come from the three primary areas covered by this conference – business rules, business analysis and business process – and they covered what attendees can expect to learn about and take away from the conference:

  • The challenge of business agility, which can be improved through the use of explicit and external business rules, instead of hard-coding rules into applications and documents. Making rules explicit also allows the knowledge within those rules to be more explicitly viewed and managed.
  • The need to think differently and use new solutions to solve today’s problems, and development of a new vocabulary to describe these problems and solutions.
  • You need to rewire the house while the lights are on, that is, you can’t stop your business while you take the time to improve it, but need to ensure that current operations are maintained in the interim.
  • Business rules need to be managed in a business sense, including traceability, in order to become a key business capability. They also need to be defined declaratively, independent from the business processes in which they might be involved.
  • Process and rules are the two key tools that should be in every business analyst’s toolkit: it’s not enough just to analyze the business, but you must be looking at how the identification and management of process and rules can improve the business.

The key message from all three of the chairs is that the cross-pollination between process, rules, analysis and architecture is essential in order to identify, manage and take advantage of the capabilities of your business. There is a lot of synergy between all of these areas, so don’t just stick with your area of expertise, but check out sessions in other tracks as well. We were encouraged to step up to a more business-oriented view of solving business problems, rather than just thinking about software and systems.

I’m adding the sessions that I attend to the Lanyrd site that I created for the conference, and linking my blog posts, presentations, etc. in the “coverage” area for each session. If you’re attending or presenting at a session, add it on Lanyrd so that others can socialize around it.

I’m moderating two panels during the remainder of the conference: today at 4:30pm is a BPM vendor panel on challenges in BPM adoption, then tomorrow at 4:30pm is a panel on business architecture versus IT architecture.

Tracking Your Conference Social Buzz

I’m pretty active on social media: primarily, I blog and tweet, but I also participate in Foursquare, Facebook and, recently, the social conference site Lanyrd. When I was preparing for this week’s Building Business Capability conference in Fort Lauderdale, I added the sessions that I’ll be giving and a few others to the Lanyrd site that I created for the conference, and encouraged others to do the same. Just to explain, this isn’t the official BBC site, but a shadow crowd-sourced site that allows people to socialize their participation in the conference: think of it as a wiki for the conference, including some structured data that makes it more than just plain text. Logging in via your Twitter account, you can create a session (or a whole conference), add speakers to it, indicate that you’re attending or speaking at the conference, and add links to any coverage (blog posts, slides, video, etc.).

For someone tracking the conference remotely, or attending but unable to attend all of the sessions, this is a great way to find information about the sessions that is just too fast-moving to expect the conference organizers to add to the official site. If you’re at BBC, or tracking it from your desk at home, I recommend that you check out the Lanyrd site for BBC, add any sessions that you’re attending or presenting that are missing (I only added a dozen or so, so feel free to go wild there), and link in any coverage of the conference or sessions that you read about on blogs or other sites.

I’ve been using Lanyrd for about a year, sometimes just to add conferences that I know are happening, but also to add myself to ones created by others, as a speaker, participant or just a tracker. There’s also a Lanyrd iPhone app that downloads all of this to your phone. Although BBC has a mobile site, it’s slow to load and doesn’t have a lot of the social features that you’ll find in the Lanyrd app, or the ability to save details offline.

I also had an interesting social interaction about my hotel room here at the Westin Diplomat, where the conference is being held. I checked in just after noon yesterday and arrived at the room to find it was nestled right beside a very noisy mechanical room, and looked out directly at several large air conditioning units about 10 feet away on the roof of the adjacent structure. It sounded like I was in the engine room of a ship. Unable to raise the front desk by phone, I went back down, and spent 20 minutes waiting for service. Fuming slightly, I tweeted, and ended up in a conversation with the Starwood hotels Twitter presence, StarwoodBuzz, which responded almost immediately to my mention of a Westin property. The second room was beside the elevator shaft so still a bit noisy, but tolerable; however, when I returned from dinner around 10pm, the carpet was flooded from a leaking windowpane due to the torrential rain that we had all evening, and another room change was required.

The hotel responded appropriately, for the most part (the service for the first room change could have been a bit better, and I expected a really quiet room after complaining about noise in the first room), but the real surprise was the near-immediate feedback and constant care provided by the nameless person/people at StarwoodBuzz, which you can see in the Bettween widget below:

[ Update: Unfortunately, Bettween went offline, and I didn’t capture a screen shot of the conversation. 🙁 I went back and faked it by favoriting all of the tweets in the conversation, then taking a screen snap.]

This is an excellent example of how some companies monitor the social conversation about their brands, and respond in a timely and helpful manner. Kudos to Starwood for putting this service in place. This is also a good example of why you should tweet using your real name (assuming that you’re not in a situation where that would be harmful to your person): StarwoodBuzz was able to notify the hotel management of my predicament. It’s possible that by showing that I’m a real person, rather than a whiner complaining about their hotel while hiding behind a pseudonym, they were able to better address the problem.

The really funny thing is that everyone who I’ve run into at the conference so far said that they saw my original tweet, and wanted to know what happened with my room. Now they can watch it live on Twitter.