What Price Integrity?

As an interesting follow-on to the previous session on blog monetization, I attended a panel on maintaining integrity on blogs when you do advertising or promotions on your site, featuring Danny Brown, Gini Dietrich and Eden Spodek. A lot of this is about transparency and disclosure; one audience member said that she writes paid reviews on her blog, but although you can buy her review, you can’t buy her opinion: there’s a fine line here. This is particularly an issue for lifestyle bloggers, since they often receive offers of free product in exchange for a review; this might be seen as being less of a “payment” than cash, although it still constitutes payment.

When I write a product review here, I am never compensated for that, although arguably it can impact my relationship with the vendor and can lead to other things, including paid engagements and conference trips. That’s quite different from being paid to blog about something, which I don’t do; I’ve had offers of payment from vendors to blog about them, and they don’t really understand when I tell them that I just don’t do that. Of course, you might say that when I’m at a vendor’s conference where they paid my travel expenses and I’m blogging about it, that’s paid blogging, but if you’ve ever spent much time at these conferences, you know that’s not much of a perk after a while. In fact, I’m giving up potential paid time in order to spend my time unpaid at the conference, so it ends up costing me to stay up to date on the products and customer experiences.

By the way, my “no compensation for blogging” rule doesn’t apply to book reviews: it is almost 100% guaranteed that if I write a book review, the author or publisher sent me a free copy (either paper or electronic), since I just don’t buy a lot of books. I currently have a backlog of books to be read and reviewed since that’s not my main focus, so this isn’t such a great deal for either party.

The key advice of the panel is that if you do accept free product or some other payment in exchange for a product review, make sure that you remain authentic with your review, and disclose your relationship with the product vendor. In some countries, such as the US and the UK, this is now required; in places where it isn’t, it’s just good practice.

I was going to stay on for a session on webinars but the speaker seems to be a no-show, so this may be it for me and PodCamp Toronto 2011. Glad that I stopped by for the afternoon, definitely some worthwhile material and some food for thought on monetization and integrity.

Blog Monetization

The next session that I attended was Andrea Tomkins talking about how to make money through advertising on your blog. She started with ways that blogs can pay off without direct monetization, such as driving other sorts of business (just as this blog often drives first contacts for my consulting business) and leveraging free trips to conferences, but her main focus was on how she sells ads on her blog.

She believes that selling your own ad space results in higher quality advertising by allowing you to select the advertisers who you want on your site and control many of the design aspects. Plus, you get to keep all the cash. She believes in charging a flat monthly rate rather than by impressions or clicks, and to set the rates, she looked at the rates for local newspapers; however, newspapers are very broad-based whereas blog audiences are much more narrowly focused, meaning that the people reading your blog come from a specific demographic that certain advertisers would really like to have access to. Andrea’s blog is a “parenting lifestyle” blog – a.k.a. “mommyblogger” – and she has 1,300-1,400 daily views, with many readers local to her Ottawa area.

She started out charging $50/month/ad, and raised the rate for new clients, along with an annual increase, until she reached a sweet spot in the pricing (which she didn’t disclose). She doesn’t sell anything less than a 3-month term, and some advertisers have signed up for a 12-month spot. Her first advertiser, who is still with her, is a local candy store that she and her family frequented weekly – she felt that if she loved it so much, then her readers would probably enjoy it as well. She approached the store directly to solicit the ad, although now many of her new advertisers come to her when they see her blog and how it might reach their potential audience.

She controls the overall ad design: the ad space is a 140×140 image with a link to their website, with the images being updated as often as the advertisers wish. New ads are added to the bottom of the list, so advertisers are incented to maintain their relationship with her in order to maintain their placement on the site.

She also writes a welcome post for each advertiser; she writes this as her authentic opinion, and doesn’t just publish some PR from the advertiser since she doesn’t want to alienate her readers. Each advertiser has the opportunity to host a giveaway or contest for each 3-month term, although she doesn’t want to turn her blog into a giveaway blog because that doesn’t match her blogging style. She also uses her social network to promote her advertisers in various ways, whether through personal recommendations, her Facebook page or Twitter; because she only takes advertisers that she believes in, she can really give a personal recommendation for any of them.

Before you call a potential advertiser, she recommends understanding your traffic, figuring out an ad design and placement, and coming up with a rate sheet. Don’t inflate your traffic numbers: you’ll be found out and look like an idiot, and most advertisers are more interested in quality engagement than raw numbers anyway. Everyone pays the same rate on Andrea’s blog; she doesn’t charge more for “above the fold” ads or use a placement randomizer, so new advertisers (who are added to the bottom) sometimes complain about placement.

A rate sheet should be presented as a professionally-prepared piece of collateral coordinated with your business cards, blog style and other marketing pieces. It needs to include something about you, the deal you’re offering, your blog, your audience and traffic, and optionally some testimonials from other advertisers.

Handling your own ads does create work. You need to handle contacts regarding ads (she doesn’t publish her rates), invoice and accept payments, track which ads need to run when, set up contracts, and provide some reporting to the advertisers. Obviously, there has to be a better way to manage this without resorting to giving away some big percentage to an ad network. She also writes personal notes to advertisers when their ad might have received extra attention, such as after a TV appearance or a speaking engagement of hers. She does not publish ads in her feed, but publishes partial feeds so readers are driven to her site to read the full posts, and therefore see the ads. She has started sending out a newsletter, and may sell advertising separately for that.
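That bookkeeping – terms, renewal dates, invoicing – is simple enough to sketch in a few lines of code. This is purely a hypothetical illustration of the workflow Andrea described, not any tool she mentioned; the names and reminder window are my own assumptions:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AdTerm:
    advertiser: str      # e.g. the local candy store
    start: date
    months: int          # she sells nothing shorter than a 3-month term
    monthly_rate: float  # flat monthly rate, not per-impression or per-click

    @property
    def end(self) -> date:
        # rough month arithmetic (30 days/month) is close enough for a reminder
        return self.start + timedelta(days=30 * self.months)

    def invoice_total(self) -> float:
        return self.monthly_rate * self.months

def due_for_renewal(ads, today, within_days=30):
    """Return advertisers whose term ends within the reminder window."""
    cutoff = today + timedelta(days=within_days)
    return [ad.advertiser for ad in ads if today <= ad.end <= cutoff]

ads = [
    AdTerm("Candy Store", date(2011, 1, 1), 12, 50.0),
    AdTerm("Toy Shop", date(2011, 1, 1), 3, 50.0),
]
print(due_for_renewal(ads, date(2011, 3, 15)))  # only the 3-month term is ending soon
```

Even something this small would cover the "track which ads need to run when" part without handing a percentage to an ad network.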

This sparked a lot of ideas in my head about advertising. I used to have Google ads in my sidebar, which pretty much just paid my hosting fees, but I took them out when it started to feel a bit…petty. As long as I get a good part of my revenue from end-customer organizations to help them with their BPM implementations, it would be difficult to accept ads here and maintain the appearance of independence. Although I do work for vendors as an analyst and keep those parts of my business completely separate, with appropriate disclosure to clients, it is just as important to maintain the public appearance of impartiality as to actually be impartial. An ongoing dilemma.

Psychology of Websites and Social Media Campaigns

I arrived at PodCamp Toronto after the lunch break today; “PodCamp” is a bit of a misnomer since this unconference now covers all sorts of social media.

My first session of the day with Brian Cugelman on the psychology of websites was a bit of a disappointment: too much of a lecture and not enough of a discussion, although there was a huge crowd in the room so a real discussion would have been difficult. He did have one good slide that compared persuasive websites with persuasive people:

  • They’re reputable
  • They’re likable with personality
  • They demonstrate expertise
  • They appear trustworthy
  • You understand them easily
  • What they say is engaging and relevant
  • They respect your time

He went through some motivational psychology research findings and discussed how this translates to websites, specifically looking at the parts of websites that correspond to the motivational triggers and analyzing some sites for how they display those triggers. Unfortunately, most of this research doesn’t seem to extend to social media sites, so although it works fairly well for standard websites, it breaks down when applied to things such as Facebook pages that are not specifically about making a sale or triggering an action. It will be interesting to see how this research extends in the future to understand the value of “mindshare” as separate from a direct link to sales or actions.

SAP Analytics Update

A group of bloggers had an update today from Steve Lucas, GM of the SAP business analytics group, covering what happened in 2010 and some outlook and strategy for 2011.

No surprise, they saw an explosion in growth in 2010: analytics has been identified as a key competitive differentiator for a couple of years now due to the huge growth in the amount of information and events being generated by every business; every organization is at least looking at business analytics, if not actually implementing it. SAP has approached analytics across several categories: analytic applications, performance management, business intelligence, information management, data warehousing, and governance/risk/compliance. In other words, it’s not just about the pretty data visualizations, but about all the data gathering, integration, cleanup, validation and storage that needs to go along with it. They’ve also released an analytics appliance, HANA, for sub-second data analysis and visualization on a massive scale. Add it all up, and you’ve got the right data, instantly available.

SAP Analytics products

New features in the recent product releases include an event processing/management component, to allow for real-time event insight for high-volume transactional systems: seems like a perfect match for monitoring events from, for example, an SAP ERP system. There has also been some deep integration into their ERP suite using the Business Intelligence Consumer Services (BICS) connector, although all of the new functionality in their analytics suite really pertains to Business Objects customers who are not SAP ERP customers; interestingly, he refers to customers who have an SAP analytics product but not their ERP suite as “non-SAP customers” – some things never change.

In a move that will be cheered by every SAP analytics user, they’ve finally standardized the user interface so that all of their analytics products share a common (or similar, it wasn’t clear) user experience – this is a bit of catch-up on their part, since they’ve brought together a number of different analytics acquisitions to form their analytics suites.

They’ve been addressing the mobile market as well as the desktop market, and are committing to all mainstream mobile platforms, including RIM’s Playbook. They’re developing their own apps, which will hurt partners such as Roambi who have made use of the open APIs to build apps that access SAP analytics data; there will be more information about the SAP apps in some product announcements coming up on the 23rd. Mobile information consumption is good, and possibly sufficient for some users, but I still think that most people need the ability to take action on the analytics, not just view them. That tied into a question about social BI; Lucas responded that there would be more announcements on the 23rd, but also pointed us towards their StreamWork product, which provides more of the sort of event streaming and collaboration environment that I wrote about earlier in Appian’s Tempo. In other words, maybe the main app on a mobile device will be StreamWork, so that actions and collaboration can be done, rather than the analytics apps directly. It will be interesting to see how well they integrate analytics with StreamWork so that a user doesn’t have to hop around from app to app in order to view and take action on information.

Process Knowledge Initiative Technical Team

When I got involved in the Process Knowledge Initiative to help create an open-source body of knowledge, I knew that the first part, with all the forming of committees and methodology and such, would be a bit excruciating for me. I was not wrong. However, it has been thankfully short due to the contributions of many people with more competence and patience in that area than I have, and I’m pleased to announce that we’ve put together an initial structure and will soon be starting on the fun stuff (in my opinion): working with the global BPM community to create the actual BoK content.

From our announcement earlier this week:

The month of November was a busy one for the Process Knowledge Initiative. In execution of our startup activities, we defined the PKBoK governance process and technical team structure, recruited our first round of technical experts, and secured preliminary funding via our Catalyst Program.

On the PKBoK development side, the team is actively researching and defining the candidate knowledge (or focus) areas in preparation for a January community review release.

With the knowledge area release, the development of the PKBoK becomes a full community activity, from content contributions, working group collaboration, and public commentary to content improvement and annotation.

It’s impossible to do something like this without some sort of infrastructure to get things kicked off, although we expect most of the actual content to be created by the community, not a committee. To that end, we’ve put forward an initial team structure as follows:

  • Technical Integration Team is responsible for establishing the PKBoK blueprint (scope, knowledge areas, ontology, content templates), recruiting working group leaders, and coordinating content development, publication and review.
  • Methodology Advisory Board provides guidance and support on PKBoK development and review processes. The Methodology Advisory Board does not participate in content creation or review; rather it provides rigor to ensure the final content represents the community perspective.
  • Technical Advisory Board provides expert input to, and review of, deliverables from the Technical Integration Team and Working Groups. Technical Advisors may lead, or contribute content to working groups within their area of specialization.
  • Working Groups develop PKBoK content for a particular knowledge area, task or set of tasks. Working groups will form via public calls for participation. The first call is planned for April 2011.
  • BPM Community reviews, contributes to, and consumes the PKBoK. All BPM community members are welcome to participate in the development of the PKBoK or utilize the delivered content in their individual BPM practices.

You can see the people who are participating in the first three of these in the announcement – including academia, industry analysts, standards associations, vendors and end-user organizations – and we’re looking for more people to join these groups as we move along.

Most of the content creation will be done by the working groups and the global BPM community; the other groups are there to provide support and guidance as required. We’ll soon be putting forward the proposed knowledge areas for discussion, which will kick off the process of forming the working groups and creating content.

I’m also starting to look at wiki platforms that we can use for this, since this really needs to be an inclusive community effort that embraces multiple points of view, not a moderated walled garden. This open model for content creation, as well as a liberal Creative Commons license for distribution, is intended to gain maximum participation both from contributors and consumers of the BoK.

Friday Diversion: The Kemsley Wartime Journals

For those of you who follow me on Twitter or Facebook, you may have already seen Frank Kemsley’s Journal: the blog of my grandfather’s WWI daily diary from the time that he spent in the Canadian army. I launched it last week on Remembrance Day, and started the regular blogging on November 16th, corresponding to his first journal entry on November 16th, 1916. I’m publishing the scanned pages of the journal along with the posts, embedded in the first post corresponding to that page, and today was the first full journal page. His journals (there are 3 of them) run until he returns home in 1919, and I will do my best to keep up my transcription for as long as he took to write them.

I’ve had some tremendous feedback so far on this, including retweets by the mayor of Toronto, Tony Baer’s tweet that this is obviously where I get my blogging gene 😉, plus an interesting comment from Martin Cleaver that these paper journals may outlive all of our online journaling.

When I discovered my grandfather’s journals, I also found a WWII journal of my father’s from 1944, which I will be starting to blog on January 1st. It only covers the period from January-September 1944, but he was in the Canadian Navy in the Atlantic, so there’s some interesting stuff right around June of that year.

What Organizations Want From Case Management

There was an AIIM webinar today on supporting the information worker with case management, featuring Craig Le Clair from Forrester.

Le Clair introduced the concept of information workers, a term that they use instead of knowledge worker, defined as “everyone between 18 and 88 with a job in which they use a computer or other connected device”, which I find to be a sufficiently broad definition as to be completely useless but allows them to use the cute abbreviation iWorker. Today, however, he’s just focused on those iWorkers who are involved with case management, in other words, what the rest of us would call knowledge workers. Whatever.

Forrester uses the term dynamic case management – others use advanced or adaptive case management, but we’re talking about the same thing – to mean “a semistructured but also collaborative, dynamic, human, and information-intensive process that is driven by outside events and requires incremental and progressive responses from the business domain handling the case.” Le Clair provided a quick summary of dynamic case management, with the document-centric case file as the focus, surrounded by processes/tasks, data, events and other aspects that make up the entire history of a case. There are some business challenges now that are driving the adoption of case management, including cost and risk management for servicing customer requests, enforcing compliance in less structured processes, and support for ad hoc knowledge work. He spoke specifically about transparency in regulatory compliance situations, where case management provides a way to handle regulatory processes for maximum flexibility while still enforcing necessary rules and capturing a complete history of a case, although most customers are more focused on case management specifically for improving customer service.

He described case management as a convergence of a number of technologies, primarily ECM, BPM, analytics and user experience, although I would argue that events and rules are equally important. Dynamic allocation of work is key: a case worker can select which tasks should be applied to a case, and even who should be involved, in order to reach the specified states/goals of the case. Some paths will include structured processes, others will be completely ad hoc, and others may involve a task checklist. Different paths selected may trigger rules and events, or offer guidance on completion. Different views of the case may be available to different roles. In other words, case management tries to capture the flexible experience of working on a case manually, but provides a guided experience where regulations demand it, and captures a complete audit trail as well as analytics of what happened to the case.
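The core idea – tasks chosen at runtime toward a goal state, with everything recorded – can be sketched in a few lines. This is a toy illustration of the concept, not Forrester’s model or any vendor’s product; all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    goal: str
    history: list = field(default_factory=list)  # the complete audit trail
    state: str = "open"

    def run_task(self, task: str, worker: str):
        # tasks are selected dynamically by the case worker, not by a
        # predefined flow; each one is recorded for later analytics
        self.history.append((task, worker))
        if task == self.goal:
            self.state = "closed"

case = Case(goal="resolve complaint")
case.run_task("request documents", worker="agent")        # an ad hoc step
case.run_task("resolve complaint", worker="supervisor")   # the goal-reaching step
print(case.state)  # closed, with both tasks in the audit trail
```

A real system would layer rules, events and role-based views on top of this, but the contrast with a fixed BPM flow is already visible: nothing here dictates which tasks run, or in what order.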

Forrester predicts that three categories of case management will emerge – investigative, service requests and incident management (can you sense three separate Forrester Waves coming?) – focused on different aspects of customer experience, cost control and risk mitigation. Key to making these work will be integration of core customer data directly into the case management environment, both for display to a case worker as well as allowing for automated rules to be triggered based on customer data. There are some challenges ahead: IT is still leading the configuration of case management applications, and it just takes too long to make changes to the rules, process models and reporting.

He was followed by Ken Bisconti from IBM’s ECM software products group, since IBM sponsored the webinar, talking about their new Case Manager product; I wrote about what Ken and many others said about this at the IOD conference last month, and just had an in-depth briefing on the product that I will be writing about, so won’t cover his part of the presentation today.

Smarter Infrastructure For A Smarter Planet

Kristof Kloeckner, IBM’s VP of Strategy & Enterprise Initiatives System and Software, & CTO of Cloud Computing, delivered today’s keynote on the theme of a smarter planet and IBM’s cloud computing strategy. Considering that this is the third IBM conference that I’ve been to in six months (Impact, IOD and now CASCON), there’s not a lot new here: people + process + information = smarter enterprise; increasing agility; connecting and empowering people; turning information into insights; driving effectiveness and efficiency; blah, blah, blah.

I found it particularly interesting that the person in charge of IBM’s cloud computing strategy would make a comment from the stage that he could see audience members “surreptitiously using their iPads”, as if those of us using an internet-connected device during his talk were not paying attention or connecting with his material. In actual fact, some of us (like me) are taking notes and blogging on his talk, tweeting about it, looking up references that he makes, and other functions that are more relevant to his presentation than he understands.

I liked the slide that he had on the hype versus enterprise reality of IT trends, such as how the consumerization of IT hype is manifesting in industrialization of IT, or how the Big Switch is becoming reality through multiple deployment choices ranging from fully on-premise to fully virtualized public cloud infrastructure. I did have to laugh, however, when he showed a range of deployment models where he labeled the on-premise enterprise data center as a “private cloud”, as well as enterprise data centers that are on-premise but operated by a 3rd party, and enterprise infrastructure that is hosted and operated by a 3rd party for an organization’s exclusive use. It’s only when he gets into shared and public cloud services that he reaches what many of us consider to be “cloud”: the rest is just virtualization and/or managed hosting services where the customer organization still pays for the entire infrastructure.

It’s inevitable that larger (or more paranoid) organizations will continue to have on-premise systems, and might combine them with cloud infrastructure in a hybrid cloud model; there’s a need to have systems management that spans across these hybrid environments, and open standards are starting to emerge for cloud-to-enterprise communication and control.

Kloeckner feels that one of the first major multi-tenanted platforms to emerge (presumably amongst their large enterprise customers) will be databases; although it seems somewhat counterintuitive that organizations nervous about the security and privacy of shared services would use them for their data storage, in retrospect, he’s probably talking about multi-tenanted on-premise or private hosted systems, where the multiple tenants are parts of the same organization. I do agree with his concept of using cloud for development and test environments – I’m seeing this as a popular solution – but believe that the public cloud infrastructure will have the biggest impact in the near term on small and medium businesses by driving down their IT costs, and in cross-organization collaborative applications.

I’m done with CASCON 2010; none of the afternoon workshops piqued my interest, and tomorrow I’m presenting at a seminar hosted by Pegasystems in downtown Toronto. As always, CASCON has been a great conference on software research of all types.

CASCON Keynote: 20th Anniversary, Big Data and a Smarter Planet

With the morning workshop (and lunch) behind us, the first part of the afternoon is the opening keynote, starting with Judy Huber, who oversees the 5,000 people at the IBM Canada software labs, which includes the Centre for Advanced Studies (CAS) technology incubation lab that spawned this conference. This is the 20th year of CASCON, and some of the attendees have been here since the beginning, but there are a lot of younger faces who were barely born when CASCON started.

To recognize the achievements over the years, Joanna Ng, head of research at CAS, presented awards for the high-impact papers from the first decade of CASCON, one each for 1991 to 2000 inclusive. Many of the authors of those papers were present to receive the award. Ng also presented an award to Hausi Müller from University of Victoria for driving this review and selection process. The theme of this year’s conference is smarter technology for a smarter planet – I’ve seen that theme at all three IBM conferences that I’ve attended this year – and Ng challenged the audience to step up to making the smarter planet vision into reality. Echoing the words of Brenda Dietrich that I heard last week, she stated that it’s a great time to be in this type of research because of the exciting things that are happening, and the benefits that are accruing.

Following the awards, Rod Smith, VP of IBM emerging internet technologies and an IBM fellow, gave the keynote address. His research group, although it hasn’t been around as long as CAS, has a 15-year history of looking at emerging technology, with a current focus on “big data” analytics, mobile, and browser application environments. Since they’re not a product group, they’re able to take their ideas out to customers 12-18 months in advance of marketplace adoption to test the waters and fine-tune the products that will result from this.

They see big data analytics as a new class of application on the horizon, since they’re hearing customers ask for the ability to search, filter, remix and analyze vast quantities of data from disparate sources: something that the customers thought of as Google’s domain. Part of IBM’s BigInsights project (which I heard about a bit last week at IOD) is BigSheets, an insight engine for enabling ad hoc discovery for business users, on a web scale. It’s like a spreadsheet view on the web, which is a metaphor easily understood by most business users. They’re using the Hadoop open source project to power all of the BigInsights projects.
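For readers who haven’t looked at Hadoop: the MapReduce pattern underneath it is simple enough to show in miniature. This is a toy, single-process sketch of the map and reduce phases – a word count over two tiny “documents” – not Hadoop code itself, which distributes these same two phases across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # emit (word, 1) pairs, as a Hadoop mapper would for its input split
    return [(word.lower(), 1) for word in doc.split()]

def reduce_phase(pairs):
    # group pairs by key and sum the values, as a Hadoop reducer would
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data analytics", "big insights from big data"]
word_counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(word_counts["big"])  # 3: the word appears three times across both documents
```

Because each mapper works on its own slice of the data and reducers only see grouped keys, the same two functions scale out to the web-sized datasets BigSheets is aimed at.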

It wouldn’t be a technical conference in 2010 if someone didn’t mention Twitter, and this is no exception: Smith discussed using BigSheets to analyze and visualize Twitter streams related to specific products or companies. They also used IBM Content Analytics to create the analysis model, particularly to find tweets related to mobile phones with a “buy signal” in the message. They’ve also done work on a UK web archive for the British Library, automating the web page classification and making 128 TB of data available to researchers. In fact, any organization that has a lot of data, mostly unstructured, and wants to open it up for research and analysis is a target for these sorts of big data solutions. It stands to reason that the more often you can generate business insights from the massive quantity of data constantly being generated, the greater the business value.
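To make the “buy signal” idea concrete: stripped of the real text-analytics model, it reduces to classifying each tweet by purchase intent. The sketch below uses a naive keyword list as a stand-in for the IBM Content Analytics model Smith described; the tweets and keywords are entirely made up:

```python
# Hypothetical stand-in for a trained text-analytics model: a keyword set
# indicating purchase intent for mobile phones.
BUY_SIGNALS = {"buy", "buying", "ordered", "upgrading", "want"}

def has_buy_signal(tweet: str) -> bool:
    # normalize each word (lowercase, strip trailing punctuation) and
    # check for any overlap with the buy-signal vocabulary
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & BUY_SIGNALS)

tweets = [
    "Thinking of buying a new phone this weekend",
    "My phone battery is terrible",
    "Just ordered the new handset!",
]
print([t for t in tweets if has_buy_signal(t)])  # first and third tweets match
```

The real model is far more sophisticated than keyword matching, of course, but the pipeline shape is the same: filter a high-volume stream down to the messages with commercial intent, then visualize or act on those.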

Next up was Christian Couturier, co-chair of the conference and Director General of the Institute for Information Technology at Canada’s National Research Council. NRC provides some of the funding to IBM Canada CAS Research, driven by the government’s digital economy strategy which includes not just improving business productivity but creating high-paying jobs within Canada. He mentioned that Canadian businesses lag behind other countries in adoption of certain technologies, and I’m biting my tongue so that I don’t repeat my questions of two years ago at IT360 where I challenged the Director General of Industry Canada on what they were doing about the excessively high price of broadband and complete lack of net neutrality in Canada.

The program co-chairs presented the award for best paper at this show, on Testing Sequence Diagram to Colored Petri Nets Transformation, and the best student paper, on Integrating MapReduce and RDBMSs; I’ll check these out in the proceedings as well as a number of other interesting looking papers, even if I don’t get to the presentations.

Oh yeah, and in addition to being a great, free conference, there’s birthday cake to celebrate 20 years!

Are You Exceeding Your Customers’ Expectations? Toronto Seminar This Week

I’ll be the opening speaker at an event in Toronto this Thursday, hosted by InformationWeek and Pegasystems, focused on how you can exceed your customers’ expectations. It’s also in Chicago on Wednesday, although I won’t be at that one. You can register for either event here: expect breakfast, networking, speakers including me and some customers, and an open discussion.

By the way, I’m the “InformationWeek Editor” speaker, since this blog is occasionally syndicated on Intelligent Enterprise, which is an InformationWeek property.