Mashup Camp 2 Day 1: Aggregating profile data

Sarah Harmer’s done, and I’m on to President Alien. It really is getting late, I swear that this is my last post for the day. Or maybe my second last.

Following the speed geeking, we had the third breakout session, and I attended the one on aggregating profile data. Although this was hijacked slightly at the end by the OpenID attendees, it was an interesting discussion on a number of profile/identity-related subjects:

  • Making yourself available and findable. How do you create an identity that others can easily find on any given social networking system?
  • Finding others. How do you find others on a social networking system (through a directly transmitted URL or reference, by email, by common screen name, etc.)? How can you create a common identity view of someone that you know? Is the onus on the person being found or the person doing the finding to create that view? Being able to track changes on another person’s profile. Being alerted about events in their life, e.g., birthdays.
  • Using different profiles for different purposes, namely the compartmentalization of different parts of life. Choosing what to share (photos, blogs, contact info) with which people (family, friends, work colleagues). Intentionally creating non-correlatable profiles, e.g., Superman/Clark Kent, work/play. The different levels of intimacy, e.g., comparing the Flickr friends and family categories with the concept of an IM buddy, and what information might be shared with acquaintances of different levels of closeness.
  • Exchanging profile data between social networking systems. Open standards for profile data exchange. The concept of a People Service for social networking group/profile management, which also came up in the People Aggregator mashup during speed geeking. (See the sketch after this list.)
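To make those last points a bit more concrete, here’s a minimal sketch of what aggregating per-service profiles into a common identity view, with per-audience compartmentalization, might look like. All of the type names, field names and services in it are hypothetical, invented for illustration; real open standards for profile exchange in this space include things like FOAF and vCard.

```typescript
// Hypothetical sketch: aggregate profile data from two services into a
// common view, compartmentalized by audience. All names are invented.

type Audience = "family" | "friends" | "work" | "public";

interface ProfileField {
  value: string;
  visibleTo: Audience[]; // which compartments may see this field
}

interface CommonProfile {
  screenName: string;
  fields: Record<string, ProfileField>;
}

// Per-service shapes, as two hypothetical APIs might return them.
interface PhotoSiteProfile { nick: string; realName?: string; homeCity?: string; }
interface BlogSiteProfile { username: string; email: string; blogUrl: string; }

// Adapters map each service's profile into the common view, choosing
// which audiences may see each field.
function fromPhotoSite(p: PhotoSiteProfile): CommonProfile {
  const fields: Record<string, ProfileField> = {};
  if (p.realName) fields.realName = { value: p.realName, visibleTo: ["family", "friends"] };
  if (p.homeCity) fields.homeCity = { value: p.homeCity, visibleTo: ["friends", "public"] };
  return { screenName: p.nick, fields };
}

function fromBlogSite(p: BlogSiteProfile): CommonProfile {
  const fields: Record<string, ProfileField> = {
    email: { value: p.email, visibleTo: ["work", "friends"] },
    blogUrl: { value: p.blogUrl, visibleTo: ["public"] },
  };
  return { screenName: p.username, fields };
}

// Merge profiles already matched by screen name, then filter the merged
// view down to what one audience is allowed to see.
function viewAs(profiles: CommonProfile[], audience: Audience): Record<string, string> {
  const view: Record<string, string> = {};
  for (const profile of profiles) {
    for (const [name, field] of Object.entries(profile.fields)) {
      if (field.visibleTo.includes(audience)) view[name] = field.value;
    }
  }
  return view;
}
```

A work colleague viewing the merged profile would see only the email field, and the Superman/Clark Kent case falls out naturally: profiles that are intentionally non-correlatable are simply never matched and merged into a common view.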

I attended this out of general interest; although I had hoped that it might be useful for enterprise profile management, I’m not so sure that it is, but I still found it interesting.

Mashup Camp 2 Day 1: Speed geeking

We started the afternoon with speed geeking, where each mashup developer set up on a table, and the rest of us circulated around, spending 5 minutes at each table for a quick demo. There were 21 tables but only 10 timeslots, which made for a quick triage before I started out. We’re having another speed geeking session tomorrow, so I’ll have time to catch any that I missed before I hand out my wooden nickel, the token that each of us was given at registration to use for voting for our favourite mashup. First prize is $5000 of geek-ware of some sort, so this is a pretty big deal.

Tons of ideas here: wedding registry finder, cell phone as shopping cart, news aggregator, train schedules to SMS, backend storage, music videos, Bungee Labs mega-mashup, restaurant reviews, auto-tagging Flickr photos by event, Frappr, pricing, travel data….

Because I was moving tables every five minutes, all my notes about the speed geeking are on paper but I’ll post more details after tomorrow’s speed geeking session. My faves so far:

  • ChunkLove, a gift registry finder
  • TrainCheck, which sends the next three BART times at any stop to your mobile via SMS or email
  • PhoTiger, which matches up your Flickr photos with your Eventful events to help auto-tag (and eventually auto-name and geocode) the photos based on the event data
  • MileGuru, which aggregates all your frequent flyer and frequent stayer (hotel) points and other data into a single place

Mashup Camp 2 Day 1: AJAX design patterns

Veneer was great, but short, so I’m on to Sarah Harmer’s You Were Here. I’m starting to slow down (it was a long day of travel yesterday and a long day of idea generation today) so I may not finish up all my posts today.

The next session for me was AJAX design patterns, which was good, although it focussed a bit too much on security issues for what I was looking for. There was some great stuff on UI and performance issues. The wiki page has all the technical details, including references to further JSON reading, so I’ll just touch on some of the things about AJAX UI design that stuck in my mind during the session:

  • Action of the back button: was the last user activity a navigation or an action? Can it be “undone” by navigating back, or is it appropriate to return to a higher level/home page?
  • Action of the refresh button.
  • URLs and permalinks: appending # fragments that don’t hit the server but are processed purely on the client to drive the AJAX calls (see the first sketch after this list). Implications for search engines (agents can’t index the pages directly and would need an alternative representation referenced via robots.txt, which still doesn’t handle relevancy through links), and for emailing permalink references.
  • Tradeoffs between user experience and technical issues.
  • Some actions need to be synchronous (e.g., “buy it” and other transactions), which requires forcing synchronicity in AJAX or breaking out of AJAX for that part of the transaction (see the second sketch after this list).
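To make the back button, refresh and permalink points concrete, here’s a minimal sketch of the fragment-identifier technique that was discussed: application state lives after the # in the URL, so navigation changes browser history without a server round trip, and back, refresh and emailed permalinks can all restore the view. The view names and the JSON endpoint are invented for illustration, and I’ve used the hashchange event and fetch for brevity; the same pattern can be implemented by polling location.hash on a timer and calling XMLHttpRequest directly.

```typescript
// Minimal sketch: keep AJAX view state in the URL fragment so that back,
// refresh, and emailed permalinks all work. Endpoint and views are hypothetical.

function renderView(viewId: string): void {
  // In a real app this is the AJAX call that updates part of the page.
  fetch(`/api/view/${encodeURIComponent(viewId)}`)
    .then((res) => res.json())
    .then((data) => {
      document.getElementById("content")!.textContent = JSON.stringify(data);
    });
}

// Navigating appends a # fragment; the server never sees it, but the
// browser pushes a history entry, so the back button now "undoes" it.
function navigateTo(viewId: string): void {
  window.location.hash = viewId; // fires the hashchange handler below
}

// Back/forward buttons land here, as does any in-page hash navigation.
window.addEventListener("hashchange", () => {
  renderView(window.location.hash.slice(1) || "home");
});

// Refresh, or the first load of an emailed permalink, restores state here.
window.addEventListener("load", () => {
  renderView(window.location.hash.slice(1) || "home");
});
```

Since a crawler doesn’t execute this script, the search engine implication in the notes follows directly: every view shares a single indexable URL unless some alternative static representation is published.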
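And for that last point, here’s what forcing synchronicity might look like in practice, next to the alternative of staying asynchronous but locking the UI until the transaction confirms. The /api/buy endpoint is hypothetical.

```typescript
// Option 1: a synchronous XMLHttpRequest (third argument false). Simple,
// but it blocks the entire browser UI until the server responds.
function buyItSync(itemId: string): boolean {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "/api/buy", false); // false = synchronous
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.send(JSON.stringify({ itemId }));
  return xhr.status === 200;
}

// Option 2: stay asynchronous, but disable the button so the "buy it"
// action can't be repeated or interleaved while the purchase is in flight.
function buyItAsync(button: HTMLButtonElement, itemId: string): void {
  button.disabled = true;
  fetch("/api/buy", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ itemId }),
  })
    .then((res) => alert(res.ok ? "Purchased!" : "Purchase failed"))
    .finally(() => { button.disabled = false; });
}
```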

Mashup Camp 2 Day 1: Mashdowns

As I mentioned in my previous post, I had to do all my blogging today offline because of the spotty wifi in the Computer History Museum, and I have to say that Windows Notepad makes a pretty sucky offline blogging tool. However, I’m relaxing back in my room listening to the newly-downloaded and extremely enjoyable Veneer (just available on iTunes, after I couldn’t buy the CD after a month of trying on Amazon.ca), cleaning up the blog posts and paper notes from today.

Following the kickoff session, we headed off to breakout sessions proposed by anyone and everyone during the kickoff. Each session was supposed to post its attendees’ notes on the wiki, and you can find the grid of sessions here, with links to the wiki pages containing the notes. I’ll link to the notes for each of the sessions that I attended.

The first one that I headed to was “Mashdowns: mashing for competitive advantage in rich client/enterprise applications”, led by Mike Fisher and Ben Widhelm from ElephantDrive. They see this as a second generation of mashups: more tightly integrated into desktop or enterprise applications, and more focussed on “doing” rather than “consuming”, which seems pretty much aligned with my ideas about BPM and mashups. I hate their term “mashdown”, however, preferring the more commonly used “enterprise mashup”. Really, the distinction between first and second generation mashups is primarily between consumer mashups and business/enterprise mashups.

We gathered a number of ideas about the difference between first and second generation mashups:

  • First generation mashups are about the “what”, and are primarily about aggregating/joining/federating data. They’re generally seen as useful by users (consumers), and because they’re focussed on the consumer market, they tend to be public, and developed rapidly and a bit loosely. The revenue model is usually based on ad revenues, since few end-users pay for the mashups.
  • Second generation mashups are about the “how”, and are about aggregating external and internal (enterprise) services. They’re useful to business for all the usual business ROI reasons: improving process efficiency, reducing IT costs and increasing business agility; like any other plan that reduces technology capital investment, they also tend to level the playing field for smaller companies since they can use the same technology as the big guys but not have to build it or buy it outright. Unlike the consumer mashups, however, they have to be industrial-strength, private and secure. Equally importantly, they have to be supported by some sort of service level agreement backed by appropriate high availability and disaster recovery scenarios, which most of the current API vendors are not willing to provide.

The key difference for me is that second generation mashups are about integrating into the business processes. This breakout was a significant conversation since it’s the first one that I’ve heard at either Mashup Camp where business processes were a major focus. I’m feeling very positive about BPM and Web 2.0 today.

We had a conversation about one of the main problems of enterprise mashups, which is their current lack of acceptance by IT. Part of this is IT attitudes: not trusting the external APIs, either in terms of data integrity or in terms of reliability, plus the NIH (not invented here) problem. An equally important part is the relative lack of readiness of the APIs themselves in terms of SLAs, authentication and other industrial-grade issues that would put external APIs on an equal footing with internal ones. Even with internal-only mashups, which use lighter-weight mashup techniques on internal APIs, there’s resistance to a new way of doing things. That really comes back to the question of the difference between a mashup and any other web services orchestration, especially as lightweight (non-WS-*) integration methods are used for faster application assembly internally.

This was a great session for focussing my thoughts on how to talk to my enterprise customers about mashups.

Michael Scherotter was also there from Mindjet, distributing copies of their application on flashdrives. Haven’t had a chance to install and try it out yet.

Mashup Camp 2 kickoff

David Berlind started Mashup Camp 2 a bit after 9am (which is great for us east coasters, but probably early for the locals) with the logistics and agenda framework for the day. As with Mashup Camp 1, and any other unconference, there is no real agenda, just time slots and rooms where anyone who has a topic of interest can facilitate a session. Kudos to David for getting this off the ground successfully (again!) and attracting almost 400 people here for the two days.

This was followed by all of the API/technology providers giving their 30-second spiel on what they do: EVDB, Yahoo Local (maps), AOL (mashup hosting, OpenAIM API, MapQuest, MusicNow), Microsoft Live, Commendo, Good Storm, Webalo, HotOrNot, Intel, Amazon, Plaxo, StrikeIron, OpenID, IBM, eBay (he introduced his company as “a small internet auction company”), Zazzle, Mindjet, O’Reilly, 411sync, Mobido, and at least one other that I missed. I don’t recall anyone from Google up there, but they have a strong enough presence here that that’s probably not required.

After that, the interesting part started: anyone interested in leading a session or making a presentation wrote their idea on a page, announced it into the microphone and stuck it on the schedule, with developers having first dibs on the time and space. There are a ton of interesting sessions proposed for the next two days: voting (as in political), data mining, PHP, user retention, music/movies, API versioning/backwards compatibility, using mashups for prototyping, mashups for non-geeks/small businesses, Google Checkout/AdWords mashup, client-side customization, incorporating mashups into desktop/enterprise environments (“mashdowns”), Ruby on Rails hands-on mashup development, wikis as a mashup platform (specifically twiki), social networking, API pricing models and licensing, content taxonomies, microformats and standardization for APIs, monetization of mashups, access control/authentication for feeds, security and identity, API developer programs, email mashups, aggregating profile data from different web sources, multimedia mashups, business-oriented mashups, mapping mashups (from the guy who developed Frappr), user-centricity, Google Gadgets, mobile mashups, open source social entrepreneurship and more.

I have no idea how I’m going to see all the things that I want to see. I do know that the wifi in the museum is spotty, and I’m having a hard time staying connected, so all this blogging will pile up for the end of the day.

Back at Mashup Camp

Or is that Mashup Camp 2.0? I’m back at the delightfully quirky Hotel Avante again, meeting up with some people who I met back at the original Mashup Camp in February, and starting to meet a bunch of new people. We’ll all be off to the Computer History Museum in the morning for the official start; tonight was a party by the pool (unfortunately I arrived a bit late and missed most of the action) and some great demos to a lot of people in a tiny room.

I had the Air Canada experience from hell getting here, which probably deserves a post of its own about their total lack of good customer service, as well as a web user interface that doesn’t bother to tell you that you just paid $855 for a ticket but don’t actually have a reserved seat on the plane…

SOA in OMG newsletter

The Spring OMG newsletter is available online (direct link to PDF) with a 2-page article “OMG and Service-Oriented Architecture”:

In essence, SOA is an architectural approach that seeks to align business processes with service protocols and the underlying software components and legacy applications that implement them.

So far, so good. Then they go on to say:

Both processes and services need to be carefully coordinated to assure an effective SOA implementation. You can’t really do SOA without a clear model of the business process to be supported.

Not sure that I fully agree with that: you have to have a clear model of your business process before you can implement SOA? Aren’t the underlying services supposed to be reusable even if the business process changes? Isn’t that really the whole point of SOA?
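To make my objection concrete, here’s a minimal sketch of the reuse argument, with entirely hypothetical service and process names: two different business processes consume the same service contract, and either process can be redesigned without touching the service.

```typescript
// Hypothetical sketch: one reusable service, two business processes.
// If the order process is redesigned, CustomerService doesn't change.

interface Customer { id: string; name: string; creditLimit: number; }

// The reusable service contract: stable even as processes evolve.
interface CustomerService {
  getCustomer(id: string): Promise<Customer>;
}

// Process 1: order fulfillment uses the service for a credit check.
async function fulfillOrder(svc: CustomerService, customerId: string, amount: number): Promise<string> {
  const customer = await svc.getCustomer(customerId);
  return amount <= customer.creditLimit ? "order approved" : "order rejected";
}

// Process 2: a completely different process reuses the same service,
// with no change to the service's contract or implementation.
async function handleComplaint(svc: CustomerService, customerId: string): Promise<string> {
  const customer = await svc.getCustomer(customerId);
  return `route complaint from ${customer.name} to a service rep`;
}
```

Modeling the processes certainly helps you decide which services are worth building, but the services shouldn’t break when the process model changes.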

And you can’t link your business processes to your service models without the modeling standards the OMG is developing as part of its Model Driven Architecture® (MDA®).

Oh, I get it now.

They do include a nice diagram showing where the OMG standards fit in one representation of an SOA environment (see the newsletter for the full-size version). You can see where BPMN, BPDM and BPEL fit in, which I talked about in my posts from the BPM Think Tank last week, plus other standards such as SBVR (Semantics of Business Vocabulary and Rules) for business rules.

I also like that they’re platform-independent about this, and that they don’t equate SOA with WS-*.

You can check out the newly-formed OMG SIG on SOA if you want to get involved in discussing this MDA approach to SOA.

BPM Think Tank wrapup

Since I only finished posting about yesterday’s sessions at the end of this morning, I decided to just do a final conference wrapup instead of separate wrapups for yesterday and today.

In general, the BPM Think Tank was great, and I’ll definitely attend again in the future. I learned a lot about some of the standards that I didn’t know much about before (like BPDM), and met some really smart people with lots of opinions on the topic of standards. It’s been so long since I was involved in any sort of standards work (AIIM in the early 90’s, and topographic data interchange formats for the Canadian Council of Surveying and Mapping back in the late 80’s) that I had forgotten both the frustrations of dealing with standards committees and the excitement of being able to contribute to a little bit of computing history that will make things work better for a lot of people.

I’m still mulling over the XPDL/BPDM conundrum (and, to a lesser extent, BPEL), but the fact that different standards bodies are all here participating is a good indicator that there is the collective will to head off problems like this. At last year’s Think Tank, discussions between BPMI and OMG around the competing graphical process models of BPMN and UML activity diagrams helped lead to the absorption of BPMI into OMG, with the merged organization championing a single standard, BPMN. We can only hope that something similar will happen with XPDL and BPDM in order to avoid future problems in the BPMN serialization domain.

I had the chance to meet several people who I had connected with online but never met face-to-face: Dana Morris of OMG, Bruce Silver, John Evdemon (who I’ll be having ongoing discussions with about BPM and Web 2.0) and others. Jeanne Baker, who did such a great job at keeping things moving along during the sessions, even remembered one of my posts from last year about a webinar that she gave on standards — she turned to me at lunch yesterday and asked “Did you write that blog post called ‘Alphabet soup for lunch‘?” — proof that people will remember if you mention them in print. I missed other people completely in the crowd (Phil, where were you?).

There were a few logistical problems (conference rooms way too cold, no free wifi, not enough herbal tea, and no free t-shirts with vendor logos, about which I heard a lot of whining when I got home), but these were only minor annoyances in an otherwise well-executed conference with excellent content.