Mashup Camp IV Day 1: Why DIY when APIs are available?

This was really Chris Radcliff’s bitch session on why people don’t just use the Eventful API instead of writing their own event functionality over and over again 🙂 but we discussed a number of interesting points that have analogies in any development environment:

Why people write their own functions even if something exists:

  • Discoverability, that is, they don’t know that the function exists externally
  • Lack of functionality (or perceived lack)
  • Lack of control over functionality, which is a sort of subset of the previous point
  • Lack of service level agreement
  • “Not invented here” syndrome
  • Complexity of external APIs

Why people shouldn’t write their own functions, but use one that already exists:

  • Someone else maintains the code
  • That particular function is not a core competency and not a competitive differentiator
  • It takes longer to get up and running if you write it yourself than if you use an existing API

There was an extended discussion of event APIs and functionality in general, which was not really the point of this session, but it’s an interesting case study for looking at the issues. There are tons of other examples: spam filtering, address lookups, geocoding; all of these are readily available from a couple of competing sources. Of course, it’s all a matter of timing: I can recall writing TIFF decompression and display algorithms in the late 1980s because nothing else existed, something that would never be considered now.

There are obvious differences between APIs that deliver content and those that manipulate content, with respect to both copyright issues and currency: if a provider is maintaining an up-to-date database of information behind the API (like Eventful, which has about 4 million future events in their database at any given time), then it may be much better positioned to deliver the right content than something that you build yourself.
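
As a concrete example of the “just use the API” argument, here’s a minimal sketch of querying Eventful’s event search over REST. The endpoint and parameter names are my recollection of their API and should be treated as assumptions, and you’d need your own app_key:

    # Minimal sketch: query Eventful for upcoming events instead of building
    # your own event database. Endpoint and parameter names are assumptions
    # based on Eventful's REST API; the app_key is a placeholder.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    params = urlencode({
        "app_key": "YOUR_APP_KEY",   # placeholder credential
        "keywords": "jazz",
        "location": "Toronto",
        "date": "Future",            # only events that haven't happened yet
    })
    with urlopen("http://api.eventful.com/rest/events/search?" + params) as resp:
        tree = ET.parse(resp)

    for event in tree.iter("event"):
        print(event.findtext("title"), "-", event.findtext("start_time"))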

Mashup Camp IV Day 1: Enterprise Mashups

My speed notes from the speed-geeking sessions are all on paper, so I’ll have to transcribe them later. In the meantime, however, I’m at the next session on enterprise mashups.

This was a bit slow-moving; I blame that on the post-lunch energy dip and the frenzy of speed-geeking that wore everyone out.

We talked around a number of subjects:

  • Enterprise mashups have a focus on multiple data sources, especially unstructured data, acting in part as a replacement for manual cut-and-paste.
  • Current IT development methodologies are not sufficiently agile to develop mashups, which led to a discussion of whether enterprise mashups should be done outside of IT: are mashups the next generation of end-user computing, replacing the Excel and Access applications created by the business units? If so, who’s responsible for the result of the mashup, and for the underlying data sources?
  • The current IT environment tends to be command-and-control, and doesn’t lend itself to enabling mashups in the business units. IT needs to unlearn scarcity of computing resources and learn abundance.
  • What’s the boundary between EAI and mashups? What’s the boundary between business processes and social collaboration?

Mashup Camp IV Day 1: Opening up the social web

Another vendor-proposed session (Plaxo), but with no formal presentation and not focused on Plaxo’s products, so not really commercial at all.

The issue is the multiple unlinked social networks to which we all belong, most of which aren’t open to data extraction or mashups in any way. For example, Facebook will link to a couple of different online address books (such as Gmail) to see if any of your contacts are already on Facebook, but there’s no programmatic way to do the same thing so that you could, for example, check whether any of your LinkedIn contacts are also on Facebook (something that I’ve been checking a lot lately as more business contacts start to find me on Facebook).

Most of the social networks are very much walled gardens, with no way to even get your own information out. LinkedIn allows you to download each individual contact as a vCard, but doesn’t allow bulk export or API access to that data.
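
To show how closed that is, here’s a trivial sketch of the bulk export that LinkedIn doesn’t offer: if you’ve saved each contact’s vCard by hand, merging them into one file is the easy part (pure-Python sketch; the directory layout is assumed):

    # Sketch: merge individually downloaded .vcf files into one bulk file,
    # since there's no bulk export or API access. Assumes a directory of
    # vCards saved one at a time from the site.
    from pathlib import Path

    cards = Path("linkedin_contacts")  # hypothetical directory of saved vCards
    with open("all_contacts.vcf", "w", encoding="utf-8") as out:
        for vcf in sorted(cards.glob("*.vcf")):
            out.write(vcf.read_text(encoding="utf-8").strip() + "\n")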

We listed data that should be opened up (i.e., made more easily accessible) in social networks; a rough sketch of what a portable record covering these might look like follows the list:

  • My profile
  • Who I know
  • Friends’ content
  • Permissions that I’ve set on people/objects (which might include some implied categorization, like the Flickr family/friends subsets)
  • Attention or activity with contacts
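
Pulling those together, a portable record might look something like this; all of the field names are invented for illustration, not taken from any standard:

    # Hypothetical sketch of a portable social-network record covering the
    # data listed above; field names are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class PortableSocialData:
        profile: dict                  # my profile: name, bio, URLs, ...
        contacts: list[str]            # who I know, as stable identifiers
        friend_content: list[dict]     # friends' content I'm allowed to see
        permissions: dict[str, str]    # per-person/object settings, e.g. "alice": "family"
        attention: dict[str, int] = field(default_factory=dict)  # interactions per contact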

We also discussed some of the problems with social networks, such as how you add people to your network but rarely delete them, even if you never interact with them any more, because it seems a bit harsh to just dump them from your network.

Getting back to the “set my data free” problem, there’s really a need for standards that would allow data to flow between sites that I allow to communicate. Although Plaxo provides some of that functionality, it’s not an open standard and it doesn’t interact with most of the social network sites; possibly something like Plaxo could be used to broker the data and relationships between these sites. LinkedIn’s toolbar for Outlook does a bit of this too, by allowing you to easily link up what you have in Outlook with what you have in LinkedIn; again, it’s not open, and it only covers that pair of data sources.

One issue is how to recognize the same person across sites: email address is the most commonly used key, but it’s not perfect because many people use different email addresses on different sites, like a business email on LinkedIn and a personal email on Facebook.
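
As a rough illustration of both the approach and its weakness, matching by normalized email address is only a few lines of code, but it only finds people who use the same address on both sites (the names and addresses below are made up):

    # Sketch: find which LinkedIn contacts are also on Facebook by matching
    # normalized email addresses. Imperfect, as noted above: people often
    # use different addresses on different sites.
    def normalize(email: str) -> str:
        return email.strip().lower()

    def overlap(linkedin_emails, facebook_emails):
        return ({normalize(e) for e in linkedin_emails}
                & {normalize(e) for e in facebook_emails})

    print(overlap(["Jo@Work.com", "pat@home.net"],
                  ["jo@work.com", "sam@example.org"]))  # -> {'jo@work.com'}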

Mashup Camp IV Day 1: AOL and Feed Mashups

Since sponsors are allowed to propose sessions, AOL proposed one on feed mashups, giving a short presentation/demo of their new customizable portal page (comparable to Netvibes or iGoogle) that also includes Mgnet, a way to do what they refer to as mashing up feeds, although it appears to be feed aggregation and filtering. Maybe that’s a good question: when does aggregation and filtering (which are basically the functions of a feed reader) become a mashup? Some of the interactive “mix & share” functionality appears similar to the “share this” functionality in Google Reader, where you can set certain posts (or even a whole folder/collection of third-party feeds) to become part of a new, customized feed that can be shared with others.
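
For contrast, here’s what the plain aggregation-and-filtering side looks like, using the feedparser library (the feed URLs are placeholders); whether this counts as a mashup is exactly the question above:

    # Sketch: merge several feeds, keep entries matching a keyword, and sort
    # by date -- aggregation and filtering, the feed-reader basics.
    import feedparser

    urls = ["http://example.com/a.rss", "http://example.com/b.rss"]  # placeholders
    entries = [e for url in urls for e in feedparser.parse(url).entries]
    matched = [e for e in entries if "mashup" in e.get("title", "").lower()]
    matched.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)
    for e in matched:
        print(e.get("title"), "-", e.get("link"))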

The cool part is a set of APIs (both REST and RPC) that allow this functionality to be accessed programmatically rather than interactively:

  • Manage users’ bookmarks and feed subscriptions, organized by tags or folders
  • Retrieve feed articles
  • Apply chained operations (sort, trim, html) during feed processing

This allows an application to access aggregated feeds as a URL, create a mixed feed from a folder or tag, or dynamically create a synthetic feed from several feeds. That makes it similar to Yahoo Pipes for feed manipulation, but with a bit more flexibility.
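
I didn’t capture the exact endpoints, but conceptually a chained call might look something like the URL built below; the host, path, and parameter names are entirely invented, and only the idea of chaining operations in a single request comes from the session:

    # Purely hypothetical illustration of chained feed operations (sort, then
    # trim, then render as HTML) expressed as URL parameters; endpoint and
    # parameter names are invented to show the concept, not AOL's actual API.
    from urllib.parse import urlencode

    base = "http://feeds.example.com/merge"  # invented endpoint
    query = urlencode({
        "tag": "bpm",                     # build a mixed feed from a tag
        "ops": "sort:date,trim:10,html",  # chained operations, applied in order
    })
    print(f"{base}?{query}")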

AOL also seems to be providing the only t-shirts at camp, since there’s no official Mashup Camp t-shirt this time; having scored a t-shirt, I’ve fulfilled my home obligations and can relax. 🙂

The session turned into an interesting discussion about widget standards, including how IBM and BEA are both supporting Google gadgets in their portals, making it somewhat of a de facto standard. Even the AOL guys admit that a standard widget format (even if it’s not theirs) is valuable, and they also support Google gadgets.

We also discussed how the difficulty of authentication for feed subscribers is part of what’s inhibiting adoption by businesses, particularly for delivering any sort of secure information to customers outside the firewall, such as a feed of transactions from your financial institution. AOL is using OpenID as a provider (every AOL username also corresponds to a set of OpenID credentials), but isn’t accepting OpenID. This seems to be the way that a lot of sites are going, which is not going to work until they all start accepting as well as providing credentials: providing OpenID credentials without accepting them is little better, in my opinion, than implementing a proprietary credentials scheme. One attendee pointed out, with some head-nodding around the room, that the dream of OpenID may actually be better than the practice, since most people don’t want a single point of failure for their online credentials: you might use OpenID for all the logins that don’t contain any sensitive information, so as to have a single signon around the web, but you’re unlikely to use it for financial and other critical sites.

I think that feeds are becoming more important in general, and also are going to start making some significant inroads within enterprises, as I saw at the recent Enterprise 2.0 conference. Inside the firewall, the credentials issue gets much easier, but there’s a much bigger cultural gap to using feeds as applications.

Mashup Camp IV, Day 1: Opening Session

A slow start, in spite of the announced 8:30am start time, and a smaller crowd than I remember from last year’s Mashup Camps, but a few familiar faces and lots of enthusiasm in a low-key sort of way. Like at every unconference that I’ve been to, there are a few minutes before the grid of sessions starts to fill up when I’m convinced that this is all a great waste of time; then people get up and propose interesting sessions, and I’m hooked.

David Berlind is our able host, as usual, and Kaliya Hamlin is facilitating the Open Space concepts for us, providing a bit of education on how an unconference works and getting people up in front of the room to propose sessions and sign up for speed-geeking.

You can keep an eye on the sessions grid here, which should eventually have links to notes from the individual sessions.

Scotiabank Toronto Waterfront Marathon

For those of you who know what a non-athletic person I am, don’t get too excited: I’m not running a marathon; I’m not even running. However, I am walking 5km on September 30th to raise money for the Fort York Food Bank, a charity to which I’ve donated in the past due to the diligent efforts of my friend Ingrid.

You can click here to sponsor me; all donations will receive a tax receipt (although that may only be good for those of us who pay taxes in Canada).

LongJump

I really hate going through a lengthy interview about a hot new product, only to have them tell me at the end that half of what they told me is off the record. Not embargoed until some near-future announcement date, just off the record. Grrr.

Other than that, I had a pretty good demo last week from Pankaj Malviya, CEO of LongJump, whom I missed at last week’s Enterprise 2.0 conference. LongJump is the brand for a service provided by Relationals, which has been in business since 2003 with a hosted CRM platform targeted at media companies. Bonus marks to Pankaj, who started into the presentation saying that their target users are business managers who organize information and create workflow, then said in an aside, “I see that you’re a business process management expert”, which means that either he’s looked at my blog or his PR person has briefed him well. 🙂

All of their solutions, including both those with the Relationals brand and the new LongJump hosted solutions, are focussed on the small-to-medium business market. LongJump, in fact, is built on the same underlying platform as the Relationals CRM, including components such as the MySQL database.

LongJump is a platform for creating data-driven, widget-based web applications, with a shared catalog for offering those applications by subscription to other users, including monetization of the applications. The application assembly environment is similar to an iGoogle home page or similar portal environments: widgets can be placed on the page, although they can’t exchange data or do other data mashup/filtering functionality like Yahoo Pipes. They have their own widget format, but it’s similar to the Google widget format, and they’re working on making it identical so that widgets created for Google can be used in LongJump.

LongJump Asset Tracking demo

The demo application was for asset tracking, and I didn’t see much difference from the seemingly endless array of lightweight web-based application development environments until he started showing me how to apply workflow to objects. There’s no graphical process mapper, but you can set states through which an object must pass and the predefined responses at each of those states, which in turn creates a sequence of tasks assigned to specific people or roles. The workflows can be triggered by data events, such as “renewal date less than 30 days”. This is crude from a BPM standpoint — it reminds me a lot of what IBM had in their Content Manager application a couple of years ago to do simple object-based workflow routing — but I haven’t seen anything else like it in this space. They plan to enhance this capability further, and I think that they could have a real competitive differentiator here.
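
To make the mechanism concrete, here’s a toy version of that style of workflow: states, a predefined response per state, and a data-event trigger. All of the names are invented; this illustrates the pattern, not LongJump’s actual implementation:

    # Toy sketch of object workflow: an asset moves through states, and a
    # data event ("renewal date less than 30 days away") fires a predefined
    # response that becomes a task for a specific person or role.
    from datetime import date, timedelta

    RESPONSES = {"renewal_due": "create renewal task for owner"}

    def check_renewal(asset: dict) -> str:
        if (asset["state"] == "active"
                and asset["renewal"] - date.today() < timedelta(days=30)):
            asset["state"] = "renewal_due"
            print("task:", RESPONSES[asset["state"]], "->", asset["owner"])
        return asset["state"]

    check_renewal({"state": "active",
                   "renewal": date.today() + timedelta(days=10),
                   "owner": "pat"})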

Each application is “packaged” for publication on their catalog; for example, the Asset Tracking application above consists of all of the tabs that you see along the top (Home, Directory, PCs and Servers, Phones, Equipment, Reports, About), where each of those tabs has its own set of widgets and the underlying data sources. The catalog then makes published applications available for subscription by others, and handles the monetization.

LongJump is in an early (closed) beta now, with an open beta expected by end of year — I find this a longish timeline for this sort of application, but it’s coming from a more traditional company so I expect that their internal test and release procedures are different from the usual hair-on-fire Web 2.0 startup. They received 1,800 applicants for the closed beta, and have about 10 customers on there now.