Mashup Camp IV: Speed Geeking 2 and wrap-up

It’s taken me a couple of days to get my notes transcribed from Mashup Camp’s last speed geeking session, since I’ve spent the weekend frolicking around in San Francisco. I really have to learn to do this before I kill all my brain cells on a weekend…

I saw far fewer speed geeking sessions than the previous day, in part because some of the demonstrators just didn’t bother to participate in the second round. I’d like to point out to the demonstrators that if you can’t manage to wrap up your predetermined 6-minute conversation even when a siren goes off in the room to signal the start of the next round, don’t expect me to believe that you could ever deliver anything on schedule.

Here’s what I saw:

QEDwiki: Not sure what the mashup was called, but it was built on IBM’s QEDwiki and combined Google maps with Upcoming.org (or should I say upcoming.yahoo.com) events, allowing you to search for events by subject/keyword, plot them on a map, then click through directly to the Upcoming event page. As with other QEDwiki sites that I’ve seen, this was multi-pane, with the top pane containing an RSS feed of events matching the search criteria, and the bottom pane containing the map. Future plans include adding Eventful and other event listings.
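Just to make the pattern concrete, here’s a minimal sketch of the feed half of that kind of mashup in Python; the feed URL is hypothetical, and I’m assuming the events feed carries W3C geo namespace tags (geo:lat/geo:long), as Upcoming’s feeds did at the time, as I recall:

    # Sketch: pull an events RSS feed and extract title/coords for map markers.
    import urllib.request
    import xml.etree.ElementTree as ET

    GEO = "http://www.w3.org/2003/01/geo/wgs84_pos#"

    def events_from_feed(feed_url):
        root = ET.fromstring(urllib.request.urlopen(feed_url).read())
        events = []
        for item in root.iter("item"):
            lat = item.findtext(f"{{{GEO}}}lat")
            lon = item.findtext(f"{{{GEO}}}long")
            if lat and lon:
                events.append({
                    "title": item.findtext("title"),
                    "link": item.findtext("link"),  # click through to the event page
                    "lat": float(lat),
                    "lon": float(lon),
                })
        return events  # each entry becomes a marker on the map pane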

Cold Call Assistant: Another QEDwiki mashup, this one combining Salesforce.com data with a variety of other sources. You create a sales campaign on Salesforce as usual, which consolidates a list of contacts and their details, and this mashes it up with competitor information, news about the selected competitor or IBM (this was created by an IBM’er), local weather, restaurants and golf courses. The idea is that it provides context for a conversation that you’re about to have with the customer that you’re cold-calling for this sales campaign; why they picked golf courses rather than strip clubs, I’m not sure. 🙂 There’s no feedback to Salesforce.com, though: if anything comes of the call, you have to go back into Salesforce.com and enter the data manually.

Best Price Food Shopping: An interesting idea for a mashup: it uses Javascript to fetch and parse one RSS price feed per product (such as milk or bread), then plots the results on Google Maps to show where to buy your groceries. There’s some colour coding for the cheapest and most expensive stores, but the whole idea relies on the stores providing feeds of the underlying price data, which isn’t currently happening. Useful for people with lots of kids who care about the price of milk, which isn’t me; I do much of my shopping in Toronto’s Chinatown, where I buy fruits and vegetables for pocket change.
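A rough sketch of the price-feed side in Python (the feed URL and the convention of embedding the price in the item title are my assumptions, since no real store feeds exist yet):

    # Sketch: one RSS feed per product, each item a store's price.
    import re
    import urllib.request
    import xml.etree.ElementTree as ET

    def store_prices(feed_url):
        """Parse items like 'FoodCo - milk - $2.99' into {store: price}."""
        root = ET.fromstring(urllib.request.urlopen(feed_url).read())
        prices = {}
        for item in root.iter("item"):
            title = item.findtext("title", "")
            match = re.search(r"\$([\d.]+)", title)
            if match:
                prices[title.split(" - ")[0]] = float(match.group(1))
        return prices

    def colour_code(prices):
        """Green for the cheapest store, red for the dearest, grey otherwise."""
        cheapest = min(prices, key=prices.get)
        dearest = max(prices, key=prices.get)
        return {store: "green" if store == cheapest
                else "red" if store == dearest else "grey"
                for store in prices}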

Mashup Telephone: Probably the funniest (and most useless) mashup that I’ve seen, the mashup version of the telephone game: you know, the one where you whisper something in someone’s ear, they do the same to the person next to them, and so on until you’re all laughing about how scrambled it got by the time it reached the 10th person. In this case, a search term was passed sequentially through successive mashup APIs to see what came out the other end.

Flickrball: Another useless but charming mashup: a game where you have to get from one word to another in six moves (the old Six Degrees of Kevin Bacon thing) using tags on Flickr photos. Some nice eye candy in the UI, but it wouldn’t otherwise captivate me.

Seegest: A social movie rating site, where you rate what you’ve seen, what you’d like to see, and what you own. If your friends want to see the same movie as you, it helps to facilitate a movie night. If you own movies that you’re willing to lend, it can match them up with your friends (to whom presumably you’d be willing to loan a movie). It uses Yahoo authentication (so you’ll need a Yahoo ID to use it), plus feeds from Amazon for movie/DVD information and trailer videos from YouTube.

The last breakout session of the day, scheduled for after the speed geeking, sort of didn’t happen; most people were just hanging around chatting or demonstrating, and no one was in the room scheduled for the session that I had planned to attend. Given the opportunity to get north of the valley towards San Francisco before the 3pm car pool lanes kicked in, I headed out, skipping the closing session and the awards. At 2:58, I crossed the point just south of San Francisco airport where the car pool lanes cease to exist, and continued north for my weekend in San Francisco before the start of BPM Think Tank on Monday.

This was my third Mashup Camp, and likely my last; in fact, if I hadn’t been coming down for BPM Think Tank anyway, I probably wouldn’t have attended this one. I enjoy Mashup Camp, but I’ve seen a lot of the stuff already, or am tracking it online; since I don’t write code any more, much of this is a bit peripheral to what I do, making it difficult to justify both the travel expenses and the time away from paying clients. Maybe I’ll be back when Mashup Camp hits Toronto; there’s certainly a strong enough tech community and BarCamp community to support one there.

Mashup Camp IV: Speed Geeking 1

Taking a break from the sessions just before lunch; there’s nothing that I want to go to right now, and this gives me a chance to catch up on my notes from yesterday’s speed geeking before today’s session starts after lunch.

Keep in mind that each session was a maximum of six minutes long (shorter if the presenter didn’t actually restart when the siren went off), so there are likely to be errors and omissions here.

Here’s what I saw yesterday:

Voice + TWiki: This is an integration of LignUp’s VOIP service with TWiki, allowing voice content to be added to the wiki environment. The author claimed that this was for non-technical people (he actually used the phrase “over 40”) to be able to add content, but TWiki is an easy enough environment that I don’t think that’s really necessary; one of the use cases that he showed is a more compelling reason: someone out in the field without internet access who wants to add a comment to a particular page related to that field location. For example, someone at a building site sending in a verbal report of what’s going on; I can also imagine insurance adjusters doing something similar. If you’re on the wiki and doing it interactively, clicking a button causes LignUp to call your phone and prompt you through the recording process. If you’re in the field, you can call a number, enter a PIN and the desired project (page) ID, and record. The next step for this type of integration would be to convert the voice to text, although I’m not sure if LignUp offers that sort of service; I suspect that in any sort of inspection scenario, you wouldn’t want to trust an automated conversion but would still want the voice recording.

5thbar: Not the place that you go after the 4th bar, 5thbar is a mashup of information about mobile phones and accessories that provides a good one-stop site. It combines blogs and news via feeds, videos via the YouTube API, listings of the device for sale via the eBay API, and allows signed-in users to add tags and reviews. Although some carriers provide a subset of this information for the phones that they carry, 5thbar covers information for a wide range of phones across multiple US and Canadian carriers. It doesn’t yet provide comparison charts between devices, although the author sounded like he’s considering that.

Fast Mash: This demonstration comes from a university research project, showing how unstructured data can be extracted from sites and served up in a more structured format through the use of a reference data set in the same domain of knowledge. The example that he showed was taking “cars for sale” listings from Craigslist and creating a structured, searchable data set, more like a relational/tabular form, for further filtering and display (potentially as a data feed to other mashups). He did this in part by using Edmunds’ car buying guide as a reference data set in order to understand the particular tokens that might occur in the unstructured data; this is the key to the technique.
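The core idea is simple enough to sketch in a few lines of Python; the reference vocabulary and the listing text here are invented stand-ins for the Edmunds data and the Craigslist listings:

    # Sketch: use a structured reference set to pull fields out of free text.
    import re

    REFERENCE = {
        "make": {"honda", "toyota", "ford"},
        "model": {"civic", "corolla", "mustang"},
    }

    def parse_listing(text):
        record = {"raw": text}
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            for field, vocabulary in REFERENCE.items():
                if token in vocabulary:
                    record[field] = token
            if re.fullmatch(r"(19|20)\d{2}", token):
                record["year"] = int(token)
        price = re.search(r"\$(\d[\d,]*)", text)
        if price:
            record["price"] = int(price.group(1).replace(",", ""))
        return record

    # parse_listing("2004 Honda Civic EX, low miles - $7,500") yields
    # year=2004, make='honda', model='civic', price=7500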

ClubStumbler: Finally, someone’s doing something useful with Google Maps: a way to plot the optimal route between a number of different bars/clubs that you select in your area. 🙂 This uses the Eventful API to find out what’s going on at the clubs, as well as Google and Flickr APIs, plus the author’s own route-optimization API, which he’s developing and commercializing. Seriously, though, this could be used in a number of commercial and business applications, such as optimizing delivery routes or real estate tours.
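His actual routing engine wasn’t shown, but the underlying problem is the classic travelling salesman; a nearest-neighbour heuristic in Python gives the flavour (coordinates invented, and real routing would use street distances rather than straight lines):

    # Toy route planner: visit each stop, always walking to the nearest one left.
    import math

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])  # crude flat-earth distance

    def crawl_order(stops, start):
        """stops: {name: (lat, lon)}; returns names in visiting order."""
        remaining = dict(stops)
        route, here = [], start
        while remaining:
            nearest = min(remaining, key=lambda name: distance(here, remaining[name]))
            route.append(nearest)
            here = remaining.pop(nearest)
        return route

    bars = {"The Dive": (37.77, -122.41), "Tiki Hut": (37.78, -122.42),
            "Last Call": (37.76, -122.40)}
    print(crawl_order(bars, start=(37.775, -122.415)))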

Plaxo Pulse: This mashup discovers information about all of your Plaxo contacts and feeds it back to you in a feed reader-type style (or it can be subscribed to in a regular feed reader). This includes any information found about the people from blogs, Flickr, Digg and other locations: sort of an automated identity aggregator based purely on feeds and RSS APIs from these sources. Although I don’t recall hearing the details, I suspect that the auto-discovery is based on some combination of the information in the Plaxo contact record plus searches on the various content sites by name and email address, since many sites allow positive identification of a person if you know their email address. There’s also a new version of Plaxo out that looks less virus-like than its predecessors; I might give it a look.

Chime.tv: This is essentially a playlist of web-based video extracted from multiple sources (e.g., YouTube, Google Video) using the media RSS format, and played as a continuous stream. There are several pre-made “channels” of content of certain types, or you can create your own channels for private use or sharing. The Find And Play feature allows you to enter either a website URL that contains video or search terms, and get back a continuous-playing playlist of matching videos so that you don’t have to click your way through each result sequentially.

Billhighway.net: This is a social money website that allows you to collect money from friends, family, roommates, etc. for specific events or purposes, such as “mom and dad’s anniversary present” or “this month’s electricity bill”. You organize a group around whatever the event is that requires payment, then send out invoices to people. They can pay by eCheck or credit card, even if they don’t have a PayPal account, and the person organizing the group doesn’t need a credit card merchant account to accept payment by credit card. There is a 3% fee, of course, since those transfers have to be paid for in some way through the various financial services organizations that billhighway uses to process payments behind the scenes. Payments to the group are organized and reported as such, so that you know how much money you received by group/event rather than just by person, and you can get RSS feeds of some of the data on the site, such as transactions with your contacts. US$ only at this time.

di.ngb.at: Although that domain’s not actually used, that name is on the site that was demonstrated, so I’ve used it for naming purposes. I found this mashup particularly interesting since I’ve wanted to do something like this with Toronto’s library system ever since I started attending Mashup Camp last year. This uses an Amazon wish list to hold books that you want to read (since the Amazon API can retrieve the wish list using only your email address), then looks up the books on several different library sources (OCLC and LibraryThing were mentioned) to generalize the search by mapping the single ISBN from the Amazon wish list to all ISBNs for that item, such as paperback, hardcover, audio and video versions. Metadata is also extracted from Amazon and the other sources for display, and the local library systems are searched (in this case, the author lives at the intersection of four overlapping library systems, so he searches all of them) to present a list of what’s available at the branches and allow holds to be placed on items. I so want this for myself in Toronto!
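The ISBN-generalization step is the clever bit; OCLC’s xISBN web service did exactly this kind of mapping. Here’s a sketch of how it might be called from Python, with the caveat that the endpoint and JSON shape are from my memory of that service and may not be exact:

    # Sketch: map one ISBN to the ISBNs of all editions via OCLC xISBN.
    import json
    import urllib.request

    def all_editions(isbn):
        url = (f"http://xisbn.worldcat.org/webservices/xid/isbn/{isbn}"
               "?method=getEditions&format=json")
        data = json.load(urllib.request.urlopen(url))
        isbns = set()
        for edition in data.get("list", []):
            isbns.update(edition.get("isbn", []))
        return isbns  # paperback, hardcover, audio... search the catalogue for all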

LignUp: I’d already seen one mashup created using LignUp, and this was another one by someone from LignUp, showing how to add voice alerts and SMS messages to system administration functionality, mashed up with the Intel Web 2.0 TDK. For example, a part fails or storage space starts to run low, and the designated system administrator receives an automated call or voice message, then can enter their current location by zip code and receive an SMS message with the address of the closest location to buy replacement parts. I moved on to the next mashup and found that it was also LignUp, showing voice mashups within Facebook: voice annotation of pictures by calling in, or web calling out, either for 2- or 3-person calls or an automated broadcast message to an entire group. Another application, ReachMeRules, handled inbound calls and voice mails through the use of a cut-out number that masks your real phone number, and can provide specific responses to specific inbound numbers. The latter sounds similar to some of the functionality on Iotum, except that LignUp is offering a development platform that allows this to be created, not a specific fixed service.

SnapLogic: SnapLogic showed a mashup that highlighted the functionality of their data integration framework (available as open source under GPL), extracting data from Quickbooks and Salesforce.com, matching it up and displaying it as a combined table. Although the methods of extracting from these two data sources are quite different, the SnapDojo mashup demo showed how they can be wrapped and then consumed in a consistent fashion using SnapLogic’s APIs via JSON.

Where.com: The mashup by Where.com showed Google Street View on a mobile phone, using techniques originally discovered during WhereCamp. They have support for Sprint phones — I saw a Samsung demonstrated — for an extra $2.99 on your monthly bill, and unofficial support for the Blackberry. I believe that it’s all running in a mobile browser rather than downloading an app to the phone, although I’m not completely sure.

Twitterlicious and Bookmark Cleaner: Twitterlicious is a mobile app that allows you to capture your tweets (if you don’t know what Twitter or tweets are, move on) as del.icio.us or Ma.gnolia bookmarks with the tag “tweet” so that you can review them later; this is especially useful when a tweet contains a URL that you can’t open on your mobile device. They’re stored as private bookmarks, so I’m not sure if they’ll get picked up in the automatic Links blog posts that I create from my del.icio.us bookmarks; obviously, I wouldn’t want that to happen. The use of del.icio.us to store the tweets for later review then necessitated the Bookmark Cleaner, a utility to delete all bookmarks with a certain tag; he’s also extended this to do some other useful cleanups on your bookmarks, such as finding and deleting all bookmarks that point to a 404 (page not found), and finding all bookmarks that appear to point to a malware or phishing site, which can then be previewed using snap.com and deleted as required. In the future, he also wants to find 301s (permanent redirects) and replace the bookmark URL with the redirect target.
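The tag-based cleanup maps naturally onto the old del.icio.us v1 API; here’s a hedged Python sketch (the endpoints and auth realm are as I recall them from that era, so treat the details as historical rather than definitive):

    # Sketch: delete every del.icio.us bookmark carrying a given tag.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    API = "https://api.del.icio.us/v1"

    def delete_by_tag(user, password, tag):
        auth = urllib.request.HTTPBasicAuthHandler()
        auth.add_password("del.icio.us API", API, user, password)
        opener = urllib.request.build_opener(auth)

        posts = ET.fromstring(opener.open(f"{API}/posts/all?tag={tag}").read())
        for post in posts.iter("post"):
            url = urllib.parse.quote(post.get("href"), safe="")
            opener.open(f"{API}/posts/delete?url={url}")  # one call per bookmark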

Today’s speed geeking is just about to start, I’m out of here…

Mashup Camp IV Day 2: Automating web services and mashups

I’m in a session on automating the discovery and consumption of web services to create mashups. Coming from the enterprise integration side, lots of this stuff is pretty familiar to me: using a directory service (e.g., UDDI) to discover service providers in both the corporate SOA and internet web services (SOAP, REST, AJAX, XML-RPC, JSON, etc.) and extract the service description (e.g., WSDL), but he’s talking about adding a lot more intelligence to the discovery stage.

They’re making some pretty esoteric points about RDF, OWL and the semantic web, and this moved for a while into a two-way conversation between the main presenter and someone from a vendor who is obviously deeply into these issues as well.

Then we got to an interesting echo of yesterday’s session on why DIY when there are APIs available: someone stated that he was more likely to write something himself because it’s too difficult to find services/APIs on the web. The other side was considered too: what if you put an API out there, and people use it in “stupid” (i.e., inefficient) ways that bring down your service? I think that the API developers need to put some checks and balances in place, like usage limits per API key; someone from Google pointed out that if you don’t do that, people won’t take care to optimize their code to minimize their API calls. In fact, there are services available to handle API keys and their usage, such as Mashery.
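Per-key limiting is easy to sketch; here’s a minimal fixed-window counter in Python, purely illustrative and not any particular vendor’s implementation:

    # Sketch: allow at most max_calls per API key per time window.
    import time
    from collections import defaultdict

    class KeyLimiter:
        def __init__(self, max_calls, window_seconds=60):
            self.max_calls = max_calls
            self.window = window_seconds
            self.counts = defaultdict(int)  # (key, window number) -> call count

        def allow(self, api_key):
            bucket = (api_key, int(time.time() // self.window))
            self.counts[bucket] += 1
            return self.counts[bucket] <= self.max_calls

    limiter = KeyLimiter(max_calls=100)   # 100 calls per key per minute
    if not limiter.allow("some-client-key"):
        pass  # reject the request (e.g., HTTP 429) instead of doing the work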

Mashup Camp IV Day 2: Google Mashup Editor

Best Mashup Camp quote: “music is to mashups as porn was to the internet: it’s what drives it”.

I’m in the session on “Mashing” client-side mashup tools, where Jason Cooper from Google is demonstrating a mashup called Jookebox that he created using multiple Yahoo Pipes, such as one to retrieve album tracks from Amazon, assembled using the Google Mashup Editor (for which, by coincidence, I received a closed-beta account just yesterday).

You build the entire web page in the browser-based Mashup Editor in XML format; the big difference between Pipes and GME is that Pipes outputs an RSS feed whereas GME outputs a web page. We had a discussion about how mashup tools can often be categorized into enablers (which build the widgets, data feeds, etc. from underlying data sources, like Pipes) and builders (which assemble the components into a mashup, like GME).

Google’s goals in building their mashup editor were to remain standards-based and, eventually, to open it up for extensions. There’s a whole gallery of mashups built using GME here (weirdly, not searchable…), and if you go to any of these, there’s a “View Source” link in the top right that allows you to grab the source to learn how it was written. There are a lot of mapping applications there, obviously, but there are also things like a simple feed reader and a task list with nary a map in sight (or in site, if you like bad geek puns). You’ll also find a list of resources in the sidebar of the gallery page, such as how to get started, event handling, using the JavaScript API, etc.

We discussed a number of competitors, such as IBM’s QEDwiki, BEA’s Pages and Bungee Labs, although the Google guys state that these products play in a different place than them: GME is much more of a developer tool, since it’s basically a browser-based text editor that you drop code into rather than a drag-and-drop environment. They may decide to add nicer UI stuff in the future, such as a design view to accompany the code view.

Jason also talked about more complex mashups, such as using Dapper to parse a page into a more structured data source, feed it into Pipes for further slicing and dicing, then take the output feed from that and create the mashup using Google Mashup Editor.

We ended up with a discussion about the use of Google’s geocoding in a GME-created mashup; currently, all GME apps use a single geocoding API key, so there’s no issue of going over your daily limit, although there may be changes to this in the future.

Product roadmap:

  • Open up the beta by the end of this quarter
  • Allow mashups to be hosted on other domains
  • Add feeds from Google Calendar and other sources
  • Add new UI widgets

I’m looking forward to trying it out.

Mashup Camp IV Day 1: Why DIY when APIs are available?

This was really Chris Radcliff’s bitch session on why people don’t just use the Eventful API instead of writing their own event functionality over and over again 🙂 but we discussed a number of interesting points that have analogies in any development environment:

Why people write their own functions even if something exists:

  • Discoverability, that is, they don’t know that the function exists externally
  • Lack of functionality (or perceived lack)
  • Lack of control over functionality, which is a sort of subset of the previous point
  • Lack of service level agreement
  • “Not invented here” syndrome
  • Complexity of external APIs

Why people shouldn’t write their own functions, but use one that already exists:

  • Someone else maintains the code
  • That particular function is not a core competency and not a competitive differentiator
  • It takes longer to get up and running if you write it yourself than if you use an existing API

There was an extended discussion of event APIs and functionality in general, which was not really the point of this session, but it’s an interesting case study for looking at the issues. There are a ton of other examples: spam filtering, address lookups, geocoding; all of these are readily available from a couple of competing sources. Of course, it’s all a matter of timing: I can recall when we wrote TIFF decompression and display algorithms in the late 1980s because nothing else existed, something that would never be considered now.

There are obvious differences between APIs that deliver content and those that manipulate content, with respect to both copyright issues and currency: if an API maintains an up-to-date database of information behind it (like Eventful, which has about 4 million future events in its database at any given time), then it may be much better positioned to deliver the right content than something that you build yourself.

Mashup Camp IV Day 1: Enterprise Mashups

My speed notes from the speed geeking sessions are all on paper, so I’ll have to transcribe them later. In the meantime, however, I’m at the next session on enterprise mashups.

This was a bit slow-moving; I blame that on the post-lunch energy dip and the frenzy of speed-geeking that wore everyone out.

We talked around a number of subjects:

  • Enterprise mashups have a focus on multiple data sources, especially unstructured data, acting in part as a replacement for manual cut-and-paste.
  • Current IT development methodologies are not sufficiently agile to develop mashups, leading to the discussion about whether enterprise mashups should be done outside of IT: are mashups the next generation of end-user computing, replacing Excel and Access applications created by the business units? If so, who’s responsible for the result of the mashup, and for the underlying data sources?
  • The current IT environment tends to be command-and-control, and doesn’t lend itself to enabling mashups to occur in the business units. IT needs to unlearn scarcity of computing resources, and learn abundance.
  • What’s the boundary between EAI and mashups? What’s the boundary between business processes and social collaboration?

Mashup Camp IV Day 1: Opening up the social web

Another vendor-proposed session (Plaxo), but with no formal presentation, and not really about Plaxo’s products, so not commercial at all.

The issue is all the multiple unlinked social networks to which we all belong, most of which aren’t open to data extraction or mashup in any way. For example, Facebook will link to a couple of different online address books (such as Gmail) to see if any of your contacts are already on Facebook, but there’s no programmatic way to do the same so that you could, for example, check to see if any of your LinkedIn contacts are also on Facebook (something that I’m checking out a lot lately as more business contacts start to find me on Facebook).

Most of the social networks are very much walled gardens, with no way to even get your own information out. LinkedIn allows you to download each individual contact as a vCard, but doesn’t allow for bulk export or API access to that data.

We listed data that should be opened up (i.e., made more easily accessible) in social networks:

  • My profile
  • Who I know
  • Friends’ content
  • Permissions that I’ve set on people/objects (which might include some implied categorization, like the Flickr family/friends subsets)
  • Attention or activity with contacts

We also discussed some of the problems with social networks, such as how you add people to your network but rarely delete them, even if you never interact with them any more, because it seems a bit harsh to just dump them from your network.

Getting back to the “set my data free” problem, there’s really a need for standards that would allow data to flow between sites that I allow to communicate. Although Plaxo provides some of that functionality, it’s not an open standard and it doesn’t interact with most of the social network sites; possibly something like Plaxo could be used to broker the data and relationships between these sites. LinkedIn’s toolbar for Outlook does a bit of this too, by allowing you to easily link up what you have in Outlook with what you have in LinkedIn; again, it’s not open and only covers that pair of data sources.

One issue is how to recognize the same person across sites: email address is most commonly used, but not perfect because many people use different email addresses for different sites, like a business email on LinkedIn and a personal email on Facebook.
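To make the matching idea concrete, here’s a small Python sketch; comparing hashed, normalized addresses (rather than swapping raw emails) is my own refinement for illustration, not something any of the sites described were doing:

    # Sketch: find members two sites have in common by normalized email hash.
    import hashlib

    def fingerprint(email):
        return hashlib.sha1(email.strip().lower().encode()).hexdigest()

    def shared_members(emails_a, emails_b):
        hashes_b = {fingerprint(e) for e in emails_b}
        return [e for e in emails_a if fingerprint(e) in hashes_b]

    # Misses are expected: a business address on LinkedIn and a personal one
    # on Facebook hash differently, which is exactly the weakness noted above.
    print(shared_members(["Pat@Work.example"], ["pat@work.example", "pat@home.example"]))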

Mashup Camp IV Day 1: AOL and Feed Mashups

Since sponsors are allowed to propose sessions, AOL proposed a session on feed mashups where they gave a short presentation/demo of their new customizable portal page (comparable to Netvibes or iGoogle) that also includes Mgnet, a way to do what they refer to as mashing up feeds, although it appears to be feed aggregation and filtering. Maybe that’s a good question: when does aggregation and filtering (which are basically functions of a feed reader) become a mashup? It appears that some of the interactive “mix & share” functionality is similar to the “share this” functionality in Google Reader, where you can set certain posts (or even a whole folder/collection of third-party feeds) to become part of a new, customized feed that can be shared with others.

The cool part is a set of APIs (both REST and RPC) that allow this functionality to be accessed programmatically rather than interactively:

  • Manage users’ bookmarks and feed subscriptions, organized by tags or folders
  • Retrieve feed articles
  • Apply chained operations (sort, trim, html) during feed processing

This allows an application to access aggregated feeds as a URL, create a mixed feed from a folder or tag, or dynamically create a synthetic feed from several feeds. That makes it similar to Yahoo Pipes for feed manipulation, but with a bit more flexibility.
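To show what “chained operations” means in practice (this is a conceptual Python sketch, not AOL’s actual API), each operation takes a list of feed entries and returns a new one, so they compose in whatever order the caller specifies:

    # Sketch: feed operations that chain, e.g. sort by date then trim to N items.
    from datetime import datetime

    entries = [
        {"title": "Older post", "published": datetime(2007, 7, 1)},
        {"title": "Newer post", "published": datetime(2007, 7, 15)},
        {"title": "Oldest post", "published": datetime(2007, 6, 20)},
    ]

    def sort_op(items):
        return sorted(items, key=lambda e: e["published"], reverse=True)

    def trim_op(n):
        return lambda items: items[:n]

    def apply_chain(items, operations):
        for op in operations:
            items = op(items)
        return items

    # "sort, then trim to 2", the way chained API parameters imply
    print(apply_chain(entries, [sort_op, trim_op(2)]))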

AOL also seems to be providing the only t-shirts at camp, since there’s no official Mashup Camp t-shirt this time; having scored a t-shirt, I’ve fulfilled my home obligations and can relax. 🙂

The session turned into an interesting discussion about widget standards, including how IBM and BEA are both supporting Google gadgets in their portals, making it somewhat of a de facto standard. Even the AOL guys admit that a standard widget format (even if it’s not theirs) is valuable, and they also support Google gadgets.

We also discussed how the difficulties with authentication for feed subscribers are part of what’s inhibiting adoption by businesses, particularly for delivering any sort of secure information to customers outside the firewall, such as a feed of transactions from your financial institution. AOL is using OpenID as a provider (every AOL username also corresponds to a set of OpenID credentials), but isn’t accepting OpenID. This seems to be the way that a lot of sites are going, and it’s not going to work until they all start accepting as well as providing credentials: providing OpenID credentials without accepting them is little better, in my opinion, than implementing a proprietary credentials scheme. One attendee pointed out, with some head-nodding around the room, that the dream of OpenID may actually be better than the practice, since most people don’t want a single point of failure for their online credentials: you might use OpenID for all the logins that don’t contain any sensitive information so as to have a single signon around the web, but you’re unlikely to use it for financial and other critical sites.

I think that feeds are becoming more important in general, and also are going to start making some significant inroads within enterprises, as I saw at the recent Enterprise 2.0 conference. Inside the firewall, the credentials issue gets much easier, but there’s a much bigger cultural gap to using feeds as applications.

Mashup Camp IV, Day 1: Opening Session

A slow start, in spite of the announced 8:30am start time, and a smaller crowd than I remember from last year’s Mashup Camps, but a few familiar faces and lots of enthusiasm in a low-key sort of way. Like every unconference that I’ve been to, there’s a few minutes before the grid of sessions starts to fill up when I’m convinced that this is all a great waste of time, then people get up and propose interesting sessions, and I’m hooked.

David Berlind is our able host, as usual, and Kaliya Hamlin is facilitating the Open Space process for us, providing a bit of education on how an unconference works and getting people up in front of the room to propose sessions and sign up for speed geeking.

You can keep an eye on the sessions grid here, which should eventually have links to notes from the individual sessions.