Tag Archives: BigDataTO

Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City initiative. This initiative is in part about using “smart” technology – by which he appears to mean well-designed, consumer-facing applications – as well as good mobile infrastructure to support an ecosystem of startups and other small businesses creating new technology solutions. He gave an example from the city’s transportation department, where historical data is used to analyze traffic patterns, allowing for optimization of traffic flow and predictive modeling for future traffic needs due to new development. This includes input into projects such as the King Street Pilot Study, which goes into effect later this year and will restrict private vehicle traffic on a stretch of King in order to optimize streetcar and pedestrian flows. In general, the city has no plan to monetize data, but prefers to use city-owned data (which is, of course, owned by the public) to foster growth through Open Data initiatives.

There were some questions about how the city will deal with autonomous vehicles, short-term rentals (e.g., Airbnb) and other areas where advancing technology is colliding with public policies. Kolm also spoke about how the city needs to work with the startup/small business community to bring innovation into municipal government services, and how our extensive network of public libraries is an untapped potential channel for civic engagement. For more on digital transformation in the city of Toronto, check out my posts on the TechnicityTO conference from a few months back.

I was going to follow this session with the one on intelligent buildings and connected communities by someone from Tridel, which likely would have made an interesting complement to this presentation, but unfortunately the speaker had to cancel at the last minute. That gives me a free hour to crouch in a hallway by an electrical outlet to charge my phone. ;)

Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions on the second day of Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging market for consumer IoT devices. His premise is that creating success with IoT devices is based on saving/creating time, strengthening connections, and having fun.

It also helps to be approaching an underserved market, and if you believe his somewhat horrifying stat that 70% of pet owners consider themselves to be “pet parents”, there’s a market of people who want to interact with and entertain their pets with technology while they are gone during working hours. PetBot’s device gives you a live video feed of your pet remotely, but can also play sounds, drop treats (cue Pavlov) and record pet selfies using facial recognition to send to you while you’re out. This might seem a bit frivolous, but it illustrates his lessons: use devices to “create” time (allowing for interaction at times when you would not normally be available), make your own types of interactions (e.g., create a training regimen using voice commands), and have fun to promote usage retention (who doesn’t like cute pet selfies?).

I asked about integrating with pet activity trackers and he declined to comment, so we might see something from them on this front; other audience members asked about the potential for learning and recognition algorithms that could automatically reward specific behaviours. I’m probably not going to run out and get a PetBot – it seems much more suited for dogs than cats – but his insights into consumer IoT devices are valid across a broader range of applications.

Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make changes while maintaining the integrity of legacy enterprise processes. Borrowell is a fintech company focused on lending applications: free credit score monitoring, and low-interest personal loans for debt consolidation or reducing credit card debt. They partner with established players such as Equifax and CIBC to provide the underlying credit monitoring and lending capabilities, with Borrowell providing a technology layer that’s more than just a pretty face: they use a lot of information sources to create very accurate risk models for automated loan adjudication. As Borrowell’s deep learning platforms learn more about individual and aggregate customer behaviour, their risk models and adjudication platform become more accurate, reducing the risk of loan defaults while fine-tuning loan rates to optimize the risk/reward curve.
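Borrowell’s actual models are proprietary and far more sophisticated, but the basic shape of risk-based adjudication – score an applicant, decline above a risk threshold, and price the rate to the risk – can be sketched with a toy logistic model. Everything here (feature names, weights, thresholds) is invented for illustration:

```python
import math

# Invented feature weights for illustration only -- not Borrowell's model.
WEIGHTS = {"credit_score": -0.01, "debt_to_income": 3.0, "recent_delinquencies": 0.8}
BIAS = 4.0

def default_probability(applicant: dict) -> float:
    """Logistic model: map applicant features to a probability of default."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def adjudicate(applicant: dict, max_risk: float = 0.2) -> dict:
    """Approve if predicted default risk is acceptable; price the rate to risk."""
    p = default_probability(applicant)
    if p > max_risk:
        return {"approved": False, "risk": p}
    # Risk-based pricing: a base rate plus a premium proportional to risk.
    return {"approved": True, "risk": p, "rate": round(0.05 + 0.5 * p, 4)}

good = {"credit_score": 780, "debt_to_income": 0.2, "recent_delinquencies": 0}
weak = {"credit_score": 520, "debt_to_income": 0.6, "recent_delinquencies": 2}
print(adjudicate(good))  # low risk: approved, near the base rate
print(adjudicate(weak))  # high risk: declined
```

In a real platform the weights would be learned from repayment outcomes rather than hand-set, which is exactly the feedback loop described above: more observed behaviour, better-calibrated risk, finer-grained pricing.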

Great application of AI/ML technology to financial services, which sorely need some automated intelligence applied to many of their legacy processes.

IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live. This week, it’s Big Data Toronto, held in conjunction with Connected Plus and AI Toronto.

Paul Zikopoulos, VP of big data systems at IBM, gave a keynote on what cognitive, AI and machine learning mean to big data. He pointed out that no one has a problem collecting data – all companies are pros at that – but the problem is knowing what to do with it in order to determine and act on competitive advantage, and how to value it. He talked about some of IBM’s offerings in this area, and discussed a number of fascinating uses of AI and natural language that are happening in business today. There are trendy chatbot applications, such as Sephora’s lipstick selection bot (upload a selfie and a picture of your outfit to get matching recommendations and purchase directly); and more mundane but useful cases, such as your insurance company recommending that you move your car into the garage because a hailstorm is on the way to your area.

He gave us a quick lesson on supervised and unsupervised learning, and how pattern detection is a fundamental capability of machine learning. Cognitive visual inspection – the descendant of the image pattern analysis algorithms that I wrote in FORTRAN about a hundred years ago – now happens by training an algorithm with examples rather than writing code. Deep learning can be used to classify pictures of skin tumors, or learn to write like Ernest Hemingway, or auto-translate a sporting event. He finished with a live demo combining open source tools such as sentiment analysis, Watson for image classification, and a Twitter stream into a Bluemix application that classified pictures of cakes at Starbucks – maybe not much of a practical application, but you can imagine the insights that could be extracted and analyzed in the same fashion. All of this computation doesn’t come cheap, however, and IBM would love to sell you a few (thousand) servers or cloud infrastructure to make it happen.

After being unable to get into three breakout sessions in a row – see my more detailed comments on conference logistics below – I decided to head back to my office for a couple of hours. With luck, I’ll be able to get into a couple of other interesting sessions later today or tomorrow.

A huge thumbs down to the conference organizers (Corp Agency), by the way. The process to pick up badges for pre-registered attendees was a complete goat rodeo, and took me 20+ minutes to simply pick up a pre-printed badge from a kiosk; the person staffing the “I-L” line started at the beginning of the Ks and flipped his way through the entire stack of badges to find mine, so it was taking about 2 minutes per person in our line while the other lines were empty. The first keynote of the day, which was only 30 minutes long, ran 15 minutes late.

The two main breakout rooms were woefully undersized, meaning that it was literally standing room only in many of the sessions – which I declined to attend because I can’t type while standing – although there was a VIP section with open seats for those who bought the $300 VIP pass instead of getting the free general admission ticket. There was no conference wifi or charging stations for attendees. There was no free water/coffee service (and the paid food items didn’t look very appetizing); this is a mostly free conference, but with sponsors such as IBM, Deloitte, Cloudera and SAS, it seems like they could have had a couple of coffee urns set up for free under a sponsor’s name. The website started giving me an error message about out-of-date content every time I viewed it on my phone; at least I think it was about out-of-date content, since it was inexplicably only in French. The EventMobi conference app was very laggy, and was missing huge swaths of functionality if you didn’t have a data connection (see above comments about no wifi or charging stations).

I’ve been to a lot of conferences, and the logistics can really make a big difference for the attendees and sponsors. In cases like this, where crappy logistics actually prevent attendees from going to sessions that feature vendor sponsor speakers (IBM, are you listening?), it’s inexcusable. Better to charge a small fee for everyone and actually have a workable conference.