IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live. This week, it’s Big Data Toronto, held in conjunction with Connected Plus and AI Toronto.

Paul Zikopoulos, VP of big data systems at IBM, gave a keynote on what cognitive, AI and machine learning mean to big data. He pointed out that no one has a problem collecting data – all companies are pros at that – but the problem is knowing what to do with it in order to determine and act on competitive advantage, and how to value it. He talked about some of IBM’s offerings in this area, and discussed a number of fascinating uses of AI and natural language that are happening in business today. There are trendy chatbot applications, such as Sephora’s lipstick selection bot (upload a selfie and a picture of the outfit you want to match, get recommendations, and purchase directly); and more mundane but useful cases, such as your insurance company recommending that you move your car into the garage because a hailstorm is headed for your area.

He gave us a quick lesson on supervised and unsupervised learning, and how pattern detection is a fundamental capability of machine learning. Cognitive visual inspection – the descendant of the image pattern analysis algorithms that I wrote in FORTRAN about a hundred years ago – now happens by training an algorithm with examples rather than writing code. Deep learning can be used to classify pictures of skin tumors, learn to write like Ernest Hemingway, or auto-translate a sporting event.

He finished with a live demo combining open source tools such as sentiment analysis, Watson for image classification, and a Twitter stream into a Bluemix application that classified pictures of cakes at Starbucks – maybe not much of a practical application, but you can imagine the insights that could be extracted and analyzed in the same fashion. All of this computation doesn’t come cheap, however, and IBM would love to sell you a few (thousand) servers or some cloud infrastructure to make it happen.
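The supervised/unsupervised distinction from the quick lesson can be sketched in a few lines of plain Python. This is my own toy illustration on 1-D data – the function names and data are invented for the example, not anything from IBM’s stack: supervised learning classifies using known labels, while unsupervised learning has to discover structure in unlabeled data on its own.

```python
# Toy sketch (illustrative only, not IBM's implementation):
# supervised vs. unsupervised learning on 1-D data.

def nearest_centroid_classify(train, point):
    """Supervised: labels are given; classify a new point by the nearest class mean."""
    groups = {}
    for value, label in train:
        groups.setdefault(label, []).append(value)
    centroids = {lbl: sum(vs) / len(vs) for lbl, vs in groups.items()}
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - point))

def kmeans_1d(values, k=2, iters=10):
    """Unsupervised: no labels; discover k clusters by iteratively refining means."""
    centers = sorted(values)[:k]  # naive initialization from the smallest points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(centers[i] - v))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

labeled = [(1.0, "low"), (1.2, "low"), (9.8, "high"), (10.1, "high")]
print(nearest_centroid_classify(labeled, 9.0))  # classifies as "high"
print(kmeans_1d([1.0, 1.2, 9.8, 10.1]))         # two cluster centers emerge
```

The pattern-detection point from the talk is visible here: the clustering routine finds the same two groups that the labels encode, without ever seeing a label.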

After being unable to get into three breakout sessions in a row – see my more detailed comments on conference logistics below – I decided to head back to my office for a couple of hours. With luck, I’ll be able to get into a couple of other interesting sessions later today or tomorrow.

A huge thumbs down to the conference organizers (Corp Agency), by the way. The process to pick up badges for pre-registered attendees was a complete goat rodeo: it took me 20+ minutes simply to pick up a pre-printed badge from a kiosk, because the person staffing the “I-L” line started at the beginning of the Ks and flipped his way through the entire stack of badges to find mine, so it was taking about two minutes per person in our line while the other lines were empty. The first keynote of the day, which was only 30 minutes long, ran 15 minutes late. The two main breakout rooms were woefully undersized, meaning that it was literally standing room only in many of the sessions – which I declined to attend because I can’t type while standing – although there was a VIP section with open seats for those who bought the $300 VIP pass instead of getting the free general admission ticket.

There was no conference wifi or charging stations for attendees. There was no free water/coffee service (and the paid food items didn’t look very appetizing); this is a mostly free conference, but with sponsors such as IBM, Deloitte, Cloudera and SAS, it seems like they could have had a couple of coffee urns set up for free under a sponsor’s name. The website started giving me an error message about out-of-date content every time I viewed it on my phone; at least I think it was about out-of-date content, since it was inexplicably only in French. The EventMobi conference app was very laggy, and was missing huge swaths of functionality if you didn’t have a data connection (see above comments about no wifi or charging stations).

I’ve been to a lot of conferences, and the logistics can really make a big difference for the attendees and sponsors. In cases like this, where crappy logistics actually prevent attendees from going to sessions that feature vendor sponsor speakers (IBM, are you listening?), it’s inexcusable. Better to charge a small fee for everyone and actually have a workable conference.

9 thoughts on “IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO”

  1. I agree with almost everything you said except costs. Everything I showed I did for free. Some of those API calls are 1/10th of a cent. They are on open frameworks. Not to say there are not big projects with price tags, but there are small ones too.

    1. I get that what you showed was free, possibly I didn’t make that clear since I was writing on the fly during the session. However, for many enterprise deep learning/AI/machine learning projects, some more powerful infrastructure will be required: hence the pics that you showed of the large server implementation.

      1. Indeed you need the power to go deep and neural. The cost of curiosity and true deep learning will need disruption.

  2. IBM did it with Deep Blue and once again with Watson: creating a technology demo that is mostly not representative of what can be done. If you buy Watson, you buy 50 experts for 50 million to create a one-time function of generally available machine learning. Not intelligent at all, because pattern recognition is not intelligence and not AI. Statistical learning does not predict the future! When will they ever stop making that silly claim?

    1. Hey Max — I figure people are welcome to their opinions but not their own set of facts. You must not have been at my talk, or you would have watched me piece together various cognitive applications using APIs … it wasn’t 50 experts – any person in the room can do it. Like the fella that created a chatbot service to fight parking tickets … (Great story: https://www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york). I’m not sure anyone or anything can predict the future – who is ‘they’ making that silly claim? Surely not IBM … or your facts would be ‘off’ again. Can anything predict the future? No. Well, Yoda did a fine job of it. But finding patterns in loads of data does suggest there are probabilities and paths to be gleaned from it … for sure. We can just find more of them now than ever before. If you have any questions I’d be happy to answer … you can contact me offline … but “If you buy Watson you buy 50 experts for 50 million to create a one-time function of a GA ML” is like US fake news *wink* — all kidding aside. Seriously … it’s important … facts are important.

      1. Paul, I do not believe things just because they are written up in a newspaper or a marketing brochure, or shown in a demo. To claim my post is fake-news-like is, as in politics, a pointer to unproven claims without enough facts to back them up. You are barking up the wrong tree. I know the ML open source modules and what, for example, Apple is bringing in their ML-Kit. I have been designing and using ML algorithms since the late 90s, originally for character and document recognition. I know what they can do … and not! We also can just point an ML component at a real-time data source of well-structured and related data and make recommendations by mapping user actions to data patterns using transductive training. The users can reject suggestions and thus perform deep-learning optimization. I actually hold the patent on such functionality as the User-Trained Agent. To enable the UTA required way more than 50 man-years of development to provide such simplicity in our platform.

I have been involved in Watson projects as an observer, and like the software that won Jeopardy, it is all one-time functionality that took a similar amount of expert manpower to build the data gathering, data cleansing, feature selection, model building and recommendation filtering. What won Jeopardy was a glorified semantic full-text search with a probability filter at the end. It was VERY hard-coded and not usable for anything else. Watson is not software you can buy; it is a project business. I have been at many Watson presentations, and the claims about predicting the future have been made openly at those. If it were as easy as your demo, there would be no money to be made from it, and IBM would not spend advertising money galore to market it. I am thus not disputing ML functionality, and yes, a pattern matching device that finds the probability of skin cancer is great, but it is not something you build in your backyard.

It is useful to use machine power to recognize data patterns faster and with more certainty than a human can. The decisions about which data and which algorithms still have to be made by ML experts. Being a mathematician who understands, for example, Hidden Markov Models helps a lot. The results are at best input to human intelligence, because ML has no cognitive powers per se. “All models are wrong but some are useful” also applies here. It is all a far cry from human intelligence, to which pattern matching is just input. AI is all hype. ML is hard work. That is the fact.

        1. Max, a lot of what you say above is great … and the examples are accurate as well. Skin cancer and the morphology of a lesion or mole is not something you build in your backyard. The chatbot to fight traffic tickets per my last post actually was.

          I know you have your cred (I’m sure we have Googled each other – so I’m sure you know mine) – it was more this statement, “If you buy Watson you buy 50 experts for 50 million to create a one-time function of generally available machine learning.”

          And that is not what it is … perhaps if you are training Watson to fight cancer as a consortium it is … (not sure of the price tag). That is the part that caused a horizontal head sway. I will give you this: I called IBM Marketing out on stage for instilling this notion in people … so perhaps I can’t blame you … lol. Everyone thinks Watson is what you just said, but it’s also a set of APIs – rentable, invokable piece parts – and those are the very facts I showed on stage.

          Calling an image recognition API in the same manner that you could from the Google library isn’t ML … and ML is hard work; this is leveraging the hard work we did (and it is amongst the best image recognition services out there) as an API. But I will assume you didn’t know you could do all that, based on the comment. I didn’t mean it disrespectfully; apologies if it came across that way. Just frustrating to watch.

      2. Paul, many of the early Watson healthcare customers would agree with Max. Actually some have spent even more. Part of IBM’s challenge, rightly or wrongly, is your services – GBS and Technology account for more than half of your revenues. The assumption at most customers and prospects is IBM has many mouths to feed. If IBM were to spin Watson or the services off, it could change that perception but today that is reality, not fake news.

        1. I think I stand by what I said. Are there large contracts? Sure. Never said there weren’t. In fact, in my talk I highlighted that very perception. So I offer you (and the conference) another perspective, both with proof points. I’m just suggesting that at times folks are guilty of not knowing what they could already know. That’s why I referenced the APIs and some small-time success stories. It’s why I built the small stuff I built live. Just me. And free.
