My first AI winter

After moving to London in September 2016, I experienced my first AI winter! It's not as ominous as it sounds; there's no cause for alarm. My play on words stems from the various machine learning events I attended in the last quarter of the year.

Creative AI: Artists and creatives in the age of machine intelligence

This extremely popular event explored applying machine learning methods to artistic endeavours. I secured my place through volunteering (which entailed checking in attendees). The organiser, Luba Elliot, gave the first talk, which provided an insightful overview of progress in the field. Next, Terence Broad gave an entertaining exposition of his dissertation on autoencoding Blade Runner and the copyright implications that followed! Finally, Bob L. Sturm explained the process of using deep recurrent networks to compose music. I was amazed by the standard of the machine-made tunes. They actually sound really good! I highly encourage you to listen to some on this dynamic playlist created by Bob! Whilst there were many technical details I didn't understand, I was nonetheless inspired by the possibilities of making novel art using technology!

Creative AI: Generating music and images with deep learning

Like the first Creative AI event, Luba gave an accessible overview, after which Kai Arulkumaran (a PhD student at Imperial) led us through the wonders of deep image-generating models. Intuitively, to create a generative model you first collect a large quantity of data and use it to train the model. The goal is a model that produces data similar to that in the training set. Following this exposition, Aäron van den Oord, a research scientist from DeepMind, described the inner workings of generative models for audio and images by walking through the WaveNet and PixelCNN papers. Honestly, I was really tired and struggled to follow the content. Still, I found both talks intellectually inspiring even though they were technically challenging! A plus of going to this event was that I found out about the deep learning network group.
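To make the "train on data, then produce similar data" idea concrete, here is a deliberately tiny sketch of my own (nothing to do with WaveNet or PixelCNN, which are far more sophisticated): fit a one-dimensional Gaussian to some data, then sample new points from the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training set": 1-D samples drawn from some unknown process.
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# "Training": estimate the parameters of a model of the data --
# here just the mean and standard deviation of a Gaussian.
mu, sigma = data.mean(), data.std()

# "Generation": sample new points from the fitted model. These
# should look statistically similar to the training data.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)
```

Deep generative models follow the same recipe, except the "model" is a neural network with millions of parameters rather than two numbers.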

London.AI #6

Even though this was a much more application-oriented event, courtesy of London.AI, it was certainly no less enjoyable. It was awesome seeing an alternative outlook after having seen the artistic and technical perspectives already. I was curious: what kind of impact was theory having on industry? Thankfully, the three companies presenting, Gluru, Aire and Cardiologs, all delivered compelling AI-driven products. First was Gluru, a super cool personal assistant that can understand the context in your email, work out what needs to be done and display suggested tasks through its mobile interface. I was surprised at the sophistication of this technology and look forward to integrating it into my life when it matures further!

Next was Aire, whose machine learning algorithms provide an alternative way to determine credit scores. Why is this important? It turns out that current credit rating methods rely on a credit model created in 1989! Consequently, the model penalises people who don't necessarily fit the past mould, e.g. self-employed individuals who have multiple gigs. From an entrepreneurial perspective, I learned the importance of delivering value at every point of your product's evolution, and that the way to turn research into a product is persistence!

Cardiologs was the final company presenting, and it was incredible. It turns out that you can diagnose approximately 150 different heart conditions using ECG data. Unfortunately, the number of people who can read ECGs accurately is relatively low. For non-specialists, this means mistakes can be made, leading to missed diagnoses with sometimes fatal consequences! Cardiologs aims to solve this problem by taking digital ECG data, training an AI agent with medical practitioners and distributing this expertise across hospitals around the world. This was undoubtedly a great example of democratising AI to make the world a better place!

Workshop – Data Science in a day using Python: from pre-processing to predictive models

My first machine learning workshop was intense! Many concepts and techniques were covered. We started by learning about popular libraries such as scipy, numpy and pandas. After doing some exploratory data analysis and scaling the dataset, I used scatter plots to investigate the relationships between input features. Using Jupyter notebooks made this extremely straightforward. The next part of the workshop involved learning about and training various models: k-means clustering, decision trees, random forests and support vector machines. I got the intuition behind the differing methods but will certainly need to practise implementing the techniques on other data sets before I can be comfortable with the material! However, even though there is much technical improvement to be made, I'm happy that I got exposed to the perils of over-fitting and to different ways of partitioning data for use in model validation.
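For anyone curious, the workshop's overall flow can be sketched roughly as follows. This is my own reconstruction, assuming scikit-learn and using the classic iris dataset as a stand-in for the data we actually worked with: hold out a test set, scale the features, fit a model, then compare train and test scores to check for over-fitting.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Stand-in dataset (the workshop used a different one).
X, y = load_iris(return_X_y=True)

# Partition the data: hold out 30% for validation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Scale features so they share a comparable range; fit the
# scaler on the training set only to avoid leaking test data.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

model = RandomForestClassifier(random_state=42).fit(X_train_s, y_train)

# A large gap between these two scores suggests over-fitting.
train_acc = model.score(X_train_s, y_train)
test_acc = model.score(X_test_s, y_test)
```

Swapping `RandomForestClassifier` for a decision tree or a support vector machine is a one-line change, which is part of what makes scikit-learn so pleasant for this kind of experimentation.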


I have been fortunate to attend these events, full of incredible people who are pushing the envelope of what's possible with artificial intelligence. Whilst no one can predict the future, my current belief, formed from first-hand conversations with experts plus credible articles online, is that this is a field with untapped potential. Intellectually and in terms of applications, I have "tested" the waters and am fired up to invest time and effort into acquiring skills from this domain which I can use to solve problems!
