I am extremely grateful to have been able to attend the 3rd Research and Applied Artificial Intelligence Summit 2017, which was held at Google’s St Pancras offices in London. Due to the popularity of the event, attendees were encouraged to arrive early to gain entry into the live room. Having arrived early, I started interacting with similarly prompt attendees. That’s when the magic started and I had one of the greatest days of my life so far! Reflecting on the event now, I have a strong feeling that in 10 years’ time, this event will stand out as a life-changing experience 😀 Why is this? Well, I attended the 2nd RAAIS event last year. Combined with other developments such as AlphaGo’s triumphs, that left me 70% convinced about the potential of AI. Now, I am 100% long AI!
When it comes to investing time and energy into a technology or venture, my current heuristic is based on tracking the frequency and quality of signals over time. The recent research developments by DeepMind, OpenAI and labs such as CSAIL, plus the applications of AI tech to self-driving cars, are examples of credible signals. At RAAIS 2017, I obtained the strongest signal yet! Key trends since the last conference (across academia, industry and the media) were summarised masterfully by Nathan Benaich. Furthermore, the number of companies that presented diverse solutions to significant industry problems was incredible! Whilst last year’s businesses were fantastic, I personally felt that the enterprises this year were more applied in their focus. Below, I will briefly describe what each company does and/or what I learned about AI. This post is not yet complete; I still need to add six more companies, but I wanted to publish now and update when I make time.
At this point, I would like to express my gratitude to Nathan Benaich and his team for organising the best conference I’ve attended yet. My appreciation also extends to the amazing AI community that was present, who made conversation fun and insightful. It was awesome sharing this experience with such a supportive cadre of AI enthusiasts and I definitely made some really cool friends. Of course, none of this would have been possible without the superb companies that presented. They are at the vanguard of the 4th industrial revolution so definitely check them out!
To be honest, there was a lot I did not understand as I’m a relative novice in the field. My observations may be elementary but that’s what it means to have a beginner’s mind. All mistakes/misrepresentations made are mine. Get in touch and I will make the necessary fixes 😀
Making humans work smarter with vertical AI solutions for supply chain and pricing – Michael Feindt (Blue Yonder)
Since I used to work in retail, I really identified with the automation of operational decisions that Blue Yonder enables. I discovered that questions of how much to order, what price to sell items at and who to market to could be automated. By applying predictive analytics instead of using a traditional ERP system, you could get both a number (e.g. how much stock) and a probability expressing the confidence level in the prediction! Super cool as this helps with risk management when it comes to inventory level optimisation. I was intrigued by their automatic integration of external data such as the weather, holidays and events to drive better business decisions.
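To make the “number plus a probability” idea concrete, here is a minimal sketch of a probabilistic ordering decision (entirely my own illustration, not Blue Yonder’s actual method, and all numbers are made up): instead of a single point forecast, treat tomorrow’s demand as a distribution and pick a stock level that meets demand with a chosen service probability.

```python
import numpy as np

# Hypothetical newsvendor-style rule: sample a demand forecast distribution
# and order at the quantile matching the desired service level.
rng = np.random.default_rng(7)
demand_samples = rng.normal(loc=100, scale=15, size=10_000)  # forecast distribution

service_level = 0.95  # target probability of not running out of stock
order_qty = np.quantile(demand_samples, service_level)

print(f"order {order_qty:.0f} units to meet demand with ~{service_level:.0%} probability")
```

The point forecast alone (mean of 100) would leave you short about half the time; the quantile gives a risk-aware quantity with an explicit confidence attached.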
Transforming pixels into actionable insights – Matthew Chwastek (Orbital Insight)
Wow, this was an eye-opener with regards to the power of leveraging multiple technological innovations. By combining satellite imagery at all resolutions, deep learning and cloud/GPU tech, Orbital Insight is able to build the “macroscope”. Intuitively, this is an intelligent map of our physical world based on the analysis of millions of images. Using their tech, you can count cars in parking lots, which correlates with the sales that a store generates. You could even calculate the oil levels in open containers, which can help in determining oil inventory levels for countries that are opaque in their reporting. There are so many potential applications of this tech, e.g. poverty mapping, disaster relief and insurance. Incorporating synthetic aperture radar will allow 24/7 surveillance that won’t be obscured by clouds. Real-time analysis of the world, here we come!
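The car-counting idea boils down to using an image-derived signal as a proxy for a business metric. Here is a tiny sketch with invented numbers (not Orbital Insight’s data): if weekly car counts correlate strongly with weekly sales, the satellite signal becomes a leading indicator.

```python
import numpy as np

# Made-up weekly car counts (from imagery) and store sales, for illustration only
car_counts = np.array([120, 135, 150, 160, 180, 210, 240])
sales_k = np.array([310, 340, 365, 390, 430, 500, 560])  # weekly sales, $k

# Pearson correlation between the image-derived proxy and the business metric
r = np.corrcoef(car_counts, sales_k)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A correlation this strong is what lets analysts estimate sales from imagery before official figures are released.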
AI and the self-flying car – Luuk van Dijk (DaedaleanAI) in conversation with Kenneth Cukier (The Economist)
I remember this as an especially humorous conversation in which I learned about the levels of driving autonomy. DaedaleanAI’s mission is to bring level 5 autonomy to electric planes. The current volume of planes is relatively small compared to cars because you require a pilot’s licence (a significant investment of resources). However, what happens if you give the licence to the plane? Suddenly, anyone can fly! Interestingly, they made no guarantee of making money to their investors. However, they believe that when personal aerial vehicle technology takes off, these companies will need their automated flight control solution. One of my favourite quotes from the conversation was “the art of flying is landing when you really want to” haha. Whilst it sounds obvious, it also highlights that landing is actually the hardest part of flying, since current autopilot software is already quite adept at cruise control. I’m seriously hoping their futuristic vision becomes reality!
Image super-resolution and compression – Zehan Wang (Twitter)
The problem Zehan expounded on was that of first reducing a high-resolution image to a lower-resolution one, then using the low-resolution image to recover a high-resolution image. Easily stated, much harder to implement! It turned out to be just as challenging to understand. However, it was still super cool how GANs (Generative Adversarial Networks), specifically the SRGAN, outperformed more established approaches. Whilst I’d heard of them before, I got a better high-level intuition of how they work. Essentially, a GAN is composed of two components: a generator and a discriminator. The generator produces examples, e.g. images, with the aim of fooling the discriminator, which has to decide if the example is good enough. In the context of the SRGAN, it would be deciding whether the generated image’s resolution was of an acceptable quality. It’s kind of similar to the well-known Turing test, where a human (discriminator) has to decide whether an unknown agent they’re speaking to is human or not. Except in the GAN framework, both the generator and the discriminator are learning agents that improve against each other over time.
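The generator-versus-discriminator game can be sketched in one dimension. This is my own toy illustration (nothing like the actual SRGAN): real data is drawn from a Gaussian, the generator is an affine map of noise, the discriminator is logistic regression on a scalar, and the gradients of the standard GAN objective are derived by hand for this tiny parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # clipped for numerical stability
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

a, b = 1.0, 0.0   # generator g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 0.5, size=64)   # "real" data distribution
    z = rng.normal(size=64)                # noise fed to the generator
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator mean after training: {b:.2f} (real data mean is 4.0)")
```

The two update steps are the whole trick: each player improves against the other’s current strategy, which is exactly the adversarial dynamic the talk described.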
Swarm engineering across scales – Sabine Hauert (University of Bristol)
Swarms, swarms everywhere… this fascinating lecture explored agents ranging from microscopic particles the size of a blood cell to robots the size of a hand. I learned of the strides being made in research to create swarms that could help treat ailments such as cancer in a highly targeted way. By modifying individual agents’ properties, e.g. shape, size, the material of the outer shell and the internal drug, all kinds of behaviour can be engendered. These kinds of modifications could increase the delivery of drugs to tumour sites with minimal adverse effects on healthy tissue. It was cool how the number of robots in the swarm influenced the amount of calibration required for each agent. For swarms of size n where n < 20, a lot of per-agent calibration is required, whereas when n > 20, much less calibration is needed, as you can model the swarm as a probabilistic system instead! I definitely got a lot of value from discovering this fantastic resource all about robotics.
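A quick simulation shows why large swarms can be treated probabilistically (my own toy illustration, with an assumed per-agent success probability): if each agent independently reaches the target with probability p, the fraction that arrives concentrates around p as the swarm grows, so you can reason about the aggregate instead of calibrating every agent.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.7  # assumed probability that any one agent reaches the target

stds = {}
for n in (10, 1000):
    # 5000 simulated swarms of size n; True = agent arrived
    trials = rng.random((5000, n)) < p
    fractions = trials.mean(axis=1)        # arrival fraction per swarm
    stds[n] = fractions.std()
    print(f"n={n:4d}: mean fraction {fractions.mean():.2f}, spread {stds[n]:.3f}")
```

The spread shrinks like 1/sqrt(n), so a 1000-agent swarm behaves almost deterministically at the aggregate level even though each agent is noisy.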
Intelligence processors – Simon Knowles (Graphcore)
Phenomenal! I was amazed by the insight required to abstract the core of machine learning workloads and build a chip that works now and will still have utility for the next 20 years! Bold, very bold 😀 I learned about processors. CPUs are designed for scalar computations, whilst GPUs, originally designed for graphics, have evolved over time for high-performance computing applications, e.g. training deep neural networks. IPUs (intelligence processing units) are designed for intelligence. So how does an IPU differ from other chips? There’s less logic on the die; instead there is roughly a 75% RAM, 25% FPU split, which enables greater utilisation of the available power supplied to the chip. On Graphcore’s IPU: memory is local, re-computation is prioritised over storage, and communication is serialised. After this mind-altering exposition, I am intensely driven to learn more!
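The “re-computation over storage” principle is the same idea as gradient checkpointing in software. Here is a tiny sketch of mine (a stand-in layer function, nothing Graphcore-specific): rather than keeping every intermediate activation in memory, you can rebuild any one of them from the input when it is needed.

```python
def layer(x):
    # stand-in for a real neural-network layer
    return 2 * x + 1

def forward_store(x, n):
    """Run n layers and keep every intermediate activation in memory."""
    acts = [x]
    for _ in range(n):
        acts.append(layer(acts[-1]))
    return acts

def recompute(x, k):
    """Rebuild the activation after layer k from the input alone."""
    a = x
    for _ in range(k):
        a = layer(a)
    return a

acts = forward_store(3, 4)
print(acts)                      # all 5 activations held at once
print(recompute(3, 2), acts[2])  # same value, only one held at a time
```

Trading compute for memory like this is attractive when, as on the IPU, on-chip memory is the scarce, power-hungry resource and arithmetic is comparatively cheap.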
What’s my AI strategy going forward?
As stated earlier, I am absolutely long AI! From the vantage point I have gained so far, I am going to invest time and energy into applying AI technologies to solve problems. From an entrepreneurial perspective, ideas are only valuable when they are implemented effectively. I asked various startup founders what they would do if they were a novice in AI and wanted to create an AI company in the future. The strategy I derived from their input was to become skilled at executing/implementing AI solutions. As a first step, they recommended:
- Be constantly on the lookout for interesting (real-world) problems
- Try to implement simple methods first (since many domains don’t need cutting-edge machine learning methods to generate value)
- Should those fail, move up the sliding scale of techniques
- Analyse what could have gone better
- Rinse and repeat
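The “simple methods first” step can be sketched on a toy regression problem (synthetic data of my own, not any founder’s example): establish a trivial baseline, then try a simple linear model, and only escalate to heavier techniques if those fall short of what the problem needs.

```python
import numpy as np

# Hypothetical toy dataset with a mostly linear relationship plus noise
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Step 1: trivial baseline – always predict the mean
baseline_mse = np.mean((y - y.mean()) ** 2)

# Step 2: simple linear model via ordinary least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
linear_mse = np.mean((y - X @ coef) ** 2)

print(f"baseline MSE: {baseline_mse:.3f}, linear MSE: {linear_mse:.3f}")
# Only if the simple model were still inadequate would we reach for
# gradient boosting, neural networks, etc.
```

Here the linear model already captures nearly all of the signal, which is exactly the situation the founders described: many real problems don’t need anything fancier.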
I am going to combine the above with a roadmap I crafted from the advice of two helpful research scientists from DeepMind, i.e.
- Do an online course that goes over the basics e.g. overfitting, regression, etc
- Start practicing on Kaggle yesterday
- Implement methods from papers, but beware that a lot of fundamental implementation details are often missing
I will also continue going to AI events, building my network and making friends as this makes the journey much more fun. On another note, I am open to working at AI startups 😉 Finally, you can learn more from the recorded talks.
Thanks for reading 😀
Also published on Medium.