Investing In Artificial Intelligence - Nathan Benaich, Playfair Capital

Guest Post by mHealth Israel conference speaker, Nathan Benaich, Partner at Playfair Capital.
Artificial Intelligence (AI) is one of the most exciting and transformative opportunities of our time. From my vantage point as a venture investor at Playfair Capital, where I focus on investing and building community around AI, this is a great time for investors to help build companies in this space. There are three key reasons.
First, with 40 percent of the world’s population now online, and more than 2 billion smartphones being used with increasing addiction every day (KPCB), we’re creating data assets, the raw material for AI, that describe our behaviors, interests, knowledge, connections and activities at a level of granularity that has never existed.
Second, the costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing, making AI applications possible and affordable.
Third, we’ve seen significant improvements recently in the design of learning systems, architectures and software infrastructure that, together, promise to further accelerate the speed of innovation. Indeed, we don’t fully appreciate what tomorrow will look and feel like.
We also must realize that AI-driven products are already out in the wild, improving the performance of search engines, recommender systems (e.g., e-commerce, music), ad serving and financial trading (amongst others).
Companies with the resources to invest in AI are already creating an impetus for others to follow suit — or risk not having a competitive seat at the table. Together, therefore, the community has a better understanding and is equipped with more capable tools with which to build learning systems for a wide range of increasingly complex tasks.
How Might You Apply AI Technologies?
With such a powerful and generally applicable technology, AI companies can enter the market in different ways. Here are six to consider, along with example businesses that have chosen these routes:
Connect the vast amounts of enterprise and open data that sit in separate silos, whether on the web or on-premise. Linking these sources enables a holistic view of a complex problem, from which new insights can be identified and used to make predictions (e.g., DueDil, Premise and Enigma).
Leverage the domain expertise of your team and address a focused, high-value, recurring problem using a set of AI techniques that compensate for the shortfalls of humans (e.g., Sift Science or Ravelin for online fraud detection).
Productize existing or new AI frameworks for feature engineering, hyperparameter optimization, data processing, algorithms, model training and deployment (amongst others) for a wide variety of commercial problems (e.g., H2O.ai, Seldon and SigOpt); a minimal sketch of such a workflow follows this list.
Automate the repetitive, structured, error-prone and slow processes conducted by knowledge workers on a daily basis using contextual decision making (e.g., Gluru, x.ai and SwiftKey).
Endow robots and autonomous agents with the ability to sense, learn and make decisions within a physical environment (e.g., Tesla, Matternet and SkyCatch).
Take the long view and focus on research and development (R&D), pursuing the kind of risky work that would otherwise be relegated to academia but, due to strict budgets, often isn't pursued there anymore (e.g., DNN Research, DeepMind and Vicarious).
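To make the third route above more concrete, here is a minimal sketch of the kind of hyperparameter optimization workflow that tools like SigOpt or H2O.ai turn into a product. The dataset, model and search space below are arbitrary choices made for illustration, not a description of anyone's actual product.

# Minimal illustration of hyperparameter optimization: the kind of workflow
# that platforms such as SigOpt or H2O.ai wrap into a product.
# Dataset, model and search space are arbitrary choices for this example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Search space: which knobs to tune and over what ranges (an assumption here).
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 4, 8, 16],
    "min_samples_leaf": [1, 2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,          # try 20 random configurations
    cv=5,               # score each one with 5-fold cross-validation
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))

The point is simply that picking good settings is itself a search problem, which is why it can be automated and sold as a service.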
There’s more on this discussion here. A key consideration, however, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productizing technologies cheaply mean that technical barriers are eroding fast. What ends up moving the needle are proprietary data access/creation, experienced talent and addictive products.
Which Challenges Are Faced By Operators And Closely Considered By Investors?
I see a range of operational, commercial and financial challenges that operators and investors closely consider when working in the AI space. Here are the main points to keep top of mind:
Operational
How to balance the longer-term R&D route with monetization in the short term? While more libraries and frameworks are being released, there’s still significant upfront investment to be made before product performance is acceptable. Users will often be benchmarking against a result produced by a human, so that’s what you’re competing against.
The talent pool is shallow: few have the right blend of skills and experience. How will you source and retain talent?
Think about balancing engineering with product research and design early on. Working on aesthetics and experience as an afterthought is tantamount to slapping lipstick onto a pig. It’ll still be a pig.
Most AI systems need data to be useful. How do you bootstrap your system without much data in the early days? (See the sketch below for one illustrative approach.)
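One rough answer to that cold-start question, sketched below under stated assumptions: combine a small hand-labeled set with a larger pool of unlabeled data using semi-supervised label propagation. The data here is synthetic, and this is only one option among several (transfer learning, data partnerships and manual labeling are others).

# Rough sketch of bootstrapping a model with very few labels:
# propagate labels from a small hand-labeled set to unlabeled points.
# Synthetic data; in practice the unlabeled pool would be your product's raw data.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=300, noise=0.1, random_state=0)

# Pretend we only managed to label 10 examples; the rest are marked -1 (unlabeled).
y_partial = np.full_like(y_true, -1)
labeled_idx = np.random.RandomState(0).choice(len(y_true), size=10, replace=False)
y_partial[labeled_idx] = y_true[labeled_idx]

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)
print("accuracy on all points:", round((model.transduction_ == y_true).mean(), 3))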
Commercial
AI products are still relatively new in the market. As such, buyers are likely to be non-technical (or not have enough domain knowledge to understand the guts of what you do). They might also be new buyers of the product you sell. Hence, you must closely appreciate the steps/hurdles in the sales cycle.
How to deliver the product? SaaS, API, open source?
Include chargeable consulting, setup or support services?
Will you be able to use high-level learnings from client data for others?
Financial
Which types of investors are in the best position to appraise your business?
What progress is deemed investable? MVP, publications, open source community of users or recurring revenue?
Should you focus on core product development or work closely on bespoke projects with clients along the way?
Consider buffers when raising capital to ensure that you’re not going out to market again before you’ve reached a significant milestone.
Build With The User In The Loop
There are two big factors that make involving the user in an AI-driven product paramount. One, machines don’t yet recapitulate human cognition. To pick up where software falls short, we need to call on the user for help. And two, buyers/users of software products have more choice today than ever. As such, they’re often fickle (the average 90-day retention for apps is 35 percent).
Returning expected value out of the box is key to building habits (hyperparameter optimization can help). Here are some great examples of products that show how involving the user in the loop improves performance (a minimal sketch of this feedback loop follows the list):
Search: Google uses autocomplete as a way of understanding and disambiguating language/query intent.
Vision: Google Translate or Mapillary traffic sign detection enable the user to correct results.
Translation: Unbabel community translators perfect machine translations.
Email Spam Filters: Google, again, to the rescue.
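The common pattern beneath these examples is a feedback loop: make a prediction, show it to the user, and fold any correction straight back into the model. Below is a minimal sketch of that loop for a toy spam filter; the vectorizer, classifier and messages are illustrative assumptions rather than how any of the products above actually work.

# Minimal sketch of a user-in-the-loop feedback cycle for a toy spam filter:
# predict, show the user, and fold their correction back in with partial_fit.
# Messages and labels here are made up for illustration.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(random_state=0)

# Seed the model with a handful of labeled messages (1 = spam, 0 = not spam).
seed_texts = ["win cash now", "meeting at 3pm", "cheap pills online", "lunch tomorrow?"]
seed_labels = [1, 0, 1, 0]
model.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=[0, 1])

def handle_message(text, user_correction=None):
    """Predict; if the user corrects us, learn from the correction immediately."""
    features = vectorizer.transform([text])
    prediction = int(model.predict(features)[0])
    if user_correction is not None and user_correction != prediction:
        model.partial_fit(features, [user_correction])
    return prediction

print(handle_message("free cash offer"))                     # model's guess
print(handle_message("project update", user_correction=0))   # user marks it not spam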
We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer-term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand.
What’s The AI Investment Climate Like These Days?
To put this discussion into context, let’s first look at the global VC market: Q1-Q3 2015 saw $47.2 billion invested, a volume higher than each of the full year totals for 17 of the last 20 years (NVCA).
We’re likely to breach $55 billion by year’s end. There are roughly 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well-respected and accomplished academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies.
So far, we’ve seen about 300 deals into AI companies (defined as businesses whose description includes such keywords as artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning) from January 1, 2015 through December 1, 2015 (CB Insights).
Approximately $2 billion was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339 million debt+credit), ZestFinance ($150 million debt), LiftForward ($250 million credit) and Argon Credit ($75 million credit). In the U.K., companies like Ravelin, Signal and Gluru raised seed rounds. Importantly, 80 percent of deals were < $5 million in size, and 90 percent of the cash was invested into U.S. companies versus 13 percent in Europe. Seventy-five percent of rounds were in the U.S.
The exit market has seen 33 M&A transactions and 1 IPO. Six events were for European companies, one was in Asia and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532 million; $17 million raised), Elastica/Blue Coat Systems ($280 million; $45 million raised) and SupersonicAds/IronSource ($150 million; $21 million raised), which returned solid multiples of invested capital. The remaining transactions were mostly for talent, given that the median team size at the time of acquisition was 7 people.
Altogether, AI investments will have accounted for roughly 5 percent of total VC investments for 2015. That’s higher than the 2 percent claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software.
The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the U.S. Businesses must therefore have exposure to this market.
Which Problems Remain To Be Solved?
Healthcare
I spent a number of summers in university and three years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is very challenging, expensive, lengthy and regulated, and ultimately offers a transient solution to treating disease.
Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real time, driving down the cost of care over a patient’s lifetime while improving outcomes.
Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online, and I think we’re less apprehensive about storing various data types in the cloud (where they can be accessed, with consent, by third parties). Sure, the news might paint a different story, but the fact is that we’re still using the web and its wealth of products.
On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge.
Look at today’s clinical model. A patient presents at the hospital when they feel something is wrong. The doctor must conduct a battery of tests to derive a diagnosis. These tests address a single (often late-stage) time point, at which moment little can be done to reverse damage (e.g., in the case of cancer).
Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There are loads of applications for artificial intelligence here: intelligent sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions…
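As a rough illustration of one of these building blocks, the sketch below runs a simple anomaly detector over a simulated stream of heart-rate readings. The data, features and model choice are assumptions made for the example, not a description of any product listed below.

# Illustrative sketch of anomaly detection on a continuous vital-signs stream,
# one of the AI building blocks mentioned above. The "heart rate" data is simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
heart_rate = 70 + 5 * rng.randn(10_000)          # long run of normal readings
heart_rate[9_990:] = 150 + 10 * rng.randn(10)    # a short anomalous episode

# Simple features per reading: the raw value plus a rolling mean over the last hour.
rolling_mean = np.convolve(heart_rate, np.ones(60) / 60, mode="same")
features = np.column_stack([heart_rate, rolling_mean])

detector = IsolationForest(contamination=0.001, random_state=0)
flags = detector.fit_predict(features)           # -1 marks suspected anomalies
print("flagged readings:", np.where(flags == -1)[0][-10:])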
Some companies are already hacking away at this problem:
Sano: Continuously monitor biomarkers in blood using sensors and software.
Enlitic/MetaMind/Zebra Medical: Vision systems for decision support (MRI/CT).
Deep Genomics/Atomwise: Learn, model and predict how genetic variation influences health/disease and how drugs can be repurposed for new conditions.
Flatiron Health: Common technology infrastructure for clinics and hospitals to process oncology data generated from research.
Google: Filed a patent covering an invention for drawing blood without a needle. This is a small step toward wearable sampling devices.
A point worth noting is that the U.K. has a slight leg up on the data access front. Initiatives like the UK Biobank (500,000 patient records), Genomics England (100,000 genomes sequenced), HipSci (stem cells) and the NHS care.data program are leading the way in creating centralized data repositories for public health and therapeutic research.
Enterprise Automation
Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9 trillion by 2020 (BAML). Coupled with the efficiency gains worth $1.9 trillion driven by robots, I reckon there’s a chance for near-complete automation of core, repetitive business functions in the future.
Think of all the productized SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making.
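As a toy sketch of that “glue plus context” idea: route incoming support tickets automatically using both their content and contextual data about the customer. The Ticket fields, keywords and downstream actions below are hypothetical stand-ins for whatever systems a real business would connect.

# Toy sketch of "glue plus context": route support tickets automatically based on
# content plus customer context. The fields and actions are hypothetical stand-ins
# for whatever SaaS APIs (CRM, billing, helpdesk) a real business would wire together.
from dataclasses import dataclass

@dataclass
class Ticket:
    customer_id: str
    text: str
    monthly_spend: float   # contextual data point pulled from billing

URGENT_KEYWORDS = {"outage", "down", "refund", "cancel"}

def route_ticket(ticket: Ticket) -> str:
    """Decide where a ticket goes using its content plus customer context."""
    is_urgent = any(word in ticket.text.lower() for word in URGENT_KEYWORDS)
    is_high_value = ticket.monthly_spend > 1_000
    if is_urgent and is_high_value:
        return "page_on_call_engineer"      # hypothetical downstream action
    if is_urgent:
        return "priority_queue"
    return "standard_queue"

print(route_ticket(Ticket("c42", "Our dashboard is down again", 2_500.0)))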
Perhaps we could eventually re-imagine the new eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfillment and shipping. Of course, this is probably a ways off.
I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given shortening investment horizons for value to be created. More support is needed for companies driving long-term innovation, especially considering that far less is occurring within universities. VC was born to fund moonshots.
We must remember that access to technology will, over time, become commoditized. It’s therefore key to understand your use case, your user, the value you bring and how it’s experienced and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering.
Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g., the user experience layer). As such, there’s renewed focus on core principles: build a solution to an unsolved/poorly served high-value, persistent problem for consumers or businesses.
Finally, you must have exposure to the U.S. market, where the lion’s share of value is created and realized. We have an opportunity to catalyze the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond.