Canadian neuroscientist Blake Richards says companies are going to have to start working with the technology
FP: Where is AI deployed prominently now and what applications can we expect in the near future?
BR: The financial sector is already using AI for data analytics, but there are huge opportunities for analytics in other domains, particularly consumer behaviour. Right now, we're doing market research based on traditional focus groups, questionnaires and polls, where data is collected and analyzed. But as we've seen on social media, you can track behaviour in a more subtle, fine-grained way, and AI systems could give us more accurate insights into how consumers choose one product or brand over another.
FP: What about longer-term applications?
BR: It's akin to the invention of the steam engine at the dawn of the industrial revolution. If you really have a system that can replicate human cognition, there is no way to enumerate the full list of potential applications. There are the foreseeable ones, like driving a car or running a factory, but there are also applications we can't imagine yet, because seeing those opportunities will require first seeing how the economy evolves to take advantage of AI.
FP: What are you most excited about in terms of future AI uses?
BR: I think we'll see the neurotech economy really blow up over the next decade, with more medical and consumer applications, such as being able to identify the practices that help someone be optimally happy or cognitively aware.
For example, by tracking brain waves, we could determine which foods make our brains less attentive or change our moods. Or we could be able to control our phones with our thoughts for better multitasking. While theoretically feasible, these applications have been impossible to date because neural data is so complicated and traditional statistical techniques for analyzing it aren't sufficient.
For me, finally seeing neuroscience impact lives, rather than sit within the confines of academia, would be really exciting.
FP: What is still holding companies back?
BR: For older companies, one obstacle is adapting their databases to make them readily searchable by AI systems. The other is that AI systems cannot always be trusted to be truthful. They're trained to give statistically appropriate responses, but that doesn't mean they're factually accurate. It's a task for researchers in industry, and there are companies currently working on these kinds of problems. It's an eminently solvable issue that I expect will be settled within the next few years.
FP: How do we ensure AI is deployed as ethically as possible in the meantime?
BR: There are no quick answers, unfortunately. There is work being done on how to track model bias and evaluate systems, and then how to modify data sets and training protocols to limit future issues. But the bigger part is ensuring we have the appropriate regulations in place to prevent AI products from being misused in the first place.
Every day, businesses of all kinds are collecting data of some sort. My advice is to go into this from the get-go with a sense of how to ensure you are collecting data in a useful and ethical manner.
It’s also important that all employees understand how AI models are operating — the same way they understand how a coffee machine works, for example — so they can operate appropriately within the system and not contribute to data breaches or privacy issues.
FP: What’s your advice for business leaders looking to develop and implement AI strategies?
BR: Don't be afraid to ask yourself hard questions about how your business runs and whether massive efficiencies could be gained by automating certain components of what people do. Don't just look at existing systems to tweak; think much bigger. It doesn't necessarily mean you're going to have to fire people, as you could drastically increase their productivity by removing bottlenecks you didn't realize existed in your workflow.