AI/Automation
Taking Artificial Intelligence Where the Human Brain Goes
The human brain, intelligent and unique, continues to challenge scientists determined to decode its complexity and unlock possibilities to enhance human lives. By harnessing artificial intelligence (AI), they have already made breakthroughs in man-machine interaction through Watson, Siri, and more. But for AI to have a truly transformational impact, artificial neural networks need to be further reinforced by native human intelligence.
The human brain has advanced over time by responding to survival instincts, harnessing intellectual curiosity, and managing the demands of nature. Once humans got an inkling of the dynamics of the environment, we began our quest to replicate nature.
Our success in imitating nature has tracked advances in science and technology. Take, for example, our aspiration for flight. We replicated wings to achieve safe, long-haul air travel. However, we know that rigid aircraft wings are not an exact replica of their natural counterparts, and a likely solution may lie in the Self-Assembly Laboratory at the Massachusetts Institute of Technology (MIT), which is developing a 4D printing technique to create aircraft wings that adapt to aerodynamic conditions.
While the human brain finds ways to exceed our physical capabilities, the combination of mathematics, algorithms, computational methods, and statistical models is accelerating our scientific pursuit. AI gathered momentum after Alan Mathison Turing authored his seminal paper on computing machinery and intelligence and developed a mathematical model of biological morphogenesis. Today, AI has grown from data models for problem-solving to artificial neural networks: computational models based on the structure and functions of biological neural networks in the human brain.
Teaching the machine
The first generation of AI created machine learning systems. Machine learning focuses on developing computer programs that can change, or learn, when exposed to new data. Algorithms from this first generation 'taught' machines to identify images and objects, detect obstructions, and discover correlations between variables. The result was intelligent applications that handled a single task at a time.
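To ground that definition, here is a minimal, illustrative Python sketch (the class, data, and labels are invented for this example, not drawn from any system in this article) of a program whose predictions change as it is exposed to new labelled data:

```python
# A minimal "learner": a 1-nearest-neighbour classifier whose behaviour changes
# as it is exposed to new labelled data. All names and data are illustrative.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class NearestNeighbourClassifier:
    def __init__(self):
        self.examples = []  # (feature_vector, label) pairs seen so far

    def learn(self, features, label):
        """Exposing the model to new data changes its future predictions."""
        self.examples.append((features, label))

    def predict(self, features):
        """Return the label of the closest example seen during learning."""
        if not self.examples:
            raise ValueError("the model has not seen any data yet")
        _, label = min(self.examples, key=lambda ex: distance(ex[0], features))
        return label

# Usage: the same kind of query is answered better as more data arrives.
model = NearestNeighbourClassifier()
model.learn([0.1, 0.2], "cat")
model.learn([0.9, 0.8], "dog")
print(model.predict([0.85, 0.75]))  # -> dog
```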
AI makes industrial machinery accurate, reliable, and self-healing, and paves the way for calibrated performance resembling human action. Modeling techniques locate undecided voters, identify the crops best suited to a specific topography, and verify clinical diagnoses and treatments. AI integrates with robotic controls, vision-based sensing, and geospatial systems to automate complex systems. It enhances disease prevention and treatment, boosts engineering systems, and drives self-organizing supply chains. Today, AI provides near-human customer care at the Royal Bank of Scotland and assesses insurance claims at Fukoku Mutual Life Insurance.
In fact, we now rely on machines for decision-making across processes: underwriting, recruitment, fraud detection, maintenance, and more. Real Core Energy uses machine learning algorithms that evaluate production and performance parameters to guide oil drilling operations as well as investment decisions. The 1800-Flowers.com gift concierge service uses AI to recommend gifts, combining customer interactions with macro buying trends and consumer behavior to suggest personalized gifting ideas. Philips has developed a deep learning-based automatic screening solution for detecting tuberculosis, a disease that affects 2.5 million people in India.
The human race conceded to artificial intelligence at move 37 of a game between Lee Sedol, a world Go champion, and AlphaGo in Seoul, South Korea. It took experts weeks to understand the 'wisdom' of AlphaGo's play.
The structure of artificial neural networks is inspired by the human nervous system, and it helps 'train' machines to make sense of speech, images, and patterns. DeepFace, Facebook's facial recognition system, was trained to recognize human faces in digital images using millions of uploaded images. Researchers at MIT have developed a facial recognition model that mirrors the neurological processing of the human brain.
Machines learn to think
Computational neuroscience bridges the gap between human intelligence and AI by creating theoretical models of the human brain for interdisciplinary studies of its functions, including vision, motion, sensory control, and learning.
Research in human cognition is revealing a deeper understanding of our nervous system and its complex processing capabilities. Models that offer rich insights into memory, information processing, and speech and object recognition are simultaneously reshaping AI.
A nuanced understanding of the structure of the human brain can help restructure hierarchical deep learning models. Deep learning, a branch of machine learning, is based on a set of algorithms that attempt to model high-level abstractions in data. Such models will enhance speech and image recognition programs and language processing tools by understanding facial expressions, gestures, tone of voice, and other abstractions. We are at the threshold of advances in speech technology that will make digital assistants more practical, and of facial recognition accurate enough to take security systems to the next level.
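As a rough illustration of the 'hierarchy of abstractions' idea behind deep learning, the Python sketch below (assuming NumPy is available; the layer sizes and random weights are invented for this example) passes a toy input through a stack of layers, each transforming the output of the previous one. In a trained network, those layers would be learned from data and would capture progressively higher-level features:

```python
# A minimal sketch of stacked representations. The weights here are random,
# which is enough to show how each layer builds on the previous one; in a
# trained network these weights would be learned from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One fully connected layer followed by a ReLU non-linearity."""
    return np.maximum(0.0, inputs @ weights + biases)

# A toy "image" of 64 pixel intensities flowing through three stacked layers.
x = rng.random(64)                                            # raw input
h1 = layer(x,  rng.standard_normal((64, 32)), np.zeros(32))   # first hidden layer
h2 = layer(h1, rng.standard_normal((32, 16)), np.zeros(16))   # second hidden layer
h3 = layer(h2, rng.standard_normal((16, 4)),  np.zeros(4))    # compact, high-level representation

print(h3.shape)  # (4,)
```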
However, contemporary deep neural networks do not process information the way the human brain does. These networks are highly data-dependent and must be trained to accomplish even simple tasks. Complex processes require large volumes of data to be annotated with rich descriptors and tagged accurately for the machine to 'learn.' Further, deep learning systems consume far more power than the human brain, which runs on roughly 20 watts, for the same amount of work.
We need to discover less data- and compute-intensive machine learning approaches to augment artificial intelligence with native intelligence. Our world is awash with data from Internet of Things (IoT) applications. Deep neural networks capable of consuming big data for self-learning will be immensely useful. Just as children identify trees despite variations in size, shape, and orientation, augmented intelligence systems should learn from less data or independently harness knowledge from their ecosystem to accelerate learning. Such self-learning algorithms are necessary for truly personalized products and services.
The interface imperative
The merger of human intelligence and AI will turn computers into superhuman machines or humanoids that far exceed human abilities. However, this requires computing models that integrate visual and natural language processing, just as the human brain does, for comprehensive communication.
The ability to learn language is one of the defining traits of human intelligence. Since the meaning of words changes with context, 'learning' human language is difficult for computers. AI-embedded virtual assistants can address complex requests and engage in meaningful dialogue only when they 'think and speak' the human language. Machines must learn to understand richer context to acquire human-like communication skills, and be endowed with the cognitive capabilities to interpret voice and images correctly.
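A toy Python example (the senses, cue words, and sentences below are invented for this sketch, not taken from any assistant mentioned in this article) shows why a fixed word-to-meaning lookup is not enough and why context matters:

```python
# A toy word-sense disambiguator: the same word, "bank", gets a different sense
# depending on the words around it. The senses and cue words are invented.
SENSE_CUES = {
    "financial institution": {"loan", "deposit", "account", "money"},
    "river edge": {"river", "water", "fishing", "shore"},
}

def disambiguate(word, sentence):
    """Pick the sense whose cue words overlap most with the sentence's context."""
    context = set(sentence.lower().split()) - {word}
    return max(SENSE_CUES, key=lambda sense: len(SENSE_CUES[sense] & context))

print(disambiguate("bank", "She opened an account at the bank to deposit money"))
# -> financial institution
print(disambiguate("bank", "They sat on the bank of the river fishing"))
# -> river edge
```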
AI systems such as IBM's Watson, Amazon's Alexa, Apple's Siri, and Google Assistant will become more useful as the quality of their language and sensory processing, reasoning, and contextualization improves. Voice-activated devices and smart machines will create a centralized artificial intelligence network, or 'intelligent Internet,' which will redefine man-machine and machine-machine collaboration.
In the near future, drones with built-in navigation systems will deliver goods in crowded cities. And smart home appliances will translate recipes, assemble ingredients in response to voice commands, and serve gourmet meals.
Of course, as computers become more powerful, more networked, and more human, they become capable of interacting independently with stakeholders. However, creativity and strategic thinking still differentiate the human race from artificially intelligent entities. Today, we do not fully understand the concepts that make our intelligence unique. We need a deeper understanding of how the human mind operates before we can incorporate emotional and social intelligence into machines. Human beings will continue to control everything until machines become self-referential systems. Until then, we must revisit our ecosystem, spanning education systems, skill development processes, and social welfare models, to make way for more efficient methods.
AI systems will be a force multiplier for every industry and human activity. They can transform billions of lives through myriad applications and help solve fundamental issues: cleaning the air we breathe, purifying the water we drink, enriching the food we consume, and ensuring our wellness. All that is needed is a person-to-machine interface that mimics the brain-to-brain interface.
The world will be a better place for successive generations when technology works in transformative and invisible ways. While the native intelligence of the human race has produced inventions that are now ubiquitous, the confluence of human intelligence and artificial intelligence will amplify growth and deliver sustainable progress.