
AI Can Ensure The News You Read Is Real

Credit the pursuits of biomedical engineers for developing a microscope called 'SCAPE' (Swept Confocally Aligned Planar Excitation) that can not only view groups of neurons in a living brain but can do so while the subject is engaged in an activity. With this innovation, scientists hope to gain a deeper understanding of what fuels the human brain. We can also hope that SCAPE will bring scientists closer to understanding human 'thought' and decision-making. I find it fitting that this kind of scientific achievement is happening in tandem with the development of machine learning.

That's why I was surprised by the latest scourge on the Internet: 'fake news', which largely goes undetected. People who get their news from social media sites rather than traditional newspapers or television networks are particularly susceptible, because they often don't realize that what appears on social media may not be legitimate news. These social media sites have legions of followers but take no responsibility for the fake news they disseminate. No platform is telling its users, 'Don't tune into our site,' and why would they, when their less-than-scrupulous practices are bringing them heavy traffic? Thus far, these platforms have not been held accountable for promoting fake news.

Brain Science Versus AI Development

The onus is on the followers of these social media sites to differentiate between real and fake news, which can look remarkably similar. That leads me back to SCAPE and the field of neuroscience. As any scientist will tell you, we still know very little about the brain, even after 40 years of intensive research. It's ironic that in the world of information technology, machine learning is advancing faster than the study of the human brain. Part of the reason is that our brain houses 86 billion neurons, which form a web of some 500 trillion connections. Yes, we're that complex.

Yet it's easy to fool this complex and powerful organ with fake news placed on platforms we humans have come to trust. A case in point: a fabricated story claiming that an Ebola outbreak had led to an entire Texas suburb being quarantined went viral on Facebook and was shared 339,837 times. The other issue at play here is that if the brain is unable to differentiate fake news from the real thing, think of what could someday happen to a world connected by the Internet of Things (IoT). Unless we devote more effort to robust cybersecurity powered by artificial intelligence (AI), fake news will be the least of our worries.

For example, hackers can actually turn IoT-connected devices against us. IoT is in its infancy, yet hackers have already exploited millions of personal Internet accounts by using connected devices such as Samsung refrigerators and other kitchen appliances as back doors. Entire hospital IT networks have been compromised when hackers gained access to connected medical equipment. And how did the enormous hack of the big-box retailer Target occur in late 2013? Reportedly through holes in its Internet-connected heating, ventilation, and air conditioning (HVAC) system. These are events right out of a science fiction movie.
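To make the call for AI-powered cybersecurity a bit more concrete, here is a minimal sketch of one common approach: teaching a model what a connected device's normal traffic looks like so that back-door behavior stands out. The simulated data, the two features, and the choice of scikit-learn's IsolationForest are my own illustrative assumptions, not a description of any vendor's product.

```python
# A minimal sketch of AI-assisted IoT security: learn a device's normal
# network behavior and flag readings that deviate from it. The features
# (kilobytes sent per hour, distinct destination IPs) and the model are
# illustrative assumptions, not a production intrusion-detection system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic for a smart appliance: modest data volumes
# sent to a handful of known destinations.
normal_traffic = np.column_stack([
    rng.normal(loc=50, scale=10, size=500),   # kilobytes sent per hour
    rng.integers(1, 4, size=500),             # distinct destination IPs
])

# Train an unsupervised anomaly detector on the normal behavior only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A compromised device used as a back door might suddenly exfiltrate far
# more data to many unfamiliar hosts.
suspicious = np.array([[5000.0, 40]])
print(detector.predict(suspicious))  # -1 means flagged as anomalous
```

In a real deployment, a detector like this would ingest far richer telemetry (protocols, destinations, firmware events) and hand its alerts to a security team rather than acting on its own.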

Addressing the Concern of Fake News

Just recently, one of the more prominent purveyors of fake news, Facebook, announced that it would begin to vet certain posts on its platform, bringing in a combination of algorithms and independent organizations to help it do so. This is a step in the right direction, and I think the move to vet posts to determine whether they are legitimate news stories or fabrications says a lot about how social media sites, and technology leaders in general, should respond to the world we now face.
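To illustrate what the algorithmic half of that vetting might build on, here is a minimal sketch of a text classifier that scores headlines as more or less likely to be legitimate. The toy headlines, labels, and model choice (TF-IDF features plus logistic regression in scikit-learn) are assumptions for illustration only; Facebook has not published the details of its system, and a production pipeline would combine many more signals with human fact-checkers.

```python
# A minimal sketch of algorithmic vetting: a text classifier that scores
# headlines as likely real or likely fabricated. The tiny training set and
# the model choice are illustrative assumptions, not any platform's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = legitimate reporting, 0 = fabricated story.
headlines = [
    "City council approves new budget after public hearing",
    "Health officials confirm three new measles cases in the county",
    "Scientists publish peer-reviewed study on sleep and memory",
    "Doctors don't want you to know this one weird cancer cure",
    "Entire Texas suburb quarantined as Ebola spreads, officials silent",
    "Celebrity secretly replaced by clone, insiders reveal",
]
labels = [1, 1, 1, 0, 0, 0]

# Pipeline: convert text to TF-IDF vectors, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score an unseen headline; a low probability suggests it needs human review.
test = "Entire suburb quarantined after Ebola outbreak, government hiding truth"
prob_real = model.predict_proba([test])[0][1]
print(f"Estimated probability the story is legitimate: {prob_real:.2f}")
```

A score like this would be only one signal; low-confidence stories would still be routed to the independent organizations mentioned above for human review.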

It is perhaps time for large news platforms to pause for a moment, assess the current situation, and figure out which AI-enabled security technology could best be wrapped around their proprietary, consumer-facing offerings. In that respect they are not unlike the medical researchers who have begun to make great strides studying the human brain. Content providers should not only be proactive about protecting the authenticity of information on their platforms; they must also acknowledge that for IoT to succeed, it needs robust security measures powered by AI. If they don't, well, you know the old saying: the bigger they come, the harder they fall.