
Online Safety In An Artificial Intelligence (AI) World

Stephen Balkam’s keynote address at the eSafety19 conference | Sydney, Australia

Stephen Balkam – Founder & CEO, Family Online Safety Institute

So, a few months ago, I was walking down the street in Shenzhen, in the Guangdong Province of southeastern China. I was hungry and looking for lunch. Armed with my credit card and plenty of the local currency, I strode out of my hotel to check out the many street vendors selling delicious-smelling food. Using Google Translate, I was able to order a fried fish dish, but when I went to pay, the vendor refused my credit card. Undaunted, I pulled out cash, but that too was refused. The guy pointed me to a large QR code and asked me to pay using the WeChat app. As this was my first day in China, I hadn’t yet set up the app to pay for things, so I walked away, a little embarrassed.

Still hungry, I came to a large junction and saw a promising-looking restaurant across a busy street. A little impatient for the light to change, I noticed a gap in the traffic and was about to sprint across the road when a crossing guard intervened. Not only did she stop me from jaywalking, she then pointed to a large electronic screen displaying what looked like dozens of criminal mugshots of ordinary citizens. She also pointed to a nest of three cameras strategically placed on a nearby lamppost, with similar clusters on every corner of the junction.

I later discovered that had I crossed on a red, the cameras would have picked up my misdemeanor, recognized my face and, if I were a local, fined me through my WeChat app, put a picture of my face up on the large screen facing the street and sent a note out via social media to let my friends know I had been caught. Oh, and if I were a repeat offender, my social credit score would have been docked, making it harder for me to take out a loan, travel freely or send my kids to the best schools.

Sounds like a Black Mirror episode, right? Well, yes: check out the 2016 episode called “Nosedive,” starring Bryce Dallas Howard. But this is not science fiction. This system of population control – employing artificial intelligence, facial recognition and enormous data sets – already exists in China and is being promoted and exported to emerging markets around the world.

Welcome to the future. It is a future in which, in the words of author Kevin Kelly, ordinary objects will be “cognitized,” or imbued with artificial intelligence and connected to the cloud. And we already have one foot in that future. We have connected thermostats, smart speakers, driverless cars and smart toys like Dino the dinosaur.

Dino is made by a company called CogniToys. You push its belly button and it speaks. It also listens and answers your questions in real time. Ask it what 2 plus 2 is and Dino will answer, in an authentic dinosaur voice, “It is 4. But you can ask me much more difficult questions than that!” And that’s because Dino is connected via the web to IBM’s Watson, one of the world’s top supercomputers.

Then there’s Hello Barbie. This new iteration of Barbie is connected to the web via her belt buckle to servers with over 6,000 pre-programmed responses. I told her that I wanted to be a doctor when I grew up. She said that that was a very good choice, but had I ever considered fashion? When I said I wasn’t interested in fashion, she countered, “But what’s your favorite color?” Can you see where this conversation is going?

And then there’s Hot Wheels id. The world’s most popular toy has had an AI upgrade this year. The model cars now come with an embedded chip that is read by sensors in the smart track, allowing you to play physically in the real world and virtually online with friends, relatives and complete strangers all over the globe.

So what is AI and how is it going to impact our conversations about kids, online safety and digital citizenship in the coming years? According to one definition, AI describes machines that mimic cognitive functions we associate with the human mind, such as learning, problem solving, voice recognition and translation. I should also note that machine learning is the process by which a computer improves its own performance by continuously incorporating new data into an existing statistical model. Or, to put it another way, machine learning is devoted to building algorithms that allow computers to develop new behaviors based on experience.
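For the more technically minded, here is a minimal, purely illustrative sketch in Python of what “learning from experience” means: a tiny online perceptron that folds each new labeled example into its existing weights and gets better as more data arrives. All the data points and numbers are invented for illustration.

```python
# A minimal sketch of machine learning as "continuously incorporating
# new data into an existing statistical model": a tiny online perceptron.
# The data points below are invented purely for illustration.

def predict(weights, bias, x):
    """Classify a 2-feature example as 1 or 0 using the current model."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def update(weights, bias, x, label, lr=0.1):
    """Fold one new labeled example into the existing model."""
    error = label - predict(weights, bias, x)  # 0 when the model is right
    weights[0] += lr * error * x[0]
    weights[1] += lr * error * x[1]
    return weights, bias + lr * error

weights, bias = [0.0, 0.0], 0.0

# A stream of "experience": points above the line y = x are labeled 1.
stream = [((1.0, 2.0), 1), ((2.0, 1.0), 0), ((0.5, 3.0), 1), ((3.0, 0.5), 0)]

for x, label in stream * 10:  # revisit the stream a few times
    weights, bias = update(weights, bias, x, label)

print(predict(weights, bias, (1.0, 4.0)))  # expected: 1 (above the line)
print(predict(weights, bias, (4.0, 1.0)))  # expected: 0 (below the line)
```

Systems like Watson operate at vastly greater scale, but the underlying principle is the same: update a statistical model every time new data arrives.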

And while we’re considering definitions, we should also note that the concept of AI is often split into three branches: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial superintelligence (ASI).

Examples of artificial narrow intelligence include the GPS in your car, Siri on your phone or your Nest thermostat. These AIs are very good at a narrow set of tasks, such as directing you to your destination, answering straightforward questions or regulating the temperature in your house.

Artificial general intelligence is the capacity to understand or learn any intellectual task that a human being can. Such machines would not only pass the Turing Test, but might also be considered “conscious,” like the robot Ava in the movie Ex Machina. AI experts disagree wildly on when a machine might attain AGI, with estimates ranging from 20 to 200 years from now, but the majority agree that it will happen sometime this century.

Finally, ASI is the point at which our machines simply use their general intelligence to vastly surpass our mental abilities and set off on their own, no longer needing us to program them or supply them with data. I’ll leave that scenario, whether benign or dystopian, to your imagination.

At this point, it is worth asking who the major players in artificial intelligence are, and how their biases and inclinations will influence its development and deployment.

Not surprisingly, they are mostly large multinational corporations, chiefly based in the US and China. In her new book, The Big Nine, the futurist Amy Webb labels six US companies the G-MAFIA (Google, Microsoft, Apple, Facebook, IBM and Amazon) and three Chinese companies the BAT (Baidu, Alibaba and Tencent).

In her analysis, Webb sees these companies serving two different and competing masters. For the G-MAFIA, it is Wall Street and the companies’ shareholders. The BAT companies, while they make enormous profits, ultimately have to answer to the Chinese government. She lays out three potential scenarios – one optimistic, one pragmatic and the last catastrophic – and then challenges the reader to better understand how our and our children’s data is mined and refined by the Big Nine. She also implores us to be smarter consumers of media and to hold our political leaders to account for their actions and inactions on AI.

In another recent book, AI Superpowers, Kai-Fu Lee, former head of Google China and now an investor in Chinese tech start-ups, also sees a titanic battle between the US and China over AI. This will have huge implications for both white- and blue-collar jobs, which our children will undertake, or not, in the coming decades. He calls out the advantages the Chinese have: a much larger population that generates huge datasets, which in turn improve machine learning; a very different view of personal privacy, meaning there are few if any restraints on companies collecting personally identifiable information; and a greater willingness to give the government and the Chinese Communist Party direct oversight and control of this emerging technology.

You could see these two world views clash in an extraordinary debate at the World AI Conference in Shanghai just two weeks ago. Jack Ma, co-founder of Alibaba, sparred with Elon Musk, the head of Tesla and SpaceX, over the future of AI and its implications for humanity. In Ma’s optimistic view, AI will bring only great advances for us and our kids. Musk sees it as our “biggest existential threat” and wants far greater ethical oversight of its development and deployment.

Another fascinating voice on this topic is the Israeli historian and futurist Yuval Harari. In his book Homo Deus, he suggests that our three greatest dangers are climate change, nuclear weapons and the combination of AI and biotechnology. He concludes his look into the near future with this question: “What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?” What indeed!

Now I’d like to pivot from this lofty look at artificial intelligence, and the promise and perils it poses, to the vexed question of screen time. I don’t know about here, but we in the U.S. are having a bit of a moral panic about the amount of time our kids are spending in front of their screens. That’s not to say that there aren’t legitimate concerns and plenty of anecdotal evidence about the negative effects of too much screen time. And, as the kids I talk to have told me, they’re worried about their own parents’ obsession with their devices!

But the scientific evidence is simply not there. Professor Andy Przybylski of the Oxford Internet Institute and his colleague Amy Orben have written extensively on this issue. Based on data from more than 10,000 teenagers, they found “little clear-cut evidence that screen time decreases adolescent well-being even if the use of digital technology occurs directly before bedtime.” Now, I know this flies in the face of “common sense,” and there are advocates in this space who say that not accepting the harmful effects of screen time is equivalent to being a climate denier. Well, I beg to differ. But I digress.

What I wanted to explore for a moment is the very nature of screens over the past century and how AI may eventually make them disappear. The first American cinema opened over 100 years ago in Pittsburgh, Pennsylvania.  It was set up like a theater with rows of seats stretching to the back of the hall, much as modern cinemas are today. The large movie screen, then as now, could be as much as 100 feet away from the viewer. 

About 30 years later, televisions began to appear in living rooms, usually placed along a wall with the family seated some 10 or 12 feet away. Fast forward to the 1980s, and personal computers began appearing on desktops, with screens just a foot or two away. By the mid-90s, smartphones arrived, with tiny screens we held in the palms of our hands less than a foot away. Then, just five years ago, Google Glass arrived, quickly followed by Oculus, HoloLens and Magic Leap, mixing virtual and augmented reality with the real world less than an inch from our eyes.

AI, in tandem with miniaturization and increased processing power, has brought our screens from across the hall to the bridge of our nose. But we won’t stop there. Smart contact lenses are already here: for just $44 you can purchase Google Smart Contact Lenses on Amazon to monitor diabetes from tears. There are lenses being developed to improve or fix your vision and to allow you to see text, photos and videos. It is the ultimate wearable.

But if Elon Musk and other innovators have their way, we won’t need screens or lenses at all. Musk’s Neuralink project will embed tiny computers in people’s brains to fully access the brain’s capacities and allow us to do a search simply by thinking it, with no screen or screen time involved. It is Musk’s bid to help humanity keep up with the advances in AI and allow humans to merge with the machines as they get closer to AGI.

So what does all this portend for our work in Online Safety?  How will the exponential advances in AI and machine learning impact how we think about both child protection as well as youth empowerment on the web?  What will we be doing and what will we be discussing at the eSafety conference of 2029 – just 10 years from now? 

Ray Kurzweil, inventor, futurist and author of The Singularity Is Near, has predicted that by 2029:

  • a $1,000 computer will be 1,000 times more powerful than the human brain.
  • VR glasses and headphones will be replaced with computer implants.
  • artificial intelligences will claim to be conscious and openly petition for recognition of this fact.

Whatever you think of Kurzweil and his predictions, he does point to a near future that is considerably different from today. And it is one that our children will inherit and have to make sense of. So how do we all protect, prepare and project our kids into this future?

A few years ago, we at FOSI developed a working model of online safety to address the universe of issues that arise with the use of digital technology. Put simply, we aim to acknowledge the risks and mitigate the harms while reaping the rewards of our and our kids’ digital lives. And we believe strongly that you don’t get rewards if you don’t take risks. The only way to be entirely risk- and harm-free would be to ban, lock down and exile digital technology from our homes, schools and the places where our kids are still allowed to gather. That is neither practical nor desirable.

But we are also not advocating that you put a TV, a laptop and a phone in your kids’ bedrooms (as some parents do) and walk away, hoping for the best. Nor are we opposed to government legislation in this space. Quite the contrary: we work with governments around the world who are struggling to find the sweet spot of regulations that don’t squash individual liberties (including those of young people) or technological innovation.

Instead, we promote the development of a Culture of Responsibility – one in which all of us have different, but overlapping, areas of responsibility for fostering safety online. This is not a top-down, authoritarian model, but a multi-stakeholder approach that includes the kids themselves. We advocate for reasonable government oversight and support: laws and regulations that are evidence-based, not a knee-jerk reaction to a Daily Mail headline.

We admire the work of the Office of the eSafety Commissioner here in Australia and have proposed a position of Chief Online Safety Officer for the United States, to work alongside the US CTO in the National Telecommunications and Information Administration. We have supported US bills such as CAMRA – the Children and Media Research Advancement Act – which would provide the National Institutes of Health with $95M to research the impact of media such as mobile devices, social media, apps, artificial intelligence, video games, and virtual and augmented reality. And we are working closely with the US Federal Trade Commission on its review of the COPPA Rule, which implements the Children’s Online Privacy Protection Act.

When thinking about artificial intelligence, we need to see more emphasis on online safety in the national AI plans that have emerged in recent years. Not surprisingly, these plans tend to focus on jobs, the economy and transformations of manufacturing, agriculture and the service industries. More thought and planning must be given to the risks, harms and rewards of AI that our children and young people will encounter as they come of age in this new world. Next, we need well-resourced and trained law enforcement to deal with the bad actors who will use the advancements in AI to their advantage. And while providing police with the tech tools they need, we must also remain vigilant that we do not create an unaccountable surveillance system that abuses the extraordinary power of machine learning, facial recognition and national databases.

Of course, the tech industry is a key part of building this Culture of Responsibility. Over the past two years we have seen an extraordinary backlash against failures by the tech firms – from privacy breaches to questionable content moderation policies to allowing hate speech and worse to populate their platforms. We must continue to demand robust and comprehensive industry self-regulatory efforts – tools to filter, to report, to keep posts private and to encourage positive behaviors. Their policies and practices must keep pace with the increasing scale of their operations both domestically and around the world. And there must be more than lip service to the concept of time well spent and digital well-being.

As we have seen, the Big Nine companies have a particular role to play in mitigating the harms that their AI-powered products and services may bring. It is essential that they collaborate with each other and with government and law enforcement to keep the potential harmful effects of AI to a minimum. Teachers and educators are also a key component. It is estimated that the use of AI in US education will grow by 47% between 2017 and 2021. Machine learning will be used to develop skills and testing systems, to help fill gaps in learning, and to achieve much greater personalization and efficiencies that will free teachers up to provide personal coaching and adapt to individual students. Given that our children will inherit an AI-rich world, it is essential that schools expose them to AI and use it as part of their teaching repertoire.

We will need empowered parents who can confidently navigate the online world with their kids. One of the reasons we set up our Good Digital Parenting initiative is that parents are having to make decisions about their children’s technology at an earlier and earlier age, yet they are the least well-served group when it comes to online safety messages. Just ten years ago, we advised folks to keep the family computer in a common room. That sounds quaint by today’s standards. Yet ten years from now, our messaging, tips, tools and educational materials will have to evolve dramatically to deal with the tsunami of digital devices, apps and wearables that AI will bring. Never mind the inevitable request for an implant or two!

We in the online safety community must keep our heads and not revert to fear-based messaging or dire warnings. It is incumbent on us to not only keep abreast of the dramatic technological advances that AI will bring, but also to distill what we’ve learned into clear and easy-to-use guidance and tools that a busy and harried parent can follow. 

Finally, children and young people must be brought into the discussions and decisions about what this future will look like. Of course, we must protect our young kids from the worst of the web. It is irresponsible, for instance, to hand a young child an unfiltered iPad and leave them to wander to whatever or wherever the cursor takes them.

But we will deal differently with a 17-year-old than with a 7-year-old. As a teen grows, the filters start to come off and our role as parents changes. We are more likely to monitor and positively guide them on their digital journeys rather than simply saying no. To do that, we will need to be good digital role models ourselves, showing our kids that we know when and where to switch off our own devices and how to behave positively on social media.

It won’t come as a shock to this audience when I say that many of us adults are anything but good role models. And when some of our political leaders lie, bully and harass others online, it is hardly surprising that our young people look at us and sigh. But we can use the worst abuses of Twitter and other platforms as teachable moments. We should show our kids that, yes, there are adults who behave badly online, and that when their behavior is inappropriate or abusive, we should report them.

Giving young people agency over their online lives is perhaps the greatest gift we can give them – helping them to develop resiliency and the strength to stand up to bullies, predators and others who act out inappropriately online and off. If we get this right, we will encourage a generation of young people to make wise choices about the content they access and post; about who they contact and who they allow to contact them; and how they conduct themselves online.  In so many parts of the world, we are witnessing young people using social media and new tech tools to become upstanders and not bystanders. To create social movements that address our biggest challenges. 

And while AI and machine learning pose many daunting challenges to our work in online safety, let me leave you with an inspirational story – of a young person using the power of social media to stir the conscience of millions. I am speaking, of course, of Greta Thunberg, the 16-year-old from Stockholm who began a solitary protest about climate change outside the Swedish parliament just over a year ago. After her schoolmates declined to join her Friday protests, she decided to post a photo of herself and her sign, “School strike for the climate,” on Instagram and Twitter, and gave herself a hashtag, #FridaysForFuture.

Within a week she was joined by 35 others. Then local journalists came calling and within weeks she was known and followed all over the world, inspiring other young people to organize their own Friday climate crisis strikes. As of today, Greta has 1.3 million followers on Twitter and over 3 million on Instagram.  She is in New York this week for the United Nations General Assembly having crossed the Atlantic in a carbon-neutral sailing boat. 

So, what will the future hold for her and her generation? How will they harness the promise of AI to tackle our greatest challenges – including climate change? And what legacy are we adults leaving our children? Will we be able to get to grips with the technology we have built and pass on level-headed safeguards and solutions to the problems we have created? Do we have the collective vision to anticipate the side effects and existential threats that AI and machine learning might pose? Can we keep our eyes on the prize of the extraordinary benefits and rewards that this new technology can bring?

I certainly hope so. If we could all be a little more like Greta, the world and our digital spaces would be much better places to live in.
