AI is slowly growing in both use and maturity. So, should there be controls placed on its use, its development, and its future?
As a young man, I spent a considerable amount of time reading, and my favorite subjects were science and science fiction. I enjoyed authors such as Larry Niven and Isaac Asimov, who took the time to apply hard science to their science fiction, which made it far more real and less fanciful. Isaac Asimov, in particular, was a university professor who wrote a great deal of science fiction and was best known for the autonomous robots that appear in many of his stories. Although the word “robot” originated in Karel Čapek’s play “Rossum’s Universal Robots”, Asimov seems to have become the best-known promoter of the concept of an artificial life form with artificial intelligence and self-direction.
Artificial Intelligence has evolved from its place in pure science fiction into a sort of Holy Grail for software developers seeking advanced processing capabilities today. Of course, advanced software capabilities require hardware capable of running that software, so the evolution of hardware and software in this field are closely linked. IBM’s Watson, Google AI, and Microsoft AI are all examples of Artificial Intelligence projects by major technology vendors. But what exactly defines Artificial Intelligence?
Machine Learning, Deep Learning, and Artificial Intelligence are typically lumped together under the umbrella of Artificial Intelligence, but there are differences between them. For example, the common perception is that Artificial Intelligence implies something like one of Asimov’s robots. These robots could reason much as a human does: they made decisions based on information received and perceived, applied experience to their decision-making and thinking, and resembled humans in these respects to the point where it was often difficult to tell them apart from us.
If we look at Machine Learning first, we see that it is essentially a set of computer algorithms that work together to analyze the information fed to them and produce output. The twist is that the algorithms can learn: they recognize patterns in the information being processed and apply that “learned” knowledge going forward, which is something humans do all the time.
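As a rough illustration of that pattern-learning loop, here is a minimal sketch in Python using scikit-learn. The scenario and the numbers are invented purely for illustration: a model is shown a handful of past messages labeled as spam or not, finds the pattern, and then applies it to a message it has never seen.

```python
# A minimal sketch of "learning from patterns", assuming Python with
# scikit-learn installed. The data is invented for illustration only:
# each row is [message length, number of links] and the label marks
# whether a past message was spam (1) or not (0).
from sklearn.tree import DecisionTreeClassifier

past_messages = [[120, 0], [95, 1], [30, 6], [45, 8], [200, 1], [25, 9]]
labels        = [0,        0,       1,       1,       0,        1]

model = DecisionTreeClassifier()
model.fit(past_messages, labels)      # the algorithm "learns" the pattern

new_message = [[40, 7]]               # a short message stuffed with links
print(model.predict(new_message))     # applies what it learned: likely spam
```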
Deep Learning is really a subset of Machine Learning that takes it a step further by employing neural networks (massively interconnected computing resources and algorithms) to process information in a nonlinear way (again, similar to a capability of the human mind). Nonlinear processing is very useful when you are looking for patterns that indicate a specific type of activity might be occurring (e.g. detecting fraud in monetary transactions), whereas Machine Learning on its own is very good at processing big data. Today, many products claim to use Artificial Intelligence when in reality they are using Machine Learning and, in some cases, Deep Learning.
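To see why nonlinearity matters, consider a toy pattern that no single straight-line rule can separate: flag a case only when exactly one of two indicators is present. A small neural network handles it easily. This is a contrived sketch assuming Python and scikit-learn, not a real fraud-detection model.

```python
# A toy neural network learning a nonlinear (XOR-style) pattern: flag a case
# only when exactly one of two indicators is set. No linear rule separates
# these classes, but a small multi-layer network can. Assumes scikit-learn;
# the data is contrived for illustration.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))   # typically [0 1 1 0] once training converges
```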
Taking Machine Learning and Deep Learning to the next step, true Artificial Intelligence (as most of us imagine it), is a considerable jump in magnitude, but progress is being made. IBM’s Watson, for example, has been used to build chatbots. Now, this may not sound like the pinnacle of progress we might all be hoping for from Artificial Intelligence, but these chatbots (software used to carry on conversations with humans) can be very convincing; by “convincing” I mean that they are able to mimic an actual human on the other end of the conversation. That takes more than pure Machine Learning or Deep Learning to accomplish. Does any of this get us to a world of fully autonomous robots walking around, able to make human-like decisions? Not quite, but there is also the delivery vehicle (a chassis other than a large computer system) to consider, as well as the seemingly more mundane uses of these technologies.
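For contrast, here is the crudest possible chatbot: a hand-written keyword lookup in plain Python. What makes a system like Watson convincing is that this fixed table is replaced by models that have learned language patterns from huge volumes of conversation. The keywords and replies below are made up purely for illustration.

```python
# The simplest form of chatbot: match a keyword, return a canned reply.
# Real conversational systems learn their responses rather than relying
# on a hand-written table like this one.
RESPONSES = {
    "hours": "We are open 9am to 5pm, Monday through Friday.",
    "price": "Plans start at $10 per month.",
    "human": "Let me connect you with a person.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("What are your hours?"))
print(reply("Can I talk to a human?"))
```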
In August 2019, Russia sent its Skybot F-850 (nicknamed “FEDOR”) to the International Space Station. In April 2017, a video had circulated showing this same robot accurately shooting two handguns simultaneously at targets. The robot used AI to perform menial tasks while on the space station, and Russia assured the world that FEDOR was not a “killer robot” (despite the pistol-shooting video it had filmed). Russia was later cut off by some of FEDOR’s foreign parts suppliers after the video of the pistol shooting was shared on social media.
Aside from the more futuristic applications in devices such as autonomous or semi-autonomous robots, AI has also been put to work on the software side for more than just data processing or conversing with humans; one very negative example of AI in action is the Deepfake.
Deepfakes (the term combines “Deep Learning” and “fake”) modify an existing image or video by applying someone else’s face or facial elements (e.g. the mouth) to the original image or video.
This sounds like simple image editing, but it is far more complex: a modified video, for example, can be used to make the person in the original footage say something that they did not (and might never) actually say. The same technique can also be used for more benign purposes, such as combining video elements to make a new video (e.g. for marketing). This technology, however, has been used to generate a great deal of nefarious and sometimes damaging content that looks like original content, and it is used regularly by entities such as nation-states for propaganda and counterintelligence operations, feeding the scourge of state-sponsored fake news.
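Under the hood, one common face-swap approach is an autoencoder trick: a shared encoder learns a compact “face code” from images of both people, and each person gets their own decoder that reconstructs faces from that code; swapping happens by decoding person A’s code with person B’s decoder. The sketch below shows only that structure, in PyTorch, with made-up layer sizes and a random tensor standing in for real training data; it is not a working deepfake, just the shape of the idea.

```python
# Structural sketch of the classic face-swap ("deepfake") idea: one shared
# encoder, one decoder per person. Assumes PyTorch; all sizes illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),        # shared "face code"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid()  # 32 -> 64
        )
    def forward(self, code):
        x = self.fc(code).view(-1, 64, 16, 16)
        return self.net(x)

encoder   = Encoder()
decoder_a = Decoder()   # would be trained to reconstruct person A's faces
decoder_b = Decoder()   # would be trained to reconstruct person B's faces

face_a  = torch.rand(1, 3, 64, 64)        # stand-in for a cropped face image
swapped = decoder_b(encoder(face_a))      # A's expression rendered as B's face
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```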
Because Deepfake technology can convincingly modify a human face in an image or video, it could, as recent reports and tests indicate, offer some positive uses, such as masking your face in an online image or video chat so that your true identity remains hidden, affording you a level of protection against identity theft. What is even more impressive is that your newly modified “online face” would typically be an amalgamation of facial features drawn from millions of other images, and therefore would not match any other person’s face. With facial recognition software running rampant on the Internet, this could offer protection from that technology as well.
A recently released app named Zao offered a similar capability, letting users put their own face over actors’ faces in movies or television shows. The effect is very realistic, but Zao’s privacy policy claimed full rights to any and all content created with the app (including your own facial images), and a privacy backlash resulted. The example nonetheless demonstrates how pervasive AI-based technologies have become and how fast their application is moving.
Deepfakes can, as previously noted, be used for nefarious purposes such as revenge adult content, personal attacks, bullying, and so on. Because the modified content can be made to look convincingly like the original, there are now privacy and legal disciplines dedicated to dealing with Deepfake technology and its negative consequences. From copyright infringement, privacy violations, and cyberbullying to state-sponsored fake news and propaganda, the illicit use of AI-driven technologies is already happening, and protections against the consequences of these activities are evolving to try to meet the new threats head-on. What else can be done to keep AI and its associated spin-off technologies from becoming more harmful than helpful? Well, in my opinion, universal, global standardization and international agreements on AI are critical.
A discipline known as Machine Ethics has evolved to try to define the ethical creation, use, and implementation of AI, but its primary focus is on AI in the form we typically imagine: autonomous artificial life forms of some type. Major universities, such as Duke University, offer diplomas and master’s degrees in Tech Ethics that cover everything tech-related, including AI. I currently volunteer on a Standards Council of Canada committee dedicated to reviewing standards related to IT security, and I am a fan of standardization as a way to support innovation. ISO has already taken on the challenge of building standards for AI.
ISO has published three AI standards and is developing 13 more. The topics in the ISO AI standards family include AI Management Systems, AI Systems Engineering, Trustworthiness, and even ethics.
I know that there are some who might say, “…but creating restrictive standards stifles creative development of technologies.” If that were true, though, electrical devices would never have made it past the lightbulb, and computer technology would never have evolved beyond binary code running on house-sized processing machines. Standards are never written to stifle creativity or evolution but rather to support it in a structured and planned manner.
Beyond the privacy and similar risks that mismanaged AI technology poses to individuals, there is the real threat of weaponized AI. Some may claim this has already happened, given the effect of artificially generated fake news and chatbots used to destabilize the US election process, but truly weaponized AI could do far more damage, and possibly sooner than some might think. Deepfakes can certainly be disruptive, but an AI built to adapt cyberattacks against a foreign nation-state could, for example, destabilize that country’s power grid; Deepfakes then become just one tool in a nation-state’s cyber arsenal. There was a call at the United Nations a short while ago for a ban on the weaponization of AI (similar to the UN ban on landmines), and UNIDIR (the UN Institute for Disarmament Research) has published a paper on the topic.
As with all new or evolving technologies, the correlation of threat, impact, likelihood, and risk often comes down to a human element when it comes to artificial intelligence and its products, such as Deepfakes. How these new technologies are used, the ethics surrounding them, and the need for an educated public are all crucial elements that must be considered and planned for. The basic principles of self-protection also remain crucial whenever you use technology, new or not:
- Never immediately believe what you see or read online; double- or triple-check the information first.
- Never give away your personal information online (including images of your face, your address, or the fact that you are away on vacation).
- Assume that everything you do online is effectively public and visible to everyone.
We are certainly not at a point where we need to worry about armies of machines taking over the planet from humans, but today we do have to worry about how humans are using AI and its associated technologies.