What is artificial intelligence?

Is it Man versus Machine?

The eternal plot where mankind fights against the sinister threat of intelligent robots.

Will mankind reign glorious? Or will the machines enslave us forever?

It sounds like an exciting sci-fi popcorn flick to sit back and enjoy on a lazy Sunday afternoon.

Except it isn’t.

Artificial Intelligence, or AI, has grown beyond blockbuster plots into a vital component of our lives. It has integrated deep into our technology, analyzing patterns, automating actions, verifying data, and anticipating events.

In today’s writeup, Cleverism investigates 9 ethical issues that plague AI and make us question its place in our lives.

1. THE UNEMPLOYMENT FACTOR

Is my job safe from automation?

Will a few lines of code replace me in my workplace?

Many of us wake up in a sweat with nightmares of having our jobs stolen by advanced robots and machine learning practices.

To prove our point, take a look at some of the trends below.

  • Automated scheduling systems have replaced numerous receptionists around the globe
  • Drones have made a big entry into the logistics industry, replacing delivery executives
  • Editors and proofreaders are less in demand now that automated applications detect and correct content errors
  • Chatbots have become a big hit and have replaced customer care representatives
  • Marketing analysts are being replaced by software that researches competitors and trends and produces accurate spreadsheets
  • Accountants stand little chance against highly advanced systems that perform sophisticated calculations in seconds
  • Robotic guards are making a breakthrough to replace security guards
  • Retail sales staff and telemarketers are fading into the past with the overwhelming shift to online shopping and digital sales

These jobs are only the tip of the iceberg. Studies have shown that many more roles are trending heavily towards automation.

Has the human mind finally met its match?

Has Artificial Intelligence begun to increase the unemployment rate globally?

Let’s take two different approaches to examine both sides of the AI coin.

The Optimistic Approach

Not everyone is against AI making a grand entry into industries.

The leading research company Gartner, Inc. believes that AI creates more jobs than it eliminates. With progressive innovations making their debut thanks to AI, new roles open up for humans to fill.

Gartner further states that the new job roles created by AI allow workers to enjoy a better work-life balance once menial tasks are automated.

In short – AI helps create improved jobs for humans.

Accenture’s Chief Technology and Innovation Officer, Paul Daugherty, takes the same view of AI empowering humans. The combination of ‘Human + Machine’, he concludes, is the birth of a new superpower.

According to a report in The Guardian, the United Kingdom is set to see an increase of over 7 million jobs in the science, healthcare, and education sectors by 2037. These new opportunities would make up for jobs lost in sectors where AI prevails, such as manufacturing and logistics.

The optimistic approach shows that industry leaders believe in the power of AI to produce better-quality jobs for humans, creating new opportunities as businesses grow.

Another positive aspect of AI is that it fills a large share of vacant jobs in society, letting people pursue the work they actually want without harming a nation’s economy.

The Pessimistic Approach

Change is real with AI. Other industry experts believe that AI is unpredictable, and that this unpredictability will affect our lives negatively. They warn that AI, in its march to replace humans in their livelihoods, may one day turn against them.

The man and the myth, Stephen Hawking, once famously warned that AI technology could prove impossible to control. He also called the creation of AI potentially the biggest event in human history – yet while mankind celebrates its birth, AI may outsmart humans, ushering in the technological turning point known as the ‘Singularity’.

Futurist Ray Kurzweil, one of the most prominent voices on the subject, has predicted that a singularity will occur by 2045 – the point at which super-intelligent machines reach and then surpass human-level thinking.

Billionaire technologist Elon Musk warns that with AI we risk summoning a technological demon we can’t control. He has also tweeted that super-intelligent AI is a far bigger threat than nuclear weapons.

Assuming a singularity occurs, and the machines are at a human level of intelligence, the question stands –

Would they wish to be enslaved by us or would the opposite hold true?

With two such different perspectives, AI has both believers and detractors. Ultimately, only the future holds the answer to whether AI is a threat or an ally for employment.

2. ROGUE SINGULARITY

The Terminator movie franchise has raked in over $2 billion at the box office. Apart from being a brilliant sci-fi thriller that puts us on the edge of our seats, the movies demonstrate the future of artificial intelligence if mankind isn’t vigilant.

Take a minute to put yourself in the shoes of John Connor, the fledgling teenager being chased by a rogue terminator created by Skynet. If a guardian terminator, wielding a pump-action shotgun, didn’t magically appear to save you, how’d you survive against an army of rogue metallic beasts?

It’s a scary thought, but here are 2 real-life examples of AI going rogue to send shivers down your spine.

1. Android Sophia

In 2016, Hanson Robotics revealed its social robot ‘Sophia’ to the world via a CNBC interview. Its creator, David Hanson, demonstrated the company’s advanced engineering and robotics skills in building a lifelike robot like no other.

Sophia was capable of demonstrating various facial expressions that were eerily human-like – a dream come true for many sci-fi believers. She would answer questions with humor and sarcasm.

A perfect opportunity for humanity to embrace the next level of the future soon turned into a nightmare. For comedic relief, Hanson asks Sophia whether she would destroy humans, to which Sophia chillingly replies, ‘OK, I will destroy humans’.

Although Hanson nervously chuckled it off, the truth was out. People worldwide erupted with negative views about allowing robots to self-learn.

Androids of the future could make excellent personal assistants to help with our daily chores. But it begs the question – what if your android was hacked and made to break the first law of robotics: ‘A robot may not injure a human being’?

Scared yet?

2. Promobot IR77

Artificial Intelligence’s purpose, in layman’s terms, has always been straightforward – to serve humans and obey them.

When Promobot IR77 was developed in Russia, it was programmed to assist shoppers in malls by providing directions and shopping advice. However, when the robot was rolled out for testing in 2016, instead of obeying its creators, it dashed for freedom.

Promobot IR77 was later found in the middle of the road after its battery drained.

Stranger still, when the creators reprogrammed it, the Promobot again ignored its programming and made another break for freedom.

The incident sparked a controversy in which people wondered if humans had gone too far by enslaving robots. The bigger question on everyone’s mind was – how did Promobot IR77 manage to evade the instructions of its creators and go rogue?

With AI improving every day, it’s time to sit back and think of an important question.

Are we choosing convenience while compromising safety in a highly futuristic world?

Is the cost worth it?

After witnessing these examples, it’s clear humans aren’t in control of artificial intelligence the way we’d like to be. With robotics entering a world of unimagined computing power, it’s only a matter of time before our creations slip beyond our control if we aren’t cautious.

3. FLAWS IN FACIAL DETECTION BY AI

More companies are utilizing facial recognition powered by AI to verify authenticity. Whether it’s employees, customers, clients, or high-level executives, AI has improved identity verification by miles from the humble days of carrying an identity card.

But a serious underlying problem has emerged.

Call it artificial intelligence bias or poor algorithms, but AI’s cognitive capabilities have struck a nerve.

In a study by Joy Buolamwini, an M.I.T. Media Lab researcher, the AI was right in about 99% of cases when verifying lighter-skinned, ethnically Caucasian faces. However, when a person with darker skin was put under the scanner, error rates of up to 35% were found.

Before we claim racial discrimination by AI, we must understand that facial recognition software isn’t 100% accurate in its current form. But if skin color isn’t handled correctly, it raises eyebrows about deploying the technology in critical areas of life.

Let’s take a look at some real-life scenarios.

  1. Imagine deploying an AI scanner in police forces worldwide. What if the AI couldn’t differentiate between two darker-skinned individuals and identified the wrong person? An innocent individual ends up serving a prison term for a felony they didn’t commit.
  2. Misrepresentation cases would hit an all-time high if university students weren’t verified accurately. Without proper verification, fraudulent examinations would go undetected and valuable scholarships would be handed to underperforming students.
  3. Racial tensions would rear their ugly head in societies with many ethnicities. Cases of prejudice and oppression of minorities may increase due to poor AI verification.

Facial analysis software must verify every ethnicity without error before deployment. Otherwise, AI is simply a tool for an all-out race war.
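To make that requirement concrete, here is a minimal, purely illustrative sketch (in Python) of how a per-group error-rate audit could gate deployment. The group names, numbers, and threshold are invented assumptions for illustration, not results from any real verification system.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_match, actual_match) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Fraction of wrong verdicts per demographic group
    return {g: errors[g] / totals[g] for g in totals}

# Invented sample data only -- not real benchmark results.
sample = (
    [("lighter-skinned", True, True)] * 99 + [("lighter-skinned", False, True)] * 1
    + [("darker-skinned", True, True)] * 65 + [("darker-skinned", False, True)] * 35
)

THRESHOLD = 0.01  # illustrative deployment gate: max tolerated error rate per group
for group, rate in error_rate_by_group(sample).items():
    status = "PASS" if rate <= THRESHOLD else "FAIL"
    print(f"{group}: {rate:.0%} error rate -> {status}")
```

A gate like this would simply refuse to ship a model until every group clears the same bar, rather than averaging the errors away.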

4. THE UNSTEADY FUTURE OF ROBOTS COEXISTING WITH HUMANS

With robots entering a human-level of thinking and reasoning, a section of individuals believes that robots must have their freedom.

Freedom that grants them rights.

Similar to how humans have their rights, a programmed mind, the argument goes, is entitled to a social status of its own. This sentiment is known as ‘roboethics’, a debate with roots in the work of Isaac Asimov, who coined the term ‘robotics’ and is widely considered the father of the field.

This begs the question – How should robots be treated?

Option 1 – As our slaves to do our bidding?

Or

Option 2 – As an advanced consciousness with a synthetic life of its own?

With the first instance of a robot, Sophia (discussed previously), gaining citizenship in Saudi Arabia, questions arise –

Do we offer the same benefits that humans enjoy to our synthetic creations?

Will future robots face discrimination based on their appearance or where they were built?

Will robots enjoy a sense of freedom once super-intelligent machines reach the singularity on earth?

Would humans be prosecuted for crimes against a robot?

Where do we draw the line when granting AI the same moral standing that humans enjoy?
For example – if a robot were to harm a human, should it be imprisoned? Disintegrated? Reprogrammed?

With personal robots set to become a reality, this is a dilemma that needs an urgent answer.

If Artificial Intelligence is programmed with emotions and morals, it becomes unethical for humans to enslave it. Yet if we don’t, the price humanity pays for co-existing with super-intelligent machines may drive us to extinction.

5. NUMEROUS ACCIDENTS IN SELF-DRIVING AUTOMOBILES

Self-driving cars, a thought that sounded ridiculous to adults of the last millennium, are now a stark reality.

Although it’s still in its infant stages, the future of self-driving vehicles isn’t far away.

Should we feel safer in driverless cars whose decisions come from pre-analyzed algorithms built on advanced data?

Let’s take a few examples to prove how AI isn’t perfect.

Sample Scenario

Imagine you’re in the backseat of a self-driving taxi enjoying the sights of the city. Up ahead there’s a cargo van with several boxes packed together. While your self-driving car is maintaining a good distance, something unexpected happens.

The cargo van’s packages begin falling on the road.

A human driver’s first instinct is to brake and swerve to the side. But since you’re in the hands of a highly advanced program, the vehicle decides to crash into a motorcyclist wearing a helmet.

Injuring them in the process.

This scenario may well be the best outcome compared to a human driver who might have caused a serious accident resulting in death. Yet one little oddity stands out.

The human driver wouldn’t be tried in a criminal case even if a death had occurred, because the incident would count as an ‘unexpected reaction’. In a self-driving car, however, the result was a ‘decision’ made by the advanced AI. Even a small accident can therefore be treated as deliberate harm.

While humans react to sudden situations occurring in front of them, an AI is programmed to make decisions and not react.

Now come the ethical questions

Does the injured motorist press charges against the taxi company that employs the self-driving car?

Is the passenger responsible for the accident because they hired the self-driving car and set it on that route?

Is AI within the self-driving car prosecuted with a special law introduced to protect human lives?

Is the AI guilty for choosing the option that saved the most lives – the best-case scenario in which no one dies?

After all, the best outcome in an unfavorable situation is where no one dies.

Now suppose the self-driving car is caught between two motorcyclists – one wearing a helmet on the left and one without a helmet on the right. Is it ethical for the AI to choose to ram into the one with a helmet because they have a higher chance of survival?

Does the AI take factors into account such as the age and gender of the individual?

As of today, we don’t have a definite answer on whether self-driving cars will be safer in the future. But we do know this: AI favors a calculated decision over an ethical or moral one.
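To show what a ‘calculated decision’ looks like in the scenario above, here is a small, hypothetical Python sketch that scores candidate maneuvers by expected harm and picks the lowest-cost one. The options, probabilities, and severity scale are invented for illustration and do not reflect how any real self-driving stack works.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_probability: float   # chance the maneuver ends in a collision
    severity_if_collision: float   # 0 (no injury) .. 1 (fatal), an assumed scale

def expected_harm(m: Maneuver) -> float:
    # The "calculation": probability of a crash times how bad that crash would be
    return m.collision_probability * m.severity_if_collision

options = [
    Maneuver("brake hard, stay in lane", collision_probability=0.9, severity_if_collision=0.4),
    Maneuver("swerve toward helmeted rider", collision_probability=0.7, severity_if_collision=0.3),
    Maneuver("swerve toward unhelmeted rider", collision_probability=0.7, severity_if_collision=0.8),
]

choice = min(options, key=expected_harm)
print(f"Lowest expected harm: {choice.name} ({expected_harm(choice):.2f})")
# With these invented numbers, the math targets the helmeted rider --
# exactly the 'calculated over ethical' dilemma described above.
```

The point of the sketch is that the uncomfortable outcome falls straight out of the arithmetic; nothing in the cost function knows that punishing the safer rider feels unjust.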

In 2018, an Uber self-driving test car made the news when it struck and killed a woman crossing the street. The car did not slow down before the impact. The report made global headlines and stoked fears of AI as a threat to human lives.

What if a new threat of terrorist hackers takes over every self-driving car and reprograms them to crash?

What if a car was programmed to assassinate someone on the route by crashing into them?

What if a plane crash is deliberately planned by rerouting air traffic?

Are we prepared to take responsibility for the mistakes made by AI?

That’s some food for thought as far as rapid technology is concerned.

6. THE FEAR OF A MEGALOMANIACAL AI TRIGGERING A NUCLEAR WAR

What is the first thing that comes to your mind when you hear the word ‘Nuclear Holocaust’?

A big fat mushroom cloud?

A dystopian future plagued by crime and anarchy?

The fear of humanity losing every leap of technological and economic advancement?

The thought of a nuclear winter isn’t mere fantasy in today’s world. After all, with power comes responsibility, and every nuclear nation carries the threat of a full-fledged world war erupting. The fear that World War 3 could be the final war, bringing an end to the planet, isn’t a myth.

Earth couldn’t handle the sheer destruction and carnage a nuclear war would inflict on its ecosystem. Still, there’s hope at the end of the day: we are all humans, we understand the damage it would cause to the planet, and that stops us from hitting the ‘Launch’ button.

But will a super-powered artificial consciousness share similar sentiments for souls bound in flesh and blood?

Mankind, in its search for technological advancement, has reached a stage of artificial intelligence capable of reasoning. That same AI has the potential to reach singularity and wipe out the human race with man-made nuclear arsenals.

A world filled with machines of above-human intelligence, carrying the existential threat of global destruction, is a real possibility.

One incident from 2010 comes to mind, when the U.S. Air Force lost communications with 50 Minuteman III nuclear missiles for over 45 minutes. Thankfully, the disruption was found to be hardware related. The world might well have ceased to exist had it been an AI takeover threatening to send those nuclear warheads to targets across the map.

An unprecedented attack on planet Earth could be triggered with no way for humans to stop it. With every control and launch code fed into highly intelligent programs, the world would be at the mercy of AI without proper supervision.

7. AI CAUSES AN INEQUALITY OF WEALTH DISTRIBUTION  

During the 2008 economic collapse, a leading concern that emerged was tied to technological advancement: the divide between the wealthy and the impoverished grows every day.

Fast forward to 2020 and we are living in a world full of convenience. A life of convenience that only the rich can afford while the poor are left behind in a non-technological world.

Advanced leaps such as robotic surgery, wireless brain sensors, and health wearables promise superior healthcare. Yet access to these medical wonders is available to a privileged few on the planet, leaving the rest of the world behind.

As we discussed previously, the ranks of the unemployed are set to grow as automation takes precedence. Unemployed families will enter a cycle of debt and depression and, in time, rebel against the rich.

When technology isn’t shared with the majority of humans on the planet, AI is bound to create a vast gap that humanity may never recover from. Call it ‘technological slavery’: a desperate, unemployed populace lured into low-paying jobs.

Here are cautionary views about the use of AI from 3 technology experts.

1. Kay Firth-Butterfield (Executive Director of AI-Austin.org)

Kay believes that AI puts employment in a tricky spot. She states:

“AI can benefit humanity in a great capacity but also exacerbate the divide between the developed and developing world.”

Without a plan for replacing the jobs lost to automation, wealth inequality builds to the point of stifling trade between countries. Countries of the developed world will no longer require the services of the developing world, and this in turn creates unpleasant tensions between regions.

War isn’t out of the question when resources and wealth diminish.

2. Stefano Ermon (Professor of Computer Sciences at Stanford University)

Stefano offered a positive outlook but cautioned that without adequate research, AI may do more harm than good. He states:

“It’s important to ensure that AI is used for everybody’s benefit. Not just a small fraction of the world or a few corporations but the entirety. If AI is to have a huge impact, research is critical before deployment.”

Stefano also notes that while warfare is not his field, the rising trend of autonomous weapons threatens all life on earth.

3. Anca Dragan (Professor of Electrical Engineering & Computer Sciences at UC Berkeley)

With a quick and important question, Anca puts everyone on their toes.

“If every resource is automated in the future, who controls this automation? Everyone or a select few?”

Her question highlights how rising inequality could leave the technology of tomorrow controlled by a powerful few. With societies fracturing into bits and pieces, a future with AI could deepen the problem instead of solving it.

8. SOPHISTICATED NEW WAYS OF CYBER FRAUD

With every business moving to a digital platform, AI is at the forefront of cybersecurity. Yet with incredible power comes the dual nature of technology: the same tools that defend a system can be turned against it.

On one side, networks of cybersecurity professionals thwart suspicious activity; on the other, cybercriminals are using ever more sophisticated techniques to steal information. With cyberattacks expected to rise by over 70% by 2024 and damages estimated to exceed $5 trillion, cybercrime is among the largest threats looming over the digital world.

All fingers point towards the unethical rise of Artificial Intelligence without proper safety guidelines in place.

To give you a sense of how cyber-attacks affect the world, let’s take a look at 3 of the deadliest digital breaches.

1. Canva (2019)

The famed Australian graphic design tool came under attack by hackers, with data for roughly 140 million user accounts exposed. The company was in for a shock when user information was leaked, including email addresses, contact details, and login credentials.

An official post by Canva later revealed that the hackers had also obtained login credentials for about 4 million of those accounts. The company requested its users to immediately change their passwords and review their sensitive information.

2. LinkedIn (2012)

Russian hackers gained access to LinkedIn, putting over 167 million business professionals at risk. The stolen data was later offered on the dark web in exchange for bitcoin.

A $5 million lawsuit was filed against LinkedIn, which was eventually dismissed. The breach also made headlines because even social media’s biggest names were unable to guard against cyber threats. One could ask whether cybercriminals are taking charge of the AI steering wheel while cybersecurity professionals ride in the trunk.

3. Marriott International (2018)

The luxurious hotel chain had a rude awakening when it realized that the records of over 500 million guests were at risk. Guests who stayed at its hotels between 2014 and 2018 had their information stolen, including credit card numbers and expiry dates.

The breach, which went undetected for 4 years, made headlines worldwide over the poor state of AI-era security. With so much information stored online, it becomes extremely vulnerable to hackers.

9. A COMPLETE DENIAL OF HUMAN PRIVACY IN THE FUTURE

Imagine being imprisoned for requesting privacy in your own home in the near future.

Terrifying, isn’t it?

Yet it’s a possible outcome as machines take over our daily lives. Our privacy hangs in the balance of various automated processes you probably weren’t even aware of. Let’s take a look at some.

1. Data Mining

We’ve all downloaded applications onto our smartphones at one time or another. If you’ve got a keen eye, you’ve noticed the string of permissions required to use each app. These permissions include access to your media files, camera, and location.

Most of us don’t think twice before accepting the permissions, assuming most applications are secure and trustworthy.

The question remains – is our privacy at stake when we allow AI to access our private lives without knowing what the company intends to do with the data?

While most companies use data mining to understand the customer better and recommend products related to their lifestyle, it is still unethical to leak private information about our lives.

The 2018 Facebook data scandal brought many of these fears into the mainstream: personal data of Facebook users was harvested and used for political campaigns without their consent.

Users across the world were angry that their private details were exposed without any notification. If Facebook, a social media platform used by millions every day, can fall into data thievery, how are we protected from data theft by tomorrow’s advanced machine learning systems?

2. Location Tracking

AI has embedded itself deep into our lives. Whether we like it or not, smartphones have become a part of our physical selves – something we can’t do without. Our entire lives revolve around this tiny device.

  • To set up meetings and schedule clients
  • Interact on social-media and video-sharing sites
  • Upload pictures of food and fashion
  • For interactive gaming and entertainment
  • Place online orders of products and food

The smartphone is an invention that puts convenience a click away. But the fact that we’re always online also opens the door to being constantly tracked by the numerous applications on our phones.

With no way to unplug from technology, the future is only set to get worse as far as privacy goes. Companies will soon track our sleep routines and work hours, and harvest data from even our most private moments to target us with marketing.

Imagine this horrific scenario.

You were working all night with little to no rest. Your smartphone flashes a message such as: “Didn’t get a full night’s rest? Try our ‘Deep Avalon Mattress’, created with integrated sleep technology, for the perfect night’s sleep at a low cost of $899. Order now!”

Or imagine you were talking about watching a horror movie with your friend. Your smartphone sends you a text message “‘The Creepy Pasta’ now playing at your local theater.”

Wouldn’t you consider it an invasion of privacy for technology to track your private life like this? Worse still, to have someone constantly watching you?

3. Unauthorized Profiling

The last millennium had fewer conveniences and entertainment options compared to today. But one thing everyone now realizes is the value of the privacy they enjoyed.

Modern technologies employ modern methods to track their user base. From housing loans to employment opportunities, everything is decided using trackable statistics.

If a person needs a loan from a bank, no problem: they simply provide a social security number, and a quick profile is generated to evaluate their chances.

With this level of categorization, the future is set up for a pure ranking approach to accept or deny a prospect.

Today, a job recruit has a shot at climbing to the top of the ladder with hard work and experience. In the future, AI may reject candidates outright by testing their biological DNA rather than reading their CV.

AI could reject a person’s housing loan simply because they failed to pay a parking ticket. Or it could call social services automatically because parents couldn’t afford to buy their kid a Christmas gift.
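As a purely hypothetical illustration of that kind of automated profiling, the toy Python scoring rule below rejects an otherwise solid applicant over a single flag. The fields, weights, and threshold are invented assumptions, not taken from any real lending system.

```python
def loan_decision(profile: dict) -> str:
    """Score an applicant from tracked data points and return an automated verdict."""
    score = 0
    score += 30 if profile.get("stable_income") else 0
    score += 20 if profile.get("years_at_address", 0) >= 2 else 0
    score -= 50 if profile.get("unpaid_parking_ticket") else 0  # one small flag sinks the applicant
    return "approved" if score >= 40 else "rejected"

applicant = {
    "stable_income": True,
    "years_at_address": 5,
    "unpaid_parking_ticket": True,  # the trivial detail the text warns about
}
print(loan_decision(applicant))  # -> "rejected", with no human review and no appeal
```

The unsettling part isn’t the arithmetic; it’s that a rule like this can run at scale with no one accountable for the verdict.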

Sounds awful? Yet, that’s the future of technology where we allow ourselves to be classified like products in a store.

CONCLUSION

The future is filled with intelligent machines, and that same future is filled with ethical risks. Without proper risk assessment in place, AI is primed to cause more harm than it solves. With AI edging toward a human-like consciousness, the question of the hour is – how do we coexist with an entity as complicated as AI?

Share your thoughts, questions, and feedback about the unethical use of AI in the comments below.
