Ethical and moral issues of artificial intelligence

AI and robotics are going to shape our future. Below are ten issues that professionals and researchers need to address in order to design intelligent systems that help humanity.

The flow of misinformation, combined with our natural tendency to perceive reality in line with our existing beliefs rather than the evidence (a phenomenon called confirmation bias), is a threat to an informed democracy.

Russian hackers influencing the US elections, the Brexit campaign and the Catalonia crisis are examples of how social media can spread misinformation and fake news on a massive scale. Recent advances in computer vision make it possible to convincingly fake a video of President Obama. It is an open question how institutions are going to address this threat.

The scientific revolution in the 18th century and the industrial revolution in the 19th marked a complete change in society.

For thousands of years before them, economic growth was practically negligible. During the 19th and 20th centuries, the pace of development was remarkable. In the 19th century, a group in the UK called the Luddites protested against the automation of the textile industry by destroying machinery. Since then, a recurrent fear has been that automation and technological advances will produce mass unemployment.

Even though that prediction has proven incorrect, there has certainly been painful job displacement along the way. Under these circumstances, governments and companies should provide workers with the tools to adapt to these changes, by supporting education and job relocation.

The importance of privacy has been all over the news lately due to the Cambridge Analytica scandal, in which 87 million Facebook profiles were harvested and used to influence the US election and the Brexit campaign. Privacy is a human right and should be protected against misuse. Cybersecurity is one of the biggest concerns of governments and companies, especially banks. AI can help protect against these vulnerabilities, but it can also be used by hackers to find new, sophisticated ways of attacking institutions.

Last month, a woman was hit and killed at night by an Uber self-driving car while crossing the street in the US. Like any other technological system, AI systems can make mistakes. It is a common misconception that robots are infallible and infinitely precise. A common way for some professors in my old lab to greet their robotics PhD students was, "What have you broken?"

There is an ongoing debate about controlling the development of military robots and banning autonomous weapons; an open letter signed by many AI and robotics researchers has called for such a ban.

We have to work hard to avoid bias and discrimination when developing AI algorithms.

A specific example is face detection using Haar cascades, which has a lower detection rate for dark-skinned people than for light-skinned people. Haar features look for characteristic patterns of contrast between lighter and darker image regions, and this pattern is more difficult to find in a person with dark skin.
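
For concreteness, running a pre-trained Haar cascade face detector takes only a few lines of OpenCV. This is a minimal sketch: the image path is a placeholder, and the detector's behaviour depends entirely on the data the cascade was trained on.

```python
# Minimal sketch of face detection with a pre-trained Haar cascade (OpenCV).
# "portrait.jpg" is a placeholder path; the cascade XML ships with opencv-python.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("portrait.jpg")               # hypothetical input image
if image is None:
    raise SystemExit("could not read the input image")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # cascades work on intensity contrast
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"{len(faces)} face(s) detected")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("portrait_detected.jpg", image)
```

A detector like this is only as good as its training set: if that set under-represents certain skin tones or lighting conditions, the detection rate will drop for exactly those cases.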

Haar cascades are not racist; how could an algorithm be? Existing laws were not developed with AI in mind; however, that does not mean that AI-based products and services are unregulated. As Brad Smith, Chief Legal Officer at Microsoft, suggests, "Governments must balance support for innovation with the need to ensure consumer safety by holding the makers of AI systems responsible for harm caused by unreasonable practices". Policymakers, researchers and professionals should work together to make sure that AI provides a benefit to humanity.

As you read Lowenthal's chapter 13, which discusses "Ethical and Moral Issues in Intelligence," do some critical thinking and ask yourself a few questions.

As you read through this section, consider what other general moral questions a different author with a different background might have asked, or how they might have addressed these issues differently. With the first point above in mind, reread the news release on the Penn State Graduate Certificate in Geospatial Intelligence.

Does the press release in any way relate to the discussion of ethics and "good" decision making? What does it tell you about the concerns of the Penn State faculty who had to approve the program?

University Park, Pa. The five-course, for-credit post-baccalaureate program is designed to provide students with the core competencies required to effectively and ethically provide geospatial analysis to key decision makers at defense, governmental, business and nongovernmental organizations.

Geospatial intelligence is a combination of remote sensing, imagery capture, geographic surveying and geo-political analysis. Its uses vary widely and can be applied to military planning, environmental resource preservation and even strategic retail store placement.

Since a call to significantly increase the number of geospatial analysts in government, the demand for qualified individuals has far outpaced the supply of newly qualified professionals. There is "a critical need" for this kind of educational offering, according to K. Stuart Shea, president and chairman of the U.S. Geospatial Intelligence Foundation.

Geospatial analysts tackle questions such as: Where do you place your resources? How are events on the Earth related? Rather than simply developing students' proficiency with technology, Penn State's geography faculty want to develop students' abilities in critical thinking and spatial analysis, while promoting cultural sensitivity and high ethical standards among students in the field. The capstone course for the program is a virtual field experience.

It will require students to work through a crisis situation modeled after real-world experiences, complete with unexpected curveballs thrown in by the instructors. Penn State's Geospatial Intelligence Certificate program is the first online program of its kind in the nation. The certificate requires less than two years to complete, and more information is available at this link: Graduate Certificate in Geospatial Intelligence.

We know from earlier readings that one of the mortal sins in the intelligence business is to politicize intelligence. Fifteen years ago, the Senate Select Committee on Intelligence asked me to testify at the confirmation hearings for Robert M. Gates, who had been nominated to be director of Central Intelligence. I was asked because I had worked in the CIA's office of Soviet analysis back when Gates was the agency's deputy director for intelligence and chairman of the National Intelligence Council.

More specifically, I was asked to testify because of my knowledge about the creation of a May special National Intelligence Estimate on Iran that had been used to justify the ill-fated deals known as Iran-Contra.

It seems like a long time ago now.


Iran-Contra is just one of many scandals that have come and gone in the intervening years, but its lessons still resonate today. During Ronald Reagan's second term as president, the White House and CIA Director William Casey were known for their aggressive anti-Soviet rhetoric and policies. Gates, as Casey's deputy, shared their ideology. Iran-Contra was in the planning stages then, a secret scheme in which the Reagan administration would sell arms to an enemy country, Iran, and use the proceeds to fund the anti-communist Contras in Nicaragua. To justify these actions, administration officials felt they needed some analytical backing from the intelligence community. Those in my office knew nothing of their plans, of course, but that was the context in which we were asked to contribute to the National Intelligence Estimate on Iran.

Later, when we received the draft NIE, we were shocked to find that our contribution on Soviet relations with Iran had been completely reversed.

Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better.

As these systems become more capable, our world becomes more efficient and consequently richer. Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft — as well as individuals like Stephen Hawking and Elon Musk — believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence.

In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology.

So which issues and conversations keep AI experts up at night? One is employment: the hierarchy of labour is concerned primarily with automation. Look at trucking: it currently employs millions of individuals in the United States alone.

What happens to those drivers when self-driving trucks take over? On the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice.


The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries. This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live. Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services.


But by using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money. We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create.

Roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley, only with far fewer employees in Silicon Valley.

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships.

A bot named Eugene Goostman won the Turing Challenge for the first time. In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being. This milestone is only the start of an age in which we will frequently interact with machines as if they were human, whether in customer service or sales.

While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.

Even though not many of us are aware of it, we are already witnessing how machines can trigger the reward centres of the human brain.

Just look at click-bait headlines and video games. These and other methods are used to make numerous video and mobile games addictive. Tech addiction is the new frontier of human dependency. On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions.

When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior.


However, in the wrong hands it could prove detrimental. Systems usually have a training phase in which they "learn" to detect the right patterns and act according to their input.
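
To make that training-and-prediction pattern concrete, here is a minimal sketch using scikit-learn; the dataset and classifier are arbitrary stand-ins, not anything referenced in this article.

```python
# Minimal sketch of the "training phase" described above: a model is fitted on
# labelled examples, then applied to input it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)                       # labelled examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                                # "learning" the patterns

predictions = model.predict(X_test)                        # acting on new input
print("held-out accuracy:", accuracy_score(y_test, predictions))
```

Whatever patterns the training data contain, good or bad, are exactly what the system will reproduce at prediction time.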

Yet for all the potential it has to do harm, AI might have just as much potential to be a force for good in the world.

Harnessing the power for good will require international cooperation, and a completely new approach to tackling difficult ethical questions, the authors of an editorial published in the journal Science argue.

The potential for AI to do good is immense, says Taddeo. One example is helping us understand how the brain works. Another: AI has already been used to sift through hundreds of bird sounds to estimate when songbirds arrived at their Arctic breeding grounds. This kind of analysis will allow researchers to understand how migratory animals are responding to climate change.

Another way we are learning about climate change is through images of coral. An AI trained by looking at hundreds of pictures of coral helped researchers to discover a new species this year, and the technique will be used to analyse coral's resistance to ocean warming. Yet AI is not without its problems.

In order to ensure it can do good, we first have to understand the risks. The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms.

For example, an autonomous vehicle developed by researchers at the chipmaker Nvidia went on the roads without anyone knowing how it made its driving decisions. There is also the question of who is responsible if such a system makes a mistake. Take the example of an autonomous car that's about to be involved in a crash.

The car could be programmed to act in the safest way for the passenger, or it could be programmed to protect the people in the other vehicle. Whether it is the manufacturer or the owner who makes that decision, who is responsible for the fate of the people involved in the crash?

Earlier this year, a team of scientists designed a way to put the decision in the hands of the human passenger. Another issue is the potential for AI to unfairly discriminate.


One example of this, says Taddeo, was Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections.

Over the years, a great many films have been produced worldwide about artificial intelligence.

And while some scenarios depict AI in a good light, the rest are downright horrific. In movies such as The Terminator, The Matrix, Avengers: Age of Ultron and many others, the movie industry has placed into our shared imagination scenes demonstrating how more intelligent machines will take over the world and enslave humanity or wipe it from existence entirely.

The potential for AIs to become superior to any human intelligence paints a dark future for humanity. Artificial intelligence is red hot. But what ethical and practical issues should we consider while moving full steam ahead in embracing AI technology? In our shared goal to transform business sectors using machine intelligence, what risks and responsibilities should innovators consider?

Yes, AI agents will be — and already are — very capable of completing processes parallel to human intelligence. Universities, private organizations and governments are actively developing artificial intelligence with the ability to mimic human cognitive functions such as learning, problem-solving, planning and speech recognition. But if these agents lack empathy, instinct and wisdom in decision-making, should their integration into society be limited, and if so, in what ways?

By way of disclaimer, this article is by no means meant to sway your opinion, but merely to highlight some of the salient issues, both large and small. While Kambria is a supporter of AI and robotics technology, we are by no means ethics experts and leave it up to you to decide where you stand.

A robot vacuum is one thing, but ethical questions around AI in medicine, law enforcement, military defense, data privacy, quantum computing, and other areas are profound and important to consider. One of the primary concerns people have with AI is future loss of jobs. Should we strive to fully develop and integrate AI into society if it means many people will lose their jobs — and quite possibly their livelihood?

According to a recent McKinsey Global Institute report, many millions of people worldwide could lose their jobs to AI-driven robots in the coming years. Some would argue that if their jobs are taken by robots, perhaps those jobs are too menial for humans, and that AI can be responsible for creating better jobs that take advantage of uniquely human abilities involving higher cognitive functions, analysis and synthesis. Another point is that AI may create more jobs; after all, people will be tasked with creating these robots to begin with and then managing them in the future.

One issue related to job loss is wealth inequality. Consider that most modern economic systems require workers to produce a product or service, with their compensation based on an hourly wage; those workers then spend their earnings, and the economy continues to grow. But what happens if we introduce AI into the economic flow?

Robots do not get paid hourly, nor do they pay taxes. This opens the door for CEOs and stakeholders to keep more of the company profits generated by their AI workforce, leading to greater wealth inequality.

AIs are not immune to making mistakes, and machine learning takes time to become useful.

If trained well, using good data, AIs can perform well. However, if we feed AIs bad data or make errors in their internal programming, they can be harmful.
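
As a rough, self-contained illustration of how bad data hurts a model, the sketch below trains the same classifier twice, once on clean labels and once on labels where a large fraction has been randomly corrupted; the dataset and the 40% corruption rate are arbitrary choices for the sketch, not figures from any study.

```python
# Sketch: the same classifier trained on clean vs. deliberately corrupted labels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.4                 # corrupt 40% of the training labels
noisy[flip] = rng.integers(0, 10, flip.sum())       # replace them with random digits

for name, labels in [("clean labels", y_train), ("corrupted labels", noisy)]:
    model = LogisticRegression(max_iter=2000).fit(X_train, labels)
    print(name, "-> test accuracy:", round(model.score(X_test, y_test), 3))
```

The corrupted run typically scores noticeably worse on the same held-out data: garbage in, garbage out.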

A well-known example is Microsoft's Tay chatbot on Twitter: in less than one day, due to the information it was receiving and learning from other Twitter users, the bot learned to spew racist slurs and Nazi propaganda. Yes, AIs make mistakes.

But do they make greater or fewer mistakes than humans? How many lives have humans taken with mistaken decisions? Is it better or worse when an AI makes the same mistake? The point is that the machine is not created to do exactly what we want it to do; it does what it learns to do. Jay goes on to describe an incident with a robot called Tallon.

Its computerized gun jammed and opened fire uncontrollably after an explosion, killing 9 people and wounding 14 more. Military drones raise similar questions: these remotely piloted aircraft can fire missiles, although US law requires that humans make the actual kill decisions. But with drones playing more of a role in aerial military defense, we need to further examine their role and how they are used. Is it better to use AIs to kill than to put humans in the line of fire?

What if we only use robots for deterrence rather than actual violence?

In the race to adopt rapidly developing technologies, organisations run the risk of overlooking potential ethical implications. And that could produce unwelcome results, especially in artificial intelligence (AI) systems that employ machine learning. Machine learning is a subset of AI in which computer systems are taught to learn on their own.

Algorithms allow the computer to analyse data to detect patterns and gain knowledge or abilities without having to be specifically programmed.
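
As a minimal sketch of what "detecting patterns without being specifically programmed" can look like in practice, the clustering example below is never told what any digit looks like, yet it groups similar images together on its own; the algorithm and dataset are illustrative choices, not tied to any product named in this article.

```python
# Sketch: k-means discovers groups in the data without any hand-written rules.
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

X, y = load_digits(return_X_y=True)
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

# Compare the discovered groups with the true digit labels (never shown to the model).
print("agreement with true labels:", round(adjusted_rand_score(y, clusters), 3))
```

No rule for recognising a "3" or a "7" was ever written down; the structure is inferred from the data alone.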

It is this type of technology that empowers voice-enabled assistants such as Apple's Siri or the Google Assistant, among myriad other uses.

In the accounting space, the many potential applications of AI include real-time auditing and analysis of company financials. Data is the fuel that powers machine learning. But what happens if the data fed to the machine are flawed, or the algorithm that guides the learning isn't properly configured to assess the data it's receiving? Things could go very wrong remarkably quickly. Microsoft learned this lesson when the company designed a chatbot called Tay to interact with Twitter users.

A group of those users took advantage of a flaw in Tay's algorithm to corrupt it with racist and otherwise offensive ideas. Within 24 hours of launch, the chatbot had said the Holocaust was "made up", expressed support for genocide, and had to be taken offline. With regulatory and legal frameworks struggling to keep up with the rapid pace of technological change, public demand is growing for greater transparency as to how these tools and technologies are being used.

The UK's Institute of Business Ethics (IBE) recently issued a briefing urging organisations to examine the risks, impacts, and side effects that AI might have for their business and their stakeholders, as well as wider society. Tackling the issues requires these diverse groups to work together. The research identifies a number of challenges facing business leaders.

The report also encourages companies to "improve their communications around AI, so that people feel that they are part of its development and not its passive recipients or even victims". For this to be achieved, "[e]mployees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI, and they need to be provided with the skills to do so". The report proposes a framework outlining ten core values and principles for the use of AI in business.

These are intended to "minimise the risk of ethical lapses due to an improper use of AI technologies". Companies applying AI to the finance function face the challenge of designing algorithms that produce unbiased results and are not so complex that users cannot understand how they work and make decisions.

One such product, from MindBridge, uses a hybrid of advanced algorithmic techniques to enhance a human auditor's ability to detect and address unusual financial circumstances. A key aspect of the application is that it explains why certain transactions have been highlighted and then leaves final decision-making authority to a human, said chief technology officer Robin Grosset.

This transparency is essential to avoid the "black box" problem, in which a computer or other system produces results but provides little to no explanation for how those results were produced.
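
As a deliberately simplified sketch of the alternative to a black box, an interpretable linear model can report which inputs pushed a given transaction's score up or down. The feature names and data below are invented for illustration; this is a generic technique, not a description of MindBridge's actual method.

```python
# Sketch: explaining a linear model's flag by listing per-feature contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["amount", "hour_of_day", "round_number", "weekend"]   # invented features
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

transaction = X[42]                               # one example transaction to explain
contributions = model.coef_[0] * transaction      # per-feature push on the risk score
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>12}: {c:+.2f}")
```

A human reviewer can then see which input drove the flag, say the transaction amount, and decide whether it deserves follow-up.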

In the case of machine learning, the greater the complexity of an algorithm, the more difficult it is for users to understand why the machine has made a certain decision. Human judgement is still a key component of a balanced AI system. Another challenge is to avoid bias in the algorithm and in the dataset the algorithm uses for learning.
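
One simple starting point for auditing that kind of bias is to break a model's performance down by group. The sketch below uses purely synthetic data and an invented group attribute; a real audit would use the actual attributes and fairness metrics appropriate to the application.

```python
# Sketch: compare a model's accuracy across two (synthetic) groups in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)                     # illustrative 0/1 group attribute
x = rng.normal(size=(n, 3)) + group[:, None]      # feature distribution differs by group
y = (x.sum(axis=1) + rng.normal(scale=2.0, size=n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, group, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: accuracy {model.score(X_te[mask], y_te[mask]):.3f} "
          f"on {mask.sum()} samples")
```

A large gap between the two accuracies is a signal that the data or the model treats the groups differently and needs a closer look.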

Moral de-skilling is the loss of skill at making moral decisions due to lack of experience and practice. As we develop artificial intelligence technologies that will make decisions for us, we will delegate decision-making capacities to these technologies, and humans will become deskilled at making moral decisions, unless we make a deliberate effort not to.

A comparable concern for deskilling can be found among airline pilots. With the advent of highly sophisticated autopiloting systems, it is technically possible to automate every aspect of air travel from takeoff to landing. However, airlines and pilots have elected not to do this, instead reserving autopilot only for the boring, uneventful parts of flight.

Why? Because those are precisely the parts that require the least skill. The takeoff and landing, the parts that require the most skill, are exactly the parts that pilots must not lose skill at, because if they did, they would become dependent on the autopilot. Then, if the autopilot failed, they might not be able to take over from it with an adequate level of skill, especially in an emergency.

Moral de-skilling can be thought of as one small and particular effect of a very large and long-term movement through human history and evolution.

This trend is the drive towards organization, specialization, and complexity. It exists because specialization permits efficiency, and with efficiency, energy is freed up for further, more sophisticated action, in a self-reinforcing cycle.

This can be elucidated by looking at some of the differences between more-complex and less-complex societies [3]. More-complex societies are specialized, centralized, systematized, interdependent, organized, and efficient, and yet brittle and fragile: we can do much more collectively if we each do less individually. Less-complex societies are less specialized, decentralized, less systematized, independent, less organized, and inefficient, yet tough and robust.

We can live with less risk if we all know how to do everything, but the trade-off is a lack of coordination and cooperation, failure to do complex things, and immense wasted talent and production.

Knowledge that is not practiced is lost. Take, for example, something as simple as growing food. How many of us could grow enough food to survive, or hunt or fish well enough to survive? Not very many. This means that we are deskilled: the people of the past could do things that we are no longer capable of, at least not without significant training and preparation.


This is partly a good thing! It means that we are free to do other more-complex work. But as a side-effect, we are in many ways unskilled compared to humans at previous levels of technology. In the past it was much easier to specialize humans towards particular jobs than it was to specialize technologies towards particular jobs.

Humans were the best intelligences and the best muscles around (except, sometimes, animals). But those days are slowly ending: specialization is now moving away from humans and towards technology. All of this specialization of human skills into machines allows for incredible efficiencies, never before seen in human history.


We can achieve immense productivity with much less labor. While in the past almost all human labor went into producing food, now relatively little human labor is involved in producing food.

Automation and machines have revolutionized society. Continuing in this vein, AI is an amplifier and an accelerant: it takes what we want and gets it faster and more effectively than ever before. If technology is nature sharpened, like a stone knife or a pointed stick honed to a precise use, then AI is natural human intelligence, sharpened.

