Parnas, D. L. (2017). The Real Risks of Artificial Intelligence: Incidents from the early days of AI research are instructive in the current AI environment. (JOSHUA ERDMANN)

From Digital Culture & Society


Article Review: The Risks Of Artificial Intelligence

Computers have been around for decades. Their primary function was originally to perform numerical calculations, but today they are used for tasks like processing text, images, and sound recordings (Parnas, 2017). For example, it was once rare to see a program that could play chess autonomously without help from a person, but thanks to advances in technology, computer programs now have the power to compete with the best players in the world (Parnas, 2017).

With that said, this form of intelligence has been classified as artificial intelligence (AI); researchers claim that the programs used in washing machines and on our mobile devices act as "personal assistants," helping people with their everyday work. Recently, scholars in the academic world have been discussing the growing dangers of AI, which could potentially render human intelligence obsolete (Parnas, 2017). This means people must become increasingly aware of the potential applications of AI programs, because those applications can lead to devices and systems that are untrustworthy or even dangerous (Parnas, 2017).

In the article The Real Risks of Artificial Intelligence, David Parnas examines the risks of AI and the potential implications AI can have on society. His main argument is that many AI systems rely on "heuristic programming" (Parnas, 2017). In the article, a heuristic program is defined as one that does not always get the right answer: such programs make their decisions in a rule-of-thumb fashion, and they can only understand and respond to what they have been programmed to handle. This has major implications for AI development because, unlike computers, humans can use a rule-of-thumb approach to problem solving safely; when a rule suggests something stupid, most people recognize that and alter their decision for the better. Computers, on the other hand, follow programmed rules without questioning the effects of their decisions (Parnas, 2017).
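The rule-of-thumb behavior Parnas describes can be sketched with a toy example. The following is a minimal illustration only, assuming an invented keyword-based spam filter (the trigger words and messages are not from the article): the program applies its rule consistently, even in cases where a person would immediately see the answer is wrong.

```python
import re

# Invented rule of thumb: any message containing a trigger word is "spam".
TRIGGER_WORDS = {"free", "winner", "prize"}

def looks_like_spam(message: str) -> bool:
    """Apply the heuristic blindly: a trigger word means spam, full stop."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(words & TRIGGER_WORDS)

# The heuristic is often right...
print(looks_like_spam("You are a winner, claim your free prize"))  # True
# ...but it follows its rule even when the result is clearly wrong:
print(looks_like_spam("Is this seat free?"))  # True, though a person sees this is not spam
print(looks_like_spam("Congratulations on your promotion"))        # False
```

A human reader would question the second result; the program cannot, because questioning its own rule is not part of the rule.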

The article then provides multiple examples of how AI has been used, both today and in the past, while still carrying this issue of being heuristic and untrustworthy (Parnas, 2017). One of the examples the author raises is character recognition. This task is effortless for humans, but computer systems find it very difficult, because programs can only recognize information based on the rules ascribed to them at the time. Without the proper experiences to draw from, computers are often unable to read words unless the characters come from a specific log of fonts.
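The font-lock problem described above can be illustrated with another small sketch. This is an assumption-laden toy, not Parnas's example: the glyph "templates" stand in for one trained font, and any character rendered differently falls outside the program's experience.

```python
# Invented one-font "template log": each letter is known only by one exact glyph.
KNOWN_GLYPHS = {"A": "/-\\", "B": "|3", "C": "("}

def recognize(glyph: str) -> str:
    """Match a glyph against stored templates; no exact match, no answer."""
    for letter, template in KNOWN_GLYPHS.items():
        if glyph == template:
            return letter
    return "?"  # unreadable: outside what the program was given

print(recognize("/-\\"))  # "A" -- the trained rendering
print(recognize("/\\"))   # "?" -- a human still reads this as A; the program cannot
```

The point of the sketch is the gap it exposes: the human reader generalizes past the trained font, while the program is limited to the rules (here, exact templates) it was given.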

Another example in the article concerns how computer programs understand drawings. Such systems again follow a heuristic approach, and when they are actually tested in real working environments they are more often than not faulty and untrustworthy. The implications of this are massive: if computer software continues down this path of possibly giving the wrong answer, many mistakes will be made by future programs, because a computer cannot diverge from the rules previously set for it.

This example leads to the argument about robot ethics. If computer programs are only accountable to the rules set for them, the big question is whether AI will treat people ethically. This furthers the question the paper is asking: "are machines trustworthy enough to be used?" (Parnas, 2017). That said, some AI systems do almost always work, but the real issue is that people may start relying on technology for everyday tasks that could easily be completed by themselves, gradually making human intelligence unneeded as people simply rely on machines to complete tasks for them.

The biggest strength of this article lies in the examples the author gives of faulty AI programs and of the problems that could arise in the real world if our lives revolved around heuristic technology. The article clearly underlines the issue that we don't need machines to do things that people already do; we need machines to do things people won't or can't do, because humans can complete the former tasks instantaneously and without effort. Instead, the author pushes the notion that AI researchers should focus on problems we can't solve by ourselves, rather than trying to push everything toward an autonomous, hands-free model of artificial intelligence.

The biggest weakness of this article is the lack of recent examples. Although the examples given were informative and useful, the author does not look at many machine learning or AI systems that do work consistently and are already in place to help reduce the workload of people's everyday lives. Products like cellphones and laptops are massively circulated at the consumer level; they are deeply embedded in daily life and draw on AI systems on an almost constant basis, through things like text messaging and playing video games. For this article to be more persuasive in arguing that AI systems are heuristic, the author should have gone more in depth on the technology available today.


References

1. Parnas, D. L. (2017). The Real Risks of Artificial Intelligence: Incidents from the early days of AI research are instructive in the current AI environment. Communications Of The ACM, 60(10), 27. doi:10.1145/3132724
