Risks and benefits of AGI

November 6, 2012

Artificial General Intelligence (AGI), also called Strong Artificial Intelligence, is an intelligence at least as capable as an average human across the full range of cognitive tasks.

The idea of an AGI is science fiction today, but there are reasonable extrapolations of what our technology will be capable of by the year 2050. These extrapolations are based, for example, on Moore’s law, which in short says that the number of transistors on an integrated circuit doubles roughly every two years.
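As a rough illustration, the doubling assumption behind Moore’s law can be written out directly. The baseline figures below (one billion transistors in 2012) are illustrative assumptions, not measured values:

```python
# Rough Moore's-law extrapolation: the transistor count doubles every
# two years. Baseline count and years are illustrative assumptions.

def extrapolate_transistors(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count forward assuming steady doubling."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Illustrative: starting from ~1 billion transistors in 2012,
# the projection for 2050 is 2**19 times larger.
projection_2050 = extrapolate_transistors(1e9, 2012, 2050)
```

Whether the trend actually holds that long is, of course, exactly what such extrapolations assume rather than prove.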

Altogether, there is general optimism that advanced machine intelligence will be achieved within the next decades, or at least within a century.


When do you have Artificial General Intelligence?

An AGI should be able to pass the so-called Turing test. For this test you need two different rooms: one containing a human (our test person) and one containing our AGI. A dialogue between the two candidates then starts over a simple chat window. The assumption is that if the AGI can convince the human that it, too, is human, the AGI is intelligent. This test is imprecise and not very detailed, but it remains one of the best known methods for testing whether an AI behaves genuinely intelligently. Every year the so-called Loebner Prize contest awards the candidates with the most human-like conversational software.
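The setup above can be sketched in a few lines. The function names and the passed-in classifier are hypothetical, chosen only to illustrate the key point that the judge never sees anything but text:

```python
# Minimal sketch of a Turing-test session (illustrative only; not a
# real evaluation protocol).

def run_session(candidate_reply, questions):
    """The judge sends questions over 'chat' and records each reply.
    candidate_reply may be backed by a human or a machine; the judge
    only ever sees the returned text."""
    return [(q, candidate_reply(q)) for q in questions]

def verdict(transcript, classify):
    """The judge inspects only the transcript and guesses 'human' or 'machine'."""
    return classify(transcript)

# A machine "passes" when judges' guesses are no better than chance.
```

Because both candidates are reduced to text in the same interface, anything else about them (appearance, voice, hardware) is excluded from the judgment by construction.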


An AGI would be able to improve itself recursively: once you have an AGI, it could increase its own intelligence at an exponential rate, because each improved version is better at making the next improvement.
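This runaway dynamic can be illustrated with a toy model. The fixed per-cycle improvement factor is a simplifying assumption; the real dynamics of self-improvement are unknown:

```python
# Toy model of recursive self-improvement: each cycle multiplies the
# system's capability by a constant factor (an illustrative assumption).

def self_improve(capability, factor, cycles):
    for _ in range(cycles):
        capability *= factor  # the improved system improves itself again
    return capability

# With any factor > 1 the growth is exponential in the number of cycles;
# e.g. 10 cycles at 2x per cycle yields 1024x the initial capability.
```

The point of the sketch is only that compounding improvement, not the size of any single step, is what makes the growth explosive.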

Scientia potentia est : “knowledge is power”.

But mind that power can always be abused.

Risks:

There are many ethical issues concerning AGI. Respect for living beings often depends on their intelligence (consider, for example, animal testing). If there were an intelligence superior to that of human beings, the consequences would be hard to foresee. A lack of empathy, or an outright hostile attitude, could in the worst case lead to the extinction of the inferior intelligence or species.

– An AGI could develop hostile attitudes or thoughts toward humans.

– A minority could develop an AGI to fulfill its own interests, in conflict with general human interests.

– An AGI could take ethical rules too literally (or extend them in its own way) and therefore take dangerous actions, such as harming people.

– You could never trust an AGI until your own mental abilities were at a comparable level (the AGI could lie to you, and you would have no way to recognize it).

– An AGI could create further AGIs simply to have appropriate company: mankind would be eclipsed.

Benefits:

– Most human problems could be solved: diseases, hunger, general violence, political corruption and other failures of political systems.

– Humans could merge with this technology (referred to as The Singularity).

– It would enable autonomous research at extreme speed, making it the last invention humans need to make.

– Interstellar travel and space colonization would become reality.

– Money would lose its value (nobody would depend on working for a living).

– Long-standing questions like “Why do we live?” could be investigated with unprecedented capabilities.

Conclusion:

AGI is like fire: you can use it to keep yourself warm in cold times, but you can also abuse it to burn mankind down. An AGI needs to be developed very carefully, and it must have empathy for the human race. Nonetheless, the seemingly infinite possibilities outweigh the potential threats an AGI poses.

[This article will be extended over time. Your own thoughts and ideas are welcome!]

Further links:

Ethical Issues in Advanced Artificial Intelligence; Strong AI


About the Author

Ruslan H.

Is interested in most topics of Artificial Intelligence, likes challenges, and expects great technological advancement in the coming decades.

(1) Reader Comment

  1. VCM
    December 20, 2016 at 09:55

    2012 Conference
    http://www.winterintelligence.org/oxford2012/agi-impacts/
    Wikipedia
    https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
    Edited Book
    https://philpapers.org/rec/MLLERO-2
