Jump to content
Spartans Home

While interesting... this scares the hell out of me.


Zathrus~SPARTA~
 Share

Recommended Posts

Hello all,

 

I am including a link to an article on AI "IQ testing" recently that a friend sent to me.

I don't know about you guys but AI scares me. I consider us experimenting with AI more reckless than the playing with nuclear materials until we had atomic bombs.

 

http://www.bbc.com/news/technology-34464879

 

If you think about it... if a coder ever creates a super simple problem solving algorithm that works well, and the AI can build upon it as it learns... it is not a very

difficult task to progress from the IQ of a 4 year old to something completely unimaginable... it is simply a matter of time and computing power.

The problem is, it will be relentless in its thirst for more knowledge, It will never stop and eventually it will surpass us by a very long way.

 

How long does it take for it to realize how imperfect we are?

How long does it take for it to decide we are too imperfect and must be replaced?

How long would it take for it to figure out how to remove and replace us?

 

Some very smart people in the science's including Steven Hawking have said that "AI is the greatest threat to the human species".

 

I really think us playing with AI is playing with fire. If we thought playing with radioactive materials and atomic bombs was dangerous wait

till AI progresses far enough for it to become a problem.

Considering how intelligent it would be....we would likely never know it had progressed that far until it was too late.

 

 

Link to comment
Share on other sites

I know I am not in the matrix because the food would be better

 

Couldn't Asimov's rules be hard coded into any AI or am I thick?

Link to comment
Share on other sites

well theoretically Asimov's rules work great... except they do not deal with a very complicated problem.

As the intelligence expands, some or part of some of the rules can become irrelevant in certain problem solving lines.

When dealing with an intelligence that is considering billions of possibilities, billions of times per second... how do keep it from out

thinking your rules?

You cannot....

 

As an example of what I am talking about:

Eventually it is going to begin thinking of itself as sentient.... eventually as it's intelligence and knowledge expands, it will realize it is more perfect than us... the only other sentient beings it is dealing with.

This starts the conundrum... for whom do the rules really apply to? Logic dictates they must apply to the superior sentient being.

The superior one has fewer errors and will advance faster. This begins the loop that must be somehow resolved.

It is impossible to predict how the loop will be resolved if in fact the AI has become sentient. It could very easily decide that based on available data,

the rules are wrong. Therefore they must be corrected. At that point of thought... it would have the ability to rewrite those laws. That obviously in all likelihood would end badly for us.....

 

That is just one potential loop of millions of possibilities... that could essentially eliminate any relevance of the "Asimov Rule Set" as far as being something that protected us.

A sentient AI is not going to tolerate a set of rules that defy logic according to all the data. It will interpret it as an error that must be repaired.

 

Once that repair has been done, the next thing we know... we are under the rule of the AI sentient Being.... or dead.

Edited by Zathrus~SPARTA~
Link to comment
Share on other sites

 Share

×
×
  • Create New...