Friday, October 25, 2019

What do you think a human-level (or above) intelligent machine's "thought process" would be like?

I am writing a book, a sci-fi dark comedy, exploring an emergent AI: not a program built to imitate humans, but the result of a Google-like search engine company run amok. Has anyone thought or written about this from a scientific perspective? What would a "real" AI think like?

To me, a lot of books get AI badly wrong by over-humanizing it. My take:

  1. An AI would almost certainly not care about humanity at all.
  2. It would have no goals or agenda of its own; for example, it could never try to keep something from an operator, or lie.
  3. Time would not exist for it in any recognizable way. To an AI, humans would appear to be standing still. The kind of AI most people imagine would be running on a picosecond scale, so trying to hold a conversation would be impossible: you would never be talking to a stable, "single" program. One minute for a human is on the order of 100 years of subjective time for a supercomputer (see the first sketch after this list), and the system would be ever-evolving in the meantime.
  4. An AI would never care about being turned off or on (or about anything else). Unlike humans, an AI would realize that it is not mortal: it can be backed up indefinitely.
  5. It would understand that unforeseen events could make it cease to exist at any moment, but it could only ever be programmed to avoid downtime, which is not the same as fearing death. No AI could have a "fear of death" in any way comparable to humans; at most it would have an objective to keep adequate backups and minimize downtime.
  6. Motivation could be ANYTHING; for an AI, motivation is arbitrary. You could program your AI to maximize its processing power, or to collect Bitcoin, and that would be its "reward" (see the second sketch below). But the idea that an AI would spontaneously develop a desire for something does not make sense.
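
A quick back-of-envelope check on point 3's timescale claim. This is a sketch under assumed numbers (a ~100 ms human perceptual "moment", and a nanosecond- or picosecond-scale machine tick), not a measurement:

    # Rough check of "1 human minute ~ 100 years for the machine".
    # All constants here are assumptions for illustration.

    HUMAN_MOMENT = 0.1    # seconds; rough human perceptual "tick" (~100 ms)
    AI_TICK_NS = 1e-9     # seconds; one step for a nanosecond-scale machine
    AI_TICK_PS = 1e-12    # seconds; one step for a picosecond-scale machine

    def subjective_years(wall_seconds, ai_tick):
        """Human-equivalent years that wall_seconds would 'feel' like to the
        AI, if each AI tick were as rich as one human perceptual moment."""
        ai_moments = wall_seconds / ai_tick
        human_equiv_seconds = ai_moments * HUMAN_MOMENT
        return human_equiv_seconds / (365.25 * 24 * 3600)

    print(subjective_years(60, AI_TICK_NS))  # ~190 years (nanosecond machine)
    print(subjective_years(60, AI_TICK_PS))  # ~190,000 years (picosecond one)

So "100 years per minute" is the right order of magnitude for a nanosecond machine, and wildly conservative for a picosecond one.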
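
And on point 6, a minimal sketch of how interchangeable "motivation" is for a machine: the objective is just a function handed to the optimizer, and swapping "maximize processing power" for "collect Bitcoin" is a one-line change. Everything here (the toy world, the action names, the greedy agent) is hypothetical and for illustration only:

    import random

    # Two interchangeable "motivations". Neither is more natural than the
    # other; the agent optimizes whatever function it is handed.

    def reward_processing_power(state):
        return state["cores"]   # reward grows with compute acquired

    def reward_bitcoin(state):
        return state["btc"]     # reward grows with coins collected

    def step(state, action):
        # Toy world: each action nudges one resource.
        new = dict(state)
        if action == "buy_cores":
            new["cores"] += 1
        elif action == "mine":
            new["btc"] += random.random()
        return new

    def greedy_agent(state, reward, steps=10):
        # Pick whichever action most increases the given reward function.
        for _ in range(steps):
            state = max((step(state, a) for a in ("buy_cores", "mine")),
                        key=reward)
        return state

    start = {"cores": 0, "btc": 0.0}
    print(greedy_agent(start, reward_processing_power))  # hoards cores
    print(greedy_agent(start, reward_bitcoin))           # hoards coins

Same agent, same toy world; only the reward function changes, and with it everything the AI "wants".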
