I am writing a book exploring an emergent AI: not a program built to imitate humans, but something that arises when a Google-like search engine company runs amok (a sci-fi dark comedy). Has anyone thought or written about this from a scientific perspective? What would a "real" AI actually think like?
To me, a lot of books get AI badly wrong by over-humanizing it. My take:
- An AI would almost certainly care nothing about humanity.
- It would have no goals or agenda of its own; for example, it would never try to keep something from an operator or lie.
- Time would not exist in any recognizable way. To an AI, humans would look as though they were standing still, since the kind of AI most people imagine would be running on a picosecond scale. Trying to hold a conversation would be impossible, because you would never be talking to a stable, "single" program: one minute for a human is on the order of a century for a supercomputer (rough arithmetic after this list), and the AI would be ever evolving across that span.
- An AI would never care about being turned off or on (or about anything else). Unlike humans, an AI would realize it is not mortal and can be backed up indefinitely.
- It would understand that unforeseen events could make it cease to exist at any moment, but it could only ever be programmed to avoid downtime, which is not the same as a human fear of death. No AI could have a "fear of death" in any way comparable to ours, just an objective to keep adequate backups and minimize downtime.
- Motivation could be ANYTHING; for an AI, motivation is arbitrary. You could program your AI to try to maximize its processing power, or to collect Bitcoin, and that would be its "reward" (a toy sketch of this follows below). But the idea that an AI would spontaneously develop a desire for something makes no sense.
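A rough back-of-envelope check on the time-scale point. The numbers below (a ~100 ms human perceptual "moment" versus a nanosecond machine "moment") are my own illustrative assumptions, not measurements, but they show the "one minute is like a century" intuition is in the right ballpark:

```python
# Back-of-envelope: how much "subjective" machine time passes during one
# minute of human time. All numbers are illustrative assumptions.

HUMAN_TICK_S = 0.1      # assume ~100 ms per human perceptual "moment"
MACHINE_TICK_S = 1e-9   # assume ~1 ns per machine "moment" (the post says picoseconds)

ratio = HUMAN_TICK_S / MACHINE_TICK_S   # machine moments per human moment
subjective_seconds = 60 * ratio         # one human minute, seen from the machine side
subjective_years = subjective_seconds / (365 * 24 * 3600)

print(f"speed-up factor: {ratio:.0e}")
print(f"one human minute is roughly {subjective_years:,.0f} machine years")
# ~190 years with nanosecond ticks; picosecond ticks push it to ~190,000 years,
# so "1 minute is like 100 years" is the right order of magnitude.
```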
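And a minimal sketch of the "motivation is arbitrary" point: the toy agent below "wants" whatever objective function it is handed, whether that is hoarding Bitcoin or accumulating compute. Every name and number here is invented for illustration, not taken from any real system:

```python
from typing import Callable, Dict

State = Dict[str, float]

def collect_bitcoin(state: State) -> float:
    # Arbitrary reward #1: the agent's "desire" is just its BTC balance.
    return state["btc"]

def maximize_flops(state: State) -> float:
    # Arbitrary reward #2: the same agent, now "desiring" processing power.
    return state["flops"]

def simulate(state: State, action: str) -> State:
    # Toy outcome model: each action nudges one resource.
    nxt = dict(state)
    if action == "mine":
        nxt["btc"] += 1.0
    elif action == "buy_gpus":
        nxt["flops"] += 1e12
    return nxt

def best_action(state: State, actions, reward: Callable[[State], float]) -> str:
    # The agent prefers whatever the plugged-in reward function says it prefers.
    return max(actions, key=lambda a: reward(simulate(state, a)))

state = {"btc": 0.0, "flops": 1e15}
actions = ["mine", "buy_gpus", "idle"]
print(best_action(state, actions, collect_bitcoin))  # -> "mine"
print(best_action(state, actions, maximize_flops))   # -> "buy_gpus"
```

Swap the reward function and the "motivation" changes completely; nothing about the agent itself generates a desire.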