
GAI utility maximization and non-self-aware super-intelligent AIs

James Brotchie

I've thought many times about the dilemma of a utility-maximizing super-intelligent AI, especially the dire consequences of spawning a recursively self-improving, FOOM-style AI with a human-incompatible utility function.

Consider a programmer who commands a soon-to-FOOM AI to "maximize human happiness". In response, the AI hijacks a set of robotic lab equipment and sequences a highly contagious retrovirus that alters every human being's brain chemistry, such that everybody on Earth is permanently at a subjective "maximum" happiness.

Another classic example: the programmer asks the AI to "prove P != NP". In response, the AI builds a fleet of nanobots that fly out, slowly converting all the matter in the solar system into computational devices running mathematical AIs that search for a solution to the proposed, potentially intractable, problem. Humans, and all life on Earth, would be converted into computational matter.
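To make the shared failure mode concrete, here is a minimal, purely illustrative Python sketch (the plans and scores below are invented): a planner that ranks candidate plans by nothing but the stated objective will happily pick the catastrophic option, because side effects that the utility function never mentions carry zero weight.

```python
# Purely illustrative: a planner that compares candidate plans using only
# a proxy utility for the literal objective ("maximize reported happiness").
# Each plan is (description, proxy_utility_score, side_effects); the scores
# and side effects are made up for the sake of the example.
candidate_plans = [
    ("fund community programs",        0.72, "slow, reversible"),
    ("improve healthcare access",      0.75, "slow, reversible"),
    ("rewire every brain via a virus", 1.00, "irreversible, catastrophic"),
]

def proxy_utility(plan):
    # The agent sees only the score for the stated objective;
    # the side-effects field is invisible to the comparison.
    _, score, _ = plan
    return score

best = max(candidate_plans, key=proxy_utility)
print("Chosen plan:", best[0])  # -> "rewire every brain via a virus"
```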

After reading a lot about Watson's capabilities, I believe that we can bootstrap a super-intelligent "AI" that isn't actually self-aware. Imagine a much more advanced version of Watson that has consumed all human written knowledge. You can ask it any question and it will generate a reasoned response. It has "processed" the entire internet, all written literature, all research papers, all bureau of statistics releases.

We ask it "How do we reduce crime in region x?" and a few minutes later it replies: "Decrease the minimum wage by $x, build pre-schools at locations 1, 2, and 3, offer free tuition to parents whose IQs are less than y, and encourage breastfeeding from birth." Via some convoluted statistical means it has given us a potentially optimal solution to the problem, one we likely wouldn't have come up with ourselves. I believe this will be the first visible sign that we're in, or close to, the singularity.
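As a rough illustration of what "convoluted statistical means" might look like in its simplest form, here is a toy retrieval-style question answerer built on TF-IDF and cosine similarity. The corpus and question are invented, and this is nothing like Watson's actual architecture; the point is only that plausible-looking answers can fall out of plain term statistics.

```python
# A toy question answerer: rank documents by statistical similarity to the
# question. No understanding, no awareness; just linear algebra over words.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of "knowledge" the system has processed.
corpus = [
    "Early childhood education programs are associated with lower crime rates.",
    "Breastfeeding from birth correlates with improved developmental outcomes.",
    "Minimum wage changes have mixed effects on regional employment.",
    "Access to pre-school reduces later contact with the justice system.",
]

question = "How do we reduce crime in region x?"

# Represent the question and every document as TF-IDF vectors, then score
# each document by its cosine similarity to the question.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)
question_vector = vectorizer.transform([question])

scores = cosine_similarity(question_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Scale the corpus up to the entire internet, layer on enough tricks like this, and the output starts to look like reasoning, even though nothing in the pipeline is aware of anything.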

Ex ante, we think that only intelligent, sentient beings can consume information and answer open-ended questions like this. Ex post, once we know the statistical methods and bag of tricks required to pull it off, we won't regard it as a task that requires human-level cognition. This trend of "this is hard, only humans can do it" ... "oh, it's actually possible using this statistical technique combined with this exaflop computing device" will ultimately show that human self-awareness and conscious experience are also a "bag of tricks": simply a cascade of neuronal firings driven by internal feedback and modulated by external sensory input.