The AI-Box Experiment

When we build AI, why not just keep it in sealed hardware that can’t affect the outside world in any way except through one communications channel with the original programmers? That way it couldn’t get out until we were convinced it…

Similar

Complexity No Bar to AI

Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to sm... (more…)
