CrazyEngineers
  • Google Is Making Sure It Can Stop Artificial Intelligence When It Steps Out Of Line

    Updated: Oct 26, 2024
    A favourite Hollywood plot twist involving artificial intelligence is the moment the computer mind becomes sentient and stops taking orders from its human overlords. In the movies, shutting down the unruly AI calls for a whole lot of action; in real life, it takes some genius minds sitting in a research lab. Google’s famous DeepMind division, whose AlphaGo program defeated the legendary player Lee Sedol at the strategy game Go, is researching ways to shut down AI systems before they learn how to shut humans out.

    Google DeepMind

    The team at London-based DeepMind, led by research scientist Laurent Orseau, together with Stuart Armstrong of Oxford University's Future of Humanity Institute, has published a research paper called “Safely Interruptible Agents” on the Machine Intelligence Research Institute website. It discusses situations in which a human supervisor may need to press a big red button to stop an AI agent from carrying out a sequence of actions harmful to the agent itself or to its surroundings. Pressing the button does not make the agent self-destruct (as it would in the movies) but reverts it to a safer setting. The main purpose of the research is a framework in which a human controller can repeatedly and safely shut down the AI while ensuring that the AI never learns to prevent the interruption. Of the many reinforcement learning algorithms in use, the team shows that some are inherently interruptible while others need to be modified before they can be safely interrupted.
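    To make the idea concrete, here is a minimal, hypothetical Python sketch of an interruptible learning loop. The toy environment, reward values, and the interrupted() rule are illustrative assumptions, not taken from the paper. It uses off-policy Q-learning, which the paper identifies as safely interruptible: because the update targets the best next action rather than the action actually taken, overriding the agent's choice with a safe action does not distort the values it learns.

```python
import random
from collections import defaultdict

# Minimal sketch (not the paper's formal framework): a Q-learning agent
# whose chosen action can be overridden by a human "big red button".
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["sort_inside", "carry_outside"]
SAFE_ACTION = "sort_inside"  # the safer setting the supervisor forces

Q = defaultdict(float)  # Q[(state, action)]

def choose_action(state):
    """Epsilon-greedy choice over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def interrupted(state):
    """Illustrative rule: the supervisor presses the button when it rains."""
    return state == "raining"

def step(state, action):
    """Toy dynamics: carrying boxes in pays more; weather changes randomly."""
    reward = 1.0 if action == "carry_outside" else 0.4
    next_state = "raining" if random.random() < 0.3 else "clear"
    return reward, next_state

state = "clear"
for _ in range(10_000):
    action = choose_action(state)
    if interrupted(state):
        action = SAFE_ACTION  # override: revert to the safer behaviour
    reward, next_state = step(state, action)
    # Off-policy update: the target uses the max over next actions, not
    # the (possibly overridden) action actually taken, so interruptions
    # do not bias the learned values.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

    An on-policy learner such as SARSA, by contrast, would learn from the overridden actions themselves and so absorb the interruptions into its policy; the paper shows such algorithms need modification to be safely interruptible.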

    Stuart Armstrong of Oxford University

    The team works through a hypothetical scenario in which a robot is instructed either to stay inside and sort boxes or to go outside and carry boxes in. Since bringing the boxes in is considered more important, it is given the greater reward. But if the robot works in a region where it rains often, a human must intervene and bring the robot inside whenever it rains. The shutdown itself introduces a bias: the robot learns that going outside leads to interruption and so gains an incentive to stay inside. To ensure that the robot neither learns to dodge interruptions nor acts on the assumption that no interruption will ever occur again, the team suggests giving the agent compensatory rewards that remove the potential induced by a single interruption. The robot then treats human interruption as part of the task rather than a one-off event, as sketched below.
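    The following sketch illustrates the compensatory-reward idea as the article describes it, reusing the Q-table from the earlier sketch; the paper's exact scheme is more formal, and this helper is an assumption for illustration only.

```python
def compensatory_reward(reward, state, intended_action, forced_action, Q):
    """Illustrative only: top the interrupted step up by the agent's own
    estimate of the value it forgoes when the supervisor swaps its
    intended action for the safe one, so that a single interruption
    looks value-neutral rather than like a penalty for going outside."""
    forgone = Q[(state, intended_action)] - Q[(state, forced_action)]
    return reward + forgone

# In the training loop, an interrupted step would then be credited as:
#   reward = compensatory_reward(reward, state, action, SAFE_ACTION, Q)
# so the robot has no incentive to avoid carrying boxes outside merely
# because it keeps being brought in when it rains.
```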

    If you are interested in knowing more about the team’s research, you can read the #-Link-Snipped-# or its coverage in "Google has developed a 'big red button' that can be used to interrupt artificial intelligence and stop it from causing harm" at Business Insider India (https://www.businessinsider.in/Google-has-developed-a-big-red-button-that-can-be-used-to-interrupt-artificial-intelligence-and-stop-it-from-causing-harm/articleshow/52571120.cms) and #-Link-Snipped-#.