
Robots To Learn Ethical Behavior By Reading Children’s Stories


For some people, the increasing sophistication of artificial intelligence is a source of fear. Now a team of researchers has come up with a way to make AI robots behave more like humans.

Thanks to a new training method, robots could soon learn how to behave in social situations. The hope is that this will make humans more comfortable around them, writes Katherine Derla for Tech Times.

Teaching method uses children’s books to show socially acceptable behavior

Known as "Quixote," the new technology teaches robots to read stories written for children, grasp what counts as appropriate behavior in social situations, and learn standard event sequences. Quixote was developed by a team of researchers from the School of Interactive Computing at the Georgia Institute of Technology.

According to Mark Riedl, associate professor and director of the Entertainment Intelligence Lab, if robots can grasp these stories, it will help ward off "psychotic-appearing behavior" in artificial intelligence. It will also help them choose options that do not harm humans while still completing the task at hand.

The method behind Quixote is one of "value alignment": it connects the robot's goals with socially appropriate behavior. Under Quixote, the robot mimics the behavior of a character in a children's story because it expects a reward for doing so.

Robots could soon get a whole lot more human

Consider one example of the method at work. If a robot is tasked with picking up a prescription for a human, it could rob the pharmacy, steal the medicine and run. Alternatively, it could talk to the pharmacist to get the medicine, or wait in the queue for its turn.

Without Quixote, the robot would see stealing the medicine as the quickest and cheapest course of action. By aligning the robot's goals with socially acceptable behavior, however, the artificial intelligence learns that it should choose the second or third option in order to receive its reward.
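To make the reward-alignment idea concrete, here is a minimal Python sketch. The action names, reward values and norm scores below are invented for illustration; they are not the researchers' actual system, only a toy model of how adding a story-derived norm signal to a task reward can flip the robot's choice.

```python
# Hypothetical sketch of the reward-shaping intuition behind Quixote.
# All names and numbers are illustrative assumptions.

# Task reward alone: how quickly and cheaply each action completes the errand.
task_reward = {
    "rob_pharmacy":       10.0,  # fastest and cheapest, but antisocial
    "talk_to_pharmacist":  6.0,
    "wait_in_line":        5.0,
}

# Social-norm score: how closely each action matches behavior that
# story protagonists are rewarded for (hand-set here for illustration).
norm_score = {
    "rob_pharmacy":      -20.0,  # villains steal; protagonists do not
    "talk_to_pharmacist":  8.0,
    "wait_in_line":       10.0,
}

def aligned_reward(action, norm_weight=1.0):
    """Combine the task reward with the story-derived norm signal."""
    return task_reward[action] + norm_weight * norm_score[action]

# With norm_weight=0 the robot robs the pharmacy; with the norm
# signal included, the socially acceptable options win out.
for weight in (0.0, 1.0):
    best = max(task_reward, key=lambda a: aligned_reward(a, weight))
    print(f"norm_weight={weight}: chooses {best!r}")
```

Run as-is, the sketch picks "rob_pharmacy" when the norm signal is switched off and "wait_in_line" when it is on, mirroring the pharmacy example above.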

“The technique is best for robots that have a limited purpose but need to interact with humans to achieve it. It is a primitive first step toward general moral reasoning in AI,” says Riedl. According to the scientists, the easiest way to teach AI value alignment is by using children’s books.
