Researchers Work to Make Programming Robots Less Complicated

Monday, April 25, 2016

Humanoid robots are beginning to enter the workplace, but developing software for them takes a lot of time and effort. Research at a Japanese university aims to make such programming much easier and faster.

Researchers at Keio University in Japan are working on a project called PRINTEPS, a framework for creating practical applications of artificial intelligence and putting them to use in society. The project is led by Takahira Yamaguchi of the Department of Administration Engineering.

Recently, humanoid robots like Pepper have made their debut in society, but developing software for robots takes a lot of time. So, the aim of the project is to construct a platform that enables robot software development to be done easily.

PRINTEPS (PRactical INTElligent aPplicationS) is a ROS-based total intelligent application development platform that integrates five types of subsystems, including knowledge-based reasoning systems, speech dialogue systems, human and environment sensing systems, and machine learning systems. PRINTEPS enables end users to participate in AI application design (user-participation design) and to develop applications easily (within hours to days) by combining software modules.
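The idea of building an application by combining reusable skill modules, rather than coding each behavior from scratch, can be sketched in plain Python. This is a hypothetical illustration of the composition pattern only; the function and module names are invented for the example and are not the PRINTEPS or ROS API.

```python
# Hypothetical sketch: composing reusable robot-skill "modules" into an
# application pipeline, in the spirit of the PRINTEPS approach.
# All names here are illustrative, not part of any real framework.

from typing import Callable, Dict, List

# Each module is modeled as a function that reads and updates a shared context.
Module = Callable[[Dict], Dict]

def speech_dialogue(ctx: Dict) -> Dict:
    # Toy stand-in for a speech dialogue subsystem: map an utterance to an intent.
    ctx["intent"] = "order_coffee" if "coffee" in ctx["utterance"] else "unknown"
    return ctx

def knowledge_reasoning(ctx: Dict) -> Dict:
    # Toy stand-in for knowledge-based reasoning: map an intent to an action.
    actions = {"order_coffee": "bring_coffee", "unknown": "ask_again"}
    ctx["action"] = actions[ctx["intent"]]
    return ctx

def compose(modules: List[Module]) -> Module:
    # Chain modules so an application is built by combination, not rewriting.
    def app(ctx: Dict) -> Dict:
        for m in modules:
            ctx = m(ctx)
        return ctx
    return app

cafe_app = compose([speech_dialogue, knowledge_reasoning])
result = cafe_app({"utterance": "one coffee please"})
print(result["action"])  # bring_coffee
```

The point of the pattern is that an end user assembles `cafe_app` from prebuilt pieces; swapping in a different reasoning module changes the application without touching the dialogue code.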

"For people, it's easy to understand words, speak and listen, read people's facial expressions, look at objects and recognize them, use a variety of knowledge to think, learn and move" says Yamaguchi. "But robots need a variety of intelligent programs to do those things." 

Essentially, the researchers are working to make these intelligent sub-program modules easy to add into routines, saving programmers from having to code complex behaviors line by line and saving a great deal of time.

Pepper Humanoid Robot

The robot software modules are of six types. Broadly speaking, they cover a knowledge base for inference and learning, speech dialogue, emotion recognition, person recognition, object recognition, and motion planning and execution. In human terms, they enable robots to think, listen, speak, feel, see, and move.

As a test case, the researchers have programmed an Aldebaran Pepper robot to serve as a cafe waiter. The robot seats customers, takes orders (the food is prepared and brought to the table by other robots), takes payment, and offers parting remarks to the customers.
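The service flow described above (seat, take the order, serve, take payment, say goodbye) can be sketched as a small state machine. This is a minimal illustration inferred from the article's description, not the actual PRINTEPS implementation; state names are invented for the example.

```python
# Hypothetical sketch: the cafe service flow as a simple state machine.
# States and transitions are inferred from the article, not from real code.

SERVICE_FLOW = {
    "greet": "seat_customer",
    "seat_customer": "take_order",
    "take_order": "serve",        # food is brought by other robots
    "serve": "take_payment",
    "take_payment": "farewell",
    "farewell": None,             # end of the interaction
}

def run_service(start="greet"):
    # Walk the transition table from the start state to the terminal state.
    state, visited = start, []
    while state is not None:
        visited.append(state)
        state = SERVICE_FLOW[state]
    return visited

print(run_service())
# ['greet', 'seat_customer', 'take_order', 'serve', 'take_payment', 'farewell']
```

In a modular platform, each state would invoke one of the skill modules (dialogue for taking the order, motion planning for serving), so redefining the transition table redefines the robot's job.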

The system still looks some way from being ready for a real restaurant, but it seems closer than ever.

SOURCE  Keio University

By 33rd Square

