
14.10.2011

Development of Artificial Nervous System for Humanoid Robot (Avatar)

1. Introduction

 

The first step toward achieving human immortality is to create a neurally controlled avatar—a humanoid robot with a human-like skeleton and a set of artificial muscles and sensors. It is expected that the avatar will be controlled by signals from the human brain and nervous system, transmitted via various devices. Direct control of artificial arms and legs using biosignals is also possible, as is control of avatar behavior using mental commands. Feedback confirming completed actions or behavior will be received through visual channels, i.e. through the vision of the person operating the avatar. Under such a control system, the avatar is not autonomous: the human operator has complete control over its actions and behavior. Its internal control system will be severely limited in function, serving only to produce and manage the simple actions and skills that the human operator can invoke directly.

 

The avatar will require autonomy whenever the human operator cannot visually control its actions and behavior. For the avatar to be fully autonomous, however, it needs its own artificial nervous system (ANS) that allows it to exhibit human-like behavior in all situations. With an ANS installed, the person will be able to exercise supervisory control of the avatar, i.e. to use his or her thoughts to make the avatar perform certain actions. In this case, the avatar has to confirm that it has performed these actions using telepathic methods—that is, with a mental response—or a remote visual display depicting the performed actions.

 

It should be observed that if such an ANS is created, the central problem of robotics will also be resolved: the creation of a robot that is completely humanoid, not only in form but also in behavior.

 

As part of this project, a physical design will be made and mathematical groundwork laid for a humanoid robot ANS; a software-based prototype of the system will be created; and experiments related to supervisory control will be performed. After the avatar has been designed, the completed ANS will be adapted to the avatar’s system, furnishing it with the capability of autonomous action and allowing it to be operated by a person via supervisory control.

 

2. Brief Overview of the Current State of the Field

 

Research and development in the field of androids (anthropomorphic robots) is currently moving at a rapid pace abroad. In their modern forms, these robots are equipped with a control system that includes machine vision (MV), voice control (VCS), and voice messaging (VMS) systems, a tactile sensing system (TSS), a spatial-orientation system (SOS), a motor control and stability system (MCSS), and a behavior management system (BMS). The most practical versions of such robots have been developed in Japan. They are called "humanoid robots," since both their appearance and, to some extent, their behavior are modeled after those of humans. An array of such robots has been created for commercial use: Asimo (Honda), SDR-4X (Sony), and Hoap-2 (Fujitsu), among others. In Russia, the company Novaya ERA OJSC ("New ERA," based in St. Petersburg) developed an anthropomorphic robot over the years 2001 to 2004 as part of the ARNE project (Anthropomorphic Robot by New Era). At present, the company Android Robots is performing research on commercially viable models: it has marketed a very simple model, the AP-101, which features command management, and has developed a newer model, the AP-600, which has MV and a relatively simple BMS.

 

The capabilities of anthropomorphic robots are determined by their physical design and their control system. All the robots listed above feature composite metal-and-plastic designs with electromechanical drives (up to 30) providing their range of motion. The control systems are built into on-board multiprocessor computers that combine high-level RISC processors with low-level microcontrollers. The software is built on various algorithms that are typically proprietary know-how.

Here we will briefly describe the capabilities of the control systems present in modern anthropomorphic robots. MV is a system based on color video cameras (one or two) whose signals are transmitted digitally to an on-board computer that captures the images, records them to memory, processes them to recognize colored objects, and determines their coordinates in the camera's coordinate system. MV can currently recognize tri-color objects of relatively simple shape under special lighting and determine their coordinates. Advanced systems can additionally recognize the faces of a certain number of people (up to 10).
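As an illustration only (none of the commercial systems above publish their algorithms), the following sketch shows the simplest form such color-object detection can take: threshold an RGB frame against an assumed color range and take the centroid of the matching pixels as the object's image coordinates. The frame and color bounds are hypothetical example values.

```python
import numpy as np

def locate_color_object(frame_rgb, lower, upper):
    """Illustrative sketch: find the centroid of pixels whose RGB values
    fall inside [lower, upper], as a stand-in for the color-object
    detection step described above.

    frame_rgb: H x W x 3 uint8 array; lower/upper: length-3 sequences.
    Returns (row, col) of the blob centroid, or None if nothing matched."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    mask = np.all((frame_rgb >= lower) & (frame_rgb <= upper), axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return ys.mean(), xs.mean()  # image coordinates of the detected object

# Example: a synthetic 100x100 frame with a red patch
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:60, 70:90] = (220, 30, 30)
print(locate_color_object(frame, lower=(180, 0, 0), upper=(255, 80, 80)))
```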

VCS uses microphones (one or several) whose signals are transmitted digitally to an on-board computer that records them to memory and processes them to recognize voice commands given to the robot. VCS can recognize a few dozen commands of three to four words spoken into a microphone in the operator's headset. The system can have difficulty recognizing a familiar voice, or commands and messages spoken into the robot's own microphone, because the source of speech must first be isolated from background noise.

VMS uses a speaker system with an amplifier and a memorized set of pre-phrased messages from which the robot's operator can choose, which are then spoken by the robot. Advanced systems are able to hold simple dialogues with the user: the VCS recognizes key words in the user's questions, and the VMS uses that information to select a topic and then to pick answers. More complex dialogue systems that can learn, analyze questions semantically, and compose answers are currently at the development stage.
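The keyword-to-topic-to-answer scheme just described can be illustrated with a minimal sketch; the topics, keywords, and canned answers below are invented placeholders, not the vocabulary of any actual VMS.

```python
# Hypothetical topic and answer tables; the real systems described above
# hold much larger, hand-authored sets of pre-phrased messages.
TOPICS = {
    "weather": {"rain", "sun", "forecast"},
    "greeting": {"hello", "hi", "morning"},
}
ANSWERS = {
    "weather": "I do not have a forecast yet.",
    "greeting": "Hello! How can I help?",
    None: "I did not understand the question.",
}

def reply(question: str) -> str:
    """Pick the topic whose keyword set overlaps the question most, then
    return that topic's canned answer (simple keyword spotting)."""
    words = set(question.lower().split())
    topic = max(TOPICS, key=lambda t: len(TOPICS[t] & words))
    if not TOPICS[topic] & words:
        topic = None  # no keyword matched at all
    return ANSWERS[topic]

print(reply("Hello robot, good morning"))  # -> greeting answer
print(reply("Will it rain tomorrow"))      # -> weather answer
```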

TSS uses a set of tactile sensors located on the feet and grippers of the robot. The sensors on the feet determine the force vector acting on the foot when it contacts a surface; the robot needs these sensors to maintain stable movement. The sensors on the grippers determine the force vector in the grippers when the robot grips an object. One common problem in TSS is the need to compensate for the instability of the tactile sensors' readings that arises when the load changes.
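One plausible way to handle that reading instability is sketched below under assumed parameters: track the sensor's zero offset while the limb is known to be unloaded and subtract it from readings taken under load. The smoothing factor is a hypothetical tuning value, not a parameter of any system described above.

```python
class DriftCompensatedSensor:
    """Sketch of drift compensation for a tactile/force sensor: adapt a
    slowly-moving zero offset while the sensor is known to be unloaded,
    and subtract it from raw readings taken under load."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha   # small -> slow baseline adaptation (assumed)
        self.baseline = 0.0

    def update(self, raw: float, loaded: bool) -> float:
        if not loaded:
            # adapt the zero offset only when no contact force is expected
            self.baseline += self.alpha * (raw - self.baseline)
        return raw - self.baseline

sensor = DriftCompensatedSensor()
for _ in range(100):
    sensor.update(0.5, loaded=False)   # drifting zero offset settles in
print(round(sensor.update(10.5, loaded=True), 2))  # ~10.0 after compensation
```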

 

SOS uses a set of accelerometers, as well as gyroscopes that measure angular velocity and position. The gyroscopes are typically concentrated in a measurement module on the upper part of the robot's torso, while the accelerometers can be located on various parts of the body, arms, and legs, creating a distributed inertial system. The signals produced by the system are used to determine the body's vertical axis and to adjust the robot's movements so as to maintain stability when walking amid outside disturbances. Because the robot must change its posture frequently while moving and interacting with the environment, computing the necessary adjustments is computationally demanding.
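A complementary filter is one standard way to fuse these two sensor types into an estimate of the vertical axis; the sketch below illustrates the idea for a single tilt angle. The mixing coefficient and sample period are assumed example values, not parameters of any robot described above.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, k=0.98):
    """Fuse gyro and accelerometer into a tilt estimate: integrate the
    gyroscope for short-term accuracy, and correct slowly toward the
    accelerometer's gravity direction to cancel gyro drift.

    angle: previous tilt estimate (rad); gyro_rate: rad/s;
    accel_x, accel_z: body-frame accelerations (m/s^2)."""
    accel_angle = math.atan2(accel_x, accel_z)  # tilt implied by gravity
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle

# Stationary robot tilted 0.1 rad: the estimate converges to 0.1
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_x=9.81 * math.sin(0.1),
                                 accel_z=9.81 * math.cos(0.1), dt=0.01)
print(round(angle, 3))  # ~0.1
```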

MCSS uses TSS and SOS to maintain a stable gait on two legs. Gait parameters are programmed for the feet and recalculated onto the legs' degrees of freedom by solving the inverse kinematics problem. Advanced systems use a gait training method based on neural network principles. In certain systems currently under development, there are attempts to control the gait not through positions but through the forces and moments supplied by a virtual model of the robot. Certain advanced systems (that of the Asimo robot, for example) feature rather elegant heel-to-toe movement, though the speed of movement does not yet exceed 0.3-0.5 m/sec. No MCSS solutions have yet been found for fast walking, running, or jumping in which the pivot foot leaves the ground.
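The inverse-kinematics step mentioned above can be illustrated for the simplest case, a planar two-link leg with a closed-form solution; real legs have more degrees of freedom and require a full three-dimensional treatment. The link lengths below are assumed.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link leg: given a
    foot target (x, y) relative to the hip, return (hip, knee) joint
    angles. A minimal sketch of recalculating a foot trajectory onto
    leg joints, as described above."""
    d2 = x * x + y * y
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_knee <= 1.0:
        raise ValueError("target out of reach")
    knee = math.acos(cos_knee)          # knee flexion angle
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

# Foot placed 0.7 m straight below the hip; 0.4 m thigh and 0.4 m shank
print([round(a, 3) for a in two_link_ik(0.0, -0.7, 0.4, 0.4)])
```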

 

BMS uses all the above-listed systems to select and perform actions that fit the task and the current situation in the environment. The BMS's capabilities directly determine how intelligent the robot's behavior is. At present, robots are able to solve navigational problems in a simple environment with obstacles and a set route to a given point; to manipulate rigid objects with one or two hands; and to perform targeted actions with the feet, for instance kicking a ball into a net. Complex targeted actions—for instance, fast walking and running along a curved path, jumping, and acrobatic stunts—cannot yet be performed. Advanced BMSs can learn by observation, reproducing movements that they see (for instance, playing with a ball with the hands or feet).

 

3. Feasibility Study

 

An analysis of existing anthropomorphic robots indicates that electromechanical drives do not provide adequate flexibility and responsiveness in the arms and, especially, the legs of a robot, which carry a heavy load. The control systems currently in use cannot produce fully human-like movements and behavior in robots, owing to limited computational resources and weak algorithmic foundations. In-depth, multi-year studies are needed to create robots that can be considered worthy competitors to humans. In developing humanoid robot (HR) prototypes, particular attention must be devoted to the control system, which determines the motor and behavioral capabilities of a robot. In future robot designs, it would be advisable to use specially developed mechatronic drive systems (mechanical muscles) to control movements and a hardware-implemented learning system to control a robot's behavior.

 

Due to the complexity of humanoid robots' behavior, unconventional approaches must be sought in constructing their control systems. One such approach uses the organizational principles of the human nervous system, as detailed by modern psychology and neurophysiology. The underlying principle is the cellular construction of the nervous system. A technical implementation of this principle reduces to creating hardware- or software-based multi-level network control systems composed of similar cellular computing elements (artificial neurons or other teachable components). Such systems can be treated as purely connectionist, given that they do not use tools based on the processing of symbolic knowledge. In essence, this approach entails creating a so-called "computer brain."
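For illustration, the "teachable component" at the bottom of such a cellular system can be as small as a single classic perceptron cell; the sketch below trains one such cell on a toy task. This is a generic textbook element, assumed here purely as an example, not the specific neuron model of any system discussed in this proposal.

```python
import random

class Neuron:
    """Minimal trainable cell of the kind a connectionist system builds
    on: weighted sum, threshold output, and a simple error-driven
    weight update (the classic perceptron rule)."""

    def __init__(self, n_inputs, lr=0.1):
        self.w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.b = 0.0
        self.lr = lr

    def out(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) + self.b > 0 else 0

    def train(self, x, target):
        err = target - self.out(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * err

# Teach one cell the logical AND of two sensor bits
random.seed(0)
cell = Neuron(2)
for _ in range(50):
    for x, t in [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]:
        cell.train(x, t)
print([cell.out(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```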

 

The capabilities of contemporary connectionist control systems in intelligent robots, built on the basis of a computer brain, are still significantly inferior to those of the human brain. In 1997, the Japanese corporation Fujitsu announced the creation of a neurocomputer robot brain equivalent in intelligence to a system of 100 biological neurons. Such a brain is capable of recognizing patterns and exercising synergistic control of the robot's drives—but its capabilities are insufficient for producing complex human-like behavior when interacting with people or other robots.

 

Robots’ intelligence could be significantly improved by utilizing the cognitive principles and neurological components examined here when creating such robot control systems. Indeed, we expect that cognitive systems and agents that will make it possible for humanoid robots to reproduce complex behavior that approximates that of humans will be built on a base of neurological components.

 

The rapid development of humanoid robots is leading to improvements in their design and to greater complexity in their behavior. The latter is of particular importance when humanoid robots are put to use in an environment with people or with other similar robots. It is of fundamental importance that such robots exhibit human-like behavior. One way of achieving this is to create an artificial nervous system (ANS) similar in function and behavior to the human nervous system.

Let us examine the main issues related to the creation of an ANS for humanoid robots. There are several interconnected systems existing within the structure of an ANS that share the same names as their counterparts in a biological system. It is thought that an ANS can be implemented on an on-board computer network contained in the robot. Some of the hardware and software can be devoted to the central and peripheral sensory and motor systems with their own sets of cognitive and actuatory structures respectively. The robot’s on-board computer network can have access to outside networks, including the Internet. Given the possibility of being infected with viruses, it makes sense to have an immune system that will protect the on-board network from viruses and maintain normal functioning in other systems. The humanoid robot will be an autonomous machine with its own energy source that will also need to be controlled. Thus we will also need an energy system that controls the robot’s energy supply. It is expected that the ANS will be built on a base of some kind of universal set of hardware and software that will model the cellular structure of the nervous system. In order to teach these systems to perceive, act, and defend, the robot will require a genetic system that will use already formed and renewable genetic information and the ANS's organizational principles.

The genetic system should possess the information and knowledge of principles necessary to organize the ANS, and it will make it possible for the ANS to evolve as the robot's conditions and objectives change. When the ANS is "born," "genetic" information about the structure of the system, the connections between components, and the principles defining how the robot functions will be used. Initially, at the start of the evolutionary cycle, the central, peripheral, energy, and immune systems of the ANS will be formed, and a basic set of internal agents will be created along with their functions and fixed connections to facilitate interaction between them. An internal meta-agent could also be created to improve coordination between the various systems of the ANS, though the internal agents should in principle be capable of functioning without such an entity. The internal agents will be built on chains of modules capable of learning in real time. These modules could contain information about the robot's purpose, objectives, and constraints, as well as some basic knowledge. Later in the robot's life, additional knowledge can be gained through learning. As the system evolves, the modules could be cloned and taught when new tasks arise or the robot's range of functionality expands.

It seems possible to create a complete ANS in this way, on the basis of the neurological components, formalized cognitive methods, and multi-agent technology described here. It is expected that the ANS will be built as a multi-agent cognitive system in which every internal agent, built on neurological components, is responsible for its own set of behaviors, receives information from its own or shared sensors, produces signals for controlling its own or shared actuators, and interacts with other internal agents of the system to generate rational behavior in a real-life environment. In keeping with the organizational principles of the nervous system, such a multi-agent cognitive system should have a hierarchical structure with overlapping functional components that organize the work of the internal agents producing individual or collective behavior in the robot. In the case of teamwork, the ANS controlling the robot should function as an agent of an integrated multi-agent system that controls the group of robots; this ANS-agent should be capable of communicating with the ANS-agents controlling the other robots when carrying out work as a team.

This multi-agent system built on neurological components can be configured for the desired behaviors by teaching each internal agent the set of cognitive and actuatory functions genetically defined for it; these functions manage the cognitive and actuatory processes that, in aggregate, constitute the internal agent. The exchange of information and collaboration between internal agents will be carried out by special neurological components that adapt themselves to the needed interaction by trial and error. The meta-agent also adapts, adjusting task priorities and carrying out "attention" processing.
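The structure described above might be sketched as follows. Everything here is an assumption for illustration: the agent names, the numeric priorities, and the rule boosting the balance agent stand in for the genetically defined functions and the learned "attention" processing that the actual ANS would possess.

```python
from abc import ABC, abstractmethod

class InternalAgent(ABC):
    """Hypothetical internal agent: consumes percepts, emits commands."""
    def __init__(self, name, priority=1.0):
        self.name, self.priority = name, priority

    @abstractmethod
    def step(self, percepts: dict) -> dict:
        """Consume sensor data, return actuator commands."""

class BalanceAgent(InternalAgent):
    def step(self, percepts):
        tilt = percepts.get("tilt", 0.0)
        return {"ankle_torque": -5.0 * tilt}   # simple corrective reaction

class SpeechAgent(InternalAgent):
    def step(self, percepts):
        return {"say": "ok"} if percepts.get("command") else {}

class MetaAgent:
    """Runs agents in priority order and boosts whichever one is most
    relevant to the current situation (a stand-in for 'attention')."""
    def __init__(self, agents):
        self.agents = agents

    def step(self, percepts):
        if abs(percepts.get("tilt", 0.0)) > 0.2:    # falling beats talking
            for a in self.agents:
                if isinstance(a, BalanceAgent):
                    a.priority = 10.0
        commands = {}
        for a in sorted(self.agents, key=lambda a: -a.priority):
            commands.update(a.step(percepts))
        return commands

meta = MetaAgent([BalanceAgent("balance"), SpeechAgent("speech")])
print(meta.step({"tilt": 0.3, "command": "hello"}))
```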

This system can fulfill the role of an ANS in that it is fully functional and universal—it can be adapted to robots used for any purpose. At the current stage of development in robotics, such systems are used to produce rational behavior in a robot in an environment where others of its own kind are present (without a person, in virtual or real settings) or in a human environment (with a person, in real settings).

Preliminary modeling of such an ANS was done as part of the ARNE project (Anthropomorphic Robot by the company New Era; carried out in St. Petersburg over the period 2001-2003). The project concluded in July 2003 with the creation of the ARNE-02 robot, which has 28 degrees of freedom, a height of 123 cm, and a weight of 53 kg. The main (actuatory) part of the control system resided in an on-board network of microcontrollers. Additional control capabilities (interaction with an operator and functioning in a real-life environment with people) were furnished by the intelligent part of the system, based in a remote computer. That remote part of the system receives radio signals from a color video camera and a microphone installed on the robot. The video signal is processed to recognize and locate colored reference points placed in the working environment, as well as several objects with simple shapes; this information is used to control the robot's movements and its interactions with the environment. The acoustic signal is processed to recognize voice commands given by an operator. The system can also form voice messages "spoken" by the robot.

The project is currently being restarted with the goal of developing a more advanced version of the intelligent control system—the ANS—based on a cognitive approach and hybrid technology that includes neurological components, tools for integrating sensory information, and tools for intelligent control of actions and behavior. An ANS has been designed that is based on multi-agent technology with virtual cognitive agents that define the robot's behavior. A three-dimensional vision system has been modeled that allows the robot to identify its location within an environment and to map that environment. Promising versions of neurological and neuromorphic components have been developed for creating the self-learning elements of the ANS.

 

Thus, considerable groundwork has been laid that can be used to create a prototype for a humanoid robot ANS in a relatively short amount of time. Below is a list of articles by the authors related to the topic.

  1. Stankevich, L.A.; Tikhomirov, V.V. Control of Stable Gait in an Anthropomorphic Robot. Mechanics, Automation and Control, No. 3, 2004.
  2. Mordovchenko, D.D.; Stankevich, L.A.; Yakovlev, A.V. Lessons of the Anthropomorphic Robot Development Project and Simulation Programs at New Era. Materials from the robotics seminar held at the All-Russian Exhibition Centre, Moscow, February 4-6, 2004.
  3. Stankevich, L.A. Intelligent Robots and Control Systems. Book 20. Article compilation, ed. A.A. Kharlamov. Moscow: Radio Engineering, 2006, 144 pp., pp. 44-66.
  4. Stankevich, L.A.; Trotsky, D. Online Teamwork Training Using an Immunological Network Model. Proceedings of the Second International Workshop on Autonomous Intelligent Systems: Agents and Data Mining (AIS-ADM 2007), St. Petersburg, Russia, June 2007. Eds. Gorodetsky, V.; Zhang, C.; Skormin, V.; Cao, L. LNAI 4476, Springer, 2007, pp. 243-255.
  5. Stankevich, L.A. Cognitive Agents in RoboCup Applications. Proceedings of the International Scientific Conference on Distributed Intelligent Control Systems (June 15-17, 2008; IMOP-SPBSPU, St. Petersburg), SPBSPU Press, 2008, pp. 91-96.
  6. Stankevich, L.A. Artificial Cognitive Systems. Lecture on neuroinformatics at a scientific session of the 12th All-Russian Scientific Conference Neuroinformatics-2010, National Research Nuclear University MEPhI, 2010, pp. 106-161.

 

4. Main Stages of Work

 

It is proposed that a series of projects be undertaken in collaboration with the groups headed by A.A. Frolov and V.G. Yakhno with the goal of creating an artificial nervous system for robots. The ANS will facilitate a significant increase in the intellectual capabilities of the avatar being developed by the Center and will furnish the avatar with dynamic stability, the ability to flexibly manipulate objects using two hands, and the ability to interact and work with people or with other robots of the same type. Because the system will be universal, it could also be used to control other autonomous mobile devices that must exhibit complex behavior in various working environments (on land, under water, in the air, and in space). Below is a brief examination of the project proposal, which stipulates three independent working sections for developing an artificial nervous system for the avatar.

Section 1: Development of Sensory Systems

 

1.1 The robot's kinematic and dynamic states (the relative positions and speeds of movement of the robot's body parts) shall be gauged using a system of actuator sensors (sensors that record the absolute position of actuators and sensors that take readings from the robot's electric range-of-motion drives) with the following capabilities (a forward-kinematics sketch follows this list):

 

(1)   to determine the position of the end points of the robot's body parts (hands, feet, torso, neck, and head);

(2)   to determine the robot's static center of mass;

(3)   to gauge the dynamic parameters of the robot's end points (speed and acceleration);

(4)   to determine the robot's dynamic center of mass;

(5)   to recognize the robot's poses (position of the robot's kinematically connected parts).
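As promised above, here is a minimal sketch of capability (1): computing a limb's end-point position from joint-sensor readings by forward kinematics. The two-dimensional simplification and the link lengths are assumptions made for the sketch.

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward-kinematics chain: given joint angles read from the
    actuator position sensors, compute the coordinates of each joint
    and of the limb's end point (capability (1) above)."""
    x = y = theta = 0.0
    points = [(x, y)]
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                    # angles accumulate along the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points                         # last entry is the end point

# Shoulder at 45 degrees, elbow bent a further 90 degrees, 0.3 m links
pts = forward_kinematics([math.pi / 4, math.pi / 2], [0.3, 0.3])
print([(round(px, 3), round(py, 3)) for px, py in pts])
```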

 

1.2 Diagnostic information on the condition of the robot's systems shall be obtained using a diagnostic system (built on sensors that identify energy levels, drive failures and electronics failures) with the following capabilities:

(1) to gauge energy storage levels;

(2) to identify failures of and run diagnostics on actuators with electric drives;

(3) to identify failures of and run diagnostics on electronics modules;

(4) to identify overloads when the robot performs load-bearing activities.

 

1.3 Visual information about the environment shall be gathered with a machine vision system with the following set of capabilities (a triangulation sketch follows this list):

 

(1)   isolation and segmentation of grayscale objects;

(2)   formation of reference templates and recognition of grayscale objects of various shapes (robots, cars, people, and so on);

(3)   calculation of coordinates in a three-dimensional space using triangulation;

(4)   location recognition (identifying a set of objects and locating them in a given space).
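As noted above, here is a sketch of capability (3): textbook triangulation for a rectified stereo camera pair, where depth equals focal length times baseline divided by disparity. The camera parameters below are assumed example values.

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Stereo triangulation for a rectified camera pair:
    depth = focal_length * baseline / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    return f_px * baseline_m / disparity

# 700 px focal length, 12 cm baseline, 35 px disparity -> 2.4 m away
print(round(stereo_depth(700, 0.12, x_left_px=420, x_right_px=385), 2))
```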

 

1.4 Acoustic information about an environment shall be collected by a multi-channel auditory system with the following set of capabilities:

(1) recognition of command statements with recognition of content;

(2) recognition of sentences spoken in sequence;

(3) syntactic and semantic analysis of sentences;

(4) formation of a base of semantic knowledge for processing voice messages;

(5) acoustic interference filtration via isolation of the source of a useful signal;

(6) voice recognition (more than 10 voices given typical interference levels);

(7) orientation toward familiar voices.

1.5 Physical forces shall be perceived by a tactile- and force-sensing system with the following capabilities: (1) to determine the shape of an object by feel and without visual information; (2) to determine the site and magnitude of physical forces.

1.6 Spatial-orientation functioning will be provided by a spatial-positioning control system with the following capabilities:

(1) to maintain verticality under the force of gravity;

(2) to identify dynamic disturbances;

(3) to determine a path of motion;

(4) to identify dynamic disturbances along a given path;

(5) to provide orientation in a given three-dimensional space (heading, forward inclination, and lateral inclination);

(6) to identify dynamic disturbances in a surrounding environment in three dimensions.

 

Section 2: Development of a “Human-Robot” Interface

 

2.1 Spoken interaction with people and other robots shall be made possible by a voice and articulatory message generation system with the following proposed capabilities:

 

(1)   to generate messages from memory in response to commands (answering machine);

(2)   to articulate written information (speech generator);

(3)   to formulate messages in dialogue (interlocutor).

 

2.2 Visual contact with people will be established using a recognition and tracking system with the following capabilities:

(1) recognition of faces (more than 10);

(2) lip reading and recognition of facial expressions;

(3) recognition of gesture commands;

(4) recognition of simple movements with the purpose of reproducing them later using the control system;

(5) tracking of individual objects;

(6) tracking of changes in a given setting (movement, appearance and disappearance of objects, etc.)


2.3 Supervisory control of the robot shall be carried out using a remote control system with the following proposed capabilities (a sketch of one possible command framing follows this list):

(1)   to receive, over a digital radio link, and decode text commands that stipulate a directed action or behavior;

(2)   to encode and transmit text reports confirming completion of a task;

(3)   to receive, over a digital radio link, and decode mental commands that stipulate a directed action or behavior;

(4)   to encode and transmit mental reports confirming completion of a task.
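As mentioned above, one possible framing for the text-command channel in items (1) and (2) is sketched below. The JSON field names and the CRC32 integrity check are assumptions made for illustration, not a protocol defined by the project.

```python
import json, zlib

def encode_command(action: str, params: dict) -> bytes:
    """Hypothetical frame format: 4-byte CRC32 header + JSON payload,
    so corrupted radio frames can be detected and rejected."""
    payload = json.dumps({"action": action, "params": params}).encode()
    crc = zlib.crc32(payload)
    return crc.to_bytes(4, "big") + payload

def decode_command(frame: bytes) -> dict:
    crc, payload = int.from_bytes(frame[:4], "big"), frame[4:]
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupted frame")
    return json.loads(payload)

frame = encode_command("walk_to", {"x": 1.5, "y": 0.0})
print(decode_command(frame))  # {'action': 'walk_to', 'params': {...}}
```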

 

Section 3: Development of a Behavior Control System

3.1 Control of a robot functioning without contact with other robots will be managed by an individualized behavior control system with the following capabilities:

(1) to generate an action plan to complete an assigned task;

(2) when in training mode, to create models of itself using integrated data from sensors and sensory systems;

(3) to learn how to operate in accordance with the formulated model;

(4) when in training mode, to create models of environments (settings) using integrated information from a system of binocular vision and tactile sensing;

(5) to navigate using the models of physical environments it has created;

(6) to control its behavior when performing a series of targeted actions;

(7) to learn how to perform actions based on observation of similar actions;

(8) to learn skills that require special sets of actions (playing with a ball, for instance).

3.2 Control of a robot functioning in a team of robots or people shall be managed by a collective behavior control system with the following capabilities:

(1) to coordinate activity in a group of robots;

(2) to coordinate actions when interacting with people;

(3) to learn how to collaborate when working in a group of robots;

(4) to learn how to work with a person;

(5) to learn how to interact with other robots, for instance when playing a game.

 

The above-listed working sections are defined with the assumption that all sensory and computational tools will be inside the robot. However, in the initial phase of work, it will be possible to use an external machine vision system, an external interface for interacting with a person, and an external behavior control system. Thus we propose dividing the work into the following stages:

Stage 1: Development of an external intelligent control system with simple recognition and locating functions and the ability to control the robot's movements and manipulations (using a machine vision system with external cameras and remote computers).

 

Stage 2: Development of an on-board intelligent control system with minimal recognition and behavior control functions (as agreed upon with client).

 

Stage 3: Development of a fully functional on-board intelligent control system (with work to be carried out in the above-listed sections).

 

Completion deadlines for each of the stages will be determined upon project approval.

 

In our opinion, work within the framework of the project should be divided into work addressing the creation of an intelligent behavior control system (provisional name: "Smart Head") and work focused on the creation of an intelligent body control system (provisional name: "Dynamic Body"). The first system is universal and can be used to control the behavior not only of robots but also of other autonomous devices (unmanned aerial, land, and underwater vehicles, and so on). The second system should be specially designed for robots of a certain construction. These systems can be created by different teams and combined at the final stage of fine-tuning the robot.

 

It is assumed that the system's hardware and software will be upgraded continuously over the course of the work, since new processors and functions can be added. We expect to create a laboratory drawing on SPBSPU (St. Petersburg State Polytechnical University), SPIIRAS (St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences), and TsNII RTK (Central Research and Development Institute for Robotics and Engineering Cybernetics); part of the work on the project would be based in this lab. Specialists from a diverse range of fields will be recruited to contribute effectively to the project. We do not rule out working with foreign universities to obtain information and to carry out collaborative projects on this topic.

 

Director and SPBSPU professor                                                         L.A. Stankevich


