
Research On Multi-Robot Cooperation And Robot Goal Discovery Based On Immunoevolution System Theory

Posted on: 2010-07-07    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Dioubate Mamady I    Full Text: PDF
GTID: 1118360278454003    Subject: Computer application technology
Abstract/Summary:
An important need in multi-robot systems is the development of mechanisms that enable robot teams to autonomously generate cooperative behaviours. Interest in cooperating robots arises when a task is inherently too complex for a single robot to accomplish, or when building and using several simple robots can be more flexible, fault-tolerant or cheaper than using a single large robot. Furthermore, intelligent control represents a generalization of the concept of control that includes the interaction of a machine/robot with its environment. Intelligent control systems are typically able to perform one or more of the following functions: planning actions at different levels of detail, learning from past experience, identifying changes that threaten the system behaviour (such as failures), and reacting appropriately. This identifies the areas of Planning and Expert Systems, Fuzzy Systems, Neural Networks, Idiotypic Immune Networks, Machine Learning, and Failure Diagnosis, to mention but a few, as existing research areas that are related and important to Intelligent Control. With the ever increasing number of robots in the industrial environment, scientists and technologists are often faced with issues of cooperation and coordination among different robots and of their self-governance in a workspace. Therefore, cooperation, division of labour, communication and navigation of robots make heavy demands on all the key areas of robot technology: mechanics, sensors and intelligence. Today, a major challenge in collective robotics is how to develop new strategies that are more effective and more easily applicable, and that could meet these demands. For this reason, this dissertation investigates evolutionary immune computations (EICs) applied to robotics. An EIC is a novel evolutionary paradigm inspired by the biological aspects of the immune system. The research work developed serves to illustrate how a biological system can be examined and how inferences can be drawn from its operation that can be exploited in intelligent agents (robots). Several functionalities of the biological immune system (e.g. B-cell and T-cell cooperation in an organ named the thymus, clonal selection and expansion, immune memory and the secondary immune response, and Jerne's idiotypic network) are identified for use in intelligent agent robots.
Based on the above-mentioned immune properties, this dissertation focuses mainly on the following issues: ⅰ - intelligent multi-agent systems (IMAS) and their characteristics; ⅱ - communication and cooperation methods for sensing the center of gravity G_c of any payload with precision (particularly objects with regular shapes and those with irregular shapes), based on information criteria inspired by the idiotypic immune network hypothesis; ⅲ - a method for improving the performance of robots (i.e. an evolved T-lymphocyte, "ELC", which represents the robot's sensor/detector), based on evolutionary computation techniques; ⅳ - a robust control method, based on an immune system genetic algorithm (ISGA); ⅴ - a method for simulating the robust control of a robot using Simulink. The aforementioned methods are first employed for moving any object through space, in the sense of translating its center of gravity from one place to another. Secondly, we extend our work to an industrial problem called robot goal discovery ("ROGODIS"). Robot goal discovery requires a single robot to explore a limited area and discover a small gate through which the robot must move the object, avoiding any obstacles encountered. The task demands that the gate be discovered and reached successfully. These aspects are introduced in detail below. Many popular multi-robot control systems available for object detection are based on centralized control and operations. While relatively easy to implement, the application and scaling of these systems have often been limited by the large computational and communication costs associated with centralized control. However, the main challenge of robot cooperation is that information is distributed; sharing information efficiently via communication is thus crucial for cooperation. For this reason, we developed a natural immune system-based intelligent multi-agent architecture and then applied the artificial immune system to multi-agent systems to provide the computational intelligence of the agents. The architecture draws an analogy between the immune system and intelligent agent methodologies. It applies immune system principles to the agents to achieve a global goal in a decentralized manner. Our strategy has been applied to multi-robot cooperation, where we build in simulation a group of robots that behave in a self-organizing manner to detect the center of gravity of an object without any centralized control mechanism, but rather by using interaction mechanisms. To apply interaction mechanisms between robots at the local level, we use four main immunological metaphors. The first is B-cells: a robot represents a B-cell, and each robot has a particular strategy for detecting an object and its center of gravity. The work to be done by the robots is analogous to antigens (Ag), which represent objects. Secondly, we use a kind of antigen called a multivalent and multi-determinant antigen (MVMD-Ag), which presents several epitopes. In immunology, this kind of antigen can be recognized by several different B-cells; hence, the same antigen can be recognized by several memory cells. In this dissertation, a new computational attribute called the object-antigen (OAg) is used to represent each object to be examined by the robots. The third is the immune network, which allows interaction between robots (i.e.
communication between robots is achieved via the idiotypic immune network). The fourth is the calculation of B-cell stimulation, where the more a robot is stimulated, the better its strategy is considered to be (i.e. if a robot's stimulation level is low, its strategy is considered too weak and is suppressed; conversely, if the robot is well stimulated, its strategy is considered good and is preserved). To calculate B-cell (robot) stimulation, we propose a new computational method comprising the following steps: a stimulation function, an affinity function (activation threshold), B-cell cloning, mature actions, immune memory (memory response and plasma response), and a suppression function. Here, the B-cell cloning mechanism is used to represent the messages sent from one robot to the other robots. Based on these steps, a robot is stimulated by interacting with neighbouring robots and with the work environment. If a robot is accomplishing the work, it receives more stimulation. If that robot becomes well stimulated, it produces B-cell clones that carry information about the work it is doing, since that work is considered to be good. To assess and improve the performance of each robot, we use genetic algorithm (GA) techniques to evolve a T-lymphocyte better suited to the task. Each B-cell (robot) with an evolved lymphocyte (ELC) acquires the ability and performance needed for the task. The acquired performance on the one hand enables the B-cell (robot) to detect G_c with high precision and on the other hand determines the movement criteria based on the forces that affect translations. For implementation, our strategy has been applied both to objects with only geometrically regular shapes and to objects with and without a geometrically regular shape. The results show that cooperation of robots using the detection method based on the general shape is more adaptable, more effective, more easily feasible and less costly in terms of time. We therefore showed that interaction, message passing and the acquisition of high ability together enable cooperative behaviour to emerge in a group of robots.
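The abstract names the steps of the stimulation calculation (stimulation function, affinity/activation threshold, cloning, immune memory, suppression) but not their exact formulas. The sketch below is therefore only an illustrative assumption of how such bookkeeping could look: the functions affinity, stimulation and step, the weights w_ag/w_net/w_sup, and the activation threshold are hypothetical, not the dissertation's definitions.

```python
import math

# Hedged sketch of B-cell (robot) stimulation; all formulas and constants are
# illustrative assumptions, not the dissertation's exact method.

def affinity(strategy, antigen):
    """Affinity between a robot's detection strategy and an object-antigen (OAg),
    modelled here as an inverse Euclidean distance between feature vectors."""
    return 1.0 / (1.0 + math.dist(strategy, antigen))

def stimulation(robot, neighbours, antigen, w_ag=1.0, w_net=0.5, w_sup=0.5):
    """Stimulation = antigen affinity + network stimulation from neighbouring
    robots - suppression from poorly matching neighbours."""
    ag = affinity(robot["strategy"], antigen)
    net = sum(affinity(robot["strategy"], n["strategy"]) for n in neighbours)
    sup = sum(1.0 - affinity(robot["strategy"], n["strategy"]) for n in neighbours)
    return w_ag * ag + w_net * net - w_sup * sup

def step(robot, neighbours, antigen, activation_threshold=1.0):
    """If stimulation exceeds the activation threshold, the robot 'clones' its
    strategy (broadcasts it as a message) and stores it in immune memory;
    otherwise the strategy is suppressed."""
    s = stimulation(robot, neighbours, antigen)
    robot["stimulation"] = s
    if s >= activation_threshold:
        robot["outbox"] = list(robot["strategy"])        # clone = message to neighbours
        robot["memory"].append(list(robot["strategy"]))  # immune memory
    else:
        robot["strategy"] = None                          # suppressed: needs a new strategy
    return robot

# Example: one robot with a 2-feature strategy, two neighbours, one object-antigen.
robot = {"strategy": [0.2, 0.8], "memory": []}
neighbours = [{"strategy": [0.3, 0.7]}, {"strategy": [0.9, 0.1]}]
step(robot, neighbours, antigen=[0.25, 0.75])
```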
The problems of navigation, motion planning, and autonomous vehicle or robot control consist of selecting the geometric path and the robot velocities so as to avoid obstacles in a dynamic environment and to minimize some cost function such as time or energy. Selecting the wrong velocities may cause the robot to lose its path, waste time or energy, or, even worse, become unstable. Many difficult control problems have been solved comparatively easily using the evolutionary approach. Although some models were useful for navigation through static environments, they were less robust when applied to real dynamic environments. Building on previous research, we therefore extended this work by proposing a strategy that is more robust to dynamically changing environments. Our intent was to create more emergent behaviour within the network of robots through the cooperation and competition of antigens relative to antibody rates. As a result, we developed a new strategy comprising three phases of study. The first phase uses an artificial immune clustering algorithm based on the adapted idiotypic immune network theory (A-IIN) for antigen interaction, cooperation and competition, together with the clonal selection principle (CLONALG), to obtain a pair of antigens. This pair of antigens represents the two borders of the gate in the robot goal-discovery (ROGODIS) problem. The second phase uses antibodies (Ab) that determine the steering angles for the mobile robot's sensor. The third phase investigates immune principles applied to the problem of motion planning and autonomous robot control in a dynamic environment. However, when using evolutionary techniques to cope with dynamic environments, it is necessary to overcome some limitations inherent in traditional evolutionary algorithms (i.e. the maintenance of diversity). In addition, when dealing with the stability of a mobile robot system, strong convergence can be problematic, because many evolutionary techniques (like the GA) are unable to respond effectively to motion control algorithms.
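The abstract does not give the A-IIN equations, so the sketch below uses the classic Farmer-Packard-Perelson formulation of idiotypic network dynamics as a stand-in for how antibodies could select a steering angle once the gate-border antigens are known. The matrices, rate constants and candidate angles are all assumptions made for illustration; the dissertation's adapted network (A-IIN) may differ.

```python
import numpy as np

# Hedged sketch: antibody i is a candidate steering angle, m_ag[i] is its match
# with the detected gate-border antigens, and m[i, j] encodes how strongly
# antibody j stimulates antibody i in the idiotypic network. All values assumed.

def iin_step(a, m, m_ag, b=1.0, k1=0.5, k2=0.1, dt=0.1):
    """One Euler step of da_i/dt = a_i * (b*(stimulation - k1*suppression + m_ag_i) - k2)."""
    stim = m @ a                              # stimulation received from other antibodies
    sup = m.T @ a                             # suppression exerted by other antibodies
    da = a * (b * (stim - k1 * sup + m_ag) - k2)
    a = np.clip(a + dt * da, 0.0, None)       # concentrations stay non-negative
    return a / a.sum()                        # normalise to a selection distribution

# Candidate steering angles (degrees) and assumed affinities to the antigen pair
# returned by the A-IIN clustering / CLONALG phase.
angles = np.array([-60, -30, 0, 30, 60], dtype=float)
m_ag = np.array([0.1, 0.2, 0.9, 0.4, 0.1])
m = np.random.default_rng(0).uniform(0, 0.3, (5, 5))   # idiotypic links (assumed)
a = np.full(5, 0.2)                                     # initial antibody concentrations

for _ in range(50):
    a = iin_step(a, m, m_ag)
steer = angles[np.argmax(a)]   # the antibody with the highest concentration wins
print(f"selected steering angle: {steer:.0f} degrees")
```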
To address these concerns, for the third phase quoted above this dissertation proposes: 1 - an immune system genetic algorithm (ISGA) to obtain the optimal control parameters that govern the locomotion control of a mobile robot. The newly proposed method is referred to as the immune system-based genetic algorithm, and the main techniques used are gene library evolution by the human immune system, gene library evolution by an artificial immune system, somatic hypermutation/transformation, and memory B-cells (the immune secondary response). These evolutionary techniques are translated and incorporated into the standard genetic algorithm (GA) to promote diversity. The ISGA focuses on evolving the control parameters used in a robust locomotion controller to obtain time-optimal, shortest-path and minimum-energy performance. 2 - a description of the environment in which the experiments were carried out, both in real time and in simulation. These experiments test the influence of different parameters, such as the mutation rate, crossover, and transformation (which represents the hypermutation operator). 3 - the ability to remember past situations, with faster and stronger reactions obtained over time, based on the secondary response typical of the natural immune system. For a specific implementation extended to a single robot, a simulation experiment demonstrates that it is possible for a robot to acquire the essential exploration and goal-discovering skills necessary to accomplish the task successfully. The emergent behaviour is shown to be intelligent, adaptive, flexible and self-regulatory. To make our work resemble real situations when verifying motion planning and autonomous robot control, we assume a mobile robot located on a 2D plane in which a global Cartesian coordinate system is defined. The robot possesses three degrees of freedom in its relative positioning, represented by a posture p(t) that is a function of time t. The robot's motion is controlled by its linear velocity ν and angular velocity ω, which are also functions of time t. The robot's kinematics are defined by the Jacobian matrix J(θ). The mathematical models described in this dissertation enabled us to build a simulator in Simulink (Matlab). In spite of various constraints and disturbances related to environmental effects (such as friction, saturation, slipping and obstacles), the simulation runs in less than one minute, which shows that the ISGA has good stability and robustness and can control the robot's motion very effectively. The ultimate goal of this dissertation is to develop more effective techniques for multi-robot learning and adaptation that will generalize to cooperative robot applications in many domains, thus facilitating the practical use of multi-robot teams in a wide variety of real-world applications.
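The abstract names the ingredients of the ISGA (gene libraries, somatic hypermutation/"transformation", memory B-cells) and a posture p(t) driven by ν and ω through J(θ), which corresponds to the standard unicycle model ẋ = ν cos θ, ẏ = ν sin θ, θ̇ = ω, but it does not give the encoding or fitness function. The sketch below is therefore only an assumption-laden illustration of how such a loop might evolve two locomotion-controller gains on that model; the controller form, cost weights, population size and mutation rates are all hypothetical.

```python
import numpy as np

# Hedged sketch of an ISGA-style loop: a GA with two immune-inspired additions,
# somatic hypermutation of some offspring and re-injection of memory B-cells.

def simulate(gains, goal=np.array([4.0, 3.0]), dt=0.05, steps=400):
    """Roll out a simple proportional controller (gains = [k_v, k_w]) on the
    unicycle model and return a cost mixing time, path length and control effort."""
    k_v, k_w = gains
    x, y, th = 0.0, 0.0, 0.0
    path, effort = 0.0, 0.0
    for t in range(steps):
        dx, dy = goal[0] - x, goal[1] - y
        dist = np.hypot(dx, dy)
        if dist < 0.05:
            return t * dt + 0.1 * path + 0.01 * effort    # reached the goal
        v = k_v * dist
        w = k_w * (np.arctan2(dy, dx) - th)
        x, y, th = x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + w * dt
        path += abs(v) * dt
        effort += v * v + w * w
    return 1e3 + dist                                      # penalise not reaching the goal

def isga(pop_size=30, gens=40, p_mut=0.1, p_hyper=0.3, rng=np.random.default_rng(1)):
    pop = rng.uniform(0.1, 3.0, (pop_size, 2))             # each individual = [k_v, k_w]
    memory = []
    for _ in range(gens):
        cost = np.array([simulate(g) for g in pop])
        pop = pop[np.argsort(cost)]
        memory = [pop[0].copy()] + memory[:4]              # memory B-cells (secondary response)
        children = []
        for _ in range(pop_size // 2):                     # crossover among the better half
            a, b = pop[rng.integers(0, pop_size // 2, 2)]
            children.append(np.where(rng.random(2) < 0.5, a, b))
        children = np.array(children)
        # ordinary mutation plus stronger "somatic hypermutation" on some children
        mut = rng.normal(0, 0.05, children.shape) * (rng.random(children.shape) < p_mut)
        hyper = rng.normal(0, 0.5, children.shape) * (rng.random((len(children), 1)) < p_hyper)
        children = np.clip(children + mut + hyper, 0.01, 5.0)
        pop = np.vstack([pop[: pop_size - len(children) - 1], children, memory[0][None, :]])
    return memory[0]

best = isga()
print("evolved [k_v, k_w]:", best, "cost:", simulate(best))
```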
Keywords/Search Tags:B-Cells, T-Cells, Multivalent and multi-determinant antigen (MVMD-Ag), Evolved lymphocyte (ELC), Intelligent multi-agent model, centre-of-gravity, Multi-robot cooperation, Adapted idiotypic immune network (A-IIN), Immune system genetic algorithm (ISGA)