An emotional model for swarm robotics

Swarm robotics is a system of multiple robots in which a desired collective behavior emerges from the interactions among the robots and with the environment. This paper proposes an emotional model for the robots, to allow emergent behaviors. The emotional model uses four universal emotions: joy, sadness, fear and anger, assigned to each robot based on the level of satisfaction of its basic needs. These emotions lie on a spectrum, and depending on where a robot's emotion falls on it, the emotion affects the robot's behavior and that of its neighboring robots. The more negative a robot's emotion, the more individualistic it becomes in its decisions; the more positive its emotion, the more it considers the group and the global goals. Each robot is able to recognize another robot's emotion based on that robot's current state, using the AR2P recognition algorithm. Specifically, the paper addresses the emotions' influence on the behavior of the system at the individual and collective levels, and their effects on the emergent behaviors of a multi-robot system. The paper analyzes two emergent scenarios, nectar harvesting and object transportation, and shows the importance of the emotions in the emergent behavior of a multi-robot system.


Introduction
A robot swarm is a set of autonomous robots that work together to accomplish a task. The robots coordinate their actions in a decentralized way. Swarm robotics attempts to model the collaborative behaviors of social organisms, such as insects, to achieve complex tasks beyond any individual's capability. For example, it studies how ants mark trails with pheromones to map geographical locations, in order to replicate that same behavior with robots.
On the other hand, emotions determine how an individual perceives a stimulus, allowing it to act differently in similar situations depending on its mood. The inclusion of emotions in the context of a multi-robot system seeks to study their influence on the behavior of the system at the individual and collective levels. Specifically, the emotions shape the decision-making process of the robots, and make emergent phenomena possible in the system. In particular, the representation and management of emotions in each robot allow extending its behavior (from an agent that only makes smart decisions) toward a more social behavior, such that each robot can pick up social cues (emotions) from the other robots and act on them, in order to adapt to the environment.
In swarm robotics, some tasks are the product of emergent and self-organized behaviors, such as foraging, clustering, aggregation and formation, among others [1,2]. In this work, we study the effect of emotions on the emergent behavior of a robot swarm, or multi-robot system, as well as the process of recognizing them. The emotion defines the behavior of a robot at a given moment, which influences the collective behavior of the multi-robot system, generating emergence and self-organization in the system.
Multi-robot systems have been studied through different approaches: control architectures, cooperation mechanisms, emergence, among others [3,4]. There are several works on these kinds of systems. In [5], a multi-robot system for hunting tasks under a swarm approach is proposed. In [6], a robot swarm that implements a beehive-based algorithm for food gathering is described. The authors of [7] define a hybrid approach for multi-robot systems, and work [8] proposes an algorithm based on the human immune system for the cooperative transportation of objects. Other works have added emotions to the robots in the system: [9,10,11,12,13] specify emotions in multi-robot systems to consider their influence on the decision making of each robot. Learning and recognition have also been analyzed in multi-robot systems [14,15].
Specifically, works [13,16] propose a control architecture for robotic agents inspired by the nature of animal emotions, which act both as an internal behavior modulator and as an implicit communication mechanism that allows emergent coordination. In [12], a cognitive-emotional model for eldercare robots is proposed: by combining the Gabor filter, the Local Binary Pattern (LBP) algorithm and the K-Nearest Neighbor (KNN) algorithm, facial emotional features are extracted and recognized, in order to reduce negative emotional states. In this case, the model is used in a care robot with frequent human-robot interactions. The authors of [9,10] propose a computational architecture to model emotion based on Markov modeling theory; in that work, the emotions are used for the execution of cooperative tasks. With the emotional capability, each robot can distinguish a changed environment, react and adapt to it, and also understand a colleague robot's state. In [11], the authors propose an emotional model for robots that measures individual cooperative willingness in the problem of multi-robot task allocation. In this way, the emotions are used as a factor in the assignment of tasks, such that the communication between individuals is affected by the emotional state that each one can recognize in the others.
In this paper, we propose an emotional model that allows emergent behaviors in a multi-robot system. In our model, the value of the "emotion" represents a robot's general mood, based on its current battery/operation/security/interaction state, which are intrinsic parameters of the operation of the robot. In this context, the robots must recognize the emotions and act according to their interests and the recognized emotion. We propose a multi-robot system where the robots are equipped with a set of basic emotions that influence their perception and performance processes. The emotional state of a robot at an instant t defines its behavior; by aggregation, the emotional states of the n robots define the collective state of the system, and therefore its behavior. We do not consider how the emotions are generated in the individuals of the system, but rather focus on the process of recognition.
In addition, in this paper, the AR2P recognition algorithm [17,18,19] is used to equip each robot with the ability to recognize emotions. AR2P is a pattern-recognition model inspired by the model of operation of the neocortex proposed by Kurzweil [17], which suggests that the brain contains a hierarchy of pattern recognizers, so that complex patterns can be defined at different levels. AR2P exploits this idea of recursion, and of unbundling/integration of the pattern, in the recognition process. The capability of a robot to recognize the emotions of others at a certain time allows it to make better decisions, facilitating emergent processes. For example, if a robot needs to recruit other members of the group to transport an object and must select individuals willing to collaborate (assuming that their willingness to collaborate is related to the emotional state of joy), then it will optimize the task by sending messages only to individuals whose emotional state is joy.
A previous work presented preliminary results about emergent behavior in multi-robot systems [20]. This paper describes the emergent behavior in a multi-robot system in detail. Additionally, it presents an exhaustive analysis of the emergent behavior through four experiments, and a detailed comparison with other works. We define two scenarios for testing emotion recognition in a multi-robot system that generate emergent behaviors: one inspired by the bees' search for nectar, and another inspired by the behavior of ants when transporting an object. The first scenario involves a mechanism of efficient collective decision-making: the dances of forager bees to decide on good food sources are similar to the decision process of the robots when they find several energy supplies. The second scenario involves a recruitment mechanism: when ants need collaborative transport, they leave traces to recruit others to move the object. Similarly, the robots transmit special messages to other robots, according to their emotional states, in order to transport an object.
The rest of the paper is organized as follows. Section 2 describes the emotional model for multi-robot systems. Section 3 describes the AR2P model for the recognition of emotions in the context of a multi-robot system. Section 4 presents the case studies. Section 5 presents an analysis of the emergent behavior in multi-robot systems using emotions, and finally, Section 6 describes the conclusions.

Our emotional model
In the literature, there are several emotional models. In general, emotions have been added to robots in order to make them more reactive to the dynamics of the environment in which they operate. The authors of [21,22,23,24,25] propose emotional models in the context of multi-agent systems, where the emotional spectrum is reduced to a discrete set. In the context of multi-robot systems, [9,10,11] are some of the works on emotions.
Work [11] proposes an emotional model to measure self-interested robots' individual cooperative willingness in the problem of multi-robot task assignment. The authors define an emotional cooperation factor based on emotional attenuation and external stimuli, and with this factor they propose a task allocation algorithm to create a team of collaborators. In [28], an emotional model based on the emotion-feeling-mood relationship is implemented in a robot. The robot does not recognize emotions in the human, but gestures that serve as stimuli: it perceives a stimulus (gesture) and relates it to a feeling that leads to a certain emotional state. The model considers four emotions: joy, sadness, fear and anger. In our proposal, the emotional spectrum is based on four emotions whose intensity varies from a high negative value to a high positive value, related to a robot satisfaction index. The main difference is that work [28] is based on robot-human interaction, while our system relies on robot-robot interaction to recognize the state of the other. In [29], an emotional model for robots based on human personality factors is proposed. The authors consider four factors: openness, extraversion, agreeableness and neuroticism, which are related to a set of six emotions represented in a two-dimensional space, where the X-axis represents the human personality factors and the Y-axis the emotion of the individual. In our proposal, we use factors related to the performance of the robot, which are similar to the factors used in [29] to model personality. In addition, our model simplifies the spectrum to a one-dimensional space that represents the degree of robot satisfaction. In [30], an artificial-hormone-based model to define the emotions of a robot is proposed. In that work, the emotions influence the performance of a robot, specifically its movement at a given time.
In our model, the robots modify their actions according to the emotional state that is active at a given time. In general, the works on emotional models offer a good representation of the emotions, but they are computationally complex, or have been used only for specific tasks (task allocation, robot-human interaction, etc.).
In this paper, we have selected an emotional model with the necessary and sufficient characteristics for our purpose, where the value of the emotion represents the robot's general mood, based on its battery/operation/security/interaction state. In particular, the authors of [31] define a set of emotions, positive and negative, which influence the robot's disposition towards collective or individual behaviors. Our model is based on these ideas, and uses four basic emotions: joy, sadness, fear and anger, plus a "neutral" state. These emotions are tied to the satisfaction levels of a robot, and are represented in a one-dimensional space, in the interval [-1, 1], where three subdivisions define the emotion-behavior relationships: reactive, cognitive and imitative behaviors, respectively (see Fig. 1).
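The spectrum described above can be sketched as a simple mapping from a robot's satisfaction index to an emotion and a behavior type. This is a minimal sketch: the interval boundaries below are illustrative assumptions (the actual subdivisions are defined in Fig. 1), and the function names are ours.

```python
# Sketch of the one-dimensional emotional spectrum on [-1, 1].
# The interval boundaries are illustrative assumptions; the paper's
# actual subdivisions are given in Fig. 1.

def emotion(satisfaction: float) -> str:
    """Map a satisfaction index in [-1, 1] to a basic emotion."""
    if satisfaction < -0.66:
        return "anger"
    if satisfaction < -0.33:
        return "fear"
    if satisfaction < -0.1:
        return "sadness"
    if satisfaction <= 0.1:
        return "neutral"
    return "joy"

def behavior(satisfaction: float) -> str:
    """Map the satisfaction index to a behavior type: the negative
    third is reactive, the middle is cognitive, the positive third
    is imitative."""
    if satisfaction < -0.33:
        return "reactive"
    if satisfaction <= 0.33:
        return "cognitive"
    return "imitative"
```

For example, under these assumed boundaries, a robot with satisfaction -0.8 would be angry and behave reactively, while one at 0.7 would be joyful and imitative.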
The model seeks to provide a simplified emotional spectrum to the robots that act in the system, in order to affect the way they conduct their behaviors, which in turn affects the collective behavior. These three types of behavior (imitative, cognitive and reactive) are associated with emotional states, as proposed in [31,32]. We use the same assumptions as these works: negative emotions predispose the individual to problem resolution using an individual approach, while positive emotions lead the individual towards global goals, using a collective approach. The authors of [33,34] describe a control architecture for multi-robot systems with emergent behaviors, which includes the behavioral model [35] that manages the emotions of the robots in the system. Specifically, our emotional model is implemented in the behavioral module described in [35]. This architecture is divided into three levels. The first level provides local support to the robot and manages its processes of action, perception and communication, as well as its behavioral component. The behavioral component considers the reactive, cognitive, social and affective aspects of a robot, which influence its behavior and how it interacts with the environment and with the other robots in the system. The second level supports the collective processes of the system, such as the mechanisms of cooperation, collaboration, planning and/or negotiation, which may be needed at any given time. This level of the architecture is based on the concept of emergent coordination. The third level is responsible for the knowledge management and learning processes, both individual and collective, that occur in the system.
Each layer manages a kind of behavior tied to the robot's emotions. In this paper, the emotion represents the current state of a robot, based on its battery/operation/security/interaction situation. The robots are connected to the cloud and, according to the architecture proposed in [33,36], they share their internal state using the following format:

<file_robot_n> <body> sub_state_1 = value </body> <body> sub_state_2 = value </body> <body> sub_state_3 = value </body> <body> sub_state_4 = value </body> <file_robot_n/>

These sub-states are used by the emotional model to calculate the emotional intensity of the robots (see Fig. 2).
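As a sketch, the shared state message can be parsed and reduced to a satisfaction index in [-1, 1]. The aggregation rule used here (a rescaled mean of the four sub-states, assumed to lie in [0, 1]) is our assumption; the paper computes the emotional intensity as described in Fig. 2.

```python
import re

def parse_state(msg: str) -> list:
    """Extract the sub-state values from a <file_robot_n> message
    shared through the cloud."""
    return [float(v) for v in re.findall(r"=\s*([-+]?\d*\.?\d+)\s*</body>", msg)]

def satisfaction(values: list) -> float:
    """Assumed aggregation: the mean of the sub-states (each in [0, 1]),
    rescaled to the emotional interval [-1, 1]."""
    return 2.0 * sum(values) / len(values) - 1.0
```

For instance, a robot reporting sub-states 0.9, 0.7, 0.8 and 0.6 would obtain a satisfaction index of 0.5, placing it on the positive side of the spectrum.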

AR2P System
AR2P is based on a set of recognition modules organized hierarchically (from level X_1 to level X_m) [17,19]. In our case, AR2P defines modules to recognize emotions based on the set of state signals in the cloud. According to the combination of the different states and their weights (the intensity of the signals), different emotions are modeled. Each level X_i contains its own set of recognition modules. A recognition module Γ_ji (the subscript j identifies the module, and the subscript i the level to which the module belongs) is formally defined as a 3-tuple Γ_ji = <E, U, S_o> [19], where S_o is the output signal emitted when the module's pattern is recognized, and E and U are defined as follows. E is a 2-tuple E = <S, C> (see Table 1), where S = <Signal, State> is an array of the set of signals (s_1, ..., s_n) that represent the pattern/emotion to be recognized by Γ, together with their respective states. The state variable is "True/T" when the signal is present, and "False/F" otherwise. The number of signals for each pattern is specific to each module Γ; each pair <s_i, State> indicates whether the signal i has been recognized (by default, the state is false). C contains the descriptors associated with each signal i, such that |C| = |S|. It is defined by the 3-tuple C = <D, V, W>, where D represents the descriptors of Γ, V is the domain vector for each D, and W is the weight (importance) of each D for the recognition of the pattern (the importance weights are used during pattern matching). Finally, U is the threshold vector used by the module Γ to recognize its pattern. There are two types of thresholds: ΔU1 is the threshold for recognition by key signals, and ΔU2 is the threshold for recognition by partial or total mapping. The ΔU1 threshold should be stricter than ΔU2, given that the process of recognition by key signals uses only a few signals. The authors of [19,37] present the formal description of the recursive pattern-recognition algorithm, and work [18] presents the learning algorithm of AR2P.
The strategies used in each Γ to recognize a pattern are: a first strategy of recognition by key signals, and a second strategy of partial recognition of signals. The first uses the importance weights of the input signals identified as keys, and the second uses the partial or total presence of the signals.
Definition 1: key signal. A signal in the module Γ is a key signal if its importance weight is greater than or equal to the average weight of all the signals in Γ (see Eq. (1)):

w_i >= (1 / |S|) * Σ_{j=1..|S|} w_j      (1)

Theorem 1: Strategy by key signals.
A pattern Γ is recognized by key signals if the average of the weights of the recognized key signals is greater than the ΔU1 threshold. This recognition uses the descriptors (signals or sub-patterns) with the greatest importance weights. The formula is:

(Σ_{i ∈ K} w_i) / |K| > ΔU1      (2)

where K is the set of recognized key signals.

Theorem 2: Strategy by partial mapping.
This strategy validates that the fraction of signals present in Γ exceeds the ΔU2 threshold. The formula is:

|S_r| / |S| > ΔU2      (3)

where S_r is the set of recognized signals. This calculation is carried out for each module of each recognition level X_i (from X_1 to X_m) during the recognition process.

Emotion recognition using the AR2P algorithm
To show the capabilities of AR2P, in this section we describe how AR2P recognizes the emotions of sadness and joy. The algorithm is instantiated based on the emotional model of the multi-robot system proposed in Section 2. For this purpose, the necessary data structures (defined in the previous section), together with the recognition strategies, are instantiated. Let's assume the following:

Supposition 3.
There are two levels of recognition in the hierarchy of the AR2P algorithm (see Figure 3). At the first level X_1 are the Γ_j1 modules for atomic pattern recognition (LOP, HOP, NOP, LSP, HSP, etc.), and at the second level X_2 are the Γ_j2 modules for the emotional patterns (such as "anger", "neutral", "sadness" and "joy"). In the case of the recognition of sadness, AR2P receives as input the file s() = "file_robot1" (this information is represented in Figure 3). Let's assume that the modules of the X_1 level are instantiated (see Table 2, which instantiates the data structure of Table 1 for the sadness emotional pattern). The superscript of each value in the domain (V) is related to the weight in column (W). For example, for the descriptor BS, LBP=0.8 (Signal 1) means that this value is important for recognizing the emotion of sadness, while HBP=0.1 and NBP=0.5 are not. The same applies to the other descriptors OS, SS and IS. The general process for recognizing the sadness emotion from the <file_robot1> input is as follows. First, the input sub-patterns (LBP, LOP, HSP and LPI) are recognized at X_1, and the recognition of these descriptors is sent to the next level in the hierarchy. Next, AR2P recognizes (or not) a pattern at X_2, using the theorems presented above [21]. In more detail, the signals of the sadness pattern at X_2 are activated (T) (see Table 2). Then, Theorem 1 can be applied. According to this theorem, it is necessary to determine which signals are keys; for this, Eq. (1) is used. Once the average weight is calculated with Eq. (1), the key signals are determined: for the <file_robot1> input, they are signals 1, 3 and 4 (Definition 1). Eq. (2) is then used to recognize the input pattern by key signals. The resulting value does not exceed the threshold (0.83 < 0.85). The second alternative is to recognize by partial signals, using Eq. (3) and ΔU2 = 0.75.
In this case (0.8 > 0.75), the pattern is recognized, and since X_2 is the last level of the hierarchy, the module produces the output signal S_o, which becomes the system output signal: "robot1(Sad)". Case 2: Recognition of a happy robot with AR2P. In this case, the description of the joy pattern of the recognition module in AR2P is shown in Table 3. The input file with the information about this pattern is s() = "file_robot2" (this information is represented in Figure 3 by the blue lines):
The key signals for the <file_robot2> input are signals 1 to 4. Eq. (2) is then used to recognize the input pattern by key signals. As all the key signals are activated, the pattern matching succeeds, and since X_2 is the last level of the hierarchy, the module produces the output signal S_o, which becomes the system output signal: "robot2(Happy)".
In both cases, AR2P can recognize the emotions. One of its main strengths is its recognition flexibility in different contexts, such as the absence of information and/or changes in the intensity (weights) of the input signals or sub-patterns, among other aspects.

Test platform
We assume a test environment composed of walls, obstacles, marks on the ground that represent battery-charging areas, and marks that simulate objects to move (see Fig. 4).

Figure 4. Test platform
Additionally, the multi-robot system is composed of general-purpose robots based on the architecture proposed in [33,34]. This architecture has the following characteristics: i) it is fully distributed; ii) decisions are made locally by each robot; iii) there is a collective memory. The information about the internal state of the robots is shared through the collective memory (in the cloud), and the robots invoke the AR2P algorithm to recognize the emotions of the other robots.

Case studies
The purpose of the case studies is to show the emergent behavior in multi-robot systems based on the robots' capability to recognize emotions in other robots, for example for recruitment, decision making, and other situations. To carry out the tests, the following scenarios are considered: the first one is nectar harvesting by bees, and the second is the transportation of an object by ants. The tasks of collecting nectar by bees and transporting objects by ants are classic examples of emergent processes [39]. In both situations, it is assumed that emotions influence the execution of the tasks of the robots in multi-robot systems; that is, the robots adapt their behavior using their emotional components.

Case 1: Nectar Harvesting
In the case of the nectar harvesting, the following process is considered: in the beehive, a group of scout bees leaves in search of food sources that can be exploited. When a food source is located, the scout bees communicate to the other bees the location of the source and its profitability. By modeling this process in multi-robot systems, the following elements are defined:
- Source of food: although in nature the value of a food source depends on multiple factors, in our artificial model it is defined by a numerical value. The robot's need to obtain energy corresponds to the bee's need for food. When the BS sub-state explained in a previous section has a low value, the robot needs to be recharged, and this affects the values of the other sub-states.
- Employed forager bees: they are represented by robots whose emotional state tends to be positive, with BS and OS sub-states at normal or high values.
- Unemployed forager bees: this group of bees is in search of a food source, and remains in the beehive waiting to choose a source. In our case, these are robots whose BS and OS are at low levels; their emotional state is therefore sadness, because they require an energy recharge (analogous to the bee's need for food).
- Scout bees: they are responsible for searching for sources. According to our emotional model, positive emotions (e.g., joy) generate a collective behavior in the individuals that search for food. In our case, the scout robots can directly send the message to the unemployed robots if they recognize the sadness emotion in them; for the recognition of this emotion we use the AR2P algorithm (see Section 3.2).
In this scenario, when the values of OS and BS decay, the robots cannot operate normally and their performance degrades. This is modeled by robots with the sadness emotion. These robots expect to receive a message notifying them about a recharge source (in the nectar harvesting task, the scout bees dance to communicate the quality of the sources). Thus, when the robots discover a recharge source, they communicate the discovery to the robots that need to recharge their batteries (the sad robots).
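This notification step can be sketched as follows. The helper names are hypothetical, and the emotion recognizer is passed in as a function, standing in for an invocation of AR2P through the cloud.

```python
def notify_recharge(state_files: dict, source, recognize_emotion) -> list:
    """A scout that discovers a recharge source sends a direct message
    to every robot whose recognized emotion is sadness."""
    notified = []
    for robot_id, state in state_files.items():
        if recognize_emotion(state) == "sadness":
            # In the real system this step would send the source
            # location to the robot through the cloud.
            notified.append(robot_id)
    return notified
```

For example, with a stub recognizer that labels negative satisfaction values as sadness, only the dissatisfied robots would receive the message about the recharge source.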

Case 2: Transportation of an Object
In the second case, the transportation of an object is a common task in several species of insects. For example, ants can collectively transport large objects. In general, the process begins when an ant finds an object and tries to move it; if it cannot, it tries to recruit other ants, which form around the object to move it [39].
In our case, the robots with an emotional state of joy (their sub-states are at normal or high levels) are willing to carry an object. According to our emotional model, the joy emotion generates a collective behavior in the individuals. In this way, the robot that finds an object tries to recruit other robots that are in the same emotional state, to help it move the object. The recruitment is carried out through direct messages to the robots in that state; the recruiting robot must therefore recognize the joy emotion (see Section 3.2). The same applies to the construction of formations by a swarm of robots, where the robots with a joy emotion are willing to participate in the formation. Thus, in the "Transportation of an object" scenario, the recruiting robot tries to recruit the happy robots.
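The recruitment step can be sketched under the same assumptions (hypothetical names, the recognizer abstracted as a function), with the finder stopping once enough happy robots have been recruited:

```python
def recruit(state_files: dict, needed: int, recognize_emotion) -> list:
    """Recruit up to `needed` robots whose recognized emotion is joy,
    i.e., robots assumed to be willing to collaborate on the transport."""
    team = []
    for robot_id, state in state_files.items():
        if len(team) == needed:
            break  # enough collaborators for the object
        if recognize_emotion(state) == "joy":
            team.append(robot_id)
    return team
```

The design choice here is that recruitment messages are direct and targeted: only robots recognized as joyful are contacted, which is what reduces the communication cost compared to broadcasting to the whole swarm.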
So, emotion recognition is used by the multi-robot system in the two emergent scenarios: in the first scenario it is necessary to recognize the sadness emotion, and in the other the joy emotion. This facilitates the appearance of emergent behaviors in the system, because the robots' emotional states are defined by the values of their sub-states.

The emotional model in the context of emergent behaviors of our multi-robot system
Now, we test different aspects to determine the capability of the emotional model to generate emergent behaviors in multi-robot systems. The first aspect is the emotional recognition quality of AR2P; then, we analyze the performance of the emergent behavior generated by the emotions, from both a qualitative and a quantitative point of view.

AR2P algorithm to recognize the emotions in a multi-robot system
To test the AR2P algorithm in the context of the multi-robot system, we make the assumptions summarized in Table 4. In total, 80 simulations were performed for the recognition of emotions: 40 for the sadness emotion and 40 for the joy emotion (see the dataset in the Appendix). According to the results (see Table 5), precision = 1 indicates that AR2P recognizes the patterns without making unexpected recognitions. This precision value arises because AR2P learns very specific and unique situations: each robot can recognize an emotion across multiple variations of the states of the variables that characterize it. Recall = 1 indicates that AR2P discovers all the emotional patterns, i.e., each robot recognizes all the emotions of the other robots. F1 = 1 means that each robot can accurately recognize any emotion. In our two scenarios, the recognition of these emotions determines the collective behavior of the robots: in the first scenario, the sad robots are informed about a recharge site; in the second, the recognition of a positive emotion (e.g., joy) determines which robots can be recruited. This facilitates the emergent behaviors in the system, because the behavior of the robots cannot be predicted a priori. The emotional component depends on the state of a robot, which defines when and how it can act. For example, a robot that is "sad" will not explore the environment in the same way as a robot that is "happy".
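The reported scores follow from the standard metric definitions: with all 80 recognitions correct (no false positives and no false negatives), precision, recall and F1 all equal 1.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Standard precision, recall and F1 computed from the counts of
    true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For the 80 simulations, precision_recall_f1(80, 0, 0) yields (1.0, 1.0, 1.0), the values reported in Table 5.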
In terms of efficiency and usability, the AR2P algorithm is implemented in the cloud, where it is invoked by each robot. Moreover, the robots' states are stored in a shared memory, so that the other robots can use them in their recognition processes.

Emergent behavior analysis based on the MASOES model
To validate the influence of the emotions on the emergent behavior of the multi-robot system, we consider the verification method proposed in [32], called MASOES. This work proposes a set of concepts related to emergent properties, which include emotiveness, the emotion types and the behavior component. These concepts are determined by the changes in the emotional states, the types of emotions of the individuals in the system, and the existence of a component in the system that determines its behavior according to internal states such as the emotions of the individuals. In the verification method proposed in [32], these concepts are part of a set of architectural concepts defined in a Fuzzy Cognitive Map (FCM) [26,38,41,42], which has been implemented in the FCM Designer tool [45]. The fuzzy cognitive map verifies the emergent behavior of a system according to the characteristics of the architectural concepts in the system [32]. Table 6 presents the results of the verification method for the transportation scenario. The values of the concepts can be low, medium or high, according to their contribution to the system. In the transportation scenario, the initial value of emotiveness is high (0.8) because the robots can change emotional states, the emotion types concept is low (0.35) because this scenario does not use many emotions, and the behavior component is high (0.95) because our system has a behavior component based on our emotional model. For this scenario, the emergence can be verified (the final state of this concept is 0.98), such that the emotiveness contributes significantly to the emergence in the system. In the case of the emotion types, their diversity is not very important: adding more types of emotions is not relevant to improving the emergent behavior of the system. Finally, the behavior component is very important in order to reach an emergent behavior.
Table 7 shows the value of the emergence concept using MASOES, generated with our multi-robot systems in the two case studies, with and without emotions. In the nectar harvesting scenario with emotions, the initial value of emotiveness is high (0.8) because the robots can change emotional states, the emotion types concept is low (0.35) because this scenario only requires the emotions of sadness and joy, and the behavior component is high (0.95) because our system has a behavior component based on our emotional model. With respect to the values in Table 6 for the transportation scenario, in this case study the initial values of the aggregation mechanism and synthesis concepts are higher (0.8) because the robots with low battery must analyze the different energy sources reported by different robots, the cognitive component is lower (0.1) because they do not need to know the environment, and the behavior type and diversity concepts are higher (0.8) because there are more types of robots (scouts, etc.). In the nectar harvesting scenario without emotions, the initial value of emotiveness is null because the robots have no emotional states, the emotion types concept is null because the emotions are not used, and the behavior component is regular (0.55) because this component is degraded. The same holds for the transportation scenario without emotions. According to Table 7, the inclusion of the emotions is very important to guarantee an emergent behavior in a multi-robot system. The emotions are even more important in the nectar harvesting scenario because, according to them, the robots play different roles in the system (scouts, employed, etc.). Without emotions, the performance of the system degrades because the robots cannot recognize with whom to collaborate at a given moment.
Knowledge of the emotions of the other robots allows each robot to optimize its behavior, and particularly, the collective behavior (for example, moving an object together).

Emergent behavior analysis using simulations
In general, the process of recognition of emotions contributes to the emergent phenomena in the system, because the ability of a robot to recognize the emotions of the others determines its behavior, which, in turn, influences the collective behavior of the system. That is, the recognition system of the emotional model guides the decision-making process of the robots, in order to determine their behavior.
In order to test the emergence due to the emotions of the robots, in this section we use a scenario composed of 10 robots, where the initial values of the factors that influence their emotional states are set randomly. The idea is to observe how the emotional state of each robot changes, and, at the end of the simulation, to use the FCM to evaluate the emergence in the system with the data obtained from the simulation. Table 8 presents the number of times each emotion appeared in the system, and the associated behavior types generated during the simulation of 10 robots over 100 units of time. The emotional states of the robots were concentrated in the slightly positive and slightly negative emotions. Thus, the average behavior of the robots is of the cognitive type, according to the emotional model proposed in [35]. Fig. 5 shows the evolution of the emotional states of the robots during the simulation. At first, the robots go through an emotional instability, with peaks in the satisfaction rate (they are very satisfied or very dissatisfied); then the emotional state begins to change, but within the same emotional spectrum. As the simulation goes on and the robots run out of battery, or the occurrence of negative events increases (collisions, little interaction, etc.), their emotions begin to show a negative tendency. From this test case, it is possible to obtain the values of the architectural concepts used to verify the emergence in the system using MASOES. Table 9 presents the values obtained from the simulation and the final values after executing the FCM. The emergence concept reaches a high value, similar to the self-organization concept. The cognitive component decreases its value, but remains high, which is consistent with the data obtained.
Collectively, the group of robots has a slightly positive average emotional state during the simulation period, with an overall cognitive performance; this cognitive behavior does not exclude the reactive behavior of the system. Reactive behaviors take priority over the others when they are necessary to maintain the integrity of the robots, for example, to avoid a collision.
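The mechanism of the simulation above can be sketched as follows. The emotion thresholds on the satisfaction spectrum, the event magnitudes, and the battery-drain rate are all hypothetical values chosen for illustration; only the overall setup (10 robots, 100 time units, random initial states, negative drift from low battery) follows the experiment described in the text.

```python
import random

def emotion_of(satisfaction):
    """Hypothetical mapping from a satisfaction level in [-1, 1] onto the
    emotional spectrum (thresholds are illustrative assumptions)."""
    if satisfaction >= 0.5:
        return "very_positive"
    if satisfaction >= 0.0:
        return "slightly_positive"
    if satisfaction >= -0.5:
        return "slightly_negative"
    return "very_negative"

random.seed(42)
robots = [{"satisfaction": random.uniform(-1, 1), "battery": 1.0}
          for _ in range(10)]

counts = {}                                  # emotion -> occurrences
for t in range(100):                         # 100 units of time
    for r in robots:
        r["battery"] -= 0.005                # battery drains over time
        event = random.uniform(-0.2, 0.2)    # random positive/negative events
        if r["battery"] < 0.3:
            event -= 0.1                     # low battery pushes emotions negative
        r["satisfaction"] = max(-1.0, min(1.0, r["satisfaction"] + event))
        e = emotion_of(r["satisfaction"])
        counts[e] = counts.get(e, 0) + 1

print(counts)
```

Tallying the emotions per time step, as in `counts`, is how a table like Table 8 can be produced; the negative drift applied when the battery is low reproduces the negative tendency observed toward the end of the simulation.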

Emergent behavior analysis using an entropy metric
In [43], the emergence in a dynamic and complex system is defined based on the concept of entropy. Emergence is assumed to be the transformation of old information into new information, seen from different approaches; one of them is the production of new information by the dynamics of the system. If the emergence involves an increase in information, then it is analogous to entropy and disorder. Thus, the emergence E of a process can be considered as this new information I:

E = I (4)

In this context, E represents Shannon's entropy, also called information entropy. According to [43], this entropy can be considered as the amount of information contained in the elements of the system. When all the elements provide relevant information, the entropy is maximized. This represents the emergence of a new state in the system.
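The property that entropy is maximized when all elements contribute equally can be checked directly with Shannon's formula, H = -Σ p_i log₂ p_i. The distributions below are illustrative, not data from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon information entropy H = -sum(p * log2(p)) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Entropy is maximal when every element provides equally relevant
# information, and drops when one element dominates.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.97, 0.01, 0.01, 0.01]
print(shannon_entropy(uniform))   # 2.0 bits, the maximum for 4 elements
print(shannon_entropy(skewed))    # well below 2.0
```

This is the sense in which, following [43], a system whose elements all contribute relevant information reaches maximal entropy and thus exhibits the emergence of a new state.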
According to [43], the behavioral changes in the dynamics of the system generate new information. In the MASOES model, the concepts associated with the generation of new information are: the behavior component, the aggregation mechanisms, the synthesis, the cognitive component and the social component. Thus, using Eq. (4) and the values in Tables 6, 7 and 9, we calculate the emergence of the system based on the entropy metric; in this case, it is the average of the final values of these concepts (we assume they are all equally important). The results are shown in Table 10. Again, we can verify the emergent behavior due to the emotions. In the transportation scenario, the final values of the concepts "cognitive component" and "synthesis" penalize the emergence. These concepts, which are important in this scenario for knowing the different objects in the context, are not very high at the beginning; they do not improve during the evolution of the FCM, which degrades the generation of new information.
In the nectar harvesting scenario, the "cognitive component" concept is lower, but it is compensated by the higher initial values of the aggregation mechanism and synthesis concepts, which allow obtaining a good value of entropy (emergence). In the scenarios without emotions, the poor initial value of the behavior component has a high impact on the final value of the emergence; that is, an emergent behavior of the system is not reached. Finally, in the simulation of Section 5.3, an emergent behavior is obtained, penalized by the values of the aggregation and cognitive components. If we consider the statement of [43] that emergence comes from "the behavioral changes product of the dynamics of the system", it can be associated with the behavior component. In the scenarios of Section 5.2, this final value is E = I = 0.94 for both case studies when the emotions are used, and close to 0.5 without emotions. Similarly, in the simulation this final value is 0.81. Again, this demonstrates that the emotions are a very important component to generate an emergent behavior in the context of a multi-robot system.
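Since the metric is simply the average of the final values of the information-generating concepts (assumed equally important), the computation can be sketched directly. The concept values below are hypothetical placeholders chosen to be consistent with the reported result E = I = 0.94 for a scenario with emotions; the actual numbers come from Tables 6, 7 and 9.

```python
def emergence_from_concepts(concepts):
    """E = I, computed as the plain average of the final values of the
    information-generating MASOES concepts (equal importance assumed)."""
    return sum(concepts.values()) / len(concepts)

# Hypothetical final concept values for a scenario with emotions.
with_emotions = {"behavior_component": 0.95,
                 "aggregation_mechanisms": 0.90,
                 "synthesis": 0.90,
                 "cognitive_component": 0.95,
                 "social_component": 1.00}
print(round(emergence_from_concepts(with_emotions), 2))  # 0.94
```

A degraded behavior component (e.g., 0.55 instead of 0.95 when emotions are removed) lowers this average directly, which is why the scenarios without emotions fall to values near 0.5.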

Robustness of our model and comparison with other works
In this section, we compare our approach with other works that use an emotional component in their architecture, and in addition, we analyze its robustness. Particularly, we are interested in investigations that include emotions in their robotic systems in order to manage their behaviors. The criteria used in this comparison are:

- Emotional component in a multi-robot system (CE): this criterion evaluates whether the models implement emotions in the individuals that compose them.
- Verification of the emergence and self-organization (VE): this criterion is related to the inclusion of a verification mechanism in the system.
- Adaptability (A): it is related to the diversity of tasks that the system can carry out, and the adaptation of the group to them.
- Diversity of the group (D): related to the heterogeneity of the group of robots.
- Social learning (AS): it evaluates whether the models have social learning mechanisms.

Table 11 presents the works evaluated, based on the proposed criteria. In general, the architectures studied present components based on behaviors, in order to facilitate their interaction with humans or improve their adaptability to the environment. In addition, the robots/agents have sets of behaviors which, according to the stimuli received, are activated at a given moment. At the same time, they are adaptable and can execute different tasks; in general, they are heterogeneous, and some have a collective learning mechanism. However, these works do not present an emotional component that influences such behaviors. Our proposal includes a behavioral component that uses an emotional component, in order to improve the adaptability of the system and the management of heterogeneous groups. Moreover, the emergence of behaviors is not mentioned in any of the previous works.
In our work, the inclusion of emotions and the design of the architecture facilitates the emergence and self-organized processes, since the individuals can respond in different ways to the context [33,34,35].
In general, the works [9,10,11,12,13,16,44] are closely related to ours in that they include emotions in multi-robot systems, but with different purposes and through the application of different techniques. Our contribution focuses on the use of emotions in order to facilitate the appearance of emergent phenomena. Besides, we propose a method for the recognition of emotions in the robots, so that the robots know the emotional state of the others, which can have a direct influence on their behaviors and on their adaptation to the group. In order to analyze the robustness of our architecture, we study its behavior during the emergent process in the presence of noise. In general, our architecture can manage the different noises present in the context due to internal or external factors. For example:

Internal factors: these are related to internal noises in the emergent system, which can be:
- Absence of information to recognize: this type of noise has been considered in the AR2P recognition system, which allows the recognition of emotional patterns in the absence of some of the information that characterizes them (see Section 3).
- Bad recognition: this noise is associated with the quality of AR2P in the recognition tasks (see Section 5.1).
- Robot failures: the system, due to its emergent nature, can be tolerant to faults caused by unforeseen alterations of the robots (physical and logical).
External factors: these are related to external noises that affect the system, the main one being the human:
- Human factor: the supervisor can change the system directly. For example, he can change the emotional state of a robot. This situation can be handled in real time by our system, in a new iteration/adaptation of the system.
In general, several works analyze swarm robotics in the context of emergent and self-organized behaviors [1,2,5,6,7]. For example, in [6], the authors propose a robot swarm for food gathering based on the beehive. The inclusion of emotions in multi-robot systems has been considered in several works [9,10,11,12,13,16,44], in the context of the decision making of each robot. However, previous works have not combined the analysis of emergent and self-organized behaviors in multi-robot systems with the use of emotions. Particularly, our goal is the study of the emotions as a mechanism for the emergent phenomena in a multi-robot system.

Conclusions
One of the main characteristics of our emotional model for multi-robot systems is that it allows the recognition of the emotions in order to generate emergent behaviors, which gives the system a large flexibility to execute different tasks. For example, the method for recruiting robots for a job is based on the emotions. The emotions in the robots define different reactions to similar stimuli in each one, which determine their basic behaviors, facilitating the emergence of behaviors in the system. The inclusion of emotions in multi-robot systems helps their adaptation in dynamic environments, improving their decision making at a given moment. The emotion defines the predisposition of a robot at a given moment, which determines how it responds to a stimulus. In this way, the robots can respond differently to the same stimulus, which makes the collective behavior unpredictable, facilitating the emergence and self-organization in the system. Additionally, this paper presents an algorithm for the recognition of the emotions, in order to test our model. The paper presents the formalization of the emotional model for the recognition problem in a multi-robot system, based on the AR2P algorithm. In this way, we can test the interactions between the robots based on the perceived emotions, and their influence on the collective behavior of the swarm. AR2P can be invoked by each robot to recognize emotional states (it is a skill of the robots), and according to its perception of the emotional states of the robots around it, the robot acts (recruits, informs, etc.). The model uses two strategies for recognition: the first strategy exploits the important signals that facilitate the recognition, and the second one exploits all the input signals. The first strategy can predict a pattern even with missing information (using the key signals). At the computational level, the proposed model offers a concise, readable and elegant solution (recursion).
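The two recognition strategies and the recursive matching can be sketched as follows. This is not the actual AR2P algorithm, only an illustration of the idea under stated assumptions: patterns are represented as flat signal lists, the "sadness" pattern and its signals (`low_battery`, `low_interaction`, `isolated`) are hypothetical, and matching is plain recursive set inclusion.

```python
def recognize(pattern, signals, use_key_signals=True):
    """Strategy 1 (use_key_signals=True) requires only the key signals,
    so a pattern can be recognized with missing information; strategy 2
    requires all of the pattern's signals to be present."""
    required = pattern["key_signals"] if use_key_signals else pattern["signals"]
    return matches(sorted(required), sorted(signals))

def matches(required, observed):
    """Recursively check that every required signal appears among the
    observed signals (both lists sorted)."""
    if not required:                      # all required signals found
        return True
    if not observed:                      # ran out of signals first
        return False
    if required[0] == observed[0]:
        return matches(required[1:], observed[1:])
    return matches(required, observed[1:])

# Hypothetical emotional pattern: "sadness" characterized by three
# signals, of which two are key signals.
sadness = {"signals": ["low_battery", "low_interaction", "isolated"],
           "key_signals": ["low_battery", "low_interaction"]}

observed = ["low_battery", "low_interaction"]   # "isolated" is missing
print(recognize(sadness, observed, use_key_signals=True))   # True
print(recognize(sadness, observed, use_key_signals=False))  # False
```

With the "isolated" signal missing, strategy 1 still recognizes sadness from the key signals, while strategy 2 fails, mirroring the robustness to absent information discussed above.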
The paper shows how to use the emotions to generate an emergent behavior in a multi-robot system. We have proposed two scenarios: the first one to inform the robots that need to recharge their battery (sad robots) of the place to do so, and the other for the recruitment of robots (happy robots) to move an object. Future work will extend this model with more emotions and more situations (scenarios), in order to test the scalability of our approach. The extension of the emotional model will allow a greater approximation to the human emotional spectrum, and therefore the inclusion of more complex emotional states, in order to study their influence on the collective behavior of the system. Finally, as future work, experiments will be carried out with real robots following the same architecture proposed in [33,35], which must improve the FCM used [27].