Pump Scheduling Optimization Using Asynchronous Parallel Evolutionary Algorithms


 
 
Optimizing pump scheduling is a promising way to achieve cost reductions in water distribution pumping stations. As systems grow, pump scheduling becomes a very difficult task. To address harder pump-scheduling problems, this work proposes parallel asynchronous evolutionary algorithms as a tool for solving an optimal pump-scheduling problem. In particular, this work considers a pump-scheduling problem with four objectives to be minimized: electric energy cost, maintenance cost, maximum power peak, and level variation in a reservoir. Parallel and sequential versions of different evolutionary algorithms for multi-objective optimization were implemented and their results compared using a set of experimental metrics. Analysis of the metric results shows that our parallel asynchronous implementation of evolutionary algorithms is effective in finding a wide range of alternative optimal pump schedules to choose from.
 
 



Introduction
In conventional water supply systems, pumping of treated water represents a major expenditure in the total energy budget [1,2]. Because so much energy is required for pumping, a saving of one or two percent can add up to several thousand dollars over the course of a year. In many pump stations, an investment in a few small pump modifications or operational changes may result in significant savings [2]. In general, however, for economic reasons it is very difficult to achieve reductions by modifying the current facilities of pump stations. Therefore, pump-scheduling optimization has proven to be a practical and highly effective method to reduce pumping costs without making changes to the actual infrastructure of the whole system.
Typically, a pumping station consists of a set of pumps with different capacities. These pumps are used in combination to drive water to one or more reservoirs, while fulfilling hydraulic and technical constraints. Thus, at a particular point in time, some pumps will be working while others will not. In this context, scheduling the pumps' operation means choosing the right combination of pumps to be working at each time interval of a scheduling period.
A pump schedule, then, is the set of all pump combinations chosen for every time interval of the scheduling horizon. An optimal pump schedule is one that optimizes particular objectives while fulfilling system constraints. Depending on the number of variables and objectives considered, optimizing a pump-scheduling problem may be very difficult, especially for large systems. Being a practical and challenging problem, it is no surprise that several authors have studied it, introducing different approaches [1][2][3][4][5][6][7][8][9].
Ormsbee et al. [3] present a detailed review of linear, non-linear, integer, dynamic, mixed, and other kinds of programming used to optimize a single objective: the electric energy cost. Lansey et al. [4] introduce the number of pump switches as an alternative way to evaluate the pumps' maintenance cost, which thus became the second objective considered in the literature. In order to minimize the operating costs associated with water supply pumping systems, several researchers have developed optimal control formulations; Mays [1] lists and classifies various algorithms developed to solve the associated control problem. In the past few years, Evolutionary Computation techniques were introduced in the study of the optimal pump-scheduling problem. Mackle et al. [5] present a single-objective optimization (electric energy cost) using Genetic Algorithms (GAs). Savic et al. [6] propose a hybridization of a GA with a local search method to optimize two objectives: electric energy cost and pump maintenance cost. In addition, Schaetzen [7] presents a single-objective optimization using GAs, handling system constraints through penalties.
Evolutionary algorithms have proven to be useful tools helping decision makers (DM) to solve a multi-objective pump scheduling problem [8]. Since there is always a need for improvement, in [10], the use of a parallel model for Multiobjective Evolutionary Algorithms (MOEA) is proposed to provide DM with better solutions. Given that parallel implementations in [10] use a centralized migration topology, there is a bottleneck when the number of processes scales up. Our work extends the comparison of parallel models for MOEAs using a different parallel asynchronous approach than the one presented in [10].
This paper is organized as follows: Section 2 presents a description of the optimal pump-scheduling problem considered. Section 3 presents the motivations for choosing the MOEA approach and for using parallel concepts to improve the efficiency attained by sequential implementations for the considered problem; the parallel implementation model is also presented in this section. Section 4 presents empirical comparisons of six different algorithms. Finally, conclusions of this work are presented.

Multi-objective Optimal Pump Scheduling Problem
In modeling pump operation, a simple framework can be used because only the large mains between the pump station and tanks are important in calculations [2]. Therefore, this work considers a simplified hydraulic model based on a real pumping station in Paraguay, similar to the one presented in [7]. This model is composed of:
• an inexhaustible water source: the potable water reservoir;
• an elevated reservoir, which supplies water on demand to a community;
• a potable water pumping station with n_p pumps used to pump water from the water source to the elevated reservoir; and
• a main pipeline used to drive water from the pumping station to the elevated reservoir.
The proposed model, using five pumps, is drawn in Figure 1. The pumping station comprises a set of n_p different constant-velocity centrifugal pumps working in parallel association [2]. Pumping capacities are assumed constant during every time interval. Therefore, for a time interval of one hour, each pump combination has an assigned fixed discharge, electric energy consumption and power. The discharge rate of a pump combination may not be a linear combination of the pumping capacities of the individual pumps; therefore, non-linearities in the combination of pumps are handled through a table of pump combination characteristics, as presented in Table 1. The only data considered outside the model is the community's water demand.
Therefore, a mass balance mathematical model was chosen. According to this, the amount of water that goes into the reservoir must be equal to the amount of water that comes out of it [2]. As an additional advantage, this model allows the same schedule to be used several times if water demand does not change substantially.
As water demand is an input data in this problem, it has to be obtained from reliable sources. The quality and applicability of an algorithm's solution depend on how good the predictions of the water demand are. Data are obtained through a statistical study of the community demand during many years. Through these studies, an estimated water demand can be established, according to certain parameters. Several models to predict this demand are presented in [2,11].
In order to code the pumping schedule, a binary alphabet is used. At every time interval, a bit represents each pump. A 0 represents a pump that is not working, while a 1 represents a pump that is working. An optimization period of one day divided into twenty-four intervals of one hour each is considered. Thus, pumps can be turned on or off only at the beginning of each time interval in this model.
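As an illustration, the following sketch (Python; the helper names are ours, not from the paper) shows how such a binary string maps to and from per-interval pump combinations:

```python
# Sketch of the binary schedule encoding described above.
# A schedule is a list of 24 pump combinations; each combination
# is a tuple of n_p bits (1 = pump on, 0 = pump off).

N_INTERVALS = 24
N_PUMPS = 5  # n_p in the text

def encode(schedule):
    """Flatten a 24 x n_p schedule into a bit string of length 24 * n_p."""
    assert len(schedule) == N_INTERVALS
    return [bit for combination in schedule for bit in combination]

def decode(bits):
    """Recover the per-interval pump combinations from a flat bit string."""
    assert len(bits) == N_INTERVALS * N_PUMPS
    return [tuple(bits[i * N_PUMPS:(i + 1) * N_PUMPS])
            for i in range(N_INTERVALS)]

# Example: only pump 0 works, and only during the first hour.
schedule = [(1, 0, 0, 0, 0)] + [(0, 0, 0, 0, 0)] * 23
bits = encode(schedule)
assert len(bits) == 120  # hence 2^120 candidate schedules overall
assert decode(bits) == schedule
```

With five pumps and 24 intervals, the chromosome length is 120 bits, which is where the 2^120 search-space size discussed later comes from.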

Mathematical definition of the problem: Objectives
In order to define the multi-objective optimal pump-scheduling problem to be solved in this work, the next subsections introduce the four different objectives considered in the optimization.

Electric energy cost (f_1)
Electric energy cost is the cost of all electric energy consumed by all pumps of the pumping station during the optimization period. An important issue to consider when analyzing electric energy cost is the charge structure used by the electric company. In most electricity supply systems, electric energy cost is not the same throughout the whole day. This work considers the following charge structure:
• Low cost (C_l): from 0:00 to 17:00 hours and from 22:00 to 24:00 hours.
• High cost (C_h): from 17:00 to 22:00 hours.
The influence of this variable on the pump schedule is remarkable. Electric energy costs can be substantially reduced if the optimal pump schedule uses the smallest possible number of pumps during the high-cost period [5,6]. Water already stored in the reservoir can be used during this period to satisfy the community's water demand. A different charge structure can also be considered if needed. The mathematical expression to calculate the electric energy cost E_c is given by Equation (1) [5]:

E_c = Σ_{i=1}^{24} C(i) · c(p_i)     (1)

where
i : time interval index;
C(i) : electricity tariff (C_l or C_h) in force at time interval i;
p_i : pump combination at interval i; using n_p to denote the number of pumps in the station, p_i can be coded by a binary string in {0, 1}^{n_p} (see the codes in Table 1 for n_p = 5);
c(p_i) : electric energy consumed by pump combination p_i during one time interval (see Table 1).
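A minimal sketch of this computation, assuming an illustrative per-combination consumption table and the C_h = 2·C_l relation used later in the experiments (all numeric values below are made up):

```python
# Hedged sketch of the electric energy cost objective f1.
# `consumption` stands in for the energy column of Table 1.

C_LOW, C_HIGH = 1.0, 2.0  # the test case assumes C_h = 2 * C_l

def tariff(hour):
    """Charge structure: high cost from 17:00 to 22:00, low cost otherwise."""
    return C_HIGH if 17 <= hour < 22 else C_LOW

def energy_cost(schedule, consumption):
    """E_c = sum over intervals i of tariff(i) * c(p_i)."""
    return sum(tariff(i) * consumption[combo]
               for i, combo in enumerate(schedule))

# Illustrative consumption table (kWh per hour) for two combinations.
consumption = {(0, 0, 0, 0, 0): 0.0, (1, 0, 0, 0, 0): 90.0}
schedule = [(1, 0, 0, 0, 0)] * 24
# 19 low-cost hours plus 5 high-cost hours:
assert energy_cost(schedule, consumption) == 90.0 * (19 * 1.0 + 5 * 2.0)
```

The example makes the trade-off visible: any pump-hour moved out of the 17:00 to 22:00 window halves its contribution to f_1.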

Pump maintenance cost (f_2)
Pump maintenance cost can be as important as electric energy cost, or even more so. In Lansey et al. [4], the number of pump switches is introduced as an option to measure maintenance cost: a pump's wear can be measured indirectly through the number of times it is switched on. A pump switch is counted only if the pump was not working in the preceding time interval and has been turned on. A pump that was already working in the preceding time interval and continues in the same state, or is switched off, does not count as a pump switch in the present work. In this way, pump maintenance cost can be reduced indirectly by reducing the number of pump switches.
The total number of pump switches N_s is calculated by adding the number of pump switches at every time interval. The number of pump switches between the last time interval of the preceding optimization period (the day before) and the first time interval of the day being analyzed is also computed; however, only half of that quantity is added to the total, in order to account for possible switches between two consecutive optimization periods, assuming a certain periodicity between consecutive pump schedules, as shown in Equation (2):

N_s = Σ_{i=2}^{24} |p_i ∧ ¬p_{i−1}| + (1/2) |p_1 ∧ ¬p_24|     (2)

where | · | represents the 1-norm of a vector, ∧ denotes the bitwise AND and ¬ the bitwise complement, so that |p_i ∧ ¬p_{i−1}| counts the pumps switched on at interval i, and p_24 is taken from the preceding period.
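The switch-counting rule, including the half-weighted boundary term of Equation (2), can be sketched as follows (pump combinations as bit tuples; the sample schedule is illustrative):

```python
def pump_switches(schedule, last_day_final):
    """N_s per Equation (2): count switch-ons within the day, plus half
    of the switch-ons between the previous day's last interval and the
    first interval of the current day."""
    def switch_ons(prev, curr):
        # A switch is counted only when a pump goes from off (0) to on (1).
        return sum(1 for a, b in zip(prev, curr) if a == 0 and b == 1)

    n_s = sum(switch_ons(schedule[i - 1], schedule[i])
              for i in range(1, len(schedule)))
    n_s += 0.5 * switch_ons(last_day_final, schedule[0])
    return n_s

# Pump 0 is re-started once during the day; pump 1 was off yesterday
# and is on at hour 0, contributing half a switch.
schedule = ([(1, 1, 0, 0, 0)] * 6 + [(0, 1, 0, 0, 0)] * 6
            + [(1, 1, 0, 0, 0)] * 12)
assert pump_switches(schedule, last_day_final=(1, 0, 0, 0, 0)) == 1.5
```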

Reservoir level variation (f_3)
There are three levels to be considered in the reservoir:
• a minimum level that guarantees enough pressure in the pipeline; this level must also be maintained for security reasons, since unexpected events, such as a fire, may demand a large amount of water in a short time;
• a maximum level, compatible with the reservoir's capacity; and
• an initial level that has to be attained again by the end of the optimization period.
Maximum and minimum levels are considered as constraints. Hence, at the end of each time interval, the water level must lie between the maximum level (h_max) and the minimum level (h_min). However, the level variation between the beginning and the end of the optimization period (∆h) is stated as another objective to be minimized, since small variations do not necessarily make a solution unacceptable, as shown in Equation (3):

∆h = |h_24 − h_0|     (3)

subject to

h_min ≤ h_i ≤ h_max,   i = 1, . . . , 24     (4)

where h_i denotes the reservoir level at the end of time interval i. Other constraints are considered as follows:
• The water source is assumed to supply enough water at any time and without additional costs.
• Maximum and minimum pressure constraints in the pipeline are always fulfilled, no matter at what level the reservoir is kept.
• Valves in the system are not considered.
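Under the mass-balance model, the reservoir level and the f_3 objective can be simulated interval by interval. The sketch below assumes a constant reservoir cross-section to convert volumes to levels; all numeric values are illustrative:

```python
def level_variation(schedule, discharge, demand, h0, area, h_min, h_max):
    """Mass-balance simulation of the reservoir and the f3 objective.
    discharge[combo] is the pumped inflow per hour (as in Table 1),
    demand[i] is the community's outflow per hour, and `area` converts
    volume to level. Returns |h_24 - h_0| per Equation (3), or None
    when the min/max level constraints are violated."""
    h = h0
    for i, combo in enumerate(schedule):
        h += (discharge[combo] - demand[i]) / area
        if not (h_min <= h <= h_max):
            return None  # infeasible schedule
    return abs(h - h0)

discharge = {(0, 0, 0, 0, 0): 0.0, (1, 0, 0, 0, 0): 100.0}
demand = [50.0] * 24
# Pump for 12 hours (net inflow), rest for 12 hours (net outflow):
# the level returns exactly to h0, so f3 = 0.
schedule = [(1, 0, 0, 0, 0)] * 12 + [(0, 0, 0, 0, 0)] * 12
dh = level_variation(schedule, discharge, demand,
                     h0=2.0, area=100.0, h_min=1.0, h_max=10.0)
assert dh == 0.0
```

Returning `None` for constraint violations is a simplification of this sketch; the paper instead repairs infeasible schedules with a heuristic, as described in Section 4.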

Maximum power peak (f_4)
Some electricity companies charge their big clients according to a reserved power peak. This reserved power has a fixed charge, but an expensive additional charge, or penalty, is added when the reserved power is exceeded. The penalty is computed using the maximum power peak reached during the period considered for billing purposes. Therefore, reducing such penalties becomes very important. This work approaches this issue by proposing the daily power peak P_max as another objective to be minimized. It is easily computed using Equation (5):

P_max = max_{i=1,...,24} P(p_i)     (5)

where P(p_i) is the power at interval i using pump combination p_i (see Table 1).
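Given the power column of Table 1, the daily power peak reduces to a maximum over the 24 intervals. A sketch, with a made-up power table:

```python
def power_peak(schedule, power):
    """P_max per Equation (5): the largest power draw over the 24
    one-hour intervals, given per-combination power values (Table 1)."""
    return max(power[combo] for combo in schedule)

# Illustrative power table (kW); the values are not from the paper.
power = {(0, 0, 0, 0, 0): 0.0,
         (1, 0, 0, 0, 0): 110.0,
         (1, 1, 0, 0, 0): 205.0}
schedule = ([(1, 1, 0, 0, 0)] * 2 + [(1, 0, 0, 0, 0)] * 10
            + [(0, 0, 0, 0, 0)] * 12)
assert power_peak(schedule, power) == 205.0
```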

Multi-objective pump scheduling problem
With each of the four objectives defined, the multi-objective pump scheduling problem can be stated as:

Minimize y = F(x) = (f_1(x), f_2(x), f_3(x), f_4(x))

where
x ∈ X ⊆ B^{24·n_p} is the decision vector, B = {0, 1};
y = (y_1, y_2, y_3, y_4) ∈ Y ⊂ R^4 is the objective vector.

In summary, the defined multi-objective pump scheduling problem considers the pumps' characteristics (pumping capacities) in order to satisfy water demand while fulfilling other constraints, such as the maximum and minimum levels in the reservoir. At the same time, electric energy cost, pump maintenance cost, maximum power peak, and the level variation in the reservoir between the beginning and the end of the optimization period are minimized. Clearly, over a 24-hour period these objectives may conflict; e.g., to minimize the power peak a good schedule uses a small amount of power throughout the whole day, while to minimize energy cost it is better to consume energy while the cost is lower, turning the pumps off during the high-cost period.

Discussion
In multi-objective optimization problems with several conflicting objectives there is no single solution optimizing all objectives simultaneously, but rather a set of alternative solutions representing optimal trade-offs between the various objectives. These solutions are known as Pareto-optimal or non-dominated solutions. A solution is said to be Pareto-optimal with regard to a given subset of solutions if no other solution in the subset can be considered better when all objectives are taken into account and no other preference information is provided. A solution is called a true Pareto-optimal solution if it is non-dominated with respect to the whole search space. Pareto-optimal solutions form the so-called Pareto-optimal set, and its image in the objective space is known as the Pareto-optimal front. The true Pareto-optimal set is composed of all the true Pareto-optimal solutions of the considered problem; this set and its corresponding true Pareto-optimal front are denoted as P_true and PF_true, respectively. In many multi-objective optimization problems, knowledge about the true Pareto-optimal front helps the decision maker to choose the best compromise solution according to her preferences.

Classical search methods handle multi-objective problems by means of scalarization techniques. Therefore, they really work with only one objective, formed by a composition of the original objectives. In this way, these methods are unable to deal adequately with the simultaneous optimization of various conflicting objectives. Since traditional methods were developed with one objective in mind, they are not well suited to obtaining multiple solutions for a multi-objective problem in a single run.
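The dominance relation behind these definitions can be made concrete with a short sketch (minimization assumed on every objective):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b: a is no worse in
    every objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(vectors):
    """Pareto-optimal subset of a list of objective vectors."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u != v)]

points = [(1, 4), (2, 2), (3, 1), (3, 3), (4, 4)]
assert non_dominated(points) == [(1, 4), (2, 2), (3, 1)]
```

Here (3, 3) and (4, 4) are dominated by (2, 2), while the three remaining vectors are mutually non-dominated trade-offs.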
In addition, exact methods are not adequate for searching huge search spaces because of the impossibility of exploring the entire domain. For example, for a pump station comprising five pumps and an optimization scope of 24 one-hour intervals, there are 2^120 > 10^36 candidate solutions to explore in this optimal pump-scheduling problem. Problem constraints reduce the search space to a subset of feasible solutions, but the cardinality of that subset is still far too large to be exhaustively analyzed by classical methods.
When computing the true Pareto-optimal front is computationally expensive or infeasible and exact methods cannot be applied, a good approximation to the real Pareto set is desirable. MOEAs do not guarantee identification of the true Pareto-optimal front, but they have demonstrated their ability to effectively and efficiently explore huge and complex search spaces, finding good approximations of the entire true Pareto-optimal set for many difficult multi-objective problems in a single run. Therefore, MOEAs are a promising alternative for solving the pump scheduling problem.
At each generation of a MOEA's execution, a certain set of trade-off solutions is identified. These solutions can be considered Pareto-optimal with regard to the current genetic population. This set is represented by P_current(t), where t stands for the generation number; the Pareto front associated with P_current(t) is denoted PF_current(t). It is expected that, as the evolutionary process goes on, P_current(t) approaches P_true. Then, when a stop criterion is reached, the final solution set obtained by a MOEA has the potential to be a good approximation of P_true. This final solution set is represented by P_known, while PF_known denotes its associated Pareto front.
MOEAs are stochastic search algorithms; hence, they do not guarantee finding the global optimum in a given execution. Therefore, to obtain a set of good solutions it is usual to perform several executions of a given MOEA and combine their reported results. Since multi-objective functions may be computationally expensive, the size of a MOEA's population and the number of generations have to be limited in order to obtain solutions in a reasonable time.
Both population size and number of generations affect the quality of final solutions. Hence, it is desirable to provide a method that can explore a huge search space and/or carry out more generations in a given wall-clock time period, improving the quality of obtained solutions.
Parallelization of MOEAs appears to be a very good option to expand the search space an algorithm can examine [10]. Also, by interchanging individuals between several populations, it is possible to speed up the convergence of these algorithms to the true Pareto-optimal front.
To demonstrate these two statements for the pump scheduling problem, this work proposes a parallel asynchronous model for MOEAs. Using this model, parallel implementations of several outstanding MOEAs are tested, and the results obtained by sequential and parallel MOEA executions are compared using a set of metrics. Thus, this work extends the comparison of the parallel MOEA implementations presented in [10] by using a new parallel model as well as a slightly different comparison method.

Parallel Asynchronous MOEAs
The parallel model for MOEAs presented in this work is based on a multi-deme, or island, genetic algorithm approach [12,13]. In a multi-deme genetic algorithm, one population is divided into subpopulations called islands, regions or demes. Each subpopulation runs a separate genetic algorithm, and the fitness value of an individual is calculated only relative to other individuals from the same region. In addition to the basic operators of a genetic algorithm, a migration operator is introduced; this operator controls the exchange of individuals between islands. By dividing the population into regions and by specifying a migration policy, the multi-deme model can be adapted to various parallel architectures, especially MIMD machines [12].
The proposed parallel framework consists of two kinds of processes, a collector and several pMOEAs (parallel Multi-objective Evolutionary Algorithms). The collector structure is presented in Algorithm 1. This procedure spawns all pMOEA processes and receives calculated solutions from them. In addition, the collector maintains an archive of the non-dominated solutions interchanged between demes and provides the final approximation set. This process does not utilize any evolutionary operator and does not interfere with the evolutionary process that is done by each pMOEA process. If the number of solutions in the collector process exceeds a desired number, an SPEA clustering procedure [14] is used to prune the set of solutions.
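A simplified sketch of the collector's archive maintenance follows. For brevity, the SPEA clustering step [14] is replaced here by plain truncation; that substitution is an assumption of this sketch, not the paper's method:

```python
# Sketch of the collector's non-dominated archive update.

MAX_ARCHIVE = 100  # desired archive size (illustrative)

def dominates(a, b):
    # Pareto dominance for minimization objectives.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, incoming):
    """Insert received solutions, drop dominated ones, and bound the
    archive size (truncation stands in for SPEA clustering [14])."""
    candidates = archive + [s for s in incoming if s not in archive]
    nd = [s for s in candidates
          if not any(dominates(t, s) for t in candidates)]
    return nd[:MAX_ARCHIVE]

archive = update_archive([], [(3, 1), (2, 2)])
archive = update_archive(archive, [(1, 1), (4, 4)])
assert archive == [(1, 1)]  # (1, 1) dominates everything else
```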
Meanwhile, the pMOEAs are responsible for performing the real computational work. These pMOEAs differ from their sequential counterparts essentially in the migration and replacement steps described next. Algorithm 2 presents the general framework for pMOEA processes. In each island, parameters are received from the collector and an initial population is generated. Then, as long as a stop criterion is not reached, the evolutionary process proceeds. At each generation the migration condition, which in this work is based on a probability test, is checked; if it holds, migrants are selected.
Since there is no unique best solution to migrate, some criterion must be applied. In this work, candidates for migration are considered only among the non-dominated solutions of the current generation. In some cases, the number of non-dominated solutions in a population may be very large; hence, a parameter controlling the maximum number of migrants is provided. Migration of individuals is therefore controlled by two parameters: one for the frequency of communications, and another for the number of migrants. In this case, the migrating elements may represent a fraction of the non-dominated individuals currently in a MOEA's population; thus, a number of individuals are randomly selected from the set of non-dominated solutions. In this way, all non-dominated solutions are treated equally and no other consideration is needed. After choosing the individuals to migrate, they are broadcast to all other processes.
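The migrant-selection policy described above, a uniform random sample of at most n_mig current non-dominated individuals, can be sketched as:

```python
import random

def select_migrants(population, n_mig, rng=random):
    """Pick up to n_mig migrants uniformly at random from the
    non-dominated solutions of the current population. Each individual
    is a (decision, objectives) pair; objectives are minimized."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    front = [ind for ind in population
             if not any(dominates(other[1], ind[1]) for other in population)]
    return rng.sample(front, min(n_mig, len(front)))

population = [("s1", (1, 4)), ("s2", (2, 2)), ("s3", (5, 5))]
migrants = select_migrants(population, n_mig=2)
# s3 is dominated by s2, so it can never be selected.
assert len(migrants) == 2 and ("s3", (5, 5)) not in migrants
```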
Once the migration condition has been tested and the corresponding procedure executed, the process checks whether solutions have been received. If there are none, the procedure simply goes on; otherwise, replacement actions are taken. There are many alternatives for receiving and replacing individuals, among them:
1. apply selection to the union of the received migrants and the current population;
2. randomly replace elements in the genetic population;
3. randomly replace elements dominated by the received ones.
From the above alternatives, the first requires major modifications of the original algorithm. The other approaches store received solutions in an auxiliary buffer and copy solutions into the genetic population when a condition is satisfied. The second approach permits the loss of good-quality solutions when they are replaced by bad migrants. On the other hand, the third option ensures that non-dominated solutions will not be replaced at random. At the same time, this last method has a non-zero probability of not losing the worst solution, preserving genetic information of several fronts while guaranteeing the maintenance of good solutions. When a pMOEA reaches its stop criterion, its final non-dominated solutions are sent to the collector before finishing.

Algorithm 2: General structure of pMOEA processes
  procedure pMOEA()
    Receive the parameters of the MOEA execution, plus the migration probability p_mig and the maximum number of non-dominated solutions to migrate n_mig
    Generate an initial population P(0) at random, and set t = 0
    while the stop criterion is not reached do
      t = t + 1
      Generate a new population P(t) using a given MOEA procedure
      if the condition to migrate is reached then
        Select migrants from P(t) according to a specified policy
        Send migrants to all other processes
      end if
      if there are solutions received from other demes then
        Replace individuals in P(t) by the received ones according to a specified policy
      end if
    end while
    Send P_known to the collector with a termination signal
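Replacement policy 3, preferred above, can be sketched as follows; pairing each migrant with one randomly chosen dominated victim is our reading of the text:

```python
import random

def replace_dominated(population, migrants):
    """Replacement policy 3: each received migrant replaces one randomly
    chosen individual that it dominates; a migrant dominating nobody is
    discarded, so local non-dominated solutions are never displaced."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    population = list(population)  # work on a copy
    for migrant in migrants:
        victims = [i for i, ind in enumerate(population)
                   if dominates(migrant, ind)]
        if victims:
            population[random.choice(victims)] = migrant
    return population

pop = [(3, 3), (1, 5), (6, 6)]
new_pop = replace_dominated(pop, [(2, 2)])
# (2, 2) displaces either (3, 3) or (6, 6); (1, 5) is not dominated
# by the migrant, so it always survives.
assert (2, 2) in new_pop and (1, 5) in new_pop
```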

Parameters
In order to test the performance of the pMOEA implementations in solving the multi-objective pump-scheduling problem, a test problem was chosen, and the results of different executions of these implementations were compared under a set of selected metrics [15]. The multi-objective pump scheduling test problem parameters used in this work are based on the technical characteristics of the main pumping station of a water supply system in Asuncion, Paraguay's capital, as described below:
• A set of n_p = 5 pumps is used.
• An elevated reservoir of fixed dimensions is considered.
• A demand curve based on statistical data of water consumption in a typical day is used, as presented in Figure 2.
• An electricity cost structure with C_h = 2·C_l is assumed.
With these specific values, sequential and parallel implementations of six algorithms were developed: the Multiple Objective Genetic Algorithm (MOGA) [16], the Niched Pareto Genetic Algorithm (NPGA) [17], the Non-dominated Sorting Genetic Algorithm (NSGA) [18], the Strength Pareto Evolutionary Algorithm (SPEA) [14], NSGA-II [19] and the Controlled Elitist NSGA-II (CNSGA-II) [20]. These algorithms were selected for consistency with the authors' previous works [8,10]; they are also good representative examples of different generations of MOEA research [21][22][23]. Detailed information on each implemented MOEA can be found in the referenced papers, and a general background on various implementations of evolutionary algorithms is provided in [24]. Since many candidate solutions produced by the algorithms do not fulfill the hydraulic and technical constraints, a heuristic method was combined with each implemented MOEA in order to transform a general solution into a feasible one [8].
For each considered MOEA, 10 different executions were carried out in their sequential and parallel versions using 1, 2, 4, 8 and 16 processes placed on different machines. Each execution used a different random seed. In parallel runs, a migration probability (p_mig) of 0.5 was used, i.e., good solutions are transmitted to other processors in around half of the generations. In these executions, the maximum number of non-dominated solutions interchanged (n_mig) was 10% of the genetic population's size. Considering the parallel platform used, these values represent a good experimental trade-off between the frequency of interchanges and the number of interchanged solutions. A parallel MOEA using only 1 processor differs from a sequential execution in that it uses a collector process storing non-dominated solutions as the evolutionary process proceeds, introducing a sort of elitism. The implemented MOEAs use the parameters listed in Table 2 [24].

Metrics
Having several optimization criteria, it is not clear what the quality of a solution means. For example, it may refer to closeness to the optimal front, the number of solutions, the spread of solutions, etc. In fact, in [25] three general goals are identified:
1. The size of the obtained Pareto front should be maximized, i.e., a wide range of Pareto solutions is preferred.
2. The distance of the obtained Pareto front to the true Pareto-optimal front should be minimized.
3. A good distribution of solutions, usually in objective space, is desirable.
Then, in order to evaluate the experimental results of the implemented algorithms, a set of metrics is used, the comparison itself being multi-objective. Explanations of the selected metrics can be found in [26]; however, it is important to note their most relevant aspects:
• Overall non-dominated vector generation (ONVG): this metric reports the number of solutions in PF_known; good algorithms are expected to produce a large number of solutions. The ONVG metric is defined as:

ONVG = ||PF_known||

where || · || represents cardinality.
• Overall true non-dominated vector generation (OTNVG): counts the number of solutions in PF_known that are also in PF_true, and is defined as:

OTNVG = ||{ y ∈ PF_known | y ∈ PF_true }||

• Maximum Pareto Front Error (ME): this metric indicates the largest distance between a point in PF_known and its nearest neighbor in PF_true; thus, all the other points in PF_known are closer to PF_true than this worst-case distance. A value of 0 is ideal. The ME metric is formally defined as:

ME = max_{i=1,...,n} d_min_i

where d_min_i is the Euclidean distance (in objective space) between the i-th vector in PF_known and its nearest neighbor in PF_true, and n is the number of points in PF_known.
• Spacing (S): this metric serves as an indicator of the distribution of solutions in PF_known and is based on the average (arithmetic mean) distance of each point from its nearest neighbor in PF_known. The spacing metric is mathematically defined as:

S = sqrt( (1/(n−1)) · Σ_{i=1}^{n} (d̄_min − d_min_i)² )

where the average d̄_min is defined as:

d̄_min = (1/n) · Σ_{i=1}^{n} d_min_i

and d_min_i here denotes the distance of the i-th point in PF_known to its nearest neighbor in PF_known. An ideal value for the spacing metric is 0.
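The ME and Spacing metrics can be sketched directly from their definitions, using Euclidean distances in objective space (`math.dist` requires Python 3.8+):

```python
import math

def _d_min(point, others):
    # Euclidean distance to the nearest distinct neighbor in `others`.
    return min(math.dist(point, q) for q in others if q != point)

def max_front_error(pf_known, pf_true):
    """ME: worst-case distance from a point in PF_known to PF_true."""
    return max(min(math.dist(p, q) for q in pf_true) for p in pf_known)

def spacing(pf_known):
    """S: deviation of nearest-neighbor distances within PF_known."""
    d = [_d_min(p, pf_known) for p in pf_known]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (len(d) - 1))

pf_true = [(0.0, 3.0), (1.0, 2.0), (2.0, 1.0), (3.0, 0.0)]
pf_known = [(0.0, 3.0), (1.0, 2.0), (2.0, 1.0)]
assert max_front_error(pf_known, pf_true) == 0.0  # all points on PF_true
assert spacing(pf_known) < 1e-9                   # equally spaced points
```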
Since some of these metrics require P F true to be computed, an approximation of it was calculated from the non-dominated solutions in the union set of all obtained results. This experimental set is taken as the P F true of reference.
ONVG and OTNVG are used in combination to measure the size and quality of the set of calculated solutions. Both are considered because ONVG alone is a very poor indicator of the comparative quality of two sets: a given MOEA could produce a hundred non-dominated points that are very close to the true Pareto front, while another MOEA could produce a thousand points far from PF_true; with ONVG alone, the latter would appear to be better. Since an approximation set is used, OTNVG appears to be a better metric to compare the quality of the solutions. Note that, using both metrics, an error rate can be easily computed.

Results show that the average ONVG value grows with the number of processors. The use of a separate process storing non-dominated solutions improves the performance of first-generation non-elitist MOEAs (MOGA, also denoted FFGA; NPGA; and NSGA) in this metric, making them competitive with elitist approaches, especially as the number of processors grows. In fact, the best result for this metric is obtained by the parallel implementation of NSGA using 16 processors.

Results
In spite of the growing number of solutions for the parallel implementations of FFGA, NPGA and NSGA, these algorithms do not find any true Pareto-optimal solution: OTNVG = 0 for every executed run of these 3 algorithms. Thus, they contribute no solutions to the true Pareto front of reference.
In view of the inefficacy of the above MOEAs, only results obtained by SPEA, NSGA-II and CNSGA-II are discussed in what follows. A complete set of results taking into account other algorithms and metrics can be found in [23].

In Table 4, some statistical values for the OTNVG metric are presented. The average value is computed by adding the OTNVG values of the 10 executions and dividing by 10. Median, maximum, minimum and standard deviation values [27] are also computed; the median is the middle value of the set when ordered by value.
As can be noted, the best average value for this metric is obtained by a parallel implementation of CNSGA-II using 16 processors. On average, an execution of pCNSGA-II with 16 processes reports 121.6 solutions that belong to the true Pareto-optimal set, three times more than the corresponding sequential run.
Considering the OT N V G metric for SPEA, the effect of parallelization is even more impressive. For SPEA, the average value is biased by some very good executions; thus, the median value is a better indicator for the distribution of OT N V G. Note that the SPEA median value for the OT N V G metric is just 1 for the sequential implementation, increasing to 51 for pSPEA with 16 processors.
Minimum and maximum values of OTNVG also improve with the number of processors. Note that as the minimum and maximum pCNSGA-II values increase, the difference between them is reduced, as happens with the standard deviation; i.e., pCNSGA-II becomes more stable. In contrast, this difference and the standard deviation increase with the number of processors for the pSPEA implementations, while no clear relation is observed for pNSGA-II. In conclusion, considering the number of true Pareto-optimal solutions found, pCNSGA-II may be considered the most stable implementation as the number of processors increases. Table 5 shows values for the ME metric. As can be noted, the average values remain of the same order as the number of processors increases, yet the maximum ME generally decreases.

To evaluate this metric, the maximum column of Table 5 is considered. This column provides the worst ME over the 10 executions of a given MOEA; therefore, it is an upper bound on the error strip for a given run. Consequently, the minimum of such values gives the best worst-case value, indicating that all other solutions of the considered algorithm are closer to the true Pareto-optimal front. With these considerations, it can be noted that pNSGA-II using 8 processors provides the solution sets with the lowest upper ME bound.
The last metric to be considered is Spacing. Table 6 shows results for this metric. It can be seen that values are very close to the optimum (0). Taking into account average S values, parallel MOEAs are better than their sequential versions, and the best value is obtained with pNSGA-II using 8 processors.
In summary, parallel implementations of MOEAs find a larger number of solutions (ONVG and OTNVG) and better solutions (OTNVG and ME), which are also better distributed (S), than their sequential counterparts.

Conclusions
A parallel asynchronous model for multi-objective evolutionary optimization was presented and applied to six recognized MOEAs to solve an optimal pump-scheduling problem considering four minimization objectives. Various executions of parallel and sequential implementations were conducted and their results compared, using a set of metrics.

To have a notion of the goodness of the different solution sets, the following lexicographical order of average metric values is finally considered: 1. OTNVG, 2. ONVG, 3. ME, 4. Spacing. Table 7 shows a ranking of the algorithms using this lexicographic order of metrics. As can be seen, the best position is obtained by CNSGA-II with 16 processors. However, it should be emphasized that another algorithm could be considered the best one under a different preference among the metrics.
Our experimental results have shown that parallel evolutionary algorithms are capable of providing a larger number of better alternatives for pump scheduling than their sequential counterparts.