Improved DV-Hop based on parallel and compact whale optimization algorithm for localization in wireless sensor networks

Improving localization performance is one of the critical issues in Wireless Sensor Networks (WSN). As a range-free localization algorithm, Distance Vector-Hop (DV-Hop) is well known for its simplicity but is hindered by low accuracy and poor stability. It is therefore necessary to improve DV-Hop to achieve competitive performance. However, the overall performance of a WSN is limited by the computing and storage capabilities of its sensor nodes. In this paper, we propose a Parallel and Compact Whale Optimization Algorithm (PCWOA) to improve DV-Hop performance. The compact technique saves memory by replacing the original population with a probabilistic model, while the parallel techniques enhance the ability to escape local optima and improve solution accuracy. The proposed algorithm is tested on the CEC2013 benchmark functions and compared with several popular and compact algorithms. Experimental results show that the improved algorithm achieves competitive results over the compared algorithms. Finally, simulation research is conducted to verify the localization performance of our proposed algorithm.


Introduction
In recent decades, Wireless Sensor Networks (WSN) have been widely applied in various fields due to their low cost, reliability, scalability, flexibility, and ease of deployment [1][2][3]. Essential elements such as routing, load balancing, node localization, and data aggregation affect the performance of WSN [4,5]. Usually, good WSN performance is achieved through a tradeoff between costs, such as the limited energy, memory, or computing capability of sensor nodes, and outcomes, such as competitive accuracy or efficiency [6][7][8][9][10]. Node localization, in particular, is an important issue that directly affects the performance of WSN. A robust localization algorithm can provide highly accurate results while avoiding the coverage holes and connection failures caused by random deployment of nodes [11]. Hence, designing a localization algorithm that achieves a refined tradeoff between the limited resources of nodes and high location accuracy has become one of the challenges in WSN.
Traditional localization algorithms are categorized as range-based and range-free [6][7][8]. Range-based algorithms, including Time of Arrival (TOA), Time Difference of Arrival (TDOA), Angle of Arrival (AOA), and Received Signal Strength Indicator (RSSI), utilize absolute point-to-point distances between neighboring nodes to obtain localization results. However, their accuracy improvements depend on extra hardware and therefore increase costs.
Ruo-Bin Wang and Wei-Feng Wang contributed equally to this work and should be considered co-first authors.

Different from range-based algorithms, the performance of range-free algorithms does not closely depend on hardware. Typical range-free localization algorithms include Amorphous, Centroid, Approximate Point-In-Triangulation Test (APIT), and Distance Vector-Hop (DV-Hop). DV-Hop is popular for its low environmental impact, low cost and power consumption, and wide localization coverage, and it depends only on the network topology to realize localization [9,10]. Nevertheless, accurately locating unknown nodes with the original DV-Hop remains a challenge. The problem is caused by the deviation between the position of an unknown node estimated by the trilateration method and its actual position [12]. From a computational perspective, the localization issue can be expressed as an NP-hard optimization problem, which can be solved by optimization techniques [13]. Yet, as nonlinearity and constraints increase, traditional optimization techniques often cannot find the global optimal solution. Recently, Meta-Heuristic Optimization Algorithms (MHOAs) have received extensive attention in the scientific community and have made significant progress. They are mainly used to address complex optimization issues and are considered effective and reliable optimization techniques [14,15].
Whale Optimization Algorithm (WOA) is an MHOA proposed by Mirjalili in recent years. It is widely employed in optimization problems because of its simple operation, few adjustment parameters, fast optimization, and self-adaptive mechanism [16]. WOA mimics the hunting behavior of humpback whales using a distinctive bubble-net attacking strategy [17]. Compared with other well-known algorithms, this unique attacking strategy gives it competitive exploration capability, and its self-adaptive mechanism balances the exploration and exploitation phases. Moreover, the self-adaptive mechanism facilitates the search for promising regions of the search space in the initial iterations, so the algorithm approaches the optimum faster after roughly half of the iterations [16,17]. However, when dealing with complex optimization problems, WOA is prone to falling into local optima and sometimes fails to converge. In response to these shortcomings, researchers have proposed improvements such as chaotic WOA [18,19], improved WOA [20,21], binary WOA [22,23], hybrid WOA [24,25], and multi-objective WOA [26,27]. In recent years, WOA has been employed in range-based node localization algorithms [28][29][30][31][32], but it has seen few applications in DV-Hop. For instance, Chai et al. [33] used only parallel techniques to improve WOA and applied it to enhance DV-Hop, but they did not take into account the memory consumption and storage capacity of the nodes. In this paper, we utilize a compact technique to compress the memory footprint of the algorithm, and then propose novel parallel techniques to improve the solution accuracy without adding extra equipment.
The main contributions of the paper are listed as follows: (i) A Parallel and Compact WOA (PCWOA) is proposed. A compact technique is adopted to compress the algorithm, and novel parallel techniques are then proposed to enhance the ability to escape local optima and to improve the computing capability and solution accuracy of the algorithm. (ii) PCWOA is tested on the CEC2013 benchmark functions and compared with several compact and popular algorithms to verify its competence. (iii) PCWOA is combined with DV-Hop, and multiple simulation experiments are carried out to verify the accuracy and stability of the location optimization.

Related work

Chen et al. [34] proposed an improved DV-Hop localization algorithm. It mainly utilizes the least squares method to optimize the average hop size and proposes a dynamic weight coefficient to correct it. Tomic et al. [35] proposed three improved algorithms by adding several geometric refinement steps while retaining the basic steps of the original DV-Hop algorithm. These methods increase the localization accuracy to a certain extent, but also increase the computational cost. Moreover, they essentially use the Least Square Method (LSM) to calculate the position of an unknown node, which is a major source of error in the DV-Hop algorithm.
Recently, a series of studies have combined MHOAs directly with DV-Hop in WSN to achieve better performance. Kanwar et al. [36] proposed a new localization framework for displaced sensor nodes using Particle Swarm Optimization (PSO). This framework reduces communication between unknown and anchor nodes by calculating the hop size of all anchor nodes at the unknown nodes, effectively minimizing elapsed time, localization errors, and energy consumption. However, compared with traditional algorithms, only two of the six schemes proposed in that work achieve lower localization error and energy consumption, so its performance is unstable. Ghagour et al. [37] used the recently developed Squirrel Search Algorithm (SSA) to improve DV-Hop. This scheme achieved higher localization accuracy, better stability, and a faster convergence rate, but it does not consider the impact of node memory and communication radius. Cui et al. [38] presented a highly accurate localization algorithm combining DV-Hop with Differential Evolution (DE). Their method refined the hop-size value using the number of common one-hop neighbors between adjacent nodes and converted discrete hop counts into continuous values to achieve better accuracy. However, the DE algorithm has its own defects, such as complex operation and too many operators involved in the calculation, which causes overconsumption of memory and therefore shortens the lifetime of sensor nodes.
The studies above mainly focused on improvements to the DV-Hop algorithm rather than on the MHOAs themselves. Generally, MHOAs have their own shortcomings, such as slow convergence speed, failure to converge, and a tendency to become trapped in local optima; therefore, employing original MHOAs directly to improve DV-Hop can be counterproductive. Moreover, the direct combination of the two algorithms significantly increases overall memory consumption, leads to excessive energy consumption of sensor nodes, and reduces the transmission rate between them. Hence, it is reasonable to boost the performance of the original MHOAs before using them to improve the DV-Hop algorithm.
Ouyang et al. [39] proposed an improved Genetic Algorithm (GA) to replace the LSM. They randomly exchanged the execution order of mutation and crossover operators through a probability mechanism and used a dynamic adjustment mechanism to update parameters. However, this scheme does not perform well when processing nodes with a large communication radius, and its performance is also affected by the number of hops of unknown nodes: the more hops, the greater the error. Chen et al. [40] weighted the average hop size of each anchor node and optimized the location estimated by a 2-dimensional hyperbolic localization algorithm with PSO. Weighting the average hop size of each anchor, however, lengthens the calculation time and increases memory consumption. Cui et al. [41] designed a Cuckoo Search algorithm (CS) that improved DV-Hop performance in cyber-physical systems; their results show that a hybrid of the Levy and Cauchy distributions achieves the best performance. Chai et al. [33] proposed a parallel WOA and applied it to optimize WSN localization. Parallel techniques can effectively improve the searchability and population diversity of the algorithm, and the experimental results show that the proposed parallel WOA achieves better results. However, this scheme did not consider the influence of node memory, which led to performance degradation in large-scale networks. Li et al. [42] proposed a parallel compact Cat Swarm Optimization (CSO) and applied it to DV-Hop. It effectively improved localization accuracy and saved WSN memory, but the CSO algorithm suffers from poor convergence and excessive resource consumption.
To the present authors' knowledge, most of the literature on applications of WOA in WSN localization uses range-based techniques [28][29][30][31][32]. The reported results show that the proposed schemes achieve better localization accuracy, delivery ratio, and delay. However, range-based localization requires measuring the angle or distance between the anchor node and the unknown node, as well as additional hardware, which causes high costs in large-scale deployment.
Compared with the existing methods based on MHOAs, the algorithm proposed in this paper has the following advantages: (a) The WOA selected in this paper has a simple structure, an adaptive mechanism, and robustness, which makes it competitive. (b) The compact technique effectively compresses the original population and therefore reduces the consumption of computing resources, which is essential for node processors. (c) The novel parallel techniques improve the ability to avoid being trapped in local optima and accelerate convergence, which compensates for the loss of accuracy caused by compression.

Original WOA
WOA is inspired by the hunting behavior of humpback whales. There are three phases: encircling prey, bubble-net attacking method, and searching for prey.
In the real world, a whale can perceive the position of its prey, but in the algorithm this position is unknown. Therefore, after initializing the whale population, the current best whale position is taken as the prey position, and the remaining whales move toward it. The mathematical model of encircling prey is defined as:

X(t+1) = X_best(t) − A·|R·X_best(t) − X(t)|,   (1)

where t is the current iteration, and X_best and X indicate the position of the best solution obtained so far and the current solution, respectively. A and R are variables calculated as follows:

A = 2a·r − a,   (2)

R = 2r,   (3)

where r is a random value between 0 and 1. The parameter a decreases linearly from 2 to 0 over the iterations:

a = 2(1 − t/T),   (4)

where T is the maximum iteration. The value of a changes with the iteration, which affects the fluctuation range of A. Combining Eqs. (2) and (4), A is a random value in the interval [−2, 2]. WOA utilizes the adaptive variation of A to ensure a smooth transition between exploration and exploitation. When |A| ≥ 1, the algorithm searches widely in the exploration phase until it finds a region with a promising solution. When |A| < 1, the algorithm enters the exploitation phase, in which solutions can move anywhere between their current position and the best solution, approaching the best solution ever more closely.
In the exploitation phase, the bubble-net attacking method imitates the hunting behavior of whales and consists of two approaches: the shrinking encircling mechanism and the spiral updating position. The former depends on the encircling range determined by A in Eq. (2); as a decreases, the encircling range shrinks. The latter uses the cosine function to create a spiral equation that mimics the helix-shaped movement of whales:

X(t+1) = D′·e^(ch)·cos(2πh) + X_best(t),  with D′ = |X_best(t) − X(t)|,   (5)

where c is a constant defining the shape of the logarithmic spiral and h is a random number in the interval [−1, 1]. During the bubble-net attack, the algorithm uses a probability p to choose between the two approaches: when p ≥ 0.5 it selects the spiral updating position, otherwise the shrinking encircling mechanism.
Because the best solution is chosen initially and the other individuals approach it, this mechanism can easily make the algorithm fall into a local optimum, so extensive exploration is needed to search for prey. In the searching-for-prey phase, individuals move relative to a randomly chosen solution instead of the best one:

X(t+1) = X_rand − A·|R·X_rand − X(t)|,   (6)

where X_rand is a random solution chosen from the current population.
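As an illustration, the three update rules above (encircling, bubble-net spiral, and searching for prey) can be sketched in Python; the function name `woa_update` and its argument layout are chosen here for illustration and do not come from the paper:

```python
import math
import random

def woa_update(X, X_best, pop, t, T, c=1.0):
    """One WOA position update for a single solution X (a list of floats).

    X_best: best solution found so far; pop: current population (source of
    X_rand); t, T: current / maximum iteration; c: spiral shape constant.
    """
    a = 2 * (1 - t / T)          # control parameter: decreases linearly from 2 to 0
    A = 2 * a * random.random() - a   # fluctuates in [-a, a]
    R = 2 * random.random()
    p = random.random()
    X_rand = random.choice(pop)  # used only in the exploration branch
    new = []
    for i in range(len(X)):
        if p >= 0.5:             # spiral updating position (bubble-net attack)
            D = abs(X_best[i] - X[i])
            h = random.uniform(-1, 1)
            new.append(D * math.exp(c * h) * math.cos(2 * math.pi * h) + X_best[i])
        elif abs(A) < 1:         # exploitation: shrink toward the best solution
            new.append(X_best[i] - A * abs(R * X_best[i] - X[i]))
        else:                    # exploration: move relative to a random whale
            new.append(X_rand[i] - A * abs(R * X_rand[i] - X[i]))
    return new
```

In a full optimizer this update runs once per whale per iteration, followed by a fitness evaluation and a best-solution refresh.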

Compact WOA
Considering node memory, this paper uses a compact technique to reduce the population size of the algorithm. The compact technique [44][45][46][47][48] utilizes the distribution characteristics of the original population to construct a probability model. During the iterative process, the probability model is used to generate new solutions; by comparing the generated solutions, the probability model is updated and thereby replaces the original population. The essence of the compact technique is that it is population-less: it maintains a virtual population instead of an actual one, achieving lower memory usage and shorter computing time.
As mentioned above, this virtual population is a probability model of the population's solutions and is encoded in a data structure called the Perturbation Vector (PV). From a macro perspective, a population contains k particles, each with D dimensions, so a full population would be a k × D matrix. After adding the compact technique, each dimension is represented by a normal distribution, and the PV becomes a 2 × D matrix:

PV^c = [μ^c, σ^c],   (7)

where c represents the current iteration, and μ and σ are, for each design variable, the mean and standard deviation values of the PV. To keep the algorithm stable, each pair of mean and standard deviation corresponds to a Gaussian Probability Density Function (PDF), truncated to the interval [−1, 1] and normalized so that its area is 1.
The sampling mechanism of the compact technique is associated with each design variable x[i]. For each design variable indexed by i, a truncated Gaussian PDF with mean μ[i] and standard deviation σ[i] is associated:

PDF(x) = (√(2/π)·e^(−(x−μ)²/(2σ²)) / σ) / (erf((1−μ)/(√2·σ)) − erf((−1−μ)/(√2·σ))),   (8)

where erf is the error function. By means of Chebyshev polynomials, the PDF can be associated with a Cumulative Distribution Function (CDF), which ranges from 0 to 1:

CDF(x) = ∫₋₁ˣ PDF(t) dt.   (9)

Thus, the CDF can be written as:

CDF(x) = (erf((x−μ)/(√2·σ)) − erf((−1−μ)/(√2·σ))) / (erf((1−μ)/(√2·σ)) − erf((−1−μ)/(√2·σ))).   (10)

The CDF describes the distribution of the actual population's solutions, while the PDF is the macroscopic probability distribution constructed from the population characteristics; the solution x[i] is generated by sampling the inverse CDF using the PV. During the execution of the algorithm, the PV must be continuously updated: the solution generated from the PV is compared with the previous solution, the better one is marked as the winner and the worse as the loser, and the winner and loser are used to update μ and σ:

μ^(c+1) = μ^c + (winner − loser)/N_p,
(σ^(c+1))² = (σ^c)² + (μ^c)² − (μ^(c+1))² + (winner² − loser²)/N_p,   (11)

where N_p represents the virtual population size, μ^(c+1) and σ^(c+1) are the mean value and standard deviation after updating, and the initial values of μ and σ are 0 and 10, respectively.
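A minimal sketch of the winner/loser update of a single PV entry, assuming the standard compact update rules described above (the function name and signature are illustrative):

```python
def pv_update(mu, sigma, winner, loser, n_p):
    """Winner/loser update of one PV design variable.

    mu, sigma: current mean / standard deviation of this variable;
    winner, loser: the better / worse of the two compared solutions (this
    coordinate only); n_p: virtual population size. Returns (mu, sigma).
    """
    mu_new = mu + (winner - loser) / n_p
    var_new = sigma**2 + mu**2 - mu_new**2 + (winner**2 - loser**2) / n_p
    return mu_new, max(var_new, 1e-12) ** 0.5  # guard against negative rounding
```

The update nudges the mean toward the winner and lets the standard deviation shrink as the search concentrates, which is what makes the virtual population converge.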
Based on the above description of the compact technique, the compact WOA (CWOA) can be implemented in the following steps. First, generate a random number x uniformly distributed between 0 and 1. Second, use the PV to generate a solution through the inverse function of the CDF:

y = μ + √2·σ·erf⁻¹( erf((−1−μ)/(√2·σ)) + x·[erf((1−μ)/(√2·σ)) − erf((−1−μ)/(√2·σ))] ),   (12)

where erf⁻¹ is the inverse function of erf. Since the solutions generated by the inverse CDF lie between −1 and 1, third, the value of y must be mapped to the actual decision space:

x_real = lb + (y + 1)(ub − lb)/2,   (13)
where ub and lb are the upper and lower bounds of the actual decision space. A new solution is generated by executing the algorithm; its fitness value is compared with that of the stored best solution to determine the winner and loser, which are then used to update μ, σ, and the PV.
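The sampling-and-mapping step can be sketched as follows. For convenience this sketch expresses the truncated-Gaussian inverse CDF through the standard normal CDF of Python's `statistics.NormalDist` rather than erf; the two formulations are mathematically equivalent. Names are illustrative:

```python
from statistics import NormalDist

_STD = NormalDist()  # standard normal: provides cdf() and inv_cdf()

def sample_from_pv(mu, sigma, x, lb, ub):
    """Draw one candidate value from a PV entry and map it to [lb, ub].

    x: a uniform random number in (0, 1). The underlying Gaussian N(mu, sigma)
    is truncated to [-1, 1], and the draw is taken by inverting its CDF.
    """
    lo = _STD.cdf((-1 - mu) / sigma)            # CDF mass below -1
    hi = _STD.cdf((1 - mu) / sigma)             # CDF mass below +1
    y = mu + sigma * _STD.inv_cdf(lo + x * (hi - lo))   # y lies in [-1, 1]
    return lb + (y + 1) * (ub - lb) / 2          # map to the decision space
```

With the paper's initial values μ = 0 and σ = 10, the truncated distribution is nearly uniform over [−1, 1], so early samples spread across the whole decision space before the PV tightens.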
The pseudo-code of CWOA is shown in Algorithm 1, where d represents the dimension of the problem and ub the upper bound of the actual decision space. f_min and fit_winner store the fitness values of the global optimal solution and the winner, respectively, and Inf represents infinity.

Parallel techniques
After adding the compact technique, the number of individuals involved in the calculation is reduced from k to 2; in this paper, the value of k is 30. The compact technique saves memory and shortens the computation time of the algorithm, but it easily makes the algorithm fall into local optima and lose solution accuracy. Therefore, parallel techniques are employed to counter these defects. Parallelism [49][50][51][52][53][54], from a macro perspective, means that multiple groups operate simultaneously; communication and information exchange among the groups then become possible, and the common idea is to use better solutions from some groups to replace worse solutions in others. In this paper, a novel parallel scheme comprising three different techniques is proposed, in which two groups are selected and compared from among the current, random, and best groups. The details are as follows: (a) Technique one: randomly select a group and compare it with the current group; if the fitness value of the current group is worse, the random group's fitness value replaces it. Specifically, the solutions in the two groups are sorted in descending order and the average of the solutions in each group is calculated. If the current group's average reaches a better solution, a new group is formed from the second (better) halves of the two sorted groups. Then the fitness values of the current group and the new group are compared, and the better replaces the worse. Figure 1 shows the core idea of technique one.
(b) Technique two: the current group is compared with the global optimum to determine whether to restructure the current group. If not, the best fitness value of the current group is assigned to the global optimum. Otherwise, the solutions stored in the global group are sorted in descending order and their average is taken. If the current group's average is better, a new group is constructed in the same way as in technique one; the new group is used to decide whether to update the global optimal fitness value and the corresponding position. The process of technique two is shown in Fig. 2.
(c) Technique three: the fitness value of the random group is compared with the global optimum. The update strategy and implementation steps are the same as in technique two.
The parallel techniques proposed in this paper use the current and random solutions to influence the global optimal solution, strengthening communication between groups and improving the accuracy of the solution.
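The description of technique one leaves some freedom of interpretation; the following Python sketch shows one plausible reading for a minimization problem, in which the better (second) halves of the two descending-sorted groups form a candidate group that replaces the current group only if it improves the average fitness. All names are illustrative:

```python
def technique_one(current, other, fit):
    """Sketch of parallel technique one (minimization assumed).

    current, other: lists of solutions (each a list of floats);
    fit: fitness function mapping a solution to a float.
    Returns the (possibly replaced) current group.
    """
    cur = sorted(current, key=fit, reverse=True)   # descending: worst -> best
    oth = sorted(other, key=fit, reverse=True)
    mean = lambda g: sum(fit(s) for s in g) / len(g)
    if mean(cur) <= mean(oth):                     # current group is the better one
        half = len(cur) // 2
        new = cur[half:] + oth[half:]              # second (better) halves of both
        new = sorted(new, key=fit)[:len(cur)]      # keep the best members
        if mean(new) < mean(cur):                  # better result replaces worse
            return new
    return current
```

For example, with `fit = lambda s: sum(v * v for v in s)` the call `technique_one([[1.0], [2.0]], [[0.5], [3.0]], fit)` merges the strong member of the random group into the current group.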
According to the above, the parallel compact WOA (PCWOA) is constructed, and Algorithm 2 shows its pseudo-code. In Algorithm 2, rand is a random number between 0 and 1. The PV, best, and f_min of each group are initialized as in Algorithm 1. G[i].best stores the best solution in group i, and G[i].fmin stores its fitness value. fit_winner represents the fitness value of the winner, and Globalbest and Globalfmin represent the optimal solution and its fitness value over all groups.

Time and space complexity of PCWOA
In this section, the time and space complexity of PCWOA are discussed. Mirjalili et al. [16] proved the time complexity of standard WOA. For the convenience to compare, this section redefines some variable names. Here, I stands for the maximum number of iterations, D for the dimension, G for the number of groups, and P for the number of populations. The details of the time and space complexity of WOA, CWOA, and PCWOA are shown in Table 1.
According to Table 1, CWOA shows a clear improvement over WOA in both time and space complexity.
The time and space complexity of PCWOA depend on the number of groups. Generally, the number of groups in the parallel technique does not exceed 5; in this paper, the number of groups is 3 and the population is 30. Thus, PCWOA outperforms WOA in both time and space complexity.

Benchmark functions and algorithm parameters
In this subsection, we first verify the performance of PCWOA through experiments on the CEC2013 suite [55]. It contains 28 benchmark functions: F1-F5 are unimodal, F6-F20 are multi-modal, and F21-F28 are composition functions. Second, to further test the performance of PCWOA, we compare it with WOA and CWOA. In addition, PCWOA is compared with other compact algorithms, such as compact PSO and compact SCA. All algorithm parameters are listed in Table 2.
All algorithms are implemented in MATLAB R2019b, and are conducted on a computer with an Intel Core i5 @3.10GHz CPU and 16GB memory running 64-bit Windows 10.

Compared with WOA and its variants
According to Algorithm 1 and Algorithm 2, this paper proposes two improved versions of WOA: CWOA and PCWOA. The parameters of the two algorithms are listed in Table 2. In the 30-dimensional case, the two algorithms maintain multiple variables: x, best, μ, and σ. During execution, memory consumption differs because of the different techniques each algorithm uses. The total number of iterations is the same so that the evaluation budgets are identical, with Max_iteration set to 1000.
Then, the CEC2013 benchmark functions are used to test algorithm performance. Each algorithm is run 30 times, and the average value is used to measure performance. Win represents the number of best average values obtained by an algorithm over the 28 benchmark functions.
According to the data in Table 3, PCWOA outperforms the other algorithms on most functions, obtaining the best results on 23 of the 28 benchmark functions. Moreover, the improvements on F1, F2, F4, F5, and F17 are noticeable, more than ten times better than the original. The WOA with only the compact technique performs moderately on most benchmarks and only achieves good results on F7, F25, and F27. The data in Table 3 also show that the compact technique easily leads the algorithm into local optima, as seen on F1, F3, F5, F10, and F19. In sum, using only the compact technique may backfire on the performance of the algorithm; the loss of accuracy must be considered when the population participating in the calculation is drastically reduced.
In Table 3, the average values obtained by the algorithms on many test functions differ little, which may not demonstrate the advantages of PCWOA. Therefore, this paper additionally evaluates solution accuracy through the convergence curves of the algorithms. According to Fig. 3, the optimization and convergence ability of PCWOA is similar to that of WOA in Fig. 3(a), (c), (f), (m), (o). On the other test functions, PCWOA always finds a better solution, especially in Fig. 3(d), (e), (j), (k), (p), (q), (s). Fig. 3 also shows that WOA converges prematurely, whereas the parallel techniques proposed in this paper effectively enhance the algorithm's ability to jump out of local optima. However, the solution accuracy of CWOA is poor and its convergence is slow. These results show that the compact technique saves memory effectively but is not beneficial for accuracy or convergence.

Compared with other compact algorithms
In this section, PCWOA is compared with other compact algorithms: compact BA, compact SCA, and compact SCA with multiple groups and strategies (MCSCA). Compact SCA and MCSCA are chosen to further highlight the advantage of adding parallel techniques to the compact technique and the benefits of the parallel techniques proposed in this paper. The parameters of these algorithms are listed in Table 2, and each algorithm runs 30 times. Since Table 3 and Fig. 3 have already verified that PCWOA performs better than WOA, WOA is not included in this comparison. The algorithms are compared by mean, standard deviation, and the Friedman ranking test; the results are shown in Table 4, where Ave and Std represent the mean and standard deviation of the algorithm, Rank indicates the ranking of the algorithm, and Mean represents the average ranking. According to Table 4, the algorithm combining the parallel and compact techniques performs better in mean and standard deviation than those with the compact technique only. In the Friedman ranking test, PCWOA and MCSCA rank first and second, respectively, compact BA ranks third, and compact SCA ranks fourth; PCWOA is well ahead of MCSCA in average ranking. PCWOA does not always perform well, for example on F8, F11, F12, F13, F18, F24, and F28, but it leads on 20 of the 28 test functions. The convergence curves of the compact algorithms are again used to evaluate performance and are shown in Fig. 4. From Fig. 4(a)-(p), PCWOA can find the optimal solution. Although its convergence speed is not as fast as MCSCA in Fig. 4(h), (i), (m), (n), its solution accuracy improves by 23%, 14%, 33%, and 27%, respectively, compared with the other algorithms.
To sum up, the results in Tables 3 and 4 show that adding only the compact technique is not good enough: some results improve while others fall short of the originals. The results obtained by combining the compact and parallel techniques are superior, and the quality of the parallel techniques directly determines the performance of the algorithm. The parallel techniques in this paper are clearly better than those of MCSCA.

Compared with popular algorithms
This section compares PCWOA with four other popular MHOAs proposed in recent years, using mean and best values to evaluate their performance. The parameters of the 5 algorithms are listed in Table 2. In Table 5, Ave indicates the mean value and Best the optimal value. Compared with SCA, PCWOA performs worse on F4, F7, F12, F17, and F28 in average value; however, its best values on F7 and F17 surpass SCA. PCWOA achieves better mean and best values than ALO except on F11, F15, and F22. Compared with AOA, PCWOA achieves promising results on most test functions except F4 and F14. Among the 4 popular algorithms, SSA performs best: it is superior to PCWOA on F7, F9, F12, F13, F14, F17, F22, and F28, but not competitive on the other functions. On F8, F16, and F20, all algorithms find comparable results.

DV-Hop localization algorithm based on PCWOA

DV-Hop localization algorithm
The DV-Hop localization algorithm [43] can be divided into the following three steps:

Step 1: Each anchor node broadcasts its location information to its neighbor nodes, so that every unknown node obtains the minimum hop count to each anchor node.
Step 2: According to the location information and hop counts between anchor nodes recorded in Step 1, the average actual distance per hop is estimated by Eq. (14):

HopSize_i = (Σ_{j≠i} √((x_i − x_j)² + (y_i − y_j)²)) / (Σ_{j≠i} h_ij),   (14)

where HopSize_i represents the average hop size of anchor node i, n indicates the number of anchor nodes (n ≥ 3), (x_i, y_i) and (x_j, y_j) are the coordinates of anchor nodes i and j, respectively, and h_ij represents the minimum hop count between anchor nodes i and j. Then, the distance between an unknown node and an anchor node is calculated by Eq. (15):

d_iu = HopSize_i × h_iu,   (15)
where u represents the unknown node and h_iu indicates the minimum hop count between anchor node i and unknown node u.
Step 3: The unknown node uses the distances d_iu obtained in Step 2 and applies the trilateration method or the maximum likelihood estimation method to calculate its coordinates.
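The distance estimation of Eqs. (14) and (15) can be sketched as follows, assuming the minimum hop counts between anchors have already been collected by the flooding of Step 1 (names are illustrative):

```python
import math

def hop_sizes(anchors, hops):
    """Average hop size of each anchor node, per Eq. (14).

    anchors: list of (x, y) anchor coordinates; hops[i][j]: minimum hop count
    between anchors i and j (hops[i][i] = 0). Needs at least 3 anchors.
    """
    n = len(anchors)
    sizes = []
    for i in range(n):
        dist = sum(math.dist(anchors[i], anchors[j]) for j in range(n) if j != i)
        nhops = sum(hops[i][j] for j in range(n) if j != i)
        sizes.append(dist / nhops)
    return sizes

def estimated_distance(hop_size_i, h_iu):
    """Distance from anchor i to unknown node u, per Eq. (15)."""
    return hop_size_i * h_iu
```

An unknown node uses the hop size of its nearest anchor and its own hop counts to obtain the d_iu values fed into Step 3.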

Apply PCWOA in DV-Hop localization algorithm
In the traditional DV-Hop algorithm, the LSM is used to calculate the locations of unknown nodes, and its calculation error is the main reason for DV-Hop's low localization accuracy. PCWOA is therefore employed to replace the LSM and reduce this error. Eq. (16) calculates the error between anchor nodes and an unknown node:

error_u = Σ_{i=1}^{n} |√((x_u − x_i)² + (y_u − y_i)²) − d_iu|,   (16)

where error_u indicates the sum of the distance errors from unknown node u to all anchor nodes, (x_u, y_u) represents the coordinate of unknown node u, and the other parameters are defined as above. The purpose of using PCWOA is to minimize error_u, so the following fitness function is used:

fitness(x, y) = Σ_{i=1}^{n} (√((x − x_i)² + (y − y_i)²) − d_i)²,   (17)

where (x, y) and (x_i, y_i) represent the coordinates of an unknown node and anchor node i, and d_i indicates the estimated distance between the unknown node and anchor node i. Each individual of PCWOA represents a candidate position of a sensor node, and the fitness function is used to find the optimal position.
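A sketch of the resulting objective for one unknown node; the squared-error form is an assumption (an absolute-error form would serve equally well), and the names are illustrative:

```python
import math

def fitness(xy, anchors, d_est):
    """Localization fitness of a candidate position (x, y).

    anchors: list of (x, y) anchor coordinates; d_est[i]: DV-Hop estimated
    distance to anchor i. The optimizer minimizes the total squared mismatch
    between the geometric and estimated distances.
    """
    return sum((math.dist(xy, a) - d) ** 2 for a, d in zip(anchors, d_est))
```

The fitness is zero exactly when the candidate position is consistent with every estimated distance, e.g. `fitness((3, 4), [(0, 0), (6, 0)], [5, 5])` evaluates to 0.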

Simulation research
In this section, we will conduct simulation research. This paper combines PCWOA with DV-Hop to compensate for the localization error of the original DV-Hop for WSN. We also apply PCWOA, parallel WOA (PWOA) [33], WOA [16], and PSO [34] to DV-Hop and compare them with original DV-Hop [43] in WSN. Then, the average localization error (ALE) is defined in Eq. (18) to evaluate their performance.
ALE = ( Σ_{u=1}^{m} √((X_u − x_u)² + (Y_u − y_u)²) ) / (m × r)  (18)

where m represents the number of unknown nodes, (X_u, Y_u) and (x_u, y_u) indicate the actual and estimated coordinates of unknown node u, respectively, and r represents the communication radius of a node. All algorithms are executed 30 times under the same environment to achieve a fair comparison. The maximum number of iterations is set to 100, and the other parameters are shown in Table 2. Following the existing literature [33,37,40,42,43], the nodes of the WSN are randomly distributed in a fixed 2-dimensional 100 m × 100 m area, as shown in Fig. 5. The total number of nodes is limited to the interval [100, 400], the number of anchor nodes varies in [5, 40], and the node communication radius varies in the interval [20, 40]. It is worth noting that the simulation area must satisfy the communication requirements of the minimum number of anchor nodes: according to the literature [43], locating an unknown node in two-dimensional space requires at least three anchor nodes.
The minimum number of anchor nodes set in this paper is 5, and the minimum communication radius is 20 m; thus, the simulation area exactly meets the minimum communication requirements.
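A minimal implementation of the ALE metric of Eq. (18), assuming the normalization by m·r described above (names are our own):

```python
import math

def average_localization_error(actual, estimated, r):
    """Eq. (18): mean Euclidean error over the m unknown nodes,
    normalized by the communication radius r.

    actual, estimated: parallel lists of (x, y) coordinates.
    """
    m = len(actual)
    total = sum(math.dist(a, e) for a, e in zip(actual, estimated))
    return total / (m * r)
```

Because the error is normalized by r, ALE is a dimensionless ratio, which makes results comparable across communication radii.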

Influences of the number of anchor nodes
To ensure the generality of the experiment, we set the total number of nodes to 200, randomly deploy them in the 100 m × 100 m area, and set the node communication radius to 20 m. In this experiment, we study the influence of the number of anchor nodes on the localization error; thus, we gradually increase the number of anchor nodes from 5 to 40 in steps of 5.
From Table 6, the localization error gradually decreases as the number of anchor nodes increases, and applying an MHOA to DV-Hop significantly improves the performance. WOA is an improved algorithm compared with PSO, and Table 6 shows that its performance is markedly higher. PCWOA performs more competitively than the original WOA, except in the case of 10 anchor nodes. When the number of anchor nodes is 5, PWOA overtakes PCWOA. Part of the reason is that adding the compact technique impairs the algorithm's ability to jump out of local optima; another part is that, in the proposed parallel techniques, discarding some of the poor individuals may reduce the diversity of the population, which hurts performance when the number of anchor nodes is small. In addition, Table 6 shows that the localization error decreases as anchor nodes are added, but the decreasing trend is mild: although more anchor nodes improve the locating accuracy, employing too many anchor nodes is costly and therefore unnecessary when using DV-Hop or an improved algorithm.

Influences of communication radius
To simulate the deployment environment of WSN realistically, we study the influence of the communication radius. In this experiment, the total number of nodes is set to 200 and the number of anchor nodes is fixed at 20, so the number of unknown nodes is 180. We vary the communication radius within the range [15, 40].
According to Table 7, the localization error of PCWOA is lower than that of the other algorithms, and all algorithms achieve their minimum error when the communication radius is 30 m. When the total number of nodes and the number of anchor nodes remain unchanged, a larger communication radius yields higher network connectivity. However, if the communication radius increases while the simulation area remains unchanged, one or several nodes may be omitted when calculating the average hop distance of the anchor nodes.

Influences of the total number of nodes
To study the influence of the total number of nodes and test the comprehensive performance of PCWOA, we fix the number of anchor nodes to 20 and the communication radius to 30 m in this experiment. The performance of PCWOA is evaluated by varying the total number of nodes between 100 and 400. Table 8 shows that PCWOA is robust under different numbers of nodes: compared with the original DV-Hop algorithm and the other MHOAs, its performance is significantly improved. All algorithms reach their minimum error when the total number of nodes is 200.
When the communication radius is 30 m and the total number of nodes is 200, PCWOA performs best in the 100 m × 100 m fixed space. In Table 6, PCWOA performs best when the number of anchor nodes is 30, improving on DV-Hop by 0.1518; in Table 7, it performs best when the communication radius is 25 m, improving on DV-Hop by 0.1307; and in Table 8, it performs best when the total number of nodes is 100, improving on DV-Hop by 0.1614. Overall, PCWOA performs best when the total number of nodes is 100, the number of anchor nodes is 30, and the communication radius is 25 m. According to the results listed in Tables 6, 7 and 8, we can cautiously conclude that PCWOA outperforms the original algorithm.

Conclusions
This paper proposes PCWOA and employs it to optimize the DV-Hop localization algorithm for WSN. First, WOA with a compact technique effectively saves memory and reduces the resource consumption of sensor nodes, but it inevitably increases the probability of falling into a local optimum. Second, to address this defect of the compact algorithm, we propose novel parallel techniques to enhance the capacity to jump out of local optima and improve the solution accuracy. Experiments show that the proposed algorithm performs better and the accuracy of the solution is strengthened. Finally, the improved algorithm is applied to node localization in WSN, and experiments verify that the improvements significantly increase the localization accuracy.
This research reveals that DV-Hop combined with PCWOA performs well in 2-dimensional environments. In the future, we will apply the algorithm to 3-dimensional environments and extend it from statically distributed nodes to dynamic ones. Additionally, we will conduct further simulations and field experiments with different simulation environments and numbers of nodes to test the robustness of the improved localization algorithm.
Funding The authors did not receive support from any organization for the submitted work.