Article

Improving the Computational Efficiency for Optimization of Offshore Wind Turbine Jacket Substructure by Hybrid Algorithms

1 Department of Engineering Science and Ocean Engineering, National Taiwan University, Taipei 10617, Taiwan
2 R/D Section, Research Department, CR Classification Society, Taipei 10487, Taiwan
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2020, 8(8), 548; https://doi.org/10.3390/jmse8080548
Submission received: 8 July 2020 / Revised: 13 July 2020 / Accepted: 18 July 2020 / Published: 22 July 2020
(This article belongs to the Special Issue Analysis and Design of Offshore Wind Turbine Support Structures)

Abstract

When solving real-world problems that rely on complex simulations, directly coupling stochastic algorithms with the simulation model is often inefficient. In this study, we compare several hybrid algorithms for optimizing an offshore wind turbine jacket substructure (JSS) and propose a novel hybrid algorithm, the divisional model genetic algorithm (DMGA), to improve efficiency. Three subpopulations, or "divisions," are augmented with different methods, namely particle swarm optimization (PSO), pattern search (PS) and targeted mutation (TM), so that each division has a unique functionality. With the collaboration of these three divisions, DMGA solves multiple benchmark problems considerably more efficiently than other hybrid algorithms. These results reveal the superiority of DMGA in solving structural optimization problems.

1. Introduction

Offshore wind energy has become a pertinent source of renewable energy worldwide. The material cost of offshore wind substructures (OWSs) accounts for approximately 17% of the total installation cost [1]. Therefore, reducing the structural mass is a primary objective of optimization for reducing cost. Safety constraints are also a fundamental concern, which makes optimizing OWSs a multi-objective task. Because the analysis of offshore wind turbines (OWTs) must consider multiple design loads, namely nonlinear wind, wave, and gravity loads, optimizing OWSs has been recognized as a highly nonlinear problem with complex computation [2]. Furthermore, the objective function is evaluated through finite element analysis (FEA) and has no analytical form, a setting also known as "black-box optimization."
Knowledge of algorithms is of great importance for OWS optimization. Deterministic algorithms such as gradient-based methods have been widely used in OWS optimization [3,4]; however, these methods may cause solutions to become trapped in a local optimum. In contrast to gradient-based methods, stochastic algorithms evaluate a population of particles across the search space through a specific search process. Because of their randomness, stochastic algorithms are more likely to find the optimum of a real-world problem. The most common stochastic algorithm, the standard genetic algorithm (SGA), was introduced to achieve a global optimum [5,6,7,8]. SGA includes crossover and mutation as evolutionary operators, originating from the concepts of competition and survival in biological evolution. Such processes help SGA find a global optimum owing to its randomness. Colliding bodies optimization [9] has recently been applied to OWS optimization; it treats many particles as colliding bodies whose collision behavior drives the search process.
However, stochastic algorithms usually require more function evaluations. Numerous studies have therefore introduced a surrogate model when applying SGA to complex real-world problems, because directly coupling SGA with simulation analyses is too inefficient [10]. Conversely, the gradient method is efficient enough to be integrated with simulation analyses in optimization processes, but it may fail to obtain a global optimum. Designing a more efficient stochastic algorithm has thus become a pertinent topic for solving real-world problems with simulation models at an acceptable computational cost.
Hybridization of algorithms is an effective strategy for improving the performance of the original algorithms, because hybrid algorithms can preserve the desired features of each constituent. For highly nonlinear complex problems, conventional deterministic methods tend to find a local optimum. Some global modifications, such as that of Hofmeister [11], overcome this issue, but stochastic algorithms generally offer more reliable solutions. Pattern search (PS) is a common local search method that is hybridized with stochastic algorithms. Because stochastic algorithms are inaccurate and converge slowly during optimization, some researchers have combined local search with an evolutionary algorithm to improve its accuracy and convergence. In 2017, Li [12] selected the best particle in an SGA population to execute the PS method and thereby reach an accurate result faster, an approach named the pattern search genetic algorithm (PSGA). Similarly, particle swarm optimization (PSO) can also be hybridized with PS. For optimizing a fuzzy proportional-integral controller for automatic generation control, Sahu [13] combined PS with PSO by executing the two algorithms in turn; PSO contributes to global exploration, whereas PS is used for local exploitation.
Many researchers have attempted to combine SGA with other stochastic algorithms. For example, simulated annealing (SA) was inspired by the annealing process, in which the probability of a particle jumping to a neighborhood is governed by the annealing function. Researchers implemented this concept in SGA to determine whether a particle can be replaced by a new particle generated by the crossover operator [14]. This arrangement helped SGA evolve in the correct direction and hence improved its efficiency. Pourvaziri [15] used SA for local search in an island model genetic algorithm (IMGA) to help each population search the feasible space.
Hybridizing SGA and PSO is also a feasible approach to finding a global optimum. Krink [16] proposed a simple hybrid scheme, the lifecycle model genetic algorithm (LMGA), which loops through SGA, PSO, and a hill-climber (for local search) in sequence. If the number of generations without improvement exceeds a given threshold, the executing algorithm changes.
In this study, we propose a novel hybrid algorithm called the divisional model genetic algorithm (DMGA). We combined PS with SGA and PSO to accelerate the convergence of the optimization process while maintaining the diversity of the particles. The algorithm contains three divisions, each with a unique functionality, hence the name divisional model. A detailed description of these algorithms is given in Section 2. We then compare the performance of the proposed algorithm with that of various hybrid algorithms in Section 3. Finally, in Section 4 we solve a jacket substructure (JSS) optimization problem with several hybrid algorithms. The main goal of the present study is to demonstrate the superiority of the proposed hybrid algorithm in solving the JSS optimization problem. We chose a four-legged jacket substructure with X-type interleaving braces installed in the Taiwan Strait as the optimization model. By updating the radii and thicknesses of the structural members, we targeted a lighter design that satisfies safety constraints under ultimate environmental loads.

2. Materials and Methods

2.1. Original Algorithms

The introduction and development of the algorithms referred to in this paper are described as follows. In this section, we present the original algorithms: SGA, PSO and PS. Any modifications or improvements in DMGA are based on these algorithms and are discussed in Section 2.2.

2.1.1. Standard Genetic Algorithm (SGA)

In 1950, Turing [17] stated that computing intelligence could be achieved through the evolutionary process of “the survival of the fittest.” In 1975, Holland [18] proposed a random search algorithm, SGA, inspired by Darwin’s theory of evolution. In SGA, characterized evolutionary operators, namely selection, crossover, and mutation, are dedicated to generating new chromosomes (data points) to replace old chromosomes. Because operators can be easily hybridized with other algorithms, many modified or hybrid algorithms have originated from SGA.
Because of the randomness of SGA, obtaining a global optimum is guaranteed if the evaluation time is infinite [19]. Although the probability of finding a global optimum using SGA is higher than with other algorithms, in practice SGA might obtain a local optimum under limited computational time or converge slowly in nonlinear or multi-optimum problems; this can be improved by hybridizing with other algorithms.

2.1.2. Particle Swarm Optimization (PSO)

PSO is an evolutionary algorithm developed by Kennedy in 1995 [20]. The algorithm was inspired by the social interaction of animals: in a flock of birds searching for food, for example, each bird flies in a direction informed by the other birds and aligned with its own experience. Shi modified PSO in 1998 [21] by adding an inertia weight that balances global and local search by regulating the "flying" of the particles. Shi stated that the inertia weight should be between 0.8 and 1.2 to obtain a global optimum within a moderate number of iterations. This modified form of PSO is as follows:
$$v_i^{t+1} = \omega v_i^t + \varphi_1 \beta_1 (p_i - x_i^t) + \varphi_2 \beta_2 (p_g - x_i^t),$$ (1)
$$x_i^{t+1} = x_i^t + v_i^{t+1},$$ (2)
where $\omega$ is the inertia weight; $\varphi_1$ and $\varphi_2$ are two positive constants defined according to the problem; $\beta_1$ and $\beta_2$ are two random numbers between 0 and 1; and $x_i^t$ and $v_i^t$ denote the position and velocity of the $i$th particle in the $t$th generation, respectively. The velocity of the $i$th particle in the $(t+1)$th generation, $v_i^{t+1}$, is updated through Equation (1) with reference to $p_i$ and $p_g$, which represent the personal best position and global best position, respectively. Afterwards, each particle is moved to its new position $x_i^{t+1}$ by Equation (2).
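To make the update concrete, the following minimal Python sketch applies Equations (1) and (2) to a whole swarm at once; the array shapes, the default parameter values and the function name pso_update are illustrative assumptions rather than the implementation used in this study (which was coded in MATLAB for the benchmarks).

    import numpy as np

    def pso_update(x, v, p_best, g_best, omega=0.4, phi1=1.0, phi2=1.0, rng=np.random):
        # x, v, p_best: arrays of shape (n_particles, n_dims); g_best: shape (n_dims,)
        beta1 = rng.random(x.shape)   # beta_1 in Equation (1), uniform in [0, 1]
        beta2 = rng.random(x.shape)   # beta_2 in Equation (1)
        v_new = omega * v + phi1 * beta1 * (p_best - x) + phi2 * beta2 * (g_best - x)
        x_new = x + v_new             # Equation (2)
        return x_new, v_new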

2.1.3. Pattern Search (PS)

Developed in 1961 by Hooke [22], PS is a type of local search that does not require the gradient of a function. PS varies one dimension at a time by a given step size in an attempt to improve the solution. The step size is halved when no variation in any direction improves the result. Finally, when the step size becomes small enough, the optimization is judged to have converged.
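The following short Python sketch illustrates this coordinate-wise probing and step-halving scheme; the function name, the minimization convention and the probe order are assumptions for illustration, not the exact routine used in this study.

    import numpy as np

    def pattern_search(f, x0, step, tol):
        # Coordinate-wise exploratory moves; halve the step when no move improves f.
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        while step > tol:
            improved = False
            for j in range(x.size):
                for delta in (step, -step):
                    trial = x.copy()
                    trial[j] += delta
                    ft = f(trial)
                    if ft < fx:            # keep the better point (minimization)
                        x, fx = trial, ft
                        improved = True
                        break
            if not improved:
                step *= 0.5               # shrink the pattern when stuck
        return x, fx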

2.2. Divisional Model Genetic Algorithm (DMGA)

In this section, the structure and the procedure of the divisional model are described in detail. Next, the SGA operators and the three divisions collaborating in DMGA are introduced.
The divisional model divides the whole population into three subpopulations, each executed in its respective division. At the beginning of the optimization, all the subpopulations are initialized and the fitness of every chromosome is calculated. Afterwards, the best chromosomes of each division gather in the pattern search genetic algorithm division (PS-GA) in preparation for local search, a process named "Elite migration." Conversely, the least favorable particles migrate to the targeted mutation division; this process, called "Abandon," increases the diversity of that division, as presented in Figure 1.
Following the migration process, the chromosomes evolve in their respective divisions, as shown in the flowchart of DMGA in Figure 2. After the elite migration, the optimal chromosome in the PS-GA division is selected as the initial point for PS with an adaptive strategy, as described in Section 2.2.3. This chromosome is regarded as the best solution of the current generation. The chromosomes in the PSO division are then updated using this best solution as the global best position for PSO. The targeted mutation (TM) division then determines the targeted mutation range according to the performance of all the chromosomes abandoned into it; the main TM range of each dimension is located in the part of the search space occupied by the better-performing particles. Finally, SGA with the acquired TM range operates in the PS-GA and TM divisions to generate new chromosomes: two chromosomes are selected from each division and undergo single-point crossover and mutation separately. The optimization process stops when the termination criteria are satisfied. For clarity, the pseudo-code of DMGA is given in Appendix A.

2.2.1. Initialization

The full population N is divided into three parts for the three divisions; hence, the population of one division, n, equals N/3. The initial populations of all the divisions are randomly generated in the feasible space according to the lower bound and search range of the problem, as per the following formula:
$$x_{ij} = lowerbound_j + range_j \cdot rand[0, 1],$$ (3)
where $x_{ij}$ represents the $i$th particle/chromosome in the $j$th dimension; $lowerbound_j$ and $range_j$ represent the lower bound and search range defined in the $j$th dimension.
Regarding the parameters of each algorithm, the initial velocity of PSO is set to zero for all particles. Next, we initialize the parameters for SGA: the mutation probability $p_m$, and the size of the dominant population $N_e$, which equals n/2. Lastly, we define the parameters of PS, the step size and the tolerance, according to the scale of the target problem.
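A minimal Python sketch of this initialization, assuming the bounds are given as NumPy arrays and that N is divisible by three, could look as follows; the helper name initialize_divisions is hypothetical.

    import numpy as np

    def initialize_divisions(N, n_dims, lower_bound, search_range, rng=np.random):
        # Equation (3): x_ij = lowerbound_j + range_j * rand[0, 1]
        lower_bound = np.asarray(lower_bound, dtype=float)
        search_range = np.asarray(search_range, dtype=float)
        n = N // 3                                       # particles per division
        pop = lower_bound + search_range * rng.random((3 * n, n_dims))
        ps_ga, pso, tm = pop[:n], pop[n:2 * n], pop[2 * n:]
        velocity = np.zeros_like(pso)                    # initial PSO velocity is zero
        return ps_ga, pso, tm, velocity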

2.2.2. SGA Operators

Three types of SGA operators are used in DMGA: selection, crossover, and mutation. The selection operator is responsible for selecting two particles from the dominant population as parents for the crossover operator. After a fitness evaluation, the better half of the chromosomes is chosen as the dominant population. Two different chromosomes are then selected as parents according to the selection probability $p_s$ of each chromosome, which is computed from its fitness ranking relative to the summation of rankings, as per the following formula:
$$p_{s_i} = \frac{n_s + 1 - ranking_i}{\sum_{i=1}^{n_s} ranking_i},$$ (4)
where $p_{s_i}$ denotes the selection probability of the $i$th chromosome in the division and $ranking_i$ is the fitness ranking of the $i$th chromosome. The size of the dominant population, $n_s$, is half the division population.
Single-point crossover and mutation were adopted in this study. Two selected chromosomes coded in binary serve as parents, and a position in the string is randomly generated as the cutting point. The single-point crossover operator swaps the genes of the parents behind the cutting point, generating two child chromosomes. The mutation operator is performed on a single gene of a chromosome: if a randomly generated number between 0 and 1 is below $p_m$, a gene is randomly chosen and replaced by a random value in the mutation range. The mutation range is contracted to the TM range when the random number is below $p_m/2$, as illustrated in Figure 3.
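The sketch below illustrates the rank-based selection of Equation (4) and the single-point crossover with the two-level mutation range; for brevity it operates on real-coded gene vectors, whereas the study encodes chromosomes in binary, and all function names are illustrative assumptions.

    import numpy as np

    def select_parents(ranked_pop, rng=np.random):
        # Rank-based selection per Equation (4); ranked_pop is the dominant half,
        # ordered from best (ranking 1) to worst (ranking n_s).
        n_s = len(ranked_pop)
        ranking = np.arange(1, n_s + 1)
        p_s = (n_s + 1 - ranking) / ranking.sum()        # probabilities sum to 1
        i, j = rng.choice(n_s, size=2, replace=False, p=p_s)
        return ranked_pop[i], ranked_pop[j]

    def crossover_mutate(parent_a, parent_b, p_m, full_bounds, tm_bounds, rng=np.random):
        # Single-point crossover, then mutation of one randomly chosen gene.
        # full_bounds and tm_bounds are (lower, range) pairs of per-dimension arrays.
        cut = rng.randint(1, len(parent_a))              # crossover point
        child = np.concatenate([parent_a[:cut], parent_b[cut:]])
        r = rng.random()
        if r < p_m:                                      # mutate with probability p_m
            gene = rng.randint(len(child))
            lower, width = tm_bounds if r < p_m / 2 else full_bounds
            child[gene] = lower[gene] + width[gene] * rng.random()
        return child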

2.2.3. PS-GA Division

The PS-GA division is a modification of PSGA. After each generation, the optimal chromosome of the entire population is selected and PS is performed as a local search around its present position. Therefore, whenever the stochastic algorithm in any division generates a favorable solution, the deterministic algorithm directly improves that solution in this division. The flowchart of PS is presented in Figure 4.
To improve the efficiency of PS, we added an adaptive strategy to adjust the step size. The initial PS step size and PS tolerance decrease linearly with the generation number, which prevents too many evaluations from being spent in the early part of the optimization; as the step size becomes smaller, the local search becomes finer and thus the PS process is improved. The step size and tolerance of PS are set in each generation according to Equations (5) and (6).
$$step\_size = initial\_size \cdot (Max\_generation - generation)/Max\_generation$$ (5)
$$tolerance = initial\_tolerance \cdot (Max\_generation - generation)/Max\_generation$$ (6)
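As a simple illustration of this linear schedule (assuming the generation counter starts at zero), the two parameters could be computed as follows and passed to a pattern-search routine such as the sketch in Section 2.1.3; the function name is hypothetical.

    def adaptive_ps_parameters(initial_size, initial_tol, generation, max_generation):
        # Linear shrink of the PS step size and tolerance, Equations (5) and (6)
        scale = (max_generation - generation) / max_generation
        return initial_size * scale, initial_tol * scale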
Finally, the selection, crossover, and mutation operators are performed in this division. The beneficial chromosomes are thus delivered to the next generation; owing to the gathering of superior particles, this division is considered to have the greatest potential to achieve a better solution than the other divisions.

2.2.4. PSO Division

PSO is another stochastic algorithm used in DMGA, but it searches for a global solution in a different manner from SGA. PSO updates every particle with reference to both the global best position and its own experience, whereas SGA updates chromosomes by selecting parents to pass their beneficial genes to the next generation. This strategy reduces the risk of becoming trapped in a local optimum. PSO independently updates every particle belonging to this division in accordance with Equations (1) and (2).
Because the global best position is taken from the population of the PS-GA division, PSO in this division converges faster than traditional PSO. As the particles approach the global best position, a better solution may be discovered in the next generation. Thus, DMGA increases the diversity of search directions, offering a higher probability of achieving a global optimum.

2.2.5. TM Division

In contrast to the PS-GA division, the least favorable particles of the other divisions are gathered in the TM division. The TM division targets a narrower mutation range with a higher possibility of containing the global solution. As shown in Figure 5, all the particles in the division are first divided into two sections according to the median of the variables in each dimension. The total fitness of the particles in each section is then calculated—in this study, for instance, high fitness means a lighter structural mass under the safety criteria—to determine the superior section. The superior section in each dimension contributes to the TM range used by the mutation operator in the PS-GA and TM divisions. Finally, the particles in the TM division undergo the SGA operators to generate a new generation.
The TM range is arranged around the median of each dimension according to the superior part, as per Equations (7)–(9): two-thirds of the superior section and one-third of the inferior section, both adjacent to the median, are retained. If the mean fitness values of the two sections are equal, both sections shrink by half.
If the lower part is superior:
$$lowerbound_{TM,j} = \tfrac{2}{3}\,lowerbound_j + \tfrac{1}{3}\,median_j, \qquad range_{TM,j} = \tfrac{1}{3}\left(range_j + median_j - lowerbound_j\right),$$ (7)
If the upper part is superior:
$$lowerbound_{TM,j} = \tfrac{1}{3}\,lowerbound_j + \tfrac{2}{3}\,median_j, \qquad range_{TM,j} = \tfrac{2}{3}\left(range_j - \tfrac{median_j}{2} + \tfrac{lowerbound_j}{2}\right),$$ (8)
If both parts are equal:
$$lowerbound_{TM,j} = \tfrac{1}{2}\left(lowerbound_j + median_j\right), \qquad range_{TM,j} = \tfrac{1}{2}\,range_j,$$ (9)
where $lowerbound_{TM,j}$ and $range_{TM,j}$ represent the TM lower bound and range in the $j$th dimension, $lowerbound_j$ and $range_j$ denote the original ones, and $median_j$ is the median value of the chromosomes in the $j$th dimension.
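A Python sketch of this contraction, dimension by dimension, is given below; it assumes higher fitness is better and uses the mean fitness of each section (as in the pseudo-code of Appendix A), with hypothetical function and variable names.

    import numpy as np

    def targeted_mutation_range(pop, fitness, lower_bound, search_range):
        # pop: (n, n_dims) chromosomes of the TM division; fitness: (n,), higher is better
        n_dims = pop.shape[1]
        tm_lower = np.empty(n_dims)
        tm_range = np.empty(n_dims)
        for j in range(n_dims):
            med = np.median(pop[:, j])
            low = pop[:, j] < med
            mean_low = fitness[low].mean() if low.any() else -np.inf
            mean_high = fitness[~low].mean() if (~low).any() else -np.inf
            lb, rg = lower_bound[j], search_range[j]
            if mean_low > mean_high:                    # lower part superior, Eq. (7)
                tm_lower[j] = 2 / 3 * lb + 1 / 3 * med
                tm_range[j] = (rg + med - lb) / 3
            elif mean_high > mean_low:                  # upper part superior, Eq. (8)
                tm_lower[j] = 1 / 3 * lb + 2 / 3 * med
                tm_range[j] = 2 / 3 * (rg - med / 2 + lb / 2)
            else:                                       # equal fitness, Eq. (9)
                tm_lower[j] = (lb + med) / 2
                tm_range[j] = rg / 2
        return tm_lower, tm_range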

3. Benchmark Study

The performance of DMGA is discussed in this section and compared with that of other algorithms. All algorithms were coded in MATLAB. To avoid attributing performance to a fortunate initial population, we conducted 100 runs of each test function and recorded whether the optimum was reached and the number of function evaluations required. Every run started from a different initial population, with the population size set to 60. To demonstrate the potential of DMGA in diverse applications, we chose test functions of different difficulties. Because some test functions are more complex than others, we set the limit on function evaluations very high (600,000) so that convergence, where achievable, is not bounded by the limit. For real optimization applications, the number of evaluations should be set according to the available computation time; the jacket substructure problem in Section 4, for example, was limited to 1500 evaluations because of the time-consuming finite element analysis required for each evaluation. Finally, the effectiveness and efficiency of the algorithms were measured by the success rate and the number of function evaluations, respectively.
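The bookkeeping for this protocol can be sketched in Python as follows (the study's own implementation was in MATLAB); the algorithm interface, returning the best objective value and the number of evaluations used, is an assumption for illustration.

    import numpy as np

    def benchmark(algorithm, test_function, optimum, tolerance, runs=100, max_evals=600_000):
        # Success rate and mean evaluation count over repeated runs from random starts.
        successes, eval_counts = 0, []
        for _ in range(runs):
            best_value, n_evals = algorithm(test_function, max_evals=max_evals)
            if abs(best_value - optimum) <= tolerance:   # counted as a successful run
                successes += 1
                eval_counts.append(n_evals)
        mean_evals = np.mean(eval_counts) if eval_counts else float("nan")
        return successes / runs, mean_evals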

3.1. Test Functions and Algorithms Setup

We selected 13 common test functions of high diversity for analyzing the performance of the algorithms. The test functions came from the CEC2013 benchmark and other resources [23,24,25,26]. All the test functions and references are listed in Table A2 in Appendix B. The search range and tolerance for each test function are given according to the properties of the function.
We defined the mutation probability $p_m$ as 0.5 and $p_{mi}$ as 0.1. The other parameters for PS and PSO were set according to the properties of each test function, as listed in Table A3. The results for all the test functions obtained by DMGA are presented and compared with SGA, PSO, and their hybrids. In this study, we implemented four hybrid algorithms (as described in Section 1), which are based on SGA, PSO and PS. The names and references of these algorithms are listed below:
  • PSGA by Li et al. [12]
  • Hybrid pattern search PSO (PSPSO) by Sahu et al. [13]
  • Lifecycle model GA (LMGA) by Krink and Løvbjerg [16]
  • Hybrid GA and PSO (HGAPSO) by Juang [27]
It is worth mentioning that we replaced the hill-climber in LMGA by PS in this study to make a fair comparison with other algorithms.

3.2. Performance Analysis

The overall computational results listed in Table 1 present the efficiency and effectiveness of the selected algorithms in solving the different test functions. DMGA obtained 100% effectiveness in 9 of the 13 test functions, and its number of function evaluations generally stayed lower than that of the other algorithms at similar effectiveness. Several test functions were relatively difficult for DMGA to solve, but even for these it obtained over 60% effectiveness.
Although SGA and PSO are both stochastic algorithms, each has its own strengths on different test functions. As a result, their hybrids with PS, namely PSGA and PSPSO, showed the same tendencies on some functions that the other algorithms did not. It is worth noting that these two algorithms perform well on the Rastrigin function, which shows that hybridization with PS improves convergence toward local minima. However, PS in LMGA became less effective on the Rastrigin function because of the limited number of function evaluations available when three algorithms take turns solving the problem.
HGAPSO combines SGA and PSO, each conducted in half of the population separately, and shows the strengths of both original algorithms. Nevertheless, it generally cost more function evaluations than the algorithms incorporating PS. From the previous discussion, it can be seen that DMGA successfully combined the three original algorithms in an effective way, thus improving both effectiveness and efficiency.
In addition, we conducted a ranking analysis based on the effectiveness and efficiency of each algorithm. Effectiveness was compared first; where algorithms reached the same effectiveness, efficiency was used to break the tie. Table 2 lists the ranking of every algorithm in solving each function. The rankings were averaged over the functions to compare the overall performance of the algorithms.
Although rarely does one algorithm outperform all others on every function, since performance is always problem dependent, DMGA ranked first overall and performed best in 7 of the 13 test functions. On the Eggholder and Schaffer functions, DMGA was ineffective because of their ill-conditioning: DMGA failed to detect the correct TM range and could become trapped in a local minimum during the PS process. On the other functions, however, DMGA was superior in both effectiveness and efficiency. Similar to DMGA, LMGA is also a hybridization of PS, SGA, and PSO, but each operator in LMGA is executed in sequence. LMGA therefore does not switch operators until the optimization has stalled at the same solution for several generations, which can leave one of the original algorithms unused.
DMGA was superior for two main reasons. First, the collaboration of the divisional model improved performance. Through migration between the divisions, the best solution of each generation was processed by different operators; a solution stuck at a local optimum could thus be reactivated, keeping the optimization searching for the global optimum. Figure 6 reveals that in the early part of the DMGA optimization, each division produced the best solution in turn, resulting in rapid convergence. In the initial population, the best solution was located in the TM division. Afterward, the PS-GA division improved the solution during the 7th to 9th iterations. After being trapped in a local optimum for several iterations, the global optimum was obtained through the repeated collaboration of the PSO and PS-GA divisions.
The second reason for the superiority of DMGA was its mutation strategy. The original mutation and TM operators were used simultaneously to mutate more effectively. As shown in Figure 7, Easom's function is a highly nonlinear function without an apparent trend: its minimum is −1 at x1 = x2 = π, whereas the value stays near zero elsewhere. The solution must therefore be found mainly through mutation. DMGA had the best mutation strategy, resulting in the fastest convergence.
We then checked the TM range shown in Figure 8, which revealed that the solution was contained in the TM range throughout the optimization process. Hence, this strategy enhanced the efficiency of the mutation. DMGA not only converged faster than the other algorithms but also searched for the global optimum with an efficient mutation operator, making it competitive across a wide variety of optimization problems.

4. Jacket Substructure Optimization

4.1. Finite Element Modelling

For a real-world problem, an offshore wind turbine substructure was optimized with its weight as the objective function. We built a finite element analysis model of the JSS using the Abaqus commercial software package, as depicted in Figure 9. The reference design was obtained from the preliminary drawing of a Taiwan offshore pilot project, provided by a private wind energy developer. The height of this JSS is 38 m and its footprint is an 11 m by 11 m square. The tubular members are arranged in four stories and categorized into three zones. The four legs are connected by X-type braces on each face and in each story. The design variables are the sectional properties, namely the thickness and radius of the members, numbered from brace to leg and from the atmospheric zone to the immersion zone: $t_1 \sim t_8$ and $r_1 \sim r_8$ represent the thicknesses and radii of the braces and legs in the atmospheric, splash, and immersion zones, respectively. The topologies of the piles and the members of the footprint remained unchanged.
The whole structure was modelled with beam elements; the wind load was imposed at the top of the tower and the wave loads were computed on the nodes of the elements. The beam elements in the support structure were categorized into primary (legs) and secondary (braces) elements. The joints between legs and braces were modelled by truncating the secondary beams with the profiles of the primary beams, so that the overlapping problem was eliminated, i.e., the braces did not extrude into the legs (shown in detail in Figure 9). A rigid connection then bridged the gap between the leg member and the truncated braces. After the length of each beam and its corresponding profile were determined, the mass was calculated element by element by multiplying the sectional mass by the element length. The whole model contained 720 nodes in the substructure.
The analysis was linear elastic and static, out of concern for computational efficiency over the thousands of design evaluations in the optimization exploration. Although some studies prefer dynamic analysis to capture more realistic structural responses, such analyses combined with stochastic approaches suffer from low computational efficiency [28]. The static analysis is a rational simplification for the design iterations, and the proposed DMGA is an alternative algorithm that is also compatible with a dynamic solver in the future. The masses of the equipment and secondary structures (e.g., nacelle, blades and access platform) were added to the finite element model, i.e., 240 tons for the RNA and 60 tons for the flange. The tower was modelled as tapered tubular members with a height of 90 m and a weight of 515 tons, according to the specification of this wind turbine, and it was connected to the top of the jacket through rigid elements.
The wind, wave, and current loads were applied to the finite element model as line loads (i.e., force per unit length) on the beam elements and as a concentrated force at the nacelle for the blade wind loads. Although fatigue is usually the design-driving load case for jacket structures, this study focuses on the performance of the optimization algorithms; thus, only survival conditions were considered, to reduce the computational time of each function evaluation. According to the ultimate limit state (ULS) specified in the design load cases of the local requirement [29] for this reference offshore wind turbine, the turbine has to comply with class IA specifications per IEC 61400 [30], and the substructure has to survive an ultimate environmental condition with a 50-year return period. A load factor of 1.35 was applied to the environmental loads for the ultimate strength assessment [31]. A parked wind turbine with a blade pitch control fault under an extreme gust sustained for 3 s was considered. The class IA wind speed of 50 m/s was applied to this turbine. The gust factor was defined as 1.4 according to the rule [30]; the gust speed therefore reached 70 m/s in the analysis, with a wind profile following a ground shear exponent of 0.13. A 4 MW wind turbine with a rotor diameter of 120 m, derived from the NREL 5 MW reference model [32], was adopted. The wind load on the rotor and nacelle assembly (RNA), computed using the steady blade element momentum (BEM) method, was 2030 kN. The line load on the tower was then computed via an empirical formula, which models the pressure drag as a drag coefficient times the diameter and the velocity squared at each elevation, resulting in a total of 683 kN. Wind loads on the jacket and flange were regarded as small compared with those on the wind turbine and were hence neglected. Owing to the lack of information about the structural profiles of the blades of this assumed turbine and the damping of the RNA, aero-elasticity and the dynamic load induced by the gust were neglected. A constant horizontal wind load of 2030 kN was applied, resulting in a 260 MN·m overturning moment on the legs of the substructure. The acting direction was aligned with the diagonal of the footprint, so that the compression stress was taken by one leg.
On the other hand, according to the site investigation of the assumed location in the Taiwan Strait [33], the local requirements for the wave conditions of the Taiwan offshore demonstration wind turbines specify a significant wave height of 8 m for a 50-year return period. The maximum wave height in this extreme sea state was therefore expected to be 14.8 m, i.e., 1.85 times the significant wave height [33]. According to the categories of regular wave theory, a minimum wave period (giving the highest wave dynamics) of 12.7 s was determined at the breaking limit (Figure 10), which corresponds to the 5th-order Stokes wave regime. Table 3 lists the properties of the extreme wind and wave conditions. According to the aforementioned local requirements, a steady current with a surface velocity of 1.0 m/s and a seabed shear profile was superposed on the wave velocity field. In contrast to the steady wind load, the transient wave loads are more complex and nonlinear. Because the diameters of the JSS tubular members are much smaller than the wave length, the wave loads can be computed by the semi-empirical Morison equation [34], as per Equation (10), where $\rho$ is the water density; $C_m$ and $C_d$ are the inertia and drag force coefficients, respectively, which differ between the abovementioned zones; $u$ and $\dot{u}$ are the flow velocity and acceleration, respectively; and $D$ is the member diameter.
$$F = \rho C_m \frac{\pi}{4} D^2 \dot{u} + \frac{1}{2} \rho C_d D u |u|$$ (10)
As Equation (10) shows, the total wave load is proportional not only to $D$ but also to $D^2$. The nonlinear wave has a sharper crest and higher particle acceleration. Figure 11 shows the wave profile and the two components of the wave load. As the radius changes (Figure 12, for example), the volume and area on which the wave loads act change, and therefore the nonlinear force and the instant at which it peaks change during the wave propagation. Because the topology of the jacket does not change, $u$ and $\dot{u}$ remain the same when the radius changes. The velocities and accelerations are therefore computed at the structural nodes in advance, and the forces are then computed from the Morison equation using the radius of each member in the design iteration. To apply the maximum wave load to the structure, the wave direction is assumed to be aligned with the wind; the load over a whole wave period is computed and its maximum value is extracted for the static structural analysis. For the reference design, the maximum overturning moment on the substructure is approximately 100 MN·m. Since the dynamics of the wind load are not considered, the relatively lower wave load is also treated as static.
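As an illustration, the Morison line load of Equation (10) can be evaluated per member as in the following Python sketch; the sea-water density value and the function signature are assumptions, and the coefficients would be chosen per zone as described above.

    import numpy as np

    def morison_line_load(u, u_dot, D, Cm, Cd, rho=1025.0):
        # Equation (10): inertia term ~ D^2, drag term ~ D; rho in kg/m^3 (assumed sea water)
        inertia = rho * Cm * np.pi / 4.0 * D ** 2 * u_dot
        drag = 0.5 * rho * Cd * D * u * np.abs(u)
        return inertia + drag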

4.2. Algorithm Setup

The JSS optimization is defined as a constrained nonlinear programming problem, as follows. The design variables are the thicknesses t and radii r of the members of the substructure, constrained between the lower bounds $t_{Lj}, r_{Lj}$ and upper bounds $t_{Uj}, r_{Uj}$, as per Equations (11) and (12). The objective of the optimization is the mass of the substructure, Equation (13), which is subject to four constraints: two geometric constraints $g_1, g_2$ (Equations (14) and (15)) and two safety constraints $g_3, g_4$ per NORSOK N-004 (Equations (16) and (17)).
$$\mathbf{t} = [t_1, t_2, t_3, \ldots, t_n] \in \mathbb{R}^n, \quad t_{Lj} \le t_j \le t_{Uj}, \quad j = 1, \ldots, n$$ (11)
$$\mathbf{r} = [r_1, r_2, r_3, \ldots, r_n] \in \mathbb{R}^n, \quad r_{Lj} \le r_j \le r_{Uj}, \quad j = 1, \ldots, n$$ (12)
$$\min f(\mathbf{t}, \mathbf{r})$$ (13)
$$g_1(\mathbf{t}, \mathbf{r}) = r_j / t_j \le 60, \quad j = 1, \ldots, n$$ (14)
$$g_2(\mathbf{t}, \mathbf{r}) = t_j - r_j \le 0, \quad j = 1, \ldots, n$$ (15)
$$\mathrm{s.t.} \quad g_3(\mathbf{t}, \mathbf{r}) \le 355$$ (16)
$$g_4(\mathbf{t}, \mathbf{r}) \le 1$$ (17)
According to the rule, the lower bound of the structural thickness $t_{Lj}$ is 6 mm. The ratio of radius to thickness must be less than 60, and the radius must be larger than the thickness; accordingly, we set the two geometric constraints $g_1$ and $g_2$. Since the dimensions of the legs are generally greater than those of the braces, we set the minimum thickness of the legs to 26 mm to accelerate the optimization process.
Random initialization over the design domain was used to avoid a potential local minimum biased by the reference design. In the first generation, most chromosomes violated the safety constraints; hence, we added multiples of the mass to the objective function to penalize the violation of the safety constraints, i.e., soft constraints [36]. Soft constraints can be violated during the process, but they introduce high penalties on the objective function. $g_3$ is the ultimate stress criterion, referring to the yield strength $\sigma_{y,S355}$ (= 355 MPa) of S355 steel. $g_4$ is the local buckling criterion, calculated through the equations given by NORSOK N-004 [37], as described in Appendix C. If the computational result violates the constraints, the objective function increases by a multiple of the structural mass. Because the local buckling criterion is stricter than the ultimate stress criterion, the multiplier for the ultimate stress criterion is higher. To filter out appropriate designs in the early generations, we divided the violation of the local buckling criterion into three stages, thus improving the efficiency of the search process. The overall optimization procedure is depicted in Figure 13.
$$f(\mathbf{t}) = Mass + US + LC,$$
$$US = \begin{cases} Mass \times 10, & g_3 \ge \sigma_{y,S355} \\ 0, & g_3 < \sigma_{y,S355} \end{cases}$$
$$LC = \begin{cases} Mass \times 5, & g_4 \ge 1.5 \\ Mass \times 3, & 1.5 > g_4 \ge 1.0 \\ 0, & g_4 < 1.0 \end{cases}$$
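A Python sketch of this penalized objective, assuming the maximum stress (g3) and the buckling utilization (g4) have already been extracted from the FEA results, is given below; the function name and interface are illustrative.

    def penalized_mass(mass, g3, g4, sigma_y=355.0):
        # Soft-constraint objective: structural mass plus penalties US and LC
        us = mass * 10 if g3 >= sigma_y else 0.0        # ultimate stress penalty
        if g4 >= 1.5:                                   # staged local buckling penalty
            lc = mass * 5
        elif g4 >= 1.0:
            lc = mass * 3
        else:
            lc = 0.0
        return mass + us + lc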
In this section, we applied DMGA and the other algorithms to optimize the JSS and compared their design exploration performance to determine which algorithm is superior for the jacket optimization problem. We considered the ultimate stress and local buckling criteria of offshore structures as highly nonlinear constraints. The single objective function, the penalized mass of the JSS, is also a nonlinear function of the sectional properties. To solve this complicated problem, the optimization was based on evaluations of the FEA model. By coding the algorithms in Python, we could automatically and repeatedly conduct analyses for different cases. Finally, the convergence rates and the resulting designs obtained by each algorithm were compared.
We selected the algorithms for comparison according to the results of the performance analyses. SGA and PSO are the original algorithms underlying DMGA, and according to the rankings, PSGA and LMGA are the better hybrid algorithms. All the algorithms, with the same initial population size of 60, were coded in Python to run the optimization process in the Abaqus software. The parameters used for the JSS optimization are listed in Table 4.

4.3. Results and Discussion

Within 1500 function evaluations, the optimization histories of all the algorithms are shown in Figure 14. The results indicate that although DMGA's mass descended more slowly than that of PSGA in the early generations, it plunged earlier than the other algorithms at approximately 300 function evaluations. The mass decreased considerably up to 400 function evaluations in DMGA, reaching the lightest mass among all the algorithms (Figure 14). Moreover, DMGA consumed only half of the function evaluations spent by SGA, indicating that DMGA was the most efficient among these hybrid algorithms. The two conventional algorithms, SGA and PSO, converged more slowly than the hybrid algorithms and to a higher mass; SGA outperformed PSO in both the final result and the convergence rate, while PSGA outperformed LMGA. As in the performance analysis, simply executing the original algorithms in turn did not improve efficiency; in this case it was slowed down by PSO. In terms of the objective function, DMGA reached the lightest mass. Even though the results of the other hybrid algorithms are only slightly heavier than DMGA's (by less than 10 tons), DMGA consumed only one-third of the computational time. Compared with the conventional SGA and PSO, DMGA saved 26 and 67 tons, respectively. This indicates that DMGA can significantly reduce the computational cost while acquiring a competitive result. All these facts show that DMGA is suitable for an OWS optimization problem.
Table 5 lists the results of the JSS optimization, including the thickness, radius, and total structural mass obtained by each algorithm.
Generally, the braces of the structure had smaller thicknesses and radii than the legs. The hybrid algorithms lightened the mass more than the original ones owing to the local search by PS, which tunes the values precisely toward a local optimum. DMGA obtained the lightest mass (200 tons), 12% lighter than that obtained by SGA, the algorithm most commonly used in JSS optimization. The dimensions of DMGA's result suggest that the atmospheric members should be thicker than the immersed ones, whereas the other algorithms suggested the opposite. Manufacturing such a large difference in diameter within a continuous leg might be impracticable and expensive, and the unusual distribution of mass along the vertical direction may also cause problems in dynamic behavior. This is because the design variables are free within the design space; in practice, more design constraints, such as a cost model and the natural frequency [36], should be included. Nevertheless, this indicates that DMGA found a different local optimum from the other algorithms, and this result is potentially the global optimum in the present tests. DMGA obtained a better result not merely through local search but through a wide search for the global optimum. As more load cases are considered, the local search is expected to be more restrained, in contrast to the broader search of DMGA.
Figure 15 shows the stress contours of the optimized OWSs. Stress concentrated along the structural legs, mostly in the atmospheric and immersion zones, with maximum values below 250 MPa, reflecting the direction of the external forces. It is worth noting that DMGA strengthened the lower leg in the atmospheric and splash zones; the radii of the structural members were distributed unevenly, which was unusual among all the structures, and the upper braces were thicker than the lower ones. This evidence supports that DMGA's result was closer to the global optimum than the others. Apart from this, all nominal stresses in the contours were below the allowable stress, from which we can deduce that the LC constraint, rather than the US constraint, governed this case. The same can be inferred from the fact that the maximum stress in the reference jacket was lower than in the optimized designs: the buckling constraint had not been reached in the reference design, so there was an allowance to reduce its weight.

5. Conclusions

This study introduced a novel hybrid algorithm, DMGA, which combines GA, PSO, and PS in a divisional model. Each division in DMGA applies one algorithm as its search scheme and shares information with the other divisions through migration. In the performance analyses, the proposed algorithm was tested on 13 test functions, with the success rate taken as effectiveness and the computational cost as efficiency. The results demonstrated that DMGA outperformed the other four hybrid algorithms; moreover, DMGA was the most effective algorithm on seven test functions while maintaining high efficiency.
Because the purpose of this study was to compare hybrid algorithms on a JSS optimization problem, we applied these hybrid algorithms to reducing the mass of the JSS subject to structural safety constraints. With a complex and nonlinear loading condition, the simulation analysis of the JSS requires considerable computational cost; thus, efficiency and the ability to find a global minimum are critical. DMGA obtained the lightest structural mass, which was 12% (26 tons) lighter than that of SGA, and required only half the number of evaluations to reach a superior result compared with the other algorithms. These results indicate that DMGA is an effective and efficient algorithm that can be applied to optimize the JSS for offshore wind turbines.

Author Contributions

Conceptualization, D.-P.L. and H.-H.H.; methodology, D.-P.L. and T.-Y.L.; software, D.-P.L. and T.-Y.L.; validation, D.-P.L. and T.-Y.L.; formal analysis, D.-P.L.; investigation, D.-P.L. and T.-Y.L.; data curation, D.-P.L. and T.-Y.L.; writing—original draft preparation, D.-P.L; writing—review and editing, T.-Y.L. and H.-H.H.; visualization, D.-P.L. and T.-Y.L.; supervision, H.-H.H.; funding acquisition, H.-H.H. All authors have read and agreed to the published version of the manuscript.

Funding

The research reported in this paper is supported by the Ministry of Science and Technology (MOST), Taiwan under Grant No. 108-2628-E-002-007-MY3, and by National Taiwan University, Taipei, Taiwan under Grant Nos. NTU-CDP-108L7743 and NTU-CDP-109L7725.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Pseudo-code of the DMGA algorithm.
Procedure of DMGA
  • Set parameters
  • Initialize random chromosomes/particles in the three divisions: PS-GA, PSO and TM
  • repeat
  •   Evaluate the fitness of all chromosomes/particles
  •   Replace the two worst x_i^PS-GA with the best of x_i^PSO and x_i^TM, respectively (Elite migration)
  •   Replace the two worst x_i^TM with the worst of x_i^PS-GA and x_i^PSO, respectively (Abandon)
  •   p_g := Pattern search(the best of x_i^PS-GA)
  •   Replace the worst of x_i^PSO with p_g
  •   Update x_i^PSO according to Equations (1) and (2)
  •   for each dimension j do
  •     LOW := { x_i,j^TM | x_i,j^TM < median(x_j) }
  •     HIGH := { x_i,j^TM | x_i,j^TM > median(x_j) }
  •     low_j := ( Σ fitness(x_i^TM ∈ LOW) ) / n(LOW)
  •     high_j := ( Σ fitness(x_i^TM ∈ HIGH) ) / n(HIGH)
  •     if low_j > high_j then update targeted_range_j by Equation (7)
  •     else if low_j < high_j then update targeted_range_j by Equation (8)
  •     else update targeted_range_j by Equation (9)
  •     end-if
  •   end-for
  •   x_i^PS-GA := SGA(PS-GA)
  •   x_i^TM := SGA(TM)
  • until termination conditions are met

Appendix B

Table A2. List of test functions.
Each entry below gives the function name and dimension D, the reference, the equation, the search range, and the tolerance (%).

Ackley (D = 5), Ref. [24]: $F_1(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right) + e + 20$; search range $[-32, 32]^D$; tolerance $10^{-2}$.

Schwefel (D = 5), Ref. [24]: $F_2(\mathbf{x}) = 418.9829\,D - \sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right)$; search range $[-500, 500]^D$; tolerance $10^{-2}$.

Rastrigin (D = 10), Ref. [24]: $F_3(\mathbf{x}) = \sum_{i=1}^{D}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$; search range $[-10, 10]^D$; tolerance $10^{-2}$.

De Joung (D = 3), Ref. [22]: $F_4(\mathbf{x}) = \sum_{i=1}^{D} x_i^2$; search range $[-5, 5]^D$; tolerance $10^{-2}$.

Rosenbrock (D = 4), Ref. [22]: $F_5(\mathbf{x}) = \sum_{i=1}^{D}\left[100(x_i^2 - x_{i+1})^2 + (x_i - 1)^2\right]$; search range $[-5, 10]^D$; tolerance $10^{-3}$.

Goldstein-Price (D = 2), Ref. [22]: $F_6(\mathbf{x}) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)\right]\left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)\right]$; search range $[-2, 2]^D$; tolerance $10^{-2}$.

Easom (D = 2), Ref. [22]: $F_7(\mathbf{x}) = -\cos(x_1)\cos(x_2)\exp\left\{-\left[(x_1 - \pi)^2 + (x_2 - \pi)^2\right]\right\}$; search range $[-100, 100]^D$; tolerance $10^{-2}$.

Zakharov (D = 5), Ref. [22]: $F_8(\mathbf{x}) = \sum_{i=1}^{D} x_i^2 + \left(\sum_{i=1}^{D} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{D} 0.5\,i\,x_i\right)^4$; search range $[-5, 10]^D$; tolerance $10^{-3}$.

Hartmann (H6,4) (D = 6), Ref. [22]: $F_9(\mathbf{x}) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right)$, with
$a_{ij} = \begin{bmatrix} 10.0 & 3.00 & 17.0 & 3.50 & 1.70 & 8.00 \\ 0.05 & 10.0 & 17.0 & 0.10 & 8.00 & 14.0 \\ 3.00 & 3.50 & 1.70 & 10.0 & 17.0 & 8.00 \\ 17.0 & 8.00 & 0.05 & 10.0 & 0.10 & 14.0 \end{bmatrix}$, $\quad c_i = \begin{bmatrix} 1.0 & 1.2 & 3.0 & 3.2 \end{bmatrix}$,
$p_{ij} = \begin{bmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{bmatrix}$;
search range $[0, 1]^D$; tolerance $10^{-2}$.

Eggholder (D = 2), Ref. [23]: $F_{10}(\mathbf{x}) = \sum_{i=1}^{D}\left\{-(x_{i+1} + 47)\sin\left(\sqrt{\left|x_{i+1} + \tfrac{x_i}{2} + 47\right|}\right) - x_i \sin\left(\sqrt{\left|x_i - (x_{i+1} + 47)\right|}\right)\right\}$; search range $[-512, 512]^D$; tolerance $10^{-1}$.

Schaffer (D = 2), Ref. [23]: $F_{11}(\mathbf{x}) = 0.5 + \dfrac{\sin^2\left(\sqrt{x_1^2 + x_2^2}\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$; search range $[-100, 100]^D$; tolerance $10^{-3}$.

Styblinski-Tang (D = 5), Ref. [23]: $F_{12}(\mathbf{x}) = \tfrac{1}{2}\sum_{i=1}^{D}\left(x_i^4 - 16x_i^2 + 5x_i\right)$; search range $[-5, 5]^D$; tolerance $10^{-3}$.

Beale (D = 2), Ref. [25]: $F_{13}(\mathbf{x}) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$; search range $[-4.5, 4.5]^D$; tolerance $10^{-3}$.
Table A3. Parameter settings for the algorithms in the performance analyses.
Test Function     | PS Step Size | PS Tolerance | ω for PSO | φ1 for PSO | φ2 for PSO
Ackley            | 1            | 0.01         | 0.4       | 1          | 1
Schwefel          | 10           | 0.1          | 0.4       | 1          | 1
Rastrigin         | 1            | 0.001        | 0.8       | 1          | 1
De Joung          | 0.1          | 0.001        | 0.4       | 0.5        | 0.5
Rosenbrock        | 0.1          | 0.001        | 0.8       | 1          | 1
Goldstein-Price   | 0.1          | 0.001        | 0.4       | 0.5        | 0.5
Easom             | 10           | 0.01         | 0.4       | 1          | 1
Zakharov          | 1            | 0.01         | 0.4       | 1          | 1
Hartmann (H6,4)   | 0.1          | 0.001        | 0.8       | 2          | 2
Eggholder         | 10           | 0.1          | 0.8       | 2          | 2
Schaffer          | 10           | 0.001        | 0.8       | 2          | 2
Styblinski-Tang   | 1            | 0.001        | 0.4       | 1          | 1
Beale             | 0.1          | 0.001        | 0.4       | 1          | 1
Parameters for PS were defined as powers of 10 according to the search range, so that local searches are completed with high efficiency. Next, according to the number of local minima of each function, we set the inertia weight ω as a multiple of 0.4 to control the activity of PSO. Similar to the inertia weight, φ1 and φ2 are positive constants for the global-experience and self-experience terms, respectively, which control the convergence of the algorithm. In this study, we set these constants as multiples of 0.5.

Appendix C

One of the safety constraints, $g_4$, is calculated from the local buckling criteria for combined axial compression and bending in NORSOK N-004 [37], shown in the following equations, whichever is greater:
$$\frac{N_{Sd}}{N_{cl,Rd}} + \frac{\sqrt{M_{y,Sd}^2 + M_{z,Sd}^2}}{M_{Rd}} \le 1.0,$$
or
$$\frac{N_{Sd}}{N_{c,Rd}} + \frac{1}{M_{Rd}}\left\{\left[\frac{C_{my} M_{y,Sd}}{1 - N_{Sd}/N_{Ey}}\right]^2 + \left[\frac{C_{mz} M_{z,Sd}}{1 - N_{Sd}/N_{Ez}}\right]^2\right\}^{0.5} \le 1.0,$$
where $N_{Sd}$ denotes the design axial compression force, and $M_{y,Sd}$ and $M_{z,Sd}$ represent the in-plane and out-of-plane design bending moments; these variables are obtained from the FEA results. $N_{Ey}$ and $N_{Ez}$ are the Euler buckling strengths corresponding to the member y and z axes, respectively; $N_{c,Rd}$ and $N_{cl,Rd}$ denote the design axial compressive resistance and the design axial local buckling resistance, respectively; $M_{Rd}$ is the design bending moment resistance; and $C_{my}$ and $C_{mz}$ represent the reduction factors corresponding to the member y and z axes, respectively. These quantities are calculated by further equations defined in the standard.
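For illustration, the utilization could be evaluated as in the following Python sketch once the resistances and reduction factors have been computed according to the standard; the square-root form of the second check follows the reconstruction above, and the function signature is an assumption.

    import math

    def buckling_utilization(N_Sd, My_Sd, Mz_Sd, N_cl_Rd, N_c_Rd, M_Rd, N_Ey, N_Ez, C_my, C_mz):
        # g4 is taken as the larger of the two NORSOK N-004 interaction checks
        check1 = N_Sd / N_cl_Rd + math.sqrt(My_Sd ** 2 + Mz_Sd ** 2) / M_Rd
        check2 = N_Sd / N_c_Rd + (1.0 / M_Rd) * math.sqrt(
            (C_my * My_Sd / (1.0 - N_Sd / N_Ey)) ** 2
            + (C_mz * Mz_Sd / (1.0 - N_Sd / N_Ez)) ** 2
        )
        return max(check1, check2)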

References

  1. Delay, T.; Jennings, T. Offshore Wind Power: Big Challenge, Big Opportunity; Carbon Trust CTC743: London, UK, 2008. [Google Scholar]
  2. Muskulus, M.; Schafhirt, S. Design optimization of wind turbine support structures-a review. J. Ocean Wind Energy 2014, 1, 12–22. [Google Scholar]
  3. Chew, K.H.; Tai, K.; Ng, E.; Muskulus, M. Optimization of offshore wind turbine support structures using an analytical gradient-based method. Energy Procedia 2015, 80, 100–107. [Google Scholar] [CrossRef] [Green Version]
  4. Chew, K.H.; Tai, K.; Ng, E.; Muskulus, M. Analytical gradient-based optimization of offshore wind turbine substructures under fatigue and extreme loads. Mar. Struct. 2016, 47, 23–41. [Google Scholar] [CrossRef] [Green Version]
  5. Pasamontes, L.B.; Torres, F.G.; Zwick, D.; Schafhirt, S.; Muskulus, M. Support structure optimization for offshore wind turbines with a genetic algorithm. In Proceedings of the ASME 2014 33rd International Conference on Ocean, Offshore and Arctic Engineering, San Francisco, California, USA, 8–13 June 2014; p. V09BTA033-V09BT09A. [Google Scholar]
  6. Schafhirt, S.; Zwick, D.; Muskulus, M. Reanalysis of jacket support structure for computer-aided optimization of offshore wind turbines with a genetic algorithm. In Proceedings of the Twenty-Fourth International Ocean and Polar Engineering Conference, International Society of Offshore and Polar Engineers, Busan, Korea, 15–20 June 2014. [Google Scholar]
  7. AlHamaydeh, M.; Barakat, S.; Nasif, O. Optimization of Support Structures for Offshore Wind Turbines Using Genetic Algorithm with Domain-Trimming. Math. Probl. Eng. 2017, 2017, 5978375. [Google Scholar] [CrossRef] [Green Version]
  8. Gentils, T.; Wang, L.; Kolios, A. Integrated structural optimisation of offshore wind turbine support structures based on finite element analysis and genetic algorithm. Appl. Energy 2017, 199, 187–204. [Google Scholar] [CrossRef] [Green Version]
  9. Kaveh, A.; Sabeti, S. Structural optimization of jacket supporting structures for offshore wind turbines using colliding bodies optimization algorithm. Struct. Design Tall Spec. Build. 2018, 27, e1494. [Google Scholar] [CrossRef]
  10. Chehouri, A.; Younes, R.; Ilinca, A.; Perron, J. Review of performance optimization techniques applied to wind turbines. Appl. Energy 2015, 142, 361–388. [Google Scholar] [CrossRef]
  11. Hofmeister, B.; Bruns, M.; Rolfes, R. Finite element model updating using deterministic optimisation: A global pattern search approach. Eng. Struct. 2019, 195, 373–381. [Google Scholar] [CrossRef]
  12. Li, F.; Lam, K.Y.; Wang, L. Power allocation in cognitive radio networks over Rayleigh-fading channels with hybrid intelligent algorithms. Wirel. Netw. 2018, 24, 2397–2407. [Google Scholar] [CrossRef]
  13. Sahu, R.K.; Panda, S.; Sekhar, G.C. A novel hybrid PSO-PS optimized fuzzy PI controller for AGC in multi area interconnected power systems. Int. J. Electr. Power Energy Syst. 2015, 64, 880–893. [Google Scholar] [CrossRef]
  14. Gandomkar, M.; Vakilian, M.; Ehsan, M.A. Combination of genetic algorithm and simulated annealing for optimal DG allocation in distribution networks. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Saskatoon, SA, Canada, 1–4 May 2005; pp. 645–648. [Google Scholar]
  15. Pourvaziri, H.; Naderi, B. A hybrid multi-population genetic algorithm for the dynamic facility layout problem. Appl. Soft Comput. 2014, 24, 457–469. [Google Scholar] [CrossRef]
  16. Krink, T.; Løvbjerg, M. The lifecycle model: Combining particle swarm optimisation, genetic algorithms and hillclimbers. In International Conference on Parallel Problem Solving from Nature—PPSN VII. PPSN 2002. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2439, pp. 621–630. [Google Scholar]
  17. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  18. Holland, J.H. Adaptation in natural and artificial systems. In An Introductory Analysis with Application to Biology, Control, and Artificial Intelligence; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  19. Xing, L.N.; Chen, Y.W.; Cai, H.P. An intelligent genetic algorithm designed for global optimization of multi-minima functions. Appl. Math. Comput. 2006, 178, 355–371. [Google Scholar] [CrossRef]
  20. Kennedy, J. Particle swarm optimization. In Encyclopedia of Machine Learning; Springer: Berlin/Heidelberg, Germany, 1995; pp. 760–766. [Google Scholar]
  21. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the Evolutionary Computation Proceedings, 1998 IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  22. Hooke, R.; Jeeves, T.A. “Direct Search” solution of numerical and statistical problems. J. ACM (JACM) 1961, 8, 212–229. [Google Scholar] [CrossRef]
  23. Chelouah, R.; Siarry, P. Tabu search applied to global optimization. Eur. J. Oper. Res. 2000, 123, 256–270. [Google Scholar] [CrossRef]
  24. Mishra, S.K. Some new test functions for global optimization and performance of repulsive particle swarm method. SSRN Electron. J. 2006, 1–24. [Google Scholar] [CrossRef] [Green Version]
  25. Liang, J.J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013; Volume 201212, pp. 3–18. [Google Scholar]
  26. Beale, E. On an Iterative Method for Finding a Local Minimum of a Function of More than One Variable; Technical Report 25; Princeton University, Statistical Techniques Research Group: Princeton, NJ, USA, 1958. [Google Scholar]
  27. Juang, C.F. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 997–1006. [Google Scholar] [CrossRef] [PubMed]
  28. Häfele, J. A Numerically Efficient and Holistic Approach to Design Optimization of Offshore wind Turbine Jacket Substructures. Ph.D. Thesis, Gottfried Wilhelm Leibniz Universität, Hanover, Germany, 2019. [Google Scholar]
  29. Development Plan of Wind Power in Taiwan (English Translation of Chinese Title). Available online: https://energywhitepaper.tw/upload/201712/151376189216283.pdf (accessed on 29 May 2020).
  30. International Electrotechnical Commission (IEC). IEC 61400-3 Part 3: Design Requirements for Offshore Wind Turbines; IEC: Geneva, Switzerland, 2009. [Google Scholar]
  31. Det Norske Veritas and Germanischer Lloyd (DNVGL). Loads and Site Conditions for Wind Turbines; DNVGL-ST-0437; DNVGL: Oslo, Norway, 2016. [Google Scholar]
  32. Jonkman, J.; Butterfield, S.; Musial, W.; Scott, G. Definition of a 5-MW Reference Wind Turbine for Offshore System Development; NREL/TP-500-38060; National Renewable Energy Lab (NREL): Golden, CO, USA, 2009. [Google Scholar]
  33. Offshore Wind Power Demonstration Incentive Program–Annex 3: Requirements of Specification (English Translation of Chinese Title). Available online: https://law.moj.gov.tw/LawClass/LawAll.aspx?pcode=J0130063 (accessed on 29 May 2020).
34. Morison, J.R.; O'Brien, M.P.; Johnson, J.W.; Schaaf, S.A. The force exerted by surface waves on piles. J. Pet. Technol. 1950, 2, 149–154. [Google Scholar] [CrossRef]
  35. Chakrabarti, S.K. Hydrodynamics of Offshore Structures; WIT Press: Southampton, UK, 1987. [Google Scholar]
  36. Constrained Optimization—Wikipedia. Available online: https://en.wikipedia.org/wiki/Constrained_optimization (accessed on 29 May 2020).
  37. Standards Norway. NORSOK Standard N-004 Design of Steel Structures; Standards Norway: Lysaker, Norway, 2004. [Google Scholar]
Figure 1. Structure of the divisional model genetic algorithm (DMGA), in which the population is divided into the targeted mutation (TM), pattern search-genetic algorithm (PS-GA), and particle swarm optimization (PSO) divisions.
Figure 2. Flowchart of the divisional model genetic algorithm (DMGA).
Figure 3. Operators of standard genetic algorithm (SGA): (a) crossover and (b) mutation operators. The chromosomes are coded in binary.
Figure 4. Flowchart of the pattern search (PS).
Figure 5. Contraction of the search range through TM. Each point represents a chromosome and the white line marks the median in one dimension. In this case, the right side is the superior part of this dimension, and a 2/3 contraction is applied.
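For readers who prefer pseudocode, the contraction in Figure 5 can be sketched as follows. This is a minimal illustration written for this text, under our own assumptions (minimization, the superior side chosen by mean fitness, and bounds stored as NumPy arrays); it is not the authors' implementation.

```python
import numpy as np

def tm_contract(population, fitness, lower, upper, keep_ratio=2.0 / 3.0):
    """Contract the search range dimension by dimension (cf. Figure 5).

    Assumptions (ours, not the paper's): minimization; the superior side of
    each dimension is the side of the median whose members have the lower
    mean fitness; the bound on the inferior side is moved so that the
    retained range is keep_ratio of the original range.
    """
    pop = np.asarray(population, dtype=float)
    fit = np.asarray(fitness, dtype=float)
    lo = np.asarray(lower, dtype=float).copy()
    up = np.asarray(upper, dtype=float).copy()
    for d in range(pop.shape[1]):
        median = np.median(pop[:, d])
        right = pop[:, d] >= median
        if not right.any() or right.all():
            continue  # degenerate split; leave this dimension unchanged
        span = up[d] - lo[d]
        if fit[right].mean() < fit[~right].mean():
            lo[d] = up[d] - keep_ratio * span   # right side superior: cut from the left
        else:
            up[d] = lo[d] + keep_ratio * span   # left side superior: cut from the right
    return lo, up
```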
Figure 6. Fitness plot of each division in the DMGA versus iterations in solving Schwefel’s function.
Figure 7. Surface plot of Easom's function for x1 and x2 ranging from −10 to 10.
Figure 8. Search range of the DMGA by targeted mutation versus iterations in solving Easom’s function.
Figure 9. Finite element analysis (FEA) model of the jacket support structure.
Figure 10. Regular wave theory and the design wave [35].
Figure 11. Wave profile and load component.
Figure 12. Wave load on a monopile tubular member with different diameters.
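The wave load on a slender tubular member of diameter D is commonly evaluated per unit length with the Morison equation [34], which is the usual basis for plots such as Figure 12. The sketch below is a generic illustration; the density and coefficient values are assumptions for illustration only, not values taken from the paper.

```python
import math

def morison_load_per_length(D, u, du_dt, rho=1025.0, Cd=1.0, Cm=2.0):
    """Inline wave force per unit length on a tubular member of diameter D [34].

    rho is seawater density (kg/m^3); Cd and Cm are illustrative drag and
    inertia coefficients. u and du_dt are the horizontal water-particle
    velocity (m/s) and acceleration (m/s^2) at the member location.
    """
    drag = 0.5 * rho * Cd * D * abs(u) * u                   # drag term
    inertia = rho * Cm * (math.pi * D ** 2 / 4.0) * du_dt    # inertia term
    return drag + inertia
```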
Figure 13. Optimization procedure.
Figure 14. Mass versus the number of function evaluations in solving the jacket substructure optimization problem by hybrid algorithms.
Figure 15. Nominal stress contour of each design: (a) Reference, (b) DMGA, (c) SGA, (d) PSO, (e) PSGA, and (f) lifecycle model GA (LMGA).
Table 1. Computational results of the SGA and PSO and their hybrids.

| Function | Measure | DMGA | SGA | PSO | PSGA | PSPSO | HGAPSO | LMGA |
|---|---|---|---|---|---|---|---|---|
| Ackley | Effectiveness 1 | 100 | 0 | 56 | 68 | 100 | 91 | 99 |
| | Efficiency 2 | 5380 | 0 | 1451 | 252,278 | 1891 | 219,310 | 13,040 |
| Schwefel | Effectiveness 1 | 100 | 100 | 6 | 100 | 2 | 100 | 100 |
| | Efficiency 2 | 1722 | 175,399 | 1330 | 34,476 | 195 | 48,414 | 22,643 |
| Rastrigin | Effectiveness 1 | 100 | 0 | 0 | 71 | 100 | 0 | 0 |
| | Efficiency 2 | 3197 | 0 | 0 | 203,757 | 4389 | 0 | 0 |
| De Joung | Effectiveness 1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | Efficiency 2 | 183 | 3434 | 395 | 192 | 194 | 686 | 419 |
| Rosenbrock | Effectiveness 1 | 99 | 1 | 67 | 2 | 79 | 100 | 98 |
| | Efficiency 2 | 35,237 | 320,580 | 8424 | 235,324 | 10,072 | 64,466 | 154,925 |
| Goldstein-Price | Effectiveness 1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| | Efficiency 2 | 446 | 3365 | 482 | 643 | 527 | 1259 | 524 |
| Easom | Effectiveness 1 | 100 | 100 | 99 | 100 | 95 | 100 | 100 |
| | Efficiency 2 | 962 | 11,912 | 7244 | 3284 | 1608 | 6288 | 1909 |
| Zakharov | Effectiveness 1 | 100 | 0 | 13 | 3 | 100 | 95 | 99 |
| | Efficiency 2 | 10,370 | 0 | 1209 | 507,315 | 2251 | 265,632 | 57,163 |
| Hartman (H6,4) | Effectiveness 1 | 80 | 96 | 6 | 98 | 54 | 40 | 56 |
| | Efficiency 2 | 7502 | 161,007 | 136,880 | 65,964 | 836 | 32,072 | 84,422 |
| Eggholder | Effectiveness 1 | 64 | 86 | 0 | 72 | 8 | 49 | 97 |
| | Efficiency 2 | 90,029 | 164,835 | 0 | 164,080 | 25,302 | 66,124 | 106,781 |
| Schaffer | Effectiveness 1 | 84 | 93 | 0 | 95 | 27 | 31 | 24 |
| | Efficiency 2 | 118,778 | 188,019 | 0 | 159,428 | 56,886 | 74,706 | 286,874 |
| Styblinski-Tang | Effectiveness 1 | 100 | 42 | 16 | 99 | 12 | 100 | 100 |
| | Efficiency 2 | 7383 | 87,744 | 1084 | 35,968 | 242 | 26,356 | 13,725 |
| Beale | Effectiveness 1 | 100 | 100 | 89 | 100 | 94 | 100 | 100 |
| | Efficiency 2 | 729 | 3775 | 537 | 2484 | 624 | 6559 | 749 |

1 Times of success in 100 runs of each function. 2 Average number of function evaluations in 100 runs for each function.
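The two measures in Table 1 can be computed per benchmark as sketched below. The run representation (a success flag and an evaluation count per run) is our assumption for illustration; the paper does not state whether the evaluation average excludes failed runs.

```python
def summarize_runs(runs):
    """Summarize 100 independent runs of one benchmark (cf. Table 1 footnotes).

    Each run is assumed to be a dict such as {"success": True, "evaluations": 5380};
    this representation is ours, not the paper's.
    """
    effectiveness = sum(1 for run in runs if run["success"])          # successes in 100 runs
    efficiency = sum(run["evaluations"] for run in runs) / len(runs)  # mean function evaluations
    return effectiveness, efficiency
```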
Table 2. Rankings of the performance analyses.

| Function | DMGA | SGA | PSO | PSGA | PSPSO | HGAPSO | LMGA |
|---|---|---|---|---|---|---|---|
| Ackley | 2 | 7 | 6 | 5 | 1 | 4 | 3 |
| Schwefel | 1 | 5 | 6 | 3 | 7 | 4 | 2 |
| Rastrigin | 1 | 4 | 4 | 3 | 2 | 4 | 4 |
| De Joung | 1 | 7 | 4 | 2 | 3 | 6 | 5 |
| Rosenbrock | 2 | 7 | 5 | 6 | 4 | 1 | 3 |
| Goldstein-Price | 1 | 7 | 2 | 5 | 4 | 6 | 3 |
| Easom | 1 | 5 | 6 | 3 | 7 | 4 | 2 |
| Zakharov | 2 | 7 | 5 | 6 | 1 | 4 | 3 |
| Hartman (H6,4) | 3 | 2 | 7 | 1 | 5 | 6 | 4 |
| Eggholder | 4 | 2 | 7 | 3 | 6 | 5 | 1 |
| Schaffer | 3 | 2 | 7 | 1 | 5 | 4 | 6 |
| Styblinski-Tang | 1 | 5 | 6 | 4 | 7 | 3 | 2 |
| Beale | 1 | 4 | 6 | 3 | 6 | 5 | 2 |
| Avg. Rank | 1.77 | 4.92 | 5.46 | 3.46 | 4.46 | 4.31 | 3.08 |
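The Avg. Rank row is the arithmetic mean of each algorithm's per-function ranks over the 13 benchmarks; the snippet below reproduces it directly from the table values.

```python
# Reproducing the "Avg. Rank" row of Table 2 from the per-function ranks.
ranks = {
    "DMGA":   [2, 1, 1, 1, 2, 1, 1, 2, 3, 4, 3, 1, 1],
    "SGA":    [7, 5, 4, 7, 7, 7, 5, 7, 2, 2, 2, 5, 4],
    "PSO":    [6, 6, 4, 4, 5, 2, 6, 5, 7, 7, 7, 6, 6],
    "PSGA":   [5, 3, 3, 2, 6, 5, 3, 6, 1, 3, 1, 4, 3],
    "PSPSO":  [1, 7, 2, 3, 4, 4, 7, 1, 5, 6, 5, 7, 6],
    "HGAPSO": [4, 4, 4, 6, 1, 6, 4, 4, 6, 5, 4, 3, 5],
    "LMGA":   [3, 2, 4, 5, 3, 3, 2, 3, 4, 1, 6, 2, 2],
}
for name, r in ranks.items():
    print(f"{name}: {sum(r) / len(r):.2f}")   # e.g., DMGA -> 1.77
```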
Table 3. Parameters for wind and wave loads.

| Mean Wind Speed | Gust Factor | Gust Wind | Wave Height | Wave Period | Wave Length |
|---|---|---|---|---|---|
| 50 m/s | 1.4 | 70 m/s | 14.8 m | 12.7 s | 197 m |
Table 4. Parameters for JSS optimization.

| PS Step Size | PS Tolerance | ω for PSO | φ1 for PSO | φ2 for PSO |
|---|---|---|---|---|
| 80/6 1 | 20/1.5 1 | 1 | 1 | 0.5 |

1 These parameters are for the radius and thickness, respectively.
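The ω, φ1, and φ2 values in Table 4 correspond to the inertia weight and acceleration coefficients of the modified PSO update of Shi and Eberhart [21]. A minimal sketch of that update is given below; the variable names are our own, φ1 and φ2 are assumed here to be the cognitive and social coefficients, and how the PSO division embeds this step is not shown.

```python
import numpy as np

omega, phi1, phi2 = 1.0, 1.0, 0.5   # values from Table 4

def pso_step(x, v, pbest, gbest, rng=np.random.default_rng(0)):
    """One inertia-weight PSO velocity/position update [21] for a single particle.

    x, v, pbest, and gbest are NumPy arrays of equal shape.
    """
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = omega * v + phi1 * r1 * (pbest - x) + phi2 * r2 * (gbest - x)
    return x + v_new, v_new
```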
Table 5. The results of the JSS optimization. Member cells list thickness (mm)/radius (mm).

| Member | DMGA | SGA | PSO | PSGA | LMGA | Reference |
|---|---|---|---|---|---|---|
| Atmospheric brace | 7.2/100 | 14/100 | 7.4/165 | 12/100 | 14.5/101 | 18.9/203 |
| Atmospheric upper leg | 43.4/463 | 49/447 | 33.9/673 | 45/463 | 41.7/513 | 42.5/600 |
| Atmospheric lower leg | 49/390 | 49/492 | 48.9/557 | 56/363 | 40/450 | 42.5/600 |
| Splash brace | 6/106 | 12/93 | 18.6/277 | 6/100 | 16.6/143 | 12.8/226 |
| Splash leg | 27.2/569 | 30.9/527 | 34.1/521 | 28/566 | 32.2/510 | 34/600 |
| Immersion brace | 10.6/119 | 9/115 | 10/387 | 7/124 | 6/173 | 18.9/203 |
| Immersion upper leg | 36.7/565 | 48/461 | 28.1/674 | 42/507 | 34.9/580 | 42.5/600 |
| Immersion lower leg | 33/837 | 62/546 | 58.5/611 | 33/837 | 41.8/644 | 67.5/650 |
| Mass (ton) | 200 | 226 | 267 | 207 | 209 | 300 |
| Converged iterations | 400 | 850 | over 1500 | 1420 | 1400 | - |
| Maximum stress (MPa) | 229 | 228 | 228 | 229 | 235 | 203 |
| Member of maximum stress | Immersion lower leg | Atmospheric upper leg | Immersion lower leg | Immersion lower leg | Immersion lower leg | Splash leg |
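The Mass row follows from the tubular cross-sections defined by the thickness and radius design variables. The sketch below shows a per-member calculation under assumptions of our own: the tabulated radius is taken as the outer radius, a steel density of 7850 kg/m3 is assumed, and member lengths must be supplied by the caller because they are not listed in the tables.

```python
import math

RHO_STEEL = 7850.0  # kg/m^3, assumed steel density

def tubular_member_mass(radius_mm, thickness_mm, length_m):
    """Mass of one hollow tubular member from its Table 5 design variables."""
    r_out = radius_mm / 1000.0
    r_in = r_out - thickness_mm / 1000.0
    area = math.pi * (r_out ** 2 - r_in ** 2)   # wall cross-sectional area, m^2
    return RHO_STEEL * area * length_m          # kg
```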

