This study delineates the development of an optimization framework for the preliminary design phase of Floating Offshore Wind Turbines (FOWTs); the central challenge addressed is the optimization of the FOWT platform dimensional parameters in relation to motion responses. Although the three-dimensional potential flow (TDPF) panel method is recognized for its precision in calculating FOWT motion responses, its computational intensity necessitates an alternative approach for efficiency. Herein, a novel application of varying-fidelity frequency-domain computational strategies is introduced, which synthesizes strip theory with the TDPF panel method to strike a balance between computational speed and accuracy. The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies. The optimization objectives are centered on the platform's motion response in the heave and pitch directions under general sea conditions. The steel usage, the range of design variables, and geometric considerations serve as optimization constraints, while the angle of the pontoons, the number of columns, the radius of the central column, and the parameters of the mooring lines are held constant. This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm II (NSGA-II). For the case of the IEA UMaine VolturnUS-S Reference Platform, Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives. The efficacy of the final designs is substantiated through a time-domain calculation model, which confirms that the motion responses in extreme sea conditions are superior to those of the initial design.
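The core of the NSGA-II framework referenced above is the repeated non-dominated sorting of candidate designs into successive Pareto fronts. As a minimal illustration, not the paper's implementation and using hypothetical objective vectors (e.g., heave and pitch response measures to be minimized), the sorting step can be sketched as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts, NSGA-II style: front 0 holds all
    non-dominated points, front 1 those dominated only by front 0, etc."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

In the full algorithm, these front indices (plus a crowding-distance measure) drive selection at each generation.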
In the generalized continuum mechanics (GCM) theory framework, asymmetric wave equations encompass the characteristic scale parameters of the medium, accounting for microstructure interactions. This study integrates two theoretical branches of GCM, the modified couple stress theory (M-CST) and the one-parameter second-strain-gradient theory, to form a novel asymmetric wave equation in a unified framework. Numerical modeling of this asymmetric wave equation accurately describes subsurface structures, with vital implications for subsequent seismic wave inversion and imaging endeavors. However, employing finite-difference (FD) methods for numerical modeling may introduce numerical dispersion, adversely affecting accuracy. The design of an optimal FD operator is therefore crucial for enhancing the accuracy of numerical modeling and emphasizing the scale effects. This study devises a hybrid scheme that couples the dung beetle optimization (DBO) algorithm with a simulated annealing (SA) algorithm, denoted the SA-based hybrid DBO (SDBO) algorithm. An FD operator optimization method under the SDBO algorithm was developed and applied to the numerical modeling of asymmetric wave equations in a unified framework. Integrating the DBO and SA algorithms mitigates the risk of convergence to a local extremum. The numerical dispersion outcomes underscore that the proposed SDBO algorithm yields FD operators with precision errors constrained to 0.5‱ while encompassing broader spectrum coverage, confirming the efficacy of the SDBO algorithm. Ultimately, the numerical modeling results demonstrate that the new FD method based on the SDBO algorithm effectively suppresses numerical dispersion and enhances the accuracy of elastic wave numerical modeling, thereby accentuating scale effects. This result is significant for extracting wavefield perturbations induced by complex microstructures in the medium and for the analysis of scale effects.
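The SA component of the hybrid scheme contributes the Metropolis acceptance rule, which occasionally accepts worse candidates so the search can escape local extrema. A minimal sketch on a toy one-dimensional objective follows; the actual SDBO optimizes FD operator coefficients against a dispersion-error measure, which is not reproduced here, and all parameter values are illustrative:

```python
import math
import random

def sa_accept(curr_cost, cand_cost, temperature, rng):
    """Metropolis rule: always accept improvements; accept worse moves with
    probability exp(-delta/T), which lets the search escape local minima."""
    if cand_cost <= curr_cost:
        return True
    return rng.random() < math.exp((curr_cost - cand_cost) / temperature)

def sa_refine(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=200, seed=0):
    """Bare SA loop standing in for the SA stage of the hybrid SDBO scheme."""
    rng = random.Random(seed)
    x, t, best = x0, t0, x0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # local perturbation
        if sa_accept(f(x), f(cand), t, rng):
            x = cand
        if f(x) < f(best):
            best = x
        t *= cooling                          # geometric cooling schedule
    return best
```

In the hybrid scheme this acceptance rule is applied to candidates produced by the DBO population update rather than to a single random walker.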
This research paper presents a comprehensive investigation into the effectiveness of DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology encompasses a detailed exploration of the algorithm's configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future advancements in the field. The findings of this study not only confirm the practical applicability and theoretical soundness of DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights into the optimization domain, offering a promising direction for future inquiries and technological innovations.
Analyzing rock mass seepage using the discrete fracture network (DFN) flow model poses challenges when dealing with complex fracture networks. This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures. Notably, this model efficiently manages over 20,000 fractures without necessitating adjustments to the DFN geometry. All geometric analyses, such as identifying connected fractures, dividing the two-dimensional domain into closed loops, triangulating arbitrary loops, and refining triangular elements, are fully automated. The analysis processes are comprehensively introduced, and core algorithms, along with their pseudo-codes, are outlined and explained to assist readers in their programming endeavors. The accuracy of the geometric analyses is validated through topological graphs representing the connection relationships between fractures. In practical application, the proposed model is employed to assess the water-sealing effectiveness of an underground storage cavern project. The analysis results indicate that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures. Furthermore, following extensive modification and optimization, the scale and precision of model computation suggest that the proposed model and developed codes can meet the requirements of engineering applications.
A hybrid identification model based on multilayer artificial neural networks (ANNs) and the particle swarm optimization (PSO) algorithm is developed to improve the efficiency of simultaneously identifying the thermal conductivity and effective absorption coefficient of semitransparent materials. For the direct model, the spherical harmonic method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing, emitting, and non-scattering 2D axisymmetric gray medium in the context of the laser flash method. For the identification part, the temperature field and the incident radiation field at different positions are first chosen as observables. Then, a traditional identification model based on the PSO algorithm is established. Finally, multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process. The results show that, compared with the traditional identification model, the time cost of the hybrid identification model is reduced by about 1000 times. Moreover, the hybrid identification model maintains a high level of accuracy even in the presence of measurement errors.
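The identification stage described above rests on a PSO loop whose objective is the fast surrogate rather than the expensive direct model. The following bare-bones PSO sketch uses a toy quadratic objective standing in for the ANN surrogate; the inertia and acceleration coefficients are common textbook values, not those of the paper:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization. In the hybrid identification
    model, f would be the cheap ANN surrogate of the direct model."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # each particle's best position
    pcost = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]    # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # clamp positions to the search bounds
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            cost = f(xs[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], cost
                if cost < gcost:
                    gbest, gcost = xs[i][:], cost
    return gbest, gcost
```

Replacing f with a trained surrogate is what yields the roughly thousand-fold speedup reported, since each fitness evaluation no longer solves the coupled conduction-radiation problem.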
Groundwater inverse modeling is a vital technique for estimating unmeasurable model parameters and enhancing numerical simulation accuracy. This paper comprehensively reviews the current advances and future prospects of metaheuristic algorithm-based groundwater model parameter inversion. Initially, the simulation-optimization parameter estimation framework is introduced, which involves the integration of simulation models with metaheuristic algorithms. The subsequent sections explore the fundamental principles of four widely employed metaheuristic algorithms, the genetic algorithm (GA), particle swarm optimization (PSO), simulated annealing (SA), and differential evolution (DE), highlighting their recent applications in water resources research and related areas. Then, a solute transport model is designed to illustrate how to apply and evaluate these four optimization algorithms in addressing challenges related to model parameter inversion. Finally, three noteworthy directions are presented to address the common challenges among current studies: balancing diverse exploration and centralized exploitation within metaheuristic algorithms, the local approximation error of the surrogate model, and the curse of dimensionality in spatially variable heterogeneous parameters. In summary, this review paper provides theoretical insights and practical guidance for further advancements in groundwater inverse modeling studies.
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sums to a constant, such as 100%. The statistical linear model is the most commonly used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique applied in various fields to find relationships between variables of interest. When estimating linear regression parameters, which are useful for applications such as future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, under maximum likelihood or maximum a posteriori (MAP) estimation. Using the current parameter estimate, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined the performance of the EM algorithm on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
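The E-step/M-step alternation described above can be illustrated for simple linear regression with missing covariate values: impute each missing value from the current fit, then refit on the completed data, and repeat. This is a simplified EM-style sketch (univariate, missing x only, Gaussian assumptions), not the study's compositional-data procedure:

```python
def em_impute_regression(pairs, iters=100):
    """EM-style iterative imputation for y = b0 + b1*x with some x missing
    (None). E-step: fill each missing x from the current reverse regression
    of x on y. M-step: refit ordinary least squares on the completed data."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    obs = [x for x in xs if x is not None]
    fill = sum(obs) / len(obs)                 # start from mean imputation
    xf = [x if x is not None else fill for x in xs]

    def ols(u, v):
        """Intercept and slope of the least-squares fit v = b0 + b1*u."""
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        b1 = (sum((a - mu) * (b - mv) for a, b in zip(u, v))
              / sum((a - mu) ** 2 for a in u))
        return mv - b1 * mu, b1

    for _ in range(iters):
        a0, a1 = ols(ys, xf)                   # reverse regression x ~ y
        xf = [x if x is not None else a0 + a1 * y for x, y in zip(xs, ys)]
    return ols(xf, ys)                         # final fit y ~ x
```

On noiseless data the fills converge to the true line, which is the behavior the study measures (via Aitchison distances) on noisy compositional data.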
Numerical challenges, including non-uniqueness, non-convexity, undefined gradients, and high curvature of the positive level sets of the yield function, are encountered in stress integration when utilizing the return-mapping algorithm family. These phenomena are illustrated by an assessment of four typical yield functions: the modified spatially mobilized plane criterion, the Lade criterion, the Bigoni-Piccolroaz criterion, and the micromechanics-based upscaled Drucker-Prager criterion. One remedy to these issues, named the "Hop-to-Hug" (H2H) algorithm, is proposed via a convexification enhancement upon the classical cutting-plane algorithm (CPA). The improved robustness of the H2H algorithm is demonstrated through a series of integration tests at a single material point. Furthermore, a constitutive model is implemented with the H2H algorithm in the Abaqus/Standard finite-element platform. Element-level and structure-level analyses are carried out to validate the convergence of the H2H algorithm. All validation analyses show that the proposed H2H algorithm offers enhanced stability over the classical CPA while maintaining ease of implementation, since evaluations of the second-order derivatives of the yield function and plastic potential function are circumvented.
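The classical CPA that H2H builds upon iterates an elastic predictor with first-order plastic corrections until the stress returns to the yield surface, needing only first derivatives. A one-dimensional perfect-plasticity sketch (yield f = |sigma| - sigma_y) makes the iteration concrete; it is illustrative only and far simpler than the four criteria assessed in the paper:

```python
def cutting_plane_return(sigma_trial, sigma_y, E, tol=1e-10, max_iter=50):
    """Classical cutting-plane return mapping for 1D perfect plasticity.
    Each pass linearizes the consistency condition f = 0 around the current
    stress and applies a plastic corrector; no second derivatives appear."""
    sigma = sigma_trial
    dgamma_total = 0.0
    for _ in range(max_iter):
        f = abs(sigma) - sigma_y
        if f <= tol:                       # returned to the yield surface
            break
        n = 1.0 if sigma >= 0 else -1.0    # df/dsigma (flow direction)
        dgamma = f / (E * n * n)           # linearized consistency condition
        sigma -= E * dgamma * n            # plastic corrector step
        dgamma_total += dgamma
    return sigma, dgamma_total
```

In this 1D case the surface is flat, so one correction suffices; the curvature and non-convexity issues the paper addresses arise when the same iteration is applied to curved, possibly non-convex yield surfaces in stress space.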
With the continuous growth of power demand and the diversification of the power consumption structure, distribution network losses have gradually become a focus of attention. Given the problems of single loss-reduction measures and the lack of economy and practicality in existing research, this paper proposes an optimization method for distribution network loss reduction based on the tabu search algorithm that optimizes the combination and parameter configuration of loss-reduction measures. The optimization model is developed with the goal of maximizing comprehensive benefits, incorporating both economic and environmental factors and accounting for investment costs, including the cost of power losses. Additionally, the model ensures that constraint conditions such as power flow equations, voltage deviations, and line transmission capacities are satisfied. The solution is obtained through a tabu search algorithm, which is well-suited for solving nonlinear problems with multiple constraints. In a case study of a 10 kV, 25-node network, simulation results show that the method can significantly reduce network losses while ensuring the economy and environmental friendliness of the system, providing a theoretical basis for distribution network planning.
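Tabu search explores neighboring configurations while forbidding recently visited moves, which suits the discrete choice among loss-reduction measures. The sketch below runs on a hypothetical budget-constrained measure-selection problem (benefits, costs, and budget are invented numbers, and the real model's power flow and voltage constraints are reduced to a single budget check):

```python
def tabu_search(benefit, cost, budget, iters=30, tenure=3):
    """Bit-flip tabu search: maximize total benefit within a budget, holding
    recently flipped measures tabu to avoid cycling. An aspiration rule
    overrides tabu status when a move beats the best solution found so far."""
    n = len(benefit)

    def value(sol):
        total_cost = sum(c for c, s in zip(cost, sol) if s)
        total_benefit = sum(b for b, s in zip(benefit, sol) if s)
        return total_benefit if total_cost <= budget else -1  # reject infeasible

    sol = [0] * n
    best, best_val = sol[:], value(sol)
    tabu = {}  # bit index -> iteration until which re-flipping stays tabu
    for it in range(iters):
        candidates = []
        for j in range(n):
            neighbor = sol[:]
            neighbor[j] ^= 1                       # toggle measure j
            v = value(neighbor)
            if tabu.get(j, -1) < it or v > best_val:   # non-tabu or aspiration
                candidates.append((v, j, neighbor))
        v, j, sol = max(candidates)    # best admissible neighbor (may be worse)
        tabu[j] = it + tenure
        if v > best_val:
            best, best_val = sol[:], v
    return best, best_val
```

Note that the search deliberately accepts non-improving moves; that, plus the tabu list, is what lets it leave local optima that a greedy method would be stuck in.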
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this article provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on major development boards, software frameworks, sensors, and algorithms used in various applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications across various sectors.
The basin effect was first described following the analysis of seismic ground motion associated with the 1985 MW 8.1 earthquake in Mexico. Basins affect the propagation of seismic waves through various mechanisms, and several unique phenomena, such as the basin edge effect, the basin focusing effect, and basin-induced secondary waves, have been observed. Understanding and quantitatively predicting these phenomena are crucial for earthquake disaster reduction. Some pioneering studies in this field have proposed a quantitative relationship between the basin effect on ground motion and basin depth. Unfortunately, basin effect phenomena predicted using a model based only on basin depth exhibit large deviations from actual distributions, implying severe shortcomings of single-parameter basin effect modeling. Quaternary sediments are thick and widely distributed in the Beijing-Tianjin-Hebei region. The seismic media inside and outside of this basin have significantly different physical properties, and the basin bottom forms an interface with strong seismic reflections. In this study, we established a three-dimensional structural model of the Quaternary sedimentary basin based on the velocity structure model of the North China Craton and used it to simulate ground motion under a strong earthquake with the spectral element method, obtaining the spatial distribution characteristics of the ground motion amplification ratio throughout the basin. The back-propagation (BP) neural network algorithm was then introduced to establish a multi-parameter mathematical model for predicting ground motion amplification ratios, with the seismic source location, the physical property ratio of the media inside and outside the basin, the seismic wave frequency, and the basin shape as input parameters. We then examined the main factors influencing the amplification of seismic ground motion in basins based on the prediction results and concluded that the main factors influencing the basin effect are basin shape and differences in the physical properties of media inside and outside the basin.
With the rapid development of Internet of Things technology, the sharp increase in network devices and their inherent security vulnerabilities present a stark contrast, bringing unprecedented challenges to the field of network security, especially in identifying malicious attacks. However, due to the uneven distribution of network traffic data, particularly the imbalance between attack traffic and normal traffic, as well as the imbalance between minority-class attacks and majority-class attacks, traditional machine learning detection algorithms have significant limitations when dealing with sparse network traffic data. To effectively tackle this challenge, we have designed a lightweight intrusion detection model based on diffusion mechanisms, named Diff-IDS, with the core objective of enhancing the model's efficiency in parsing complex network traffic features, thereby significantly improving its detection speed and training efficiency. The model begins by finely filtering network traffic features and converting them into grayscale images, while employing image-flipping techniques for data augmentation. Subsequently, these preprocessed images are fed into a diffusion model based on the U-Net architecture for training. Once the model is trained, we fix the weights of the U-Net network and propose a feature enhancement algorithm based on feature masking to further boost the model's expressiveness. Finally, we devise an end-to-end lightweight detection strategy to streamline the model, enabling efficient lightweight detection of imbalanced samples. Our method has been subjected to multiple experimental tests on renowned network intrusion detection benchmarks, including CICIDS 2017, KDD 99, and NSL-KDD. The experimental results indicate that Diff-IDS leads the current state-of-the-art models in detection accuracy, training efficiency, and lightweight metrics, demonstrating exceptional detection capabilities and robustness.
Gas-bearing volcanic reservoirs have been found in the deep Songliao Basin, China. Choosing proper interpretation parameters for log evaluation is difficult due to complicated mineral compositions and variable mineral contents. Based on the QAPF classification scheme given by the IUGS, we propose a method to determine the mineral contents of volcanic rocks using log data and a genetic algorithm. According to the QAPF scheme, minerals in volcanic rocks are divided into five groups: Q (quartz), A (alkali feldspar), P (plagioclase), M (mafic), and F (feldspathoid). We propose a model called QAPM, which includes porosity, for the volumetric analysis of reservoirs. The log response equations for density, apparent neutron porosity, transit time, gamma ray, and the volumetric photoelectric cross-section index were first established with mineral parameters obtained from the Schlumberger handbook of log mineral parameters. Then the volumes of the four matrix minerals were calculated using the genetic algorithm (GA). The calculated porosity, based on the interpretation parameters, compares well with core porosity, and the rock names given in this paper based on the QAPF classification, according to the four mineral contents, are compatible with those from the chemical analysis of the core samples.
In order to solve the problems of potential incident rescue on expressway networks, an opportunity cost-based method is used to establish a resource dispatch decision model. The model aims to dispatch rescue resources across the regional road network and to obtain the locations of the rescue depots and the numbers of service vehicles assigned to potential incidents. Due to the computational complexity of the decision model, a scene decomposition algorithm is proposed. The algorithm decomposes the dispatch problem from multiple kinds of resources to a single resource and determines the original scene of rescue resources based on the rescue requirements and the resource matrix. Finally, a convenient optimal dispatch scheme is obtained by decomposing each original scene and simplifying the objective function. To illustrate the application of the decision model and the algorithm, a case study of the expressway network around Nanjing, China is presented, and the results show that the model and the proposed algorithm are appropriate.
A solution to compute the optimal path based on a single-line-single-directional (SLSD) road network model is proposed. Unlike in the traditional road network model, in the SLSD conceptual model, being single-directional and single-line in style, a road is no longer a linkage of road nodes but is abstracted as a network node. Similarly, a road node is abstracted as the linkage of two ordered single-directional roads. This model can describe turn restrictions, circular roads, and other real scenarios usually described using a super-graph. A computing framework for optimal path finding (OPF) is then presented. It is proved that the classical Dijkstra and A* algorithms can be directly used for OPF computing of any real-world road network by transforming a super-graph into an SLSD network. Finally, using Singapore road network data, the proposed conceptual model and its corresponding optimal path finding algorithms are validated using a two-step optimal path finding algorithm with a pre-computing strategy based on the SLSD road network.
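The SLSD transformation described above can be made concrete: each single-directional road becomes a graph node and each permitted turn becomes an edge, so a forbidden turn is simply an absent edge and plain Dijkstra applies unchanged. A minimal sketch with hypothetical road IDs and lengths:

```python
import heapq

def slsd_shortest(roads, turns, start_road, end_road):
    """Dijkstra on an SLSD-style network. roads: {road_id: length};
    turns: iterable of permitted (from_road, to_road) pairs. Turn
    restrictions are handled by omitting the forbidden pairs from turns."""
    adj = {}
    for a, b in turns:
        adj.setdefault(a, []).append(b)
    # path cost = sum of road lengths, counting the start road once
    dist = {start_road: roads[start_road]}
    pq = [(dist[start_road], start_road)]
    while pq:
        d, r = heapq.heappop(pq)
        if r == end_road:
            return d
        if d > dist.get(r, float("inf")):   # stale queue entry
            continue
        for nxt in adj.get(r, ()):
            nd = d + roads[nxt]
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return None
```

Deleting the pair ("B", "D") from turns, say, models a banned turn and reroutes the path, which is exactly the scenario a node-based graph needs a super-graph to express.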
Current dynamic finite element model updating methods are either inefficient or restricted by the problem of local optima. To circumvent these limitations, a novel updating method that integrates a meta-model and the genetic algorithm is proposed. The experimental design technique is used to determine the best sampling points for the estimation of polynomial coefficients given the order and the number of independent variables. Finite element analyses are performed to generate the sampling data. Regression analysis is then used to estimate the response surface model approximating the functional relationship between response features and design parameters over the entire design space. In the fitness evaluation of the genetic algorithm, the response surface model substitutes for the finite element model in outputting features for given design parameters when computing the fitness of each individual. Finally, the global optimum corresponding to the updated design parameters is acquired after several generations of evolution. In the application example, finite element analysis and modal testing are performed on a real chassis model. The finite element model is updated using the proposed method. After updating, the root-mean-square error of the modal frequencies is smaller than 2%. Furthermore, the prediction ability of the updated model is validated using the testing results of the modified structure, with the root-mean-square error of the predictions smaller than 2%.
To realize automatic modeling and dynamic simulation of the educational assembling-type robot with an open structure, a general dynamic model for the educational assembling-type robot and a fast simulation algorithm are put forward. First, the educational robot system is abstracted to a multibody system and a general dynamic model of the educational robot is constructed by the Newton-Euler method. The dynamic model is then simplified by combining components with fixed connections according to the structural characteristics of the educational robot. Second, in order to obtain a high-efficiency simulation algorithm, the augmentation algorithm and the direct projective constraint stabilization algorithm are improved based on the sparse matrix technique. Finally, a numerical example is given. The results show that the model and the fast algorithm are valid and effective. This study lays a dynamic foundation for realizing a simulation platform for the educational robot.
A new arrival and departure flight classification method based on the transitive closure algorithm (TCA) is proposed. First, fuzzy set theory and the transitive closure algorithm are introduced. Then four different factors are selected to establish the flight classification model, and a method is given to calculate the delay cost for each class. Finally, the proposed method is applied to the sequencing problems of flights in a terminal area, and the results are compared with those of the traditional classification method (TCM). Results show that the new classification model is effective in reducing the expenses of flight delays, thus optimizing the sequences of arrival and departure flights and improving the efficiency of air traffic control.
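The TCA step referenced above computes the transitive closure of a fuzzy similarity matrix by repeated max-min composition; a λ-cut of the closure then partitions the flights into classes. A minimal sketch with a hypothetical 3×3 similarity matrix (the paper's model builds its matrix from four flight factors, which are not reproduced here):

```python
def maxmin(R, S):
    """Max-min composition of two square fuzzy relations."""
    n = len(R)
    return [[max(min(R[i][k], S[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(R):
    """Square the fuzzy similarity matrix by max-min composition until it
    stabilizes; the result t(R) is a fuzzy equivalence relation."""
    while True:
        R2 = maxmin(R, R)
        if R2 == R:
            return R
        R = R2

def lambda_cut_classes(R, lam):
    """Group elements whose closure similarity >= lambda into one class.
    Valid as a partition because t(R) is transitive."""
    n = len(R)
    seen, classes = set(), []
    for i in range(n):
        if i in seen:
            continue
        cls = [j for j in range(n) if R[i][j] >= lam]
        seen.update(cls)
        classes.append(cls)
    return classes
```

Varying λ sweeps from one coarse class (small λ) to singletons (λ near 1), which is how the number of flight classes is tuned.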
Aiming at the real-time fluctuation and nonlinear characteristics of expressway short-term traffic flow forecasting, the parameter projection pursuit regression (PPPR) model is applied to forecast expressway traffic flow, where the orthogonal Hermite polynomial is used to fit the ridge functions and the least squares method is employed to determine the polynomial weight coefficients c. In order to efficiently optimize the projection directions a and the number M of ridge functions in the PPPR model, the chaos cloud particle swarm optimization (CCPSO) algorithm is applied to optimize the parameters. A CCPSO-PPPR hybrid optimization model for expressway short-term traffic flow forecasting is established, in which the CCPSO algorithm optimizes the projection directions a in the inner layer while the number M of ridge functions is optimized in the outer layer. Traffic volume, weather factors, and the travel date of the previous several time intervals of the road section are taken as the input influencing factors. Example forecasts and model comparison results indicate that the proposed model can achieve a better forecasting effect, with its absolute error controlled within [-6, 6], which can meet the application requirements of expressway traffic flow forecasting.
In order to decrease the model complexity of the rice panicle, given its complicated morphological structure, an interactive L-system based on a substructure algorithm was proposed to model the rice panicle in this study. Through analysis of panicle morphology, geometrical structure models of the panicle spikelet, axis, and branch were first constructed. Based on these, an interactive panicle L-system model was developed by using the substructure algorithm to optimize panicle geometrical models with similar structures. Simulation results showed that the interactive L-system panicle model based on the substructure algorithm could rapidly construct realistic panicle morphological structure. In addition, this method provides a useful reference for modeling other plants.
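An L-system generates structure by parallel string rewriting: at each step, every symbol with a production rule is replaced simultaneously. The mechanism can be sketched in a few lines, here with the textbook algae rules rather than the panicle productions, which the paper does not list:

```python
def expand_lsystem(axiom, rules, depth):
    """Parallel string rewriting: every symbol with a production in `rules`
    is replaced simultaneously at each step; symbols without a production
    are copied through unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

A substructure optimization of the kind the paper describes would expand a repeated branch pattern once, cache its geometry, and reuse it wherever the expanded string repeats, rather than re-deriving identical subtrees.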
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 52371261) and the Science and Technology Projects of Liaoning Province (Grant No. 2023011352-JH1/110).
Abstract: This study delineates the development of an optimization framework for the preliminary design phase of Floating Offshore Wind Turbines (FOWTs); the central challenge addressed is the optimization of the FOWT platform's dimensional parameters in relation to its motion responses. Although the three-dimensional potential flow (TDPF) panel method is recognized for its precision in calculating FOWT motion responses, its computational intensity necessitates an alternative approach for efficiency. Herein, a novel application of varying-fidelity frequency-domain computational strategies is introduced, which combines strip theory with the TDPF panel method to strike a balance between computational speed and accuracy. The Co-Kriging algorithm is employed to build a surrogate model that fuses the two strategies. The optimization objectives are the platform's motion responses in the heave and pitch directions under general sea conditions. Steel usage, the ranges of the design variables, and geometric considerations serve as optimization constraints, while the angle of the pontoons, the number of columns, the radius of the central column, and the mooring-line parameters are held constant. On this basis, a multi-objective optimization model is constructed using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). For the case of the IEA UMaine VolturnUS-S Reference Platform, Pareto fronts obtained with the framework delineate the trade-off between the competing motion-response objectives. The efficacy of the final designs is substantiated with a time-domain calculation model, which confirms that their motion responses in extreme sea conditions are superior to those of the initial design.
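The abstract above relies on NSGA-II to trace Pareto fronts between competing heave and pitch responses. The step at the heart of NSGA-II, grouping candidate designs into successive non-dominated fronts, can be sketched as follows. This is a minimal illustration for a minimization problem, not the authors' implementation, and the sample objective vectors are invented:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group points into Pareto fronts; front 0 is the non-dominated set."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical (heave, pitch) response pairs for five candidate platforms.
designs = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(designs)
```

Here `fronts[0]` contains the designs on the Pareto front; NSGA-II additionally applies crowding-distance selection within fronts, which is omitted for brevity.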
Funding: Supported by projects XJZ2023050044, A2309002, and XJZ2023070052.
Abstract: In the generalized continuum mechanics (GCM) theoretical framework, asymmetric wave equations encompass the characteristic scale parameters of the medium, accounting for microstructure interactions. This study integrates two theoretical branches of GCM, the modified couple stress theory (M-CST) and the one-parameter second-strain-gradient theory, to form a novel asymmetric wave equation in a unified framework. Numerical modeling of this equation accurately describes subsurface structures, with vital implications for subsequent seismic-wave inversion and imaging. However, employing finite-difference (FD) methods for numerical modeling may introduce numerical dispersion, adversely affecting accuracy. The design of an optimal FD operator is therefore crucial for enhancing modeling accuracy and emphasizing the scale effects. To this end, this study devises a hybrid scheme that couples the dung beetle optimization (DBO) algorithm with a simulated annealing (SA) algorithm, denoted the SA-based hybrid DBO (SDBO) algorithm. An FD-operator optimization method based on the SDBO algorithm was developed and applied to the numerical modeling of the unified asymmetric wave equations. Integrating the DBO and SA algorithms mitigates the risk of convergence to a local extremum. The numerical-dispersion results show that the SDBO algorithm yields FD operators whose precision errors are constrained to 0.5‱ while covering a broader spectrum, confirming its efficacy. Ultimately, the numerical modeling results demonstrate that the new SDBO-based FD method effectively suppresses numerical dispersion and enhances the accuracy of elastic-wave numerical modeling, thereby accentuating scale effects. This is significant for extracting wavefield perturbations induced by complex microstructures in the medium and for analyzing scale effects.
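The SDBO hybrid itself is not reproduced here, but its SA component, which is what lets the search escape local extrema, follows the standard Metropolis acceptance rule. The sketch below applies it to a stand-in quadratic objective; the actual objective in the paper is the spectral dispersion error of the FD coefficients, which is not reconstructed:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.95,
                        iters=2000, seed=1):
    """Metropolis-style SA: always accept improvements, accept worse moves
    with probability exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Stand-in objective: distance of trial coefficients from an ideal point.
obj = lambda v: sum((vi - 1.0) ** 2 for vi in v)
best, fbest = simulated_annealing(obj, [0.0, 0.0])
```

In the hybrid scheme described by the abstract, this acceptance rule would refine (or perturb) candidates proposed by the DBO population search.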
Abstract: This research paper presents a comprehensive investigation into the effectiveness of DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology encompasses a detailed exploration of the algorithm's configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future advancements in the field. The findings not only confirm the practical applicability and theoretical soundness of DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights to the optimization domain, offering a promising direction for future inquiries and technological innovations.
Funding: Sponsored by the General Program of the National Natural Science Foundation of China (Grant Nos. 52079129 and 52209148) and the Hubei Provincial General Fund, China (Grant No. 2023AFB567).
Abstract: Analyzing rock-mass seepage using the discrete fracture network (DFN) flow model poses challenges when dealing with complex fracture networks. This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures. Notably, the model efficiently manages over 20,000 fractures without necessitating adjustments to the DFN geometry. All geometric analyses, such as identifying connected fractures, dividing the two-dimensional domain into closed loops, triangulating arbitrary loops, and refining triangular elements, are fully automated. The analysis processes are comprehensively introduced, and the core algorithms, along with their pseudo-codes, are outlined and explained to assist readers in their own programming. The accuracy of the geometric analyses is validated through topological graphs representing the connection relationships between fractures. In practical application, the proposed model is employed to assess the water-sealing effectiveness of an underground storage-cavern project. The analysis results indicate that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures. Furthermore, following extensive modification and optimization, the scale and precision of the model computation suggest that the proposed model and the developed codes can meet the requirements of engineering applications.
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 3122020072) and the Multi-investment Project of Tianjin Applied Basic Research (No. 23JCQNJC00250).
Abstract: A hybrid identification model based on multilayer artificial neural networks (ANNs) and the particle swarm optimization (PSO) algorithm is developed to improve the efficiency of simultaneously identifying the thermal conductivity and effective absorption coefficient of semitransparent materials. For the direct model, the spherical harmonics method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing, emitting, and non-scattering 2D axisymmetric gray medium in the setting of the laser flash method. For the identification part, the temperature field and the incident radiation field at different positions are first chosen as observables. Then, a traditional identification model based on the PSO algorithm is established. Finally, multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process. The results show that, compared with the traditional identification model, the time cost of the hybrid identification model is reduced by a factor of about 1,000. Moreover, the hybrid identification model maintains a high level of accuracy even in the presence of measurement errors.
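The PSO component of the hybrid identification model above is the canonical global-best variant: each particle's velocity is pulled toward its personal best and the swarm's best. A minimal sketch on a stand-in objective (the real objective would compare the ANN-surrogate prediction against measured temperature and radiation fields, which is not reproduced here):

```python
import random

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO minimizing `objective` over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in objective: sphere function in place of the real misfit functional.
gbest, gbest_f = pso(lambda v: sum(x * x for x in v), dim=2)
```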
Funding: Supported by the Fundamental Research Funds for the Central Universities (XJ2023005201) and the National Natural Science Foundation of China (NSFC: U2267217, 42141011, and 42002254).
Abstract: Groundwater inverse modeling is a vital technique for estimating unmeasurable model parameters and enhancing the accuracy of numerical simulation. This paper comprehensively reviews the current advances and future prospects of metaheuristic-algorithm-based groundwater model parameter inversion. Initially, the simulation-optimization parameter-estimation framework is introduced, which integrates simulation models with metaheuristic algorithms. The subsequent sections explore the fundamental principles of four widely employed metaheuristic algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), simulated annealing (SA), and differential evolution (DE), and highlight their recent applications in water resources research and related areas. Then, a solute transport model is designed to illustrate how to apply and evaluate these four optimization algorithms in addressing model parameter inversion. Finally, three noteworthy directions are presented to address common challenges in current studies: balancing diverse exploration and centralized exploitation within metaheuristic algorithms, the local approximation error of the surrogate model, and the curse of dimensionality in spatially variable heterogeneous parameters. In summary, this review provides theoretical insights and practical guidance for further advancements in groundwater inverse modeling.
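Of the four metaheuristics the review above covers, DE is the one not sketched elsewhere on this page. The classic DE/rand/1/bin scheme mutates with a scaled difference of two random population members, applies binomial crossover, and keeps the trial vector only if it is no worse. A hedged sketch on an invented two-parameter inversion objective:

```python
import random

def differential_evolution(objective, bounds, pop_size=15, f=0.8, cr=0.9,
                           iters=150, seed=0):
    """DE/rand/1/bin: difference-vector mutation, binomial crossover,
    greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)           # guarantees one mutated gene
            trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = objective(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Invented misfit with optimum at (2, -1), standing in for a transport-model fit.
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, best_f = differential_evolution(misfit, [(-5, 5), (-5, 5)])
```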
Abstract: Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sums to a constant such as 100%. The statistical linear model is the most commonly used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data are present. The linear regression model is a commonly used statistical modeling technique applied in various settings to find relationships between variables of interest. When estimating linear regression parameters, which are useful for applications such as prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can make data recovery costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively computes maximum likelihood (or maximum a posteriori, MAP) estimates of parameters in statistical models that depend on unobserved variables or data. Using the current parameter estimate, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined how well the EM algorithm performs on a synthetic compositional dataset with missing observations, using both robust least squares and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
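The E/M alternation described above can be made concrete in the simplest non-trivial setting: a bivariate normal with some x-values missing. The E-step replaces each missing x by its conditional mean and second moment given the observed y; the M-step re-estimates the mean vector and covariance. This is only an illustration of the mechanics, not the paper's compositional-regression setup, and the toy data are invented:

```python
def em_bivariate_normal(pairs, iters=50):
    """EM for a bivariate normal when some x-values are missing (None).
    Assumes Var(y) > 0.  Returns (mean_x, mean_y, var_x, cov_xy, var_y)."""
    xs = [x for x, y in pairs if x is not None]
    ys = [y for x, y in pairs]
    n = len(pairs)
    # Initialize moments from the observed values, with zero correlation.
    mx, my = sum(xs) / len(xs), sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / len(xs)
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = 0.0
    for _ in range(iters):
        ex, ex2, exy = [], [], []
        for x, y in pairs:
            if x is None:
                cm = mx + sxy / syy * (y - my)   # E[x | y]
                cv = sxx - sxy ** 2 / syy        # Var[x | y]
                ex.append(cm)
                ex2.append(cv + cm * cm)         # E[x^2 | y]
                exy.append(cm * y)               # E[xy | y]
            else:
                ex.append(x)
                ex2.append(x * x)
                exy.append(x * y)
        # M-step: maximum-likelihood moments from the completed data.
        mx, my = sum(ex) / n, sum(ys) / n
        sxx = sum(ex2) / n - mx * mx
        syy = sum((y - my) ** 2 for y in ys) / n
        sxy = sum(exy) / n - mx * my
    return mx, my, sxx, sxy, syy

# Toy data lying on x = y, with one missing x; EM should recover the line.
pairs = [(1, 1), (2, 2), (3, 3), (4, 4), (None, 5)]
mx, my, sxx, sxy, syy = em_bivariate_normal(pairs)
```

At the fixed point, the implied fill-in for the missing x is mx + (sxy/syy)(5 - my), which converges to 5, consistent with the x = y pattern of the complete pairs.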
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12372376 and U22A20596).
Abstract: Numerical challenges, including non-uniqueness, non-convexity, undefined gradients, and high curvature of the positive level sets of the yield function, are encountered in stress integration when utilizing the return-mapping algorithm family. These phenomena are illustrated by an assessment of four typical yield functions: the modified spatially mobilized plane criterion, the Lade criterion, the Bigoni-Piccolroaz criterion, and the micromechanics-based upscaled Drucker-Prager criterion. One remedy to these issues, named the "Hop-to-Hug" (H2H) algorithm, is proposed via a convexification enhancement of the classical cutting-plane algorithm (CPA). The improved robustness of the H2H algorithm is demonstrated through a series of integration tests at a single material point. Furthermore, a constitutive model is implemented with the H2H algorithm in the Abaqus/Standard finite-element platform. Element-level and structure-level analyses are carried out to validate the convergence of the H2H algorithm. All validation analyses show that the proposed H2H algorithm offers enhanced stability over the classical CPA while maintaining ease of implementation, as evaluations of the second-order derivatives of the yield function and plastic potential function are circumvented.
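The H2H enhancement itself is not reproduced here, but the baseline it improves on, the classical cutting-plane return mapping, is simple to sketch in 1D. Each cut linearizes the yield function along the return path and reduces the stress until the yield condition is satisfied; for linear hardening one cut is exact, while curved yield surfaces need several iterations (and can fail in the non-convex cases the abstract lists). All values below are invented:

```python
def cutting_plane_return(sigma_trial, sigma_y, e_mod, h_mod=0.0,
                         tol=1e-10, max_iter=50):
    """Classical cutting-plane stress return for 1D rate-independent
    plasticity with yield f = |sigma| - (sigma_y + h_mod * alpha)."""
    sigma, alpha = sigma_trial, 0.0
    for _ in range(max_iter):
        f = abs(sigma) - (sigma_y + h_mod * alpha)
        if f <= tol:
            break
        dlam = f / (e_mod + h_mod)   # cut: local linearization of f
        sigma -= dlam * e_mod * (1.0 if sigma >= 0 else -1.0)
        alpha += dlam                # accumulated plastic multiplier
    return sigma, alpha

# Perfect plasticity: trial stress 300 returns to the yield stress 200.
sigma, alpha = cutting_plane_return(300.0, 200.0, e_mod=100.0)
```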
Abstract: With the continuous growth of power demand and the diversification of the power consumption structure, losses in the distribution network have gradually become a focus of attention. Given the problems of single loss-reduction measures and the lack of economy and practicality in existing research, this paper proposes an optimization method for distribution network loss reduction based on the tabu search algorithm and optimizes the combination and parameter configuration of loss-reduction measures. The optimization model is developed with the goal of maximizing comprehensive benefits, incorporating both economic and environmental factors and accounting for investment costs, including the cost of power losses. Additionally, the model ensures that constraints such as the power flow equations, voltage deviations, and line transmission capacities are satisfied. The solution is obtained through a tabu search algorithm, which is well suited to solving nonlinear problems with multiple constraints. Using an example 10 kV 25-node network, the simulation results show that the method can significantly reduce network losses while ensuring the economy and environmental friendliness of the system, providing a theoretical basis for distribution network planning.
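The selection of a combination of loss-reduction measures above is a natural fit for tabu search: the neighborhood is "toggle one measure", recently toggled measures are tabu for a few moves, and an aspiration rule overrides the tabu when a move beats the best solution so far. The benefit function below is a deliberately simplified stand-in (per-measure net benefits, no power-flow constraints):

```python
from collections import deque

def tabu_search(benefit, n_measures, tenure=3, iters=100):
    """Maximize `benefit` over on/off vectors of loss-reduction measures.
    Neighborhood = flip one measure; flipped indices stay tabu for `tenure`
    moves unless the move beats the incumbent best (aspiration)."""
    current = [0] * n_measures
    best, best_val = current[:], benefit(current)
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        candidates = []
        for j in range(n_measures):
            neigh = current[:]
            neigh[j] = 1 - neigh[j]
            val = benefit(neigh)
            if j not in tabu or val > best_val:   # aspiration criterion
                candidates.append((val, j, neigh))
        if not candidates:
            break
        val, j, neigh = max(candidates, key=lambda t: t[0])
        current = neigh          # note: may move downhill, escaping local optima
        tabu.append(j)
        if val > best_val:
            best, best_val = neigh[:], val
    return best, best_val

# Hypothetical net benefit (gain minus cost) per measure; measure 2 loses money.
net = [3, 3, -1, 1]
best, val = tabu_search(lambda x: sum(n * xi for n, xi in zip(net, x)), 4)
```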
Abstract: Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically cover the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications.
Funding: Funded by the General Program of the National Natural Science Foundation of China (No. 42174070) and the General Program of the Beijing Natural Science Foundation (No. 8222035).
Abstract: The basin effect was first described following the analysis of seismic ground motion associated with the 1985 MW 8.1 earthquake in Mexico. Basins affect the propagation of seismic waves through various mechanisms, and several distinctive phenomena, such as the basin-edge effect, the basin-focusing effect, and basin-induced secondary waves, have been observed. Understanding and quantitatively predicting these phenomena are crucial for earthquake disaster reduction. Some pioneering studies in this field have proposed a quantitative relationship between the basin effect on ground motion and basin depth. Unfortunately, basin-effect phenomena predicted using a model based only on basin depth exhibit large deviations from actual distributions, implying severe shortcomings of single-parameter basin-effect modeling. Quaternary sediments are thick and widely distributed in the Beijing-Tianjin-Hebei region. The seismic media inside and outside this basin have significantly different physical properties, and the basin bottom forms an interface with strong seismic reflections. In this study, we established a three-dimensional structural model of the Quaternary sedimentary basin based on the velocity structure model of the North China Craton and used it to simulate ground motion under a strong earthquake with the spectral element method, obtaining the spatial distribution characteristics of the ground-motion amplification ratio throughout the basin. The back-propagation (BP) neural network algorithm was then introduced to establish a multi-parameter mathematical model for predicting ground-motion amplification ratios, with the seismic source location, the ratio of the physical properties of the media inside and outside the basin, the seismic-wave frequency, and the basin shape as input parameters. Based on the prediction results, we examined the main factors influencing the amplification of seismic ground motion in basins and concluded that they are the basin shape and the differences in the physical properties of the media inside and outside the basin.
Funding: Supported by the Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2024GXJS014 and ZDYF2023GXJS163), the National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022 and 62162024), and the Collaborative Innovation Project of Hainan University (XTCX2022XXB02).
Abstract: With the rapid development of Internet of Things technology, the sharp increase in network devices and their inherent security vulnerabilities present a stark contrast, bringing unprecedented challenges to the field of network security, especially in identifying malicious attacks. However, due to the uneven distribution of network traffic data, particularly the imbalance between attack traffic and normal traffic, as well as the imbalance between minority-class and majority-class attacks, traditional machine learning detection algorithms have significant limitations when dealing with sparse network traffic data. To tackle this challenge effectively, we have designed a lightweight intrusion detection model based on diffusion mechanisms, named Diff-IDS, with the core objective of enhancing the model's efficiency in parsing complex network traffic features, thereby significantly improving its detection speed and training efficiency. The model begins by finely filtering network traffic features and converting them into grayscale images, while also employing image-flipping techniques for data augmentation. These preprocessed images are then fed into a diffusion model based on the U-Net architecture for training. Once the model is trained, we fix the weights of the U-Net network and propose a feature enhancement algorithm based on feature masking to further boost the model's expressiveness. Finally, we devise an end-to-end lightweight detection strategy to streamline the model, enabling efficient detection of imbalanced samples. Our method has been evaluated on multiple renowned network intrusion detection benchmarks, including CICIDS 2017, KDD 99, and NSL-KDD. The experimental results indicate that Diff-IDS leads the current state-of-the-art models in detection accuracy, training efficiency, and lightweight metrics, demonstrating exceptional detection capability and robustness.
Funding: National Natural Science Foundation of China (No. 49894194-4).
Abstract: Gas-bearing volcanic reservoirs have been found in the deep Songliao Basin, China. Choosing proper interpretation parameters for log evaluation is difficult due to complicated mineral compositions and variable mineral contents. Based on the QAPF classification scheme given by the IUGS, we propose a method to determine the mineral contents of volcanic rocks using log data and a genetic algorithm. According to the QAPF scheme, the minerals in volcanic rocks are divided into five groups: Q (quartz), A (alkali feldspar), P (plagioclase), M (mafic minerals), and F (feldspathoid). We propose a model, called QAPM, that includes porosity for the volumetric analysis of reservoirs. The log response equations for density, apparent neutron porosity, transit time, gamma ray, and volumetric photoelectric cross-section index were first established with mineral parameters obtained from the Schlumberger handbook of log mineral parameters. The volumes of the four matrix minerals were then calculated using the genetic algorithm (GA). The porosity calculated from the interpretation parameters can be compared with core porosity, and the rock names given in this paper based on the QAPF classification, according to the four mineral contents, are compatible with those from chemical analysis of the core samples.
Funding: The National Natural Science Foundation of China (No. 50422283) and the Science and Technology Key Plan Project of Henan Province (No. 072102360060).
Abstract: To solve the problem of rescuing potential incidents on expressway networks, an opportunity-cost-based method is used to establish a resource dispatch decision model. The model dispatches rescue resources across the regional road network and determines the locations of the rescue depots and the numbers of service vehicles assigned to potential incidents. Due to the computational complexity of the decision model, a scene decomposition algorithm is proposed. The algorithm decomposes the dispatch problem from multiple kinds of resources to a single resource and determines the original scene of rescue resources based on the rescue requirements and the resource matrix. Finally, a convenient optimal dispatch scheme is obtained by decomposing each original scene and simplifying the objective function. To illustrate the application of the decision model and the algorithm, a case study of the expressway network around Nanjing, China, is presented, and the results show that the model and the proposed algorithm are appropriate.
基金The National Key Technology R&D Program of China during the 11th Five Year Plan Period(No.2008BAJ11B01)
Abstract: A solution for computing optimal paths based on a single-line-single-directional (SLSD) road network model is proposed. Unlike the traditional road network model, in the SLSD conceptual model, which is single-directional and single-line in style, a road is no longer a linkage of road nodes but is abstracted as a network node. Similarly, a road node is abstracted as the linkage of two ordered single-directional roads. This model can describe turn restrictions, circular roads, and other real scenarios usually described using a super-graph. A computing framework for optimal path finding (OPF) is then presented. It is proved that the classical Dijkstra and A* algorithms can be directly applied to OPF on any real-world road network by transforming a super-graph into an SLSD network. Finally, using Singapore road network data, the proposed conceptual model and its corresponding optimal path-finding algorithms are validated using a two-step optimal path-finding algorithm with a pre-computing strategy based on the SLSD road network.
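The road-as-node idea above has a compact expression: treat each single-directional road as a graph node and each permitted turn as a directed edge, so a banned turn is simply an absent edge, and plain Dijkstra works unchanged. The sketch below (road names, lengths, and turn tables are invented) assumes the destination road is reachable from the start road:

```python
import heapq

def shortest_road_path(road_len, turns, start_road, end_road):
    """Dijkstra over an SLSD-style network: nodes are single-directional
    roads, edges are permitted turns, and the path cost is the total
    length of the roads traversed."""
    dist = {start_road: road_len[start_road]}
    prev, pq = {}, [(road_len[start_road], start_road)]
    while pq:
        d, r = heapq.heappop(pq)
        if d > dist.get(r, float("inf")):
            continue                      # stale queue entry
        if r == end_road:
            break
        for nxt in turns.get(r, ()):      # only permitted turns exist
            nd = d + road_len[nxt]
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = r
                heapq.heappush(pq, (nd, nxt))
    path, r = [], end_road
    while r != start_road:
        path.append(r)
        r = prev[r]
    path.append(start_road)
    return list(reversed(path)), dist[end_road]

# Toy network: the turn B -> D exists here, so A-B-D (cost 4) beats A-C-D.
road_len = {"A": 1, "B": 2, "C": 5, "D": 1}
turns = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
path, cost = shortest_road_path(road_len, turns, "A", "D")
```

Deleting the entry `"B": ["D"]` models a banned turn and reroutes the path through C with no change to the algorithm.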
Abstract: Current dynamic finite element model updating methods are either inefficient or prone to local optima. To circumvent these problems, a novel updating method that integrates a meta-model with the genetic algorithm is proposed. Experimental design techniques are used to determine the best sampling points for estimating the polynomial coefficients, given the order and the number of independent variables. Finite element analyses are performed to generate the sampling data. Regression analysis is then used to estimate the response surface model, approximating the functional relationship between response features and design parameters over the entire design space. In the fitness evaluation of the genetic algorithm, the response surface model substitutes for the finite element model, outputting response features for given design parameters in the fitness computation for each individual. Finally, the global optimum corresponding to the updated design parameters is obtained after several generations of evolution. In the application example, finite element analysis and modal testing are performed on a real chassis model, and the finite element model is updated using the proposed method. After updating, the root-mean-square error of the modal frequencies is smaller than 2%. Furthermore, the prediction ability of the updated model is validated using the testing results of a modified structure; the root-mean-square error of the predictions is smaller than 2%.
基金Hexa-Type Elites Peak Program of Jiangsu Province(No.2008144)Qing Lan Project of Jiangsu ProvinceFund for Excellent Young Teachers of Southeast University
Abstract: To realize automatic modeling and dynamic simulation of an educational assembling-type robot with an open structure, a general dynamic model for the robot and a fast simulation algorithm are put forward. First, the educational robot system is abstracted as a multibody system, and a general dynamic model of the robot is constructed by the Newton-Euler method. The dynamic model is then simplified by combining components with fixed connections, according to the structural characteristics of the educational robot. Second, to obtain a high-efficiency simulation algorithm, the augmentation algorithm and the direct projective constraint stabilization algorithm are improved based on sparse matrix techniques. Finally, a numerical example is given. The results show that the model and the fast algorithm are valid and effective. This study lays a dynamic foundation for a simulation platform for the educational robot.
Abstract: A new arrival and departure flight classification method based on the transitive closure algorithm (TCA) is proposed. First, fuzzy set theory and the transitive closure algorithm are introduced. Then, four different factors are selected to establish the flight classification model, and a method is given to calculate the delay cost for each class. Finally, the proposed method is applied to the sequencing problems of flights in a terminal area, and the results are compared with those of the traditional classification method (TCM). The results show that the new classification model is effective in reducing the expense of flight delays, thus optimizing the sequences of arrival and departure flights and improving the efficiency of air traffic control.
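The transitive closure step that the classification above rests on is standard fuzzy clustering machinery: square a reflexive fuzzy similarity matrix under max-min composition until it stabilizes, then cut at a threshold λ to obtain equivalence classes. The four flight factors and delay-cost model are not reproduced; the similarity matrix below is invented:

```python
def transitive_closure(r):
    """Max-min transitive closure of a reflexive fuzzy similarity matrix:
    repeatedly compose R with itself until a fixed point is reached."""
    n = len(r)
    while True:
        r2 = [[max(min(r[i][k], r[k][j]) for k in range(n)) for j in range(n)]
              for i in range(n)]
        if r2 == r:
            return r
        r = r2

def classify(r, lam):
    """Lambda-cut of the closure: i and j share a class iff t(i,j) >= lam."""
    t = transitive_closure(r)
    classes, seen = [], set()
    for i in range(len(t)):
        if i in seen:
            continue
        group = [j for j in range(len(t)) if t[i][j] >= lam]
        seen.update(group)
        classes.append(group)
    return classes

# Invented pairwise similarities between three flights.
r = [[1.0, 0.8, 0.3],
     [0.8, 1.0, 0.4],
     [0.3, 0.4, 1.0]]
classes = classify(r, lam=0.5)
```

Lowering λ coarsens the partition: at λ = 0.5 flights 0 and 1 form one class, while at λ = 0.3 all three merge.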
基金The National Natural Science Foundation of China(No.71101014,50679008)Specialized Research Fund for the Doctoral Program of Higher Education(No.200801411105)the Science and Technology Project of the Department of Communications of Henan Province(No.2010D107-4)
Abstract: Aiming at the real-time fluctuation and nonlinear characteristics of expressway short-term traffic flow forecasting, the parameter projection pursuit regression (PPPR) model is applied to forecast expressway traffic flow, where orthogonal Hermite polynomials are used to fit the ridge functions and the least squares method is employed to determine the polynomial weight coefficients c. To efficiently optimize the projection directions a and the number M of ridge functions of the PPPR model, the chaos cloud particle swarm optimization (CCPSO) algorithm is applied to optimize these parameters. The CCPSO-PPPR hybrid optimization model for expressway short-term traffic flow forecasting is thus established, in which the CCPSO algorithm optimizes the optimal projection direction a in the inner layer while the number M of ridge functions is optimized in the outer layer. The traffic volume, weather factors, and travel date of the previous several time intervals of the road section are taken as input factors. Example forecasts and model comparisons indicate that the proposed model obtains a better forecasting effect, with its absolute error controlled within [-6, 6], which meets the application requirements of expressway traffic flow forecasting.
基金Supported by National Natural Science Foundation of China(60802040)Youth Fund in Southwest University of Science and Technology(10zx3106)~~
Abstract: To decrease the model complexity of the rice panicle, given its complicated morphological structure, an interactive L-system based on a substructure algorithm is proposed in this study to model the rice panicle. Through analysis of panicle morphology, geometric structure models of the panicle spikelet, axis, and branch were constructed first. On that basis, an interactive panicle L-system model was developed, using the substructure algorithm to optimize panicle geometric models with similar structures. Simulation results showed that the interactive L-system panicle model based on the substructure algorithm can quickly construct realistic panicle morphological structures. In addition, the method provides a useful reference for modeling other plants.
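The rewriting core of an L-system like the one above is a few lines: apply production rules to every symbol of the axiom for a fixed number of derivation steps. The substructure optimization and interactive geometry are not shown, and the branching rule below is a generic textbook example rather than the paper's panicle grammar:

```python
def expand(axiom, rules, depth):
    """Rewrite every symbol of the axiom `depth` times using the production
    rules; symbols without a rule (e.g. brackets) are copied unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Generic branching grammar: F grows and spawns a bracketed side branch.
rules = {"F": "F[+F]F"}
first = expand("F", rules, 1)    # one derivation step
second = expand("F", rules, 2)   # two derivation steps
```

In a full model, a turtle interpreter would map `F` to an internode segment, `+` to a rotation, and `[`/`]` to pushing and popping the turtle state, which is where the panicle's axis/branch geometry models would plug in.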