The large-scale multi-objective optimization algorithm (LSMOA) based on the grouping of decision variables is an advanced method for handling high-dimensional decision variables. However, in practical problems the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization results; hence, a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among decision variables. If the size of a group surpasses the set threshold, that group undergoes weighted overlapping grouping. Specifically, the interaction strength is evaluated from the interaction frequency and the number of objectives shared among decision variables. The decision variable with the strongest interaction in the group is identified and set aside, the remaining variables are reclassified into subgroups, and finally the set-aside variable is added to each subgroup. MOEAWOD minimizes the interaction between different groups and maximizes the interaction of decision variables within groups, which gives each group a clear direction for convergence and diversity exploration. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of the method; compared with other algorithms, it remains at an advantage.
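A minimal sketch of the overlapping-grouping step described above, assuming the pairwise interaction strengths have already been estimated; the variable names, the connected-component regrouping, and the toy data are illustrative rather than the paper's exact procedure.

```python
from collections import defaultdict

def overlapping_regroup(group, strength, threshold):
    """Split an oversized group: set aside the most interactive variable,
    regroup the rest by their remaining interactions, then add the
    set-aside variable to every subgroup (so subgroups overlap on it)."""
    if len(group) <= threshold:
        return [set(group)]
    # Pick the variable with the largest total interaction strength.
    hub = max(group, key=lambda v: sum(strength.get((v, u), 0) + strength.get((u, v), 0)
                                       for u in group if u != v))
    rest = [v for v in group if v != hub]
    # Regroup the remaining variables by connected components of nonzero interaction.
    adj = defaultdict(set)
    for i, u in enumerate(rest):
        for v in rest[i + 1:]:
            if strength.get((u, v), 0) + strength.get((v, u), 0) > 0:
                adj[u].add(v)
                adj[v].add(u)
    subgroups, seen = [], set()
    for v in rest:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend(adj[w] - comp)
        seen |= comp
        subgroups.append(comp | {hub})   # overlap: the hub variable joins every subgroup
    return subgroups

# Toy example: x3 interacts with everything, so it becomes the shared variable.
strength = {("x1", "x2"): 2, ("x1", "x3"): 3, ("x2", "x3"): 1,
            ("x3", "x4"): 4, ("x4", "x5"): 2, ("x3", "x5"): 1}
print(overlapping_regroup(["x1", "x2", "x3", "x4", "x5"], strength, threshold=3))
```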
With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus measurement stage, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the network into subgroups. The weights of each decision maker and each subgroup are computed from comprehensive network weights and trust weights. In the consensus improvement stage, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement process. Based on the trust relationships among DMs, preferences are modified and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows the subgroups to be reconstructed and updated during the adjustment process, but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method, and a comparison with previous studies highlights its superiority in solving the LGDM problem.
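The subgroup-detection step can be illustrated with networkx's Louvain implementation (available in recent networkx versions); the trust network, the degree-based weighting, and the numbers below are hypothetical stand-ins for the paper's comprehensive network and trust weights.

```python
import networkx as nx

# Hypothetical trust network: edges are weighted by the trust degree between DMs.
G = nx.Graph()
G.add_weighted_edges_from([
    ("dm1", "dm2", 0.9), ("dm2", "dm3", 0.8), ("dm1", "dm3", 0.7),
    ("dm4", "dm5", 0.85), ("dm5", "dm6", 0.6), ("dm3", "dm4", 0.2),
])

# Partition DMs into subgroups with the Louvain community-detection method.
subgroups = nx.community.louvain_communities(G, weight="weight", seed=42)

# One simple way to weight DMs: normalized weighted degree centrality,
# used here as a stand-in for the paper's combined network/trust weights.
degree = dict(G.degree(weight="weight"))
total = sum(degree.values())
dm_weight = {dm: d / total for dm, d in degree.items()}

for k, members in enumerate(subgroups):
    group_weight = sum(dm_weight[dm] for dm in members)
    print(f"subgroup {k}: {sorted(members)}, weight {group_weight:.2f}")
```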
The existing algorithms for solving multi-objective optimization problems fall into three main categories: decomposition-based, dominance-based, and indicator-based. Traditional multi-objective optimization approaches mainly focus on the objectives, treating the decision variables as a single whole without considering their critical role in objective optimization. Consequently, a variety of decision variable grouping algorithms have been proposed. However, these algorithms handle the changes of most decision variables during evolution only coarsely and are time-consuming when searching for the Pareto front. To solve these problems, a multi-objective optimization algorithm that groups decision variables based on the extreme-point Pareto front (MOEA-DV/EPF) is proposed. This algorithm adopts a preprocessing rule to obtain the Pareto-optimal solution set of extreme points generated by simultaneous evolution in the various objective directions, builds a basic Pareto front surface to assess the convergence effect, and analyzes the convergence and distribution effects of the decision variables. In the later stages of optimization, different mutation strategies are adopted according to the nature of the decision variables to speed up evolution and obtain excellent individuals, thus enhancing the performance of the algorithm. Evaluation on the test functions shows that this algorithm solves multi-objective optimization problems more efficiently.
The problem of multiple attribute decision making under fuzzy linguistic environments, in which decision makers can only provide their preferences (attribute values) in the form of trapezoid fuzzy linguistic variables (TFLVs), is studied. The formula of the degree of possibility between two TFLVs is defined, and some of its characteristics are studied. Based on the degree of possibility of fuzzy linguistic variables, an approach to ranking the decision alternatives in multiple attribute decision making with TFLVs is developed. The trapezoid fuzzy linguistic weighted averaging (TFLWA) operator is utilized to aggregate the decision information, and all the alternatives are then ranked by comparing the degrees of possibility of the TFLVs. The method carries out linguistic computation easily and without loss of linguistic information, and thus makes the decision results reasonable and effective. Finally, the implementation process of the proposed method is illustrated and analyzed by a practical example.
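As a rough illustration of aggregating trapezoid fuzzy linguistic information, the sketch below represents each TFLV by four ordered parameters and applies a componentwise weighted average; the possibility-degree formula shown is one common support-based variant from the interval-comparison literature and may differ from the one defined in the paper.

```python
def tflwa(tflvs, weights):
    """Weighted average of trapezoid fuzzy numbers (a, b, c, d), componentwise."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return tuple(sum(w * t[i] for t, w in zip(tflvs, weights)) for i in range(4))

def possibility_ge(p, q):
    """A support-based possibility degree that p >= q for trapezoid fuzzy numbers;
    one of several formulas used in the literature, not necessarily the paper's."""
    a1, _, _, d1 = p
    a2, _, _, d2 = q
    denom = (d1 - a1) + (d2 - a2)
    if denom == 0:
        return 0.5
    return min(max((d1 - a2) / denom, 0.0), 1.0)

# Hypothetical attribute values for one alternative, as TFLVs on a 0-8 linguistic scale.
values = [(3, 4, 5, 6), (5, 6, 7, 8), (2, 3, 4, 5)]
weights = [0.5, 0.3, 0.2]
agg = tflwa(values, weights)
print(agg, possibility_ge(agg, (4, 5, 6, 7)))
```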
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions that optimize memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) Larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) The effectiveness of balancing factors depends more on their specific values than on layer type or depth; (3) In quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, removing trainable parameters from a larger layer preserves fine-tuning accuracy better than removing them from a smaller one. This study provides practical guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
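A back-of-the-envelope illustration of why adapter rank matters differently for layers of different size: a rank-r LoRA adapter on a d_in × d_out weight adds r·(d_in + d_out) trainable parameters, so the same rank is a far smaller relative addition for a wide MLP projection than for a square attention projection. The layer shapes below are assumed, LLaMA-style values.

```python
def lora_params(d_in, d_out, r):
    """Trainable parameters added by a rank-r LoRA adapter on a d_in x d_out weight."""
    return r * (d_in + d_out)

# Hypothetical layer shapes, roughly in the spirit of a LLaMA-style block.
attn_proj = (4096, 4096)    # self-attention projection (smaller layer)
mlp_proj = (4096, 11008)    # MLP up-projection (larger layer)

for r in (64, 32, 16, 8):
    a = lora_params(*attn_proj, r)
    m = lora_params(*mlp_proj, r)
    print(f"rank {r:>2}: attention adapter {a:,} params, MLP adapter {m:,} params")

# The same adapter budget is a much smaller *relative* change for the MLP layer,
# which is one way to read the finding that larger layers tolerate smaller adapters.
print(mlp_proj[0] * mlp_proj[1] / (attn_proj[0] * attn_proj[1]))  # ~2.7x more base params
```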
A method is proposed to deal with uncertain multiple attribute group decision making problems in which 2-dimension uncertain linguistic variables (2DULVs) are used as a reliable way for the experts to express their fuzzy subjective evaluation information. Firstly, in order to measure 2DULVs more accurately, a new method, called a score function, is proposed to compare two 2DULVs, and a new function is defined to measure the distance between two 2DULVs. Secondly, two optimization models are established to determine the weights of experts and attributes based on the new distance formula, and a weighted average operator is used to determine the comprehensive evaluation value of each alternative. The score function is then used to determine the ranking of the alternatives. Finally, the effectiveness of the proposed method is demonstrated by an illustrative example.
Structured modeling is the most commonly used modeling method, but it is not well adapted to significant changes in environmental conditions. Therefore, Decision Variables Analysis (DVA), a new modeling method, is proposed to deal with linear programming modeling in changing environments. In variant linear programming, the most complicated relationships are those among decision variables. DVA classifies the decision variables into different levels using different index sets and divides a model into separate elements, so that any change affects only part of the whole model. DVA takes into consideration the complicated relationships among decision variables at different levels and can therefore successfully solve modeling problems in dramatically changing environments.
Society in the digital transformation era demands new decision schemes such as e-democracy and decision making based on social media. Such novel decision schemes require the participation of many experts/decision makers/stakeholders in the decision processes. As a result, large-scale group decision making (LSGDM) has attracted the attention of many researchers in the last decade, and many studies have been conducted to face the challenges associated with the topic. This paper therefore reviews the most relevant studies about LSGDM, identifies the most profitable research trends, and analyzes them from a critical point of view. To do so, the Web of Science database has been consulted using different searches. From these results, a total of 241 contributions were found, and a selection process regarding language, type of contribution, and actual relation to the studied topic was carried out. The 87 contributions finally selected for this review have been analyzed from four points of view that have been strongly emphasized in the topic: the preference structure in which decision makers' opinions are modeled, the group decision rules used to define the decision making process, the techniques applied to verify the quality of these models, and their applications to solving real-world problems. Afterwards, a critical analysis of the main limitations of the existing proposals is developed. Finally, taking these limitations into account, new research lines for LSGDM are proposed and the main challenges are highlighted.
Emergency decision-making problems usually involve many experts with different professional backgrounds and concerns, leading to non-cooperative behaviors during the consensus-reaching process. Many studies on non-cooperative behavior management assume that the maximum degree of cooperation of experts is to accept in full the revisions suggested by the moderator, which prevents individuals with altruistic behaviors from making larger contributions in the agreement-reaching process. In addition, when grouping a large group into subgroups by clustering methods, existing studies were based either on the similarity of evaluation values or on trust relationships among experts, but did not consider them simultaneously. In this study, we introduce a clustering method that considers both the similarity of evaluation values and the trust relations of experts, and then develop a consensus model that takes into account the altruistic behaviors of experts. First, we cluster experts into subgroups with a constrained K-means clustering algorithm according to the opinion similarity and trust relationships of experts. Then, we calculate the weights of experts and clusters based on the centrality degrees of experts. Next, to enhance the quality of consensus reaching, we identify three kinds of non-cooperative behaviors and propose corresponding feedback mechanisms that rely on the altruistic behaviors of experts. A numerical example shows the effectiveness and practicality of the proposed method in emergency decision-making. The study finds that integrating altruistic behavior analysis into group decision-making can safeguard the interests of experts and ensure the integrity of decision-making information.
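A simple way to see how opinion similarity and trust can be folded into a single clustering criterion (this is an ordinary hierarchical clustering for illustration, not the constrained K-means used in the paper; the expert data and the trade-off parameter are made up):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform, pdist

# Hypothetical data: each row is an expert's evaluation vector over criteria,
# and trust[i, j] is the (symmetric) trust degree between experts i and j.
opinions = np.array([[0.80, 0.60, 0.70],
                     [0.75, 0.65, 0.70],
                     [0.30, 0.20, 0.40],
                     [0.35, 0.25, 0.35],
                     [0.50, 0.90, 0.10]])
trust = np.array([[1.0, 0.9, 0.2, 0.3, 0.4],
                  [0.9, 1.0, 0.3, 0.2, 0.5],
                  [0.2, 0.3, 1.0, 0.8, 0.3],
                  [0.3, 0.2, 0.8, 1.0, 0.2],
                  [0.4, 0.5, 0.3, 0.2, 1.0]])

lam = 0.5  # trade-off between opinion distance and lack of trust
opinion_dist = squareform(pdist(opinions, metric="euclidean"))
opinion_dist /= opinion_dist.max()                     # scale to [0, 1]
combined = lam * opinion_dist + (1 - lam) * (1 - trust)
np.fill_diagonal(combined, 0.0)

# Hierarchical clustering on the combined distance; cut into 3 subgroups.
labels = fcluster(linkage(squareform(combined), method="average"), t=3, criterion="maxclust")
print(labels)
```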
A water loop variable refrigerant flow (WLVRF) air-conditioning system is designed to be applied in large-scale buildings in northern China. The system is energy saving and consists of a variable refrigerant flow (VRF) air-conditioning unit, a water loop, and an air source heat pump. The water loop transports energy among different regions of the building instead of refrigerant pipes, decreasing the scale of the VRF air-conditioning unit and improving its performance. Previously published models for refrigerants and building loads are cited in this investigation. Mathematical models of the major equipment and other elements of the system are established using the lumped parameter method, based on the DATAFIT and MATLAB software, and the performance of the WLVRF system is simulated. The initial investments and running costs are calculated from the results of market research. Finally, a comparison is made between the WLVRF system and the traditional VRF system. The results show that the WLVRF system has better working conditions and lower running costs than the traditional VRF system.
Pythagorean fuzzy set (PFS) can provide more flexibility than intuitionistic fuzzy set (IFS) for handling uncertain information, and PFS has been increasingly used in multi-attribute decision making problems. This paper proposes a new multi-attribute group decision making method based on the Pythagorean uncertain linguistic variable Hamy mean (PULVHM) operator and the VIKOR method. Firstly, we define operation rules and a new aggregation operator for Pythagorean uncertain linguistic variables (PULVs) and explore some properties of the operator. Secondly, taking the decision makers' hesitation degree into account, a new score function is defined, and we further develop a new group decision making approach integrated with the VIKOR method. Finally, an investment example is used to demonstrate the validity of the proposed method. Sensitivity analysis and comprehensive comparisons with two other methods show the stability and advantage of our method.
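For reference, a sketch of the classical crisp VIKOR ranking on which the proposed PULVHM-based approach builds; the Pythagorean uncertain linguistic aggregation itself is not reproduced here, and the decision matrix is hypothetical.

```python
import numpy as np

def vikor(F, w, v=0.5, benefit=None):
    """Classical crisp VIKOR: rank alternatives (rows of F) over criteria (columns).
    Lower Q is better. benefit[j] is True if criterion j is to be maximized."""
    F = np.asarray(F, dtype=float)
    m, n = F.shape
    if benefit is None:
        benefit = [True] * n
    f_best = np.where(benefit, F.max(axis=0), F.min(axis=0))
    f_worst = np.where(benefit, F.min(axis=0), F.max(axis=0))
    # Weighted, normalized regret of each alternative on each criterion.
    d = (f_best - F) / np.where(f_best == f_worst, 1, f_best - f_worst) * w
    S, R = d.sum(axis=1), d.max(axis=1)
    Q = v * (S - S.min()) / max(S.max() - S.min(), 1e-12) \
        + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12)
    return S, R, Q

# Hypothetical investment example: 4 alternatives, 3 benefit criteria.
F = [[7, 8, 6], [8, 7, 8], [9, 6, 7], [6, 9, 9]]
w = np.array([0.4, 0.35, 0.25])
S, R, Q = vikor(F, w)
print("ranking (best first):", np.argsort(Q))
```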
MPEG-4 High-Efficiency Advanced Audio Coding (HE-AAC) is designed for low bit rate applications, such as audio streaming in mobile communications. The HE-AAC audio codec offers better coding efficiency since variable-length codes (VLCs) are adopted. However, HE-AAC was originally designed for storage and error-free transmission conditions. For transmission over bit-error-prone channels, error propagation is a serious problem for the VLCs, so a robust HE-AAC decoder is desired, especially for mobile communications. In contrast to traditional hard-decision decoding, soft-decision (SD) decoding, which utilizes bit-wise channel reliability information, is known to offer better audio quality. In HE-AAC, the global gain parameter is coded with fixed-length codes (FLCs), while the scale factors and quantized spectral coefficients are coded with VLCs. In this work, we apply FL/SD decoding to the global gain parameter and VL/SD decoding to the scale factors and quantized spectral coefficients. In particular, in order to apply VL/SD decoding to the quantized spectral coefficients, a new modified trellis representation for VL/SD decoding is proposed. An improved HE-AAC performance is clearly observed, supported by both instrumental measurements and a subjective listening test.
A variable precision rough set (VPRS) model is used to solve the multi-attribute decision analysis (MADA) problem with multiple conflicting decision attributes and multiple condition attributes. By introducing confidence measures and a β-reduct, the VPRS model can rationally resolve such conflicting decision analysis problems. For illustration, a medical diagnosis example shows the feasibility of the VPRS model in solving this class of MADA problem. Empirical results show that if conflicts arise among decision rules resulting from multiple decision attributes, the decision rule with the highest confidence measure is used as the final decision rule. The confidence-measure-based VPRS model can effectively resolve the conflicts of decision rules derived from multiple decision attributes, and thus a class of MADA problems with multiple conflicting decision attributes and multiple condition attributes is solved.
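The confidence-measure idea can be illustrated on a tiny example: group objects by their condition-attribute values and keep only the rules whose confidence reaches the β threshold (a simplification of the full VPRS β-reduct machinery, with made-up data):

```python
from collections import defaultdict

def vprs_rules(samples, beta):
    """Extract decision rules whose confidence meets the beta threshold.
    Each sample is (condition_attributes_tuple, decision_label); the confidence of a
    rule is the fraction of objects in a condition class sharing a decision label."""
    classes = defaultdict(list)
    for cond, dec in samples:
        classes[cond].append(dec)
    rules = []
    for cond, decisions in classes.items():
        for dec in set(decisions):
            conf = decisions.count(dec) / len(decisions)
            if conf >= beta:
                rules.append((cond, dec, conf))
    return rules

# Hypothetical medical-style data: (symptom A, symptom B) -> diagnosis.
data = [(("high", "yes"), "ill"), (("high", "yes"), "ill"), (("high", "yes"), "healthy"),
        (("low", "no"), "healthy"), (("low", "no"), "healthy"), (("low", "yes"), "ill")]
for cond, dec, conf in vprs_rules(data, beta=0.6):
    print(cond, "->", dec, f"(confidence {conf:.2f})")
```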
The characteristics of the financing model when e-commerce enterprises participate in supply chain finance are first analyzed. Internet supply chain finance models are divided into three categories according to whether the e-commerce enterprise, rather than a bank, provides funds for small and medium enterprises. We then further study the financing process and the functions of the e-commerce platform with specific examples. Finally, combining the characteristics of the supply chain finance model, we set up a credit evaluation model for small and medium enterprises based on the principle of variable weight and their dynamic data. At the same time, a multi-time-point, multi-indicator decision-making method based on the principle of variable weight is proposed and illustrated with a specific example. In this paper, the multi-criteria decision-making model with the principle of variable weight is used twice. Finally, a typical case is analyzed with this model, achieving a higher accuracy rate of credit risk assessment.
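A sketch of the variable-weight idea: indicators with poor values receive inflated weights so that an unbalanced profile cannot be masked by strong indicators elsewhere. The penalty-type state function and the credit indicators below are assumptions, not the paper's exact construction.

```python
def variable_weights(scores, base_weights, alpha=0.5):
    """Variable-weight synthesis: penalize indicators with low scores by inflating
    their weights via a state variable weight S_i(x) = x_i**(alpha - 1), 0 < alpha < 1,
    then renormalize. A commonly used penalty-type form; the paper's may differ."""
    state = [max(x, 1e-6) ** (alpha - 1) for x in scores]
    raw = [w * s for w, s in zip(base_weights, state)]
    total = sum(raw)
    return [r / total for r in raw]

def evaluate(scores, base_weights):
    w = variable_weights(scores, base_weights)
    return sum(wi * xi for wi, xi in zip(w, scores))

# Hypothetical SME credit indicators in [0, 1]: repayment history, cash flow, order volume.
balanced = [0.80, 0.80, 0.80]
uneven = [0.95, 0.95, 0.30]   # strong overall, but one weak indicator
base = [0.4, 0.4, 0.2]
# Constant weights would score the uneven profile higher (0.82 vs 0.80);
# variable weights penalize it below the balanced one.
print(evaluate(balanced, base), evaluate(uneven, base))
```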
Distance measures between exact linguistic variables and between uncertain linguistic variables are introduced respectively. Based on exact linguistic variables and uncertain linguistic variables, the concepts of positive linguistic ideal solution and negative linguistic ideal solution of attribute values are defined. To rank and select alternatives, a method for multiple attribute decision making with different types of linguistic information is proposed, based on the distance measures of the two types of linguistic variables and the linguistic ideal solutions, by which all alternatives can be ranked. The method carries out linguistic computation easily and without loss of linguistic information, and thus makes the decision result reasonable and effective. Finally, the implementation process of the proposed method is illustrated and analyzed by a numerical example.
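A minimal sketch of ranking by distance to the positive and negative linguistic ideal solutions, using index-based distances on an assumed five-term scale; the actual distance measures for uncertain linguistic variables in the paper are richer than this.

```python
TERMS = ["very poor", "poor", "medium", "good", "very good"]  # granularity g = 5

def d(a, b, g=len(TERMS)):
    """Distance between two exact linguistic terms via their indices on the scale."""
    return abs(TERMS.index(a) - TERMS.index(b)) / (g - 1)

def rank(decision_matrix, weights):
    """Score alternatives by closeness to the positive linguistic ideal solution,
    a TOPSIS-style scheme used here purely for illustration."""
    cols = list(zip(*decision_matrix))
    pos = [max(col, key=TERMS.index) for col in cols]   # positive ideal per attribute
    neg = [min(col, key=TERMS.index) for col in cols]   # negative ideal per attribute
    scores = []
    for row in decision_matrix:
        dp = sum(w * d(x, p) for x, p, w in zip(row, pos, weights))
        dn = sum(w * d(x, n) for x, n, w in zip(row, neg, weights))
        scores.append(dn / (dp + dn) if dp + dn else 1.0)
    return scores

matrix = [["good", "very good", "medium"],
          ["very good", "medium", "good"],
          ["poor", "good", "very good"]]
print(rank(matrix, weights=[0.5, 0.3, 0.2]))
```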
To build any spatial soil database, a set of environmental data including a digital elevation model (DEM) and satellite images, besides geomorphic landscape description, is essential. Such a database integrates field observations and laboratory analysis data with the results obtained from qualitative and quantitative models. So far, various techniques have been developed for soil data processing. In this study, the performance of Artificial Neural Network (ANN) and Decision Tree (DT) models was compared for mapping some soil attributes in Alborz Province, Iran. Terrain attributes derived from a DEM, along with Landsat 8 ETM+ imagery, a geomorphology map, and the routine laboratory analyses of the studied area, were used as input data. The relationships between soil properties (including sand, silt, clay, electrical conductivity, organic carbon, and carbonates) and the environmental variables were assessed using the Pearson correlation coefficient and Principal Component Analysis. Slope, elevation, geoforms, carbonate index, stream network, wetness index, and bands 2, 3, 4, and 5 were the most significantly correlated variables. ANN and DT did not show the same accuracy in predicting all parameters. The DT model showed higher performance in estimating sand (R^2=0.73), silt (R^2=0.70), clay (R^2=0.72), organic carbon (R^2=0.71), and carbonates (R^2=0.70), while the ANN model only showed higher performance in predicting soil electrical conductivity (R^2=0.95). The results show that determining the best model to use depends on the relation between the considered soil properties and the environmental variables; in this study, the DT model gave more reasonable results than the ANN model overall. Before using a certain model to predict the variability of all soil parameters, it is better to evaluate the efficiency of all candidate models and choose the best-fitted model for each property. In other words, most of the developed models are site-specific and may not be applicable for predicting other soil properties or other areas.
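A compact illustration of the kind of model comparison described above, using scikit-learn's decision tree and multilayer perceptron regressors on synthetic covariates (the real study used terrain, spectral, and laboratory data, not this generated data):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for terrain/spectral covariates and one soil property (e.g., clay %).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 9))                      # slope, elevation, wetness index, bands...
y = 20 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=2, size=300)

models = {
    "decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(32, 16),
                                                 max_iter=2000, random_state=0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.2f}")
```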
Ballistic missile defense system (BMDS) is important for its special role in ensuring national security and maintaining strategic balance. Modeling and simulation of the BMDS in advance is essential, as developing a real one requires substantial manpower and resources. BMDS is a typical complex system owing to its nonlinear, adaptive, and uncertain characteristics. The agent-based modeling method is well suited to complex systems whose overall behaviors are determined by interactions among individual elements. A multi-agent decision support system (DSS), which includes missile agents, radar agents, and a command center agent, is established based on studies of the structure and function of the BMDS. Considering the constraints imposed by the radar, intercept missiles, offensive missiles, and the commander, the objective function of the DSS is established. In order to dynamically generate the optimal interception plan, the variable neighborhood negative selection particle swarm optimization (VNNSPSO) algorithm is proposed to support the decision making of the DSS. The proposed algorithm is compared with the standard PSO, constriction factor PSO (CFPSO), inertia weight linear decrease PSO (LDPSO), and variable neighborhood PSO (VNPSO) algorithms in terms of convergence rate, iteration number, average fitness value, and standard deviation. The simulation results verify the efficiency of the proposed algorithm. The multi-agent DSS is developed on the Repast simulation platform; it can generate intercept plans automatically and supports three-dimensional dynamic display of the missile defense process.
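For orientation, a plain global-best PSO in numpy; the VNNSPSO variant proposed in the paper adds variable-neighborhood and negative-selection mechanisms that are not reproduced here, and the objective below is a toy stand-in for the interception-plan cost.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Plain global-best particle swarm optimization with a constant inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for the interception-plan cost function.
sphere = lambda p: float(np.sum(p ** 2))
print(pso(sphere, dim=4))
```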
Binary decision diagrams (BDDs) give a canonical representation of Boolean functions and have wide applications in the design and verification of digital systems. A new method based on cultural algorithms for minimizing the size of BDDs is presented in this paper. First, the coding of an individual representing a BDD is given and the fitness of an individual is defined; the population is built as a set of such individuals. Second, the implementation of BDD minimization based on cultural algorithms, i.e., the design of the belief space and population space and the design of the acceptance and influence functions, is given in detail. Third, fault detection approaches using BDDs for digital circuits are studied, and a new method for the detection of crosstalk faults using BDDs is presented. Experimental results on a number of digital circuits show that BDDs with a small number of nodes can be obtained by the proposed method, and that all test vectors of a fault in digital circuits can also be produced.
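A generic cultural-algorithm skeleton showing the roles of the population space, belief space, acceptance function, and influence function; the BDD-specific individual coding and fitness from the paper are replaced here by a simple continuous toy problem.

```python
import random

def cultural_algorithm(fitness, dim, pop_size=30, gens=100, accept_frac=0.2, seed=0):
    """Generic cultural algorithm: a population space evolves, the best individuals
    are accepted into a belief space (here, per-dimension value ranges), and the
    belief space influences how new individuals are generated. Minimizes `fitness`."""
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        # Acceptance function: the top individuals update the belief space.
        elites = pop[: max(1, int(accept_frac * pop_size))]
        belief = [(min(e[d] for e in elites), max(e[d] for e in elites)) for d in range(dim)]
        # Influence function: offspring are sampled inside slightly widened belief ranges.
        offspring = [[random.uniform(lo - 0.1, hi + 0.1) for lo, hi in belief]
                     for _ in range(pop_size // 2)]
        pop = pop[: pop_size - len(offspring)] + offspring
    return min(pop, key=fitness)

# Toy continuous objective standing in for the BDD-size fitness.
print(cultural_algorithm(lambda x: sum(v * v for v in x), dim=3))
```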
This study examined the variability in the frequency of tropical night occurrence (i.e., minimum air temperature ≥ 25°C) in Beijing, using a homogenized daily temperature dataset for the period 1960–2008. Our results show that tropical nights occur most frequently in late July and early August, which is consistent with the relatively high air humidity associated with the rainy season in Beijing. In addition, the year-to-year variation of tropical night occurrence indicates that tropical nights have appeared much more frequently since 1994, as illustrated by the yearly counts averaged over two periods: 9.2 tropical nights per year during 1994–2008 versus 3.15 during 1960–1993. These features suggest a distinction between tropical nights and extreme heat in Beijing. We further investigated the large-scale circulations associated with the year-to-year variation of tropical night occurrence in July and August, when tropical nights appear most frequently and account for 95% of the annual total. After comparing the results in two reanalysis datasets (NCEP/NCAR and ERA-40) and considering the possible effects of the decadal change in tropical night frequency that occurred around 1993/94, we conclude that on the interannual time scale a cyclonic anomaly with a barotropic structure centered over Beijing is responsible for less frequent tropical nights, while an anticyclonic anomaly is responsible for more frequent tropical nights over Beijing.
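Counting tropical nights from a daily minimum-temperature series is a one-liner in pandas; the series below is synthetic, standing in for the homogenized Beijing dataset.

```python
import numpy as np
import pandas as pd

# Synthetic daily minimum-temperature series standing in for the homogenized data.
rng = np.random.default_rng(0)
dates = pd.date_range("1960-01-01", "2008-12-31", freq="D")
tmin = 14 + 12 * np.sin(2 * np.pi * (dates.dayofyear - 110) / 365.25) + rng.normal(0, 2, len(dates))
s = pd.Series(tmin, index=dates)

# A tropical night: daily minimum air temperature >= 25 degrees C.
tropical_per_year = (s >= 25).groupby(s.index.year).sum()
print(tropical_per_year.loc[1994:2008].mean(), tropical_per_year.loc[1960:1993].mean())
```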