Existing traditional ocean vertical-mixing schemes are empirically developed without a thorough understanding of the physical processes involved, resulting in a discrepancy between the parameterization and forecast results. The uncertainty in ocean-mixing parameterization is primarily responsible for the bias in ocean models. Benefiting from deep-learning technology, we design the Adaptive Fully Connected Module with an Inception module as the baseline to minimize bias. It adaptively extracts the best features through fully connected layers with different widths, and better learns the nonlinear relationship between input variables and parameterization fields. Moreover, to obtain more accurate results, we impose KPP (K-Profile Parameterization) and PP (Pacanowski–Philander) schemes as physical constraints to make the network parameterization process follow the basic physical laws more closely. Since model data are calculated with human experience and lack some unknown physical processes, which may differ from the actual data, we use a decade-long time record of hydrological and turbulence observations in the tropical Pacific Ocean as training data. Combining physical constraints and a nonlinear activation function, our method captures the nonlinear change and better adapts to the ocean-mixing parameterization process. The use of physical constraints can improve the final results.
In the domain of knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations. In particular, resource-intensive embeddings often lead to increased computational costs, and may limit scalability and adaptability in practical environments, such as in low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small, reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. The ablation studies further highlight the impact and contribution of each component in the proposed hierarchical attention structure.
Parameterization is a critical step in modelling ecosystem dynamics. However, assigning parameter values can be a technical challenge for structurally complex natural plant communities; uncertainties in model simulations often arise from inappropriate model parameterization. Here we compared five methods for defining community-level specific leaf area (SLA) and leaf C:N across nine contrasting forest sites along the North-South Transect of Eastern China, including the biomass-weighted average for the entire plant community (AP_BW) and four simplified selective sampling methods (biomass-weighted average over five dominant tree species [5DT_BW], basal area weighted average over five dominant tree species [5DT_AW], biomass-weighted average over all tree species [AT_BW], and basal area weighted average over all tree species [AT_AW]). We found that the default values for SLA and leaf C:N embedded in Biome-BGC v4.2 were higher than those produced by the five computational methods across the nine sites, with deviations ranging from 28.0% to 73.3%. In addition, there were only slight deviations (<10%) between the NPP predicted using whole-plant-community sampling (AP_BW) and that predicted using the four simplified selective sampling methods, and no significant difference between the predictions of AT_BW and AP_BW except at the Shennongjia site. The findings of this study highlight the critical importance of computational strategies for community-level parameterization in ecosystem process modelling, and will support the choice of parameterization methods.
The Stokes production coefficient (E_6) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach by leveraging deep learning technology to address the challenge of inferring E_6. Through the integration of the information of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_6. Multiple cases were examined to assess the feasibility of PINN in this task, revealing that under optimal settings, the average mean squared error of the E_6 inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was identified using the Tanh activation function, along with a spatiotemporal sampling interval of 1 s and 0.1 m. This resulted in a substantial reduction in the average bias of the E_6 inference, by a factor of O(10^1) to O(10^2) compared with other combinations. This study underscores the potential application of PINN in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
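The core idea of this abstract, inferring an unknown physical coefficient by embedding the governing equation in the training loss, can be illustrated with a toy sketch. The ODE below and all names are illustrative stand-ins, not the paper's MY-type turbulence equations or its PINN architecture; a dependency-free scan replaces gradient descent.

```python
import numpy as np

# Toy analogue of inferring a physical coefficient (like E_6) by penalizing
# the residual of a governing equation. Assumed true dynamics: dy/dt = -c*y.
t = np.linspace(0.0, 2.0, 201)
c_true = 1.7
y_obs = np.exp(-c_true * t)          # synthetic "observations"
dydt = np.gradient(y_obs, t)         # finite-difference estimate of dy/dt

def physics_residual(c):
    # PINN-style penalty: how badly a candidate coefficient violates
    # the governing equation dy/dt + c*y = 0 on the observed data.
    return np.mean((dydt + c * y_obs) ** 2)

# Minimize the residual over a candidate range (a real PINN would use
# gradient-based training; a scan keeps the sketch self-contained).
cands = np.linspace(0.5, 3.0, 2501)
c_hat = cands[np.argmin([physics_residual(c) for c in cands])]
print(f"inferred c = {c_hat:.3f} (true {c_true})")
```

The recovered coefficient lands close to the true value because, as in the paper's setup, the loss is minimized exactly where the data satisfy the embedded physics.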
Retrieval of Thin-Ice Thickness (TIT) using thermodynamic modeling is sensitive to the parameterization of the independent variables (coded in the model) and the uncertainty of the measured input variables. This article examines the deviation of the classical model's TIT output when using different parameterization schemes and the sensitivity of the output to the ice thickness. Moreover, it estimates the uncertainty of the output in response to the uncertainties of the input variables. The parameterized independent variables include atmospheric longwave emissivity, air density, specific heat of air, latent heat of ice, conductivity of ice, snow depth, and snow conductivity. Measured input parameters include air temperature, ice surface temperature, and wind speed. Among the independent variables, the results show that the highest deviation is caused by adjusting the parameterization of snow conductivity and depth, followed by ice conductivity. The sensitivity of the output TIT to ice thickness is highest when using the parameterization of ice conductivity, atmospheric emissivity, and snow conductivity and depth. The retrieved TIT obtained using each parameterization scheme is validated using in situ measurements and satellite-retrieved data. From in situ measurements, the uncertainties of the measured air temperature and surface temperature are found to be high. The resulting uncertainties of TIT are evaluated using perturbations of the input data selected based on the probability distribution of the measurement error. The results show that the overall uncertainty of TIT in response to air temperature, surface temperature, and wind speed uncertainty is around 0.09 m, 0.049 m, and −0.005 m, respectively.
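The perturbation-based uncertainty evaluation described above can be sketched as a Monte Carlo propagation: sample input errors from their assumed distributions and push them through the retrieval. The linear `thickness_model` and all error magnitudes below are hypothetical placeholders, not the article's thermodynamic model or its measured error statistics.

```python
import numpy as np

# Sketch of propagating measured-input uncertainty into a retrieved quantity
# by Monte Carlo perturbation. thickness_model is a made-up stand-in.
def thickness_model(t_air, t_surf, wind):
    # Hypothetical smooth dependence of retrieved thickness on inputs.
    return 0.02 * (t_air - t_surf) + 0.005 * wind

rng = np.random.default_rng(1)
n = 20_000
t_air = -20.0 + rng.normal(0.0, 1.0, n)    # assumed 1.0 K air-temp error
t_surf = -22.0 + rng.normal(0.0, 0.5, n)   # assumed 0.5 K surface-temp error
wind = 5.0 + rng.normal(0.0, 0.3, n)       # assumed 0.3 m/s wind error

tit = thickness_model(t_air, t_surf, wind)
baseline = thickness_model(-20.0, -22.0, 5.0)
print(f"baseline TIT {baseline:.3f} m, 1-sigma spread {tit.std():.4f} m")
```

For a linear model the sampled spread should match the analytic error propagation, sqrt((0.02·1.0)² + (0.02·0.5)² + (0.005·0.3)²) ≈ 0.022 m, which is a useful sanity check on the sampling.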
The study of designs for the baseline parameterization has attracted attention in recent years. This paper focuses on two-level regular designs for the baseline parameterization. A general result on the relationship between K-aberration and word length pattern is developed.
The Atlantic Meridional Overturning Circulation (AMOC) plays a central role in long-term climate variations through its heat and freshwater transports, and it can collapse under a rapid increase of greenhouse gas forcing in climate models. Previous studies have suggested that the deviation of model parameters is one of the major factors inducing inaccurate AMOC simulations. In this work, with a low-resolution earth system model, the authors explore whether a reasonable adjustment of the key model parameter can help to re-establish the AMOC after its collapse. Through a new optimization strategy, the extra freshwater flux (FWF) parameter is determined to be the dominant one affecting the AMOC's variability. The traditional ensemble optimal interpolation (EnOI) data assimilation and new machine learning methods are adopted to optimize the FWF parameter in an abrupt 4×CO_2 forcing experiment to improve the adaptability of model parameters and accelerate the recovery of the AMOC. The results show that, under abrupt 4×CO_2 forcing in millennial simulations, the AMOC first collapses and then slowly re-establishes under the default FWF parameter. However, during the parameter adjustment process, the saltier and colder sea water over the North Atlantic region is the dominant factor in usefully improving the adaptability of the FWF parameter and accelerating the recovery of the AMOC, according to its physical relationship with the FWF on the interdecadal timescale.
In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method.
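The moment-matching ingredient of this approach can be sketched as follows: simulate the SDE by Euler-Maruyama and compute the empirical mean and covariance that a loss function would compare against data. The two-species system, parameter values, and multiplicative-noise form below are illustrative assumptions, not the paper's exact model class.

```python
import numpy as np

# Euler-Maruyama simulation of an assumed 2-species stochastic Lotka-Volterra
# system: dx = x(a - b*y)dt + s1*x dW1, dy = y(-c + d*x)dt + s2*y dW2.
# A parameter-learning loss would match the moments computed below to data.
def simulate(theta, x0=1.0, y0=1.0, dt=1e-3, steps=2000, paths=500, seed=2):
    a, b, c, d, s1, s2 = theta
    rng = np.random.default_rng(seed)
    x = np.full(paths, x0)
    y = np.full(paths, y0)
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), (2, paths))  # Brownian increments
        x = x + x * (a - b * y) * dt + s1 * x * dw[0]
        y = y + y * (-c + d * x) * dt + s2 * y * dw[1]
    return x, y

# Start at the deterministic equilibrium (c/d, a/b) = (1, 1) with weak noise.
x, y = simulate((1.0, 1.0, 1.0, 1.0, 0.1, 0.1))
mean = np.array([x.mean(), y.mean()])
cov = np.cov(np.vstack([x, y]))
print("empirical mean:", mean)
print("empirical covariance:\n", cov)
```

In the paper's setting these empirical moments would be replaced by the discretization-implied approximations, and the discrepancy from observed moments would drive stochastic gradient descent over the parameters.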
The dramatic rise in the number of people living in cities has made many environmental and social problems worse. The search for a productive method for disposing of solid waste is the most notable of these problems. Many scholars have referred to it as a fuzzy multi-attribute or multi-criteria decision-making problem using various fuzzy set-like approaches because of the inclusion of criteria and the anticipated ambiguity. The goal of the current study is to use an innovative methodology to address the expected uncertainties in the problem of solid-waste site selection. The characteristics (or sub-attributes) that decision-makers select and the degree of approximation they accept for various options can both be indicators of these uncertainties. To tackle these problems, a novel mathematical structure known as the fuzzy parameterized possibility single-valued neutrosophic hypersoft expert set (ρ̂-set), described here for the first time, is integrated with a modified version of Sanchez's method. Following this, an intelligent algorithm is suggested. The steps of the suggested algorithm are explained with a self-explanatory example. The compatibility of solid waste management sites and systems is discussed, and rankings are established along with detailed justifications for their viability. This study's strengths lie in its application of fuzzy parameterization and possibility grading to effectively handle the uncertainties embodied in the nature of the parameters and in the alternative approximations, respectively. It uses specific mathematical formulations to compute the fuzzy parameterized degrees and possibility grades that are missing from the prior literature. Because the decision is uncertain, it is simpler for the decision-makers to look at each option separately. Comparing the computed results, it is discovered that they are consistent and dependable because of their preferred properties.
The lattice parameter, measured with sufficient accuracy, can be utilized to evaluate the quality of single crystals and to determine the equation of state for materials. We propose an iterative method for obtaining more precise lattice parameters using the intersection points of the pseudo-Kossel pattern obtained from laser-induced X-ray diffraction (XRD). This method has been validated by the analysis of an XRD experiment conducted on iron single crystals. Furthermore, the method was used to calculate the compression ratio and rotation angle of a LiF sample under high-pressure loading. This technique provides a robust tool for in-situ characterization of structural changes in single crystals under extreme conditions and has significant implications for studying the equation of state and phase transitions.
In the context of the diversity of smart terminals, the unity of the root of trust becomes complicated, which not only affects the efficiency of trust propagation, but also poses a challenge to the security of the whole system. In particular, the solidification of the root of trust in non-volatile memory (NVM) restricts the system's dynamic updating capability, which is an obvious disadvantage in a rapidly changing security environment. To address this issue, this study proposes a novel approach to generate root security parameters using static random access memory (SRAM) physical unclonable functions (PUFs). SRAM PUFs, as a security primitive, show great potential in lightweight security solutions due to their inherent physical properties, low cost, and scalability. However, the stability of SRAM PUFs in harsh environments is a key issue. These environmental conditions include extreme temperatures, high humidity, and strong electromagnetic radiation, all of which can affect the performance of SRAM PUFs. In order to ensure the stability of root security parameters under these conditions, this study proposes an integrated approach that covers not only the acquisition of entropy sources, but also the implementation of algorithms and configuration management. In addition, this study develops a series of reliability-enhancing algorithms, including adaptive parameter selection, data preprocessing, auxiliary data generation, and error correction, which are essential for improving the performance of SRAM PUFs in harsh environments. Based on these techniques, this study establishes six types of secure parameter generation mechanisms, which not only improve the security of the system, but also enhance its adaptability in variable environments. Through a series of experiments, we verify the effectiveness of the proposed method. Under 10 different environmental conditions, our method is able to achieve full recovery of security data with an error rate of less than 25%, which proves the robustness and reliability of our method. These results not only provide strong evidence for the stability of SRAM PUFs in practical applications, but also provide a new direction for future research in the field of smart terminal security.
In order to improve the accuracy of the photogrammetric joint roughness coefficient (JRC) value, the present study proposes a novel method combining an autonomous shooting parameter selection algorithm with a composite error model. Firstly, according to depth map-based photogrammetric theory, the estimation of JRC from a three-dimensional (3D) digital surface model of rock discontinuities was presented. Secondly, an automatic shooting parameter selection algorithm was newly proposed to establish the 3D model dataset of rock discontinuities with varying shooting parameters and target sizes. Meanwhile, photogrammetric tests were performed with custom-built equipment capable of adjusting baseline lengths, and a total of 36 sets of JRC data were gathered via a combination of laboratory and field tests. Then, by combining the theory of point cloud coordinate computation error with the equation of JRC calculation, a composite error model controlled by the shooting parameters was proposed. This newly proposed model was validated via the 3D model dataset, demonstrating the capability to correct initially obtained JRC values solely on the basis of shooting parameters. Furthermore, this correction can significantly reduce errors in JRC values obtained via photographic measurement. Subsequently, the proposed error model was integrated into the shooting parameter selection algorithm, improving the rationality and convenience of selecting suitable shooting parameter combinations when dealing with target rock masses of different sizes. Moreover, the optimal combination of three shooting parameters was offered. JRC values resulting from various combinations of shooting parameters were verified by comparing them with 3D laser scan data. Finally, the application scope and limitations of the newly proposed approach were further addressed.
Promoting the high penetration of renewable energies like photovoltaic (PV) systems has become an urgent issue for expanding modern power grids and has introduced several challenges compared to existing distribution grids. This study measures the effectiveness of the Puma optimizer (PO) algorithm in parameter estimation of perovskite solar cell (PSC) dynamic models with hysteresis consideration, taking into account the effects of the electric field on operation. The models used in this study incorporate hysteresis effects to capture the time-dependent behavior of PSCs accurately. The PO optimizes the proposed modified triple diode model (TDM) with variable voltage capacitor and resistances (VVCARs), considering the hysteresis behavior. The suggested PO algorithm is contrasted with other well-known optimizers from the literature to demonstrate its superiority. The results emphasize that the PO achieves a lower root mean square error (RMSE), which proves its capability and efficacy in parameter extraction for the models. The statistical results emphasize the efficiency and supremacy of the proposed PO compared to the other well-known competing optimizers. The convergence curves show good, fast, and stable convergence with lower RMSE via the PO compared to the other five competitive optimizers. Moreover, the lower mean achieved via the PO is illustrated by the box plot for all optimizers.
To guarantee safe and efficient tunneling of a tunnel boring machine (TBM), rapid and accurate judgment of the rock mass condition is essential. Based on fuzzy C-means clustering, this paper proposes a grouped machine learning method for predicting rock mass parameters. An elaborate data set on field rock mass is collected, which also matches field TBM tunneling. Meanwhile, target stratum samples are divided into several clusters by fuzzy C-means clustering, and multiple submodels are trained on samples in different clusters with the input of pretreated TBM tunneling data and the output of rock mass parameter data. Each testing sample or newly encountered tunneling condition can then be predicted by the multiple submodels, weighted by the membership degree of the sample to each cluster. The proposed method has been realized with 100 training samples and verified with 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project. The average percentage errors of uniaxial compressive strength and joint frequency (Jf) of the 30 testing samples predicted by the pure back propagation (BP) neural network are 13.62% and 12.38%, while those predicted by the BP neural network combined with fuzzy C-means are 7.66% and 6.40%, respectively. In addition, by combining fuzzy C-means clustering, the prediction accuracies of support vector regression and random forest are also improved to different degrees, which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability. Accordingly, the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
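The membership-weighted prediction step described above can be sketched compactly: compute a new sample's fuzzy C-means membership degree to each cluster center, then average the per-cluster submodels with those degrees as weights. The cluster centers, fuzzifier value, and linear "submodels" below are made-up placeholders, not the paper's trained models.

```python
import numpy as np

# Membership-weighted ensemble prediction, as in the grouped ML scheme.
centers = np.array([[0.0, 0.0], [4.0, 4.0]])   # assumed FCM cluster centers
m = 2.0                                         # FCM fuzzifier

def memberships(x):
    # Standard FCM membership: inverse-distance weighting with exponent
    # 2/(m-1); the small epsilon guards against a zero distance.
    d = np.linalg.norm(centers - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

# Two hypothetical submodels, one trained per cluster.
submodels = [lambda x: 1.0 + 0.1 * x.sum(), lambda x: 10.0 + 0.5 * x.sum()]

def predict(x):
    u = memberships(x)
    return sum(ui * f(x) for ui, f in zip(u, submodels))

x_new = np.array([1.0, 1.0])
print(f"weighted prediction: {predict(x_new):.3f}")  # dominated by cluster 0
```

A sample close to one cluster center is effectively predicted by that cluster's submodel, while samples near cluster boundaries blend the submodels smoothly, which is what gives the grouped method its advantage over a single global model.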
To investigate the influence of different longitudinal constraint systems on the longitudinal displacement at the girder ends of a three-tower suspension bridge, this study takes the Cangrong Xunjiang Bridge as an engineering case for finite element analysis. This bridge employs an unprecedented tower-girder constraint method, with all vertical supports placed at the transition piers at both ends. This paper aims to study the characteristics of longitudinal displacement control at the girder ends under this novel structure, relying on finite element (FE) analysis. Initially, based on Weigh-In-Motion (WIM) data, a random vehicle load model is generated and applied to the finite element model. Several longitudinal constraint systems are proposed, and their effects on the structural response of the bridge are compared. The most reasonable system, balancing girder-end displacement and transition pier stress, is selected. Subsequently, the study examines the impact of different viscous damper parameters on key structural response indicators, including cumulative longitudinal displacement at the girder ends, maximum longitudinal displacement at the girder ends, cumulative longitudinal displacement at the pier tops, maximum longitudinal displacement at the pier tops, longitudinal acceleration at the pier tops, and maximum bending moment at the pier bottoms. Finally, the coefficient of variation (CV)-TOPSIS method is used to optimize the viscous damper parameters for multiple objectives. The results show that adding viscous dampers at the side towers, in addition to the existing longitudinal limit bearings at the central tower, most effectively reduces the structural response indicators. The changes in these indicators are not entirely consistent with variations in damping coefficient and velocity exponent. The damper parameters significantly influence cumulative longitudinal displacement at the girder ends, cumulative longitudinal displacement at the pier tops, and maximum bending moments at the pier bottoms. The optimal damper parameters are found to be a damping coefficient of 5000 kN/(m/s)^0.2 and a velocity exponent of 0.2.
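The CV-TOPSIS multi-objective step can be sketched as follows: derive criterion weights from each indicator's coefficient of variation, then rank candidate damper parameter sets by their closeness to the ideal solution. The decision matrix below (rows = candidate parameter sets, columns = response indicators, all treated as smaller-is-better) is made up for illustration, not the bridge study's data.

```python
import numpy as np

# CV-weighted TOPSIS ranking sketch with an assumed 3x3 decision matrix.
X = np.array([[0.8, 120.0, 3.0],
              [0.5, 150.0, 2.0],
              [0.9, 100.0, 4.0]])
benefit = np.array([False, False, False])  # all cost criteria here

# CV weights: std/mean per indicator, normalized to sum to 1.
w = X.std(axis=0) / X.mean(axis=0)
w = w / w.sum()

# Vector-normalize columns, apply weights, then measure Euclidean distance
# to the ideal and anti-ideal solutions.
V = w * X / np.linalg.norm(X, axis=0)
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("closeness scores:", closeness, "best option:", int(np.argmax(closeness)))
```

Indicators that vary more across candidates receive larger CV weights, so the ranking automatically emphasizes the responses that the damper parameters actually influence.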
Purpose – To investigate the influence of vehicle operation speed, curve geometry parameters and rail profile parameters on wheel–rail creepage in high-speed railway curves, and to propose a multi-parameter coordinated optimization strategy to reduce wheel–rail contact fatigue damage. Design/methodology/approach – Taking a small-radius curve of a high-speed railway as the research object, field measurements were conducted to obtain track parameters and wheel–rail profiles. A coupled vehicle-track dynamics model was established. Multiple numerical experiments were designed using the Latin Hypercube Sampling method to extract wheel–rail creepage indicators and construct a parameter-creepage response surface model. Findings – Key service parameters affecting wheel–rail creepage were identified, including the matching relationship between curve geometry and vehicle speed, and rail profile parameters. The influence patterns of the various parameters on wheel–rail creepage were revealed through response surface analysis, leading to the establishment of parameter optimization criteria. Originality/value – This study presents a systematic investigation of wheel–rail creepage characteristics under multi-parameter coupling in high-speed railway curves. A response surface-based parameter-creepage relationship model was established, and a multi-parameter coordinated optimization strategy was proposed. The research findings provide theoretical guidance for controlling wheel–rail contact fatigue damage and optimizing wheel–rail profiles in high-speed railway curves.
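The experimental design mentioned in the approach, Latin Hypercube Sampling, can be sketched in a few lines: each parameter's range is split into n equal strata, each stratum is sampled exactly once, and the strata are randomly paired across dimensions. The three parameter ranges below (speed, curve radius, cant) are illustrative assumptions, not the study's actual design space.

```python
import numpy as np

# Minimal Latin Hypercube Sampling over d parameter ranges.
def latin_hypercube(n, bounds, rng):
    d = len(bounds)
    # One shuffled copy of 0..n-1 per dimension, jittered within each
    # stratum, so every column covers every stratum exactly once.
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
         + rng.uniform(size=(n, d))) / n
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(3)
# Hypothetical ranges: speed (km/h), curve radius (m), cant (mm).
bounds = [(250.0, 350.0), (5000.0, 9000.0), (100.0, 175.0)]
samples = latin_hypercube(8, bounds, rng)
print(samples)
```

Compared with plain random sampling, this stratification spreads a small experiment budget evenly across each parameter's range, which is why it is popular for building response surface models.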
The vehicle-road coupling dynamics problem is a prominent issue in transportation that has drawn significant attention in recent years. The governing equations are characterized by high dimensionality, coupling, and time-varying dynamics, making exact solutions challenging to obtain. As a result, numerical integration methods are typically employed. However, conventional methods often suffer from low computational efficiency. To address this, this paper explores the application of the parameter freezing precise exponential integrator to vehicle-road coupling models. The model accounts for road roughness irregularities, incorporating all terms unrelated to the linear part into the algorithm's inhomogeneous vector. The general construction process of the algorithm is detailed. The validity of the numerical results is verified through approximate analytical solutions (AASs), and the advantages of this method over traditional numerical integration methods are demonstrated. Multiple parameter freezing precise exponential integrator schemes are constructed based on the Runge-Kutta framework, with the fourth-order four-stage scheme identified as optimal. The study indicates that this method can quickly and accurately capture the dynamic system's vibration response, offering a new, efficient approach for numerical studies of high-dimensional vehicle-road coupling systems.
In this study, we mainly introduce two salinity parameterization schemes used in the Sea Ice Simulator (SIS): the isosaline scheme and the salinity profile scheme. Comparing the equation of the isosaline scheme with that of the salinity profile scheme, we found one term that differs between the two schemes, named the salinity difference term. The thermodynamic effect of the salinity difference term on sea ice thickness and sea ice concentration showed that: in the freezing processes from November to the following May, the sea ice temperature could rise under the influence of the salinity difference term and restrain sea ice freezing; in the first melting phase from June to August, the upper ice melting rate was faster than the lower ice melting rate, so the sea ice temperature could rise and accelerate sea ice melting; in the second melting phase from September to October, the upper ice melting rate was slower than the lower ice melting rate, so the sea ice temperature could decrease and restrain sea ice melting. However, the effect of the salinity difference term on sea ice thickness and sea ice concentration was weak. To analyze the impacts of the salinity difference term on Arctic sea ice thickness and sea ice concentration, we also designed several experiments by introducing the two salinity parameterizations into the ice-ocean coupled model, Modular Ocean Model (MOM4), respectively. The simulated results confirmed the previous results of the formula derivation.
Improving and validating land surface models based on integrated observations in deserts is one of the challenges in land modeling. In particular, key parameters and parameterization schemes in desert regions need to be evaluated in situ to improve the models. In this study, we calibrated the key land-surface parameters and evaluated several formulations or schemes for the thermal roughness length (z0h) in the Common Land Model (CoLM). Our parameter calibration and scheme evaluation were based on data observed during a torrid summer (29 July to 11 September 2009) over the Taklimakan Desert hinterland. First, the importance of the key parameters in the experiment was evaluated based on their physical principles, and the significance of these key parameters was further validated using sensitivity tests. Second, different schemes (or physics-based formulas) for z0h were adopted to simulate the variations of energy-related variables (e.g., sensible heat flux and surface skin temperature), and the simulated variations were then compared with the observed data. Third, the z0h scheme that performed best (i.e., Y07) was selected to replace the default one (i.e., Z98); the superiority of Y07 over Z98 was further demonstrated by comparing the simulated results with the observed data. Admittedly, the revised model did a relatively poor job of simulating the diurnal variations of surface soil heat flux, and nighttime soil temperature was also underestimated, calling for further improvement of the model for desert regions.
Presented is a review of the radiative properties of ice clouds from three perspectives: light scattering simulations, remote sensing applications, and broadband radiation parameterizations appropriate for numerical models. On the subject of light scattering simulations, several classical computational approaches are reviewed, including the conventional geometric-optics method and its improved forms, the finite-difference time domain technique, the pseudo-spectral time domain technique, the discrete dipole approximation method, and the T-matrix method, with specific applications to the computation of the single-scattering properties of individual ice crystals. The strengths and weaknesses associated with each approach are discussed. With reference to remote sensing, operational retrieval algorithms are reviewed for retrieving cloud optical depth and effective particle size based on solar or thermal infrared (IR) bands. To illustrate the performance of the current solar- and IR-based retrievals, two case studies are presented based on spaceborne observations. The need for a more realistic ice cloud optical model to obtain spectrally consistent retrievals is demonstrated. Furthermore, to complement ice cloud property studies based on passive radiometric measurements, the advantage of incorporating lidar and/or polarimetric measurements is discussed. The performance of ice cloud models based on the use of different ice habits to represent ice particles is illustrated by comparing model results with satellite observations. A summary is provided of a number of parameterization schemes for ice cloud radiative properties that were developed for application to broadband radiative transfer submodels within general circulation models (GCMs). The availability of the single-scattering properties of complex ice habits has led to more accurate radiation parameterizations. In conclusion, the importance of using nonspherical ice particle models in GCM simulations for climate studies is demonstrated.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42130608 and 42075142), the National Key Research and Development Program of China (Grant No. 2020YFA0608000), and the CUIT Science and Technology Innovation Capacity Enhancement Program Project (Grant No. KYTD202330).
Abstract: Existing traditional ocean vertical-mixing schemes are developed empirically, without a thorough understanding of the physical processes involved, resulting in a discrepancy between the parameterization and forecast results. The uncertainty in ocean-mixing parameterization is primarily responsible for the bias in ocean models. Benefiting from deep-learning technology, we design the Adaptive Fully Connected Module, with an Inception module as the baseline, to minimize this bias. It adaptively extracts the best features through fully connected layers of different widths and better learns the nonlinear relationship between input variables and parameterization fields. Moreover, to obtain more accurate results, we impose the KPP (K-Profile Parameterization) and PP (Pacanowski–Philander) schemes as physical constraints, so that the network parameterization process follows the basic physical laws more closely. Because model data are computed from empirical formulations and may omit unknown physical processes, they can differ from real observations; we therefore use a decade-long record of hydrological and turbulence observations in the tropical Pacific Ocean as training data. Combining physical constraints with a nonlinear activation function, our method captures the nonlinear behavior of ocean mixing and better adapts to the ocean-mixing parameterization process. The use of physical constraints improves the final results.
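Physically constrained training of the kind described above can be sketched as a composite loss that penalizes deviation both from observations and from a baseline scheme such as KPP. The function name and the weighting factor `lam` below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def physics_constrained_loss(pred, obs, kpp_baseline, lam=0.1):
    """Data-misfit MSE plus a penalty tying the network output to a
    baseline physical scheme (e.g., KPP). `lam` balances the two terms."""
    data_term = np.mean((pred - obs) ** 2)
    physics_term = np.mean((pred - kpp_baseline) ** 2)
    return data_term + lam * physics_term

# Toy check: a prediction equal to the observations but away from the
# baseline is still penalized through the physics term.
obs = np.array([1.0, 2.0, 3.0])
kpp = np.array([1.1, 1.9, 3.2])
loss = physics_constrained_loss(obs, obs, kpp, lam=0.5)
```

In a real training loop this scalar would be minimized over the network weights; the physics term acts as a soft regularizer rather than a hard constraint.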
Funding: supported by the National Science and Technology Council (NSTC), Taiwan, under Grant Numbers 112-2622-E-029-009 and 112-2221-E-029-019.
Abstract: In the domain of knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations. In particular, resource-intensive embeddings often lead to increased computational costs and may limit scalability and adaptability in practical environments, such as low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small, reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. Ablation studies further highlight the impact and contribution of each component in the proposed hierarchical attention structure.
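One way to read the reserved-set idea is that each full entity embedding is composed as an attention-weighted mixture over a much smaller table of reserved vectors, so parameter count scales with the reserved-set size rather than the entity count. The sketch below is a generic illustration under that assumption, not the paper's hierarchical architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n_reserved, dim, n_entities = 8, 4, 1000   # tiny reserved table vs. many entities
reserved = rng.normal(size=(n_reserved, dim))        # shared reserved embeddings
queries = rng.normal(size=(n_entities, n_reserved))  # per-entity attention logits

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Each entity embedding is a convex combination of the reserved vectors,
# so only the small reserved table plus per-entity logits are stored.
weights = softmax(queries)            # (n_entities, n_reserved)
entity_emb = weights @ reserved       # (n_entities, dim)
```

The storage trade-off is visible in the shapes: 1000 entities share the same 8 reserved vectors of dimension 4.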
Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 31870426).
Abstract: Parameterization is a critical step in modelling ecosystem dynamics. However, assigning parameter values can be a technical challenge for structurally complex natural plant communities; uncertainties in model simulations often arise from inappropriate model parameterization. Here we compared five methods for defining community-level specific leaf area (SLA) and leaf C:N across nine contrasting forest sites along the North–South Transect of Eastern China: biomass-weighted average for the entire plant community (AP_BW) and four simplified selective-sampling methods (biomass-weighted average over five dominant tree species [5DT_BW], basal-area-weighted average over five dominant tree species [5DT_AW], biomass-weighted average over all tree species [AT_BW], and basal-area-weighted average over all tree species [AT_AW]). We found that the default values for SLA and leaf C:N embedded in Biome-BGC v4.2 were higher than those produced by the five computational methods across the nine sites, with deviations ranging from 28.0% to 73.3%. In addition, there were only slight deviations (<10%) between the NPP predicted using whole-plant-community sampling (AP_BW) and that from the four simplified selective-sampling methods, and no significant difference between the predictions of AT_BW and AP_BW except at the Shennongjia site. The findings of this study highlight the critical importance of computational strategies for community-level parameterization in ecosystem process modelling and will support the choice of parameterization methods.
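The trait-aggregation methods compared above all reduce to a weighted mean of species-level values, differing only in the weights (biomass vs. basal area) and the species subset. A minimal sketch, with made-up trait numbers:

```python
import numpy as np

# Hypothetical per-species data for one plot (values are illustrative only).
sla = np.array([12.0, 18.5, 9.7, 15.2])        # specific leaf area, m^2/kg
biomass = np.array([40.0, 25.0, 20.0, 15.0])   # t/ha
basal_area = np.array([12.0, 10.0, 3.0, 5.0])  # m^2/ha

def weighted_trait(trait, weight):
    """Community-level trait as a weight-normalized average."""
    return float(np.sum(trait * weight) / np.sum(weight))

sla_bw = weighted_trait(sla, biomass)      # biomass-weighted (cf. AT_BW)
sla_aw = weighted_trait(sla, basal_area)   # basal-area-weighted (cf. AT_AW)
```

Restricting the arrays to the five dominant species before averaging would give the 5DT_BW / 5DT_AW variants.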
Funding: the National Key Research and Development Program of China under contract No. 2022YFC3105002; the National Natural Science Foundation of China under contract No. 42176020; and a project of the Key Laboratory of Marine Environmental Information Technology, Ministry of Natural Resources, under contract No. 2023GFW-1047.
Abstract: The Stokes production coefficient (E_(6)) constitutes a critical parameter within Mellor–Yamada-type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, the turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach, leveraging deep-learning technology to infer E_(6). By integrating the information of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E_(6). Multiple cases were examined to assess the feasibility of the PINN for this task, revealing that under optimal settings the average mean squared error of the E_(6) inference was only 0.01, attesting to the effectiveness of the PINN. The optimal hyperparameter combination used the Tanh activation function along with spatiotemporal sampling intervals of 1 s and 0.1 m, resulting in a substantial reduction in the average bias of the E_(6) inference, by a factor of O(10^(1)) to O(10^(2)) compared with other combinations. This study underscores the potential application of PINNs in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
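Inferring a scalar coefficient from data by penalizing the residual of a governing equation, which is the core of the PINN approach above, can be illustrated on a far simpler system. The toy below recovers the decay coefficient c in dy/dt = -c*y by gradient descent on the squared equation residual; it stands in for, and is much simpler than, the turbulent-length-scale equation actually used:

```python
import numpy as np

# Synthetic observations of y(t) = exp(-c_true * t).
c_true = 0.7
t = np.linspace(0.0, 5.0, 201)
y = np.exp(-c_true * t)
dy_dt = np.gradient(y, t)          # finite-difference time derivative

# Minimize the mean squared residual R(c) = mean((dy/dt + c*y)^2).
# dR/dc = mean(2*y*(dy/dt + c*y)), so plain gradient descent suffices.
c = 0.0
lr = 0.5
for _ in range(200):
    grad = np.mean(2.0 * y * (dy_dt + c * y))
    c -= lr * grad
```

A genuine PINN would learn a network for the field itself jointly with the coefficient; here the field is given, which isolates the parameter-inference idea.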
Abstract: Retrieval of thin-ice thickness (TIT) using thermodynamic modeling is sensitive to the parameterization of the independent variables (coded in the model) and the uncertainty of the measured input variables. This article examines the deviation of the classical model's TIT output under different parameterization schemes and the sensitivity of the output to the ice thickness. Moreover, it estimates the uncertainty of the output in response to the uncertainties of the input variables. The parameterized independent variables include atmospheric longwave emissivity, air density, specific heat of air, latent heat of ice, conductivity of ice, snow depth, and snow conductivity. Measured input parameters include air temperature, ice surface temperature, and wind speed. Among the independent variables, the results show that the highest deviation is caused by adjusting the parameterization of snow conductivity and depth, followed by ice conductivity. The sensitivity of the output TIT to ice thickness is highest when using the parameterizations of ice conductivity, atmospheric emissivity, and snow conductivity and depth. The retrieved TIT obtained with each parameterization scheme is validated against in situ measurements and satellite-retrieved data. From the in situ measurements, the uncertainties of the measured air temperature and surface temperature are found to be high. The resulting uncertainties of TIT are evaluated using perturbations of the input data selected according to the probability distribution of the measurement error. The results show that the overall uncertainty of TIT with respect to air temperature, surface temperature, and wind speed uncertainty is around 0.09 m, 0.049 m, and −0.005 m, respectively.
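The perturbation-based uncertainty evaluation described above can be mimicked generically: draw input perturbations from the measurement-error distribution, push them through the retrieval, and read the spread of the output. The one-line "retrieval" below is a stand-in function with made-up coefficients, not the article's thermodynamic model, and the error standard deviations are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_retrieval(t_air, t_surf, wind):
    """Placeholder retrieval: any smooth function of the inputs works
    for illustrating Monte Carlo uncertainty propagation."""
    return 0.02 * (t_surf - t_air) + 0.001 * wind

# Nominal inputs and assumed (illustrative) measurement-error std devs.
t_air, t_surf, wind = -20.0, -5.0, 6.0
sigma = {"t_air": 1.5, "t_surf": 0.8, "wind": 0.5}

n = 10_000
samples = toy_retrieval(
    t_air + rng.normal(0.0, sigma["t_air"], n),
    t_surf + rng.normal(0.0, sigma["t_surf"], n),
    wind + rng.normal(0.0, sigma["wind"], n),
)
tit_nominal = toy_retrieval(t_air, t_surf, wind)
tit_uncertainty = samples.std()   # output spread induced by input errors
```

For a linear stand-in like this, the Monte Carlo spread matches the analytic root-sum-of-squares of the propagated input errors.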
Abstract: The study of designs for the baseline parameterization has attracted attention in recent years. This paper focuses on two-level regular designs for the baseline parameterization. A general result on the relationship between K-aberration and the word length pattern is developed.
Funding: supported by the National Key R&D Program of China [grant number 2023YFF0805202], the National Natural Science Foundation of China [grant number 42175045], and the Strategic Priority Research Program of the Chinese Academy of Sciences [grant number XDB42000000].
Abstract: The Atlantic Meridional Overturning Circulation (AMOC) plays a central role in long-term climate variations through its heat and freshwater transports, and it can collapse under a rapid increase of greenhouse gas forcing in climate models. Previous studies have suggested that the deviation of model parameters is one of the major factors inducing inaccurate AMOC simulations. In this work, with a low-resolution earth system model, the authors explore whether a reasonable adjustment of a key model parameter can help re-establish the AMOC after its collapse. Through a new optimization strategy, the extra freshwater flux (FWF) parameter is determined to be the dominant one affecting the AMOC's variability. The traditional ensemble optimal interpolation (EnOI) data assimilation method and new machine learning methods are adopted to optimize the FWF parameter in an abrupt 4×CO_(2) forcing experiment, to improve the adaptability of the model parameters and accelerate the recovery of the AMOC. The results show that, under an abrupt 4×CO_(2) forcing in millennial simulations, the AMOC first collapses and then slowly re-establishes under the default FWF parameter. During the parameter adjustment process, however, saltier and colder sea water over the North Atlantic region is the dominant factor in usefully improving the adaptability of the FWF parameter and accelerating the recovery of the AMOC, according to its physical relationship with the FWF on the interdecadal timescale.
基金Supported by the National Natural Science Foundation of China(11971458,11471310)。
文摘In this paper,we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems.Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations(SDEs),based on which the loss function is built.The stochastic gradient descent method is applied in the neural network training.Numerical experiments demonstrate the effectiveness of our method.
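The Euler–Maruyama discretization on which the loss is built can be sketched for a two-species stochastic Lotka–Volterra system. The drift and diffusion coefficients below are illustrative choices, not the paper's specific model class:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stochastic Lotka-Volterra:
#   dX = X*(a - b*Y) dt + sigma*X dW1
#   dY = Y*(c*X - d) dt + sigma*Y dW2
a, b, c, d, sigma = 1.0, 0.5, 0.4, 1.2, 0.05
dt, n_steps, n_paths = 0.01, 500, 200

x = np.full(n_paths, 2.0)
y = np.full(n_paths, 1.0)
for _ in range(n_steps):
    dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    x_new = x + x * (a - b * y) * dt + sigma * x * dw1
    y_new = y + y * (c * x - d) * dt + sigma * y * dw2
    x, y = x_new, y_new

# Sample moments of the terminal state, of the kind used to build the loss.
mean_xy = np.array([x.mean(), y.mean()])
cov_xy = np.cov(np.vstack([x, y]))
```

In the paper's setting, such simulated moments would be matched against observed moments, with the mismatch defining the training loss for the unknown drift parameters.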
Abstract: The dramatic rise in the number of people living in cities has made many environmental and social problems worse. The search for a productive method of disposing of solid waste is the most notable of these problems. Many scholars have referred to it as a fuzzy multi-attribute or multi-criteria decision-making problem, using various fuzzy set-like approaches, because of the criteria involved and the anticipated ambiguity. The goal of the current study is to use an innovative methodology to address the expected uncertainties in the problem of solid-waste site selection. These uncertainties can be indicated both by the characteristics (or sub-attributes) that decision-makers select and by the degree of approximation they accept for the various options. To tackle these problems, a novel mathematical structure, the fuzzy parameterized possibility single-valued neutrosophic hypersoft expert set (ρˆ-set), first described here, is integrated with a modified version of Sanchez's method. Following this, an intelligent algorithm is suggested. The steps of the suggested algorithm are explained with a self-explanatory example. The compatibility of solid-waste management sites and systems is discussed, and rankings are established along with detailed justifications of their viability. This study's strengths lie in its application of fuzzy parameterization and possibility grading to handle effectively the uncertainties embodied in the nature of the parameters and in the alternative approximations, respectively. It uses specific mathematical formulations to compute the fuzzy parameterized degrees and possibility grades that are missing from the prior literature. Because the decision is uncertain, it is simpler for the decision-makers to look at each option separately. The computed results are found to be consistent and dependable because of their preferred properties.
基金National Natural Science Foundation of China(12102410)Fund of National Key Laboratory of Shock Wave and Detonation Physics(JCKYS2022212005)。
文摘The lattice parameter,measured with sufficient accuracy,can be utilized to evaluate the quality of single crystals and to determine the equation of state for materials.We propose an iterative method for obtaining more precise lattice parameters using the interaction points for the pseudo-Kossel pattern obtained from laser-induced X-ray diffraction(XRD).This method has been validated by the analysis of an XRD experiment conducted on iron single crystals.Furthermore,the method was used to calculate the compression ratio and rotated angle of an LiF sample under high pressure loading.This technique provides a robust tool for in-situ characterization of structural changes in single crystals under extreme conditions.It has significant implications for studying the equation of state and phase transitions.
Funding: supported by the National Key Research and Development Program "Security Protection Technology for Critical Information Infrastructure of Distribution Network" (2022YFB3105100).
Abstract: In the context of the diversity of smart terminals, the unity of the root of trust becomes complicated, which not only affects the efficiency of trust propagation but also poses a challenge to the security of the whole system. In particular, the solidification of the root of trust in non-volatile memory (NVM) restricts the system's dynamic updating capability, an obvious disadvantage in a rapidly changing security environment. To address this issue, this study proposes a novel approach to generating root security parameters using static random access memory (SRAM) physical unclonable functions (PUFs). SRAM PUFs, as a security primitive, show great potential in lightweight security solutions due to their inherent physical properties, low cost, and scalability. However, the stability of SRAM PUFs in harsh environments is a key issue. Such environmental conditions include extreme temperatures, high humidity, and strong electromagnetic radiation, all of which can affect the performance of SRAM PUFs. To ensure the stability of the root security parameters under these conditions, this study proposes an integrated approach that covers not only the acquisition of entropy sources but also algorithm implementation and configuration management. In addition, this study develops a series of reliability-enhancing algorithms, including adaptive parameter selection, data preprocessing, auxiliary-data generation, and error correction, which are essential for improving the performance of SRAM PUFs in harsh environments. Based on these techniques, this study establishes six types of secure parameter generation mechanisms, which not only improve the security of the system but also enhance its adaptability in variable environments. Through a series of experiments, we verify the effectiveness of the proposed method: under 10 different environmental conditions, our method achieves full recovery of the security data at error rates of less than 25%, which proves its robustness and reliability. These results not only provide strong evidence for the stability of SRAM PUFs in practical applications but also suggest a new direction for future research in the field of smart terminal security.
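A common building block for the error-correction step described above is simple repetition/majority voting: read the noisy SRAM power-up state several times and keep the majority value per bit. The sketch below is a generic illustration with simulated noise, not the study's actual six mechanisms, and all sizes and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

n_bits, n_reads, flip_prob = 256, 15, 0.1   # illustrative sizes and noise level
reference = rng.integers(0, 2, n_bits)      # "true" SRAM power-up fingerprint

# Each read flips every bit independently with probability flip_prob,
# modeling environmental noise on the raw PUF response.
flips = rng.random((n_reads, n_bits)) < flip_prob
reads = reference ^ flips

# Majority vote across reads recovers a stable response per bit.
recovered = (reads.sum(axis=0) > n_reads / 2).astype(int)
bit_errors = int(np.sum(recovered != reference))
```

Real designs pair such voting with helper data and stronger error-correcting codes so the raw response never needs to be stored.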
基金financially supported by the National Natural Science Foundation of China(Grant Nos.52225904 and 52039007)the Fundamental Research Funds for the Central Universities,CHD(Grant No.300102212207).
文摘In order to improve the accuracy of the photogrammetric joint roughness coefficient(JRC)value,the present study proposed a novel method combining an autonomous shooting parameter selection algorithm with a composite error model.Firstly,according to the depth map-based photogrammetric theory,the estimation of JRC from a three-dimensional(3D)digital surface model of rock discontinuities was presented.Secondly,an automatic shooting parameter selection algorithm was novelly proposed to establish the 3D model dataset of rock discontinuities with varying shooting parameters and target sizes.Meanwhile,the photogrammetric tests were performed with custom-built equipment capable of adjusting baseline lengths,and a total of 36 sets of JRC data was gathered via a combination of laboratory and field tests.Then,by combining the theory of point cloud coordinate computation error with the equation of JRC calculation,a composite error model controlled by the shooting parameters was proposed.This newly proposed model was validated via the 3D model dataset,demonstrating the capability to correct initially obtained JRC values solely based on shooting parameters.Furthermore,the implementation of this correction can significantly reduce errors in JRC values obtained via photographic measurement.Subsequently,our proposed error model was integrated into the shooting parameter selection algorithm,thus improving the rationality and convenience of selecting suitable shooting parameter combinations when dealing with target rock masses with different sizes.Moreover,the optimal combination of three shooting parameters was offered.JRC values resulting from various combinations of shooting parameters were verified by comparing them with 3D laser scan data.Finally,the application scope and limitations of the newly proposed approach were further addressed.
基金supported via funding from Prince Sattam Bin Abdulaziz University project number(PSAU/2025/R/1446).
文摘Promoting the high penetration of renewable energies like photovoltaic(PV)systems has become an urgent issue for expanding modern power grids and has accomplished several challenges compared to existing distribution grids.This study measures the effectiveness of the Puma optimizer(PO)algorithm in parameter estimation of PSC(perovskite solar cells)dynamic models with hysteresis consideration considering the electric field effects on operation.The models used in this study will incorporate hysteresis effects to capture the time-dependent behavior of PSCs accurately.The PO optimizes the proposed modified triple diode model(TDM)with a variable voltage capacitor and resistances(VVCARs)considering the hysteresis behavior.The suggested PO algorithm contrasts with other wellknown optimizers from the literature to demonstrate its superiority.The results emphasize that the PO realizes a lower RMSE(Root mean square errors),which proves its capability and efficacy in parameter extraction for the models.The statistical results emphasize the efficiency and supremacy of the proposed PO compared to the other well-known competing optimizers.The convergence rates show good,fast,and stable convergence rates with lower RMSE via PO compared to the other five competitive optimizers.Moreover,the lowermean realized via the PO optimizer is illustrated by the box plot for all optimizers.
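Metaheuristic parameter extraction of the kind evaluated above reduces to minimizing the RMSE between measured and modeled I–V points over a parameter search space. The toy below fits two parameters of a simplified ideal-diode I–V model by plain random search, standing in for both the Puma optimizer and the full triple-diode model; every number in it is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def model_current(v, i_ph, i_s):
    """Simplified ideal-diode model: I = I_ph - I_s*(exp(V/Vt) - 1)."""
    vt = 0.025  # thermal voltage, V (assumed)
    return i_ph - i_s * np.expm1(v / vt)

# Synthetic "measurements" generated from known parameters.
v = np.linspace(0.0, 0.45, 30)
i_meas = model_current(v, i_ph=5.0, i_s=1e-7)

def rmse(params):
    i_ph, i_s = params
    return float(np.sqrt(np.mean((model_current(v, i_ph, i_s) - i_meas) ** 2)))

# Plain random search over bounds (a stand-in for the metaheuristic);
# the saturation current is sampled log-uniformly, as is customary.
best, best_err = None, np.inf
for _ in range(5000):
    cand = (rng.uniform(0.0, 10.0), 10 ** rng.uniform(-9, -5))
    err = rmse(cand)
    if err < best_err:
        best, best_err = cand, err
```

A real optimizer such as PO replaces the blind sampling loop with guided exploration, but the objective being minimized is the same RMSE.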
基金Natural Science Foundation of Shandong Province,Grant/Award Number:ZR202103010903Doctoral Fund of Shandong Jianzhu University,Grant/Award Number:X21101Z。
文摘To guarantee safe and efficient tunneling of a tunnel boring machine(TBM),rapid and accurate judgment of the rock mass condition is essential.Based on fuzzy C-means clustering,this paper proposes a grouped machine learning method for predicting rock mass parameters.An elaborate data set on field rock mass is collected,which also matches field TBM tunneling.Meanwhile,target stratum samples are divided into several clusters by fuzzy C-means clustering,and multiple submodels are trained by samples in different clusters with the input of pretreated TBM tunneling data and the output of rock mass parameter data.Each testing sample or newly encountered tunneling condition can be predicted by multiple submodels with the weight of the membership degree of the sample to each cluster.The proposed method has been realized by 100 training samples and verified by 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project.The average percentage error of uniaxial compressive strength and joint frequency(Jf)of the 30 testing samples predicted by the pure back propagation(BP)neural network is 13.62%and 12.38%,while that predicted by the BP neural network combined with fuzzy C-means is 7.66%and6.40%,respectively.In addition,by combining fuzzy C-means clustering,the prediction accuracies of support vector regression and random forest are also improved to different degrees,which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability.Accordingly,the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
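The membership-weighted combination step above can be sketched directly: compute fuzzy C-means membership degrees of a new sample to each cluster centre, then blend the per-cluster submodel predictions with those degrees as weights. The linear "submodels" below are placeholders for the trained networks, and all numbers are illustrative:

```python
import numpy as np

def fcm_memberships(x, centres, m=2.0):
    """Fuzzy C-means membership of sample x to each cluster centre:
    u_k = 1 / sum_j (d_k / d_j)^(2/(m-1)), with fuzzifier m."""
    d = np.linalg.norm(centres - x, axis=1)
    d = np.maximum(d, 1e-12)               # guard against a zero distance
    ratios = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratios.sum(axis=1)

# Two cluster centres in a 2-D feature space, with one placeholder
# submodel (here just a linear map) per cluster.
centres = np.array([[0.0, 0.0], [4.0, 4.0]])
submodels = [lambda x: 10.0 + x.sum(), lambda x: 50.0 + 2.0 * x.sum()]

x = np.array([1.0, 1.0])                   # new tunneling-condition sample
u = fcm_memberships(x, centres)
prediction = sum(uk * f(x) for uk, f in zip(u, submodels))
```

Because the memberships sum to one, the combined prediction degrades gracefully for samples lying between clusters instead of switching abruptly between submodels.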
基金supported by the National Key Research and Development Program of China(No.2022YFB3706704)the Academician Special Science Research Project of CCCC(No.YSZX-03-2022-01-B).
文摘To investigate the influence of different longitudinal constraint systems on the longitudinal displacement at the girder ends of a three-tower suspension bridge,this study takes the Cangrong Xunjiang Bridge as an engineering case for finite element analysis.This bridge employs an unprecedented tower-girder constraintmethod,with all vertical supports placed at the transition piers at both ends.This paper aims to study the characteristics of longitudinal displacement control at the girder ends under this novel structure,relying on finite element(FE)analysis.Initially,based on the Weigh In Motion(WIM)data,a random vehicle load model is generated and applied to the finite elementmodel.Several longitudinal constraint systems are proposed,and their effects on the structural response of the bridge are compared.The most reasonable system,balancing girder-end displacement and transitional pier stress,is selected.Subsequently,the study examines the impact of different viscous damper parameters on key structural response indicators,including cumulative longitudinal displacement at the girder ends,maximum longitudinal displacement at the girder ends,cumulative longitudinal displacement at the pier tops,maximum longitudinal displacement at the pier tops,longitudinal acceleration at the pier tops,and maximum bending moment at the pier bottoms.Finally,the coefficient of variation(CV)-TOPSIS method is used to optimize the viscous damper parameters for multiple objectives.The results show that adding viscous dampers at the side towers,in addition to the existing longitudinal limit bearings at the central tower,can most effectively reduce the response of structural indicators.The changes in these indicators are not entirely consistent with variations in damping coefficient and velocity exponent.The damper parameters significantly influence cumulative longitudinal displacement at the girder ends,cumulative longitudinal displacement at the pier tops,and maximum bending moments at 
the pier bottoms.The optimal damper parameters are found to be a damping coefficient of 5000 kN/(m/s)0.2 and a velocity exponent of 0.2.
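The CV-TOPSIS ranking used above combines coefficient-of-variation weights (indicators with more relative spread weigh more) with the classic TOPSIS closeness score. A minimal sketch on a made-up decision matrix, where rows are damper-parameter combinations and columns are cost-type response indicators:

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives x 3 indicators, all cost-type
# (smaller is better). Values are illustrative only.
X = np.array([
    [0.82, 120.0, 3.1],
    [0.65, 150.0, 2.7],
    [0.90, 100.0, 3.5],
    [0.70, 130.0, 2.9],
])

# Coefficient-of-variation weights: columns with more spread weigh more.
cv = X.std(axis=0) / X.mean(axis=0)
w = cv / cv.sum()

# TOPSIS: vector-normalize, weight, then measure distances to ideal points.
R = X / np.linalg.norm(X, axis=0)
V = R * w
ideal = V.min(axis=0)        # best value per column for cost-type criteria
anti = V.max(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)   # higher = closer to the ideal
best_alternative = int(np.argmax(closeness))
```

Benefit-type indicators, if present, would simply swap which extreme counts as ideal for their columns.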
Funding: sponsored by the National Natural Science Foundation of China (Grant No. 52405443), the Technology Research and Development Plan of China Railway (Grant No. N2023G063), and the Fund of China Academy of Railway Sciences Corporation Limited (Grant No. 2023YJ054).
Abstract: Purpose – To investigate the influence of vehicle operating speed, curve geometry parameters, and rail profile parameters on wheel–rail creepage in high-speed railway curves, and to propose a multi-parameter coordinated optimization strategy to reduce wheel–rail contact fatigue damage. Design/methodology/approach – Taking a small-radius curve of a high-speed railway as the research object, field measurements were conducted to obtain track parameters and wheel–rail profiles. A coupled vehicle–track dynamics model was established. Multiple numerical experiments were designed using Latin hypercube sampling to extract wheel–rail creepage indicators and construct a parameter–creepage response surface model. Findings – Key service parameters affecting wheel–rail creepage were identified, including the matching relationship between curve geometry and vehicle speed, and rail profile parameters. The influence patterns of the various parameters on wheel–rail creepage were revealed through response surface analysis, leading to the establishment of parameter optimization criteria. Originality/value – This study presents a systematic investigation of wheel–rail creepage characteristics under multi-parameter coupling in high-speed railway curves. A response-surface-based parameter–creepage relationship model was established, and a multi-parameter coordinated optimization strategy was proposed. The research findings provide theoretical guidance for controlling wheel–rail contact fatigue damage and optimizing wheel–rail profiles in high-speed railway curves.
基金Supported by the National Natural Science Foundation of China(No.U22A20246)the Key Project of Natural Science Foundation of Hebei Province of China(Basic Research Base Project)(No.A2023210064)the Science and Technology Program of Hebei Province of China(Nos.246Z1904G and 225676162GH)。
文摘The vehicle-road coupling dynamics problem is a prominent issue in transportation,drawing significant attention in recent years.These dynamic equations are characterized by high-dimensionality,coupling,and time-varying dynamics,making the exact solutions challenging to obtain.As a result,numerical integration methods are typically employed.However,conventional methods often suffer from low computational efficiency.To address this,this paper explores the application of the parameter freezing precise exponential integrator to vehicle-road coupling models.The model accounts for road roughness irregularities,incorporating all terms unrelated to the linear part into the algorithm's inhomogeneous vector.The general construction process of the algorithm is detailed.The validity of numerical results is verified through approximate analytical solutions(AASs),and the advantages of this method over traditional numerical integration methods are demonstrated.Multiple parameter freezing precise exponential integrator schemes are constructed based on the Runge-Kutta framework,with the fourth-order four-stage scheme identified as the optimal one.The study indicates that this method can quickly and accurately capture the dynamic system's vibration response,offering a new,efficient approach for numerical studies of high-dimensional vehicle-road coupling systems.
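The core idea of an exponential integrator is to integrate the linear part exactly via the exponential while freezing everything else into the inhomogeneous term over each step. The scalar first-order scheme (exponential Euler) for y' = a*y + g(t) shows this plainly; it is a one-stage illustration, not the paper's fourth-order four-stage scheme:

```python
import numpy as np

# Test problem: y' = a*y + g(t), y(0) = 1, with a = -2 and g(t) = sin(t).
a = -2.0
g = np.sin

def exponential_euler(y0, t_end, h):
    """y_{n+1} = e^{a h} y_n + ((e^{a h} - 1)/a) g(t_n): the linear part is
    integrated exactly; the forcing is frozen over each step."""
    n = int(round(t_end / h))
    e_ah = np.exp(a * h)
    phi = np.expm1(a * h) / a      # (e^{a h} - 1)/a, computed stably
    y, t = y0, 0.0
    for _ in range(n):
        y = e_ah * y + phi * g(t)
        t += h
    return y

# The exact solution is y(t) = 1.2*e^{-2t} + 0.4*sin(t) - 0.2*cos(t),
# so y(5) is approximately -0.44025; refine the step to see convergence.
coarse = exponential_euler(1.0, 5.0, 0.01)
fine = exponential_euler(1.0, 5.0, 0.0001)
```

In the matrix-valued case, e^{a h} becomes a matrix exponential of the frozen linear operator, which is exactly where the "precise" computation of the exponential matters.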
基金supported by the National Natural Science Foundation of China(No.41075030,41106004,41106159 and 41206013)the Ocean Public Welfare Science Research Project,State Oceanic Administration,People's Republic of China(No.201005019)
文摘In this study, we mainly introduce two salinity parameterization schemes used in Sea Ice Simulator (SIS), that is, isosaline scheme and salinity profile scheme. Comparing the equation of isosaline scheme with that of salinity profile scheme, we found that there was one different term between the two schemes named the salinity different term. The thermodynamic effect of the salinity difference term on sea ice thickness and sea ice concentration showed that: in the freezing processes from November to next May, the sea ice temperature could rise on the influence of the salinity difference term and restrain sea ice freezing; at the first melting phase from June to August, the upper ice melting rate was faster than the lower ice melting rate. Then sea ice temperature could rise and accelerate the sea ice melting; at the second melting phase from September to October, the upper ice melting rate was slower than the lower ice melting rate, then sea ice temperature could decrease and restrain sea ice melting. However, the effect of the salinity difference term on the sea ice thickness and sea ice concentration was weak. To analyze the impacts of the salinity different term on Arctic sea ice thickness and sea ice concentration, we also designed several experiments by introducing the two salinity parameterizations to the ice-ocean coupled model, Modular Ocean Model (MOM4), respectively. The simulated results confirmed the previous results of formula derivation.
基金jointly funded by the National Natural Science Foundation of China(GrantNo40775019)Desert Meteorology Science Foundation of China(Grant NoSqj2009012)Project of Key Laboratory of Oasis Ecology(Xinjiang University)Ministry of Education(Grant NoXJDX0206-2009-08)
文摘Improving and validating land surface models based on integrated observations in deserts is one of the challenges in land modeling. Particularly, key parameters and parameterization schemes in desert regions need to be evaluated in-situ to improve the models. In this study, we calibrated the land-surface key parameters and evaluated several formulations or schemes for thermal roughness length (z 0h ) in the common land model (CoLM). Our parameter calibration and scheme evaluation were based on the observed data during a torrid summer (29 July to 11 September 2009) over the Taklimakan Desert hinterland. First, the importance of the key parameters in the experiment was evaluated based on their physics principles and the significance of these key parameters were further validated using sensitivity test. Second, difference schemes (or physics-based formulas) of z 0h were adopted to simulate the variations of energy-related variables (e.g., sensible heat flux and surface skin temperature) and the simulated variations were then compared with the observed data. Third, the z 0h scheme that performed best (i.e., Y07) was then selected to replace the defaulted one (i.e., Z98); the revised scheme and the superiority of Y07 over Z98 was further demonstrated by comparing the simulated results with the observed data. Admittedly, the revised model did a relatively poor job of simulating the diurnal variations of surface soil heat flux, and nighttime soil temperature was also underestimated, calling for further improvement of the model for desert regions.
基金supported by the NSF (Grants AGS-1338440 and AGS-0946315)the endowment funds related to the David Bullock Harris Chair in Geosciences at the College of Geosciences, Texas A&M University
文摘Presented is a review of the radiative properties of ice clouds from three perspectives: light scattering simulations, remote sensing applications, and broadband radiation parameterizations appropriate for numerical models. On the subject of light scattering simulations, several classical computational approaches are reviewed, including the conventional geometric-optics method and its improved forms, the finite-difference time domain technique, the pseudo-spectral time domain technique, the discrete dipole approximation method, and the T-matrix method, with specific applications to the computation of the singlescattering properties of individual ice crystals. The strengths and weaknesses associated with each approach are discussed.With reference to remote sensing, operational retrieval algorithms are reviewed for retrieving cloud optical depth and effective particle size based on solar or thermal infrared(IR) bands. To illustrate the performance of the current solar- and IR-based retrievals, two case studies are presented based on spaceborne observations. The need for a more realistic ice cloud optical model to obtain spectrally consistent retrievals is demonstrated. Furthermore, to complement ice cloud property studies based on passive radiometric measurements, the advantage of incorporating lidar and/or polarimetric measurements is discussed.The performance of ice cloud models based on the use of different ice habits to represent ice particles is illustrated by comparing model results with satellite observations. A summary is provided of a number of parameterization schemes for ice cloud radiative properties that were developed for application to broadband radiative transfer submodels within general circulation models(GCMs). The availability of the single-scattering properties of complex ice habits has led to more accurate radiation parameterizations. 
In conclusion, the importance of using nonspherical ice particle models in GCM simulations for climate studies is proven.