In this paper, an error in [4] is pointed out and a method for constructing a surface interpolating scattered data points is presented. The main feature of the method is that the surface so constructed is polynomial, which makes the construction simple and the calculation easy.
Swarm robot systems are an important application of autonomous unmanned surface vehicles on water surfaces. For monitoring natural environments and conducting security activities within a certain range using a surface vehicle, a swarm robot system is more efficient than operating a single vehicle, as it reduces cost and saves time. Operating a cluster of unmanned surface vehicles requires robust detection of adjacent surface obstacles. For this purpose, a LiDAR (light detection and ranging) sensor is used, as it can simultaneously obtain 3D information in all directions, relatively robustly and accurately, irrespective of the surrounding environmental conditions. Although a GPS (global positioning system) error range exists, obtaining measurements of the surface-vessel position can still ensure stability during platoon maneuvering. In this study, a three-layer convolutional neural network is applied to classify types of surface vehicles. The aim of this approach is to redefine the sparse 3D point cloud data as 2D image data with a connotative meaning and subsequently utilize the transformed data for object classification. Hence, we propose a descriptor that converts 3D point cloud data into 2D image data. To use this descriptor effectively, a clustering operation that separates the point clouds of individual objects is needed, for which we developed voxel-based clustering. Using the descriptor, the 3D point cloud data are converted into a 2D feature image, which is provided as input to the network. We verify the validity of the proposed 3D point cloud feature descriptor using experimental data in a simulator, and we explore the feasibility of real-time object classification within this framework.
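The abstract does not spell out the voxel-based clustering step. A minimal sketch of the general idea — assign points to a cubic grid and flood-fill over occupied neighboring voxels — might look like the following; the function name and the 26-connectivity choice are illustrative assumptions, not the authors' exact algorithm.

```python
from collections import defaultdict, deque

def voxel_clustering(points, voxel_size=1.0):
    """Cluster 3D points by grouping occupied voxels that touch
    (26-connectivity). Returns a list of clusters, each a list of points.
    Illustrative sketch, not the paper's exact method."""
    # Map each point to its voxel index.
    voxels = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        voxels[key].append(p)

    # Flood-fill over occupied neighboring voxels.
    seen, clusters = set(), []
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    for start in voxels:
        if start in seen:
            continue
        seen.add(start)
        queue, cluster = deque([start]), []
        while queue:
            v = queue.popleft()
            cluster.extend(voxels[v])
            for off in offsets:
                n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                if n in voxels and n not in seen:
                    seen.add(n)
                    queue.append(n)
        clusters.append(cluster)
    return clusters
```

Two well-separated groups of points then fall into two clusters, one per connected block of occupied voxels.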
An assistant surface was constructed on the basis of a boundary automatically extracted from the scattered data. The parameters of every data point corresponding to the assistant surface, and their applied fields, were calculated respectively. In every applied region, a surface patch was constructed by a special Hermite interpolation. The final surface is obtained by piecewise bicubic Hermite interpolation over the aggregate of applied regions of the metrical data. This method avoids the triangulation problem. Numerical results indicate that it is efficient and accurate.
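Piecewise (bi)cubic Hermite interpolation is built from the cubic Hermite basis on each segment. A one-dimensional sketch of that basis — standard textbook material, not the paper's "special" interpolation — is:

```python
def hermite_segment(p0, p1, m0, m1, t):
    """Evaluate the cubic Hermite interpolant on one segment, t in [0, 1],
    given end values p0, p1 and end tangents m0, m1."""
    h00 = 2*t**3 - 3*t**2 + 1   # weight of p0
    h10 = t**3 - 2*t**2 + t     # weight of m0
    h01 = -2*t**3 + 3*t**2      # weight of p1
    h11 = t**3 - t**2           # weight of m1
    return h00*p0 + h10*m0 + h01*p1 + h11*m1
```

The bicubic case applies this basis in two parameter directions; the segment interpolant reproduces the end values exactly and, with zero tangents, is symmetric about the midpoint.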
The development of artificial intelligence (AI) technologies creates a great opportunity for the iteration of railway monitoring. This paper proposes a comprehensive method for railway utility pole detection. The framework consists of two parts: point cloud preprocessing and railway utility pole detection. By using mobile LiDAR acquisition devices to obtain point cloud data, the method overcomes the dynamic-environment adaptability, lighting dependence, weather and environmental sensitivity, and visual occlusion issues present in 2D images and videos. Owing to factors such as the acquisition equipment and environmental conditions, the point cloud data contain a significant amount of noise, which affects subsequent detection tasks. We therefore designed a dual-region adaptive point cloud preprocessing method that divides the railway point cloud data into track and non-track regions. The track region undergoes projection dimensionality reduction, whose result is unique and is subsequently subjected to 2D density clustering, greatly reducing the computation volume. The non-track region undergoes PCA-based dimensionality reduction and clustering, achieving preprocessing of large-scale point cloud scenes. Finally, the preprocessed results are used for training, achieving higher accuracy in utility pole detection and data communication. Experimental results show that the proposed preprocessing method not only improves efficiency but also enhances detection accuracy.
For the accurate extraction of the cavity decay time, a data-point selection step is added to the weighted least-squares method. We derive the expected precision, accuracy, and computation cost of this improved method, and examine these performances by simulation. Comparing this method with the nonlinear least-squares fitting (NLSF) method and the linear regression of the sum (LRS) method, in both derivations and simulations, we find that it can achieve the same or even better precision, comparable accuracy, and lower computation cost. We test the method on experimental decay signals; the results agree with those obtained from nonlinear least-squares fitting.
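The weighted least-squares core of such a decay-time fit can be sketched as follows. This omits the paper's point-selection step, and the common choice of weights w_i = y_i² after log-linearization is an assumption here, not taken from the paper.

```python
import math

def fit_decay_time(ts, ys):
    """Estimate tau of y = A*exp(-t/tau) by weighted linear least squares
    on ln(y) = ln(A) - t/tau. Weights w_i = y_i^2 compensate the noise
    rescaling introduced by the log transform. Sketch only."""
    w = [y*y for y in ys]
    z = [math.log(y) for y in ys]
    W = sum(w)
    tbar = sum(wi*ti for wi, ti in zip(w, ts)) / W
    zbar = sum(wi*zi for wi, zi in zip(w, z)) / W
    stz = sum(wi*(ti-tbar)*(zi-zbar) for wi, ti, zi in zip(w, ts, z))
    stt = sum(wi*(ti-tbar)**2 for wi, ti in zip(w, ts))
    slope = stz / stt          # equals -1/tau for an exponential decay
    return -1.0 / slope
```

On noiseless synthetic data the fit recovers the decay constant exactly up to floating-point error.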
Understanding the mechanisms and risks of forest fires by building a spatial prediction model is an important means of controlling them. Non-fire point data are important training data for constructing such a model, and their quality significantly impacts its prediction performance. However, non-fire point data obtained using existing sampling methods generally suffer from low representativeness. This study therefore proposes a non-fire point sampling method based on geographical similarity to improve the quality of non-fire point samples. The method is based on the idea that the less similar the geographical environment between a sample point and an already occurred fire point, the greater the confidence that the sample is a non-fire point. Yunnan Province, China, which has a high frequency of forest fires, was used as the study area. We compared the prediction performance of traditional sampling methods and the proposed method using three commonly used forest fire risk prediction models: logistic regression (LR), support vector machine (SVM), and random forest (RF). The results show that the modeling and prediction accuracies of forest fire prediction models established with the proposed sampling method are significantly improved over those of the traditional sampling method. Specifically, in 2010 the modeling and prediction accuracies improved by 19.1% and 32.8%, respectively, and in 2020 they improved by 13.1% and 24.3%, respectively. We therefore believe that collecting non-fire point samples based on the principle of geographical similarity is an effective way to improve the quality of forest fire samples and thus enhance the prediction of forest fire risk.
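The stated principle — the less similar a candidate's environment to historical fire points, the more confidently it can serve as a non-fire sample — can be sketched as below. The Euclidean dissimilarity in a normalized feature space and the function names are illustrative assumptions; the paper's actual similarity measure may differ.

```python
def sample_non_fire_points(candidates, fire_points, n):
    """Rank candidate points by dissimilarity to all historical fire
    points in (normalized) environmental-feature space and keep the n
    least similar ones. Sketch of the stated principle only."""
    def dist(a, b):
        return sum((ai - bi)**2 for ai, bi in zip(a, b)) ** 0.5
    # A candidate's "fire-likeness" is its distance to the nearest fire point.
    scored = [(min(dist(c, f) for f in fire_points), c) for c in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)  # most dissimilar first
    return [c for _, c in scored[:n]]
```

A candidate sitting almost on top of a fire point is thus excluded in favor of environmentally distant ones.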
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density, high-accuracy point cloud data in a few minutes, which makes it promising for tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points via point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of the tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, convergence analysis is carried out at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring, and can also be extended to other engineering applications.
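The circle-fitting step can be illustrated with the standard algebraic (Kåsa) least-squares fit; the abstract does not specify the exact formulation, so this is a sketch under that assumption.

```python
def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2ax + 2by + c for (a, b, c); center (a, b),
    radius sqrt(c + a^2 + b^2). Sketch only."""
    # Accumulate the 3x3 normal equations M [a b c]^T = v.
    M = [[0.0]*3 for _ in range(3)]
    v = [0.0]*3
    for x, y in points:
        row = [2*x, 2*y, 1.0]
        rhs = x*x + y*y
        for i in range(3):
            v[i] += row[i]*rhs
            for j in range(3):
                M[i][j] += row[i]*row[j]
    # Gaussian elimination; M is symmetric positive definite for
    # non-degenerate data, so no pivoting is needed in this sketch.
    for i in range(3):
        for j in range(i+1, 3):
            f = M[j][i] / M[i][i]
            for k in range(3):
                M[j][k] -= f*M[i][k]
            v[j] -= f*v[i]
    a = [0.0]*3
    for i in (2, 1, 0):
        a[i] = (v[i] - sum(M[i][k]*a[k] for k in range(i+1, 3))) / M[i][i]
    cx, cy = a[0], a[1]
    r = (a[2] + cx*cx + cy*cy) ** 0.5
    return cx, cy, r
```

Four points sampled from a circle of center (1, 2) and radius 3 are recovered exactly.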
Multi-view laser radar (ladar) data registration in obscured environments is an important research field for obscured target detection from air to ground. Because of the occluder, the observational data from different views have few overlap regions, so multi-view data registration is rather difficult. Through in-depth analyses of the typical methods and their problems, we conclude that sequence registration is the more appropriate approach, but its registration accuracy needs improvement. On this basis, a multi-view data registration algorithm based on aggregating adjacent, already-registered frames is proposed. It increases the overlap region between the frames pending registration by aggregation, and thus further improves the registration accuracy. Experimental results show that the proposed algorithm can effectively register multi-view ladar data in obscured environments and that, at equivalent operating efficiency, it is more robust and achieves higher registration accuracy than sequence registration.
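The frame-to-frame registration such a pipeline builds on can be illustrated with a textbook ICP loop: brute-force nearest-neighbor correspondences followed by a Kabsch/SVD rigid fit. This is a generic sketch assuming NumPy, not the paper's aggregation algorithm.

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Rigid ICP (2D or 3D): per iteration, match each src point to its
    nearest dst point, then apply the best rigid transform (Kabsch).
    Textbook sketch with brute-force matching."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    for _ in range(iters):
        # Nearest neighbour in dst for each src point.
        d2 = ((src[:, None, :] - dst[None, :, :])**2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Optimal rotation/translation via SVD of the cross-covariance.
        cs, cm = src.mean(0), matched.mean(0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - cs) @ R.T + cm
    return src
```

For a small rotation and translation with full overlap, the loop snaps the cloud back onto the reference.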
In order to enhance modeling efficiency and accuracy, we utilized 3D laser point cloud data for indoor space modeling. Point cloud data were obtained with a 3D laser scanner and optimized with Autodesk ReCap and Revit software to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The research outcomes of this paper offer new methods and tools for indoor space modeling and design. The approach of indoor space modeling based on 3D laser point cloud data and parametric component construction can enhance modeling efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
Digital technology provides a means of quantitative investigation and data analysis for contemporary landscape spatial analysis, and related research is moving from image recognition to digital algorithmic analysis, providing a more scientific and macroscopic way of working. The key to refined design is to refine the spatial design process and the system of spatial improvement strategies. Taking the ancient city of Zhaoyu in Qixian County, Shanxi Province as an example: (1) based on integrated data of the ancient city obtained through drone oblique photography, the style and landscape of the ancient city are modeled; (2) the point cloud data with spatial information are imported into a point cloud analysis platform, and data analysis is carried out from the overall macroscopic style of the ancient city down to the refinement level, resulting in a more intuitive landscape design scheme and thus improving the precision and practicability of the landscape design; (3) based on spatial big data, a refined analysis of the site is achieved starting from an evaluation index system covering spatial aggregation level, spatial distribution characteristics, and other indicators. Digital technologies and methods are used throughout the process to explore a refined design path.
Taking AutoCAD 2000 as the platform, an algorithm for the reconstruction of surfaces from scattered data points based on VBA is presented. With this core technology, customers can move beyond using AutoCAD as an electronic drawing board and begin to create actual presentations of real-world objects. VBA is not only a very powerful development tool, it also has very simple syntax. Working with the solids, objects, and commands of AutoCAD 2000, VBA notably simplifies previously complex algorithms, graphical presentation and processing, etc. Meanwhile, it avoids the complex data structures and data formats that arise in reverse design with other modeling software. Applying VBA to reverse engineering can greatly improve modeling efficiency and facilitate surface reconstruction.
According to the requirements of heterogeneous object modeling in additive manufacturing (AM), the Non-Uniform Rational B-Spline (NURBS) method is applied to the digital representation of heterogeneous objects in this paper. By putting forward a NURBS material data structure and establishing a heterogeneous NURBS object model, an accurate, unified mathematical representation of analytical and free-form heterogeneous objects is realized. With the inverse modeling of heterogeneous NURBS objects, the geometry and material distribution can be better designed to meet actual needs. A Radial Basis Function (RBF) method based on global surface reconstruction and the tensor product surface interpolation method are combined into an RBF-NURBS inverse construction method. The geometric and/or material information of regular mesh points is obtained by RBF interpolation of the scattered data, and the heterogeneous NURBS surface or object model is obtained by tensor product interpolation. Examples show that heterogeneous objects fitting scattered data points can be generated effectively by the inverse construction methods in this paper, and that 3D CAD models for additive manufacturing can be provided.
In the dynamic environment of hospitals, valuable real-world data often remain underutilised despite their potential to revolutionize cancer research and personalised medicine. This study explores the challenges and opportunities in managing hospital-generated data, particularly within the Masaryk Memorial Cancer Institute (MMCI) in Brno, Czech Republic. Utilizing Next-Generation Sequencing (NGS) technology, MMCI generates substantial volumes of genomic data. Due to inadequate curation, these data remain difficult to integrate with clinical records for secondary use (such as personalised treatment outcome prediction and patient stratification based on their genomic profiles). This paper proposes solutions based on the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) to enhance data sharing and reuse. The primary output of our work is the development of an automated pipeline that continuously processes and integrates NGS data with clinical and biobank information upon their creation. It stores the data in a special secured repository for sensitive data in a structured form to ensure smooth retrieval.
To transmit customer power data collected by smart meters (SMs) to utility companies, the data must first be transmitted to the corresponding data aggregation point (DAP) of the SM. The number of DAPs installed and their installation locations greatly impact the whole network. For traditional DAP placement algorithms, the number of DAPs must be set in advance, but determining the best number of DAPs is difficult, which undoubtedly reduces the overall performance of the network. Moreover, an excessive gap between the loads of different DAPs is also an important factor affecting the quality of the network. To address these problems, this paper proposes a DAP placement algorithm, APSSA, based on an improved affinity propagation (AP) algorithm and the sparrow search algorithm (SSA), which can select the appropriate number of DAPs to install and the corresponding installation locations according to the number of SMs and their distribution in different environments. The algorithm adds an allocation mechanism to the SSA to optimize the subnetwork. APSSA is evaluated in three different areas and compared with other DAP placement algorithms. The experimental results validate that the proposed method can reduce the network cost, shorten the average transmission distance, and reduce the load gap.
Parameterization is one of the key problems in constructing a curve that interpolates a set of ordered points. We propose a new local parameterization method based on a curvature model. The new method determines the knots by minimizing the maximum curvature of a quadratic curve. When the knots determined by the new method are used to construct the interpolation curve, the constructed curve has good precision. We also compare the new method with existing methods: our method performs better in interpolation error, and the interpolated curve is fairer.
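For context, the standard chord-length parameterization that curvature-based knot choices are usually measured against can be sketched as below; the paper's curvature-based knots are computed differently, so this is a baseline only.

```python
def chord_length_knots(points):
    """Chord-length parameterization: knot spacing proportional to the
    distance between consecutive points, normalized to [0, 1]."""
    d = [((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
         for (x0, y0), (x1, y1) in zip(points, points[1:])]
    total = sum(d)
    knots, acc = [0.0], 0.0
    for di in d:
        acc += di
        knots.append(acc / total)
    return knots
```

For points spaced 1 and 2 apart, the knots land at 0, 1/3, and 1, reflecting the chord lengths.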
In this contribution we deal with the problem of producing "reasonable" data when the recorded energy consumption data under consideration are incomplete and/or erroneous in certain sections. This task is important when energy providers employ prediction models for expected energy consumption that are based on past recorded consumption data, which should then of course be reliable and valid. In a related contribution, Yilmaz (2022) investigated GAN-based methods for producing such "artificial data". Here, we describe an alternative and complementary method based on signal inpainting, which has been successfully applied to audio processing (Lieb and Stark, 2018). After giving a short overview of the theory of proximity-based convex optimization, we describe and adapt an iterative inpainting scheme to our problem. The usefulness of this approach is demonstrated by analyzing real-world data provided by a German energy supplier.
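A toy one-dimensional analogue of iterative inpainting alternates a regularizing step with a data-consistency step; the moving-average smoother below is only a loose stand-in for a proper proximal operator, and the function name is illustrative.

```python
def inpaint_series(values, known_mask, iters=200):
    """Fill missing entries of a 1D series by alternating projections:
    (1) smooth the whole signal with a 3-tap moving average,
    (2) re-impose the known samples exactly. Toy analogue of
    proximity-based inpainting, not the cited scheme."""
    x = [v if k else 0.0 for v, k in zip(values, known_mask)]
    for _ in range(iters):
        # Regularizing step (stand-in for the proximal operator).
        s = [(x[max(i-1, 0)] + x[i] + x[min(i+1, len(x)-1)]) / 3.0
             for i in range(len(x))]
        # Data-consistency step: known samples stay fixed.
        x = [v if k else si for v, k, si in zip(values, known_mask, s)]
    return x
```

A gap between two known samples converges to their smoothed interpolant while the known samples are preserved exactly.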
An integrated processing system for the visualization of three-dimensional laser scanning information in goafs was developed. It provides multiple functions, such as laser scanning information management for goafs, point cloud de-noising and optimization, construction, display, and manipulation of three-dimensional models, model editing, profile generation, calculation of goaf volume and roof area, Boolean operations among models, and interaction with third-party software. With a concise interface and plentiful data input/output interfaces, the system features high integration and simple, convenient operation. Practice shows that, in addition to being well adapted, the system is reliable and stable.
Plant height can be used for assessing plant vigor and predicting biomass and yield. Manual measurement of plant height is time-consuming and labor-intensive. We describe a method for measuring maize plant height using an RGB-D camera that captures a color image and depth information of plants under field conditions. The color image was first processed to locate its central area using the S component in HSV color space and the Density-Based Spatial Clustering of Applications with Noise algorithm. Testing showed that the central areas of plants could be accurately located. The point cloud data were then clustered and the plant was extracted based on the located central area. The point cloud data were further processed to generate skeletons, whose end points were detected and used to extract the highest points of the central leaves. Finally, the height differences between the ground and the highest points of the central leaves were calculated to determine plant heights. The coefficients of determination for plant heights manually measured and estimated by the proposed approach were all greater than 0.95. The method can effectively extract the plant from overlapping leaves and estimate its plant height. The proposed method may facilitate maize height measurement and monitoring under field conditions.
Leaf normal distribution is an important structural characteristic of the forest canopy. Although terrestrial laser scanners (TLS) have potential for estimating canopy structural parameters, distinguishing between leaves and nonphotosynthetic structures to retrieve leaf normals has been challenging. We used an approach to accurately retrieve the leaf normals of camphorwood (Cinnamomum camphora) from TLS point cloud data. First, nonphotosynthetic structures were filtered out using a curvature threshold for each point. Then, the point cloud data were segmented by a voxel method and clustered by a Gaussian mixture model in each voxel. Finally, the normal vector of each cluster was computed by principal component analysis to obtain the leaf normal distribution. We collected leaf inclination angles and estimated their distribution, which we compared with the retrieved leaf normal distribution. The correlation coefficient between measurements and results was 0.96, indicating good agreement.
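The PCA step — taking a leaf cluster's normal as the direction of least variance — can be sketched as follows (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def cluster_normal(points):
    """Estimate a planar cluster's normal as the eigenvector of the
    covariance matrix with the smallest eigenvalue (PCA). Sketch of the
    leaf-normal step described above."""
    P = np.asarray(points, float)
    C = np.cov((P - P.mean(0)).T)   # 3x3 covariance of the cluster
    w, V = np.linalg.eigh(C)        # eigenvalues in ascending order
    n = V[:, 0]                     # direction of least variance
    return n / np.linalg.norm(n)
```

For a cluster lying in the z = 0 plane, the recovered normal is ±(0, 0, 1); the sign is inherently ambiguous for an unoriented surface patch.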
As the point cloud of a whole vehicle body has the traits of large geometric dimensions, huge data volume, and rigorous reverse-engineering precision requirements, a pretreatment algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm based on skeleton points is to construct the skeleton points of the whole vehicle model and the mark points of the separate point clouds, to search the mapping between skeleton points and mark points using a congruent triangle method, and to match the whole vehicle point cloud using the improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses data in three steps: computing each dataset's average square root of distance within a sampling cube grid, sorting according to the computed values, and choosing a sampling percentage. The accuracy of the two algorithms is demonstrated by a registration and reduction example on the whole-vehicle point cloud of a light truck.
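Grid-based reduction of this kind can be illustrated with a simplified stand-in that keeps one centroid per occupied sampling cube; this is not the paper's exact average-square-root-of-distance criterion, only the common voxel-downsampling idea it resembles.

```python
from collections import defaultdict

def grid_downsample(points, cell=1.0):
    """Reduce a 3D point cloud by replacing all points in each occupied
    cubic grid cell with their centroid. Simplified sketch of grid-based
    reduction, not the paper's three-step criterion."""
    cells = defaultdict(list)
    for p in points:
        cells[tuple(int(c // cell) for c in p)].append(p)
    # One representative point (the centroid) per occupied cell.
    return [tuple(sum(q[i] for q in pts) / len(pts) for i in range(3))
            for pts in cells.values()]
```

Two points sharing a cube collapse to their midpoint while isolated points survive unchanged, so the output size equals the number of occupied cells.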
Funding (swarm-robot surface-vehicle study): supported by the Future Challenge Program through the Agency for Defense Development, funded by the Defense Acquisition Program Administration (No. UC200015RD).
Funding (cavity decay time study): supported by the Preeminent Youth Fund of Sichuan Province, China (Grant No. 2012JQ0012), the National Natural Science Foundation of China (Grant Nos. 11173008, 10974202, and 60978049), and the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2).
Funding (forest fire study): financially supported by the National Natural Science Foundation of China (Grant Nos. 42161065 and 41461038).
Funding: National Natural Science Foundation of China (No. 41801379); Fundamental Research Funds for the Central Universities (No. 2019B08414); National Key R&D Program of China (No. 2016YFC0401801).
Abstract: Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density, high-accuracy point cloud data in a few minutes, which makes it promising for tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis on dense TLS point clouds is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score test is applied to detect and remove outliers. Because the standard shape of a tunnel cross-section is circular, a circle is fitted using the least-squares method. Afterward, convergence analysis is performed at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate an overall accuracy of 1.34 mm, in agreement with measurements acquired by a total station. The proposed methodology provides new insights and references for TLS applications in tunnel deformation monitoring and can be extended to other engineering applications.
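The outlier-removal and circle-fitting steps can be sketched as follows (a minimal sketch using the algebraic Kåsa least-squares circle fit and a plain z-score test; the paper's exact formulation is not reproduced here):

```python
import math

def zscore_filter(vals, k=3.0):
    """Flag values whose z-score magnitude is within k; outliers get False."""
    n = len(vals)
    mean = sum(vals) / n
    sd = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
    return [abs(v - mean) <= k * sd for v in vals]

def det3(m):
    """Determinant of a 3x3 matrix (for Cramer's rule)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_circle(pts):
    """Kasa least-squares circle fit: minimize the algebraic residual of
    x^2 + y^2 + D*x + E*y + F = 0 over the cross-section points."""
    # Normal equations A^T A p = A^T b with rows [x, y, 1], b = -(x^2 + y^2)
    Sxx = Sxy = Syy = Sx = Sy = S1 = bx = by = b1 = 0.0
    for x, y in pts:
        z = -(x * x + y * y)
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y; S1 += 1.0
        bx += x * z; by += y * z; b1 += z
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, S1]]
    d = det3(M)
    D = det3([[bx, Sxy, Sx], [by, Syy, Sy], [b1, Sy, S1]]) / d
    E = det3([[Sxx, bx, Sx], [Sxy, by, Sy], [Sx, b1, S1]]) / d
    F = det3([[Sxx, Sxy, bx], [Sxy, Syy, by], [Sx, Sy, b1]]) / d
    cx, cy = -D / 2.0, -E / 2.0
    r = math.sqrt(cx * cx + cy * cy - F)
    return cx, cy, r
```

Radial residuals against the fitted circle can then feed the convergence analysis at the chosen angles.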
Abstract: Multi-view laser radar (ladar) data registration in obscured environments is an important research field in air-to-ground detection of obscured targets. Because of the occluder, the observational data from different views share few overlap regions, which makes multi-view registration rather difficult. An in-depth analysis of typical methods and their problems shows that sequential registration is the more appropriate choice, but its accuracy needs improvement. On this basis, a multi-view registration algorithm is proposed that aggregates adjacent, already-registered frames. Aggregation increases the overlap between the frames awaiting registration and thereby improves the registration accuracy. Experimental results show that the proposed algorithm effectively registers multi-view ladar data in obscured environments, with greater robustness and higher registration accuracy than sequential registration at equivalent operating efficiency.
Funding: Supported by the 2023 Innovation and Entrepreneurship Training Program for College Students of North China University of Technology.
Abstract: To enhance modeling efficiency and accuracy, we used 3D laser point cloud data for indoor space modeling. Point cloud data were obtained with a 3D laser scanner and optimized with Autodesk ReCap and Revit to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The outcomes of this research offer new methods and tools for indoor space modeling and design: modeling based on 3D laser point cloud data and parametric component construction can improve efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
Abstract: Digital technology provides a means of quantitative investigation and data analysis for contemporary landscape spatial analysis, and related research is moving from image recognition to digital algorithmic analysis, a more scientific and macroscopic way of working. The key to refined design is to refine both the spatial design process and the system of spatial improvement strategies. Taking the ancient city of Zhaoyu in Qixian County, Shanxi Province as an example: (1) the style and landscape of the ancient city are modeled from integrated data obtained by drone oblique photography; (2) the point cloud data with spatial information are imported into a point cloud analysis platform and analyzed from the overall macroscopic style of the city down to the refinement level, yielding a more intuitive landscape design scheme and thus improving the precision and practicability of the design; (3) based on spatial big data, a refined analysis of the site is achieved through an evaluation index system covering the spatial aggregation level, spatial distribution characteristics, and other indices. Digital technology and methods are used throughout to explore a refined design path.
Abstract: Taking AutoCAD 2000 as the platform, an algorithm for surface reconstruction from scattered data points based on VBA is presented. With this core technology, customers are freed from using traditional AutoCAD as a mere electronic drawing board and can begin to create actual presentations of real-world objects. VBA is not only a very powerful development tool but also has very simple syntax. Working with the solids, objects, and commands of AutoCAD 2000, VBA notably simplifies previously complex algorithms, graphical presentation, and processing. Meanwhile, it avoids the complex data structures and data formats that arise in reverse design with other modeling software. Applying VBA to reverse engineering can greatly improve modeling efficiency and facilitate surface reconstruction.
Abstract: To meet the requirements of heterogeneous object modeling in additive manufacturing (AM), the Non-Uniform Rational B-Spline (NURBS) method is applied to the digital representation of heterogeneous objects in this paper. By putting forward a NURBS material data structure and establishing a heterogeneous NURBS object model, an accurate, unified mathematical representation of analytical and free-form heterogeneous objects is realized. With inverse modeling of heterogeneous NURBS objects, the geometry and material distribution can be better designed to meet actual needs. A Radial Basis Function (RBF) method based on global surface reconstruction and the tensor product surface interpolation method are combined into an RBF-NURBS inverse construction method: the geometric and/or material information at regular mesh points is obtained by RBF interpolation of the scattered data, and the heterogeneous NURBS surface or object model is obtained by tensor product interpolation. Examples show that heterogeneous objects fitting scattered data points can be generated effectively by these inverse construction methods, providing 3D CAD models for additive manufacturing.
Funding: This work received funding from the project SALVAGE (P JAC reg. no. CZ.02.01.01/00/22_008/0004644), funded by the European Union and by the State Budget of the Czech Republic; from MH CZ-DRO (MMCI, 00209805); and from BBMRI.cz (no. LM2023033). Computational resources were provided by the e-INFRA CZ project (no. LM2023054).
Abstract: In the dynamic environment of hospitals, valuable real-world data often remain underutilised despite their potential to revolutionise cancer research and personalised medicine. This study explores the challenges and opportunities in managing hospital-generated data, particularly within the Masaryk Memorial Cancer Institute (MMCI) in Brno, Czech Republic. Using Next-Generation Sequencing (NGS) technology, MMCI generates substantial volumes of genomic data. Owing to inadequate curation, these data remain difficult to integrate with clinical records for secondary use, such as personalised treatment outcome prediction and patient stratification based on genomic profiles. This paper proposes solutions based on the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) to enhance data sharing and reuse. The primary output of our work is an automated pipeline that continuously processes and integrates NGS data with clinical and biobank information as they are created, storing the data in structured form in a dedicated secure repository for sensitive data to ensure smooth retrieval.
Funding: Supported by the Fujian University of Technology under Grants GYZ20016, GY-Z18183, and GY-Z19005; partially supported by the National Science and Technology Council under Grant NSTC 113-2221-E-224-056-.
Abstract: To transmit customer power data collected by smart meters (SMs) to utility companies, the data must first be transmitted to the data aggregation point (DAP) corresponding to each SM. The number of DAPs installed and their installation locations greatly affect the whole network. Traditional DAP placement algorithms require the number of DAPs to be set in advance, but determining the best number is difficult, which undoubtedly degrades overall network performance. Moreover, an excessive gap between the loads of different DAPs also affects network quality. To address these problems, this paper proposes APSSA, a DAP placement algorithm based on an improved affinity propagation (AP) algorithm and the sparrow search algorithm (SSA), which selects an appropriate number of DAPs and the corresponding installation locations according to the number of SMs and their distribution in different environments. The algorithm adds an allocation mechanism to the SSA to optimize the subnetworks. APSSA is evaluated in three different areas and compared with other DAP placement algorithms. The experimental results validate that the proposed method reduces network cost, shortens the average transmission distance, and narrows the load gap.
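The load-gap criterion the placement balances can be illustrated with a nearest-DAP assignment (a simplified sketch with planar coordinates and one unit of load per SM; APSSA's actual allocation mechanism is more involved):

```python
def assign_and_load_gap(sms, daps):
    """Assign each smart meter to its nearest DAP and report the load gap
    (heaviest minus lightest DAP load), the balance metric a good placement
    keeps small. Inputs are (x, y) coordinate tuples."""
    loads = [0] * len(daps)
    for sx, sy in sms:
        # index of the nearest DAP by squared Euclidean distance
        i = min(range(len(daps)),
                key=lambda k: (daps[k][0] - sx) ** 2 + (daps[k][1] - sy) ** 2)
        loads[i] += 1
    return loads, max(loads) - min(loads)
```

A candidate placement with a smaller gap (and shorter average SM-to-DAP distance) is preferred during the search.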
Funding: Supported by the National Research Foundation for the Doctoral Program of Higher Education of China (20110131130004) and the Independent Innovation Foundation of Shandong University, IIFSDU (2012TB013).
Abstract: Parameterization is one of the key problems in constructing a curve that interpolates a set of ordered points. We propose a new local parameterization method based on a curvature model. The new method determines the knots by minimizing the maximum curvature of a quadratic curve. When the knots determined by the new method are used to construct the interpolation curve, the resulting curve has good precision. We also compare the new method with existing methods: it performs better in interpolation error, and the interpolated curve is fairer.
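For context, chord-length parameterization, one of the standard local methods such comparisons are made against, can be computed as follows (a minimal sketch; the paper's curvature-based knot computation itself is not reproduced here):

```python
import math

def chord_length_knots(points):
    """Chord-length parameterization: knots are cumulative chord lengths
    between consecutive data points, normalized to [0, 1]."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1]
    return [t / total for t in d]
```

Uneven spacing of the data points directly shows up in the knot spacing, which is one reason curvature-aware alternatives can yield fairer interpolants.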
Funding: Supported by the German Ministry of Education and Research (BMBF) within the project "AGENS: Analytisch-generative Netzwerke zur Systemidentifikation" (grant no. 05M20WFA).
Abstract: In this contribution we address the problem of producing "reasonable" data from recorded energy consumption data that are incomplete and/or erroneous in certain sections. This task matters when energy providers employ prediction models for expected energy consumption that are based on past recorded consumption data, which should of course be reliable and valid. In a related contribution, Yilmaz (2022) investigated GAN-based methods for producing such "artificial data". Here we describe an alternative and complementary method based on signal inpainting, which has been successfully applied in audio processing (Lieb and Stark, 2018). After a short overview of the theory of proximity-based convex optimization, we describe an iterative inpainting scheme and adapt it to our problem. The usefulness of this approach is demonstrated on real-world data provided by a German energy supplier.
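The alternating structure of such schemes can be illustrated with a toy example (a sketch only: a local-averaging step stands in for the proximity operator, followed by a projection that re-imposes the recorded samples; this is not the paper's actual algorithm):

```python
def inpaint(signal, known, iters=500):
    """Toy iterative inpainting: alternate a smoothing step (a crude
    stand-in for a proximity operator favoring low-variation signals)
    with a projection onto the set of signals that agree with the
    recorded samples. `known[i]` marks trusted samples."""
    n = len(signal)
    x = [v if k else 0.0 for v, k in zip(signal, known)]
    for _ in range(iters):
        # smoothing step: three-point local averaging
        x = [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
             for i in range(n)]
        # projection step: restore the recorded samples exactly
        x = [signal[i] if known[i] else x[i] for i in range(n)]
    return x
```

Missing interior samples converge toward values consistent with their trusted neighbors, which is the behavior one wants from consumption-data gap filling.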
Funding: Project 51274250 supported by the National Natural Science Foundation of China; Project 2012BAK09B02-05 supported by the National Key Technology R&D Program during the 12th Five-Year Plan of China.
Abstract: An integrated processing system for visualizing three-dimensional laser scanning information in goafs was developed. It provides multiple functions: laser scanning information management for the goaf; point cloud de-noising and optimization; construction, display, and manipulation of the three-dimensional model; model editing; profile generation; calculation of goaf volume and roof area; Boolean operations among models; and interaction with third-party software. The system has a concise interface and plentiful data input/output interfaces, and features high integration and simple, convenient operation. In practice, the system has proved well-adapted, reliable, and stable.
Funding: Supported by the Key Project of Intergovernmental Collaboration for Science and Technology Innovation under the National Key R&D Plan (2019YFE0103800) and the CAU Special Fund to Build World-class University (in Disciplines) and Guide Distinctive Development (2021AC006).
Abstract: Plant height can be used to assess plant vigor and predict biomass and yield. Manual measurement of plant height is time-consuming and labor-intensive. We describe a method for measuring maize plant height using an RGB-D camera that captures a color image and depth information of plants under field conditions. The color image is first processed to locate its central area using the S component in HSV color space and the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm; testing showed that the central areas of plants could be accurately located. The point cloud data are then clustered and the plant is extracted based on the located central area. The point cloud is further processed to generate skeletons, whose end points are detected and used to extract the highest points of the central leaves. Finally, the height differences between the ground and the highest points of the central leaves are calculated to determine plant heights. The coefficients of determination between manually measured plant heights and those estimated by the proposed approach were all greater than 0.95. The method can effectively extract a plant from overlapping leaves and estimate its height, and may facilitate maize height measurement and monitoring under field conditions.
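The final height computation can be sketched as follows (a minimal sketch assuming the ground points and central-leaf skeleton end points have already been extracted; the median ground level is an illustrative choice):

```python
def plant_height(ground_points, leaf_tip_points):
    """Plant height as the difference between the ground level and the
    highest central-leaf point. Ground level is taken here as the median
    z of the detected ground points, which is robust to stray returns.
    Points are (x, y, z) tuples."""
    zs = sorted(z for _, _, z in ground_points)
    ground_z = zs[len(zs) // 2]
    top_z = max(z for _, _, z in leaf_tip_points)
    return top_z - ground_z
```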
Abstract: Leaf normal distribution is an important structural characteristic of the forest canopy. Although terrestrial laser scanners (TLS) have potential for estimating canopy structural parameters, distinguishing leaves from non-photosynthetic structures to retrieve leaf normals has been challenging. Here we use an approach to accurately retrieve the leaf normals of camphorwood (Cinnamomum camphora) from TLS point cloud data. First, non-photosynthetic structures are filtered out using a curvature threshold at each point. Then, the point cloud is segmented by a voxel method and clustered by a Gaussian mixture model within each voxel. Finally, the normal vector of each cluster is computed by principal component analysis to obtain the leaf normal distribution. We collected leaf inclination angles, estimated their distribution, and compared it with the retrieved leaf normal distribution. The correlation coefficient between measurements and retrieved results was 0.96, indicating good agreement.
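The per-cluster normal step can be illustrated with a least-squares plane fit (a simplified stand-in for the PCA step: for a thin, near-planar leaf cluster the fitted plane's normal coincides with the direction of smallest variance; the helper is illustrative):

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix (for Cramer's rule)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def leaf_normal(points):
    """Unit normal of a near-planar point cluster via least-squares fit of
    z = a*x + b*y + c; the returned normal is (a, b, -1) normalized."""
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    n = 0
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
        n += 1
    # Normal equations for the plane coefficients (a, b, c)
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, float(n)]]
    d = det3(M)
    a = det3([[Sxz, Sxy, Sx], [Syz, Syy, Sy], [Sz, Sy, float(n)]]) / d
    b = det3([[Sxx, Sxz, Sx], [Sxy, Syz, Sy], [Sx, Sz, float(n)]]) / d
    norm = math.sqrt(a * a + b * b + 1.0)
    return (a / norm, b / norm, -1.0 / norm)
```

Collecting these unit normals over all leaf clusters gives the empirical leaf normal distribution the abstract compares against field measurements.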
Funding: This project was supported by the Provincial Technology Cooperation Program of Yunnan, China (No. 2003EAAAA00D043).
Abstract: Because the point cloud of a whole vehicle body has large geometric dimensions, a huge data volume, and rigorous reverse-engineering precision requirements, a pre-processing algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm, which is based on skeleton points, is to construct the skeleton points of the whole vehicle model and the mark points of each separate point cloud, search for the mapping between skeleton points and mark points using a congruent-triangle method, and match the whole vehicle point cloud using an improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses the data in three steps: computing each dataset's average square root of distance within a sampling cube grid, sorting by the value computed in the first step, and choosing a sampling percentage. The accuracy of both algorithms is demonstrated by a registration and reduction example on the whole vehicle point cloud of a light truck.
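The sampling-cube stage of such reduction can be sketched as follows (a simplified sketch that keeps one centroid per occupied cube; the paper's algorithm additionally ranks cubes by the average-square-root-of-distance measure and applies a sampling percentage):

```python
import math

def voxel_reduce(points, cell):
    """Voxel-grid reduction for 3D points: bucket points into cubes of side
    `cell` and keep one representative (the centroid) per occupied cube."""
    buckets = {}
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        buckets.setdefault(key, []).append(p)
    out = []
    for pts in buckets.values():
        n = len(pts)
        out.append(tuple(sum(q[i] for q in pts) / n for i in range(3)))
    return out
```

The cube size trades reduction ratio against surface fidelity: larger cubes condense more aggressively but blur fine body features.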