Groundwater is a crucial water source for urban areas in Africa, particularly where surface water is insufficient to meet demand. This study analyses the water quality of five shallow wells (WW1-WW5) in Half-London Ward, Tunduma Town, Tanzania, using Principal Component Analysis (PCA) to identify the primary factors influencing groundwater contamination. Monthly samples were collected over 12 months and analysed for physical, chemical, and biological parameters. The PCA revealed between four and six principal components (PCs) for each well, explaining between 84.61% and 92.55% of the total variance in water quality data. In WW1, five PCs captured 87.53% of the variability, with PC1 (33.05%) dominated by pH, EC, TDS, and microbial contamination, suggesting significant influences from surface runoff and pit latrines. In WW2, six PCs explained 92.55% of the variance, with PC1 (36.17%) highlighting the effects of salinity, TDS, and agricultural runoff. WW3 had four PCs explaining 84.61% of the variance, with PC1 (39.63%) showing high contributions from pH, hardness, and salinity, indicating geological influences and contamination from human activities. Similarly, in WW4, six PCs explained 90.83% of the variance, where PC1 (43.53%) revealed contamination from pit latrines and fertilizers. WW5 also had six PCs, accounting for 92.51% of the variance, with PC1 (42.73%) indicating significant contamination from agricultural runoff and pit latrines. The study concludes that groundwater quality in Half-London Ward is primarily affected by a combination of surface runoff, pit latrine contamination, agricultural inputs, and geological factors. The presence of microbial contaminants and elevated nitrate and phosphate levels underscores the need for improved sanitation and sustainable agricultural practices. Recommendations include strengthening sanitation infrastructure, promoting responsible farming techniques, and implementing regular groundwater monitoring to safeguard water resources and public health in the region.
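As a rough illustration of this per-well workflow, the sketch below standardizes a 12-month parameter table and reads off how many principal components reach a variance threshold and which parameters load on PC1. The parameter names, the synthetic data, and the 85% cut-off are stand-ins for demonstration, not values from the study:

```python
# Illustrative sketch (not the authors' code): PCA on one well's monthly
# water-quality table. Column names and the 85% cut-off are assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
params = ["pH", "EC", "TDS", "hardness", "nitrate", "phosphate", "E_coli"]
X = pd.DataFrame(rng.normal(size=(12, len(params))), columns=params)  # 12 monthly samples

# Standardize so each parameter contributes equally, then fit PCA.
Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)

cum = np.cumsum(pca.explained_variance_ratio_)
n_pc = int(np.searchsorted(cum, 0.85) + 1)   # PCs needed for ~85% of variance
print(f"{n_pc} PCs explain {cum[n_pc - 1]:.2%} of total variance")

# Loadings of PC1 show which parameters dominate it (e.g. pH, EC, TDS).
loadings = pd.Series(pca.components_[0], index=params)
print(loadings.sort_values(key=np.abs, ascending=False))
```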
Joint roughness coefficient (JRC) is the most commonly used parameter for quantifying the surface roughness of rock discontinuities in practice. The system of multiple roughness statistical parameters used to measure JRC is nonlinear and carries considerable overlapping information. In this paper, a dataset of eight roughness statistical parameters covering 112 digital joints is established. The principal component analysis method is then introduced to extract the significant information, which solves the information-overlap problem of roughness characterization. Based on the two extracted principal components, the white shark optimizer algorithm is introduced to optimize an extreme gradient boosting model, and a new machine learning (ML) prediction model is established. The prediction accuracy of the new model and of 17 other models is measured using statistical metrics. The results show that the predictions of the new model are more consistent with the real JRC values, with higher recognition accuracy and generalization ability.
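A hedged sketch of the pipeline's shape follows: eight roughness statistics reduced by PCA to two components, then a gradient-boosted regressor predicting JRC. Synthetic data stand in for the 112 digital joints, and scikit-learn's GradientBoostingRegressor stands in for the WSO-tuned XGBoost model:

```python
# Sketch only: PCA feature extraction feeding a gradient-boosted JRC regressor.
# Synthetic joints; GradientBoostingRegressor replaces the WSO-tuned XGBoost.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(112, 8))                             # 8 roughness statistics per joint
jrc = 10 + X[:, 0] * 4 + rng.normal(scale=0.5, size=112)  # toy JRC target

X_tr, X_te, y_tr, y_te = train_test_split(X, jrc, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(), PCA(n_components=2),
                      GradientBoostingRegressor(random_state=1))
model.fit(X_tr, y_tr)
print("R2 on held-out joints:", r2_score(y_te, model.predict(X_te)))
```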
Due to the complexity of underground engineering geology, the tunnel boring machine (TBM) usually shows poor adaptability to the surrounding rock mass, leading to machine jamming and geological hazards. For the TBM project of Lanzhou Water Source Construction, this study proposed a neural network called PCA-GRU, which combines principal component analysis (PCA) with a gated recurrent unit (GRU) to improve the accuracy of predicting rock mass classification in TBM tunneling. The input variables from the PCA dimension reduction of nine parameters in the sample data set were utilized for establishing the PCA-GRU model. Subsequently, in order to speed up the response time of surrounding rock mass classification predictions, the PCA-GRU model was optimized. Finally, the prediction results obtained by the PCA-GRU model were compared with those of four other models and further examined using random sampling analysis. As indicated by the results, the PCA-GRU model can predict the rock mass classification in TBM tunneling rapidly, requiring about 20 s to run. It performs better than the other four models in predicting the rock mass classification, with accuracy A, macro precision MP, and macro recall MR being 0.9667, 0.963, and 0.9763, respectively. In Class II, III, and IV rock mass prediction, the PCA-GRU model demonstrates better precision P and recall R owing to the dimension reduction technique. The random sampling analysis indicates that the PCA-GRU model shows stronger generalization, making it more appropriate for situations where the percentage distribution of rock mass classes and lithologies changes.
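The structure of such a PCA-GRU classifier might look like the following sketch (assumed shapes and sizes, not the paper's implementation): nine TBM parameters are PCA-reduced, and short sequences of the reduced features feed a GRU whose last hidden state is mapped to rock mass classes:

```python
# Rough sketch of a PCA-GRU classifier's structure; all sizes are assumptions.
import torch
import torch.nn as nn
import numpy as np
from sklearn.decomposition import PCA

n_samples, seq_len, n_raw, n_pc, n_classes = 256, 10, 9, 4, 4
raw = np.random.default_rng(2).normal(size=(n_samples * seq_len, n_raw))
reduced = PCA(n_components=n_pc).fit_transform(raw).reshape(n_samples, seq_len, n_pc)

class PCAGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=n_pc, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_classes)
    def forward(self, x):
        _, h = self.gru(x)          # h: (1, batch, 32), last hidden state
        return self.head(h[-1])     # logits per rock mass class

model = PCAGRU()
logits = model(torch.tensor(reduced, dtype=torch.float32))
print(logits.argmax(dim=1)[:10])    # predicted classes (untrained, illustrative)
```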
Ore production is usually affected by multiple influencing inputs at open-pit mines. Nevertheless, the complex nonlinear relationships between these inputs and ore production remain unclear. This becomes even more challenging when training data (e.g. truck haulage information and weather conditions) are massive. Among machine learning (ML) algorithms, the deep neural network (DNN) is a superior method for processing nonlinear and massive data by adjusting the number of neurons and hidden layers. This study adopted DNN to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data. Before the prediction models were built, principal component analysis (PCA) was employed to reduce the data dimensionality and eliminate the multicollinearity among highly correlated input variables. To verify the superiority of DNN, three ANNs containing only one hidden layer and six traditional ML models were established as benchmark models. The DNN model with multiple hidden layers performed better than the ANN models with a single hidden layer and outperformed the extensively applied benchmark models in predicting ore production. This provides engineers and researchers with an accurate method to forecast ore production, which helps make sound budgetary decisions and mine planning at open-pit mines.
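A minimal sketch of this setup, with placeholder features and sizes: haulage and weather inputs are PCA-decorrelated, then a multi-hidden-layer network regresses ore production:

```python
# Sketch under assumed inputs: PCA removes multicollinearity, then a DNN
# (multiple hidden layers) regresses daily ore production. Toy data only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 12))                                 # haulage + weather features
y = X @ rng.normal(size=12) + rng.normal(scale=0.1, size=1000)  # toy tonnage

dnn = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),        # keep 95% of variance, dropping correlated directions
    MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=3),
)
dnn.fit(X, y)
print("training R2:", dnn.score(X, y))
```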
[Objectives] This study aimed to establish an HPLC fingerprint and conduct cluster analysis and principal component analysis for Citri Reticulatae Pericarpium Viride. [Methods] Using the HPLC method, the determination was performed on an XSelect® HSS T3-C18 column with a mobile phase of acetonitrile-0.5% acetic acid solution (gradient elution) at a flow rate of 1.0 mL/min. The detection wavelength was 360 nm, the column temperature was 25℃, and the injection volume was 10 μL. With the hesperidin peak as the reference, HPLC fingerprints of 10 batches of Citri Reticulatae Pericarpium Viride were determined. The similarity of the 10 batches of samples was evaluated by the Similarity Evaluation System for Chromatographic Fingerprint of TCM (2012 edition) to determine the common peaks. Cluster analysis and principal component analysis were performed using SPSS 17.0 statistical software. [Results] The HPLC fingerprints of the 10 batches of medicinal materials had 11 common peaks in total, and the similarity was 0.919-1.000, indicating that the chemical composition of the 10 batches was consistent. The 10 batches shared 11 common components, but their contents differed. At a Euclidean distance of 20, the 10 batches of samples were divided into two categories: S4 in the first category and the others in the second. At a Euclidean distance of 5, the second category could be further divided into two sub-categories: S1 and S10 in one sub-category, and S2, S3, S5, S6, S7, S8 and S9 in the other. The principal component analysis showed that the cumulative contribution rate of the two main component factors was 92.797%, and the comprehensive score of S7 was the highest, indicating the best quality. [Conclusions] The results of HPLC fingerprinting, cluster analysis and principal component analysis can provide a reference for the quality control of Citri Reticulatae Pericarpium Viride.
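The chemometric step could be sketched as follows, with a synthetic peak-area matrix in place of the measured chromatograms; the Euclidean-distance cuts and the variance-weighted composite score mirror the analysis described above:

```python
# Hedged sketch: hierarchical clustering (Euclidean, as in the SPSS analysis)
# and PCA on a synthetic 10-batch x 11-peak area matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
peak_areas = rng.gamma(shape=2.0, scale=1.0, size=(10, 11))  # 10 batches x 11 peaks

Z = linkage(peak_areas, method="ward", metric="euclidean")
print("2-cluster cut:", fcluster(Z, t=2, criterion="maxclust"))   # coarse split
print("3-cluster cut:", fcluster(Z, t=3, criterion="maxclust"))   # sub-categories

pca = PCA(n_components=2).fit(peak_areas)
scores = pca.transform(peak_areas)
print("cumulative variance of 2 PCs:", pca.explained_variance_ratio_.sum())
# A composite quality score can be formed by weighting PC scores by their
# variance contributions, as the study does to rank batches.
comp = scores @ pca.explained_variance_ratio_
print("best batch (toy data): S", int(np.argmax(comp)) + 1)
```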
The health condition of the milling tool has a very high impact on the machining quality of titanium components. Therefore, it is important to recognize the health condition of the tool and replace a damaged cutter at the right time. To this end, this paper proposes a recognition method based on long short-term memory (LSTM). The various signals collected in the tool wear experiments were analyzed by time-domain statistics, and the extracted features were then reduced by the principal component analysis (PCA) method. The PCA-preprocessed data are transmitted to the LSTM model for recognition. Compared with a back propagation neural network (BPNN) and a support vector machine (SVM), the proposed method can effectively exploit the time-domain regularities in the data to achieve higher recognition speed and accuracy.
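A loose sketch of this chain, with synthetic signals and assumed health states: time-domain statistics are extracted, PCA-reduced, and windows of the reduced features feed an LSTM classifier:

```python
# Illustrative only: time-domain stats -> PCA -> LSTM health classifier.
# Signal data, window sizes, and the three health states are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
sig = rng.normal(size=(300, 1024))               # 300 signal segments
feats = np.stack([sig.mean(1), sig.std(1),       # simple time-domain stats:
                  np.abs(sig).max(1),            # mean, std, peak, RMS
                  np.sqrt((sig ** 2).mean(1))], axis=1)
reduced = PCA(n_components=3).fit_transform(feats)
seqs = torch.tensor(reduced.reshape(30, 10, 3), dtype=torch.float32)  # windows of 10

lstm = nn.LSTM(input_size=3, hidden_size=16, batch_first=True)
head = nn.Linear(16, 3)                          # 3 assumed health states
_, (h, _) = lstm(seqs)
print(head(h[-1]).argmax(dim=1))                 # untrained predictions
```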
Principal component analysis (PCA) has already been employed for fault detection of air conditioning systems. A sliding window, composed of parameters satisfying thermal load balance, can select target historical fault-free reference data as a template similar to the current snapshot data. The size of the sliding window is usually set according to empirical values, and the influence of different window sizes on fault detection of an air conditioning system has not been further studied. The air conditioning system is a dynamic response process: the operating parameters change with the load, while the response of the controller is delayed. In a variable air volume (VAV) air conditioning system controlled by the total air volume method, in order to ensure sufficient response time, 30 data points are selected first, and then multiples of this are selected. Three sliding windows of 30, 60 and 90 data points are compared for their fault detection effect in this paper. The results show that with a sliding window of 60 data points, the average fault-free detection ratio is 80.17% on fault-free testing days, and the average fault detection ratio is 88.47% on faulty testing days.
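Window-based PCA fault detection of this kind is often implemented with a squared-prediction-error (SPE) control limit; the sketch below assumes that rule and uses synthetic reference data, with the window size w playing the role of the 30/60/90 data points compared above:

```python
# Sketch of sliding-window PCA fault detection with an assumed SPE rule:
# fit PCA on a fault-free window, flag snapshots whose residual exceeds a
# percentile-based control limit. Data and threshold are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def spe(model, Z):
    """Squared prediction error: residual after projecting onto retained PCs."""
    recon = model.inverse_transform(model.transform(Z))
    return ((Z - recon) ** 2).sum(axis=1)

rng = np.random.default_rng(6)
reference = rng.normal(size=(500, 8))            # fault-free operating data
snapshot = rng.normal(size=(1, 8)) + 3.0         # faulty-looking new sample

w = 60                                           # sliding-window size
window = reference[-w:]
scaler = StandardScaler().fit(window)
model = PCA(n_components=3).fit(scaler.transform(window))

limit = np.percentile(spe(model, scaler.transform(window)), 99)
flagged = spe(model, scaler.transform(snapshot))[0] > limit
print("fault detected:", flagged)
```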
The Internet of things (IoT) is a wireless network designed to perform specific tasks and plays a crucial role in fields such as environmental monitoring, surveillance, and healthcare. To address the limitations imposed by inadequate resources, energy, and network scalability, this type of network relies heavily on data aggregation and clustering algorithms. Although various conventional studies have aimed to enhance the lifespan of a network through robust systems, they do not always provide optimal efficiency for real-time applications. This paper presents an approach based on state-of-the-art machine-learning methods. In this study, we employed a novel approach that combines an extended version of principal component analysis (PCA) and a reinforcement learning algorithm to achieve efficient clustering and data reduction. The primary objectives are to enhance the service life of the network, reduce energy usage, and improve data aggregation efficiency. We evaluated the proposed methodology using data collected from sensors deployed in agricultural fields for crop monitoring. Our proposed approach (PQL) was compared to previous studies that utilized adaptive Q-learning (AQL) and regional energy-aware clustering (REAC). It outperformed both in terms of network longevity and energy consumption and established a fault-tolerant network.
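The paper's PQL scheme is not spelled out here, so the following is only a heavily hedged illustration of the two ingredients: PCA compresses sensor readings before transmission, and a tabular Q-learning update nudges cluster-head selection toward nodes with more residual energy. States, actions, and rewards are invented for the demo:

```python
# Heavily hedged sketch, not the PQL algorithm: PCA data reduction plus a
# tabular Q-learning rule for cluster-head choice. All dynamics are invented.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
readings = rng.normal(size=(100, 20))                     # 100 rounds x 20 sensors
compressed = PCA(n_components=5).fit_transform(readings)  # 4x fewer values sent

n_nodes, alpha, gamma = 20, 0.1, 0.9
Q = np.zeros(n_nodes)                            # one action per candidate head
energy = np.ones(n_nodes)
for round_data in compressed:
    head = int(np.argmax(Q)) if rng.random() > 0.1 else int(rng.integers(n_nodes))
    energy[head] -= 0.01 * np.abs(round_data).sum()   # head pays transmission cost
    reward = energy[head]                        # reward favors energetic heads
    Q[head] += alpha * (reward + gamma * Q.max() - Q[head])
print("preferred cluster head:", int(np.argmax(Q)))
```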
This paper aims to deepen the study of the quality of life of people with celiac disease, with a focus on compliance with the diet, through Principal Component Analysis and Analyse des Données. In particular, we try to understand whether these analyses are also applicable in the context of Web 2.0 research carried out with web surveys.
There are a variety of classification techniques, such as neural networks, decision trees, support vector machines and logistic regression. The problem of dimensionality is pertinent to many learning algorithms: it denotes a drastic rise in computational complexity, so dimensionality reduction methods are needed. These methods include principal component analysis (PCA) and locality preserving projection (LPP). In many real-world classification problems the local structure is more important than the global structure, yet many dimensionality reduction techniques ignore the local structure and preserve the global structure. The objectives are to compare PCA and LPP in terms of accuracy, to develop appropriate representations of complex data by reducing their dimensions, and to explain the importance of using LPP with logistic regression. This paper finds that the proposed LPP approach provides a better representation and higher accuracy than the PCA approach.
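One way to reproduce such a comparison is sketched below, with a minimal LPP written from the standard formulation (binary k-NN adjacency, generalized eigenproblem) purely for illustration, not as a reference implementation; both projections feed the same logistic regression:

```python
# Sketch of the PCA-vs-LPP comparison; the LPP here is a minimal textbook
# version (k-NN graph, generalized eigenproblem), written for illustration.
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler

def lpp(X, d, k=10):
    W = kneighbors_graph(X, k, mode="connectivity")
    W = (0.5 * (W + W.T)).toarray()     # symmetrized adjacency
    D = np.diag(W.sum(axis=1))
    L = D - W                           # graph Laplacian
    # Smallest generalized eigenvectors of X'LX a = lam X'DX a preserve locality.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # regularize for stability
    vals, vecs = eigh(A, B)
    return vecs[:, :d]

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

d = 20
for name, proj in [("PCA", PCA(n_components=d).fit(X_tr).components_.T),
                   ("LPP", lpp(X_tr, d))]:
    clf = LogisticRegression(max_iter=2000).fit(X_tr @ proj, y_tr)
    print(name, "accuracy:", clf.score(X_te @ proj, y_te))
```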
Matrix principal component analysis (MatPCA), as an effective feature extraction method, can deal with both the matrix pattern and the vector pattern. However, like PCA, MatPCA does not use the class information of samples. As a result, the extracted features cannot provide enough useful information for distinguishing patterns from one another, which in turn degrades classification performance. To fully use the class information of samples, a novel method, called fuzzy within-class MatPCA (F-WMatPCA), is proposed. F-WMatPCA utilizes the fuzzy K-nearest neighbor method (FKNN) to fuzzify the class membership degrees of a training sample and then performs fuzzy MatPCA within the patterns having the same class label. Because more class information is used in feature extraction, F-WMatPCA can intuitively improve classification performance. Experimental results on face databases and some benchmark datasets show that F-WMatPCA is effective and more competitive than MatPCA. The experimental analysis on face image databases indicates that F-WMatPCA improves recognition accuracy and is more stable and robust in performing classification than the existing fuzzy-based method F-Fisherfaces.
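A loose reconstruction of the two ingredients, under stated assumptions rather than the authors' exact formulation: FKNN-style fuzzy memberships (a sample keeps a 0.51 weight on its own label and spreads the rest by neighbor label frequency), then a fuzzy-weighted within-class image covariance in the style of matrix PCA whose top eigenvectors project image matrices column-wise:

```python
# Speculative sketch of F-WMatPCA's ingredients; membership weights, image
# sizes, and the number of retained directions are assumptions.
import numpy as np

rng = np.random.default_rng(8)
n, h, w, n_classes = 60, 16, 16, 3
images = rng.normal(size=(n, h, w))
labels = rng.integers(n_classes, size=n)

# FKNN-like memberships from the k nearest neighbors in vector space.
flat = images.reshape(n, -1)
dist = np.linalg.norm(flat[:, None] - flat[None], axis=2)
k = 5
U = np.zeros((n, n_classes))
for i in range(n):
    nbrs = np.argsort(dist[i])[1:k + 1]
    freq = np.bincount(labels[nbrs], minlength=n_classes) / k
    U[i] = 0.49 * freq
    U[i, labels[i]] += 0.51

# Fuzzy within-class image covariance, then top projection directions.
G = np.zeros((w, w))
for c in range(n_classes):
    mean_c = (U[:, c, None, None] * images).sum(0) / U[:, c].sum()
    for i in range(n):
        d_i = images[i] - mean_c
        G += U[i, c] * d_i.T @ d_i
vals, vecs = np.linalg.eigh(G)
proj = vecs[:, -4:]                      # keep 4 directions (assumed)
features = images @ proj                 # (n, h, 4) extracted features
print(features.shape)
```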
The concentration of elements or element groups in a geological body is the result of multiple stages of rock-forming and ore-forming geological processes. An ore-forming element group can be identified by PCA (principal component analysis) and separated into two components using BEMD (bi-dimensional empirical mode decomposition): (1) a high background component, which represents the ore-forming background developed in rocks through various geological processes favorable for mineralization (i.e. magmatism, sedimentation and/or metamorphism); and (2) an anomaly component, which reflects the ore-forming anomaly overprinted on the high background component during mineralization. Anomaly components identify ore-finding targets more effectively than ore-forming element groups do. Three steps of data-analytical procedures are described in this paper: first, the application of PCA to establish the ore-forming element group; second, using BEMD on the ore-forming element group to identify the anomaly components created by different types of mineralization processes; and finally, identifying ore-finding targets based on the anomaly components. This method is applied to the Tengchong tin-polymetallic belt to delineate ore-finding targets, where four targets for Sn (W) and three targets for Pb-Zn-Ag-Fe polymetallic mineralization are identified and defined as new areas for further prospecting. It is shown that BEMD combined with PCA can be applied not only to extracting the anomaly component for delineating ore-finding targets, but also to extracting the residual component for identifying the high background zone favorable for mineralization from the ore-forming element group.
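Step one of this procedure can be sketched with standard tools; since no widely available BEMD library is assumed here, a Gaussian low-pass stands in for the high-background component and the residual for the anomaly component, which is only a crude surrogate for BEMD:

```python
# Sketch of the PCA step on gridded geochemistry; the Gaussian low-pass /
# residual split is a crude stand-in for BEMD, flagged as such. Toy grids.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
ny, nx, n_elem = 50, 50, 6                     # grid of 6 element maps (toy)
grids = rng.lognormal(size=(n_elem, ny, nx))

X = np.log(grids.reshape(n_elem, -1)).T        # cells x elements, log-transformed
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(X))
group_map = scores.reshape(ny, nx)             # ore-forming element-group map

background = gaussian_filter(group_map, sigma=5)   # stand-in for BEMD background
anomaly = group_map - background                   # stand-in anomaly component
targets = anomaly > np.percentile(anomaly, 97)     # assumed target threshold
print("candidate target cells:", int(targets.sum()))
```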