Abstract: This paper presents an optimized strategy for multiple integrations of photovoltaic distributed generation (PV-DG) within radial distribution power systems. The proposed methodology focuses on identifying the optimal allocation and sizing of multiple PV-DG units to minimize power losses using a probabilistic PV model and time-series power flow analysis. Addressing the uncertainties in PV output due to weather variability and diurnal cycles is critical, and a probabilistic assessment offers a more robust analysis of DG integration's impact on the grid, potentially leading to more reliable system planning. The presented approach employs a genetic algorithm (GA) together with a deterministic PV output profile and a probabilistic PV generation profile based on experimental measurements of one year of solar radiation in Cairo, Egypt. The proposed algorithms are validated using a co-simulation framework that integrates MATLAB and OpenDSS, enabling analysis on a 33-bus test system. This framework can act as a guideline for creating other co-simulation algorithms to enhance computing platforms for modern distribution systems within the smart grid concept. The paper presents comparisons with previous research studies and several notable findings, for example that the choice of hours considered when developing the probabilistic model leads to different results.
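As an illustration of the optimization loop described above (not the paper's actual MATLAB/OpenDSS co-simulation), the following Python sketch runs a small genetic algorithm over candidate bus locations and sizes for three PV-DG units. The `annual_energy_loss` function is a hypothetical placeholder standing in for the time-series power flow evaluation, and all constants (population size, unit count, size limits) are illustrative assumptions.

```python
# Minimal GA sketch for siting and sizing PV-DG units to minimize losses. The
# study evaluates candidates with a MATLAB/OpenDSS time-series power flow;
# here `annual_energy_loss` is a hypothetical placeholder for that step, and
# all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_BUSES, N_UNITS, PMAX_KW = 33, 3, 2000.0

def annual_energy_loss(buses, sizes_kw):
    """Placeholder fitness: swap in the co-simulated time-series power flow."""
    return float(np.sum((sizes_kw - 800.0) ** 2) / 1e6 + 0.01 * np.sum(buses))

def random_individual():
    buses = rng.choice(np.arange(1, N_BUSES), size=N_UNITS, replace=False)
    sizes = rng.uniform(100.0, PMAX_KW, size=N_UNITS)
    return buses, sizes

def crossover(a, b):
    cut = int(rng.integers(1, N_UNITS))
    return (np.concatenate([a[0][:cut], b[0][cut:]]),
            np.concatenate([a[1][:cut], b[1][cut:]]))

def mutate(ind, p=0.2):
    buses, sizes = ind[0].copy(), ind[1].copy()
    for i in range(N_UNITS):            # a real run would also repair duplicate buses
        if rng.random() < p:
            buses[i] = int(rng.integers(1, N_BUSES))
        if rng.random() < p:
            sizes[i] = rng.uniform(100.0, PMAX_KW)
    return buses, sizes

pop = [random_individual() for _ in range(30)]
for gen in range(50):
    pop.sort(key=lambda ind: annual_energy_loss(*ind))   # lower loss is better
    elite = pop[:10]
    children = []
    while len(children) < 20:
        i, j = rng.choice(len(elite), size=2, replace=False)
        children.append(mutate(crossover(elite[i], elite[j])))
    pop = elite + children

best = min(pop, key=lambda ind: annual_energy_loss(*ind))
print("buses:", best[0], "sizes (kW):", np.round(best[1], 1))
```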
Funding: Supported by the China Datang Corporation project "Study on the performance improvement scheme of in-service wind farms", the Fundamental Research Funds for the Central Universities (2020MS021), and the Foundation of the State Key Laboratory "Real-time prediction of offshore wind power and load reduction control method" (LAPS2020-07).
Abstract: The simulation of wind power time series is a key process in renewable power allocation planning, operation mode calculation, and safety assessment. Traditional single-point modeling methods generate wind power discretely at each moment; however, they ignore the daily output characteristics and cannot achieve both modeling accuracy and efficiency. To resolve this problem, a wind power time series simulation model based on typical daily output processes and a Markov algorithm is proposed. First, a typical daily output process classification method based on time-series similarity and a modified K-means clustering algorithm is presented. Second, considering the typical daily output processes as status variables, a wind power time series simulation model based on the Markov algorithm is constructed. Finally, a case is analyzed based on the measured data of a wind farm in China. The proposed model is then compared with traditional methods to verify its effectiveness and applicability. The comparison results indicate that the statistical characteristics, probability distributions, and autocorrelation characteristics of the wind power time series generated by the proposed model are better than those of the traditional methods. Moreover, the modeling efficiency is considerably improved.
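To make the two-stage construction concrete, here is a minimal Python sketch: daily output profiles are grouped with a plain scikit-learn KMeans (a stand-in for the paper's similarity-modified K-means), the cluster labels are treated as Markov states, and a synthetic day sequence is sampled from the estimated transition matrix. The random data and the number of clusters are illustrative assumptions.

```python
# Sketch of the two-stage idea: (1) cluster daily output profiles, (2) treat
# the cluster labels as Markov states and sample a synthetic day sequence.
# Plain KMeans stands in for the paper's similarity-modified clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
power = rng.random((365, 24))      # stand-in for one year of hourly wind power (p.u.)

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(power)

# First-order Markov transition matrix between typical daily processes.
trans = np.full((k, k), 1e-9)
for a, b in zip(labels[:-1], labels[1:]):
    trans[a, b] += 1.0
trans /= trans.sum(axis=1, keepdims=True)

# Representative (mean) profile of each typical day.
typical = np.vstack([power[labels == c].mean(axis=0) for c in range(k)])

# Simulate 30 synthetic days by walking the Markov chain.
state, synthetic = labels[0], []
for _ in range(30):
    synthetic.append(typical[state])
    state = rng.choice(k, p=trans[state])
synthetic = np.vstack(synthetic)
print(synthetic.shape)             # (30, 24)
```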
Abstract: The PPSV (Proportional Pulse in the System Variable) algorithm is a convenient method for the stabilization of chaotic time series. It does not require any previous knowledge of the system. The PPSV method also has a shortcoming: the determination of the λi is a trial-and-error procedure, since it lacks optimization. To overcome this blind search, a GA (Genetic Algorithm), a search algorithm based on the mechanics of natural selection and natural genetics, is used to optimize the λi. The new method is named the GAPPSV algorithm. The simulation results show that the GAPPSV algorithm is very efficient because the control process is short and the steady-state error is small.
Funding: Funded by Prince Sultan University, Riyadh, Saudi Arabia.
Abstract: For the unforced dynamical non-linear state-space model, a new and efficient square-root extended kernel recursive least-squares (SREKRLS) estimation algorithm is developed in this article. The proposed algorithm lends itself to parallel implementation, as in FPGA systems. With the help of an orthonormal triangularization method that relies on numerically stable Givens rotations, the computational burden caused by matrix inversion is reduced. The algorithm exploits matrix computations with excellent numerical properties, such as singularity, symmetry, skew symmetry, and triangularity. The proposed method is validated on the prediction of stationary and non-stationary Mackey-Glass time series, and the x-component of the Lorenz time series is also predicted to illustrate its usefulness. Learning curves in terms of the mean square error (MSE) demonstrate the prediction performance of the proposed algorithm, from which it is concluded that it performs better than EKRLS. This new SREKRLS-based design opens the way towards non-linear systolic arrays, which are efficient for developing very-large-scale integration (VLSI) applications with non-linear input data. Multiple experiments at different noise levels are carried out to validate the reliability, effectiveness, and applicability of the proposed algorithm in comparison with the extended kernel recursive least-squares (EKRLS) algorithm.
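The abstract's key numerical ingredient is orthonormal triangularization by numerically stable Givens rotations, which avoids explicit matrix inversion. The sketch below implements only that building block, a Givens-rotation QR factorization in NumPy; it is not the full SREKRLS recursion.

```python
# Minimal Givens-rotation QR triangularization, the numerical building block
# the abstract relies on; only the orthonormal triangularization step is shown,
# not the full SREKRLS recursion.
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_givens(A):
    m, n = A.shape
    Q, R = np.eye(m), A.astype(float).copy()
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero out entries below the diagonal
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.eye(m)
            G[[i - 1, i - 1, i, i], [i - 1, i, i - 1, i]] = [c, s, -s, c]
            R = G @ R
            Q = Q @ G.T
    return Q, R

A = np.random.default_rng(2).random((5, 3))
Q, R = qr_givens(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))
```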
Abstract: The feasibility of a parameter identification method based on symbolic time series analysis (STSA) and the adaptive immune clonal selection algorithm (AICSA) is studied. Data symbolization using STSA alleviates the effects of harmful noise in the raw acceleration data. The effect of the parameters in STSA is theoretically evaluated and numerically verified. AICSA is employed to minimize the error between the measured and simulated state sequence histograms (SSHs), which are obtained from raw acceleration data by STSA. The proposed methodology is evaluated by comparing it with AICSA applied to raw acceleration data. AICSA combined with STSA proves to be a powerful tool for identifying unknown parameters of structural systems even when the data are contaminated with relatively large amounts of noise.
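A minimal sketch of the symbolization step is given below: the acceleration record is partitioned into equal-width amplitude regions, consecutive symbols are grouped into short words ("states"), and their relative frequencies form a state sequence histogram. The partitioning scheme, word length, and synthetic signal are illustrative assumptions, and the AICSA search that matches measured and simulated histograms is not reproduced.

```python
# Sketch of symbolic time series analysis (STSA): map a noisy acceleration
# record to symbols and build a state sequence histogram (SSH). The AICSA
# search over model parameters is not reproduced here.
import numpy as np
from collections import Counter

def symbolize(signal, n_symbols=4):
    """Assign each sample to one of n_symbols equal-width amplitude regions."""
    edges = np.linspace(signal.min(), signal.max(), n_symbols + 1)[1:-1]
    return np.digitize(signal, edges)

def state_sequence_histogram(symbols, word_len=3):
    """Count overlapping words of `word_len` symbols (the 'states')."""
    words = [tuple(symbols[i:i + word_len]) for i in range(len(symbols) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)  # noisy record
ssh = state_sequence_histogram(symbolize(accel))
print(sorted(ssh.items(), key=lambda kv: -kv[1])[:5])    # most frequent states
```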
Abstract: Pattern discovery from seasonal time series is of importance. Traditionally, most algorithms for pattern discovery in time series are similar. A novel time-series model that integrates the Genetic Algorithm (GA) is proposed for this practical problem. Experiments on electric power yield sequence models show that the algorithm is practicable and effective.
Abstract: Pattern discovery from time series is of fundamental importance. Most pattern discovery algorithms for time series capture the values of the series using some kind of similarity measure. Affected by scale and baseline, such value-based methods cause problems when the objective is to capture the shape. Thus, a shape-based similarity measure, the Sh measure, is proposed, and the properties of this similarity and the corresponding proofs are given. A time series shape pattern discovery algorithm based on the Sh measure is then put forward. The proposed algorithm terminates in a finite number of iterations with given computational and storage complexity. Finally, experiments on synthetic datasets and sunspot datasets demonstrate that the time series shape pattern algorithm is valid.
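The abstract does not define the Sh measure itself, so the sketch below uses a simple stand-in for a shape-based similarity: both series are z-normalized, which removes baseline and scale, and then compared. It illustrates why a value-based Euclidean distance fails to recognize two series with the same shape.

```python
# Stand-in shape similarity (not the paper's Sh measure): z-normalize both
# series to remove baseline and scale, then compare them, which is exactly
# what a raw value-based distance cannot do.
import numpy as np

def znorm(x):
    x = np.asarray(x, dtype=float)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def shape_similarity(x, y):
    """1 for identical shapes, near 0 for unrelated ones (equal lengths)."""
    zx, zy = znorm(x), znorm(y)
    return float(np.dot(zx, zy) / len(zx))        # equals the Pearson correlation

base = np.sin(np.linspace(0, 4 * np.pi, 200))
shifted_scaled = 5.0 + 3.0 * base                 # same shape, new scale and baseline
print(shape_similarity(base, shifted_scaled))     # ~1.0
print(np.linalg.norm(base - shifted_scaled))      # large value-based distance
```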
Abstract: Neural networks and genetic algorithms are complementary technologies in the design of adaptive intelligent systems. A neural network learns from scratch by adjusting the interconnections between layers. Genetic algorithms are a popular computing framework that uses principles from natural population genetics to evolve solutions to problems. Various forecasting methods have been developed on the basis of neural networks, but accuracy has been a matter of concern in these forecasts. In neural network methods, the forecast values depend on the choice of the neural predictor structure, the number of inputs, and the lag. To remedy these problems, in this paper the authors investigate the applicability of an automatic design of a neural predictor, realized by real-coded Genetic Algorithms, to predict the future values of a time series. The prediction method is tested using meteorological time series, namely daily and weekly mean temperatures in Melbourne, Australia, 1980-1990.
Abstract: This hybrid methodology for structural health monitoring (SHM) is based on immune algorithms (IAs) and symbolic time series analysis (STSA). Real-valued negative selection (RNS) is used to detect damage, and the adaptive immune clonal selection algorithm (AICSA) is used to localize and quantify the damage. Data symbolization using STSA alleviates the effects of harmful noise in the raw acceleration data. This paper explains the mathematical basis of STSA and the procedure of the hybrid methodology. It also describes the results of a simulation experiment on a five-story shear frame structure, which indicated that the hybrid strategy can efficiently and precisely detect, localize, and quantify damage to civil engineering structures in the presence of measurement noise.
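A minimal sketch of the real-valued negative selection idea follows: random detectors are kept only if they lie outside a self-radius of baseline (healthy) feature vectors, and a new sample covered by any detector is flagged as damage. The feature space, radii, and data are illustrative assumptions; the paper's AICSA-based localization and quantification are not reproduced.

```python
# Sketch of real-valued negative selection (RNS) for damage detection:
# detectors are random points kept only if they lie outside a self-radius of
# the healthy baseline features; a new sample matched by any detector is
# flagged as anomalous. Feature extraction and AICSA are not reproduced.
import numpy as np

rng = np.random.default_rng(4)
self_radius, detector_radius = 0.1, 0.1

healthy = rng.normal(0.5, 0.05, size=(200, 2))    # baseline feature vectors in [0,1]^2

detectors = []
while len(detectors) < 300:
    d = rng.random(2)
    if np.min(np.linalg.norm(healthy - d, axis=1)) > self_radius:
        detectors.append(d)
detectors = np.array(detectors)

def is_damaged(sample):
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) < detector_radius))

print(is_damaged(np.array([0.5, 0.5])))   # near the healthy cloud -> usually False
print(is_damaged(np.array([0.9, 0.1])))   # far from the healthy cloud -> usually True
```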
Abstract: In time series modeling, the residuals are often checked for white noise and normality. In practice, the commonly used tests are the Ljung-Box test, the McLeod-Li test, and the Lin-Mudholkar test. In this paper, we present a nonparametric approach for checking the residuals of time series models. This approach is based on the maximal correlation coefficient ρ*² between the residuals and time t. The basic idea is to use the bootstrap to form the null distribution of the statistic ρ*² under the null hypothesis H0: ρ*² = 0. For calculating ρ*², we propose a ρ algorithm analogous to the ACE procedure. A power study shows that this approach is more powerful than the Ljung-Box test. Some numerical results and two examples are also reported in this paper.
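The resampling idea can be sketched as follows. A dependence statistic between the residuals and time is computed, and its null distribution under H0 is formed by resampling the residuals, which destroys any time dependence. The squared Pearson correlation used here is only a stand-in for ρ*², which the paper computes with an ACE-like ρ algorithm.

```python
# Sketch of the bootstrap testing idea: compute a dependence statistic between
# the residuals and time, then form its null distribution by resampling the
# residuals (breaking any time dependence, as H0 assumes). The squared Pearson
# correlation stands in for the paper's maximal correlation rho*^2.
import numpy as np

rng = np.random.default_rng(5)

def dependence_stat(resid, t):
    return float(np.corrcoef(resid, t)[0, 1] ** 2)    # stand-in for rho*^2

def bootstrap_pvalue(resid, n_boot=2000):
    t = np.arange(len(resid))
    observed = dependence_stat(resid, t)
    null = [dependence_stat(rng.choice(resid, size=resid.size, replace=True), t)
            for _ in range(n_boot)]
    return observed, float(np.mean(np.array(null) >= observed))

white = rng.standard_normal(200)                      # residuals with no time structure
drifting = white + 0.01 * np.arange(200)              # residuals with a trend left in
print(bootstrap_pvalue(white))     # large p-value expected
print(bootstrap_pvalue(drifting))  # small p-value expected
```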
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60572174), the Doctoral Fund of the Ministry of Education of China (Grant No. 20070213072), the 111 Project (Grant No. B07018), the China Postdoctoral Science Foundation (Grant No. 20070410264), and the Development Program for Outstanding Young Teachers in Harbin Institute of Technology (Grant No. HITQNJS.2007.010).
Abstract: In the real world, the inputs of many complicated systems are time-varying functions or processes. To predict the outputs of these systems with high speed and accuracy, this paper proposes a time series prediction model based on the wavelet process neural network and develops the corresponding learning algorithm based on the expansion of orthogonal basis functions. The effectiveness of the proposed time series prediction model and its learning algorithm is demonstrated on the Mackey-Glass time series prediction problem, and the comparative prediction results indicate that the proposed model performs well and appears suitable as a tool for predicting highly complex nonlinear time series.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60276096) and the National Ministry Foundation of China (Grant No. 51430804QT2201).
Abstract: A new second-order neural Volterra filter (SONVF) with a conjugate gradient (CG) algorithm is proposed in this paper to predict chaotic time series based on phase-space delay-coordinate reconstruction of the chaotic dynamical system, where neuron activation functions are introduced to constrain the Volterra series terms and improve the nonlinear approximation of the second-order Volterra filter (SOVF). The SONVF with the CG algorithm improves the accuracy of prediction without increasing the computational complexity, and the difficulty of determining the number of neurons does not arise here. Experimental results show that the proposed filter can predict chaotic time series effectively, and its one-step and multi-step prediction performances are clearly superior to those of the SOVF, which demonstrates that the proposed SONVF is feasible and effective.
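To show the structure being constrained, here is a plain second-order Volterra predictor with linear and pairwise product terms, fitted by ordinary least squares on a logistic-map series; the paper's neuron activation constraints and conjugate-gradient training are not reproduced. Because the logistic map is quadratic in its previous value, the one-step error is essentially zero, which illustrates what the second-order terms provide.

```python
# Plain second-order Volterra predictor fitted by ordinary least squares; only
# the linear + quadratic (second-order) feature structure is illustrated, not
# the paper's constrained, CG-trained variant.
import numpy as np
from itertools import combinations_with_replacement

def volterra_features(x, order=4):
    """Embedding [x_{t-1}..x_{t-order}] plus all pairwise products."""
    rows = []
    for t in range(order, len(x)):
        lin = x[t - order:t][::-1]
        quad = [lin[i] * lin[j] for i, j in combinations_with_replacement(range(order), 2)]
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.array(rows), x[order:]

# Chaotic series from the logistic map as a test signal.
x = np.empty(600)
x[0] = 0.3
for t in range(1, 600):
    x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])

X, y = volterra_features(x)
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
pred = Xte @ w
print("one-step RMSE:", float(np.sqrt(np.mean((pred - yte) ** 2))))
```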
Funding: Supported by the National Defense Preliminary Research Program of China (A157167) and the National Defense Fundamental of China (9140A19030314JB35275).
Abstract: A dynamic parallel forecasting model is proposed to address the problems of current forecasting models and their combined models. In the modeling process, the fuzzy C-means clustering algorithm is improved with respect to outlier handling and the distances within and among clusters. First, the input data sets are optimized and their coherence is ensured; the region-scale algorithm is modified, and a non-isometric multi-scale region fuzzy time series model is built. At the same time, the particle swarm optimization algorithm is improved with respect to particle speed, location, and inertia weight, and this method is used to optimize the parameters of the support vector machine and to construct the combined forecast model. The dynamic parallel forecast model is then built, the dynamic weights are calculated, and the products of the weights and the forecast values are taken as the final forecasts. Finally, an example shows that the improved forecast model is effective and accurate.
Abstract: The modeling and simulation of time series prediction based on a dynamic neural network (NN) are studied. A prediction model for non-linear and time-varying systems is proposed based on a dynamic Jordan NN. To address the intrinsic defect of the back-propagation (BP) algorithm, namely that it cannot update network weights incrementally, a hybrid algorithm combining the temporal difference (TD) method with the BP algorithm is put forward to train the Jordan NN. The proposed method is applied to real-time, multi-step prediction of the ash content of clean coal in jigging production. A practical example is also given, and its application results indicate that the method performs better than other methods and offers a useful reference for the prediction of nonlinear time series.
Funding: Funded by the Vietnam Academy of Science and Technology (VAST) under Project Codes KHCBTÐ.02/19-21 and UQÐTCB.02/19-20.
Abstract: Water level predictions in rivers, lakes, and deltas play an important role in flood management. Every year the Mekong River delta of Vietnam experiences flooding due to heavy monsoon rains and high tides, and land subsidence may further aggravate flooding problems in this area. Therefore, accurate predictions of water levels in this region are very important to forewarn people and authorities so that timely and adequate remedial measures can be taken to prevent losses of life and property. Many methods are available to predict water levels from historical data, but nowadays Machine Learning (ML) methods are considered the best tools for accurate prediction. In this study, we used surface water level data from 18 water level measurement stations of the Mekong River delta from 2000 to 2018 to build novel time-series Bagging-based hybrid ML models, namely Bagging (RF), Bagging (SOM), and Bagging (M5P), to predict historical water levels in the study area. The performance of the Bagging-based hybrid models was compared with Reduced Error Pruning Trees (REPT), a benchmark ML model. The data for the 19-year period were divided in a 70:30 ratio for the modeling: the period 1/2000 to 5/2013 (about 70% of the data) was used for training, and the period 5/2013 to 12/2018 (about 30% of the data) was used for testing (validating) the models. Model performance was evaluated using standard statistical measures: the Coefficient of Determination (R²), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). Results show that the performance of all the developed models is good (R² > 0.9) for the prediction of water levels in the study area. However, the Bagging-based hybrid models are slightly better than the benchmark REPT model. Thus, these Bagging-based hybrid time series models can be used for predicting water levels in the Mekong delta.
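A minimal sketch of the evaluation protocol follows: lagged water levels feed a scikit-learn BaggingRegressor, the data are split chronologically 70:30, and the R², RMSE, and MAE measures are reported. The synthetic series, lag length, and default tree base learner are illustrative assumptions standing in for the Mekong station records and the RF/SOM/M5P hybrids.

```python
# Sketch of a bagging-based time series regressor with a chronological 70:30
# split and the R^2 / RMSE / MAE measures used in the study. Synthetic data
# and a default tree base learner stand in for the Mekong records and the
# RF/SOM/M5P hybrids.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(6)
n = 1000
level = np.sin(np.arange(n) * 2 * np.pi / 365) + 0.1 * rng.standard_normal(n)  # fake daily levels

lags = 7
X = np.column_stack([level[i:n - lags + i] for i in range(lags)])  # past 7 days as features
y = level[lags:]

split = int(0.7 * len(y))                      # chronological split, no shuffling
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

model = BaggingRegressor(n_estimators=50, random_state=0).fit(Xtr, ytr)
pred = model.predict(Xte)

print("R2  :", round(r2_score(yte, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(yte, pred))), 3))
print("MAE :", round(mean_absolute_error(yte, pred), 3))
```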
Abstract: Optimization is a concept, a process, and a method that people use on a daily basis to solve their problems. For many scientists, the source of many optimization methods has been nature itself and the mechanisms that exist in it. Neural networks, inspired by the neurons of the human brain, have gained a great deal of recognition in recent years and provide solutions to everyday problems. Evolutionary algorithms are known for their efficiency and speed in problems where the optimal solution lies among a huge number of possible solutions, and they are also known for their simplicity, because their implementation does not require complex mathematics. The combination of these two techniques is called neuroevolution. The purpose of this research is to combine and improve existing neuroevolution architectures to solve time series problems. We propose a new, improved strategy for such a system and compare its performance with an existing system on five different datasets. Based on the final results and a combination of statistical measures, we conclude that our system performs much better than the existing system on all five datasets.
Abstract: Pattern-based time series segmentation (PTSS) is an important task for many time series data mining applications. In this paper, a generalized model is proposed for PTSS according to its characteristics. First, a new interpretation of PTSS is given by comparing this problem with prototype-based clustering (PC). Then, a novel model, called the clustering-inverse model (CI-model), is presented. Finally, two algorithms are presented to implement this model. Our experimental results on artificial and real-world time series demonstrate that the proposed algorithms are quite effective.
Abstract: Several authors have used different classical statistical models to fit the Nigerian Bonny Light crude oil price, but the application of machine learning models and the Fuzzy Time Series model to the crude oil price has been grossly understudied. Therefore, in this study, a classical statistical model, the Autoregressive Integrated Moving Average (ARIMA); two machine learning models, the Artificial Neural Network (ANN) and Random Forest (RF); and the Fuzzy Time Series (FTS) model were compared in modeling the Nigerian Bonny Light crude oil price data for the period from January 2006 to December 2020. The monthly secondary data were collected from the Nigerian National Petroleum Corporation (NNPC) and the Reuters website and divided into train (70%) and test (30%) sets. The train set was used in building the models, and the models were validated using the test set. The performance measures used for the comparison include the modified Diebold-Mariano test, the Root Mean Square Error (RMSE), the Mean Absolute Percentage Error (MAPE), and Nash-Sutcliffe Efficiency (NSE) values. Based on these measures, the ANN (4, 1, 1) and RF models performed better than the ARIMA (1, 1, 0) model, but the FTS model using Chen's algorithm outperformed every other model. The results recommend the use of the FTS model for forecasting future values of the Nigerian Bonny Light crude oil price. However, a hybrid ARIMA-ANN or ARIMA-RF model should be built and compared with the Chen's algorithm FTS model on the same data set to further verify the power of the FTS approach.
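For reference, a minimal sketch of Chen's first-order fuzzy time series procedure is given below: the universe of discourse is partitioned into equal-width intervals, each observation is fuzzified to its containing interval, fuzzy logical relationship groups are built from consecutive states, and forecasts are taken as the mean of the midpoints of the right-hand-side intervals. The interval count and the synthetic price path are illustrative assumptions; the study's data and interval design are not reproduced.

```python
# Minimal sketch of Chen's first-order fuzzy time series procedure: equal-width
# partition of the universe of discourse, fuzzification to the containing
# interval, fuzzy logical relationship groups (FLRGs), and midpoint-based
# forecasts. A synthetic price-like series replaces the NNPC data.
import numpy as np
from collections import defaultdict

def chen_fts_forecast(series, n_intervals=7):
    lo, hi = min(series) - 1e-9, max(series) + 1e-9
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0
    states = np.clip(np.digitize(series, edges) - 1, 0, n_intervals - 1)

    flrg = defaultdict(set)                 # A_i -> {A_j} relationship groups
    for a, b in zip(states[:-1], states[1:]):
        flrg[a].add(b)

    forecasts = []
    for s in states[:-1]:                   # one-step-ahead forecasts
        rhs = sorted(flrg[s])
        forecasts.append(mids[rhs].mean() if rhs else mids[s])
    return np.array(forecasts), np.array(series[1:])

rng = np.random.default_rng(7)
price = 60 + np.cumsum(rng.normal(0, 1.5, size=180))   # synthetic monthly price path
pred, actual = chen_fts_forecast(price)
mape = float(np.mean(np.abs((actual - pred) / actual))) * 100
print("in-sample MAPE (%):", round(mape, 2))
```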
Abstract: A new real-time model based on parallel time-series mining is proposed to improve the accuracy and efficiency of network intrusion detection systems (NIDS). In this model, a multidimensional dataset is constructed to describe network events, and a sliding-window updating algorithm is used to maintain the network stream. Moreover, parallel frequent pattern and frequent episode mining algorithms are applied to implement a parallel time-series mining engine that can intelligently generate rules to distinguish intrusions from normal activities. Analysis and study on the DAWNING 3000 platform indicate that this parallel time-series mining-based model provides a more accurate and efficient way to build a real-time NIDS.
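A minimal sketch of the sliding-window maintenance idea follows: a fixed-size window of network events is kept, and pattern counts are updated incrementally as events enter and leave, so frequent patterns can be read off at any time. The event schema and thresholds are illustrative assumptions; the parallel frequent-episode mining and rule generation on DAWNING 3000 are not reproduced.

```python
# Sketch of sliding-window maintenance over a network event stream: counts of
# event patterns are updated incrementally as events enter and leave the
# window. Parallel mining and rule generation are not reproduced here.
from collections import Counter, deque

WINDOW = 1000                              # events kept in the sliding window

window = deque()
counts = Counter()

def update(event):
    """Add one (src, dst, service, flag) event and evict the oldest if needed."""
    window.append(event)
    counts[event] += 1
    if len(window) > WINDOW:
        old = window.popleft()
        counts[old] -= 1
        if counts[old] == 0:
            del counts[old]

def frequent_patterns(min_support=0.01):
    return [(e, c) for e, c in counts.items() if c / len(window) >= min_support]

# Toy stream: mostly normal traffic plus a burst that should surface as frequent.
for i in range(5000):
    update(("10.0.0.%d" % (i % 50), "web", "http", "SF"))
for i in range(200):
    update(("10.0.0.99", "db", "tcp", "REJ"))          # suspicious repeated events
print(frequent_patterns())
```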