Funding: Supported by the National High-Technology Research and Development Program of China (2001AA115300), the National Natural Science Foundation of China (69874038), and the Natural Science Foundation of Liaoning Province (20031018).
Abstract: Designing a high-performance multicast key management system is an active research topic. This paper applies the idea of hierarchical data processing to construct a common analytic model based on a directed logical key tree, and supplies two important metrics for this problem: re-keying cost and key storage cost. The paper gives the basic theory of hierarchical data processing and an analytic model for multicast key management based on the logical key tree. It is proved that the 4-ary tree performs best under these metrics. The key management problem is also investigated under a user-probability model, and two evaluation parameters for re-keying cost and key storage cost are given.
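As a hedged illustration (not the paper's exact model), the two metrics can be approximated for a full k-ary logical key tree: one membership change touches a single leaf-to-root path, each key on that path is re-encrypted for k child subtrees, and each user stores one key per level. Note that under this simplified re-keying metric the optimum lies near k = 3 (k/ln k is minimized at e); the paper's combined metric differs in detail and yields k = 4.

```python
import math

def rekey_messages(n_users: int, k: int) -> float:
    """Approximate re-keying cost for one join/leave in a full k-ary
    logical key tree: each of the ~log_k(n) keys on the affected
    leaf-to-root path is re-encrypted for k child subtrees."""
    return k * math.log(n_users, k)

def keys_stored_per_user(n_users: int, k: int) -> float:
    """Key storage cost per user: one key per tree level plus the leaf key."""
    return math.log(n_users, k) + 1

# compare candidate tree degrees for a 4096-user group (toy numbers)
costs = {k: rekey_messages(4096, k) for k in (2, 3, 4, 8)}
```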
Funding: Supported by the key project of the National Social Science Foundation of China, "Research on Statistical Measurement and Decision Support System of the Coordinated Development Mechanism System of the Guangdong-Hong Kong-Macao Greater Bay Area" (No. 19ATJ004); the Guangdong Natural Science Foundation project "Stochastic Frontier Models from the Endogeneity Perspective: Estimation, Testing and Application" (No. 2019A1515110267); and the Guangdong Provincial Humanities and Social Sciences Innovation Team Project (No. 2020WCXTD008).
Abstract: This paper investigates economic development within the Guangdong-Hong Kong-Macao Greater Bay Area from two perspectives, spatial pattern and influencing factors, to promote coordinated development across the area. It employs Moran's I test and the local Getis-Ord G statistic from spatial statistics, and constructs a hierarchical spatial econometric model for empirical investigation. It is found that the overall economic development of the Greater Bay Area exhibits a "mountain-shaped" spatial pattern, with high-level homogeneous regions showing "high-high correlation" and low-level homogeneous regions showing "low-low correlation." The internal difference in economic density is moderate, with an obvious year-by-year decreasing trend. Economic density shows a significant positive spatial correlation, with the scope of areas exhibiting "high-high correlation" expanding. The differences in economic density between hotspots and sub-hotspots have decreased, but the economic density of cold spots has failed to keep up with the development of other regions. Among the influencing factors, the difference in factor input density explains most of the differences in economic density among regions. The R&D capital investment coefficient indicates that, in recent years, investment in urban scientific and technological innovation factors has had a more extensive and uniform effect among the regions under each city's jurisdiction, but the spatial spillover effect of innovation factors at both layers is not significantly positive. Apart from a city's location within the Greater Bay Area, the relative location of jurisdictions within the city equally influences the configuration of economic development in the Greater Bay Area. Although economic density in regions adjacent to cities outside the Greater Bay Area is notably lower than in other regions, their growth rate and production efficiency remain on par with other regions. T-test and model results underscore the rapid development of the areas encircling the bay. The coefficient of the location dummy variable for areas adjacent to other Greater Bay Area cities varies among cities; at a given factor input density, some cities have higher output efficiency in areas contiguous to other Greater Bay Area cities. This study uniquely adopts low-level city jurisdictions and high-level cities to form a two-tiered hierarchical dataset with nested geographic units, fully leveraging insights from distinct layers and delving into spatial interdependence and interplay across layers. The paper aims to explore the spatial pattern and influencing factors steering economic development in the Guangdong-Hong Kong-Macao Greater Bay Area, identify problems, and present pertinent policy recommendations.
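Global Moran's I, used above to test spatial autocorrelation, can be computed directly from a value vector and a spatial weight matrix. A minimal numpy sketch with toy data (a 4-cell chain, not the paper's dataset):

```python
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I:
    I = n/W * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    n = len(x)
    z = x - x.mean()                       # deviations from the mean
    num = (w * np.outer(z, z)).sum()       # weighted cross-products
    return n / w.sum() * num / (z ** 2).sum()

# contiguity weights for a chain of 4 regions (hypothetical toy example)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([10.0, 9.0, 2.0, 1.0])        # clustered values: high-high, low-low
i_stat = morans_i(x, w)                    # positive => spatial clustering
```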
Abstract: Efficient real-time data exchange over the Internet plays a crucial role in the successful application of web-based systems. In this paper, a data transfer mechanism over the Internet is proposed for real-time web-based applications. The mechanism incorporates the eXtensible Markup Language (XML) and the Hierarchical Data Format (HDF) to provide a flexible and efficient data format. Heterogeneous transfer data are classified into light and heavy data, stored in XML and HDF respectively; the HDF data format is then mapped to Java Document Object Model (JDOM) objects in XML in the Java environment. These JDOM data objects are sent across computer networks with the support of the Java Remote Method Invocation (RMI) data transfer infrastructure. Client-defined data priority levels are implemented in RMI, guiding the server to transfer data objects at different priorities. A remote monitoring system for an industrial reactor process simulator is used as a case study to illustrate the proposed data transfer mechanism.
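The client-defined priority mechanism can be sketched independently of RMI with a priority queue: the server drains items in priority order, and a size threshold decides the light/heavy (XML/HDF) classification. The item names and the 1 KB threshold are hypothetical:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

_seq = count()  # tie-breaker so equal priorities stay FIFO

@dataclass(order=True)
class TransferItem:
    priority: int                            # client-defined: lower = sent first
    order: int = field(compare=True)
    name: str = field(compare=False, default="")
    payload: bytes = field(compare=False, default=b"")

def classify(payload: bytes, threshold: int = 1024) -> str:
    """Light data (small, structured) -> XML; heavy data (bulk) -> HDF."""
    return "light/XML" if len(payload) < threshold else "heavy/HDF"

queue: list[TransferItem] = []
heapq.heappush(queue, TransferItem(2, next(_seq), "trend", b"x" * 4096))
heapq.heappush(queue, TransferItem(0, next(_seq), "alarm", b"<alarm/>"))
heapq.heappush(queue, TransferItem(1, next(_seq), "status", b"<ok/>"))

# server drains in priority order, regardless of arrival order
send_order = [heapq.heappop(queue).name for _ in range(len(queue))]
```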
Abstract: This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation hierarchical data model is extended to a multilevel relation hierarchical data model. Based on this model, the concept of upper-lower-layer relational integrity is presented after analyzing and eliminating the covert channels caused by database integrity. Two SQL statements are extended to handle polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation hierarchical data model and is capable of integratively storing and manipulating multilevel complex objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integers, real numbers, and character strings).
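Tuple-level mandatory access control follows Bell-LaPadula-style rules (simple-security for reads, the *-property for writes). A minimal sketch with hypothetical labels and rows, not the paper's full model:

```python
from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def can_read(subject: Level, tuple_label: Level) -> bool:
    """Simple-security property: read only tuples at or below the clearance."""
    return subject >= tuple_label

def can_write(subject: Level, tuple_label: Level) -> bool:
    """*-property (no write down): write only at or above the clearance."""
    return subject <= tuple_label

# tuple-level labeling: each row carries its own security label (toy relation)
rows = [("flight_A", Level.SECRET), ("flight_B", Level.UNCLASSIFIED)]
visible = [name for name, lbl in rows if can_read(Level.CONFIDENTIAL, lbl)]
```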
Abstract: SOZL (structured methodology + object-oriented methodology + Z language) is a language that attempts to integrate the structured, object-oriented, and formal methods. The core of this language is the predicate data flow diagram (PDFD). To eliminate the ambiguity of predicate data flow diagrams and their associated textual specifications, a formalization of their syntax and semantics is necessary. In this paper we use Z notation to define an abstract syntax and the related structural constraints for the PDFD notation, and provide it with an axiomatic semantics based on the concepts of data availability and the functionality of predicate operations. Finally, an example is given to establish functionality-consistent decomposition on hierarchical PDFDs (HPDFDs).
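The data-availability semantics can be sketched operationally: a predicate operation in a PDFD may fire only when all of its input flows are available, and firing makes its output flows available. A toy Python sketch (the operation and flow names are hypothetical):

```python
def can_fire(op_inputs: set[str], available: set[str]) -> bool:
    """Data-availability rule: an operation may execute only when
    every one of its input data flows is available."""
    return op_inputs <= available

def run(ops: dict[str, tuple[set[str], set[str]]], initial: set[str]) -> set[str]:
    """Fire enabled operations (inputs, outputs) until a fixpoint is reached."""
    available, fired = set(initial), set()
    changed = True
    while changed:
        changed = False
        for name, (ins, outs) in ops.items():
            if name not in fired and can_fire(ins, available):
                fired.add(name)
                available |= outs      # firing produces the output flows
                changed = True
    return available

ops = {"validate": ({"raw"}, {"clean"}),
       "report":   ({"clean", "config"}, {"summary"})}
```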
Funding: Partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB 34030000) and the National Natural Science Foundation of China (Nos. 11975293 and 12205348).
Abstract: The High-energy Fragment Separator (HFRS), currently under construction, is a leading international radioactive beam facility. Multiple sets of position-sensitive twin time projection chamber (TPC) detectors are distributed along HFRS for particle identification and beam monitoring. The twin TPCs' readout electronics system operates in a trigger-less mode because of its high counting rate, which makes handling the large data volume a challenge. To address this problem, we introduce an event-building algorithm that employs a hierarchical processing strategy to compress data during transmission and aggregation. In addition, it reconstructs twin-TPC events online and stores only the reconstructed particle information, which significantly reduces the burden on data transmission and storage resources. Simulation studies demonstrate that the algorithm accurately matches twin-TPC events and reduces the data volume by more than 98% at a counting rate of 500 kHz/channel.
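The core of event building in trigger-less mode is matching hits from the two TPCs by timestamp. A minimal coincidence-window sketch (the window and timestamps are illustrative placeholders, not the HFRS values):

```python
def build_events(hits_a, hits_b, window=50):
    """Match hits from the two TPCs whose timestamps differ by at most
    `window` time units, using a two-pointer scan over sorted hit lists."""
    a, b = sorted(hits_a), sorted(hits_b)
    events, j = [], 0
    for ta in a:
        while j < len(b) and b[j] < ta - window:
            j += 1                        # b[j] is too old to ever match again
        if j < len(b) and abs(b[j] - ta) <= window:
            events.append((ta, b[j]))     # paired hits -> one physical event
            j += 1
    return events

# toy timestamps: the middle hit in TPC A has no partner in TPC B
events = build_events([100, 1000, 5000], [120, 4980, 9000])
```

Storing only the matched pairs (rather than every raw hit) is what yields the large reduction in data volume described above.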
Abstract: Objective: Since the identification of COVID-19 in December 2019 as a pandemic, over 4500 research papers have been published with the term "COVID-19" in the title. Many of these reports suggested that the coronavirus was associated with more serious chronic disease and mortality, particularly in patients with chronic diseases, regardless of country and age. There is therefore a need to understand how common comorbidities and other factors are associated with the risk of death due to COVID-19 infection. Our investigation explores this relationship: specifically, the relationship between the total number of COVID-19 cases and mortality associated with COVID-19 infection, accounting for other risk factors. Methods: Due to the presence of overdispersion, Negative Binomial Regression is used to model the aggregate number of COVID-19 cases. Case fatality associated with this infection is modeled as an outcome variable using machine-learning predictive multivariable regression. The data are the COVID-19 cases and associated deaths from the start of the pandemic up to December 2, 2020, the day Pfizer was granted approval for its new COVID-19 vaccine. Results: Our analysis found significant regional variation in case fatality. Moreover, the aggregate number of cases had several risk factors, including chronic kidney disease, population density, and the percentage of gross domestic product spent on healthcare. Conclusions: There are important regional variations in COVID-19 case fatality. We identified three factors significantly correlated with case fatality.
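The overdispersion that motivates Negative Binomial over Poisson regression is easy to diagnose: Poisson assumes variance roughly equal to the mean, so a variance-to-mean ratio well above 1 signals overdispersion. A sketch with hypothetical counts (not the study's data):

```python
from statistics import mean, variance

def overdispersed(counts, ratio=1.5):
    """Return True when the sample variance exceeds the mean by more than
    `ratio`, i.e. the counts are overdispersed relative to a Poisson model."""
    return variance(counts) / mean(counts) > ratio

# hypothetical aggregate case counts across regions
cases = [12, 30, 7, 95, 18, 240, 5, 60]
```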
Abstract: The advantage of recursive programming is that it is very easy to write and, done correctly, requires only a few lines of code. Structured Query Language (SQL) is a database language used to manipulate data. In Microsoft SQL Server 2000, recursive queries are implemented to retrieve data presented in a hierarchical format, but this approach has its disadvantages. The common table expression (CTE) construct introduced in Microsoft SQL Server 2005 provides the significant advantage of being able to reference itself, creating a recursive CTE. Hierarchical data structures, organizational charts, and other parent-child table relationship reports can easily benefit from recursive CTEs. The recursive query is illustrated and implemented on simple hierarchical data; in addition, a business case study is presented and solved with a CTE-based recursive query, and stored procedures are programmed to perform the recursion in SQL. Test results show that recursive queries based on CTEs make it possible to create much more complex queries while retaining a much simpler syntax.
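A recursive CTE of the kind described can be demonstrated with SQLite's `WITH RECURSIVE` (syntactically close to, though not identical with, SQL Server 2005's `WITH`), walking a hypothetical organizational chart from the root manager downward:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
INSERT INTO employee VALUES
  (1, 'CEO', NULL), (2, 'VP Eng', 1), (3, 'VP Sales', 1), (4, 'Engineer', 2);
""")

rows = conn.execute("""
WITH RECURSIVE chain(id, name, depth) AS (
    -- anchor member: the root of the hierarchy
    SELECT id, name, 0 FROM employee WHERE manager_id IS NULL
    UNION ALL
    -- recursive member: join children onto rows found so far
    SELECT e.id, e.name, c.depth + 1
    FROM employee e JOIN chain c ON e.manager_id = c.id
)
SELECT name, depth FROM chain ORDER BY depth, id
""").fetchall()
```

Each row carries its depth in the hierarchy, so a report can indent or group by level without any procedural looping.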
Abstract: A new method for accelerated walkthroughs of virtual environments is presented. To improve rendering quality, it partitions visibility computation into PVS (potentially visible set) preprocessing and runtime dynamic visibility computation; meanwhile, level-of-detail models are managed to speed up rendering. The method relies on a hierarchical spatial data structure (HSDS) over the 3D objects in the virtual environment. Compared with classical speedup methods such as graphics-hardware techniques and several typical software speedup techniques, the new method shows a clear improvement in speedup performance.
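Hierarchical culling over an HSDS works by rejecting an entire subtree as soon as its bounding volume misses the view region, so most objects are never even tested. A 2D sketch with hypothetical boxes (axis-aligned, not the paper's structure):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    box: tuple                       # (xmin, ymin, xmax, ymax) bounding box
    objs: list = field(default_factory=list)
    children: list = field(default_factory=list)

def overlaps(a, b):
    """Axis-aligned box intersection test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def visible_objs(node, view):
    """Hierarchical culling: skip a whole subtree when its box misses the view."""
    if not overlaps(node.box, view):
        return []                    # early out: nothing below can be visible
    out = list(node.objs)
    for child in node.children:
        out += visible_objs(child, view)
    return out

tree = Node((0, 0, 100, 100), children=[
    Node((0, 0, 50, 50), objs=["house"]),
    Node((60, 60, 100, 100), objs=["tower"]),
])
```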
Abstract: A form evaluation system for brush-written Chinese characters is developed. Calligraphic knowledge used in the system is represented in the form of rules with the help of a data structure proposed in this paper. Reflecting the specific hierarchical relations among the radicals and strokes of Chinese characters, the proposed data structure is based upon a character model that can generate brush-written Chinese characters on a computer. Through evaluation experiments using the developed system, it is shown that representation of calligraphic knowledge and form evaluation of Chinese characters can be smoothly realized if the data structure is utilized.
Abstract: Purpose: Patient treatment trajectory data are used in this research to predict the outcome of treatment for a particular disease. Existing methodologies do not consider the evolution of disease in a patient or the changes in health due to treatment; hence deep learning models for trajectory data mining can be employed to predict disease with high accuracy and low computation cost. Design/methodology/approach: A multifocus deep neural network classifier is utilized to detect novel disease classes and comorbidity classes, so that changes in the genome pattern of the patient trajectory data can be identified across the layers of the architecture. The classifier learns the extracted feature set with activation and weight functions, and its outputs are then merged on many aspects to classify an undetermined sequence of diseases as a new variant. Disease-progression learning exploits the precision of the constituent classifiers, which usually yields larger generalization benefits than individually optimized classifiers. Findings: The deep learning architecture uses weight and bias functions on the input layers together with max pooling. The output of the input layer is applied to a hidden layer to generate the multifocus characteristics of the disease, which are processed through a ReLU activation function with hyperparameter tuning to produce the effective outcome in the output layer of a fully connected network. Cross-validated experimental results show that the proposed model outperforms existing methodologies in computation time and accuracy. Originality/value: The proposed evolving classifier is a robust architecture that uses an objective function to map a data sequence to a class distribution over the evolving disease classes for the patient trajectory. The generative output layer then produces the progression outcome of the disease for that trajectory, employing a conditional probability function on the data to produce accurate prognosis outcomes. The reported results are around 70%, and in comparison with previous methods the values are accurate, with improved analysis of the predictions.
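A generic forward pass of the kind the Findings describe (a ReLU hidden layer feeding a softmax class distribution) can be sketched in a few lines. The layer sizes, weights, and class names here are arbitrary placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())          # shift for numerical stability
    return e / e.sum()

# hypothetical sizes: 16 trajectory features -> 8 hidden units -> 3 classes
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def predict(features):
    """One ReLU hidden layer, then a softmax class distribution over
    e.g. {known disease, comorbidity, novel variant} (illustrative labels)."""
    h = relu(W1 @ features + b1)
    return softmax(W2 @ h + b2)

probs = predict(rng.normal(size=16))
```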
Funding: Partly supported by the National Natural Science Foundation of China (Nos. 61532012, 61370196, and 61672109).
Abstract: In this paper, we target similarity search among data supply chains, which plays an essential role in optimizing a supply chain and extending its value. The problem is very challenging for application-oriented data supply chains because their high complexity makes similarity computation extremely complex and inefficient. We propose a feature-space representation model based on key points, which extracts the key features from subsequences of the original data supply chain and simplifies them into feature-vector form. We then formulate the similarity computation of the subsequences based on the multiscale features, and further propose an improved hierarchical clustering algorithm for similarity search over data supply chains. The main idea is to separate the subsequences into disjoint groups such that each group meets one specific clustering criterion; the cluster containing the query object is then the similarity-search result. Experimental results show that the proposed approach is both effective and efficient for data supply chain retrieval.
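The key-point extraction and nearest-cluster lookup can be sketched as follows. The local-extremum rule and the Euclidean metric are illustrative simplifications, not the paper's multiscale feature definition:

```python
import math

def key_points(series):
    """Keep the endpoints plus local extrema (slope sign changes) as the
    key points of a numeric sequence."""
    pts = [series[0]]
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if (cur - prev) * (nxt - cur) < 0:   # slope changes sign at cur
            pts.append(cur)
    pts.append(series[-1])
    return pts

def nearest_cluster(query, centroids):
    """Similarity search: index of the centroid closest to the query vector."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(query, centroids[i]))
```

With the subsequences grouped around centroids, answering a query reduces to one nearest-centroid lookup followed by a scan of that single cluster.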