Funding: supported by the Joint Funds of the National Natural Science Foundation of China (U21A2013), the State Key Laboratory of Biogeology and Environmental Geology, China University of Geosciences (GBL12107), and the National Natural Science Foundation of China (61271408).
Abstract: Multi-hazard susceptibility prediction is an important component of a disaster risk management plan. An effective multi-hazard risk mitigation strategy includes assessing individual hazards as well as their interactions. However, despite the rapid development of artificial intelligence technology, multi-hazard susceptibility prediction techniques based on conventional machine learning have encountered a bottleneck. To address this problem, this study proposes a multi-hazard susceptibility mapping framework using a classical deep learning algorithm, the Convolutional Neural Network (CNN). First, we use historical flash flood, debris flow and landslide locations based on Google Earth images, extensive field surveys, and topographic, hydrological and environmental data sets to train and validate the proposed CNN method. Next, the proposed CNN method is compared with conventional logistic regression and k-nearest neighbor methods using several objective criteria: the coefficient of determination, overall accuracy, mean absolute error and root mean square error. Experimental results show that the CNN method outperforms the conventional machine learning algorithms in predicting the probability of flash floods, debris flows and landslides. Finally, the susceptibility maps of the three hazards based on the CNN are combined to create a multi-hazard susceptibility map. The map shows that 62.43% of the study area is prone to hazards, while 37.57% is not. In hazard-prone areas, 16.14%, 4.94% and 30.66% of the study area are susceptible to flash floods, debris flows and landslides, respectively. In terms of concurrent hazards, 0.28%, 7.11% and 3.13% of the study area are susceptible to the joint occurrence of flash floods and debris flows, debris flows and landslides, and flash floods and landslides, respectively, whereas 0.18% of the study area is subject to all three hazards. The results of this study can benefit engineers, disaster managers and local government officials involved in sustainable land management and disaster risk mitigation.
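The final combination step — overlaying three single-hazard susceptibility grids into one multi-hazard map — can be sketched as follows. This is a minimal illustration on synthetic probability grids; the 0.5 threshold and the bit-flag class encoding are assumptions for the sketch, not the paper's actual scheme.

```python
import numpy as np

def multi_hazard_map(flood, debris, landslide, threshold=0.5):
    """Combine three susceptibility grids (probabilities in [0, 1])
    into a single categorical multi-hazard map.

    Encoding (assumed): bit 0 = flash flood, bit 1 = debris flow,
    bit 2 = landslide; 0 means no hazard exceeds the threshold."""
    return ((flood >= threshold).astype(int)
            + 2 * (debris >= threshold).astype(int)
            + 4 * (landslide >= threshold).astype(int))

# Synthetic 2x2 probability grids for illustration
flood = np.array([[0.9, 0.1], [0.6, 0.2]])
debris = np.array([[0.2, 0.1], [0.7, 0.3]])
slide = np.array([[0.8, 0.1], [0.9, 0.4]])

code = multi_hazard_map(flood, debris, slide)
# cell (0,0): flood + landslide -> 1 + 4 = 5
# cell (1,0): all three hazards -> 1 + 2 + 4 = 7
# right column: no hazard exceeds the threshold -> 0
print(code)
```

The share of hazard-prone area then falls out as `(code > 0).mean()`, mirroring the 62.43% / 37.57% split reported above.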
Funding: under the auspices of the Fundamental Research Funds for Central Universities, China University of Geosciences (Wuhan) (No. CUGL150417) and the National Natural Science Foundation of China (Nos. 41274036, 41301026).
Abstract: Land cover classification (LCC) in arid regions is of great significance to the assessment, prediction, and management of land desertification. Some studies have shown that the red-edge band of RapidEye images is effective for vegetation identification and can improve LCC accuracy. However, there has been no investigation of the effects of the RapidEye red-edge band and vegetation indices on LCC in arid regions, where spectrally similar land covers mix very high or very low vegetation coverage with bare land. This study focused on a typical inland arid desert region located in the Dunhuang Basin of northwestern China. First, five feature sets including or excluding the red-edge band and vegetation indices were constructed. Then, a land cover classification system involving plant communities was developed. Finally, random forest models built on the different feature sets were used for LCC. The conclusions are as follows: 1) the red-edge band made only a slight contribution to LCC accuracy; 2) vegetation indices had a significant positive effect on LCC; 3) adding the red-edge band and vegetation indices together achieved a significant overall accuracy improvement (3.46% over the 86.67% baseline). In general, vegetation indices had a larger effect than the red-edge band, and adding both significantly increased LCC accuracy in arid regions.
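The feature-set construction step can be sketched with synthetic RapidEye-like reflectances. The NDVI and red-edge NDVI formulas below are the standard definitions, and the band ordering is an assumption for the sketch; the paper's exact five feature sets are not reproduced.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index (standard formula)
    return (nir - red) / (nir + red)

def ndvi_re(nir, red_edge):
    # Red-edge NDVI, a common index for red-edge-equipped sensors
    return (nir - red_edge) / (nir + red_edge)

# Synthetic per-pixel reflectances: blue, green, red, red-edge, NIR
rng = np.random.default_rng(0)
pixels = rng.uniform(0.05, 0.6, size=(100, 5))
blue, green, red, red_edge, nir = pixels.T

# Feature set A: the five raw bands only
fs_a = pixels
# Feature set B: bands plus vegetation indices (the combination that
# gave the largest accuracy gain in the study)
fs_b = np.column_stack([pixels, ndvi(nir, red), ndvi_re(nir, red_edge)])

print(fs_a.shape, fs_b.shape)  # (100, 5) (100, 7)
```

Each feature matrix would then be fed to the same random forest classifier so that accuracy differences are attributable to the features alone.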
Funding: supported by the National Natural Science Foundation of China (42172333, 41902304, U1711267), the fund of the State Key Laboratory of Biogeology and Environmental Geology (2021), the Science and Technology Strategic Prospecting Project of Guizhou Province ([2022]ZD003), and the Knowledge Innovation Program of Wuhan-Shuguang Project (2022010801020206).
Abstract: Web browsers have become an important carrier for 3D model visualization because of their convenience and portability. When large-scale 3D models are visualized in Web scenes, rendering is slow and the FPS (Frames Per Second) is low; occlusion culling, an important rendering optimization, can remove most occluded objects and improve rendering efficiency. The traditional occlusion culling algorithm (TOCA) traverses all objects in the scene, which involves a large amount of repeated calculation and time consumption. To speed up rendering, this paper proposes an occlusion culling scheme with three optimizations based on the WebGPU compute pipeline. First, to eliminate the repeated calculations in TOCA, these computations are moved from the CPU to the GPU for parallel computing, accelerating the calculation of the Potentially Visible Set (PVS). Second, to reduce the large overhead of creating pipelines when a scene contains many 3D models, the Breaking Occlusion Culling Algorithm (BOCA) is introduced, which prunes nodes by building a Bounding Volume Hierarchy (BVH) scene tree. Finally, the scene tree structure is transmitted to the GPU in depth-first order and the PVS is obtained by parallel computing. In the experiments, 3D geological models at five scales from 1:5,000 to 1:500,000 are used for testing. The results show that the proposed methods effectively reduce the time overhead of repeated calculation caused by compute pipeline creation and recursive scene tree traversal, with a 97% rendering efficiency improvement over BOCA, thereby accelerating the rendering process in Web browsers.
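The pruning idea behind the BVH traversal can be sketched as follows. This is a CPU toy version for illustration only: the paper performs the per-node tests in parallel on the GPU via the WebGPU compute pipeline, and its occlusion test is far richer than the "box entirely inside one occluder rectangle" predicate assumed here.

```python
# Minimal sketch of BVH-based occlusion culling. Node layout and the
# occlusion predicate are simplifying assumptions, not the paper's.

class BVHNode:
    def __init__(self, aabb, children=None, object_id=None):
        self.aabb = aabb            # (min_x, min_y, max_x, max_y)
        self.children = children or []
        self.object_id = object_id  # set only on leaf nodes

def occluded(aabb, occluder):
    """True if aabb lies entirely inside the occluder rectangle."""
    return (occluder[0] <= aabb[0] and occluder[1] <= aabb[1]
            and aabb[2] <= occluder[2] and aabb[3] <= occluder[3])

def potentially_visible(node, occluder, pvs):
    # Depth-first traversal: an occluded bounding box hides its whole
    # subtree, so the subtree is pruned without visiting descendants.
    if occluded(node.aabb, occluder):
        return
    if node.object_id is not None:
        pvs.append(node.object_id)
    for child in node.children:
        potentially_visible(child, occluder, pvs)

leaves = [BVHNode((0, 0, 1, 1), object_id="A"),
          BVHNode((2, 2, 3, 3), object_id="B"),
          BVHNode((8, 8, 9, 9), object_id="C")]
root = BVHNode((0, 0, 9, 9), children=leaves)

pvs = []
potentially_visible(root, occluder=(0, 0, 4, 4), pvs=pvs)
print(pvs)  # A and B fall inside the occluder; only C survives
```

In the paper's GPU formulation, the tree is flattened in depth-first order before upload so that each compute invocation can test one node against the occluder set.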
Funding: supported by grants from the National Natural Science Foundation of China (42271391, 62006214), the Joint Funds of Equipment Pre-Research and Ministry of Education of China (8091B022148), the Special Project of the Hubei Key Research and Development Program (2023BIB015), and the Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing (KLIGIP-2021B03).
Abstract: The Snake Optimizer (SO) is a novel meta-heuristic algorithm (MA) inspired by the mating behaviour of snakes, which has achieved success in global numerical optimization problems and practical engineering applications. However, it has certain drawbacks in the exploration stage and the egg hatch process, resulting in slow convergence and inferior solution quality. To address these issues, this article proposes a multi-strategy improved SO (MISO) assisted by population crowding analysis. In the algorithm, a novel multi-strategy operator is designed for the exploration stage, which focuses both on using the information of better-performing individuals to improve solution quality and on maintaining population diversity. To boost the efficiency of the egg hatch process, a multi-strategy egg hatch process is proposed that regenerates individuals according to the results of the population crowding analysis. In addition, a local search method is employed to further enhance convergence speed and local search capability. MISO is first compared with three sets of algorithms on the CEC2020 benchmark functions: SO and two of its recently proposed variants, ten advanced MAs, and six powerful CEC competition algorithms. The performance of MISO is then verified on five practical engineering design problems. The experimental results show that MISO delivers promising performance on these optimization cases in terms of both convergence speed and solution quality.
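One simple way to quantify the "population crowding" that drives the regeneration step is each individual's nearest-neighbour distance: small values mark crowded regions whose individuals are candidates for regeneration. This is a generic stand-in sketch; MISO's actual crowding formula is not reproduced here.

```python
import math

def crowding(population):
    """Nearest-neighbour distance of each individual in decision space.
    A simple crowding proxy (an assumption for this sketch, not MISO's
    exact analysis): small distances indicate crowded regions."""
    dists = []
    for i, xi in enumerate(population):
        nearest = min(math.dist(xi, xj)
                      for j, xj in enumerate(population) if j != i)
        dists.append(nearest)
    return dists

pop = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
d = crowding(pop)
# The two clustered individuals get small distances; the isolated one
# gets a large distance and would be left untouched by regeneration.
print(d)
```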
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61771496, 42030111, and 61976234) and partially supported by the National Program on Key Research Projects of China (Grant No. 2017YFC1502706).
Abstract: Spectral-spatial Gabor filtering (GF), a robust feature extraction tool, has been widely investigated for hyperspectral image (HSI) classification. Recently, a new type of GF method, phase-induced GF, was proposed and showed great potential for HSI feature extraction. Although this new type of GF may better explore the frequency characteristics of HSIs, the added parameter makes it generate a much larger number of features, introducing redundancy and noise that risk severely deteriorating the efficiency and accuracy of classification. To tackle this problem, we exploit phase-induced Gabor features efficiently, proposing an efficient phase-induced Gabor cube selection and weighted fusion (EPCS-WF) method for HSI classification. Specifically, to eliminate the redundancy and noise, we first select the most representative Gabor cubes using a newly designed energy-based phase-induced Gabor cube selection (EPCS) algorithm before feeding them into classifiers. Then, a weighted fusion (WF) strategy is adopted to integrate the mutual information residing in different feature cubes into the final predictions. Our experimental results on four well-known HSI datasets demonstrate that the EPCS-WF method, while adopting only four selected Gabor cubes for classification, delivers better performance than other Gabor-based methods. For reproducibility, the code of this work is available at https://github.com/cairlin5/EPCS-WF-hyperspectral-image-classification.
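The select-then-fuse pipeline can be sketched on synthetic feature cubes: rank cubes by energy, keep the top four, and fuse per-cube class scores with energy-proportional weights. The energy definition (sum of squares) and the weighting rule are assumptions for this sketch, not the published EPCS/WF formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
# Eight synthetic "Gabor feature cubes" (rows x cols x bands), with
# deliberately different magnitudes so their energies differ
scales = (0.2, 1.5, 0.4, 2.0, 0.1, 1.0, 0.3, 1.8)
cubes = [rng.normal(scale=s, size=(4, 4, 6)) for s in scales]

# Energy-based selection: keep the k cubes with the largest energy
energies = np.array([np.sum(c ** 2) for c in cubes])
k = 4
selected = np.argsort(energies)[::-1][:k]

# Weighted fusion: combine per-cube class scores, weighting each
# selected cube in proportion to its energy
weights = energies[selected] / energies[selected].sum()
scores = rng.random((len(cubes), 3))  # stand-in per-cube class scores
fused = np.average(scores[selected], axis=0, weights=weights)
print(sorted(selected.tolist()), fused)
```

In the real method the per-cube scores come from classifiers trained on each Gabor cube rather than random numbers.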
Funding: financially supported by the National Natural Science Foundation of China (grant numbers 41975044, 41771360, 41601044, 41801021, 41571400) and the Special Fund for Basic Scientific Research of Central Colleges, China University of Geosciences, Wuhan (grant numbers CUGL170401, CUGCJ1704).
Abstract: Land surface temperature (LST) is a key parameter of the land surface system. The National Aeronautics and Space Administration (NASA) recently released new Moderate Resolution Imaging Spectroradiometer (MODIS) LST products (MOD21 and MYD21). Here, we conducted a detailed comparison between the MYD11 and MYD21 LST data over mainland China. Averaged over mainland China, MYD21 LSTs were approximately 1℃ higher than those of MYD11, as MYD21 corrects the cold bias of MYD11. The proportion of valid values in MYD21 was generally lower than in MYD11 because the cloud removal method of MYD21 is stricter. Furthermore, outliers were less significant in MYD11 than in MYD21 because outliers in MYD11 are removed using temporal constraints on LST. The outliers in MYD21A2 resulted in a difference of greater than 3℃ in average seasonal surface urban heat island intensity (SUHII) between MYD11A2 and MYD21A2. Finally, using MYD11 may underestimate the slope of long-term SUHII trends, while MYD21 LST data may carry some uncertainties in urban areas. This study provides a reference for users selecting LST products and for data producers seeking to further improve MODIS LST products.
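SUHII, the quantity the two products disagree on, is conventionally the mean urban LST minus the mean rural LST. A minimal sketch (the rural reference here is simplified to "everything outside the urban mask"; real studies use a buffer ring and exclude water):

```python
import numpy as np

def suhii(lst, urban_mask):
    """Surface urban heat island intensity: mean urban LST minus mean
    rural LST. Rural selection is deliberately simplified for this
    sketch to the complement of the urban mask."""
    urban = lst[urban_mask]
    rural = lst[~urban_mask]
    return np.nanmean(urban) - np.nanmean(rural)

lst = np.array([[32.0, 30.0], [29.0, 28.0]])   # LST grid, degrees C
urban = np.array([[True, False], [False, False]])
print(suhii(lst, urban))  # 32 - mean(30, 29, 28) = 3.0
```

Because the statistic is a difference of means, a few warm-biased outliers in one product can shift it by several degrees, which is exactly the MYD21A2-versus-MYD11A2 discrepancy described above.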
Funding: supported by three grants from the National Natural Science Foundation of China (Nos. 41821001, 41902315, and 41930322).
Abstract: In any scientific discipline, a large amount of data lies buried in literature repositories and archives, making it difficult to extract useful information manually from these data swamps. Machine-learning extraction of data is therefore necessary for big-data-based studies. Here, we develop a new text-mining technique to reconstruct a global database of Precambrian to Recent stromatolites, providing a better understanding of secular changes in stromatolites through geological time. The step-by-step data extraction process is as follows. First, PDF documents of stromatolite-bearing publications were collected and converted into text format. Second, a glossary and tag-labeling system built with NLP (Natural Language Processing) software was employed to search for all possible candidate pairs in each sentence of the collected papers. Third, each candidate pair and its features were represented in a factor graph model, with a series of heuristic procedures scoring the weights of each pair feature. Occurrence data of stromatolites versus stratigraphical units (abbreviated as Strata), facies types, locations, and ages worldwide were extracted, with extraction accuracies of 92%/464, 87%/778, 92%/846, and 93%/405 from 3,750 scientific abstracts, and 90%/1,734, 86%/2,869, 90%/2,055, and 91%/857 from 11,932 papers, respectively. A total of 10,072 unique data items were identified. The newly obtained stromatolite dataset demonstrates that stratigraphical occurrences reached a pronounced peak during the Proterozoic (2,500–541 Ma), followed by a distinct fall during the Early Phanerozoic and overall fluctuations through the Phanerozoic (541–0 Ma). Globally, seven stromatolite hotspots were identified from the new dataset: the western United States, eastern United States, western Europe, India, South Africa, northern China, and southern China. The proportional occurrence of inland aquatic stromatolites remains rather low (~20%) compared with marine stromatolites from the Precambrian to the Jurassic, and then displays a significant increase (30%–70%) from the Cretaceous to the present.
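The candidate-pair generation step (step two above) can be sketched with a regex: pair every stromatolite mention with every age expression in the same sentence, erring on the side of recall. The pattern and pairing rule are illustrative assumptions; in the actual pipeline a factor graph model then scores and filters these candidates.

```python
import re

# Matches ages like "541 Ma" or "1 600 Ma" (digits, optional spaces)
AGE = re.compile(r"(\d[\d\s]*)\s*Ma\b")

def candidate_pairs(sentence):
    """Recall-oriented first pass: emit every (mention, age) pair
    found in one sentence; downstream scoring would prune bad pairs."""
    pairs = []
    if "stromatolite" in sentence.lower():
        for m in AGE.finditer(sentence):
            age = int(m.group(1).replace(" ", ""))
            pairs.append(("stromatolite", age))
    return pairs

s = "Columnar stromatolites occur in dolomites dated to 1 600 Ma."
print(candidate_pairs(s))  # [('stromatolite', 1600)]
```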
Abstract: In recent years, with the progress of computer technology, traditional industries such as geology have been facing changes in industrial structure and application mode, so we apply big data and virtualization technology to the geosciences. This study addresses existing problems in geological applications such as data sharing, data processing, and computing performance. A Geological Cloud Platform has been designed and preliminarily implemented with big data and virtualization technology. The application of the Geological Cloud Platform can be divided into two parts: 1) nesting geological computing models in the cloud platform and visualizing the results, and 2) using relevant software to conduct data analysis and processing in Windows or Linux virtual machines. Finally, we prospect for Carlin-type deposits in Nevada using the spatial data model ArcSDM in a virtual machine.
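ArcSDM's core prospectivity statistic is weights of evidence, which can be illustrated with toy cell counts. The counts below are invented for the sketch; only the W+/W- formulas (log ratios of conditional probabilities of the evidence layer given deposits vs. non-deposits) are standard.

```python
import math

def weights_of_evidence(n_bd, n_b, n_d, n_total):
    """Positive/negative weights for a binary evidence layer B with
    respect to deposits D, from raster cell counts:
      W+ = ln[P(B|D) / P(B|~D)],  W- = ln[P(~B|D) / P(~B|~D)].
    This is the core statistic behind ArcSDM's weights-of-evidence
    modelling; the counts used below are purely illustrative."""
    p_b_d = n_bd / n_d                        # P(B | D)
    p_b_nd = (n_b - n_bd) / (n_total - n_d)   # P(B | ~D)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus

# Toy study area: 10,000 cells, 100 deposit cells, and 2,000 cells of
# favourable lithology containing 60 of the deposits
w_plus, w_minus = weights_of_evidence(n_bd=60, n_b=2000,
                                      n_d=100, n_total=10000)
print(round(w_plus, 3), round(w_minus, 3))
```

A strongly positive W+ with a negative W- indicates the evidence layer is spatially associated with known deposits, which is what drives the Nevada Carlin-type prediction.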
Funding: supported by the CCF-NSFOCUS Kun-Peng Scientific Research Fund (No. CCF-NSFOCUS2021008), the Provincial Key Research and Development Program of Hubei (No. 2020BAB105), the National Natural Science Foundation of China (Grant No. 61972366), the Knowledge Innovation Program of Wuhan-Basic Research (No. 2022010801010197), the Foundation of the Hubei Key Laboratory of Intelligent Geo-Information Processing (No. KLIGIP-2021B06), and the Opening Project of Nanchang Innovation Institute, Peking University (No. NCII2022A02). The work of K.-K. R. Choo was supported only by the Cloud Technology Endowed Professorship.
Abstract (1 Introduction): A Bitcoin ledger comprises a sizable number of transaction records, which can be used to track and analyze the traits and patterns of cryptocurrency-related transactions. To facilitate the visual analysis of Bitcoin, numerous tools with various aims have been developed. For example, MiningVis [1] and SuPoolVisor [2] are analytics systems for Bitcoin mining pools, while [3-5] focus on the analysis of Bitcoin transaction graphs. However, given our research requirements for Bitcoin transaction graphs, none of the available tools supports exploring address-related connection features and observing significant visual patterns. Specifically, with these tools it is challenging to navigate effortlessly from large node groups to abnormal node clusters and then analyze the local interlink characteristics between transaction nodes with interactive analytics.
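A minimal way to surface "abnormal" candidates in an address graph is to build an adjacency structure from transaction edges and flag unusually high-degree addresses. The edge list and the degree threshold are invented for this sketch; real tools use much richer structural features than degree alone.

```python
from collections import defaultdict

# Toy address graph: each edge is a (sender, receiver) pair
edges = [("a1", "a2"), ("a1", "a3"), ("a1", "a4"), ("a1", "a5"),
         ("a2", "a3"), ("a6", "a7")]

# Undirected degree count per address
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Flag addresses whose degree exceeds a (purely illustrative) threshold
threshold = 3
abnormal = sorted(addr for addr, d in degree.items() if d > threshold)
print(abnormal)  # ['a1'] -- a1 participates in four transactions
```

An interactive system would let the analyst jump from such flagged nodes into their local neighbourhoods, which is the workflow the text says existing tools make difficult.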
Funding: supported by the National Key Research and Development Project of China (No. 2023YFC3007303) and the Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing (No. KLIGIP-2019B08).
Abstract (0 Introduction): The global availability of digital elevation model (DEM) data, such as the 90-m Shuttle Radar Topography Mission (SRTM) DEM and the 30-m Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), has been extensively utilized in morphotectonic analyses (e.g., Wang et al., 2024; Cheng et al., 2018; Pérez-Peña et al., 2010; El Hamdouni et al., 2008).
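One of the standard morphotectonic indices computed from such DEMs is the hypsometric integral, HI = (mean - min) / (max - min) of basin elevations. A minimal sketch on a synthetic elevation grid (the tiny DEM values are illustrative):

```python
import numpy as np

def hypsometric_integral(dem):
    """Hypsometric integral HI = (mean - min) / (max - min),
    a standard morphotectonic index derived from DEM elevations;
    values near 1 suggest youthful, weakly eroded relief."""
    z = dem[np.isfinite(dem)]  # ignore NaN/no-data cells
    return (z.mean() - z.min()) / (z.max() - z.min())

dem = np.array([[100.0, 200.0], [300.0, 400.0]])  # elevations in m
print(hypsometric_integral(dem))  # (250 - 100) / (400 - 100) = 0.5
```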