Journal Literature: 6 articles found
1. Blockchain for Smart Homes: Blockchain Low Latency
Authors: Reem Jamaan Alzahrani, Fatimah Saad Alzahrani. Journal of Computer and Communications, 2024, Issue 12, pp. 1-15 (15 pages).
The inclusion of blockchain in smart homes increases data security and accuracy within home ecosystems but presents latency issues that hinder real-time interactions. This study addresses the important challenge of blockchain latency in smart homes through the development and application of the Blockchain Low Latency (BLL) model using Hyperledger Fabric v2.2. With respect to latency, the BLL model optimizes the following fundamental blockchain parameters: transmission rate, endorsement policy, batch size, and batch timeout. After conducting hypothesis testing on the system parameters, we found that a transmission rate of 30 transactions per second (tps), an OutOf(2) endorsement policy in which any two of five peers endorse, a batch size of 10, and a batch timeout of 1 s considerably decrease latency. The BLL model achieved an average latency of 0.39 s, approximately 30 times faster than Ethereum's average latency of 12 s, thereby enhancing the efficiency of blockchain-based smart home applications. The results demonstrate that although blockchain introduces certain latency issues, proper selection of configuration parameters can eliminate these problems, making blockchain technology more viable for real-time Internet of Things (IoT) applications such as smart homes. Future work involves applying the proposed model to a larger overlay, deploying it in real-world smart home environments using sensor devices, enhancing the given configuration to accommodate a large number of transactions, and adjusting the overlay in line with the complexity of the network. This study therefore provides practical recommendations for solving the latency issue in blockchain systems, relates theoretical advancements to real-life applications in IoT environments, and stresses the significance of parameter optimization for maximum effectiveness.
Keywords: Internet of Things (IoT); blockchain latency; Hyperledger Fabric; smart homes; latency optimization; data security
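The abstract's reported low-latency configuration can be summarized concretely. Below is a minimal sketch (not the authors' code) that encodes those parameters in Hyperledger Fabric terms and bounds the block-cutting wait they imply; the variable names and the five peer MSP identifiers are illustrative assumptions.

```python
# BLL-style parameter set as reported in the abstract (illustrative names).
bll_config = {
    "transmission_rate_tps": 30,                  # client send rate found to minimize latency
    "endorsement_policy": "OutOf(2, 'Org1MSP.peer', 'Org2MSP.peer', 'Org3MSP.peer', "
                          "'Org4MSP.peer', 'Org5MSP.peer')",  # any 2 of 5 peers endorse
    "batch_size_max_message_count": 10,           # orderer BatchSize.MaxMessageCount
    "batch_timeout_seconds": 1.0,                 # orderer BatchTimeout
}

def block_cut_wait_bound(cfg: dict) -> float:
    """Upper bound on how long a transaction waits for its block to be cut:
    either the batch fills at the offered tps, or the timeout fires first."""
    fill_time = cfg["batch_size_max_message_count"] / cfg["transmission_rate_tps"]
    return min(fill_time, cfg["batch_timeout_seconds"])

print(f"Block-cutting wait bound ~ {block_cut_wait_bound(bll_config):.2f} s")
```

With these values the batch fills in about 0.33 s, well below the 1 s timeout, which is consistent with the sub-second average latency the abstract reports.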
2. Latency Minimization Using an Adaptive Load Balancing Technique in Microservices Applications
Authors: G. Selvakumar, L.S. Jayashree, S. Arumugam. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 1215-1231 (17 pages).
Advancements in cloud computing and virtualization technologies have revolutionized enterprise application development with innovative ways to design and develop complex systems. Microservices architecture is one of the recent techniques in which enterprise systems can be developed as fine-grained, smaller components and deployed independently. This methodology brings numerous benefits such as scalability, resilience, flexibility in development, and faster time to market, but along with the advantages, microservices bring some challenges too. To complete a user's request, multiple microservices need to be invoked one by one as a chain, and in most applications more than one chain of microservices runs in parallel to fulfil a particular requirement. This results in competition for resources and more inter-service communication, which increases the overall latency of the application. A new approach is proposed in this paper to handle complex chains of microservices and reduce the latency of user requests. A machine learning technique is used to predict the waiting time of different types of requests. The communication time among services distributed across different physical machines is estimated on that basis, and the obtained insights are fed into an algorithm that calculates service priorities dynamically and selects suitable service instances to minimize latency based on the shortest queue waiting time. Experiments were conducted for both interactive and non-interactive workloads to test the effectiveness of the solution. The approach proved to be very effective in reducing latency in the case of long service chains.
Keywords: microservices; load balancing; cloud computing; latency optimization; Netflix
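The core routing rule described in this abstract, sending each request to the instance with the shortest predicted queue waiting time, can be sketched as follows. This is a minimal illustration under assumed names; `predict_wait_seconds` stands in for the paper's learned predictor and here uses a simple queue-length heuristic.

```python
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    name: str
    queue_length: int            # requests currently waiting on this instance
    mean_service_time_s: float   # observed average time to serve one request

def predict_wait_seconds(inst: ServiceInstance, request_type: str) -> float:
    # Placeholder for the ML waiting-time predictor mentioned in the abstract.
    return inst.queue_length * inst.mean_service_time_s

def pick_instance(instances: list[ServiceInstance], request_type: str) -> ServiceInstance:
    # Shortest-predicted-wait selection: the heart of the adaptive load balancer.
    return min(instances, key=lambda i: predict_wait_seconds(i, request_type))

instances = [
    ServiceInstance("orders-a", queue_length=4, mean_service_time_s=0.05),
    ServiceInstance("orders-b", queue_length=1, mean_service_time_s=0.08),
]
print(pick_instance(instances, "checkout").name)   # -> orders-b
```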
3. Optimal edge-cloud collaboration based strategies for minimizing valid latency of railway environment monitoring system
Authors: Xiaoping Ma, Jing Zhao, Limin Jia, Xiyuan Chen, Zhe Li. High-Speed Railway, 2023, Issue 3, pp. 185-194 (10 pages).
Response speed is vital for the railway environment monitoring system, especially for sudden-onset disasters. The edge-cloud collaboration scheme has proved efficient for reducing latency. However, the data characteristics and communication demands of the tasks in the railway environment monitoring system differ and change over time, and each task contributes differently to the system's latency. Hence, two valid-latency minimization strategies based on the edge-cloud collaboration scheme are developed in this paper. First, processing resources are allocated to the tasks based on their priorities, and the tasks are processed in parallel with the allocated resources to minimize the system valid latency. Furthermore, differences in the data volume of the tasks can waste the resources of tasks that finish early; thus, tasks with similar priorities are graded into the same group, and serial and parallel processing strategies are applied intra-group and inter-group simultaneously. Compared with four other strategies in four railway monitoring scenarios, the proposed strategies deliver lower latency for high-priority tasks while also reducing the system valid latency. With the proposed scheme and strategies, the security and efficiency of the railway environment monitoring system will be greatly improved.
Keywords: railway environment monitoring; edge-cloud collaboration computing; valid latency optimization
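A minimal sketch of the grouping idea in this abstract, with illustrative names and a simplified latency measure that omits the paper's priority weighting: tasks with similar priorities share a group, groups run in parallel, and tasks inside a group run serially on the group's allocated resource.

```python
from collections import defaultdict

def group_by_priority(tasks, band_width=2):
    """tasks: list of (task_id, priority, data_volume); smaller priority = more urgent.
    Tasks whose priorities fall in the same band go into the same group."""
    groups = defaultdict(list)
    for task in tasks:
        groups[task[1] // band_width].append(task)
    return groups

def system_valid_latency(groups, resource_per_group):
    """Groups are processed in parallel; within a group tasks run serially,
    so a group's finish time is the sum of its tasks' processing times."""
    finish_times = [
        sum(volume / resource_per_group for _, _, volume in tasks)
        for tasks in groups.values()
    ]
    return max(finish_times)   # the slowest group bounds the system valid latency

tasks = [("landslide", 0, 8.0), ("wind", 1, 4.0), ("video", 5, 20.0)]
print(system_valid_latency(group_by_priority(tasks), resource_per_group=10.0))
```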
4. Multi-network-region traffic cooperative scheduling in large-scale LEO satellite networks
Authors: LI Chengxi, WANG Fu, YAN Wei, CUI Yansong, FAN Xiaodong, ZHU Guangyu, XIE Yanxi, YANG Lixin, ZHOU Luming, ZHAO Ran, WANG Ning. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, Issue 4, pp. 829-841 (13 pages).
A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability of the geographic distribution of the Earth's population leads to an uneven service-volume distribution of access service. Moreover, the limited resources of satellites are far from able to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales significantly affect the overall network throughput of an LEO satellite network. Then, we propose a multi-region cooperative traffic scheduling algorithm. The algorithm migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network while sacrificing some end-to-end forwarding latency. This algorithm can utilize all global satellite resources and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites. Based on the model, we build a system testbed using OMNET++ to compare the proposed method with existing techniques. The simulations show that our proposed method can reduce the packet loss probability by 30% and improve the resource utilization ratio by 3.69%.
Keywords: low-Earth-orbit (LEO) satellite network; satellite communication; load balance; multi-region scheduling; latency optimization
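A minimal sketch, with hypothetical names and a deliberately simplified policy, of the migration rule described above: when a region's offered traffic exceeds its forwarding capacity, its lowest-grade flows are detoured to the most lightly loaded (coldspot) region, trading some forwarding latency for throughput.

```python
def migrate_low_grade_traffic(regions):
    """regions: dict name -> {"capacity": Gbps, "flows": [(grade, demand_gbps), ...]}.
    Higher grade = higher priority and is kept local. Returns (src, dst, grade, gbps)."""
    plan = []
    # Coldspot first: regions ordered by (offered load - capacity), most underloaded first.
    cold = sorted(regions, key=lambda r: sum(d for _, d in regions[r]["flows"]) - regions[r]["capacity"])
    for name, reg in regions.items():
        overflow = sum(d for _, d in reg["flows"]) - reg["capacity"]
        if overflow <= 0:
            continue
        # Shed the lowest-grade flows first until the region fits its capacity.
        for grade, demand in sorted(reg["flows"], key=lambda f: f[0]):
            if overflow <= 0:
                break
            shed = min(demand, overflow)
            plan.append((name, cold[0], grade, shed))
            overflow -= shed
    return plan

regions = {
    "europe":  {"capacity": 10.0, "flows": [(3, 6.0), (1, 8.0)]},   # hotspot
    "pacific": {"capacity": 10.0, "flows": [(2, 1.0)]},             # coldspot
}
print(migrate_low_grade_traffic(regions))   # -> [('europe', 'pacific', 1, 4.0)]
```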
5. User-Level Scheduling and Resource Allocation for Multi-Beam Satellite Systems with Full Frequency Reuse (Cited by: 3)
Authors: Tao Leng, Yanan Wang, Dongwei Hu, Gaofeng Cui, Weidong Wang. China Communications (SCIE, CSCD), 2022, Issue 6, pp. 179-192 (14 pages).
Multi-beam satellite communication systems can improve resource utilization and system capacity effectively. However, inter-beam interference, especially in a satellite system with full frequency reuse, will greatly degrade system performance due to the characteristics of multi-beam satellite antennas. In this article, the user scheduling and resource allocation of a multi-beam satellite system with full frequency reuse are jointly studied, in which all beams can use the full bandwidth. In the presence of strong inter-beam interference, we aim to minimize the system latency experienced by users during data downloading. To solve this problem, deep reinforcement learning is used to schedule users and allocate bandwidth and power resources so as to mitigate the inter-beam interference. The simulation results are compared with other reference algorithms to verify the effectiveness of the proposed algorithm.
Keywords: multi-beam satellite; full frequency reuse; inter-beam interference; latency optimization; deep reinforcement learning
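To make the optimization target concrete, here is a minimal sketch (assumed symbols, not the paper's model) of the download latency a scheduler in a full-frequency-reuse system must keep small: with every beam sharing the whole band, a user's achievable rate depends on the SINR including inter-beam interference.

```python
import math

def user_rate_bps(bandwidth_hz, signal_w, interference_w, noise_w):
    # Shannon rate under inter-beam interference (all beams reuse the full band).
    sinr = signal_w / (interference_w + noise_w)
    return bandwidth_hz * math.log2(1.0 + sinr)

def download_latency_s(file_bits, bandwidth_hz, signal_w, interference_w, noise_w):
    # Time to drain a user's remaining data; the DRL scheduler picks the user set
    # and the bandwidth/power split that keeps this small across users.
    return file_bits / user_rate_bps(bandwidth_hz, signal_w, interference_w, noise_w)

# Example: a 50 MB download over 100 MHz of shared bandwidth with strong co-channel interference.
print(download_latency_s(50e6 * 8, 100e6,
                         signal_w=1e-12, interference_w=4e-13, noise_w=1e-13))
```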
6. Workload-aware request routing in cloud data center using software-defined networking
Authors: Haitao Yuan, Jing Bi, Bohu Li. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2015, Issue 1, pp. 151-160 (10 pages).
Large application latency brings revenue loss to cloud infrastructure providers in the cloud data center. The existing controllers of a software-defined networking architecture can fetch and process traffic information in the network; therefore, the controllers can only optimize the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests. Unintelligent request routing will cause large serving latency if arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture. Request routing algorithms are then proposed to minimize the total round trip time for every type of request by considering both the congestion in the network and the workload in the virtual machines (VMs). The paper finally provides an evaluation of the proposed algorithms in a simulated prototype. The simulation results show that the proposed methodology is efficient compared with the existing approaches.
Keywords: cloud data center (CDC); software-defined networking; request routing; resource allocation; network latency optimization
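A minimal sketch, with hypothetical field names, of the routing decision described above: the controller picks the VM minimizing estimated round trip time, i.e. network latency along the path plus the serving latency implied by the VM's current workload, so a lightly loaded but farther VM can beat a nearby overloaded one.

```python
from dataclasses import dataclass

@dataclass
class VmCandidate:
    vm_id: str
    path_network_latency_ms: float   # from the SDN controller's view of link congestion
    pending_requests: int            # current workload queued on the VM
    service_time_ms: float           # average time to serve one request of this type

def estimated_rtt_ms(vm: VmCandidate) -> float:
    serving = (vm.pending_requests + 1) * vm.service_time_ms
    return vm.path_network_latency_ms + serving

def route_request(candidates: list[VmCandidate]) -> VmCandidate:
    # Workload-aware routing: minimize network latency + serving latency jointly.
    return min(candidates, key=estimated_rtt_ms)

candidates = [
    VmCandidate("vm-close", 2.0, pending_requests=30, service_time_ms=1.5),  # near but overloaded
    VmCandidate("vm-far", 8.0, pending_requests=3, service_time_ms=1.5),     # farther but idle
]
print(route_request(candidates).vm_id)   # -> vm-far
```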