CS Events

Qualifying Exam

End-to-end Latency Measurement with One Side Traffic Visibility

 


Monday, November 18, 2024, 10:00am - 11:30am

 

Speaker: Bhavana Vannarth Shobhana

Location: CoRE 431

Committee

Professor Srinivas Narayana

Professor Badri Nath

Professor Richard Martin

Professor Mubbasir Kapadia

Event Type: Qualifying Exam

Abstract: End-to-end latency is a crucial metric that indicates the performance of networked services deployed in data centers. User-experienced latency significantly impacts the revenue of online services such as financial trading applications, online retail, streaming services, and mobile computing, so tracking user-visible latency is a prime concern. Several factors, including network path congestion, server load fluctuations, and volatile traffic conditions, can affect the latency experienced by users, leading to high variance in latency over time. It is therefore vital to track latency continuously without adding overhead at the components participating in service delivery. Our work investigates passive and continuous measurement of user-visible latency for networked services by integrating measurements into vantage points between the client and server. The basic approach is to measure the time between the departure of a client request and the arrival of its corresponding response at the vantage point. A key challenge, however, is that the vantage points of interest may only have visibility of user traffic in one direction. For example, load balancers may be configured to handle request traffic but not response traffic. We introduce techniques to estimate the arrival time of a response by leveraging the closed-loop nature of the network connections used by services. Further, we propose algorithms that exploit multiple observations over time to infer user-experienced latency while observing only the request traffic. Experiments with web traffic from servers rendering websites under realistic server load show that our approach achieves 98% accuracy relative to direct estimation of the request-to-request and request-to-response latency at the client.
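To make the one-sided measurement idea in the abstract concrete, the sketch below illustrates the simplest signal available when only request-direction traffic is visible: per-connection gaps between consecutive client requests, which on a closed-loop connection (the client cannot send its next request before the previous response arrives) bound the latency the client experienced. This is a minimal, hypothetical illustration of the general idea, not the speaker's actual algorithm; the `Request` record and the `conn_id`/`ts` names are assumptions made for the example.

```python
# Hypothetical sketch: request-to-request latency estimation at a vantage
# point that observes only client->server (request-direction) traffic.
# On a closed-loop connection, the gap between consecutive requests
# upper-bounds the request-to-response latency plus client think time.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Request:
    conn_id: str   # connection identifier, e.g., the TCP 4-tuple
    ts: float      # departure time of the request at the vantage point (seconds)


def request_to_request_latencies(requests: List[Request]) -> Dict[str, List[float]]:
    """Group requests by connection and return the gaps between
    consecutive requests on each connection, in seconds."""
    by_conn: Dict[str, List[float]] = {}
    for r in sorted(requests, key=lambda r: r.ts):
        by_conn.setdefault(r.conn_id, []).append(r.ts)

    return {
        conn: [t2 - t1 for t1, t2 in zip(times, times[1:])]
        for conn, times in by_conn.items()
    }


if __name__ == "__main__":
    trace = [
        Request("c1", 0.000),
        Request("c1", 0.045),  # next request ~45 ms after the first
        Request("c1", 0.090),
    ]
    # -> per-connection gaps of roughly 0.045 s each
    print(request_to_request_latencies(trace))
```

The abstract's algorithms go further, using multiple such observations over time to separate the response's arrival from client think time; the sketch above only shows the raw one-sided signal those techniques would start from.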

Contact: Professor Srinivas Narayana

Zoom Link: https://rutgers.zoom.us/j/6252693480?pwd=MitnM3pVN0pWaVFRTWJiVXVibVduUT09

Publication: https://dl.acm.org/doi/pdf/10.1145/3563766.3564094