Presentation

Scalable Planning Platform for Orchestration of Autonomous Systems Across Edge-Cloud Continuum
Description

Edge accelerators, such as NVIDIA Jetson, are enabling rapid inference of deep neural network (DNN) models and computer vision algorithms through low-end graphics processing unit (GPU) modules integrated with ARM-based processors. Their compact form factor allows integration with mobile platforms, such as unmanned aerial vehicles (UAVs) with onboard cameras, facilitating real-time execution of diverse scientific workflows, from wildfire monitoring to disaster management. The limited compute resources of mobile edge accelerators necessitate collaboration with remote servers in the cloud for processing compute-intensive workloads. These remote servers can include high-performance computers, serverless cloud platforms offering Functions-as-a-Service (FaaS), or private GPU servers.

My PhD dissertation proposes and implements a scalable platform designed to support multiple mobile devices (UAVs) with edge accelerators, collaborating with remote servers to provide real-time performance for a wide range of spatio-temporal autonomous applications. The platform incorporates deadline-driven scheduling heuristics, strategies for preemptively dropping tasks based on their earliest deadlines, migration of tasks from edge to cloud, work stealing from cloud back to edge, and adaptation to network variability, all while ensuring quality of service (QoS). Outputs from the servers can be used by other mobile devices, or by the planning platform itself to orchestrate the next set of tasks in the workflow. Evaluations across multiple workloads show that the proposed heuristics strike a better balance between task completion and accrued utility than the baseline algorithms.
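To make the interplay of these mechanisms concrete, below is a minimal sketch (not the dissertation's actual implementation) of how deadline-driven scheduling, preemptive dropping, and edge-to-cloud migration could be combined: tasks are drained in earliest-deadline-first order, kept on the edge accelerator when they can finish in time, migrated to a remote server when only the cloud path meets the deadline, and dropped early otherwise. All task names, time estimates, and the single-edge-device assumption are illustrative.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                               # absolute deadline (seconds); EDF sort key
    name: str = field(compare=False)
    est_edge_time: float = field(compare=False)   # estimated execution time on the edge GPU
    est_cloud_time: float = field(compare=False)  # estimated time on a remote server, incl. network

def schedule(tasks, now=0.0):
    """Drain tasks in earliest-deadline-first order, assigning each to
    the edge, the cloud, or the drop list. Returns three name lists."""
    heap = list(tasks)
    heapq.heapify(heap)                 # min-heap ordered by deadline
    edge, cloud, dropped = [], [], []
    edge_free_at = now                  # when the edge accelerator next becomes idle
    while heap:
        t = heapq.heappop(heap)
        if edge_free_at + t.est_edge_time <= t.deadline:
            edge.append(t.name)         # meets its deadline locally
            edge_free_at += t.est_edge_time
        elif now + t.est_cloud_time <= t.deadline:
            cloud.append(t.name)        # migrate: only the cloud path is fast enough
        else:
            dropped.append(t.name)      # cannot meet the deadline anywhere: drop early

    return edge, cloud, dropped

tasks = [
    Task(0.5, "plan",   0.3, 0.8),
    Task(0.6, "detect", 0.4, 0.55),
    Task(0.7, "track",  0.5, 0.9),
]
edge, cloud, dropped = schedule(tasks)
```

With these illustrative numbers, "plan" runs on the edge, "detect" is migrated to the cloud (the edge is busy until past its deadline), and "track" is dropped preemptively rather than consuming resources it cannot convert into utility.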