-
Ohio State University PI: Yingbin Liang
Co-PIs: Junshan Zhang, Qinru Qiu, Hai Li
Microsoft Liaison: Kevin Hsieh
Thanks to the significant technological advances in unmanned aerial vehicles (UAVs), the past few years have witnessed an explosive growth of UAVs in civilian and commercial applications. To ensure efficient and reliable auto-navigation and planning in these applications, there is an urgent need to develop the foundational technology that enables UAVs to have strong sensing, communications, and on-board computing capabilities, to adapt rapidly to the dynamically changing environment, to be resilient to extreme environmental conditions, and to be secure against data contamination and malicious attacks. The interdisciplinary nature of the proposed research will provide valuable research opportunities and hands-on projects for a diverse group of students.
The goal of this project is to leverage and significantly advance the recent breakthroughs in NextG wireless communications, deep machine learning, hardware-aware model generation, and robust and trustworthy artificial intelligence, to enable the design of an intelligent and resilient UAV navigation and planning system. More specifically, this project will develop: (a) real-time communication assisted ambient sensing with multi-modality data fusion and machine learning assisted fast processing for global state tracking; (b) a multi-agent decentralized reinforcement learning (RL) framework with highly scalable computations and flexible latency tolerance; (c) deep learning based message passing for efficient communication and powerful hardware-aware neural architecture search for efficient on-board computation; and (d) comprehensive robustness and security design for system protection from outlier data, malicious poisoning attacks, and RL system attacks. The project will also conduct extensive performance evaluations to validate the developed approaches and algorithms.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Northwestern University PI: Yan Chen
Co-PIs: Hai Zhou, Simone Campanoni
Microsoft Liaison: Dan Fay
Cellular networks have become an inseparable part of everyday life. Newer generations of cellular network infrastructure, such as 5G, have become more complicated, and their development will require greater effort and more time. To reduce the dependence of cellular infrastructure on foreign providers, and thus to improve the security of the national information infrastructure, it is necessary to facilitate the rapid development of open-source and secure software for new generations of cellular networks. A central challenge in these efforts is how to quickly generate secure open-source code according to changing NextG protocol standards. This research project develops an automatic, secure, and scalable tool-chain to accelerate the translation of NextG protocol documents into executable code.
The project leverages technical advancements in formal languages for protocol description, formal verification for security analysis, and program synthesis for code generation. These components are tightly integrated to explore a radical and game-changing approach to protocol software synthesis. The output of the research is a chain of tools that starts with NextG documents, produces an I/O Automata representation of the protocols, and finally generates secure open-source code. Formal security analysis is also conducted on the I/O Automata representation to identify and remove vulnerabilities.
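As a schematic illustration of what an I/O Automata representation of a protocol looks like, the Python sketch below encodes a heavily simplified, hypothetical attach exchange as states, actions, and guarded transitions; it is not output of the project's tool-chain, and the state and action names are invented.

```python
# A minimal, illustrative I/O automaton: states, actions, and transitions.
class IOAutomaton:
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (state, action) -> next state

    def add(self, state, action, next_state):
        self.transitions[(state, action)] = next_state

    def step(self, action):
        key = (self.state, action)
        if key not in self.transitions:
            # An unexpected action in a state is exactly the kind of case
            # formal analysis flags as a potential vulnerability.
            raise ValueError(f"action {action!r} not enabled in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# Toy fragment of a UE attach exchange (hypothetical simplification).
ue = IOAutomaton("IDLE")
ue.add("IDLE", "send_attach_request", "WAIT_ACCEPT")
ue.add("WAIT_ACCEPT", "recv_attach_accept", "ATTACHED")
ue.add("WAIT_ACCEPT", "recv_attach_reject", "IDLE")

ue.step("send_attach_request")
ue.step("recv_attach_accept")
assert ue.state == "ATTACHED"
```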
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
New York University PI: Sundeep Rangan
Co-PIs: Farshad Khorrami, Ramesh Karri, Elza Erkip, Siddharth Garg
Microsoft Liaison: Victor Bahl
The design, manufacturing, and deployment of next-generation (NextG) cellular wireless networks (5G and beyond) increasingly rely on specialized, high-performance baseband and radio frequency (RF) hardware components, often supplied by third parties across the globe. The use of critical hardware components sourced from diverse manufacturers introduces formidable security vulnerabilities into critical network infrastructure. The broad goal of this project is to develop methods to build resilient and secure NextG wireless systems from potentially insecure hardware components. This project focuses on a particularly important class of attacks called hardware Trojans, in which hardware components supplied by a third party are maliciously altered to launch an attack from within a network node, such as a cellular base station. Once triggered, these attacks can degrade or disable service, transmit signals to disrupt other nodes, or snoop on and leak sensitive data.
The project combines hardware security and communications theory to detect and mitigate these attacks across four closely related thrusts. Thrust 1 develops computationally efficient methods to detect the presence of hardware Trojans in both the baseband and the RF front-end. Thrust 2 provides rigorous bounds on the capacity of undetected hardware attacks, enabling a critical trade-off between the power and computation spent on hardware verification and the potential throughput degradation. Thrust 3 extends these methods to network settings, including jamming and multi-user attacks. Thrust 4 develops a novel and powerful evaluation platform for experimenting with hardware security methods in both the baseband and RF on a high-throughput millimeter-wave software-defined radio. The PIs will disseminate the results and data to the wider community through workshops. In addition, the PIs teach security and wireless classes at NYU and will integrate the research and experiments into their classes and class projects.
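As a toy illustration of one way a Trojan's RF footprint could be detected (a simple spectral comparison against a trusted "golden" reference; this stand-in method and all signal parameters are assumptions, not the project's Thrust 1 design):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# "Golden" emission profile measured from a trusted reference component.
golden = rng.normal(scale=1.0, size=n)

# Suspect component: same noise floor plus a low-power tone a Trojan might
# inject to exfiltrate data (frequency normalized to the sampling rate).
tone = 0.5 * np.sin(2 * np.pi * 0.23 * np.arange(n))
suspect = rng.normal(scale=1.0, size=n) + tone

def band_energies(x, nbands=64):
    # Average periodogram energy within each of `nbands` frequency bands.
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return np.array([band.mean() for band in np.array_split(psd, nbands)])

ratio = band_energies(suspect) / band_energies(golden)
print("max band energy ratio:", round(float(ratio.max()), 1))
print("flagged bands:", np.where(ratio > 3.0)[0])  # threshold is illustrative
```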
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Carnegie-Mellon University PI: Anthony Rowe
Co-PIs: Jeffrey Bilmes
Microsoft Liaison: Francis Yan, Kevin Hsieh
Extended Reality (XR) applications tightly couple virtual content with the physical world through technologies such as virtual and augmented reality. These applications are compute-intensive, latency-sensitive, and bandwidth-hungry, making them ideal drivers for next-generation communication architectures. Current best-effort networking techniques struggle to meet the high demands of XR workloads, leading to dropped or delayed network packets, observable jitter, and poor Quality-of-Experience. Fortunately, there are statistical correlations between past and future packets that go under-exploited in modern XR systems. The goal of this project is to develop a fast, resilient, and adaptive general neural-network transformer-based architecture for interactive XR applications, using machine learning (ML) to model, historically summarize, adapt to, predict, and then utilize streaming time-series data. The proposed modeling strategy is transformer-based, but it can represent context relationships arbitrarily far back in time, something ordinarily infeasible, and it can also rapidly adapt to changing statistics via a novel adaptive final neural network layer.
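A rough sketch of the adaptive-final-layer idea: a frozen feature extractor (here a fixed random projection standing in for a pretrained transformer encoder) feeds a linear output layer that is updated online, so predictions can track a drift in traffic statistics. Everything below is a simplified stand-in for the proposed architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "encoder": random features of the last k samples (a pretrained
# transformer would play this role in the real system).
k, d = 8, 32
W_enc = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))

def features(window):
    return np.tanh(W_enc @ window)

# Adaptive final layer: linear weights updated online by SGD so the head can
# follow shifting statistics without retraining the encoder.
w, lr = np.zeros(d), 0.05

# Toy stream: packet inter-arrival times whose mean drifts mid-stream.
stream = np.concatenate([rng.exponential(1.0, 500), rng.exponential(3.0, 500)])

errs = []
for t in range(k, len(stream)):
    x = features(stream[t - k:t])
    pred = w @ x                 # predict the next inter-arrival time
    err = stream[t] - pred
    w += lr * err * x            # online update of the final layer only
    errs.append(err ** 2)

print("MSE, first half :", round(float(np.mean(errs[:400])), 2))
print("MSE, after drift:", round(float(np.mean(errs[-400:])), 2))
```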
This work will result in a system that can transparently interpose network control messages to reduce bandwidth through forecasting and extrapolation. It will also help mask packet delays and drops over wireless channels, improving the network coordination critical to many XR applications such as AR-guided surgery, search and rescue, digital telepresence, and automotive heads-up displays.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of Texas at Austin PI: Sandeep Chinchali
Co-PIs: N/A
Microsoft Liaison: Shadi Noghabi, Kevin Hsieh
Fleets of networked robots are being deployed on our roads, in factories, and in hospitals for tasks like self-driving, manufacturing, and nurse assistance. These robots struggle to process growing volumes of rich sensory data and to deploy compute- and power-hungry machine learning (ML) models. Fortunately, robots can augment their intelligence by querying remote compute resources over next-generation (NextG) wireless networks. However, researchers lack algorithms to balance the accuracy benefits of networked computation against the systems costs of delay, power, congestion, and load on remote compute servers. As such, this project is innovating algorithms based on decision theory (i.e., a mathematical cost-benefit analysis) to decide how to balance on-robot and remote computation while communicating only task-relevant, privacy-preserving data. The resulting communication-efficient algorithms aim to improve the resiliency of NextG networks by minimizing congestion. The project's outreach efforts aim to enable K-12 students to prototype on remote robots.
This project develops algorithms to enable joint inference, learning, and control between robotic swarms and the cloud while resiliently adapting to variations in network connectivity and compute availability. Today's robotic control algorithms are largely informed by onboard sensors and the local physical state, but effectively ignore the time-varying state of the network. As such, they make sub-optimal decisions on when to query the cloud, often leading to excessive congestion. Accordingly, this project develops decision-theoretic algorithms that flexibly trade off the accuracy benefits of the cloud against systems costs. First, the project develops collaborative inference algorithms that decide whether, and where, to offload computation using a Markov Decision Process. Then, it develops statistical data sampling algorithms that estimate the marginal gain of uploading new training data against labeling and training costs. The final thrust learns compressed representations of video and LiDAR that optimize for ML inference accuracy, as opposed to conventional human perception metrics.
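A minimal sketch of the Markov Decision Process formulation: value iteration on a toy two-state congestion model with "local" and "offload" actions. The states, rewards, and transition probabilities below are invented for illustration, not the project's models.

```python
import numpy as np

# States: network congestion level. Actions: run inference on-robot or offload.
states, actions = ["low", "high"], ["local", "offload"]

# R[s, a]: accuracy benefit minus latency/energy cost (illustrative numbers).
R = np.array([
    [0.6, 0.9],   # low congestion: offloading's accuracy gain outweighs delay
    [0.6, 0.2],   # high congestion: offloading pays a steep latency penalty
])

# P[s, s']: congestion dynamics (independent of the action, for simplicity).
P = np.array([
    [0.8, 0.2],
    [0.3, 0.7],
])

gamma, V = 0.9, np.zeros(2)
for _ in range(200):                      # value iteration to a fixed point
    Q = R + gamma * (P @ V)[:, None]      # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V = Q.max(axis=1)

for s, a in zip(states, Q.argmax(axis=1)):
    print(f"congestion={s}: {actions[a]}")
# Prints "offload" under low congestion and "local" under high congestion.
```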
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of Texas at Austin PI: Jeffrey Andrews
Co-PIs: Georgios-Alex Dimakis
Microsoft Liaison: Srikanth Kandula, Kevin Hsieh
Wireless communications applications of the 2030s — such as extended reality, autonomous vehicles, and telemedicine — will require extremely high bit rates and reduced energy consumption. To fulfill these requirements, sixth-generation (6G) era systems will leverage ultra-high carrier frequencies enabled by deploying massive arrays of very small antennas. Such architectures are termed ultra-high dimensional (UHD) communication systems. This is a very challenging scenario for wireless communication, since the UHD channel is difficult to learn and the wireless energy must be radiated in a highly directional manner towards the intended receiver. Existing approaches to channel estimation and beam alignment will not scale to the UHD scenario in terms of complexity, power consumption, or overhead signaling. This project aims to develop novel, scalable, and high-performing approaches to channel estimation and beam alignment for UHD communication systems using deep generative models (DGMs), an emerging toolset from unsupervised deep learning. This project will advance the fundamentals of DGMs in a manner appropriate for decentralized real-time applications, and also develop these novel methods for the unique challenges of UHD communication systems.
The project will develop novel theoretical tools for solving challenging inverse problems arising in UHD communication systems, in particular considering (i) resilience to measurement errors and outliers; (ii) resilience to shifting channel distributions and low-resolution measurements; (iii) adaptation to changes in data distribution via federated generative adversarial network (GAN) training; (iv) compressed GAN representations; and (v) incorporating prior information in untrained DGMs via learned regularization. The project will also research DGM-based (i) interference-robust channel estimation and GAN-training techniques; (ii) federated downlink UHD channel estimation paradigms; (iii) end-to-end UHD channel estimators that account for physical impairments such as low-resolution quantization and RF nonlinearity; (iv) low-latency beam alignment using compressed DGM-based channel representations; and (v) joint channel estimation and beam alignment using DGM-based site-specific codebooks. The project will also build novel simulators and experiments to accurately characterize UHD channels in the context of 6G cellular systems, in collaboration with the RINGS industry partners, to benefit the wider 6G ecosystem.
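To make the generative-prior idea concrete, the sketch below uses a linear map as a stand-in for a trained deep generative model and recovers a high-dimensional channel from far fewer pilot measurements than unknowns. A real DGM is nonlinear, and its latent code would be fit by gradient descent rather than least squares; all dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 256, 8, 32   # channel dim, latent dim, number of pilot measurements

# Linear stand-in for a trained generative model G: z -> h.
G = rng.normal(size=(n, k))

z_true = rng.normal(size=k)
h_true = G @ z_true                        # high-dimensional UHD channel

A = rng.normal(size=(m, n)) / np.sqrt(m)   # pilot / measurement matrix
y = A @ h_true + 0.01 * rng.normal(size=m) # noisy compressed measurements

# Estimate the latent code: the generative prior turns an underdetermined
# inverse problem (m << n) into a well-posed one (m > k).
z_hat, *_ = np.linalg.lstsq(A @ G, y, rcond=None)
h_hat = G @ z_hat

nmse = np.linalg.norm(h_hat - h_true) ** 2 / np.linalg.norm(h_true) ** 2
print(f"m={m} measurements for an n={n} channel, NMSE={nmse:.2e}")
```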
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Columbia University PI: Dan Rubenstein
Co-PIs: Henning Schulzrinne, Ethan Katz-Bassett, Todd Arnold
Microsoft Liaison: Stefan Saroiu, Sharad Agarwal
Dependence on networked systems requires their predictability and reliability. While these systems are designed to cope with failures, there continue to be instances where anomalous conditions, whether brought about by an intentional attack or an unfortunate natural circumstance, compromise the network's ability to perform. These network failures are often the result of individual components that all make implicit assumptions about the dependability of other components which are, in fact, making the same or similar assumptions, leading to simultaneous failures. This research project takes an overarching, multi-layered, systems-level approach to preparing applications for inevitable component failures by providing a shadow network: pre-planned alternatives ready to deliver critical services should the primary services, networks, or paths fail. The shadow network continually analyzes applications and services to assess their resilience, and composes end-to-end services from a mix of primary and purpose-built shadow components, so that, should failures occur, the shadow components can take effective action. By increasing the reliability of the network at large, users of all Internet applications will benefit from a reduction in the frequency with which the "network is down". Preventing Internet failures naturally provides engagement with the general public, standards bodies, and policymakers, as well as with students, from secondary school to the Ph.D. level, interested in research with real-world impact.
The research takes a three-pronged approach to shadow network development. First, the design considers shadow components for every aspect of the application, providing the basis for reliability. Second, the research investigates how to continually utilize measurement-driven testing to assess overall and component-specific reliability, so as to be ready for a wide range of potential outage and failure scenarios. Last, the research investigates the programmability of existing and emerging components to scale up the shadow services when they are needed and to prioritize critical applications and overall availability. The methods used to perform the research include resilient network design and deployment, protocol design, experimentation, monitoring and testing, and modeling and analysis. Evaluation consists of a combination of prototyping, simulation, and mathematical analysis, and is scored in terms of efficiency, cost, resiliency, and reliability.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Carnegie-Mellon University PI: Giulia Fanti
Co-PIs: Vyas Sekar
Microsoft Liaison: Francis Yan, Srikanth Kandula
Next-generation networked systems are increasingly data-driven, meaning they are developed, tuned, and tested on real data. For instance, data-driven techniques can enable better quality of experience for content distribution over the Internet, better wireless communication techniques, and better attack detection for emerging cybersecurity threats. Unfortunately, a pervasive lack of data limits the potential of data-driven research and development. Data holders are often reluctant to share datasets for fear of revealing business secrets or running afoul of regulations. These data access challenges already are, and will increasingly become, a fundamental stumbling block for innovation in next-generation networks.
This project aims to tackle this impasse with synthetic data: data that exhibits the same statistical patterns as real data, without the need to explicitly share the original source data. Synthetic datasets can be safely released to enable cross-stakeholder collaboration. Synthetic data generation techniques, however, have classically suffered from poor data quality. This proposal explores how to leverage and extend recent advances in machine learning, using Generative Adversarial Networks (GANs) to generate synthetic models of networking datasets. Realizing the potential benefits of GAN-generated synthetic data for networking systems, however, is challenging on multiple fronts. First, network traffic datasets (e.g., packet captures) entail complex relationships that raise new fidelity and scalability implications for prior GAN models. Second, networking use cases pose new (and traditional) privacy requirements, and the resulting privacy-fidelity tradeoffs remain poorly understood. Finally, several networking use cases entail studying rare or extreme events (e.g., outages, flash crowds, attacks); data for such extreme events is, by definition, scarce and challenging for GANs (or any synthetic data model) to learn. This project will tackle interdisciplinary challenges spanning networking, machine learning, and privacy to develop novel foundations for GAN-enabled workflows that support data-driven operations in next-generation network systems.
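A minimal GAN training loop on a single synthetic "network feature" (assuming PyTorch) illustrates the core mechanism: a generator learns to match the real data distribution by fooling a discriminator. The project's actual models must additionally handle structured packet captures, privacy constraints, and rare events.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" data: one scalar feature, e.g., log inter-arrival time.
real_data = torch.randn(10000, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_data[torch.randint(0, len(real_data), (128,))]
    fake = G(torch.randn(128, 4))

    # Discriminator: distinguish real samples from generated ones.
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce samples the discriminator labels as real.
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(1000, 4)).detach()
print(f"real  mean/std: {real_data.mean():.2f} / {real_data.std():.2f}")
print(f"synth mean/std: {synthetic.mean():.2f} / {synthetic.std():.2f}")
```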
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Massachusetts Institute of Technology PI: Eytan Modiano
Co-PIs: Gil Zussman
Microsoft Liaison: Manikanta Kotaru
Over the past 20 years, wireless technologies have been revolutionized, with data rates increasing by several orders of magnitude. New wireless communication technologies, along with the emergence of computing capabilities at the network's edge, will make it possible to achieve gigabit-per-second data rates and sub-millisecond delays. This project develops network algorithms that leverage these emerging technologies to enable a robust and resilient next generation of wireless networks, as well as novel applications such as smart cities, connected vehicles, virtual reality, telemedicine, and advanced manufacturing.
The goal of this project is to develop autonomous network control algorithms that enable robust and resilient wireless networks for distributed computing through a combination of physical-layer technology, edge-cloud computing, and intelligent network control. In particular, the project team is developing an autonomous network control framework that can jointly address resiliency in the physical and application layers by dynamically adapting next-generation wireless communication systems to the time-varying conditions of the environment and, at the same time, intelligently allocating the available communication and computation resources to meet the performance requirements of time-sensitive edge-cloud services. The network control framework makes resource allocation decisions dynamically and autonomously, without prior knowledge of traffic statistics or network conditions, making it resilient to changes in the network and environment. The project team is using the COSMOS testbed to prototype, experiment with, and evaluate the network control framework and its application to edge-cloud systems. The project includes impactful outreach and education activities that, among other things, engage middle and high school teachers and students from New York City and the neighboring Harlem community in wireless networking, thereby broadening participation in computing and STEM.
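Dynamic control without prior knowledge of traffic statistics is the domain of max-weight-style controllers; the sketch below simulates a classic max-weight scheduler as a stand-in (not necessarily the project's algorithm), serving whichever queue has the largest backlog-rate product. Arrival rates are illustrative and hidden from the controller.

```python
import numpy as np

rng = np.random.default_rng(3)
n_queues, T = 3, 20000
arrival_rates = np.array([0.3, 0.4, 0.2])   # unknown to the controller
service = np.array([1.0, 1.0, 1.0])         # packets served per slot if scheduled

q, backlog = np.zeros(n_queues), []
for _ in range(T):
    q += rng.poisson(arrival_rates)          # random arrivals each slot
    i = np.argmax(q * service)               # max-weight scheduling decision
    q[i] = max(0.0, q[i] - service[i])       # serve one queue per slot
    backlog.append(q.sum())

# Total load is 0.9 < 1, so max-weight keeps the queues stable without ever
# being told the arrival rates.
print("mean total backlog:", round(float(np.mean(backlog)), 2))
```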
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Yale University PI: Lin Zhong
Co-PIs: Anurag Khandelwal
Microsoft Liaison: Victor Bahl
NextG network systems rely on virtualization to move away from specialized, dedicated equipment to cloud and edge datacenters, to reduce cost and accelerate innovation. So far, such virtualization efforts have met with very limited success, making inroads largely with 4G/LTE small cells. This is because 5G and beyond employ compute-intensive technologies such as massive multiple-input multiple-output (MIMO) and low-density parity-check (LDPC) codes to deliver unprecedented network performance. Massive MIMO not only demands massive computational power itself, but also proportionally increases the computational demands of LDPC decoding. Not surprisingly, existing commercial massive MIMO solutions all rely on specialized, dedicated hardware such as FPGAs and application-specific integrated circuits. Massive MIMO remains the largest barrier to virtualized mobile networks. The goal of the proposed project is to overcome this technical barrier and virtualize the massive MIMO physical layer for NextG network systems. In doing so, the project will expedite the adoption of massive MIMO, resulting in more capable, more efficient, and more cost-effective mobile networks. Through ongoing collaborations with industry leaders, the project will transfer technologies into practice in a timely manner. It also provides a meeting ground for software systems and wireless communication research and creates timely content for teaching Computer Science majors about the wireless physical layer. The project will provide a platform to engage undergraduate and high-school students in computing research, especially women and underrepresented minorities.
The project targets the following scientific contributions. (1) Design and implementation of a massive MIMO physical layer that scales up efficiently and intelligently on a many-core server. We will combine the efficiency of static task scheduling with the elasticity of dynamic task scheduling, and devise latency-driven, automated schemes for optimal resource provisioning. (2) A distributed design and implementation that can utilize compute resources integrated over a local-area network, beyond a single server. We will leverage programmable switches to minimize the impact of the network, and utilize commodity servers as well as accelerators to achieve scale, cost effectiveness, and energy efficiency beyond the reach of a single many-core server. (3) An elastic design and implementation that matches the dynamics of mobile network workloads, simultaneously achieving high resilience, low cost for the network operator, and high utilization for the cloud provider in a multi-tenant environment. We will explore serverless computing and develop it further to better meet the stringent latency requirements of the massive MIMO physical layer.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Carnegie-Mellon University PI: Heather Miller
Co-PIs: Claire Le Goues
Microsoft Liaison: Srikanth Kandula
As public clouds and Content Delivery Networks (CDNs) race to enable compute capabilities at the edge of their networks, software developers are no longer deploying just static files (video, images) at the edge, but significant application code as well. This presents a dramatic departure from current development practices. With this new breed of applications on the horizon, the security, performance, and feature requirements of the underlying execution platform grow commensurately. Moreover, the harsh reality of unexpected network failures at the edge forces every mainstream developer to become a distributed systems engineer. Distributed systems engineering multiplies complexity and difficulty manyfold, as partial failure (unavailability of one or more services) can adversely impact the system in unknown or hard-to-predict ways, even leading to catastrophic cascading failures. This research proposes an approach to full-stack "resilience engineering" to enable secure, effective, and performant edge computation in NextG systems. Our work builds on WebAssembly, which is emerging as the common language-agnostic execution platform in new edge computing environments. We propose a robust, performant, and secure experimental runtime engine with support for instrumentation and rapid prototyping that will facilitate fault injection and program repair. On top of this foundation, this work proposes a set of tools for programming-language-agnostic fault injection, testing, and repair of resilience (distributed-systems-related) errors in distributed and networked systems. These tools would allow software developers to quickly and effectively test and fix resilience defects before application code is deployed to users, rather than simply deploying and hoping for the best with no indication of how the application behaves in the face of partial failure (the current state of the art). If widely adopted, the results of this work would, in turn, reduce the occurrence of catastrophic cascading failures that cause widespread outages in critical networked services.
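A language-level sketch of the fault-injection workflow: wrap a stand-in remote call with probabilistic failures so a caller's fallback path can be exercised in tests. The project's runtime would perform this instrumentation inside a WebAssembly engine rather than with a Python decorator, and all names below are invented.

```python
import random

random.seed(42)

class ServiceUnavailable(Exception):
    pass

def inject_faults(failure_rate):
    """Wrap a remote call so tests observe behavior under partial failure."""
    def wrap(call):
        def wrapped(*args, **kwargs):
            if random.random() < failure_rate:
                raise ServiceUnavailable(call.__name__)
            return call(*args, **kwargs)
        return wrapped
    return wrap

@inject_faults(failure_rate=0.3)
def fetch_profile(user_id):
    return {"id": user_id, "name": "alice"}   # stands in for a network RPC

def render_page(user_id):
    # Resilient caller: degrade gracefully instead of cascading the failure.
    try:
        profile = fetch_profile(user_id)
    except ServiceUnavailable:
        profile = {"id": user_id, "name": "(unavailable)"}   # fallback
    return f"hello, {profile['name']}"

print([render_page(7) for _ in range(10)])
```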
Essentially all software that we use as a society is now networked: apps connect to multiple servers that coordinate to complete a task (such as online banking, searching for nearby restaurants, or streaming media). This project directly supports the development of dependable networked services. Prior to this research, apps were typically released to users without testing for issues at the network level between services. This work develops new ways to test this brave new world of networked software, making it possible for software developers to automatically discover and repair bugs before the software is released to users. As a result of this research, we will have new tools that, if broadly used, can result in fewer outages of the critical networked services that society depends on.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of California-San Diego PI: Tara Javidi
Co-PIs: Urbashi Mitra, Ashutosh Sabharwal
Microsoft Liaison: Victor Bahl
Wireless networks are an essential societal infrastructure, much like power, water, and transportation systems. With the growing incidence of natural and human-induced disruptions, it has become critical that network service be resilient to unforeseen disruptive events. The state of the art in wireless network design does not directly provision for large-scale disruptive events, such as a weather event disabling base stations. In other words, the wireless networks of today play only a small role in ensuring their own resiliency. Our research advances the national infrastructure, prosperity, and welfare by providing a novel resiliency paradigm for next-generation networks. In particular, the specific aim of this research is to design a new generation of wireless networks that stay vigilant by sensing both the internal system state and the external environment. The inclusion of sensing in the future physical layer will be a cornerstone of realizing next-generation (such as 6G) wireless networks. Joint sensing and communication will enable high-performance allocation of limited resources and create the ability for the network to reason about itself and its environment. We assert that such a paradigm needs to adopt the following design principles. First and foremost, resilient systems assume that unforeseen disruptions can and will occur. Second, such systems are smart in their vigilance, actively managing resources to form a world-view that enables active preparedness and resilience. Third, armed with active vigilance, resilient systems are in a constant state of preparedness, continually adapting their view of the world by reasoning counterfactually about the system's future.
This project will facilitate the translation of academic research through active and sustained engagement with industry on this future network design. The proposed work will yield software, open data sharing, and research leveraging the PIs' extensive track record of open-source experiments. The PIs will expand existing educational and outreach programs to increase participation from under-represented groups in Electrical & Computer Engineering at their institutions. These programs focus on all levels of education and engagement, from faculty to post-doctoral researchers, graduate students, and undergraduates. In particular, the team will take a data-driven approach to widening the participation of underrepresented groups in the flagship conferences of the field, as well as in the Information Theory and Applications workshop, held annually at the University of California San Diego.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of California-Santa Barbara PI: Upamanyu Madhow
Co-PIs: Mark Rodwell
Microsoft Liaison: Bodhi Priyantha
As the demand for high-speed wireless data keeps growing, it is essential to access the vast amounts of spectrum available in "millimeter wave (mmWave)" frequency bands, which are orders of magnitude higher than the frequency bands used in WiFi and cellular systems today. There has been substantial recent progress demonstrating the feasibility of radio frequency integrated circuits (RFICs) built with low-cost silicon semiconductor processes for lower mmWave frequency bands, such as the 28 GHz licensed spectrum (for 5G cellular) and the 60 GHz unlicensed spectrum (for next-generation WiFi). This project aims to provide a quantum leap beyond these efforts, developing strategically important and commercially viable technologies for opening up upper mmWave bands beyond 100 GHz. A specific goal is to develop antenna arrays with thousands of elements, capable of forming agile pencil beams that track mobile devices; such arrays can be miniaturized into compact form factors because of the tiny wavelengths at 100+ GHz. The project investigates novel approaches to the co-design of hardware and algorithms for scaling array sizes, targeting significant jumps in attainable link distances and data rates (10 Gbps per mobile user in an urban cell, and 100 Gbps for a fixed wireless alternative to fiber).
Millimeter wave (mmWave) communication will play a crucial role in next-generation communication infrastructure. A fundamental bottleneck in mmWave hardware development is packaging: "fitting" the RFIC electronics becomes difficult at small carrier wavelengths due to the standard constraint of half-wavelength spacing between antennas. This project investigates novel hardware architectures that sidestep such packaging bottlenecks to realize massive extended arrays, along with closely coupled innovations in all-digital hierarchical signal processing architectures, targeting quantum leaps in capacity and resilience. Hardware research includes the development of extremely low-cost 140 GHz transceiver modular array tile technologies that readily scale to arrays with vast numbers of elements. Signal processing and systems research develops all-digital hierarchical signal processing architectures matched to the tiled hardware architecture, illustrating the system-level impact of the robustness and additional spatial degrees of freedom provided by extended arrays in canonical multiuser (MU) MIMO and Line-of-Sight (LoS) MIMO settings, aimed at flexible, cost-effective deployment of access and backhaul nodes. A key design concept is spatial redundancy: by choosing hardware and system parameters such that the number of array RF channels greatly exceeds the number of MIMO signals involved, it becomes possible to reduce power consumption and die area by sacrificing dynamic range.
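To see why 100+ GHz arrays form "pencil" beams and why agile tracking matters, the sketch below computes the gain pattern of an idealized uniform linear array. The parameters (about a thousand elements at 140 GHz) echo the numbers above, but the model is a textbook idealization, not the project's tiled architecture.

```python
import numpy as np

c, f = 3e8, 140e9                  # speed of light, 140 GHz carrier
lam = c / f                        # wavelength ~2.1 mm: compact arrays possible
N, d = 1024, lam / 2               # elements, half-wavelength spacing

def array_gain_db(theta, steer):
    # Gain (relative to peak) for a plane wave from angle theta when the
    # array is phased to steer toward angle `steer`.
    k = 2 * np.pi / lam
    n = np.arange(N)
    phase = k * d * n * (np.sin(theta) - np.sin(steer))
    return 20 * np.log10(np.abs(np.exp(1j * phase).sum()) / N)

steer = np.deg2rad(10.0)
for off_deg in [0.0, 0.05, 0.1, 0.5]:
    g = array_gain_db(steer + np.deg2rad(off_deg), steer)
    print(f"{off_deg:4.2f} deg off boresight: {g:7.1f} dB relative to peak")
# A ~1000-element array at 140 GHz loses most of its gain within roughly a
# tenth of a degree, which is why agile beam tracking of mobiles is essential.
```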
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Texas A&M Engineering Experiment Station PI: Guofei Gu
Co-PIs: Walter Magnussen, Jeff Huang, Chia-Che Tsai
Microsoft Liaison: Kevin Hsieh, Sathiya Kumaran Mani
NextG network systems are expected to connect billions of heterogeneous Internet of Things (IoT) devices along with billions of people, enable machine-to-machine communications, and provide low-latency computational and storage resources on demand at the devices and in the cloud. In NextG, many services will run dynamically as lightweight functionalities close to users to support ultra-low latency when storing and processing extremely critical data, for example from autonomous vehicles and tele-surgery. Historically, however, many of these services have been developed and deployed with no security in mind, or with security merely as a reactive add-on. Such security practices may have life-threatening consequences for critical NextG applications and services. The project's novelties are a secure architecture for NextG and a system, NextSec, that enhances the security of NextG services. This project provides a solid foundation and a collaborative community for future system and network security research. The project team incorporates insights and results from this work into relevant courses, recruits and educates underrepresented students in computing, and transfers the developed technology to industry.
To address the challenges in the secure composition of microservices across the pervasive, distributed user-to-edge-to-cloud continuum of NextG network systems, this project has three research thrusts. First, the concept of security transformation provides a new way to address the various security issues of microservices. Second, the Programmable Security thrust provides new primitives for supporting a software-defined way of enforcing user-to-edge-to-cloud security. Finally, extended Maximal Causality Reduction methods offer efficient, scalable verification of complex security properties across microservices. The framework is evaluated on the Texas A&M Commercial 4G/5G Advanced Wireless Application Research Environment (AWARE) testbed.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
Princeton University PI: Ravi Netravali
Co-PIs: N/A
Microsoft Liaison: Ganesh Ananthanarayanan, Srikanth Kandula
Over the past few decades, cellular networks have evolved to deliver improved performance across increasingly heterogeneous components spanning the network edge (e.g., user devices), base stations, and traditional cloud backends. A key motivator behind these advances is enhanced support for edge applications, especially video analysis (VA). Yet VA applications are currently not structured to fully leverage those advances. A primary issue is the lack of structured frameworks for developing and running VA applications, which in turn prevents the deployment and optimizations required to take advantage of all that cellular networks (and their edge-cloud hierarchies) have to offer. To tackle this limitation, the proposed work advocates a re-designed VA software stack that explicitly ties VA operations and requirements to the resources, interfaces, and vantage points that each platform element in a mobile edge-cloud hierarchy brings. To achieve this goal, the project takes a bottom-up, three-pronged approach: (1) developing a new object-oriented query language for VA applications that makes the aforementioned characteristics explicit and observable, (2) leveraging those features to develop a suite of resource-aware optimizations to VA computations that can operate under diverse (and restricted) edge constraints, and (3) designing a novel task placement engine that automatically adapts and operates VA applications across edge-cloud hierarchies.
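Purely as a hypothetical illustration of what an object-oriented VA query with explicit, observable operations might look like (every class, method, and endpoint name below is invented, not the project's language):

```python
from dataclasses import dataclass, field

@dataclass
class VAQuery:
    source: str
    ops: list = field(default_factory=list)   # explicit, inspectable pipeline

    def sample(self, fps):
        self.ops.append(("sample", {"fps": fps}))   # cheap; can run on-device
        return self

    def detect(self, label, min_confidence):
        # Heavy DNN inference; a placement engine could map this stage to
        # an edge site or the cloud depending on resources.
        self.ops.append(("detect", {"label": label, "min_confidence": min_confidence}))
        return self

    def notify(self, endpoint):
        self.ops.append(("notify", {"endpoint": endpoint}))
        return self

q = (VAQuery("rtsp://camera-17/stream")
     .sample(fps=5)
     .detect(label="ambulance", min_confidence=0.8)
     .notify(endpoint="https://dispatch.example/alerts"))

# Because each operation's needs are explicit, a placement engine can reason
# about where each stage should run in the device-edge-cloud hierarchy.
for op, params in q.ops:
    print(op, params)
```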
Owing to the widespread use of VA applications in sectors spanning traffic control, autonomous vehicles, and disaster relief, the proposed research promises benefits to a large part of the population. The key improvements will come along two axes: (1) replacing painstaking manual analysis with automatic determination of the appropriate interactions between VA applications and emerging mobile networking infrastructure, and (2) democratizing the use of edge networking infrastructure. These improvements will target two different groups. On the one hand, the proposed frameworks will simplify the creation of cutting-edge VA applications for developers by automatically deciding what public edge infrastructure to use and how to use it most effectively (in terms of cost, accuracy, and performance). On the other hand, the developed systems will assist network operators in identifying the most fruitful resource enhancements and the most helpful platform information to expose to application elements. The project also involves outreach efforts to attract students from populations currently under-represented in computer science. Key to these efforts is highlighting the interdisciplinary nature of edge-based VA applications, which span mobile systems and networks, computer vision, programming languages, and machine learning. The software and research artifacts designed as part of this project are released on a regularly maintained public website.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of Washington PI: Simon Peter
Co-PIs: Arvind Krishnamurthy
Microsoft Liaison: Shadi Noghabi, Wei Bai
Next-generation data centers will operate under tight and variable power envelopes. Renewable energy and an increase in extreme weather lead to more power supply variability. At the same time, fluctuating data center utilization caused by user churn increases power demand variability. Power demand and supply have to be kept in balance at increasingly short timescales, and the resiliency of data centers to power events is paramount for their availability. This project's goal is to build and evaluate a data center power control plane (PCP). By making power demand software-defined, PCP can control it at a fine granularity and over short timescales. The key is to gracefully trade off power and quality of service over time, which allows PCP to shed or consolidate load to conserve power during a power event. PCP aims to reduce recovery latency from power disasters, outlive power disruptions, and improve the energy efficiency of next-generation data centers via power oversubscription. Several research questions must be addressed to make PCP practical: how to scale power demand and supply instrumentation and control; how to leverage the diverse power envelopes of the various processing devices available in a data center; how to determine sheddable load and control it transparently; and how to further reduce fail-over and recovery latency from power outages.
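A minimal sketch of the shedding decision at the heart of a software-defined power control plane: given a power cap, keep the most critical services that fit. The service names, priorities, and power draws below are illustrative assumptions, not measurements from the project.

```python
services = [
    # (name, power draw in kW, priority: lower number = more critical)
    ("life-critical telemetry", 40, 0),
    ("interactive web",         80, 1),
    ("batch analytics",        120, 2),
    ("background ML training", 160, 3),
]

def shed_to_cap(services, cap_kw):
    """Keep the most critical services whose total draw fits under the cap."""
    kept, total = [], 0
    for name, kw, _prio in sorted(services, key=lambda s: s[2]):
        if total + kw <= cap_kw:
            kept.append(name)
            total += kw
    return kept, total

# As the available power envelope shrinks, lower-priority load is shed first.
for cap in [400, 250, 100]:
    kept, total = shed_to_cap(services, cap)
    print(f"cap={cap}kW -> keep {kept} ({total}kW)")
```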
Next-generation data centers will provide time- and life-critical services to a large portion of society globally. By providing power resiliency, this project has the potential to dramatically improve next-generation service availability, enable critical services, avert power disasters to save lives, reduce the blast radius of power emergencies, and constrain the cost of power disasters. The work will also influence the education of the next generation of systems engineers, as the associated project materials can serve as courseware emphasizing power resiliency, better preparing students for changes that touch every aspect of computing.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of Illinois Urbana-Champaign PI: Deepak Vasisht
Co-PIs: Haitham Hassanieh, Gagandeep Singh
Microsoft Liaison: Manikanta Kotaru, Ranveer Chandra, Vaishnavi Ranganathan
Next-generation (NextG) networks will be unprecedented in their scale, diversity, and capabilities. They will connect hundreds of billions of devices, ranging from smartphones to smart sensors, and will support networking for very diverse devices, from low-power Internet-of-Things (IoT) devices with year-long batteries to data-hungry virtual/augmented reality (VR/AR) headsets. Finally, NextG networks will enable new services through joint communication and sensing; for example, a multi-antenna base station may sense its environment and share information about pedestrians and cars with autonomous vehicles. These characteristics will make NextG central to many transformative applications such as digital healthcare, Industry 4.0, autonomous driving, and telepresence. This proposal will build robust Machine Learning-based frameworks that deliver new communication and sensing capabilities for NextG networks. The educational efforts in this proposal will train students to research and work with cutting-edge, data-driven wireless systems.
The proposal will build state-of-the-art Machine Learning frameworks that will be key enablers for next-generation (NextG) networks. Specifically, these frameworks will: (a) create autonomous systems that remove bottlenecks and maximize the performance benefits of novel hardware capabilities in NextG networks, such as massive antenna arrays and multiple frequency bands; and (b) extract fine-grained insights from wireless signals for sensing and imaging of the surrounding environment. A key focus of this proposal is to build logical reasoning and formal verification frameworks that provide provable guarantees on the robustness of these Machine Learning models to both environmental and adversarial noise. Such robustness is crucial for the successful adoption of data-driven approaches in production systems, given the criticality of NextG infrastructure.
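As a small, concrete instance of the kind of provable robustness reasoning such verification targets, the sketch below propagates interval bounds through one ReLU layer, certifying output ranges under bounded input noise. Real verification frameworks use far tighter analyses over full networks; the weights and the perturbation radius here are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

# One layer of a trained model (weights are random stand-ins here).
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)

def interval_bounds(x, eps):
    """Sound output bounds for all inputs within an L-infinity ball of radius eps."""
    lo, hi = x - eps, x + eps
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b      # worst-case low end
    out_hi = W_pos @ hi + W_neg @ lo + b      # worst-case high end
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)   # ReLU is monotone

x = rng.normal(size=8)            # e.g., features derived from a wireless signal
lo, hi = interval_bounds(x, eps=0.05)
print("certified output ranges for any perturbation with |delta| <= 0.05:")
for i, (l, h) in enumerate(zip(lo, hi)):
    print(f"  unit {i}: [{l:+.3f}, {h:+.3f}]")
# If a downstream decision (say, a predicted beam index) is invariant across
# these ranges, it is provably robust to that noise level.
```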
-
Rutgers University New Brunswick PI: Roy Yates
Co-PIs: Dipankar Raychaudhuri, Waheed Bajwa, Anand Sarwate
Microsoft Liaison: Shadi Noghabi, Kevin Hsieh
Machine learning (ML) is the enabler of emerging real-time applications, ranging from augmented reality and smart cities to autonomous vehicles, that are changing how people live and work. Low latency is essential for these services; emerging real-time applications will typically need assistance from a mobile edge cloud (MEC) for real-time operation. This emerging scenario introduces significant new challenges: mobile devices are heterogeneous, ranging from energy-harvesting sensors to automobiles, but storage and compute resources are generally limited and communication is often over low-bandwidth channels; real-time deployment of trained ML models requires autonomous computation and decision-making that is adaptive to heterogeneous, time-varying local environments; devices need to make high-accuracy inferences on high-dimensional data in real time; devices continuously gather new data that must be processed, aggregated, and communicated to the MEC; mobile users have heterogeneous privacy preferences that require privacy-sensitive use of the MEC; and the applications and services on the mobile devices must be resilient to changes in both the cyber and physical worlds in order to ensure personal safety. This project is aimed at the design and experimental validation of an MEC-based distributed ML system that accounts for these factors.
In this setting of real-time operation, online decision-making, and offline training of ML-based applications that must be resilient to data, application, user, and system changes, this research program has four facets: (1) Edge-centric distributed ML models that enable both real-time inference at mobile devices and fast distributed semi-supervised training are being developed and evaluated. (2) Based on age-of-information timeliness metrics, real-time inference methods and system operation are optimized to balance mobile computation against network resources. (3) Differential privacy and other privacy metrics for real-time and online operation of MEC-assisted ML are being developed and incorporated into the distributed algorithms for system adaptation. (4) The project integrates these design approaches in a proof-of-concept prototype on the NSF COSMOS testbed in New York City to validate feasibility and evaluate device and system resilience for representative applications.
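To ground facet (2), a minimal simulation of the age-of-information metric: the age at time t is the time elapsed since the generation of the freshest update received so far. All distributions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Status updates generated at a device and delivered to the edge with
# random network delays.
n = 10000
gen_times = np.cumsum(rng.exponential(1.0, n))   # update generation times
delays = rng.exponential(0.2, n)                 # per-update network delay
arrivals = gen_times + delays

# Freshest update received by time t (updates can arrive out of order).
order = np.argsort(arrivals)
arr_sorted = arrivals[order]
freshest_gen = np.maximum.accumulate(gen_times[order])

t_grid = np.arange(1.0, gen_times[-1], 0.01)
idx = np.searchsorted(arr_sorted, t_grid, side="right") - 1
valid = idx >= 0
age = t_grid[valid] - freshest_gen[idx[valid]]

print(f"mean network delay     : {delays.mean():.3f}")
print(f"mean age of information: {age.mean():.3f}")
# Average age exceeds the raw delay: timeliness depends jointly on how often
# updates are sent and how long they take, which AoI-aware designs optimize.
```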
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
-
University of California-Berkeley PI: Sylvia Ratnasamy
Co-PIs: Scott Shenker
Microsoft Liaison: Stefan Saroiu, Ganesh Ananthanarayanan, Srikanth Kandula, Wei Bai, Francis Yan
A key part of the Internet infrastructure is the access or "last mile" link over which a user's device connects to the Internet. This last-mile infrastructure is essential in ensuring that users enjoy high-quality access to Internet services. It is thus important that the last-mile infrastructure is highly available, protects user privacy, offers high performance, and yet is cost-efficient with a healthy ecosystem of innovative last-mile vendors. Unfortunately, today's last-mile ecosystem — and the cellular ecosystem in particular — falls short on all of these dimensions. The goal of this project is to take initial steps toward transforming how our communication systems interact with the last mile. If successful, the resulting infrastructure would be more available, and the ecosystem would be incentivized to provide cost-effective and high-quality connectivity to all.
The current last-mile ecosystem closely couples the usage of (scarce) access resources to whichever backend provider the user has contracted with. For example, today's cellular architecture limits users to the facilities associated (directly or indirectly) with their mobile network operator. This coupling limits resilience (i.e., surviving failures of individual access points or providers) as well as efficiency (since last-mile costs cannot be shared across operators). The latter, in turn, discourages deployment in rural areas and hence constrains the reach of network services. To improve both the resilience and efficiency of our cellular networks, this project explores a new access architecture that decouples last-mile resources from a user's choice of mobile operator. This enables users to make use of all available access resources, providing far greater protection from last-mile failures and making more markets financially viable. Realizing this decoupling requires three main areas of technical innovation. First, to achieve the desired decoupling, mobility should be handled outside the cellular core while preserving performance and feature parity with today's cellular networks. Second, recent developments have shown that current cellular protocols do not provide adequate levels of privacy; the decoupling of the last mile from the backend must be done in a way that not only matches but greatly exceeds current privacy protections. Third, to allow users to seamlessly switch between cellular and Wi-Fi access, there must be a uniform way to authenticate users, and this approach should not require hardware or firmware changes to the underlying technologies.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.