Carnegie Mellon University

Privacy & Security

Data is everywhere, and it is more important than ever that we understand how to protect users' privacy.

Technology is deeply embedded in our daily lives, often collecting sensitive data like GPS locations, credit card details, and activity logs through devices such as smartphones and smart home systems. In Societal Computing, we tackle these challenges by blending computational techniques with social science methods to ensure privacy, security, and ethical tech solutions.

Now, perhaps more than ever, it is imperative that we better understand key concerns related to securing and anonymizing data across an ever-growing range of technologies. Nowhere is this more apparent than in recent data breaches that have exposed countless users' private information and activity: data they believed to be safe and private.

Our research in Privacy and Security unites a cross-disciplinary faculty to tackle socio-technical challenges at the intersection of technology and society. From securing ubiquitous computing to analyzing data practices in the Internet of Things and improving privacy policies, our faculty’s diverse expertise drives impactful research that shapes ethical tech practices and policies.


Faculty

Privacy in Ubiquitous Computing and IoT

Privacy Policy and Ethical Tech

Human-Centered Privacy Research

Example Research

Mites: A Privacy-Aware General-Purpose Sensing Infrastructure for Smart Buildings

The Mites project tackles one of the most complex challenges in smart building technology: how to deploy comprehensive sensing infrastructure while respecting privacy, ensuring security, and maintaining community trust in a shared space. Through a combination of hardware design, privacy-preserving architecture, and extensive community engagement, the researchers developed a system that balances the competing needs of different stakeholders, from building managers seeking efficiency to occupants concerned about surveillance. This work exemplifies how technical innovation must be integrated with social considerations: an iterative design process incorporated community feedback and led to novel solutions such as location obfuscation to protect occupant privacy. The project showcases the interdisciplinary thinking central to Societal Computing, where cutting-edge technical solutions are shaped by and responsive to human needs and social dynamics.

  • Focus: Privacy-aware sensing in smart buildings  
  • Impact: Balances security, privacy, and community trust  
  • Approach: Combines hardware design and privacy-preserving tech
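
The location-obfuscation idea can be illustrated with a small sketch. The granularity levels and field names here are hypothetical, not the actual Mites design; the point is that readings are coarsened before they leave the building network.

```python
# Illustrative sketch of location obfuscation for building sensor data.
# Location format and granularity levels are invented for this example.

def obfuscate_location(reading: dict, granularity: str = "floor") -> dict:
    """Replace a precise sensor location with a coarser one."""
    # Precise location, e.g. "building-A/floor-3/room-312/desk-4"
    parts = reading["location"].split("/")
    keep = {"building": 1, "floor": 2, "room": 3}[granularity]
    coarse = dict(reading)
    coarse["location"] = "/".join(parts[:keep])
    return coarse

reading = {"sensor": "mic-rms", "value": 0.42,
           "location": "building-A/floor-3/room-312/desk-4"}
print(obfuscate_location(reading, "floor")["location"])     # building-A/floor-3
print(obfuscate_location(reading, "building")["location"])  # building-A
```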

Learn more

Hertzbleed: Turning Power Side-Channel Attacks Into Remote Timing Attacks on x86

This work introduces the Hertzbleed attack, a novel technique that transforms power side-channel attacks into remote timing attacks on modern x86 CPUs. By exploiting data-dependent frequency changes induced by dynamic voltage and frequency scaling (DVFS), Hertzbleed allows remote attackers to infer cryptographic keys through timing variations without needing direct power measurements. This study highlights significant security implications for cryptographic implementations, demonstrating that even constant-time code can be vulnerable to remote timing attacks, challenging long-held assumptions about side-channel resistance in CPU architectures.

  • Focus: Security vulnerabilities in x86 CPUs  
  • Impact: Uncovers risks in cryptographic implementations  
  • Approach: Analyzes power side-channel attacks
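
The assumption Hertzbleed undermines, that "constant-time" code leaks nothing through timing, can be illustrated with a classic (non-Hertzbleed) timing hazard: an early-exit comparison whose running time depends on secret data. This sketch is only an analogy for the leakage model, not the attack itself.

```python
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: returns as soon as a byte differs,
    so its running time depends on how many leading bytes match."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False  # a worse guess exits earlier -> timing leak
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    """Data-independent comparison; Hertzbleed's point is that even
    code like this can leak remotely via CPU frequency scaling."""
    return hmac.compare_digest(secret, guess)

print(leaky_equals(b"k3y!", b"k3y?"))          # False (after 3 matching bytes)
print(constant_time_equals(b"k3y!", b"k3y!"))  # True
```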

Learn more

Speculative Privacy Concerns About AR Glasses Data Collection

This paper explores privacy concerns related to data collection by future consumer-grade augmented reality (AR) glasses. Through semi-structured interviews with current AR users, the authors examine attitudes toward the collection of 15 types of sensitive data, such as face images, brain waves, and bystander voiceprints. Findings reveal diverse privacy concerns, often rooted in context-specific values and expectations. Participants expressed desires for customizable privacy controls and highlighted risks to marginalized groups. This research provides valuable insights for AR designers and policymakers on building privacy-respecting technologies in the evolving AR landscape.

  • Focus: Privacy risks of AR glasses data collection  
  • Impact: Informs privacy-respecting tech design  
  • Approach: Examines user attitudes and data types

Learn more

GFWeb: Measuring the Great Firewall's Web Censorship at Scale

This paper introduces GFWeb, a large-scale system designed to monitor and analyze web censorship by the Great Firewall (GFW) of China. Over a 20-month study involving more than a billion domain tests, GFWeb uncovers extensive HTTP and HTTPS blocking by the GFW, providing the most comprehensive dataset of censored domains to date. Findings highlight the GFW’s evolving methods, including its asymmetrical and bidirectional blocking behavior, which has implications for measuring and circumventing internet censorship. This work offers critical insights into the architecture of censorship systems and informs future censorship measurement and evasion strategies.

  • Focus: Web censorship by the Great Firewall  
  • Impact: Provides insights into internet censorship  
  • Approach: Large-scale monitoring and analysis 
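
A measurement system like this must separate censorship signals from ordinary network failure. The sketch below shows the kind of outcome classification a probe might do; the categories and heuristics are illustrative, not GFWeb's actual logic (the GFW commonly tears down connections with injected TCP resets, which look different from a simple timeout).

```python
# Hypothetical probe-outcome classifier for censorship measurement.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProbeResult:
    domain: str
    connected: bool                 # TCP handshake completed
    reset_mid_stream: bool          # connection torn down after the request
    response_body: Optional[bytes]  # None if no response arrived

def classify(result: ProbeResult) -> str:
    if not result.connected:
        return "unreachable"          # could be ordinary network failure
    if result.reset_mid_stream:
        return "suspected-injection"  # consistent with injected-RST blocking
    if result.response_body is None:
        return "timeout"
    return "accessible"

print(classify(ProbeResult("example.com", True, False, b"<html>")))
# accessible
```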

Learn more

WaVe: a verifiably secure WebAssembly sandboxing runtime

WaVe advances WebAssembly (Wasm) security by creating a runtime system that enforces memory and resource isolation through automated verification. Designed to implement the WebAssembly System Interface (WASI), WaVe ensures that Wasm applications interact safely with the operating system without compromising memory or access boundaries. This study highlights WaVe’s ability to deliver robust security while performing on par with industry-standard Wasm runtimes, making it a critical innovation for secure Wasm deployments in various applications.

  • Focus: Security in WebAssembly runtimes  
  • Impact: Enhances secure software deployment  
  • Approach: Implements memory and resource isolation
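
The kind of invariant a WASI implementation must enforce can be sketched in a few lines (in Python, not the WaVe runtime itself): a sandboxed module may only open paths inside directories it was explicitly granted, even when the path contains ".." components or symlinks.

```python
# Illustrative path-confinement check, not WaVe's verified implementation.
import os

def open_sandboxed(preopened_dir: str, guest_path: str) -> str:
    """Resolve guest_path inside preopened_dir and refuse escapes."""
    root = os.path.realpath(preopened_dir)
    target = os.path.realpath(os.path.join(root, guest_path))
    # The resolved path must stay inside the pre-opened directory.
    if os.path.commonpath([root, target]) != root:
        raise PermissionError(f"path escapes sandbox: {guest_path}")
    return target  # a real runtime would return an open file descriptor

print(open_sandboxed("/tmp", "data.txt"))
```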

Learn more

Legal Accountability as Software Quality: A U.S. Data Processing Perspective

This paper proposes "Legal Accountability" as a core software quality to embed regulatory compliance directly into software design, transforming it from a corporate oversight activity into a principal design focus. Through the lens of U.S. data processing law, the authors outline five essential properties—traceability, completeness, validity, auditability, and continuity—that ensure software accountability to the law. This perspective highlights how legal and engineering experts can co-design systems that prioritize compliance alongside other key software qualities, presenting a new framework for legally accountable software development.

  • Focus: Legal accountability in software design  
  • Impact: Promotes ethical software development  
  • Approach: Integrates legal principles into tech
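
Two of those properties, traceability and auditability, can be sketched in code: each data-processing step is annotated with the legal provision it relies on, and every invocation is recorded. The decorator, field names, and provision text below are hypothetical illustrations, not a framework from the paper.

```python
# Toy sketch of traceability + auditability as software qualities.
import functools
import datetime

AUDIT_LOG = []

def legal_basis(provision: str):
    """Tag a processing step with the legal provision it relies on."""
    def wrap(fn):
        fn.legal_basis = provision  # traceability: code -> legal text
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            AUDIT_LOG.append({      # auditability: every invocation logged
                "step": fn.__name__,
                "basis": provision,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return fn(*args, **kwargs)
        return inner
    return wrap

@legal_basis("hypothetical: consent recorded under policy section 4.2")
def share_email(record):
    return {"email": record["email"]}

share_email({"email": "a@example.com", "ssn": "never-shared"})
print(AUDIT_LOG[0]["step"], "|", AUDIT_LOG[0]["basis"])
```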

Learn more

TEO: Ephemeral Ownership for IoT Devices to Provide Granular Data Control

To address privacy concerns for users and bystanders around IoT devices in shared spaces, this paper introduces TEO (Ephemeral Ownership for IoT). TEO allows temporary co-ownership of devices and data, giving users control over collected information during their usage period while maintaining robust privacy through encrypted storage. Verified for security and implemented as a lightweight library, TEO shows minimal performance impact, making it a viable solution for shared IoT environments like rentals or offices where privacy and control are essential.

  • Focus: Privacy in IoT device data sharing  
  • Impact: Enhances user control over data  
  • Approach: Develops lightweight, secure storage 
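
The core idea, that destroying a session key makes already-stored data unreadable, can be sketched as follows. The "encryption" below is a toy SHA-256 keystream for illustration only; TEO itself uses a formally verified protocol with real authenticated encryption, and none of these class or method names come from the paper.

```python
# Conceptual sketch of ephemeral ownership: data recorded during a session
# is sealed under a session key; ending the session discards the key.
import hashlib
import secrets

class EphemeralStore:
    def __init__(self):
        self._keys = {}   # session_id -> key (held only while session lives)
        self._blobs = {}  # session_id -> ciphertexts (persist after session)

    def start_session(self) -> str:
        sid = secrets.token_hex(8)
        self._keys[sid] = secrets.token_bytes(32)
        self._blobs[sid] = []
        return sid

    def _keystream_xor(self, key: bytes, data: bytes) -> bytes:
        # Toy stand-in for a cipher: XOR with a hash-derived keystream.
        out, counter = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, out))

    def record(self, sid: str, plaintext: bytes):
        self._blobs[sid].append(self._keystream_xor(self._keys[sid], plaintext))

    def read(self, sid: str):
        key = self._keys[sid]  # raises KeyError once the session has ended
        return [self._keystream_xor(key, c) for c in self._blobs[sid]]

    def end_session(self, sid: str):
        del self._keys[sid]    # ciphertexts remain, but are now unreadable

store = EphemeralStore()
sid = store.start_session()
store.record(sid, b"motion at 14:02")
print(store.read(sid))   # [b'motion at 14:02']
store.end_session(sid)   # after this, read(sid) raises KeyError
```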

Learn more

SoK: Content moderation for end-to-end encryption

Our online interactions are increasingly safeguarded by end-to-end encryption (E2EE), yet this very protection complicates efforts to counter harmful content like hate speech and misinformation. Without solutions that balance privacy with accountability, we risk either compromising user security or allowing unchecked content to thrive. This paper provides a structured framework to tackle these challenges, examining detection, response, and transparency methods that preserve privacy while enabling effective content moderation. The research opens new paths for designing systems that address both security and societal harm, offering essential insights for researchers in security, cryptography, and policy.

  • Focus: Content moderation in encrypted systems  
  • Impact: Balances privacy and security  
  • Approach: Examines socio-technical challenges
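
One reporting mechanism discussed in this literature is message franking: the sender commits to the plaintext so that a recipient can later report an abusive message verifiably, without the platform reading messages in transit. Below is a minimal HMAC-commitment sketch of that idea, not any specific deployed scheme.

```python
# Minimal message-franking-style sketch (illustrative, not a real protocol).
import hmac
import hashlib
import secrets

def frank(message: bytes):
    """Sender: commit to the message under a fresh franking key."""
    fkey = secrets.token_bytes(32)
    commitment = hmac.new(fkey, message, hashlib.sha256).digest()
    # fkey travels inside the E2EE ciphertext; the commitment is visible
    # to the platform, which binds it to sender/recipient metadata.
    return fkey, commitment

def verify_report(message: bytes, fkey: bytes, commitment: bytes) -> bool:
    """Platform: check a reported message against the stored commitment."""
    expected = hmac.new(fkey, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

key, com = frank(b"abusive message")
print(verify_report(b"abusive message", key, com))  # True
print(verify_report(b"something else", key, com))   # False
```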

Learn more

Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks

AI technologies are advancing rapidly, but with them come a host of emerging privacy risks that threaten personal security in unprecedented ways. This paper presents a taxonomy of privacy risks specific to AI, derived from an analysis of 321 documented incidents. The authors identify 12 categories of risks, from deepfakes and physiognomic profiling to amplified surveillance, offering a structured framework to help AI practitioners understand and mitigate these threats. This taxonomy underscores the urgent need for robust privacy safeguards tailored to the unique challenges posed by AI.

  • Focus: Emerging privacy risks of AI technologies  
  • Impact: Offers a framework for mitigating AI privacy threats  
  • Approach: Taxonomy derived from 321 documented incidents

Learn more

Research Impact and Policy Connections  

Societal Computing research in Privacy and Security not only advances technical solutions but also informs real-world policies and practices. Our work on projects like GFWeb and Legal Accountability shapes how technology addresses societal challenges, from internet censorship to ethical software design.