Xida Ren

Computer Science PhD @ UVA & Hardware Performance Researcher @ Intel

Intel Labs

University of Virginia


I am Xida Ren, pursuing a PhD in security & performance for ML Hardware at the University of Virginia Computer Architecture Lab. I am also interning at Intel.

My current research interests include hardware security, workload profiling, and software-hardware co-design.

I’m interested in learning more about AI governance and AI-related policy. I think hardware security features like the Intel Management Engine, Trusted Platform Modules, and TDX (Trust Domain Extensions) could potentially be used to support AI governance, and I’m happy to talk about how.

I feel at home with people who like words, numbers, and diagrams.

Download my resumé (1 page).

Download my CV (4 pages).

Nothing on this website represents the views of anybody but myself.

  • Hardware Security
  • Profiling
  • Software-hardware co-design
  • Semiconductor Supply Chain
  • PhD in Computer Science, 2023

    University of Virginia

  • BSc in Computer Science and Mathematics, 2019

    College of William and Mary


Intel Labs
Research Intern
Aug 2022 – Present Portland, Oregon
  • Profile workloads to generate architecture-independent workload summaries that use Basic Block Vectors to accurately predict workload performance on novel hardware.
  • Accelerate summary generation by 1,000,000x using hardware performance counters.
  • Generate multi-platform executable benchmarks based on performance summaries using MLIR.
  • Use differential privacy to enable trace-sharing across organizational boundaries without concern for leaking sensitive IP.
University of Virginia
PhD Candidate
Aug 2019 – Present Charlottesville, Virginia
  • Discovered I See Dead Micro-Ops, a critical security flaw that threatened execution integrity and data security in modern x86 processors. Published at ISCA 2021.
  • Developed performance preserving Spectre defenses for SMT, published under SecSMT at USENIX Security 2022.
  • Applying formal verification to ensure that quantized machine learning models remain invulnerable to adversarial attacks, using DNNV (https://github.com/dlshriver/dnnv), ONNX, and Reluplex (https://arxiv.org/abs/1702.01135).
  • Mentoring 5 undergraduate students on computer architecture and machine learning projects: breaking large projects into digestible chunks and providing instruction on computer architecture, side-channel attacks, machine learning compilers, and ML models (incl. model specification, feature engineering, parameter tuning, and cross-validation).

Recent Publications

(2021). I See Dead µops: Leaking Secrets via Intel/AMD Micro-Op Caches. In ISCA 2021.
