Cedar Ren

Software Engineer @ MatX

Intel Labs

University of Virginia

Biography

I am Cedar Ren. I write kernel, compiler, simulation, and other systems code for high-performance LLM acceleration ASICs.

I’m interested in systems, hierarchies of abstraction, and representations of knowledge.

I feel at home with people who like words, numbers, and diagrams.

See my work:

Download my résumé (1 page).

Download my CV (4 pages).

Nothing on this website represents the views of anybody but myself.

Interests
  • Tensor hardware performance
  • Software–hardware co-design
  • Semiconductor supply chain
Education
  • PhD in Computer Science, 2023

    University of Virginia

  • BSc in Computer Science and Mathematics, 2019

    College of William and Mary

Experience

MatX
Software Engineer
Apr 2025 – Present San Francisco Bay Area
  • Simulator for high-performance LLM acceleration ASIC
  • Quantization / Numerics
  • Kernels and Dataflow
AMD
Machine Learning Compiler Engineer (LLVM / MLIR)
Nov 2023 – Apr 2025 Seattle, Washington
  • Built an ONNX-to-Torch-MLIR conversion pipeline
  • Implemented prefix-tree KV caching (a.k.a. radix caching) for an LLM serving tool
  • Wrote infrastructure to reduce overhead around accelerated transformer kernels
Intel Labs
Research Intern
Aug 2022 – Nov 2023 Portland, Oregon
  • Profile workloads to generate architecture-independent workload summaries that use Basic Block Vectors to accurately predict workload performance on novel hardware.
  • Accelerate summary generation by 1,000,000x using hardware performance counters.
  • Generate multi-platform executable benchmarks based on performance summaries using MLIR.
  • Use differential privacy to enable trace-sharing across organizational boundaries without concern for leaking sensitive IP.
University of Virginia
PhD Candidate
Aug 2019 – Present Charlottesville, Virginia
  • Discovered a critical security flaw in the micro-op caches of modern x86 processors that threatened execution integrity and data security; published as I See Dead µops at ISCA 2021.
  • Developed performance-preserving Spectre defenses for SMT; published as SecSMT at USENIX Security 2022.
  • Applying formal verification to ensure that quantized machine learning models remain robust to adversarial attacks, using DNNV (https://github.com/dlshriver/dnnv), ONNX, and ReluPlex (https://arxiv.org/abs/1702.01135).
  • Mentoring 5 undergraduate students on computer architecture and machine learning projects: breaking large projects into digestible chunks and providing instruction on computer architecture, side-channel attacks, machine learning compilers, and ML models (incl. model specification, feature engineering, parameter tuning, and cross-validation).

Publications

(2021). I See Dead µops: Leaking Secrets via Intel/AMD Micro-Op Caches. In ISCA 2021.
