I. Emerging Non-Volatile Memories (eNVMs)
eNVMs include spin-transfer torque magnetic random-access memory (STT-MRAM), phase change memory (PCM), resistive random-access memory (RRAM), and the ferroelectric field-effect transistor (FeFET). These eNVMs are being pursued aggressively in industrial research and development as next-generation storage-class memory and embedded memory technologies, and our group has extensive research activities in RRAM and ferroelectric devices. We are interested in the following topics:
1. Nanofabrication: new materials and device structure optimization; back-end-of-line process integration at low processing temperatures;
2. Device testing and modeling: device electrical testing, reliability characterization, physical numerical modeling, and compact modeling for SPICE simulation;
3. Array-level design and chip tape-out (in collaboration with industry, e.g., TSMC/GF): programming schemes, sensing schemes, and peripheral circuitry design;
4. Monolithic 3D integration of eNVMs on top of logic: 3D partitioning, physical design, and design automation flow;
5. Radiation effects in eNVM-based devices, arrays, and systems;
6. Cryogenic characterization of eNVM-based devices, arrays, and systems for applications in quantum computing periphery.
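To illustrate the compact-modeling topic above, the sketch below implements a minimal behavioral RRAM model: a sinh-type I-V conduction term scaled by a normalized state variable that drifts under SET/RESET bias. This is an illustrative toy, not a calibrated compact model; all parameter names and values (`i0`, `v0`, `v_set`, `v_reset`, `tau`) are assumptions for demonstration only.

```python
import math

def rram_current(v, g, i0=1e-4, v0=0.25):
    """Behavioral I-V: sinh-like conduction scaled by state g in [0, 1]."""
    return i0 * g * math.sinh(v / v0)

def update_state(g, v, dt, v_set=1.0, v_reset=-1.0, tau=1e-6):
    """Drift the state variable toward 1 under SET bias, toward 0 under RESET.
    Illustrative first-order dynamics; thresholds and time constant are assumed."""
    if v > v_set:
        g += (1.0 - g) * dt / tau
    elif v < v_reset:
        g -= g * dt / tau
    return min(max(g, 0.0), 1.0)

# Example: a train of SET pulses gradually raises the conductance state
g = 0.0
for _ in range(10):
    g = update_state(g, 1.5, 1e-7)
```

A real compact model for SPICE would add temperature dependence, variability, and fitted parameters; this sketch only shows the state-variable structure such models share.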
II. Hardware Design for Machine/Deep Learning and Neuromorphic Computing
Machine/deep learning and neuromorphic computing algorithms typically require an enormous amount of computational and memory resources to perform training of the model parameters and/or inference. The back-and-forth data transfer between the processing core and memory via the narrow I/O interface imposes a "memory wall" problem on the entire system. Therefore, a radical shift of the computing paradigm towards "compute-in-memory" is an attractive solution, where logic and memory arrays are integrated in a fine-grained fashion and the data-intensive computation is offloaded to the memory periphery. Many on-chip memory arrays (including SRAM and eNVMs) could be customized as synaptic arrays for parallelizing the matrix-vector multiplication or weighted-sum operations in neural networks. On-chip implementation of machine/deep learning and neuromorphic computing requires co-design of devices, circuits, and algorithms, which may potentially yield orders-of-magnitude improvements in speed and energy efficiency for intelligent tasks such as image or speech recognition. We are interested in the following topics:
1. Synaptic device engineering for multilevel state tuning and symmetric, linear incremental programming;
2. Prototype AI chip design and integration of synaptic arrays with CMOS peripheral circuits;
3. Electronic design automation (EDA) tool development for evaluating various synaptic devices and array architectures (e.g., integration of NeuroSim with TensorFlow/PyTorch);
4. Algorithm and architecture co-optimization for efficient mapping and data flow (e.g., routing and interconnect) considering the hardware constraints.
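The parallel weighted-sum operation described above can be sketched with a toy crossbar model: synaptic weights are mapped to conductances, input voltages drive the rows, and each column current accumulates the products (Kirchhoff's current law). This is a minimal abstraction, assuming ideal devices; all numerical values are illustrative, and real designs must handle weight-to-conductance mapping offsets, ADC quantization, and device non-idealities.

```python
import numpy as np

def crossbar_mvm(G, v):
    """Ideal crossbar matrix-vector multiply.
    G: conductance matrix (rows x columns), v: input voltage vector.
    Column current j = sum_i v[i] * G[i, j], i.e. an analog weighted sum."""
    return v @ G

# Illustrative 3x2 crossbar: weights pre-mapped to conductances (arbitrary units)
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])
v = np.array([0.1, 0.2, 0.3])  # input activations encoded as voltages
i_out = crossbar_mvm(G, v)     # per-column output currents
```

The key point is that the multiply-accumulate happens in the analog domain inside the array in one step, rather than by shuttling weights across a narrow I/O bus.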
III. Hardware Security for Emerging Technologies
Cyber systems and electronic devices are vulnerable targets for adversaries, leading to serious security and privacy issues for mission-critical applications. Classical cryptography relies on a secret key, which may leak to the adversary by various means, e.g., software attacks (viruses), physical attacks (such as invasive probing), or side-channel attacks. These concerns motivate the development of the Physical Unclonable Function (PUF). A PUF is a security primitive that leverages the inherent randomness in physical systems (e.g., the semiconductor manufacturing process) to produce unique responses (outputs) upon the query of challenges (inputs). We plan to leverage the variability of eNVM devices as the physical mechanism for PUF applications. Machine learning hardware also imposes new security threats and vulnerabilities, such as chip cloning, model reverse engineering, and adversarial attacks. We are interested in the following topics:
1. Designing eNVM based PUF circuits and prototypes;
2. Exploring the vulnerabilities and countermeasures in machine learning hardware accelerators.
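The PUF challenge-response behavior described above can be illustrated with a toy model: per-chip random conductances stand in for uncontrollable device variability, and a challenge selects a pair of cells whose comparison yields a response bit. This is a conceptual sketch, assuming a simple pairwise-comparison readout; the distribution, cell count, and function names are illustrative assumptions, not a proposed circuit.

```python
import numpy as np

def make_puf(n_cells, seed):
    """Model one chip: the seed stands in for the chip's unique, random
    manufacturing variation (e.g., eNVM conductance spread)."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=1.0, scale=0.1, size=n_cells)

def respond(cells, challenge):
    """Challenge (i, j) selects two cells; the response bit encodes
    which cell conducts more. Unclonable because the underlying
    variation cannot be reproduced deliberately."""
    i, j = challenge
    return int(cells[i] > cells[j])

chip_a = make_puf(64, seed=1)   # two physically distinct "chips"
chip_b = make_puf(64, seed=2)
bit = respond(chip_a, (3, 17))
```

The same chip answers the same challenge reproducibly, while different chips produce statistically independent response patterns across many challenges.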
Sponsors of Research:
We acknowledge the support of our current and past sponsors, including in-kind donations (e.g., software, wafers, chip tape-out shuttles):