Xinyu Shi

Research project

System-Technology Co-optimization for enablement of MRAM-based Machine Learning

Project supervisor

Dr. Dwaipayan Biswas

Recruitment date
01/01/2023

Xinyu Shi

My name is Xinyu. I was born in Anshan, a small industrial city in northeast China.

I received my master's degree in Internet of Things Engineering from Ecole Polytechnique in France. In my free time, I enjoy collecting records and practicing to be an "old-school" style DJ.
During my postgraduate studies, I had the chance to complete my master's thesis at imec on system-level memory power analysis. This experience brought me into the world of microelectronics and future computer architectures. Nowadays, the performance of a growing number of AI-based applications is limited by the traditional von Neumann architecture. SPEAR gives me the opportunity to work on MRAM-based compute-in-memory, which is expected to enable more efficient AI computation.
The future world will be driven by artificial intelligence. I’m so glad to be a part of it!


Project Description

Machine learning (ML) techniques such as deep neural networks (DNNs) have achieved important breakthroughs in a myriad of application domains. The core operations in DNNs are matrix-vector multiplications (MVMs), and in the majority of use cases a dedicated training step generates a set of parameters which are then used in an inference step to produce classification/prediction outcomes. Training of DNNs has traditionally been carried out using software compute capabilities, while considerable research effort has been spent by the community on accelerating inference on-chip for (near) real-time outcomes, optimizing energy and accuracy. There is a need to look at optimized training procedures that reduce the energy footprint with minimal accuracy trade-off. Minimizing the data movement between the compute and memory blocks (the non-von Neumann trajectory) has had great success towards energy optimization, especially in the accelerated-inference landscape for ML applications. This has primarily been achieved through compute-near/in-memory (CnM/CiM) techniques. Devices based on standard as well as novel/emerging technologies have been the main contributors to the CiM/CnM paradigm, helping to optimize the core MVM operation. Both digital multiply-accumulate circuits and Kirchhoff's-law-based analogue-domain processing have been explored to avoid costly fetches from external memory.
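
As a purely illustrative sketch of this MVM-in-memory idea (a toy NumPy model with assumed, hypothetical parameters such as the maximum conductance, read voltage and ADC resolution; not the project's actual design flow), trained weights can be viewed as cell conductances and activations as read voltages, with column currents summing according to Kirchhoff's current law before being digitized:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer dimensions (illustrative only).
n_in, n_out = 64, 16
W = rng.standard_normal((n_out, n_in))   # trained weights (from a software training step)
x = rng.standard_normal(n_in)            # input activations

# Ideal digital MVM: the core operation a CiM array is meant to accelerate.
y_ref = W @ x

# Crude analog CiM view: weights mapped to device conductances, activations to
# read voltages; column currents then sum per Kirchhoff's current law.
g_max = 1e-6                             # assumed maximum cell conductance (S), hypothetical
v_read = 0.1                             # assumed read-voltage full scale (V), hypothetical
G = W / np.abs(W).max() * g_max          # signed weights -> conductances (differential pair in practice)
v = x / np.abs(x).max() * v_read         # activations -> applied voltages
i_col = G @ v                            # per-column currents (A)

# An ADC brings the column currents back to the digital domain.
adc_bits = 6
lsb = 2 * np.abs(i_col).max() / 2**adc_bits
y_hw = np.round(i_col / lsb) * lsb

# Rescale and compare against the ideal digital result.
scale = (np.abs(W).max() * np.abs(x).max()) / (g_max * v_read)
print("max |error| vs. ideal MVM:", np.max(np.abs(y_hw * scale - y_ref)))
```

Even in this toy form, the conductance range and ADC resolution visibly bound the accuracy of the analog MVM, which is the kind of device-to-system trade-off the project explores.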

Dense non-volatile memories (NVMs) with large resistance (MOhm range) and narrow parameter distributions are a promising candidate; however, the typical write penalties of the standard spin-transfer torque (STT) variant of MRAM technology could be a bottleneck for their adoption. This is mitigated by emerging MRAM writing concepts: spin-orbit torque (SOT) and voltage-controlled magnetic anisotropy (VCMA). In addition, design solutions have been proposed to create multi-level-bit MTJ cells and are currently being prototyped for further demonstration. This project will explore design-technology co-optimization (DTCO) using in-house SOT/VGSOT-MRAM technology-based ML hardware to optimize ML-training-related system performance for a dedicated application space. This will help close the bottom-up loop connecting device characteristics to system power/performance metrics, enabling system-technology co-optimization (STCO) for CnM-centric ML applications.
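
To make the multi-level-bit notion concrete, here is a minimal sketch of mapping normalized weights onto discrete MTJ conductance states (assuming a hypothetical 2-bit cell with evenly spaced states, placeholder resistance values and an assumed device-to-device spread; real SOT/VGSOT devices will differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-bit (4-level) MTJ cell: four evenly spaced conductance states
# between placeholder low/high resistance values (MOhm range, as noted above).
r_high, r_low = 2.0e6, 0.5e6                      # ohms, illustrative only
g_states = np.linspace(1.0 / r_high, 1.0 / r_low, 4)

def program_weights(w_norm, g_states, sigma=0.02):
    """Map weights normalized to [0, 1] onto the nearest conductance state,
    then add a small assumed device-to-device spread."""
    g_target = g_states[0] + w_norm * (g_states[-1] - g_states[0])
    idx = np.argmin(np.abs(g_target[:, None] - g_states[None, :]), axis=1)
    g_ideal = g_states[idx]
    return g_ideal * (1.0 + sigma * rng.standard_normal(g_ideal.shape))

w_norm = rng.uniform(0.0, 1.0, size=8)            # toy normalized weights
g_prog = program_weights(w_norm, g_states)
print(np.round(g_prog * 1e6, 3), "uS")            # programmed conductances in microsiemens
```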

 In this PhD you will: 

  1. Understand device characteristics for binary and multi-level-bit SOT-MTJs.
  2. DTCO: explore architecture-level choices that help optimize the device knobs, yielding low-energy hardware solutions for ML.
  3. STCO: explore novel compute-near/in-memory concepts using MRAM for system PPA (power, performance, area) impact estimation.

The PhD is expected to develop: i) a device-level understanding enabling ML circuit-level DTCO, and ii) architecture-level choices, in conjunction with MRAM technology, that help optimize system PPA for ML training/inference. You are expected to participate in both the circuit- and architecture-level optimization loops, enabling device benchmarking.

 

Host institution

Imec is a world-leading research and innovation hub in nanoelectronics and digital technologies. The machine learning program at Imec is leading the quest for computationally- and energy-efficient machine learning accelerators. Imec's machine learning research is driving the co-evolution of hardware and algorithms needed to facilitate the move to this new computational paradigm.

Planned Secondments

ETH Zürich (Zurich, Switzerland), under the supervision of Pietro Gambardella.

NanOsc (Gothenburg, Sweden), under the supervision of Fredrik Magnusson.

Registering University

KU Leuven (Leuven, Belgium).
