- Research project
Design-Technology Co-optimization techniques for enablement of MRAM-based Machine Learning hardware
- Project supervisor
Dr. Arindam Mallik
- Recruitment date
Before starting my STEM career at university, I devoted my time to studying ancient cultures and languages such as Latin and Greek.
The shift to scientific studies came when I wondered how silicon could make devices that perform billions of operations in a fraction of a second, devices that have completely changed our lives compared to just 20 years ago. I therefore began to study electronics and mathematics in depth, falling in love with analog and mixed-signal electronics. I have studied devices, electronics, and solid-state physics at several institutions, including the polytechnic universities of Turin, Milan, and Saint Petersburg, EPFL, and Grenoble INP. At the same time, I have carried out business projects with Fondazione Agnelli, looking for real-world applications of what I was studying.

The happy era of Dennard scaling is ending, and silicon transistor technology is saturating in terms of innovation, especially on the memory side, which occupies a large share of chip area and contributes substantially to static energy consumption. SOT-MRAM is a promising technology that can shift the development of electronics, bringing computation to the edge while saving power and reducing delay. Research in both electronics and design-space exploration is needed to make this technology operational. In the future, I see myself as a pioneer and an explorer, contributing to making MRAM disruptive.
Current research shows the limitations of existing devices for training neural networks due to their limited precision. This PhD topic will focus on breaking the barrier of device engineering to enable true analog in-memory computing for machine learning (ML) training algorithms. Such device optimization requires optimization at every abstraction level of a computing system, from algorithms and architectures down to circuits and device engineering.

ML algorithms such as deep neural networks (DNNs) have achieved important breakthroughs in a myriad of application domains. The core operations in a DNN are matrix-vector multiplications (MVMs), and the dominant model today is to train DNNs in software, which results in extremely large energy consumption. A DNN can instead be physically represented by crossbar-array hardware with programmable resistors (referred to as weight memory devices). Ideally, DNN accelerators should consist of dense non-volatile memories with large resistance (in the MOhm range) and narrow parameter distributions. MRAM technology is a promising candidate for such an approach: its resistance can be arbitrarily tuned to reach the values required for analog MVMs. However, increasing the magnetic tunnel junction (MTJ) resistance makes writing the cell with spin-transfer torque (STT) impossible. This is mitigated by using emerging MRAM writing concepts: spin-orbit torque (SOT) and voltage-controlled magnetic anisotropy (VCMA). In addition, design solutions for multi-level MTJ cells have been proposed and are currently being prototyped for further demonstration. Within this context, this project will explore Design-Technology Co-optimization (DTCO) of MRAM-based ML hardware.
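The crossbar principle described above can be illustrated with a minimal numerical sketch: weights are stored as conductances G = 1/R, input activations are applied as voltages, and Kirchhoff's current law sums the per-column currents, yielding the MVM in one step. All names and parameter values here (array size, the MOhm resistance range, the variation spread) are illustrative assumptions, not measured device data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar: rows = word lines (inputs), cols = bit lines (outputs).
rows, cols = 4, 3
R = rng.uniform(1e6, 10e6, (rows, cols))   # assumed MOhm-range resistances
G = 1.0 / R                                # programmable conductances (S)
V = rng.uniform(0.0, 0.2, rows)            # assumed read voltages (V)

# Ideal analog MVM: each bit line collects the sum of currents I = G^T @ V.
I_ideal = G.T @ V                          # one current per column (A)

# Device non-ideality: a relative spread sigma models the conductance
# distribution width; narrow distributions keep the MVM accurate.
sigma = 0.05
G_noisy = G * (1.0 + sigma * rng.standard_normal(G.shape))
I_noisy = G_noisy.T @ V

rel_err = np.abs(I_noisy - I_ideal) / np.abs(I_ideal)
print("ideal bit-line currents (A):", I_ideal)
print("max relative error from conductance variation:", rel_err.max())
```

Widening `sigma` in this toy model mimics a broader device distribution and directly degrades MVM precision, which is why the project targets narrow parameter distributions at large resistance.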
In this PhD, you will:
- Explore Design-Technology Co-optimization (DTCO) of MRAM-based ML hardware, with a primary focus on SOT-MRAM and VCMA devices
- Perform the device-level characterization and optimization needed to enable a low-energy hardware solution
- Propose circuit-level designs to explore DTCO techniques for ML circuit implementation
By performing this PhD at Imec, you will have the opportunity to contribute both to the fundamental understanding of SOT physics and to the practical realization of SOT-MRAM using state-of-the-art industrial fabrication methods on 300 mm wafers.
References: S. Cosemans et al., IEDM (2019); J. Doevenspeck et al., VLSI (2020).
Imec is a world-leading research and innovation hub in nanoelectronics and digital technologies. The machine learning program at Imec is leading the quest for computationally and energy-efficient machine learning accelerators. Imec's machine learning research is driving the co-evolution of hardware and algorithms needed to facilitate the move to this new computational paradigm.
ETH Zürich (Zurich, Switzerland), under the supervision of Pietro Gambardella.
NanOsc (Gothenburg, Sweden), under the supervision of Fredrik Magnusson.
KU Leuven (Leuven, Belgium).