Author: Maha Khademi

My September

The past September was a very busy month for me. It began with JEMS 2023 in Madrid, where I gave an oral presentation of my results and got very useful feedback from experts in my field. Immediately afterwards, I attended ESM 2023, which was held just north of Madrid, and I had the chance to reunite with Ismaeil (ESR 10), Arturo (ESR 12), and Niklas (ESR 4), as well as to meet many other junior researchers. For me, the most important part of both events was the networking. Getting to know so many amazing people and making friends, along with interesting talks and lectures (and also Covid!), shaped my September.

Partying with Arturo, Niklas, and Ismaeil at the European School of Magnetism 2023!

In spite of all the fun and science, I feel really grateful to be back to my routine and work in Gothenburg. Staying away from home for a long time is definitely not the easiest thing for me!

Although I could not attend SPEAR's training sessions in Halle this month, I'm really looking forward to getting together with everyone again in Hamburg soon!

My Presentation at JEMS 2023, Madrid

My First Fabrication Experience

In the past few months since I started my work here in Gothenburg, the weather has been rainy at least half of the time! Even for someone like me who loves the rain, it can get annoying at times. On top of that, the long winter nights can severely drain the energy of anyone without a routine to stick to (and, of course, without vitamin D pills!). Thankfully, it has been easier for me: I am busy with measurements and cleanroom training almost all the time, so I barely notice how fast the days pass by.

Although I still have to get the licenses for the cleanroom tools, I am being trained and am already doing parts of the fabrication with the help of one of our postdocs, who is also mentoring me. Coming from a simulation background, it truly fascinates me that I can fabricate real-life devices and measure them. Last week we finished my first simple spin Hall nano-oscillator chip, and since then I have been running auto-oscillation measurements on it to determine the signal (and device) quality. If everything goes well, we will fabricate memristive gates on top of the oscillators, which is a very complex procedure. Afterwards, we will investigate how the position and shape of those memristive gates affect spin Hall nano-oscillator chains and arrays.

A picture of the device we fabricated (I accidentally scratched it during development! Thankfully, it works fine).

Neuromorphic Computing: A Brief Explanation

Have you ever wondered why we cannot perform brain tasks on our computers? Of course, I don't mean simple cat/dog recognition or calculus (computers have been specifically designed to do a very limited number of brain functions extremely well, even better than humans), but something bigger, like analyzing new and unfamiliar situations.

To answer this question, let’s first see how conventional computers work:

Simply put, there are two main units: a processing unit that processes and analyzes the data, and a memory unit that stores it. These two blocks are separate, and every time a task has to be done, the data must travel back and forth between them. This design is known as the von Neumann architecture [1].

Fig. 1: The von Neumann architecture. In a conventional computing system, when an operation f is performed on data D, D has to be moved into the processing unit, leading to significant costs in latency and energy [2].

As you may have already figured out, this architecture has two issues that make heavy tasks almost impossible:

  1. High energy consumption, since the blocks are "separated" and a lot of Joule heating occurs in the data transfer between them.
  2. Limited speed, due to the time the data needs to go back and forth.

This is known as the von Neumann bottleneck [3]: the intensive data exchange limits the throughput of the whole system. To find an alternative, it's best to take a look at our brain and try to build something that emulates it, because not only is it the fastest computer available, it is also extremely energy efficient.
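To get a feeling for why this data shuttling hurts, here is a minimal Python sketch of the fetch-compute-store cycle. The energy numbers in it are assumptions I made up for illustration; the only point is that the movement cost is charged on every single operation:

```python
# Toy model of the von Neumann bottleneck. The energy numbers are
# made-up illustrative values: in real hardware, moving a word between
# off-chip memory and the processor can cost orders of magnitude more
# energy than the arithmetic operation performed on it.

ENERGY_COMPUTE = 1.0   # cost of one arithmetic operation (arbitrary units)
ENERGY_MOVE = 100.0    # cost of moving one data word to/from memory (assumed)

def run_task(data, f):
    """Apply f to every element, tallying compute vs. data-movement energy."""
    compute = 0.0
    movement = 0.0
    results = []
    for d in data:
        movement += ENERGY_MOVE    # fetch d from the memory unit
        results.append(f(d))       # operate on it in the processing unit
        compute += ENERGY_COMPUTE
        movement += ENERGY_MOVE    # write the result back to memory
    return results, compute, movement

_, e_compute, e_move = run_task(range(1000), lambda d: 2 * d)
print(f"compute energy: {e_compute:.0f}, data-movement energy: {e_move:.0f}")
# The shuttling dominates; this overhead is the von Neumann bottleneck.
```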

The brain is made up of a very dense network of interconnected neurons, which are responsible for all the data processing that happens there. Each neuron has three parts: the soma (which some people call the neuron itself), the cell body responsible for the neuron's chemical processing; the synapses, which act like the memory unit and determine the strength of the connections to other neurons; and the axon, which is like the wire connecting one neuron to the next.

Fig. 2: Neural networks in biology and computing [4].

Neurons communicate with voltage signals (spikes) generated by the ions and chemicals inside our brains. Many models of how neurons work have been proposed, but here I will discuss the simplest (and probably the most useful) one: the leaky integrate-and-fire model [5].

Fig. 3: The leaky integrate-and-fire model. Incoming pulses excite the biological neuron; if the excitation reaches a threshold, the neuron fires an outgoing spike, and if not, it relaxes back to the resting potential [6].

As mentioned earlier, neurons communicate with spikes, which change the potential of the soma. When a spike from an upstream neuron arrives, the potential of the soma increases. This increase is temporary: if no other spikes arrive afterwards, the potential decays back to the resting level (leak). If, on the other hand, a train of spikes arrives at the neuron, their effects accumulate (integrate), and once the potential crosses a threshold, the neuron generates a spike of its own (fire). After firing, the potential returns to the resting level.
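To make this concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron. The time constant, threshold, and pulse amplitudes are illustrative values I chose for the example, not biological measurements:

```python
# A minimal leaky integrate-and-fire neuron, discretized in time.
# All parameter values below are illustrative choices, not biological data.

dt = 1.0           # time step (ms)
tau = 20.0         # membrane time constant (ms): sets how fast the leak acts
v_rest = 0.0       # resting potential (arbitrary units)
v_threshold = 1.0  # firing threshold (arbitrary units)

def simulate(input_pulses):
    """Integrate incoming pulses; fire and reset when the threshold is crossed."""
    v = v_rest
    spikes = []
    for i in input_pulses:
        # leak toward the resting potential, then integrate the input
        v += dt * (-(v - v_rest) / tau + i)
        if v >= v_threshold:   # threshold crossed: fire...
            spikes.append(1)
            v = v_rest         # ...and relax back to rest
        else:
            spikes.append(0)
    return spikes

# A dense train of pulses accumulates until the neuron fires,
# while the gaps between pulses let the potential leak away.
pulses = [0.3 if t % 3 == 0 else 0.0 for t in range(30)]
print(simulate(pulses))  # a 1 appears once enough pulses have integrated
```

Dense pulse trains integrate up to the threshold and produce a spike, while isolated pulses simply leak away, exactly the behaviour sketched in Fig. 3.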

Of course, the connections between neurons are not all the same, and the differences lie in the synapses. The form and composition of a synapse change over time, depending on how active the two neurons it connects have been: the more they communicate, the stronger and better their connection. This is called "synaptic plasticity". (It is also why learning something new is so hard: the connections between the neurons need time and practice to strengthen!) If you want to explore the fascinating world of the brain further, I recommend this book: Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition [7].
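As a rough illustration of how repeated communication strengthens a connection, here is a tiny Hebbian-style update rule; the learning rate and decay values are arbitrary choices for this sketch, not taken from any particular model:

```python
# A tiny Hebbian-style plasticity rule ("neurons that fire together,
# wire together"). The learning rate and decay are arbitrary values
# picked for this sketch.

LEARNING_RATE = 0.1
DECAY = 0.01

def update(w, pre_active, post_active):
    """Strengthen the synapse on co-activation; let it slowly weaken otherwise."""
    if pre_active and post_active:
        return w + LEARNING_RATE * (1.0 - w)  # potentiate, saturating at 1
    return w * (1.0 - DECAY)                  # gradual forgetting

w = 0.5  # initial synaptic weight (connection strength)
for _ in range(20):          # repeated co-activation, i.e. "practice"
    w = update(w, True, True)
print(round(w, 3))           # close to 1: the connection has strengthened
```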

Now it's time to get back to the von Neumann bottleneck. Taking inspiration from the brain, it's better to place the memory unit in the vicinity of, or even inside, the processing unit (just like the soma and the synapses, which sit really close together); this way a lot of time and energy can be saved. The processing units should also be nonlinear, as in the brain, and the memory unit should be tunable in order to mimic synaptic plasticity. So we know how the different parts should behave for a computer to function at least somewhat like the brain, but the big questions are: What hardware should be used? What kinds of devices act like a neuron, or a synapse? And even if we find them, can we place them close enough to each other to overcome the von Neumann bottleneck?

These are the questions that neuromorphic computing tries to answer. In other words, it is an attempt to build new hardware that can compute the way our brain does. Some of the most promising candidates are spin-orbit devices, as they are fast, energy-efficient, and, most importantly, nonlinear [8][9]. I will talk about them and their major role in this field in more detail in the second part of my post soon!

Please don’t hesitate to ask questions: mahak@chalmers.se

References:

1. von Neumann, J. Papers of John von Neumann on Computers and Computer Theory (1986).

2. Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. et al. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 15, 529–544 (2020).

3. Backus, J. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Commun. ACM 21, 613–641 (1978).

4. Bains, S. The business of building brains. Nat Electron 3, 348–351 (2020).

5. Brunel, N., van Rossum, M.C.W. Lapicque’s 1907 paper: from frogs to integrate-and-fire. Biol Cybern 97, 337–339 (2007).

6. Kurenkov, A., DuttaGupta, S., Zhang, C., Fukami, S., Horio, Y. & Ohno, H. Artificial neuron and synapse realized in an antiferromagnet/ferromagnet heterostructure using dynamics of spin–orbit torque switching. Adv. Mater. 31, 1900636 (2019).

7. Gerstner, W., Kistler, W., Naud, R. & Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, 2014).

8. Grollier, J., Querlioz, D., Camsari, K.Y. et al. Neuromorphic spintronics. Nat Electron 3, 360–370 (2020).

9. Zahedinejad, M., Fulara, H., Khymyn, R. et al. Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing. Nat. Mater. 21, 81–87 (2022).

An Incredible Start

It's been almost two months since I moved to Sweden. Although I had a frustrating one-year delay due to all the admission issues and visa processes, it was totally worth it. Gothenburg is a wonderful city (at least in the summer!) surrounded by nature. In fact, it has been named the most sustainable city in the world several times.

Here at the MC2 department of Chalmers University, I fit into the group so easily that I could not wish for a better environment. My friendly colleagues have been super helpful and supportive, and our scientific and non-scientific discussions during fika (a Swedish tradition, much like a coffee break with sweets) have inspired me a lot.

For someone coming from a simulation background, it's not easy to learn all the experimental techniques and instruments at once; however, I am doing my best! For now, I'm learning to work with different deposition and measurement techniques, such as sputtering, AMR, FMR, and ST-FMR, as well as taking cleanroom courses. So far, it has been a challenging yet valuable experience for me.

Even though I had just started my Ph.D., I decided to attend the NeuroSpin summer school in Lausanne. It was a wonderful opportunity to talk to professors and students with different backgrounds, and it genuinely widened my view of the role our field can play in future interdisciplinary neuromorphic computing applications. I also finally had the chance to meet Marco (ESR 3) and Ismael (ESR 10), and I enjoyed their company both inside and outside the summer school classes.

A selfie with Marco and Ismael at NeuroSpin