For the past few months since I started my work here in Gothenburg, the weather has been rainy at least half the time! Even for someone who loves the rain, it can get annoying. Long winter nights can also severely affect the energy levels of anyone without a routine to stick to (and, of course, vitamin D pills!). Thankfully, it has been easier for me, as I am busy with measurements and cleanroom training almost all the time and can hardly feel how fast the days pass by.
Although I still have to get the licenses for the cleanroom tools, I am being trained and partially doing fabrication with the help of one of our postdocs, who is also mentoring me. Coming from a simulation background, it truly fascinates me that I can fabricate real-life devices and measure them. Last week we finished my first simple spin Hall nano-oscillator chip, and since then I’ve been doing auto-oscillation measurements on it to determine the signal (and device) quality. If everything goes well, we will fabricate memristive gates on top of it, which is a very complex procedure. Afterward, we will investigate the effects of the memristive gates’ position and shape on spin Hall nano-oscillator chains and arrays.
Have you ever wondered why we cannot perform brain tasks on our computers? Of course, I don’t mean simple cat/dog recognition or calculus (computers have been specifically designed to do a very limited number of brain functions extremely well, even better than humans), but something bigger, like analyzing new and unfamiliar situations.
To answer this question, let’s first see how conventional computers work:
Simply put, there are two main units: a processing unit that processes and analyzes the data, and a memory unit that stores it. These two blocks are separated from each other, and every time a task must be performed, the data has to travel back and forth between them. This design is known as the Von Neumann architecture.
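The separation described above can be sketched in a few lines of code. This is purely a pedagogical toy, not a real machine model: the `Memory` and `Processor` classes and their methods are my own illustrative names.

```python
# Toy illustration of the Von Neumann cycle: every operation shuttles data
# between a separate memory unit and a separate processing unit.

class Memory:
    """The memory unit: it only stores and retrieves data."""
    def __init__(self):
        self.cells = {}

    def load(self, addr):
        # Data travels memory -> processor.
        return self.cells.get(addr, 0)

    def store(self, addr, value):
        # Result travels processor -> memory.
        self.cells[addr] = value


class Processor:
    """The processing unit: it only computes, it holds no data."""
    def add(self, a, b):
        return a + b


mem, cpu = Memory(), Processor()
mem.store("x", 2)
mem.store("y", 3)

# One tiny "task" already needs four trips across the memory/processor gap:
# two loads, one store here, plus the earlier stores of the inputs.
result = cpu.add(mem.load("x"), mem.load("y"))
mem.store("sum", result)
```

Every such round trip costs time and energy, which is exactly where the bottleneck discussed below comes from.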
As you may have already guessed, there are two issues with this architecture that make it almost impossible to handle heavy tasks:
Energy consumption: because the blocks are separated, a lot of energy is wasted as Joule heating along the way.
Speed: the time the data needs to travel back and forth limits how fast the computation can be.
This limitation is also known as the Von Neumann bottleneck. In other words, the architecture caps the throughput, and the intensive data exchange is the problem. To find an alternative, it’s best to take a look at our brain and try to build something that emulates it, because not only does it handle these tasks far better than any computer, it is also super energy efficient.
The brain is made up of a very dense network of interconnected neurons, which are responsible for all the data processing happening there. Each neuron has three parts: the soma (some call it the neuron as well), which is the cell body responsible for the chemical processing of the neuron; the synapse, which acts like a memory unit and determines the strength of the connections to other neurons; and the axon, which is like the wire connecting one neuron to the next.
Neurons communicate with voltage signals (spikes) generated by the ions and chemicals inside our brains. Many models have been proposed for how they work, but here we will discuss the simplest (and probably the most useful) one: the leaky integrate-and-fire model.
As mentioned earlier, neurons communicate with spikes, which change the potential of the soma. When a spike from an upstream neuron arrives, the potential of the soma increases. However, this increase is temporary: if no other spikes arrive afterward, the potential relaxes back to its resting level (leakage). On the other hand, if a train of spikes arrives at the neuron, they can accumulate (integrate), and if the potential reaches a threshold, the neuron itself generates a spike (fire). After firing, the potential returns to the resting level.
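The leak–integrate–fire cycle above is simple enough to simulate in a few lines. Here is a minimal sketch; the parameter values (time constant, threshold, synaptic weight) are arbitrary illustrative choices, not values from any specific biological or hardware model.

```python
# Minimal leaky integrate-and-fire (LIF) neuron.
# tau, v_rest, v_threshold, and w are hypothetical example values.

def simulate_lif(input_spikes, dt=1.0, tau=20.0,
                 v_rest=0.0, v_threshold=1.0, w=0.3):
    """Integrate a train of input spikes (0/1 per time step).

    Returns the membrane-potential trace and the output spike train.
    """
    v = v_rest
    potentials, output_spikes = [], []
    for s in input_spikes:
        # Leak: the potential decays back toward the resting level.
        v += dt * (v_rest - v) / tau
        # Integrate: each incoming spike pushes the potential up.
        v += w * s
        if v >= v_threshold:
            # Fire: threshold crossed -> emit a spike and reset.
            output_spikes.append(1)
            v = v_rest
        else:
            output_spikes.append(0)
        potentials.append(v)
    return potentials, output_spikes

# A dense burst of input spikes integrates up to a firing event,
# while sparse spikes simply leak away without ever firing.
burst = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
trace, out = simulate_lif(burst)
```

With these example numbers, the four-spike burst makes the neuron fire once, whereas feeding in isolated spikes (e.g. one spike every five steps) never reaches the threshold.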
Naturally, the connections between neurons are not all the same, and the differences lie in the synapses. The form and combination of the synapses change over time depending on how active or inactive the two connected neurons have been. The more they communicate, the stronger their connection gets, and this is called “synaptic plasticity”. (This is why learning something new is so hard: the connections between the neurons need time and practice to improve!) For a deeper dive into the fascinating world of the brain, I recommend this book: Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition.
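A common way to capture this “use it or lose it” behavior in software is a Hebbian-style weight update. The sketch below is only one possible toy rule; the function name, learning rate, and decay rate are my own illustrative choices, not taken from any particular neuroscience model.

```python
# Toy Hebbian-style synaptic plasticity rule.
# learning_rate, decay, and w_max are hypothetical example values.

def update_weight(w, pre_spike, post_spike,
                  learning_rate=0.1, decay=0.01, w_max=1.0):
    """Strengthen the synapse when both neurons are active together;
    otherwise let it slowly weaken (forgetting)."""
    if pre_spike and post_spike:
        # Correlated activity -> the connection gets stronger,
        # saturating as it approaches the maximum weight.
        w += learning_rate * (w_max - w)
    else:
        # Inactivity -> the connection gradually weakens.
        w -= decay * w
    return w

# Repeated co-activity: "the more they communicate, the stronger the link".
w = 0.5
for _ in range(20):
    w = update_weight(w, pre_spike=1, post_spike=1)
```

After twenty correlated updates the weight has climbed close to its maximum, while a synapse between two silent neurons slowly drifts downward, mimicking why skills fade without practice.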
Now, it’s time to get back to the Von Neumann bottleneck. Taking inspiration from the brain, it seems better to place the memory unit in the vicinity of, or even inside, the processing unit (just like the soma and the synapses, which sit really close together); this way, a lot of time and energy can be saved. It also follows that the processing units should be nonlinear, as in the brain, and that the memory unit should be tunable in order to mimic synaptic plasticity. We now know how the different parts should behave for a computer to at least function like the brain, but the big questions are: What hardware should be used? What kind of devices act like a neuron, or a synapse? And even if we find them, can we place them close enough to each other to overcome the Von Neumann bottleneck?
These are the questions that neuromorphic computing tries to answer. In other words, it is an attempt to build new hardware capable of computing the way our brain does. Some of the most promising candidates here are spin-orbit devices, as they are super fast, energy efficient, and, most importantly, nonlinear. I will discuss them and their major role in this field in more detail in the second part of my post soon!
It’s been almost two months since I moved to Sweden. Although I had a frustrating one-year delay due to all the admission issues and visa processes, it was totally worth it. Gothenburg is a wonderful city (at least in the summer!) surrounded by nature; in fact, it has repeatedly been ranked among the most sustainable cities in the world.
Here at the MC2 department of Chalmers University, I fit into the group so easily that I could never wish for a better environment. My friendly colleagues have been super helpful and supportive, and our scientific and non-scientific discussions during fika (a Swedish tradition much like a coffee break with sweets) have inspired me a lot.
For someone coming from a simulation background, it’s not easy to learn all the experimental techniques and instruments at once; however, I am doing my best! For now, I’m learning to work with different deposition and measurement techniques such as sputtering, AMR, FMR, and ST-FMR, as well as taking cleanroom courses. So far, it has been a challenging yet valuable experience.
Even though I had just started my Ph.D., I decided to attend the NeuroSpin summer school in Lausanne. It was a pleasant opportunity to talk to professors and students with different backgrounds, and it genuinely widened my perspective on the role of our field in future interdisciplinary neuromorphic computing applications. I also finally had the chance to meet Marco, ESR 3, and Ismael, ESR 10, and had an enjoyable time in their company both inside and outside the summer school classes.