Today, we’re excited to have Dr Pierre Brunswick, CEO and co-founder of NeuroMem Technologies.
Dr Pierre Brunswick will be talking about the misconceptions surrounding AI and the importance of ethics for the foundation of AI. For more information about neuromorphic engineering, check out this complementary piece.
Wan Wei: Hello Dr. Pierre, thank you for your time today. Can you tell us more about yourself and what you are doing?
Dr Pierre Brunswick: What we have done over the last 25 years is become a very successful AI company. We at NeuroMem and our parent company General Vision sell IP to some of the biggest players in the industry, including Intel, which uses our technology.
We (GV/NeuroMem) decided three years ago to come to Singapore to build an ecosystem that empowers developers in this sector, using our technology to help them with their AI. The journey to developing AI can be quite long, but you have to go through it; it's mandatory. We just help them get there a bit quicker.
So, my background is that I'm a geek. I have played with computers all my life; I can build hardware, and I know chip design and power architecture. This solution (that NeuroMem offers) is purely hardware based; it's a chip. So, for me, it was really natural to evangelize this technology and explain that it is based on exactly how our neurons work.
It's easy to explain to people, convey the right message, and teach and preach the technology, so that people don't focus on only one angle but get a broader view of all the existing solutions for AI.
Wan Wei: Why did you decide to focus on the neurons of the human brain?
Dr Pierre Brunswick: Why neurons? With NeuroMem, we can easily make up to 1 billion neurons. The brain has 10 billion, so we can behave, more or less, like 10% of the brain.
The next question is about the neurons: what do they simulate? How many synapses does each one simulate, and how far are we from the real thing? I'm fascinated because we have gone to the Moon, and maybe soon to Mars, but we don't even understand 10% of our own brain.
Wan Wei: AI as a concept is sometimes viewed as a magical technology because it’s in so many areas. What do you think is the biggest misconception about AI?
Dr Pierre Brunswick: Well, most people are a bit scared of AI, but it's already everywhere. It's like someone saying, "You know, I'm not sure I want to use a mobile phone today." That would look a bit strange, right?
Now, people use mobile phones in different ways. Older people may just want to make calls, while everyone else is chatting and sharing their lives.
So, it's the same with AI. AI is already everywhere. Every company is engaged in AI. What I want to achieve is to alert people that transforming your business model or your thinking with AI is a long journey. You have to engage early if you want to remain competitive.
But it's not only that. Worker safety is critical. People's health is critical. AI is in these domains every day. It helps doctors get diagnostic pre-alerts, and it enables preventive maintenance, predictive maintenance and so on.
So, I think that's what people don't understand – just how important AI is. What people should do is not be neglectful, not be scared, and just ask, "How can I benefit from AI?" That will help the integration of AI into our lives.
Wan Wei: As you mentioned, what people don't understand can sometimes turn into fear. Some people are afraid of and uncomfortable with interacting with AI. With the development of AI that increasingly looks human, how do you think the Uncanny Valley effect can be mitigated?
Dr Pierre Brunswick: AI started when we launched systems that think by themselves, like we do. When we teach a robot, the first thing you teach it is to please never kill a human. The second thing you teach it is how to save the world. How long do you think it will take for the robot to realize that those first two instructions don't match?
Because the biggest bacterium, the biggest virus in the world, is the human.
This is why I say that using AI without a real system of ethics and government regulation in place is a risk factor. Now, tell me one technology in the public domain today, used by everybody, that was not first used in a military application. All of them were first used by the military.
For example, let's talk about nuclear power. A lot of countries have enough electricity because of nuclear energy, but the first use of nuclear power was by the army. Then it entered the public domain. We have millions of examples where the army invested first in a technology because it was important to them. That's why we will have the first killer robot long before we have robots to save you from falling down at a traffic light.
This is normal. So, what do we need? We just need the right regulation, and we need to understand that ethics should be at the centre of everything we do. AI is good if you are ethical.
Wan Wei: Do you think that, with AI being used in such diverse ways, the reason people are not giving thought to ethics is that there are other areas in the field that seem more important to develop first?
Dr Pierre Brunswick: Let me explain. In AI, we have a roof, a ceiling, which is machine learning. Under that roof, we have pillars. Today, everybody is focusing on only one pillar, which is called deep learning. A house with only one pillar is not very stable. You need other pillars.
Now reinforcement learning has started, which is a good pillar. We also have edge computing, which is coming back, and we have neuromorphic hardware, which I am fully part of and which is critical because it's complementary. So, for me, the winner will be the one who is first across all four pillars, not only in one.
So, this war over low power, fought only on deep learning and on computing more, is good, but maybe it moves too fast on one pillar while resources are not evenly dispatched across the ecosystem, and those resources are needed for the other pillars. Neglecting one will be a failure.
Wan Wei: Using your metaphor, ethics is the ground where they are building the house of AI?
Dr Pierre Brunswick: So, let me give you an example so you understand what I mean.
There was a guy who was a climber. He fell and broke his legs, so we replaced them with an AI-driven foot prosthesis. It's so good that now he's back to being number one in the world in climbing. The problem is that other climbers now want to cut off their own legs, because they want the same equipment so they can become number one again.
Now the world has become crazy. Humans are crazy.
We should not cut off our legs to get a prosthetic system just because it makes us go faster. It is there to help you when you have a disaster. So, when I say ethics, it's also about the way humans behave.
Wan Wei: As you pointed out, sometimes, bad people will use AI in a way that is detrimental. What are some ways you would suggest for human beings to be able to fully trust AI?
Dr Pierre Brunswick: Again, you're focusing too much on deep learning and the data and everything. I don't believe that data alone can predict everything. I don't believe that we are all the same. And because we are different, by definition, it's impossible for us all to fit one type.
This is where neuromorphic hardware becomes critical, as long as it is not disruptive. It's a natural transition: I recognize, I learn; I am taught, I learn. That will help AI progress a lot. This is why we have more and more platforms being developed, and more people working on this technology, which is complementary to data.
We need both; but we also need to detect anomalies on the go. We don't need to send all the information to the cloud. It doesn't make sense, especially for the biometrics we are sharing on a blockchain. All the data needs to be protected and secured. We are the only ones who can protect everything, because neurons cannot be hacked. We need to pair AI with non-hackable solutions.
Wan Wei: Thank you so much Dr Pierre, for your time and your insights. On a parting note, do you have anything else to add?
Dr Pierre Brunswick: I think it's important that we start to tell people that, if there are good ethics in everything that is done, AI is not a danger. I want to finish with an example: there's a country called Malaysia. Malaysia is scared of AI because a lot of its workers are low-cost workers on production lines, and robots will replace them.
What we are putting in place with the government in Malaysia, through memorandums and so on, is to train those people to become educated and skilled enough to stand behind the robots. So, what is happening is that we are moving blue-collar workers to white-collar jobs. We will need them, and if you look at the number of jobs that will be created by AI, it is creating more jobs than it is losing. People have to understand that they have to go through education, and they will have a better life.
So, the quality of the product improves, and you serve the country by doing that. I'm very happy that Malaysia understood this, and other countries are also working on processes like that.
We have to go through education. If you don't focus on an AI education program right now, you may have a bigger gap than you think before people are mature enough to move behind the robots.
Wan Wei: Thank you so much, Dr Pierre.
Dr Pierre Brunswick: My pleasure.