Today, we’re excited to have Dr Lim Wee-Kiat, Senior Research Fellow at the Asian Business Case Centre, Nanyang Business School, Nanyang Technological University. Wee-Kiat was a speaker at IoT Asia.
In this interview, Wee-Kiat shares his thoughts on the impact of artificial intelligence on the future of work.
Wan Wei: Hello Wee-Kiat. Could you tell us more about yourself and what you’re currently doing?
Dr Lim Wee-Kiat: Hello Wan Wei, thank you for having me today!
I study how people and organisations use technology, especially in a business context. I’m trained as a sociologist, and my undergraduate degree was in communication, so I look at the social, communicative, and managerial aspects of how people and organisations use technology.
Wan Wei: What do you think is the distinction between intelligence augmentation and artificial intelligence?
Dr Lim Wee-Kiat: I think that we are already seeing a lot of IA (Intelligence Augmentation) happening in the world. Often, we just don’t realise it.
For example, when we drive, we use GPS navigation applications. There’s AI working behind the scenes, helping you drive. And there are many more such applications already in the market. One way to think about AI and IA is to borrow from the field of robotics.
In robotics, there’s a principle called the 4 Ds of Robotics. The first three Ds are established; the fourth is fairly new. The Ds refer to Dull, Dirty, Dangerous, and Dear, where dear means expensive. Dull work covers tasks so tedious that few of us are willing to do them. Dirty work is, for example, cleaning sewers and surveying sewage conditions. Dangerous work might be identifying survivors in the debris or wreckage after an earthquake. Finally, dear work refers to situations such as repairing infrastructure so expensive that humans have to be extremely careful, lest things break and we cause even more damage.
You can apply this principle to what we know about IA. For example, one study designed a legal AI and pitted it against 20 top corporate lawyers in the US. They were given one task: to review five non-disclosure agreements, or NDAs, which are a staple of corporate law practice. Both the human lawyers and the computer lawyer (i.e., the legal AI) were trying to spot errors. The legal AI found 94% of the errors; the human lawyers found 85%. The more interesting finding was the time taken: the human lawyers took about 50 to 160 minutes to review each contract, while the AI took on average only 26 seconds.
The lawyers were not only astounded; they were also ecstatic. They were not upset, because reviewing NDAs is typically high-volume, low-risk work. AI can now come in to do most of that dull and dirty work, freeing the human lawyers to focus their expertise on difficult, unique cases. So AI comes in to augment and support human beings, in this case lawyers, so they can do a better job.
Wan Wei: What’s one of the biggest myths you have ever heard about AI when it comes to the future of work?
Dr Lim Wee-Kiat: One of the biggest myths that I’ve ever heard is that we’re going to see AI becoming so powerful in the next few years that it will be all-present and all-intelligent: AI is going to decide the fate of human beings; it will know it is so superior that it should get rid of humans. I think we are still quite far from that scenario.
But you do see a proliferation of AI applications across many fields now. One of the most prominent is the success of AlphaGo in the game of Go. AlphaGo is great at Go, and its successor, AlphaZero, plays chess even better, chess being in some ways simpler than Go. However, if you ask AlphaGo to drive a car, it cannot, because it is a function-specific application of AI. AI applications today are narrow AI.
The scenario in which we already have a general AI that is able to learn, become sentient, reflect, ask questions, and even decide which questions it should answer… I think we are still quite far away from that.
Going back to the AlphaGo example: when AlphaGo beat Lee Sedol, it had about 2,000 CPUs and close to 300 GPUs powering it. Compare that sheer scale with Lee Sedol’s brain, which weighs at most about 1.5 kg: that is the amount of effort needed for AI to have a shot at beating a master.
We must also remember that it wasn’t purely an AI effort. Several months before the match with Lee Sedol, AlphaGo had beaten Fan Hui, the European Go champion. Fan Hui, in turn, became an advisor to the AlphaGo team. So the AlphaGo victory was not pure AI substitution; it was augmentation, a collaboration by design. At the end of the day, the effort is a human achievement, a human success.
Wan Wei: As you mentioned, current AI systems are a symbol of human achievement and success. What kinds of frameworks have to be created to ensure successful collaboration between human and AI?
Dr Lim Wee-Kiat: I don’t think there’s a definitive framework out there right now; everyone is still trying to figure it out. Based on our research (and our book, Living Digital 2040), you have to take a more techno-pragmatic view, which is to see technology for what it is. Daniel Dennett, the cognitive scientist and philosopher, puts it well, and I paraphrase: AI is not your friend, it’s not your foe, it’s not your colleague; it’s a tool.
Therefore, if AI is a tool, the question is: how best can you incorporate it into what you do? You have to be clear about why you would want to introduce AI. What are the values behind the decision? Is it safety? Is it efficiency? Then you also need to bring your wisdom and sensibility to identify where AI would not be a good fit, because the cost of implementation might cause you to lose other advantages you already have, for example, the human touch.
The next example is not about AI, but it illustrates the importance of thinking clearly about implementing technological solutions, AI included.
There was a case in the US, I believe, involving a supermarket. After the supermarket installed automated checkout machines, it realised that shoppers didn’t like to use them. It had to uninstall the machines and bring back the cashiers. With more forethought and design upfront, the supermarket could have avoided the cost of reversing course. Similarly, you have to understand why you want to introduce AI and where you want to put it. You need to work at the task level and look carefully at which tasks you want AI to replace or augment.
This relates to something we found in our research. The current talk about AI replacing jobs happens at a coarse, aggregate level. We believe AI doesn’t replace job by job, but rather task by task, which is a much more granular level.
You need to look carefully at the tasks AI can do, and also decide what you won’t let AI do, based on the outcomes you want to achieve. This requires social sensibility, because you also need to figure out who will be affected by AI. It cannot be just a pure economic game. Humans work not only because we need to feed ourselves. There’s dignity in work. We find meaning in it. We devote a lot of time to it. We fuse our identity with our jobs and our organisations.
To simply cast people out as economic digits… I think we can do better than that as a society. So if you implement AI, you have to think about the possibilities of reassigning people or creating new jobs for your staff. In some businesses you may have no choice but to let some people go. For those you keep, think about what they could do. Perhaps they could be the final sanity check on the decisions the AI arrives at?
The experienced staff who remain could also work in other departments, because they are the subject-matter experts in those specialised processes; they could help their colleagues figure things out much faster. Also, if AI is going to generate so much productivity gain, give some of it back to the rank and file. I think it’s possible to use AI for a win-win situation, not necessarily a win-lose one.
Wan Wei: Before we wrap up, is there anything you would like to add?
Dr Lim Wee-Kiat: With AI and automation coming on board, one of the skill sets that is becoming important is the ability to ask better questions. If you’re able to ask better questions, you can figure out which problems are critical enough to be worth devoting resources to. You can indiscriminately apply technology to a lot of things, but if you don’t ask the right questions, you won’t solve the right problems.
Let me give an example. This goes back to World War II, when the United States set up a classified research unit to bring statistics to bear on military decisions: how to deploy better and use resources more efficiently. One problem was that many warplanes were being shot down. Those that returned had many bullet holes, especially in the fuselage and wings. So, with finite resources to armour future planes, where would you put the armour? The answer: on the places where you don’t find bullet holes, such as the engines, because planes hit there never made it back. You need to ask the right question: what about the planes that didn’t return?
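A toy simulation can make this survivorship bias concrete. The numbers below are invented for illustration (they are not from the original wartime study): we assume each plane takes one hit in a random section, and that engine hits are far less likely to be survivable.

```python
import random

# Illustrative simulation of survivorship bias.
# The sections and return probabilities are assumptions, not historical data.
random.seed(42)

SECTIONS = ["fuselage", "wings", "tail", "engine"]
# Assumption: a plane hit in the engine rarely makes it home.
RETURN_PROB = {"fuselage": 0.9, "wings": 0.9, "tail": 0.9, "engine": 0.2}

actual = {s: 0 for s in SECTIONS}    # hits across ALL planes
observed = {s: 0 for s in SECTIONS}  # hits visible on RETURNING planes only

for _ in range(10_000):
    hit = random.choice(SECTIONS)    # each plane takes one hit, uniformly
    actual[hit] += 1
    if random.random() < RETURN_PROB[hit]:
        observed[hit] += 1

# Engines are hit as often as any other section, yet show few holes on
# the planes we can inspect -- exactly the section that needs armour.
for s in SECTIONS:
    print(f"{s:8s} actual hits: {actual[s]:5d}  seen on returners: {observed[s]:5d}")
```

The data you can collect (holes on returners) systematically under-counts the fatal hit locations, which is why the naive answer of armouring the hole-riddled sections is wrong.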
This is what I mean by critical thinking: if you ask the wrong question, you’ll solve the wrong problem. Your solution will only be as good as your question. This is where I think schools need to get better. The skill of asking better questions is going to become even more important, because many solutions may come from AI. However, AI cannot pick which questions to solve; that selection is going to be done by us.
Wan Wei: I understand that you have a book. Could you tell us more about it?
Dr Lim Wee-Kiat: The book is titled Living Digital 2040.
It’s published by World Scientific. In the research study, we spoke with about 170 people through interviews, workshops, or group discussions. We looked at the impact of technology on healthcare, education, and work in Singapore. In the education sector, we spoke to principals, vice-principals, and heads of IT departments. In healthcare, we spoke with nursing directors, nurses, specialists, and hospital CEOs. In the work sector, we worked with KPMG, among others. We also interviewed start-ups and many others, including experts.
In the book, we talk about different scenarios that could manifest in the future. We were not trying to predict what the future is going to be like; it’s more of a thought experiment. We were hoping to piece together different pictures of how the future might look and what that would mean for us. We also put in recommendations and tools that individuals and organisations can use. One of the things we raise is that you need to look at what you do right now, task by task; then you may be able to figure out which tasks are going to be taken over by technology.
At the end of the day, for us as individuals, we have to learn to experiment. If AI is coming, there’s no point trying to avoid it. You have to get comfortable with it and see what works for you and what doesn’t. Then you know where to position yourself, and you have a better idea of where you can go.
Wan Wei: Thank you so much for your time today.