Derek Zhang (00:07) Hi everyone, welcome to the Convo AI World Podcast. Today's episode is a special face-to-face interview, and this time we're in Singapore, a city full of passionate AI innovators, developers, and entrepreneurs. All right, let's get started. It's such a pleasure to have a very special guest today, Dr James. Dr James is one of the most recognized figures in the AI community, not just in Singapore but across the entire Asia-Pacific region. James, could you please say hi to our audience?

Dr James Ong (00:38) Hello everybody, my name is James Ong.

Derek Zhang (00:40) Thank you, James. James, you are a household name in the regional AI ecosystem. For those who may be new to your work, could you please share a little bit about your background and the areas you are currently focusing on?

Dr James Ong (00:53) Okay, I wouldn't say I'm a household name! I do give a lot of talks because of some of the work I've done. My name is James Ong, and I have been in the artificial intelligence space for the last 40 years. I started in 1986 in an AI research lab, and I was also involved in AI startups in the US. Then, over the last 10 years, I started an AI think tank called the Artificial Intelligence International Institute, AIII for short, which advocates sustainable AI for humanity. You can find much of my work on Amazon in the book I co-wrote with my two co-authors, Mr Andeed Ma and Tan Siok Siok. The book is called AI for Humanity: Building a Sustainable AI for the Future, and it talks about the future of AI in terms of AI for humanity.

Derek Zhang (01:40) Nice, thanks for sharing. If I remember correctly, I think this is the fourth time we've met.

Dr James Ong (01:46) Amazing that we have had the chance to meet up so often.

Derek Zhang (01:49) Yeah, I think the last time we met was in Shanghai a few months ago. I know you are extremely busy these days.
Could you share with our audience what projects or initiatives you have been working on recently?

Dr James Ong (02:01) Okay, starting with last year, I'd say the book we wrote, published by Wiley, is something I've been busy with. I'm also preparing for the Chinese version of the book, to be published by Renmin Daxue Chu Ban She (Renmin University of China Press). So that is the core theme. This year I've been busy with the World AI Conference in Shanghai, where we ran the China-ASEAN AI Forum; that was very important for advocating trustworthy AI business. And recently, of course, the most important thing we have done is advocating AI for Humanity at the United Nations University AI Conference in Macau, to establish AI for Humanity as United Nations SDG Goal Number 18.

Derek Zhang (02:44) That's very cool. So I want to talk a little bit about local topics. In Singapore it is impossible to avoid talking about FinTech. Do you see a lot of opportunities for AI in FinTech?

Dr James Ong (03:00) Actually, in many areas. First of all, I'm the head of AI for the Global FinTech Institute, a non-profit organization in Singapore that helps shape the future of FinTech. I'm also an official ambassador for the Singapore FinTech Festival, so I try to promote it and bring global awareness to it. I have a special focus on finance and AI: where most people focus on how AI can empower finance, I'm actually looking at how finance can empower AI for humanity. I personally believe the right use of funds and money can play a role in governing AI and steering its direction, because that fits the human desire to make a profit.

Derek Zhang (03:44) Nice. So from a user perspective, I know many consumers remain cautious about AI in FinTech, since AI can operate with superhuman efficiency, right?
Its mistakes could also be magnified exponentially, and that makes people worried, especially people who are not from the AI industry. Do you think this is a valid concern?

Dr James Ong (04:07) It's a valid concern. First of all, even in traditional finance, when you make a payment you want to use AI for all kinds of risk management, like making sure that you're paying the right people, right? There are many other ways AI can help uncover risk. That's pretty traditional. But with the recent trend of agentic AI, or AI agents, where you delegate decision-making to the AI, including potentially financial transactions, you definitely have to be cautious, because if it makes a decision without your permission, you may end up spending your money or making financial commitments without knowing. So now society has to be even more careful about proper AI governance and the level of control end users and consumers need when using an agentic AI system to carry out certain tasks or make certain decisions.

Derek Zhang (05:07) Yeah, I think so. I know you're also one of the strongest voices advocating AI for humanity, right? Could you share with our audience, and you've mentioned it a few times already, what exactly it means? What areas does it cover? Some people, like me, might consider AI for Humanity equivalent to the concept of AI safety guardrails. Is that a correct understanding?

Dr James Ong (05:35) The way I look at AI for Humanity, if you look at some of my presentations, I talk about the sustainability of AI itself, making sure that AI doesn't go into another winter. Right now there's a lot of talk about OpenAI doing a lot of circular deals, where the same money circulates between deals. This could lead to an AI winter or an AI bubble.
So we want to make sure we don't repeat the mistakes that led to the last AI winter in the 80s and 90s. The other part of what we want to do is what we call AI for sustainability. The United Nations has set up the Sustainable Development Goals, and every time we do AI, we want to use it to meet those goals. That will help humanity, because these are more or less the objectives the world wants to achieve in terms of anti-poverty, health, and so on. Last, I want to point out that the United Nations has a report called Governing AI for Humanity. We want to make sure we put a global framework in place so that we are governing AI the right way and the benefits are good for all humanity.

Derek Zhang (06:41) Nice. You mentioned a few big names. Can you share with us some of the most memorable moments or milestones from your journey of promoting this vision?

Dr James Ong (06:52) Yes, I work with many institutions and non-profits. One of those moments, I would say, was when the United Nations University in Macau actually provided me a platform, a session, to advocate AI for Humanity as SDG Goal Number 18. That was incredible. I really appreciate the conference organizers believing in us and trusting that we are doing the right thing, and obviously it's quite aligned with the conference theme of building an equitable digital future. Elsewhere, I was in Davos this year, where I was invited to speak at many side events. Again, I talked a lot about AI for Humanity, and it was very well received. I spoke at the Economist Impact forum in Bangalore, India. Many people came up after my talk and said that some of the things I advocate really resonated with them.
Of course, at the World AI Conference in Shanghai, the theme this year was global solidarity in the AI era, and Geoffrey Hinton spoke there about his concern over the existential threat of digital intelligence replacing biological intelligence. Again, a very important milestone to be able to speak at that level. Also, in March this year I was at South by Southwest in Austin, Texas, where I gave a couple of talks on this topic, and they were very well received in the US as well. So it's actually pretty global. Despite different cultures, languages, and religions, this is something everybody is concerned about, and that's why the UN plays a critical role in providing leadership in this area.

Derek Zhang (08:26) Okay, gotcha. So this needs effort from everyone in the AI industry.

Dr James Ong (08:31) Yeah, I think even people who are not in the AI industry are concerned. Nowadays, AI doesn't just concern the developers or providers of AI. Anybody who uses AI on a daily basis, whether it's ChatGPT, DeepSeek, Anthropic's Claude, or Perplexity, is using AI, so it's a concern for all of them. Interestingly, one of the most common questions I got when I launched my book last year was from parents, especially mothers, who are concerned about their children's education: whether their children are using AI the right way, whether they are learning the right way, and whether they're studying something that will be relevant to their future careers.

Derek Zhang (09:16) Yeah, that's an interesting topic. Although we may not like to admit it, we already see that AI has indeed replaced many human jobs, right? As someone who has been advocating AI for humanity, do you feel frustrated about that reality?

Dr James Ong (09:33) I'm not frustrated.
I believe that, as with any other technology, with every wave of AI some jobs are replaced, jobs that become irrelevant, but new jobs are always created. That said, I believe AI is totally different from any other technology humankind has created. If you listen to the talks and books of Yuval Harari, who wrote Sapiens, his latest book, Nexus, talks about this: it's a cognitive revolution for humankind. So I would say we have to rethink the whole economics of jobs, and we need to educate our labor force to transition into roles where we can deliver things at a higher level. We also have to accept that at some point we may reach an age of abundance where people don't need to work so much anymore, and I think we need to be ready for that kind of society. But when we reach that age of abundance, we need to ensure an equal, or at least fair, distribution of wealth across society, because then we'll all be receiving some form of income without having to work so much.

Derek Zhang (10:46) Thank you for sharing; it's good to get your opinion. I also want to touch on a few recent updates from the AI industry, maybe with some specific examples. I'm not sure if you've followed this, but as far as I know, after Grok, we also noticed that GPT has changed its content restrictions: the prohibited-content rules now apply only to minors, not adults. This makes that content accessible to more people. Do you think this trend of large-model providers relaxing guardrails is a healthy development or not?

Dr James Ong (11:23) I feel that is not the right thing to do, in my personal opinion. But there are different measures to deal with it. First of all, as a consumer, you can be more aware.
If you're a parent, you need to make sure that you don't let your children use certain types of AI, because AIs have different personalities, whether you go with Gemini, ChatGPT, DeepSeek, or even Qwen from Alibaba, or Claude. They all have different levels of governance. So awareness and literacy for parents, to guide young children or the elderly, is important. That's why I think AI for Everyone matters: advocating AI for public good and public safety, through AI literacy and AI fluency, is important work to be done. There is also a development towards what I call specialized vertical AI, where you actually put the right kind of information and knowledge inside, designed to be safe for a target audience. That is one way of dealing with it. I also personally believe that we're going to move towards what we call sovereign AI, where countries build their own AI for their own communities and their own education. That is quite consistent with what happens today with sovereign telecoms, where every country owns its own telecom network. So I foresee this becoming a necessary public good, or public infrastructure, that every government has to provide for its own citizens.

Derek Zhang (12:52) Gotcha. That makes total sense. So it is funny: from what I've learned, on one hand OpenAI is relaxing safety guardrails, and on the other hand OpenAI recently introduced gpt-oss-safeguard. This lets people add a safeguard layer in the middle, between the user interface and the main LLM, and it is considered an effort by model providers to build more responsible AI. How do you view this contradictory move, relaxing prohibited content while strengthening safeguarding mechanisms?

Dr James Ong (13:31) I don't know why they relaxed it, but I can guess part of the reason.
Part of it is because the current state of AI technology makes it quite complicated to build all the guardrails. So the fact that they're providing an additional tool is a good thing. It makes things more configurable for the developers building on top of it. It's almost like creating a filter, right? Or you can call it censorship; it depends how you use it. I think it's a good thing that they provide that. My opinion is that, no matter what, when you're dealing with the large language model as a fundamental technology, you're going to have hallucination, and some of these things may not be cleaned up easily, because the cost of doing so will be very, very high. Therefore I feel that, if my foresight is correct, a few years down the road we're going to have a fundamental shift in basic AI technology, beyond the current large language model. If you listen to thought leaders like Yann LeCun or Fei-Fei Li, they're saying we have hit the limits of the large language model. So I think a new generation of technology will be created, where we may get a handle on some of the risks and control over the content.

Derek Zhang (14:38) Gotcha. I know you have been talking to a lot of leaders and influencers in the AI industry. Many people today are investing their efforts in app innovation or model optimization, right? We have already heard a lot of discussion about this. Do you see AI safety receiving enough attention at the same time?

Dr James Ong (15:01) I would say it's uneven. There are many, many good think tanks in the US doing a lot of work. Professor Yoshua Bengio started the non-profit LawZero to ensure safe AI for humanity; Stuart Russell from Berkeley is also working on that, as are people from MIT, like Max Tegmark.
And then in China, there are different people working on it as well. At the United Nations, we recently published a report on AI safety interoperability that covers four nations; I think it was the UK, South Korea, China, and Singapore, and I was involved in the Singapore part of that report. So I think there is attention. Singapore has also set up an AI safety hub. Again, the government is making a lot of good effort to invest in AI safety research, and Singapore is at the forefront. Under IMDA, there's something called the AI Verify Foundation, which provides tools for organizations to test AI. So Singapore is doing quite fine, but it's not so even across the world, and we are quite lucky to be in Singapore providing this. We hope to make this technology more widely available, and open source will be one way to bring it to many other countries.

Derek Zhang (16:23) Got you. I think you just mentioned the safety hub, right? It is backed by the Singapore government.

Dr James Ong (16:28) Yes, I think it's the government supporting and coordinating the best resources in Singapore. And there are a lot of community events in Singapore as well. So Singapore is really at the forefront, and really doing it for the public good. I can see this culture cascading into many other countries as well, given the resources we have built out in Singapore.

Derek Zhang (16:48) Gotcha. I also want to get your opinion on something. When it comes to AI safety, I know many communities are working on it, as you mentioned. But to me personally, accountability still remains unclear. In your opinion, who should take ownership and responsibility for ensuring AI is safe? Should it be the model providers, the application developers, or someone else?
Dr James Ong (17:14) Well, because of the complexity of the supply chain for creating AI, there are data providers, data creators, people who decide what data to use; there's also the infrastructure, then model building, then building guardrails, then building applications. It's very hard to pin down who is responsible. It also varies from one industry to another: in autonomous driving there's hardware involved as well, while in education it's mostly cognitive. So I think the landscape is still being decided and standards are being built. There's an ISO standard, ISO 42001, that actually provides an AI management system, and with organizations in Singapore like RIMAS, the Risk and Insurance Management Association of Singapore, we ran a workshop together with IMDA to bring the standard to organizations. So that's one way we can do something about it at the corporate level: following standards, developing standards, and getting them adopted, so there is at least a common framework for organizations. Then at the public level, everybody must equip themselves with AI literacy so that they know how to protect themselves. So the public advocacy of AI for Everyone is also important. In early October, during the F1, we launched a conference called AICon 2025, which advocates AI for everyone, to bring AI literacy and fluency to the public: to educate the public, parents who look after children, and adults who look after the elderly, and to bring more awareness to the general workforce about what they need to do to protect themselves.

Derek Zhang (19:00) Gotcha. We've talked a lot about Singapore's AI development. I know you have a strong connection with other Southeast Asian countries like Indonesia and Malaysia. How do you see AI development there, especially AI for Humanity?
Dr James Ong (19:16) There's a pretty big community in Malaysia and Indonesia. I'm actually quite connected with them, and I gave quite a number of talks in Malaysia recently; in fact, I'm going back in December, and to Indonesia as well. So there are real efforts there, they are going very well, and leadership at the government level in both Malaysia and Indonesia is working hard on it. In fact, when we held our forum at the World AI Conference in July, we had Malaysia's Minister of Digital, Mr Gobind Singh, as a guest of honor, and Indonesia's Vice Minister of Higher Education, Science and Technology, Stella Christie, also as a guest of honor. They came to share what is happening with the AI initiatives in their countries.

Derek Zhang (20:02) Gotcha, glad to see the action.

Dr James Ong (20:04) Yes, they are very well aware, and they are doing real things and developing initiatives in their respective countries. And every country is different, because of the profile of its industry, its stage of development, and its scale.

Derek Zhang (20:20) Exactly. Okay, now let's talk a little bit about conversational AI, which Agora is focusing on. I want to get your opinion: we have seen a growing number of voice-first AI agents emerging in 2025. Do you see that happening? What's your perspective on the future of voice-based interaction with AI?

Dr James Ong (20:42) I think it's inevitable, because certain things are suited to visual, text, and voice, right? Multimodal. There are certain applications where voice plays a very critical role, for example when you're driving, or when you're busy doing other things. But now, of course, with autonomous driving you can go beyond voice and even watch video, and so on. So voice, I would say, is going to happen one way or another.
The key thing is that right now you can receive a phone call from a voice AI that sounds completely real, like a deepfake voice, right? That's why, as we have discussed before, there are more safety and security measures providers can put in place to make sure voice AI is not misused to scam people, because young children, adults, and the elderly can all fall victim to that.

Derek Zhang (21:37) Nice, yeah. I also have a question about AI products such as assistants, right? Assistant agents or emotional companions have access to many types of personal information. With the significant improvement in agent memory, you can now easily ask the agent, 'what did I do last week?' or 'what made me sad last month?' So while we all want AI agents to be smarter and have better memory, does that make you worried?

Dr James Ong (22:07) Actually, in my recent experience using different AIs, especially ChatGPT, I've started to realize that every time I start a new conversation, it remembers a lot of things about me and starts to suggest things based on what I have shared with it before. I'm very well aware of it. Sometimes leveraging the memory is good for me; sometimes it also feels eerie.

Derek Zhang (22:15) Mm-hmm.

Dr James Ong (22:29) The question is more like: do I have control over how much it should know when I submit a prompt? I think that is a philosophical and ethical question we need to ask. The challenge right now is that I don't have that control. I always thought every conversation was independent, but it turns out that's not the case anymore. It remembers a lot of things about me, even from prior conversations, which I was hoping would be independent of each other. I don't know what other options there are, but I hope it gives me an option to say that a conversation must start clean, with zero memory.
That would be a wonderful function to provide, and I think it would be useful for a provider like OpenAI to do so. Of course, not just OpenAI, but every other provider as well.

Derek Zhang (23:18) Gotcha. I also want to chat about app innovation today. We want a better user experience, and we also want it to be safer, right? Many big models today are multimodal, natively supporting text, speech, and video. This end-to-end technology usually offers the better experience, but it is also more of a black box: it is harder to trace errors, and it is more prone to hidden biases. Alternatively, you can assemble everything yourself by setting up separate modules for STT, LLM, and TTS, and you can add safeguards to each module, for example profanity filters on the STT (speech-to-text) output and content moderation on the LLM. But that hurts the user experience and increases latency. So how do you view the balance between user experience and safety guardrails?

Dr James Ong (24:23) There is no single right answer, because one of the key terms for me is context. It depends on the context you're dealing with: what the conversation is about, and from whom to whom. If it's just fun and entertainment, that's fine; but if you're serious about doing something, then biases might influence your decisions. It also depends on who the user is. So context is very important, and that's why vertical applications of conversational AI may have their own sets of guidelines. For general public use, we already have analogues on the internet today, such as age limits. But it is actually a very hard problem.
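The modular pipeline Derek describes, separate STT, LLM, and TTS stages with a guardrail attached to each, can be sketched roughly as below. Everything here is an illustrative stub: the function names, the blocklist, and the canned model reply are invented for the example and do not correspond to any real vendor API.

```python
# A minimal sketch of a modular voice pipeline with per-stage guardrails.
# All component functions are stand-in stubs, not real services.

BLOCKLIST = {"badword"}  # stand-in for a real profanity lexicon

def stt(audio: str) -> str:
    # Stub speech-to-text: pretend the "audio" is already its transcript.
    return audio

def profanity_filter(text: str) -> str:
    # Guardrail on the STT output: mask any blocklisted terms.
    return " ".join("***" if w.lower() in BLOCKLIST else w
                    for w in text.split())

def llm(prompt: str) -> str:
    # Stub language model: echo a canned reply.
    return f"You said: {prompt}"

def moderate(text: str) -> str:
    # Guardrail on the LLM output: withhold replies containing masked content.
    return "[response withheld]" if "***" in text else text

def tts(text: str) -> str:
    # Stub text-to-speech: tag the text as synthesized audio.
    return f"<audio>{text}</audio>"

def pipeline(audio: str) -> str:
    # Each module's input and output can be inspected and logged separately,
    # which makes errors traceable per stage -- at the cost of extra hops
    # and latency compared with a single end-to-end multimodal model.
    transcript = profanity_filter(stt(audio))
    reply = moderate(llm(transcript))
    return tts(reply)
```

With these stubs, a clean utterance flows through unchanged, while a blocklisted term is masked by the STT-stage filter and then withheld by the LLM-stage moderator, making the trade-off Derek raises visible in the structure itself.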
And that's why the person who designs the user experience has a big responsibility at the ethical and moral level. How do you make the right decision? It's no different from how you build an app today, right? So again, ultimately it comes down to the people who build it: if you have the great power to create a product that serves so many people, you also carry great responsibility.

Derek Zhang (25:40) Gotcha. As you mentioned, we've also noticed that many companies today are taking AI misuse more seriously. For example, in the voice industry that I'm familiar with, some companies now require that, when you clone a voice, you don't just share an original audio file: they require you to do real-time dialogue verification to ensure voice-print authenticity. Have you observed this growing awareness, and these efforts to prevent AI misuse, in the industry?

Dr James Ong (26:08) I see. Yes. If you look at, let's say, voice, there are different ecosystem players involved, including the operators delivering the content. The capability you mentioned could actually be enforced by the operator, with a requirement that you hold a real live conversation before you can use that part of the capability, right? I think the necessary awareness and safeguards are needed, and how to do it again depends on context: a call center is one type, family phone calls are different, and conference calls like Zoom are different again. So I don't have a definitive answer, but I think we humankind should use common sense, applying certain measures in certain situations and using judgment. Again, the providers have a big responsibility to do so, and if they don't, the regulators are going to come in and set the guidelines. In your case, that would be the telecommunications regulator.

Derek Zhang (27:23) Got you, got you.
How do you see voice AI, like conversational AI, influencing human communication, culture, and trust in the long run? Do you have trust in it?

Dr James Ong (27:33) The challenge right now with voice deepfakes is that I've reached a point where I don't answer phone calls anymore, because even before AI, a call could be a con, right? But I definitely still want to talk to people over voice once I've validated them, basically. So there are different mechanisms the providers can offer, and I also put in my own safeguards. For example, I don't answer calls from outside my IDD region. If I receive a call on WhatsApp, I may decide whether or not to take it. And if it's an unknown number, I normally don't pick up; I want the person to provide a lot more information before I accept a voice call. Voice calls are very intimate, so it's not something I will easily open up to. Because of that, I'm far more cautious: if you want a voice call with me, you need to establish that it's trustworthy first, basically.

Derek Zhang (28:32) Gotcha. I know you have been promoting AI for Humanity for many years. Could you share what this mission truly means to you and what motivates you to pursue it as your lifelong vision?

Dr James Ong (28:46) Yes. Because I've been in AI for 40 years, I've seen the last AI boom, the last AI winter, and now another AI boom. Given the stage of my life right now, I feel I can bring my experience, what I've learned, and what I foresee for the future. This is something I've been doing for the last 10 years, and that's why I started my AI think tank 8 years ago as a private think tank, so that we would have independence of thought to advocate AI for Humanity. And when I started 8 or 10 years ago, I wasn't sure whether this was something important.
I thought it was very important, but now, with the arrival of the large language model and ChatGPT, and all the feedback I've received, I know I'm working on something very, very important. So I've decided to dedicate a lot of my effort to it, and hopefully to find a sustainable development model, meaning sustainable in terms of technology, governance, and commercialization, so that it can be sustained while also meeting the Sustainable Development Goals as defined by the United Nations. That's why advocating for SDG Number 18 as AI for Humanity is one of my initiatives, and there are many other initiatives we are working on that link to AI technology, commercialization, and governance, including many of the startups that I'm nurturing and incubating.

Derek Zhang (30:09) Gotcha, I like the idea. So you are trying to build a community around the sustainable development of AI safety, AI governance, and AI commercialization.

Dr James Ong (30:21) Yes. Bringing awareness is one part, and creating an ecosystem is another, but really developing actual initiatives, working with startups, getting them moving in the right direction, and building the right technology for humanity's benefit and for AI governance, is also part of it. So we are not just talking about it; we are actually taking action. And hopefully these are profitable ventures as well as impactful ventures at the same time.

Derek Zhang (30:49) Cool. Thanks for sharing. Before we wrap up, I want to ask about your plans for next year.

Dr James Ong (30:56) Next year there will be a couple of things. One is to continue taking AI for Humanity as SDG Number 18 to the next level, to a wider audience. At the same time, my book will be published in Chinese, and hopefully in other languages as well; Spanish and Arabic are in discussion.
But at least with the Chinese version, I can reach a pretty big audience in China and get them to embrace AI for Humanity, to contribute to a better world globally. Last but not least are very specific initiatives: education initiatives, finance-related initiatives, and ecosystem initiatives where we bring public awareness. These are some of the things we are working on.

Derek Zhang (31:38) Got you. I've also read the English version of the book, and I like it very much. I learned a lot.

Dr James Ong (31:44) Good. Hopefully, after reading it, you'll share your feedback with the public so that they are more aware of it.

Derek Zhang (31:50) Thank you so much. Yes. OK, so this is already my third trip to Singapore this year. I'm truly impressed by how fast AI is developing here, as you mentioned. I want to get your opinion on Singapore's AI development and the talent landscape. Do you think it is in good shape right now?

Dr James Ong (32:08) It's good. Singapore really attracts a lot of talent, first by cultivating and developing talent locally. At the same time, we also bring in a lot of talent from all parts of the world who want to contribute, and as an international hub we work with talent in different countries, because with today's telecommunications infrastructure, you don't have to be physically here; your heart could be here, working with us, right? Singapore offers a very good environment to interact and collaborate. In fact, in May this year Singapore produced what we call the Singapore Consensus on AI safety, bringing in some of the best talent from all over the world. That was great leadership by Singapore. So that is part of the talent development, and of course we have a very good education framework as well. We would love to educate as many people as possible.

Derek Zhang (33:03) Yeah, I think that also aligns with what we are seeing in Singapore.
We meet a lot of AI startup companies, and they are very passionate.

Dr James Ong (33:12) Beyond AI startups, we also have institutions, universities like NUS, SUTD, NTU, SMU, and SUSS. There are also other universities, like INSEAD in Singapore, that play a big role, with large alumni networks that have a big impact in the business school area as well.

Derek Zhang (33:31) Gotcha. Okay, I think it is time to say goodbye. Before we close, Dr James, do you have any other messages or suggestions for AI developers and our audience?

Dr James Ong (33:45) I would say: follow us on AI for Humanity, and especially support us on SDG Number 18 at the UN, because this is not just about us; it's about all of you. Whether you're a developer, a user, or an educator, you have a role to play, and whatever you do may seem small, but you can actually make a difference in steering AI towards AI for humanity. That's a message of hope I'd like to share with the audience.

Derek Zhang (34:14) Nice. Thank you, James. Thanks for joining us today; it's been a great pleasure having you on the show. I believe that building a responsible AI community is something all of us in the AI industry should strive for. It's not just James's personal effort: AI for Humanity should guide our collective actions to take the AI industry to the next level.

Dr James Ong (34:36) I couldn't have said it better than you just summarized; I 100% endorse what you said. All the customers and partners of Agora can contribute to this effort, and we can work together to make it happen globally.

Derek Zhang (34:50) Yeah, let's work together. Thank you, James. It's time to say goodbye. Thanks to everyone for tuning in, and see you in the next episode. See ya.

Dr James Ong (35:00) See you, everybody.