Recently I had the privilege of being interviewed by Ilya Billig, organizer of the video cast series GenAI: Reality Check - Business Outcomes of GenAI. Here is a transcript of the interview.
Ilya: Hi, welcome to Reality Check. My guest today is Biju Krishnan, an Enterprise Architect with over 20 years of experience. Biju, can you introduce yourself and tell us how you became an AI Ethics Assessor?
Biju: Thanks for inviting me. I've been designing solutions across different technologies for the last 20 years, focusing on analytics, big data, AI, and machine learning in recent years. Lately, I've been reflecting on the implications of AI in our daily lives, and I've started learning more about incorporating ethical principles into AI applications. My goal is to combine my practical experience in deploying AI applications with the ethical values we should uphold as a society.
Ilya: What's your view on the future of AI?
Biju: I currently work for Norderia, an SAP consultancy, where I focus on using AI to increase the productivity of end users and consultants. I see AI as a tool that augments human skills and knowledge, not as a replacement for human workers.
Ilya: Why do we need to consider ethics in AI solutions?
Biju: Ethical AI matters because AI systems should uphold our societal principles. Key principles include transparency, ensuring that AI systems produce accurate and consistent results, making systems reliable and safe, preventing unintentional discrimination, and ensuring human accountability and control.
Ilya: Is there a special set of AI ethics we need to think about?
Biju: We have codified ethical values for structured assessment, such as transparency, explainability, safety, security, robustness, and fairness. The IEEE CertifAIEd process, for example, provides ontological specifications for these values.
Ilya: Are we going to have 100 Commandments for AI?
Biju: Currently, the IEEE CertifAIEd program has five pillars. As AI evolves, we may need to consider additional values, especially as we advance towards AGI or super AI.
Ilya: How is AI Ethics going to be enforced in companies?
Biju: Large companies like Microsoft have responsible AI guidelines. However, only a small percentage of companies have a dedicated responsible AI function. Companies should draft responsible AI guidelines, educate users on AI's capabilities and limitations, and set up processes to identify and manage high-risk use cases.
Ilya: What steps should companies take to ensure future compliance with ethical AI?
Biju: Companies should establish organization-wide responsible AI guidelines, educate users, create a use case registry, conduct ethical profiling of each use case, and involve assessors for high-risk use cases.
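For the technically inclined, here is a minimal sketch of what such a use case registry with ethical profiling might look like. Everything in it is a hypothetical illustration: the names (UseCase, RiskTier, UseCaseRegistry) and the risk tiers, which loosely echo the low-to-prohibited gradation discussed later in the interview, are assumptions of mine, not part of IEEE CertifAIEd or any legislation.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk gradation; tier names are assumptions, not a standard."""
    LOW = "low"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class UseCase:
    """One entry in the organization's AI use case registry."""
    name: str
    owner: str
    description: str
    risk_tier: RiskTier
    requires_assessor: bool = False


class UseCaseRegistry:
    """A minimal registry that flags high-risk use cases for assessment."""

    def __init__(self) -> None:
        self._cases: dict[str, UseCase] = {}

    def register(self, case: UseCase) -> None:
        # Ethical profiling step: high-risk and prohibited use cases are
        # flagged for review by an independent ethics assessor.
        if case.risk_tier in (RiskTier.HIGH, RiskTier.PROHIBITED):
            case.requires_assessor = True
        self._cases[case.name] = case

    def pending_assessments(self) -> list[UseCase]:
        # Everything flagged during profiling and not yet cleared.
        return [c for c in self._cases.values() if c.requires_assessor]


if __name__ == "__main__":
    registry = UseCaseRegistry()
    registry.register(UseCase(
        name="resume-screening",
        owner="HR",
        description="Automatically ranks job applicants",
        risk_tier=RiskTier.HIGH,
    ))
    for case in registry.pending_assessments():
        print(f"{case.name} ({case.risk_tier.value}) needs an ethics assessment")
```

In a real organization this would live in a governance tool rather than in code, but the structure captures the idea: every use case is recorded, profiled for risk, and high-risk cases are routed to an assessor before deployment.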
Ilya: Who can help companies with AI Ethics?
Biju: Large companies often have a Chief AI Ethics Officer. The field is nascent, but there are experts and frameworks such as IEEE CertifAIEd. The US is adding AI requirements to existing laws, and China has started legislating on AI.
Ilya: Is ethical use of AI binary, or is there a gradation?
Biju: There's a gradation of risk in AI applications, ranging from low risk to high risk, with some applications prohibited outright. Ethical profiling helps judge the risk and the nature of an AI application.
Ilya: What's happening with AI legislation?
Biju: The EU AI Act is in progress, with a focus on foundation models. The US is incorporating AI governance into state laws, and China has also released regulations on AI.
Ilya: What's the future of AI Ethics Assessors?
Biju: The demand for AI Ethics experts will grow, especially with upcoming legislation. Training in responsible AI will become more prevalent.
Ilya: Can you share use cases where AI elevates people professionally?
Biju: AI tools like ChatGPT can help non-native English speakers in their jobs. In my organization, we focus on using AI to simplify business processes, improving both productivity and work-life balance.
Ilya: What question would you like to ask the next guest?
Biju: I would ask them to identify AI applications in their industry that could be beneficial but carry significant risks.