Ethical AI development is still in its early stages. Although there is plenty of discussion surrounding the topic, most corporations are still crafting procedures and guidelines. Larger corporations, particularly those heavily dependent on AI, are at the forefront of these efforts and have invested substantially in this domain. The future doesn't belong solely to those who build AI systems; it belongs to those who earn trust by delivering dependable AI systems.
In this blog post, we'll look at how large corporations are managing AI ethics.
Let's begin with IBM, a behemoth that has maintained its position for 112 years and weathered numerous technological shifts along the way. IBM can serve as an excellent model for large organisations seeking to replicate its AI governance and ethics practices.
In his paper titled "Learning to Trust Artificial Intelligence Systems," Dr. Guruduth Banavar, Chief Science Officer and Vice President of Cognitive Computing at IBM Research, sheds light on the crucial steps IBM has taken to promote ethical AI development.
The establishment of an internal IBM Cognitive Ethics Board, to discuss, advise and guide the ethical development and deployment of AI systems.
A company-wide educational curriculum on the ethical development of cognitive technologies.
The creation of the IBM Cognitive Ethics and Society research program, a multi-disciplinary research program for the ongoing exploration of responsible development of AI systems aligned with our personal and professional values.
Participation in cross-industry, government and scientific initiatives and events around AI and ethics, such as the White House Office of Science and Technology Policy AI workshops, the International Joint Conference on Artificial Intelligence, and the conference of the Association for the Advancement of Artificial Intelligence.
Regular, ongoing IBM-hosted engagements with a robust ecosystem of academics, researchers, policymakers, NGOs and business leaders on the ethical implications of AI.
Microsoft products are present in almost every enterprise on earth, from SMBs to the largest of corporations. And as Microsoft weaves AI into its products, ethical AI development becomes all the more important.
For its part, Microsoft has published its "Responsible AI" framework, which guides its AI development efforts. I have created a video that summarises the foundations of this framework.
Google's CEO himself published the company's AI principles in a blog post in 2018. The principles can be summarised as follows:
Social Benefit: AI should be socially beneficial and offer overall advantages that outweigh potential risks and downsides.
Avoid Unfair Bias: AI systems should not create or reinforce unfair biases related to race, ethnicity, gender, nationality, income, sexual orientation, ability, or political/religious beliefs.
Safety: AI systems should be built and tested for safety to prevent unintended harm. Best practices in AI safety research should be followed.
Accountability: AI systems should provide opportunities for feedback, explanations, and appeal, ensuring appropriate human control.
Privacy: Privacy design principles should be integrated into AI development, including notice, consent, safeguards, transparency, and user control over data usage.
Scientific Excellence: AI development should uphold high standards of scientific excellence, fostering rigorous inquiry, integrity, and collaboration for advancements in various fields.
Responsible Use: AI technologies should be made available for uses that align with these principles, considering factors such as primary purpose, uniqueness, scale, and Google's involvement in the technology's deployment.
Google has, though, had its fair share of controversies, with some employees openly voicing their disagreement with the company's stand on ethical AI development. Enterprises will face a similar dilemma, where profit motives and ethical principles may struggle to co-exist, and this is a good reason to be clear about ethical AI development principles not just internally but also to publish them to the general public.
In comparison to the well-established giants we've explored so far, Amazon is a relative newcomer, yet it has become an integral part of our daily lives. Earlier this year, following a White House meeting with key players in the AI field, Amazon issued a press release in which it not only pledged its dedication to responsible AI practices but also took the important step of informing the public about its efforts to ensure AI security.
While Amazon's responsible AI guidelines share similarities with those of other tech giants, it's their emphasis on security that offers valuable insights. Here's a summary of the security-related aspects highlighted in their press release:
Security Testing: Commit to internal and external adversarial-style testing (also known as "red-teaming") of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas.
Collaboration: Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. Incentivize third-party discovery and reporting of issues and vulnerabilities.
Content Verification: Develop and deploy mechanisms that enable users to determine if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.
Cybersecurity: Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
Transparency and Accountability: Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias.
SAP is part and parcel of business processes across many industries, especially large corporations.
And SAP is infusing AI into its technologies; in fact, AI-enabled business automation seems to be the core focus at SAP these days. Therefore, a lot of emphasis has been placed on ensuring the ethical development of AI at SAP.
SAP's ethical AI development is based on seven guiding principles:
Driven by our values.
Designed for people.
Business beyond bias.
Transparency and integrity.
Quality and safety standards.
Data protection and privacy.
Engagement with wider societal challenges of AI.
They have also set up a governance framework:
An AI Ethics Steering Committee, which develops overarching guidance.
An AI Ethics Advisory Panel, which includes external experts and provides input on the framing of ethical AI policies.
A Trustworthy AI work stream, a group of employees who work out how to implement the guidelines in development.
To learn more about how SAP puts the guiding principles to practice, I recommend watching the webinar recording below.
In conclusion, the landscape of ethical AI development is evolving rapidly, with major corporations like IBM, Microsoft, Google, Amazon, and SAP taking significant steps to shape the future of AI governance and ethics. These industry leaders are setting the bar high by implementing robust frameworks, research programs, and educational initiatives aimed at responsible AI development.
AI Governance maturity assessment
If you wish to get a quick check on the maturity level of your organisation's AI governance, please try our FREE online assessment tool.