The need for Ethics in AI
The launch of ChatGPT and its phenomenal success meant that AI became part of our everyday lives. This moment has been equated to the discovery of fire, and AI is now being compared to electricity. In essence, AI is no longer considered a niche technology; it has gained the status of a commodity.
With popularity comes great responsibility. Legislators around the world had already taken note of the power of AI-enabled applications and were working on legislation to protect society at large; with the launch of ChatGPT, however, the debate is no longer confined to parliaments and now features in everyday conversation. Countries and continents are scrambling to enact laws and to ensure the compliance of autonomous systems, particularly those powered by AI.
Need for Certification
The realm of AI Ethics is still in its infancy. Many organisations and product development teams remain unaware of, or uncertain about, how to engage AI Ethics experts for guidance while creating their solutions. Global institutional guidelines already exist, such as those proposed by the OECD and UNESCO, as well as the Global Partnership on AI framework. Meanwhile, the EU AI Act is slated for introduction by the close of 2023, with implementation to follow shortly. Given that product development life cycles typically span one to two years, organisations will likely want to address their AI Ethics requirements early in their initiatives. IEEE CertifAIEd™ provides a set of criteria and a methodology for evaluating and certifying AI systems in terms of ethical risk.
The certification process is divided into the following stages:
In the Enquiry phase, an IEEE Authorised Assessor (such as an AI Ethics Assessor) discusses various aspects of the assessment, such as the participating organisation's goals and expected outcomes. Ancillary matters, including the timeframe for completing the audit, the allocated budget, and the participation of stakeholders, are also covered. Once these details are gathered, the assessor presents the project scope, and both parties must agree on the scope and timelines.
In the next phase, the IEEE Authorised Assessor works with the participating organisation to agree on the human and socio-technical values affected by the product, whether under development or already deployed. The outcome of this work is the identification of the product's ethical risk profile and the applicable criteria.
During the assessment, the assessor establishes the appropriate criteria based on the product's risk profile and co-ordinates the collection of evidence that the product under scrutiny meets each of the criteria identified. The outcome of this exercise is documented in a Case for Ethics document that details the conformity of the product with the identified criteria.
Finally, an independent IEEE Authorised Certifier reviews the Case for Ethics document and provides a detailed assessment report, which may contain suggestions for improvement. If the validation is successful, the certifier grants the participating organisation the IEEE CertifAIEd™ mark and adds it to the CertifAIEd™ registry.
Benefits of the IEEE CertifAIEd™ mark
By providing certification guidance, conducting assessments, and offering independent verification, IEEE CertifAIEd™ empowers organisations to expand the adoption of responsible innovation. This, in turn, enhances the quality of AI systems, fosters trust among crucial stakeholders, and unlocks the associated advantages.
Engage an IEEE CertifAIEd™ Assessor
Here at aiethicsassessor.com, we are eager to participate in your quest to gain the IEEE CertifAIEd™ mark for your AI-enabled products and services.
Fill in the form and we will be in touch with you as soon as possible.