Artificial Intelligence (AI) has become part of our daily lives: it helps us find the quickest way home, recommends music and TV programs, and powers voice assistants. AI also drives important functionality in our education technology products. As AI continues to evolve quickly, it has the potential to unlock transformative innovation in education and other areas of life that will benefit our clients and society at large.
Every new and powerful technology comes with risk, and AI is no exception. Harmful bias, inaccurate output, lack of transparency and accountability, and AI that is not aligned with human values are just some of the risks that must be managed to allow for the safe and responsible use of AI. We understand that we are responsible for managing these risks and for helping our clients do the same.
The lawful, ethical, and responsible use of AI is a key priority for Eduquery. While the advent of sophisticated generative AI tools has brought more attention to AI risks, such risks are not new. Below, we describe how we manage AI risks and help educate our clients.
Our Trustworthy AI program is aligned with the NIST AI Risk Management Framework and with emerging legislation such as the EU AI Act. The program builds on, and integrates with, our ISO-certified privacy and security risk management programs and processes.
Because AI will continue to evolve rapidly, we must stay nimble. We are committed to continuously enhancing our Trustworthy AI program so that we and our clients can use AI-powered product functionality safely and responsibly.
As part of our Trustworthy AI program, we commit to implementing the following principles. These principles are based on and aligned with the NIST AI Risk Management Framework, the EU AI Act, and the OECD AI Principles. They apply both to our internal use of AI and to AI functionality in the products we provide to our clients.