In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements in safely deploying AI products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House in July last year. Microsoft and other companies promised to build responsible AI systems and commit to safety.
Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams building generative AI applications to measure and map risks throughout the development cycle. The company notes that it added content provenance information to its image generation platforms, which puts a watermark on a photo labeling it as made by an AI model.
The company says it has given Azure AI customers access to tools that detect problematic content such as hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to cover indirect prompt injections, where malicious instructions are embedded in the data ingested by the AI model.
It is also expanding its red-teaming efforts, including in-house red teams that deliberately try to bypass the safety features of its AI models, as well as red-teaming applications that allow third parties to test new models before release.
However, its red teams have their work cut out for them. The company’s AI deployments have not been free of controversy.
Natasha Crampton, chief responsible AI officer at Microsoft, said in an email to The Verge that the company understands AI is still a work in progress, and so is responsible AI.
“Responsible AI has no finish line, which is why we will never consider our work on voluntary AI commitments finished. But we have made great progress since signing them and we look forward to building on our momentum this year,” Crampton said.