In so many ways, AI creates exciting opportunities for all of us to bring new ideas to life. Microsoft is taking new steps to ensure these technologies are resistant to abuse. As a company, we are committed to a robust, technical, and comprehensive approach that protects people, grounded in safety by design. Our safety architecture is applied at the AI platform, model, and application levels. It includes ongoing red-team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system.

Durable media provenance and watermarking are essential to combating deepfakes in video, images, and audio. We use cryptographic methods to mark and sign AI-generated content with metadata about its source and history. Microsoft has been a leader in R&D on methods for authenticating provenance, including as a co-founder of Project Origin and the Coalition for Content Provenance and Authenticity (C2PA).

We use provenance, watermarking, and fingerprinting techniques to quickly determine whether an image or video is AI-generated or manipulated. We are committed to identifying and removing such deceptive and abusive content when it appears on our hosted consumer services, such as LinkedIn, our Gaming network, and other relevant services. The sketches below illustrate the general shape of these mechanisms.
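To make the prompt-blocking step concrete, here is a minimal sketch of what a preemptive classifier gate might look like. Everything in it, including the classify_prompt stand-in, the categories, and the BLOCK_THRESHOLD policy value, is an illustrative assumption, not a description of Microsoft's actual implementation.

```python
# Illustrative sketch only: a preemptive prompt gate of the kind described
# above. The classifier, categories, and threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class ClassificationResult:
    category: str      # e.g. "abuse" or "benign" (assumed labels)
    confidence: float  # 0.0 - 1.0


def classify_prompt(prompt: str) -> ClassificationResult:
    """Stand-in for a trained abuse classifier (hypothetical toy heuristic)."""
    blocked_terms = ("undress", "fake video of")
    if any(term in prompt.lower() for term in blocked_terms):
        return ClassificationResult("abuse", 0.97)
    return ClassificationResult("benign", 0.99)


BLOCK_THRESHOLD = 0.9  # assumed policy threshold


def gate_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to the generation model."""
    result = classify_prompt(prompt)
    if result.category != "benign" and result.confidence >= BLOCK_THRESHOLD:
        # Blocked prompts could also feed abuse monitoring and rapid-ban systems.
        return False
    return True
```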
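The cryptographic marking and signing described above can be pictured as binding a content hash and its source-and-history metadata under one signature. The sketch below assumes an Ed25519 key pair and a toy JSON manifest; real C2PA manifests use a standardized binary format (JUMBF with COSE signatures) and certificate chains rather than this layout.

```python
# Illustrative provenance-signing sketch, assuming an Ed25519 key pair and a
# toy JSON manifest. This shows only the general shape of such a scheme.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice, a protected signing key
public_key = private_key.public_key()


def sign_manifest(content: bytes, source: str, history: list[str]) -> dict:
    """Bind a content hash plus source/history metadata under one signature."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,    # e.g. the generating tool or service (assumed field)
        "history": history,  # prior edits or transformations (assumed field)
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": private_key.sign(payload).hex()}


def verify_manifest(bundle: dict, content: bytes) -> bool:
    """Check the signature, then check the content still matches its hash."""
    payload = json.dumps(bundle["manifest"], sort_keys=True).encode()
    public_key.verify(bytes.fromhex(bundle["signature"]), payload)  # raises on forgery
    return hashlib.sha256(content).hexdigest() == bundle["manifest"]["content_sha256"]


# Usage: sign at generation time, verify wherever the content later appears.
bundle = sign_manifest(b"<image bytes>", "example-generator", ["generated"])
assert verify_manifest(bundle, b"<image bytes>")
```

Signing the hash rather than the raw bytes keeps the manifest small, and any alteration to the media breaks the hash check even when the signature itself remains valid.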
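Fingerprinting, one of the detection techniques named above, often amounts to comparing a compact perceptual hash of incoming media against hashes of known manipulated content. The sketch below uses a simple average-hash over an 8x8 grayscale thumbnail; the match threshold and the known-fake registry are assumptions for illustration, not a description of any production detection stack.

```python
# Illustrative perceptual-hash (average-hash) fingerprinting sketch.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Hash an image as a 64-bit pattern of above/below-average pixels."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")


MATCH_THRESHOLD = 10  # assumed: <=10 differing bits of 64 counts as a match


def matches_known_fake(path: str, known_fake_hashes: list[int]) -> bool:
    """Compare an image's fingerprint against a registry of known fakes."""
    h = average_hash(path)
    return any(hamming(h, k) <= MATCH_THRESHOLD for k in known_fake_hashes)
```

Perceptual hashes are useful here because, unlike cryptographic hashes, they change only slightly when an image is re-encoded, resized, or lightly edited, so near-duplicates of known abusive content can still be flagged.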