
Microsoft Sponsored posts

Last week, Microsoft Vice Chair and President Brad Smith testified before the US Senate Select Committee on Intelligence on securing US elections from nation-state adversaries ahead of the 2024 general election. Microsoft’s Threat Analysis Center has detected numerous cyber-influence operations from Russia, Iran, and China aimed at distorting voter information and sowing discord. The testimony emphasized the importance of collaborative efforts within the tech community and with government entities to safeguard democracy. Read Smith’s written testimony for more details, including the actions Microsoft is proactively taking and its recommendations for Congress.

Microsoft is teaming up with the Institute for Nonprofit News’ Rural News Network to equip local, regional, and statewide newsrooms with additional resources to help them cover the 2024 elections. Supported by Microsoft’s Democracy Forward Program, RNN’s Text RURAL is an SMS-based service that uses AI to send tailored, fact-based news straight to residents of areas where broadband may not yet be readily available. The initiative includes geo-targeted ads, multilingual translations, and multimedia guides to ensure rural voters are well informed. The network, comprising more than 80 newsrooms, aims to strengthen democracy by providing crucial election information to often-overlooked rural areas. Learn more about the technology.

Microsoft is tackling the growing issue of non-consensual intimate imagery, or NCII, and AI-generated deepfakes by partnering with StopNCII. The collaboration allows victims to create digital fingerprints of their images to prevent unauthorized sharing; as of the end of August, Microsoft had taken action on nearly 269,000 images. Read more in Microsoft’s latest white paper about its approach and policy recommendations to protect the public from abusive AI-generated content.

Keeping humans at the center of AI doesn't just lead to more responsible products; it leads to better products overall. Five years ago, Microsoft created the Office of Responsible AI to harness the power of research, policy, and engineering to ensure ethical principles for AI are integrated into all of its systems. Read about the lessons in building responsible AI.

According to a recent survey, adults can correctly identify only about 50% of deepfake videos. Last month, Microsoft released new recommendations for US policymakers to protect the public from abusive AI-generated content. As synthetic content becomes increasingly advanced and widespread, the Microsoft On the Issues team explains how laws and policies need to evolve to keep pace with this technology. Read the latest.

AI technologies used to create deepfakes are widely available and not limited to those who care about responsible use. This means well-informed citizens are essential to curbing the spread of deepfakes and upholding the democratic process during this year’s presidential election. Even for skilled AI experts, spotting AI-manipulated images can be difficult. Try Microsoft’s Real or Not quiz to see just how hard it is to distinguish AI-generated images from real ones.

Microsoft’s AI Frontiers lab focuses on creating advanced AI systems that are not only capable and efficient but also reliable and socially responsible. In a Q&A, Ece Kamar, who leads the lab, shares her journey into responsible AI, starting with her early work on the Hubble Space Telescope, and emphasizes the importance of addressing AI risks and opportunities in a balanced way. Read the Q&A.
