DE Jobs


Job Information

Microsoft Corporation Principal AI Safety Researcher - AI Red Team in Redmond, Washington

Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as securing our own internal estate. Our culture is centered on embracing a growth mindset, inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Do you have research experience in adversarial machine learning or AI safety? Do you want to find failures in Microsoft’s big-bet AI systems impacting millions of users? Microsoft’s AI Red Team is looking for a Principal AI Safety Researcher to work alongside security experts and push the boundaries of AI red teaming. We are an interdisciplinary group of red teamers, adversarial machine learning (ML) researchers, Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft’s big-bet AI systems. Your work will impact Microsoft’s AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot. We are looking for a Principal Researcher with experience in adversarial machine learning or AI safety to help make AI security better and help our customers expand their use of our AI systems. We have multiple openings and are open to remote work. We are focused on open source and on helping the community with our research, releasing tools such as Counterfit and PyRIT. Publishing papers is not required in this group, but it is encouraged.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

  • Research new and emerging threats to inform the organization

  • Discover and exploit Responsible AI vulnerabilities end-to-end in order to assess the safety of systems

  • Develop methodologies and techniques to scale and accelerate responsible AI Red Teaming

  • Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems

  • Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations

  • Embody our Culture (https://www.microsoft.com/en-us/about/corporate-values) and Values (https://careers.microsoft.com/us/en/culture)

Qualifications

Qualifications - Required:

  • Doctorate in relevant field AND 3+ years related research experience OR equivalent experience.

  • Research experience, especially in adversarial machine learning or the intersection of machine learning and security

Other Requirements

  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check.

  • This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Research Sciences IC5 - The typical base pay range for this role across the U.S. is USD $137,600 - $267,000 per year. A different range applies in specific work locations, within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $180,400 - $294,000 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay

Microsoft will accept applications for the role until June 15, 2024.

#MSFTSecurity #MSECAIR #AIRedTeam

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations (https://careers.microsoft.com/v2/global/en/accessibility.html) .