Principal AI Fairness and Bias Researcher, The MITRE Corporation, Livermore, California
Why choose between doing meaningful work and having a fulfilling life? At MITRE, you can have both. That's because MITRE people are committed to tackling our nation's toughest challenges—and we're committed to the long-term well-being of our employees. MITRE is different from most technology companies. We are a not-for-profit corporation chartered to work for the public interest, with no commercial conflicts to influence what we do. The R&D centers we operate for the government create lasting impact in fields as diverse as cybersecurity, healthcare, aviation, defense, and enterprise transformation. We're making a difference every day—working for a safer, healthier, and more secure nation and world. Our workplace reflects our values. We offer competitive benefits, exceptional professional development opportunities, and a culture of innovation that embraces diversity, inclusion, flexibility, collaboration, and career growth. If this sounds like the choice you want to make, then choose MITRE—and make a difference with us.
MITRE’s newly established Artificial Intelligence and Autonomy Innovation Center consists of a multi-generational, diverse team united by a passion and a sense of urgency to harness AI and Autonomy to help solve problems in the public interest. We discover, create, and lead through empowering an inclusive community of collaborative innovators, learners, and technical creatives.
We are looking for people with a lifelong commitment to learning, who are passionate about harnessing artificial intelligence to make a positive impact in the world. We have built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We value our diversity of experience, knowledge, backgrounds, and perspectives and harness these qualities to discover connections, solve problems, and create extraordinary impact.
As a Principal AI Fairness and Bias Researcher, you will explore the development of trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. Specific research topics within the scope of the role include, but are not limited to, transparency, explainability, accountability, potential adverse biases and effects, mitigation strategies, algorithmic advances, fairness objectives, validation of fairness, and advances in broad accessibility and utility.
The types of questions we are exploring are:
How can AI assist users and offer enhanced insights without exposing them to discrimination in domains such as health, housing, law enforcement, and employment?
How can we balance the need for efficiency and exploration with fairness and sensitivity to users?
As our world moves toward relying on intelligent agents, how can we create a system that individuals and communities can trust?
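As a concrete sketch of what "validation of fairness" can look like in practice, the snippet below computes a demographic parity difference, one widely used fairness metric: the gap in positive-prediction rates across groups. The function name and data are illustrative only, not MITRE's methodology or tooling.

```python
def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate across groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, aligned with predictions

    Hypothetical example for illustration; real fairness validation
    would consider multiple metrics, confidence intervals, and context.
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    pos_rates = [positives / total for total, positives in rates.values()]
    return max(pos_rates) - min(pos_rates)

# Invented data: group "a" receives positives 75% of the time,
# group "b" only 25%, so the parity difference is 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A difference near zero suggests the model's positive rate is similar across groups; a large gap flags a potential adverse effect worth investigating and mitigating.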
We are looking for a blend of:
A cultivated interest in the societal impacts of AI technology, particularly in the areas of fairness and bias.
A track record of generating new ideas or improving upon existing ideas in artificial intelligence and/or machine learning, such as those demonstrated by one or more first-author publications or projects.
Experience managing/mentoring in a research environment.
Programming experience in creating high-performance implementations of machine learning algorithms.
Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience.
10+ years' work experience using AI to investigate and define potential adverse biases and effects, mitigation strategies, fairness objectives, and validation of fairness.
2+ years of successful leadership experience.
US citizen, able to obtain and maintain a US government clearance.
Experience in natural language understanding, computer vision, automated reasoning, machine learning, or artificial intelligence.
Recent programming experience (e.g., C, C++, Python, R, Java).
Contributions to research communities/efforts, including publishing papers on artificial intelligence and machine learning (AAAI, ACM, JMLR, ICLR, NeurIPS, ICML, ACL, CVPR).
PhD in Computer Science or a related technical field, or equivalent practical experience.
Relevant work experience, including full-time industry experience or experience as a researcher in a lab.
Experience mentoring, leading teams, and/or projects.
Examples of published code on GitHub or a similar repository.
Experience writing research grant proposals.
Strong publication record in a variety of forums (journals, conferences, etc.).
Ability to design and execute on a research agenda.
Preference given to qualified candidates holding an active secret clearance.
This requisition requires the following clearance(s):
MITRE is proud to be an equal opportunity employer. MITRE recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender, gender expression, sexual identity, disability, age, veteran status, and other protected status.
MITRE intends to maintain a website that is fully accessible to all individuals. If you are unable to search or apply for jobs and would like to request a reasonable accommodation for any part of MITRE’s employment process, please contact MITRE’s Recruiting Help Line at 703-983-8226 or email at firstname.lastname@example.org.
At MITRE, we solve problems for a safer world. Through our federally funded R&D centers and public-private partnerships, we work across government to tackle challenges to the safety, stability, and well-being of our nation. As a not-for-profit organization, MITRE works in the public interest across federal, state and local governments, as well as industry and academia. We bring innovative ideas into existence in areas as varied as artificial intelligence, intuitive data science, quantum information science, health informatics, space security, policy and economic expertise, trustworthy autonomy, cyber threat sharing, and cyber resilience.