AI Governance Fellow

About the organization

CDT (Center for Democracy & Technology)

Organization type

Social Impact Organization

In A Nutshell

Location

Hybrid (Washington, DC, USA)

Salary

$80,000-$115,000

Job Type

Contract

Experience Level

Entry-level

Deadline to apply

July 19, 2025

Contribute to the work of CDT’s AI Governance Lab, focused on the responsible design, testing, monitoring, and regulation of AI systems.

Responsibilities

  • Developing, analyzing, prototyping, and amplifying best practices for AI governance solutions.
  • Advocating for the adoption of responsible AI governance solutions through multi-stakeholder initiatives, standards-setting, and direct-to-company engagement.
  • Advising policymakers on effective legislative and regulatory approaches.
  • Supporting civil rights organizations, consumer protection organizations, and other public interest advocates engaging on AI issues by providing technical and operational expertise.
  • Building bridges for the research community to better participate in and inform current policy debates, particularly around technical developments and their societal implications.
  • Tracking recent technical developments at the frontier of AI research and translating and communicating them to internal and external policy audiences.
  • Grounding AI research and policy in the needs of impacted stakeholder groups and the use context.

Skillset

  • A degree in a relevant discipline such as computer science, information science, engineering, economics, public policy, or similar research qualification and experience (including industry experience in AI risk management or governance); a graduate degree (e.g., Masters or PhD) or commensurate research and/or applied experience is desirable but not strictly required.
  • Demonstrated research and/or applied experience in the form of publications (reports, conference papers, peer-reviewed papers, etc.), presentations, frameworks, or other outputs, and the demonstrated ability to translate technical research findings to non-technical and/or non-expert audiences.
  • Familiarity with the AI governance landscape, including current debates around AI system and foundation model evaluation, approaches to risk assessment, and emerging regulatory frameworks. Candidates with knowledge of ongoing AI safety research, AI safety stakeholders, and related policy discussions, including debates around model capabilities, alignment approaches, and risk mitigation strategies, are particularly encouraged to apply.
  • Interest in technology’s societal impacts, particularly how AI systems affect individuals and communities based on race, gender, disability, income, immigration status, or other characteristics.
  • Excellent writing skills for communicating with technical, policy, and general audiences.
