In A Nutshell
Contribute to the development of safe and effective AI models for healthcare applications.
Responsibilities
- Design and apply practical, scalable methods to improve the safety and reliability of our models, such as RLHF, automated red teaming, and scalable oversight.
- Evaluate methods using health-related data, ensuring models provide accurate, reliable, and trustworthy information.
- Build reusable libraries for applying general alignment techniques to our models.
- Proactively assess the safety of our models and systems, identifying areas of risk.
- Work with cross-team stakeholders to integrate methods into core model training and launch safety improvements in OpenAI’s products.
Skillset
- Have 4+ years of experience with deep learning research and LLMs, especially practical alignment topics such as RLHF, automated red teaming, and scalable oversight.
- Hold a Ph.D. or other degree in computer science, AI, machine learning, or a related field.
- Stay goal-oriented rather than method-oriented, and don’t shy away from unglamorous but high-value work when needed.
- Possess experience making practical model improvements for production deployment.
- Own problems end-to-end, picking up whatever knowledge you’re missing to get the job done.
- Be a team player who enjoys collaborative work environments.
- Demonstrate passion for AI safety and improving global health outcomes.
- Bonus: possess experience in health-related AI research or deployments.