We are seeking a Research Scientist/Research Engineer to join our team. In this role, you will develop novel methods to improve the alignment and generalization of large-scale generative models. You will collaborate with researchers and engineers to define best practices in data-driven AI development, and partner with top foundation model labs to provide both technical and strategic input on the development of the next generation of generative AI models.
Key Responsibilities:
- Research and develop novel post-training techniques, including SFT, RLHF, and reward modeling, to enhance core LLM capabilities across text and multimodal settings.
- Design and experiment with new approaches to preference optimization.
- Analyze model behavior, identify weaknesses, and propose solutions for bias mitigation and model robustness.
- Publish research findings in top-tier AI conferences.
Ideal Candidate:
- Ph.D. or Master's degree in Computer Science, Machine Learning, AI, or a related field.
- Deep understanding of deep learning, reinforcement learning, and large-scale model fine-tuning.
- Experience with post-training techniques such as RLHF, preference modeling, or instruction tuning.
- Excellent written and verbal communication skills.
- Published research in machine learning at major conferences (NeurIPS, ICML, ICLR, ACL, EMNLP, CVPR, etc.) and/or journals.
- Previous experience in a customer-facing role.
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.