We are seeking a highly motivated and talented Machine Learning Research Engineer to join our team in Berlin. As a member of our research team, you will be responsible for developing and implementing new machine learning models and algorithms to improve the performance of our search and retrieval systems.
What you'll do
- Relentlessly push search quality forward — through models, data, tools, or any other leverage available.
- Train and optimize large-scale deep learning models using frameworks like PyTorch, leveraging distributed training (e.g., PyTorch Distributed, DeepSpeed, FSDP) and hardware acceleration, with a focus on retrieval and ranking models.
- Conduct research in representation learning, including contrastive learning, multilingual and multimodal modeling, and evaluation for search and retrieval.
- Build and optimize RAG pipelines for grounding and answer generation.
What you need
- Understanding of search and retrieval systems, including quality evaluation principles and metrics.
- Strong proficiency with PyTorch, including experience in distributed training techniques and performance optimization for large models.
- Interest in representation learning, including contrastive learning, dense and sparse vector representations, representation fusion, cross-lingual representation alignment, training data optimization, and robust evaluation.
- Publication record in AI/ML conferences or workshops (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, SIGIR).
Why this matters
As a Machine Learning Research Engineer at Perplexity, you will work on cutting-edge projects that directly shape the performance of our search and retrieval systems. Your contributions will help us improve the accuracy and efficiency of our models and, ultimately, deliver better results for our users.