VPP-5 - LLM Scientist


We are seeking an LLM Scientist with expertise in natural language processing (NLP), large language models (LLMs), and deep learning. The ideal candidate will have strong experience fine-tuning and adapting transformer-based models, optimizing Retrieval-Augmented Generation (RAG) pipelines, and deploying scalable AI-driven solutions. You will design, train, and evaluate language models, working closely with engineers to integrate them into production systems.

Department: Machine Learning Engineering
Project Location(s): United States - Remote
Job Type: Full-time - Contract
Education: Bachelor's

Key Responsibilities

Large Language Model (LLM) Research & Development:

Fine-tune and optimize transformer-based models (GPT, BERT, T5, Llama, Mistral, etc.) for various business applications.

Conduct experiments on prompt engineering, fine-tuning, and parameter-efficient training methods (LoRA, QLoRA, adapters).

Design and evaluate custom loss functions, data augmentation techniques, and optimization strategies for domain-specific applications.
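To make the parameter-efficient methods above concrete: LoRA freezes the base weights W and learns a low-rank update, so the adapted matrix is W + (alpha / r) * B @ A. The sketch below uses toy dimensions and plain Python lists purely for illustration; a real implementation would use PyTorch with a library such as Hugging Face PEFT.

```python
# LoRA-style low-rank weight update: W' = W + (alpha / r) * B @ A
# Toy sketch with plain Python lists; dimensions are illustrative only.

def matmul(a, b):
    """Multiply two matrices represented as lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_update(W, A, B, alpha=16, r=2):
    """Return the adapted weights W + (alpha / r) * B @ A.

    W: (d_out x d_in) frozen base weights
    B: (d_out x r) and A: (r x d_in) are the trainable low-rank factors.
    """
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Rank-1 adapter on a 3x4 layer: 7 trainable values instead of 12.
W = [[0.0] * 4 for _ in range(3)]          # frozen base weights (toy)
B = [[1.0], [2.0], [3.0]]                  # d_out x r
A = [[1.0, 0.0, 0.0, 0.0]]                 # r x d_in
W_adapted = lora_update(W, A, B, alpha=2, r=1)
```

The point of the rank-r factorization is that only d_out*r + r*d_in parameters are trained, which is what makes single-GPU fine-tuning of large models feasible.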

Retrieval-Augmented Generation (RAG) & Vector Stores:

Develop and optimize RAG architectures to improve retrieval efficiency and response relevance.

Work with vector databases (FAISS, Pinecone, Chroma, Milvus) for embedding search and retrieval tasks.

Implement and experiment with retrievers, re-rankers, and hybrid search techniques to improve response quality.
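At its core, the embedding search that vector databases like FAISS or Milvus perform at scale is a nearest-neighbor lookup over dense vectors. The sketch below shows that core idea with a tiny in-memory index and brute-force cosine similarity; the document IDs and 3-dimensional embeddings are made-up toy values, and production stores replace the linear scan with approximate nearest-neighbor indexes.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, index, k=2):
    """Return the top-k (score, doc_id) pairs by cosine similarity."""
    scored = [(cosine(query_vec, vec), doc_id) for doc_id, vec in index.items()]
    return sorted(scored, reverse=True)[:k]

# Toy in-memory "vector store": doc_id -> embedding (values illustrative).
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.7, 0.7, 0.0],
}
results = retrieve([1.0, 0.1, 0.0], index, k=2)
```

In a full RAG pipeline, the retrieved passages would then be re-ranked (e.g., with a cross-encoder) and injected into the LLM prompt as grounding context.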

Model Deployment & Optimization:

Optimize LLM inference speed, memory efficiency, and cost using quantization, pruning, and distillation techniques.

Deploy LLM-based solutions on cloud (AWS, GCP, Azure) or on-prem environments, ensuring scalability and reliability.

Experiment with low-latency deployment frameworks (vLLM, DeepSpeed, Triton).
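Of the optimization techniques above, quantization is the most direct to illustrate: weights are mapped from float to a small integer range plus a scale factor, cutting memory roughly 4x for int8. The snippet below is a minimal sketch of symmetric per-tensor int8 quantization with hand-picked toy weights; real deployments use library implementations (e.g., bitsandbytes, GPTQ) with per-channel scales and calibration.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: returns (int weights, scale)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate floats."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.0, 1.27]        # toy float weights
q, scale = quantize_int8(w)
recon = dequantize(q, scale)        # lossy reconstruction of w
```

The trade-off is a small, bounded rounding error per weight in exchange for lower memory traffic, which is usually the bottleneck in LLM inference.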

AI Experimentation & Continuous Improvement:

Stay current with the latest LLM research (e.g., from OpenAI, Meta, Google DeepMind, Hugging Face).

Experiment with multi-modal AI (text, image, video, audio) and reinforcement learning to expand LLM capabilities.

Publish research findings, contribute to open-source projects, and present innovations at conferences.

Qualifications

Required Technical Skills

Deep Learning & NLP: TensorFlow, PyTorch, Hugging Face, Transformers.

Model Training & Optimization: Fine-tuning LLMs, quantization, pruning, distillation.

Vector Databases & Retrieval: FAISS, Pinecone, Chroma, Milvus.

Cloud Deployment: AWS, GCP, Azure, Kubernetes, Triton.

Data Engineering: Preprocessing pipelines, tokenization, large-scale datasets.

Experimentation & Evaluation: BLEU, ROUGE, METEOR, perplexity, human evaluation.
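Among the evaluation metrics listed, perplexity has the simplest definition: the exponential of the mean negative log-likelihood the model assigns to each token. A minimal sketch, using made-up per-token log-probabilities in place of real model outputs:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model assigning probability 0.25 to every token has perplexity 4:
# it is, on average, as uncertain as a uniform choice among 4 tokens.
logprobs = [math.log(0.25)] * 10   # toy values, not real model outputs
ppl = perplexity(logprobs)
```

Lower perplexity means the model finds the held-out text more predictable; unlike BLEU or ROUGE, it needs no reference output, only the model's own token probabilities.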

Soft Skills

Problem-solving: Ability to handle complex AI challenges with a research-driven approach.

Collaboration: Work with engineers, data scientists, and product teams to translate research into real-world applications.

Adaptability: Eagerness to explore new techniques and push the boundaries of generative AI.

Communication: Strong writing and presentation skills for both technical and non-technical audiences.

Apply now