Junior AI/ML/Deep Learning Engineer
Department: Engineering
Focus Areas: LLMs, Vision AI, Generative Models
Experience: 1 to 2 years
Location: Bengaluru, India
Role Overview
We’re looking for a research-driven AI Engineer passionate about deep learning, modern architectures, and applied AI. In this role, you’ll bridge research and engineering — implementing research papers, designing experiments, and deploying production-grade AI systems across:
Large Language Models (LLMs) – text generation, fine-tuning, RAG systems
Computer Vision – classification, detection, segmentation
Generative AI – diffusion models, image generation, vision-language models
This is an engineering-heavy research role requiring both theoretical depth and hands-on implementation skills.
Key Responsibilities
Research & Experimentation
Read, analyze, and implement state-of-the-art research papers
Design controlled experiments with ablation studies and statistical validation
Prototype novel architectures and training techniques from recent literature
Maintain scientific documentation of experiments, findings, and methodologies
Model Development
Build and optimize transformer-based LLMs for text generation and instruction tuning
Develop vision models using CNNs and Vision Transformers (ViT)
Implement generative models like Stable Diffusion and GANs
Create multimodal AI systems (e.g., CLIP, BLIP) for vision-language understanding
Adapt large models using LoRA, QLoRA, RLHF, and prompt engineering
Engineering & Deployment
Build end-to-end training and data pipelines
Deploy models using FastAPI/Flask with optimized inference
Apply quantization, pruning, and distillation for model compression
Ensure clean, tested, and documented code with Git version control
Integrate models into scalable cloud environments (AWS/GCP/Azure)
Required Qualifications
Education & Core Skills
Bachelor’s/Master’s in Computer Science, AI/ML, Data Science, or related fields
Strong Python skills (Java is a plus)
Proficient in PyTorch (TensorFlow familiarity a bonus)
Solid understanding of Transformers, Attention Mechanisms, CNNs, and Vision AI
Research Capabilities
Ability to read and implement research papers independently
Strong foundation in experimental design, baselines, and evaluation metrics
Analytical mindset for debugging model performance
Excellent technical writing and documentation
LLM Expertise
Experience with GPT-style models and encoder-decoder architectures
Hands-on with fine-tuning workflows and prompt engineering
Understanding of RAG (Retrieval-Augmented Generation)
Familiarity with Hugging Face Transformers & Datasets
Vision & Generative AI
Knowledge of Diffusion Models (DDPM, Stable Diffusion)
Understanding of ViT / ResNet / EfficientNet architectures
Familiarity with CLIP, BLIP, and other vision-language models
Experience with image generation pipelines
ML Engineering
Strong with pandas, NumPy, scikit-learn, and OpenCV
Experience with MLflow / TensorBoard for experiment tracking
Backend knowledge: FastAPI / Flask for serving
Exposure to Docker and cloud platforms (AWS/GCP/Azure)
Commitment to software engineering best practices
Preferred (Strong Plus)
Publications or technical blogs in ML/AI
Open-source contributions (GitHub portfolio)
Experience with FAISS, Milvus, Pinecone
Familiarity with LangChain, LlamaIndex, ControlNet, ComfyUI, AUTOMATIC1111
Experience in 3D vision, video understanding, or reinforcement learning
What You’ll Gain
Mentorship from senior AI researchers and ML engineers
Hands-on experience with state-of-the-art LLMs and Generative AI
Opportunity to work on real-world projects across multiple industries
Collaborative R&D environment focused on experimentation and innovation
Access to GPU resources for large-scale model training
Freedom to explore and contribute new ideas to ongoing research
Interested?
Send your resume to hr@areta360.com