Job Application - MLOps - Gurgaon/Bangalore

  • Gurugram
  • EXL
Job Description (MLOps)

Job Location: Gurgaon/Bangalore
Experience: 5+ years
Notice Period: Immediate joiner to 60 days

Join EXL's innovative data science and engineering team, where we harness the power of generative AI to push the boundaries of data-driven solutions. At EXL, we're committed to exploring new frontiers in technology, and we need your expertise to drive us forward. Join our team and immerse yourself in a collaborative environment where, together, we'll tackle complex challenges, solve impactful problems for our clients, and redefine the art of the possible with data science. We have openings for multiple roles, summarized below.

Job Title: MLOps Engineer
Company: EXL Analytics

Job Description: We are looking for an experienced MLOps Engineer with expertise in Spark/PySpark, MLOps/LLMOps/DLOps, CI/CD, Kafka, Python, distributed computing, GitHub, data pipelines, cloud hosting, Azure and other Microsoft services, various data connectors, and more. This role involves designing, implementing, and optimizing data science pipelines, deploying machine learning models, and ensuring smooth operation in production environments.

Responsibilities:
  • Design, develop, and maintain data science pipelines for model training, evaluation, and deployment.
  • Manage and optimize infrastructure resources (e.g., cloud services, containers) to support model deployment and inference.
  • Collaborate with data scientists, software engineers, and DevOps teams to deploy machine learning models using MLOps best practices.
  • Automate end-to-end ML workflows, including data preprocessing, model training, evaluation, and deployment, using tools like Kubeflow or Apache Airflow.
  • Implement CI/CD pipelines for automated model deployment, testing, and monitoring.
  • Use Kafka and other messaging systems for real-time data processing and streaming analytics.
  • Optimize distributed computing infrastructure for scalability, performance, and cost efficiency.
  • Manage GitHub repositories for version control and collaboration on machine learning projects.
  • Use various data connectors and integration tools to access and process data from different sources.
  • Develop and maintain documentation for data science pipelines, infrastructure, and processes.
  • Stay up to date on emerging technologies and best practices in machine learning operations and data engineering.

Qualifications:
  • 5+ years of prior experience in data engineering and MLOps.
  • 3+ years of strong exposure to deploying and managing data science pipelines in production environments.
  • Strong proficiency in the Python programming language.
  • Experience with Spark/PySpark and distributed computing frameworks.
  • Hands-on experience with CI/CD pipelines and automation tools.
  • Experience deploying a generative AI use case in production, involving prompt engineering and the RAG framework.
  • Familiarity with Kafka or similar messaging systems.
  • Strong problem-solving skills and the ability to iterate and experiment to optimize AI model behavior.
  • Excellent attention to detail.
  • Ability to communicate effectively with diverse clients and stakeholders.

Education Background: Bachelor's or master's degree in Computer Science, Engineering, or a related field. Tier I/II candidates preferred.

If you are interested in this role, please share the following details along with your updated CV:
  • Total Experience:
  • Current CTC (Fixed):
  • Expected CTC (Fixed):
  • Notice Period:
  • Any other offer in hand:
  • Reason for change:
  • Skills:

Regards,
Nandini Sharma