Data Engineer with Innovative Entity in Insurance Industry

  • Hyderabad
  • 10XTD

Compelling Opportunity for Data Engineer with Innovative Entity in Insurance Industry

Employment | Immediate

Location: Hyderabad, India

Reporting Manager: Head of Analytics

Work Pattern: Full Time, 5 days in the office

Minimum Experience as a data engineer: 8 years

Responsibilities

· Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights

· Constructing infrastructure for efficient ETL and ELT processes from various sources and storage systems

· Creating and improving data sets, “big data” data pipelines, and supporting infrastructure

· Performing root cause analysis on internal and external data and processes to identify opportunities for improvement and provide clarification

· Applying strong analytical skills to work with unstructured datasets

· Developing processes that support task management, data structures, dependency management, and metadata

· Applying strong project management and organisational skills

· Collaborating closely with Product Managers and Business Managers to design technical solutions aligned with business requirements

· Leading the implementation of algorithms and prototypes to transform raw data into useful information

· Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations

· Creating innovative data validation methods and data analysis tools

· Ensuring compliance with data governance and security policies

· Interpreting data trends and patterns to establish operational alerts

· Developing analytical tools, programs, and reporting mechanisms

· Conducting complex data analysis and presenting results effectively

· Preparing data for prescriptive and predictive modeling

· Continuously exploring opportunities to enhance data quality and reliability

· Applying strong programming and problem-solving skills to develop scalable solutions

· Collaborating with data scientists, data analysts, and architects on several products and projects

Skills

· 8+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines

· Technical expertise with data models, data mining, and segmentation techniques for both structured and unstructured data

· High proficiency in Scala/Java and Spark for large-scale data processing

· Expertise with big data technologies, including Spark, Data Lake, Delta Lake, and Hive

· Solid understanding of batch and streaming data processing techniques

· Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion

· Expert-level ability to write complex, optimized SQL queries across extensive data volumes

· Experience with RDBMS and OLAP databases such as MySQL and Redshift

· Familiarity with Hadoop, HBase, MapReduce, and other suitable platforms

· Excellent understanding of operating systems like UNIX, Linux, and Windows

· Familiarity with Agile methodologies

· An obsession with service observability, instrumentation, monitoring, and alerting

· Knowledge or experience in architectural best practices for building data lakes

· Hands-on data management experience on at least one of the hyperscalers (Azure, AWS, GCP)

Good to Have:

· Passion for testing strategy, problem-solving, and continuous learning

· Willingness to acquire new skills and knowledge

· A product/engineering mindset to drive impactful data solutions

· Experience working in distributed environments with global teams

· Great numerical and analytical skills

Soft Skills

· Analytical thinking skills

· Excellent verbal and written communication skills

· Problem-solving skills

Tools and Programs (at least 3 of the following)

· Python

· SQL

· Databricks

· Kedro

· Airflow

· Luigi

· PowerBI

· Tableau

· Java

· C#

Education

A bachelor’s degree in computer science, data science, statistics, or a related field is required. A master’s degree in data management, information science, or a similar discipline is preferred. Certifications from IBM, Google, or Microsoft are desirable.

Screening Criteria

· A bachelor’s degree in computer science, data science, statistics, or a related field

· 8+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines

· High proficiency in Scala/Java and Spark for large-scale data processing

· Expertise with big data technologies, including Spark, Data Lake, Delta Lake, and Hive

· Solid understanding of batch and streaming data processing techniques

· Expert-level ability to write complex, optimized SQL queries across extensive data volumes

· Expertise in at least 3 of the following:

o Python

o SQL

o Databricks

o Kedro

o Airflow

o Luigi

o PowerBI

o Tableau

o Java

o C#

· Exposure to data models, data mining, and segmentation techniques for both structured and unstructured data

· Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion

· Experience with RDBMS and OLAP databases such as MySQL and Redshift

· Excellent understanding of operating systems like UNIX, Linux, and Windows

· Hands-on data management experience on at least one of the hyperscalers (Azure, AWS, GCP)

Considerations

· Location: Hyderabad

· Work from office

· 5-day work week

Evaluation Process

Round 1 – HR Round

Rounds 2 & 3 – Technical Rounds

Round 4 – Discussion with CEO

Interested candidates:

· Write to frontoffice@10XTD.in with the subject “Data Engineer”

Kindly share the following inputs:

o Confirmation that the prerequisites for the role are fully met

o CV for reference

Note

o Additional inputs to be gathered from the candidate to put together the application