Data Engineer-2

  • Hyderabad
  • Zelis
Responsibilities:

  • Design, develop, and maintain Azure Data Factory pipelines, ensuring scalability, reliability, and efficiency (at least 5 years of experience).
  • Implement and optimize data pipelines in at least one columnar MPP cloud data warehouse, e.g., Snowflake, Azure Synapse, or Redshift (over 5 years of experience), focusing on performance and scalability.
  • Use ETL tools such as Fivetran and DBT (2 years of experience) to streamline data extraction, transformation, and loading.
  • Set up and manage version control with Git and Azure DevOps, ensuring smooth collaboration and code management.
  • Work within Agile methodologies, using tools such as Jira and Confluence for project management and documentation.
  • Develop and optimize SQL objects (procedures, triggers, views, functions) in SQL Server, with a focus on performance tuning and query optimization.
  • Apply Azure architecture and Data Lake knowledge to design and implement scalable data solutions.
  • Take a proactive, self-starting approach: understand and probe for requirements, and produce functional specifications for code migration.
  • Apply hands-on programming skills to optimize performance, handle large-volume data transformations (100 GBs monthly), and create tailored solutions and data flows that meet business requirements.
  • Produce timely documentation, including mapping documents, UTRs, and defect/KEDB logs.
  • Bring expertise in the primary technologies, Snowflake and DBT (development and testing), and in secondary technologies such as Python, ETL, or any data processing tool.
  • Demonstrate familiarity with Snowflake administration, including Role-Based Access Control (RBAC), warehouse management, performance tuning, and database maintenance.

Requirements:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 5-8 years of experience in data engineering roles.
  • Familiarity with Azure services and architecture.
  • Experience in the Healthcare domain is a plus.