Experience: 5-10 years
Notice Period: Immediate to 30 days
Key Responsibilities:
• Design, develop, and maintain data pipelines using AWS services such as Glue, S3, and Lambda.
• Implement data processing frameworks using Python and PySpark/Spark.
• Manage and optimize data storage solutions using Apache Iceberg and other relevant technologies.
• Ensure the continuous integration and deployment of data solutions using Jenkins and GitHub.
• Develop and maintain APIs for data access and manipulation.
• Monitor data workflows and manage alerts and incidents using Opsgenie.
• Work collaboratively with the team to solve complex data problems and deliver high-quality data solutions.
• Ensure data quality, integrity, and security across all data systems.
• Provide technical guidance and mentorship to junior team members.
Required Skills and Qualifications:
• Proven experience in data engineering, particularly with AWS services (4+ years).
• Proficiency in AWS Glue, S3, Lambda, and other AWS services.
• Strong programming skills in Python and experience with PySpark/Spark.
• Hands-on experience with CI/CD tools such as Jenkins and GitHub.
• Solid understanding of data storage solutions and management, particularly with Apache Iceberg.
• Experience with API development and integration.
• Excellent problem-solving and analytical skills.
• Strong communication and interpersonal skills.
Preferred Qualifications:
• Experience with Mage and dbt for data transformation and management.
• Knowledge of Teradata and HVR or similar replication services.
• Familiarity with monitoring tools such as Opsgenie.
• Ability to adapt and learn new technologies quickly.
Education:
• Bachelor's degree in Computer Science, Information Technology, or a related field.