Data Lake

Data Engineer | AWS, Python & Snowflake | Ridgefield, CT (Hybrid) | $140K–$185K

🧠 Data Engineer

📍 Location: Ridgefield, Connecticut (Hybrid – 2–3 days onsite per week)
💼 Openings: 2
🏢 Industry: Information Technology / Life Sciences
🎓 Education: Bachelor's degree in Computer Science, MIS, or related field (Master's preferred)
🚫 Visa Sponsorship: Not available
🚚 Relocation: Available for the ideal candidate
💰 Compensation: $140,000 – $185,000 base salary + full benefits
🕓 Employment Type: Full-Time | Permanent

🌟 The Opportunity

Step into the future with a global leader in healthcare innovation – where data and AI drive transformation and impact millions of lives.

As part of the Enterprise Data, AI & Platforms (EDP) team, you'll join a high-performing group that's building scalable, cloud-based data ecosystems and shaping the company's data-driven future.

This role is ideal for a hands-on Data Engineer who thrives on designing, optimizing, and maintaining robust data pipelines in the cloud, while collaborating closely with architects, scientists, and business stakeholders across the enterprise.

🧭 Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines and integration frameworks to enable advanced analytics and AI use cases (a brief illustrative sketch follows this list).

  • Collaborate with data architects, modelers, and data scientists to evolve the company's cloud-based data architecture strategy (data lakes, warehouses, streaming analytics).

  • Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift), ensuring data quality, integrity, and security.

  • Implement data validation, monitoring, and troubleshooting processes to ensure high system reliability.

  • Work cross-functionally with IT and business teams to understand data requirements and translate them into scalable solutions.

  • Document architecture, workflows, and best practices to support transparency and continuous improvement.

  • Stay current with emerging data engineering technologies, tools, and methodologies, contributing to innovation across the organization.
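
For a flavor of the day-to-day work, here is a minimal, hypothetical PySpark sketch of one such pipeline step: read raw events from S3, apply a basic integrity filter, and write curated, partitioned Parquet. All bucket, path, and column names are illustrative assumptions, not details of this role.

```python
# Minimal PySpark sketch of an ETL step: read raw events from S3,
# drop rows that fail a basic quality check, write partitioned Parquet.
# Bucket, path, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")  # hypothetical source

clean = (
    raw
    .filter(F.col("event_id").isNotNull())            # basic integrity check
    .withColumn("event_date", F.to_date("event_ts"))  # derive partition key
    .dropDuplicates(["event_id"])                     # de-duplicate on the key
)

(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-curated-bucket/events/"))  # hypothetical target
```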

🧠 Core Requirements

Technical Skills

✅ Hands-on experience with AWS data services such as Glue, Lambda, Athena, Step Functions, and Lake Formation (a short Athena example follows this checklist).
✅ Strong proficiency in Python and SQL for data manipulation and pipeline development.
✅ Experience in data warehousing and modeling (dimensional modeling, Kimball methodology).
✅ Familiarity with DevOps and CI/CD practices for data solutions.
✅ Experience integrating data between applications, data warehouses, and data lakes.
✅ Understanding of data governance, metadata management, and data quality principles.
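
As a small illustration of the first checkmark, the snippet below runs an Athena query from Python with boto3 and polls for completion; the database name, query, and results bucket are placeholder assumptions.

```python
# Hypothetical illustration: run an Athena SQL query from Python with boto3
# and poll until it finishes. Database, table, and bucket names are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll for completion (simplified; production code would back off and handle errors).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"Query {query_id} finished with state {state}")
```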

Cloud & Platform Experience

  • Expertise in AWS, Azure, or Google Cloud Platform (GCP) – AWS preferred.

  • Knowledge of ETL/ELT tools such as Apache Airflow, dbt, Azure Data Factory, or AWS Glue (a minimal Airflow sketch follows this list).

  • Experience with Snowflake, PostgreSQL, MongoDB, or other modern database systems.
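
For orientation only, here is a minimal Apache Airflow DAG of the kind these tools orchestrate: two chained daily tasks. Task names and bodies are hypothetical.

```python
# Minimal Apache Airflow DAG sketch: one daily pipeline with two chained tasks.
# Task bodies and names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")  # placeholder


def load():
    print("write curated data to the warehouse")  # placeholder


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract before load
```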

Education & Experience

🎓 Bachelor's degree in Computer Science, MIS, or related field
💼 5–7 years of professional experience in data engineering or data platform development
⭐ AWS Solutions Architect certification is a plus

🚀 Preferred Skills & Attributes

  • Deep knowledge of big data technologies (Spark, Hadoop, Flink) is a strong plus.

  • Proven experience troubleshooting and optimizing complex data pipelines.

  • Strong problem-solving skills and analytical mindset.

  • Excellent communication skills for collaboration across technical and non-technical teams.

  • Passion for continuous learning and data innovation.

💰 Compensation & Benefits

💵 Base Salary: $140,000 – $185,000 (commensurate with experience)
🎯 Bonus: Role-based variable incentive
💎 Benefits Include:

  • Comprehensive health, dental, and vision coverage

  • Paid vacation and holidays

  • 401(k) retirement plan

  • Wellness and family support programs

  • Flexible hybrid work environment

🧩 Candidate Snapshot

  • Experience: 5–7 years in data engineering or related field

  • Key Skills: AWS Glue | Python | SQL | ETL | CI/CD | Snowflake | Data Modeling | Cloud Architecture

  • Seniority Level: Mid–Senior

  • Work Arrangement: 2–3 days onsite in Ridgefield, CT

  • Travel: Occasional

🚀 Ready to power the future of data-driven healthcare?
Join a global data and AI team committed to harnessing the power of cloud and analytics to drive discovery, innovation, and meaningful impact worldwide.

Data Engineer | Azure, Databricks, Python, SQL, Spark | Hybrid – Netherlands (€3,500–€5,000/month)

Data Engineer

๐Ÿ“ Location: Eindhoven area or Randstad, Netherlands (Hybrid โ€“ 3 office days / 2 home days)
๐Ÿ’ผ Employment Type: Full-time
๐Ÿ’ต Salary: โ‚ฌ3,500 โ€“ โ‚ฌ5,000 per month (โ‚ฌ45,360 โ€“ โ‚ฌ64,800 annually)
๐ŸŽฏ Experience Level: Mid-level | 2โ€“3 yearsโ€™ experience

About the Role

Do you love working with data – from digging into sources and writing clean ingestion scripts to ensuring a seamless flow into a data lake? As a Data Engineer, you'll design and optimize data pipelines that transform raw information into reliable, high-quality datasets for enterprise clients.

You'll work with state-of-the-art technologies in the cloud (Azure, Databricks, Fabric) to build solutions that deliver business-critical value. In this role, data quality, stability, and monitoring are key, because the pipelines you create will be used in production environments.

Key Responsibilities

  • Develop data connectors and processing solutions using Python, SQL, and Spark.

  • Define validation tests within pipelines to guarantee data integrity (a minimal example follows this list).

  • Implement monitoring and alerting systems for early issue detection.

  • Take the lead in troubleshooting incidents to minimize user impact.

  • Collaborate with end users to validate and continuously improve solutions.

  • Work within an agile DevOps team to build, deploy, and optimize pipelines.
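
To make the validation and monitoring bullets concrete, here is a small, hypothetical PySpark quality gate that blocks publication of a dataset containing null or duplicate keys; the paths, column name, and threshold are assumptions for illustration.

```python
# Hypothetical in-pipeline validation: fail fast (and alert) before publishing
# a dataset with null keys or duplicate IDs. Paths, column names, and the
# threshold are placeholder assumptions.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-gate").getOrCreate()


def validate(df: DataFrame, key: str = "order_id", max_null_ratio: float = 0.01) -> None:
    total = df.count()
    nulls = df.filter(F.col(key).isNull()).count()
    dupes = total - df.dropDuplicates([key]).count()

    if total == 0 or nulls / total > max_null_ratio or dupes > 0:
        # In production this would trigger an alert via the monitoring stack
        # instead of just raising.
        raise ValueError(
            f"validation failed: rows={total}, null {key}={nulls}, duplicates={dupes}"
        )


orders = spark.read.parquet("/data/landing/orders")  # hypothetical landing zone
validate(orders)
orders.write.mode("overwrite").parquet("/data/curated/orders")  # publish only if valid
```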

Requirements

  • 🎓 Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.

  • 2–3 years of relevant experience in data ingestion and processing.

  • Strong knowledge of SQL, Python, and Spark.

  • Familiarity with container environments (e.g., Kubernetes).

  • Experience with Azure Data Factory, Databricks, or Fabric is a strong plus.

  • Experience with data model management and dashboarding (e.g., Power BI) preferred.

  • Team player with strong communication skills in Dutch and English.

  • Familiarity with enterprise data platforms and data lakes is ideal.

What We Offer

  • 💶 Salary: €3,500 – €5,000 per month

  • 🌴 26 vacation days

  • 🚗 Lease car or mobility budget (€600)

  • 💻 Laptop & mobile phone

  • 💸 €115 monthly expense allowance

  • 🏦 50% employer contribution toward health insurance

  • 📈 60% employer contribution toward the pension scheme

  • 🎯 Performance-based bonus

  • 📚 Training via in-house Academy (hard & soft skills)

  • 🏋️ Free use of on-site gym

  • 🌍 Hybrid work model (3 days in office, 2 days at home)

  • 🤝 Start with a 12-month contract, with the option to move to an indefinite contract after evaluation

Ideal Candidate

You are a hands-on data engineer who enjoys data wrangling and building robust pipelines. You take pride in seeing your code run smoothly in production and know how to troubleshoot quickly when issues arise. With strong technical skills in SQL, Python, and Spark, plus familiarity with cloud platforms like Azure, you're ready to contribute to impactful enterprise projects.

👉 Ready to make data flow seamlessly and create business value? Apply now to join a passionate, innovation-driven team.