AWS Glue

Sr. Principal Full Stack Software Engineer – Data Analytics | Ridgefield, CT | $140k–$210k

💻 Sr. Principal / Lead Software Engineer – Full Stack (Data Analytics)

📍 Location: Ridgefield, Connecticut (Hybrid – 2–3 days onsite per week)
🏢 Industry: Pharmaceutical / Biotech / Data & AI
🎓 Education: Bachelor’s or Master’s in Computer Science, MIS, or related field
💼 Experience Level: Mid–Senior / Principal (7–10+ years)
🚫 Visa Sponsorship: Not available
🚚 Relocation: Available for the ideal candidate
💰 Compensation: $140,000 – $210,000 base salary + full benefits + bonus eligibility
🕓 Employment Type: Full-Time | Permanent

🌟 The Opportunity

Join a world-class Enterprise Data, AI & Platforms (EDP) organization on a mission to harness the power of data and artificial intelligence to transform global healthcare.

As a Full Stack Software Engineer (Lead or Principal level), you’ll design and deliver cloud-native data analytics applications that power advanced insights, innovation, and decision-making across a global enterprise. This role offers the chance to work across the entire stack, from scalable back-end systems and APIs to dynamic data visualizations, while collaborating with multidisciplinary teams in a fast-paced, agile environment.

If you’re passionate about building modern data-driven applications and leading technical excellence, this is your opportunity to make a measurable impact on the future of healthcare and digital innovation.

🧭 Key Responsibilities

  • Full Stack Development: Design, develop, and maintain cloud-based applications and services, encompassing front-end and back-end systems.

  • Data Analytics Engineering: Build and optimize data pipelines (ETL/ELT) using tools such as AWS Glue, Databricks, and dbt.

  • Architecture Integration: Develop serverless back-end solutions using AWS Lambda, API Gateway, and related technologies (see the illustrative sketch after this list).

  • Visualization & Insights: Build intuitive dashboards and visualization layers using Tableau or Power BI.

  • DevOps & Automation: Leverage Git, Jenkins, Docker, and Kubernetes to automate builds, testing, and deployments.

  • Agile Collaboration: Work closely with product owners and cross-functional teams in an agile environment to deliver iterative, high-value releases.

  • Quality & Testing: Implement robust testing frameworks and continuous integration pipelines to ensure reliability and scalability.

  • Mentorship & Leadership: Guide junior developers, contribute to architectural decisions, and champion best practices across engineering teams.
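
To make the Architecture Integration responsibility above concrete, here is a minimal, hedged sketch (not a prescribed implementation for this role) of a Python AWS Lambda handler behind an API Gateway proxy integration; the parameter name and response payload are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an API Gateway (Lambda proxy) integration."""
    # API Gateway passes query string parameters here; the key is None
    # when the request has no query string.
    params = event.get("queryStringParameters") or {}
    region = params.get("region", "all")

    # A real service would query S3/Athena/Databricks; a canned payload
    # keeps this sketch self-contained.
    payload = {"region": region, "rows": [], "status": "ok"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }
```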

🧠 Required Skills & Experience

✅ 7+ years of professional software development experience, including building full-stack data analytics applications.
✅ Proven experience in cloud environments (AWS required), with strong knowledge of Lambda, API Gateway, S3, Glue, and Databricks.
✅ Solid experience building data pipelines (ETL/ELT) using AWS Glue, Databricks, and dbt.
✅ Hands-on expertise with visualization and BI tools such as Tableau and Power BI.
✅ Familiarity with containerization and continuous deployment tools (Docker, Kubernetes, Jenkins).
✅ Working knowledge of API integration, serverless architectures, and data security best practices.
✅ Strong communication skills, a collaborative mindset, and the ability to mentor and inspire technical teams.

💡 Preferred Skills & Attributes

⭐ Background in pharmaceutical or digital healthcare technology (preferred, not required).
⭐ Experience in agile environments, continuous improvement, and cross-functional collaboration.
⭐ Familiarity with software testing frameworks, QA automation, and CI/CD pipelines.
⭐ A passion for data, AI, and modern analytics solutions that drive business transformation.
⭐ Ability to balance hands-on technical work with strategic architectural leadership.

💰 Compensation & Benefits

💵 Base Salary: $140,000 – $210,000 (commensurate with experience)
🎯 Bonus: Role-based and/or performance incentive
💎 Benefits Include:

  • Comprehensive health, dental, and vision coverage

  • 401(k) with company match

  • Paid holidays, vacation, and flexible hybrid work schedule

  • Professional development, mobility, and career growth opportunities

  • Relocation assistance for the ideal candidate

🧩 Candidate Snapshot

  • Experience: 7–10+ years full-stack development (data analytics focus)

  • Core Skills: AWS Cloud | ETL/ELT | Databricks | dbt | Tableau | Power BI | Lambda | API Gateway | DevOps

  • Seniority: Lead or Principal Engineer

  • Work Arrangement: 2–3 days onsite in Ridgefield, CT

  • Travel: Occasional

🌍 Why This Role Matters

This role sits at the intersection of software engineering, data analytics, and AI innovation, driving systems that shape the way global teams make data-driven decisions in research, medicine, and operations. You’ll collaborate with visionary technologists and scientists to build solutions that turn complex data into actionable intelligence, ultimately improving lives around the world.

🚀 Ready to engineer the future of data-driven healthcare?
Join a global organization where technology, data, and purpose intersect to deliver meaningful impact – one application at a time.

 

Data Engineer | AWS, Python & Snowflake | Ridgefield, CT (Hybrid) | $140K–$185K

🧠 Data Engineer

📍 Location: Ridgefield, Connecticut (Hybrid – 2–3 days onsite per week)
💼 Openings: 2
🏢 Industry: Information Technology / Life Sciences
🎓 Education: Bachelor’s degree in Computer Science, MIS, or related field (Master’s preferred)
🚫 Visa Sponsorship: Not available
🚚 Relocation: Available for the ideal candidate
💰 Compensation: $140,000 – $185,000 base salary + full benefits
🕓 Employment Type: Full-Time | Permanent

🌟 The Opportunity

Step into the future with a global leader in healthcare innovation, where Data and AI drive transformation and impact millions of lives.

As part of the Enterprise Data, AI & Platforms (EDP) team, you’ll join a high-performing group that’s building scalable, cloud-based data ecosystems and shaping the company’s data-driven future.

This role is ideal for a hands-on Data Engineer who thrives on designing, optimizing, and maintaining robust data pipelines in the cloud, while collaborating closely with architects, scientists, and business stakeholders across the enterprise.

🧭 Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines and integration frameworks to enable advanced analytics and AI use cases (see the pipeline sketch after this list).

  • Collaborate with data architects, modelers, and data scientists to evolve the company’s cloud-based data architecture strategy (data lakes, warehouses, streaming analytics).

  • Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift), ensuring data quality, integrity, and security.

  • Implement data validation, monitoring, and troubleshooting processes to ensure high system reliability.

  • Work cross-functionally with IT and business teams to understand data requirements and translate them into scalable solutions.

  • Document architecture, workflows, and best practices to support transparency and continuous improvement.

  • Stay current with emerging data engineering technologies, tools, and methodologies, contributing to innovation across the organization.
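
Purely as an illustration of the pipeline work described above (the database, table, column, and bucket names are hypothetical, not details of this role), a stripped-down AWS Glue PySpark job might read a cataloged table, apply basic cleanup, and write curated Parquet to S3:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import DropNullFields, Filter
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve arguments and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Basic cleanup: keep rows that have an order_id, then drop all-null fields.
orders = Filter.apply(frame=orders, f=lambda row: row["order_id"] is not None)
orders = DropNullFields.apply(frame=orders)

# Write the curated output to S3 as Parquet, partitioned by order_date.
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()
```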

🧠 Core Requirements

Technical Skills

✅ Hands-on experience with AWS data services such as Glue, Lambda, Athena, Step Functions, and Lake Formation.
✅ Strong proficiency in Python and SQL for data manipulation and pipeline development.
✅ Experience in data warehousing and modeling (dimensional modeling, Kimball methodology).
✅ Familiarity with DevOps and CI/CD practices for data solutions.
✅ Experience integrating data between applications, data warehouses, and data lakes.
✅ Understanding of data governance, metadata management, and data quality principles (see the illustrative check below).
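
As a small illustration of the kind of data quality check implied above (the column names and rules are hypothetical, not part of the requirements), a Python validation step in a pipeline might look like this:

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list:
    """Return a list of data-quality issues found in a hypothetical orders extract."""
    issues = []
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    if df["order_date"].isna().any():
        issues.append("missing order_date values")
    if (df["amount"] < 0).any():
        issues.append("negative order amounts")
    return issues

# Tiny in-memory example: all three rules fire.
sample = pd.DataFrame(
    {
        "order_id": [1, 1, 2],
        "order_date": ["2024-01-01", None, "2024-01-02"],
        "amount": [10.0, 5.0, -3.0],
    }
)
print(validate_orders(sample))
```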

Cloud & Platform Experience

  • Expertise in AWS, Azure, or Google Cloud Platform (GCP) – AWS preferred.

  • Knowledge of ETL/ELT tools such as Apache Airflow, dbt, Azure Data Factory, or AWS Glue (an illustrative Airflow sketch follows this list).

  • Experience with Snowflake, PostgreSQL, MongoDB, or other modern database systems.
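
For orientation only, a bare-bones Apache Airflow DAG of the kind referenced above (assuming Airflow 2.4+; the DAG id, schedule, and task bodies are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull source data, e.g. from an API or an S3 landing zone.
    print("extracting")

def load():
    # Placeholder: load transformed data into the warehouse, e.g. Snowflake.
    print("loading")

# Daily pipeline with two sequential tasks; catchup disabled so past runs
# are not backfilled automatically.
with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```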

Education & Experience

🎓 Bachelor’s degree in Computer Science, MIS, or related field
💼 5–7 years of professional experience in data engineering or data platform development
⭐ AWS Solutions Architect certification is a plus

🚀 Preferred Skills & Attributes

  • Deep knowledge of big data technologies (Spark, Hadoop, Flink) is a strong plus.

  • Proven experience troubleshooting and optimizing complex data pipelines.

  • Strong problem-solving skills and analytical mindset.

  • Excellent communication skills for collaboration across technical and non-technical teams.

  • Passion for continuous learning and data innovation.

💰 Compensation & Benefits

💵 Base Salary: $140,000 – $185,000 (commensurate with experience)
🎯 Bonus: Role-based variable incentive
💎 Benefits Include:

  • Comprehensive health, dental, and vision coverage

  • Paid vacation and holidays

  • 401(k) retirement plan

  • Wellness and family support programs

  • Flexible hybrid work environment

🧩 Candidate Snapshot

  • Experience: 5–7 years in data engineering or related field

  • Key Skills: AWS Glue | Python | SQL | ETL | CI/CD | Snowflake | Data Modeling | Cloud Architecture

  • Seniority Level: Mid–Senior

  • Work Arrangement: 2–3 days onsite in Ridgefield, CT

  • Travel: Occasional

🚀 Ready to power the future of data-driven healthcare?
Join a global data and AI team committed to harnessing the power of cloud and analytics to drive discovery, innovation, and meaningful impact worldwide.