
Sr. Principal Full Stack Software Engineer – Data Analytics | Ridgefield, CT | $140k–$210k

💻 Sr. Principal / Lead Software Engineer – Full Stack (Data Analytics)

📍 Location: Ridgefield, Connecticut (Hybrid – 2–3 days onsite per week)
🏢 Industry: Pharmaceutical / Biotech / Data & AI
🎓 Education: Bachelor's or Master's in Computer Science, MIS, or related field
💼 Experience Level: Senior / Principal (7–10+ years)
🚫 Visa Sponsorship: Not available
🚚 Relocation: Available for the ideal candidate
💰 Compensation: $140,000 – $210,000 base salary + full benefits + bonus eligibility
🕓 Employment Type: Full-Time | Permanent

🌟 The Opportunity

Join a world-class Enterprise Data, AI & Platforms (EDP) organization on a mission to harness the power of data and artificial intelligence to transform global healthcare.

As a Full Stack Software Engineer (Lead or Principal level), you'll design and deliver cloud-native data analytics applications that power advanced insights, innovation, and decision-making across a global enterprise. This role offers the chance to work across the entire stack, from scalable back-end systems and APIs to dynamic data visualizations, while collaborating with multidisciplinary teams in a fast-paced, agile environment.

If you're passionate about building modern data-driven applications and leading technical excellence, this is your opportunity to make a measurable impact on the future of healthcare and digital innovation.

🧭 Key Responsibilities

  • Full Stack Development: Design, develop, and maintain cloud-based applications and services, encompassing front-end and back-end systems.

  • Data Analytics Engineering: Build and optimize data pipelines (ETL/ELT) using tools such as AWS Glue, Databricks, and dbt.

  • Architecture Integration: Develop serverless back-end solutions using AWS Lambda, API Gateway, and related technologies.

  • Visualization & Insights: Build intuitive dashboards and visualization layers using Tableau or Power BI.

  • DevOps & Automation: Leverage Git, Jenkins, Docker, and Kubernetes to automate builds, testing, and deployments.

  • Agile Collaboration: Work closely with product owners and cross-functional teams in an agile environment to deliver iterative, high-value releases.

  • Quality & Testing: Implement robust testing frameworks and continuous integration pipelines to ensure reliability and scalability.

  • Mentorship & Leadership: Guide junior developers, contribute to architectural decisions, and champion best practices across engineering teams.

🧠 Required Skills & Experience

✅ 7+ years of professional software development experience, including full-stack data analytics applications.
✅ Proven experience in cloud environments (AWS required), with strong knowledge of Lambda, API Gateway, S3, Glue, and Databricks.
✅ Solid experience building data pipelines (ETL/ELT) using AWS Glue, Databricks, and dbt.
✅ Hands-on expertise with data visualization and BI tools such as Tableau and Power BI.
✅ Familiarity with containerization and continuous deployment tools (Docker, Kubernetes, Jenkins).
✅ Working knowledge of API integration, serverless architectures, and data security best practices.
✅ Strong communication skills, a collaborative mindset, and the ability to mentor and inspire technical teams.

💡 Preferred Skills & Attributes

⭐ Background in pharmaceutical or digital healthcare technology.
⭐ Experience in agile environments, continuous improvement, and cross-functional collaboration.
⭐ Familiarity with software testing frameworks, QA automation, and CI/CD pipelines.
⭐ A passion for data, AI, and modern analytics solutions that drive business transformation.
⭐ Ability to balance hands-on technical work with strategic architectural leadership.

💰 Compensation & Benefits

💵 Base Salary: $140,000 – $210,000 (commensurate with experience)
🎯 Bonus: Role-based and/or performance incentive
💎 Benefits Include:

  • Comprehensive health, dental, and vision coverage

  • 401(k) with company match

  • Paid holidays, vacation, and flexible hybrid work schedule

  • Professional development, mobility, and career growth opportunities

  • Relocation assistance for the ideal candidate

🧩 Candidate Snapshot

  • Experience: 7–10+ years full-stack development (data analytics focus)

  • Core Skills: AWS Cloud | ETL/ELT | Databricks | dbt | Tableau | Power BI | Lambda | API Gateway | DevOps

  • Seniority: Lead or Principal Engineer

  • Work Arrangement: 2–3 days onsite in Ridgefield, CT

  • Travel: Occasional

🌍 Why This Role Matters

This role sits at the intersection of software engineering, data analytics, and AI innovation, driving systems that shape the way global teams make data-driven decisions in research, medicine, and operations. You'll collaborate with visionary technologists and scientists to build solutions that turn complex data into actionable intelligence, ultimately improving lives around the world.

🚀 Ready to engineer the future of data-driven healthcare?
Join a global organization where technology, data, and purpose intersect to deliver meaningful impact, one application at a time.

 

Data Engineer | Azure, Databricks, Python, SQL, Spark | Hybrid – Netherlands (€3,500–€5,000/month)

Data Engineer

πŸ“ Location: Eindhoven area or Randstad, Netherlands (Hybrid – 3 office days / 2 home days)
πŸ’Ό Employment Type: Full-time
πŸ’΅ Salary: €3,500 – €5,000 per month (€45,360 – €64,800 annually)
🎯 Experience Level: Mid-level | 2–3 years’ experience

About the Role

Do you love working with data, from digging into sources and writing clean ingestion scripts to ensuring a seamless flow into a data lake? As a Data Engineer, you'll design and optimize data pipelines that transform raw information into reliable, high-quality datasets for enterprise clients.

You'll work with state-of-the-art technologies in the cloud (Azure, Databricks, Fabric) to build solutions that deliver business-critical value. In this role, data quality, stability, and monitoring are key, because the pipelines you create will be used in production environments.

Key Responsibilities

  • Develop data connectors and processing solutions using Python, SQL, and Spark.

  • Define validation tests within pipelines to guarantee data integrity.

  • Implement monitoring and alerting systems for early issue detection.

  • Take the lead in troubleshooting incidents to minimize user impact.

  • Collaborate with end users to validate and continuously improve solutions.

  • Work within an agile DevOps team to build, deploy, and optimize pipelines.

Requirements

  • 🎓 Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

  • 2–3 years of relevant experience in data ingestion and processing.

  • Strong knowledge of SQL, Python, and Spark.

  • Familiarity with container environments (e.g., Kubernetes).

  • Experience with Azure Data Factory, Databricks, or Fabric is a strong plus.

  • Experience with data model management and dashboarding (e.g., Power BI) preferred.

  • Team player with strong communication skills in Dutch and English.

  • Familiarity with enterprise data platforms and data lakes is ideal.

What We Offer

  • 💶 Salary: €3,500 – €5,000 per month

  • 🌴 26 vacation days

  • 🚗 Lease car or mobility budget (€600)

  • 💻 Laptop & mobile phone

  • 💸 €115 monthly cost allowance

  • 🏦 50% employer contribution for health insurance

  • 📈 60% employer contribution for pension scheme

  • 🎯 Performance-based bonus

  • 📚 Training via in-house Academy (hard & soft skills)

  • 🏋️ Free use of on-site gym

  • 🌍 Hybrid work model (3 days in office, 2 days at home)

  • 🤝 Start with a 12-month contract, with the option to move to an indefinite contract after evaluation

Ideal Candidate

You are a hands-on data engineer who enjoys data wrangling and building robust pipelines. You take pride in seeing your code run smoothly in production and know how to troubleshoot quickly when issues arise. With strong technical skills in SQL, Python, and Spark, plus familiarity with cloud platforms like Azure, you're ready to contribute to impactful enterprise projects.

👉 Ready to make data flow seamlessly and create business value? Apply now to join a passionate, innovation-driven team.

 

Data Engineer Consultant | Hybrid | Netherlands | €77K–€88K + €3K Bonus

Job Title: Data Engineer Consultant

Location: Netherlands (Hybrid - 2 days office, 3 days home)
Industry: Data Engineering
Compensation: €77,472 - €87,840 per year (€3,200 - €4,000 monthly)
Monthly Bonus: €3,000
Working Hours: Minimum 36 hours per week
Vacation Days: 25
Mobility Budget: €450 monthly
Visa Sponsorship: Not Available
Languages Required: Fluent Dutch and English
Relocation Assistance: Not Available

Job Description

As a Data Engineer Consultant, your primary responsibility is to prepare data for analytical or operational use. You will build data pipelines to bring together information from different source systems. You will integrate, consolidate, and clean the data before structuring it for use in analytical applications.

While working on challenging assignments with our clients, we also focus on your professional growth. We believe in helping you discover and unlock your potential through coaching, training, and sharing knowledge. This enables you to continue developing as a professional and helps us serve our clients even better.

Ideal Candidate

The ideal candidate has deep knowledge of data engineering and data modeling, both conceptual and dimensional, along with experience across cloud architectures such as Microsoft Azure or AWS and familiarity with Scrum, Agile, and DevOps ways of working. You are proficient in technologies such as Databricks, Spark Structured Streaming, and PySpark, able to translate user requirements into appropriate solutions, and skilled at analyzing source data and designing effective data models.

Key Responsibilities

  • Data Engineering: Build and maintain data pipelines, integrate data from various source systems, and structure it for analytical purposes.

  • Data Modeling: Apply conceptual and dimensional data modeling techniques to ensure data can be leveraged effectively.

  • Technology Application: Use Databricks, Spark, and PySpark to build robust data solutions.

  • Collaboration: Work within Scrum and Agile teams to develop data solutions that meet business needs.

Skills & Qualifications

Must-Have Skills

  • Data Engineering

  • Data Modeling

  • Scrum, Agile, DevOps methodologies

  • Python

  • MySQL

  • Microsoft Azure

  • Bachelor's degree (HBO or equivalent)

  • Fluency in Dutch

Preferable Skills

  • Databricks

  • Microsoft Power BI

  • Azure Data Factory

  • Data Vault

  • Data Governance

  • Bachelor's degree in Data Science (BSc) or Computer Science (BSc)

  • Data Engineering on Microsoft Azure (DP-203) certification

  • Proficiency in English

Soft Skills

  • Strong communication skills

  • Adaptability

  • Teamwork and collaboration

  • Problem-solving abilities

  • Self-driven and motivated

Experience

  • More than 5 years of experience working in complex data environments at top 500 companies.

Compensation & Benefits

  • Annual Salary: €77,472 - €87,840

  • Monthly Salary: €3,200 - €4,000

  • Monthly Bonus: €3,000

  • Mobility Budget: €450

  • Extra Benefits: Pension package, phone, expense reimbursement, lease budget, and laptop.

Working Conditions

  • Hybrid Work: 2 days in the office, 3 days remote

  • Vacation: 25 days off per year

  • Visa Sponsorship: Not available

  • Relocation Assistance: Not available

  • Working Hours: Minimum of 36 hours per week

 

Learn more