
Senior / Lead Machine Learning Engineer | Python, PyTorch, AI | Fully Remote | $180,000–$215,000

Senior / Lead Machine Learning Engineer

🌍 Location: Fully Remote
💼 Employment Type: Full-time
💰 Compensation: $180,000 – $215,000 (base salary, depending on experience)
📊 Benefits: Full package included

About the Role

We’re seeking a Senior/Lead ML Engineer to drive the development of advanced enterprise AI and intelligent data applications. This is a hands-on role that combines machine learning, data engineering, and software development to deliver practical, production-ready solutions with measurable impact.

If you’re excited about tackling complex engineering challenges in environments with high engineering standards, this role offers strong career growth opportunities, including senior technical leadership pathways.

Key Responsibilities

  • Lead platform upgrades to ensure products remain cutting-edge and effective.

  • Design and manage dynamic dashboards using Python SDKs to turn data into actionable insights.

  • Optimize data pipelines and access patterns for performance and scalability.

  • Troubleshoot and resolve runtime and performance challenges.

  • Architect robust, scalable, and user-friendly applications designed for long-term growth.

  • Collaborate closely with Product Managers to improve usability and ensure real-world impact.

What You Won’t Do

❌ Work in silos – this role requires versatility across ML, data systems, and software engineering.
❌ Focus solely on research without real-world implementation.

Tech Stack

  • Languages & Tools: Python (primary), Docker, Git

  • Libraries & Frameworks: pandas, numpy, scikit-learn, PyTorch

  • Systems & Processes: CI/CD pipelines, monitoring tools, testing frameworks
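To give candidates a feel for how this stack fits together, here is a minimal, illustrative sketch; the dataset, columns, and model are hypothetical placeholders, not our production code.

```python
# Illustrative only: hypothetical data and model, assuming the stack above.
import numpy as np
import pandas as pd
import torch
from torch import nn
from sklearn.model_selection import train_test_split

# Build a small (hypothetical) feature table with pandas/numpy.
df = pd.DataFrame({
    "feature_a": np.random.rand(100),
    "feature_b": np.random.rand(100),
    "label": np.random.randint(0, 2, 100),
})

# Split with scikit-learn.
X_train, X_test, y_train, y_test = train_test_split(
    df[["feature_a", "feature_b"]].values, df["label"].values, test_size=0.2
)

# A tiny PyTorch classifier standing in for a real model.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.tensor(X_train, dtype=torch.float32)
labels = torch.tensor(y_train, dtype=torch.long)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```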

Requirements

✅ 4+ years of professional Python software engineering with experience in production ML deployment (beyond prototyping).
✅ Proven experience with the end-to-end ML lifecycle: model development → deployment → monitoring.
✅ Strong production systems background in rigorous engineering environments (Big Tech or top-tier startups preferred).
✅ Bachelor’s degree in Computer Science from a top 15 university (Ivy League, Stanford, MIT, CMU, etc.).
✅ U.S. Citizenship and ability to obtain a government security clearance.

Preferred Qualifications

  • Experience in defense-related applications.

  • Exposure to multiple programming languages and diverse tech stacks.

Soft Skills

  • Strong written and verbal communication.

  • Pragmatic approach with a focus on delivering incremental value.

  • Collaborative, with the ability to mentor and influence peers.

Candidate Profile – Not a Fit If

🚫 Job hopper (<2 years per role).
🚫 Focused mainly on research/data science without production deployment.
🚫 Strong theoretical ML background but lacking hands-on implementation.
🚫 No experience with CI/CD, monitoring, or scalable architecture.
🚫 Consulting/contract-heavy career history.

Compensation & Benefits

💰 Base Salary: $180,000 – $215,000
📦 Benefits: Comprehensive full package
🛫 Travel: Occasional, interview travel reimbursed
📍 Relocation: Not available

👉 Ready to shape the future of AI-driven enterprise applications? Apply now and step into a role where your engineering expertise drives real-world innovation.

 

Data Engineer | Azure, Databricks, Python, SQL, Spark | Hybrid – Netherlands (€3,500–€5,000/month)

Data Engineer

📍 Location: Eindhoven area or Randstad, Netherlands (Hybrid – 3 office days / 2 home days)
💼 Employment Type: Full-time
💵 Salary: €3,500 – €5,000 per month (€45,360 – €64,800 annually, incl. 8% holiday allowance)
🎯 Experience Level: Mid-level | 2–3 years’ experience

About the Role

Do you love working with data — from digging into sources and writing clean ingestion scripts to ensuring a seamless flow into a data lake? As a Data Engineer, you’ll design and optimize data pipelines that transform raw information into reliable, high-quality datasets for enterprise clients.

You’ll work with state-of-the-art technologies in the cloud (Azure, Databricks, Fabric) to build solutions that deliver business-critical value. In this role, data quality, stability, and monitoring are key — because the pipelines you create will be used in production environments.

Key Responsibilities

  • Develop data connectors and processing solutions using Python, SQL, and Spark.

  • Define validation tests within pipelines to guarantee data integrity.

  • Implement monitoring and alerting systems for early issue detection.

  • Take the lead in troubleshooting incidents to minimize user impact.

  • Collaborate with end users to validate and continuously improve solutions.

  • Work within an agile DevOps team to build, deploy, and optimize pipelines.
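For a concrete flavour of the validation and monitoring work described above, here is a minimal, illustrative PySpark sketch; the paths, key column, and rules are hypothetical placeholders rather than an actual client pipeline.

```python
# Illustrative sketch: hypothetical data-lake paths and validation rules.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-and-validate").getOrCreate()

# Read a (hypothetical) raw ingestion output from the data lake.
df = spark.read.parquet("/mnt/datalake/raw/orders")

# Simple integrity checks before promoting data downstream.
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = df.count() - df.dropDuplicates(["order_id"]).count()

if null_keys > 0 or duplicates > 0:
    # In production this would also fire an alert via the monitoring stack.
    raise ValueError(f"Validation failed: {null_keys} null keys, {duplicates} duplicates")

# Only validated data reaches the curated zone.
df.write.mode("overwrite").parquet("/mnt/datalake/curated/orders")
```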

Requirements

  • 🎓 Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.

  • 2–3 years of relevant experience in data ingestion and processing.

  • Strong knowledge of SQL, Python, and Spark.

  • Familiarity with container environments (e.g., Kubernetes).

  • Experience with Azure Data Factory, Databricks, or Fabric is a strong plus.

  • Experience with data model management and dashboarding (e.g., PowerBI) preferred.

  • Team player with strong communication skills in Dutch and English.

  • Familiarity with enterprise data platforms and data lakes is ideal.

What We Offer

  • 💶 Salary: €3,500 – €5,000 per month

  • 🌴 26 vacation days

  • 🚗 Lease car or mobility budget (€600)

  • 💻 Laptop & mobile phone

  • 💸 €115 monthly cost allowance

  • 🏦 50% employer contribution for health insurance

  • 📈 60% employer contribution for pension scheme

  • 🎯 Performance-based bonus

  • 📚 Training via in-house Academy (hard & soft skills)

  • 🏋️ Free use of on-site gym

  • 🌍 Hybrid work model (3 days in office, 2 days at home)

  • Start with a 12-month contract, with the option to convert to an indefinite contract after evaluation

Ideal Candidate

You are a hands-on data engineer who enjoys data wrangling and building robust pipelines. You take pride in seeing your code run smoothly in production and know how to troubleshoot quickly when issues arise. With strong technical skills in SQL, Python, and Spark, plus familiarity with cloud platforms like Azure, you’re ready to contribute to impactful enterprise projects.

👉 Ready to make data flow seamlessly and create business value? Apply now to join a passionate, innovation-driven team.

 

Data Engineer – AI & Real Estate | Hybrid, Utrecht | €80K + 37 Vacation Days

Title: Data Engineer
Location: Utrecht, Netherlands (Hybrid – 3 days in office, 2 days remote)
Visa Support: Not available
Relocation Support: Not available

Compensation & Benefits

  • Annual Salary: €64,800 - €79,056 (€5,000 - €6,100 per month; annual figures include the 8% holiday allowance)

  • Bonuses: 13th-month bonus included

  • Vacation: 37 vacation days per year

  • Pension Plan: Premium pension scheme with only 1% employee contribution

  • Tech Essentials: Choose a laptop and mobile phone or receive a €30 monthly reimbursement

  • Commuting Support: €0.23 per km travel allowance or 100% reimbursed NS Business Card

  • Hybrid Work: €2.40 daily allowance for home working days

  • Professional Development: €1,500 annual training budget

  • Insurance: Discounts up to 10% on health insurance

Role Overview

As a Data Engineer, you will be responsible for designing, developing, and optimizing scalable data architectures to support AI applications. You will work closely with LLM engineers to build robust data pipelines, ensure secure data access, and bring innovative AI-driven solutions to life.

Key Responsibilities

  • Design and maintain scalable data architectures for AI applications

  • Build and manage data pipelines from diverse sources to a vector database in AWS

  • Implement role-based access control and data security measures

  • Monitor and optimize data processes using dashboards and logging tools

  • Present results to stakeholders and contribute to AI-driven innovations

  • Collaborate in Agile teams to deliver project milestones
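As an illustrative sketch of one such pipeline step, the snippet below pulls documents from S3, attaches embeddings, and tags each record for role-based access; `embed`, the record format, and the bucket layout are hypothetical stand-ins, not this team's actual design.

```python
# Illustrative sketch: `embed` and the record format are hypothetical
# stand-ins for the real embedding model and AWS-hosted vector database.
import json
import boto3

s3 = boto3.client("s3")

def embed(text: str) -> list[float]:
    """Hypothetical embedding call; a real pipeline would invoke a model here."""
    return [float(ord(c) % 7) for c in text[:8]]  # placeholder vector

def load_documents(bucket: str, key: str) -> list[dict]:
    """Pull raw documents from S3 (one JSON object per line)."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    return [json.loads(line) for line in body.splitlines() if line.strip()]

def to_vector_records(docs: list[dict]) -> list[dict]:
    """Attach embeddings and access-control tags before indexing."""
    return [
        {
            "id": doc["id"],
            "vector": embed(doc["text"]),
            # Role-based access control: store allowed roles with each record
            # so the query layer can filter search results per user.
            "allowed_roles": doc.get("allowed_roles", ["analyst"]),
        }
        for doc in docs
    ]
```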

Requirements

  • Strong expertise in SQL, Python, and cloud environments (preferably AWS)

  • Experience with structured and unstructured databases

  • Familiarity with vector databases, semantic search, and data orchestration tools

  • Understanding of Agile/Scrum methodologies

  • Fluent English communication skills

  • Experience in data architecture design, data governance, and integrating diverse data types

  • Must be residing in the Netherlands at the time of application

Nice-to-Have Skills

  • Familiarity with AWS services

  • Experience in the real estate sector

  • Dutch communication skills

Work Environment & Culture

  • Informal, family-like working atmosphere

  • Diverse teams with an inclusive culture

  • Hybrid working model (office & home balance)

  • Self-managing teams with freedom for innovation

 


Data Engineer Consultant | Hybrid | Netherlands | €77K–€88K + €3K Bonus

Job Title: Data Engineer Consultant

Location: Netherlands (Hybrid - 2 days office, 3 days home)
Industry: Data Engineering
Compensation: €77,472 - €87,840 per year (€3,200 - €4,000 monthly base; annual figures include the €3,000 monthly bonus and 8% holiday allowance)
Monthly Bonus: €3,000
Working Hours: Minimum 36 hours per week
Vacation Days: 25
Mobility Budget: €450 monthly
Visa Sponsorship: Not Available
Languages Required: Fluent Dutch and English
Relocation Assistance: Not Available

Job Description

As a Data Engineer Consultant, your primary responsibility is to prepare data for analytical or operational use. You will build data pipelines to bring together information from different source systems. You will integrate, consolidate, and clean the data before structuring it for use in analytical applications.
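In concrete terms, a consolidation step of that kind might look roughly like the PySpark sketch below; the source paths and column names are hypothetical.

```python
# Illustrative sketch: hypothetical source systems and columns.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("consolidate").getOrCreate()

# Integrate: read from two (hypothetical) source systems.
crm = spark.read.parquet("/mnt/source/crm/customers")
web = spark.read.json("/mnt/source/web/signups")

# Consolidate and clean: align schemas, drop incomplete and duplicate rows.
customers = (
    crm.select("customer_id", "email", "country")
    .unionByName(web.select("customer_id", "email", "country"))
    .dropna(subset=["customer_id"])
    .dropDuplicates(["customer_id"])
)

# Structure for analytics: write a curated table partitioned by country.
customers.write.mode("overwrite").partitionBy("country").parquet(
    "/mnt/curated/customers"
)
```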

While working on challenging assignments with our clients, we also focus on your professional growth. We believe in helping you discover and unlock your potential through coaching, training, and sharing knowledge. This enables you to continue developing as a professional and helps us serve our clients even better.

Ideal Candidate

The ideal candidate should possess deep knowledge of data engineering and data modeling, both conceptually and dimensionally. You should have experience with various cloud architectures, such as Microsoft Azure or AWS, and be familiar with working in Scrum, Agile, and DevOps methodologies. You should be proficient in technologies such as Databricks, Spark Structured Streaming, and PySpark, and be capable of translating user requirements into appropriate solutions. Additionally, you should be skilled in analyzing source data and designing effective data models.

Key Responsibilities

  • Data Engineering: Build and maintain data pipelines, integrate data from various source systems, and structure it for analytical purposes.

  • Data Modeling: Apply conceptual and dimensional data modeling techniques to ensure data can be leveraged effectively.

  • Technology Application: Use Databricks, Spark, and PySpark to build robust data solutions.

  • Collaboration: Work within Scrum and Agile teams to develop data solutions that meet business needs.

Skills & Qualifications

Must-Have Skills

  • Data Engineering

  • Data Modeling

  • Scrum, Agile, DevOps methodologies

  • Python

  • MySQL

  • Microsoft Azure

  • Bachelor’s degree (HBO or equivalent)

  • Fluency in Dutch

Preferable Skills

  • Databricks

  • Microsoft Power BI

  • Azure Data Factory

  • Data Vault

  • Data Governance

  • Bachelor’s degree in Data Science (BSc) or Computer Science (BSc)

  • Data Engineering on Microsoft Azure (DP-203) certification

  • Proficiency in English

Soft Skills

  • Strong communication skills

  • Adaptability

  • Teamwork and collaboration

  • Problem-solving abilities

  • Self-driven and motivated

Experience

  • More than 5 years of experience working in complex data environments at top-500 companies.

Compensation & Benefits

  • Annual Salary: €77,472 - €87,840

  • Monthly Salary: €3,200 - €4,000

  • Monthly Bonus: €3,000

  • Mobility Budget: €450

  • Extra Benefits: Pension package, phone, expenses reimbursement, lease budget, and laptop.

Working Conditions

  • Hybrid Work: 2 days in the office, 3 days remote

  • Vacation: 25 days off per year

  • Visa Sponsorship: Not available

  • Relocation Assistance: Not available

  • Working Hours: Minimum of 36 hours per week

 


Senior Data Engineer - London - Full-Time, Permanent, Hybrid - Base Salary GBP £70,000 to £90,000

Senior Data Engineer

London

Full-Time, Permanent, Hybrid

Base Salary - GBP £70,000 to £90,000

About the Company

Skimlinks, a Connexity and Taboola company, drives e-commerce success for 50% of the Internet’s largest online retailers. We deliver $2B in annual sales by connecting retailers to shoppers on the most desirable retail content channels. As a pioneer in online advertising and campaign technology, Connexity is constantly iterating on products, solving problems for retailers, and building interest in new solutions.

We have recently been acquired by Taboola to create the first open-web source for publishers, connecting editorial content to product recommendations so that readers can easily buy products related to the stories they are reading.

Skimlinks, a Taboola company, is a global e-commerce monetization platform, with offices in LA, London, Germany, and NYC. We work with over 60,000 premium publishers and 50,000 retailers around the world helping content producers get paid commissions for the products and brands they write about.

 

About the role

We are looking for a Senior Data Engineer to join our team in London. We are creating a fundamentally new approach to digital marketing, combining big data with large-scale machine learning. Our data sets are on a truly massive scale - we collect data on over a billion users per month and analyse the content of hundreds of millions of documents a day.

As a member of our Data Platform team, your responsibilities will include:

  • Design, build, test and maintain high-volume Python data pipelines.

  • Analyse complex datasets in SQL.

  • Communicate effectively with Product Managers and Commercial teams to translate complex business requirements into scalable solutions.

  • Apply software development best practices.

  • Work independently in an agile environment.

  • Share your knowledge across the business and mentor colleagues in areas of deep technical expertise.

 

Requirements:

Here at Skimlinks we value dedication, enthusiasm, and a love of innovation. We are disrupting the online monetization industry and welcome candidates who want to be part of this ambitious journey. It is not all hard work, though; we definitely appreciate a bit of quirkiness and fun along the way.

  • A Bachelor’s or Master’s degree in computer science or a related field.

  • Solid programming skills in both Python and SQL.

  • Proven work experience in Google Cloud Platform or other clouds, developing batch (Apache Airflow) and streaming (Dataflow) scalable data pipelines.

  • Passion for processing large datasets at scale (BigQuery, Apache Druid, Elasticsearch).

  • Familiarity with Terraform, DBT & Looker is a plus.

  • Experience driving initiatives around performance optimisation and cost reduction.

  • A commercial mindset, you are passionate about creating outstanding products.
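For a sense of the batch side of this stack, here is a minimal, illustrative Airflow DAG; the task bodies are hypothetical placeholders, not our actual pipelines.

```python
# Illustrative sketch (Airflow 2.4+): hypothetical extract/load tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Hypothetical extraction step, e.g. landing click logs in storage."""
    print("extracting...")

def load():
    """Hypothetical load step, e.g. appending the day's data to BigQuery."""
    print("loading...")

with DAG(
    dag_id="daily_clicks",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```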

Voted “Best Places to Work,” our culture is driven by self-starters, team players, and visionaries. Headquartered in Los Angeles, California, the company operates sites and business services in the US, UK, and EU. We offer top benefits including Annual Leave Entitlement, paid holidays, competitive comp, team events and more!

  • Healthcare insurance & cash plans

  • Pension

  • Parental Leave Policies

  • Learning & Development Program (educational tool)

  • Flexible work schedules

  • Wellness Resources

  • Equity

We are committed to providing a culture at Connexity that supports the diversity, equity and inclusion of our most valuable asset, our people. We encourage individuality, and are driven to represent a workplace that celebrates our differences, and provides opportunities equally across gender, race, religion, sexual orientation, and all other demographics. Our actions across Education, Recruitment, Retention, and Volunteering reflect our core company values and remind us that we’re all in this together to drive positive change in our industry.

Required Skills and Certifications

  • Airflow

  • Python

  • SQL

  • GCP

  • BigQuery

  • Data pipelines


Data Engineer - USA, Connecticut - $90,000 to $120,000

Data Engineer

USA, Connecticut

$90,000 to $120,000

 

Position Summary:

Reporting to the Associate Director, Information Technology, the Data Manager/Data Analytics is responsible for overseeing the development and use of company data systems and for ensuring that all information to and from the company flows in a timely and secure manner. They will also effectively identify, analyze, and translate business needs into technology and process solutions. This position can be based out of Westlake Village, CA; Marlborough, MA; or Danbury, CT.

 

Principal Responsibilities:

  • Work with other team members and business stakeholders to drive development of business analytics requirements.

  • Leverage knowledge of business processes and the data domain.

  • Bring deep expertise in data visualization tools and techniques to translate business analytics needs into data visualization and semantic data access requirements.

  • Work with various business units to facilitate technical design of complex data sourcing, transformation, and aggregation logic, ensuring business analytics requirements are met.

  • Leverage enterprise-standard tools and platforms to visualize analytics insights, typically working with and/or leading a small team.

  • Regularly monitor and evaluate information and data systems that could affect analytical results.

  • Translate business needs into technical specifications.

  • Design, build, and deploy BI solutions (e.g., reporting tools).

  • Manage integration tools and the data warehouse.

  • Manage and conduct data validation and troubleshooting.

  • Create visualizations and reports according to business requirements.

  • Monitor and enhance databases and related systems to optimize performance.

  • Proactively address scalability and performance issues.

  • Ensure data quality and integrity while supporting large data sets.

  • Debug and resolve issues affecting database reliability, integrity, and efficiency.

  • As a member of the IT organization at MannKind Corp., the incumbent is also expected to be customer-focused, a problem solver, a communicator, professional, willing to learn, organized, and a team player.

  • Duties and responsibilities are not limited to the work listed above and may include other assignments as necessary.

 

Education and Experience Qualifications:

  • BS/BA degree with a minimum of 3–5 years of related experience in data management or analysis.

  • 3+ years of experience with Relational Database Management Systems (RDBMS)

  • 3+ years of Business Intelligence / Analytics related work experience in challenging environments

  • Strong understanding of modern data modeling techniques

  • Strong understanding of cloud service providers (AWS, Google, Microsoft) and how to architect solutions around them

  • Ability to decipher and organize large amounts of data.

  • An analytical mindset with superb communication and problem-solving skills.

  • Ability to translate complex problems clearly and in nontechnical terms.

  • In-depth SQL programming knowledge: partitioning, indexing, performance tuning, stored procedures, and views

  • Hands-on experience developing dashboards and data visualizations using BI tools (e.g., Power BI, Tableau)
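To illustrate the SQL fundamentals listed above, here is a small, self-contained Python/sqlite3 sketch covering indexing, views, and query-plan inspection; the schema is hypothetical, and a production warehouse would of course use its own RDBMS.

```python
# Illustrative sketch: hypothetical schema, using sqlite3 for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("east", 120.0), ("west", 80.0), ("east", 45.5)],
)

# Indexing: speeds up filters on frequently queried columns.
cur.execute("CREATE INDEX idx_sales_region ON sales (region)")

# Views: expose a clean, reusable aggregation to BI tools.
cur.execute(
    "CREATE VIEW region_totals AS "
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)

# Performance tuning: check the query plan to confirm the index is used.
for row in cur.execute("EXPLAIN QUERY PLAN SELECT * FROM sales WHERE region = 'east'"):
    print(row)

print(cur.execute("SELECT * FROM region_totals").fetchall())
```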
