Data Engineer | AWS, Python & Snowflake | Ridgefield, CT (Hybrid) | $140K–$185K

🧠 Data Engineer

📍 Location: Ridgefield, Connecticut (Hybrid – 2–3 days onsite per week)
💼 Openings: 2
🏢 Industry: Information Technology / Life Sciences
🎓 Education: Bachelor’s degree in Computer Science, MIS, or related field (Master’s preferred)
🚫 Visa Sponsorship: Not available
🚚 Relocation: Available for the ideal candidate
💰 Compensation: $140,000 – $185,000 base salary + full benefits
🕓 Employment Type: Full-Time | Permanent

🌟 The Opportunity

Step into the future with a global leader in healthcare innovation — where Data and AI drive transformation and impact millions of lives.

As part of the Enterprise Data, AI & Platforms (EDP) team, you’ll join a high-performing group that’s building scalable, cloud-based data ecosystems and shaping the company’s data-driven future.

This role is ideal for a hands-on Data Engineer who thrives on designing, optimizing, and maintaining robust data pipelines in the cloud, while collaborating closely with architects, scientists, and business stakeholders across the enterprise.

🧭 Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines and integration frameworks to enable advanced analytics and AI use cases.

  • Collaborate with data architects, modelers, and data scientists to evolve the company’s cloud-based data architecture strategy (data lakes, warehouses, streaming analytics).

  • Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift), ensuring data quality, integrity, and security.

  • Implement data validation, monitoring, and troubleshooting processes to ensure high system reliability.

  • Work cross-functionally with IT and business teams to understand data requirements and translate them into scalable solutions.

  • Document architecture, workflows, and best practices to support transparency and continuous improvement.

  • Stay current with emerging data engineering technologies, tools, and methodologies, contributing to innovation across the organization.

🧠 Core Requirements

Technical Skills

✅ Hands-on experience with AWS data services such as Glue, Lambda, Athena, Step Functions, and Lake Formation.
✅ Strong proficiency in Python and SQL for data manipulation and pipeline development.
✅ Experience in data warehousing and modeling (dimensional modeling, Kimball methodology).
✅ Familiarity with DevOps and CI/CD practices for data solutions.
✅ Experience integrating data between applications, data warehouses, and data lakes.
✅ Understanding of data governance, metadata management, and data quality principles.

Cloud & Platform Experience

  • Expertise in AWS, Azure, or Google Cloud Platform (GCP) – AWS preferred.

  • Knowledge of ETL/ELT tools such as Apache Airflow, dbt, Azure Data Factory, or AWS Glue.

  • Experience with Snowflake, PostgreSQL, MongoDB, or other modern database systems.

Education & Experience

🎓 Bachelor’s degree in Computer Science, MIS, or related field
💼 5–7 years of professional experience in data engineering or data platform development
⭐ AWS Solutions Architect certification is a plus

🚀 Preferred Skills & Attributes

  • Deep knowledge of big data technologies (Spark, Hadoop, Flink) is a strong plus.

  • Proven experience troubleshooting and optimizing complex data pipelines.

  • Strong problem-solving skills and analytical mindset.

  • Excellent communication skills for collaboration across technical and non-technical teams.

  • Passion for continuous learning and data innovation.

💰 Compensation & Benefits

💵 Base Salary: $140,000 – $185,000 (commensurate with experience)
🎯 Bonus: Role-based variable incentive
💎 Benefits Include:

  • Comprehensive health, dental, and vision coverage

  • Paid vacation and holidays

  • 401(k) retirement plan

  • Wellness and family support programs

  • Flexible hybrid work environment

🧩 Candidate Snapshot

  • Experience: 5–7 years in data engineering or related field

  • Key Skills: AWS Glue | Python | SQL | ETL | CI/CD | Snowflake | Data Modeling | Cloud Architecture

  • Seniority Level: Mid–Senior

  • Work Arrangement: 2–3 days onsite in Ridgefield, CT

  • Travel: Occasional

🚀 Ready to power the future of data-driven healthcare?
Join a global data and AI team committed to harnessing the power of cloud and analytics to drive discovery, innovation, and meaningful impact worldwide.

Lead Rust Software Developer | Embedded Systems | Camden, NJ | $132K–$200K

⚙️ Lead Rust Software Developer

📍 Location: Camden, New Jersey (Fully Onsite – every other Friday off 🗓️)
🏢 Industry: Aerospace / Defense / Embedded Systems
🎓 Education: Bachelor’s or Master’s in Computer Science, Computer Engineering, or related field
💼 Experience Level: Mid–Senior (7–10 years)
🔒 Clearance Required: Active U.S. Department of Defense (DoD) Secret Clearance
🚫 Visa Sponsorship: Not available
🚚 Relocation: Available for the ideal candidate
💰 Compensation: $132,000 – $200,000 base salary + full benefits
🕓 Schedule: 9/80 (every other Friday off)
💼 Employment Type: Full-Time | Permanent

🌟 The Opportunity

A leading aerospace and defense innovator is seeking an experienced Lead Rust Software Developer to shape the next generation of mission-critical embedded systems.

This role offers the opportunity to spearhead Rust adoption across advanced programs supporting defense, C5, and cyber initiatives. You’ll provide technical leadership, architectural guidance, and hands-on expertise in developing high-assurance, memory-safe embedded software solutions.

If you’re passionate about cutting-edge systems, modern programming languages, and solving complex engineering challenges — this is your chance to make a measurable impact in national defense technology.

🧭 Key Responsibilities

  • Lead Rust software development efforts across multiple embedded product lines.

  • Champion the adoption of Rust best practices, idioms, and design patterns throughout the organization.

  • Collaborate with cross-functional hardware and software teams to define system interfaces, requirements, and design strategies.

  • Support all phases of the software development lifecycle — from requirements and design to implementation, integration, and test.

  • Migrate legacy codebases from C/C++ to Rust while improving maintainability and performance.

  • Develop and document software test plans, validation procedures, and technical specifications.

  • Perform root cause analysis and implement sustainable solutions for complex software issues.

  • Use modeling and simulation tools to support design, prototyping, and evaluation of embedded systems.

  • Contribute to continuous improvement of secure coding standards and software quality initiatives.

🧠 Required Skills & Experience

✅ U.S. citizenship with an active DoD Secret Clearance.
✅ 3+ years of hands-on experience developing production-grade Rust software.
✅ 8+ years of experience developing embedded systems software.
✅ Proficiency in C/C++, Python, and object-oriented design principles.
✅ Strong understanding of embedded real-time operating systems (VxWorks, Linux, Integrity).
✅ Experience developing software for mission-critical or defense systems.
✅ Excellent communication, documentation, and team collaboration skills.

💡 Preferred Skills

⭐ Experience converting legacy C/C++ codebases to Rust.
⭐ Knowledge of UML tools such as IBM Rhapsody or MagicDraw/Cameo.
⭐ Experience developing device drivers or board support packages (BSPs).
⭐ Background in information assurance, cybersecurity, or cryptography.
⭐ Familiarity with NSA Type 1 certification or DO-178 safety standards.
⭐ Deep understanding of Rust memory safety and secure software engineering.

💰 Compensation & Benefits

💵 Base Salary: $132,000 – $200,000 (based on experience & location)
💎 Benefits Include:

  • Comprehensive medical, dental, and vision coverage

  • 401(k) retirement plan with company match

  • Paid holidays and every other Friday off

  • Employee Assistance Program (EAP)

  • Relocation stipend for qualified candidates

  • Opportunities to work on groundbreaking defense technologies

🧩 Candidate Snapshot

  • Experience: 7–10 years total, with 3+ in Rust and 8+ in embedded systems

  • Focus Areas: Rust, Embedded Systems, C/C++, Cybersecurity, Real-Time Software

  • Clearance: Active DoD Secret (required)

  • Seniority Level: Mid–Senior

  • Work Arrangement: Fully onsite in Camden, NJ

  • Travel: Occasional

🚀 Why Join

You’ll be joining a high-impact engineering organization building the future of secure, mission-critical technology. This is a rare opportunity to lead Rust innovation within embedded defense systems while working alongside some of the brightest minds in aerospace engineering.

Your work will directly contribute to national security, system reliability, and next-generation defense capabilities — all within a collaborative, forward-thinking environment that values innovation and technical excellence.

🔹 Ready to lead the Rust revolution in defense software?
Join a team that’s redefining embedded system performance and security — one line of code at a time.

Lead Data Scientist | Houston, TX | $125K–$140K + Equity + Hybrid Flexibility

Lead Data Scientist

Location: Houston, TX (Onsite)
Level: Senior
Reports To: Director of Data Science
Salary Range: $125,000 – $140,000 (USD)
Benefits: Medical, Dental, Vision, Life Insurance, Retirement, Equity, Paid Time Off, Work From Home Flexibility

About the Role

We are seeking a Lead Data Scientist to spearhead transformative projects that drive impact across the organization. This is a hands-on leadership role where you’ll design advanced models, mentor a team of talented data scientists, and partner with senior stakeholders to solve complex challenges. If you thrive at the intersection of technology, business strategy, and innovation, this role offers the perfect platform to showcase your expertise.

What You’ll Do

  • Lead high-visibility projects that directly influence business strategy and decision-making.

  • Develop and deploy custom algorithms, models, and predictive analytics that unlock value from complex data sets.

  • Translate technical insights into clear, compelling stories for executive-level audiences.

  • Mentor and coach junior data scientists, elevating the technical capabilities of the team.

  • Research and implement cutting-edge machine learning techniques to improve operational performance.

  • Partner with cross-functional teams to identify opportunities, resolve data challenges, and optimize processes.

  • Stay at the forefront of emerging tools and vendors, recommending best-fit solutions.

What You Bring

  • 7+ years of professional experience in data science, with proven project leadership.

  • Strong background in machine learning, advanced statistical modeling, and SQL database management.

  • Experience in healthcare data, EHR systems, or financial/revenue cycle data preferred.

  • Demonstrated ability to design scalable solutions using AWS (certification required).

  • Hands-on expertise in Cogito and other data management platforms.

  • Exceptional communication skills – able to engage both technical teams and executive stakeholders.

  • A problem-solving mindset with the ability to thrive in fast-paced, multidisciplinary environments.

Interview Process

  1. Recruiter Interview

  2. Hiring Manager Interview

  3. Panel Interview

Why Join Us?

Here, your work won’t just be about numbers – it will be about impact. You’ll have the chance to combine advanced data science with real-world problem solving, shaping decisions that matter.

We celebrate growth, encourage innovation, and foster a culture of collaboration. You’ll be supported by mentors, inspired by peers, and empowered to chart your own career path. Our workplace has been consistently recognized as one of the Top Workplaces in the region, a testament to our values-driven culture and commitment to meaningful work.

Benefits at a Glance

  • Comprehensive medical, dental, and vision insurance

  • Life insurance and retirement plans

  • Equity opportunities

  • Generous paid time off

  • Flexible work-from-home options

 

HPC Platform Engineer | $71/hr Contract | Azure & Linux Expert | Collegeville, PA

HPC Platform Engineer

📍 Location: Collegeville, PA (Onsite)
📅 Contract Length: 6 Months
💵 Pay Rate: $71 per hour
💼 Employment Type: Contract | Mid-Senior Level
🛂 Visa Sponsorship: Not available
🚚 Relocation Assistance: Not available
👤 Openings: 1

About the Role

This is a 6-month contract opportunity for an experienced HPC Platform Engineer with strong expertise in Azure fundamentals, Linux administration, and high-performance computing (HPC) platforms. You’ll be working in a fast-paced environment, supporting critical HPC infrastructure, administering tools like Posit Workbench, and ensuring high availability and performance across systems.

If you thrive in solving complex technical challenges and enjoy collaborating with cross-functional teams, this role offers an exciting chance to make a significant impact.

Key Responsibilities

  • Provide technical expertise in HPC platform architecture, design, and support.

  • Administer and manage Posit Workbench, Connect, and Package Manager.

  • Oversee and maintain Linux infrastructure within the HPC ecosystem.

  • Manage and configure Slurm workload manager and Kubernetes clusters.

  • Collaborate with developers, system administrators, and stakeholders to optimize platform performance.

  • Troubleshoot and resolve complex technical issues related to HPC environments.

  • Ensure robust system monitoring, automation, and best practices are followed for operational efficiency.

Must-Have Skills

  • HPC Platform expertise

  • Azure Fundamentals

  • Posit Workbench, Connect & Package Manager Administration

  • Linux Administration

  • Slurm workload manager

  • Kubernetes

Good-to-Have Skills

  • Experience in the Life Sciences domain

  • Broader Azure Infrastructure knowledge

  • Python scripting skills

Additional Requirements

  • Strong problem-solving skills, with the ability to analyze and resolve complex issues.

  • Excellent communication skills to collaborate with technical and non-technical stakeholders.

  • Awareness of Posit Workbench Enterprise tool applications.

Ideal Candidate Profile

  • 7+ years of relevant IT and HPC experience.

  • Bachelor’s degree in Information Technology, Computer Science, or related field.

  • Proven ability to operate in high-demand environments with cross-functional collaboration.

Key Skills

HPC Platform | Microsoft Azure | Azure Fundamentals | Linux | Slurm | Kubernetes | Posit Workbench | Connect | Package Manager | Python (Preferred) | Life Sciences Domain Knowledge (Preferred)

 

AWS Bedrock Developer (GenAI Engineer) | $70/hr Contract | Dallas, TX | 6-Month Project

AWS Bedrock Developer

📍 Location: Dallas, TX (Onsite)
📅 Contract Length: 6 Months
💵 Pay Rate: $70 per hour
💼 Employment Type: Contract | Mid-Senior Level
🛂 Visa Sponsorship: Not available
🚚 Relocation Assistance: Not available
👤 Openings: 1

About the Role

We are seeking an experienced AWS Bedrock Developer / GenAI Engineer with a strong background in cloud computing and generative AI. This role will focus on transforming Contact Center applications using GenAI and leveraging AWS Bedrock to deliver scalable, innovative solutions.

You’ll work hands-on with AWS services, Terraform, and automation frameworks, helping to design, integrate, and optimize cutting-edge GenAI solutions in a mission-critical environment.

Key Responsibilities

  • Design and implement GenAI solutions using AWS Bedrock.

  • Lead transformation of Contact Center applications with generative AI capabilities.

  • Automate server and infrastructure provisioning using Terraform.

  • Develop and optimize AWS Lambda functions, Step Functions, and SSO integrations.

  • Integrate on-premises systems with AWS Bedrock for seamless enterprise adoption.

  • Collaborate with cross-functional teams to deliver scalable and secure architectures.

Must-Have Skills

  • 10+ years of IT experience.

  • 7+ years of AWS Cloud Computing expertise.

  • Proven hands-on experience with AWS Bedrock for GenAI solutions.

  • Strong experience with Terraform for infrastructure as code.

  • Expertise in AWS Lambda, Step Functions, and SSO integrations.

  • Experience integrating on-premises systems with AWS Bedrock.

Ideal Candidate Profile

  • Bachelor’s degree in Computer Science, Engineering, or related field.

  • Strong technical communicator, able to work with technical and business stakeholders.

  • Ability to design and implement scalable, secure, and innovative AI-driven solutions.

Key Skills

AWS Bedrock | GenAI | AWS Cloud | Terraform | AWS Lambda | Step Functions | SSO Integrations | Contact Center Transformation | Cloud Automation

⚡ This role offers the chance to work at the forefront of Generative AI innovation in the enterprise space, shaping next-generation solutions for large-scale applications.

 

Salesforce Solution Architect | Health Cloud & OmniStudio | Toronto, Canada (Hybrid)

Salesforce Solution Architect

📍 Location: Toronto, Ontario, Canada (Hybrid – 3 Days Onsite)
💼 Employment Type: Full-Time
💰 Salary Range: $105,200 – $175,300 USD (commensurate with experience)
🎁 Benefits: Full benefits package including health & wellbeing perks. No relocation assistance or bonus.
🛂 Visa Sponsorship: Not available
🔐 Security Clearance Required: No

Step Into Innovation – Architect the Future of Patient Services

Join a purpose-driven, tech-forward healthcare organization on a mission to redefine patient experience through data, AI, and bold innovation. We're seeking a Salesforce Solution Architect who will play a pivotal role in designing and implementing cutting-edge Salesforce Health Cloud solutions that support patient journeys, improve outcomes, and drive operational excellence.

As a technical leader on the Patient Services Architecture team, you’ll bridge complex business needs with robust, scalable technology. If you're excited about transforming healthcare through smart systems and human-first experiences — this is your opportunity to make a lasting impact.

What You’ll Do

  • Lead the architecture and implementation of Salesforce Health Cloud solutions tailored to large-scale Patient Support Programs (PSPs).

  • Define, document, and communicate technical designs and architecture blueprints to cross-functional stakeholders.

  • Drive seamless integration with platforms such as Snowflake, Informatica, AWS S3, and Python-based services, as well as healthcare partner services (e.g., Benefits Verification, Co-Pay).

  • Oversee solution design aligned with enterprise standards and regulatory compliance (GxP, HIPAA, GDPR).

  • Deliver Proof of Concepts (PoCs) to validate and iterate new ideas rapidly.

  • Continuously enhance system performance, simplify architecture, and identify reusable components.

  • Collaborate globally with product teams, developers, and enterprise architects.

  • Maintain documentation including C4 models and solution roadmaps.

What You Bring

Required Skills & Experience:

  • Bachelor’s degree in a relevant field.

  • Salesforce certifications (Application and/or System Architect) strongly preferred.

  • Proven hands-on experience with Salesforce Health Cloud, OmniStudio, OmniScripts, Data Raptors, and complex patient workflows.

  • Deep knowledge of FHIR integration, data privacy, and consent management.

  • Strong understanding of cloud platforms and APIs (Snowflake, AWS, Informatica, Python).

  • Demonstrated ability to lead architecture design, articulate technical vision, and influence cross-functional teams.

  • Experience with agile methodologies and DevOps best practices.

  • Excellent communication skills – both technical and business-facing.

Nice to Haves:

  • Industry knowledge in life sciences or pharma.

  • Familiarity with Veeva, Salesforce Marketing Cloud, Advanced Therapy Management.

  • Experience with Microsoft PowerBI/Tableau and AI/ML capabilities.

  • Experience using tools like AutoRabbit, GitHub, Terraform.

  • Understanding of C4 architecture modeling.

  • Data masking, anonymization, and encryption experience.

Why Join Us?

  • 💡 Lead Innovation – Influence major technical decisions in next-gen healthcare platforms.

  • 🌍 Global Impact – Your work will directly enhance patient experience and health outcomes.

  • 🚀 Career Growth – Endless opportunities to advance, laterally or vertically, across a global organization.

  • 💼 Comprehensive Benefits – Health and wellbeing programs for you and your family.

  • 🧠 Future-Ready Tech – Work with AI, cloud-native platforms, and advanced data analytics.

Our Commitment to Diversity & Inclusion

We value the unique perspectives each individual brings to our team. We are committed to building a workplace that reflects the diverse communities we serve. If you require accommodation at any stage of the recruitment process, please let us know – we’re happy to help.

Progress is powered by people. Be the architect of better.
Apply now and let’s build something extraordinary — together.

 

Sr. Director, AI & Data Science | $190K–$300K + Bonus | Wayzata, MN (Hybrid)

Sr. Director, AI & Data Science

📍 Location: Wayzata, Minnesota, United States (Hybrid – scheduled work from home days)
💼 Industry: Information Technology
📊 Category: Business Intelligence
📌 Job Type: Full-time | Onsite with WFH flexibility
💲 Compensation: $190,000 – $300,000 + bonus eligibility
📈 Relocation Assistance: Possible for the ideal candidate

About the Company

Our company’s size and scale allow us to make a positive global impact. We are a family company providing food, ingredients, agricultural solutions, and industrial products that are vital for living. With 160,000 colleagues across 70 countries, we connect farmers, customers, and families with essentials every day.

Job Purpose

The Sr. Director of AI & Data Science is a visionary leader responsible for driving high-impact AI applications across the business. This role partners with executives to identify opportunities where AI can provide strategic advantage, shaping how data science transforms decision-making, operations, and customer experience.

The leader will:

  • Champion responsible AI practices (governance, risk mitigation, ethics).

  • Deliver measurable business outcomes from AI investments.

  • Lead and scale a global team of data scientists (transitioning from regional to global scope).

Key Responsibilities

  • Partner with business leaders to identify and accelerate AI opportunities.

  • Translate complex AI concepts into business-focused narratives for executives.

  • Provide strategic oversight of AI product delivery (scope, timeline, budget).

  • Champion innovation, continuous improvement, and AI adoption across the business.

  • Develop a community of practice for global data scientists to drive reuse and alignment.

  • Build and maintain strategic partnerships and alliances to maximize value.

  • Lead, coach, and develop a team of 16+ data scientists in an inclusive culture.

  • Manage large budgets, ensure measurable ROI, and deliver against business value targets.

Minimum Qualifications

  • Bachelor’s in Data Science, Computer Science, Math, Engineering, or related field (or equivalent experience).

  • 8+ years of related work experience.

  • Experience with at least two machine learning approaches: supervised, unsupervised, reinforcement learning.

  • Proven ability to present complex AI solutions to non-technical audiences.

Preferred Qualifications

  • Master’s or PhD in Data Science, Computer Science, Math, Engineering, or related field.

  • 5+ years of leadership experience (managing teams of data scientists).

  • Experience driving enterprise-wide analytics adoption.

  • Familiarity with CI/CD pipelines, distributed computing frameworks, and AWS.

  • Background in software development/version control.

  • Strong business acumen and experience presenting to senior executives.

  • Experience in Agriculture or related industries is a plus.

Compensation & Benefits

💲 Base Salary: $190,000 – $300,000
🎯 Bonus Eligible: Yes
🏥 Comprehensive health & wellness benefits
🍼 14 weeks parental leave + prevention & wellness programs
🌎 Sick & Safe Leave: 1 hour for every 30 worked (up to 48 hours annually, unless otherwise required by law)

Candidate Profile

  • 7–10+ years’ experience in AI, Data Science, and Analytics.

  • Proven ability as a strategic champion to drive AI adoption at scale.

  • Leadership experience managing diverse data science teams.

  • Strong emotional intelligence (EQ) and collaborative leadership style.

  • Comfortable influencing senior executives and non-technical stakeholders.

Security Clearance Required: No
Visa Candidates Considered: No
✈️ Travel: Occasional

 

Cloud Engineer | Azure / AWS / Google Cloud | €4,200–€5,000 p/m | Eindhoven / Randstad (Hybrid)

Cloud Engineer

📍 Location: Eindhoven area or Randstad (Hybrid – 3 days office, 2 days home)
💼 Employment Type: Full-time
💵 Salary: €4,200 – €5,000 gross per month (€54,432 – €64,800 annually)
🎯 Level: Mid-Senior | 3–5 years’ experience

About the Role

Are you ready to make an impact at leading enterprise clients in the field of Cloud technology?
As a Cloud Engineer, you’ll help organizations accelerate their digital transformation by designing, building, and optimizing cloud solutions. You’ll move seamlessly between strategic thinking and hands-on implementation, working closely with consultants and engineers to deliver innovative solutions across diverse industries.

Key Responsibilities

  • Analyze client challenges and translate them into practical cloud solutions.

  • Design and implement cloud architectures in Azure, AWS, or Google Cloud.

  • Automate infrastructure and application management using IaC tools and CI/CD pipelines.

  • Improve existing cloud environments in terms of cost, security, performance, and manageability.

  • Coach clients and colleagues in adopting cloud technology and DevOps practices.

  • Build sustainable client relationships and identify opportunities for innovation.

What We’re Looking For

  • Bachelor’s or Master’s degree in Computer Science, Mathematics, or Electrical Engineering.

  • 3+ years’ experience as a Cloud Consultant or Engineer in enterprise or consultancy environments.

  • Deep expertise in at least one cloud platform (Azure, AWS, or Google Cloud).

  • Experience with cloud cost control, security, networking, and monitoring.

  • Strong scripting/automation skills (Terraform, ARM, Bicep, Python, or PowerShell).

  • Excellent communication skills and a solution-oriented, results-driven mindset.

Nice-to-Have:

  • Experience with infrastructure automation tools such as Chef, Ansible, or Puppet.

  • Knowledge of networking and storage within large-scale IT infrastructures.

  • Strong Dutch and English communication skills.

What We Offer

  • 💶 Salary: €4,200 – €5,000 gross per month

  • 🚘 Mobility budget: €600/month OR company lease car

  • 🎁 Extras: €115 monthly expense allowance

  • 🌴 26 vacation days

  • 🏥 Insurance: 50% contribution to health insurance

  • 🏦 Pension: 60% contribution to pension plan

  • 🎯 Bonus: Performance-based, tied to personal development & team success

  • 💻 Laptop & phone provided

  • 🎓 Itility Academy for hard & soft skills training

  • 🏋️ Gym access at the office

  • 🏡 Hybrid working model (3 office days / 2 home days)

Ideal Candidate

You’re passionate about technology and thrive in dynamic environments. You can bridge the gap between business and IT, communicate complex ideas clearly, and take ownership of solutions. With proven cloud expertise and a collaborative mindset, you’ll help deliver sustainable, future-proof cloud infrastructures.

👉 Ready to push boundaries in cloud technology and accelerate digital transformation? Apply now and be part of a team that goes one step beyond.

 

CRM / Salesforce Solution Architect – Health Cloud & OmniStudio | Patient Support Services | $150K–$175K | Morristown, NJ or Cambridge, MA

CRM/Salesforce Solution Architect – Patient Support Services – Health Cloud/OmniStudio

📍 Location: Morristown, NJ (Hybrid – 3 Days Onsite) or Cambridge, MA
💼 Employment Type: Full-Time
💲 Salary Range: $150,000 – $175,000 USD (commensurate with experience)
🎁 Benefits: Full healthcare, wellness programs, generous parental leave, retirement plans, and career development opportunities
🚚 Relocation: Not available
🛂 Visa Sponsorship: Not available
✈️ Travel: Occasional

Architect the Future of Patient Experience

If you’re a Salesforce CRM expert passionate about improving lives through better healthcare technology, this role offers the perfect opportunity to combine technical mastery with meaningful impact. As a CRM Solution Architect for Patient Support Services, you’ll design, lead, and deliver enterprise-grade solutions that transform how patients connect, receive care, and stay engaged throughout their healthcare journey.

In this high-visibility role, you’ll collaborate across business, technology, and partner teams to ensure solutions are scalable, secure, and fully aligned with organizational goals—while making a measurable difference for patients.

What You’ll Do

  • Lead Solution Design – Architect and implement Salesforce Health Cloud solutions for large-scale Patient Support Programs (PSPs)

  • Enhance Patient Journeys – Define technology strategies for case management, engagement tools, and integrated partner services (e.g., benefits verification, co-pay programs)

  • Integrate & Optimize – Oversee integrations with platforms such as Snowflake, Informatica, AWS S3, and Python-based services

  • Drive Best Practices – Ensure architecture aligns with enterprise standards, compliance requirements (HIPAA, GDPR, GxP), and security protocols

  • Proof of Concept Leadership – Run PoCs to validate solution feasibility, presenting findings to stakeholders

  • Improve Performance – Streamline architecture for reusability, scalability, and cost efficiency

  • Collaborate Globally – Work closely with enterprise architects, cross-functional teams, and project managers to deliver robust, compliant solutions

  • Documentation & Roadmaps – Maintain architecture diagrams, solution blueprints, and technology roadmaps

What You Bring

Must-Have Qualifications

  • 5+ years architecting Salesforce solutions with a focus on OmniStudio

  • Proven track record in Salesforce Health Cloud design and implementation

  • Strong experience with Integration Procedures, OmniScripts, and Data Raptors

  • In-depth understanding of FHIR data integrations, data privacy, and consent management

  • Bachelor’s degree in a relevant field

  • Exceptional communication skills for both technical and non-technical audiences

  • Knowledge of encryption, data masking, and anonymization techniques

  • Agile methodology experience and ability to work across global time zones

Preferred Experience

  • Salesforce certifications (Application Architect, System Architect)

  • Exposure to Marketing Cloud, Veeva, Service Cloud Voice, Advanced Therapy Management

  • Familiarity with AWS, Azure, Snowflake, Tableau, PowerBI, AI/ML solutions

  • Experience with DevOps tools like AutoRabbit, GitHub, Terraform

  • Background in life sciences or pharma data architecture

Why You’ll Love This Role

  • 💡 Innovative Environment – Work on cutting-edge CRM and cloud solutions that directly improve patient care outcomes

  • 🌍 Meaningful Impact – Your designs will help patients navigate complex healthcare systems with ease and confidence

  • 📈 Career Growth – Opportunities for advancement, cross-functional projects, and skill expansion

  • 🧑‍🤝‍🧑 Supportive Culture – Join a diverse, inclusive, and globally connected team

  • 🏥 Comprehensive Benefits – Full healthcare coverage, wellness programs, at least 14 weeks’ gender-neutral parental leave, and retirement plans

The Ideal Candidate

You’re a hands-on Salesforce architect with deep Health Cloud and OmniStudio experience, a passion for building patient-first solutions, and the ability to work seamlessly between business requirements and technical execution.

 

Senior / Lead Machine Learning Engineer | Python, PyTorch, AI | Fully Remote | $180,000–$215,000

Senior / Lead Machine Learning Engineer

🌍 Location: Fully Remote
💼 Employment Type: Full-time
💰 Compensation: $180,000 – $215,000 (base salary, depending on experience)
📊 Benefits: Full package included

About the Role

We’re seeking a Senior/Lead ML Engineer to drive the development of advanced enterprise AI and intelligent data applications. This is a hands-on role that combines machine learning, data engineering, and software development to deliver practical, production-ready solutions with measurable impact.

If you’re excited about tackling complex engineering challenges in high-standard environments, this role offers strong career growth opportunities, including senior technical leadership pathways.

Key Responsibilities

  • Lead platform upgrades to ensure products remain cutting-edge and effective.

  • Design and manage dynamic dashboards using Python SDKs to turn data into actionable insights.

  • Optimize data pipelines and access patterns for performance and scalability.

  • Troubleshoot and resolve runtime and performance challenges.

  • Architect robust, scalable, and user-friendly applications designed for long-term growth.

  • Collaborate closely with Product Managers to improve usability and ensure real-world impact.

What You Won’t Do

❌ Work in silos – this role requires versatility across ML, data systems, and software engineering.
❌ Focus solely on research without real-world implementation.

Tech Stack

  • Languages & Tools: Python (primary), Docker, Git

  • Libraries & Frameworks: pandas, numpy, scikit-learn, PyTorch

  • Systems & Processes: CI/CD pipelines, monitoring tools, testing frameworks

Requirements

✅ 4+ years of professional Python software engineering with experience in production ML deployment (beyond prototyping).
✅ Proven experience with the end-to-end ML lifecycle: model development → deployment → monitoring.
✅ Strong production systems background in rigorous engineering environments (Big Tech or top-tier startups preferred).
✅ Bachelor’s degree in Computer Science from a top 15 university (Ivy League, Stanford, MIT, CMU, etc.).
✅ U.S. Citizenship and ability to obtain a government security clearance.

Preferred Qualifications

  • Experience in defense-related applications.

  • Exposure to multiple programming languages and diverse tech stacks.

Soft Skills

  • Strong written and verbal communication.

  • Pragmatic approach with a focus on delivering incremental value.

  • Collaborative, with the ability to mentor and influence peers.

Candidate Profile – Not a Fit If

🚫 Job hopper (<2 years per role).
🚫 Focused mainly on research/data science without production deployment.
🚫 Strong theoretical ML background but lacking hands-on implementation.
🚫 No experience with CI/CD, monitoring, or scalable architecture.
🚫 Consulting/contract-heavy career history.

Compensation & Benefits

💰 Base Salary: $180,000 – $215,000
📦 Benefits: Comprehensive full package
🛫 Travel: Occasional, interview travel reimbursed
📍 Relocation: Not available

👉 Ready to shape the future of AI-driven enterprise applications? Apply now and step into a role where your engineering expertise drives real-world innovation.

 

Data Engineer | Azure, Databricks, Python, SQL, Spark | Hybrid – Netherlands (€3,500–€5,000/month)

Data Engineer

📍 Location: Eindhoven area or Randstad, Netherlands (Hybrid – 3 office days / 2 home days)
💼 Employment Type: Full-time
💵 Salary: €3,500 – €5,000 per month (€45,360 – €64,800 annually)
🎯 Experience Level: Mid-level | 2–3 years’ experience

About the Role

Do you love working with data — from digging into sources and writing clean ingestion scripts to ensuring a seamless flow into a data lake? As a Data Engineer, you’ll design and optimize data pipelines that transform raw information into reliable, high-quality datasets for enterprise clients.

You’ll work with state-of-the-art technologies in the cloud (Azure, Databricks, Fabric) to build solutions that deliver business-critical value. In this role, data quality, stability, and monitoring are key — because the pipelines you create will be used in production environments.

Key Responsibilities

  • Develop data connectors and processing solutions using Python, SQL, and Spark.

  • Define validation tests within pipelines to guarantee data integrity.

  • Implement monitoring and alerting systems for early issue detection.

  • Take the lead in troubleshooting incidents to minimize user impact.

  • Collaborate with end users to validate and continuously improve solutions.

  • Work within an agile DevOps team to build, deploy, and optimize pipelines.

Requirements

  • 🎓 Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.

  • 2–3 years of relevant experience in data ingestion and processing.

  • Strong knowledge of SQL, Python, and Spark.

  • Familiarity with container environments (e.g., Kubernetes).

  • Experience with Azure Data Factory, Databricks, or Fabric is a strong plus.

  • Experience with data model management and dashboarding (e.g., PowerBI) preferred.

  • Team player with strong communication skills in Dutch and English.

  • Familiarity with enterprise data platforms and data lakes is ideal.

What We Offer

  • 💶 Salary: €3,500 – €5,000 per month

  • 🌴 26 vacation days

  • 🚗 Lease car or mobility budget (€600)

  • 💻 Laptop & mobile phone

  • 💸 €115 monthly cost allowance

  • 🏦 50% employer contribution for health insurance

  • 📈 60% employer contribution for pension scheme

  • 🎯 Performance-based bonus

  • 📚 Training via in-house Academy (hard & soft skills)

  • 🏋️ Free use of on-site gym

  • 🌍 Hybrid work model (3 days in office, 2 days at home)

  • 🤝 Start with a 12-month contract, with the option to move to an indefinite contract after evaluation

Ideal Candidate

You are a hands-on data engineer who enjoys data wrangling and building robust pipelines. You take pride in seeing your code run smoothly in production and know how to troubleshoot quickly when issues arise. With strong technical skills in SQL, Python, and Spark, plus familiarity with cloud platforms like Azure, you’re ready to contribute to impactful enterprise projects.

👉 Ready to make data flow seamlessly and create business value? Apply now to join a passionate, innovation-driven team.

 

Senior Controls Engineer | Automation & PLC Programming | Waukesha, WI | $96K–$130K + Bonus

⚙️ Senior Controls Engineer

📍 Waukesha, WI | Onsite (with office space available)
📅 Full-time | Mid-Senior Level | Engineering – Electrical

💰 Salary: $96,900 – $130,000 + Bonus Eligible
Benefits: Full benefits package (Medical, Dental, Vision, Life, Retirement, PTO)
🚗 Travel: Occasional (including day trips to supplier or other U.S. sites)

🚀 About the Role

We are seeking an experienced Senior Controls Engineer to join our Advanced Manufacturing team. In this role, you’ll be responsible for designing, implementing, and scaling automated and semi-automated control systems that power high-volume manufacturing environments.

You’ll lead the development of PLC code to integrate subsystems like conveyors, robots, machine vision, MES, and third-party equipment. This role is perfect for a hands-on engineer with a passion for automation, Industry 4.0, and driving innovation in manufacturing.

🛠️ Key Responsibilities

  • Design and develop new or improved processes applying advanced engineering principles.

  • Build and maintain factory-level and machine-level control standards for internal teams and external partners.

  • Act as the subject matter expert for controls programming, writing/debugging PLC ladder logic and IPC software with organized, maintainable, and reusable code.

  • Implement ANSI/RIA/OSHA-compliant control-reliable safety systems for robots, gantries, and conveyors.

  • Design user-friendly HMI screens for complex machinery.

  • Partner with IT to connect manufacturing assets to enterprise networks.

  • Lead supplier engagements: create RFQs, participate in design reviews, and oversee machine acceptance testing.

  • Travel to supplier sites for equipment qualification and support build events.

  • Support legacy controls remediation through risk assessment and upgrades.

  • Mentor teams, drive process improvements, and act as a technical resource across engineering.

🎯 Required Qualifications

  • Bachelor’s degree in Engineering, Computer Science, or related discipline.

  • 7+ years of experience with Rockwell Logix.

✅ Preferred Qualifications

  • Expertise designing, programming, and validating controls in high-volume automated manufacturing (Rockwell Logix, GE RXi).

  • Strong knowledge of Industrial Communication Protocols and controls hardware/OT infrastructure.

  • Hands-on experience with machine safety circuits, safety PLCs, and automation systems.

  • Exposure to SCADA, Industry 4.0 platforms, OPC servers, and Historians.

  • SQL and scripting knowledge (Python/Java a plus).

  • Familiarity with Fanuc robotics and ERP systems (SAP preferred).

  • Strong communication skills in global/technical environments.

💡 Skills & Attributes

  • Works independently while providing mentorship to less experienced engineers.

  • Strong conceptualization, visualization, and problem-solving skills.

  • Ability to manage multiple projects, prioritize effectively, and deliver results.

  • Experience improving financial and operational performance through engineering activities.

  • Collaborative leader who can influence cross-functional teams.

📌 Interview Process

  • Recruiter screen

  • Hiring manager interview

  • Technical deep-dive interview

  • Final onsite with cross-functional stakeholders

🏆 Why Apply?

This is a high-impact role where you’ll help shape the future of advanced manufacturing and automation. If you’re passionate about solving complex automation challenges, working with cutting-edge robotics and controls systems, and driving efficiency across production, this role offers the platform to make a real difference.

 

Data Engineer – AI & Real Estate | Hybrid, Utrecht | €80K + 37 Vacation Days

Title: Data Engineer
Location: Utrecht, Netherlands (Hybrid – 3 days in office, 2 days remote)
Visa Support: Not available
Relocation Support: Not available

Compensation & Benefits

  • Annual Salary: €64,800 - €79,056 (€5,000 - €6,100 per month)

  • Bonuses: 13th-month bonus included

  • Vacation: 37 vacation days per year

  • Pension Plan: Premium pension scheme with only 1% employee contribution

  • Tech Essentials: Choose a laptop and mobile phone or receive a €30 monthly reimbursement

  • Commuting Support: €0.23 per km travel allowance or 100% reimbursed NS Business Card

  • Hybrid Work: €2.40 daily allowance for home working days

  • Professional Development: €1,500 annual training budget

  • Insurance: Discounts up to 10% on health insurance

Role Overview

As a Data Engineer, you will be responsible for designing, developing, and optimizing scalable data architectures to support AI applications. You will work closely with LLM engineers to build robust data pipelines, ensure secure data access, and bring innovative AI-driven solutions to life.

Key Responsibilities

  • Design and maintain scalable data architectures for AI applications

  • Build and manage data pipelines from diverse sources to a vector database in AWS

  • Implement role-based access control and data security measures

  • Monitor and optimize data processes using dashboards and logging tools

  • Present results to stakeholders and contribute to AI-driven innovations

  • Collaborate in Agile teams to deliver project milestones

Requirements

  • Strong expertise in SQL, Python, and cloud environments (preferably AWS)

  • Experience with structured and unstructured databases

  • Familiarity with vector databases, semantic search, and data orchestration tools

  • Understanding of Agile/Scrum methodologies

  • Fluent English communication skills

  • Experience in data architecture design, data governance, and integrating diverse data types

  • Must be residing in the Netherlands at the time of application

Nice-to-Have Skills

  • Familiarity with AWS services

  • Experience in the real estate sector

  • Dutch communication skills

Work Environment & Culture

  • Informal, family-like working atmosphere

  • Diverse teams with an inclusive culture

  • Hybrid working model (office & home balance)

  • Self-managing teams with freedom for innovation

 


Data Scientist (TS/SCI W/Poly) – Fort Meade, MD | $185K Salary

Job Title: Data Scientist, TS/SCI W/Poly – Fort Meade, MD - $185,000

Experience Level: Mid-senior
Experience Required: 3 Years
Education Level: Bachelor’s Degree
Job Function: Information Technology
Industry: Defense & Space
Compensation: Up to $185,000
Total Positions: 1
Relocation Assistance: Not available
Visa Sponsorship: Not available
Clearance Required: TS/SCI with Polygraph (Full Scope Poly preferred)

Position Overview

This role involves working in a dynamic environment, applying advanced data science techniques to address complex challenges. You will be part of a dedicated team providing data-driven solutions to support the Intelligence Community. This position emphasizes the development and integration of innovative analytical tools and techniques to meet evolving client needs.

If you are passionate about solving challenging problems, honing your technical expertise, and contributing to critical missions, this role offers excellent opportunities for professional growth.

Primary Responsibilities

  • Conduct advanced research and implement data science solutions to detect and classify objects using tools like Python, TensorFlow, and PyTorch.

  • Integrate diverse datasets into coherent databases and train foundation models to support mission objectives.

  • Fine-tune machine learning models to address specific intelligence questions.

  • Create clear documentation and present findings to key decision-makers.

  • Collaborate with stakeholders across government, industry, and academia to drive consensus and achieve shared goals.

  • Adapt to shifting priorities and assist with ad hoc tasks as needed.

Qualifications

Required:

  • Active TS/SCI clearance with Polygraph.

  • Bachelor’s degree in a technical field such as Computer Science, Mathematics, or Statistics.

  • A minimum of 3 years of experience in data science.

  • Proficiency in Python or R.

  • Experience with Machine Learning, Computer Vision, or Object Detection.

Preferred:

  • Experience with PyTorch, Keras, or analyzing overhead imagery.

  • Master’s degree in a technical field with at least 1 year of relevant experience.

Key Competencies

  • Strong analytical, problem-solving, and critical-thinking skills.

  • Excellent written, verbal, and graphical communication abilities.

  • Ability to work both independently and collaboratively in a fast-paced environment.

  • Proficiency with standard tools, including Microsoft Office and relevant network systems.

Note: This position requires U.S. Citizenship due to the nature of the work and the clearance requirements.

 

Data Scientist - TS/SCI W/Poly - McLean, VA | Up to $185,000

Job Title: Data Scientist, TS/SCI W/Poly - McLean, VA - $185,000

Experience Level: Mid-senior
Experience Required: 3 Years
Education Level: Bachelor’s Degree
Job Function: Information Technology
Industry: Defense & Space
Compensation: Up to $185,000
Total Positions: 1
Relocation Assistance: Not available
Visa Sponsorship: Not available
Clearance Required: TS/SCI with Polygraph (Full Scope Poly preferred)

Position Overview

This role involves working in a dynamic environment, applying advanced data science techniques to address complex challenges. You will be part of a dedicated team providing data-driven solutions to support the Intelligence Community. This position emphasizes the development and integration of innovative analytical tools and techniques to meet evolving client needs.

If you are passionate about solving challenging problems, honing your technical expertise, and contributing to critical missions, this role offers excellent opportunities for professional growth.

Primary Responsibilities

  • Conduct advanced research and implement data science solutions to detect and classify objects using tools like Python, TensorFlow, and PyTorch.

  • Integrate diverse datasets into coherent databases and train foundation models to support mission objectives.

  • Fine-tune machine learning models to address specific intelligence questions.

  • Create clear documentation and present findings to key decision-makers.

  • Collaborate with stakeholders across government, industry, and academia to drive consensus and achieve shared goals.

  • Adapt to shifting priorities and assist with ad hoc tasks as needed.

Qualifications

Required:

  • Active TS/SCI clearance with Polygraph.

  • Bachelor’s degree in a technical field such as Computer Science, Mathematics, or Statistics.

  • A minimum of 3 years of experience in data science.

  • Proficiency in Python or R.

  • Experience with Machine Learning, Computer Vision, or Object Detection.

Preferred:

  • Experience with PyTorch, Keras, or analyzing overhead imagery.

  • Master’s degree in a technical field with at least 1 year of relevant experience.

Key Competencies

  • Strong analytical, problem-solving, and critical-thinking skills.

  • Excellent written, verbal, and graphical communication abilities.

  • Ability to work both independently and collaboratively in a fast-paced environment.

  • Proficiency with standard tools, including Microsoft Office and relevant network systems.

Note: This position requires U.S. Citizenship due to the nature of the work and the clearance requirements.

 

Deployment Engineer – Network Solutions | Contract $65 per hour | San Francisco Bay Area

Job Title: Deployment Engineer – Network Solutions

Experience Level: Mid-senior
Experience Required: 5 Years
Education Level: Bachelor’s Degree
Job Function: Information Technology
Industry: Information Technology and Services
Pay Rate: $65 per hour
Total Positions: 1
Relocation Assistance: Not available
Visa Sponsorship: Not available

Location

San Francisco Bay Area with occasional travel to Mexico (once per quarter).

Languages: Bilingual in Spanish/English preferred, but not mandatory.

Duration: 6+ months

Position Overview

We are seeking a skilled Deployment Engineer with expertise in Mellanox network solutions. The role involves designing, implementing, and supporting robust network deployments, ensuring high-performance infrastructure for our clients. This position requires in-depth knowledge of Mellanox technologies, hands-on experience with InfiniBand and Ethernet solutions, and a strong understanding of networking protocols.

Occasional travel to Mexico is required to provide on-site support and ensure the success of deployment projects.

Key Responsibilities

  • Plan, design, and deploy network solutions using Mellanox hardware and technologies.

  • Configure, test, and optimize InfiniBand and Ethernet systems to meet client requirements.

  • Collaborate with clients to understand needs, propose tailored solutions, and offer best-practice recommendations.

  • Perform on-site installations, configurations, and troubleshooting.

  • Monitor and maintain network deployments, ensuring high availability and optimal performance.

  • Provide technical training and knowledge transfer to client teams for seamless operations.

  • Troubleshoot and resolve network issues while maintaining clear documentation and escalation protocols.

  • Work with cross-functional teams to streamline deployment processes and enhance efficiency.

  • Travel to client sites in Mexico as needed for project support.

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent experience).

  • Strong expertise with Cumulus OS, OSPF, BGP, and campus WAN (mandatory).

  • 3+ years of experience in network engineering or deployment, with a focus on Mellanox technologies.

  • Proficiency in Mellanox hardware and tools, including Mellanox OFED, SwitchX, and Spectrum.

  • Hands-on experience with InfiniBand and Ethernet solutions.

  • In-depth knowledge of networking protocols, configuration, and troubleshooting.

  • Bilingual proficiency in Spanish and English (preferred).

  • Strong problem-solving abilities and teamwork skills.

  • Willingness to travel to Mexico for project-related assignments.

Preferred Qualifications

  • Experience with automation tools and scripting (e.g., Python, Ansible) to streamline deployments.

  • Familiarity with other networking hardware and technologies (e.g., Cisco, Juniper).

  • Networking certifications such as CCNA or CCNP.

 

Data Engineer Consultant | Hybrid | Netherlands | €77K–€88K + €3K Bonus

Job Title: Data Engineer Consultant

Location: Netherlands (Hybrid - 2 days office, 3 days home)
Industry: Data Engineering
Compensation: €77,472 - €87,840 per year (€3,200 - €4,000 monthly)
Monthly Bonus: €3,000
Working Hours: Minimum 36 hours per week
Vacation Days: 25
Mobility Budget: €450 monthly
Visa Sponsorship: Not Available
Languages Required: Fluent Dutch and English
Relocation Assistance: Not Available

Job Description

As a Data Engineer Consultant, your primary responsibility is to prepare data for analytical or operational use. You will build data pipelines to bring together information from different source systems. You will integrate, consolidate, and clean the data before structuring it for use in analytical applications.

While working on challenging assignments with our clients, we also focus on your professional growth. We believe in helping you discover and unlock your potential through coaching, training, and sharing knowledge. This enables you to continue developing as a professional and helps us serve our clients even better.

Ideal Candidate

The ideal candidate should possess deep knowledge of data engineering and data modeling, both conceptually and dimensionally. You should have experience with various cloud architectures, such as Microsoft Azure or AWS, and be familiar with working in Scrum, Agile, and DevOps methodologies. You should be proficient in technologies such as Databricks, Spark Structured Streaming, and PySpark, and be capable of translating user requirements into appropriate solutions. Additionally, you should be skilled in analyzing source data and designing effective data models.

Key Responsibilities

  • Data Engineering: Build and maintain data pipelines, integrate data from various source systems, and structure it for analytical purposes.

  • Data Modeling: Apply conceptual and dimensional data modeling techniques to ensure data can be leveraged effectively.

  • Technology Application: Use Databricks, Spark, and PySpark to build robust data solutions.

  • Collaboration: Work within Scrum and Agile teams to develop data solutions that meet business needs.

Skills & Qualifications

Must-Have Skills

  • Data Engineering

  • Data Modeling

  • Scrum, Agile, DevOps methodologies

  • Python

  • MySQL

  • Microsoft Azure

  • Bachelor’s degree (HBO or equivalent)

  • Fluency in Dutch

Preferable Skills

  • Databricks

  • Microsoft Power BI

  • Azure Data Factory

  • Data Vault

  • Data Governance

  • Bachelor’s degree in Data Science (BSc) or Computer Science (BSc)

  • Data Engineering on Microsoft Azure (DP-203) certification

  • Proficiency in English

Soft Skills

  • Strong communication skills

  • Adaptability

  • Teamwork and collaboration

  • Problem-solving abilities

  • Self-driven and motivated

Experience

  • More than 5 years of experience working in complex data environments at top 500 companies.

Compensation & Benefits

  • Annual Salary: €77,472 - €87,840

  • Monthly Salary: €3,200 - €4,000

  • Monthly Bonus: €3,000

  • Mobility Budget: €450

  • Extra Benefits: Pension package, phone, expenses reimbursement, lease budget, and laptop.

Working Conditions

  • Hybrid Work: 2 days in the office, 3 days remote

  • Vacation: 25 days off per year

  • Visa Sponsorship: Not available

  • Relocation Assistance: Not available

  • Working Hours: Minimum of 36 hours per week

 


Enterprise Analytics Manager, Houston, TX - $140,000 - $180,000

Enterprise Analytics Manager, Houston, TX
Full-Time, Permanent
$140,000 - $180,000
10–20% Bonus + Benefits

MINIMUM QUALIFICATIONS

Education:

  • Bachelor’s Degree in Information Systems, Computer Science, Data Science, Business Analytics, or equivalent experience required.

Licenses/Certifications:

  • None required.

Experience / Knowledge / Skills:

  • Minimum of five (5) years of experience in analytics and information systems, including at least three (3) years in a leadership role.

  • Strong oral and written communication skills.

  • Customer-focused with a collaborative mindset.

  • Results-oriented, capable of thriving in a fast-paced environment and managing multiple projects.

  • Excellent interpersonal and time management skills.

  • Familiarity with business intelligence tools, data science tools, and dashboard software, including but not limited to:

    • Database and Query Languages: SQL, Nomad, Oracle, Vertica, Snowflake

    • Visualization Tools: Tableau (preferred), Spotfire, Sisense, Qlik, Microsoft Power BI

    • Data Visualization Server Admin Tools: Tableau Core, Data Management Server

    • Data Prep/Transformation Tools: Tableau Prep, Hadoop, Alteryx, Trifacta, Talend

    • Statistical Tools: R, SAS, SPSS, Matlab, Minitab

    • Data Science Tools: Python, R, SAS, Dataiku, DataRobot, Anaconda

PRINCIPAL ACCOUNTABILITIES

  • Align departmental initiatives with organizational goals to support strategic objectives.

  • Lead, motivate, and oversee teams responsible for data collection, modeling, analysis, and insights to drive value.

  • Foster transparent communication through departmental and cross-functional meetings with key stakeholders.

  • Manage resource needs and promote professional growth within the team by setting and tracking performance goals and development plans.

  • Oversee and prioritize analytics requests, strategically managing timelines, deliverables, and resource allocation.

  • Collaborate with executive leadership to deliver data-driven insights that align with enterprise goals and priorities.

  • Monitor and manage budgets related to analytics projects or operations.

  • Serve as an advisor to leadership on analytics strategy and data-driven decision-making.

  • Build and maintain relationships with internal and external customers, vendors, and regulatory agencies.

  • Oversee the design and delivery of analytics solutions that enable informed decision-making through operational metrics.

  • Provide timely updates and executive summaries to leadership and stakeholders.

  • Act as a mentor and coach, fostering a culture of collaboration and continuous improvement.

  • Establish quality controls and standards to meet organizational expectations and regulatory requirements.

  • Ensure compliance with policies, including security, access control, and data privacy standards (e.g., HIPAA).

  • Manage foundational analytics tools and systems, ensuring availability, growth, and support.

  • Administer and oversee the onboarding of data into the Enterprise Data Warehouse Platform, as well as third-party data submissions.

  • Support the growth of analytics capabilities across the organization by promoting transparency and usage of existing tools and products.

  • Mentor and train analytics user groups in tools such as Tableau, Business Objects, and SQL.

  • Encourage adoption of data-driven decision-making through existing analytical products.

OTHER EXPECTATIONS

  • Adhere to organizational policies, procedures, and standards related to quality, productivity, and resource management.

  • Promote professional growth through continuing education and skills development.

  • Serve as a mentor and resource for less experienced staff.

  • Demonstrate a commitment to personalized and efficient service for all stakeholders.

Other duties as assigned.


 


Senior Endpoint Engineer - Hybrid – Buffalo, NY - $120,000 - $155,000

Senior Endpoint Engineer - Hybrid – Buffalo, NY

 

Salary Range Transparency: 

Buffalo, NY: $120,000 - $155,000 annually. This range reflects the minimum and maximum base salary for the role in this location; actual compensation may vary based on experience, location, and performance.

 

Reporting To: 

Manager, End User Computing

 

 

Join a dynamic, people-centered culture where creativity and innovation drive success! Here, your work fuels a vibrant environment where teamwork and empowerment help achieve truly impactful results. Not only will you find deeper job satisfaction and better rewards, but you’ll also enjoy a balanced quality of life.

 

Our Corporate Group team is looking for a talented Senior Endpoint Engineer skilled in architecting, building, and supporting endpoint solutions. If you enjoy solving complex technical challenges, this is a fantastic opportunity to design secure, reliable, and scalable solutions for managing all endpoint devices across our global organization. In this role, you will enhance and simplify the end-user experience, working on a hybrid schedule from East Aurora, NY.

 

What You’ll Bring: 

- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).

- 5+ years of experience in endpoint engineering or a related discipline.

- Proficiency with Microsoft Intune and SCCM (System Center Configuration Manager) for managing endpoints.

- Expertise in OS deployment, configuration, and troubleshooting (Windows/macOS).

- Strong knowledge of hardware and driver management.

- Familiarity with browser management and security for Chrome and Edge.

- Experience in scripting languages such as PowerShell, Python, or Bash for automation.

- Knowledge of device compliance policies, conditional access, and Windows Autopilot.

 

What You’ll Do: 

- Design and implement OS deployment solutions, hardware management, and secure browser configurations aligned with our security standards.

- Lead the deployment and maintenance of Windows OS across devices, ensuring security compliance and optimal performance.

- Drive effective hardware and driver updates to support seamless functionality.

- Implement enterprise-level endpoint management with Microsoft Intune and SCCM, including patch management and application distribution.

- Automate tasks and processes using PowerShell, Python, and other scripting tools to enhance efficiency (see the Python sketch after this list).

- Collaborate with security teams to uphold endpoint security standards, including encryption and endpoint monitoring.

- Monitor and optimize endpoint health for an exceptional end-user experience.

- Partner with cross-functional teams to implement solutions successfully.
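
As a minimal sketch of the scripting automation referenced above (in Python, one of the languages named in the posting; the inventory file, its columns, and the baseline build number are all hypothetical), the snippet below flags endpoints whose OS build falls under a required baseline.

```python
# Hypothetical illustration only: read an exported device-inventory CSV and
# report endpoints below a minimum OS build. The file name, column names, and
# baseline value are assumptions, not details from the posting.
import csv

REQUIRED_BUILD = 22631  # assumed minimum acceptable Windows build


def find_noncompliant(path: str) -> list[dict]:
    """Return inventory rows whose 'os_build' is below the required baseline."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [row for row in rows if int(row["os_build"]) < REQUIRED_BUILD]


if __name__ == "__main__":
    for device in find_noncompliant("device_inventory.csv"):
        print(f"{device['hostname']}: build {device['os_build']} is below baseline")
```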

 

What We Offer: 

- Financial Rewards: Competitive pay, annual profit sharing, 401k match, and access to an Employee Stock Purchase Plan.

- Work/Life Balance: Flexible paid time off, holiday and parental leave programs, and relocation assistance.

- Health & Wellness: Comprehensive insurance (medical, dental, vision, life, disability) plus access to an Employee Assistance Plan.

- Professional Growth: Tuition assistance, mentorship, leadership development, and other growth programs.

- Inclusive Workplace: Engage in Employee Resource Groups, cultural events, and celebrations that foster a diverse and welcoming community.

 


Senior Data Engineer - London - Full Time Perm Hybrid - Base Salary - GBP £70,000 to £90,000

Senior Data Engineer

London

Full-Time, Permanent, Hybrid

Base Salary: £70,000 to £90,000

 

 

COMPANY DESCRIPTION

Skimlinks, a Connexity and Taboola company, drives e-commerce success for 50% of the Internet’s largest online retailers. We deliver $2B in annual sales by connecting retailers to shoppers on the most desirable retail content channels. As a pioneer in online advertising and campaign technology, Connexity is constantly iterating on products, solving problems for retailers, and building interest in new solutions.

We have recently been acquired by Taboola to build the first open-web source for publishers, connecting editorial content to product recommendations so that readers can easily buy products related to the stories they are reading.

Skimlinks, a Taboola company, is a global e-commerce monetization platform, with offices in LA, London, Germany, and NYC. We work with over 60,000 premium publishers and 50,000 retailers around the world helping content producers get paid commissions for the products and brands they write about.

 

About the role

We are looking for a Senior Data Engineer to join our team in London. We are creating a fundamentally new approach to digital marketing, combining big data with large-scale machine learning. Our data sets are on a truly massive scale - we collect data on over a billion users per month and analyse the content of hundreds of millions of documents a day.

As a member of our Data Platform team, your responsibilities will include:

  • Design, build, test and maintain high-volume Python data pipelines.

  • Analyse complex datasets in SQL.

  • Communicate effectively with Product Managers and Commercial teams to translate complex business requirements into scalable solutions.

  • Apply software development best practices.

  • Work independently in an agile environment.

  • Share your knowledge across the business and mentor colleagues in areas of deep technical expertise.

 

Requirements:

Here at Skimlinks we value dedication, enthusiasm, and a love of innovation. We are disrupting the online monetization industry and welcome candidates who want to be a part of this ambitious journey. But it is not just hard work; we definitely appreciate a bit of quirkiness and fun along the way.

  • An advanced degree (Bachelor’s or Master’s) in Computer Science or a related field.

  • Solid programming skills in both Python and SQL.

  • Proven work experience with Google Cloud Platform or other clouds, developing scalable batch (Apache Airflow) and streaming (Dataflow) data pipelines (see the Airflow sketch after this list).

  • Passion for processing large datasets at scale (BigQuery, Apache Druid, Elasticsearch).

  • Familiarity with Terraform, DBT & Looker is a plus.

  • A track record of initiatives around performance optimisation and cost reduction.

  • A commercial mindset and a passion for creating outstanding products.
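
As a minimal sketch of the batch-pipeline style named in the requirements (assuming Apache Airflow 2.4+ with the TaskFlow API; the DAG id, data, and processing steps are hypothetical), the example below wires a daily extract, aggregate, and load flow in Python.

```python
# Hypothetical illustration only: a small daily batch DAG using Airflow's
# TaskFlow API. The DAG id, sample data, and processing steps are assumptions,
# not details from the posting.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def daily_clicks_rollup():
    @task
    def extract() -> list[dict]:
        # A real pipeline might read a day's partition from BigQuery here.
        return [{"publisher": "example.com", "clicks": 120}]

    @task
    def aggregate(rows: list[dict]) -> int:
        return sum(row["clicks"] for row in rows)

    @task
    def load(total: int) -> None:
        # A real pipeline would write the result to a warehouse table.
        print(f"daily click total: {total}")

    load(aggregate(extract()))


daily_clicks_rollup()
```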

Voted “Best Places to Work,” our culture is driven by self-starters, team players, and visionaries. Headquartered in Los Angeles, California, the company operates sites and business services in the US, UK, and EU. We offer top benefits including Annual Leave Entitlement, paid holidays, competitive comp, team events and more!

  • Healthcare insurance & cash plans

  • Pension

  • Parental Leave Policies

  • Learning & Development Program (educational tool)

  • Flexible work schedules

  • Wellness Resources

  • Equity

We are committed to providing a culture at Connexity that supports the diversity, equity and inclusion of our most valuable asset, our people. We encourage individuality, and are driven to represent a workplace that celebrates our differences, and provides opportunities equally across gender, race, religion, sexual orientation, and all other demographics. Our actions across Education, Recruitment, Retention, and Volunteering reflect our core company values and remind us that we’re all in this together to drive positive change in our industry.

SKILLS AND CERTIFICATIONS

  • Airflow

  • Python

  • SQL

  • GCP

  • BigQuery

  • Data pipelines

To Apply Please Complete the Form Below