Data Engineer Remote Jobs

113 Results

1d

Senior Data Engineer

Oura - Helsinki, Uusimaa, Finland (Remote Hybrid)
ML, S3, agile, SQL, Git, Docker, TypeScript, Python, AWS

Oura is hiring a Remote Senior Data Engineer

Our mission at Oura is to empower every person to own their inner potential. Our award-winning products help our global community gain a deeper knowledge of their readiness, activity, and sleep quality by using their Oura Ring and its connected app. We've helped 2.5 million people understand and improve their health by providing daily insights and practical steps to inspire healthy lifestyles.

We are looking for a Senior Software Engineer to join our Data & ML Platform team. 
You’ll join a platform team focused on two major internal systems that support many internal Oura users and several production features:

  • Oura’s Datalake
  • Cloud MLOps systems

Concretely, you will be:

  • Building, operating, and improving systems to move, process, and store large amounts of data (terabyte to petabyte scale), leveraging tools such as AWS Kinesis, S3, Spark/Glue, Athena, dbt, Iceberg, Snowflake, Docker, workflow engines, and more.
  • Building components that support the handling of datasets, training, testing and release of new on-device and cloud-based ML models.
  • Collaborating independently with stakeholders across Data Science, Testing, Firmware, and Hardware to define and implement improvements and new functionality.
  • Supporting internal datalake consumers in their day-to-day work.
  • Writing mostly Python code, with some TypeScript.

We hope the following can be said about you:

  • 4+ years of experience developing and operating production systems.
  • Experience running, monitoring and debugging production systems at scale on a public cloud. We rely on AWS but experience with other cloud platforms counts too.
  • Experience with programming languages such as Python and TypeScript. Willingness to code in Python is required, since a large part of our codebase is Python.
  • Good architectural understanding of event-driven architectures, workflow engines, databases, and data warehouse systems.
  • Enjoy writing maintainable and well-tested code.
  • Follow common practices: version control (Git), issue tracking, unit testing, and agile development processes.
  • Generalist and pragmatic approach to development. Knowledge of various programming languages is a plus.
  • Ability to build infrastructure and components following best practices such as CI/CD and infrastructure as code.
  • Broad knowledge of software fundamentals, databases, warehouses and system design.
  • You can write well-structured, testable and high-performance code.
  • You are familiar with some of the following:
    • Workflow engines, Stream processing, Spark, Athena, SQL, dbt
  • You are self-motivated, proactive, and bring energy to the teams and projects you work on.
  • You are driven by value creation and overall impact.

Not required but potentially relevant:

  • Knowledge of ML, particularly with PyTorch.

As a company we are focused on improving the way we live our lives. From the people who use our product to the team behind it, we work to empower every person to own their inner potential.

Location

In this role you can work remotely in Finland as well as from our easy-to-reach Helsinki or Oulu offices.

  • If working remotely, availability to occasionally travel to the office is expected (for example for workshops and team gatherings)

Benefits

  • Competitive Salary
  • Lunch benefit
  • Wellness benefit
  • Flexible working hours
  • Collaborative, smart teammates
  • An Oura ring of your own
  • Personal learning & development program
  • Wellness Time Off

If this sounds like the next step for you, please send us your application as soon as possible, but by November 17th at the latest.

Oura is proud to be an equal opportunity workplace. We celebrate diversity and are committed to creating an inclusive environment for all employees. Individuals seeking employment at Oura are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, socioeconomic status, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws. We will not tolerate discrimination or harassment based on any of these characteristics.

See more jobs at Oura

Apply for this job

2d

Data Engineer (Remote)

Experian - Costa Mesa, CA (Remote)
Bachelor's degree, Terraform, SQL, mobile, Python, AWS

Experian is hiring a Remote Data Engineer (Remote)

Job Description

We are thrilled to announce that NeuroID has been acquired by Experian, a global leader in identity verification and fraud prevention. Experian's solutions helped clients avoid an estimated $15 billion in fraud losses last year. Since 2023, NeuroID has partnered with Experian to provide the latest behavioral analytics, combating identity theft and advanced fraud attacks. Now, as part of Experian's CrossCore® on the Experian Ascend Technology Platform™, we continue to lead the charge in proactive fraud detection and seamless user experiences. Be part of a team that's pioneering the future of fraud prevention!

NeuroID is looking for a Senior Data Engineer to manage and transform our data pipeline. We are a Snowflake and DBT shop with lots of semi-structured data. The job combines aspects of both product and research so there are several projects and operational aspects of the job.

NeuroID detects fraud by analyzing how people type data on web forms and mobile apps (how cool is that?). We are powered by data, and you will be at the heart of what we do.

You will report to the NeuroID Chief Data Science Officer.

You will have the opportunity to:

  • Handle the daily management of data pipeline using DBT
  • Implement changes to Snowflake guided by product changes and research needs
  • Monitor and make changes to manage pipeline costs
  • Troubleshoot and resolve data, system, and performance issues

Qualifications

Your background:

  • 3+ years of experience as a Data/Analytics Engineer using DBT, including work with semi-structured data, custom macros, incremental models, and job scheduling
  • Experience working with Snowflake or other cloud data warehouses. Understand clones, pipes, external stages, and optimizing queries.
  • Terraform, Python, and AWS experience is preferred
  • Very comfortable with modern SQL (CTE, window functions)
  • Bachelor's Degree in Data Science/Analytics, Computer Science, Information Systems, or Math or an equivalent combination of education and experience
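The "modern SQL (CTE, window functions)" bar above can be illustrated with a small, self-contained sketch. The table and column names below are hypothetical, and Python's built-in sqlite3 stands in for a cloud warehouse like Snowflake purely to keep the example runnable.

```python
import sqlite3

# In-memory database standing in for a warehouse table (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id TEXT, ts INTEGER, amount REAL);
INSERT INTO events VALUES
  ('a', 1, 10.0), ('a', 2, 25.0), ('b', 1, 5.0), ('b', 3, 7.5);
""")

# A CTE ("recent") narrows the input; a window function ranks each
# user's events by time so we can keep only the latest row per user --
# the classic dedup pattern in incremental warehouse models.
rows = conn.execute("""
WITH recent AS (
  SELECT user_id, ts, amount
  FROM events
  WHERE ts >= 1
),
ranked AS (
  SELECT user_id, ts, amount,
         ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn
  FROM recent
)
SELECT user_id, amount FROM ranked WHERE rn = 1 ORDER BY user_id
""").fetchall()

print(rows)  # latest amount per user: [('a', 25.0), ('b', 7.5)]
```

The same `ROW_NUMBER() OVER (PARTITION BY … ORDER BY …)` shape is what a DBT incremental model typically uses to deduplicate late-arriving records.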

Benefits/Perks:

  • Great compensation package and bonus plan
  • Core benefits, including medical, dental, vision, and matching 401K
  • Flexible work environment, ability to work remotely, hybrid, or in-office
  • Flexible time off, including volunteer time off, vacation, sick, and 12-paid holidays

See more jobs at Experian

Apply for this job

TRUCKING PEOPLE is hiring a Remote Junior AI Data Engineer (Remote)

Junior AI Data Engineer (Remote) - TRUCKING PEOPLE - Career Page

See more jobs at TRUCKING PEOPLE

Apply for this job

4d

Senior Data Engineer, Streaming

Tubi - San Francisco, CA (Remote)
ML, S3, Scala, SQL, API, C++, AWS

Tubi is hiring a Remote Senior Data Engineer, Streaming

Join Tubi (www.tubi.tv), Fox Corporation's premium ad-supported video-on-demand (AVOD) streaming service leading the charge in making entertainment accessible to all. With over 200,000 movies and television shows, including a growing library of Tubi Originals, 200+ local and live news and sports channels, and 455 entertainment partners featuring content from every major Hollywood studio, Tubi gives entertainment fans an easy way to discover new content that is available completely free. Tubi's library has something for every member of our diverse audience, and we're committed to building a workforce that reflects that diversity. We're looking for great people who are creative thinkers, self-motivators, and impact-makers looking to help shape the future of streaming.

About the Role:

With such a large catalog of content, data and machine learning are fundamental to Tubi's success, and Tubi's Data Platform is the cornerstone of all data and ML use. At Tubi, you will join a stellar team of engineers with a passion for solving the most challenging problems using cutting-edge technology. In this Lead Data Engineering role, you will be expected to be a hands-on leader, leading by example as you and your team build out real-time systems to handle data at massive scale. You will enable machine learning engineers to iterate and experiment faster than ever before. You will help data scientists take ideas to production in days or weeks, not months or years. And you will build tools to enable data analysis and modeling for even the least tech-savvy colleagues. In short, you will enable Tubi to be truly data-driven.

Responsibilities: 

  • Handle the collection and processing of large-scale raw data.
  • Build low-latency, low-maintenance data infrastructure that powers the whole company.
  • Develop and enhance our analytics pipeline by creating new tools and solutions that make data consumption intuitive for stakeholders.
  • Construct and improve infrastructure for extracting, transforming, loading, and cleaning data from diverse sources using APIs, SQL, and AWS technologies.
  • Improve data quality by building tools, processes, and pipelines to enforce, check, and manage data quality at large scale.
  • Implement CI/CD pipelines for data operations, ensuring efficient and smooth deployment of data models and applications.
  • Address ad hoc data requests and core pipeline tasks.

Your Background:

  • 5+ years of experience building scalable batch and streaming data pipelines (Spark or Flink)
  • 3+ years of experience designing and implementing pipelines for ETL and data cleaning from a wide variety of sources using APIs, SQL, Spark, and AWS technologies.
  • Greenfield data warehouse modeling experience
  • Strong knowledge of Streaming, distributed databases, and cloud storage (e.g., S3).
  • Strong experience in JVM language (Scala is not required, but preferred)
  • Prior experience with Kafka, Kinesis, or equivalent.
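Streaming pipelines of the kind listed above often reduce to windowed aggregations over an unbounded event stream. The sketch below is a minimal pure-Python illustration of tumbling-window counts, an assumption-level example only; a production system at this scale would use Spark Structured Streaming or Flink, as the posting notes.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Count events per (key, window) over a stream of (timestamp, key).

    Windows are tumbling: fixed, non-overlapping intervals of
    `window_size` time units, keyed by their start timestamp.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(key, window_start)] += 1
    return dict(counts)

# Hypothetical example: per-title view counts in 10-second windows.
stream = [(1, "movie"), (3, "movie"), (12, "movie"), (13, "news")]
print(tumbling_window_counts(stream, 10))
# {('movie', 0): 2, ('movie', 10): 1, ('news', 10): 1}
```

Real stream processors add what this sketch omits: event-time watermarks for late data, checkpointed state, and exactly-once sinks.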

Pursuant to state and local pay disclosure requirements, the pay range for this role, with final offer amount dependent on education, skills, experience, and location is listed annually below. This role is also eligible for an annual discretionary bonus, long-term incentive plan, and various benefits including medical/dental/vision, insurance, a 401(k) plan, paid time off, and other benefits in accordance with applicable plan documents.

California, New York City, Westchester County, NY, and Seattle, WA Compensation

$164,000 to $234,000 / year + Bonus + Long-Term Incentive Plan + Benefits

Colorado and Washington (excluding Seattle, WA) Compensation

$147,000 to $210,000 / year + Bonus + Long-Term Incentive Plan + Benefits

#LI-MQ1


Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of all US employee benefits. The following distinctions outline the differences between the Tubi and FOX benefits:

  • For US-based non-exempt Tubi employees, the FOX Employee Benefits summary accurately captures the Vacation and Sick Time.
  • For all salaried/exempt employees, in lieu of the FOX Vacation policy, Tubi offers a Flexible Time off Policy to manage all personal matters.
  • For all full-time, regular employees, in lieu of FOX Paid Parental Leave, Tubi offers a generous Parental Leave Program, which allows parents twelve (12) weeks of paid bonding leave within the first year of the birth, adoption, surrogacy, or foster placement of a child. This time is 100% paid through a combination of any applicable state, city, and federal leaves and wage-replacement programs in addition to contributions made by Tubi.
  • For all full-time, regular employees, Tubi offers a monthly wellness reimbursement.

Tubi is proud to be an equal opportunity employer and considers qualified applicants without regard to race, color, religion, sex, national origin, ancestry, age, genetic information, sexual orientation, gender identity, marital or family status, veteran status, medical condition, or disability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. We are an E-Verify company.

See more jobs at Tubi

Apply for this job

4d

Data Engineer - AWS

Tiger Analytics - Hartford, Connecticut, United States (Remote)
S3, Lambda, Airflow, SQL, Design, AWS

Tiger Analytics is hiring a Remote Data Engineer - AWS

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Engineering, Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Snowflake.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
Requirements:

  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience in designing and building data pipelines
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Strong experience with Databricks and PySpark for data processing and analytics.
  • Solid understanding of data modeling, database design principles, and SQL and Spark SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

4d

Data Engineer Co-op

Semios - Vancouver, British Columbia, Canada (Remote Hybrid)
SQL

Semios is hiring a Remote Data Engineer Co-op

Who we are:

We are a bunch of people who really care about agriculture, food and the challenges facing farming. We want to help farmers with data driven decision making to help nature feed a growing population. Join our team of expert engineers, agronomists, entomologists, crop researchers, and data scientists who are continually conducting research to help drive innovation in agriculture.

Semios is a market leader in leveraging the internet-of-things (IoT) and big data to improve the sustainability and profitability of specialty crops. With 500 million data points being reported by our sensors every day, we leverage our big data analytics, such as in-depth pest and disease modeling, to empower tree fruit and tree nut growers with decision-making tools to minimize resources and risks.

Our innovative work has received several industry awards.

One of our partners produced this short video which shows what we do and our positive environmental impact.

We know our journey is only achievable by having a great team that shares ideas, tries new things and learns as we go.

Who you are:

Motivated by meaningful work, you are looking for a collaborative team environment with the opportunity to learn and grow as you take the initiative to try new things, starting in January 2025.

As a Data Engineer Co-op, you will be part of building the most integrated data platform in agriculture, which means joining information from many sources. Your job is to build pipelines and supporting tools that combine and deliver the data to customers in a timely manner.

What you will do:

  • You'll work with our internal Data Engineers and Data Scientists to find the most efficient way to ingest, extract, and process new data sources.
  • Look for opportunities to reduce platform costs and optimize queries.
  • Along the way, you'll become an automation guru, play a role in our big data platform, and our new AI tooling; deliver tangible results by the end of your term.
  • You’ll gain wide exposure to SQL, DBT, Dagster, and time-series databases like Google BigQuery and Postgres.
  • You’ll help monitor the health of our data platform using Mode and Grafana. 

We want you to succeed, so you will need:

  • You're working towards a degree in Engineering, Computer Science, Math or similar fields.
  • You have experience in SQL.
  • You're organized, independent, and detail-oriented. You love to finish a project knowing you've been thorough and delivered the goods.
  • You have some exposure to cloud computing, emerging AI technologies, and databases.
  • You are interested in contributing to pipelines in a large-scale data environment.
  • You're excited to learn and understand agriculture and what drives the food supply that feeds our planet.

Salary Range: $18.00 to $22.00 per hour.

Please note that the pay rate offered may vary based on factors including but not limited to knowledge, skills and experience, as well as business and organizational needs.

  • Sleep better, knowing you're making the world a better place through more sustainable food production.
  • Collaborative and casual work environment.
  • Opportunity to learn and make an impact by working on meaningful projects.
  • Tech-focused office location, convenient to transit and bike paths.
  • You enjoy the hybrid working environment - the flexibility of working from home some days and the personal connections built from working in the office.

Semios welcomes all applicants regardless of race, gender, orientation, sexual identity, economic class, ability, disability, age, religious beliefs or disbeliefs, or status. We believe that different perspectives and backgrounds are what make a company flourish and we welcome everyone.

See more jobs at Semios

Apply for this job

6d

Data Engineer (F/H)

ASI - Nantes, France (Remote)
S3, agile, NoSQL, Airflow, SQL, Azure, API, Java, C++

ASI is hiring a Remote Data Engineer (F/H)

Job Description

For accessibility and clarity, gendered terms in this posting refer to people of all genders.

Simon, head of the Nantes Data team, is looking for a Data Engineer to set up, integrate, develop, and optimize data pipeline solutions in cloud and on-premise environments for our client projects.

Within a dedicated team, working mostly in an agile context:

  • You help write technical and functional specifications
  • You are fluent with structured and unstructured data formats and know how to manipulate them
  • You connect ETL/ELT solutions to data sources
  • You design and build data transformation and enrichment pipelines, and orchestrate their execution
  • You take charge of data mediation development
  • You ensure data pipelines are secured
  • You design and build APIs that expose the enriched data
  • You design and implement BI solutions
  • You help write the functional and technical specifications for data flows
  • You define test and integration plans
  • You handle corrective and evolutionary maintenance
  • You address data quality issues

Depending on your skills and interests, you will work with one or more of the following technologies:

  • The data ecosystem, notably Microsoft Azure
  • Languages: SQL, Java
  • SQL and NoSQL databases
  • Cloud storage: S3, Azure Blob Storage…
  • ETL/ESB and other tools: Talend, Spark, Kafka, NiFi, Matillion, Airflow, Data Factory, Glue...

By joining ASI,

  • You will work in a company with flexible internal practices backed by an attentive HR policy (remote-work agreement of 3 days/week, "parenthesis" leave agreement…)
  • You can take part in (or host, if you feel like it) our many rituals and internal events (geek lunches, dej'tech) and external events (DevFest, Camping des Speakers…)
  • You will join a company soon to be recognized as a "société à mission", Team GreenCaring rather than GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable mobility allowance agreement…)

Qualifications

You have a higher-education background in computer science, mathematics, or Big Data, at least 3 years of experience in data engineering, and successful hands-on experience building structured and unstructured data pipelines.

  • Committed to the quality of what you deliver, you are rigorous and organized in your work.
  • With a solid technology culture, you regularly keep up to date with new developments.
  • A good level of English, both written and spoken, is recommended.

Eager to join a company that reflects who you are, you recognize yourself in our values of trust, listening, enjoyment, and commitment.

The salary offered for this position is between €36,000 and €40,000, depending on experience and skills, while respecting pay equity within the team.

With equal qualifications, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

6d

Data Engineer Internship (F/H)

ASI - Nantes, France (Remote)
S3, agile, Scala, NoSQL, Airflow, MongoDB, Azure, Scrum, Java, Python

ASI is hiring a Remote Data Engineer Intern (F/H)

Job Description

For accessibility and clarity, gendered terms in this posting refer to people of all genders.

To meet our clients' needs and continue developing our Data expertise, we are looking for a Data Engineer intern.

As part of the Nantes Data team, you will join a project under the supervision of an expert; day to day:

  • You have a dedicated tutor who follows your progress
  • You help develop an end-to-end data processing chain
  • You work on descriptive/inferential or predictive analysis
  • You contribute to technical specifications
  • You learn the Agile Scrum and W-cycle methodologies
  • You build skills in one or more of the following technology environments:
    • The Data ecosystem: Spark, Hive, Kafka, Hadoop…
    • Languages: Scala, Java, Python…
    • NoSQL databases: MongoDB, Cassandra…
    • Cloud storage: S3, Azure…
    • ETL/orchestration tools: Airflow, Data Factory, Talend...

By joining ASI,

  • You will join a company soon to be recognized as a "société à mission", Team GreenCaring rather than GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable mobility allowance agreement…)
  • You will join ASI's various expert communities to share best practices and take part in continuous improvement initiatives.

Qualifications

Currently completing a master's-level degree (Bac+5, engineering school or university) in computer science, mathematics, or Big Data, you are looking for a final-year internship of 4 to 6 months.

  • Respect and commitment are core to your values.
  • Passionate about data, you are rigorous, and your interpersonal skills help you integrate easily into a team.

The internship is expected to lead to a concrete offer of a permanent (CDI) position.

Eager to join a company that reflects who you are, you recognize yourself in our values of trust, listening, enjoyment, and commitment.

With equal qualifications, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

6d

Sr. Data Engineer (Databricks)

Fluent - Toronto, Ontario, Canada (Remote)
UI

Fluent is hiring a Remote Sr. Data Engineer (Databricks)

Fluent is building the next-generation advertising network, Partner Monetize & Advertiser Acquisition. Our vision is to build an ML/AI-first network of advertisers and publishers working toward a common objective: elevating relevancy in e-commerce for everyday shoppers.

As a Data Engineer you will bring your Databricks pipeline expertise to execute on building data products to power Fluent’s business lines.  These data products will be the foundation for sophisticated data representation of customer journeys and marketplace activity. 

You are known as a strong and efficient IC Data Engineer, able to assist the Data Architect in vetting the translation of an Enterprise Data Model into physical data models and pipelines. You are familiar with the Databricks medallion architecture and know how to work backwards from your enterprise models of Gold Data Products. You are considered an expert in Spark.

You will work with your counterparts to build and operate high-impact data solutions.  This role is fully Remote in the United States or Canada, with occasional travel to NYC. 

Fluent is looking for an experienced Data Engineer, who thrives in writing robust code in the Databricks ecosystem.   

What You’ll Do  

  • The majority of the role is software engineering: tables, views, Spark jobs, and orchestration within the Databricks environment, following an enterprise data model design. You will help elevate standards across testing, code repositories, naming conventions, etc.
  • Develop, deploy, and manage scalable pipelines on Databricks, ensuring robust integration with a Feature Store leveraging online tables for machine learning models. 
  • Investigate and leverage Databricks’ capabilities to implement real-time data processing and streaming, potentially using Spark Streaming, Online Tables, Delta Live Tables. 
  • Contribute and maintain the high quality of the code base with comprehensive data observability, metadata standards, and best practices. 
  • Partner with data science, UI and reporting teams to understand data requirements and translate them into models. 
  • Keep track of emerging tech and trends within the Databricks ecosystem 
  • Share your knowledge by giving brown bags, tech talks, and evangelizing appropriate tech and engineering best practices. 
  • Empower internal teams by providing communication on architecture, target gold tables, execution plans, releases and training. 
Qualifications:

  • Bachelor's or Master's degree in Computer Science
  • 5+ years of industry experience in Data Engineering, including expertise in Spark and SQL. 
  • 2+ years of experience with Databricks environment  
  • Nice to have: Familiarity with real-time ML systems within Databricks will be very beneficial 
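The medallion architecture mentioned above refines data in layers, bronze (raw) to silver (validated) to gold (business-level). The sketch below illustrates the idea in pure Python with hypothetical field names; in Databricks these layers would be Delta tables populated by Spark jobs, not in-memory lists.

```python
# Bronze: raw, as-ingested records (may contain malformed rows).
bronze = [
    {"user": "u1", "event": "click", "value": "3"},
    {"user": "u1", "event": "click", "value": "oops"},  # malformed value
    {"user": "u2", "event": "view", "value": "1"},
]

def to_silver(rows):
    """Silver: validated, typed records; malformed rows are dropped."""
    out = []
    for r in rows:
        try:
            out.append({**r, "value": int(r["value"])})
        except ValueError:
            continue  # in practice, route bad rows to a quarantine table
    return out

def to_gold(rows):
    """Gold: a business-level aggregate, e.g. total value per user."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0) + r["value"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'u1': 3, 'u2': 1}
```

The point of the layering is that each stage is independently testable and replayable: gold tables can be rebuilt from silver, and silver from bronze, without re-ingesting from sources.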

At Fluent, we like what we do and we like who we do it with. Our team is a tight-knit crew of go-getters; we love to celebrate our successes! In addition we offer a fully stocked kitchen, catered breakfast and lunch, and our office manager keeps the calendar stocked with activity-filled events. When we’re not eating, working out, or planning parties, Fluent folks can be found participating in recreational sports leagues, networking with She Runs It, and bonding across teams during quarterly outings to baseball games, fancy dinners, and pizza-making classes. And we have all the practical benefits, too…

  • Competitive compensation
  • Ample career and professional growth opportunities
  • New Headquarters with an open floor plan to drive collaboration
  • Health, dental, and vision insurance
  • Pre-tax savings plans and transit/parking programs
  • 401K with competitive employer match
  • Volunteer and philanthropic activities throughout the year
  • Educational and social events
  • The amazing opportunity to work for a high-flying performance marketing company!

Salary Range: $160,000 to $180,000 - The base salary range represents the low and high end of the Fluent salary range for this position. Actual salaries will vary depending on factors including but not limited to location, experience, and performance.

Fluent participates in the E-Verify Program. As a participating employer, Fluent, LLC will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS), with information from each new employee’s Form I-9 to confirm work authorization. Fluent, LLC follows all federal regulations including those set forth by The Office of Special Counsel for Immigration-Related Unfair Employment Practices (OSC). The OSC enforces the anti-discrimination provision (§ 274B) of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b.

See more jobs at Fluent

Apply for this job

6d

Data Engineer (Databricks)

Fluent - Toronto, Ontario, Canada (Remote)
UI

Fluent is hiring a Remote Data Engineer (Databricks)

Fluent is building the next-generation advertising network, Partner Monetize & Advertiser Acquisition. Our vision is to build an ML/AI-first network of advertisers and publishers working toward a common objective: elevating relevancy in e-commerce for everyday shoppers.

As a Data Engineer you will bring your Databricks pipeline expertise to execute on building data products to power Fluent’s business lines.  These data products will be the foundation for sophisticated data representation of customer journeys and marketplace activity. 

You are known as a strong and efficient IC Data Engineer, able to assist the Data Architect in vetting the translation of an Enterprise Data Model into physical data models and pipelines. You are familiar with the Databricks medallion architecture and know how to work backwards from your enterprise models of Gold Data Products. You are considered an expert in Spark.

You will work with your counterparts to build and operate high-impact data solutions.  This role is fully Remote in the United States or Canada, with occasional travel to NYC. 

Fluent is looking for an experienced Data Engineer, who thrives in writing robust code in the Databricks ecosystem.   

What You’ll Do  

  • The majority of the role is software engineering: tables, views, Spark jobs, and orchestration within the Databricks environment, following an enterprise data model design. You will help elevate standards across testing, code repositories, naming conventions, etc.
  • Develop, deploy, and manage scalable pipelines on Databricks, ensuring robust integration with a Feature Store leveraging online tables for machine learning models. 
  • Investigate and leverage Databricks’ capabilities to implement real-time data processing and streaming, potentially using Spark Streaming, Online Tables, Delta Live Tables. 
  • Contribute and maintain the high quality of the code base with comprehensive data observability, metadata standards, and best practices. 
  • Partner with data science, UI and reporting teams to understand data requirements and translate them into models. 
  • Keep track of emerging tech and trends within the Databricks ecosystem 
  • Share your knowledge by giving brown bags, tech talks, and evangelizing appropriate tech and engineering best practices. 
  • Empower internal teams by providing communication on architecture, target gold tables, execution plans, releases and training. 

What You’ll Bring

  • Bachelor’s or Master’s degree in Computer Science
  • 3+ years of industry experience in Data Engineering, including expertise in Spark and SQL. 
  • 1+ years of experience with Databricks environment  
  • Nice to have: familiarity with real-time ML systems within Databricks
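The medallion layering this role works within (raw Bronze data refined into Silver, then aggregated into Gold data products) can be sketched in miniature in plain Python. This is only an illustration of the pattern: the event fields are invented, and in a real Databricks pipeline these would be Spark DataFrames materialized as Delta tables, not lists of dicts.

```python
from collections import defaultdict

# Hypothetical raw "Bronze" events as they might land from ingestion:
# duplicate deliveries and malformed rows included, nothing filtered yet.
bronze = [
    {"event_id": 1, "user": "a", "amount": "19.99"},
    {"event_id": 1, "user": "a", "amount": "19.99"},   # duplicate delivery
    {"event_id": 2, "user": "b", "amount": None},      # malformed row
    {"event_id": 3, "user": "a", "amount": "5.00"},
]

def to_silver(rows):
    """Bronze -> Silver: deduplicate on event_id, drop bad rows, cast types."""
    seen, silver = set(), []
    for r in rows:
        if r["event_id"] in seen or r["amount"] is None:
            continue
        seen.add(r["event_id"])
        silver.append({"event_id": r["event_id"], "user": r["user"],
                       "amount": float(r["amount"])})
    return silver

def to_gold(rows):
    """Silver -> Gold: aggregate to a per-user revenue data product."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["user"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # per-user revenue totals for the cleaned events
```

The point of the layering is that each table has a single, auditable contract: Bronze is an immutable record of what arrived, Silver is trustworthy and typed, and Gold is shaped for consumers.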

At Fluent, we like what we do and we like who we do it with. Our team is a tight-knit crew of go-getters; we love to celebrate our successes! In addition, we offer a fully stocked kitchen, catered breakfast and lunch, and our office manager keeps the calendar stocked with activity-filled events. When we’re not eating, working out, or planning parties, Fluent folks can be found participating in recreational sports leagues, networking with She Runs It, and bonding across teams during quarterly outings to baseball games, fancy dinners, and pizza-making classes. And we have all the practical benefits, too…

  • Competitive compensation
  • Ample career and professional growth opportunities
  • New Headquarters with an open floor plan to drive collaboration
  • Health, dental, and vision insurance
  • Pre-tax savings plans and transit/parking programs
  • 401K with competitive employer match
  • Volunteer and philanthropic activities throughout the year
  • Educational and social events
  • The amazing opportunity to work for a high-flying performance marketing company!

Salary Range: $130,000 to $160,000 - The base salary range represents the low and high end of the Fluent salary range for this position. Actual salaries will vary depending on factors including but not limited to location, experience, and performance.


See more jobs at Fluent

Apply for this job

7d

Senior Data Engineer

M3USA · London, United Kingdom, Remote
agile · sql · oracle · Design · azure · postgresql · python · AWS

M3USA is hiring a Remote Senior Data Engineer

Job Description

Essential Duties and Responsibilities: 

Including, but not limited to:

  • Design, develop, and maintain high-quality, secure data pipelines and processes to manage and transform data efficiently. 

  • Lead the architecture and implementation of data models, schemas, and integrations that support business intelligence and reporting needs. 

  • Collaborate with cross-functional teams to understand data requirements and deliver optimal data solutions that align with business goals. 

  • Maintain and enhance data infrastructure, including data warehouses, lakes, and integration tools. 

  • Provide guidance on best practices for data management, security, and compliance. 

  • Support Power BI and other visualization tools, ensuring consistent and reliable access to data insights. 

  • Oversee the delivery of data initiatives, ensuring they meet project milestones, KPIs, and deadlines. 

Qualifications

Education and Training Required: 

  • Bachelor’s degree in Computer Science, Data Science, or a related field, or equivalent experience.  

Minimum Experience: 

  • 5+ years of experience in data engineering or related fields.  

  • 2+ years of experience with Power BI or similar data visualization tools.  

Knowledge and Skills: 

  • Proficiency with data engineering tools and technologies (e.g., SQL, Python, ETL tools).  

  • Strong experience with Power BI for data visualization and reporting.  

  • Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud).  

  • Experience with data modelling, data warehousing, and designing scalable data architectures.  

  • Strong knowledge of database systems (e.g., SQL Server, Oracle, PostgreSQL).  

  • Experience working in an Agile development environment.  

  • Excellent communication skills to work effectively with both technical and non-technical stakeholders.  

  • Ability to multi-task and manage multiple projects simultaneously.  

  • Problem-solving mindset with a desire to continuously improve data processes.  

See more jobs at M3USA

Apply for this job

7d

Senior Data Engineer (Remote)

M3USA · Fort Washington, PA, Remote
agile · sql · oracle · Design · azure · postgresql · python · AWS

M3USA is hiring a Remote Senior Data Engineer (Remote)

Job Description

M3 Global Research, an M3 company, is seeking a Senior Data Engineer to join our data engineering team. This role will focus on building and maintaining robust data pipelines, working closely with stakeholders to ensure data solutions align with business objectives, and utilizing tools like Power BI for data visualization and reporting. The ideal candidate has strong analytical skills, a passion for data-driven decision-making, and excellent communication abilities to work effectively with stakeholders across the organization.

Essential Duties and Responsibilities:

Include, but are not limited to:

  • Design, develop, and maintain high-quality, secure data pipelines and processes to manage and transform data efficiently.
  • Lead the architecture and implementation of data models, schemas, and integrations that support business intelligence and reporting needs.
  • Collaborate with cross-functional teams to understand data requirements and deliver optimal data solutions that align with business goals.
  • Maintain and enhance data infrastructure, including data warehouses, lakes, and integration tools.
  • Provide guidance on best practices for data management, security, and compliance.
  • Support Power BI and other visualization tools, ensuring consistent and reliable access to data insights.
  • Oversee the delivery of data initiatives, ensuring they meet project milestones, KPIs, and deadlines.

Qualifications

  • 5+ years of experience in data engineering or related fields.
  • 2+ years of experience with Power BI or similar data visualization tools.
  • Proficiency with data engineering tools and technologies (e.g., SQL, Python, ETL tools).
  • Strong experience with Power BI for data visualization and reporting.
  • Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud).
  • Experience with data modeling, data warehousing, and designing scalable data architectures.
  • Strong knowledge of database systems (e.g., SQL Server, Oracle, PostgreSQL).
  • Experience working in an Agile development environment.
  • Excellent communication skills to work effectively with both technical and non-technical stakeholders.
  • Ability to multi-task and manage multiple projects simultaneously.
  • Problem-solving mindset with a desire to continuously improve data processes.  

See more jobs at M3USA

Apply for this job

8d

Sr. Data Engineer

DevOPS · terraform · airflow · postgres · sql · Design · api · c++ · docker · jenkins · python · AWS · javascript

hims & hers is hiring a Remote Sr. Data Engineer

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

​​About the Role:

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving a million-plus Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance 
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
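The "testing frameworks and monitoring to improve data quality" bullet above is a common pattern: validate each batch against declarative rules before loading it. The sketch below is a toy version of that idea, not Hims & Hers' actual framework; the check names, field names, and sample rows are invented, and production teams would typically reach for dbt tests or a library like Great Expectations instead.

```python
# A minimal data-quality gate: each check returns the indices of failing
# rows, and run_checks produces a report a monitoring system could alert on.

def check_not_null(rows, field):
    """Indices of rows where the field is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(field) is None]

def check_unique(rows, field):
    """Indices of rows whose field value duplicates an earlier row."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        if r[field] in seen:
            dupes.append(i)
        seen.add(r[field])
    return dupes

def run_checks(rows, checks):
    """Return {check_name: [failing row indices]}; empty lists mean pass."""
    return {name: fn(rows) for name, fn in checks.items()}

# Invented sample batch with two deliberate violations.
batch = [
    {"order_id": 100, "sku": "hair-01"},
    {"order_id": 101, "sku": None},       # violates not-null on sku
    {"order_id": 100, "sku": "skin-02"},  # violates uniqueness on order_id
]

report = run_checks(batch, {
    "sku_not_null": lambda rows: check_not_null(rows, "sku"),
    "order_id_unique": lambda rows: check_unique(rows, "order_id"),
})
print(report)  # {'sku_not_null': [1], 'order_id_unique': [2]}
```

Reporting failing row indices rather than a pass/fail boolean is what makes such a gate useful for observability: the alert can point at the offending records.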

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience with Databricks platform
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI)

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

#LI-Remote

 

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions, including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range is
$160,000 to $185,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims considers all qualified applicants for employment, including applicants with arrest or conviction records, in accordance with the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance, the California Fair Chance Act, and any similar state or local fair chance laws.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at accommodations@forhims.com and describe the needed accommodation. Your privacy is important to us, and any information you share will only be used for the legitimate purpose of considering your request for accommodation. Hims & Hers gives consideration to all qualified applicants without regard to any protected status, including disability. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

10d

Data Engineer

Assembly · Remote
Mid Level · Full Time · Master’s Degree · scala · sql · Design · azure · java · python · AWS

Assembly is hiring a Remote Data Engineer

Data Engineer - Assembly Industries - Career Page

See more jobs at Assembly

Apply for this job

Libertex Group is hiring a Remote Data Engineer

Established in 1997, the Libertex Group has helped shape the online trading industry by merging innovative technology, market movements and digital trends. 

The multi-awarded online trading platform, Libertex, enables traders to access the market and invest in stocks or trade CFDs with underlying assets being commodities, Forex, ETFs, cryptocurrencies, and others.

Libertex is, also, the Official Online Trading Partner of FC Bayern, bringing the exciting worlds of football and trading together.

We build innovative fintech so people can #TradeForMore with Libertex.

Job Overview

We are responsible for designing and implementing ETL processes using modern dbt technology, managing the DWH, data marts, and dashboards, as well as modeling, transforming, testing, and deploying data.

  • Strong SQL Skills (T-SQL preferred) - Expertise in writing complex queries, optimizing database performance, and ensuring data integrity. Ability to design and develop data models, ETL/ELT pipelines, and transformations.
  • Experience with MSSQL Server - Hands-on experience in database design, query optimization, and performance tuning on MSSQL.
  • Familiarity with dbt (Data Build Tool) - Experience in developing, managing, and optimizing data models and transformations in dbt. Ability to design and implement robust data pipelines using dbt, ensuring data accuracy and reliability.
  • Proficiency in Python - Ability to write clean, efficient, and reusable Python scripts for automating data processes. Experience in writing Python code to handle ETL tasks, data manipulation, and API integrations.

Nice to have:

  • Experience with Apache Airflow will be a big plus
  • Experience with Docker will be a plus
  • Experience with GitLab CI/CD will be a plus
  • Strong communication skills to collaborate with data engineers, analysts, and business stakeholders.
  • Proactive problem-solving attitude and a continuous improvement mindset.
  • Excel (MS Office) - advanced level
  • Intermediate (B1) and higher level of English

Responsibilities:

  • Interacting with all team members and participating in development at all stages.
  • Building data integrations; developing and transforming data via dbt models.
  • Creating auto-tests and documentation for models and tests.
  • Creating data pipelines on a regular basis.
  • Optimizing data warehouse performance and applying data engineering best practices.

What we offer:

  • Work in a pleasant and enjoyable environment near the Montenegrin sea or mountains
  • Quarterly bonuses based on Company performance
  • Generous relocation package for the employee and their immediate family/partner 
  • Medical Insurance Plan with coverage for the employee and their immediate family from day one
  • 24 working days of annual leave 
  • Yearly reimbursement of travel expenses for the employee and family's flight home
  • Corporate events and team building activities
  • Udemy Business unlimited membership & language training courses 
  • Professional and personal development opportunities in a fast-growing environment 

See more jobs at Libertex Group

Apply for this job

13d

Senior Data Engineer

DailyPay Inc · Remote, United States
Sales · tableau · sql · Design · c++ · python

DailyPay Inc is hiring a Remote Senior Data Engineer

About Us:

DailyPay, Inc. is transforming the way people get paid. As the industry’s leading on-demand pay solution, DailyPay uses an award-winning technology platform to help America’s top employers build stronger relationships with their employees. This voluntary employee benefit enables workers everywhere to feel more motivated to work harder and stay longer on the job, while supporting their financial well-being outside of the workplace.

DailyPay is headquartered in New York City, with operations throughout the United States as well as in Belfast. For more information, visit DailyPay's Press Center.

The Role:

DailyPay is looking for a Senior Data Engineer to join our Data Engineering Team. The Data Engineering Team is responsible for building the data infrastructure that underpins our data analytics and data products that are used cross-functionally inside the company (sales, marketing, operations, engineering, etc.) as well as by DailyPay partner companies. The team also ingests internal and external data to help provide insights about the payroll industry in general, as well as about personal finance and financial wellbeing.

If this opportunity excites you, we encourage you to apply even if you do not meet all of the qualifications.

How You Will Make an Impact:

  • Build and maintain the company’s ETL/ELT infrastructure and data pipelines
  • Design and implement data testing and scaling capabilities for data pipelines
  • Maintain and optimize monitoring and alerting for the company’s ELT, data pipelines, data warehouse, and analytics infrastructure
  • Optimize database performance while reducing warehouse costs and development times
  • Participate in code approvals and the PR review process for company-wide analytics engineering efforts
  • Architect a data lakehouse for DailyPay to use as the single source of truth for internal and external client reporting/analytics

What You Bring to The Team:

  • 7+ years SQL experience; expert SQL capability
  • Familiarity with BI tools such as Tableau, Looker, Metabase or similar
  • Experience in Data Architecture for Dimensional Models, Data Lakes and Data Lakehouses
  • Excellent communication skills including experience speaking to technical and business audiences and working globally
  • 2+ years Python experience
  • 1+ years of dbt experience
  • Experience with Snowflake, Redshift, and ETL tools like Fivetran, Qlik Replicate or Stitch is a plus

What We Offer:

  • Exceptional health, vision, and dental care
  • Opportunity for equity ownership
  • Life and AD&D, short- and long-term disability
  • Employee Assistance Program
  • Employee Resource Groups
  • Fun company outings and events
  • Unlimited PTO
  • 401K with company match

 

 

#BI-Remote #LI-Remote

 

Pay Transparency.  DailyPay takes a market-based approach to compensation, which may vary depending on your location. United States locations are categorized into three tiers based on a cost of labor index for that geographic area. The salary ranges are listed by geographic tier. Additionally, this role may be eligible for variable incentive compensation and stock options. Where a candidate fits within the compensation range for a role is based on their demonstrated experience, qualifications, skills, and internal equity. 

New York City
$145,000 to $194,000 USD
Remote, Premium (California, Connecticut, Washington D.C., New Jersey, New York, Massachusetts, Washington)
$133,000 to $178,000 USD
Remote, Standard
$126,000 to $169,000 USD

 


DailyPay is committed to fostering an inclusive, equitable culture of belonging, grounded in empathy and respect, which values openness to opinions, awareness of lived experiences, fair treatment and access for all. We strive to build and develop diverse teams to create an organization where innovation thrives, where the full potential of each person is engaged, and their views, beliefs and values are integrated into our ways of working. 

We encourage people of all backgrounds to join us on our mission. If you require reasonable accommodation for any aspect of the recruitment process, please send a request to peopleops@dailypay.com. All requests for accommodation will be addressed as confidentially as practicable.

DailyPay is an equal opportunity employer. All qualified applicants will receive consideration without regard to race, color, religion or creed, alienage or citizenship status, political affiliation, marital or partnership status, age, national origin, ancestry, physical or mental disability, medical condition, veteran status, gender, gender identity, pregnancy, childbirth (or related medical conditions), sex, sexual orientation, sexual and other reproductive health decisions, genetic disorder, genetic predisposition, carrier status, military status, familial status, or domestic violence victim status and any other basis protected under federal, state, or local laws.

See more jobs at DailyPay Inc

Apply for this job

17d

Junior Data Engineer

Accesa - Ratiodata · Employee can work remotely, Romania, Remote
agile · airflow · postgres · sql · Design · java · python · backend

Accesa - Ratiodata is hiring a Remote Junior Data Engineer

Job Description

One of our clients operates prominently in the financial sector, where we enhance operations across their extensive network of 150,000 workstations and support a workforce of 4,500 employees. As part of our commitment to optimizing data management strategies, we are migrating data warehouse (DWH) models into data products within the Data Integration Platform (DIP). 

Responsibilities: 

  • Drive Data Efficiency: Create and maintain optimal data transformation pipelines.

  • Master Complex Data Handling: Work with large, complex financial data sets to generate outputs that meet functional and non-functional business requirements.

  • Lead Innovation and Process Optimization: Identify, design, and implement process improvements such as automating manual processes, optimizing data delivery, and re-designing infrastructure for higher scalability.

  • Architect Scalable Data Infrastructure: Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source technologies.

  • Unlock Actionable Insights: Build and use analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

  • Collaborate with Cross-Functional Teams: Work with stakeholders, including Senior Management, Department Heads, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

Qualifications

Must have: 

  • 1+ years of experience in a similar role, preferably within Agile teams. 

  • Skilled in SQL and relational databases for data manipulation. 

  • Experience in building and optimizing Big Data pipelines and architectures. 

  • Knowledge of Big Data tools such as Spark, and object-oriented programming in Java; experience with Spark using Python is a plus.  

  • Proven experience in performing data analysis and root cause analysis on diverse datasets to identify opportunities for improvement. 

  • Strong analytical skills in working with both structured and unstructured data. 

 

Nice to have: 

  • Experience in engaging with customer stakeholders  

  • Expertise in manipulating and processing large, disconnected datasets to extract actionable insights

  • Familiarity with innovative technologies in message queuing, stream processing, and scalable big data storage solutions  

  • Technical skills in the following areas are a plus: relational databases (e.g., Postgres), big data tools (e.g., Databricks), workflow management (e.g., Airflow), and backend development using Spring Boot.

  • Experience with ETL processes, including scheduling and orchestration using tools like Apache Airflow (or similar). 

Apply for this job

17d

Senior Data Engineering

NielsenIQ · Illinois, IL, Remote
DevOPS · agile · Design · azure · jenkins · python · AWS

NielsenIQ is hiring a Remote Senior Data Engineering

Job Description

Position Description 

  • Meet with stakeholders to understand the big picture and their asks.
  • Recommend architecture aligned with the goals and objectives of the product/organization. 
  • Recommend standard ETL design patterns and best practices.
  • Drive the detail design and architectural discussions as well as customer requirements sessions to support the implementation of code and procedures for our big data product.
  • Design and develop proof of concept/prototype to demonstrate architecture feasibility. 
  • Collaborate with developers on the team to meet product deliverables. 
  • Must have familiarity with the data science tech stack, including at least one of SAS, SPSS, or R.
  • Work independently and collaboratively on a multi-disciplined project team in an Agile development environment. 
  • Ability to identify and solve for code/design optimization. 
  • Learn and integrate with a variety of systems, APIs, and platforms. 
  • Interact with a multi-disciplined team to clarify, analyze, and assess requirements. 
  • Be actively involved in the design, development, and testing activities in big data applications. 

Qualifications

  • Hands-on experience with Python, PySpark, and Jupyter Notebooks.
  • Familiarity with Databricks. Azure Databricks is a plus.
  • Familiarity with data cleansing, transformation, and validation.
  • Proven architecture skills on Big Data projects.
  • Hands-on experience with a code versioning tool such as GitHub, Bitbucket, etc.
  • Hands-on experience building pipelines in GitHub Actions (or Azure DevOps, Jenkins, etc.)
  • Hands-on experience with Spark. 
  • Strong written and verbal communication skills. 
  • Self-motivated and ability to work well in a team. 

Any mix of the following skills is also valuable: 

  • Experience with data visualization tools such as Power BI or Tableau. 
  • Experience with DEVOPS CI/CD tools and automation processes (e.g., Azure DevOPS, GitHub, BitBucket). 
  • Experience with Azure Cloud Services and Azure Data Factory. 
  • Azure or AWS Cloud certification preferred. 

Education:

  • Bachelor of Science degree from an accredited university 

See more jobs at NielsenIQ

Apply for this job

19d

Senior Data Engineer

Mozilla · Remote
sql · Design · c++ · python

Mozilla is hiring a Remote Senior Data Engineer

To learn the Hiring Ranges for this position, please select your location from the Apply Now dropdown menu.

To learn more about our Hiring Range System, please click this link.

Why Mozilla?

Mozilla Corporation is the non-profit-backed technology company that has shaped the internet for the better over the last 25 years. We make pioneering brands like Firefox, the privacy-minded web browser, and Pocket, a service for keeping up with the best content online. Now, with more than 225 million people around the world using our products each month, we’re shaping the next 25 years of technology and helping to reclaim an internet built for people, not companies. Our work focuses on diverse areas including AI, social media, security and more. And we’re doing this while never losing our focus on our core mission – to make the internet better for people. 

The Mozilla Corporation is wholly owned by the non-profit 501(c) Mozilla Foundation. This means we aren’t beholden to any shareholders — only to our mission. Along with thousands of volunteer contributors and collaborators all over the world, Mozillians design, build and distribute open-source software that enables people to enjoy the internet on their terms. 

About this team and role:

As a Senior Data Engineer at Mozilla, your primary area of focus will be on our Analytics Engineering team. This team focuses on modeling our data so that the rest of Mozilla has access to it, in the appropriate format, when they need it, to help them make data informed decisions. This team is also tasked with helping to maintain and make improvements to our data platform. Some recent improvements include introducing a data catalog, building in data quality checks among others. Check out the Data@Mozilla blog for more details on some of our work.

What you’ll do: 

  • Work with data scientists to design data models, answer questions and guide product decisions
  • Work with other data engineers to design and maintain scalable data models and ETL pipelines
  • Help improve the infrastructure for ingesting, storing and transforming data at a scale of tens of terabytes per day
  • Help design and build systems to monitor and analyze data from Mozilla’s products
  • Establish best practices for governing data containing sensitive information, ensuring compliance and security

What you’ll bring: 

  • At a minimum 3 years of professional experience in data engineering
  • Proficiency with the programming languages used by our teams (SQL and Python)
  • Demonstrated experience designing data models used to represent specific business activities to power analysis
  • Strong software engineering fundamentals: modularity, abstraction, data structures, and algorithms
  • Ability to work collaboratively with a distributed team, leveraging strong communication skills to ensure alignment and effective teamwork across different time zones
  • Our team requires skills in a variety of domains. You should have proficiency in one or more of the areas listed below, and be interested in learning about the others:
    • You have used data to answer specific questions and guide company decisions.
    • You are opinionated about data models and how they should be implemented; you partner with others to map out a business process, profile available data, design and build flexible data models for analysis.
    • You have experience recommending / implementing new data collection to help improve the quality of data models.
    • You have experience with data infrastructure: databases, message queues, batch and stream processing
    • You have experience building modular and reusable ETL/ELT pipelines in distributed databases
    • You have experience with highly scalable distributed systems hosted on cloud providers (e.g. Google Cloud Platform)
  • Commitment to our values:
    • Welcoming differences
    • Being relationship-minded
    • Practicing responsible participation
    • Having grit

What you’ll get:

  • Generous performance-based bonus plans to all regular employees - we share in our success as one team
  • Rich medical, dental, and vision coverage
  • Generous retirement contributions with 100% immediate vesting (regardless of whether you contribute)
  • Quarterly all-company wellness days where everyone takes a pause together
  • Country specific holidays plus a day off for your birthday
  • One-time home office stipend
  • Annual professional development budget
  • Quarterly well-being stipend
  • Considerable paid parental leave
  • Employee referral bonus program
  • Other benefits (life/AD&D, disability, EAP, etc. - varies by country)

About Mozilla 

Mozilla exists to build the Internet as a public resource accessible to all because we believe that open and free is better than closed and controlled. When you work at Mozilla, you give yourself a chance to make a difference in the lives of Web users everywhere. And you give us a chance to make a difference in your life every single day. Join us to work on the Web as the platform and help create more opportunity and innovation for everyone online.

Commitment to diversity, equity, inclusion, and belonging

Mozilla understands that valuing diverse creative practices and forms of knowledge is crucial to, and enriches, the company’s core mission. We encourage applications from everyone, including members of all equity-seeking communities, such as (but certainly not limited to) women, racialized and Indigenous persons, persons with disabilities, and persons of all sexual orientations, gender identities, and expressions.

We will ensure that qualified individuals with disabilities are provided reasonable accommodations to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment, as appropriate. Please contact us at hiringaccommodation@mozilla.com to request accommodation.

We are an equal opportunity employer. We do not discriminate on the basis of race (including hairstyle and texture), religion (including religious grooming and dress practices), gender, gender identity, gender expression, color, national origin, pregnancy, ancestry, domestic partner status, disability, sexual orientation, age, genetic predisposition, medical condition, marital status, citizenship status, military or veteran status, or any other basis covered by applicable laws.  Mozilla will not tolerate discrimination or harassment based on any of these characteristics or any other unlawful behavior, conduct, or purpose.

Group: D

#LI-DNI

Req ID: R2679

See more jobs at Mozilla

Apply for this job

19d

Senior Data Engineer

Plentific, London, England, United Kingdom, Remote Hybrid
B2B

Plentific is hiring a Remote Senior Data Engineer

We're Plentific, the world’s leading real-time property solution, and we're looking for top talent to join our ambitious team. We’re a global company, headquartered in London, and operating across the United Kingdom, Germany and North America.

As a B2B company, we're dedicated to helping landlords, letting agents and property managers streamline operations, unlock revenue, increase tenant satisfaction, and remain compliant through our award-winning SaaS technology platform. We also work with SMEs and large service providers, helping them access more work and grow their businesses.

We're not just any proptech - we're backed by some of the biggest names in the business, including A/O PropTech, Highland Europe, Mubadala, RXR Digital Ventures and Target Global and work with some of the world’s most prominent real estate players.

But we're not just about business - we're also building stronger communities where people can thrive by ensuring the quality and safety of buildings, supporting decarbonisation through our ESG Retrofit Centre of Excellence and championing diversity across the sector through the Women’s Trade Network. We're committed to creating exceptional experiences for our team members, too. Our culture is open and empowering, and we're always looking for passionate, driven individuals to join us on our mission.

So, what's in it for you?

  • A fast-paced, friendly, collaborative and hybrid/flexible working environment
  • Ample opportunities for career growth and progression
  • A multicultural workplace with over 20 nationalities that value diversity, equity, and inclusion
  • Prioritisation of well-being with social events, digital learning, career development programs and much more

If you're ready to join a dynamic and innovative team that’s pioneering change in real estate, we'd love to hear from you.

The Role

We’re looking for a proactive and energetic individual with extensive experience in data engineering and machine learning to join our growing business. You’ll work alongside highly technical and motivated teams and report to the Head of Data Engineering. You’ll be expected to contribute to the growth of our data/ML/AI products, both internally and for our customers. You’ll be working at the cutting edge of technology and will thrive if you have a desire to learn and keep up to date with the latest trends in data infrastructure, machine learning and generative AI. For people with the right mindset, this is a very intellectually stimulating environment.

Responsibilities

  • Be one of the architects for our data model defined in dbt.
  • Take ownership of and refine our existing real-time data pipelines.
  • Create and maintain analytics dashboards that are defined as-code in Looker
  • Create and productize Machine Learning and LLM-based features
  • Be a mentor for the more junior data engineers in the team

Requirements
  • Proficiency in SQL and Python. A live coding interview is part of the hiring process.
  • Experience in data modelling with dbt
  • Experience organising data governance across a company, including the matrix of access permissions for a data warehouse.
  • Experience with BI tools defined as code; Looker experience is a nice-to-have.
  • Experience building ETL/ELT data ingestion and transformation pipelines
  • Experience training Machine Learning Algorithms
  • Experience productizing Machine Learning from the infrastructure perspective (MLOps)
  • Nice to have: experience productizing multimodal (text, images, audio, video) GenAI products with frameworks such as LangChain
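To illustrate the kind of modelling work the dbt requirement above points at, here is a hypothetical staging-style transform sketched in Python: deduplicating raw records per key, keeping only the latest version, as a dbt staging model typically does over a raw event table. The field names are illustrative, not Plentific's schema:

```python
def latest_per_key(rows, key="id", version="updated_at"):
    """Keep only the newest record per key, mimicking a dbt staging
    model that deduplicates a raw event table.

    `rows` is a list of dicts; `key` and `version` name illustrative
    fields (not a real schema).
    """
    latest = {}
    for row in rows:
        k = row[key]
        # Keep whichever version of this key is newest.
        if k not in latest or row[version] > latest[k][version]:
            latest[k] = row
    return sorted(latest.values(), key=lambda r: r[key])
```

In dbt this would be expressed in SQL (e.g. with `row_number()` over a partition by key), but the dedup-then-select-latest shape is the same.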

As you can see, we are quickly progressing with our ambitious plans and are eager to grow our team of doers to achieve our vision of managing over 2 million properties through our platform across various countries. You can help us shape the future of property management across the globe. Here’s what we offer:

  • A competitive compensation package
  • 25 days annual holiday
  • Flexible working environment including the option to work abroad
  • Private health care for you and immediate family members with discounted gym membership, optical, dental and private GP
  • Enhanced parental leave
  • Life insurance (4x salary)
  • Employee assistance program
  • Company volunteering day and charity salary sacrifice scheme
  • Learning management system powered by Udemy
  • Referral bonus and charity donation if someone you introduce joins the company
  • Season ticket loan, Cycle to work, Electric vehicle and Techscheme programs
  • Pension scheme
  • Work abroad scheme
  • Company-sponsored lunches, dinners and social gatherings
  • Fully stocked kitchen with drinks, snacks, fruit, breakfast cereal etc.

See more jobs at Plentific

Apply for this job