airflow Remote Jobs

143 Results

2h

Staff Software Engineer, Ads Measurement

Instacart · Canada - Remote (ON, AB or BC Only)
Sales · terraform · scala · airflow · sql · python

Instacart is hiring a Remote Staff Software Engineer, Ads Measurement

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

Overview

 

About the Role - As a Staff Software Engineer, you will play a crucial role in building a robust data platform that addresses strategic challenges within Ads and across Instacart. You'll work on developing systems to capture and enforce data-compliance requirements, and enhance self-service capabilities for internal consumers while maintaining data integrity and consistency. Your efforts will drive the north-star vision for the platform by consolidating diverse requirements and collaborating across multiple engineering teams, ensuring strategic alignment and continuous delivery.

 

About the Team - You will join a dynamic team that transforms raw data into actionable insights, fulfilling the needs of advertisers, internal data consumers like data science and machine learning engineers, sales, and more. The team owns several systems such as metrics transformation and storage, billing, spam detection, lift testing, and supports multiple facets of Instacart’s data infrastructure to equip the organization with valuable information. We empower Instacart teams to make data-driven decisions and directly impact business goals.

 

About the Job 

  • Lead and break down complex problems while keeping the broader data vision in focus.
  • Collaborate with engineers and engineering teams, providing mentorship and fostering a strong engineering culture.
  • Directly contribute to our data vision and engineering architecture. Produce and review technical artifacts that align with business goals.
  • Oversee engineering initiatives end-to-end, proactively managing risks, setting goals, and ensuring smooth execution and delivery.
  • Balance maintainability with tech debt and the development of new features.

 

About You

Minimum Qualifications

  • 8+ years of software development experience.
  • Proven ability to work with multiple stakeholders and manage ambiguity and conflicting requirements.
  • Self-motivated with a strong sense of ownership in a fast-paced startup environment.
  • Expertise in Spark, ETL pipelines, distributed systems architecture, MapReduce, SQL, and big data infrastructure.
  • Experience in balancing urgency and pragmatism with high-quality, long-term solutions.

 

Preferred Qualifications

  • 10+ years of experience.
  • Strong background in Scala and functional programming.
  • Familiarity with Snowflake, Databricks, DBT, Airflow, Python, Terraform, and Go.
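
The stack above centers on orchestrated batch pipelines. The core idea behind a tool like Airflow is a DAG of tasks executed in dependency order; a toy pure-Python sketch of that idea, using the standard library rather than Airflow itself (the task names are invented, not Instacart's pipeline):

```python
from graphlib import TopologicalSorter

# Hypothetical ads-metrics pipeline: each task maps to the set of tasks
# it depends on, the way an Airflow DAG wires operators together.
dag = {
    "extract_events": set(),
    "clean_events": {"extract_events"},
    "aggregate_metrics": {"clean_events"},
    "load_warehouse": {"aggregate_metrics"},
}

def run(dag, tasks):
    """Execute every task in an order that respects all dependencies."""
    order = list(TopologicalSorter(dag).static_order())
    return [tasks[name](name) for name in order], order

results, order = run(dag, {name: (lambda n: f"ran {n}") for name in dag})
print(order)  # dependency-respecting order, ending with load_warehouse
```

Airflow adds scheduling, retries, and monitoring on top of exactly this topological-ordering core.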

Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here. Currently, we are only hiring in the following provinces: Ontario, Alberta and British Columbia.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For Canadian based candidates, the base pay ranges for a successful candidate are listed below.

CAN
$221,000 - $245,000 CAD

See more jobs at Instacart

Apply for this job

11h

Machine Learning Engineer

Plum Fintech · Athens, Attica, Greece, Remote Hybrid
ML · agile · terraform · airflow · sql · git · python

Plum Fintech is hiring a Remote Machine Learning Engineer

At Plum, we're on a mission to maximise wealth for all. We’re making saving money effortless and turning investing into something everyone can do.

Our journey began back in 2017, when we became one of the first to use artificial intelligence and automation to simplify personal finance. Fast forward to today, and we've already helped people save £2 billion across 10 European markets.

Named the UK's fastest-growing fintech in the Deloitte Technology Fast 50, our success is down to the passion and dedication of our diverse team. Based in our London, Athens and Nicosia offices, 170 talented people work together to empower people to do more with their money. And now, the team is growing!

About the Role

We are looking for a talented and passionate Machine Learning Engineer to join our team and build the next generation of our real-time, data-driven applications. As a Machine Learning Engineer, you will provision and expand the infrastructure of ML and AI at Plum, whilst collaborating closely with Data Engineers and Data Scientists, who are currently managing numerous live production ML systems and delivering on a substantial roadmap. This spans areas such as document processing automation, transaction fraud detection, marketing spend optimisation and customer retention.

What will you do

  • Collaborate with data scientists, data analysts and data engineers on production systems and applications focused on traditional ML, generative AI and MLOps
  • Transition ML models from experimental prototypes to production deployments that can handle high volumes of data in real-time, enabling us to make rapid decisions and provide immediate value to our users
  • Stay engaged with the latest advancements in data science and fintech, particularly in the areas of leveraging ML techniques to deliver business impact.
  • Contribute to the continuous development of our AI and ML Ops infrastructure, covering areas such as model deployment, continuous retraining, feature store, performance monitoring and drift detection.
  • Exercise software engineering best practices in the codebase, like version control and continuous integration, with an aim to ensure our models are not just effective, but also thoroughly tested, well-documented, and regularly maintained.
  • Promote a culture of mutual learning and growth, where teaching and learning from colleagues is encouraged. We highly value knowledge sharing and ongoing learning.
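
The drift detection mentioned above can be as simple as comparing a feature's live distribution against its training baseline. A toy sketch using the population stability index; the bin count, thresholds, and data are illustrative assumptions, not Plum's implementation:

```python
import math

def psi(expected, actual, bins=4):
    """Population stability index between two samples of one feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Floor each bucket's share to avoid log(0) on empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]      # training distribution
shifted = [0.1 * i + 5 for i in range(100)]   # live data drifted upward
print(psi(baseline, baseline) < 0.1)   # stable against itself
print(psi(baseline, shifted) > 0.25)   # clear drift flagged
```

Production monitoring would compute this per feature on a schedule (e.g. an Airflow job) and alert when the index crosses a threshold.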

Who you are

  • Strong foundations in data structures, data modelling (e.g. Airflow and dbt), software architecture, Python, SQL, machine learning frameworks (e.g. Keras, PyTorch), and libraries (e.g. scikit-learn).
  • Proven experience in developing, maintaining and deploying machine learning models for real-time applications, with a strong understanding of streaming data processing technologies and real-time inference frameworks in production environments
  • Strong understanding of ML applications development life cycle processes and tools: CI/CD, version control (git), testing frameworks, MLOps, agile methodologies, monitoring and alerting, experiment trackers (e.g. mlFlow) & orchestrators (Airflow, Kubeflow)
  • Experience in building and optimizing scalable machine learning infrastructure in a cloud setup. We use Google Cloud Platform and leverage services like BigQuery, Vertex AI, and Cloud Storage.
  • You have a solid understanding of how to measure the performance of ML models
  • Strong problem-solving skills, a critical and creative mindset, and a team-oriented approach with a focus on mentorship and knowledge sharing.

Nice to Have

  • Deep knowledge of math, probability, statistics and algorithms
  • Experienced with Large Language Models, Generative AI, Langchain, Transformer models
  • Understanding of the concepts of GPU-powered workloads, NVIDIA drivers, container runtimes
  • Experience provisioning infrastructure components using Terraform, including virtual machines, storage, databases, and other necessary services

Plum's Perks

  • We're all in this together! Own part of the company through stock options
  • Annual training budget
  • Private Health & Life Insurance
  • Free Plum Premium subscription (normally £9.99 a month)
  • Free parking slots
  • 25 days holiday a year, excluding public holidays
  • Employee referral scheme up to €4000
  • Flexible approach to remote working, though we encourage at least 2-3 days a week in our beautiful office in central Athens for optimal collaboration
  • 45 days work from anywhere
  • Team breakfast on Tuesdays and team lunch on Thursdays in the office, as well as a plentiful supply of fruit, snacks and coffee
  • 1 day paid leave for volunteering, supporting you giving back to society
  • 2 weeks paid sabbatical after four years of service
  • Team trip to secret destinations once a year ✈️
  • Great office location in the heart of Athens (Syntagma square), with an amazing view!

If you think this sounds like a bit of you then don’t hesitate to get in touch!

Thanks,

Plum Team

*Plum is an Equal Opportunity Employer. Plum does not discriminate on the basis of age, race, religion, sex, gender identity, sexual orientation, non-disqualifying physical or mental disability, national origin or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit and business need.

See more jobs at Plum Fintech

Apply for this job

11h

Machine Learning Engineer

Plum Fintech · Nicosia, Cyprus, Remote Hybrid
ML · agile · terraform · airflow · sql · git · python

Plum Fintech is hiring a Remote Machine Learning Engineer

At Plum, we're on a mission to maximise wealth for all. We’re making saving money effortless and turning investing into something everyone can do.

Our journey began back in 2017, when we became one of the first to use artificial intelligence and automation to simplify personal finance. Fast forward to today, and we've already helped people save £2 billion across 10 European markets.

Named the UK's fastest-growing fintech in the Deloitte Technology Fast 50, our success is down to the passion and dedication of our diverse team. Based in our London, Athens and Nicosia offices, 170 talented people work together to empower people to do more with their money. And now, the team is growing!

About the Role

We are looking for a talented and passionate Machine Learning Engineer to join our team and build the next generation of our real-time, data-driven applications. As a Machine Learning Engineer, you will provision and expand the infrastructure of ML and AI at Plum, whilst collaborating closely with Data Engineers and Data Scientists, who are currently managing numerous live production ML systems and delivering on a substantial roadmap. This spans areas such as document processing automation, transaction fraud detection, marketing spend optimisation and customer retention.

What will you do

  • Collaborate with data scientists, data analysts and data engineers on production systems and applications focused on traditional ML, generative AI and MLOps
  • Transition ML models from experimental prototypes to production deployments that can handle high volumes of data in real-time, enabling us to make rapid decisions and provide immediate value to our users
  • Stay engaged with the latest advancements in data science and fintech, particularly in the areas of leveraging ML techniques to deliver business impact.
  • Contribute to the continuous development of our AI and ML Ops infrastructure, covering areas such as model deployment, continuous retraining, feature store, performance monitoring and drift detection.
  • Exercise software engineering best practices in the codebase, like version control and continuous integration, with an aim to ensure our models are not just effective, but also thoroughly tested, well-documented, and regularly maintained.
  • Promote a culture of mutual learning and growth, where teaching and learning from colleagues is encouraged. We highly value knowledge sharing and ongoing learning.

Who you are

  • Strong foundations in data structures, data modelling (e.g. Airflow and dbt), software architecture, Python, SQL, machine learning frameworks (e.g. Keras, PyTorch), and libraries (e.g. scikit-learn).
  • Proven experience in developing, maintaining and deploying machine learning models for real-time applications, with a strong understanding of streaming data processing technologies and real-time inference frameworks in production environments
  • Strong understanding of ML applications development life cycle processes and tools: CI/CD, version control (git), testing frameworks, MLOps, agile methodologies, monitoring and alerting, experiment trackers (e.g. mlFlow) & orchestrators (Airflow, Kubeflow)
  • Experience in building and optimizing scalable machine learning infrastructure in a cloud setup. We use Google Cloud Platform and leverage services like BigQuery, Vertex AI, and Cloud Storage.
  • You have a solid understanding of how to measure the performance of ML models
  • Strong problem-solving skills, a critical and creative mindset, and a team-oriented approach with a focus on mentorship and knowledge sharing.

Nice to Have

  • Deep knowledge of math, probability, statistics and algorithms
  • Experienced with Large Language Models, Generative AI, Langchain, Transformer models
  • Understanding of the concepts of GPU-powered workloads, NVIDIA drivers, container runtimes
  • Experience provisioning infrastructure components using Terraform, including virtual machines, storage, databases, and other necessary services

Plum's Perks

  • We're all in this together! Own part of the company through stock options
  • Annual training budget
  • Private Life Insurance - Ethniki Asfalistiki
  • Provident Fund - Ancoria Bank
  • Free Plum Premium subscription (normally £9.99 a month).
  • Free parking slots
  • 25 days holiday a year, excluding public holidays
  • Employee referral scheme up to €4000
  • Flexible approach to remote working, though we encourage at least 2-3 days a week in our beautiful office in Nicosia for optimal collaboration.
  • 45 days work from anywhere
  • Team lunch on Thursdays in the office, as well as a plentiful supply of fruit, snacks and coffee.
  • 1 day paid leave for volunteering, supporting you giving back to society.
  • 2 weeks paid sabbatical after four years of service.
  • Team trip to secret destinations once a year ✈️

If you think this sounds like a bit of you then don’t hesitate to get in touch!

Thanks,

Plum Team

*Plum is an Equal Opportunity Employer. Plum does not discriminate on the basis of age, race, religion, sex, gender identity, sexual orientation, non-disqualifying physical or mental disability, national origin or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit and business need.

See more jobs at Plum Fintech

Apply for this job

2d

Senior Test Automation Engineer

Sigma Software · Warsaw, Poland, Remote
Cypress · agile · airflow · typescript · jenkins

Sigma Software is hiring a Remote Senior Test Automation Engineer

Job Description

  • Write and execute tests to validate products and software applications.
  • Develop and maintain test execution and tracking software.
  • Collaborate with Product team to identify product and technical requirements.
  • Collaborate with Quality & Automation team and Software Engineers to identify, reproduce, and document bugs or defects.
  • Assist in integrating testing into CI/CD pipeline.
  • Contribute towards architecture designs providing feedback on testability (for Senior).
  • Provide guidance and mentorship for less-experienced Test Automation Engineers (for Senior).
  • Other duties and responsibilities as assigned.

Qualifications

  • 5+ years of work experience in Test Automation for Senior (3+ years for Middle).
  • Strong automation coding skills in TypeScript or JavaScript.
  • Hands-on experience with Cypress (2+ years at least).
  • Experience with CI/CD tools and frameworks (Jenkins, GitHub Actions, Airflow, Concourse CI preferred).
  • A full understanding and appreciation of the test pyramid.
  • Excellent communication skills and the ability to work in an Agile team.
  • The ability to teach others how to write quality code (for Senior).

See more jobs at Sigma Software

Apply for this job

2d

Senior Generative AI Software Engineer

Experian · Costa Mesa, CA, Remote
ML · airflow · sql · AWS

Experian is hiring a Remote Senior Generative AI Software Engineer

Job Description

The Senior Generative AI Engineer, reporting to the Head of AI/ML Innovation, will work with the Generative AI team to accelerate Experian's impact by bringing together data, technology, and data science to build game-changing products and services for our customers. We are looking for a Senior Generative AI Engineer to help develop our specialized Generative AI models in integration with Consumer Services products. These new capabilities will help provide Financial Power to All our customers. As a growing team, we embrace a startup mentality while operating in a large organization. We value speed and impact, and our results and ways of working are transforming the culture of the larger organizations around us.

Role accountabilities and critical activities

You'll develop our generative AI models with existing software teams building products

You will

  • Develop a scalable framework for building specialized Generative AI models
  • Build scalable pipelines, tools, and services for producing production-ready Generative AI models
  • Work with our data engineers, ML engineers, and software engineers to pilot our products with beta customers
  • Maintain our culture of simple, streamlined code and full CI/CD automation
  • Develop simple, streamlined, and well-tested ML pipeline components

Qualifications

  • 5+ years of experience working with machine learning or generative AI libraries such as Langchain, llamaindex, Langsmith, and llamaguard
  • Open source Foundation models: llama3, llama2, mistral, falcon, phi, dbrx, arctic, Bert
  • Off-the-shelf models: GPT, Claude, Gemini
  • Embedding models: nomic, ada, bge, e5
  • Orchestration Frameworks: mlflow, airflow
  • Extensive experience working with retrieval augmented generation and supporting databases/libraries such as: Vector databases: chromadb, pgvector, qdrant, pinecone, weaviate, and milvus
  • Chunking algorithms/strategies
  • Embedding models/dimensions
  • Advanced knowledge in Python/PySpark and SQL
  • Experience working in the Databricks AI Suite of products, particularly Mosaic branded products
  • Familiarity with the AWS platform and services including CI/CD automation methods
  • Experience with fine-tuning LLMs or SLMs and working with large sets of data
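
One item in the list above, "chunking algorithms/strategies," refers to splitting documents into overlapping windows before embedding them for retrieval-augmented generation. A minimal fixed-size sketch; the window and overlap sizes are arbitrary examples, not Experian's settings:

```python
def chunk(text, size=200, overlap=50):
    """Split text into windows of `size` characters, each sharing `overlap`
    characters with the previous window so that a fact spanning a boundary
    still lands intact inside at least one chunk."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
chunks = chunk(doc, size=200, overlap=50)
print(len(chunks))      # windows starting at 0, 150, 300 cover all 500 chars
print(len(chunks[0]))   # 200
```

Real chunkers usually split on sentence or paragraph boundaries rather than raw characters, but the window-plus-overlap idea is the same.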

See more jobs at Experian

Apply for this job

3d

Senior Data Scientist (Platform)

Agero · Remote
ML · Master’s Degree · airflow · sql · B2B · c++ · python · AWS

Agero is hiring a Remote Senior Data Scientist (Platform)

About Agero:

Wherever drivers go, we’re leading the way. Agero’s mission is to rethink the vehicle ownership experience through a powerful combination of passionate people and data-driven technology, strengthening our clients’ relationships with their customers. As the #1 B2B, white-label provider of digital driver assistance services, we’re pushing the industry in a new direction, taking manual processes, and redefining them as digital, transparent, and connected. This includes: an industry-leading dispatch management platform powered by Swoop; comprehensive accident management services; knowledgeable consumer affairs and connected vehicle capabilities; and a growing marketplace of services, discounts and support enabled by a robust partner ecosystem. The company has over 150 million vehicle coverage points in partnership with leading automobile manufacturers, insurance carriers and many others. Managing one of the largest national networks of service providers, Agero responds to approximately 12 million service events annually. Agero, a member company of The Cross Country Group, is headquartered in Medford, Mass., with operations throughout North America. To learn more, visit https://www.agero.com/.

POSITION SUMMARY:

This position is focused on driving innovation in the core business, including but not limited to, predicting operations performance, short-term forecasting, improving dispatching efficacy, simulation and analyzing data from our network of roadside assistance service providers and extensive call-center operations. An ideal candidate would assist stakeholders in understanding and making use of insights gained from statistical analyses and building predictive models. The platform machine learning team is a highly collaborative cross-functional team with Machine Learning engineers, data scientists, software engineers and product management, working together to power the next wave of ML-driven platform enhancements.

 

ESSENTIAL FUNCTIONS:

  • Participate in end-to-end data science projects, from problem definition and data exploration to result validation and working with engineers to put models in production.
  • Communicate research results effectively in written and spoken forms to various audiences including product management, engineering and executives.
  • Research and experiment with Agero's extensive datasets to gain new insights and drive innovation.

 

POSSIBLE PROJECTS (within the first 6-12 months):

  • Conduct analysis to help optimize the performance of our dispatching operations.
  • Predictive modeling of job performance indicators to help improve customer satisfaction.
  • Spatial analysis of our service jobs, leveraging weather and other real-time information to assist our operations.
  • Develop ETA prediction models, employing data fusion techniques to improve accuracy.

 

REQUIREMENTS:

 

EDUCATION/EXPERIENCE

This position requires 5+ years of equivalent experience. This experience could come from several paths: e.g., a Ph.D. in a technical field (physical science, engineering, mathematics, computer science, operations research, management science), or a Master’s degree in a technical field and 3+ years relevant experience, or a Bachelor’s degree in a technical field and 5+ years relevant experience.

 

SKILLS

  • Statistics and data exploration, knowledge of EDA best practices.
  • Advanced SQL experience, experience with Snowflake a plus.
  • Experience with A/B testing: designing, sizing and post-analysis.
  • Machine Learning with tabular data: knowledge of modeling with tree-based models like XGBoost.
  • Strong analytical coding skills in Python, proficient with Python data stack (Pandas, Numpy, Scipy, Matplotlib, PyTorch).
  • Good communication skills both in written (technical documents, Python notebooks) and spoken (meetings, presentations) forms.
  • Willing and able to learn and meet business needs, understanding the underlying context.
  • Independent, self-organizing, and able to prioritize multiple complex assignments.
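
One of the skills listed above, sizing an A/B test, boils down to a standard two-proportion power calculation. A rough pure-Python sketch using the normal approximation; the default z-values assume a two-sided alpha of 0.05 and 80% power, and the example rates are invented:

```python
import math

def ab_sample_size(p_base, mde, z_alpha=1.96, z_beta=0.84):
    """Per-variant sample size needed to detect an absolute lift of `mde`
    over baseline conversion rate `p_base` (normal approximation)."""
    p_new = p_base + mde
    p_bar = (p_base + p_new) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_new * (1 - p_new)))
    return math.ceil((numerator / mde) ** 2)

# Detecting a 2-point absolute lift on a 10% conversion rate:
n = ab_sample_size(0.10, 0.02)
print(n)  # a few thousand users per variant
```

The inverse relationship is the practical takeaway: halving the minimum detectable effect roughly quadruples the required sample.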



NICE TO HAVES

  • Experience running jobs on Airflow or similar orchestrators.
  • Experience with cloud computing (ideally AWS).
  • Experience with forecasting methods and libraries (e.g., Prophet).
  • Experience with PyMC3 or PyStan is a plus.
  • Experience with Geospatial data is a plus.

Hiring In:

  • United States:  AZ, FL, GA, NH, IL, KY, MA, MI, NC, NM, TN, VA, CA
  • Canada: Province of Ontario
  • #LI-REMOTE

D, E & I Mission & Culture at Agero:

We are all Change Drivers at Agero. Each day, we speak to thousands of drivers and tow professionals across one of the most diverse countries in the world. Our mission to safeguard drivers on the road, strengthen our clients’ relationships with their drivers, and support the communities we live and work in unites us together as one force driving positive change.

The road to positive change starts inside Agero. In celebrating each other’s differences, we lift each other up and create space for innovation and community. Bringing our whole selves to work powers our commitment, drive, agility, and courage - ensuring we are not only changing the landscape of the driver services industry, we also are making a difference in the lives of our customers with each call, chat, and rescue.

THIS DESCRIPTION IS NOT INTENDED TO BE A COMPLETE STATEMENT OF JOB CONTENT, RATHER TO ACT AS A GUIDE TO THE ESSENTIAL FUNCTIONS PERFORMED. MANAGEMENT RETAINS THE DISCRETION TO ADD TO OR CHANGE THE DUTIES OF THE POSITION AT ANY TIME.

To review Agero's privacy policy click the link:https://www.agero.com/privacy.

***Disclaimer:Agero is committed to creating a diverse and inclusive environment and encourages applications from all qualified candidates. Accommodation is available. Additionally, we offer accommodation for applicants with disabilities in our recruitment processes. If you require accommodation during the recruitment process, please contactrecruiting@agero.com.

***Agero communicates with candidates via text for matters related to submitted applications, questions, and availability for interviews. If you prefer not to receive texts, you can contact Agero's recruiting team directly at recruiting@agero.com.

See more jobs at Agero

Apply for this job

4d

Business Intelligence Engineer

Libertex Group · Georgia, Remote
airflow · sql · Design · git

Libertex Group is hiring a Remote Business Intelligence Engineer

Libertex Group Overview 

Established in 1997, the Libertex Group has helped shape the online trading industry by merging innovative technology, market movements and digital trends. 

The multi-awarded online trading platform, Libertex, enables traders to access the market and invest in stocks or trade CFDs with underlying assets being commodities, Forex, ETFs, cryptocurrencies, and others.

Libertex is, also, the Official Online Trading Partner of FC Bayern, bringing the exciting worlds of football and trading together.

We build innovative fintech so people can #TradeForMore with Libertex.

Job Overview

We are seeking a skilled BI Engineer with expertise in PowerBI data modeling to design and maintain business intelligence solutions. The ideal candidate will have experience in creating efficient data models, optimizing SQL Server databases for reporting, and ensuring data accuracy for decision-making. This role focuses on translating complex data into actionable insights through interactive reports and dashboards.

Main Responsibilities

  • Develop and maintain PowerBI data models and complex data warehousing solutions.
  • Build and manage ETL pipelines using dbt, Airflow, and Python.
  • Optimize SQL Server datamarts for performance and reliability.
  • Implement version control using Git for data management.
  • Collaborate with cross-functional teams to translate business requirements into technical solutions.

Requirements

  • 3+ years of experience in Data Engineering with a specialization in PowerBI and data warehousing.
  • Strong proficiency in MS SQL Server, dbt, Airflow, Python, and Git.
  • Excellent command of the English language.
  • Deep expertise in data warehousing principles, including design, implementation, and query optimization.
  • Excellent problem-solving skills with a focus on performance and efficiency.
  • Strong communication abilities for effective collaboration with cross-functional teams.
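
The datamart work described above typically means pre-aggregating transactional rows into a small reporting table that dashboards can read cheaply. A minimal sketch with Python's built-in sqlite3 standing in for SQL Server; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (account_id INTEGER, instrument TEXT, volume REAL);
    INSERT INTO trades VALUES (1, 'EURUSD', 100.0), (1, 'EURUSD', 50.0),
                              (2, 'XAUUSD', 25.0);
    -- Datamart: one pre-aggregated row per (account, instrument), so
    -- reports scan a tiny table instead of the raw trade log.
    CREATE TABLE mart_trades AS
        SELECT account_id, instrument,
               COUNT(*) AS trade_count, SUM(volume) AS total_volume
        FROM trades GROUP BY account_id, instrument;
""")
rows = conn.execute(
    "SELECT account_id, instrument, trade_count, total_volume "
    "FROM mart_trades ORDER BY account_id").fetchall()
print(rows)  # [(1, 'EURUSD', 2, 150.0), (2, 'XAUUSD', 1, 25.0)]
```

In the role above, a dbt model would define this transformation and Airflow would rebuild or incrementally refresh it on a schedule.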

Benefits

  • Work in a pleasant and enjoyable environment near the Montenegrin sea or mountains
  • Quarterly bonuses based on Company performance
  • Generous relocation package for the employee and their immediate family/partner 
  • Medical Insurance Plan with coverage for the employee and their immediate family from day one
  • 24 working days of annual leave 
  • Yearly reimbursement of travel expenses for the employee and family's flight home
  • Corporate events and team building activities
  • Udemy Business unlimited membership & language training courses 
  • Professional and personal development opportunities in a fast-growing environment 

See more jobs at Libertex Group

Apply for this job

4d

Data Engineer - AWS

Tiger Analytics · Hartford, Connecticut, United States, Remote
S3 · Lambda · airflow · sql · Design · AWS

Tiger Analytics is hiring a Remote Data Engineer - AWS

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Engineering, Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership has been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Snowflake.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.

Requirements:

  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience in designing and building data pipelines
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Strong experience with Databricks and PySpark for data processing and analytics.
  • Solid understanding of data modeling, database design principles, and SQL and Spark SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.
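
A recurring pattern in the pipeline work described above is incremental loading: each run processes only records newer than a stored watermark, so scheduled reruns stay cheap and safe to repeat. A toy in-memory sketch; the record layout is a made-up example, not an actual client schema:

```python
def incremental_load(records, watermark):
    """Return the records strictly newer than `watermark`, plus the new
    watermark to persist (e.g. in an Airflow Variable) for the next run."""
    fresh = [r for r in records if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

source = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20},
          {"id": 3, "updated_at": 30}]

batch1, wm = incremental_load(source, watermark=0)    # first run: everything
source.append({"id": 4, "updated_at": 40})
batch2, wm = incremental_load(source, watermark=wm)   # next run: only id 4
print([r["id"] for r in batch1], [r["id"] for r in batch2])  # [1, 2, 3] [4]
```

In an AWS stack the same idea appears as Glue job bookmarks or partition-pruned reads from S3.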

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

6d

Data Engineer (F/H)

ASI · Nantes, France, Remote
S3 · agile · nosql · airflow · sql · azure · api · java · c++

ASI is hiring a Remote Data Engineer (F/H)

Job Description

For accessibility and clarity, masculine terms used in the original posting refer to both the feminine and masculine genders.

Simon, head of the Nantes Data team, is looking for a Data Engineer to set up, integrate, develop, and optimize pipeline solutions in Cloud and on-premise environments for our client projects.

Au sein d’une équipe dédiée et principalement en contexte agile : 

  • Vous participez à la rédaction de spécifications techniques et fonctionnelles
  • Vous maitrisez les formats de données structurés et non structurés et savez les manipuler
  • Vous connectez une solution ETL / ELT à une source de données
  • Vous concevez et réalisez un pipeline de transformation et de valorisation des données, et ordonnancez son fonctionnement
  • Vous prenez en charge les développements de médiations 
  • Vous veillez à la sécurisation des pipelines de données
  • Vous concevez et réalisez des API utilisant les données valorisées
  • Vous concevez et implémentez des solutions BI
  • Vous participez à la rédaction des spécifications fonctionnelles et techniques des flux
  • Vous définissez des plans de tests et d’intégration
  • Vous prenez en charge la maintenance évolutive et corrective
  • Vous traitez les problématiques de qualité de données
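As a rough illustration of the transformation and data-quality bullets above, here is a minimal Python sketch (field names, delimiters, and rules are all invented) that normalizes records from a semicolon-delimited source and separates valid rows from rejects:

```python
import csv
import io

# Toy input standing in for a client data source; the fields are invented.
RAW = """id;amount;country
1; 19,90 ;FR
2;;DE
3;12,50;fr
"""

def clean(row):
    """Normalize one record: trim whitespace, fix decimal commas, upper-case country."""
    amount = row["amount"].strip().replace(",", ".")
    return {
        "id": int(row["id"]),
        "amount": float(amount) if amount else None,
        "country": row["country"].strip().upper(),
    }

rows = [clean(r) for r in csv.DictReader(io.StringIO(RAW), delimiter=";")]
valid = [r for r in rows if r["amount"] is not None]
rejected = [r for r in rows if r["amount"] is None]
print(len(valid), len(rejected))  # 2 valid rows, 1 rejected
```

A production pipeline would do the same normalize-then-validate pass inside an ETL tool (Talend, Spark, Glue) rather than hand-rolled Python.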

Depending on your skills and interests, you will work with one or more of the following technologies:

  • The data ecosystem, notably Microsoft Azure
  • Languages: SQL, Java
  • SQL and NoSQL databases
  • Cloud storage: S3, Azure Blob Storage...
  • ETL/ESB and other tools: Talend, Spark, Kafka, NiFi, Matillion, Airflow, Data Factory, Glue...

 

By joining ASI,

  • You will work in a company with flexible internal ways of working, backed by an attentive HR policy (3-day-per-week remote-work agreement, "parenthesis leave" agreement...)
  • You can take part in (or host, if you feel like it) our many rituals, our internal events (midi geek, dej’tech) and external ones (DevFest, Camping des Speakers...)
  • You will join a company soon to be recognized as a "société à mission" (mission-driven company), Team GreenCaring rather than GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable-mobility allowance agreement...)

Qualifications

You have a higher-education background in computer science or mathematics, or a Big Data specialization, at least 3 years of experience in data engineering, and a successful operational track record building structured and unstructured data pipelines.

  • Committed to the quality of what you deliver, you bring rigor and organization to your work.
  • With a solid technology culture, you regularly keep watch on the state of the art.
  • A good level of English, both written and spoken, is recommended.

Keen to join a company that reflects who you are, you recognize yourself in our values of trust, listening, enjoyment, and commitment.

The salary offered for this position is between €36,000 and €40,000, depending on experience and skills, while preserving pay equity within the team.

 

Skills being equal, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

6d

Data Analyst Internship (F/H)

ASINantes, France, Remote
agilescalanosqlairflowmongodbazurescrumjavapython

ASI is hiring a Remote Data Analyst Intern (F/H)

Job Description

For accessibility and clarity, the masculine terms used below refer to both the feminine and masculine genders.

To meet our clients' needs and continue developing our Data expertise, we are looking for a Data Analyst intern.

As part of the Nantes Data team, you will join a project under the guidance of an expert, and on a day-to-day basis:

  • You prepare and integrate the data needed to build data-visualization reports
  • You clean and format the data to make it available to your reports
  • You design and shape your data using the Power Platform suite: Power Automate, Power BI, ...
  • You develop, test, and deliver a set of Power BI reports to support decision-making
  • You highlight, share, and explain the results obtained
  • You learn the Agile Scrum and W-model methodologies
  • You build up skills in one or more of the following technology environments:
    • The Data ecosystem: Spark, Hive, Kafka, Hadoop, Microsoft...
    • Languages: Scala, Java, Python, DAX, Power Query...
    • NoSQL databases: MongoDB, Cassandra, Cosmos DB...
    • Cloud storage: Azure and its associated building blocks, including Power BI
    • Market-standard ETL/orchestration tools: Airflow, Data Factory, Talend...

 

By joining ASI,

  • You will join a company soon to be recognized as a "société à mission" (mission-driven company), Team GreenCaring rather than GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable-mobility allowance agreement...)
  • You will join ASI's various expert communities to share best practices and take part in continuous-improvement initiatives.

 

Qualifications

You have a higher-education background (engineering school or university, Master's level, in progress) in computer science, mathematics, or Big Data, and are looking for a 4-to-6-month end-of-studies internship.

  • Respect and commitment are an integral part of your values.
  • Passionate about data, you are rigorous, and your interpersonal skills help you integrate easily into the team.

The internship is intended to lead to a concrete permanent (CDI) job offer.

Keen to join a company that reflects who you are, you recognize yourself in our values of trust, listening, enjoyment, and commitment.

Skills being equal, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

6d

Data Engineer Internship (F/H)

ASINantes, France, Remote
S3agilescalanosqlairflowmongodbazurescrumjavapython

ASI is hiring a Remote Data Engineer Intern (F/H)

Job Description

For accessibility and clarity, the masculine terms used below refer to both the feminine and masculine genders.

To meet our clients' needs and continue developing our Data expertise, we are looking for a Data Engineer intern.

As part of the Nantes Data team, you will join a project under the guidance of an expert, and on a day-to-day basis:

  • You have a dedicated tutor to follow your progress
  • You take part in developing an end-to-end data processing chain
  • You work on descriptive/inferential or predictive analysis
  • You contribute to the technical specifications
  • You learn the Agile Scrum and W-model methodologies
  • You build up skills in one or more of the following technology environments:
    • The Data ecosystem: Spark, Hive, Kafka, Hadoop...
    • Languages: Scala, Java, Python...
    • NoSQL databases: MongoDB, Cassandra...
    • Cloud storage: S3, Azure...
    • Market-standard ETL/orchestration tools: Airflow, Data Factory, Talend...

 

By joining ASI,

  • You will join a company soon to be recognized as a "société à mission" (mission-driven company), Team GreenCaring rather than GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable-mobility allowance agreement...)
  • You will join ASI's various expert communities to share best practices and take part in continuous-improvement initiatives.

Qualifications

You have a higher-education background (engineering school or university, Master's level, in progress) in computer science, mathematics, or Big Data, and are looking for a 4-to-6-month end-of-studies internship.

  • Respect and commitment are an integral part of your values.
  • Passionate about data, you are rigorous, and your interpersonal skills help you integrate easily into the team.

The internship is intended to lead to a concrete permanent (CDI) job offer.

 

Keen to join a company that reflects who you are, you recognize yourself in our values of trust, listening, enjoyment, and commitment.

 

Skills being equal, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

7d

Senior Backend Developer (.NET)

DevoteamVilnius, Lithuania, Remote
DevOPSnosqlairflowazurec++c#AWSbackend

Devoteam is hiring a Remote Senior Backend Developer (.NET)

Job Description

Imagine being part of one of the most successful IT companies in Europe and finding innovative solutions to technical challenges. Turn imagination into reality and apply for this exciting career opportunity in Devoteam.  

Job Highlights:  

  • Joining more than 10,000 talented colleagues around Europe  
  • International career opportunity   
  • Cozy environment in Vilnius and Kaunas offices   

Your Highlights?   

  • You have extreme ownership and proven experience in leading complex projects and tasks  
  • You have a strong sense of honesty, responsibility and reliability  
  • You are ready to step out of your comfort zone and constantly work on improving your soft and hard skills  
  • You are an excellent team player and always ready to assist your colleagues  

Still with us? Then we might have a fantastic job opportunity for you!  

OUR NEW SENIOR BACKEND DEVELOPER  

As a Senior Backend Developer, you will work on innovative Cloud-based software aimed at full-scale autonomy in operating Public Clouds. You will also provide technical leadership and mentorship to other team members. 

SOME OF YOUR RESPONSIBILITIES:  

  • Develop Cloud-based software backend, ensuring high performance and scalability;
  • Help create, maintain and follow best practices for development work including coding, testing, source control, build automation, continuous deployment and continuous delivery  
  • Contribute to software engineering excellence by mentoring other team members  
  • Participate in the development process relying on your technical expertise  
  • Keep up to date on technology innovations 

SOME OF OUR REQUIREMENTS:  

  • Deep knowledge of software development methodologies and the ability to quickly adapt to new languages and platforms when needed   
  • Extensive development experience in C#/.NET  
  • Experience in writing REST APIs 
  • Experience in developing multi-tenant applications 
  • Experience with DevOps, including CI/CD and Configuration Management  
  • Experience in cloud application development on one of the cloud providers (Microsoft Azure, AWS or GCP)  
  • Experience working with source code repository management systems such as GitHub and Azure DevOps  

It would be awesome, if you have:  

  • Understanding of AI and Machine Learning  
  • Experience in building and supporting complex distributed SaaS solutions  
  • Solid knowledge of database technologies (RDBMS and NoSQL)  
  • Familiarity with data engineering tools such as Databricks or BigQuery is a plus.
  • Experience with workflow orchestration engines, e.g. Cadence, Temporal, Airflow, AWS Step Functions, etc. 

WHAT YOU CAN LOOK FORWARD TO:   

  • Creating a purposeful set of software built on a modern tech stack
  • Becoming a part of very specialized team who will support your ability to succeed  
  • A challenging and exciting career with an international perspective and opportunities  
  • Attractive compensation package with a mix of fixed and variable  
  • High level of trust and competency to make your own decisions  
  • A warm and talented culture with a focus on business, but knowing that family always comes first  
  • Access to an international network of specialists within the organization to build your reputation and skills  
  • Salary from 5500 EUR gross (depending on experience and competencies)


At Devoteam we have created a culture of honesty and transparency, inclusion, and cooperation, which we value a lot. We are looking for colleagues who are highly motivated, proactive, and not afraid of challenges. We are highly invested in the career path development of our employees, and we offer and support opportunities for further training, certification, and specialization.   


See more jobs at Devoteam

Apply for this job

7d

Database Administrator (remote in Spain possible)

LanguageWireSpain, Remote
DevOPSairflowsqlazureqapostgresql

LanguageWire is hiring a Remote Database Administrator (remote in Spain possible)

Do you just love tweaking that one annoying query to make it perform just a little bit better?

Are you the go-to person for finding and using data in a complex distributed ecosystem with plenty of services and databases?

Are you interested in pushing organizations to use their data more effectively and become more data-driven?

Yes? You should definitely read on!

The role you’ll play

At LanguageWire, we offer a large product suite to cater to all our customers' linguistic needs.

Our product suite includes many products powered by many microservices, each owning its own data. We also run data warehouse and data lake solutions to power our AI/ML developments.

We are looking for a seasoned DBA to support engineering teams in their day-to-day work: technology selection, data modelling, query optimization, and monitoring and troubleshooting issues are continuous needs you will help teams with.

You will not only support the teams but also be responsible for evangelizing and leveling up our engineering teams on data and databases by providing guidance and training.

In parallel to that, as our system is complex and distributed, you will work closely with our Senior Director of Technology to build a solid data governance framework and define/execute our data strategy.

The team you’ll be a part of

We have 8 software teams working across 5 countries and taking care of the continuous development of our platform. We strongly believe in building our own tech so we can deliver the best solutions for our customers. Our teams cover the full technical scope needed to create advanced language solutions with AI, complex web-based tools, workflow engines, large scale data stores and much more. Our technology and linguistic data assets set us apart from the competition and we’re proud of that.

You will report directly to our Senior Director of Technology and work as part of our Technical Enablement team, a cross-functional team of specialists working closely with all our other engineering teams on core technical aspects (architecture, data engineering, QA automation, performance, cybersecurity, etc.). Our Technical Enablement team is key to ensuring that the LanguageWire platform is built, run, and maintained in a scalable, reliable, performant, and secure manner.

If you want to make a difference, make it with us by…

  • Ensuring the optimal operation of our products and services by being the hands-on expert who supports our teams with their databases and data needs.
  • Defining LanguageWire’s data architecture framework, standards, and principles, including modeling, metadata, security, reference data, and master data.
  • Driving the strategy execution across the entire tech organization by closely collaborating with other teams.
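The query-tuning theme above can be illustrated with SQLite's `EXPLAIN QUERY PLAN`: the same filter goes from a full table scan to an index search once a suitable index exists. A hedged sketch with an invented schema (any real engine has its own `EXPLAIN` dialect, but the idea carries over):

```python
import sqlite3

# Toy schema; table and column names are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
db.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
               [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    return " ".join(r[3] for r in db.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan of `orders`
db.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # now a search using the new index
print(before)
print(after)
```

Reading plans before and after an index change is the bread-and-butter loop the posting's opening question alludes to.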

In one year, you’ll know you were successful if…

  • All of LanguageWire’s data is well modelled and documented.
  • LanguageWire has a powerful core data engine that allows our ML/AI teams to effectively leverage all of our data.
  • You are regarded as the go-to person for all database and data needs.

 

Desired experience and competencies

What does it take to work for LanguageWire?

What you’ll need to bring

You are a hands-on technical expert

  • Expert knowledge of SQL (PostgreSQL, SQL Server, etc.)
  • Solid data modelling skills, including conceptual, logical and physical models.
  • Good knowledge of cloud services (Azure & GCP) and DevOps engineering

You are a team player 

  • Excellent communicator able to create engagement and commitment from teams around you
  • You love solving complex puzzles with engineers from different areas and different backgrounds 
  • You’re eager to understand how the different areas of the ecosystem connect to create the complete value chain


Fluent English (reading, writing, speaking) 

This will make you stand out

  • Experience working within a microservice-based architecture
  • Experience with Data Warehousing (BigQuery, SnowFlake, Databricks, …)
  • Experience with Orchestration technology (Apache Airflow, Azure Data Factory, …)
  • Experience with Data Lakes and Data Warehouses

Your colleagues say you

  • Are approachable and helpful when needed
  • Know all the latest trends in the industry
  • Never settle for second best

Our perks

  • Enjoy flat hierarchies, responsibility and freedom, direct feedback, and room to stand up for your own ideas
  • Internal development opportunities, ongoing support from your People Partner, and an inclusive and fun company culture
  • International company with over 400 employees. Offices in Copenhagen, Aarhus, Stockholm, Varberg, London, Leuven, Lille, Paris, Munich, Hamburg, Zurich, Kiev, Gdansk, Atlanta, Finland and Valencia
  • For this role, we have a full-time FlexiWire@home option for remote work. Of course, you are always welcome at the office to collaborate and connect with your colleagues.
  • We take care of our people and initiate many social get-togethers, from Friday bars to summer and Christmas parties. We have fun!
  • 200 great colleagues in the Valencia office belonging to different business departments
  • Excellent location in cool and modern offices in the city center, with a great rooftop terrace and a view over the Town Hall Square
  • Working in an international environment—more than 20 different nationalities
  • A private health insurance
  • A dog friendly atmosphere
  • Big kitchen with access to organic fruits, nuts and biscuits and coffee.
  • Social area and game room (foosball table, darts, and board games)
  • Bike and car parking

 

About LanguageWire

At LanguageWire, we want to wire the world together with language. Why? Because we want to help people & businesses simplify communication. We are fueled by the most advanced technology (AI) and our goal is to make customer's lives easier by simplifying their communication with any audience across the globe.

 

Our values drive our behavior

We are curious. We are trustworthy. We are caring. We are ambitious.

At LanguageWire, we are curious and intrigued by what we don’t understand. We believe relationships are based on honesty and responsibility, and being trustworthy reinforces an open, humble, and honest way of communicating. We are caring and respect each other personally and professionally. We encourage authentic collaboration, invite feedback and a positive social environment. Our desire to learn, build, and share knowledge is a natural part of our corporate culture.

 

Working at LanguageWire — why we like it: 

“We believe that we can wire the world together with language. It drives us to think big, follow ambitious goals, and get better every day. By embracing and solving the most exciting and impactful challenges, we help people to understand each other better and to bring the world closer together.”

(Waldemar, Senior Director of Product Management, Munich)

Yes, to diversity, equity & inclusion

In LanguageWire, we believe diversity in gender, age, background, and culture is essential for our growth. Therefore, we are committed to creating a culture that incorporates diverse perspectives and expertise in our everyday work.

LanguageWire’s recruitment process is designed to be transparent and fair for all candidates. We encourage candidates of all backgrounds to apply, and we ensure that candidates are provided with an equal opportunity to demonstrate their competencies and skills.

Want to know more?

We can’t wait to meet you! So, why wait 'til tomorrow? Apply today!

If you want to know more about LanguageWire, we encourage you to visit our website!

See more jobs at LanguageWire

Apply for this job

8d

Sr. Data Engineer

DevOPSterraformairflowpostgressqlDesignapic++dockerjenkinspythonAWSjavascript

hims & hers is hiring a Remote Sr. Data Engineer

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

​​About the Role:

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving millions of Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance 
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
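The data-quality and monitoring bullet above can be sketched as a set of named batch checks, a toy stand-in for frameworks like dbt tests or Great Expectations (the check names and fields here are invented):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    passed: bool

def run_checks(rows, checks: dict) -> list:
    """Evaluate each named predicate against the batch and collect results."""
    return [Check(name, fn(rows)) for name, fn in checks.items()]

# Hypothetical batch of pipeline output rows.
batch = [{"user_id": 1, "amount": 9.99}, {"user_id": 2, "amount": 12.50}]

checks = {
    "non_empty": lambda rows: len(rows) > 0,
    "no_null_ids": lambda rows: all(r["user_id"] is not None for r in rows),
    "amounts_positive": lambda rows: all(r["amount"] > 0 for r in rows),
}

results = run_checks(batch, checks)
failed = [c.name for c in results if not c.passed]
print(failed)  # [] (all checks pass on this batch)
```

In production, each failed check would raise an alert or fail the Airflow task rather than just print, which is what "observability" means for a pipeline.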

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience with Databricks platform
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD tooling (Jenkins, GitHub Actions, CircleCI)

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

#LI-Remote

 

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions, including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range is
$160,000–$185,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims considers all qualified applicants for employment, including applicants with arrest or conviction records, in accordance with the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance, the California Fair Chance Act, and any similar state or local fair chance laws.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at accommodations@forhims.com and describe the needed accommodation. Your privacy is important to us, and any information you share will only be used for the legitimate purpose of considering your request for accommodation. Hims & Hers gives consideration to all qualified applicants without regard to any protected status, including disability. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

12d

Principal Software Architect

IFSValencia, Spain, Remote
gRPCgolangagileairfloworacleDesignmobileazuregraphqljavac++.netdockerpostgresqlkubernetesangularjenkinspythonjavascript

IFS is hiring a Remote Principal Software Architect

Job Description

The Principal Architect ("PA") owns overall architecture accountability for one or more portfolios within IFS Technology. The role of the PA is to build and develop the technology strategy while growing, leading, and energising multi-faceted technical teams to design and deliver technical solutions that meet IFS technology needs and are supported by excellent data, methodology, systems, and processes. The role works with a broad set of stakeholders including product managers, engineers, and various R&D and business leaders. The occupant of this role diagnoses and solves significant, complex, and non-routine problems; translates practices from other markets, countries, and industries; provides authoritative technical recommendations with a significant short- and medium-term impact on business performance; and contributes to company standards and procedures, including the IFS Technical Reference Architecture. This role actively identifies new approaches that enhance, and where possible simplify, complexities in the IFS suite. The PA represents IFS as the authority in one or more technology areas or portfolios and acts as a role model for developing experts in those areas.

What is the role?

  • Build, nurture and grow high performance engineering teams using Agile Engineering principles.
  • Provide technical leadership for design and development of software meeting functional & nonfunctional requirements.
  • Provide multi-horizon technology thinking to broad portfolios and platforms in line with desired business needs.
  • Adopt a hands-on approach to develop the architecture runway for teams.
  • Set technical agenda closely with the Product and Program Managers
  • Ensure maintainability, security and performance in software components developed using well-established engineering/architectural principles.
  • Ensure software quality complying with shift left quality principles.  
  • Conduct peer reviews & provide feedback ensuring quality standards.
  • Engage with requirement owners and liaise with other stakeholders.
  • Contribute to improvements in IFS products & services.

Qualifications

It’s your excellent influencing and communication skills that will really make the difference. Entrepreneurship and resilience will be required, to help drive and shape the technology strategy. You will need technical, operational, and commercial breadth to deliver a strategic technical vision alongside a robust, secure and cost-effective delivery platform and operational model.

  • Seasoned Leader with 10+ years of hands-on experience in Design, Development and Implementation of scalable cloud-based web and mobile applications.
  • Have strong software architectural, technical design and programming skills.
  • Experience in Application Security, Scalability and Performance.
  • Ability to envision the big picture and work on details. 
  • Can articulate technology vision and delivery strategy in a way that is understandable to technical and non-technical audiences.
  • Willingness to learn and adapt different technologies/work environments.
  • Knowledge of and skilled in various tools, languages, frameworks and cloud technologies with the ability to be hands-on where needed:
    • Programming languages - C++, C#, Go, Python, JavaScript, and Java
    • JavaScript frameworks - Angular, Node, React, etc.
    • Back-end frameworks - .NET, Go, etc.
    • Middleware - REST, GraphQL, gRPC
    • Databases - Oracle, MongoDB, Cassandra, PostgreSQL, etc.
    • Azure and Amazon cloud services, with proven experience building cloud-native apps on either or both platforms
    • Kubernetes and Docker containerization
    • CI/CD tools - CircleCI, GitHub, GitLab, Jenkins, Tekton
  • Hands-on experience with OOP concepts and design principles.
  • Good to have:
    • Knowledge of cloud-native big data tools (Hadoop, Spark, Argo, Airflow) and data science frameworks (PyTorch, Scikit-learn, Keras, TensorFlow, NumPy)
    • Exposure to ERP application development is advantageous.
  • Excellent communication and multi-tasking skills along with an innovative mindset.

See more jobs at IFS

Apply for this job

13d

Contractor: Lead Data Engineering Services

NewselaRemote - Brazil or Argentina
terraformairflowsqlDesignc++pythonAWS

Newsela is hiring a Remote Contractor: Lead Data Engineering Services

Seeking to hire a Contractor based out of Brazil or Argentina for Lead-Level Data Engineering Services.

Scope of Services:

  • As a Contractor, you will work alongside app developers and data stakeholders to make data system changes and respond to data inquiries
  • You will lead initiatives and problem definition, scoping, design, and planning through epics and blueprints
  • You will develop deep domain knowledge and share it through documentation, technical presentations, discussions, and incident reviews
  • You will build and maintain data pipelines and DAG tooling
  • You will establish and maintain a data catalog with business-related metadata

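The pipeline and DAG work described above can be illustrated with a toy, stdlib-only sketch of dependency-ordered task execution — the core idea that tools like Airflow or Dagster build scheduling, retries, and observability on top of. The task names and dependency graph here are hypothetical:

```python
# Toy sketch of DAG-based pipeline orchestration using only the
# standard library. Real tooling (Airflow, Dagster) adds scheduling,
# retries, and observability on top of this core idea.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (hypothetical names).
pipeline = {
    "extract_orders": set(),
    "extract_users": set(),
    "transform_joined": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform_joined"},
}

def run(dag):
    """Execute tasks in dependency order and return the run order."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        pass  # a real runner would invoke each task's callable here
    return order

order = run(pipeline)
```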
Skills & Experience:

  • Expert proficiency in SQL, Python, and relational datastores (columnar and row-oriented databases)
  • Proficiency in building and maintaining data pipelines and DAG tooling (Dagster, Airflow, etc)
  • Advanced experience with event-based pipelines and CDC tooling
  • Advanced experience in managing large-scale data migrations in relational datastores
  • Advanced experience in optimizing SQL query performance
  • Advanced experience with data testing strategies to ensure resulting datastores are aligned with expected business logic
  • Experience with dbt orchestration and best practices
  • Experience with enabling monitoring, health checks and alerting on data systems and pipelines
  • Experience establishing and maintaining a data catalog with business-related metadata
  • Experience building tools and automation to run data infrastructure
  • Experience with writing and maintaining cloud-based infrastructure for data pipelines (AWS, GCP and Terraform) is a plus
  • Experience in document, graph or schema-less datastores is a plus

Please note that given the nature of the contract, this role will not be eligible to participate in company-sponsored benefits. 

See more jobs at Newsela

Apply for this job

13d

Contractor: Data Engineering Services

NewselaRemote - Brazil or Argentina
tableauairflowsqlc++pythonAWS

Newsela is hiring a Remote Contractor: Data Engineering Services

Seeking to hire a Contractor based out of Brazil or Argentina for Mid-Senior Level Data Engineering Services.

Scope of Services: 

  • This Contractor will develop a thorough understanding of the various data sources, data pipelines, and enterprise data warehouse models. 
  • Play a crucial role as we assess existing tools and processes, helping the team with critical migrations such as Prefect 1 to Prefect 2 and incorporating tools like dbt into the analytics engineering cycle.
  • Help the team rapidly meet business needs by connecting new data sources as needed and building new data warehouse models.
  • Maintain a reliable data and analytics platform by bringing in best practices and tools for data quality checks and monitoring, and by helping troubleshoot and address production issues.

Skills & Experience:

  • 4+ years of experience in Data Engineering
  • Proficient in Python programming and hands-on experience building ETL/ELT pipelines
  • Experience using orchestration tools like Prefect, Airflow etc
  • Working experience with columnar analytical datastores like Snowflake, Redshift, or BigQuery
  • Hands-on experience in data modeling, including strong knowledge of SQL
  • Experience with source control systems like GitHub
  • Experience with a public cloud preferably AWS
  • Detail-oriented, taking pride in the quality of your work
  • Experience working with a modern cloud-native data stack in a fast-paced environment
  • Experience with dbt is a plus
  • Experience with BI tools like Tableau is a plus

Please note that given the nature of the contract, this role will not be eligible to participate in company-sponsored benefits. 

See more jobs at Newsela

Apply for this job

13d

Contractor: Lead Data Reporting Engineering Services

NewselaRemote - Brazil or Argentina
airflowsqlapic++pythonfrontend

Newsela is hiring a Remote Contractor: Lead Data Reporting Engineering Services

Seeking to hire a Contractor based out of Brazil or Argentina for Lead-Level Data Reporting Engineering Services.

Scope of Services:

  • As a Contractor you will translate product data requests into insightful customer reporting metrics
  • You will communicate with app developers and data stakeholders regarding data system changes and data inquiries
  • You will lead initiatives, define problems, and provide scoping to break down work for the team
  • You will communicate with frontend engineers to create data visualizations
  • You will lead and work alongside customer-facing business intelligence, reporting, and data visualization teams
  • You will conduct root-cause analyses relating to queries about data irregularities and product metric definition requests

Skills & Experience:

  • Expert proficiency in SQL, Python, and relational datastores
    • Experience with columnar relational databases is a plus
  • Advanced experience in optimizing SQL query performance
  • Experience leading and working on customer-facing business intelligence, reporting and data visualization teams
    • Experience with maintaining custom, in house tooling is a plus
  • Strong ability to conduct root-cause analyses relating to queries about data irregularities and product metric definition requests
  • Experience with data testing strategies to ensure transformations and metrics are aligned with expected business logic
  • Experience with dbt orchestration and best practices
  • Experience with Python web frameworks (FastAPI, Flask)
  • Experience with enabling application monitoring and health checks for systems within the team’s domain
  • Experience with data pipelines and DAG tooling (Dagster, Airflow, etc) is a plus
  • Experience with event-based pipelines and CDC tooling is a plus

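The data-testing experience listed above can be sketched with a minimal, stdlib-only example. The table, column, and check below are hypothetical; in practice such checks are often expressed as dbt tests or framework assertions rather than hand-rolled scripts:

```python
# Minimal sketch of a data-quality check using stdlib sqlite3.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE daily_metrics (day TEXT, active_users INTEGER);
    INSERT INTO daily_metrics VALUES ('2024-01-01', 120), ('2024-01-02', 118);
""")

def check_no_negative_counts(conn):
    """Data test: a user-count metric must never go negative."""
    (bad_rows,) = conn.execute(
        "SELECT COUNT(*) FROM daily_metrics WHERE active_users < 0"
    ).fetchone()
    return bad_rows == 0

ok = check_no_negative_counts(conn)
```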
Please note that given the nature of the contract, this role will not be eligible to participate in company-sponsored benefits. 

See more jobs at Newsela

Apply for this job

13d

Contractor: Senior-Level Site Reliability Engineering Services (Brazil or Argentina)

NewselaRemote - Brazil or Argentina
terraformairflowDesignc++dockerkuberneteslinuxAWS

Newsela is hiring a Remote Contractor: Senior-Level Site Reliability Engineering Services (Brazil or Argentina)

Seeking to hire a Contractor based out of Brazil or Argentina for Senior-Level Site Reliability Engineering Services.

Scope of Services:

  • Be on an on-call rotation to respond to incidents that impact Newsela.com availability and provide support for developers during internal and external incidents 
  • Maintain and assist in extending our infrastructure with Terraform, Github Actions CI/CD, Prefect, and AWS services
  • Build monitoring that alerts on symptoms rather than outages using Datadog, Sentry and CloudWatch
  • Look for ways to turn repeatable manual actions into automations to reduce toil
  • Improve operational processes (such as deployments, releases, migrations, etc) to make them run seamlessly with fault tolerance in mind 
  • Design, build and maintain core cloud infrastructure on AWS and GCP that enables scaling to support thousands of concurrent users
  • Debug production issues across services and levels of the stack 
  • Provide infrastructure and architectural planning support as an embedded team member within a domain of Newsela’s application developers 
  • Plan the growth of Newsela’s infrastructure 
  • Influence the product roadmap and work with engineering and product counterparts to improve the resiliency and reliability of the Newsela product.
  • Proactively work on efficiency and capacity planning to set clear requirements and reduce the system resource usage to make Newsela cheaper to run for all our customers.
  • Identify parts of the system that do not scale, provide immediate palliative measures, and drive long-term resolution of these incidents.
  • Identify Service Level Indicators (SLIs) that will align the team to meet the availability and latency objectives.
  • For stable counterpart assignments, maintain awareness and actively influence stage group plans and priorities through participation in stage group meetings and async discussions. Act as a steward for reliability.

Skills / Experience:

  • 5+ years of site reliability engineering experience
  • You have advanced knowledge of Terraform syntax and CI/CD configuration (pipelines, jobs)
  • You have managed DAG tooling and data pipelines (ex: Airflow, Dagster, Prefect)
  • You have advanced knowledge and experience with maintaining data pipeline infrastructure and large scale data migrations 
  • You have advanced knowledge of cloud infrastructure services (AWS, GCP)
  • You are well versed in container orchestration technologies: cluster provisioning and new services (ECS, Kubernetes, Docker)
  • Background working with service catalog metrics and recording rules for alerts (Datadog, NewRelic, Sentry, Cloudwatch)
  • Experience with log shipping pipelines and incident debugging visualizations
  • Familiarity with operating system (Linux) configuration, package management, startup, and troubleshooting, and comfort with Bash/CLI scripting
  • Familiarity with block and object storage configuration and debugging.
  • Ability to identify significant projects that result in substantial improvements in reliability, cost savings and/or revenue.
  • Ability to identify changes for the product architecture from the reliability, performance and availability perspectives with a data-driven approach.
  • Lead initiatives and problem definition and scoping, design, and planning through epics and blueprints.
  • You have deep domain knowledge and share it through documentation, recorded demos, technical presentations, discussions, and incident reviews.
  • You can perform and run blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again.

Please note that given the nature of the contract, this role will not be eligible to participate in company-sponsored benefits. 

See more jobs at Newsela

Apply for this job

13d

DATA ENGINEER (Internship, January 2025) - M/F

Showroomprive.comSaint-Denis, France, Remote
airflowsqlc++

Showroomprive.com is hiring a Remote DATA ENGINEER (Internship, January 2025) - M/F

Job Description

Within Showroomprive's Data division, you will join the Data Engineering team.
Your work will center on extracting, processing, and storing data by maintaining and evolving a data warehouse used by the other Data teams (BI, Data Science, Marketing Analysts).

Your work will be split into two parts:

  • A main project to carry out end to end around data: its processing, its quality control, and its accessibility.

  • The team's day-to-day tasks (developing new data flows, exporting business data, ad hoc queries, access management, etc.).

To carry out its work, the team uses market-leading data-processing tools, notably Dataiku and Airflow for data flows, along with a leading cloud platform.

You will join a team of Data Engineers who will support you day to day, as well as a broader Data department with diverse and deep expertise across its domains.

Qualifications

You are completing a higher-education program (Bac+4/+5, i.e. master's level) at an engineering school or an equivalent university program in a field related to Business Intelligence or Data Engineering.

Through your studies or previous experience,

you have built solid foundations in SQL and Python. You have also developed a real appetite for learning on your own and are very curious when it comes to data.

Your rigor and energy will be key assets in carrying out the work entrusted to you.

See more jobs at Showroomprive.com

Apply for this job