airflow Remote Jobs

143 Results

+30d

Senior Machine Learning Engineer

Alt · Remote US
airflow, postgres, Design, python, AWS

Alt is hiring a Remote Senior Machine Learning Engineer

At Alt, we’re on a mission to unlock the value of alternative assets, and looking for talented people who share our vision. Our platform enables users to exchange, invest, value, securely store, and authenticate their collectible cards. And we envision a world where anything is an investable asset. 

To date, we’ve raised over $100 million from thought leaders at the intersection of culture, community, and capital. Some of our investors include Alexis Ohanian’s fund Seven Seven Six, the founders of Stripe, Coinbase co-founder Fred Ehrsam, BlackRock co-founder Sue Wagner, the co-founders of AngelList, First Round Capital, and BoxGroup. We’re also backed by professional athletes including Tom Brady, Candace Parker, Giannis Antetokounmpo, Alex Morgan, Kevin Durant, and Marlon Humphrey.

Alt is a dedicated equal opportunity employer committed to creating a diverse workforce. We celebrate our differences and strive to create an inclusive environment for all. We are focused on fostering a culture of empowerment which starts with providing our employees with the resources needed to reach their full potential.

What we are looking for:

We are seeking a Senior Machine Learning Engineer who is eager to make a significant impact. In this role, you'll get the opportunity to leverage your technical expertise and problem-solving skills to solve some of the hardest data problems in the hobby. Your primary focus will be on enhancing and optimizing our pricing engine to support strategic business goals. Our ideal candidate is passionate about trading cards, has a strong sense of ownership, and enjoys challenges. At Alt, data is core to everything we do and is a differentiator for our customers. The team’s scope covers data pipeline development, search infrastructure, web scraping, detection algorithms, internal tooling, and data quality. We give our engineers a lot of individual responsibility and autonomy, so your ability to make good trade-offs and exercise good judgment is essential.

The impact you will make:

  • Partner with engineers and cross-functional stakeholders to contribute to all phases of algorithm development, including ideation, prototyping, design, and production
  • Build, iterate, productionize, and own Alt's valuation models (a minimal illustrative sketch follows this list)
  • Leverage background in pricing strategies and models to develop innovative pricing solutions
  • Design and implement scalable, reliable, and maintainable machine learning systems
  • Partner with product to understand customer requirements and prioritize model features
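
For illustration only: a minimal sketch of what training a simple valuation model could look like, assuming a tabular export of historical sales. The file name, feature columns, and choice of GradientBoostingRegressor are placeholder assumptions, not Alt's actual pricing engine.

```python
# Hypothetical sketch: fit a simple valuation model on historical card sales.
# File name, feature columns, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

sales = pd.read_csv("historical_sales.csv")  # hypothetical export of past sales

features = ["grade", "print_year", "population_count", "days_since_last_sale"]
X_train, X_test, y_train, y_test = train_test_split(
    sales[features], sales["sale_price"], test_size=0.2, random_state=42
)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):,.2f}")
```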

What you bring to the table:

  • Experience: 5+ years of experience in software development, with a proven track record of developing and deploying models in production. Experience with pricing models preferred.
  • Technical Skills: Proficiency in programming languages and tools such as Python, AWS, Postgres, Airflow, Datadog, and JavaScript.
  • Problem-Solving: A knack for solving tough problems and a drive to take ownership of your work.
  • Communication: Effective communication skills with the ability to ship solutions quickly.
  • Product Focus: Excellent product instincts, with a user-first approach when designing technical solutions.
  • Team Player: A collaborative mindset that helps elevate the performance of those around you.
  • Industry Knowledge: Knowledge of the sports/trading card industry is a plus.

What you will get from us:

  • Ground floor opportunity as an early member of the Alt team; you’ll directly shape the direction of our company. The opportunities for growth are truly limitless.
  • An inclusive company culture that is being built intentionally to foster an environment that supports and engages talent in their current and future endeavors.
  • $100/month work-from-home stipend
  • $200/month wellness stipend
  • WeWork office stipend
  • 401(k) retirement benefits
  • Flexible vacation policy
  • Generous paid parental leave
  • Competitive healthcare benefits, including HSA, for you and your dependent(s)

Alt's compensation package includes a competitive base salary benchmarked against real-time market data, as well as equity for all full-time roles. We want all full-time employees to be invested in Alt and to be able to take advantage of that investment, so our equity grants include a 10-year exercise window. The base salary range for this role is: $194,000 - $210,000. Offers may vary from the amount listed based on geography, candidate experience and expertise, and other factors.

See more jobs at Alt

Apply for this job

+30d

Data Engineer (m/f/d) - Python / Remote possible

Ebreuninger GmbH · Stuttgart, Germany, Remote
DevOPS, terraform, airflow, sql, oracle, azure, docker, postgresql, kubernetes, python, AWS

Ebreuninger GmbH is hiring a Remote Data Engineer (m/f/d) - Python / Remote possible

Job Description

The Data Engineering team supplies Breuninger's data platform with all the data needed for a range of data products such as reporting dashboards, marketing analyses, data science, and other use cases. Our job is to make raw data from a wide variety of source systems available in the data platform (Google Cloud / BigQuery). For this purpose we operate more than 100 data pipelines built on different technologies. As data experts we also drive data-driven working at Breuninger and act as advisors to other teams on data architecture and data provisioning.

  • As a Data Engineer (m/f/d) at Breuninger, you continuously evolve our data platform (Google Cloud / BigQuery)
  • You are responsible for designing, implementing, and maintaining data pipelines (Python, Airflow, dbt Cloud, Kubernetes); a minimal pipeline sketch follows this list
  • You provision and improve the data infrastructure for our data pipelines (Terraform, Google Cloud)
  • You advise other teams on building their data processes
  • You take on DevOps tasks, automate recurring processes, and operate CI/CD pipelines (Gitlab, Terraform)
  • You drive knowledge sharing within the team and across the company, helping to promote data-driven working throughout the organization
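
For illustration only: a minimal sketch of the kind of pipeline described above, assuming a daily GCS-to-BigQuery load using the Airflow google provider package and Airflow 2.4+ (for the `schedule` argument). The bucket, dataset, and table names are placeholders, not Breuninger's actual setup.

```python
# Illustrative sketch: load a daily export from GCS into BigQuery.
# Bucket, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_orders_to_bigquery",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-raw-exports",                  # placeholder bucket
        source_objects=["orders/{{ ds }}/*.parquet"],  # one folder per logical date
        source_format="PARQUET",
        destination_project_dataset_table="analytics.raw_orders",
        write_disposition="WRITE_APPEND",
    )
```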

Qualifications

  • You have at least 2 years of relevant experience in data engineering
  • You have programming experience with Python
  • You have good SQL skills
  • You have experience with Airflow or comparable orchestration tools (Dagster, Prefect, …)
  • You have a solid understanding of database technologies (PostgreSQL, Oracle, …)
  • You know how to run ELT/ETL pipelines reliably
  • You have experience with at least one cloud provider (AWS, GCP, Azure, …)
  • You are familiar with a range of big data technologies (Apache Beam, Apache Spark, …) and data architectures (streaming, batch, …)
  • You have an "automate everything" mindset, and DevOps is second nature to you
  • Ideally, you already have experience with BigQuery, Terraform, Kubernetes, and Docker
  • You have good English skills

Apply for this job

+30d

Data Architect

Cohere Health · Remote
agile, tableau, nosql, airflow, sql, Design, c++, python, AWS

Cohere Health is hiring a Remote Data Architect

Company Overview: 

Cohere Health is a fast-growing clinical intelligence company that’s improving lives at scale by promoting the best patient-specific care options, using leading-edge AI combined with deep clinical expertise. In only four years our solutions have been adopted by health insurance plans covering over 15 million people, while our revenues and company size have quadrupled. That growth, combined with capital raises totaling $106M, positions us extremely well for continued success. Our awards include: 2023 and 2024 BuiltIn Best Place to Work, Top 5 LinkedIn™ Startup, TripleTree iAward, multiple KLAS Research Points of Light, along with recognition on Fierce Healthcare's Fierce 15 and CB Insights' Digital Health 150 lists.

Opportunity Overview: 

You will be a key leader in designing and implementing our data architecture, which is central to our value proposition and crucial to our company's success. As a Data Architect at Cohere, you will work with a high degree of autonomy to design and optimize data warehouses, ensure data governance, and enable data-driven decision-making across the business. You will partner closely with data, product, and engineering teams to solve complex problems and deliver scalable solutions.

Last but not least, people who succeed here are empathetic teammates who are candid, kind, caring, and embody our core values and principles. We believe that diverse, inclusive teams make the most impactful work. Cohere is deeply invested in ensuring a supportive, growth-oriented environment for everyone.

What you will do:

  • Lead the design, implementation, and optimization of our data warehouse and governance policies to ensure scalability and compliance with healthcare regulations.
  • Work closely with stakeholders to understand data requirements and deliver actionable insights that can be efficiently productized.
  • Design, develop, operationalize, and maintain logical, physical, and conceptual data models to support various business use cases.
  • Collaborate with cross-functional teams to define data architecture standards and best practices.
  • Ensure data quality, integrity, and security across all data sources and systems.
  • Provide technical leadership and mentorship to data engineering and management teams.
  • Create and manage data governance frameworks, including data catalogs, lineage, and metadata management.
  • Stay current with emerging data technologies and evaluate their potential impact on our architecture.

Your background & requirements:

  • 4+ years of experience leading data architecture initiatives in a fast-paced, agile environment.
  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field with at least 12 years of relevant experience.
  • Proven track record of designing and implementing scalable data warehouses and governance policies.
  • Expertise in SQL and proficiency in data modeling and design. Experience integrating NoSQL systems as part of a broader data architecture.
  • Hands-on experience with ETL processes, data integration, and data quality frameworks.
  • Experience with data governance tools and practices.
  • Experience building data platforms using Python, AWS, Airflow, dbt, and data warehouses.
  • Familiarity with healthcare data standards such as HL7, FHIR, and CCDA.
  • Strong understanding of data privacy and security regulations, including HIPAA.
  • Experience with business intelligence tools like Tableau or PowerBI.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration skills. 

Equal Opportunity Statement: 

Cohere Health is an Equal Opportunity Employer. We are committed to fostering an environment of mutual respect where equal employment opportunities are available to all. To us, it’s personal.

We can’t wait to learn more about you and meet you at Cohere Health!

The salary range for this position is $160,000 to $185,000 annually, as part of a total benefits package that includes health insurance, a 401(k), and bonus. In accordance with applicable state laws, Cohere is required to provide a reasonable estimate of the compensation range for this role. Individual pay decisions are ultimately based on a number of factors, including but not limited to qualifications for the role, experience level, skillset, and internal alignment.

 

#LI-Remote

#BI-Remote




Apply for this job

+30d

Sr. Data Engineer, Marketing Tech

ML, DevOPS, Lambda, agile, airflow, sql, Design, api, c++, docker, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer, Marketing Tech

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability.
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources.
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake (a minimal batch-transform sketch follows this list).
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance 
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling.
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics.
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them.
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models.
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies.
  • Partner with DevOps to build IaC and CI/CD pipelines.
  • Support code versioning and code deployments for data pipelines.
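
For illustration only: a minimal PySpark batch-transform sketch of the kind such a pipeline might run. The input path, column names, and output location are placeholder assumptions, not Hims & Hers infrastructure.

```python
# Illustrative PySpark batch step: aggregate raw events from a data lake
# into a daily summary table. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

events = spark.read.parquet("s3://example-data-lake/raw/events/")  # placeholder path

daily_rollup = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "campaign_id")
    .agg(
        F.countDistinct("user_id").alias("unique_users"),
        F.count("*").alias("event_count"),
    )
)

daily_rollup.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-data-lake/marts/daily_campaign_rollup/"  # placeholder output
)
```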

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages.
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed.
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets.
  • Experience working with customer behavior data. 
  • Experience with JavaScript, event tracking tools like GTM, analytics tools like Google Analytics and Amplitude, and CRM tools. 
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform.
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda).
  • Experience with IaC technologies like Terraform.
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres.
  • Experience building event streaming pipelines using Kafka/Confluent Kafka.
  • Experience with modern data stack like Airflow/Astronomer, Fivetran, Tableau/Looker.
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps.
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI).
  • Thorough understanding of SDLC and Agile frameworks.
  • Project management skills and a demonstrated ability to work autonomously.

Nice to Have:

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics.
  • Experience with microservice architecture.
  • Experience architecting an enterprise-grade data platform.

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 – $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Data Architect (remote in Spain possible)

LanguageWire · Spain, Remote
DevOPS, airflow, sql, azure, qa, postgresql

LanguageWire is hiring a Remote Data Architect (remote in Spain possible)

Do you just love tweaking that one annoying query to perform just a little bit better?

Are you the go-to person who knows how to find and use data in a complex distributed ecosystem with plenty of services and databases?

Are you interested in pushing organizations to use their data more effectively and become more data-driven?

Yes? You should definitely read on!

The role you’ll play

As LanguageWire accelerates our AI developments, we are in the process of re-architecting our data infrastructure by revising our existing pipelines and data warehouse and moving towards a data lake architecture.

As Data Architect, you will be responsible for LanguageWire’s efficient management and use of data.

As a technical leader you will define LanguageWire’s data vision and strategy including aspects like architecture, governance, compliance, etc.

Supported by our Senior Director of Technology, you will collaborate closely with our engineering teams to make this vision a reality by driving the data-related aspects of our roadmap, planning and delivering trainings to our engineering teams and supporting them in all their needs.

In parallel, you will support engineers in their day-to-day data work. Technology selection, data modelling, query optimization, and monitoring and troubleshooting are continuous needs you will help teams with.

This means you will need to balance your focus between long-term strategic initiatives, evangelizing within our engineering teams, and more tactical day-to-day support.

The team you’ll be a part of

We have 8 software teams working across 5 countries and taking care of the continuous development of our platform. We strongly believe in building our own tech so we can deliver the best solutions for our customers. Our teams cover the full technical scope needed to create advanced language solutions with AI, complex web-based tools, workflow engines, large scale data stores and much more. Our technology and linguistic data assets set us apart from the competition and we’re proud of that.

You will report directly to our Senior Director of Technology and work as part of our Technical Enablement team, a cross-functional team of specialists working closely with all our other engineering teams on core technical aspects (architecture, data engineering, QA automation, performance, cybersecurity, etc.). Our Technical Enablement team is key to ensuring that the LanguageWire platform is built, run, and maintained in a scalable, reliable, performant, and secure manner.

If you want to make a difference, make it with us by…

  • Defining LanguageWire’s data architecture framework, standards, and principles, including modeling, metadata, security, reference data, and master data.
  • Driving the strategy execution across the entire tech organization by closely collaborating with other teams.
  • Ensuring the optimal operation of our products and services by being the hands-on expert who supports our teams with their database and data needs.

In one year, you’ll know you were successful if…

  • All of LanguageWire’s data is well modelled and documented.
  • LanguageWire has a powerful core data engine that allows our ML/AI teams to effectively leverage all of our data.
  • You are regarded as the go-to person for all database and data needs.

 

Desired experience and competencies

What does it take to work for LanguageWire?

What you’ll need to bring

You are a hands-on technical expert

  • Expert knowledge of SQL (SQL Server, PostgreSQL, etc.)
  • Good knowledge of cloud services (Azure & GCP) and DevOps engineering
  • Solid data modelling skills, including conceptual, logical and physical models.
  • Experience with Data Warehousing (BigQuery, Snowflake, Databricks, …)
  • Experience with Orchestration technology (Apache Airflow, Azure Data Factory, …)
  • Experience with Data Lakes and Data Warehouses

You are a technical leader

  • You stand out as a trusted leader and respected mentor in your team.
  • Excellent communicator able to create engagement and commitment from teams around you

You are a team player 

  • You love solving complex puzzles with engineers from different areas and different backgrounds 
  • You’re eager to understand how the different areas of the ecosystem connect to create the complete value chain

Fluent English (reading, writing, speaking) 

This will make you stand out

  • Technical Leadership experience (influencing without authority)
  • Experience working within a microservice-based architecture

Your colleagues say you

  • Are approachable and helpful when needed
  • Know all the latest trends in the industry
  • Never settle for second best

Our perks

  • Enjoy flat hierarchies, responsibility and freedom, direct feedback, and room to stand up for your own ideas
  • Internal development opportunities, ongoing support from your People Partner, and an inclusive and fun company culture
  • International company with over 400 employees. Offices in Copenhagen, Aarhus, Stockholm, Varberg, London, Leuven, Lille, Paris, Munich, Hamburg, Zurich, Kiev, Gdansk, Atlanta, Finland and Valencia
  • We offer flexible work options tailored to how you work best. Depending on your team, you may have the option to work full-time from the office as an "Office Bee," part-time from the office as a "Nomad," or full-time from home as a "Homey."
  • We take care of our people and organize many social get-togethers, from Friday bars to summer and Christmas parties. We have fun!
  • 200 great colleagues in the Valencia office belonging to different business departments
  • Excellent location in cool and modern offices in the city center, with a great rooftop terrace and a view over the Town Hall Square
  • Working in an international environment—more than 20 different nationalities
  • A private health insurance
  • A dog friendly atmosphere
  • Big kitchen with access to organic fruit, nuts, biscuits, and coffee.
  • Social area and game room (foosball table, darts, and board games)
  • Bike and car parking

 

About LanguageWire

At LanguageWire, we want to wire the world together with language. Why? Because we want to help people and businesses simplify communication. We are fueled by the most advanced technology (AI), and our goal is to make customers' lives easier by simplifying their communication with any audience across the globe.

 

Our values drive our behavior

We are curious. We are trustworthy. We are caring. We are ambitious.

At LanguageWire, we are curious and intrigued by what we don’t understand. We believe relationships are based on honesty and responsibility, and being trustworthy reinforces an open, humble, and honest way of communicating. We are caring and respect each other personally and professionally. We encourage authentic collaboration, invite feedback and a positive social environment. Our desire to learn, build, and share knowledge is a natural part of our corporate culture.

 

Working at LanguageWire — why we like it: 

“We believe that we can wire the world together with language. It drives us to think big, follow ambitious goals, and get better every day. By embracing and solving the most exciting and impactful challenges, we help people to understand each other better and to bring the world closer together.”

(Waldemar, Senior Director of Product Management, Munich)

Yes, to diversity, equity & inclusion

In LanguageWire, we believe diversity in gender, age, background, and culture is essential for our growth. Therefore, we are committed to creating a culture that incorporates diverse perspectives and expertise in our everyday work.

LanguageWire’s recruitment process is designed to be transparent and fair for all candidates. We encourage candidates of all backgrounds to apply, and we ensure that candidates are provided with an equal opportunity to demonstrate their competencies and skills.

Want to know more?

We can’t wait to meet you! So, why wait 'til tomorrow? Apply today!

If you want to know more about LanguageWire, we encourage you to visit our website!

See more jobs at LanguageWire

Apply for this job

+30d

Sr. Data Engineer, Kafka

DevOPS, agile, terraform, airflow, postgres, sql, Design, api, c++, docker, kubernetes, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer, Kafka

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake (a minimal consumer sketch follows this list)
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
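
For illustration only: a minimal event-consumer sketch using the confluent-kafka Python client. The broker address, topic, and consumer group are placeholder assumptions, not Hims & Hers infrastructure.

```python
# Minimal illustrative consumer using the confluent-kafka client.
# Broker address, topic, and group id are placeholders.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "order-events-ingest",       # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["order-events"])          # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # In a real pipeline this is where the event would be validated
        # and written to the data lake or warehouse.
        print(event.get("order_id"))
finally:
    consumer.close()
```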

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience with Databricks platform
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience building data models using dbt
  • Experience with JavaScript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 – $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Data Engineer GCP | Summer Job Dating

Devoteam · Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Data Engineer GCP| Summer Job Dating

Job Description

Within the "Data Platform" department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of data processing and ELT workflows using Airflow, DBT, and BigQuery (a minimal query sketch follows this list).
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with the engineering teams to integrate the data pipelines into existing applications and services.
  • Keep up to date with new technologies and best practices in data processing and analytics.
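
For illustration only: a minimal sketch of running a parameterized aggregation with the BigQuery Python client, the kind of query an ELT step might produce. The project, dataset, and table names are placeholder assumptions.

```python
# Illustrative use of the BigQuery Python client to run a parameterized
# aggregation query. Project, dataset, and table names are placeholders.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

query = """
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `example-project.analytics.orders`
    WHERE order_date >= @start_date
    GROUP BY order_date
    ORDER BY order_date
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", datetime.date(2024, 1, 1)),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.order_date, row.orders, row.revenue)
```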

 

Qualifications

Skills

What will help you join the team?

A master's-level degree (Bac+5) from an engineering school or university equivalent, with a specialization in computer science.

  • At least 4 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
  • Advanced SQL skills for data processing and optimization.
  • Google Professional Data Engineer certification is a plus.
  • Very good written and oral communication (high-quality deliverables and reporting).

So, if you want to grow, learn, and share, join us!

See more jobs at Devoteam

Apply for this job

+30d

Data Engineer

Devoteam · Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Data Engineer

Job Description

Within the "Data Platform" department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of data processing and ELT workflows using Airflow, DBT, and BigQuery.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with the engineering teams to integrate the data pipelines into existing applications and services.
  • Keep up to date with new technologies and best practices in data processing and analytics.

     

Qualifications

  • A master's-level degree (Bac+5) from an engineering school or university equivalent, with a specialization in computer science.
  • At least 4 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
  • Advanced SQL skills for data processing and optimization.
  • Google Professional Data Engineer certification is a plus.
  • Very good written and oral communication (high-quality deliverables and reporting).

See more jobs at Devoteam

Apply for this job

      +30d

      Senior Data Scientist - Product Analytics

      Handshake · San Francisco, CA (Hybrid)
      airflow, sql, Design, c++, python

      Handshake is hiring a Remote Senior Data Scientist - Product Analytics

      Everyone is welcome at Handshake. We know diverse teams build better products and we are committed to creating an inclusive culture built on a foundation of respect for all individuals. We strongly encourage candidates from non-traditional backgrounds, historically marginalized or underrepresented groups to apply.

      Your impact

      Handshake is actively seeking a Senior Data Scientist - Product Analytics specializing in Product, Growth, and/or Revenue analytics to support our Research and Development team, spearheading the measurement and analysis of product success. This pivotal role will play a crucial part in shaping Handshake’s overarching strategy and product roadmap. This position requires a strong blend of technical expertise, strategic thinking, and communication skills to guide our products' evolution and deliver maximum value to our three-sided marketplace, consisting of students, employers, and universities. This role reports to the Analytics Manager, embedded within the broader Data team, and you will work alongside a talented team of data scientists and analytics engineers. 

      Your role

      • Work side-by-side and partner strategically with Research and Development leadership, becoming a subject-matter expert in a given domain

      • Contribute to a data-driven product strategy and own the corresponding analytics prioritization and roadmapping to execute on and measure the effectiveness of this strategy

      • Own domain goal-setting, tracking, and reporting on progress and impact

      • Lead experimentation within a given vertical, working to ensure that tracking is instrumented proactively and scaling our ability to efficiently and reliably run experiments

      • Collaborate with Analytics Engineering and Relevance teams to define metrics and build out foundational data models

      • Contribute to the vision and strategy for product analytics and the wider data organization at Handshake

      • Lead by example to build a culture of accountability and rigor to substantiate proven business impact

      • Advocate for the millions of students and employer users on Handshake by communicating data insights and recommendations to marketing, product and leadership teams

      Your experience

      • Proven experience in using data to drive product development and decision-making and direct experience working with engineers and wider product teams

      • Strong communication and presentation skills, including the ability to translate data-related concepts and results for leadership and less technical stakeholders

      • Experience conducting exploratory analysis projects related to cohorting, time series analysis, and funnel analysis/optimization. Beyond just reporting metrics, we value the ability to dig in and explain the “why” behind trends

      • A clear understanding and demonstrable experience in experimentation design and process development, including comfort with A/B testing (a minimal readout sketch follows this list)

      • Expertise in SQL is essential. Expertise with Jupyter-style notebooks and Python statistics packages is a huge plus.

      • Experience creating dashboards and visualizations in tools such as Hex, Looker, Mode, etc.

      • Ability to start, own, and drive projects to completion with minimal guidance
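
      For illustration only: a minimal A/B test readout comparing two conversion rates with a two-proportion z-test via statsmodels. The counts are made-up example data, not Handshake metrics.

```python
# Illustrative A/B test readout: compare conversion rates between control and
# treatment with a two-proportion z-test. The counts are made-up example data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 530]      # converted users in control, treatment
exposures = [10_000, 10_000]  # users exposed in control, treatment

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"control rate:   {conversions[0] / exposures[0]:.3%}")
print(f"treatment rate: {conversions[1] / exposures[1]:.3%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```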

      Bonus areas of expertise

      • Data modeling (we use dbt!)

      • Some familiarity with LookML

      Our Analytics Stack 

      • Fivetran, Segment, Bigquery, dbt, Airflow, Looker, Hex

      Compensation range

      • $180,000-$200,000

      For cash compensation, we set standard ranges for all U.S.-based roles based on function, level, and geographic location, benchmarked against similar stage growth companies. In order to be compliant with local legislation, as well as to provide greater transparency to candidates, we share salary ranges on all job postings regardless of desired hiring location. Final offer amounts are determined by multiple factors, including geographic location as well as candidate experience and expertise, and may vary from the amounts listed above.

      About us

      Handshake is the #1 place to launch a career with no connections, experience, or luck required. The platform connects up-and-coming talent with 750,000+ employers - from Fortune 500 companies like Google, Nike, and Target to thousands of public school districts, healthcare systems, and nonprofits. In 2022 we announced our $200M Series F funding round. This Series F fundraise and valuation of $3.5B will fuel Handshake’s next phase of growth and propel our mission to help more people start, restart, and jumpstart their careers.

      When it comes to our workforce strategy, we’ve thought deeply about how work-life should look here at Handshake. With our Hub-Based Remote Working strategy, employees can enjoy the flexibility of remote work, whilst ensuring collaboration and team experiences in a shared space remains possible. Handshake is headquartered in San Francisco with offices in Denver, New York, London, and Berlin and teammates working globally. 

      Check out our careers site to find a hub near you!

      What we offer

      At Handshake, we'll give you the tools to feel healthy, happy and secure.

      Benefits below apply to employees in full-time positions.

      • Equity and ownership in a fast-growing company.
      • 16 weeks of paid parental leave for birth giving parents & 10 weeks of paid parental leave for non-birth giving parents.
      • Comprehensive medical, dental, and vision policies including LGBTQ+ coverage. We also provide resources for Mental Health Assistance, Employee Assistance Programs and counseling support.
      • Handshake offers a $500/£360 home office stipend for you to spend during your first 3 months to create a productive and comfortable workspace at home.
      • Generous learning & development opportunities and an annual $2,000/£1,500/€1,850 stipend for you to grow your skills and career.
      • Financial coaching through Origin to help you through your financial journey.
      • Monthly internet stipend and a brand new MacBook to allow you to do your best work.
      • Monthly commuter stipend for you to expense your travel to the office (for office-based employees).
      • Free lunch provided twice a week across all offices.
      • Referral bonus to reward you when you bring great talent to Handshake.

      (US-specific benefits, in addition to the first section)

      • 401k Match: Handshake offers a dollar-for-dollar match on 1% of deferred salary, up to a maximum of $1,200 per year.
      • All full-time US-based Handshakers are eligible for our flexible time off policy to get out and see the world. In addition, we offer 8 standardized holidays, and 2 additional days of flexible holiday time off. Lastly, we have a Winter #ShakeBreak, a one-week period of Collective Time Off.
      • Lactation support: Handshake partners with Milk Stork to provide comprehensive, 100% employer-sponsored lactation support to traveling parents and guardians.

      (UK-specific benefits, in addition to the first section) 

      • Pension Scheme: Handshake will provide you with a workplace pension, where you will make contributions based on 5% of your salary. Handshake will pay the equivalent of 3% towards your pension plan, subject to qualifying earnings limits.
      • Up to 25 days of vacation to encourage people to reset, recharge, and refresh, in addition to 8 bank holidays throughout the year.
      • Regular offsites each year to bring the team together + opportunity to travel to our HQ in San Francisco.
      • Discounts across various high street retailers, cinemas and other social activities exclusively for Handshake UK employees.

      (Germany-specific benefits, in addition to the first section)

      • 25 days of annual leave + 5 days of a winter #ShakeBreak, a one-week period of Collective Time Off across the company.
      • Regular offsites each year to bring the team together + opportunity to travel to our HQ in San Francisco once a year.
      • Urban sports club membership offering access to a diverse network of fitness and wellness facilities.
      • Discounts across various high street retailers, cinemas and other social activities exclusively for Handshake Germany employees.

      For roles based in Romania: Please ask your recruiter about region specific benefits.

      Looking for more? Explore our mission, values and comprehensive US benefits at joinhandshake.com/careers.

      Handshake is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or reasonable accommodation, please reach out to us at people-hr@joinhandshake.com.

      See more jobs at Handshake

      Apply for this job

      +30d

      Senior Software Engineer, Marketing Enablement

      Instacart · United States - Remote
      Master’s Degree, nosql, airflow, Design, ruby, postgresql, python

      Instacart is hiring a Remote Senior Software Engineer, Marketing Enablement

      We're transforming the grocery industry

      At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

      Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

      Instacart is a Flex First team

      There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

      Overview

      About the Role 


      As a Senior Software Engineer, you will play a pivotal role in transforming the grocery industry by enhancing our platform's search visibility, improving our outreach, and ensuring that millions of people can access the food they love effortlessly. This position involves tackling complex problems with innovative solutions that enhance user experience and drive significant business results. Your work will directly influence the efficiency and effectiveness of our shopping platform, making everyday essentials accessible to a broader audience.

       

      About the Team 

      As part of the small but mighty Marketing Technology team within our broader marketing organization, SEO at Instacart is a dynamic team that plays a crucial role and has a huge impact on our business. We operate with a "Flex First" approach. This means you'll have the flexibility to work from your preferred environment, with regular opportunities to connect in person. Our team is pivotal in driving traffic and user engagement through strategic initiatives across millions of web pages, influencing customer journeys at every stage. By collaborating with various internal teams and adapting to evolving business needs, you will contribute to a crucial aspect of our overarching business strategy and have the opportunity to work across many areas of our product in a manner that directly impacts customer experience. 

       

      About the Job 

      In your role as Senior Software Engineer, you will:

      • Design and develop: Build high-quality product features focused on enhancing search engine optimization and the user experience.
      • Collaborate effectively: Work closely with product managers, designers, and data scientists to define and execute project requirements.
      • Optimize and scale: Ensure system designs are scalable and robust to support our growing user base and data needs.
      • Innovate and improve: Regularly propose and lead new initiatives that enhance both organizational efficiency and product excellence.
      • Data handling: Develop efficient data pipelines and data models to ensure the collection and delivery of quality data.

       

      About You

      Minimum Qualifications

      • Bachelor's or Master’s degree in Computer Science, Computer Engineering, or a related field.
      • 5+ years of relevant technology experience, particularly in distributed systems.
      • Demonstrable skill and experience with Python, Ruby, React, PostgreSQL, or Go.
      • Proven ability to manage projects end-to-end with significant autonomy.
      • Strong collaborator with a growth mindset, open to feedback and learning.
      • Experience working in large-scale marketplaces, e-commerce, or companies with high-traffic apps.

       

      Preferred Qualifications

      • Experience in the startup environment, particularly in fast-paced settings.
      • Advanced knowledge in data engineering, including data pipelines and NoSQL databases.
      • Familiarity with big data technologies such as Apache Spark, Airflow, and Hadoop.
      • Prior experience working closely with marketing teams and direct involvement in SEO.

      Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

      Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

      For US-based candidates, the base pay ranges for a successful candidate are listed below.

      • CA, NY, CT, NJ: $192,000 – $245,000 USD
      • WA: $184,000 – $235,000 USD
      • OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI: $176,000 – $225,000 USD
      • All other states: $159,000 – $203,000 USD

      See more jobs at Instacart

      Apply for this job

      +30d

      Senior/Staff Data Engineer, FinPlatform

      Blockchain.com · London, United Kingdom - Hybrid
      Commercial experience, kotlin, scala, airflow, sql, Design, git, java, kubernetes, python

      Blockchain.com is hiring a Remote Senior/Staff Data Engineer, FinPlatform

      Blockchain.com is the world's leading software platform for digital assets. Offering the largest production blockchain platform in the world, we share the passion to code, create, and ultimately build an open, accessible, and fair financial future, one piece of software at a time.

      We are looking for a talented Senior or Staff Data Engineer to join our FinPlatform team and work from our office in London. The group is part of a larger Data Science team, informing all product decisions and creating models and infrastructure to improve efficiency, growth, and security. To do this, we use data from various sources and of varying quality. Our automated ETL processes serve both the broader company (in the form of clean, simplified tables of aggregated statistics and dashboards) and the Data Science team itself (cleaning and processing data for analysis and modeling purposes, ensuring reproducibility).

      We are looking for someone with experience designing, building, and maintaining scalable and robust data infrastructure that makes data easily accessible to the Data Science team and the broader audience via different tools. As a data engineer, you will be involved in all aspects of the data infrastructure, from understanding current bottlenecks and requirements to ensuring the quality and availability of data. You will collaborate closely with data scientists, platform, and front-end engineers, defining requirements and designing new data processes for both streaming and batch processing of data, as well as maintaining and improving existing ones. We are looking for someone passionate about high-quality data who understands its impact in solving real-life problems. Being proactive in identifying issues, digging deep into their source, and developing solutions is at the heart of this role.

      SENIOR

      What You Will Need

      • Bachelor’s degree in Computer Science, Applied Mathematics, Engineering or any other technology-related field
      • Previous experience working in a data engineering role
      • Fluency in Python
      • Experience in both batch processing and streaming data pipelines
      • Experience working with Google Cloud Platform
      • In-depth knowledge of SQL and NoSQL databases
      • In-depth knowledge of coding principles, including object-oriented programming
      • Experience with Git

      Nice to have

      • Experience with code optimisation, parallel processing
      • Experience with Airflow, Google Composer or Kubernetes Engine
      • Experiences with other programming languages, like Java, Kotlin or Scala
      • Experience with Spark or other Big Data frameworks
      • Experience with distributed and real-time technologies (Kafka, etc..)
      • 5-8 years commercial experience in a related role

      STAFF

      What You Will Do

      • Maintain and evolve the current data infrastructure and look to evolve it for new requirements
      • Maintain and extend our core data infrastructure and existing data pipelines and ETLs
      • Provide best practices and frameworks for data testing and validation and ensure reliability and accuracy of data
      • Design, develop and implement data visualization and analytics tools and data products.
      • Play a critical role in helping to set up directions and goals for the team
      • Build and ship high-quality code, provide thorough code reviews, testing, monitoring and proactive changes to improve stability
      • You are the one who implements the hardest part of the system or feature.

      What You Will Need

      • Bachelor’s degree in Computer Science, Applied Mathematics, Engineering or any other technology-related field
      • Previous experience working in a data engineering role
      • Fluency in Python
      • Experience in both batch processing and streaming data pipelines
      • Experience working with Google Cloud Platform
      • In-depth knowledge of SQL and NoSQL databases
      • In-depth knowledge of coding principles, including object-oriented programming
      • Experience with Git
      • Ability to solve technical problems that few others can do
      • Ability to lead/coordinate rollout and releases of major initiatives

      Nice to have

      • Experience with code optimisation, parallel processing
      • Experience with Airflow, Google Composer or Kubernetes Engine
• Experience with other programming languages, such as Java, Kotlin or Scala
      • Experience with Spark or other Big Data frameworks
• Experience with distributed and real-time technologies (Kafka, etc.)
      • 8+ years commercial experience in a related role

      COMPENSATION & PERKS

      • Full-time salary based on experience and meaningful equity in an industry-leading company
      • Hybrid model working from home & our office in Central London (SoHo) 
      • Work from Anywhere Policy - up to 20 days to work remotely 
      • ClassPass 
      • Budgets for learning & professional development 
      • Unlimited vacation policy; work hard and take time when you need it
      • Apple equipment
      • The opportunity to be a key player and build your career at a rapidly expanding, global technology company in an emerging field
      • Flexible work culture  

      Blockchain is committed to diversity and inclusion in the workplace and is proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, religion, color, national origin, gender, gender expression, sex, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, and apprenticeship. Blockchain makes hiring decisions based solely on qualifications, merit, and business need at the time.

      See more jobs at Blockchain.com

      Apply for this job

      +30d

Machine Learning Engineer (M/F)

      DevoteamNantes, France, Remote
      MLDevOPSOpenAILambdaagileterraformscalaairflowansiblescrumgitc++dockerkubernetesjenkinspythonAWS

Devoteam is hiring a Remote Machine Learning Engineer (M/F)

Job Description

Responsibilities

• Support the end-to-end machine learning development process to design, build, and manage reproducible, testable, and scalable software.

• Work on setting up and using ML/AI/MLOps platforms (such as AWS SageMaker, Kubeflow, AWS Bedrock, AWS Titan)

• Bring our clients best practices in organization, development, automation, monitoring, and security.

• Explain and apply best practices for automation, testing, versioning, reproducibility, and monitoring of the deployed AI solution (see the sketch after this list).

• Mentor and supervise junior consultants, e.g. through peer code reviews and application of best practices.

• Support our sales team with proposal writing and pre-sales meetings.

• Contribute to the development of our internal community (REX sessions, workshops, articles, hackerspace).

• Take part in recruiting our future talent.
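
As a rough, non-authoritative sketch of the versioning and reproducibility practices mentioned above, here is what experiment tracking with MLflow (one of the tools listed in the requirements) can look like; the experiment name, parameters, and local tracking URI are illustrative only:

```python
# Hypothetical MLflow tracking sketch: log parameters, a metric, and a
# versioned model artifact for a toy training run.
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("file:./mlruns")   # local store for this example
mlflow.set_experiment("demo-regression")   # placeholder experiment name

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestRegressor(**params, random_state=42).fit(X_train, y_train)
    mlflow.log_params(params)
    mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")  # versioned artifact
```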

Qualifications

Technical skills

REQUIRED:

• Fluency in Python, PySpark or Scala Spark, and libraries such as Scikit-learn, MLlib, TensorFlow, Keras, PyTorch, LightGBM, XGBoost, and Spark (to name a few)

• Able to implement containerisation architectures (Docker / Containerd) and serverless and microservices environments using Lambda, ECS, and Kubernetes

• Fully operational in setting up DevOps and Infrastructure-as-Code environments and in working with MLOps tools

• Most of the following tools, or their cloud equivalents, should be part of your day-to-day: Git, GitLab CI, Jenkins, Ansible, Terraform, Kubernetes, MLflow, Airflow

• AWS cloud (AWS Bedrock, AWS Titan, OpenAI, AWS SageMaker / Kubeflow)

• Agile / Scrum methodology

• Feature Store (any provider)

NICE TO HAVE:

• Apache Airflow

• AWS SageMaker / Kubeflow

• Apache Spark

• Agile / Scrum methodology

Growing within the community

Growing within the Data Tribe means playing an active role in creating a stimulating environment in which consultants constantly push one another to improve, in technical skills as much as in soft skills. But that is not all: there are also regular events and dedicated Slack channels that let you call on the wider communities (data, AI/ML, DevOps, security, ...) as a whole!

Alongside this, you will have the opportunity to be a driving force in the development of our various internal communities (REX sessions, workshops, articles, podcasts, ...).

Compensation

The fixed salary offered for this position depends on your experience and falls within a range of 46.5k to 52.5k.

      See more jobs at Devoteam

      Apply for this job

      +30d

      Software Engineer (with overlap into ML Engineer) for Artificial Intelligence team (Engagement)

      BloomreachSlovakia, Czechia, Remote
      MLDevOPSredisagileremote-firstjiraairflowDesignmongodbapigitpythonAWS

      Bloomreach is hiring a Remote Software Engineer (with overlap into ML Engineer) for Artificial Intelligence team (Engagement)

      Bloomreach is the world’s #1 Commerce Experience Cloud, empowering brands to deliver customer journeys so personalized, they feel like magic. It offers a suite of products that drive true personalization and digital commerce growth, including:

      • Discovery, offering AI-driven search and merchandising
      • Content, offering a headless CMS
      • Engagement, offering a leading CDP and marketing automation solutions

      Together, these solutions combine the power of unified customer and product data with the speed and scale of AI optimization, enabling revenue-driving digital commerce experiences that convert on any channel and every journey. Bloomreach serves over 850 global brands including Albertsons, Bosch, Puma, FC Bayern München, and Marks & Spencer. Bloomreach recently raised $175 million in a Series F funding round, bringing its total valuation to $2.2 billion. The investment was led by Goldman Sachs Asset Management with participation from Bain Capital Ventures and Sixth Street Growth. For more information, visit Bloomreach.com.

       

Join our Artificial Intelligence team as a Software Engineer and help us revolutionize marketing with ML-powered solutions! You'll work on cutting-edge technologies, impacting millions of users, and contributing to a product that truly makes a difference. The salary range starts at 3,500 € per month, along with restricted stock units and other benefits. Working in one of our Central European offices or from home on a full-time basis, you'll become a core part of the Engineering Team.

      What challenge awaits you?

      You'll face the exciting challenge of building and maintaining ML-powered features in a production environment, ensuring they are reliable, scalable, and deliver real value to our users. You'll work alongside a team to overcome the unique challenges of building and running ML models in a SaaS environment, including managing data complexity, optimizing for performance, and ensuring model robustness.

      You will cooperate with your teammates, Data Science engineers, and Engineering and Product leaders to speed up ML-powered features' delivery (from ideation to production) by applying principles of continuous discovery, integration, testing, and other techniques from Agile, DevOps, and MLOps mindsets. This will involve building efficient workflows, automating processes, and fostering a culture of collaboration and innovation.

      Your job will be to:

      1. Design & Deliver new features
      2. Ensure quality and performance of developed solution
      3. Support and Maintain owned components

      a. Design & Deliver new features

      • Translate business requirements for ML-powered features into technical specifications and design documents.
      • Collaborate with data scientists to ensure new ML features' technical feasibility and scalability.
• Define and develop back-office API endpoints (to configure the features) as well as the high-performance serving endpoints (see the sketch after this list).
      • Develop and implement ML models, algorithms, and data pipelines to support new features.
      • Deploy and monitor new features in production, ensuring seamless integration with existing systems.
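
To make the split between back-office configuration endpoints and serving endpoints concrete, here is a deliberately simplified sketch; it uses FastAPI purely for illustration (the listing does not name a specific web framework), and the routes, config fields, and the recommend_items stand-in are hypothetical:

```python
# Hypothetical sketch: one endpoint to configure an ML-powered feature and one
# low-latency endpoint to serve it. All names and logic are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class FeatureConfig(BaseModel):
    enabled: bool = True
    top_k: int = 10

config = FeatureConfig()

def recommend_items(customer_id: str, k: int) -> list[str]:
    # Stand-in for a real model lookup (e.g. a pre-trained recommender).
    return [f"item-{i}" for i in range(k)]

@app.put("/admin/recommendations/config")        # back-office configuration API
def update_config(new_config: FeatureConfig) -> FeatureConfig:
    global config
    config = new_config
    return config

@app.get("/serve/recommendations/{customer_id}")  # high-performance serving API
def recommend(customer_id: str) -> dict:
    if not config.enabled:
        return {"customer_id": customer_id, "items": []}
    return {"customer_id": customer_id,
            "items": recommend_items(customer_id, config.top_k)}
```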

      b. Ensure quality and performance of developed solution

• Perform rigorous testing and quality assurance of ML models and code, including unit tests, integration tests, and A/B testing (an illustrative test follows this list).
      • Implement monitoring systems and dashboards to track the performance of ML models in production, identify potential issues, and optimize for accuracy and efficiency.
      • Contribute to developing and implementing DevOps and MLOps best practices within the team.
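
As a small, hedged example of the kind of automated quality gate described above (not Bloomreach's actual test suite), a unit test can assert that a candidate model clears a minimum offline metric before it ships; the dataset, model, and threshold below are placeholders:

```python
# Hypothetical model quality gate, runnable with pytest.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def train_candidate_model(X, y):
    # Stand-in for the real training pipeline.
    return GradientBoostingClassifier(random_state=0).fit(X, y)

def test_candidate_model_clears_auc_floor():
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = train_candidate_model(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    assert auc >= 0.80, f"candidate AUC {auc:.3f} is below the 0.80 release floor"
```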

      c. Support and Maintain owned components

      • Maintain end-to-end features, encompassing back-office APIs, models, definitions, and high-performance serving APIs.
      • Provide ongoing support and maintenance for existing ML-powered features, including troubleshooting issues, fixing bugs, and implementing enhancements.
      • Support our client-facing colleagues in the investigation of possible issues (L3 support).
      • Document code, design decisions, and operational procedures to facilitate ongoing maintenance and knowledge sharing.

      What technologies and tools does the AI team work with?

      • Programming languages - Python 
      • Google Cloud Platform services - GKE, BigQuery, BigTable, GCS, Dataproc, VertexAI 
      • Data Storage and Processing - MongoDB, Redis, Spark, TensorFlow 
      • Software and Tools - Grafana, Sentry, Gitlab, Jira, Productboard, PagerDuty 

      The owned area encompasses various domains such as Recommendations, Predictions, Contextual bandits, MLOps. Therefore, having experience in these areas would be beneficial. The team also works with large amounts of data and utilizes platforms and algorithms for model training and data processing & ML pipelines. Experience in these areas is highly valued.

      Your success story will be:

      • In 30 Days: Successfully onboard and contribute to ongoing tasks, demonstrating understanding of the codebase and team processes.
      • In 90 Days: Contribute to design discussions and independently deliver high-quality code for assigned features. Participate in investigating and resolving production issues.
      • In 180 Days: Independently manage larger tasks, contribute to team improvements, and confidently handle L3 support, investigating and resolving production issues.

      You have the following experience and qualities:

1. Professional: Proven experience in Python engineering, system design, and maintenance in the area of AI/ML-powered features.
2. Personal: Demonstrates strong initiative, ability to work within a team, communication skills, and a commitment to continuous learning and improvement.

      Professional experience

      • Proven experience in Python engineering, with a strong focus on designing and maintaining AI/ML-powered features in production environments.
      • Experience with cloud platforms (e.g., GCP, AWS) and relevant services for ML development and deployment.
      • Solid understanding of software architecture principles, particularly in the context of building and maintaining scalable and reliable APIs and microservices.
      • Experience with version control systems (e.g., Git) and CI/CD pipelines for efficient development and deployment.
      • Familiarity with common ML frameworks, libraries, and tools (e.g., TensorFlow, PyTorch, Scikit-learn, etc.) and with ML pipelines/orchestration frameworks (Kubeflow, Airflow, Prefect,... )

      Personal qualities

      • Demonstrates strong initiative and a proactive approach to problem-solving.
      • Excellent communication and collaboration skills, with the ability to work effectively within a team.
      • A genuine passion for learning new technologies and keeping up-to-date with the latest advancements in AI/ML.
      • A commitment to delivering high-quality work and a dedication to continuous improvement.

      Excited? Join us and transform the future of commerce experiences.

      More things you'll like about Bloomreach:

      Culture:

      • A great deal of freedom and trust. At Bloomreach we don’t clock in and out, and we have neither corporate rules nor long approval processes. This freedom goes hand in hand with responsibility. We are interested in results from day one. 

• We have defined our 5 values and the 10 underlying key behaviors that we strongly believe in. We can only succeed if everyone lives these behaviors day to day. We've embedded them in our processes like recruitment, onboarding, feedback, personal development, performance review and internal communication. 

      • We believe in flexible working hours to accommodate your working style.

      • We work remote-first with several Bloomreach Hubs available across three continents.

      • We organize company events to experience the global spirit of the company and get excited about what's ahead.

      • We encourage and support our employees to engage in volunteering activities - every Bloomreacher can take 5 paid days off to volunteer*.
• The Bloomreach Glassdoor page elaborates on our stellar 4.6/5 rating. The Bloomreach Comparably page Culture score is even higher, at 4.9/5.

      Personal Development:

      • We have a People Development Program -- participating in personal development workshops on various topics run by experts from inside the company. We are continuously developing & updating competency maps for select functions.

• Our resident communication coach Ivo Večeřa is available to help navigate work-related communications & decision-making challenges.*
      • Our managers are strongly encouraged to participate in the Leader Development Program to develop in the areas we consider essential for any leader. The program includes regular comprehensive feedback, consultations with a coach and follow-up check-ins.

      • Bloomreachers utilize the $1,500 professional education budget on an annual basis to purchase education products (books, courses, certifications, etc.)*

      Well-being:

      • The Employee Assistance Program -- with counselors -- is available for non-work-related challenges.*

      • Subscription to Calm - sleep and meditation app.*

      • We organize ‘DisConnect’ days where Bloomreachers globally enjoy one additional day off each quarter, allowing us to unwind together and focus on activities away from the screen with our loved ones.

      • We facilitate sports, yoga, and meditation opportunities for each other.

      • Extended parental leave up to 26 calendar weeks for Primary Caregivers.*

      Compensation:

      • Restricted Stock Units or Stock Options are granted depending on a team member’s role, seniority, and location.*

      • Everyone gets to participate in the company's success through the company performance bonus.*

      • We offer an employee referral bonus of up to $3,000 paid out immediately after the new hire starts.

      • We reward & celebrate work anniversaries -- Bloomversaries!*

      (*Subject to employment type. Interns are exempt from marked benefits, usually for the first 6 months.)

      Excited? Join us and transform the future of commerce experiences!

      If this position doesn't suit you, but you know someone who might be a great fit, share it - we will be very grateful!


      Any unsolicited resumes/candidate profiles submitted through our website or to personal email accounts of employees of Bloomreach are considered property of Bloomreach and are not subject to payment of agency fees.

       #LI-Remote

      See more jobs at Bloomreach

      Apply for this job

      +30d

      Data Engineer - AWS

Tiger AnalyticsJersey City, New Jersey, United States, Remote
      S3LambdaairflowsqlDesignAWS

      Tiger Analytics is hiring a Remote Data Engineer - AWS

      Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership has been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

      As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Apache Airflow.

      Key Responsibilities:

      • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
      • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
• Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring (see the sketch after this list).
      • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
      • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
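
As a hedged illustration of the orchestration work above (not Tiger Analytics' actual pipeline), a minimal Airflow DAG wiring extract, transform, and load tasks on a daily schedule might look like this; the DAG id, schedule, and task bodies are placeholders:

```python
# Hypothetical daily ETL DAG; task bodies only print what a real task would do.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull source data, e.g. land raw files in Amazon S3")

def transform(**context):
    print("clean and reshape the data, e.g. with AWS Glue or Spark on Databricks")

def load(**context):
    print("load curated tables, e.g. into Amazon Redshift")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # "schedule_interval" on older Airflow versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```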

Requirements:

• 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
      • Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
      • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
      • Strong experience with Databricks and Apache Spark for data processing and analytics.
      • Hands-on experience with Apache Airflow for orchestrating and scheduling data pipelines.
      • Solid understanding of data modeling, database design principles, and SQL.
      • Experience with version control systems (e.g., Git) and CI/CD pipelines.
      • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
      • Strong problem-solving skills and attention to detail.

      This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

      See more jobs at Tiger Analytics

      Apply for this job

      +30d

      Senior Data Engineer

      AltUS Remote
      airflowpostgresDesignc++pythonAWS

      Alt is hiring a Remote Senior Data Engineer

      At Alt, we’re on a mission to unlock the value of alternative assets, and looking for talented people who share our vision. Our platform enables users to exchange, invest, value, securely store, and authenticate their collectible cards. And we envision a world where anything is an investable asset. 

      To date, we’ve raised over $100 million from thought leaders at the intersection of culture, community, and capital. Some of our investors include Alexis Ohanian’s fund Seven Seven Six, the founders of Stripe, Coinbase co-founder Fred Ehrsam, BlackRock co-founder Sue Wagner, the co-founders of AngelList, First Round Capital, and BoxGroup. We’re also backed by professional athletes including Tom Brady, Candace Parker, Giannis Antetokounmpo, Alex Morgan, Kevin Durant, and Marlon Humphrey.

      Alt is a dedicated equal opportunity employer committed to creating a diverse workforce. We celebrate our differences and strive to create an inclusive environment for all. We are focused on fostering a culture of empowerment which starts with providing our employees with the resources needed to reach their full potential.

      What we are looking for:

      We are seeking a Senior Data Engineer who is eager to make a significant impact. In this role, you'll get the opportunity to leverage your technical expertise and problem-solving skills to solve some of the hardest data problems in the hobby. Your primary focus in this role will be on enhancing and optimizing our pricing engine to support strategic business goals. Our ideal candidate is passionate about trading cards, has a strong sense of ownership, and enjoys challenges. At Alt, data is core to everything we do and is a differentiator for our customers. The team’s scope covers data pipeline development, search infrastructure, web scraping, detection algorithms, internal toolings and data quality. We give our engineers a lot of individual responsibility and autonomy, so your ability to make good trade-offs and exercise good judgment is essential.

      The impact you will make:

      • Partner with engineers, and cross-functional stakeholders to contribute to all phases of algorithm development including: ideation, prototyping, design, and production
      • Build, iterate, productionize, and own Alt's valuation models
      • Leverage background in pricing strategies and models to develop innovative pricing solutions
      • Design and implement scalable, reliable, and maintainable machine learning systems
      • Partner with product to understand customer requirements and prioritize model features

      What you bring to the table:

      • Experience: 5+ years of experience in software development, with a proven track record of developing and deploying models in production. Experience with pricing models preferred.
      • Technical Skills: Proficiency in programming languages and tools such as Python, AWS, Postgres, Airflow, Datadog, and JavaScript.
      • Problem-Solving: A knack for solving tough problems and a drive to take ownership of your work.
      • Communication: Effective communication skills with the ability to ship solutions quickly.
      • Product Focus: Excellent product instincts, with a user-first approach when designing technical solutions.
      • Team Player: A collaborative mindset that helps elevate the performance of those around you.
      • Industry Knowledge: Knowledge of the sports/trading card industry is a plus.

      What you will get from us:

      • Ground floor opportunity as an early member of the Alt team; you’ll directly shape the direction of our company. The opportunities for growth are truly limitless.
      • An inclusive company culture that is being built intentionally to foster an environment that supports and engages talent in their current and future endeavors.
      • $100/month work-from-home stipend
      • $200/month wellness stipend
      • WeWork office Stipend
      • 401(k) retirement benefits
      • Flexible vacation policy
      • Generous paid parental leave
      • Competitive healthcare benefits, including HSA, for you and your dependent(s)

      Alt's compensation package includes a competitive base salary benchmarked against real-time market data, as well as equity for all full-time roles. We want all full-time employees to be invested in Alt and to be able to take advantage of that investment, so our equity grants include a 10-year exercise window. The base salary range for this role is: $194,000 - $210,000. Offers may vary from the amount listed based on geography, candidate experience and expertise, and other factors.

      See more jobs at Alt

      Apply for this job

      +30d

      Cloud Platform Architect

      SignifydUnited States (Remote);
      MLDevOPSSQSLambdaagileBachelor's degreeBachelor degreeterraformairflowDesigndockerkubernetesAWS

      Signifyd is hiring a Remote Cloud Platform Architect

      Who Are You

      We seek a highly skilled and experienced Cloud Platform Architect to join our dynamic and growing Cloud Platform team. As a Cloud Platform Architect, you will play a crucial role in strengthening and expanding the core of our cloud infrastructure. We want you to lead the way in scaling our cloud infrastructure for our customers, engineers, and data science teams. You will work alongside talented cloud platform and software engineers and architects to envision how all cloud infrastructure will evolve to support the expansion of Signifyd’s core products. The ideal candidate must: 

       

      • Effectively communicate complex problems by tailoring the message to the audience and presenting it clearly and concisely. 
      • Balance multiple perspectives, disagree, and commit when necessary to move key company decisions and critical priorities forward.
• Understand the inner workings of Cloud Service Providers (CSPs) such as AWS, GCP, and Azure, along with the core networking and security concepts most relevant in this space.
• Work independently in a dynamic environment and take a proactive approach to problem-solving.
      • Be committed to achieving positive business outcomes via automation and enablement efforts, reducing costs, and improving operational excellence.
      • Be an example for fellow engineers by showcasing customer empathy, creativity, curiosity, and tenacity.
      • Have strong analytical and problem-solving skills, with the ability to innovate and adapt to fast-paced environments.
      • Design and build clear, understandable, simple, clean, and scalable solutions.
      • Champion an Agile and ‘DevOps’ mindset across the organization.

      What You'll Do

      • Modernize Signifyd’s Cloud Platform to scale for security, cost, operational excellence, reliability, and performance, working closely with Engineering and Data Science teams across Signifyd’s R&D group.
      • Create and deliver a technology roadmap focused on advancing our cloud performance capabilities, supporting our real-time fraud protection and prevention via our core products.
      • Work alongside Architects, Software Engineers, ML Engineers, and Data Scientists to develop innovative big data processing solutions for scaling our core product for eCommerce fraud prevention.
      • Take full ownership of our Cloud Platform evolution to support low-latency, high-quality, high-scale decisioning for Signifyd’s flagship product.

       

      • Architect, deploy, and optimize Cloud Solutions to evolve our technology stack, including Multi-Account strategy, best practices around data access, IAM and security rules, and the best approaches for optimized and secure access to our infrastructure.
      • Implement Engineering Enablement automation and best-of-breed solutions for Developer Tooling to support Elite DORA metrics measurements and optimal Engineering Experience.
      • Mentor and coach fellow engineers on the team, fostering an environment of growth and continuous improvement.
      • Identify and address gaps in team capabilities and processes to enhance team efficiency and success.

      What You'll Need

• Ideally has 10-15+ years in cloud infrastructure engineering and automation, including at least five years of experience as a cloud engineering architect or lead. Has successfully navigated the challenges of working with large-scale cloud environments encompassing millions of dollars of computing costs and many petabytes of data storage and processing.
• Deep understanding of best practices and current trends across cloud providers; comfortable working with multi-terabyte datasets and skilled in high-scale data ingestion, transformation, and distributed processing; experience with Apache Spark or Databricks is a plus.
      • Deep understanding of Container-based systems such as Kubernetes (k8s), Docker, ECS, EKS, GKE, and others.
      • Deep understanding of Networking concepts such as DNS / Route53, ELB/ALB, Networking load balancing, IAM rules, VPC peering and data connectivity, NAT gateways, Network bridge technology such as Megaport, and others.
• Experience converting existing Cloud infrastructure to serverless architecture patterns (AWS Lambda, Kinesis, Aurora, etc.), deploying via Terraform, Pulumi, or AWS CloudFormation / CDK (see the sketch after this list).
      • Hands-on expertise in data technologies with proficiency in technologies such as Spark, Airflow, Databricks, AWS services (SQS, Kinesis, etc.), and Kafka. Understand the trade-offs of various architectural approaches and recommend solutions suited to our needs.
      • Executed the planning of product and infrastructure software releases
      • Experience in developing, deploying, and managing CI/CD developer tooling like AWS Code Commit, Code Build, Code Deploy, Code Pipeline, JetBrains TeamCity, and GitHub Enterprise.
      • Understanding how to appropriately deploy, integrate, and maintain Developer build and scanning tools such as Develocity Gradle Enterprise, Sonarqube, Maven, Snyk, CyCode, and others.
      • Deep knowledge of best practices around Logging, Monitoring, and Observability tools such as AWS Cloudwatch, Datadog, Loggly, and others.
      • Demonstrable ability to lead and mentor engineers, fostering their growth and development. 
      • You have successfully partnered with Product and Engineering teams to lead through strategic initiatives.
      • Commitment to quality: you take pride in delivering work that excels in accuracy, performance, and reliability, setting a high standard for the team and the organization.
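
For the serverless and infrastructure-as-code experience mentioned in this list, here is a deliberately small sketch using the AWS CDK for Python (one of the IaC options named above); the stack, function name, and asset path are hypothetical:

```python
# Hypothetical CDK v2 app that provisions a single Lambda function.
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class IngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",                  # placeholder module.function
            code=_lambda.Code.from_asset("lambda"),  # placeholder asset directory
            timeout=Duration.seconds(30),
        )

app = App()
IngestStack(app, "ingest-stack")  # deployed with `cdk deploy`
app.synth()
```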

       

      #LI-Remote

      Benefits in our US offices:

      • Discretionary Time Off Policy (Unlimited!)
      • 401K Match
      • Stock Options
      • Annual Performance Bonus or Commissions
      • Paid Parental Leave (12 weeks)
      • On-Demand Therapy for all employees & their dependents
      • Dedicated learning budget through Learnerbly
      • Health Insurance
      • Dental Insurance
      • Vision Insurance
      • Flexible Spending Account (FSA)
      • Short Term and Long Term Disability Insurance
      • Life Insurance
      • Company Social Events
      • Signifyd Swag

      We want to provide an inclusive interview experience for all, including people with disabilities. We are happy to provide reasonable accommodations to candidates in need of individualized support during the hiring process.

      Signifyd provides a base salary, bonus, equity and benefits to all its employees. Our posted job may span more than one career level, and offered level and salary will be determined by the applicant’s specific experience, knowledge, skills, and abilities, as well as internal equity and alignment with market data.

      USA Base Salary Pay Range
$220,000 - $240,000 USD

      See more jobs at Signifyd

      Apply for this job

      +30d

AWS MLOps Engineer

      DevoteamNantes, France, Remote
      MLDevOPSOpenAILambdaagileterraformscalaairflowansiblescrumgitc++dockerkubernetesjenkinspythonAWS

Devoteam is hiring a Remote AWS MLOps Engineer

Job Description

Responsibilities

• Support the end-to-end machine learning development process to design, build, and manage reproducible, testable, and scalable software.

• Work on setting up and using ML/AI/MLOps platforms (such as AWS SageMaker, Kubeflow, AWS Bedrock, AWS Titan)

• Bring our clients best practices in organization, development, automation, monitoring, and security.

• Explain and apply best practices for automation, testing, versioning, reproducibility, and monitoring of the deployed AI solution.

• Mentor and supervise junior consultants, e.g. through peer code reviews and application of best practices.

• Support our sales team with proposal writing and pre-sales meetings.

• Contribute to the development of our internal community (REX sessions, workshops, articles, hackerspace).

• Take part in recruiting our future talent.

Qualifications

Technical skills

REQUIRED:

• Fluency in Python, PySpark or Scala Spark, and libraries such as Scikit-learn, MLlib, TensorFlow, Keras, PyTorch, LightGBM, XGBoost, and Spark (to name a few)

• Able to implement containerisation architectures (Docker / Containerd) and serverless and microservices environments using Lambda, ECS, and Kubernetes

• Fully operational in setting up DevOps and Infrastructure-as-Code environments and in working with MLOps tools

• Most of the following tools, or their cloud equivalents, should be part of your day-to-day: Git, GitLab CI, Jenkins, Ansible, Terraform, Kubernetes, MLflow, Airflow

• AWS cloud (AWS Bedrock, AWS Titan, OpenAI, AWS SageMaker / Kubeflow)

• Agile / Scrum methodology

• Feature Store (any provider)

NICE TO HAVE:

• Apache Airflow

• AWS SageMaker / Kubeflow

• Apache Spark

• Agile / Scrum methodology

Growing within the community

Growing within the Data Tribe means playing an active role in creating a stimulating environment in which consultants constantly push one another to improve, in technical skills as much as in soft skills. But that is not all: there are also regular events and dedicated Slack channels that let you call on the wider communities (data, AI/ML, DevOps, security, ...) as a whole!

Alongside this, you will have the opportunity to be a driving force in the development of our various internal communities (REX sessions, workshops, articles, podcasts, ...).

Compensation

The fixed salary offered for this position depends on your experience and falls within a range of 46.5k to 52.5k.

      See more jobs at Devoteam

      Apply for this job

      +30d

      Lead Data Engineer

      DevoteamTunis, Tunisia, Remote
      airflowsqlscrum

      Devoteam is hiring a Remote Lead Data Engineer

Job Description

Within the "Data Platform" department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

• Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
• Collaborate with business teams to understand data requirements and design appropriate solutions.
• Optimise the performance of data processing and ELT workflows using Airflow, DBT, and BigQuery.
• Implement data quality processes to guarantee data integrity and consistency (see the example after this list).
• Work closely with engineering teams to integrate data pipelines into existing applications and services.
• Stay up to date with new technologies and best practices in data processing and analytics.
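
To make the data-quality bullet above concrete, a minimal (and purely illustrative) integrity check with the BigQuery Python client could look like the snippet below; the project, dataset, table, and column names are placeholders:

```python
# Hypothetical data quality check: fail loudly on null or duplicate keys.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project

QUALITY_SQL = """
SELECT
  COUNTIF(order_id IS NULL)            AS null_order_ids,
  COUNT(*) - COUNT(DISTINCT order_id)  AS duplicate_order_ids
FROM `my-gcp-project.analytics.orders`
"""

row = next(iter(client.query(QUALITY_SQL).result()))
if row.null_order_ids or row.duplicate_order_ids:
    raise ValueError(
        f"orders failed integrity checks: {row.null_order_ids} null and "
        f"{row.duplicate_order_ids} duplicate order_id values"
    )
print("orders table passed integrity checks")
```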

       

        Qualifications

• A five-year engineering degree (Bac+5) or equivalent university qualification with a specialisation in computer science.
• At least 3 years of experience in data engineering, with significant experience in a GCP cloud environment.
• Advanced proficiency in SQL for data processing and optimisation.
• Google Professional Data Engineer certification is a plus.
• Excellent written and oral communication (high-quality deliverables and reporting).

        See more jobs at Devoteam

        Apply for this job

        +30d

        Data Infrastructure Software Engineer II

        DevOPSremote-firstterraformairflowsqlDesigngraphqlc++dockerkubernetespython

        Khan Academy is hiring a Remote Data Infrastructure Software Engineer II

        ABOUT KHAN ACADEMY

        Khan Academy is a nonprofit with the mission to deliver a free, world-class education to anyone, anywhere. Our proven learning platform offers free, high-quality supplemental learning content and practice that cover Pre-K - 12th grade and early college core academic subjects, focusing on math and science. We have over 155 million registered learners globally and are committed to improving learning outcomes for students worldwide, focusing on learners in historically under-resourced communities.

        OUR COMMUNITY 

        Our students, teachers, and parents come from all walks of life, and so do we. Our team includes people from academia, traditional/non-traditional education, big tech companies, and tiny startups. We hire great people from diverse backgrounds and experiences because it makes our company stronger. We value diversity, equity, inclusion, and belonging as necessary to achieve our mission and impact the communities we serve. We know that transforming education starts in-house with learning about ourselves and our colleagues. We strive to be world-class in investing in our people and commit to developing you as a professional.

        THE ROLE

        We have some of the richest educational data in the world, and we want to leverage that data to develop a clearer picture of who our users are, how they are using the site, and how we could better serve them on their educational journey. Your work will enable answering critical and meaningful questions like "how do students learn most effectively?" and "how can we improve our content and product?"

        The ideal candidate will have a strong background in software development and DevOps, with a focus on data engineering. You will be responsible for designing, developing, and maintaining scalable systems and applications. 

        What you’ll work on:

        • Design and manage data pipelines and workflows using SQL, BigQuery, Airflow, and DBT.
• Develop and maintain data visualization and analytical tools using Streamlit (see the sketch after this list).
• Design, develop, and maintain data infrastructure software using Python and Go (familiarity with GraphQL is a plus)
        • Implement and manage DevOps processes for Data Engineering tools
        • Use Docker and Kubernetes to build and deploy containerized applications.
        • Utilize Terraform for infrastructure as code to manage and provision cloud resources.
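
As a small, hedged sketch of the visualization work referenced above (not Khan Academy's actual tooling), a Streamlit app backed by BigQuery might look like this; the project, table, and column names are placeholders:

```python
# Hypothetical Streamlit dashboard reading a metrics table from BigQuery.
import pandas as pd
import streamlit as st
from google.cloud import bigquery

st.title("Daily active learners")  # placeholder metric

@st.cache_data(ttl=3600)           # refresh at most once per hour
def load_metrics() -> pd.DataFrame:
    client = bigquery.Client(project="my-gcp-project")  # placeholder project
    sql = """
        SELECT activity_date, active_learners
        FROM `my-gcp-project.reporting.daily_active_learners`
        ORDER BY activity_date
    """
    return client.query(sql).to_dataframe()

df = load_metrics()
st.line_chart(df, x="activity_date", y="active_learners")
st.dataframe(df.tail(14))
```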

You can read about our latest work on our Engineering Blog.

        WHAT YOU BRING

        • 4+ years experience in a software engineer role with a focus on tool design, creation, and maintenance for infrastructure or data engineering
        • Strong collaborative development experience including PR reviews and documentation writing
        • Proficient in SQL
        • Proficiency in writing and maintaining data pipelines and data quality monitors in a workflow management tool for productionized solutions, with source control and code review.
        • Proficiency in computer science and software engineering fundamentals, including the ability to program in Python and/or Go.
        • Proficient in Docker and Kubernetes.
        • Experience with Terraform.
• Experience with BigQuery and data pipeline tools such as Airflow and DBT would be a big plus, as would familiarity with Streamlit for data visualization.

        Note: We welcome candidates with experience in any and all technologies. We don’t require experience in any particular language or tool. Our commitment to on-boarding and mentorship means you won’t be left in the dark as you learn new technologies.

        PERKS AND BENEFITS

        We may be a non-profit, but we reward our talented team extremely well! We offer:

        • Competitive salaries
        • Ample paid time off as needed – Your well-being is a priority.
        • Remote-first culture - that caters to your time zone, with open flexibility as needed, at times
        • Generous parental leave
        • An exceptional team that trusts you and gives you the freedom to do your best
        • The chance to put your talents towards a deeply meaningful mission and the opportunity to work on high-impact products that are already defining the future of education
        • Opportunities to connect through affinity, ally, and social groups
        • And we offer all those other typical benefits as well: 401(k) + 4% matching & comprehensive insurance, including medical, dental, vision, and life

        At Khan Academy we are committed to fair and equitable compensation practices, the well-being of our employees, and our Khan community. This belief is why we have built out a robust Total Rewards package that includes competitive base salaries, and extensive benefits and perks to support physical, mental, and financial well-being.

The target salary range for this position (LEC IC 1.5) is $165,500 - $201,250 USD / 206,875 - 251,562 CAD. The pay range for this position is a general guideline only. The salary offered will depend on internal pay equity and the candidate’s relevant skills, experience, qualifications, and job market data. Exceptional performers in this role who make an outsized contribution can make well in excess of this range. Additional incentives are provided as part of the complete total rewards package in addition to comprehensive medical and other benefits.

        MORE ABOUT US

        OUR COMPANY VALUES

        Live & breathe learners

        We deeply understand and empathize with our users. We leverage user insights, research, and experience to build content, products, services, and experiences that our users trust and love. Our success is defined by the success of our learners and educators.

        Take a stand

        As a company, we have conviction in our aspirational point of view of how education will evolve. The work we do is in service to moving towards that point of view. However, we also listen, learn and flex in the face of new data, and commit to evolving this point of view as the industry and our users evolve.

        Embrace diverse perspectives

        We are a diverse community. We seek out and embrace a diversity of voices, perspectives and life experiences leading to stronger, more inclusive teams and better outcomes. As individuals, we are committed to bringing up tough topics and leaning into different points of view with curiosity. We actively listen, learn and collaborate to gain a shared understanding. When a decision is made, we commit to moving forward as a united team.

        Work responsibly and sustainably

        We understand that achieving our audacious mission is a marathon, so we set realistic timelines and we focus on delivery that also links to the bigger picture. As a non-profit, we are supported by the generosity of donors as well as strategic partners, and understand our responsibility to our finite resources. We spend every dollar as though it were our own. We are responsible for the impact we have on the world and to each other. We ensure our team and company stay healthy and financially sustainable.

        Bring out the joy

        We are committed to making learning a joyful process. This informs what we build for our users and the culture we co-create with our teammates, partners and donors.

        Cultivate learning mindset

        We believe in the power of growth for learners and for ourselves. We constantly learn and teach to improve our offerings, ourselves, and our organization. We learn from our mistakes and aren’t afraid to fail. We don't let past failures or successes stop us from taking future bold action and achieving our goals.

        Deliver wow

        We insist on high standards and deliver delightful, effective end-to-end experiences that our users can rely on. We choose to focus on fewer things — each of which aligns to our ambitious vision — so we can deliver high-quality experiences that accelerate positive measurable learning with our strategic partners.

        We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, gender, gender identity or expression, national origin, sexual orientation, age, citizenship, marital status, disability, or Veteran status. We value diversity, equity, and inclusion, and we encourage candidates from historically underrepresented groups to apply.

        See more jobs at Khan Academy

        Apply for this job

        +30d

        Staff Full-Stack Engineer

        BetterUpAnywhere in the U.S. (Remote)
        Salesagileairflowrubyc++AWS

        BetterUp is hiring a Remote Staff Full-Stack Engineer

        Let’s face it, a company whose mission is human transformation better have some fresh thinking about the employer/employee relationship.

        We do. We can’t cram it all in here, but you’ll start noticing it from the first interview.

        Even our candidate experience is different. And when you get an offer from us (and accept it), you get way more than a paycheck. You get a personal BetterUp Coach, a development plan, a trained and coached manager, the most amazing team you’ve ever met (yes, each with their own personal BetterUp Coach), and most importantly, work that matters.

        This makes for a remarkably focused and fulfilling work experience. Frankly, it’s not for everyone. But for people with fire in their belly, it’s a game-changing, career-defining, soul-lifting move.

        Join us and we promise you the most intense and fulfilling years of your career, doing life-changing work in a fun, inventive, soulful culture.

        If that sounds exciting—and the job description below feels like a fit—we really should start talking. 

        As a Staff engineer here at BetterUp you will have the opportunity to leverage your expertise, passion, and drive to make the world better by crafting iconic, groundbreaking solutions via your deep ability to architect and deliver robust, scalable, foundational systems. You will help lead our technical strategy, mentor engineers, and drive innovation at the intersection of platform technology, data, and AI. 

You will collaborate across functions to drive our mission to transform lives worldwide. You’ll also help us chart a course to elevate our platform, ensuring it is robust, flexible, and composable - you will help us create the bedrock necessary to unlock new possibilities for users across the globe. Your core mission will be to lead the way in building our infrastructure for innovation. You will lead by example, driving significant advancements while growing personally and professionally. Using tried-and-true technologies like Ruby on Rails running in AWS, paired with the latest in data and GenAI technologies, your team and platforms will be pivotal in positively impacting millions of people via our Human Transformation Platform. 

        Your leadership will extend to mentoring engineers and spearheading cross-functional collaborations that nurture a culture of innovation and continuous learning. You'll tackle exciting projects and complex problems, pushing the boundaries of technology to support personal and professional growth. 

        Your career at BetterUp will accelerate by delivering innovative projects, extensive leadership opportunities, and the profound impact of your work on global well-being. You'll have the freedom to innovate and the opportunity to grow, all while contributing significantly to our mission and having fun along the way. 

        This role won’t be easy, our course is steep, and the road ahead will be tough, so we are looking for someone who is comfortable in the rapidly changing world of hyper growth startups, who has the experience and wisdom to help us mature into a high growth, iconic world impacting organization. 

        At BetterUp we delight in supporting and pushing each other to bring out the best in our colleagues, and would love someone to join the team who shares our passion for customer empathy, engineering excellence, and continuous improvement. We also deeply understand that a key to peak performance is balance, and our culture is focused on providing the support our people need to be able to bring their whole selves to bear in service of our mission. We recognize that transforming lives starts with your own, and our organization is at the forefront of human transformation, and we aim to start with our team first, so we also give you and a friend a coach to help you grow and thrive. We have all the usual benefits (see details below), plus we also close the company for two full weeks a year to rest and recharge, and we have a thriving remote first culture. 

        What you'll do:

        • Spearhead delivery of scalable, robust systems for BetterUp's platform, ensuring they are flexible and adaptable for future growth.
        • Align technical efforts with business goals, innovating solutions that enhance user experience and operational efficiency.
        • Elevate our team through continuous learning, fostering an environment of engineering excellence, creating leverage via your experience, brilliance and leadership. 
        • Ensure technical solutions are cohesive and support company-wide objectives.
        • Apply deep product development expertise and a strong sense of customer empathy to guide the creation of user-centered solutions, ensuring the delivery of high-impact capabilities that drive value for members and our customers. 

        Attributes we look for: 

        • Bachelor's or Master's in Computer Science/Engineering or equivalent experience - you are relentlessly curious and an expert in multiple tried and true tech stacks, plus you are comfortable jumping into new areas and thrive on learning. 
        • Deep experience in startups with demonstrated success building scalable system architectures, especially for multi-sided marketplace platforms like BetterUp's Human Transformation Platform.
        • Exceptional problem-solving skills, with a strategic mindset able to navigate complex technical challenges, ensuring robust, flexible solutions that align with BetterUp’s long-term vision.
        • Extensive full-stack/data engineering experience, across industries, and domains. 
• Expertise in building enterprise applications using tech like Ruby on Rails, Ember.js, AWS, etc., plus the ability to leverage and reason about modern data technologies (DBT, Airflow, Snowflake) to drive data-driven, data-powered systems.
        • Agile and Lean startup veteran who is able to deliver incredible customer-centric product innovations, by their direct engineering ability, but also by shaping and guiding product roadmaps in the face of ambiguity, volatility, and risk. 
        • Strong communicators who possess a passion for BetterUp’s mission, and have a demonstrated ability to mentor and lead with empathy, passion, and wisdom. 

        What will make you successful in your time at BetterUp: 

        • You have radical curiosity and you love to learn new things
        • You seek feedback and turn it into action
        • You can work autonomously while being great at collaboration
        • You mentor and empower others around you

        Benefits:

        At BetterUp, we are committed to living out our mission every day and that starts with providing benefits that allow our employees to care for themselves, support their families, and give back to their community. 

        • Access to BetterUp coaching; one for you and one for a friend or family member 
        • A competitive compensation plan with opportunity for advancement
        • Medical, dental and vision insurance
        • Flexible paid time off
        • Per year: 
          • All federal/statutory holidays observed
          • 4 BetterUp Inner Work days (https://www.betterup.co/inner-work)
          • 5 Volunteer Days to give back
          • Learning and Development stipend
          • Company wide Summer & Winter breaks 
        • Year-round charitable contribution of your choice on behalf of BetterUp
        • 401(k) self contribution

        We are dedicated to building diverse teams that fuel an authentic workplace and sense of belonging for each and every employee. We know applying for a job can be intimidating, please don’t hesitate to reach out — we encourage everyone interested in joining us to apply.

        BetterUp Inc. provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, disability, genetics, gender, sexual orientation, age, marital status, veteran status. In addition to federal law requirements, BetterUp Inc. complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

        At BetterUp, we compensate our employees fairly for their work. Base salary is determined by job-related experience, education/training, residence location, as well as market indicators. The range below is representative of base salary only and does not include equity, sales bonus plans (when applicable) and benefits. This range may be modified in the future.

The base salary range for this role is $147,000 – $245,000.

Protecting your privacy and treating your personal information with care is very important to us, and central to the entire BetterUp family. By submitting your application, you acknowledge that your personal information will be processed in accordance with our Applicant Privacy Notice. If you have any questions about the privacy of your personal information or your rights with regards to your personal information, please reach out to support@betterup.co

        #LI-Remote

        See more jobs at BetterUp

        Apply for this job