Airflow Remote Jobs

142 Results

+30d

Senior Data Engineer

Catalyst | Remote (US & Canada)
ML, DevOps, Redis, Airflow, SQL, QA, C++, Elasticsearch, Python

Catalyst is hiring a Remote Senior Data Engineer

Company Overview

Totango + Catalyst have joined forces to build a leading customer growth platform that helps businesses protect and grow their revenue. Built by an experienced team of industry leaders, our software integrates with all the tools CS teams already use to provide one centralized view of customer data.  Our modern and intuitive dashboards help CS leaders develop impactful workflows and take the right actions to understand health, prevent churn, increase adoption, and drive expansion.

Position Overview

Insights and intelligence are the cornerstones of our product offering. We ingest and process massive amounts of data from a variety of sources to help our users understand the overall health of their customers at each stage of their journey.  As a Senior Data Engineer, you will be directly responsible for designing and implementing the next-generation data architecture leveraging technologies such as Databricks, TiDB, and Kafka.

This role is open to remote work anywhere within Canada and the U.S.

 

What You’ll Do 

  • Drive high impact, cross-functional data engineering projects built on top of a modern, best-in-class data stack, working with a variety of open source and Cloud technologies
  • Solve interesting and unique data problems at high volume and large scale  
  • Build and optimize the performance of batch, stream, and queue-based solutions including Kafka and Apache Spark
  • Collaborate with stakeholders from different teams to drive forward the data roadmap
  • Implement data retention, security and governance standards
  • Work with all engineering teams to help drive best practices for ownership and self-serve data processing
  • Support and expand standards, guidelines, tooling and best practices for data engineering at Catalyst
  • Support other data engineers in delivering our critical pipelines
  • Focus on data quality, cost-effective scalability, and distributed-system reliability, establishing automated mechanisms for each
  • Work cross functionally with application engineers, SRE, product, data analysts, data scientists, or ML engineers

 

What You’ll Need

  • 3+ years of experience successfully implementing modern data architectures
  • Strong project management skills
  • Demonstrated experience implementing ETL pipelines with Spark (we use PySpark)
  • Proficiency in Python, SQL, and/or another modern programming language
  • Deep understanding of SQL/NewSQL with relational data stores such as Postgres/MySQL
  • A strong desire to take ownership of problems you identify
  • Experience with modern data warehouses and lakes such as Redshift, Snowflake, and Databricks Delta Lake
  • Experience with distributed streaming tools like Kafka and Spark Structured Streaming
  • Familiarity with an orchestration tool such as Airflow, dbt, or Delta Live Tables
  • Experience with automated testing for distributed systems (unit testing, E2E testing, QA, data expectation monitoring)
  • Experience working with application engineers, product, and data scientists
  • Experience leveraging caching for performance using data stores such as Redis and Elasticsearch
  • Experience maintaining and scaling heterogeneous, large volumes of data in production
  • Practical experience with DevOps best practices (CI/CD, IaC) is a plus
  • Familiarity with Change Data Capture systems is a nice-to-have
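The requirements above call out data expectation monitoring. In practice that means asserting properties of a batch before publishing it downstream; a minimal stdlib-only sketch (the helper and field names are illustrative, not Catalyst's actual tooling):

```python
# Minimal data-expectation checks of the kind used to gate a pipeline stage.
# All names are illustrative; production stacks use tools like Great Expectations.

def check_expectations(rows, expectations):
    """Run each named expectation over the batch; return failure messages."""
    failures = []
    for name, predicate in expectations.items():
        bad = [r for r in rows if not predicate(r)]
        if bad:
            failures.append(f"{name}: {len(bad)} of {len(rows)} rows failed")
    return failures

batch = [
    {"account_id": "a1", "health_score": 82},
    {"account_id": "a2", "health_score": 47},
    {"account_id": None, "health_score": 91},
]

failures = check_expectations(batch, {
    "account_id is present": lambda r: r["account_id"] is not None,
    "health_score in 0..100": lambda r: 0 <= r["health_score"] <= 100,
})
print(failures)  # one failure: the row with a null account_id
```

A pipeline stage would typically halt (or quarantine the batch) when `failures` is non-empty, rather than letting bad data propagate.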

 

Why You’ll Love Working Here!

  • We are Remote first! Do your best work where you are most comfortable.
  • Highly competitive compensation package, including equity - everyone has a stake in our growth
  • Comprehensive benefits, including up to 100% paid medical, dental, & vision insurance coverage for you & your loved ones
  • Unlimited PTO policy encouraging you to take the time you need - we trust you to strike the right work/life balance
  • Monthly Mental Health Days and Mental Health Weeks twice per year 

 

Your base pay is one part of your total compensation package and is determined within a range. The base salary for this role ranges from $140,000 to $180,000 per year. We take into account numerous factors in deciding on compensation, such as experience, job-related skills, relevant education or training, and other business and organizational requirements. The salary range provided corresponds to the level at which this position has been defined.

Catalyst+Totango is an equal opportunity employer, meaning that we do not discriminate based upon race, religion, national origin, gender identity, age, sexual orientation, or any other protected class. We believe that diversity is more than just good intentions, and we are committed to creating an inclusive environment for all employees.




See more jobs at Catalyst

Apply for this job

+30d

Senior Data Infrastructure Engineer

Webflow | U.S. Remote
DevOps, S3, Webflow, remote-first, Terraform, Airflow, Design, C++, Docker, AWS, backend

Webflow is hiring a Remote Senior Data Infrastructure Engineer

At Webflow, our mission is to bring development superpowers to everyone. Webflow is the leading visual development platform for building powerful websites without writing code. By combining modern web development technologies into one platform, Webflow enables people to build websites visually, saving engineering time, while clean code seamlessly generates in the background. From independent designers and creative agencies to Fortune 500 companies, millions worldwide use Webflow to be more nimble, creative, and collaborative. It’s the web, made better.

We’re excited for a Senior Data Infrastructure Engineer to join our Data Platform team. In this role, you’ll play a key part in building robust, secure, and scalable infrastructure that powers our data operations. You will have the opportunity to optimize the performance of our data services and automate infrastructure management, ensuring everything runs smoothly and reliably. Your expertise will be crucial in integrating and managing essential components like Kafka, Spark, and Airflow, providing a solid foundation for our data-driven products. If you are passionate about leveraging cutting-edge technologies to make a real impact, we’d love to connect with you!

About the role 

  • Location: Remote-first (United States; BC & ON, Canada)
  • Full-time
  • Permanent 
  • Exempt
  • The cash compensation for this role is tailored to align with the cost of labor in different geographic markets. We've structured the base pay ranges for this role into zones for our geographic markets, and the specific base pay within the range will be determined by the candidate’s geographic location, job-related experience, knowledge, qualifications, and skills.
    • United States  (all figures cited below in USD and pertain to workers in the United States)
      • Zone A: $158,000 - $218,000
      • Zone B: $149,000 - $205,000
      • Zone C: $139,000 - $192,000
    • Canada  (All figures cited below in CAD and pertain to workers in ON & BC, Canada)
      • CAD 180,000 - CAD 248,000
  • Please visit our Careers page for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.
  • Reporting to the Senior Engineering Manager

As a Senior Data Infrastructure Engineer, you’ll … 

  • Provision and deploy infrastructure using Pulumi for Kafka, Spark, Airflow, Athena, and other critical systems on AWS.
  • Manage and maintain clusters, ensuring optimal performance and reliability, including implementing auto-scaling and right-sizing instances.
  • Configure and manage VPCs, load balancers, and VPC endpoints for secure communication between internal and external services.
  • Manage IAM roles, apply security patches, plan and execute version upgrades, and ensure compliance with regulations such as GDPR.
  • Design and implement high-availability solutions across multiple zones and regions, including backups, multi-region replication, and disaster recovery plans.
  • Oversee S3 data lake management, including file size management, compaction, encryption, and compression to maximize storage efficiency.
  • Implement caching strategies, indexing, and query optimization to ensure efficient data retrieval and processing.
  • Spearhead initiatives for optimizing performance, capacity planning, ensuring fault tolerance, and implementing failure recovery across all infrastructure components.
  • Implement monitoring and logging using tools like Datadog, CloudWatch and OpenSearch.
  • Develop services, tools and automation to simplify infrastructure complexity for other engineering teams, enabling them to focus on building great products.
  • Participate in all engineering activities including incident response, interviewing, designing and reviewing technical specifications, code review, and releasing new functionality.
  • Mentor, coach, and inspire a team of engineers of various levels.
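As one concrete illustration of the S3 data lake responsibilities above, compaction typically starts by planning how small files combine into batches near a target output size. A stdlib-only sketch of that planning step (the file names and the 128 MB target are assumptions, not Webflow's configuration):

```python
# Group small data-lake files into compaction batches close to a target size.
# A real job would then rewrite each batch as one larger object (e.g. Parquet).

TARGET_BYTES = 128 * 1024 * 1024  # a common target object size for S3 data lakes

def plan_compaction(files, target=TARGET_BYTES):
    """files: list of (key, size_bytes) pairs. Returns batches of keys to merge."""
    batches, current, current_size = [], [], 0
    for key, size in sorted(files, key=lambda f: f[1]):
        if current and current_size + size > target:
            batches.append(current)
            current, current_size = [], 0
        current.append(key)
        current_size += size
    if current:
        batches.append(current)
    return batches

mb = 1024 * 1024
files = [("part-0", 40 * mb), ("part-1", 50 * mb), ("part-2", 60 * mb), ("part-3", 90 * mb)]
print(plan_compaction(files))  # [['part-0', 'part-1'], ['part-2'], ['part-3']]
```

The greedy grouping keeps each output near (but not over) the target, which reduces per-object overhead and improves scan performance downstream.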

In addition to the responsibilities outlined above, at Webflow we will support you in identifying where your interests and development opportunities lie and we'll help you incorporate them into your role.

About you 

You’ll thrive as a Senior Data Infrastructure Engineer if you have: 

  • 5+ years of experience as a Data Infrastructure Engineer or in related roles like Platform Engineer, SRE, DevOps or Backend Engineer.
  • Strong experience with provisioning and managing data infrastructure components like Kafka, Spark, and Airflow.
  • Proficiency with cloud services and environments (compute, storage, networking, identity management, infrastructure as code, etc.).
  • Experience with containerization technologies like Docker and Kubernetes.
  • Expertise in infrastructure as code tools like Terraform and Pulumi.
  • Solid understanding of networking concepts and configurations, including VPCs, load balancers, and endpoints.
  • Experience with monitoring and logging tools.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Bonus points if you have:

  • AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
  • Familiarity with multi-zone and multi-region high availability and disaster recovery strategies.
  • Knowledge of compliance standards (GDPR, CCPA) and security best practices.

Our Core Behaviors:

  • Obsess over customer experience. We deeply understand what we’re building and who we’re building for and serving. We define the leading edge of what’s possible in our industry and deliver the future for our customers.
  • Move with heartfelt urgency. We have a healthy relationship with impatience, channeling it thoughtfully to show up better and faster for our customers and for each other. Time is the most limited thing we have, and we make the most of every moment.
  • Say the hard thing with care. Our best work often comes from intelligent debate, critique, and even difficult conversations. We speak our minds and don’t sugarcoat things — and we do so with respect, maturity, and care.
  • Make your mark. We seek out new and unique ways to create meaningful impact, and we champion the same from our colleagues. We work as a team to get the job done, and we go out of our way to celebrate and reward those going above and beyond for our customers and our teammates.

Benefits & wellness

  • Equity ownership (RSUs) in a growing, privately-owned company
  • 100% employer-paid healthcare, vision, and dental insurance coverage for employees and dependents (US; full-time Canadian workers working 30+ hours per week), as well as Health Savings Account/Health Reimbursement Account, dependent on insurance plan selection. Employees also have voluntary insurance options, such as life, disability, hospital protection, accident, and critical illness
  • 12 weeks of paid parental leave for both birthing and non-birthing caregivers, as well as an additional 6-8 weeks of pregnancy disability for birthing parents to be used before child bonding leave. Employees also have access to family planning care and reimbursement.
  • Flexible PTO with a mandatory annual minimum of 10 days paid time off, and a sabbatical program
  • Access to mental wellness coaching, therapy, and Employee Assistance Program
  • Monthly stipends to support health and wellness, as well as smart work, and annual stipends to support professional growth
  • Professional career coaching, internal learning & development programs
  • 401k plan and financial wellness benefits, like CPA or financial advisor coverage
  • Commuter benefits for in-office workers

Temporary employees are not eligible for paid holiday time off, accrued paid time off, paid leaves of absence, or company-sponsored perks.

Be you, with us

At Webflow, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.

Stay connected

Not ready to apply, but want to be part of the Webflow community? Consider following our story on our Webflow Blog, LinkedIn, Twitter, and/or Glassdoor. 

Please note:

To join Webflow, you'll need valid U.S. or Canadian work authorization depending on the country of employment.

If you are extended an offer, that offer may be contingent upon your successful completion of a background check, which will be conducted in accordance with applicable laws. We may obtain one or more background screening reports about you, solely for employment purposes.

Webflow Applicant Privacy Notice

See more jobs at Webflow

Apply for this job

+30d

Senior Analytics Engineer

Agile, Bachelor's degree, remote-first, Tableau, Airflow, SQL, Design, Python

Parsley Health is hiring a Remote Senior Analytics Engineer

About us:

Parsley Health is a digital health company with a mission to transform the health of everyone, everywhere with the world's best possible medicine. Today, Parsley Health is the nation's largest health care company helping people suffering from chronic conditions find relief with root cause resolution medicine. Our work is inspired by our members’ journeys and our actions are focused on impact and results.

The opportunity:

You will be joining a remote team of passionate engineers reporting into our Data Manager. In this role, you will work closely with Engineering, Product, Design and Customer Reliability teams. Parsley Health is an outcomes-driven organization and your work will directly contribute to the company objectives, including expanding the business nationally; improving activation, conversion and retention; and expansion of our healthcare products.

We work in a blameless environment and we take ownership and pride in our efforts. We like to work in small cross functional product pods where each pod owns the development lifecycle of their products. We follow agile development practices and encourage each pod to tailor the processes to their needs. Our teams are built on pillars of trust, humility and continuous improvement.

About you:

You appreciate the challenge of building reliable and timely business intelligence systems to promote actionable insights. You know that good ETL, preparation, and visualization are key to getting the right answer - and you also know that understanding your stakeholder’s problems is the key to getting the right question. 

You know when to develop an MVP based on a few requirements gathered in a conversation, and when to suggest a dedicated meeting to sort out the 'what-why-how.'

You have a healthy appreciation for the many ways in which distributed systems may fail. You're always thinking beyond the scope of the current project, and about the larger product vision. You are thrilled to deliver the right information, at the right time, in support of our member’s health and clinician’s decision process.

What you’ll do:

  • Manage and architect our ETL, warehouse, and data delivery/visibility.
  • Craft critical retrospective reports for use by our Ops, Clinical, Product, Finance, and MX teams.
  • Provide actionable, prospective insights to support business and clinical decisions.
  • Engage functional peers on core business strategies and how data products support those efforts.
  • Work closely with stakeholder groups to define requirements, design appropriate BI solutions, and implement applications against development standards and best practices.
  • Design, develop, and maintain scalable data pipelines that extract, transform, and load data from various sources into our data warehouse (BigQuery).
  • Ensure data accuracy, consistency, and availability for business intelligence and analytics purposes.
  • Optimize and enhance data processing workflows for performance and efficiency.
  • Implement and maintain data quality monitoring and alerting systems.
  • Work closely with cross-functional teams - including Product and Clinical Operations - to understand data needs and provide actionable insights.
  • Stay up-to-date with emerging trends and technologies in data engineering and analytics.
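The extract-transform-load work described above can be sketched end to end with the standard library's sqlite3 standing in for a warehouse like BigQuery (the table and field names are invented for illustration):

```python
import sqlite3

# Tiny ETL sketch: extract raw records, transform them, load into a warehouse
# table. sqlite3 stands in for BigQuery here; all names are illustrative only.

raw_events = [
    {"member_id": 1, "visit_type": "INITIAL ", "duration_min": "45"},
    {"member_id": 2, "visit_type": "follow_up", "duration_min": "30"},
]

def transform(event):
    # Normalize casing and types so downstream reports are consistent.
    return (event["member_id"],
            event["visit_type"].strip().lower(),
            int(event["duration_min"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (member_id INTEGER, visit_type TEXT, duration_min INTEGER)")
conn.executemany("INSERT INTO visits VALUES (?, ?, ?)", [transform(e) for e in raw_events])

total = conn.execute("SELECT SUM(duration_min) FROM visits").fetchone()[0]
print(total)  # 75
```

In a production pipeline the transform step is where most data-quality work lives; tools like Airflow or dbt then schedule and version these steps rather than running them inline.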

What you’ll need:

  • Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field (Master's degree preferred).
  • 3+ years of experience in data and analytics.
  • Strong proficiency in SQL and Python.
  • Experience with data modeling techniques.
  • Hands-on experience with data warehousing technologies (e.g., Redshift, Snowflake, BigQuery) and ETL tools (e.g., Airflow, dbt).
  • Experience with data visualization tools (e.g., Tableau, Looker) is a plus.
  • Experience with Google Cloud Platform.
  • Containerization experience, plus knowledge of CI tooling, testing frameworks, and other code quality tools.
  • Familiarity with healthcare data standards and regulations (e.g., HIPAA) is desirable.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
  • A lean towards self-directed learning in data tooling and solutions.

Benefits and Compensation:

  • Equity Stake
  • 401(k) + Employer Matching program
  • Remote-first with the option to work from one of our centers in NYC or LA
  • Complimentary Parsley Health Complete Care membership
  • Subsidized Medical, Dental, and Vision insurance plan options
  • Generous 4+ weeks of paid time off
  • Annual professional development stipend
  • Annual wellness stipend

Parsley Health is committed to providing an equitable, fair and transparent compensation program for all employees.

The starting salary for this role is between $115,000 - $130,000, depending on skills and experience. We take a geo-neutral approach to compensation within the US, meaning that we pay based on job function and level, not location.

Individual compensation decisions are based on a number of factors, including experience level, skillset, and balancing internal equity relative to peers at the company. We expect the majority of the candidates who are offered roles at our company to fall healthily throughout the range based on these factors. We recognize that the person we hire may be less experienced (or more senior) than this job description as posted. If that ends up being the case, the updated salary range will be communicated with candidates during the process.


At Parsley Health we believe in celebrating everything that makes us human and are proud to be an equal opportunity workplace. We embrace diversity and are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe that the more inclusive we are, the better we can serve our members. 


Important note:

In light of the recent increase in hiring scams, if you're selected to move on to the next phase of our hiring process, a member of our Talent Acquisition team will reach out to you directly from an @parsleyhealth.com email address to guide you through our interview process. 

    Please note: 

  • We will never communicate with you via Microsoft Teams
  • We will never ask for your bank account information at any point during the recruitment process, nor will we send you a check (electronic or physical) to purchase home office equipment

We look forward to connecting!

#LI-Remote

See more jobs at Parsley Health

Apply for this job

+30d

Middle Data Engineer (Social Shopping Platform)

Sigma Software | Warsaw, Poland, Remote
Airflow, Design, Git, Python, AWS

Sigma Software is hiring a Remote Middle Data Engineer (Social Shopping Platform)

Job Description

  • Contributing to new technology investigations and complex solution design, supporting a culture of innovation by considering matters of security, scalability, and reliability, with a focus on building out our ETL processes 
  • Working with a modern data stack, coming up with well-designed technical solutions and robust code, and implementing data governance processes 
  • Working and professionally communicating with the customer’s team 
  • Taking responsibility for delivering major solution features 
  • Participating in the requirements gathering and clarification process, proposing optimal architecture strategies, and leading the data architecture implementation 
  • Developing core modules and functions, designing scalable and cost-effective solutions 
  • Performing code reviews, writing unit and integration tests 
  • Scaling the distributed system and infrastructure to the next level 
  • Building a data platform using the power of the AWS cloud 

Qualifications

  • 3+ years of strong experience with Python as a programming language for data pipelines and related tools 
  • Familiarity and understanding of distributed data processing with Spark for data pipeline optimization and monitoring workloads
  • Proven track record of building data transformations using data build tools (e.g., dbt) 
  • Excellent implementation of data modeling and data warehousing best practices
  • Experience working with Looker at developer proficiency (not just as a user), including LookML 
  • Strong Data Domain background – understanding of how data engineers, data scientists, analytics engineers, and analysts work to be able to work closely with them and understand their needs 
  • Good written and spoken English communication skills 
  • Familiarity with software engineering best practices: testing, PRs, Git, code reviews, code design, releasing 

WOULD BE A PLUS

  • Data certifications in Data Engineering or Data Analytics 
  • 2 or more years of experience with Databricks and Airflow 
  • Experience with DAGs and orchestration tools 
  • Experience in developing Snowflake-driven data warehouses 
  • Experience in developing event-driven data pipelines 
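Orchestration tools like Airflow, mentioned above, model a pipeline as a DAG of tasks and run them in dependency order. The core idea fits in a few lines with the Python standard library's graphlib (the task names are invented for illustration):

```python
from graphlib import TopologicalSorter

# A pipeline as a DAG: each task maps to the set of tasks it depends on.
# Airflow's scheduler does essentially this, plus retries, schedules, and state.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'validate', 'transform', 'load', 'report']
```

Independent branches of a real DAG can run in parallel; a topological order only guarantees that every task runs after all of its dependencies.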

 

See more jobs at Sigma Software

Apply for this job

Instacart is hiring a Remote Senior Product Manager, Ads Data

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

 

Overview

About the Role

We are seeking a senior product manager to build and extend the data platform we leverage to power our ads and insights businesses, which serve over 5,000 CPG brands a day. 

You will work in collaboration with R&D, data science, data engineering and commercial leadership, to build the foundation that powers how we surface data to internal and external customers.

This is a unique opportunity to have end to end ownership of a technical platform for the company that is core to our growth. Your portfolio will directly impact how some of the world’s largest CPG brands make investment and strategy decisions.

 

About The Team

The Advertiser Experience team owns the set of systems that power our advertiser-facing offerings (inclusive of Instacart Ads Manager, Instacart Ads API, Ads Measurement, and Data Pipelines). Our teams own the E2E systems, from our back-end platforms to the data that powers a complex ecosystem of CPGs, retailers, customers, and operators.

We are a passionate team of 100+ engineers, data scientists, designers, marketers and product managers focused on driving growth for CPGs. We work hard to make sure everyone is brought along for the journey as we ship award winning products and services to the industry.

 

About The Job

  • Manage the roadmap and execution for our ads data platform and data pipelines for a diverse set of use cases.
  • Drive forward strategy on all aspects of the data platform, especially our data sharing practices and our ability to derive signal from noise.
  • Lead product planning, product & customer discovery, the product development process, effort estimation, and collaboration with teams across the organization (i.e. Data Engineering and Commercial Teams).
  • Build and maintain a variety of integrations with a complex ecosystem of 3rd party partners like identity graphs, verification providers, media partners, and clean rooms providers.
  • Ensure our ads data platform meets the highest standards for privacy, data protection, and regulatory compliance.
  • Intake & validate new ideas through a set of frameworks and drive them into implementable projects.
  • Advocate for data quality throughout the entire Ads and Eversight R&D organization, and build tools that allow internal customers to be evangelists themselves.

About You

Minimum Qualifications

  • 5+ years of Product Management experience
  • Experience managing data products and data platforms
  • Experience working in deeply technical domains with the ability to quickly ramp up when onboarding into new areas
  • Experience partnering with technical audiences and “translating” to senior audiences across functions
  • Direct experience partnering with Data Engineering and Product teams to identify new roadmap opportunities and improvements
  • Ability to manage and align multiple stakeholders

Preferred Qualifications

  • Fluent in core data processing technologies like Airflow, dbt, cloud data warehouses
  • Experience in influencing and building out multi year strategy

Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For US based candidates, the base pay ranges for a successful candidate are listed below.

CA, NY, CT, NJ
$187,000 - $208,000 USD
WA
$180,000 - $200,000 USD
OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI
$172,000 - $191,000 USD
All other states
$156,000 - $173,000 USD

See more jobs at Instacart

Apply for this job

+30d

Senior AI Infra Engineer, AI/ML and Data Infrastructure

Chan Zuckerberg Initiative | Redwood City, CA (Open to Remote)
ML, Rust, Scala, Airflow, Design, Azure, Ruby, Java, C++, Kubernetes, Linux, Python, AWS, PHP

Chan Zuckerberg Initiative is hiring a Remote Senior AI Infra Engineer, AI/ML and Data Infrastructure

The Chan Zuckerberg Initiative was founded by Priscilla Chan and Mark Zuckerberg in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education to addressing the needs of our local communities. Our mission is to build a more inclusive, just, and healthy future for everyone.

The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central Operations & Partners team provides the support needed to push this work forward. 

Central Operations & Partners consists of our Brand & Communications, Community, Facilities, Finance, Infrastructure/IT Operations/Business Systems, Initiative Operations, People, Real Estate/Workplace/Facilities/Security, Research & Learning, and Ventures teams. These teams provide the essential operations, services, and strategies needed to support CZI’s progress toward achieving its mission to build a better future for everyone.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping schools bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences. 

The AI/ML and Data Engineering Infrastructure organization works on building shared tools and platforms to be used across all of the Chan Zuckerberg Initiative, partnering and supporting the work of a wide range of Research Scientists, Data Scientists, AI Research Scientists, as well as a broad range of Engineers focusing on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale. A person in this role will build these technology solutions and help to cultivate a culture of shared best practices and knowledge around core engineering.

We are building a world-class shared services model, and being based in New York helps us achieve our service goals. We require all interested candidates to be based out of New York City and available to work onsite 2-3 days a week.

What You'll Do

  • Participate in the technical design and building of efficient, stable, performant, scalable, and secure AI/ML and data infrastructure engineering solutions.
  • Write hands-on code for our deep learning and machine learning models.
  • Design and implement complex systems integrating with our large scale AI/ML GPU compute infrastructure and platform, making working across multiple clouds easier and convenient for our Research Engineers, ML Engineers, and Data Scientists. 
  • Use your solid experience and skills in building containerized applications and infrastructure using Kubernetes in support of our large scale GPU Research cluster as well as working on our various heterogeneous and distributed AI/ML environments.  
  • Collaborate with other team members in the design and build of our Cloud based AI/ML platform solutions, which includes Databricks Spark, Weaviate Vector Databases, and supporting our hosted Cloud GPU Compute services running containerized PyTorch on large scale Kubernetes.
  • Collaborate with our partners on data management solutions in our heterogeneous collection of complex datasets.
  • Help build tooling that makes optimal use of our shared infrastructure in empowering  our AI/ML efforts with world class GPU Compute Cluster and other compute environments such as our AWS based services.

What You'll Bring

  • BS or MS degree in Computer Science or a related technical discipline or equivalent experience
  • 5+ years of relevant coding experience
  • 3+ years of systems Architecture and Design experience, with a broad range of experience across Data, AI/ML, Core Infrastructure, and Security Engineering
  • Experience scaling containerized applications on Kubernetes or Mesos, including expertise in creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)
  • Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with On-Prem and Colocation Service hosting environments
  • Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala
  • Demonstrated ability with a scripting language such as Python, PHP, or Ruby
  • AI/ML platform operations experience in an environment with challenging data and systems problems, including large-scale Kafka and Spark deployments (or their counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam
  • MLOps experience working with medium to large scale GPU clusters in Kubernetes (Kubeflow),  HPC environments, or large scale Cloud based ML deployments
  • Working knowledge of NVIDIA CUDA and custom AI/ML libraries
  • Knowledge of Linux systems optimization and administration
  • Understanding of Data Engineering, Data Governance, Data Infrastructure, and AI/ML execution platforms.
  • PyTorch, Keras, or TensorFlow experience is a strong nice-to-have
  • HPC and Slurm experience is a strong nice-to-have

Compensation

The Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You 

We’re thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible. 

  • CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
  • Annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
  • CZI Life of Service Gifts are awarded to employees to “live the mission” and support the causes closest to them.
  • Paid time off to volunteer at an organization of your choice. 
  • Funding for select family-forming benefits. 
  • Relocation support for employees who need assistance moving to the Bay Area
  • And more!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts. 

If you’re interested in a role but your previous experience doesn’t perfectly align with each qualification in the job description, we still encourage you to apply as you may be the perfect fit for this or another role.

Explore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.

#LI-Remote 

 

 

See more jobs at Chan Zuckerberg Initiative

Apply for this job

+30d

Junior Solutions Engineer

SingleStoreRemote, United States
SalesscalaairflowsqlDesignazurejavapythonAWS

SingleStore is hiring a Remote Junior Solutions Engineer

Junior SE Position Overview

We are looking for a SingleStore Solutions Engineer who is passionate about removing data bottlenecks for customers and enabling real-time data capabilities for some of the most difficult data challenges in the industry. In this role you will work directly with our sales teams and channel partners to identify prospective and current customer pain points where SingleStore can remove those bottlenecks and deliver real-time capabilities. You will provide value-based demonstrations and presentations, and support proofs of concept to validate proposed solutions.

As a SingleStore solutions engineer, you must share our passion for real-time data, fast analytics, and simplified data architecture. You must be comfortable in both high executive conversations as well as being able to deeply understand the technology and its value-proposition.

About our Team

At SingleStore, the Solutions Engineer team epitomizes a dynamic blend of innovation, expertise, and a fervent commitment to meeting complex data challenges head-on. This team is composed of highly skilled individuals who are not just adept at working with the latest technologies but are also instrumental in ensuring that SingleStore is the perfect fit for our customers.

Our team thrives on collaboration and determination, building some of the most cutting-edge deployments of SingleStore data architectures for our most strategic customers. This involves working directly with product management to ensure that our product is not only addressing current data challenges but is also geared up for future advancements.

Beyond the technical prowess, our team culture is rooted in a shared passion for transforming how businesses leverage data. We are a community of forward-thinkers, where each member's contribution is valued in our collective pursuit of excellence. Our approach combines industry-leading engineering, visionary design, and a dedicated customer success ethos to shape the future of database technology. In our team, every challenge is an opportunity for growth, and we support each other in our continuous learning journey. At SingleStore, we're more than a team; we're innovators shaping the real-time data solutions of tomorrow.

Responsibilities

  • Engage with both current and prospective clients to understand their technical and business challenges
  • Present and demonstrate the SingleStore product offering to Fortune 500 companies.
  • Stay enthusiastic about the data analytics and data engineering landscape
  • Provide valuable feedback to product teams based on client interactions
  • Stay up to date with database technologies and the SingleStore product offerings

 

Qualifications

  • Excellent presentation and communication skills, with experience presenting to large corporate organizations
  • Ability to communicate complex technical concepts to non-technical audiences.
  • Strong team player with interpersonal skills
  • Broad range of experience within large-scale database and/or data warehousing technologies
  • Experience with data engineering tools such as Apache Spark, Apache Flink, and Apache Airflow
  • Demonstrated proficiency in ANSI SQL query languages
  • Demonstrated proficiency in Python, Scala or Java
  • Understanding of private and public cloud platforms such as AWS, Azure, GCP, VMware

SingleStore delivers the cloud-native database with the speed and scale to power the world’s data-intensive applications. With a distributed SQL database that introduces simplicity to your data architecture by unifying transactions and analytics, SingleStore empowers digital leaders to deliver exceptional, real-time data experiences to their customers. SingleStore is venture-backed and headquartered in San Francisco with offices in Sunnyvale, Raleigh, Seattle, Boston, London, Lisbon, Bangalore, Dublin and Kyiv. 

Consistent with our commitment to diversity & inclusion, we value individuals with the ability to work on diverse teams and with a diverse range of people.

Please note that SingleStore's COVID-19 vaccination policy requires that team members in the United States be up to date with the current CDC guidelines for their vaccinations with one of the United States FDA-approved vaccine options to meet in person for SingleStore business or to work from one of our U.S. office locations. [It is expected that this will be a requirement for this role]. If an exemption and/or accommodation to our vaccination policy is requested, a member of the Human Resources department will be available to begin the interactive accommodation process.

To all recruitment agencies: SingleStore does not accept agency resumes. Please do not forward resumes to SingleStore employees. SingleStore is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.

#li-remote #remote-li 

SingleStore values individuals for their unique skills and experiences, and we’re proud to offer roles in a variety of locations across the United States. Salary is based on permissible, non-discriminatory factors such as skills, experience, and geographic location, and is just one part of our total compensation and benefits package. Certain roles are also eligible for additional rewards, including merit increases and annual bonuses. 

Our benefits package for this role includes: stock options, flexible paid time off, monthly three-day weekends, 14 weeks of fully-paid gender-neutral parental leave, fertility and adoption assistance, mental health counseling, 401(k) retirement plan, and rich health insurance offerings—including medical, dental, vision and life and disability insurance. 

SingleStore’s base salary range for this role, if based in California, Colorado, Washington, or New York City is: $X - $X USD per year

For candidates residing in California, please see our California Recruitment Privacy Notice. For candidates residing in the EEA, UK, and Switzerland, please see our EEA, UK, and Swiss Recruitment Privacy Notice.

 

Apply for this job

+30d

Data Engineer Cloud GCP

DevoteamTunis, Tunisia, Remote
airflowsqlscrum

Devoteam is hiring a Remote Data Engineer Cloud GCP

Job Description

Within the "Plateforme Data" division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize data processing and ELT performance using Airflow, DBT, and BigQuery.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
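As a rough illustration of the pipeline orchestration work described above: a scheduler such as Airflow essentially runs tasks in dependency order, so that extraction finishes before loading, loading before transformation, and so on. The sketch below uses plain standard-library Python (not the Airflow API), and the task names are invented for illustration only:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ELT pipeline: each task maps to the set of tasks it
# depends on, mirroring how an Airflow DAG might wire
# extract -> load -> transform -> quality checks -> dashboard refresh.
pipeline = {
    "extract_sources": set(),
    "load_to_bigquery": {"extract_sources"},
    "dbt_transform": {"load_to_bigquery"},
    "data_quality_checks": {"dbt_transform"},
    "refresh_looker": {"data_quality_checks"},
}

# An orchestrator executes tasks in an order where every upstream
# dependency completes before its downstream task starts.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

Because this example is a single linear chain, the resulting order is deterministic; in a real DAG with branching, any valid topological order may be produced, and the orchestrator can run independent branches in parallel.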

 

    Qualifications

    Skills

    What will help you join the team?

    A five-year engineering degree (Bac+5) or equivalent university degree, with a specialization in computer science.

    • At least 4 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
    • Advanced SQL proficiency for data optimization and processing.
    • Google Professional Data Engineer certification is a plus.
    • Excellent written and verbal communication (high-quality deliverables and reporting).

    So, if you want to progress, learn, and share, join us!

    See more jobs at Devoteam

    Apply for this job

    +30d

    Senior Backend Developer

    DevoteamVilnius, Lithuania, Remote
    DevOPSnosqlairflowazurec++c#AWSbackend

    Devoteam is hiring a Remote Senior Backend Developer

    Job Description

    Imagine being part of one of the most successful IT companies in Europe and finding innovative solutions to technical challenges. Turn imagination into reality and apply for this exciting career opportunity at Devoteam.

    Job Highlights:  

    • Joining more than 10,000 talented colleagues around Europe
    • International career opportunity   
    • Cozy environment in Vilnius and Kaunas offices   

    Your Highlights?   

    • You have extreme ownership and proven experience in leading complex projects and tasks  
    • You have a strong sense of honesty, responsibility and reliability  
    • You are ready to step out of your comfort zone and constantly work on improving your soft and hard skills
    • You are an excellent team player and always ready to assist your colleagues  

    Still with us? Then we might have a fantastic job opportunity for you!  

    OUR NEW SENIOR BACKEND DEVELOPER  

    As a Senior Backend Developer you will get to work on innovating Cloud-based software with the goal of full-scale autonomy when operating Public Clouds. You will also provide technical leadership and mentorship to other team members. 

    SOME OF YOUR RESPONSIBILITIES:  

    • Develop Cloud-based software backend, ensuring high performance and scalability;
    • Help create, maintain and follow best practices for development work including coding, testing, source control, build automation, continuous deployment and continuous delivery  
    • Contribute to software engineering excellence by mentoring other team members  
    • Participate in the development process relying on your technical expertise  
    • Keep up to date on technology innovations 

    SOME OF OUR REQUIREMENTS:  

    • Deep knowledge of software development methodologies and the ability to quickly adapt to new languages and platforms when needed
    • Extensive development experience in C#/.NET
    • Experience writing REST APIs
    • Experience developing multi-tenant applications
    • Experience with DevOps, including CI/CD and configuration management
    • Experience in cloud application development for one of the major cloud providers (Microsoft Azure, AWS, or GCP)
    • Experience working with source code repository management systems such as GitHub and Azure DevOps

    It would be awesome, if you have:  

    • Understanding of AI and Machine Learning  
    • Experience in building and supporting complex distributed  SaaS solutions  
    • Solid knowledge of database technologies (RDBMS and NoSQL)  
    • Familiarity with data engineering tools such as Databricks and BigQuery is a plus
    • Experience with workflow orchestration engines, e.g. Cadence, Temporal, Airflow, AWS Step Functions, etc.

    WHAT YOU CAN LOOK FORWARD TO:   

    • Creating a purposeful set of software built on a modern tech stack
    • Becoming a part of very specialized team who will support your ability to succeed  
    • A challenging and exciting career with an international perspective and opportunities  
    • Attractive compensation package with a mix of fixed and variable  
    • High level of trust and competency to make your own decisions  
    • A warm and talented culture with a focus on business, but knowing that family always comes first  
    • Access to an international network of specialists within the organization to build your reputation and skills
    • Salary from 4,800 EUR gross (depending on experience and competencies)


    At  Devoteam we have created a culture of honesty and transparency, inclusion, and cooperation which we value a lot. We are looking for colleagues, who are highly motivated and proactive, not afraid of challenges. We are highly invested in the career path development of our employees, and we offer and support possibilities for further training, certification and specialization.   

    Qualifications

    See more jobs at Devoteam

    Apply for this job

    +30d

    Senior Machine Learning Engineer (Remote)

    AgeroRemote
    S35 years of experienceairflowsqlB2BDesignc++pythonAWS

    Agero is hiring a Remote Senior Machine Learning Engineer (Remote)

    About Agero:

    Wherever drivers go, we’re leading the way. Agero’s mission is to rethink the vehicle ownership experience through a powerful combination of passionate people and data-driven technology, strengthening our clients’ relationships with their customers. As the #1 B2B, white-label provider of digital driver assistance services, we’re pushing the industry in a new direction, taking manual processes, and redefining them as digital, transparent, and connected. This includes: an industry-leading dispatch management platform powered by Swoop; comprehensive accident management services; knowledgeable consumer affairs and connected vehicle capabilities; and a growing marketplace of services, discounts and support enabled by a robust partner ecosystem. The company has over 150 million vehicle coverage points in partnership with leading automobile manufacturers, insurance carriers and many others. Managing one of the largest national networks of service providers, Agero responds to approximately 12 million service events annually. Agero, a member company of The Cross Country Group, is headquartered in Medford, Mass., with operations throughout North America. To learn more, visit https://www.agero.com/.

    Job Description:

    This Senior Machine Learning Engineer will be one of the founding members of the Machine Learning Team, which pioneers the path of bringing machine learning into our daily operations, with a chance to touch the lives of millions of drivers getting the help they need. We are looking for you to bring your existing machine learning experience as well as a desire to learn and grow with the role. You will join a team responsible for achieving the highest customer satisfaction while minimizing costs across millions of roadside assistance dispatches each year. The ideal candidate brings experience in Python, a drive and passion for machine learning systems, and hits the gas pedal to help supercharge the team. What better way to do that than with the Data Science and Analytics team as part of your pit crew? Are you a machine learning driver who can get in the driver's seat and drive? Are you someone who can recommend strategies for improving system applications and services, with a focus on ease of deployment, security, reliability, stability, availability, and performance, and are you up for a challenge?


    Key Outcomes:

    • Deliver products/systems through their full life cycle, from idea conception, technical planning, implementation, launch, measurement, and maintenance/iteration.
    • Contribute to developing a strong culture of quality, availability, and security through attention to detail and by supporting industry leading best practices.
    • Drive optimal solution design by collaborating with product owners, architects, operations, client services, and cross-functional teams to move fast on creating solutions to client and business problems, and identify and act on new opportunities.

    Qualifications:

    • CS or Engineering related Degree
    • Experience building large, complex systems, particularly web services, RESTful APIs, and continuous integration and delivery
    • 3-5 years of software application development and design experience
    • 3-5 years of experience with AWS services (ECR, S3, SageMaker)
    • 5+ years of experience in Python (Pandas, NumPy, Scikit-learn)
    • 3-5 years of experience working with machine learning systems
    • Understand fundamental design principles behind a scalable application
    • Excellent communication skills, with the ability to interact with other teams
    • Motivated to understand the nuances of data and its impact on the business

    Nice to Haves:

    • Experience with tree based models
    • Experience with DVC
    • Experience with Airflow
    • SQL

    Hiring In:

    • United States:  AZ, FL, GA, NH, IL, KY, MA, MI, NC, NM, TN, VA 
    • Canada: Province of Ontario

     

    D, E & I Mission & Culture at Agero:

    We are all Change Drivers at Agero. Each day, we speak to thousands of drivers and tow professionals across one of the most diverse countries in the world. Our mission to safeguard drivers on the road, strengthen our clients’ relationships with their drivers, and support the communities we live and work in unites us together as one force driving positive change.

    The road to positive change starts inside Agero. In celebrating each other’s differences, we lift each other up and create space for innovation and community. Bringing our whole selves to work powers our commitment, drive, agility, and courage - ensuring we are not only changing the landscape of the driver services industry, we also are making a difference in the lives of our customers with each call, chat, and rescue.

    THIS DESCRIPTION IS NOT INTENDED TO BE A COMPLETE STATEMENT OF JOB CONTENT, RATHER TO ACT AS A GUIDE TO THE ESSENTIAL FUNCTIONS PERFORMED. MANAGEMENT RETAINS THE DISCRETION TO ADD TO OR CHANGE THE DUTIES OF THE POSITION AT ANY TIME.

    To review Agero's privacy policy, click the link: https://www.agero.com/privacy.

    ***Disclaimer: Agero is committed to creating a diverse and inclusive environment and encourages applications from all qualified candidates. Accommodation is available. Additionally, we offer accommodation for applicants with disabilities in our recruitment processes. If you require accommodation during the recruitment process, please contact recruiting@agero.com.

    ***Agero communicates with candidates via text for matters related to submitted applications, questions, and availability for interviews. If you prefer not to receive texts, you can contact Agero's recruiting team directly at recruiting@agero.com.

    See more jobs at Agero

    Apply for this job

    +30d

    Engineering Manager (Data Engineering)

    DevOPSredis4 years of experienceagileMaster’s DegreeBachelor's degreescalanosqlairflowpostgressqlc++

    SecurityScorecard is hiring a Remote Engineering Manager (Data Engineering)

    About SecurityScorecard:

    SecurityScorecard is the global leader in cybersecurity ratings, with over 12 million companies continuously rated, operating in 64 countries. Founded in 2013 by security and risk experts Dr. Alex Yampolskiy and Sam Kassoumeh and funded by world-class investors, SecurityScorecard’s patented rating technology is used by over 25,000 organizations for self-monitoring, third-party risk management, board reporting, and cyber insurance underwriting; making all organizations more resilient by allowing them to easily find and fix cybersecurity risks across their digital footprint. 

    Headquartered in New York City, our culture has been recognized by Inc Magazine as a "Best Workplace," by Crain's NY as a "Best Places to Work in NYC," and as one of the 10 hottest SaaS startups in New York for two years in a row. Most recently, SecurityScorecard was named to Fast Company's annual list of the World's Most Innovative Companies for 2023 and to the Achievers 50 Most Engaged Workplaces in 2023 award recognizing "forward-thinking employers for their unwavering commitment to employee engagement."  SecurityScorecard is proud to be funded by world-class investors including Silver Lake Waterman, Moody's, Sequoia Capital, GV and Riverwood Capital.

    About the Team

    The Product Engineering organization is responsible for building, maintaining, and improving the end-user experiences that comprise our software platform. These experiences are full-stack solutions built atop a complex and robust data platform in order to support key security workflows for both third-party risk management and external attack surface management personas.

    About the Role

    We are seeking a seasoned Software Engineering Manager to lead a team of talented engineers in developing and delivering innovative cybersecurity solutions. The ideal candidate will have a strong technical background, proven leadership skills, and a passion for driving high-quality software development in a fast-paced environment.

    This is a hybrid role that will require being on-site at least once a week in our Midtown New York office.

    Responsibilities:

    • Team Leadership: Manage and mentor a team of software engineers to cultivate a collaborative and inclusive team culture. Offer guidance on career development and professional growth.
    • Project Management: Oversee the planning, execution, and delivery of software projects, ensuring they are completed on time, within scope, and with high quality. Coordinate with cross-functional teams to align on priorities and dependencies.
    • Technical Oversight: Provide technical direction and architectural guidance to the engineering team. Ensure the implementation of best practices in software development, including code reviews, testing, and continuous integration.
    • Process Improvement: Identify and implement process improvements to enhance team productivity, efficiency, and software quality. Promote the adoption of agile methodologies and DevOps practices.
    • Stakeholder Collaboration: Work closely with product managers, designers, and other stakeholders to define and translate project requirements into actionable plans. Communicate project status, risks, and issues to all relevant parties.
    • Innovation: Stay current with industry trends and emerging technologies. Encourage innovation within the team to continuously improve our products and processes.

    Qualifications:

    • Education: Bachelor's or Master's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
    • Experience: At least 10 years of experience in software engineering, with a minimum of 3 years in a management role.
    • Technical Skills: Strong proficiency in JavaScript, TypeScript, React, Redis, and Postgres. Deep experience with modern software development practices (e.g., Agile, DevOps). Experience with OpenSearch and ClickHouse is desirable.
    • Leadership: Demonstrated ability to lead and inspire a team, with a proven track record of successfully delivering complex software projects.
    • Communication: Excellent verbal and written English communication skills, with the ability to effectively convey technical concepts to non-technical stakeholders.
    • Problem-Solving: Strong analytical and problem-solving skills, focusing on delivering practical and scalable solutions.
    • Security Knowledge: Familiarity with cybersecurity principles and practices is highly desirable.

    Benefits:

    Specific to each country, we offer a competitive salary, stock options, health benefits, unlimited PTO, parental leave, tuition reimbursement, and much more!

    The estimated salary range for this position is $130,000-150,000. Actual compensation for the position is based on a variety of factors, including, but not limited to affordability, skills, qualifications and experience, and may vary from the range. In addition to base salary, employees may also be eligible for annual performance-based incentive compensation awards and equity, among other company benefits.

    SecurityScorecard is committed to Equal Employment Opportunity and embraces diversity. We believe that our team is strengthened through hiring and retaining employees with diverse backgrounds, skill sets, ideas, and perspectives. We make hiring decisions based on merit and do not discriminate based on race, color, religion, national origin, sex or gender (including pregnancy) gender identity or expression (including transgender status), sexual orientation, age, marital, veteran, disability status or any other protected category in accordance with applicable law. 

    We also consider qualified applicants regardless of criminal histories, in accordance with applicable law. We are committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need assistance or accommodation due to a disability, please contact talentacquisitionoperations@securityscorecard.io.

    Any information you submit to SecurityScorecard as part of your application will be processed in accordance with the Company’s privacy policy and applicable law. 

    SecurityScorecard does not accept unsolicited resumes from employment agencies.  Please note that we do not provide immigration sponsorship for this position. #LI-DNI

    See more jobs at SecurityScorecard

    Apply for this job

    +30d

    Senior Software Engineer, Ads

    InstacartUnited States - Remote
    Bachelor's degreescalaairflowsqlDesignpython

    Instacart is hiring a Remote Senior Software Engineer, Ads

    We're transforming the grocery industry

    At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

    Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

    Instacart is a Flex First team

    There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

    ABOUT THE ROLE

    Are you ready to take your development skills to the next level? We’re looking for a Senior Software Engineer to join our Ads team. You’ll play a critical role in the evolution of our Ads suite and help build world-class reporting solutions across various platforms, ensuring that advertisers and retailers receive timely, accurate, and actionable data insights. By working closely with Product Designers, Product Managers, Data Scientists, Machine Learning Engineers, and other cross-functional partners, you’ll contribute to the advancement of our Ads suite and guarantee a seamless flow of data to our users.

    The Instacart Ads team is at the forefront of refining our Ads products and supporting infrastructure, so your work will directly enhance our capability to process petabyte-scale data and deliver reports essential for billing, strategic decision-making, and partner management. Our products are used by millions of people every year. To meet–and exceed–expectations we are rapidly improving and modernizing our ads platform, helping raise the quality bar for our products across the entire organization. Sound exciting? Keep reading.

     

    ABOUT THE TEAM

    The Ads team is a diverse group of spirited and highly-dedicated engineers focused on crafting and delivering comprehensive reporting solutions to our advertisers and retailers.

    Our team thrives on dynamic challenges, and we take pride in developing and maintaining scalable and fault-tolerant metrics delivery systems. We've embraced a culture of open and candid collaboration where everyone's views matter, allowing us to continuously innovate and make substantial impact to the digital advertising industry through our work.

    Our tech stack includes but is not limited to Rails, Go, DBT, Airflow, Scala, Apache Spark, Databricks, Delta Lake, Snowflake, Python and Terraform. We believe in constantly learning, growing and adopting the most efficient practices that enable us to deliver quality data services to our stakeholders. If you're a detective at heart, love solving complex problems, and are passionate about the intersection of data and technology, you'll fit right in!

     

    Overview of the Ads teams that are currently hiring: 

    • Ads Measurement & Data: The Ads Measurement & Data team is focused on developing scalable and fault-tolerant data processing systems and delivering comprehensive reporting solutions to our advertisers and retailers.
    • Ads Manager: The Ads Manager team is responsible for the tool that advertisers use to manage their ad campaign data and their overall brand presence on Instacart.

     

    ABOUT THE JOB

    We believe that high-quality data is essential for any business organization, and as such we are looking for a strong software engineer excited to raise our efficiency, quality, and scalability bar. You will have extensive ownership and the opportunity to help set best practices and contribute to product and infrastructure features.

    As a craft leader, you'll be responsible for contributing to the vision, strategy and development of our multi-platform reporting system that is efficient, scalable, and meets diverse user needs. You will advocate for data quality, correctness, scalability and latency standards to ensure consistency in how we enable data-driven decisions and features across the organization. 

    You will also be proactive in spearheading new initiatives, coding and documenting components, writing and reviewing system design documents, and partnering with other teams and functions to gather and understand our customers' requirements. You will think and plan strategically for short- and long-term initiatives to continue shaping our platform and products.

     

    MINIMUM QUALIFICATIONS

    • Bachelor's degree or higher in Computer Science, Software Engineering, or a related field, or equivalent proven industry experience (4+ years).
    • 5+ years of experience in software engineering.
    • Comprehensive understanding of distributed systems; proven experience with data processing technologies such as DBT and Airflow, and with common web frameworks such as Rails.
    • Highly proficient with SQL, capable of writing and reviewing complex queries for data analysis and debugging.
    • You can design for scale with the entire system in mind.
    • Solid communicator, comfortable seeking and receiving feedback.
    • Strong analytical and debugging skills.
    • Strong sense of ownership working with a large codebase and diverse suite of products.
    • A collaborative mindset, able to partner with engineers, designers, and PMs from multiple teams to co-create impactful solutions while supporting system contributions.
    • Strong organizational skills with the ability to communicate and present ideas clearly and influence key stakeholders at the manager, director, and VP level.
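
    For a sense of the SQL bar described above (writing and reviewing queries for analysis and debugging), here is a minimal, self-contained sketch using sqlite3 from the Python standard library; the ad-spend schema and rows are invented for illustration and are not Instacart's data model.

```python
# Minimal, self-contained SQL example (sqlite3 from the standard library).
# The ad-spend schema and rows below are invented purely for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE impressions (campaign TEXT, cost REAL);
    INSERT INTO impressions VALUES
        ('spring_sale', 1.50), ('spring_sale', 2.25), ('brand', 0.75);
""")

# Aggregate spend per campaign -- the kind of query used for reporting
# and for debugging metric discrepancies.
rows = con.execute("""
    SELECT campaign, COUNT(*) AS n, ROUND(SUM(cost), 2) AS total_cost
    FROM impressions
    GROUP BY campaign
    ORDER BY total_cost DESC
""").fetchall()
print(rows)
```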

     

    PREFERRED QUALIFICATIONS

    • Prior work experience in the digital advertising industry.
    • Experience with big data technologies such as Spark, Hadoop, Flink, Hive or Kafka, and with both streaming and batching data pipelines.
    • Proven experience with distributed system design.
    • Strong general programming and algorithm skills.
    • Strong attention to detail and accuracy in the implementation, keen eye for edge cases and code reviews. 
    • Data driven mindset.

    Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

    Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

    For US based candidates, the base pay ranges for a successful candidate are listed below.

    CA, NY, CT, NJ
    $192,000 - $245,000 USD
    WA
    $184,000 - $235,000 USD
    OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI
    $176,000 - $225,000 USD
    All other states
    $159,000 - $203,000 USD

    See more jobs at Instacart

    Apply for this job

    +30d

    Data Engineer H/F

    Socotec | Palaiseau, France, Remote
    S3, Lambda, NoSQL, Airflow, SQL, Git, Kubernetes, AWS

    Socotec is hiring a Remote Data Engineer H/F

    Job Description

    SOCOTEC Monitoring France, a leader in inspection and certification, provides services across the construction, infrastructure, and industrial sectors.

    The SOCOTEC Data & AI Hub, made up of Data Engineering and Data Science specialists, is responsible not only for managing and optimizing data, but also for implementing data processing and analysis. We develop data-driven applications to support SOCOTEC's business activities.

    We are looking for a Data Engineer on a work-study contract (alternance) to join our SOCOTEC Data team.

    As part of the team, you will actively contribute to maintaining and optimizing our data lake, as well as creating and updating data flows. You will be responsible for documenting and validating these flows, and for building and rolling out reporting tools such as Power BI. You will also propose new solutions, take part in technical qualifications, and contribute to the continuous improvement of our data infrastructure.

     

    You will work on three main missions:

    • Within the Socotec Monitoring France entity (20%), you will help define the optimal data strategy for Socotec Monitoring (structuring, processes, open data, purchase of external data)
    • On behalf of the Socotec group (60%), you will take part in building the worldwide Data Lake. Your goal will be to develop data flows for analysis, in collaboration with the BI and Data Science teams. You will learn to organize and schedule data extraction, transformation, and loading flows while guaranteeing their reliability, availability, etc.
    • With clients (20%), you will take part in end-to-end project delivery: data collection, preprocessing pipelines, modeling, and deployment.

    You will demonstrate autonomy, sound judgment, and strong skills in writing and communicating code and technical documentation.

    Tech stack used:

    • Amazon Web Services (AWS)
    • Apache Airflow as the scheduler
    • Spark for ETL pipelines
    • GitLab for source version control
    • Kubernetes
    • DeltaLake
    • S3
    • Metadata management with OpenMetadata
    • Power BI as the BI tool, managed jointly with the BI teams
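
    As an illustration of what an orchestrator like Airflow does in a stack like this, namely running pipeline steps in dependency order, here is a minimal sketch using only the Python standard library; the task names are hypothetical, not Socotec's actual pipelines.

```python
# Dependency-ordered pipeline sketch using only the standard library.
# Task names are hypothetical; in production each would be an Airflow
# task in a DAG, and Airflow's scheduler would enforce this ordering.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dag = {
    "extract_s3": set(),
    "transform_spark": {"extract_s3"},
    "load_delta": {"transform_spark"},
    "refresh_power_bi": {"load_delta"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # upstream tasks always come before their dependents
```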

    Qualifications

    • Master's degree in Big Data or an engineering degree in computer science with a strong interest in data
    • Proficiency with SQL and NoSQL databases and the associated concepts
    • Knowledge of the Big Data stack (Airflow, Spark, Hadoop)
    • Experience with collaborative development tools (Git, GitLab, Jupyter Notebooks, etc.)
    • Knowledge of AWS services (Lambda, EMR, S3) appreciated
    • Strong interest in innovative technologies
    • Team spirit
    • Fluent English, including a good technical level

    See more jobs at Socotec

    Apply for this job

    +30d

    [ALTERNANCE] - Data Engineer H/F

    Socotec | Palaiseau, France, Remote
    S3, Lambda, NoSQL, Airflow, SQL, Git, Kubernetes, AWS

    Socotec is hiring a Remote [ALTERNANCE] - Data Engineer H/F

    Job Description

    We are looking for a Data Engineer on a work-study contract (alternance) to join our SOCOTEC Data team.

    Supported by a Data Engineer, you will actively contribute to maintaining and optimizing our data lake, as well as creating and updating data flows. You will also be responsible for documenting and validating these flows, and for building and rolling out reporting tools such as Power BI.

    You will work on three main missions:

    • Within the Socotec Monitoring France entity (10%), you will help define the optimal data strategy for Socotec Monitoring (structuring, processes, open data, purchase of external data)
    • On behalf of the Socotec group (70%), you will take part in building the worldwide Data Lake. Your goal will be to develop data flows for analysis, in collaboration with the BI and Data Science teams. You will learn to organize and schedule data extraction, transformation, and loading flows while guaranteeing their reliability, availability, etc.
    • With clients (20%), you will take part in end-to-end project delivery: data collection, preprocessing pipelines, modeling, and deployment.

     

    Tech stack used:

    • Amazon Web Services (AWS)
    • Apache Airflow as the scheduler
    • Spark for ETL pipelines
    • GitLab for source version control
    • Kubernetes
    • DeltaLake
    • Amazon S3
    • Metadata management with OpenMetadata
    • Power BI for business intelligence, in collaboration with the BI teams

     

    We are looking for a motivated, rigorous candidate with a passion for data, ready to fully invest in ambitious projects and to grow their skills within a dynamic, innovative team. Join us for a rewarding experience that could turn into a permanent contract (CDI) at the end of your work-study program.

    Qualifications

    • Master's degree in Big Data or an engineering degree in computer science with a strong interest in data
    • Proficiency with SQL and NoSQL databases and the associated concepts
    • Knowledge of the Big Data stack (Airflow, Spark, Hadoop)
    • Experience with collaborative development tools (Git, GitLab, Jupyter Notebooks, etc.)
    • Knowledge of AWS (Lambda, EMR, S3) appreciated
    • Strong interest in innovative technologies
    • Team spirit
    • Fluent English, including technical English

    See more jobs at Socotec

    Apply for this job

    +30d

    Strategic Americas SE

    SingleStore | Remote, United States
    Sales, Scala, Airflow, SQL, Design, Azure, Java, Python, AWS

    SingleStore is hiring a Remote Strategic Americas SE

    Senior SE Position Overview

    We are looking for a SingleStore Senior Solutions Engineer who is passionate about removing data bottlenecks for their customers and enabling real-time data capabilities to some of the most difficult data challenges in the industry. In this role you will work directly with our sales teams, and channel partners to identify prospective and current customer pain points where SingleStore can remove those bottlenecks and deliver real-time capabilities. You will provide value-based demonstrations, presentations, and support proof of concepts to validate proposed solutions.

    As a SingleStore solutions engineer, you must share our passion for real-time data, fast analytics, and simplified data architecture. You must be comfortable in both high executive conversations as well as being able to deeply understand the technology and its value-proposition.

    About our Team

    At SingleStore, the Senior Solutions Engineer team epitomizes a dynamic blend of innovation, expertise, and a fervent commitment to meeting complex data challenges head-on. This team is composed of highly skilled individuals who are not just adept at working with the latest technologies but are also instrumental in ensuring that SingleStore is the perfect fit for our customers.

    Our team thrives on collaboration and determination, building some of the most cutting-edge deployments of SingleStore data architectures for our most strategic customers. This involves working directly with product management to ensure that our product is not only addressing current data challenges but is also geared up for future advancements.

    Beyond the technical prowess, our team culture is rooted in a shared passion for transforming how businesses leverage data. We are a community of forward-thinkers, where each member's contribution is valued in our collective pursuit of excellence. Our approach combines industry-leading engineering, visionary design, and a dedicated customer success ethos to shape the future of database technology. In our team, every challenge is an opportunity for growth, and we support each other in our continuous learning journey. At SingleStore, we're more than a team; we're innovators shaping the real-time data solutions of tomorrow.

    Responsibilities

    • Engage with both current and prospective clients to understand their technical and business challenges
    • Present and demonstrate the SingleStore product offering to Fortune 500 companies.
    • Enthusiastic about the data analytics and data engineering landscape
    • Provide valuable feedback to product teams based on client interactions
    • Stay up to date with database technologies and the SingleStore product offerings

     

    Qualifications

    • Minimum of 3 years experience in a technical pre-sales role
    • Excellent presentation and communication skills, with experience presenting to large corporate organizations
    • Ability to communicate complex technical concepts for non-technical audiences.
    • Strong team player with interpersonal skills
    • Broad range of experience within large-scale database and/or data warehousing technologies
    • Experience with data engineering tools such as Apache Spark, Apache Flink, and Apache Airflow
    • Demonstrated proficiency in ANSI SQL query languages
    • Demonstrated proficiency in Python, Scala or Java
    • Understanding of private and public cloud platforms such as AWS, Azure, GCP, VMware

    SingleStore delivers the cloud-native database with the speed and scale to power the world’s data-intensive applications. With a distributed SQL database that introduces simplicity to your data architecture by unifying transactions and analytics, SingleStore empowers digital leaders to deliver exceptional, real-time data experiences to their customers. SingleStore is venture-backed and headquartered in San Francisco with offices in Sunnyvale, Raleigh, Seattle, Boston, London, Lisbon, Bangalore, Dublin and Kyiv. 

    Consistent with our commitment to diversity & inclusion, we value individuals with the ability to work on diverse teams and with a diverse range of people.

    Please note that SingleStore's COVID-19 vaccination policy requires that team members in the United States be up to date with the current CDC guidelines for their vaccinations with one of the United States FDA-approved vaccine options to meet in person for SingleStore business or to work from one of our U.S. office locations. [It is expected that this will be a requirement for this role]. If an exemption and/or accommodation to our vaccination policy is requested, a member of the Human Resources department will be available to begin the interactive accommodation process.

    To all recruitment agencies: SingleStore does not accept agency resumes. Please do not forward resumes to SingleStore employees. SingleStore is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.

    #li-remote #remote-li 

    SingleStore values individuals for their unique skills and experiences, and we’re proud to offer roles in a variety of locations across the United States. Salary is based on permissible, non-discriminatory factors such as skills, experience, and geographic location, and is just one part of our total compensation and benefits package. Certain roles are also eligible for additional rewards, including merit increases and annual bonuses. 

    Our benefits package for this role includes: stock options, flexible paid time off, monthly three-day weekends, 14 weeks of fully-paid gender-neutral parental leave, fertility and adoption assistance, mental health counseling, 401(k) retirement plan, and rich health insurance offerings—including medical, dental, vision and life and disability insurance. 

    For candidates residing in California, please see our California Recruitment Privacy Notice. For candidates residing in the EEA, UK, and Switzerland, please see our EEA, UK, and Swiss Recruitment Privacy Notice.

     

    Apply for this job

    +30d

    Lead Data Analyst, Operations, EMR

    hims & hers | Remote
    Bachelor's degree, Tableau, Airflow, SQL, Design, C++, Python

    hims & hers is hiring a Remote Lead Data Analyst, Operations, EMR

    Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

    Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

    ​​About the Role:

    As a Lead Analyst, EMR you will help evolve our electronic medical record (EMR) product to the next level. Your knowledge of experimentation, statistical analyses, and cross-functional efforts combined with your excellent communication skills will determine where and how to improve our internal & external customer experiences. In particular, you will be challenged to identify positive and negative changes in our marketplace while working with the team to hypothesize and identify underlying root cause(s) of key metric fluctuations. At the end of the day, you are passionate about metrics and the performance of the business.

    You Will:

    • Own analytics for the EMR Product: conduct data analysis to support new product launches, drive improvements to existing products and inform product roadmap while operating cross-functionally across product, engineering, operations, and other stakeholders
    • Be a thought leader and close partner to product - refine ambiguous questions, outline 2nd and 3rd order effects, and generate new hypotheses through a deep understanding of the data, our customers, and our business
    • Direct experimental design: identify parameters, determine success metrics, perform data validation and monitor progress
    • Define how our teams measure success, by developing Key Performance Indicators and other user/business metrics
    • Regularly present to senior leadership and the product organization on the trends and insights into the changes in the business. This includes the decomposition of the drivers of those changes and recommendations on actions to improve marketplace performance.
    • Design and build best-in-class self-service dashboards using a mixture of analytical and visualization techniques to influence business and product decisions
    • Collaborate with product, engineering, data engineering, and analytics to build and improve on the availability, integrity, accuracy, and reliability of data logging and data pipelines

    You Have:

    • Deep understanding of statistical methods and experiment designs for analytical problems
    • 5+ years of using SQL to extract and manipulate data
    • 3+ years of quant analysis and ​​analyzing A/B/n and multivariate experiments
    • 1+ year of coding in Python or R
    • Advanced knowledge of visualization tools such as Looker or Tableau
    • Bachelor's degree or an advanced degree in Statistics, Mathematics, or a related field
    • Comfort with ambiguity, evolving priorities, and an ability to operate autonomously
    • Ability to narrate insights for complex problems to non-technical audiences from frontline operators up to and including executives

    Preferred Experience & Skills:

    • Experience in two-sided marketplaces, operations, and/or healthcare
    • Project management experience
    • Model development and training (Predictive Modeling)
    • DBT, airflow, and Databricks experience
    • Experience operating cross-functionally with globally distributed teams

    Our Benefits (there are more but here are some highlights):

    • Competitive salary & equity compensation for full-time roles
    • Unlimited PTO, company holidays, and quarterly mental health days
    • Comprehensive health benefits including medical, dental & vision, and parental leave
    • Employee Stock Purchase Program (ESPP)
    • Employee discounts on hims & hers & Apostrophe online products
    • 401k benefits with employer matching contribution
    • Offsite team retreats

    #LI-Remote

    Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

    The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

    Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

    An estimate of the current salary range for US-based employees is
    $150,000 - $165,000 USD

    We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

    Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

    Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

    For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

    See more jobs at hims & hers

    Apply for this job

    +30d

    Distributed Cloud l Google Data Project

    Devoteam | Lisboa, Portugal, Remote
    Bachelor's degree, Terraform, Airflow, SQL, Azure, Java, Docker, Python, AWS

    Devoteam is hiring a Remote Distributed Cloud l Google Data Project

    Job Description

    Devoteam Distributed Cloud is our Google, AWS and Azure strategy and identity within the group Devoteam. We focus on developing solutions end to end within all the 3 major Cloud Platforms and its technologies.

    Our Devoteam Google Cloud Team is looking for a Cloud Data Engineer to join our Data Engineer specialists.

    • Delivery of Data projects more focused on the Engineering component;
    • Working with GCP Data Services such as BigQuery, Cloud Storage , Dataflow, Dataproc, Pub/Sub and Dataplex;
    • Write efficient SQL queries;
    • Develop data processing pipelines using programming frameworks like Apache Beam and CI/CD automatisms; 
    • Automate data engineering tasks;
    • Building and managing data pipelines, with a deep understanding of workflow orchestration, task scheduling, and dependency management;
    • Data Integration and Streaming, including data ingestion from various sources (such as databases, APIs, or logs) into GCP. 
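
    The pipeline development described above (chained transforms over ingested records, in the style of Apache Beam) can be sketched in plain Python; the record schema and processing steps below are illustrative assumptions, not project code.

```python
# Toy chained-transform pipeline mimicking the shape of an Apache Beam
# pipeline (Filter then combine-per-key) without the Beam dependency.
# The record schema and values are illustrative assumptions.
records = [
    {"source": "api", "user": "a", "amount": 10},
    {"source": "db", "user": "b", "amount": -3},   # invalid, filtered out
    {"source": "logs", "user": "a", "amount": 5},
]

def clean(rows):
    """Drop records that fail validation (non-positive amounts)."""
    return (r for r in rows if r["amount"] > 0)

def total_per_user(rows):
    """Combine-per-key step: sum amounts grouped by user."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0) + r["amount"]
    return totals

totals = total_per_user(clean(records))
print(totals)
```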

    Qualifications

    • Bachelor's degree in IT or a similar field;
    • 2+ years of professional experience in a data engineering role;
    • Experience with GCP Data Services;
    • Data warehousing knowledge;
    • Knowledge of programming languages such as Python, Java, and SQL (mandatory);
    • Experience with tools like Apache Airflow, Google Cloud Composer, or Cloud Data Fusion;
    • A code-review mindset;
    • Experience with Terraform, GitHub, Github Actions, Bash, and/or Docker;
    • Knowledge of streaming data processing using tools like Apache Kafka;
    • GCP certifications (a plus);
    • Proficiency in English (written and spoken).

    Note: All of the above qualifications are optional, but it is preferred that candidates have some of them.

    See more jobs at Devoteam

    Apply for this job

    +30d

    Senior Site Reliability Engineer

    Catalyst | Remote (US & Canada)
    Kotlin, Terraform, Airflow, Design, Ansible, Ruby, Java, Docker, Elasticsearch, PostgreSQL, Kubernetes, Linux, Python, AWS, Backend, Node.js

    Catalyst is hiring a Remote Senior Site Reliability Engineer

    Company Overview

    Totango + Catalyst have joined forces to build a leading customer growth platform that helps businesses protect and grow their revenue. Built by an experienced team of industry leaders, our software integrates with all the tools CS teams already use to provide one centralized view of customer data.  Our modern and intuitive dashboards help CS leaders develop impactful workflows and take the right actions to understand health, prevent churn, increase adoption, and drive expansion.

    Position Overview

    As a Senior Site Reliability Engineer at Totango + Catalyst, you will help shape our infrastructure and build the foundation our team relies on for the rapid delivery of our product. We’ll depend on you to instill best practices for building scalable distributed systems, emphasizing development experience, observability and fault tolerance. Our current stack consists of technologies such as Ruby on Rails, RDS, Elasticsearch, Java, and Kubernetes, and we are moving towards microservices and serverless.  If you thrive in a growth-stage startup environment and are looking for more ownership and the ability to have a significant impact, we would love to meet you.

    This role is open to candidates working remotely anywhere in Canada and the U.S.

    What You’ll Do

    • Manage our AWS infrastructure, with an emphasis on configuration as code.
    • Keep our site and our services up and running, or get them back up and running quickly when a failure occurs
    • Improve monitoring and work with developers to improve performance and reliability
    • Participate in technical design reviews and architecture planning
    • Debug complex problems across the entire stack and create solid solutions
    • Collaborate with product managers and developers to evolve our delivery pipeline
    • Work closely with internal partners and teams to ensure that we ship software that meets security, SLA, performance, and budget requirements
    • Help build our on-call policies and runbooks
    • Take ownership of projects and demonstrate a high level of accountability
    • Manage our data infrastructure and pipeline
    • Focus on quality, cost-effective scalability, and distributed system reliability and establish automated mechanisms

    Who You Are:

    • You are passionate about learning. Obstacles and challenges don’t deter you, you find these as opportunities to learn and grow.
    • You have a positive demeanor and a go-getter attitude! 
    • You are a strong team player. You collaborate well with others, and want to work together to solve common goals.
    • You are proactive in seeking opportunities to learn and identifying opportunities to improve our processes.



    What You’ll Need

    • 5+ years of experience building and maintaining cloud infrastructure for distributed production systems
    • 1+ year of experience as a backend engineer developing enterprise web applications
    • Excellent communication skills, both verbal and written
    • You know your way around a Unix/Linux shell, can write shell scripts, and understand Linux internals
    • Experience debugging complex problems
    • Experience designing, building, and operating large-scale production systems
    • Proficiency in Bash, Python, or other scripting languages
    • Experience in databases and data warehouses
    • Experience with security requirements for SOC2/ISO
    • FinOps experience
    • Strong Project Management skills
    • A strong desire to show ownership of problems you identify
    • Optional: CKAD, CKS, or CKA certification, or AWS certification exams

    Technologies You’ll Need

    • Demonstrated experience with configuration and orchestration tools such as Terraform, CloudFormation and Ansible
    • Experience with containers, such as Docker 
    • Experience with administering, securing, and optimizing Kubernetes clusters
    • Experience building monitoring, observability, logging, and developer tooling
    • Experience with Helm, Kustomize, ArgoCD, Grafana, Prometheus, Thanos, VictoriaMetrics, Cilium, Linkerd, Envoy, AWS App Mesh, CoreDNS
    • Experience creating CI/CD Pipelines for different coding languages
    • Experience with one or more: Ruby on Rails, Python, Java, Kotlin, Go, Node.js
    • Experience with version control systems like GitHub
    • Familiarity with AWS services, AWS best practices and securing AWS accounts
    • Experience operating and tuning data stores such as PostgreSQL and Elasticsearch
    • Experience with managing the infrastructure that backs data pipelines and data lakes such as Airflow
    • Experience managing streaming infrastructure such as Kafka or Kinesis

    Why You’ll Love Working Here!

    • Work from anywhere!
    • Highly competitive compensation package, including equity 
    • Comprehensive benefits, including up to 100% paid medical, dental, & vision insurance coverage for you & your loved ones
    • Open vacation policy, encouraging you to take the time you need
    • Monthly Mental Health Days and Mental Health Weeks twice per year 
    • Ability to influence and drive key technical and architectural decisions
    • High visibility and impact across the whole company

     

    Your base pay is one part of your total compensation package and is determined within a range. The base salary for this role is $140,000.00 to $175,000.00 per year. We take into account numerous factors in deciding on compensation, such as experience, job-related skills, relevant education or training, and other business and organizational requirements. The salary range provided corresponds to the level at which this position has been defined.

    Totango + Catalyst is an equal opportunity employer, meaning that we do not discriminate based on race, religion, national origin, gender identity, age, sexual orientation, or any other protected class. Diversity is more than just good intentions; we are committed to creating an inclusive environment for all employees.

    See more jobs at Catalyst

    Apply for this job

    +30d

    Senior Data Engineer, Core

    InstacartUnited States - Remote
    airflow, sql, Design

    Instacart is hiring a Remote Senior Data Engineer, Core

    We're transforming the grocery industry

    At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

    Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

    Instacart is a Flex First team

    There’s no one-size-fits-all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

    Overview

    At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers every year use Instacart to buy their groceries online, and the Data Engineering team builds the critical data pipelines that underpin the myriad ways data is used across Instacart to support our customers and partners.

     

    About the Role 

    Instacart’s Core Data Engineering team plays a critical role in defining and maintaining company-wide datasets, standardized for uniform, reliable, timely, and accurate insights from our data. This is a high-impact, high-visibility role owning critical data integration pipelines and models across all of Instacart’s products. This role is an exciting opportunity to join a key team shaping our most critical data.



    About the Team 

    Core Data Engineering is part of the Infrastructure Engineering pillar, working closely with data engineers, data scientists and senior leaders across the company on developing and standardizing critical company-wide datasets. Our team also collaborates closely with other data infrastructure teams on designing and building key data platforms, systems and tools to make everyone at Instacart more productive with data.



    About the Job 

    • You will be part of a team with a large amount of ownership and autonomy.
    • You will have large scope for company-level impact, working on critical data.
    • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
    • You will ship high quality, scalable and robust solutions with a sense of urgency.
    • You will have the freedom to suggest and drive organization-wide initiatives.




    About You

    Minimum Qualifications

    • 6+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
    • Expert knowledge of SQL and Python.
    • Experience building high quality ETL/ELT pipelines.
    • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
    • Adept at communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
    • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high quality and pragmatic solutions.
    • Experience working with cross-functional stakeholders on metric development, including data scientists, analysts, finance, and senior leaders.
    • Experience working with a large codebase on a cross-functional team.

     

    Preferred Qualifications

    • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering OR equivalent work experience.
    • Experience with Snowflake, dbt and Airflow
    • Experience with data quality monitoring/observability, either using custom frameworks or tools like Great Expectations, Monte Carlo or similar

    Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

    Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

    For US based candidates, the base pay ranges for a successful candidate are listed below.

    CA, NY, CT, NJ
    $192,000 - $213,000 USD
    WA
    $184,000 - $204,000 USD
    OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI
    $176,000 - $196,000 USD
    All other states
    $159,000 - $177,000 USD

    See more jobs at Instacart

    Apply for this job

    +30d

    Data Engineer

    Maker&Son LtdBalcombe, United Kingdom, Remote
    golang, tableau, airflow, sql, mongodb, elasticsearch, python, AWS

    Maker&Son Ltd is hiring a Remote Data Engineer

    Job Description

    We are looking for a highly motivated individual to join our team as a Data Engineer.

    We are based in Balcombe (40 minutes from London by train, 20 minutes from Brighton) and we will need you to be based in our offices at least 3 days a week.

    You will report directly to the Head of Data.

    Candidate Overview

    As a part of the Technology Team your core responsibility will be to help maintain and scale our infrastructure for analytics as our data volume and needs continue to grow at a rapid pace. This is a high impact role, where you will be driving initiatives affecting teams and decisions across the company and setting standards for all our data stakeholders. You’ll be a great fit if you thrive when given ownership, as you would be the key decision maker in the realm of architecture and implementation.

    Responsibilities

    • Understand our data sources, ETL logic, and data schemas and help craft tools for managing the full data lifecycle
    • Play a key role in building the next generation of our data ingestion pipeline and data warehouse
    • Run ad hoc analysis of our data to answer questions and help prototype solutions
    • Support and optimise existing ETL pipelines
    • Support technical and business stakeholders by providing key reports and supporting the BI team to become fully self-service
    • Own problems through to completion both individually and as part of a data team
    • Support digital product teams by performing query analysis and optimisation

     

    Qualifications

    Key Skills and Requirements

    • 3+ years’ experience as a data engineer
    • Ability to own data problems and help to shape the solution for business challenges
    • Good communication and collaboration skills; comfortable discussing projects with anyone from end users up to the executive company leadership
    • Fluency with a programming language - we use NodeJS and Python, and are looking to adopt Golang
    • Ability to write and optimise complex SQL statements
    • Familiarity with ETL pipeline tools such as Airflow or AWS Glue
    • Familiarity with data visualisation and reporting tools such as Tableau, Google Data Studio, or Looker
    • Experience working in a cloud-based software development environment, preferably with AWS or GCP
    • Familiarity with NoSQL databases such as Elasticsearch, DynamoDB, or MongoDB

    See more jobs at Maker&Son Ltd

    Apply for this job