Data Engineer Remote Jobs

113 Results

+30d

Senior Data Engineer

EquipmentShare | Remote; Chicago; Denver; Kansas City; Columbia, MO
Lambda, Agile, Airflow, SQL, Design, C++, PostgreSQL, Python, AWS

EquipmentShare is hiring a Remote Senior Data Engineer

Your role in our team

At EquipmentShare, we believe it’s more than just a job. We invest in our people and encourage you to choose the best path for your career. It’s truly about you, your future, and where you want to go.

We are looking for a Senior Data Engineer to help us continue to build the next evolution of our data platform in a scalable, performant, and customer-centric architecture.

Our main tech stack includes Snowflake, Apache Airflow, AWS cloud infrastructure (e.g., Kinesis, Kubernetes/EKS, Lambda, Aurora RDS PostgreSQL), Python, and TypeScript.

What you'll be doing

We are typically organized into agile cross-functional teams composed of Engineering, Product, and Design, which allows us to develop deep expertise and rapidly deliver high-value features and functionality to our next-generation T3 Platform.

You’ll be part of a close-knit team of data engineers developing and maintaining a data platform built with automation and self-service in mind, supporting analytics and machine learning data products for the next generation of our T3 Fleet that enable end users to track, monitor, and manage the health of their connected vehicles and deployed assets.

We'll be there to support you as you become familiar with our teams, product domains, tech stack and processes — generally how we all work together.

Primary responsibilities for a Senior Data Engineer

  • Collaborate with Product Managers, Designers, Engineers, Data Scientists and Data Analysts to take ideas from concept to production at scale.
  • Design, build and maintain our data platform to enable automation and self-service for data scientists, machine learning engineers and analysts.
  • Design, build and maintain data product framework to support EquipmentShare application data science and analytics features.
  • Design, build and maintain CI/CD pipelines and automated data and machine learning deployment processes.
  • Develop data monitoring and alerting capabilities.
  • Document architecture, processes and procedures for knowledge sharing and cross-team collaboration.
  • Mentor peers to help them build their skills.
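One of the responsibilities above is developing data monitoring and alerting capabilities. As a minimal, hedged sketch (all names are hypothetical; a real implementation would read from Snowflake and route failures to an alerting service), two common checks look like:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def check_freshness(latest_ts: float, now: float, max_lag_seconds: float) -> CheckResult:
    """Alert if the newest record in a table is older than the allowed lag."""
    lag = now - latest_ts
    return CheckResult(
        name="freshness",
        passed=lag <= max_lag_seconds,
        detail=f"lag={lag:.0f}s (max {max_lag_seconds:.0f}s)",
    )

def check_null_rate(rows: list, column: str, max_null_rate: float) -> CheckResult:
    """Alert if too many rows are missing a required column."""
    if not rows:
        return CheckResult(name=f"null_rate:{column}", passed=False, detail="no rows")
    nulls = sum(1 for r in rows if r.get(column) is None)
    rate = nulls / len(rows)
    return CheckResult(
        name=f"null_rate:{column}",
        passed=rate <= max_null_rate,
        detail=f"null_rate={rate:.2%} (max {max_null_rate:.2%})",
    )
```

A scheduler such as Airflow would run checks like these after each load and alert on any `passed=False` result.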

Why We’re a Better Place to Work

We can promise that every day will be a little different with new ideas, challenges and rewards.

We’ve been growing as a team and we are not finished just yet; there is plenty of opportunity to shape how we deliver together.

Our mission is to enable the construction industry with tools that unlock substantial increases to productivity. Together with our team and customers, we are building the future of construction.

T3 is the only cloud-based operating system that brings together construction workflows & data from constantly moving elements in one place.

  • Competitive base salary and market leading equity package.
  • Unlimited PTO.
  • Remote first.
  • True work/life balance.
  • Medical, Dental, Vision and Life Insurance coverage.
  • 401(k) + match.
  • Opportunities for career and professional development with conferences, events, seminars and continued education.
  • On-site fitness center at the Home Office in Columbia, Missouri, complete with weightlifting machines, cardio equipment, group fitness space, racquetball courts, a climbing wall, and much more!
  • Volunteering and local charity support that help you nurture and grow the communities you call home through our Giving Back initiative.
  • Stocked breakroom and full kitchen with breakfast and lunch provided daily by our chef and kitchen crew.

About You

You're a hands-on developer who enjoys solving complex problems and building impactful solutions.  Most importantly, you care about making a difference.

  • Take the initiative to own outcomes from start to finish — knowing what needs to be accomplished within your domain and how we work together to deliver the best solution.
  • You are passionate about developing your craft — you understand what it takes to build quality, robust and scalable solutions.
  • You’ll see the learning opportunity when things don’t quite go to plan — not only for you but for how we continuously improve as a team.
  • You take a hypothesis-driven approach — knowing how to source, create and leverage data to inform decision making, using data to drive how we improve, to shape how we evaluate and make platform recommendations.

So, what is important to us?

Above all, you’ll get stuff done. More importantly, you’ll collaborate to do the right things in the right way to achieve the right outcomes.

  • 7+ years of relevant data platform development experience building production-grade solutions.
  • Proficient with SQL and a high-order object-oriented language (e.g., Python).
  • Experience with designing and building distributed data architecture.
  • Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLflow.
  • Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, Flink and/or others.
  • Familiarity with event data streaming at scale.
  • Proven track record learning new technologies and applying that learning quickly.
  • Experience building observability and monitoring into data products. 
  • Motivated to identify opportunities for automation to reduce manual toil.

EquipmentShare is committed to a diverse and inclusive workplace. EquipmentShare is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

 

#LI-Remote

 

See more jobs at EquipmentShare

Apply for this job

+30d

Data Engineer

Kalderos | Remote, United States
Bachelor's degree, Slack, Azure, C++

Kalderos is hiring a Remote Data Engineer

About Us

At Kalderos, we are building unifying technologies that bring transparency, trust, and equity to the entire healthcare community with a focus on pharmaceutical pricing.  Our success is measured when we can empower all of healthcare to focus more on improving the health of people. 

That success is driven by Kalderos’ greatest asset, our people. Our team thrives on the problems that we solve, is driven to innovate, and thrives on the feedback of their peers. Our team is passionate about what they do and we are looking for people to join our company and our mission.

That’s where you come in! 

What You’ll Do:

For the position, we’re looking for a Data Engineer II to solve complex problems in the Drug Discounting space. Across all roles, we look for future team members who will live our values of Collaboration, Inspired, Steadfast, Courageous, and Excellence. 

We’re a team of people striving for the best, so naturally, we want to hire the best! We know that job postings can be intimidating, and don’t expect any candidate to check off every box we have listed (though if you can, AWESOME!). We encourage you to apply if your experience matches about 70% of this posting.

  • Work with product teams to understand and develop data models that can meet requirements and operationalize well
  • Build out automated ETL jobs that reliably process large amounts of data, and ensure these jobs run consistently and reliably
  • Build tools that enable other data engineers to work more efficiently
  • Try out new data storage and processing technologies in proof of concepts and make recommendations to the broader team
  • Tune existing implementations to run more efficiently as they become bottlenecks, or migrate existing implementations to new paradigms as needed
  • Learn and apply knowledge about the drug discount space, and become a subject matter expert for internal teams to draw upon


General Experience Guidelines

We know your experience extends beyond what you have on paper. The following is a guideline of general experience we’re looking for in this role:

  • Bachelor’s degree in computer science or similar field
  • 4+ years work experience as a Data Engineer in a professional full-time role
  • Experience building ETL pipelines and other services for the healthcare industry 
  • Managing big data implementations: experience scaling data systems vertically and horizontally to manage millions of records. 

Set Yourself Apart

  • Experience with dbt and Snowflake
  • Professional experience in application programming with an object oriented language. 
  • Experience with streaming technologies such as Kafka or Event Hubs 
  • Experience with orchestration frameworks like Azure Data Factory
  • Experience in the healthcare or pharmaceutical industries
  • Experience in a startup environment 

 

Anticipated compensation range for this role is $110-150K/year USD plus bonus.

____________________________________________________________________________________________

Highlighted Company Perks and Benefits

  • Continuous growth and development: Annual continuing education stipend supporting all employees on their ongoing knowledge journey.
  • Celebrating employee milestones: bi-annual stipend for all full-time employees to help you celebrate your exciting accomplishments throughout the year.
  • Work From Home Reimbursement: quarterly funds provided through the pandemic to help ensure all employees have what they need to be productive and effective while working from home.
  • A fair PTO system that allows for a healthy work-life balance. There’s no maximum, but there is a minimum; we want you to take breaks for yourself.
  • 401K plan with a company match.
  • Choose your own Technology: We’ll pay for your work computer. 


What It’s Like Working Here

  • Competitive Salary, Bonus, and Equity Compensation. 
  • We thrive on collaboration, because we believe that all voices matter and we can only put our best work into the world when we work together to solve problems.
  • We empower each other: from our DEI Council to affinity groups for underrepresented populations, we believe in ensuring all voices are heard.
  • We know the importance of feedback in individual and organizational growth and development, which is why we've embedded it into our practice and culture. 
  • We’re curious and go deep. Our Slack channels fill throughout the day with insightful articles and discussions about our industry and healthcare, and our book club is always bursting with questions.

To learn more: https://www.kalderos.com/company/culture

Kalderos is an equal opportunity workplace.  We are committed to equal opportunity regardless of race, color, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or veteran status.

 

See more jobs at Kalderos

Apply for this job

+30d

Senior Data Engineer

Medalogix | United States, Remote
Airflow, Design

Medalogix is hiring a Remote Senior Data Engineer

We are looking for an experienced Senior Data Engineer to join our growing team of data experts. As a data engineer at Medalogix, you will be responsible for developing, maintaining, and optimizing our data warehouse and data pipelines. The data engineer will support multiple stakeholders, including software developers, database architects, data analysts, and data scientists, to ensure an optimal data delivery architecture. The ideal candidate should possess strong technical abilities to solve complex problems with data, a willingness to learn new technologies and tools when necessary, and comfort supporting the data needs of multiple teams, stakeholders, and products.

  • Design, build, and maintain batch or real-time data pipelines in production while adhering to architectural requirements of maintainability and scalability.
  • Build scalable data pipelines (Python/DBT) leveraging the Airflow scheduler/executor framework.
  • Responsible for ensuring that large and/or more complex engineering projects are delivered in alignment with the appropriate business outcomes.
  • Monitor data systems performance and implement optimization strategies.
  • Stay current with emerging trends and technologies in data engineering, and HealthTech industry standards.
  • Perform on-call off-hours support for critical systems.
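The pipeline responsibilities above (Python/DBT on the Airflow scheduler/executor framework) typically follow an idempotent, partition-per-run design: each task run recomputes exactly one date partition, so retries and backfills are safe. A pure-Python sketch under assumed names, with an in-memory dict standing in for a warehouse table:

```python
from datetime import date

def run_partition(raw_events: list, warehouse: dict, partition: date) -> int:
    """Recompute one date partition from scratch so reruns/backfills are idempotent."""
    day = partition.isoformat()
    todays = [e for e in raw_events if e["event_date"] == day]
    # Overwrite (not append to) the partition: a retried task produces the same result.
    warehouse[day] = {
        "event_count": len(todays),
        "distinct_patients": len({e["patient_id"] for e in todays}),
    }
    return len(todays)
```

Because the partition is overwritten rather than appended to, rerunning the same task (as Airflow does on retry) leaves the warehouse in the same state.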

  • 7+ years as a Data Engineer or related role, with a focus on designing and developing performant data pipelines.
  • Intermediate/expert-level knowledge of Kubernetes fundamentals (nodes, pods, services, deployments, etc.) and their interactions with the underlying infrastructure.
  • 2+ years of hands-on experience with Docker containerization technology to package applications for use in a distributed system managed by Kubernetes.
  • 3+ years of experience with the Airflow orchestration platform. Experience with Azure Data Factory is a plus but not required.
  • Expertise in using DBT and Apache Airflow for orchestration and data transformation.
  • Strong programming skills in Python and SQL. Experience with Scala is a plus but not required.
  • Strong experience with at least one cloud platform such as AWS, Azure, or Google Cloud.
  • Experience working with cloud data warehouse solutions (Snowflake).
  • Excellent problem-solving, communication, and organizational skills.
  • Proven ability to work independently and with a team.
  • Approachable, personable team player comfortable working in an Agile environment.
  • Experience working with large data sets and distributed computing.
  • Knowledge of EMR systems like HCHB/MatrixCare.
  • Prior experience as a Senior Data Engineer within a healthcare SaaS group.

  • Health Care Plan (Medical, Dental & Vision)
  • Retirement Plan (401k, IRA)
  • Life Insurance (Basic, Voluntary & AD&D)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Family Leave (Maternity, Paternity)

See more jobs at Medalogix

Apply for this job

+30d

Senior Data Engineer

Catalyst | Remote (US & Canada)
ML, DevOps, Redis, Airflow, SQL, QA, C++, Elasticsearch, Python

Catalyst is hiring a Remote Senior Data Engineer

Company Overview

Totango + Catalyst have joined forces to build a leading customer growth platform that helps businesses protect and grow their revenue. Built by an experienced team of industry leaders, our software integrates with all the tools CS teams already use to provide one centralized view of customer data.  Our modern and intuitive dashboards help CS leaders develop impactful workflows and take the right actions to understand health, prevent churn, increase adoption, and drive expansion.

Position Overview

Insights and intelligence are the cornerstones of our product offering. We ingest and process massive amounts of data from a variety of sources to help our users understand the overall health of their customers at each stage of their journey.  As a Senior Data Engineer, you will be directly responsible for designing and implementing the next-generation data architecture leveraging technologies such as Databricks, TiDB, and Kafka.

This role is open to remote work anywhere within Canada and the U.S.

 

What You’ll Do 

  • Drive high impact, cross-functional data engineering projects built on top of a modern, best-in-class data stack, working with a variety of open source and Cloud technologies
  • Solve interesting and unique data problems at high volume and large scale  
  • Build and optimize the performance of batch, stream, and queue-based solutions including Kafka and Apache Spark
  • Collaborate with stakeholders from different teams to drive forward the data roadmap
  • Implement data retention, security and governance standards
  • Work with all engineering teams to help drive best practices for ownership and self-serve data processing
  • Support and expand standards, guidelines, tooling and best practices for data engineering at Catalyst
  • Support other data engineers in delivering our critical pipelines
  • Focus on data quality, cost effective scalability, and distributed system reliability and establish automated mechanisms
  • Work cross functionally with application engineers, SRE, product, data analysts, data scientists, or ML engineers
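The batch/stream work described above (Kafka, Spark) rests on windowed aggregation. Below is a pure-Python sketch of a tumbling window with an assumed `(timestamp, account_id)` event shape; the real pipeline would use Spark Structured Streaming rather than this illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds: int) -> dict:
    """Group (timestamp, account_id) events into fixed, non-overlapping time windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, account_id in events:
        # Every event maps to exactly one window, keyed by the window's start time.
        window_start = (int(ts) // window_seconds) * window_seconds
        windows[window_start][account_id] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```

Per-window, per-account counts like these feed the kind of customer-health signals the product surfaces.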

 

What You’ll Need

  • 3+ years of experience successfully implementing modern data architectures
  • Strong Project Management skills
  • Demonstrated experience implementing ETL pipelines with Spark (we use Pyspark)
  • Proficiency in Python, SQL and/or other modern programming language
  • Deep understanding of SQL/New SQL with relational data stores such as Postgres/MySQL
  • A strong desire to show ownership of problems you identify
  • Experience with modern Data Warehouses and Lakes such as Redshift, Snowflake, and Databricks Delta Lake
  • Experience with distributed streaming tools like Kafka and Spark Structured Streaming
  • Familiarity with an orchestration tool such as Airflow, dbt, or Delta Live Tables
  • Experience with automated testing for distributed systems (unit testing, E2E testing, QA, data expectation monitoring)
  • Experience working with application engineers, product, and data scientists
  • Experience with leveraging caching for performance using data stores such as Redis and ElasticSearch
  • Experience with maintaining and scaling heterogeneous and large volumes of data in production
  • Practical experience with DevOps best practices (CICD, IAC) is a plus
  • Familiarity with Change Data Capture systems is a nice to have

 

Why You’ll Love Working Here!

  • We are Remote first! Do your best work where you are most comfortable.
  • Highly competitive compensation package, including equity - everyone has a stake in our growth
  • Comprehensive benefits, including up to 100% paid medical, dental, & vision insurance coverage for you & your loved ones
  • Unlimited PTO policy encouraging you to take the time you need - we trust you to strike the right work/life balance
  • Monthly Mental Health Days and Mental Health Weeks twice per year 

 

Your base pay is one part of your total compensation package and is determined within a range. The base salary for this role is from $140,000.00 - $180,000.00 per year. We take into account numerous factors in deciding on compensation, such as experience, job-related skills, relevant education or training, and other business and organizational requirements. The salary range provided corresponds to the level at which this position has been defined.

Catalyst+Totango is an equal opportunity employer, meaning that we do not discriminate based upon race, religion, national origin, gender identity, age, sexual orientation, or any other protected class. We believe that diversity is more than just good intentions, and we are committed to creating an inclusive environment for all employees.




See more jobs at Catalyst

Apply for this job

+30d

Staff Data & Machine Learning Engineer

Celonis | Remote, Germany
ML, SQL, Design, Docker, Python, Backend, Frontend

Celonis is hiring a Remote Staff Data & Machine Learning Engineer

We're Celonis, the global leader in Process Mining technology and one of the world's fastest-growing SaaS firms. We believe there is a massive opportunity to unlock productivity by placing data and intelligence at the core of business processes - and for that, we need you to join us.

The Team:

Our team is responsible for building the Celonis’ end-to-end Task Mining solution. Task Mining is the technology that allows businesses to capture user interaction (desktop) data, so they can analyze how teams get work done, and how they can do it even better. We own all the related components, e.g. the desktop client, the related backend services, the data processing capabilities, and Studio frontend applications.

The Role:

Celonis is looking for a Staff Data & Machine Learning Engineer to improve and extend our existing Task Mining ETL pipeline, as well as build production-ready, AI-based features into the Task Mining product. You will own the solution to simplify the extraction of insights from task mining data. This role demands a blend of expertise in data engineering, software development, and machine learning, utilizing Python.


The work you’ll do:

  • Design, build, and maintain robust, scalable data pipelines that facilitate the ingestion, processing, and transformation of large datasets
  • Drive the development of AI-powered features and applications from scratch within the Task Mining product
  • Implement data strategies and develop data models
  • Collaborate with other engineering teams to implement, deploy, and monitor ML models in production, ensuring their performance and accuracy
  • Leverage machine learning techniques to provide actionable insights and recommendations for process optimization
  • Write performant, scalable and easy to understand SQL queries and optimize existing ones
  • Learn PQL (Process Query Language – Celonis’ own language for analytical formulas and expressions) and use it to query data from our process mining engine
  • Own the implementation of end to end solutions: leading the design, implementation, build and delivery to customers
  • Provide technical leadership and mentorship to other engineers and team members
  • Lead design discussions, code reviews, and technical planning sessions to ensure high standards and knowledge sharing
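One responsibility above is writing performant SQL and optimizing existing queries. A hedged sketch using stdlib sqlite3 (hypothetical schema) shows a common rewrite: a correlated subquery, re-evaluated per outer row, replaced by a join against a single grouped scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tasks (id INTEGER PRIMARY KEY, case_id TEXT, duration_ms INTEGER);
INSERT INTO tasks (case_id, duration_ms) VALUES
  ('c1', 100), ('c1', 300), ('c2', 50), ('c2', 250), ('c2', 600);
""")

# Correlated subquery: the inner MAX() is re-evaluated for every outer row.
slow = conn.execute("""
SELECT id, case_id, duration_ms FROM tasks t
WHERE duration_ms = (SELECT MAX(duration_ms) FROM tasks WHERE case_id = t.case_id)
ORDER BY case_id
""").fetchall()

# Equivalent join against one grouped scan: a single aggregation pass overall.
fast = conn.execute("""
SELECT t.id, t.case_id, t.duration_ms
FROM tasks t
JOIN (SELECT case_id, MAX(duration_ms) AS max_ms FROM tasks GROUP BY case_id) m
  ON m.case_id = t.case_id AND m.max_ms = t.duration_ms
ORDER BY t.case_id
""").fetchall()

assert slow == fast  # same "longest task per case" result either way
```

The grouped-join form aggregates once instead of once per outer row, which matters as the table grows.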


The qualifications you need:

  • 8+ years of practical experience in a Computer Science/Data Science related field
  • Or PhD in Data Science/AI/ML area with 5+ years of practical experience 
  • Experience with building production ready and scalable AI/ML applications in the python ecosystem
  • Ability to optimize data pipelines, applications, and machine learning models for high performance and scalability
  • Understanding of ETL jobs, data warehouses/lakes, data modeling, schema design
  • Excellent command of SQL, including query optimization principles
  • Ability to assess dependencies within complex systems, quickly transform your thoughts into an accessible prototype and efficiently explain it to diverse stakeholders
  • Experience with containerization and CI/CD pipelines (e.g. Docker, Github Actions)
  • Interest in learning new technologies (e.g. PQL language and Object Centric Process Mining)
  • Strong communication and collaboration skills (English is a must)
  • Able to supervise and coach mid-level and senior colleagues
  • Knowledge of Column-oriented DBMS (e.g. Vertica) and its specific features would be beneficial
  • Nice to have: knowledge of frameworks such as TensorFlow, PyTorch, LangChain, FastAPI, and SQLAlchemy

What Celonis can offer you:

  • The unique opportunity to work with industry-leading process mining technology
  • Investment in your personal growth and skill development (clear career paths, internal mobility opportunities, L&D platform, mentorships, and more)
  • Great compensation and benefits packages (equity (restricted stock units), life insurance, time off, generous leave for new parents from day one, and more). For intern and working student benefits, click here.
  • Physical and mental well-being support (subsidized gym membership, access to counseling, virtual events on well-being topics, and more)
  • A global and growing team of Celonauts from diverse backgrounds to learn from and work with
  • An open-minded culture with innovative, autonomous teams
  • Business Resource Groups to help you feel connected, valued and seen (Black@Celonis, Women@Celonis, Parents@Celonis, Pride@Celonis, Resilience@Celonis, and more)
  • A clear set of company values that guide everything we do: Live for Customer Value, The Best Team Wins, We Own It, and Earth Is Our Future

About Us

Since 2011, Celonis has helped thousands of the world's largest and most valued companies deliver immediate cash impact, radically improve customer experience and reduce carbon emissions. Its Process Intelligence platform uses industry-leading process mining technology and AI to present companies with a living digital twin of their end-to-end processes. For the first time, everyone in an organisation has a common language about how the business works, visibility into where value is hidden and the ability to capture it. Celonis is headquartered in Munich (Germany) and New York (USA) and has more than 20 offices worldwide.

Get familiar with the Celonis Process Intelligence Platform by watching this video.

Join us as we make processes work for people, companies and the planet.

 

Celonis is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Different makes us better.

Accessibility and Candidate Notices

See more jobs at Celonis

Apply for this job

+30d

Senior Data Engineer

Modern Health | Remote - US
DevOps, Master’s Degree, Terraform, Scala, NoSQL, SQL, Design, MongoDB, Azure, Java, Docker, PostgreSQL, MySQL, Kubernetes, Python, AWS

Modern Health is hiring a Remote Senior Data Engineer

Modern Health 

Modern Health is a mental health benefits platform for employers. We are the first global mental health solution to offer employees access to one-on-one, group, and self-serve digital resources for their emotional, professional, social, financial, and physical well-being needs—all within a single platform. Whether someone wants to proactively manage stress or treat depression, Modern Health guides people to the right care at the right time. We empower companies to help all their employees be the best version of themselves, and believe in meeting people wherever they are in their mental health journey.

We are a female-founded company backed by investors like Kleiner Perkins, Founders Fund, John Doerr, Y Combinator, and Battery Ventures. We partner with 500+ global companies like Lyft, Electronic Arts, Pixar, Clif Bar, Okta, and Udemy that are taking a proactive approach to mental health care for their employees. Modern Health has raised more than $170 million in less than two years with a valuation of $1.17 billion, making Modern Health the fastest entirely female-founded company in the U.S. to reach unicorn status. 

We tripled our headcount in 2021 and as a hyper-growth company with a fully remote workforce, we prioritize our people-first culture (winning awards including Fortune's Best Workplaces in the Bay Area 2021). To protect our culture and help our team stay connected, we require overlapping hours for everyone. While many roles may function from anywhere in the world—see individual job listing for more—team members who live outside the Pacific time zone must be comfortable working early in the morning or late at night; all full-time employees must work at least six hours between 8 am and 5 pm Pacific time each workday. 

We are looking for driven, creative, and passionate individuals to join in our mission. An inclusive and diverse culture are key components of mental well-being in the workplace, and that starts with how we build our own team. If you're excited about a role, we'd love to hear from you!

The Role

The Senior Data Engineer will play a critical role in designing, developing, and maintaining our data infrastructure. This role requires a deep understanding of data architecture, data modeling, and ETL processes. The ideal candidate will have extensive experience with big data technologies, cloud platforms, and a strong ability to collaborate with cross-functional teams to deliver high-quality data solutions.

This position is not eligible to be performed in Hawaii.

What You’ll Do

  • Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics
  • Architect and implement data storage solutions, including data warehouses, data lakes, and databases
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs
  • Optimize and tune data systems for performance, reliability, and scalability
  • Ensure data quality and integrity through rigorous testing, validation, and monitoring
  • Develop and enforce data governance policies and best practices
  • Stay current with emerging data technologies and industry trends, and evaluate their potential impact on our data infrastructure
  • Troubleshoot and resolve data-related issues in a timely manner

Our Stack

  • AWS RDS
  • Snowflake
  • Fivetran
  • DBT
  • Prefect
  • Looker
  • Datadog

Who You Are

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
  • 5+ years of experience in data engineering in a modern tech stack
  • Proficiency in programming languages such as Python, Java, or Scala
  • Strong experience with big data technologies
  • Expertise in SQL and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra)
  • Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud
  • Familiarity with data warehousing solutions like Redshift, BigQuery, or Snowflake
  • Knowledge of data modeling, data architecture, and data governance principles
  • Experience with IaC technologies (e.g., Terraform)
  • Excellent problem-solving skills and attention to detail
  • Strong communication and collaboration skills

Bonus Points

  • Experience with containerization and orchestration tools like Docker and Kubernetes
  • Knowledge of machine learning and data science workflows
  • Experience with CI/CD pipelines and DevOps practices
  • Certification in cloud platforms (e.g., AWS Certified Data Analytics, Google Professional Data Engineer)

Benefits

Fundamentals:

  • Medical / Dental / Vision / Disability / Life Insurance 
  • High Deductible Health Plan with Health Savings Account (HSA) option
  • Flexible Spending Account (FSA)
  • Access to coaches and therapists through Modern Health's platform
  • Generous Time Off 
  • Company-wide Collective Pause Days 

Family Support:

  • Parental Leave Policy 
  • Family Forming Benefit through Carrot
  • Family Assistance Benefit through UrbanSitter

Professional Development:

  • Professional Development Stipend

Financial Wellness:

  • 401k
  • Financial Planning Benefit through Origin

But wait there’s more…! 

  • Annual Wellness Stipend to use on items that promote your overall well being 
  • New Hire Stipend to help cover work-from-home setup costs
  • ModSquad Community: Virtual events like active ERGs, holiday themed activities, team-building events and more
  • Monthly Cell Phone Reimbursement

Equal Pay for Equal Work Act Information

Please refer to the ranges below to find the starting annual pay range for individuals applying to work remotely from the following locations for this role.


Compensation for the role will depend on a number of factors, including a candidate’s qualifications, skills, competencies, and experience and may fall outside of the range shown. Ranges are not necessarily indicative of the associated starting pay range in other locations. Full-time employees are also eligible for Modern Health's equity program and incredible benefits package. See our Careers page for more information.

Depending on the scope of the role, some ranges are indicative of On Target Earnings (OTE) and includes both base pay and commission at 100% achievement of established targets.

San Francisco Bay Area: $138,500 - $162,900 USD
All Other California Locations: $138,500 - $162,900 USD
Colorado: $117,725 - $138,500 USD
New York City: $138,500 - $162,900 USD
All Other New York Locations: $124,650 - $146,600 USD
Seattle: $138,500 - $162,900 USD
All Other Washington Locations: $124,650 - $146,600 USD

Below, we are asking you to complete identity information for the Equal Employment Opportunity Commission (EEOC). While we are required by law to ask these questions in the format provided by the EEOC, at Modern Health we know that gender is not binary, and we recognize that these categories do not reflect our employees' full range of identities.

See more jobs at Modern Health

Apply for this job

+30d

Data Engineer II

Agile Six, United States, Remote
ML, agile, Design, api, git, c++, python, backend

Agile Six is hiring a Remote Data Engineer II

Agile Six is a people-first, remote-work company that serves shoulder-to-shoulder with federal agencies to find innovative, human-centered solutions. We build better by putting people first. We are animated by our core values of Purpose, Wholeness, Trust, Self-Management and Inclusion. We deliver our solutions in autonomous teams of self-managed professionals (no managers here!) who genuinely care about each other and the work. We know that’s our company’s purpose – and that we can only achieve it by supporting a culture where people feel valued, self-managed, and love to come to work.

The role

Agile Six is looking for a Data Engineer for an anticipated role on our cross-functional agile teams. Our partners include the Department of Veterans Affairs (VA), the Centers for Medicare & Medicaid Services (CMS), the Centers for Disease Control and Prevention (CDC), and others.

The successful candidate will bring their experience in data formatting and integration engineering to help us expand a reporting platform. As part of the team, you will primarily be responsible for data cleaning and data management tasks, building data pipelines, and data modeling (designing the schema/structure of datasets and relationships between datasets). We are looking for someone who enjoys working on solutions to highly complex problems and someone who is patient enough to deal with the complexities of navigating the Civic Tech space. The successful candidate for this role is an excellent communicator, as well as someone who is curious about where data analysis, backend development, data engineering, and data science intersect.

We embrace open source software and an open ethos regarding software development, and are looking for a candidate who does the same. Most importantly, we are looking for someone with a passion for working on important problems that have a lasting impact on millions of users and make a difference in our government!

Please note, this position is anticipated, pending contract award response.

Responsibilities

  • Contribute as a member of a cross-functional Agile team, using your expertise in data engineering, critical thinking, and collaboration to solve problems related to the project
    • Experience with Java/Kotlin/Python, command line, and Git is required
    • Experience with transport protocols including: REST, SFTP, SOAP is required
    • Experience with HL7 2.5.1 and FHIR is strongly preferred
  • Extract, transform, and load data. Pull together datasets, build data pipelines, and turn semi-structured and unstructured data into datasets that can be used for machine learning models.
  • Evaluate and recommend
  • We expect the responsibilities of this position to shift and grow organically over time, in response to considerations such as the unique strengths and interests of the selected candidate and other team members and an evolving understanding of the delivery environment.
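As a minimal illustration of the ETL work this listing describes (the record shape, field names, and values here are invented for the sketch, not taken from the project):

```python
import json

# Hypothetical semi-structured records, e.g. pulled from an API or message feed.
RAW = [
    '{"id": 1, "patient": {"state": "CA"}, "labs": [{"test": "flu", "result": "neg"}]}',
    '{"id": 2, "patient": {"state": "TX"}, "labs": [{"test": "flu", "result": "pos"}, {"test": "rsv", "result": "neg"}]}',
]

def extract(lines):
    """Parse raw JSON strings into dicts, skipping malformed records."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a real pipeline would log or quarantine these

def transform(records):
    """Flatten nested records into one row per lab result."""
    for rec in records:
        for lab in rec.get("labs", []):
            yield {
                "report_id": rec["id"],
                "state": rec.get("patient", {}).get("state"),
                "test": lab["test"],
                "result": lab["result"],
            }

def load(rows):
    """Stand-in for a warehouse write: just collect rows into a list."""
    return list(rows)

dataset = load(transform(extract(RAW)))
```

The generator chain keeps each stage streaming, so the same shape scales from a toy list to a large file without holding everything in memory.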

Basic qualifications

  • 2+ years of hands-on data engineering experience in a production environment
  • Experience with Java/Kotlin/Python, command line, and Git
  • Demonstrated experience with extract, transform, load (ETL) and data cleaning, data manipulation, and data management
  • Demonstrated experience building and orchestrating automated data pipelines in Java/Python
  • Experience with data modeling: defining the schema/structure of datasets and the relationships between datasets
  • Ability to create usable datasets from semi-structured and unstructured data
  • Solution-oriented mindset and proactive approach to solving complex problems
  • Ability to be autonomous, take initiative, and effectively communicate status and progress
  • Experience successfully collaborating with cross-functional partners and other designers and researchers, seeking and providing feedback in an Agile environment
  • Adaptive, empathetic, collaborative, and holds a positive mindset
  • Has lived and worked in the United States for 3 out of the last 5 years
  • Some of our clients may request or require travel from time to time. If this is a concern for you, we encourage you to apply and discuss it with us at your initial interview

Additional desired qualifications

  • Familiarity with the Electronic Laboratory Reporting workflows and data flow
  • Knowledge of FHIR data / API standard, HL7 2.5.1
  • Experience building or maintaining web service APIs
  • Familiarity with various machine learning (ML) algorithms and their application to common ML problems (e.g. regression, classification, clustering)
  • Statistical experience or degree
  • Experience developing knowledge of complex domain and systems
  • Experience working with government agencies
  • Ability to work across multiple applications, components, languages, and frameworks
  • Experience working in a cross-functional team, including research, design, engineering, and product
  • You are a U.S. Veteran. As a service-disabled veteran-owned small business, we recognize the transition to civilian life can be tricky, and welcome and encourage Veterans to apply

At Agile Six, we are committed to building teams that represent a variety of backgrounds, perspectives, and skills. Even if you don't meet every requirement, we encourage you to apply. We’re eager to meet people who believe in our mission and who can contribute to our team in a variety of ways.

Salary and Sixer Benefits

To promote equal pay for equal work, we publish salary ranges for each position.

The salary range for this position is $119,931-$126,081

Our benefits are designed to reinforce our core values of Wholeness, Self Management and Inclusion. The following benefits are available to all employees. We respect that only you know what balance means for your life and season. While we offer support from coaches, we expect you to own your wholeness, show up for work whole, and go home to your family the same. You will be seen, heard and valued. We expect you to offer the same for your colleagues, be kind (not controlling), be caring (not directive) and ready to participate in a state of flow. We mean it when we say “We build better by putting people first”.

All Sixers Enjoy:

  • Self-managed work/life balance and flexibility
  • Competitive and equitable salary (equal pay for equal work)
  • Employee Stock Ownership (ESOP) for all employees!
  • 401K matching
  • Medical, dental, and vision insurance
  • Employer paid short and long term disability insurance
  • Employer paid life insurance
  • Self-managed and generous paid time off
  • Paid federal holidays and Election day off
  • Paid parental leave
  • Self-managed professional development spending
  • Self-managed wellness days

Hiring practices

Agile Six Applications, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, national origin, ancestry, sex, sexual orientation, gender identity or expression, religion, age, pregnancy, disability, work-related injury, covered veteran status, political ideology, marital status, or any other factor that the law protects from employment discrimination.

Note: We participate in E-Verify. Upon hire, we will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. Unfortunately, we are unable to sponsor visas at this time.

If you need assistance or reasonable accommodation in applying for any of these positions, please reach out to careers@agile6.com. We want to ensure you have the ability to apply for any position at Agile Six.

Please read and respond to the application questions carefully. Interviews are conducted on a rolling basis until the position has been filled.

 

Apply for this job

+30d

Data Engineer H/F

Socotec, Palaiseau, France, Remote
S3, Lambda, nosql, airflow, sql, git, kubernetes, AWS

Socotec is hiring a Remote Data Engineer H/F

Job Description

SOCOTEC Monitoring France, a leader in inspection and certification, provides services across the construction, infrastructure, and industry sectors.

The SOCOTEC Data & AI Hub, made up of Data Engineering and Data Science specialists, is responsible not only for managing and optimizing data, but also for building data processing and analysis workflows. We develop data-driven applications to support SOCOTEC's business activities.

We are looking for a Data Engineer apprentice to join our SOCOTEC Data team.

Joining the team, you will take an active part in maintaining and optimizing our data lake, as well as creating and updating data flows. You will be responsible for documenting and validating these flows, and for building and rolling out reporting tools such as Power BI. You will also propose new solutions, take part in technical evaluations, and contribute to the continuous improvement of our data infrastructure.

 

You will work on three main missions:

  • Within the Socotec Monitoring France entity (20%), you will help define the optimal data strategy for Socotec Monitoring (structuring, processes, open data, purchases of external data)
  • On behalf of the Socotec group (60%), you will help build the worldwide Data Lake. Your goal will be to develop data flows for analysis, working with the BI and Data Science teams. You will learn to organize and schedule the extraction, transformation, and loading of data while guaranteeing its reliability, availability, and more.
  • With clients (20%), you will take part in running final projects end to end: data collection, preprocessing pipelines, modeling, and deployment.

You will demonstrate autonomy, sound judgment, and strong skills in writing and communicating code and technical documentation.

The technical stack in use:

  • Amazon Web Services (AWS)
  • Apache Airflow as the scheduler
  • Spark for ETL pipelines
  • GitLab for source version control
  • Kubernetes
  • Delta Lake
  • S3
  • OpenMetadata for metadata management
  • Power BI as the BI tool, managed with the BI teams
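What a scheduler like Airflow does at its core is resolve task dependencies into a valid run order. This stdlib-only sketch (the task names are invented) shows the idea with `graphlib`:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each task maps to the tasks it depends on.
flows = {
    "extract_sensors": set(),
    "extract_sales": set(),
    "transform_join": {"extract_sensors", "extract_sales"},
    "load_deltalake": {"transform_join"},
    "refresh_powerbi": {"load_deltalake"},
}

# static_order() yields every task after all of its predecessors.
order = list(TopologicalSorter(flows).static_order())
```

Airflow adds scheduling, retries, and monitoring on top, but the dependency-resolution step is essentially this topological sort.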

Qualifications

  • Master's degree in Big Data or an engineering degree in computer science with a strong interest in data
  • Proficiency with SQL and NoSQL databases and the associated concepts
  • Knowledge of the Big Data stack (Airflow, Spark, Hadoop)
  • Experience with collaborative development tools (Git, GitLab, Jupyter Notebooks, etc.)
  • Knowledge of AWS services (Lambda, EMR, S3) is a plus
  • Strong interest in innovative technologies
  • Team spirit
  • Fluent English, including a good technical level

See more jobs at Socotec

Apply for this job

+30d

Senior Data Engineer

Postscript, Remote, Anywhere in North America
Lambda, terraform, nosql, RabbitMQ, Design, c++, python, AWS, backend

Postscript is hiring a Remote Senior Data Engineer

Postscript is redefining marketing for ecommerce companies. By introducing SMS as an entirely new channel for ecommerce stores to engage, retain, and convert their customer base, brands are seeing huge ROI with Postscript. Backed by Greylock, Y Combinator and other top investors, Postscript is growing fast and looking for remarkable people to help build a world class organization. 

Postscript Description

Postscript is redefining marketing for ecommerce companies. By introducing SMS as an entirely new channel for ecommerce stores to engage, retain, and convert their customer base, brands are seeing huge ROI with Postscript. Backed by Greylock, Y Combinator and other top investors, Postscript is growing fast and looking for remarkable people to help build a world class organization. 

 

Job Description

As a Senior Data Engineer for the Data Platform team at Postscript, you will provide the company with best in class data foundations to support a broad range of key engineering and product initiatives. The Data Platform team at Postscript focuses on data integration through various sources like our production application and various 3rd party integrations. You will focus on designing and building end to end data pipeline solutions: data ingestion, propagation, persistence, and services to support both our product and our internal BI organization. This role is critical in ensuring data and events are reliable and actionable throughout the Postscript Platform.

 

Primary duties

  • Design and build performant and scalable data systems with high scale
  • Architect cloud native data solutions in AWS
  • Write high quality code to make your software designs a reality
  • Build services to support our product with cross domain data
  • Advise the team and organization on Data Engineering best practices to level up our competency in the organization
  • Mentor and support your fellow engineers via code reviews, design reviews and peer feedback

What We’ll Love About You

  • You’re a polyglot technologist who is passionate about data problems at scale
  • You have a proven track record designing and implementing complex data systems from scratch
  • You’ve built data engineering solutions in an AWS environment and have working experience with several AWS services (Lambda, Redshift, Glue, RDS, DMS, etc.)
  • You have several years (5+) of experience writing high quality production code, preferably in Python or Go
  • You have a broad range of experience with data persistence technologies (RDBMS, NoSQL, OLAP, etc.) and know how to select the right tool for the job
  • You’ve worked in event driven systems and have experience with technologies like Kafka, Kinesis, RabbitMQ, etc.
  • You’ve gotten your hands dirty with infrastructure and have used infrastructure as code technologies like Terraform
  • You’re comfortable with ambiguity and like to dig into the problems as much as you love creating solutions
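The event-driven pattern named above (Kafka, Kinesis, RabbitMQ) can be illustrated with an in-process stand-in; the broker here is just a `queue.Queue`, and the event types and fields are invented:

```python
import json
import queue
import threading

broker = queue.Queue()   # stand-in for a Kafka topic / RabbitMQ queue
SENTINEL = None          # signals the consumer to stop

def produce(events):
    """Serialize events onto the broker, then signal completion."""
    for event in events:
        broker.put(json.dumps(event))
    broker.put(SENTINEL)

def consume(results):
    """Pull events off the broker, routing by event type."""
    while True:
        msg = broker.get()
        if msg is SENTINEL:
            break
        event = json.loads(msg)
        if event["type"] == "message_sent":  # filter for the events we care about
            results.append(event["shop_id"])

results = []
consumer = threading.Thread(target=consume, args=(results,))
consumer.start()
produce([
    {"type": "message_sent", "shop_id": 1},
    {"type": "subscriber_added", "shop_id": 2},
    {"type": "message_sent", "shop_id": 3},
])
consumer.join()
```

A real broker replaces the in-memory queue with durable, partitioned storage, but the producer/consumer decoupling is the same.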

What You’ll Love About Us

  • Salary range of USD $170,000-$190,000 base plus significant equity (we do not have geo based salaries) 
  • High growth startup - plenty of room for you to directly impact the company and grow your career!
  • Work from home (or wherever)
  • Fun - We’re passionate and enjoy what we do
  • Competitive compensation and opportunity for equity
  • Flexible paid time off
  • Health, dental, vision insurance

 

What to expect from our hiring process :

  • Intro Call: You’ll hop on a quick call with the Recruiter so we can get to know you better — and you can learn a little more about the role and Postscript.
  • Hiring Manager Intro: You’ll hop on a quick call with the Hiring Manager so your future Manager can get to know you better — this is a great time to learn more about the team & position.
  • Homework Assignment: We will send over an exercise that challenges you to solve a problem & come up with a creative solution, or outline how you've solved a problem in the past. Get a feel for what you’ll be doing on a daily basis!
  • Virtual Onsite Interviews: You’ll be meeting with 2-4 team members on a series of video calls. This is your chance to ask questions and see who this role interacts with on a daily basis.
  • Final FEACH Interview: This is our interview to assess your ability to represent how you work via our FEACH values. As we build the #1 team in Ecommerce, we look for individuals who embody FEACH professionally and personally. We want to hear about this in your final interview!
  • Reference Checks: We ask to speak with at least two references who have previously worked with you; at least one should be someone who has previously managed your work.
  • Offer: We send over an offer and you (hopefully) accept! Welcome to Postscript!

You are welcome here. Postscript is an ever-evolving place of equal employment for talented individuals.

See more jobs at Postscript

Apply for this job

+30d

Junior/Mid Data Analytics Engineer

EXUS, Athens, Attica, Greece, Remote

EXUS is hiring a Remote Junior/Mid Data Analytics Engineer

EXUS is an enterprise software company, founded in 1989 with the vision to simplify risk management software. EXUS launched its Financial Suite (EFS) in 2003 to support financial entities worldwide in improving their results. Today, our EXUS Financial Suite (EFS) is trusted by risk professionals in more than 32 countries worldwide (MENA, EU, SEA). We introduce simplicity and intelligence into their business processes through technology, improving their collections performance.

Our people constitute the source of inspiration that drives us forward and helps us fulfill our purpose of being role models for a better world.
This is your chance to be part of a highly motivated, diverse, and multidisciplinary team, which embraces breakthrough thinking and technology to create software that serves people.

Our shared Values:

  • We are transparent and direct
  • We are positive and fun, never cynical or sarcastic
  • We are eager to learn and explore
  • We put the greater good first
  • We are frugal and we do not waste resources
  • We are fanatically disciplined, we deliver on our promises

We are EXUS! Are you?

Join our dynamic Data Analytics Team as we expand our capabilities into data lakehouse architecture. We are seeking a Junior/Mid Data Analytics Engineer who is enthusiastic about creating compelling data visualizations, effectively communicating them to customers, conducting training sessions, and gaining experience in managing ETL processes for big data.

Key Responsibilities:

  • Develop and maintain reports and dashboards using leading visualization tools, and craft advanced SQL queries for additional report generation.
  • Deliver training sessions on our Analytic Solution and effectively communicate findings and insights to both technical and non-technical customer audiences.
  • Collaborate with business stakeholders to gather and analyze requirements.
  • Debug issues in the front-end analytic tool, investigate underlying causes, and resolve these issues.
  • Monitor and maintain ETL processes as part of our transition to a data lakehouse architecture.
  • Proactively investigate and implement new data analytics technologies and methods.

Required Skills and Qualifications:

  • A BSc or MSc degree in Computer Science, Engineering, or a related field.
  • 1-5 years of experience with data visualization tools and techniques. Knowledge of MicroStrategy and Apache Superset is a plus.
  • 1-5 years of experience with Data Warehouses, Big Data, and/or Cloud technologies. Exposure to these areas in academic projects, internships, or entry-level roles is also acceptable.
  • Familiarity with PL/SQL and practical experience with SQL for data manipulation and analysis. Hands-on experience through academic coursework, personal projects, or job experience is valued.
  • Familiarity with data Lakehouse architecture.
  • Excellent analytical skills to understand business needs and translate them into data models.
  • Organizational skills with the ability to document work clearly and communicate it professionally.
  • Ability to independently investigate new technologies and solutions.
  • Strong communication skills, capable of conducting presentations and engaging effectively with customers in English.
  • Demonstrated ability to work collaboratively in a team environment.

Benefits:

  • Competitive salary
  • Friendly, pleasant, and creative working environment
  • Remote Working
  • Development Opportunities
  • Private Health Insurance

Privacy Notice for Job Applications: https://www.exus.co.uk/en/careers/privacy-notice-f...

See more jobs at EXUS

Apply for this job

+30d

Data Engineer

Maker&Son LtdBalcombe, United Kingdom, Remote
golangtableauairflowsqlmongodbelasticsearchpythonAWS

Maker&Son Ltd is hiring a Remote Data Engineer

Job Description

We are looking for a highly motivated individual to join our team as a Data Engineer.

We are based in Balcombe [40 mins from London by train, 20 minutes from Brighton] and we will need you to be based in our offices at least 3 days a week.

You will report directly to the Head of Data.

Candidate Overview

As a part of the Technology Team your core responsibility will be to help maintain and scale our infrastructure for analytics as our data volume and needs continue to grow at a rapid pace. This is a high impact role, where you will be driving initiatives affecting teams and decisions across the company and setting standards for all our data stakeholders. You’ll be a great fit if you thrive when given ownership, as you would be the key decision maker in the realm of architecture and implementation.

Responsibilities

  • Understand our data sources, ETL logic, and data schemas and help craft tools for managing the full data lifecycle
  • Play a key role in building the next generation of our data ingestion pipeline and data warehouse
  • Run ad hoc analysis of our data to answer questions and help prototype solutions
  • Support and optimise existing ETL pipelines
  • Support technical and business stakeholders by providing key reports and supporting the BI team to become fully self-service
  • Own problems through to completion both individually and as part of a data team
  • Support digital product teams by performing query analysis and optimisation

 

Qualifications

Key Skills and Requirements

  • 3+ years experience as a data engineer
  • Ability to own data problems and help to shape the solution for business challenges
  • Good communication and collaboration skills; comfortable discussing projects with anyone from end users up to the executive company leadership
  • Fluency with a programming language - we use NodeJS and Python but are looking to adopt Golang
  • Ability to write and optimise complex SQL statements
  • Familiarity with ETL pipeline tools such as Airflow or AWS Glue
  • Familiarity with data visualisation and reporting tools, like Tableau, Google Data Studio, Looker
  • Experience working in a cloud-based software development environment, preferably with AWS or GCP
  • Familiarity with NoSQL databases such as Elasticsearch, DynamoDB, or MongoDB

See more jobs at Maker&Son Ltd

Apply for this job

+30d

Principal Data Engineer

ML, airflow, sql, B2C, RabbitMQ, Design, java, c++, python, AWS

hims & hers is hiring a Remote Principal Data Engineer

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

​​About the Role:

We're looking for an experienced Principal Data Engineer to join our Data Platform Engineering team. Our team is responsible for enabling H&H business (Product, Analytics, Operations, Finance, Data Science, Machine Learning, Customer Experience, Engineering) by providing a platform with a rich set of data and tools to leverage.

You Will:

  • Serve as a technical leader within the Data Platform org. Provide expert guidance and hands-on development of complex engineering problems and projects
  • Collaborate with cross-functional stakeholders including product management, engineering, analytics, and key business representatives to align the architecture, vision, and roadmap with stakeholder needs
  • Establish guidelines, controls, and processes to make data available for developing scalable data-driven solutions for Analytics and AI
  • Create and set best practices for data ingestion, integration, and access patterns to support both real-time and batch-based consumer data needs
  • Implement and maintain data governance practices to ensure compliance, data security, and privacy.
  • Design and lead development of scalable, high-performance data architecture solutions that support both the consumer side of the business and analytic use cases
  • Plan and oversee large-scale and complex technical migrations to new data systems and platforms
  • Drive continuous data transformation to minimize technical debt
  • Display strong thought leadership and execution in pursuit of modern data architecture principles and technology modernization
  • Define and lead technology proof of concepts to ensure feasibility of new data technology solutions
  • Provide technical leadership and mentorship to the members of the team, fostering a culture of technical excellence
  • Create comprehensive documentation for design, and processes to support ongoing maintenance and knowledge sharing
  • Conduct design reviews to ensure that proposed solutions address platform and stakeholder pain points, as well as meet business, and technical requirements, with alignment to standards and best practices
  • Prepare and deliver efficient communications to convey architectural direction and how it aligns with company strategy. Be able to explain the architectural vision and implementation to executives

You Have:

  • Bachelor's or Master's degree in Computer Science or equivalent, with over 12 years of Data Architecture and Data Engineering experience, including team leadership
  • Proven expertise in designing data platforms for large-scale data and diverse data architectures, including warehouses, lakehouses, and integrated data stores.
  • Proficiency and hands-on knowledge in a variety of technologies such as SQL, Bash, Python, Java, Presto, Spark, AWS, and data streaming technologies like Kafka and RabbitMQ
  • Hands-on experience and proficiency with data stacks including Airflow, Databricks, and dbt, as well as data stores such as Cassandra, Aurora, and ZooKeeper
  • Experience with data security (including PHI and PII), as well as data privacy regulations (CCPA and GDPR)
  • Proficient in addressing data-related challenges through analytical problem-solving and aligning data architecture with organizational business goals and objectives
  • Exposure to analytics techniques using ML and AI to assist data scientists and analysts in deriving insights from data
  • Analytical and problem-solving skills to address data-related challenges and find optimal solutions
  • Ability to manage projects effectively, plan tasks, set priorities, and meet deadlines in a fast-paced and ever-changing environment

Nice To Have:

  • Experience working in healthcare or in a B2C company

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

 

#LI-Remote

 

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$210,000 – $250,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Data Engineer

Tiger Analytics, Charlotte, North Carolina, United States, Remote
S3, Lambda, Design, AWS

Tiger Analytics is hiring a Remote Data Engineer

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership has been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

The person in this role will play a crucial part in building scalable and cost-effective data pipelines, data lakes, and analytics systems.

Key Responsibilities:

  • Data Ingestion: Implement data ingestion processes to collect data from various sources, including databases, streaming data, and external APIs.
  • Data Transformation: Develop ETL (Extract, Transform, Load) processes to transform and cleanse raw data into a structured and usable format for analysis.
  • Data Storage: Manage and optimize data storage solutions, including Amazon S3, Redshift, and other AWS storage services.
  • Data Processing: Utilize AWS services like AWS Glue, Amazon EMR, and AWS Lambda to process and analyze large datasets.
  • Data Monitoring and Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, cost-efficiency, and scalability.
  • Integration: Collaborate with data scientists, analysts, and other stakeholders to integrate AWS-based solutions into data analytics and reporting platforms.
  • Documentation: Maintain thorough documentation of data engineering processes, data flows, and system configurations.
  • Scalability: Design AWS-based solutions that can scale to accommodate growing data volumes and changing business requirements.
  • Cost Management: Implement cost-effective solutions by optimizing resource usage and recommending cost-saving measures.
  • Troubleshooting: Diagnose and resolve AWS-related issues to minimize downtime and disruptions.

Requirements:

  • Educational Background: A bachelor's degree in computer science, information technology, or a related field is typically required.
  • AWS Certifications: AWS certifications like AWS Certified Data Analytics - Specialty or AWS Certified Big Data - Specialty are highly beneficial.
  • Programming Skills: Proficiency in programming languages such as Python, Java, or Scala for data processing and scripting. Shell scripting and Linux knowledge would be preferred.
  • Database Expertise: Strong knowledge of AWS database services like Amazon Redshift, Amazon RDS, and NoSQL databases.
  • ETL Tools: Experience with AWS Glue or other ETL tools for data transformation.
  • Version Control: Proficiency in version control systems like Git.
  • Problem-Solving: Strong analytical and problem-solving skills to address complex data engineering challenges.
  • Communication Skills: Effective communication and collaboration skills to work with cross-functional teams.
  • Machine Learning: Knowledge of machine learning concepts is good to have.
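
As a minimal illustration of the extract-transform-load cycle described above, here is a sketch in plain Python with hypothetical records; a production pipeline on this stack would typically use AWS Glue or PySpark rather than the standard library:

```python
import csv
import io
import json

# Hypothetical raw records, standing in for an external API response.
RAW = json.dumps([
    {"id": 1, "amount": "19.99", "country": "us"},
    {"id": 2, "amount": "5.00", "country": "DE"},
    {"id": 3, "amount": None, "country": "us"},  # dirty record
])

def extract(payload: str) -> list:
    """Extract: parse the raw JSON payload into Python records."""
    return json.loads(payload)

def transform(records: list) -> list:
    """Transform: drop incomplete rows, normalise types and casing."""
    clean = []
    for r in records:
        if r["amount"] is None:
            continue  # cleansing step: skip records with missing amounts
        clean.append({"id": r["id"],
                      "amount": float(r["amount"]),
                      "country": r["country"].upper()})
    return clean

def load(records: list) -> str:
    """Load: serialise to CSV, standing in for a write to S3/Redshift."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "amount", "country"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

result = load(transform(extract(RAW)))
```

The same three-stage shape carries over when the extract step reads from a database or streaming source and the load step writes to S3 or Redshift.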

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

+30d

Data Quality Engineer

MLMid LevelFull TimeBachelor's degreesqlmobileuiqa

Pixalate, Inc. is hiring a Remote Data Quality Engineer

Data Quality Engineer - Pixalate, Inc. - Career Page

See more jobs at Pixalate, Inc.

Apply for this job

+30d

Data Engineer

IncreasinglyBengaluru, India, Remote
S3LambdaDesigngitjenkinspythonAWS

Increasingly is hiring a Remote Data Engineer

Job Description

Working experience in data integration and pipeline development

Qualifications

3+ years of relevant experience with AWS Cloud on data integration with Databricks, Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, MongoDB/DynamoDB ecosystems

Strong hands-on experience in Python development, especially PySpark, in the AWS Cloud environment.

Design, develop, test, deploy, maintain, and improve data integration pipelines.

Experience with Python and common Python libraries.

Strong analytical experience with the database in writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.

Strong experience with source control systems such as Git and Bitbucket, and with build and continuous integration tools such as Jenkins.

 

See more jobs at Increasingly

Apply for this job

+30d

Data Engineer

AmpleInsightIncToronto, Canada, Remote
DevOPSairflowsqlpython

AmpleInsightInc is hiring a Remote Data Engineer

Job Description

We are looking for a data engineer who is passionate about analytics and helping companies build and scale their data infrastructure. You enjoy working with data and are motivated to produce high quality data tools and pipelines that help empower other data scientists. You are experienced in architecting data ETL workflows and schemas. Critical thinking and problem-solving skills are essential for this role.

Qualifications

  • BS (or higher, e.g., MS, or PhD) in Computer Science, Engineering, Math, or Statistics
  • Hands-on experience working with user engagement, social, marketing, and/or finance data
  • Proficient in Python (e.g. Pandas, NumPy, scikit-learn), R, TensorFlow, amongst other data science related tools and libraries
  • Extensive experience working on relational databases, designing complex data schemas, and writing SQL queries
  • Deep knowledge on performance tuning of ETL Jobs, SQL, and databases
  • Working knowledge of Snowflake
  • Experience working with Airflow is a strong plus
  • DevOps experience is a plus
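
On the performance-tuning point above, a toy demonstration with Python's built-in sqlite3 (not any specific production database) shows how an index changes a query plan from a full table scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 100, i) for i in range(10_000)])

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, SQLite must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

# With the index in place, the planner switches to an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

The same scan-versus-seek distinction is what drives index and sort-key choices on warehouses like Snowflake or Redshift, even though the tooling differs.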

See more jobs at AmpleInsightInc

Apply for this job

+30d

Data Engineer

JLIConsultingVaughan, Canada, Remote
oracleazureapigitAWS

JLIConsulting is hiring a Remote Data Engineer

Job Description

Data Engineer Job Responsibilities:

 

•       Work with stakeholders to understand data sources and support the Data, Analytics and Reporting team strategy across our on-premises environment and enterprise AWS cloud solution

•       Work closely with Data, Analytics and Reporting Data Management and Data Governance teams to ensure all industry standards and best practices are met

•       Ensure metadata and data lineage is captured and compatible with enterprise metadata and data management tools and processes

•       Run quality assurance and data integrity checks to ensure accurate reporting and data records

•       Ensure ETL pipelines are produced with the highest quality standards, metadata and validated for completeness and accuracy

•       Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.

•       Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

•       Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.

•       Write unit/integration tests, contribute to the engineering wiki, and document work.

•       Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.

•       Define company data assets (data models) and the Spark/Spark SQL jobs that populate them.

•       Design data integrations and a data quality framework.

•       Design and evaluate open source and vendor tools for data lineage.

•       Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.

•       Focus on structured problem solving

•       Demonstrate phenomenal communication and business awareness

•       Work with ETL tools, querying languages, and data repositories

•       Support technical Data Management solutions

•       Provide support to the development and testing teams to resolve data issues
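
The quality-assurance and data-integrity responsibilities above amount to running rule-based checks over production data. Here is a minimal sketch with hypothetical rules; real deployments often use frameworks such as Great Expectations or Deequ:

```python
# A minimal sketch of rule-based data-quality checks.
# The rules and record shapes are hypothetical examples.
def run_quality_checks(rows: list) -> dict:
    """Return a named pass/fail result for each integrity rule."""
    ids = [r["id"] for r in rows]
    amounts = [r["amount"] for r in rows]
    return {
        "non_empty": len(rows) > 0,
        "unique_ids": len(ids) == len(set(ids)),
        "no_null_amounts": all(a is not None for a in amounts),
        "amounts_non_negative": all(a >= 0 for a in amounts if a is not None),
    }

good = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 0.0}]
bad = [{"id": 1, "amount": 10.0}, {"id": 1, "amount": None}]  # duplicate id, null amount

report_good = run_quality_checks(good)
report_bad = run_quality_checks(bad)
```

In a pipeline, a failing report would typically block the load step or page an on-call engineer rather than silently publishing bad data.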

Qualifications

•       Experience in database, storage, collection and aggregation models, techniques, and technologies – and how to apply them in business

•       Working knowledge of a source code control tool such as Git

•       Knowledge of file formats (e.g. XML, CSV, JSON), databases (e.g. Redshift, Oracle), and different types of connectivity is also very useful.

•       Working experience with the following Cloud platforms is a plus: Amazon Web Services, Google Cloud Platform, Azure

•       Working experience with data modeling, relational modeling, and dimensional modeling

•       Interpersonal skills: You have a way of speaking that engages your audience and instills confidence and credibility. You know how to leverage communication tools and methodologies. You can build relationships with internal and external team members, positioning yourself as a trusted advisor. You are always looking for ways to improve processes, and you always ensure your communications have been received and are clearly understood. Your commitment and focus influence those around you to do better.

See more jobs at JLIConsulting

Apply for this job

+30d

Data Center Design Engineer

CloudflareHybrid or Remote
jiraDesign

Cloudflare is hiring a Remote Data Center Design Engineer

About Us

At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

Available Location: Lisbon, Portugal; London, UK; Singapore or Remote US 

About the Role

We are seeking a Data Center Design Engineer to design Cloudflare’s pending and future infrastructure deployments for generational improvement in cost, quality, and speed of deployment. We are looking for someone who excels at progressing many projects in parallel, managing dynamic day to day priorities with many stakeholders, and has experience implementing and refining data center design best practices in a high growth environment.  Getting stuff done is a must!

The Data Center Strategy team is part of Cloudflare’s global Infrastructure (INF) team. The INF team grows and manages Cloudflare’s global data center/PoP footprint, enabling Cloudflare’s tremendous growth with compute, network, and data center infrastructure, utilizing an extensive range of global partner vendors.  

What you get to do in this role:

  • Translate data center capacity requirements into actionable white space design and/or rack plans within individual data center contract constraints for power and cooling
  • Manage implementation phase of cage projects with data center providers
  • Design low voltage structured cabling, fiber, cross-connect & conveyance infrastructure as well as any supporting infrastructure on the data center ceiling/floor
  • Work with supply chain team on rack integration plans and location deployment qualification
  • Work cross-functionally with Cloudflare data center engineering team and other internal teams (capacity planning, network strategy, security) to verify scope and solution and create repeatable standard installation procedures
  • Take ownership of and lead projects to design and implement data center expansions or new data centers on tight deadlines with minimal oversight 
  • Technical support in negotiations with external data center partners
  • Assist in RFP preparation, review and cost/engineering analysis
  • Review one-line diagrams and cooling equations for new and existing data centers (Data Center M&E)
  • Power component (PDU) review/approval for Hardware sourcing team
  • Implement, document and maintain power consumption tracking tools and fulfil ad-hoc reporting requests
  • Research new and innovative power efficiency technologies and programs
  • Travel up to 25% to perform infrastructure audits, validate data center construction work and buildouts, and participate in commercial processes.
  • Other duties as assigned

Requirements

  • Bachelor's degree or equivalent experience plus 5+ years of experience in data center mechanical and electrical design and operations/deployment/installation; P.E. certification or equivalent a plus
  • Experience in HVAC, Chilled Water Systems, Condenser Water Systems, Pump controls, Glycool/Glycols, AHU units (DX, split, RTU, CRAC, etc.), CRAH, Raised Floor Systems, HOT/COLD aisle containment and Building Management Systems
  • Understanding of basic electrical theory (voltage, current, power), basic circuit design & analysis, and single- and three-phase power systems
  • Familiarity with Data Center M&E infrastructure design concepts, electrical/UPS topologies, cooling methodologies (central plant, room cooling, high density thermal strategies)
  • Familiarity with industry standards for resilient Data Center design and Uptime Institute Tier Classifications
  • Excellent verbal, written communication and presentation skills
  • Experience working with multiple time zones and multiple cross-functional teams
  • Experience working on time sensitive projects with delivery responsibility under pressure

Bonus Points

  • Degree in electrical/mechanical engineering or IT a plus
  • Experience in large-scale mission critical facility infrastructure design, construction, commissioning, and/or operations a plus
  • Experience with industry standards, building codes and safety standards including UMC, NFPA, ASHRAE, UBC, UMC and LEED, Uptime Institute
  • Knowledge of programming languages a plus
  • JIRA, Confluence admin-level experience a plus
  • AutoCAD experience a plus
  • Experience with FLOTHERM or Tileflow a plus

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here's the deal - we don't store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer.  We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness.  All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities.  Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.  If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

+30d

Senior Data Engineer

carsalesMelbourne, Australia, Remote
SQSEC2LambdascalaairflowDesignpythonAWS

carsales is hiring a Remote Senior Data Engineer

Job Description

What you’ll do

  • Contributing to the delivery of scalable data architectures, and development & design best practices
  • Leading collaborations across data disciplines to develop, optimise and maintain data pipelines and solutions
  • Engaging actively in facilitating team-based problem-solving sessions and contributing to the development of best practices
  • Initiating and nurturing effective working relationships, acting as a trusted advisor on product analytics and commercial data solutions
  • Leading technical recommendations and decision-making while mentoring early-career engineers, playing a key role in growing the team's capabilities
  • Owning the delivery of their allocated initiatives within specified scope, times and budgets

Qualifications

What we are looking for?

Critical to success in the role is the ability to operate in the liminal space between business, data and technical practice.

  • An all-of-business ownership mindset over siloed success; leading with high levels of personal integrity and accountability
  • Ability to distil business and analytics requirements into well-defined engineering problems
  • Skilled at identifying appropriate software engineering methods (e.g. modularisations, abstractions) that make data assets tractable
  • Strong software engineering fundamentals (e.g. data structures, principles of software design, build & testing)
  • Strong data engineering experience (e.g. transformations, modelling, pipelines), grounded in the basics of an analytical discipline (e.g. analytics or science)
  • Skilled in designing and building pipelines using cloud services such as AWS EC2, Glue, Lambda, SNS, SQS, IAM, ECS or equivalent
  • Demonstrated experience with distributed technologies such as Airflow, HDFS, EMR
  • Proficient in two or more programming languages such as Python, Spark, Scala or similar

See more jobs at carsales

Apply for this job

+30d

Sr. Data Engineer - Data Analytics

R.S.ConsultantsPune, India, Remote
SQSLambdaBachelor's degreescalaairflowsqlDesigntypescriptpythonAWSNode.js

R.S.Consultants is hiring a Remote Sr. Data Engineer - Data Analytics

Job Description

We are looking for a Sr. Data Engineer for an international client. This is a 100% remote job. The person will be working from India and will be collaborating with a global team.

Total Experience: 7+ Years

Your role

  • Have key responsibilities within the requirements analysis, scalable & low latency streaming platform solution design, architecture, and end-to-end delivery of key modules in order to provide real-time data solutions for our product
  • Write clean, scalable code using Go, TypeScript/Node.js, Scala, Python, or SQL, and test and deploy applications and systems
  • Solve our most challenging data problems, in real-time, utilizing optimal data architectures, frameworks, query techniques, sourcing from structured and unstructured data sources.
  • Be part of an engineering organization delivering high quality, secure, and scalable solutions to clients
  • Involvement in product and platform performance optimization and live site monitoring
  • Mentor team members through giving and receiving actionable feedback.

Our tech. stack:

  • AWS (Lambda, SQS, Kinesis, KDA, Redshift, Athena, DMS, Glue, DynamoDB), Go/TypeScript, Airflow, Flink, Spark, Looker, EMR
  • A continuous deployment process based on GitLab

A little more about you:

  • A Bachelor's degree in a technical field (eg. computer science or mathematics). 
  • 3+ years experience with real-time, event driven architecture
  • 3+ years experience with a modern programming language such as Scala, Python, Go, Typescript
  • Experience designing complex data processing pipelines
  • Experience with data modeling (star schema, dimensional modeling, etc.)
  • Experience with query optimisation
  • Experience with Kafka is a plus
  • Experience shipping and maintaining code in production
  • You like sharing your ideas, and you're open-minded
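
On the star-schema point above, a toy dimensional model can be sketched with Python's built-in sqlite3 (hypothetical tables and data): a fact table holds measures and foreign keys, while dimension tables hold the descriptive attributes used for rollups:

```python
import sqlite3

# A toy star schema: one fact table referencing two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
INSERT INTO dim_date VALUES (1, '2024-01'), (2, '2024-02');
INSERT INTO dim_product VALUES (10, 'tools'), (20, 'parts');
INSERT INTO fact_sales VALUES (1, 10, 100.0), (1, 20, 50.0), (2, 10, 25.0);
""")

# Dimensional rollup: revenue by month and category, joining the
# fact table to both dimensions.
rows = conn.execute("""
    SELECT d.month, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.month, p.category
    ORDER BY d.month, p.category
""").fetchall()
```

The same shape scales up to warehouse engines like Redshift or Snowflake, where the dimension joins stay cheap because the descriptive attributes live outside the (much larger) fact table.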

Why join us?

Key moment to join in terms of growth and opportunities

Our people matter; work-life balance is important

Fast-learning environment, entrepreneurial and strong team spirit

45+ nationalities: cosmopolitan & multicultural mindset

Competitive salary package & benefits (health coverage, lunch, commute, sport)

DE&I Statement: 

We believe diversity, equity and inclusion, irrespective of origins, identity, background and orientations, are core to our journey. 

Qualifications

Hands-on experience in Scala/Python with data modeling and real-time/streaming data. Experience with complex data processing pipelines.

BE/ BTech in Computer Science

See more jobs at R.S.Consultants

Apply for this job