Lambda Remote Jobs

139 Results

+30d

Staff Engineer - Java

Octillion Media LLC · Bengaluru, India, Remote
EC2, Lambda, redis, nosql, Design, mongodb, api, java, docker, postgresql, MySQL, python, AWS

Octillion Media LLC is hiring a Remote Staff Engineer - Java

Job Description

Responsibilities

- Design and build Octillion's sophisticated video advertising solutions, with a core emphasis on back-end technologies

- Willingness to take end-to-end ownership and be accountable for the success of the product.

- Experience in architecting and building real-time bidding engines and large-scale video ads platforms.

- Break down business requirements into technical solutions at global scale

- Participate in daily stand-ups and provide time estimates.

 

Qualifications

- A degree in Computer Science, Software Engineering, Information Technology or related fields

- Strong software programming capabilities, with good code design and coding style.

- 5+ years of experience with at least one of the following programming languages: Java, Python, or Go

- Knowledge of building high-throughput systems is a big plus (our servers handle 50k+ web requests per second)

- Deep understanding of data structures, algorithm design and analysis, networking, data security, and highly scalable systems design

- Experience with big data, data pipelines, loggers, and the ELK stack

- Familiarity with distributed caches, message middleware, RPC frameworks, load balancing, security defenses, and other related technologies.

- Experience working with relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, Redis, Hazelcast, Cassandra, Aerospike, or other NoSQL databases)

- Experience with AWS technologies such as EC2, Lambda, Elastic Beanstalk, API Gateway, CloudFront, Fargate/ECS (tasks, services, clusters), Docker containers, and log analysis (Athena, Parquet)

- Big Data/ML experience is a plus

- Experience with RTB, Google IMA SDK, VAST, VPAID, and header bidding is a plus.


 

See more jobs at Octillion Media LLC

Apply for this job

+30d

Senior HPC Systems Engineer

Lambda · Remote (US & CAN)
ML, Lambda, Design, c++, kubernetes, linux, python

Lambda is hiring a Remote Senior HPC Systems Engineer

Lambda's GPU cloud is used by deep learning engineers at Stanford, Berkeley, and Carnegie Mellon. Lambda's on-prem systems power research and engineering at Intel, Microsoft, Kaiser Permanente, major universities, and the Department of Defense.

If you'd like to build the world's best deep learning cloud, join us.

What You’ll Do

  • Design and architect the state-of-the-art AI supercomputers powering our cloud
  • Introduce technology and software to improve the performance, resiliency, and quality of service of our HPC storage and networking infrastructure
  • Work closely with our ML team to benchmark, tune, and optimize our hypervisors, network, and storage
  • Set up monitoring, logging and alerting to ensure high availability and observability
  • Provide guidance and represent the interests of our HPC customers

You

  • Have expertise with architecting, operating, and debugging large scale HPC network and storage infrastructure, ideally using MPI, NCCL, RDMA, Infiniband, and parallel file systems
  • Are experienced with building complex, high-quality software using Python
  • Possess a deep understanding of Linux fundamentals, especially its networking stack
  • Have experience with large GPU clusters (strongly preferred)
  • Have experience with virtualization and Kubernetes
  • Come from a strong engineering background - Computer Science, Electrical Engineering, Mathematics, Physics

You will be successful in this role if you

  • Have led and taken full ownership over large, ambiguous, cross team projects from conception to production
  • Enjoy moving fast and making a large business impact
  • Value working on a team of high performers that hold each other accountable
  • Are a self-starter, curious, and not afraid to ask when in doubt
  • Are a quick learner and enjoy learning new technologies
  • Value working on a low ego team that emphasizes strong communication, collaboration, and getting to the right answer as a team 

Salary Range Information 

Based on market data and other factors, the salary range for this position is $180,000 - $250,000. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.

About Lambda

  • We offer generous cash & equity compensation
  • Investors include Gradient Ventures, Google’s AI-focused venture fund
  • We are experiencing extremely high demand for our systems, with quarter over quarter, year over year profitability
  • Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
  • We have a wildly talented team of 300, and growing fast
  • Health, dental, and vision coverage for you and your dependents
  • Commuter/Work from home stipends for select roles
  • 401k Plan with 2% company match
  • Flexible Paid Time Off Plan that we all actually use

A Final Note:

You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer

Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.

See more jobs at Lambda

Apply for this job

+30d

Senior HPC Operations Engineer

Lambda · Remote (United States)
Lambda, Bachelor's degree, Design, c++, docker, kubernetes, linux

Lambda is hiring a Remote Senior HPC Operations Engineer

Lambda's GPU cloud is used by deep learning engineers at Stanford, Berkeley, and Carnegie Mellon. Lambda's on-prem systems power research and engineering at Intel, Microsoft, Kaiser Permanente, major universities, and the Department of Defense.

If you'd like to build the world's best deep learning cloud, join us.

What You’ll Do

  • Remotely provision and manage large-scale HPC clusters for AI workloads (up to many thousands of nodes)
  • Remotely install and configure operating systems, firmware, software, and networking on HPC clusters both manually and using automation tools
  • Troubleshoot and resolve HPC cluster issues working closely with physical deployment teams on-site
  • Provide context and details to an automation team to further automate the deployment process
  • Provide clear and detailed requirements back to HPC design team on gaps and improvement areas, specifically in the areas of simplification, stability, and operational efficiency
  • Contribute to the creation and maintenance of Standard Operating Procedures
  • Provide regular and well-communicated updates to project leads throughout each deployment
  • Mentor and assist less-experienced team members
  • Stay up-to-date on the latest HPC/AI technologies and best practices

You

  • Have 10+ years of experience in managing HPC clusters
  • Have 10+ years of everyday Linux experience
  • Have a strong understanding of HPC architecture (compute, networking, storage)
  • Have an innate attention to detail
  • Have experience with Bright Cluster Manager or similar cluster management tools
  • Are an expert in configuring and troubleshooting:
    • SFP+ fiber, InfiniBand (IB), and 100 GbE network fabrics
    • Ethernet, switching, power infrastructure, GPUDirect, RDMA, NCCL, and Horovod environments
    • Linux-based compute nodes, firmware updates, driver installation
    • SLURM, Kubernetes, or other job scheduling systems
  • Work well under deadlines and structured project plans
  • Have excellent problem-solving and troubleshooting skills
  • Have the flexibility to travel to our North American data centers as on-site needs arise or as part of training exercises
  • Are able to work both independently and as part of a team

Nice to Have

  • Experience with machine learning and deep learning frameworks (PyTorch, TensorFlow) and benchmarking tools (DeepSpeed, MLPerf)
  • Experience with containerization technologies (Docker, Kubernetes)
  • Experience working with the technologies that underpin our cloud business (GPU acceleration, virtualization, and cloud computing)
  • Keen situational awareness in customer situations, employing diplomacy and tact
  • Bachelor's degree in EE, CS, Physics, Mathematics, or equivalent work experience

Salary Range Information 

Based on market data and other factors, the salary range for this position is $170,000-$230,000. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description. 

 

About Lambda

  • We offer generous cash & equity compensation
  • Investors include Gradient Ventures, Google’s AI-focused venture fund
  • We are experiencing extremely high demand for our systems, with quarter over quarter, year over year profitability
  • Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
  • We have a wildly talented team of 300, and growing fast
  • Health, dental, and vision coverage for you and your dependents
  • Commuter/Work from home stipends for select roles
  • 401k Plan with 2% company match
  • Flexible Paid Time Off Plan that we all actually use

A Final Note:

You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer

Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.

See more jobs at Lambda

Apply for this job

+30d

Senior Software Engineer - Cloud

Lambda · Remote (US & CAN)
Lambda, Design, c++, linux, python, AWS

Lambda is hiring a Remote Senior Software Engineer - Cloud

Lambda's GPU cloud is used by deep learning engineers at Stanford, Berkeley, and Carnegie Mellon. Lambda's on-prem systems power research and engineering at Intel, Microsoft, Kaiser Permanente, major universities, and the Department of Defense.

If you'd like to build the world's best deep learning cloud, join us.

What You’ll Do

  • Build software for training models across hundreds of GPUs interconnected with state-of-the-art networking fabric
  • Build core cloud features like VMs, VPCs, firewalls, distributed file systems within our data centers

Qualifications

  • 8+ years of experience implementing business-critical product features from conception to launch using Python
  • 8+ years of experience contributing to the architecture and design of resilient, large scale distributed systems
  • Strong understanding of public cloud features (e.g. SDN, block storage, distributed file systems, identity management)
  • Strong understanding of Linux (e.g. networking, process management, security, virtualization, systemd).
  • Strong engineering background - EECS preferred; also Mathematics, Software Engineering, or Physics

You will be successful in this role if you

  • Have led and taken full ownership over large, ambiguous, cross team projects from conception to production
  • Enjoy moving fast and making a large business impact
  • Value working on a team of high performers that hold each other accountable
  • Are a self-starter, curious, and not afraid to ask when in doubt
  • Are a quick learner and enjoy learning new technologies
  • Value working on a low ego team that emphasizes strong communication, collaboration, and getting to the right answer as a team 
  • Care deeply about well-tested code

Salary Range Information 

Based on market data and other factors, the salary range for this position is $185,000-$280,000. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.

About Lambda

  • We offer generous cash & equity compensation
  • Investors include Gradient Ventures, Google’s AI-focused venture fund
  • We are experiencing extremely high demand for our systems, with quarter over quarter, year over year profitability
  • Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
  • We have a wildly talented team of 300, and growing fast
  • Health, dental, and vision coverage for you and your dependents
  • Commuter/Work from home stipends for select roles
  • 401k Plan with 2% company match
  • Flexible Paid Time Off Plan that we all actually use

A Final Note:

You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer

Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.

See more jobs at Lambda

Apply for this job

+30d

Backend Software Engineer, Java (remote, based in the US)

Gremlin · Remote, based in the US
Lambda, sql, oracle, Design, java, c++, docker, typescript, kubernetes, linux, AWS, javascript, backend

Gremlin is hiring a Remote Backend Software Engineer, Java (remote, based in the US)

Today’s complex, fast-paced systems have become a minefield of reliability risks—any of which could cause an outage that costs millions and destroys customer confidence. That’s why high-availability teams use Gremlin to find and fix reliability risks before they become incidents.

The Gremlin Reliability Platform helps software teams proactively monitor and test their systems for common reliability risks, build and enforce reliability standards, and automate their reliability practices organization-wide. As the industry leader in Chaos Engineering and reliability testing, we work with hundreds of the world’s largest organizations where high availability is non-negotiable.

About the Role of the Senior Software Engineer

As a Software Engineer at Gremlin, you will have the opportunity to improve the reliability of the internet at large by developing Chaos Engineering tooling. You will be able to leverage your engineering experience to inform product design as well as solve complex technical problems that directly impact our customers (which range from the Fortune 500 to smaller organizations). You will work closely with a small, talented team focused on quality, delivery, and predictability.

In this role, you’ll get to:

  • Work closely with engineers, product managers, and other stakeholders to design and build the latest and greatest in Chaos Engineering tooling
  • Leverage strong collaboration and communication skills to deliver new features within a remote culture
  • Partner with product and other business units to understand business problems and present technical solutions and tradeoffs
  • Actively mentor and grow your teammates
  • Care deeply about the customer experience

We'll expect you to have:

  • 5+ years of professional Java software engineering experience
  • Experience in Go & systems-level programming
  • Experience with cloud technologies, e.g., AWS, Lambda, serverless; experience with other cloud providers such as Google or Oracle is also considered
  • Experience with DynamoDB and/or other NoSQL databases, or with any major relational database
  • Experience with infrastructure & systems-level technologies, e.g., Linux, Docker, Kubernetes, OpenShift
  • Experience architecting complex distributed systems and integrating with external systems
  • Strong advocate and practitioner of automated testing, CI/CD, and engineering best practices

Bonus Experience:

  • Has been on-call and participated in an incident management program
  • Familiarity with modern JavaScript frameworks & web development practices: e.g., React, TypeScript, etc.
  • Experience taking features from concept to full production release

*The role does not offer employment sponsorship benefits.

**If you don't think you meet all of the criteria above but are still interested in the job, please apply. Nobody checks every box—we’re looking for candidates who are particularly strong in a few areas, and have some interest and capabilities in others.

About Gremlin:

Gremlin is a team of industry veterans and people eager to learn from one another. We set the standard for reliability and equip leading organizations with the mindset and expertise needed to drive reliability improvements that move the world forward. We’re backed by top-tier investors Index Ventures, Amplify Partners, and Redpoint Ventures. Our customers love us, and we’re thrilled to be a partner in their success.

What Do We Care About:

  • We Care about our People
    People are our critical differentiators. The company strives to treat our people with respect, empathy, and dignity. We expect that our people will treat each other similarly. In both cases, we will assume good intent. All are welcome at Gremlin. We know our differences make us stronger and that our best ideas and contributions can come from anyone at any level.
  • We Care about Collaboration
    Gremlin is strongest when we come together as one team with shared goals. Be the glue, not the glitter. But as a remote company, teamwork and collaboration won’t happen by accident. We approach every challenge as a shared challenge. We rely on each other for diverse perspectives and creative ideas. We celebrate our wins as a team.
  • We Care about Results
    Be high productivity, low drama. Results matter. To keep our pace, everyone owns the outcomes of their actions and takes action when needed. We reward speed over perfection. We empower each other to iterate and experiment.

You are welcome at Gremlin for who you are. The more voices and ideas we have represented in our business, the more we will all flourish, contribute, and build a more reliable internet. Gremlin is a place where everyone can grow and is encouraged. However you identify and whatever background you bring with you, please apply if this sounds like a role that would make you excited to come into work every day. It’s in our differences that we will find the power to keep building a more reliable internet by building and designing tools used by the best companies in the world.

Visit our website to learn more - https://www.gremlin.com/about

See more jobs at Gremlin

Apply for this job

+30d

Senior Software Engineer, Frontend (remote, based in the US)

Gremlin · Remote, based in the US
Lambda, agile, Design, java, c++, docker, typescript, css, kubernetes, linux, AWS, javascript, frontend

Gremlin is hiring a Remote Senior Software Engineer, Frontend (remote, based in the US)

Job Description: 

Today’s complex, fast-paced systems have become a minefield of reliability risks—any of which could cause an outage that costs millions and destroys customer confidence. That’s why high-availability teams use Gremlin to find and fix reliability risks before they become incidents.

The Gremlin Reliability Platform helps software teams proactively monitor and test their systems for common reliability risks, build and enforce reliability standards, and automate their reliability practices organization-wide. As the industry leader in Chaos Engineering and reliability testing, we work with hundreds of the world’s largest organizations where high availability is non-negotiable.

About the Role of the Senior Software Engineer, Frontend 

As a Senior Software Engineer, Frontend at Gremlin, you will have the opportunity to improve the reliability of the internet at large by developing Reliability Engineering tooling. You will be able to leverage your engineering experience to inform product design as well as solve complex technical problems that directly impact our customers (which range from the Fortune 500 to smaller organizations). You will work closely with a small, talented team focused on quality, delivery, and predictability with an emphasis on providing our customers a great user experience.

In this role, you'll get to:

  • Work closely with engineers, designers, product managers, and other stakeholders to design and build the latest and greatest in Chaos Engineering tooling
  • Leverage strong collaboration and communication skills to deliver new features within a remote culture
  • Partner with design to understand the customer’s needs and design interfaces and experiences that lead our customers to success
  • Partner with product and other business units to understand business problems and present technical solutions and tradeoffs
  • Actively mentor and grow your teammates
  • Care deeply about the customer experience

We'll expect you to have:

  • Experience as a self-driven and collaborative problem solver with strong communication skills
  • 7+ years of professional frontend software engineering experience in modern technologies (TypeScript, JavaScript, React, CSS, etc.)
  • Experience or strong interest in infrastructure & systems level technologies: e.g., Linux, Docker, Kubernetes, OpenShift, etc.
  • Experience with Java software development
  • Experience with agile development environments and practices
  • Strong advocate and practitioner of unit testing and integration testing (Jest/Cypress), CI/CD, code quality, and engineering best practices
  • Leverage your own design skills to collaborate with designers and stakeholders to implement designs and features to required specifications
  • Strong at breaking down ambiguous problems into concrete actions and milestones

Bonus Experience:

  • Experience in cloud technologies: e.g., AWS, DynamoDB, Lambda, Serverless
  • Has been on-call and participated in an incident management program

*The role does not offer employment sponsorship benefits.

**If you don't think you meet all of the criteria above but are still interested in the job, please apply. Nobody checks every box—we’re looking for candidates who are particularly strong in a few areas, and have some interest and capabilities in others.

About Gremlin:

Gremlin is a team of industry veterans and people eager to learn from one another. We set the standard for reliability and equip leading organizations with the mindset and expertise needed to drive reliability improvements that move the world forward. We’re backed by top-tier investors Index Ventures, Amplify Partners, and Redpoint Ventures. Our customers love us, and we’re thrilled to be a partner in their success.

What Do We Care About:

  • We Care about our People
    People are our critical differentiators. The company strives to treat our people with respect, empathy, and dignity. We expect that our people will treat each other similarly. In both cases, we will assume good intent. All are welcome at Gremlin. We know our differences make us stronger and that our best ideas and contributions can come from anyone at any level.
  • We Care about Collaboration
    Gremlin is strongest when we come together as one team with shared goals. Be the glue, not the glitter. But as a remote company, teamwork and collaboration won’t happen by accident. We approach every challenge as a shared challenge. We rely on each other for diverse perspectives and creative ideas. We celebrate our wins as a team.
  • We Care about Results
    Be high productivity, low drama. Results matter. To keep our pace, everyone owns the outcomes of their actions and takes action when needed. We reward speed over perfection. We empower each other to iterate and experiment.

You are welcome at Gremlin for who you are. The more voices and ideas we have represented in our business, the more we will all flourish, contribute, and build a more reliable internet. Gremlin is a place where everyone can grow and is encouraged. However you identify and whatever background you bring with you, please apply if this sounds like a role that would make you excited to come into work every day. It’s in our differences that we will find the power to keep building a more reliable internet by building and designing tools used by the best companies in the world.

Visit our website to learn more - https://www.gremlin.com/about

See more jobs at Gremlin

Apply for this job

+30d

AWS Senior Cloud Architect

Devoteam · Milano, Italy, Remote
DevOPS, EC2, Lambda, terraform, nosql, sql, ansible, azure, jenkins, AWS

Devoteam is hiring a Remote AWS Senior Cloud Architect

Job Description

Within the Cloud practice, the AWS Cloud Architect is responsible for designing and implementing cloud-native applications and services, bringing the competencies and skills to provide technical and architectural guidance across the AWS cloud services landscape. They promote and deliver cloud-migration, cloud-transformation, and modernization projects, and support the adoption of cloud-native practices and models, with knowledge of DevOps tools and frameworks such as Kubernetes. They provide consulting and strategic support to clients looking both to migrate legacy applications and to develop new cloud-native applications.

Qualifications

- 3-5 years of experience with AWS (knowledge of Azure and GCP is appreciated), including at least 3 years of hands-on experience with the following AWS solutions: EC2, Lambda, CloudWatch, RDS, DynamoDB, Migration Hub, Control Tower, Organizations

- Technical and architectural skills and experience with AWS, with the ability to balance technical and economic requirements;

- Strong expertise in AWS technologies, cloud architecture, and integration methodologies;

- Experience in the design, planning, and implementation of cloud migration and application modernization projects;

- Knowledge of the hyperscalers' native management tools;

- Ability to work in a team and, when required, to technically lead the execution of cloud adoption and transformation projects;

- Knowledge of solutions, architectures, and technologies for servers, storage, backup, networking, security, and virtualization, as well as of the major OS and DBMS versions (SQL, NoSQL);

- Experience with container-based and/or serverless architectures and services;

- Experience in designing and provisioning cloud services using IaC methodologies and tools, both hyperscaler-native and third-party (e.g., Terraform, CloudFormation);

- CI/CD tools: GitLab, Jenkins, AWS CodePipeline, etc.;

- Knowledge of one or more configuration management tools, e.g., Chef, Ansible;

- Knowledge of and/or certifications in microservices architectures and technologies: Kubernetes/OpenShift, EKS, etc.

See more jobs at Devoteam

Apply for this job

+30d

Senior Data Engineer

SmartMessage · İstanbul, TR - Remote
ML, S3, SQS, Lambda, Master’s Degree, nosql, Design, mongodb, azure, python, AWS

SmartMessage is hiring a Remote Senior Data Engineer

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We are looking to expand our people capabilities, build on our success in developing high-end solutions beyond existing boundaries, and establish our brand as a global powerhouse.

We are free to work from wherever we want and go to the office whenever we like!!!

What is the role?

We are looking for a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing data pipelines, noSQL databases, and cloud-based data platforms. You will work closely with data scientists and other engineers to design and implement scalable data solutions.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and architectures.
  • Implement data lake solutions on cloud platforms.
  • Develop and manage noSQL databases (e.g., MongoDB, Cassandra).
  • Work with graph databases (e.g., Neo4j) and big data technologies (e.g., Hadoop, Spark).
  • Utilize cloud services (e.g., S3, Redshift, Lambda, Kinesis, EMR, SQS, SNS).
  • Ensure data quality, integrity, and security.
  • Collaborate with data scientists to support machine learning and AI initiatives.
  • Optimize and tune data processing workflows for performance and scalability.
  • Stay up-to-date with the latest data engineering trends and technologies.

Detailed Responsibilities and Skills:

  • Business Objectives and Requirements:
    • Engage with business IT and data science teams to understand their needs and expectations from the data lake.
    • Define real-time analytics use cases and expected outcomes.
    • Establish data governance policies for data access, usage, and quality maintenance.
  • Technology Stack:
    • Real-time data ingestion using Apache Kafka or Amazon Kinesis.
    • Scalable storage solutions such as Amazon S3, Google Cloud Storage, or Hadoop Distributed File System (HDFS).
    • Real-time data processing using Apache Spark or Apache Flink.
    • NoSQL databases like Cassandra or MongoDB, and specialized time-series databases like InfluxDB.
  • Data Ingestion and Integration:
    • Set up data producers for real-time data streams.
    • Integrate batch data processes to merge with real-time data for comprehensive analytics.
    • Implement data quality checks during ingestion.
  • Data Processing and Management:
    • Utilize Spark Streaming or Flink for real-time data processing (a minimal sketch follows this list).
    • Enrich clickstream data by integrating with other data sources.
    • Organize data into partitions based on time or user attributes.
  • Data Lake Storage and Architecture:
    • Implement a multi-layered storage approach (raw, processed, and aggregated layers).
    • Use metadata repositories to manage data schemas and track data lineage.
  • Security and Compliance:
    • Implement fine-grained access controls.
    • Encrypt data in transit and at rest.
    • Maintain logs of data access and changes for compliance.
  • Monitoring and Maintenance:
    • Continuously monitor the performance of data pipelines.
    • Implement robust error handling and recovery mechanisms.
    • Monitor and optimize costs associated with storage and processing.
  • Continuous Improvement and Scalability:
    • Establish feedback mechanisms to improve data applications.
    • Design the architecture to scale horizontally.
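
To make the ingestion-and-processing items above concrete, here is a minimal, hypothetical sketch of a Kafka-to-data-lake stream using PySpark Structured Streaming. The topic name, bucket path, and event schema are illustrative assumptions, not details from the posting:

```python
# Minimal sketch: read a clickstream topic from Kafka, parse it, and write it
# to the raw layer of a data lake partitioned by event date.
# "clickstream" and "s3a://example-data-lake" are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, to_date
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

# Assumed shape of each clickstream event.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("url", StringType()),
    StructField("event_time", TimestampType()),
])

# Real-time ingestion from Kafka.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Parse the JSON payload and derive a partition column.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", to_date(col("event_time")))
)

# Land the stream in the raw layer, partitioned by time, with checkpointing.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-data-lake/raw/clickstream/")
    .option("checkpointLocation", "s3a://example-data-lake/_checkpoints/clickstream/")
    .partitionBy("event_date")
    .start()
)
query.awaitTermination()
```

The same structure would apply with Flink or Kinesis in place of Spark and Kafka; only the ingestion and sink connectors change.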

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 5+ years of experience in data engineering or related roles.
  • Proficiency in noSQL databases (e.g., MongoDB, Cassandra) and graph databases (e.g., Neo4j).
  • Strong experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Hands-on experience with big data technologies (e.g., Hadoop, Spark).
  • Proficiency in Python and data processing frameworks.
  • Experience with Kafka, ClickHouse, Redshift.
  • Knowledge of ETL processes and data integration.
  • Familiarity with AI, ML algorithms, and neural networks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Senior Data Scientist

SmartMessage · İstanbul, TR - Remote
ML, S3, SQS, Lambda, Master’s Degree, nosql, mongodb, azure, python, AWS

SmartMessage is hiring a Remote Senior Data Scientist

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We are looking to expand our people capabilities, build on our success in developing high-end solutions beyond existing boundaries, and establish our brand as a global powerhouse.

We are free to work from wherever we want and go to the office whenever we like!!!

What is the role?

We are seeking an innovative and analytical Senior Data Scientist to join our growing team. The ideal candidate will have a strong background in machine learning, AI, and data analysis. You will work on developing models and algorithms to enhance our RTDM capabilities and drive data-driven decision-making.

Key Responsibilities:

  • Develop, implement, and maintain machine learning models and algorithms.
  • Work with large datasets to extract insights and drive data-driven decisions.
  • Collaborate with data engineers to build scalable data solutions.
  • Utilize cloud-based data platforms (e.g., S3, Redshift, Lambda, Kinesis, EMR).
  • Conduct exploratory data analysis and feature engineering.
  • Choose appropriate algorithms based on the problem type and data characteristics.
  • Implement and optimize AI and neural network models.
  • Create data visualizations and reports to communicate findings.
  • Stay current with the latest research and advancements in data science and AI.
  • Mentor and guide junior data scientists and analysts.

Technical Expertise:

  • Proficiency in Python and data science libraries (e.g., TensorFlow, scikit-learn, PyTorch).
  • Strong experience with noSQL databases (e.g., MongoDB, Cassandra) and big data technologies (e.g., Spark, Hadoop).
  • Experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Knowledge of data engineering processes and data integration.
  • Familiarity with graph databases (e.g., Neo4j) and message queues (e.g., Kafka, SQS).
  • Experience with a wide range of ML and AI algorithms (a minimal supervised-learning example follows this list):
    • Supervised Learning: Linear Regression, Logistic Regression, SVM, Naive Bayes, Decision Trees, Random Forests, Gradient Boosting Machines (GBM), AdaBoost, K-Nearest Neighbors (KNN), Neural Networks.
    • Unsupervised Learning: K-Means Clustering, Hierarchical Clustering, Principal Component Analysis (PCA), Anomaly Detection, Autoencoders, Generative Adversarial Networks (GANs).
    • Reinforcement Learning: Q-Learning, Deep Q-Networks (DQN), Policy Gradient Methods, Actor-Critic Methods.
    • Deep Learning: Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory Networks (LSTMs), Transformer Models (e.g., BERT, GPT), Capsule Networks.
    • Predictive Recommendation Engines: Collaborative Filtering, Content-Based Filtering, Hybrid Systems.
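
As a minimal illustration of the supervised-learning items above (synthetic data and hypothetical parameters, not part of the posting), a model is fit and then evaluated with an appropriate metric:

```python
# Minimal sketch: train a random forest on synthetic data and evaluate it
# with ROC AUC. All data and hyperparameters here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real customer data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate with a ranking metric rather than raw accuracy.
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```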

Qualifications:

  • Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, or related field.
  • 5+ years of experience in data science or related roles.
  • Understand the business problem and its relevance to business objectives.
  • Evaluate model performance using appropriate metrics.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Linux Support Engineer - Philippines

Lambda · Remote - Philippines
ML, Lambda, azure, metal, c++, linux, python, AWS

Lambda is hiring a Remote Linux Support Engineer - Philippines

Lambda's GPU cloud is used by deep learning engineers at Stanford, Berkeley, and Carnegie Mellon. Lambda's on-prem systems power research and engineering at Intel, Microsoft, Kaiser Permanente, major universities, and the Department of Defense.

If you'd like to build the world's best deep learning cloud, join us.

What You’ll Do

  • Be the first point of contact for all incoming technical support questions and handle all customer interactions with understanding, empathy, and transparency.
  • Troubleshoot OS, hardware, and Lambda Stack issues for customers and provide guidance on the best technical solutions that suit their needs.
  • Route and escalate tickets, as needed, to appropriate teams and departments while owning customer communication throughout the issue lifecycle.
  • Work with our technical writing team to document solutions to common problems to allow for future customer self-service.
  • Provide feedback to internal teams on technical issues our customers are facing and, above all, be the customer’s advocate.
  • Work together in a cohesive, customer-first collaborative team environment, sharing your skills, knowledge, and experience.

You

  • Have Linux administration experience in bare-metal, virtualized, and/or cloud environments.
  • Are familiar with private or hybrid cloud environments, such as Azure, AWS, and/or OCI.
  • Have experience with monitoring and alerting for enterprise and cloud environments.
  • Have Shell and Python scripting proficiency.
  • Have a strong ability to curate and adhere to technical standard operating procedures.
  • Possess excellent written and oral communication skills.
  • Have proven experience handling multiple customer interactions in a fast-paced environment.

Nice to Have

  • Familiarity with datacenter level hardware, including GPUs.
  • Familiarity with ML / AI / Deep Learning.
  • Experience with Zendesk ticketing.
  • Wide flexibility for scheduling as we push for 24/7 support availability.

About Lambda

  • We offer generous cash & equity compensation
  • Investors include Gradient Ventures, Google’s AI-focused venture fund
  • We are experiencing extremely high demand for our systems, with quarter over quarter, year over year profitability
  • Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
  • We have a wildly talented team of 300, and growing fast
  • Health, dental, and vision coverage for you and your dependents
  • Commuter/Work from home stipends for select roles
  • 401k Plan with 2% company match
  • Flexible Paid Time Off Plan that we all actually use

A Final Note:

You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer

Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.

See more jobs at Lambda

Apply for this job

+30d

Principal Software Engineer I - Inventory Management Systems

Stitch Fix · Remote, USA
DevOPS, S3, Lambda, TDD, agile, terraform, postgres, Design, slack, graphql, ruby, typescript, AWS, javascript, backend

Stitch Fix is hiring a Remote Principal Software Engineer I - Inventory Management Systems

 

About the Team

The Inventory Management Systems (IMS) Team at Stitch Fix is crucial to our mission of delivering personalized styling services and a seamless client experience. Our team is responsible for designing, developing, and maintaining advanced inventory management solutions that ensure the efficient tracking, control, and optimization of our inventory across the entire supply chain. By leveraging cutting-edge technologies and innovative approaches, we enable Stitch Fix to manage inventory levels accurately, minimize costs, and meet client demand effectively. We build modern software with modern techniques like TDD, continuous delivery, DevOps, and service-oriented architecture. We focus on high-value products that solve clearly identified problems but are designed in a sustainable way and deliver long term value.

About the Role

Stitch Fix’s Inventory Management Systems (IMS) team is looking for a dynamic and forward-thinking Principal Engineer who is dedicated to solving complex inventory challenges. You will work within a distributed team of 4-8 software engineers and cross-functional partners including product, design, algorithms and operations. You're expected to have strong written communication skills and be able to develop strong working relationships with coworkers and business partners. This is a remote position available within the United States. We operate in an agile-inspired manner; collaborating across multiple time zones. You will have the opportunity to develop your non-technical skills by mentoring engineers on your team, leading projects your team is responsible for, and influencing the roadmap of our team. You’ll also participate in our team’s on-call rotation. You will also have the opportunity to be involved in engineering-wide initiatives that aim to improve our culture & developer experience.

You're excited about this opportunity because you will…

  • Work collaboratively as a distributed team—we are a primarily remote team and we use GitHub, Slack, and video conferencing extensively to collaborate.
  • Be at the forefront of tech and fashion, helping Stitch Fix redefine the shopping experience for the next generation.
  • Lead a team in designing solutions that enable our business.
  • Help design, develop, and grow the foundation of client data at Stitch Fix.
  • Have a significant impact on understanding client needs and preferences by building flexible, scalable systems.
  • Collaborate with stakeholders while leading the technical discovery, decision-making, and project execution.
  • Play a key role in steering design reviews and overseeing solution implementation. 
  • Engage actively in project planning and team ceremonies.
  • Proactively communicate status updates or changes to the scope or timeline of projects to stakeholders and leadership. 
  • Share the responsibility of directing the team’s investment in impactful directions.
  • Contribute to a culture of technical collaboration and scalable, resilient systems.
  • Lead the design of complex systems, recommend solutions and 3rd party integrations, and provide input on technical design documents & project plans
  • Model consistently sustainable results against measurable goals. 
  • Break down projects into actionable milestones.
  • Provide technical leadership, mentorship, pairing opportunities, timely feedback, and code reviews to encourage the growth of others.
  • Invest in the professional development and career growth of your teammates and peers.
  • Frame business problems using high-quality data analysis and empirical evidence for leadership.
  • Find new and better ways of doing things that align with business priorities.
  • Influence other engineers toward right-sized solutions.

We’re excited about you because…

  • Have roughly 10+ years of professional programming experience and are comfortable with multiple modern software development languages.
  • 2+ years of Go experience is preferred.
  • Have strong skills and hands-on experience in backend systems within large-scale service-oriented architectures.
  • Have 3+ years of experience in technical leadership - including driving technical decisions and guiding broader project goals.
  • Experience in integrating and managing third-party APIs, with a strong focus on ensuring seamless data flow, robust error handling, and ensuring business continuity in case of external service failures.
  • Have excellent analytical skills as well as communication skills both verbal and written.
  • Possess an end-to-end mindset, breaking through team silos to deliver best global outcomes.
  • Treasure helping your team members grow and learn.
  • Take initiative and operate with accountability.
  • Are motivated by solving problems and finding creative client-focused solutions.
  • Build high-quality solutions and are pragmatic about weighing project scope and value.
  • Are flexible, dedicated to your craft, and curious.
  • Have expertise in designing high-scale distributed systems, including microservice architecture, containerization and orchestration
  • Might have experience with GraphQL schema design.
  • Might have experience working remotely alongside a distributed software engineering team.

Technologies we rely on to pursue solutions to business problems include things like:

  • Go, Ruby, Rails
  • React, JavaScript, TypeScript
  • GraphQL and Postgres
  • Kafka
  • AWS services such as Lambda, S3, CloudWatch
  • Terraform

Why you'll love working at Stitch Fix...

  • We are a group of bright, kind people who are motivated by challenge. We value integrity, innovation and trust. You’ll bring these characteristics to life in everything you do at Stitch Fix.
  • We cultivate a community of diverse perspectives— all voices are heard and valued.
  • We are an innovative company and leverage our strengths in fashion and tech to disrupt the future of retail. 
  • We win as a team, commit to our work, and celebrate grit together because we value strong relationships.
  • We boldly create the future while keeping equity and sustainability at the center of all that we do. 
  • We are the owners of our work and are energized by solving problems through a growth mindset lens. We think broadly and creatively through every situation to create meaningful impact.
  • We offer comprehensive compensation packages and inclusive health and wellness benefits.

About Stitch Fix

We're changing the industry and bringing personal styling to every body. We believe in a service and a workplace where you can show up as your best, most authentic self. The Stitch Fix experience is not merely curated—it’s truly personalized to each client we style. We are changing the way people find what they love. We’re disrupting the future of retail with the precision of data science by combining it with human instinct to find pieces that fit our client’s unique style. This novel juxtaposition attracts a highly diverse group of talented people who are both thinkers and doers. This results in a simple, yet powerful offering to our customers and a successful, growing business serving millions of men, women and kids throughout the US. We believe we are only scratching the surface and are looking for incredible people like you to help us boldly create our future. 

Compensation and Benefits

Our anticipated compensation reflects the cost of labor across several US geographic markets, and the range below indicates the low end of the lowest-compensated market to the high end of the highest-compensated market. This position is eligible for new hire and ongoing grants of restricted stock units depending on employee and company performance. In addition, the position is eligible for medical, dental, vision, and other benefits. Applicants should apply via our internal or external careers site.
Salary Range
$218,000 - $232,000 USD

This link leads to the machine readable files that are made available in response to the federal Transparency in Coverage Rule and includes negotiated service rates and out-of-network allowed amounts between health plans and healthcare providers. The machine-readable files are formatted to allow researchers, regulators, and application developers to more easily access and analyze data.

Please review Stitch Fix's US Applicant Privacy Policy and Notice at Collection here: https://stitchfix.com/careers/workforce-applicant-privacy-policy

Recruiting Fraud Alert: 

To all candidates: your personal information and online safety are top of mind for us.  At Stitch Fix, recruiters only direct candidates to apply through our official career pages at https://www.stitchfix.com/careers/jobs or https://web.fountain.com/c/stitch-fix.

Recruiters will never request payments, ask for financial account information or sensitive information like social security numbers. If you are unsure if a message is from Stitch Fix, please email careers@stitchfix.com

You can read more about Recruiting Scam Awareness on our FAQ page here: https://support.stitchfix.com/hc/en-us/articles/1500007169402-Recruiting-Scam-Awareness 

 

See more jobs at Stitch Fix

Apply for this job

+30d

Sr. Data Engineer, Marketing Tech

ML, DevOPS, Lambda, agile, airflow, sql, Design, api, c++, docker, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer, Marketing Tech

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving a million-plus Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability.
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources.
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake (a minimal Airflow sketch follows this list).
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance 
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling.
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics.
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them.
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models.
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies.
  • Partner with DevOps to build IaC and CI/CD pipelines.
  • Support code versioning and code deployments for data pipelines.
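
As a minimal, hypothetical sketch of the kind of scheduled extract-and-load pipeline described above (assuming Airflow 2.4+; the DAG name and ingest logic are placeholders, not part of the posting):

```python
# Minimal sketch: an hourly Airflow DAG with an extract task (stand-in for a
# REST API call) feeding a load task (stand-in for a data-lake write).
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule="@hourly",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["marketing"],
)
def marketing_orders_ingest():
    @task
    def extract() -> list:
        # Placeholder for pulling records from an external marketing API.
        return [{"order_id": 1, "amount_usd": 42.0}]

    @task
    def load(rows: list) -> None:
        # Placeholder for writing the records to the data lake / warehouse.
        print(f"loaded {len(rows)} rows")

    load(extract())


marketing_orders_ingest()
```

In a production pipeline the load step would typically hand off to dbt models or a warehouse loader, and the DAG would carry retries, SLAs, and data-quality checks.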

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages.
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed.
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets.
  • Experience working with customer behavior data. 
  • Experience with JavaScript, event-tracking tools like GTM, analytics tools like Google Analytics and Amplitude, and CRM tools.
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform.
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda).
  • Experience with IaC technologies like Terraform.
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres.
  • Experience building event streaming pipelines using Kafka/Confluent Kafka.
  • Experience with modern data stack like Airflow/Astronomer, Fivetran, Tableau/Looker.
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps.
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI).
  • Thorough understanding of SDLC and Agile frameworks.
  • Project management skills and a demonstrated ability to work autonomously.

Nice to Have:

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics.
  • Experience with microservice architecture.
  • Experience architecting an enterprise-grade data platform.

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 - $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Data Center Strategy - Facility Engineering

Lambda · Remote
Lambda, Design, c++

Lambda is hiring a Remote Data Center Strategy - Facility Engineering

Lambda was founded in 2012 by AI engineers who published research at top machine learning conferences. We aim to be the leading AI computing platform, supporting developers throughout the entire AI development lifecycle. At Lambda, we empower AI engineers to easily, securely, and affordably build, test, and deploy AI products at scale. Our offerings include high-performance on-prem GPU hardware and flexible cloud-based GPU solutions. We aim to make access to powerful computation as effortless and ubiquitous as electricity.

If you'd like to build the world's best deep learning cloud, join us.

About the Job 

Become a key member of our Data Center Infrastructure Services team as a Principal Data Center Strategist. In this role, you will be instrumental in shaping the future of our data centers. Your responsibilities will include direct engagement with data center providers to evaluate the electrical, mechanical, and operational components of our facilities. You will report to the Vice President of Infrastructure and leverage your extensive knowledge in data center construction and operations. Your expertise will drive thought leadership and ensure optimal performance of our facility portfolio. Additionally, you will spearhead efficiency and build initiatives in both existing facilities and new construction. The ideal candidate will possess profound expertise in data center facilities management and a proven track record of successful implementation of cost saving strategies, and the ability to provide comprehensive technical guidance.

What You'll Do

  • Act as a technical advisor on data center infrastructure
  • Assess new data centers for suitability and compliance with our operational standards.
  • Evaluate and interface directly with data center providers to ensure operational efficiency, appropriate power utilization, and optimal resource allocation.
  • Provide expert troubleshooting support for data center operational issues.
  • Lead after-action reporting and problem remediation processes to continually enhance data center operations.
  • Ensure adherence to best practices for infrastructure concurrent maintainability, server cooling and power configurations, and maintenance, in order to meet operational SLAs
  • Serve as a customer-facing data center expert.
  • Provide strategic input on new technologies, building designs, and retrofitting projects to ensure future-ready infrastructure.
  • Collaborate closely with the VP of Infrastructure and other senior leaders to align data center strategies with Lambda's overarching infrastructure goals.
  • Lead the design, deployment, and optimization of data center infrastructure, focusing on power distribution, cooling systems, and environmental controls
  • Drive data center lifecycle controls to ensure technology deployment is aligned and right sized
  • Develop and maintain comprehensive documentation of data center layout and infrastructure topologies to aid in optimizing the costing controls
  • Establish and enforce installation standards and documentation to ensure consistency and efficiency across all data center facilities

You

  • Know how to build, manage, run, and operate a data center at scale
  • Bring 15+ years of experience in operating, designing, deploying, and optimizing critical data center infrastructure, with a focus on power systems, cooling solutions, and environmental controls
  • Demonstrate advanced proficiency in infrastructure deployment for high-power compute environments
  • Have a proven track record of deploying data center operational controls across multiple data center locations
  • Possess strong negotiation skills for design, build, operate, and decommission terms for data center space
  • Are detail-oriented with a strong commitment to following established procedures and standards
  • Are action-oriented with a passion for continuous learning and professional development
  • Are willing to travel for the setup and optimization of new data center locations

Nice to have

  • Construction Management experience
  • Experience troubleshooting and theoretical knowledge of HPC computer designs
  • Experience working in large-scale campus and portfolio type business models for distributed data center environments
  • Experience collaborating with auditors to ensure compliance with industry standards
  • Previous experience in a leadership or managerial capacity within a data center engineering and operations team

Salary Range Information 

Based on market data and other factors, the salary range for this position is $200,000 – $247,000. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.

About Lambda

  • We offer generous cash & equity compensation
  • Investors include Gradient Ventures, Google’s AI-focused venture fund
  • We are experiencing extremely high demand for our systems, with quarter over quarter, year over year profitability
  • Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
  • We have a wildly talented team of 300, and growing fast
  • Health, dental, and vision coverage for you and your dependents
  • Commuter/Work from home stipends for select roles
  • 401k Plan with 2% company match
  • Flexible Paid Time Off Plan that we all actually use

A Final Note:

You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer

Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.

See more jobs at Lambda

Apply for this job

+30d

Sr Data Engineer

VeriskJersey City, NJ, Remote
LambdasqlDesignlinuxpythonAWS

Verisk is hiring a Remote Sr Data Engineer

Job Description

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The new hire will be responsible for expanding and optimizing our data pipeline architecture. The ideal candidate is an experienced data pipeline builder and data wrangler with strong experience handling data at scale. The Data Engineer will support our software developers, data analysts, and data scientists on various data initiatives.

This is a remote role that can be done anywhere in the continental US; work is on Eastern time zone hours.

Why this role

This is a highly visible role within the enterprise data lake team. Working with our Data group and business analysts, you will be responsible for leading the creation of the data architecture that produces the data assets enabling our data platform. This role requires working closely with business leaders, architects, engineers, data scientists, and a wide range of stakeholders throughout the organization to build and execute our strategic data architecture vision.

Job Duties

  • Extensive understanding of SQL queries. Ability to fine-tune queries based on RDBMS performance parameters such as indexes, partitioning, explain plans, and cost optimizers.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and the AWS technology stack
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Working with data scientists and industry leaders to understand data needs and design appropriate data models.
  • Participate in the design and development of the AWS-based data platform and data analytics.

Qualifications

Skills Needed

  • Design and implement data ETL frameworks for secured Data Lake, creating and maintaining an optimal pipeline architecture.
  • Examine complex data to optimize the efficiency and quality of the data being collected, resolve data quality problems, and collaborate with database developers to improve systems and database designs
  • Hands-on building of data applications using AWS Glue, Lake Formation, Athena, AWS Batch, AWS Lambda, Python, and Linux shell & batch scripting (a minimal Python sketch follows this list).
  • Hands-on experience with AWS database services (Redshift, RDS, DynamoDB, Aurora, etc.)
  • Experience writing advanced SQL scripts involving self joins, window functions, correlated subqueries, CTEs, etc.
  • Strong understanding and experience using data management fundamentals, including concepts such as data dictionaries, data models, validation, and reporting.  
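
To make the Athena/Lambda item above concrete, here is a minimal, hedged Python sketch of running an Athena query with boto3 and polling for completion. The database, table, and S3 output location are hypothetical placeholders, not Verisk resources.

    import time

    import boto3

    athena = boto3.client("athena")

    # Kick off a query against a hypothetical Glue/Athena database.
    query = athena.start_query_execution(
        QueryString="SELECT policy_id, COUNT(*) AS claim_count FROM claims GROUP BY policy_id",
        QueryExecutionContext={"Database": "analytics_db"},  # placeholder database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
    )
    query_id = query["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        print(f"Fetched {len(rows) - 1} result rows (first row is the header).")
    else:
        raise RuntimeError(f"Athena query ended in state {state}")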

Education and Training

  • 10 years of full-time software engineering experience preferred, with at least 4 years in an AWS environment focused on application development.
  • Bachelor’s degree or foreign equivalent degree in Computer Science, Software Engineering, or related field
  • US citizenship required

#LI-LM03
#LI-Hybrid

See more jobs at Verisk

Apply for this job

+30d

Cloud NetOps Engineer

In All Media IncArgentina - Remote
DevOPSS3EC2LambdaterraformDesignansiblelinuxpythonAWS

In All Media Inc is hiring a Remote Cloud NetOps Engineer

Job Summary:

We are seeking a highly skilled Cloud NetOps Engineer to design, deploy, and manage our scalable, secure, and high-availability AWS cloud infrastructure. The ideal candidate will have extensive experience in network engineering, security solutions implementation, automation, scripting, system administration, and monitoring and optimization.

Key Responsibilities:

Cloud Infrastructure Management:

  • Design, deploy, and manage scalable, secure, and high-availability AWS cloud infrastructure.
  • Optimize AWS services (EC2, VPC, S3, RDS, Lambda, etc.) to ensure efficient operation and cost management.

Network Engineering:

  • Configure, manage, and troubleshoot network routing and switching across cloud and on-premises environments.
  • Implement and maintain advanced network security solutions, including firewalls, VPNs, and intrusion detection/prevention systems.

Security Solutions Implementation:

  • Develop and implement end-to-end network security solutions to protect against internal and external threats.
  • Monitor network traffic and security logs to identify and mitigate potential security breaches.

Automation and Scripting:

  • Automate infrastructure provisioning, configuration management, and deployment processes using tools such as Terraform and Ansible.
  • Develop custom scripts and tools in Python to improve operational efficiency and reduce manual intervention (see the sketch after this list).
  • Implement automation strategies to streamline repetitive tasks and enhance productivity.
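
As a hedged illustration of the Python automation described above, the sketch below uses boto3 to flag security group rules open to the world on SSH. The port and report format are assumptions for the example, not a prescribed In All Media process.

    import boto3

    ec2 = boto3.client("ec2")

    # Walk every security group in the region and flag rules that allow
    # 0.0.0.0/0 on port 22 (SSH) -- a common audit check.
    findings = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                open_to_world = any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
                )
                if open_to_world and rule.get("FromPort") == 22:
                    findings.append((sg["GroupId"], sg["GroupName"]))

    for group_id, name in findings:
        print(f"Security group {group_id} ({name}) allows SSH from anywhere")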

System Administration:

  • Perform system administration tasks for Linux servers, including installation, configuration, maintenance, and troubleshooting.
  • Manage and integrate Active Directory services for authentication and authorization.

Firewall and Security Management:

  • Administer and troubleshoot Palo Alto firewalls and Panorama for centralized management and policy enforcement.
  • Manage Cisco Meraki wireless and security stacks, ensuring robust network performance and security compliance.

Monitoring and Optimization:

  • Implement monitoring solutions to track performance metrics, identify issues, and optimize network and cloud resources.
  • Conduct regular performance tuning, capacity planning, and system audits to ensure optimal operation.

Collaboration and Support:

  • Work closely with cross-functional teams, including DevOps, Security, and Development, to support infrastructure and application needs.
  • Provide technical support and guidance to internal teams, ensuring timely resolution of network and system issues.

Documentation and Compliance:

  • Maintain comprehensive documentation of network configurations, infrastructure designs, and operational procedures.
  • Ensure compliance with industry standards and regulatory requirements through regular audits and updates.

Continuous Improvement:

  • Stay updated with the latest trends and technologies in cloud computing, networking, and cybersecurity.
  • Propose and implement improvements to enhance system reliability, security, and performance.

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Proven experience as a Cloud Engineer, Network Engineer, or similar role.
  • Strong knowledge of AWS services and cloud infrastructure management.
  • Proficiency in network engineering, including routing, switching, and security solutions.
  • Experience with automation tools such as Terraform, Ansible, and scripting languages like Python.
  • Solid system administration skills, particularly with Linux servers.
  • Experience managing firewalls and security solutions (e.g., Palo Alto, Cisco Meraki).
  • Strong problem-solving skills and the ability to work in a collaborative environment.
  • Excellent documentation and communication skills.

Preferred Qualifications:

  • AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified SysOps Administrator).
  • Familiarity with DevOps practices and tools.
  • Knowledge of regulatory requirements and compliance standards (e.g., PCI, CIS).

See more jobs at In All Media Inc

Apply for this job

+30d

Senior SRE Engineer (Viator)

TripadvisorOxford, London, Lisbon, Krakow hybrid
DevOPSS3EC2LambdaagilejiraterraformnosqlsqlDesigngitjavadockerelasticsearchkubernetesjenkinspythonAWS

Tripadvisor is hiring a Remote Senior SRE Engineer (Viator)

Viator, a Tripadvisor company, is the leading marketplace for travel experiences. We believe that making memories is what travel is all about. And with 300,000+ travel experiences to explore—everything from simple tours to extreme adventures (and all the niche, interesting stuff in between)—making memories that will last a lifetime has never been easier. With industry-leading flexibility and last-minute availability, it's never too late to make any day extraordinary. Viator. One app, 300,000+ travel experiences you’ll remember.

We are looking for a Senior Software Engineer with a blend of software engineering and operations skills. A person who truly believes in and lives by DevOps principles and values. The role includes working within the SRE team while interacting with all feature and platform teams to deliver state-of-the-art solutions that ensure availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning for our services and applications. If you are looking to be challenged technically and have fun, this is the place for you!

What will you do

  • As part of the SRE team, you will participate in designing and implementing parts of our engineering platform that enable scaling, metrics, and observability, and that ensure and improve reliability.
  • Identify gaps in our engineering platform and drive improvements to availability, latency, performance, efficiency, change management, monitoring, and emergency response
  • Guide and mentor other people on the team and help them grow their skills and knowledge
  • Evangelise DevOps and SRE culture and lead innovation across engineering feature teams
  • Become part of a PagerDuty-based on-call rotation

Skills & Experience

  • Comfortable and happy to code in Python and Java. Experience writing commercial application code in Java.
  • Deep knowledge and understanding of Computer Engineering fundamentals and first principles
  • Deep understanding of scaling solutions both on infrastructure level ( caching layers, database replicas, sharding, partitioning, etc ) and architectural level (denormalisation, CQRS-ES, Federation, etc )
  • Experience building and working with and monitoring microservice architectures in large distributed cloud environments (ideally AWS).
  • Experience with observability tooling – proficiency with tools like Elasticsearch, Kibana, APM, Sentry, Grafana, Prometheus, OverOps, or similar (a minimal Prometheus instrumentation sketch follows this list)
  • The ability to guide and mentor other members within the team and improve the way we collaborate, learn, and share ideas
  • Strong written and verbal communication skills for documentation and for aligning internal team members
  • Excellent collaboration skills to be able to work closely with product engineers and product owners to understand their context and co-design appropriate solutions which balance feature velocity with site reliability
  • Version control and CI/CD – Jenkins, git, bitbucket, GitLab, liquibase
  • Experience in using SQL / NoSQL data stores – RDS, DynamoDB, ElastiCache, Solr
  • Jira and Agile methodologies
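
As a small, hedged example of the observability item above, this Python sketch exposes request metrics with the prometheus_client library; the metric names and endpoint label are illustrative only, not Viator's actual instrumentation.

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Illustrative metric names for the example.
    REQUEST_COUNT = Counter("app_requests_total", "Total requests handled", ["endpoint"])
    REQUEST_LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])

    def handle_search():
        # time() records the duration of the block into the histogram.
        with REQUEST_LATENCY.labels(endpoint="/search").time():
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
        REQUEST_COUNT.labels(endpoint="/search").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            handle_search()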

Desired Skills & Knowledge

  • Excellent GNU/Linux system administration skills
  • Experience with OpenTelemetry
  • Experience managing Kubernetes clusters and containerisation
  • AWS and IaC – Terraform, CloudFormation, VPC, IAM, EC2, EKS, Lambda, RDS, S3, CloudWatch, puppet, docker
  • Experience building and running monitoring infrastructure at a large scale. For example, Elasticsearch clusters, Prometheus, Kibana, Grafana, etc
  • Web applications and HTTP servers – Java, apache, nginx
  • Load balancers – ELB, HAProxy, nginx
  • Experience in running  SQL / NoSQL data stores – RDS, DynamoDB, ElastiCache, Solr

 

Perks of Working at Viator

  • Competitive compensation packages, including base salary, annual bonus, and equity.
  • “Work your way” with flexibility to suit your lifestyle. We take a remote-friendly approach to collaboration, with the option to join on-site as often as you’d like in select locations. 
  • Flexible schedule. Work-life balance is ingrained in our culture by design. Trust and accountability make it work.
  • Donation matching. Give back? Give more! We match qualifying charitable donations annually.
  • Tuition assistance. Want to level up your career? We love to hear it! Receive annual support for qualified programs.
  • Lifestyle benefit. An annual benefit to spend on yourself. Use it on travel, wellness, or whatever suits you.
  • Travel perks. We believe that travel is employee development, so we provide discounts and more.
  • Employee assistance program. We’re here for you with resources and programs to help you through life’s challenges.
  • Health benefits. We offer great coverage and competitive premiums.

Our Values

We aspire to lead; We’re relentlessly curious;... want to know more? Read up on our values: 

  • We aspire to lead. Tap into your talent, ambition, and knowledge to bring us – and you – to new heights.
  • We’re relentlessly curious. We push beyond the usual, the known, the “that’s just how it’s done.”
  • We’re better together. We learn from, accept, respect, support, and value one another– and are creating something remarkable in the process.
  • We serve our customers, always. We listen, question, respond, and strive for wow moments.  

We strive for better, not perfect. We won’t get it right the first time – or every time. We’ll provide a safe environment in which to make mistakes, iterate, improve, and grow.

Our workplace is for everyone, as is our people powered platform. At Tripadvisor, we want you to bring your unique identities, abilities, and experiences, so we can collectively revolutionize travel and together find the good out there.

Application process

  • 30 minute call with a recruiter to learn more about the role
  • 1 hour technical coding interview with someone from the Viator Engineering team
  • Three one-hour interviews with members of the team, covering technical topics - including some coding - and what you would bring to Viator.

If you need a reasonable accommodation or support during the application or the recruiting process due to a medical condition or disability, please reach out to your individual recruiter or send an email to AccessibleRecruiting@Tripadvisor.com and let us know the nature of your request. Please include the job requisition number in your message.

#LI-TA1

#Viator

#LI-Hybrid

 

 

 

See more jobs at Tripadvisor

Apply for this job

+30d

Data Engineer (Australia)

DemystDataAustralia, Remote
SalesS3EC2Lambdaremote-firstDesignpythonAWS

DemystData is hiring a Remote Data Engineer (Australia)

Our Solution

Demyst unlocks innovation with the power of data. Our platform helps enterprises solve strategic use cases, including lending, risk, digital origination, and automation, by harnessing the power and agility of the external data universe. We are known for harnessing rich, relevant, integrated, linked data to deliver real value in production. We operate as a distributed team across the globe and serve over 50 clients as a strategic external data partner. Frictionless external data adoption within digitally advancing enterprises is unlocking market growth and allowing solutions to finally get out of the lab. If you actually like to get things done and deployed, Demyst is your new home.

The Opportunity

As a Data Engineer at Demyst, you will be powering the latest technology at leading financial institutions around the world. You may be solving a fintech's fraud problems or crafting a Fortune 500 insurer's marketing campaigns. Using innovative data sets and Demyst's software architecture, you will use your expertise and creativity to build best-in-class solutions. You will see projects through from start to finish, assisting in every stage from testing to integration.

To meet these challenges, you will access data using Demyst's proprietary Python library via our JupyterHub servers, and utilize our cloud infrastructure built on AWS, including Athena, Lambda, EMR, EC2, S3, and other products. For analysis, you will leverage AutoML tools, and for enterprise data delivery, you'll work with our clients' data warehouse solutions like Snowflake, Databricks, and more. (A minimal illustration of this kind of workflow follows.)
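
Demyst's own Python library is proprietary, so as a purely illustrative stand-in, here is a minimal boto3/pandas sketch of pulling an external dataset from S3 and doing a light clean-up. The bucket, key, and column names are hypothetical.

    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    # Hypothetical external data drop: a CSV of business firmographics.
    obj = s3.get_object(Bucket="example-external-data", Key="business/firmographics.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Light clean-up before handing a curated extract to a client.
    curated = (
        df.dropna(subset=["business_id"])          # hypothetical join key
          .drop_duplicates(subset=["business_id"])
          .assign(pulled_at=pd.Timestamp.now(tz="UTC"))
    )
    print(curated.head())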

Demyst is a remote-first company. The candidate must be based in Australia.

Responsibilities

  • Collaborate with internal project managers, sales directors, account managers, and clients’ stakeholders to identify requirements and build external data-driven solutions
  • Perform data appends, extracts, and analyses to deliver curated datasets and insights to clients to help achieve their business objectives
  • Understand and keep current with external data landscapes such as consumer, business, and property data.
  • Engage in projects involving entity detection, record linking, and data modelling projects
  • Design scalable code blocks using Demyst’s APIs/SDKs that can be leveraged across production projects
  • Govern releases, change management and maintenance of production solutions in close coordination with clients' IT teams

Requirements

  • Bachelor's in Computer Science, Data Science, Engineering or similar technical discipline (or commensurate work experience); Master's degree preferred
  • 1-3 years of Python programming (with Pandas experience)
  • Experience with CSV, JSON, parquet, and other common formats
  • Data cleaning and structuring (ETL experience)
  • Knowledge of API (REST and SOAP), HTTP protocols, API Security and best practices
  • Experience with SQL, Git, and Airflow
  • Strong written and oral communication skills
  • Excellent attention to detail
  • Ability to learn and adapt quickly

Benefits

  • Distributed working team and culture
  • Generous benefits and competitive compensation
  • Collaborative, inclusive work culture: all-company offsites and local get togethers in Bangalore
  • Annual learning allowance
  • Office setup allowance
  • Generous paid parental leave
  • Be a part of the exploding external data ecosystem
  • Join an established fast growth data technology business
  • Work with the largest consumer and business external data market in an emerging industry that is fueling AI globally
  • Outsized impact in a small but rapidly growing team offering real autonomy and responsibility for client outcomes
  • Stretch yourself to help define and support something entirely new that will impact billions
  • Work within a strong, tight-knit team of subject matter experts
  • Small enough where you matter, big enough to have the support to deliver what you promise
  • International mobility available for top performer after two years of service

Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.

See more jobs at DemystData

Apply for this job

+30d

Systems Engineer, Enterprise Infrastructure

GrammarlyUnited States; Hybrid
Lambdagolangremote-firstterraformDesignazureapic++pythonAWS

Grammarly is hiring a Remote Systems Engineer, Enterprise Infrastructure

Grammarly is excited to offer a remote-first hybrid working model. Team members work primarily remotely in the United States, Canada, Ukraine, Germany, or Poland. Certain roles have specific location requirements to facilitate collaboration at a particular Grammarly hub.

All roles have an in-person component: Conditions permitting, teams meet 2–4 weeks every quarter at one of Grammarly’s hubs in San Francisco, Kyiv, New York, Vancouver, and Berlin, or in a workspace in Kraków. This flexible approach gives team members the best of both worlds: plenty of focus time along with in-person collaboration that fosters trust and unlocks creativity.

Grammarly team members in this role must be based in the United States, and they must be able to collaborate in person 2 weeks per quarter, traveling if necessary to the hub(s) where the team is based.

The opportunity 

Grammarly is the world’s leading AI writing assistance company, trusted by over 30 million people and 70,000 professional teams daily. From instantly creating a first draft to perfecting every message, Grammarly’s product offerings help people at 96% of the Fortune 500 get their point across—and get results. Grammarly has been profitable for over a decade because we’ve stayed true to our values and built an enterprise-grade product that’s secure, reliable, and helps people do their best work—without selling their data. We’re proud to be one of Inc.’s best workplaces, a Glassdoor Best Place to Work, one of TIME’s 100 Most Influential Companies, and one of Fast Company’s Most Innovative Companies in AI.

To achieve our ambitious goals, we’re looking for a System Engineer to join our Enterprise Infrastructure team. This role will substantially impact the security of Grammarly's product offerings and services. User trust is our top priority, so our new System Engineer’s impact will be significant: every day, their decisions and actions will directly affect the millions of users who trust Grammarly with their personal content and rely on Grammarly product offerings. They can drive changes across the corporate identity and access management workflows. They will have the opportunity to advocate for security and process improvements while balancing these changes with the speed of growth.

This experienced System Engineer will improve our corporate identity and access management track and its processes to ensure that Grammarly operates securely and efficiently. As the DRI for the corporate identity and access management infrastructure, this person will have the chance to learn about the latest technical solutions. 

Grammarly’s engineers and researchers have the freedom to innovate and uncover breakthroughs—and, in turn, influence our product roadmap. The complexity of our technical challenges is growing rapidly as we scale our interfaces, algorithms, and infrastructure. You can hear more from our team on our technical blog.

Your impact

As a System Engineer on the Enterprise Infrastructure team, you will maintain and expand enterprise infrastructure and services.

Furthermore, you will: 

  • Design and enhance the corporate environment across external and internal SaaS and PaaS.
  • Build and manage the enterprise's corporate identity and access management platforms: corporate identity (Okta Identity Engine) and access management (Opal.dev).
  • Ensure that corporate IAM complies with the "optimal" level of CISA's Zero Trust Maturity Model, and participate in organizational changes that drive the transition to that "optimal" level.
  • Follow the Access Management Maturity Level guidelines to meet the least privileged access model.
  • Design and support the global IAM processes.
  • Develop infrastructure and end user documentation.
  • Collaborate with other teams to address and improve identity and access management issues.
  • Have a one-click (or ideally "no-click") deployment mindset for end users while building or updating IAM processes. 
  • Cooperate with the Cloud Infrastructure team to maintain AWS infrastructure.
  • Provide L3 troubleshooting and support.
  • Enhance IAM security and productivity levels by introducing various metrics.
  • Enhance the self-service features, reporting, and alerting via automation.

We’re looking for someone who

  • Embodies our EAGER values—is ethical, adaptable, gritty, empathetic, and remarkable.
  • Is inspired by our MOVE principles, which are the blueprint for how things get done at Grammarly: move fast and learn faster, obsess about creating customer value, value impact over activity, and embrace healthy disagreement rooted in trust.
  • Is able to collaborate in person 2 weeks per quarter, traveling if necessary to the hub where the team is based.
  • Has a solid experience with Okta (access management, app integration, automation, passwordless).
  • Has solid experience with Terraform (managing resources, deploying infrastructure).
  • Has experience in organizing custom approaches in Access Management.
  • Has solid experience in AWS (Lambda, ECS).
  • Has solid experience with Python automating routine tasks, reports, and lambdas (a minimal sketch follows this list).
  • Has strong project management skills.
  • Has strong analytical, diagnostics, and troubleshooting skills. 
  • Has an ability to solve complex problems at scale.
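
As a hedged illustration of the "Python, reports, and lambdas" item above, here is a generic AWS Lambda handler that reports IAM access keys older than a threshold. The 90-day cutoff is an assumed policy, and this is not Grammarly's actual tooling (which centres on Okta and Opal.dev).

    from datetime import datetime, timezone

    import boto3

    MAX_KEY_AGE_DAYS = 90  # assumed policy threshold for the example

    def lambda_handler(event, context):
        iam = boto3.client("iam")
        stale_keys = []

        # Walk all IAM users and flag active access keys past the threshold.
        for page in iam.get_paginator("list_users").paginate():
            for user in page["Users"]:
                keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
                for key in keys:
                    age_days = (datetime.now(timezone.utc) - key["CreateDate"]).days
                    if key["Status"] == "Active" and age_days > MAX_KEY_AGE_DAYS:
                        stale_keys.append(
                            {"user": user["UserName"], "key_id": key["AccessKeyId"], "age_days": age_days}
                        )

        # Return the report; a real setup might write it to S3 or alert on it.
        return {"stale_key_count": len(stale_keys), "stale_keys": stale_keys}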

It will be a plus to have the following:  

  • Experience writing code in Golang, PowerShell, or Bash.
  • Experience with Google Cloud Platform and Microsoft Azure platforms.
  • Experience building compliant processes: ISO, SOC 2, FedRAMP, SOX, NIST, CISA’s Zero Trust Maturity Model.
  • Experience with Opal.dev.
  • Knowledge of API methods and integration between different SaaS platforms.
  • Experience securing communications: TCP/IP stack, SSL/TLS, domain records, certificates, PKI, encryption, hashing, OAuth2.
  • Experience with ZTNA tools (Cloudflare).
  • Knowledge of networking, security, DNS, Unix/Linux operations, and troubleshooting.

Support for you, professionally and personally

  • Professional growth: We believe that autonomy and trust are key to empowering our team members to do their best, most innovative work in a way that aligns with their interests, talents, and well-being. We support professional development and advancement with training, coaching, and regular feedback.
  • A connected team: Grammarly builds a product that helps people connect, and we apply this mindset to our own team. Our remote-first hybrid model enables a highly collaborative culture supported by our EAGER (ethical, adaptable, gritty, empathetic, and remarkable) values. We work to foster belonging among team members in a variety of ways. This includes our employee resource groups, Grammarly Circles, which promote connection among those with shared identities, such as BIPOC and LGBTQIA+ team members, women, and parents. We also celebrate our colleagues and accomplishments with global, local, and team-specific programs. 

Compensation and benefits

Grammarly offers all team members competitive pay along with a benefits package encompassing the following and more: 

  • Excellent health care (including a wide range of medical, dental, vision, mental health, and fertility benefits)
  • Disability and life insurance options
  • 401(k) and RRSP matching 
  • Paid parental leave
  • Twenty days of paid time off per year, eleven days of paid holidays per year, and unlimited sick days 
  • Home office stipends
  • Caregiver and pet care stipends
  • Wellness stipends
  • Admission discounts
  • Learning and development opportunities

Grammarly takes a market-based approach to compensation, which means base pay may vary depending on your location. Our US and Canada locations are categorized into compensation zones based on each geographic region’s cost of labor index. For more information about our compensation zones and locations where we currently support employment, please refer to this page. If a location of interest is not listed, please speak with a recruiter for additional information.

Base pay may vary considerably depending on job-related knowledge, skills, and experience. The expected salary ranges for this position are outlined below by compensation zone and may be modified in the future. 

United States: 
Zone 1: $180,000 – $215,000/year (USD)
Zone 2: $162,000 – $193,000/year (USD)

We encourage you to apply

At Grammarly, we value our differences, and we encourage all—especially those whose identities are traditionally underrepresented in tech organizations—to apply. We do not discriminate on the basis of race, religion, color, gender expression or identity, sexual orientation, ancestry, national origin, citizenship, age, marital status, veteran status, disability status, political belief, or any other characteristic protected by law. Grammarly is an equal opportunity employer and a participant in the US federal E-Verify program (US). We also abide by the Employment Equity Act (Canada).

#LI-PM1

#LI-Hybrid

All team members meeting in person for official Grammarly business or working from a hub location are strongly encouraged to be vaccinated against COVID-19.

 

Apply for this job

+30d

Machine Learning Engineer

DevoteamNantes, France, Remote
MLDevOPSOpenAILambdaagileterraformscalaairflowansiblescrumgitc++dockerkubernetesjenkinspythonAWS

Devoteam is hiring a Remote Machine Learning Engineer

Job Description

Responsibilities

  • Support the end-to-end machine learning development process to design, build, and manage reproducible, testable, and scalable software.

  • Work on setting up and using ML/AI/MLOps platforms (such as AWS SageMaker, Kubeflow, AWS Bedrock, AWS Titan)

  • Bring our clients best practices in organization, development, automation, monitoring, and security.

  • Explain and apply best practices for the automation, testing, versioning, reproducibility, and monitoring of the deployed AI solution.

  • Coach and supervise junior consultants, e.g., through peer code review and the application of best practices.

  • Support our sales team with proposal writing and pre-sales meetings.

  • Contribute to the development of our internal community (experience-sharing sessions (REX), workshops, articles, hackerspace).

  • Take part in recruiting our future talent.

Qualifications

Technical skills

REQUIRED:

  • Fluency in Python, PySpark, or Scala Spark, plus the usual ML stack: Scikit-learn, MLlib, TensorFlow, Keras, PyTorch, LightGBM, XGBoost, and Spark (to name a few)

  • Ability to implement containerization architectures (Docker / containerd) and serverless and microservices environments using Lambda, ECS, and Kubernetes

  • Fully operational in setting up DevOps and Infrastructure-as-Code environments, and hands-on with MLOps tooling

  • A good share of Git, GitLab CI, Jenkins, Ansible, Terraform, Kubernetes, MLflow, and Airflow (or their equivalents in cloud environments) should be part of your daily work (a minimal MLflow sketch follows this list).

  • AWS cloud (AWS Bedrock, AWS Titan, OpenAI, AWS SageMaker / Kubeflow)

  • Agile / Scrum methodology

  • Feature store (any provider)
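
To ground the MLOps tooling listed above, here is a minimal, illustrative MLflow tracking sketch in Python; the experiment name, model, and parameters are arbitrary placeholders rather than a Devoteam standard.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    mlflow.set_experiment("demo-churn-model")  # placeholder experiment name

    X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        params = {"n_estimators": 200, "max_depth": 8}
        model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

        # Log parameters, a metric, and the fitted model for reproducibility.
        mlflow.log_params(params)
        mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, artifact_path="model")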

 

NICE TO HAVE:

  • Apache Airflow

  • AWS SageMaker / Kubeflow

  • Apache Spark

  • Agile / Scrum methodology

Growing within the community

Growing within the Data Tribe means playing an active part in creating a stimulating environment in which consultants constantly push one another upward, in technical skills as well as soft skills. But that's not all: there are also regular events and dedicated Slack channels that let you reach out to the communities (data, AI/ML, DevOps, security, ...) as a whole!

Alongside this, you will have the opportunity to be a driving force in the development of our various internal communities (experience-sharing sessions (REX), workshops, articles, podcasts, ...).

Compensation

The fixed compensation offered for this position depends on your experience and sits within a range of 46.5k to 52.5k.

See more jobs at Devoteam

Apply for this job

+30d

Data Engineer - AWS

Tiger AnalyticsJersey City, New Jersey, United States, Remote
S3LambdaairflowsqlDesignAWS

Tiger Analytics is hiring a Remote Data Engineer - AWS

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership has been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Apache Airflow.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring (see the sketch after this list).
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.

Requirements

  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Strong experience with Databricks and Apache Spark for data processing and analytics.
  • Hands-on experience with Apache Airflow for orchestrating and scheduling data pipelines.
  • Solid understanding of data modeling, database design principles, and SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.
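
As a hedged illustration of the Airflow orchestration item above, here is a minimal Airflow 2.x-style DAG sketch in Python with three placeholder tasks. The DAG id, schedule, and callables are assumptions, not Tiger Analytics' actual pipelines.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling raw files from S3")            # placeholder step

    def transform():
        print("running Spark/Databricks job")         # placeholder step

    def load():
        print("loading curated tables to Redshift")   # placeholder step

    with DAG(
        dag_id="example_daily_sales_pipeline",   # placeholder DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                       # 'schedule' is the Airflow 2.4+ argument
        catchup=False,
        default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task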

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job