Scala Remote Jobs

153 Results

+30d

Data Scientist

Victory - Austin, TX (Remote)
DevOps, Tableau, Scala, SQL, Design, Azure, Java, C++, Python, AWS

Victory is hiring a Remote Data Scientist

About the Data Scientist position

We are looking for a skilled Data Scientist who will help us analyze large amounts of raw information to find patterns and use them to optimize our performance. You will build data products to extract valuable business insights, analyze trends and help us make better decisions.

We expect you to be highly analytical, with a strong grasp of math and statistics and a passion for machine learning and research. Critical thinking and problem-solving skills are also required.

Data Scientist responsibilities are:

  • Research and detect valuable data sources and automate collection processes

  • Perform preprocessing of structured and unstructured data

  • Design, implement and deliver maintainable and high-quality code using best practices (e.g. Git/GitHub, secrets management, configuration files, YAML/JSON)

  • Review large amounts of information to discover trends and patterns

  • Create predictive models and machine-learning algorithms

  • Modify and combine different models through ensemble modeling

  • Organize and present information using data visualization techniques

  • Develop and suggest solutions and strategies to business challenges

  • Work together with engineering and product development teams
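For illustration, the "ensemble modeling" responsibility above can be sketched as a simple majority vote over several models' predictions. This is a hypothetical toy example with invented predictions; a real project would likely use a library such as scikit-learn.

```python
# Minimal illustration of ensemble modeling by majority vote.
from collections import Counter

def majority_vote(predictions):
    """Combine per-model predictions: `predictions` is a list of lists,
    one inner list of class labels per model."""
    combined = []
    for labels in zip(*predictions):  # labels for one sample across models
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three toy models' predictions on four samples (invented values)
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]

print(majority_vote([model_a, model_b, model_c]))  # [1, 0, 1, 1]
```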

Data Scientist requirements are:

  • 3+ years' experience in a Data Scientist or Data Analyst position

  • Significant experience in data mining, machine-learning and operations research

  • Experience with data modeling, design patterns, building highly scalable and secured solutions preferred

  • Prior experience deploying data architectures on cloud providers (e.g. AWS, GCP, Azure), using DevOps tools, and automating data pipelines

  • Good experience using business intelligence/visualization tools (such as Tableau), data frameworks (such as Hadoop, DataFrames, RDDs, Dataclasses) and data formats (CSV, JSON, Parquet, Avro, ORC)

  • Advanced knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset

  • MA or PhD degree in Computer Science, Engineering or other relevant area; graduate degree in Data Science or other quantitative field is preferred
  • Must be a U.S. Citizen
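As a small illustration of working with the data formats listed above, here is a hypothetical sketch that converts CSV records to JSON using only the Python standard library (the sample rows are invented):

```python
import csv
import io
import json

# Hypothetical sample data; a real pipeline would read from files or object storage.
raw_csv = "id,city,orders\n1,Austin,42\n2,Dallas,17\n"

# Parse CSV into dictionaries, then serialize as JSON.
rows = list(csv.DictReader(io.StringIO(raw_csv)))
as_json = json.dumps(rows)

print(as_json)
```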

See more jobs at Victory

Apply for this job

+30d

Full Stack Software Engineer

WaystoCap - Casablanca, MA (Remote)
Gatsby, Golang, Agile, Scala, NoSQL, B2B, Design, GraphQL, API, Java, JavaScript, Backend

WaystoCap is hiring a Remote Full Stack Software Engineer

A bit about us:

Hi! We're WaystoCap, a Y Combinator startup backed by top-tier VCs, and we are building a B2B marketplace in Africa.

A little more about what we do:

We do more than run a listing or marketplace site: we help our buyers and sellers move through the entire trade process more easily and efficiently, improving it with technology.

Here is a short video about us: The African Opportunity

We want every African business to be able to trade internationally, just like any other company, and to take advantage of cutting-edge tech to solve payments, logistics, and, most importantly, trust issues.

Your challenge as a developer:

You will lead a cross-functional agile team to help build strong and scalable products. The team will plan, design and build innovative solutions to unlock international trade in Africa and beyond. Backed by a deep understanding of microservice architecture and high availability, you will build a highly scalable platform.

We are looking for a Backend developer able to:

  • Build and maintain scalable services
  • Design flexible API (REST, GraphQL)
  • Deliver modern reliable and resilient services
  • Share knowledge, evangelize best practices.
  • Be proficient with JavaScript
  • Be fluent in any backend server language, with expertise in relational databases and schema design.
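To illustrate the kind of flexible API design mentioned above, here is a minimal, hypothetical REST-style dispatcher in Python (the route and payload are invented; a production service would use a real web framework and HTTP server):

```python
import json

# Registry mapping (method, path) pairs to handler functions.
ROUTES = {}

def route(method, path):
    """Decorator that registers a handler for a method/path pair."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/products")
def list_products():
    # Invented sample payload
    return {"products": [{"id": 1, "name": "rice"}]}

def handle(method, path):
    """Dispatch a request, returning (status_code, json_body)."""
    fn = ROUTES.get((method, path))
    if fn is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(fn())

print(handle("GET", "/products"))
```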

Requirements:

  • 5-8 years of professional development experience
  • Experience programming in one of the following languages: Golang, Node, Java, Scala
  • Proficiency in ES6 JavaScript, modern React.js
  • Gatsby, other static site generators or Next.js experience
  • Bootstrap, styled-components
  • Web App Performance optimization
  • Experience in software design and testing
  • Experience in database and storage technologies (RDBMS, NoSQL,...)
  • Experience with CI/CD pipeline
  • Experience working and leading an agile team

What we offer:

  • Product ownership and decision making in the entire development process
  • Work with a talented team and learn from international advisors and investors
  • Be involved in a fundamental company supporting development in Africa
  • Very competitive salary and options package
  • Have a voice in what tools you want us to use

See more jobs at WaystoCap

Apply for this job

+30d

Data Engineer Senior - CDI H/F

Talan - Lille, France (Remote)
Scala, NoSQL, SQL, MongoDB, Azure, Docker, Kubernetes, Python, AWS

Talan is hiring a Remote Data Engineer Senior - CDI H/F

Job description

Join a "Talan-ted" adventure!

What we offer:

Within our Data Intelligence team, we are looking for our future experienced BI Consultant.

As an experienced Data Consultant specializing in "Modern Data Platforms" (solutions in high demand among our large enterprise clients), you will be responsible for designing, implementing and optimizing innovative data solutions that meet our clients' needs.

You will work closely with technical teams and with clients to ensure the success of data projects.

Your future missions:

  1. Design and implement modern data architectures, using industry best practices,

  2. Develop and maintain efficient data pipelines, from sourcing to visualization,

  3. Ensure data quality and security throughout the data lifecycle,

  4. Collaborate with business teams to understand their needs and deliver suitable data solutions,

  5. Evaluate and recommend emerging technologies to improve the data platform's capabilities,

  6. Help internal staff build their skills,

  7. Contribute to building Talan's service offerings.
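The pipeline work described above can be sketched, in miniature, as a hypothetical extract-transform-load flow (the records and field names are invented):

```python
# Hypothetical minimal ETL pipeline: extract raw records, transform them,
# and load them into a sink.
def extract():
    # Stand-in for reading from a source system
    return [{"user": "a", "amount": "10.5"}, {"user": "b", "amount": "3"}]

def transform(records):
    # Normalize string amounts into floats
    return [{"user": r["user"], "amount": float(r["amount"])} for r in records]

def load(records, sink):
    # Stand-in for writing to a warehouse or data store
    sink.extend(records)
    return len(records)

sink = []
loaded = load(transform(extract()), sink)
print(loaded, sink)
```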

Qualifications

Profile sought:

  • You hold a Master's-level degree (Bac+5 in computer science, software engineering, or equivalent),

  • You have at least 5 years' experience in a similar role.

Skills sought:

  • Programming languages: advanced command of Go, Python, SQL and Scala

  • Data processing technologies: hands-on experience with Apache Spark for massively parallel processing

  • Databases:

    • Solid experience with relational (SQL) and non-relational (NoSQL) databases

    • Expertise with BigQuery, Snowflake, Databricks and other data warehousing solutions

    • Expertise with the ELK stack

    • Knowledge of time-series databases (Prometheus, etc.)

    • Knowledge of NoSQL databases (MongoDB, Couchbase, etc.)

    • Expertise with data visualization tools: Kibana and Grafana

  • Cloud computing:

    • Extensive experience with cloud services, notably GCP, AWS and Azure

    • In-depth knowledge of services such as Google Cloud Big Data, AWS Glue and Azure Data Factory

  • Containerization and orchestration:

    • Command of Kubernetes for container orchestration

  • CI/CD:

    • Solid understanding of and experience with CI/CD pipelines to ensure continuous deployment and code quality

  • Data storage systems:

    • Expertise in designing and managing distributed storage systems

  • Ability to work in a team and communicate effectively

  • Strong analytical and summarizing skills

  • Ability to solve complex problems

  • Hands-on experience with Docker for container deployment and management.

Recognize yourself in this description? Come and meet us!

If you are looking for a growing, demanding, human-scale company that fosters team spirit, you are certainly the consultant we are looking for!

Our recruitment process

  • Step 1, with our recruiters: let's get acquainted!
  • Step 2, with our consultants: meet and talk with your future teammates!
  • Step 3, with our managers: an open discussion about your future at Talan!

We also offer informal conversations with our consultants throughout the process!

Our benefits:

  • Modern offices near the Lille Flandres train station
  • Remote work allowed, €100 equipment allowance
  • Employee share ownership
  • Closeness and conviviality cultivated at every level of the company (Great Place to Work for 10 years, Top 5 best companies to work for over the past 5 years)
  • Option to return from maternity leave at 4/5 time with no loss of salary for 6 months
  • Disability support service (a consultant dedicated to employees with disabilities and to family caregivers)
  • Comprehensive on-site and remote training offering, mentoring program for women
  • Mobility within France and abroad
  • Meal vouchers, vacation bonus, 50% of public transport subscription covered, health insurance 100% covered
  • Referral bonuses from €500 to €4,000

See more jobs at Talan

Apply for this job

+30d

Principal Analyst/Engineer, Attack Surface Management

SecurityScorecard - Remote (United States)
Golang, Bachelor's degree, Scala, Design, C++, Python

SecurityScorecard is hiring a Remote Principal Analyst/Engineer, Attack Surface Management

About SecurityScorecard:

SecurityScorecard is the global leader in cybersecurity ratings, with over 12 million companies continuously rated, operating in 64 countries. Founded in 2013 by security and risk experts Dr. Alex Yampolskiy and Sam Kassoumeh and funded by world-class investors, SecurityScorecard’s patented rating technology is used by over 25,000 organizations for self-monitoring, third-party risk management, board reporting, and cyber insurance underwriting; making all organizations more resilient by allowing them to easily find and fix cybersecurity risks across their digital footprint. 

Headquartered in New York City, our culture has been recognized by Inc Magazine as a "Best Workplace," by Crain's NY as a "Best Places to Work in NYC," and as one of the 10 hottest SaaS startups in New York for two years in a row. Most recently, SecurityScorecard was named to Fast Company's annual list of the World's Most Innovative Companies for 2023 and to the Achievers 50 Most Engaged Workplaces in 2023 award recognizing "forward-thinking employers for their unwavering commitment to employee engagement." SecurityScorecard is proud to be funded by world-class investors including Silver Lake Waterman, Moody's, Sequoia Capital, GV and Riverwood Capital.

About the Role:

This role is crucial for maintaining the continuous accuracy and completeness of our customers' digital footprint data. The position demands an in-depth understanding of networking protocols such as TCP/IP, DNS, BGP, SSL and an understanding of the fundamentals of how the Internet works. Responsibilities include validating the attribution of digital assets, managing asset claims, addressing inaccuracies, and promptly updating the digital footprint as necessary. The ideal candidate will have a background in researching, designing and deploying Internet facing technologies, preferably in telcos. The candidate will proactively identify and resolve discrepancies and identify directional innovations to the digital attribution system. This role requires a proactive approach and a deep understanding of how digital assets are managed and assigned by Telcos/ISPs.

Job Responsibilities:

  • Validate and Maintain Digital Footprint Data: Regularly review and validate the accuracy of how digital assets are attributed to organizations. Ensure that all internet-facing assets are correctly attributed and reflect the current status.
  • Asset Management & Discovery: Research and design new methods to correctly discover and attribute digital assets to organizations. You will also work with key stakeholders to understand customer needs and the nuances on how their organization is reflected on the Internet.
  • Issue Resolution: Address and resolve issues found within the digital footprint, such as misattributions or outdated information. Work closely with the cybersecurity team to understand the impact of these issues on security ratings.
  • Collaboration and Reporting: Work collaboratively with technical and non-technical teams to gather asset data, clarify asset status, and report on footprint changes and their impacts. Provide insights and recommendations based on digital footprint analysis.
  • Continuous Improvement: Contribute to the improvement of methodologies for digital footprint analysis and management. Participate in the development of new tools and processes to enhance the team's capabilities, while keeping an eye on associated engineering costs.
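As a small illustration of the asset-attribution work described above, here is a hypothetical sketch that maps an IP address to an organization's known netblocks using Python's standard `ipaddress` module (the organization name and ranges are invented; real attribution would draw on registry data such as WHOIS and BGP feeds):

```python
import ipaddress

# Hypothetical mapping from organizations to their announced netblocks
# (192.0.2.0/24 and 198.51.100.0/25 are documentation ranges used as stand-ins).
ORG_NETBLOCKS = {
    "ExampleCorp": ["192.0.2.0/24", "198.51.100.0/25"],
}

def attribute_ip(ip):
    """Return the organization whose netblocks contain `ip`, or None."""
    addr = ipaddress.ip_address(ip)
    for org, blocks in ORG_NETBLOCKS.items():
        if any(addr in ipaddress.ip_network(block) for block in blocks):
            return org
    return None

print(attribute_ip("192.0.2.10"))   # ExampleCorp
print(attribute_ip("203.0.113.5"))  # None
```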

Required Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Cybersecurity, or a related field.
  • 2+ years of experience in cybersecurity, IT asset management, or a related field.
  • Familiarity with the various internet registries such as ARIN, RIPE NCC, APNIC, etc.
  • Strong understanding of network infrastructure, BGP, DNS, WHOIS, and IP management.
  • Proficient in data analysis and capable of interpreting complex data related to network security.
  • Experience with cybersecurity tools and platforms, especially those related to asset management and network scanning.
  • Strong problem-solving skills and the ability to operate effectively under tight deadlines.
  • Experience with distributed data processing frameworks (Spark and or Flink)
  • Proficient in Scala, Python or Golang
  • Experience with various data formats

Preferred Qualifications:

  • Certifications such as CISSP, CISM, or related credentials.
  • Experience with scripting languages for data manipulation and automation.
  • Knowledge of regulatory compliance standards relevant to cybersecurity and data protection.

Additional Skills:

  • Excellent communication skills, both written and verbal.
  • Strong organizational skills with the ability to manage multiple priorities.
  • Proactive attitude and a strong team player.
  • Experience with Kafka

Benefits:

Specific to each country, we offer a competitive salary, stock options, health benefits, unlimited PTO, parental leave, tuition reimbursement, and much more!

The estimated salary range for this position is $145,000-200,000. Actual compensation for the position is based on a variety of factors, including, but not limited to affordability, skills, qualifications and experience, and may vary from the range. In addition to base salary, employees may also be eligible for annual performance-based incentive compensation awards and equity, among other company benefits. 

SecurityScorecard is committed to Equal Employment Opportunity and embraces diversity. We believe that our team is strengthened through hiring and retaining employees with diverse backgrounds, skill sets, ideas, and perspectives. We make hiring decisions based on merit and do not discriminate based on race, color, religion, national origin, sex or gender (including pregnancy) gender identity or expression (including transgender status), sexual orientation, age, marital, veteran, disability status or any other protected category in accordance with applicable law. 

We also consider qualified applicants regardless of criminal histories, in accordance with applicable law. We are committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need assistance or accommodation due to a disability, please contact talentacquisitionoperations@securityscorecard.io.

Any information you submit to SecurityScorecard as part of your application will be processed in accordance with the Company’s privacy policy and applicable law. 

SecurityScorecard does not accept unsolicited resumes from employment agencies.  Please note that we do not provide immigration sponsorship for this position. 

See more jobs at SecurityScorecard

Apply for this job

+30d

Principal Data Engineer

slice - Belfast or remote UK
Scala, NoSQL, SQL, Design, Java, Python

slice is hiring a Remote Principal Data Engineer

UK Remote or Belfast

Serial tech entrepreneur Ilir Sela started Slice in 2010 with the belief that local pizzerias deserve all of the advantages of major franchises without compromising their independence. Starting with his family’s pizzerias, we now empower over 18,000 restaurants (that’s nearly triple Domino’s U.S. network!) with the technology, services, and collective power that owners need to better serve their digitally minded customers and build lasting businesses. We’re growing and adding more talent to help fulfil this valuable mission. That’s where you come in.

 

The Challenge to Solve

Provide Slice with up to date data to grow the business and to empower independent pizzeria owners to make the best data driven decisions through insights that ensure future success.

 

The Role

You will be responsible for leading data modelling and dataset development across the team. You'll be at the forefront of our data strategy, partnering closely with business and product teams to fuel data-driven decisions throughout the company. Your leadership will guide our data architecture expansion, ensuring smooth data delivery and maintaining top-notch data quality. Drawing on your expertise, you'll steer our tech choices and keep us at the cutting edge of the field. You'll get to code daily and share your insights into best practices with the rest of the team.

 

The Team

You’ll work with a team of skilled data engineers daily, providing your expertise to their reviews, as well as working on your own exciting projects with teams across the business. You’ll have a high degree of autonomy and the chance to impact many areas of the business. You will optimise data flow and collection for cross-functional teams and support software developers, business intelligence, and data scientists on data initiatives using this to help to support product launches and supporting Marketing efforts to grow the business. This role reports to the Director of Data Engineering.

 

The Winning Recipe

We’re looking for creative, entrepreneurial engineers who are excited to build world-class products for small business counters. These are the core competencies this role calls for:

  • Strong track record of designing and implementing modern cloud data processing architectures using programming languages such as Java, Scala, or Python and technologies like Spark
  • Expert-level SQL skills
  • Extensive experience in data modelling and design, building out the right structures to deliver for various business and product domains
  • Strong analytical abilities and a history of using data to identify opportunities for improvement and where data can help drive the business towards its goals
  • Experience with message queuing and stream processing using frameworks such as Flink or KStreams, with highly scalable big data stores, and with storage and query pattern design for NoSQL stores
  • Proven leadership skills, with a track record of successfully leading complex engineering projects and mentoring junior engineers, as well as working with cross-functional teams and external stakeholders in a dynamic environment
  • Familiarity with serverless technologies and the ability to design and implement scalable and cost-effective data processing architectures
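The stream-processing experience described above can be illustrated with a hypothetical tumbling-window aggregation, the kind of operation frameworks like Flink or KStreams provide natively (the events below are invented):

```python
from collections import defaultdict

def tumbling_counts(events, window_size):
    """Count events per (window_start, key) over fixed, non-overlapping windows.

    `events` is an iterable of (timestamp, key) pairs.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_size)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Invented event stream: (timestamp, event_type)
events = [(1, "order"), (4, "order"), (11, "refund"), (13, "order")]
print(tumbling_counts(events, 10))
```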

 

The Extras

Working at Slice comes with a comprehensive set of benefits, but here are some of the unexpected highlights:

  • Access to medical, dental, and vision plans
  • Flexible working hours
  • Generous time off policies
  • Annual conference attendance and training/development budget
  • Market leading maternity and paternity schemes
  • Discounts for local pizzerias (of course)

 

The Hiring Process

Here’s what we expect the hiring process for this role to be, should all go well with your candidacy. This entire process is expected to take 1-3 weeks to complete and you’d be expected to start on a specific date.

  1. 30 minute introductory meeting
  2. 30 minute hiring manager meeting
  3. 60 minute pairing interview
  4. 60 minute technical interview
  5. 30 minute CTO interview
  6. Offer!

Pizza brings people together. Slice is no different. We’re an Equal Opportunity Employer and embrace a diversity of backgrounds, cultures, and perspectives. We do not discriminate on the basis of race, colour, gender, sexual orientation, gender identity or expression, religion, disability, national origin, protected veteran status, age, or any other status protected by applicable national, federal, state, or local law. We are also proud members of the Diversity Mark NI initiative as a Bronze Member.

Privacy Notice Statement of Acknowledgment

When you apply for a job on this site, the personal data contained in your application will be collected by Slice. Slice is keeping your data safe and secure. Once we have received your personal data, we put in place reasonable and appropriate measures and controls to prevent any accidental or unlawful destruction, loss, alteration, or unauthorised access. If selected, we will process your personal data for hiring/employment processes, as well as our legal obligations. If you are not selected for the job position and you have given consent on the question below (by selecting "Give consent"), we will store and process your personal data and submitted documents (CV) to consider eligibility for employment for up to 365 days (one year). You have the right to withdraw your previously given consent for storing your personal data and CV in the Slice database for consideration of eligibility for employment for a year. You have the right to withdraw your consent at any time. For additional information and/or to exercise your rights to the protection of personal data, you can contact our Data Protection Officer, e-mail: privacy@slicelife.com

See more jobs at slice

Apply for this job

+30d

Consultant Data Intelligence - H/F - CDI

Talan - Paris, France (Remote)
Tableau, Scala, Azure, AWS

Talan is hiring a Remote Consultant Data Intelligence - H/F - CDI

Job description

If you are looking for a 100% Data environment, welcome to the Data Intelligence Solutions practice - THE Data practice of the Talan group!

YOUR ROLE ON OUR PROJECTS:

Within project teams, you may take on the following missions:

- Leading large-scale projects in Data/Big Data/Cloud ecosystems

- Requirements scoping, design, and functional and technical analysis

- Benchmarking solutions and advising our clients on which technologies to adopt, in line with their needs

- Advising on data architecture, operations and governance

- Full command of the platform, including installation and sizing

- Building applications, manipulating data, modeling, and developing reports and dashboards

- Writing functional and/or technical documentation

- Training users and project teams

- Taking part in pre-sales

YOUR ROLE AT TALAN:

- Building POCs (Proofs of Concept)

- Acting as an ambassador by taking part in partner events

- Contributing to internal projects and sharing knowledge within our teams.

Together, let's deliver new "Talan-ted" projects!

Qualifications

YOUR PROFILE:

With a Master's-level degree (Bac+5), you have at least 5 years of proven experience in data, along with the following skills:

- You have developed technical and functional expertise in one of the following technologies: MSBI, Power BI, Tableau, Informatica, Talend, Dbt, Hadoop, Spark, Scala, Hive, Kafka, Snowflake, Azure DB, AWS, GCP, Redshift, Athena, TM1, Jedox, Big Query, Looker…

- You have mastered the core concepts of business intelligence (modeling, data loading, reporting...) and/or big data

- You have prepared and led business workshops

- You are able to grasp the business context

- Knowledge of management accounting is a plus

- You have trained users and project teams

- One or more certifications would be highly valued

- French: read, written, spoken

- English is a plus.

YOUR GROWTH ASPIRATIONS:

If you are passionate about innovation and want to broaden your technical and functional data skills, move into project and team management, take part in business and organizational development, or simply have your initiative recognized and open up new playing fields, then join us!

See more jobs at Talan

Apply for this job

+30d

Senior MLOps Engineer

EquipmentShare - Remote; Columbia, MO
ML, DevOps, Master's Degree, Scala, Design, Azure, Git, Java, C++, Docker, Kubernetes, Jenkins, Python, AWS

EquipmentShare is hiring a Remote Senior MLOps Engineer

EquipmentShare is Hiring a Senior MLOps Engineer.

EquipmentShare is searching for a Senior MLOps Engineer for a remote opportunity supporting the Data Platform team as it continues to grow.  

Primary Responsibilities

  • Collaborate with scientists and data engineers to design, develop, and deploy scalable ML models and pipelines
  • Work with scientists to build and maintain end-to-end ML pipelines for data ingestion, preprocessing, feature engineering, model training, evaluation, and deployment.
  • Automate continuous integration and continuous deployment (CI/CD) pipelines for scientists to self-service model and feature deployment
  • Develop monitoring and logging solutions to track model performance, data quality, and system health in production environments
  • Work with Scientists to optimize ML workflows for efficiency, scalability, and cost-effectiveness.
  • Automate infrastructure provisioning, configuration management, and resource scaling using infrastructure-as-code (IaC) tools
  • Troubleshoot issues related to data, models, infrastructure, and deployments in production environments
  • Mentor junior members of the Data Platform team and provide technical guidance and best practices
  • Stay updated on the latest advancements and trends in MLOps, machine learning, and cloud technologies
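As one small illustration of the model-monitoring responsibility above, here is a hypothetical sketch of a naive drift check that compares a live feature's mean against its training mean (the values and threshold are invented; production monitoring would be considerably more sophisticated):

```python
def mean(xs):
    return sum(xs) / len(xs)

def drift_alert(train_values, live_values, threshold=0.25):
    """Flag drift when the live mean deviates from the training mean
    by more than `threshold`, expressed as a fraction of the training mean."""
    train_mean = mean(train_values)
    deviation = abs(mean(live_values) - train_mean) / abs(train_mean)
    return deviation > threshold

# Invented feature values: stable vs. shifted distribution
print(drift_alert([10, 12, 11, 9], [10, 11, 12, 10]))  # False
print(drift_alert([10, 12, 11, 9], [18, 20, 19, 21]))  # True
```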

Why We’re a Better Place to Work

  • Competitive salary.
  • Medical, Dental and Vision coverage for full-time employees.
  • 401(k) and company match.
  • Unlimited paid time off (PTO) plus company paid holidays.
  • Generous paid parental leave
  • State-of-the-art onsite gym (Corporate HQ) with instructor-led courses/gym stipend for remote employees.
  • Seasonal and year-round wellness challenges.
  • Company sponsored events (annual family gatherings, happy hours and more).
  • Volunteering and local charity initiatives that help you nurture and grow the communities you call home. Employees receive 16 hours of paid volunteer time per year. 
  • Opportunities for career and professional development with conferences, events, seminars and continued education. 

About You 

Our mission to change an entire industry is not easily achieved, so we only hire people who are inspired by the goal and up for the challenge. In turn, our employees have every opportunity to grow with us, achieve personal and professional success and enjoy making a tangible difference in an industry that’s long been resistant to change. 

Skills & Qualifications 

  • Bachelor’s or Master’s degree, or equivalent practical experience, in computer science, machine learning, software engineering, data engineering, DevOps, or related field
  • 5+ years of experience working in MLOps, machine learning engineering, software engineering, data engineering, DevOps, or related roles
  • Demonstrated knowledge and experience in building a modern data science, experimentation and machine learning technology stack for a data-driven company
  • Proficiency in programming languages such as Python, Java, or Scala
  • Strong understanding of cloud computing platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes)
  • Knowledge of DevOps practices, including infrastructure automation, configuration management, and monitoring
  • Excellent problem-solving skills and attention to detail
  • Effective communication and collaboration skills, with the ability to work in a fast-paced, team-oriented environment
  • Experience with version control systems (e.g., Git) and CI/CD tools (e.g., Jenkins, GitLab CI/CD)
  • Experience with ML frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
  • Experience with Snowflake is preferred but not required
  • Must be qualified to work in the United States or the United Kingdom - we are not sponsoring any candidates at this time

EquipmentShare is committed to a diverse and inclusive workplace. EquipmentShare is an equal opportunity
employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation,
protected veteran status, disability, age, or other legally protected status.

#LI-Remote

 

See more jobs at EquipmentShare

Apply for this job

+30d

Machine Learning Platform Architect (US)

Signifyd - United States (Remote); New York City, NY
ML, S3, SQS, Bachelor's degree, 5 years of experience, 10 years of experience, Scala, Airflow, SQL, Design, Java, Python, AWS

Signifyd is hiring a Remote Machine Learning Platform Architect (US)

Who Are You

We are seeking a highly skilled and experienced ML Platform Architect to join our dynamic and growing data platform team. As an ML Platform Architect, you will play a crucial role in strengthening and expanding the core of our data products. We want you to help us scale our business, drive data-driven decisions, and contribute to our overall data strategy. You will work alongside talented data platform engineers to envision how all the data elements from multiple sources should fit together and then execute that plan. The ideal candidate must:

  • Effectively communicate complex data problems by tailoring the message to the audience and presenting it clearly and concisely. 
  • Balance multiple perspectives, disagree, and commit when necessary to move key company decisions and critical priorities forward.
  • Have a profound comprehension of data quality, governance, and analytics.
  • Have the ability to work independently in a dynamic environment and proactively approach problem-solving.
  • Be committed to driving positive business outcomes through expert data handling and analysis.
  • Be an example for fellow engineers by showcasing customer empathy, creativity, curiosity, and tenacity.
  • Have strong analytical and problem-solving skills, with the ability to innovate and adapt to fast-paced environments.
  • Design and build clear, understandable, simple, clean, and scalable solutions.

What You'll Do

  • Modernize Signifyd’s Machine Learning (ML) Platform to scale for resiliency, performance, and operational excellence, working closely with Engineering and Data Science teams across Signifyd’s R&D group.
  • Create and deliver a technology roadmap focused on advancing our data processing capabilities, which will support the evolution of our real-time data processing and analysis capabilities.
  • Work alongside ML Engineers, Data Scientists, and other Software Engineers to develop innovative big data processing solutions for scaling our core product for eCommerce fraud prevention.
  • Take full ownership of significant portions of our data processing products, including collaborating with stakeholders on machine learning models, designing large-scale data processing solutions, creating additional processing facets and mechanisms, and ensuring the support of low-latency, high-quality, high-scale decisioning for Signifyd’s flagship product.
  • Architect, deploy, and optimize Databricks solutions on AWS, developing scalable data processing solutions to streamline data operations and enhance data solution deployments.
  • Implement data processing solutions using Spark, Java, Python, Databricks, Tecton, and various AWS services (S3, Redshift, EMR, Athena, Glue).
  • Mentor and coach fellow engineers on the team, fostering an environment of growth and continuous improvement.
  • Identify and address gaps in team capabilities and processes to enhance team efficiency and success.
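The low-latency, high-scale decisioning described above typically relies on fast in-memory feature lookups with freshness guarantees. As a minimal illustrative sketch (this is not Signifyd's actual stack; the class, key names, and TTL semantics are assumptions, and a production system would use a distributed store such as Tecton or Redis), a per-entry TTL feature cache might look like:

```python
import time


class TTLFeatureCache:
    """A minimal in-memory feature store with per-entry TTL.

    Illustrative building block for low-latency decisioning lookups;
    expired entries are evicted lazily on read.
    """

    def __init__(self, ttl_s: float, clock=time.monotonic):
        self.ttl_s = ttl_s
        self.clock = clock  # injectable for testing
        self._data: dict[str, tuple[float, object]] = {}

    def put(self, key: str, value) -> None:
        # Record the value together with the time it was stored.
        self._data[key] = (self.clock(), value)

    def get(self, key: str, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        stored_at, value = entry
        if self.clock() - stored_at > self.ttl_s:
            del self._data[key]  # expired: evict and miss
            return default
        return value
```

Injecting the clock keeps the freshness logic deterministic and testable, which matters when decisioning correctness depends on feature staleness bounds.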

What You'll Need

  • Ideally has over 10 years of experience in data engineering, including at least 5 years as a data or machine learning architect or lead, and has successfully navigated the challenges of working with large-scale data processing systems.
  • Deep understanding of data processing, comfortable working with multi-terabyte datasets, and skilled in high-scale data ingestion, transformation, and distributed processing, with strong Apache Spark or Databricks experience.
  • Experience in building low-latency, high-availability data stores for use in real-time or near-real-time data processing with programming languages such as Python, Scala, Java, or JavaScript/TypeScript, as well as data retrieval using SQL and NoSQL.
  • Hands-on expertise in data technologies such as Spark, Airflow, Databricks, AWS services (SQS, Kinesis, etc.), and Kafka, with an understanding of the trade-offs of various architectural approaches and the ability to recommend solutions suited to our needs.
  • Experience with the latest technologies and trends in Data, ML, and Cloud platforms.
  • Demonstrable ability to lead and mentor engineers, fostering their growth and development. 
  • You have successfully partnered with Product, Data Engineering, Data Science and Machine Learning teams on strategic data initiatives.
  • Commitment to quality: you take pride in delivering work that excels in data accuracy, performance, and reliability, setting a high standard for the team and the organization.

#LI-Remote

Benefits in our US offices:

  • Discretionary Time Off Policy (Unlimited!)
  • 401K Match
  • Stock Options
  • Annual Performance Bonus or Commissions
  • Paid Parental Leave (12 weeks)
  • On-Demand Therapy for all employees & their dependents
  • Dedicated learning budget through Learnerbly
  • Health Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Account (FSA)
  • Short Term and Long Term Disability Insurance
  • Life Insurance
  • Company Social Events
  • Signifyd Swag

We want to provide an inclusive interview experience for all, including people with disabilities. We are happy to provide reasonable accommodations to candidates in need of individualized support during the hiring process.

Signifyd provides a base salary, bonus, equity and benefits to all its employees. Our posted job may span more than one career level, and offered level and salary will be determined by the applicant’s specific experience, knowledge, skills, and abilities, as well as internal equity and alignment with market data.

USA Base Salary Pay Range
$230,000–$250,000 USD

See more jobs at Signifyd

Apply for this job

+30d

Software Engineer - Telematics

agilescalaDesignjavac++postgresqltypescriptkubernetespythonAWSbackend

EquipmentShare is hiring a Remote Software Engineer - Telematics

EquipmentShare is Hiring a Telematics Software Engineer 

At EquipmentShare, we believe it’s more than just a job, we invest in our people and encourage you to choose the best path for your career. It’s truly about you, your future and where you want to go.

We are looking for a Software Engineer(backend) to help us continue to build the next evolution of our platform in a scalable, performant and customer-centric oriented architecture.

Our main tech stack includes: AWS, Kubernetes, Python, Kafka, PostgreSQL, DynamoDB, and Kinesis.

If you have production-scale experience in a different stack (Go, Java, TypeScript, Scala, C#, etc.) and are interested in moving to a new stack, we should chat.

What you'll be doing

We are typically organized into agile cross-functional teams composed of Engineering, Product and Design, which allows us to develop deep expertise and rapidly deliver high value features and functionality to our next generation T3 Platform.

You’ll work closely with a team of Engineers to build and maintain our cloud native event ingestion and processing architecture built on AWS. These systems serve our IoT and SaaS customers to provide analytics, real-time alerting and logistics features.

Your team sits at the nexus of high volume message ingestion, hardware deployment logistics and R&D, developing the integrations for the next generation of hardware deployment while ensuring high availability and consistent processing performance.
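The real-time alerting described above can be sketched in miniature as a per-asset sliding time window over ingested events. This is a hedged, stdlib-only illustration (the `Event` fields, window size, and temperature threshold are all assumptions, not EquipmentShare's actual schema), showing the shape of the logic rather than a production consumer:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Event:
    asset_id: str
    ts: float            # event time, epoch seconds
    engine_temp_c: float


class SlidingWindowAlerter:
    """Keep the last `window_s` seconds of events per asset and flag
    when the rolling average temperature exceeds a threshold."""

    def __init__(self, window_s: float = 60.0, threshold_c: float = 120.0):
        self.window_s = window_s
        self.threshold_c = threshold_c
        self.windows: dict[str, deque[Event]] = {}

    def ingest(self, event: Event) -> bool:
        window = self.windows.setdefault(event.asset_id, deque())
        window.append(event)
        # Evict events that have fallen out of the time window.
        while window and event.ts - window[0].ts > self.window_s:
            window.popleft()
        avg = sum(e.engine_temp_c for e in window) / len(window)
        return avg > self.threshold_c
```

In a real pipeline this windowing would run inside a stream processor consuming from Kinesis or Kafka; the sketch only captures the eviction-and-aggregate pattern.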

We'll be there to support you as you become familiar with our teams, product domains, tech stack and processes — generally how we all work together.

As a Software Engineer III you will 

  • Help design, build and deliver the services and domains that power our platform — shaping the product features and capabilities that underpin our platform.
  • Manage large, complex systems operating real-time, big data pipeline processing and inter-process messaging.
  • Analyze component, asset, back-end network logs and telecommunications to confirm system performance using proprietary logging and network tools.
  • Review requirements and specification documents as well as architecture and design documentation of telematics systems.  Balancing the desire to ship code with the responsibility to get it right.
  • Contribute to our culture improving how we deliver as a team.  Helping us to leave things better than we find them and making it easier for us to get stuff done.
  • Collaborate with Engineers on our team by sharing your insight, knowledge and experience as we learn and grow together.

Who you are

You're a hands-on developer who gets stuck in, you enjoy solving complex problems and building impactful solutions.  Most importantly, you care about making a difference.

  • Take the initiative to own outcomes from start to finish — knowing what needs to be accomplished within your domain and how we work together to deliver the best solution.
  • You have a passion for developing your craft — you understand what it takes to build quality, robust and scalable solutions.
  • You’ll see the learning opportunity when things don’t quite go to plan — not only for you, but for how we continuously improve as a team.
  • You take a hypothesis-driven approach — knowing how to source, create and leverage data to inform decision making, using data to drive how we improve, to shape how we evaluate and make platform recommendations.

So, what is important to us?

Above all, you’ll get stuff done. More importantly, you’ll collaborate to do the right things, in the right way, to achieve the right outcomes.

  • Proficient with a high-level object-oriented language (especially Python; open to Go, Java, Scala, C#, etc.).
  • 4+ years of relevant development experience building production grade solutions.
  • Delivery focused with solid exposure to event driven architectures and high volume data processing.
  • Practical exposure of CI/CD pipelines for your production services.
  • Familiarity with public cloud service platforms.
  • Experience partnering and collaborating with remote teams (across different time zones).
  • Proven track record in learning new technologies and applying that learning quickly.
  • Experience building observability and monitoring into applications.

Some of the things that would be nice to have, but not required:

  • Familiar with containerisation and Kubernetes.
  • Practical production knowledge of service oriented architectures.
  • Experience with streaming technologies. (AWS Kinesis, Kafka, etc.)

What we will offer you

We can promise that every day will be a little different with new ideas, challenges and rewards.

We’ve been growing as a team and we are not finished just yet — there is plenty of opportunity to shape how we deliver together.

Our mission is to enable the construction industry with tools that unlock substantial increases to productivity. Together with our team and customers, we are building the future of construction.

T3 is the only cloud-based operating system that brings together construction workflows & data from constantly moving elements in one place.

  • Competitive base salary and market leading equity package (pre-IPO).
  • Unlimited PTO.
  • Remote first.
  • True work/life balance.
  • Medical, Dental, Vision and Life Insurance coverage.
  • 401(k) + match.
  • Opportunities for career and professional development with conferences, events, seminars and continued education.
  • On-site fitness center at the Home Office in Columbia, Missouri, complete with weightlifting machines, cardio equipment, group fitness space, racquetball courts, a climbing wall, and much more!
  • Volunteering and local charity support that help you nurture and grow the communities you call home through our Giving Back initiative.
  • Stocked breakroom and full kitchen with breakfast and lunch provided daily by our chef and kitchen crew.

We embrace diversity in all of its forms and foster an inclusive environment for all people to do their best work with us. 

We're an equal opportunity employer. All applicants will be considered for employment without attention to ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity status or disability status.

All appointments will be made on merit.

#LI-Remote

See more jobs at EquipmentShare

Apply for this job

+30d

Data Science Intern - Undergraduate

TubiSan Francisco, CA; Remote
tableauscalasqlc++python

Tubi is hiring a Remote Data Science Intern - Undergraduate

Join Tubi (www.tubi.tv), Fox Corporation's premium ad-supported video-on-demand (AVOD) streaming service leading the charge in making entertainment accessible to all. With over 200,000 movies and television shows, including a growing library of Tubi Originals, 200+ local and live news and sports channels, and 455 entertainment partners featuring content from every major Hollywood studio, Tubi gives entertainment fans an easy way to discover new content that is available completely free. Tubi's library has something for every member of our diverse audience, and we're committed to building a workforce that reflects that diversity. We're looking for great people who are creative thinkers, self-motivators, and impact-makers looking to help shape the future of streaming.

About the Role: 

We are looking for a curious and enthusiastic Data Scientist Intern to join our Ad Product team for the summer. You will work cross-functionally to help ensure that major decisions are data-driven. You will be supported by a close-knit team of Data Scientists and Analysts and will have an opportunity to drive and collaborate on projects end-to-end. Your insights will have a direct impact on the future of Tubi.

What You'll Do:

  • Uncover insights and create data visualizations to ensure data-driven decision-making
  • Build and maintain high-quality data infrastructure that is easily used by less technical audiences
  • Collaborate closely with product managers and engineers to understand experimentation requirements and enhance the scope and depth of experimentation across business units 
  • Build internal data tools to improve productivity for your team and stakeholders
  • Receive guidance/mentorship from senior team members

Qualifications: 

  • SQL and a scripting programming language (Python, R, or Scala)
  • Experience with data visualization and dashboarding tools (Tableau, Periscope, or others)
  • Experience with product analytics and working with event-level data
  • Strong communication, presentation, and collaboration skills
  • Video streaming industry experience is a plus

Program Eligibility Requirements:

  • Must be actively enrolled in an accredited college or university and pursuing an undergraduate degree during the length of the program
  • Current class standing of sophomore (second-year college student) or above
  • Strong academic record (minimum cumulative 3.0 GPA)
  • Committed and available to work for the entire length of the program

About the Program:

  • Application Deadline: April 19, 2024 
  • Program Timeline: 8-week placement, beginning on 6/3
  • Weekly Hours: Up to 40 hours per week (5 days)
  • Worksite: Remote

Pursuant to state and local pay disclosure requirements, the pay range for this role, with the final offer amount dependent on education, skills, experience, and location, is listed per hour below.

California, Colorado, New York City, Westchester County, NY, and Washington
$25 USD

Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of all US employee benefits. The following distinctions outline the differences between the Tubi and FOX benefits:

  • For US-based non-exempt Tubi employees, the FOX Employee Benefits summary accurately captures the Vacation and Sick Time.
  • For all salaried/exempt employees, in lieu of the FOX Vacation policy, Tubi offers a Flexible Time off Policy to manage all personal matters.
  • For all full-time, regular employees, in lieu of FOX Paid Parental Leave, Tubi offers a generous Parental Leave Program, which allows parents twelve (12) weeks of paid bonding leave within the first year of the birth, adoption, surrogacy, or foster placement of a child. This time is 100% paid through a combination of any applicable state, city, and federal leaves and wage-replacement programs in addition to contributions made by Tubi.
  • For all full-time, regular employees, Tubi offers a monthly wellness reimbursement.

Tubi is proud to be an equal opportunity employer and considers qualified applicants without regard to race, color, religion, sex, national origin, ancestry, age, genetic information, sexual orientation, gender identity, marital or family status, veteran status, medical condition, or disability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. We are an E-Verify company.

See more jobs at Tubi

Apply for this job

+30d

Data Engineering Intern - Graduate

TubiSan Francisco, CA; Remote
MLterraformscalaairflowsqljavac++python

Tubi is hiring a Remote Data Engineering Intern - Graduate

Join Tubi (www.tubi.tv), Fox Corporation's premium ad-supported video-on-demand (AVOD) streaming service leading the charge in making entertainment accessible to all. With over 200,000 movies and television shows, including a growing library of Tubi Originals, 200+ local and live news and sports channels, and 455 entertainment partners featuring content from every major Hollywood studio, Tubi gives entertainment fans an easy way to discover new content that is available completely free. Tubi's library has something for every member of our diverse audience, and we're committed to building a workforce that reflects that diversity. We're looking for great people who are creative thinkers, self-motivators, and impact-makers looking to help shape the future of streaming.

About the Role:

At Tubi, data plays a vital role in keeping viewers engaged and the business thriving. Every day, data engineering pipelines analyze the massive amount of data generated by millions of viewers, turning it into actionable insights. In addition to processing TBs a day of 1st party user activity data, we manage a petabyte scale data lake and data warehouses that several hundred consumers use daily. We have two openings on two different teams.

Core Data Engineering (1): In this role, you will join a team focused on Core Data Engineering, helping build and analyze business-critical datasets that fuel Tubi's success as a leading streaming platform.

  • Use SQL and SQL modeling to interact with and create massive sets of data
  • Use DBT and its semantic modeling concept to build production data models
  • Use Databricks as a data warehouse and computing platform
  • Use Python/Scala in notebooks to interact with and create large datasets
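The SQL-modeling workflow above builds derived, reusable models on top of raw event tables. As a small, hedged sketch of that pattern (using the stdlib `sqlite3` as a stand-in for Databricks SQL; the table and column names are invented for illustration, and DBT would express the view as a model file instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_view_events (
    user_id TEXT, title TEXT, watch_date TEXT, minutes REAL
);
INSERT INTO raw_view_events VALUES
  ('u1', 'Movie A', '2024-06-01', 45.0),
  ('u1', 'Movie B', '2024-06-01', 30.0),
  ('u2', 'Movie A', '2024-06-01', 90.0);

-- A derived "mart" model: daily watch time per user,
-- built on top of the raw event table.
CREATE VIEW daily_watch_time AS
SELECT user_id, watch_date, SUM(minutes) AS total_minutes
FROM raw_view_events
GROUP BY user_id, watch_date;
""")

rows = conn.execute(
    "SELECT user_id, total_minutes FROM daily_watch_time ORDER BY user_id"
).fetchall()
```

The point of the pattern is that downstream consumers query the named model (`daily_watch_time`) rather than the raw events, which is the same layering DBT's semantic models formalize.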

Streaming Analytics (1): In this role, you will join a small and nimble team focused on Streaming Analytics that powers our core and critical datasets for machine learning, helping improve the data quality that fuels Tubi's success as a leading streaming platform.

  • Use SQL to explore and analyze the data quality of our most critical datasets, working with different technical stakeholders across ML & data science 
  • Work with engineers to implement a near-real-time data quality dashboard
  • Use Python/Scala in notebooks to transform and explore large datasets
  • Use tools like Airflow for workflow management and Terraform for cloud infrastructure automation
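The data-quality exploration described above usually boils down to a handful of batch metrics. As an illustrative sketch (the record shape, key column, and metrics are assumptions, not Tubi's actual checks), a minimal report over a batch of event records might compute null rates and a duplicate rate:

```python
def quality_report(rows: list[dict], key: str, required: list[str]) -> dict:
    """Compute simple data-quality metrics over a batch of records:
    null rate for each required column, and duplicate rate on the key."""
    n = len(rows)
    null_rate = {
        col: sum(1 for r in rows if r.get(col) is None) / n
        for col in required
    }
    # Fraction of rows whose key duplicates an earlier row's key.
    dup_rate = 1 - len({r[key] for r in rows}) / n
    return {"rows": n, "null_rate": null_rate, "duplicate_rate": dup_rate}
```

At scale the same aggregations would run in Spark SQL over the critical datasets, with the results feeding the dashboard; the sketch only shows which metrics are worth tracking.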

Qualifications: 

  • Fluency (intermediate) in one major programming language (preferably Python, Scala, or Java) and SQL (any variant)
  • Familiarity with big data technologies (e.g., Apache Spark, Kafka) is a plus
  • Strong communication skills and a desire to learn!

Program Eligibility Requirements:

  • Must be actively enrolled in an accredited college or university and pursuing an undergraduate or graduate degree during the length of the program
  • Current class standing of sophomore (second-year college student) or above
  • Strong academic record (minimum cumulative 3.0 GPA)
  • Committed and available to work for the entire length of the program

About the Program:

  • Application Deadline: April 19, 2024 
  • Program Timeline: 10-week placement beginning on 6/17
  • Weekly Hours: Up to 40 hours per week (5 days)
  • Worksite:  Remote or Hybrid (SF or LA)

Pursuant to state and local pay disclosure requirements, the pay range for this role, with the final offer amount dependent on education, skills, experience, and location, is listed per hour below.

California, Colorado, New York City, Westchester County, NY, and Washington
$40 USD

Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of all US employee benefits. The following distinctions outline the differences between the Tubi and FOX benefits:

  • For US-based non-exempt Tubi employees, the FOX Employee Benefits summary accurately captures the Vacation and Sick Time.
  • For all salaried/exempt employees, in lieu of the FOX Vacation policy, Tubi offers a Flexible Time off Policy to manage all personal matters.
  • For all full-time, regular employees, in lieu of FOX Paid Parental Leave, Tubi offers a generous Parental Leave Program, which allows parents twelve (12) weeks of paid bonding leave within the first year of the birth, adoption, surrogacy, or foster placement of a child. This time is 100% paid through a combination of any applicable state, city, and federal leaves and wage-replacement programs in addition to contributions made by Tubi.
  • For all full-time, regular employees, Tubi offers a monthly wellness reimbursement.

Tubi is proud to be an equal opportunity employer and considers qualified applicants without regard to race, color, religion, sex, national origin, ancestry, age, genetic information, sexual orientation, gender identity, marital or family status, veteran status, medical condition, or disability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. We are an E-Verify company.

See more jobs at Tubi

Apply for this job

+30d

Azure Data Engineer H/F - M Cloud

DevoteamLevallois-Perret, France, Remote
DevOPSscalasqlazuregitpython

Devoteam is hiring a Remote Azure Data Engineer H/F - M Cloud

Job Description

Your main responsibilities as an Azure Data Engineer:

#Azure #Data #Databricks #Datafactory

Here is a non-exhaustive list of your day-to-day missions; we trust you to take them on and enrich them in your own way:

  • Industrialize Data projects in a cloud environment
  • Implement data architectures and guide the choice of technologies suited to project needs
  • Identify, collect, analyze, and integrate the data essential to solving business and operational problems
  • Write code and document it
  • Ensure data quality by putting in place the appropriate measurement and monitoring tools
  • Contribute, together with the team, to the development of the Data platform on Azure and to the establishment of development best practices
  • Apply DevOps methodologies to improve the efficiency of development processes

Where would you carry out your missions? With major clients in banking, insurance, retail, defense, and luxury, all driving innovative projects.

Qualifications

What will you bring to the Tribe?

[The ideal skills]

A graduate of an engineering school or holder of a Master's degree in IT, you have at least 3 years of professional experience in Data and on the Azure cloud, and you master all or some of the following technologies:

  • Azure Data Services environment (Synapse Analytics, Data Factory, Databricks, Azure Machine Learning, EventHub, Stream Analytics, etc.)
  • Proficiency in Python and SQL
  • Spark (Scala, PySpark)
  • Knowledge of the DevOps approach: Git, CI/CD, etc.

You are experienced with agile methodologies and speak French and English fluently.

You can qualify and articulate a client's needs, have strong facilitation skills, and have the soul of a leader!

You are open to change, proactive with ideas, and a true driving force within a team.

[Cherries on the cake]

  • Experience with a Data Viz tool such as Power BI
  • Microsoft certifications (AZ-900, DP-900, DP-203, etc.) / Databricks

See more jobs at Devoteam

Apply for this job

+30d

Data Engineer

phDataLATAM - Remote
scalasqlazurejavapythonAWS

phData is hiring a Remote Data Engineer


See more jobs at phData

Apply for this job