Data Engineer Remote Jobs

113 Results

+30d

Senior Data Engineer

Balsam Brands | Mexico City, Mexico, Remote
postgres, sql, oracle, Design, api, MySQL, python

Balsam Brands is hiring a Remote Senior Data Engineer

Job Description

In this hands-on role as a Senior Data Engineer, your primary responsibility will be to partner with key business stakeholders, data analysts, and software engineers to design and build a robust, scalable, company-wide data infrastructure that moves and translates data used to inform strategic business decisions. You will ensure the performance, stability, cost-efficiency, security, and accuracy of the data on the centralized data platform. The ideal candidate will possess advanced knowledge and hands-on experience in data integration, building data pipelines, batch processing frameworks, and data modeling techniques to facilitate seamless data movement. You will collaborate with various technology and business stakeholders to define requirements and to design and deliver data products that meet user needs. The candidate should demonstrate intellectual acumen, excel in engineering best practices, and have a strong interest in developing enterprise-scale solutions using industry-recognized cloud platforms, data warehouses, and data integration and orchestration tools.

This full-time position reports to the Senior Manager, Data Engineering and requires in-office presence twice a week (Tuesdays and Wednesdays) to facilitate effective collaboration with both local and remote team members. Some flexibility in the regular work schedule is necessary, as most teams have overlapping hours in the early morning and/or early evening PST. Specific scheduling needs for this role will be discussed in the initial interview.

What you’ll do:

  • Data Infrastructure Design: Develop and maintain robust, scalable, and high-performance data infrastructure to meet the company-wide data and analytics needs
  • Data Lifecycle Management: Manage the entire data lifecycle, including ingestion, modeling, warehousing, transformation, access control, quality, observability, retention, and deletion
  • Strategic Data Movement: Define and implement data integration strategies to collect and ingest various data sources. Design, build, and launch efficient and reliable data pipelines to process data of different structures and sizes using Python, APIs, SQL, and platforms like Snowflake (a brief sketch of this pattern follows this list)
  • Collaboration and Consultation: Serve as a trusted partner to collaborate with technical and cross-functional teams to support their data needs, address data-related technical issues, and provide expert consultation
  • Process Efficiency and Stability: Apply engineering best practices to streamline manual processes, optimize data pipelines, and establish observability capabilities to monitor and alert data quality and infrastructure health and stability
  • Innovative Solutions: Stay updated on the latest technologies and lead the evaluation and deployment of cutting-edge tools to enhance data infrastructure and processes
  • Coaching and Mentorship: Foster a culture of knowledge sharing by acting as a subject matter expert, leading by example, and mentoring others
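
As a hedged illustration of the pipeline pattern named in the "Strategic Data Movement" bullet above, the minimal Python sketch below pulls records from an external API and lands them in Snowflake. The endpoint, table, and connection details are placeholder assumptions, not Balsam Brands specifics.

    # Minimal sketch, assuming a hypothetical JSON API and a Snowflake
    # RAW.ORDERS schema; every name and credential here is a placeholder.
    import requests
    import snowflake.connector

    API_URL = "https://api.example.com/orders"  # hypothetical source


    def extract() -> list[dict]:
        resp = requests.get(API_URL, timeout=30)
        resp.raise_for_status()
        return resp.json()


    def load(rows: list[dict]) -> None:
        conn = snowflake.connector.connect(
            account="my_account",  # placeholder connection details
            user="etl_user",
            password="...",
            warehouse="ETL_WH",
            database="RAW",
            schema="ORDERS",
        )
        try:
            with conn.cursor() as cur:
                cur.executemany(
                    "INSERT INTO orders_raw (id, amount, created_at) "
                    "VALUES (%(id)s, %(amount)s, %(created_at)s)",
                    rows,
                )
        finally:
            conn.close()


    if __name__ == "__main__":
        load(extract())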

What you bring to the table:

  • Must be fluent in English, both written and verbal
  • 8+ years of professional experience in data engineering
  • Extensive hands-on experience designing and maintaining scalable, efficient, secure, and fault-tolerant distributed databases on the Snowflake Cloud Data Platform. In-depth knowledge of cloud platforms, particularly GCP and Microsoft Azure
  • Proficient in designing and implementing data movement pipelines for diverse data sources including databases, external data providers, and streaming sources, for both inbound and outbound data workflows
  • Deep understanding of relational databases (SQL Server, Oracle, Postgres, and MySQL) with advanced SQL and Python skills for building API integrations, ETLs, and data models
  • Proven experience in building efficient and reliable data pipelines with comprehensive data quality checks, workflow management, and CI/CD integration
  • Excellent analytical thinking skills for performing root cause analysis on external and internal processes and data, resolving data incidents, and identifying opportunities for improvement
  • Effective communication skills for articulating complex technical details in simple business terms to non-technical audiences across various business functions
  • Strong understanding of coding standards, best practices, and data governance

Location and Travel: At Balsam Brands, we believe that time spent together, in-person, collaborating and building relationships is important. To be considered for this role, it is preferred that candidates live within the Mexico City, Guadalajara, or Monterrey metropolitan areas in order to attend occasional team meetings, offsites, or learning and development opportunities that will be planned in a centralized location. Travel to the U.S. may be required for companywide and broader team retreats.

Notes: This is a full-time (40 hours/week), indefinite position with benefits. Candidates must be Mexican nationals to be eligible for this position; this screening question will be asked during the application process. Velocity Global is the Employer of Record for Balsam Brands' Mexico City location, and you will be employed and provided benefits under their payroll. Balsam Brands has partnered with Velocity Global as your Employer of Record to ensure that your employment complies with all local laws and regulations and that you receive an exceptional employment experience.

Benefits Offered:

  • Competitive compensation; salary is reviewed yearly and may be adjusted as part of the normal compensation review process
  • Career development and growth opportunities; access to online learning solutions and annual stipend for continuous learning
  • Fully remote work and flexible schedule
  • Collaborate in a multicultural environment; learn and share best practices around the globe
  • Government mandated benefits (IMSS, INFONAVIT, SAR, 50% vacation premium)
  • Healthcare coverage provided for the employee and dependents
  • Life insurance provided for the employee
  • Monthly grocery coupons
  • Monthly non-taxable allowance for electricity and internet services
  • 20 days Christmas bonus
  • Paid Time Off: Official Mexican holidays and 12 vacation days (increases with years of service), plus additional wellness days available at start of employment

See more jobs at Balsam Brands

Apply for this job

+30d

Data Engineer

Zensark Tecnologies Pvt Ltd | Hyderabad, India, Remote
S3, EC2, nosql, postgres, sql, oracle, Design, java, python, AWS

Zensark Tecnologies Pvt Ltd is hiring a Remote Data Engineer

Job Description

Job Title: Data Engineer

Department: Product Development

Reports to: Director, Software Engineering

Summary:

The Data Engineer is responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The role supports our software developers, database architects, data analysts, and data scientists on data initiatives, ensures that optimal data delivery architecture is consistent throughout ongoing projects, and is responsible for optimizing or even re-designing Tangoe’s data architecture to support our next generation of products and data initiatives.

Responsibilities:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater performance and scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Skills & Qualifications:

  • 5+ years of experience in a Data Engineer role
  • Experience with relational SQL and NoSQL databases, including Postgres, Oracle and Cassandra.
  • Experience with data pipeline and workflow management tools.
  • Experience with AWS cloud services: S3, EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark-Streaming, Amazon Kinesis, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, NodeJs.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with both structured and unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Education:

  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.

Working conditions: 

  • Remote

Tangoe reaffirms its commitment to providing equal opportunities for employment and advancement to qualified employees and applicants. Individuals will be considered for positions for which they meet the minimum qualifications and are able to perform without regard to race, color, gender, age, religion, disability, national origin, veteran status, sexual orientation, gender identity, current unemployment status, or any other basis protected by federal, state or local laws. Tangoe is an Equal Opportunity Employer - Minority/Female/Disability/Veteran/Current Unemployment Status.

Qualifications

  • Bachelor’s degree in Computer Science, Engineering or a related subject

See more jobs at Zensark Tecnologies Pvt Ltd

Apply for this job

+30d

Data Engineer--US Citizens/Green Card

Software Technology Inc | Brentsville, VA, Remote
Lambda, nosql, sql, azure, api, git

Software Technology Inc is hiring a Remote Data Engineer--US Citizens/Green Card

Job Description

I am a Lead Talent Acquisition Specialist at STI (Software Technology Inc) and currently looking for a Data Engineer.

Below is a detailed job description. Should you be interested, please feel free to reach me via call or email: amrutha.duddula@stiorg.com / 732-664-8807

Title:  Data Engineer
Location: Manassas, VA (Remote until Covid)
Duration: Long Term Contract

Required Skills:

  • Experience working in Azure Databricks and Apache Spark
  • Proficient programming in Scala, Python, or Java
  • Experience developing and deploying data pipelines for streaming and batch data coming from multiple sources
  • Experience creating data models and implementing business logic using the tools and languages listed
  • Working knowledge of Kafka, Structured Streaming, the DataFrame API, SQL, and NoSQL databases (a short illustration follows this list)
  • Comfortable with APIs, Azure Data Lake, Git, notebooks, Spark clusters, Spark jobs, and performance tuning
  • Must have excellent communication skills
  • Familiarity with Power BI, Delta Lake, Lambda Architecture, Azure Data Factory, or Azure Synapse a plus
  • Telecom domain experience is not necessary but very helpful
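
To make the DataFrame API and Delta Lake items above concrete, here is a short, hypothetical Databricks-style batch job; the lake path, columns, and table name are invented for illustration and are not part of this posting.

    # Read raw JSON from a lake path, normalize it, and write a Delta table.
    # All paths and names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("clean-events").getOrCreate()

    raw = spark.read.json("abfss://lake@account.dfs.core.windows.net/raw/events/")

    cleaned = (
        raw.filter(col("event_type").isNotNull())         # drop malformed rows
           .withColumn("amount", col("amount").cast("double"))
           .dropDuplicates(["event_id"])                  # keep re-runs idempotent
    )

    # The Delta format is available out of the box on Databricks clusters.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("silver.events")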

Thank you,
Amrutha Duddula
Lead Talent Acquisition Specialist
Software Technology Inc (STI)

Email: amrutha.duddula@stiorg.com
Phone : 732-664-8807
www.stiorg.com
www.linkedin.com/in/amruthad/

Qualifications

See more jobs at Software Technology Inc

Apply for this job

+30d

Senior Data science Engineer - Remote

RapidSoft Corp | Reston, VA, Remote
agile, Design, java, python

RapidSoft Corp is hiring a Remote Senior Data science Engineer - Remote

Job Description

Duties and Responsibilities:

  • Develop data solutions in collaboration with other team members and software engineering teams that meet and anticipate business goals and strategies
  • Work with senior data science engineers in analyzing and understanding all aspects of data, including source, design, insight, technology and modeling
  • Develop and manage scalable data processing platforms for both exploratory and real-time analytics
  • Oversee and develop algorithms for quick data acquisition, analysis and evolution of the data model to improve search and recommendation engines
  • Document and demonstrate solutions
  • Design system specifications and provide standards and best practices
  • Support and mentor junior data engineers by providing advice and coaching
  • Make informed decisions quickly and take ownership of services and applications at scale
  • Be a persistent, creative problem solver, constantly striving to improve and iterate on both processes and technical solutions
  • Remain cool and effective in a crisis
  • Understand business needs and know how to create the tools to manage them
  • Take initiative, own the problem and own the solution
  • Other duties as assigned

Supervisory Responsibilities:

  • None

Minimum Qualifications:

  • Bachelor's Degree in Data Engineering, Computer Science, Information Technology, or a related discipline (or equivalent experience)
  • 8+ years of experience in data engineering development
  • 5+ years of experience working in object-oriented programming languages such as Python or Java
  • Experience working in an Agile environment

Qualifications

See more jobs at RapidSoft Corp

Apply for this job

+30d

Senior Data Engineer

phData | India - Remote
scala, sql, azure, java, python, AWS

phData is hiring a Remote Senior Data Engineer


See more jobs at phData

Apply for this job

+30d

Lead Data Engineer

phData | India - Remote
scala, sql, azure, java, python, AWS

phData is hiring a Remote Lead Data Engineer


See more jobs at phData

Apply for this job

+30d

Senior Data Engineer

Remote | Remote-Southeast Asia
airflow, sql, jenkins, python, AWS

Remote is hiring a Remote Senior Data Engineer

About Remote

Remote is solving global remote organizations’ biggest challenge: employing anyone anywhere compliantly. We make it possible for businesses big and small to employ a global team by handling global payroll, benefits, taxes, and compliance. Check out remote.com/how-it-works to learn more or if you’re interested in adding to the mission, scroll down to apply now.

Please take a look at remote.com/handbook to learn more about our culture and what it is like to work here. Not only do we encourage folks from all ethnic groups, genders, sexuality, age and abilities to apply, but we prioritize a sense of belonging. You can check out independent reviews by other candidates on Glassdoor or look up the results of our candidate surveys to see how others feel about working and interviewing here.

All of our positions are fully remote. You do not have to relocate to join us!

What this job can offer you

This is an exciting time to join the growing Data Team at Remote, which today consists of over 15 Data Engineers, Analytics Engineers and Data Analysts spread across 10+ countries. Throughout the team we're focused on driving business value through impactful decision making. We're in a transformative period where we're laying the foundations for scalable company growth across our data platform, which truly serves every part of the Remote business. This team would be a great fit for anyone who loves working collaboratively on challenging data problems, and making an impact with their work. We're using a variety of modern data tooling on the AWS platform, such as Snowflake and dbt, with SQL and Python being extensively employed.

This is an exciting time to join Remote and make a personal difference in the global employment space as a Senior Data Engineer, joining our Data team, composed of Data Analysts and Data Engineers. We support decision-making and operational reporting needs by translating data into actionable insights for non-data professionals at Remote. We're mainly using SQL, Python, Meltano, Airflow, Redshift, Metabase and Retool.
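
As a rough sketch of how a stack like this is often wired together (an assumption for illustration, not Remote's actual pipeline), an Airflow 2.x DAG might chain a Meltano extract/load step with a dbt transform:

    # Hypothetical daily ELT DAG: extract/load with Meltano, then transform
    # with dbt. DAG id, schedule, and shell commands are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="elt_example",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+ argument name
        catchup=False,
    ) as dag:
        # Extract from a source and load into the warehouse (e.g. Redshift).
        extract_load = BashOperator(
            task_id="meltano_el",
            bash_command="meltano run tap-postgres target-redshift",
        )

        # Build analytics models on top of the loaded data.
        transform = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt",
        )

        extract_load >> transform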

What you bring

  • Experience in data engineering; high-growth tech company experience is a plus
  • Strong experience with building data extraction/transformation pipelines (e.g. Meltano, Airbyte) and orchestration platforms (e.g. Airflow)
  • Strong experience in working with SQL, data warehouses (e.g. Redshift) and data transformation workflows (e.g. dbt)
  • Solid experience using CI/CD (e.g. Gitlab, Github, Jenkins)
  • Experience with data visualization tools (e.g. Metabase) is considered a plus
  • A self-starter mentality and the ability to thrive in an unstructured and fast-paced environment
  • You have strong collaboration skills and enjoy mentoring
  • You are a kind, empathetic, and patient person
  • You write and speak fluent English
  • Prior experience working remotely is not required, but is considered a plus

Key Responsibilities

  • Playing a key role in Data Platform Development & Maintenance:
    • Managing and maintaining the organization's data platform, ensuring its stability, scalability, and performance.
    • Collaboration with cross-functional teams to understand their data requirements and optimize data storage and access, while protecting data integrity and privacy.
    • Developing and testing architectures that enable data extraction and transformation to serve business needs.
  • Improving further our Data Pipeline & Monitoring Systems:
    • Designing, developing, and deploying efficient Extract, Load, Transform (ELT) processes to acquire and integrate data from various sources into the data platform.
    • Identifying, evaluating, and implementing tools and technologies to improve ELT pipeline performance and reliability.
    • Ensuring data quality and consistency by implementing data validation and cleansing techniques.
    • Implementing monitoring solutions to track the health and performance of data pipelines and identify and resolve issues proactively.
    • Conducting regular performance tuning and optimization of data pipelines to meet SLAs and scalability requirements.
  • Dig deep into DBT Modelling:
    • Designing, developing, and maintaining DBT (Data Build Tool) models for data transformation and analysis.
    • Collaboration with Data Analysts to understand their reporting and analysis needs and translate them into DBT models, making sure they respect internal conventions and best practices.
  • Driving our Culture of Documentation:
    • Creating and maintaining technical documentation, including data dictionaries, process flows, and architectural diagrams.
    • Collaborating with cross-functional teams, including Data Analysts, SREs (Site Reliability Engineers) and Software Engineers, to understand their data requirements and deliver effective data solutions.
    • Sharing knowledge and offering mentorship, providing guidance and advice to peers and colleagues, and creating an environment that empowers collective growth.

Practicals

  • You'll report to: Engineering Manager - Data
  • Team: Data 
  • Location: For this position we welcome everyone to apply, but we will prioritise applications from the following locations as we encourage our teams to diversify: Vietnam, Indonesia, Taiwan and South Korea
  • Start date: As soon as possible

Remote Compensation Philosophy

Remote's Total Rewards philosophy is to ensure fair, unbiased compensation and fair equitypayalong with competitive benefits in all locations in which we operate. We do not agree to or encourage cheap-labor practices and therefore we ensure to pay above in-location rates. We hope to inspire other companies to support global talent-hiring and bring local wealth to developing countries.

At first glance our salary bands seem quite wide - here is some context. At Remote we have international operations and a globally distributed workforce. We use geo ranges to consider geographic pay differentials as part of our global compensation strategy to remain competitive in various markets while hiring globally.

The base salary range for this full-time position is $53,500 USD to $131,300 USD. Our salary ranges are determined by role, level and location, and our job titles may span more than one career level. The actual base pay for the successful candidate in this role is dependent upon many factors such as location, transferable or job-related skills, work experience, relevant training, business needs, and market demands. The base salary range may be subject to change.

Application process

  1. Interview with recruiter
  2. Interview with future manager
  3. Async exercise stage 
  4. Interview with team members


Benefits

Our full benefits & perks are explained in our handbook at remote.com/r/benefits. As a global company, each country works differently, but some benefits/perks are for all Remoters:
  • work from anywhere
  • unlimited personal time off (minimum 4 weeks)
  • quarterly company-wide day off for self care
  • flexible working hours (we are async)
  • 16 weeks paid parental leave
  • mental health support services
  • stock options
  • learning budget
  • home office budget & IT equipment
  • budget for local in-person social events or co-working spaces

How you’ll plan your day (and life)

We work async at Remote which means you can plan your schedule around your life (and not around meetings). Read more at remote.com/async.

You will be empowered to take ownership and be proactive. When in doubt you will default to action instead of waiting. Your life-work balance is important and you will be encouraged to put yourself and your family first, and fit work around your needs.

If that sounds like something you want, apply now!

How to apply

  1. Please fill out the form below and upload your CV in PDF format.
  2. We kindly ask you to submit your application and CV in English, as this is the standardised language we use here at Remote.
  3. If you don’t have an up-to-date CV but you are still interested in talking to us, please feel free to add a copy of your LinkedIn profile instead.

We will ask you to voluntarily tell us your pronouns at interview stage, and you will have the option to answer our anonymous demographic questionnaire when you apply below. As an equal employment opportunity employer it’s important to us that our workforce reflects people of all backgrounds, identities, and experiences and this data will help us to stay accountable. We thank you for providing this data, if you chose to.

See more jobs at Remote

Apply for this job

+30d

Data Engineer II (Remote)

HackerRank | Remote within India
agile, tableau, scala, airflow, sql, Design, AWS

HackerRank is hiring a Remote Data Engineer II (Remote)

At HackerRank, we help over 2,500 of the most prestigious logos across industries find, hire and upskill amazing developer talent using our SaaS-based Developer Skills Platform. We pioneered and continue to lead the developer skills market. At HackerRank, we are passionate about our mission to "Change the world to value skills over pedigree". This position is full-time and remote within India.

You will be working on:

  • Evaluate technologies, develop POCs, solve technical challenges and propose innovative solutions for our technical and business problems
  • Delight our stakeholders, customers and partners by building high-quality, well-tested, scalable and reliable business applications.
  • Design, build and maintain streaming and batch data pipelines that can scale.
  • Architect, develop and maintain our modern lakehouse platform using AWS-native infrastructure
  • Design complex data models to deliver insights and enable self-service
  • Take ownership of scaling, performance, security, and reliability of our data infrastructure
  • Hire, guide and mentor junior engineers
  • Work in an agile development environment and participate in code reviews
  • Collaborate with remote development teams and cross-functional teams

We are looking for:

  • 3+ years of experience with designing, developing and maintaining data engineering & BI solutions.
  • Experience with Data Modeling for Big Data Solutions.
  • Experience with Spark, Spark Structured Streaming (Scala Spark)
  • Experience with database technologies like Redshift or Trino
  • Experience with BI Solutions like Looker, Power BI, Amazon Quicksight, Tableau etc is a big plus
  • Experience with ETL Design & Orchestration using platforms like Apache Airflow, MageAI etc is a big plus
  • Experience querying massive datasets using Languages like SQL, Hive, Spark, Trino
  • Experience with performance tuning complex data warehouses and queries.
  • Able to solve problems of scale, performance, security, and reliability
  • Self-driven, initiative taker with good communication skills, ability to lead and mentor junior engineers, work with cross-functional teams, drive architecture decisions
  • Knowledge of Kafka, Kafka Connect and related technologies is a huge bonus

Benefits & Perks:

We have a full package of competitive benefits and perks which include:

  • One-time home office set up stipend
  • Monthly Remote Work Enablement Stipend
  • Professional Development Reimbursement
  • Wellbeing Benefits (Headspace, etc)
  • Flexible paid time off, paid leave for new parents, and flexible work hours
  • Insurance for all employees (term life, personal accident, medical) along with medical insurance for their dependents
  • Employee stock options

About HackerRank:

HackerRank is a Y Combinator alumnus backed by tier-one Silicon Valley VCs with total funding of over $100 million. The HackerRank Developer Skills Platform is the standard for assessing developer skills for 2,500+ companies across industries and 23M+ developers worldwide. Companies like LinkedIn, Stripe, and Peloton rely on HackerRank to objectively evaluate skills against millions of developers at every hiring process, allowing teams to hire the best and reduce engineering time. Developers rely on HackerRank to turn their skills into great jobs. We’re data-driven givers who take full ownership of our work and love delighting our customers!

HackerRank is a proud equal employment opportunity and affirmative action employer. We provide equal opportunity to everyone for employment based on individual performance and qualification. We never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.

Notice to prospective HackerRank job applicants:
We’ve noticed fake accounts posing as HackerRank Recruiters on Linkedin and through text. These imposters trick you into paying them for jobs/providing credit check information.
Here’s how to spot the real deal:

  • Our Recruiters use @hackerrank.com email addresses.
  • We never ask for payment or credit check information to apply, interview, or work here.

Thanks for your interest in HackerRank!

See more jobs at HackerRank

Apply for this job

+30d

Data Engineer

Out There Media | Marousi, Attica, Greece, Remote Hybrid
ML, mobile

Out There Media is hiring a Remote Data Engineer

We are offering an amazing opportunity to a talented and skilled Data Engineer to play a key role in leveraging big data analytics and technology to improve Out There Media's overall business operations.

About OTM

Out There Media (OTM) is a leading international mobile advertising company that uniquely links mobile operators with advertisers, public figures and international organizations via its proprietary, award-winning technology, Mobucks™, while also offering world class creative services.

Out There Media is trusted by some of the world’s most popular brands, such as Unilever, P&G, Disney, Starbucks, Budweiser, Netflix, Coca Cola, L’Oréal and McDonalds, international organizations such as the UN and the WHO, major mobile operators including Verizon, T-Mobile, Vodafone, Starhub, O2 Telefonica, Telcel (America Movil), MTN Group and many more, as well as Public Figures and Political Parties. The Company is headquartered in Vienna, Austria with operations across the globe.

What’s In for You

As a Data Engineer at Out There Media, you will play a critical role in building and maintaining the data infrastructure that powers our technology platform Mobucks™. You will be responsible for designing, developing, and deploying data pipelines that ingest, transform, and store massive datasets from various sources. Your work will directly impact the success of our advertising campaigns and the overall growth of the company.

Your Role and Responsibilities

  • Analyze and organize raw data
  • Develop and maintain datasets, and evaluate them for accuracy and quality in order to improve data quality and efficiency
  • Build data systems and pipelines
  • Prepare data for prescriptive and predictive modeling
  • Build up processes for data mining, data modeling and data streaming, and create efficient ML models to interpret trends and patterns
  • Develop analytical tools and programs
  • Ensure that all data systems meet high transactional requirements as well as industry best practices
  • Integrate up-and-coming data management and software engineering technologies into existing data structures
  • Create custom software components and analytics applications
  • Research new uses for existing data
  • Employ an array of technological languages and tools to connect systems together
  • Define data retention policies and install/update disaster recovery procedures
  • Always be on the edge of technology, continuously monitoring and testing the system to ensure optimized performance
  • Work with internal teams to understand business requirements and implement solutions that achieve business goals

Requirements:

  • Degree in a related field such as software / computer engineering, applied mathematics, physics, statistics or business informatics
  • 5+ years of working experience acquired in companies dealing heavily with big data (e.g. research companies)
  • Proficient use of the SQL language
  • Excellent command of Google BigQuery, Google Data Studio and Dataflow (an illustrative BigQuery example follows this list)
  • Experience with technologies / tools such as Spark, SparkSQL, Flink, GeoSpark
  • Experience with Hadoop and MapReduce processes
  • Good knowledge of Big Data querying tools, such as Hive and others
  • Experience building and using clustering and classification algorithms and methods
  • Experience with integration of data from multiple data sources
  • Excellent English skills, written and oral
  • Intellectual curiosity to find new and unusual ways to solve data management issues
  • Ability to approach data organization challenges while keeping an eye on what’s important
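
For the BigQuery item above, here is a small illustrative example; the project, dataset, and table names are invented, not OTM's:

    # Run an aggregation over a large events table and iterate the results.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # placeholder project

    sql = """
        SELECT campaign_id, COUNT(*) AS impressions
        FROM `example-project.ads.events`
        WHERE DATE(event_time) = CURRENT_DATE()
        GROUP BY campaign_id
        ORDER BY impressions DESC
        LIMIT 10
    """

    for row in client.query(sql).result():
        print(row.campaign_id, row.impressions)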

Working At OTM:

Our culture is fast-paced, entrepreneurial, and rewarding. If you are passionate about representing a company that truly believes in delighting its customers using cutting-edge, market leading digital technologies and products, you are at the right place!

  • We offer a hybrid working environment
  • A unique, diverse and multi-national company culture
  • The compensation package includes a competitive remuneration, dependent on experience and skills, and a bonus upon achievement of KPIs, in line with the company’s performance and rewards scheme.
  • Referral bonus scheme
  • Opportunity to work on cutting-edge technology and make a real impact
  • Be part of a team that is revolutionizing the mobile advertising industry

We are an equal-opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

See more jobs at Out There Media

Apply for this job

+30d

Senior Data Engineer

SmartMessage | İstanbul, TR - Remote
ML, S3, SQS, Lambda, Master’s Degree, nosql, Design, mongodb, azure, python, AWS

SmartMessage is hiring a Remote Senior Data Engineer

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We are looking to expand our people capabilities and our success in developing high-end solutions beyond existing boundaries, and to establish our brand as a global powerhouse.

We are free to work from wherever we want and go to the office whenever we like!!!

What is the role?

We are looking for a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing data pipelines, noSQL databases, and cloud-based data platforms. You will work closely with data scientists and other engineers to design and implement scalable data solutions.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and architectures.
  • Implement data lake solutions on cloud platforms.
  • Develop and manage noSQL databases (e.g., MongoDB, Cassandra).
  • Work with graph databases (e.g., Neo4j) and big data technologies (e.g., Hadoop, Spark).
  • Utilize cloud services (e.g., S3, Redshift, Lambda, Kinesis, EMR, SQS, SNS).
  • Ensure data quality, integrity, and security.
  • Collaborate with data scientists to support machine learning and AI initiatives.
  • Optimize and tune data processing workflows for performance and scalability.
  • Stay up-to-date with the latest data engineering trends and technologies.

Detailed Responsibilities and Skills:

  • Business Objectives and Requirements:
    • Engage with business IT and data science teams to understand their needs and expectations from the data lake.
    • Define real-time analytics use cases and expected outcomes.
    • Establish data governance policies for data access, usage, and quality maintenance.
  • Technology Stack:
    • Real-time data ingestion using Apache Kafka or Amazon Kinesis.
    • Scalable storage solutions such as Amazon S3, Google Cloud Storage, or Hadoop Distributed File System (HDFS).
    • Real-time data processing using Apache Spark or Apache Flink.
    • NoSQL databases like Cassandra or MongoDB, and specialized time-series databases like InfluxDB.
  • Data Ingestion and Integration:
    • Set up data producers for real-time data streams.
    • Integrate batch data processes to merge with real-time data for comprehensive analytics.
    • Implement data quality checks during ingestion.
  • Data Processing and Management:
    • Utilize Spark Streaming or Flink for real-time data processing (see the sketch after this list).
    • Enrich clickstream data by integrating with other data sources.
    • Organize data into partitions based on time or user attributes.
  • Data Lake Storage and Architecture:
    • Implement a multi-layered storage approach (raw, processed, and aggregated layers).
    • Use metadata repositories to manage data schemas and track data lineage.
  • Security and Compliance:
    • Implement fine-grained access controls.
    • Encrypt data in transit and at rest.
    • Maintain logs of data access and changes for compliance.
  • Monitoring and Maintenance:
    • Continuously monitor the performance of data pipelines.
    • Implement robust error handling and recovery mechanisms.
    • Monitor and optimize costs associated with storage and processing.
  • Continuous Improvement and Scalability:
    • Establish feedback mechanisms to improve data applications.
    • Design the architecture to scale horizontally.
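
As a hedged sketch of the ingestion-and-storage pattern described above (Kafka into a time-partitioned raw layer), a Spark Structured Streaming job could look like the following; the topic, schema, and S3 paths are assumptions for illustration only:

    # Ingest a clickstream topic from Kafka and land it, partitioned by event
    # date, in a raw layer on S3. All names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json, to_date
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("clickstream-raw-ingest").getOrCreate()

    schema = StructType([                 # assumed event schema
        StructField("user_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "clickstream")                # placeholder topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
        .withColumn("event_date", to_date(col("event_time")))  # partition key
    )

    # Append-only raw layer with checkpointing for the file sink.
    query = (
        events.writeStream.format("parquet")
        .option("path", "s3a://example-data-lake/raw/clickstream/")
        .option("checkpointLocation", "s3a://example-data-lake/checkpoints/clickstream/")
        .partitionBy("event_date")
        .start()
    )
    query.awaitTermination()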

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 5+ years of experience in data engineering or related roles.
  • Proficiency in noSQL databases (e.g., MongoDB, Cassandra) and graph databases (e.g., Neo4j).
  • Strong experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Hands-on experience with big data technologies (e.g., Hadoop, Spark).
  • Proficiency in Python and data processing frameworks.
  • Experience with Kafka, ClickHouse, Redshift.
  • Knowledge of ETL processes and data integration.
  • Familiarity with AI, ML algorithms, and neural networks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Sr. Data Engineer, Marketing Tech

ML, DevOPS, Lambda, agile, airflow, sql, Design, api, c++, docker, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer, Marketing Tech

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability.
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources.
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake.
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance.
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling.
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics.
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them.
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models.
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies.
  • Partner with DevOps to build IaC and CI/CD pipelines.
  • Support code versioning and code deployments for data Pipelines.

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages.
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed.
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets.
  • Experience working with customer behavior data. 
  • Experience with JavaScript, event tracking tools like GTM, analytics tools like Google Analytics and Amplitude, and CRM tools.
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform.
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda).
  • Experience with IaC technologies like Terraform.
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres.
  • Experience building event streaming pipelines using Kafka/Confluent Kafka.
  • Experience with modern data stack like Airflow/Astronomer, Fivetran, Tableau/Looker.
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps.
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI).
  • Thorough understanding of SDLC and Agile frameworks.
  • Project management skills and a demonstrated ability to work autonomously.

Nice to Have:

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics.
  • Experience with microservice architecture.
  • Experience architecting an enterprise-grade data platform.

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 to $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Sr. Data Engineer, Kafka

DevOPS, agile, terraform, airflow, postgres, sql, Design, api, c++, docker, kubernetes, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer, Kafka

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to the Data Lake (a consumer sketch follows this list)
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data Pipelines
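
As a minimal, hypothetical sketch of the Kafka ingestion pattern this role centers on (consume events from a topic and micro-batch them toward a data-lake landing zone), consider the following; the broker, topic, group id, and sink are assumptions, not Hims & Hers internals:

    import json
    from pathlib import Path

    from confluent_kafka import Consumer


    def write_to_lake(rows):
        # Stand-in sink: a real pipeline would write Parquet/Delta to the lake.
        with Path("landing.jsonl").open("a") as f:
            for row in rows:
                f.write(json.dumps(row) + "\n")


    consumer = Consumer({
        "bootstrap.servers": "broker:9092",  # placeholder broker
        "group.id": "datalake-ingest",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["order-events"])     # placeholder topic

    batch = []
    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            batch.append(json.loads(msg.value()))
            if len(batch) >= 1000:           # flush on a micro-batch boundary
                write_to_lake(batch)
                batch.clear()
    finally:
        consumer.close()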

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience with Databricks platform
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 to $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job


+30d

Junior/Mid Data Analytics Engineer

EXUS | Bucharest, Romania, Remote

EXUS is hiring a Remote Junior/Mid Data Analytics Engineer

EXUS is an enterprise software company, founded in 1989 with the vision to simplify risk management software. EXUS launched its Financial Suite (EFS) in 2003 to support financial entities worldwide in improving their results. Today, EXUS Financial Suite (EFS) is trusted by risk professionals in more than 32 countries worldwide (MENA, EU, SEA). We introduce simplicity and intelligence into their business processes through technology, improving their collections performance.

Our people constitute the source of inspiration that drives us forward and helps us fulfill our purpose of being role models for a better world.
This is your chance to be part of a highly motivated, diverse, and multidisciplinary team, which embraces breakthrough thinking and technology to create software that serves people.

Our shared Values:

  • We are transparent and direct
  • We are positive and fun, never cynical or sarcastic
  • We are eager to learn and explore
  • We put the greater good first
  • We are frugal and we do not waste resources
  • We are fanatically disciplined, we deliver on our promises

We are EXUS! Are you?

Join our dynamic Data Analytics Team as we expand our capabilities into data lakehouse architecture. We are seeking a Junior/Mid Data Analytics Engineer who is enthusiastic about creating compelling data visualizations, communicating them effectively to customers, conducting training sessions, and gaining experience in managing ETL processes for big data.

Key Responsibilities:

  • Develop and maintain reports and dashboards using leading visualization tools, and craft advanced SQL queries for additional report generation.
  • Deliver training sessions on our Analytic Solution and effectively communicate findings and insights to both technical and non-technical customer audiences.
  • Collaborate with business stakeholders to gather and analyze requirements.
  • Debug issues in the front-end analytic tool, investigate underlying causes, and resolve these issues.
  • Monitor and maintain ETL processes as part of our transition to a data lakehouse architecture.
  • Proactively investigate and implement new data analytics technologies and methods.

Required Skills and Qualifications:

  • A BSc or MSc degree in Computer Science, Engineering, or a related field.
  • 1-5 years of experience with data visualization tools and techniques. Knowledge of MicroStrategy and Apache Superset is a plus.
  • 1-5 years of experience with Data Warehouses, Big Data, and/or Cloud technologies. Exposure to these areas in academic projects, internships, or entry-level roles is also acceptable.
  • Familiarity with PL/SQL and practical experience with SQL for data manipulation and analysis. Hands-on experience through academic coursework, personal projects, or job experience is valued.
  • Familiarity with data lakehouse architecture.
  • Excellent analytical skills to understand business needs and translate them into data models.
  • Organizational skills with the ability to document work clearly and communicate it professionally.
  • Ability to independently investigate new technologies and solutions.
  • Strong communication skills, capable of conducting presentations and engaging effectively with customers in English.
  • Demonstrated ability to work collaboratively in a team environment.

Benefits:

  • Competitive salary
  • Friendly, pleasant, and creative working environment
  • Remote Working
  • Development Opportunities
  • Private Health Insurance Allowance

Privacy Notice for Job Applications: https://www.exus.co.uk/en/careers/privacy-notice-f...

See more jobs at EXUS

Apply for this job

+30d

Staff Data Platform Engineer

Celonis - Remote, Germany
DevOps, Bachelor's degree, SQL, Design, PostgreSQL

Celonis is hiring a Remote Staff Data Platform Engineer

We're Celonis, the global leader in Process Mining technology and one of the world's fastest-growing SaaS firms. We believe there is a massive opportunity to unlock productivity by placing data and intelligence at the core of business processes - and for that, we need you to join us.

The Team:

The Cloud Platform and Infrastructure Engineering team provides the foundation to run our core product in the cloud. We operate our services using different cloud providers, hosted in many regions across the world, running 24/7. The scale of our environment provides us with many interesting engineering challenges that we love to solve and optimize for future growth.

The Role:

You’ll be part of a distributed team across Europe and the US and responsible for the architecture of our growing data services infrastructure, which powers our cloud products. You will collaborate closely with the development teams as well as our cloud platform teams to define and drive the future architecture. As we are growing rapidly, this offers unique challenges and a great experience for even the most seasoned engineers. While you have a good understanding of data persistence and messaging technologies, your core expertise is in the realm of RDBMS systems, specifically PostgreSQL. 

You will provide advice to the teams on the optimal usage and on pitfalls to avoid, and help junior engineers to level up their expertise. While the role is strategic in nature, you will also have the ability to step in and support hands-on where the team needs you.

The work you’ll do:

  • Design and manage the PostgreSQL infrastructure for a large scale cloud environment.
  • Build out our observability capabilities to provide DevOps teams insight into their consumption details. 
  • Collaborate with development teams and stakeholders.
  • Drive architectural initiatives that evolve our environments.
  • Provide advice and guidance to data service consumer teams and junior engineers.

The qualifications you need:

  • Bachelor's degree in information systems, information technology, computer science, or similar.
  • 6+ years of experience managing RDBMS databases at large scale.
  • 2+ years of experience with PostgreSQL.
  • In-depth knowledge of Structured Query Language (SQL) and SQL statement performance tuning (see the sketch after this list).
  • Sound knowledge of best practices in database administration, data modeling, and data security.
  • Strong organizational skills and attention to detail.
  • Exceptional problem-solving and critical thinking skills.
  • Excellent collaboration and communication skills.
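
As a concrete illustration of the SQL performance-tuning work mentioned above, here is a minimal sketch using psycopg2 against PostgreSQL; the connection details and the orders table are hypothetical assumptions, not Celonis infrastructure.

    import psycopg2

    # Hypothetical connection; in practice credentials come from a secret store.
    conn = psycopg2.connect(host="localhost", dbname="appdb", user="app", password="app")

    with conn, conn.cursor() as cur:
        # EXPLAIN ANALYZE executes the statement and reports the actual plan,
        # the usual starting point for tuning.
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)

        # If the plan shows a sequential scan on a selective predicate,
        # adding an index is a common first fix.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")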

 

What Celonis can offer you:

  • The unique opportunity to work with industry-leading process mining technology
  • Investment in your personal growth and skill development (clear career paths, internal mobility opportunities, L&D platform, mentorships, and more)
  • Great compensation and benefits packages (equity (restricted stock units), life insurance, time off, generous leave for new parents from day one, and more)
  • Physical and mental well-being support (subsidized gym membership, access to counseling, virtual events on well-being topics, and more)
  • A global and growing team of Celonauts from diverse backgrounds to learn from and work with
  • An open-minded culture with innovative, autonomous teams
  • Business Resource Groups to help you feel connected, valued and seen (Black@Celonis, Women@Celonis, Parents@Celonis, Pride@Celonis, Resilience@Celonis, and more)
  • A clear set of company values that guide everything we do: Live for Customer Value, The Best Team Wins, We Own It, and Earth Is Our Future

About Us

Since 2011, Celonis has helped thousands of the world's largest and most valued companies deliver immediate cash impact, radically improve customer experience and reduce carbon emissions. Its Process Intelligence platform uses industry-leading process mining technology and AI to present companies with a living digital twin of their end-to-end processes. For the first time, everyone in an organisation has a common language about how the business works, visibility into where value is hidden and the ability to capture it. Celonis is headquartered in Munich (Germany) and New York (USA) and has more than 20 offices worldwide.

Get familiar with the Celonis Process Intelligence Platform by watching this video.

Join us as we make processes work for people, companies and the planet.

 

Celonis is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Different makes us better.

Accessibility and Candidate Notices

See more jobs at Celonis

Apply for this job

+30d

Sr Data Engineer

Verisk - Jersey City, NJ, Remote
Lambda, SQL, Design, Linux, Python, AWS

Verisk is hiring a Remote Sr Data Engineer

Job Description

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The new hire will be responsible for expanding and optimizing our data pipeline architecture. The ideal candidate is an experienced data pipeline builder and data wrangler with strong experience in handling data at scale. The Data Engineer will support our software developers, data analysts, and data scientists on various data initiatives.

This is a remote role that can be done anywhere in the continental US; work follows Eastern time zone hours.

Why this role

This is a highly visible role within the enterprise data lake team. Working with our Data group and business analysts, you will lead the creation of the data architecture that produces the data assets powering our data platform. This role requires working closely with business leaders, architects, engineers, data scientists, and a wide range of stakeholders throughout the organization to build and execute our strategic data architecture vision.

Job Duties

  • Extensive understanding of SQL queries, with the ability to fine-tune them using RDBMS performance features such as indexes, partitioning, EXPLAIN plans, and cost optimizers.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and the AWS technology stack.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with data scientists and industry leaders to understand data needs and design appropriate data models.
  • Participate in the design and development of the AWS-based data platform and data analytics.

Qualifications

Skills Needed

  • Design and implement data ETL frameworks for secured Data Lake, creating and maintaining an optimal pipeline architecture.
  • Examine complex data to optimize the efficiency and quality of the data being collected, resolve data quality problems, and collaborate with database developers to improve systems and database designs.
  • Hands-on experience building data applications using AWS Glue, Lake Formation, Athena, AWS Batch, AWS Lambda, Python, and Linux shell & batch scripting.
  • Hands-on experience with AWS Database services (Redshift, RDS, DynamoDB, Aurora, etc.).
  • Experience writing advanced SQL scripts involving self-joins, window functions, correlated subqueries, CTEs, etc. (see the sketch after this list).
  • Strong understanding of and experience with data management fundamentals, including concepts such as data dictionaries, data models, validation, and reporting.
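
To ground the advanced-SQL bullet, here is a minimal, runnable sketch combining a CTE with a window function, using Python's built-in sqlite3 module (window functions require SQLite 3.25+); the claims schema is a hypothetical stand-in, not Verisk data.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE claims (claim_id INTEGER, region TEXT, amount REAL);
        INSERT INTO claims VALUES (1, 'east', 100), (2, 'east', 300), (3, 'west', 250);
    """)

    # The CTE ranks claims within each region; the outer query keeps the largest.
    sql = """
        WITH ranked AS (
            SELECT region, claim_id, amount,
                   RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
            FROM claims
        )
        SELECT region, claim_id, amount
        FROM ranked
        WHERE rnk = 1;
    """
    for row in conn.execute(sql):
        print(row)  # e.g. ('east', 2, 300.0) and ('west', 3, 250.0)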

Education and Training

  • 10 years of full-time software engineering experience preferred, with at least 4 years in an AWS environment focused on application development.
  • Bachelor’s degree or foreign equivalent degree in Computer Science, Software Engineering, or related field
  • US citizenship required


See more jobs at Verisk

Apply for this job

+30d

Senior Data Engineer

Synack - Remote in the US
C++

Synack is hiring a Remote Senior Data Engineer


See more jobs at Synack

Apply for this job

+30d

Data Engineer

Devoteam - Tunis, Tunisia, Remote
Airflow, SQL, Scrum

Devoteam is hiring a Remote Data Engineer

Job Description

Within the "Data Platform" division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT (see the sketch after this list).
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize data processing and ELT performance using Airflow, DBT, and BigQuery.
  • Implement data quality processes to ensure data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
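
As a rough sketch of the orchestration described above, here is a minimal Airflow 2.x DAG that runs a DBT build followed by a simple data-quality gate; the DAG id, schedule, and dbt selector are hypothetical placeholders, not Devoteam's actual pipeline.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def check_row_counts(**context):
        # Placeholder for a data-quality check, e.g. comparing BigQuery row
        # counts against the previous load before publishing to Looker.
        pass

    with DAG(
        dag_id="daily_sales_elt",            # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --select sales_models",  # hypothetical selector
        )
        quality_check = PythonOperator(
            task_id="quality_check",
            python_callable=check_row_counts,
        )
        dbt_run >> quality_check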

 

    Qualifications

    • An engineering degree (Bac+5) or equivalent university degree with a specialization in computer science.
    • At least 4 years of experience in data engineering, including significant experience in a GCP cloud environment.
    • Advanced proficiency in SQL for data optimization and processing.
    • Google Professional Data Engineer certification is a plus.
    • Excellent written and verbal communication skills (high-quality deliverables and reporting).

    See more jobs at Devoteam

    Apply for this job

    +30d

    Data Engineer (Australia)

    DemystData - Australia, Remote
    Sales, S3, EC2, Lambda, remote-first, Design, Python, AWS

    DemystData is hiring a Remote Data Engineer (Australia)

    Our Solution

    Demyst unlocks innovation with the power of data. Our platform helps enterprises solve strategic use cases, including lending, risk, digital origination, and automation, by harnessing the power and agility of the external data universe. We are known for harnessing rich, relevant, integrated, linked data to deliver real value in production. We operate as a distributed team across the globe and serve over 50 clients as a strategic external data partner. Frictionless external data adoption within digitally advancing enterprises is unlocking market growth and allowing solutions to finally get out of the lab. If you actually like to get things done and deployed, Demyst is your new home.

    The Opportunity

    As a Data Engineer at Demyst, you will be powering the latest technology at leading financial institutions around the world. You may be solving a fintech's fraud problems or crafting a Fortune 500 insurer's marketing campaigns. Using innovative data sets and Demyst's software architecture, you will use your expertise and creativity to build best-in-class solutions. You will see projects through from start to finish, assisting in every stage from testing to integration.

    To meet these challenges, you will access data using Demyst's proprietary Python library via our JupyterHub servers, and utilize our cloud infrastructure built on AWS, including Athena, Lambda, EMR, EC2, S3, and other products. For analysis, you will leverage AutoML tools, and for enterprise data delivery, you'll work with our clients' data warehouse solutions like Snowflake, Databricks, and more.

    Demyst is a remote-first company. The candidate must be based in Australia.

    Responsibilities

    • Collaborate with internal project managers, sales directors, account managers, and clients’ stakeholders to identify requirements and build external data-driven solutions
    • Perform data appends, extracts, and analyses to deliver curated datasets and insights to clients to help achieve their business objectives
    • Understand and keep current with external data landscapes such as consumer, business, and property data.
    • Engage in projects involving entity detection, record linking, and data modelling
    • Design scalable code blocks using Demyst’s APIs/SDKs that can be leveraged across production projects
    • Govern releases, change management and maintenance of production solutions in close coordination with clients' IT teams

    Requirements

    • Bachelor's in Computer Science, Data Science, Engineering or similar technical discipline (or commensurate work experience); Master's degree preferred
    • 1-3 years of Python programming (with Pandas experience; see the sketch after this list)
    • Experience with CSV, JSON, parquet, and other common formats
    • Data cleaning and structuring (ETL experience)
    • Knowledge of API (REST and SOAP), HTTP protocols, API Security and best practices
    • Experience with SQL, Git, and Airflow
    • Strong written and oral communication skills
    • Excellent attention to detail
    • Ability to learn and adapt quickly
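
    For a feel of the Pandas-based ETL work referenced above, here is a minimal sketch that cleans a raw CSV and writes a columnar extract; the file and column names are hypothetical, and pandas plus pyarrow are assumed to be installed.

        import pandas as pd

        # Extract: a hypothetical raw delivery from an external data source.
        df = pd.read_csv("input_records.csv")

        # Transform: normalize column names, drop exact duplicates, and parse
        # dates so downstream record linking behaves predictably.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        df = df.drop_duplicates()
        df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

        # Load: write a curated extract in a columnar format for delivery.
        df.to_parquet("curated_records.parquet", index=False)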

    Benefits

    • Distributed working team and culture
    • Generous benefits and competitive compensation
    • Collaborative, inclusive work culture: all-company offsites and local get-togethers in Bangalore
    • Annual learning allowance
    • Office setup allowance
    • Generous paid parental leave
    • Be a part of the exploding external data ecosystem
    • Join an established fast growth data technology business
    • Work with the largest consumer and business external data market in an emerging industry that is fueling AI globally
    • Outsized impact in a small but rapidly growing team offering real autonomy and responsibility for client outcomes
    • Stretch yourself to help define and support something entirely new that will impact billions
    • Work within a strong, tight-knit team of subject matter experts
    • Small enough where you matter, big enough to have the support to deliver what you promise
    • International mobility available for top performer after two years of service

    Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.

    See more jobs at DemystData

    Apply for this job

    +30d

    Data Engineer - AWS

    Tiger Analytics - Jersey City, New Jersey, United States, Remote
    S3, Lambda, Airflow, SQL, Design, AWS

    Tiger Analytics is hiring a Remote Data Engineer - AWS

    Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

    As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Apache Airflow.

    Key Responsibilities:

    • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
    • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements (see the sketch after this list).
    • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
    • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
    • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
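
    As a sketch of the Spark-based transformation work described above (not Tiger Analytics' actual pipeline), here is a minimal PySpark job that reads raw CSV from S3, cleans it, and writes partitioned Parquet; the bucket names and columns are hypothetical.

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

        # Read a hypothetical raw extract; the header row supplies column names.
        raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

        # Deduplicate, type the timestamp, derive a partition column,
        # and keep only positive order totals.
        cleaned = (
            raw.dropDuplicates(["order_id"])
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .withColumn("order_date", F.to_date("order_ts"))
               .filter(F.col("order_total").cast("double") > 0)
        )

        # Write curated, partitioned Parquet for downstream Redshift or Athena.
        cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
            "s3://example-curated-bucket/orders/"
        )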

    Requirements:

    • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
    • Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
    • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
    • Strong experience with Databricks and Apache Spark for data processing and analytics.
    • Hands-on experience with Apache Airflow for orchestrating and scheduling data pipelines.
    • Solid understanding of data modeling, database design principles, and SQL.
    • Experience with version control systems (e.g., Git) and CI/CD pipelines.
    • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
    • Strong problem-solving skills and attention to detail.

    This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

    See more jobs at Tiger Analytics

    Apply for this job