Data Engineer Remote Jobs

113 Results

Posted 30+ days ago

Data Engineer - Snowflake

Tiger Analytics - Chicago, Illinois, United States, Remote Hybrid
Tags: Design

Tiger Analytics is hiring a Remote Data Engineer - Snowflake

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

The Data Engineer will be responsible for architecting, designing, and implementing advanced analytics capabilities. The right candidate will have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply these skills to generate insights that help solve business challenges. We are looking for someone who can bring their vision to the table and implement positive change in taking the company's data analytics to the next level.

Key Responsibilities:

Data Integration:

Implement and maintain data synchronization between on-premises Oracle databases and Snowflake using Kafka and CDC tools.
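By way of illustration only (not Tiger Analytics' actual stack), a minimal sketch of this kind of Kafka-based CDC relay, assuming the kafka-python and snowflake-connector-python packages; the topic, table, and credentials below are invented:

    # Hypothetical sketch: relay CDC events for Oracle tables from a Kafka
    # topic into a Snowflake staging table. All names below are invented.
    import json

    from kafka import KafkaConsumer        # pip install kafka-python
    import snowflake.connector             # pip install snowflake-connector-python

    consumer = KafkaConsumer(
        "oracle.orders.cdc",               # invented CDC topic
        bootstrap_servers=["kafka:9092"],
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    conn = snowflake.connector.connect(
        user="ETL_USER", password="...", account="my_account",
        warehouse="LOAD_WH", database="RAW", schema="STAGING",
    )
    cur = conn.cursor()

    # In production you would batch inserts (or stage files) rather than
    # insert row by row; this only shows the shape of the flow.
    for message in consumer:
        event = message.value              # one change record from the CDC tool
        cur.execute(
            "INSERT INTO orders_stage (op, payload) SELECT %s, PARSE_JSON(%s)",
            (event.get("op"), json.dumps(event.get("after"))),
        )
        conn.commit()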

Support Data Modeling:

Assist in developing and optimizing the data model for Snowflake, ensuring it supports our analytics and reporting requirements.

Data Pipeline Development:

Design, build, and manage data pipelines for the ETL process, using Airflow for orchestration and Python for scripting, to transform raw data into a format suitable for our new Snowflake data model.
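As a rough sketch of the orchestration pattern this describes (Airflow scheduling a Python-scripted ETL), assuming Airflow 2.x; the DAG id and task bodies are placeholders:

    # Hypothetical sketch of a daily ETL DAG; the callables stand in for real
    # extract/transform/load logic against the source system and Snowflake.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        ...  # pull raw data from the source system

    def transform():
        ...  # reshape raw data to fit the Snowflake data model

    def load():
        ...  # write transformed data into Snowflake

    with DAG(
        dag_id="raw_to_snowflake_daily",   # invented name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> transform_task >> load_task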

Reporting Support:

Collaborate with data architect to ensure the data within Snowflake is structured in a way that supports efficient and insightful reporting.

Technical Documentation:

Create and maintain comprehensive documentation of data pipelines, ETL processes, and data models to ensure best practices are followed and knowledge is shared within the team.

Tools and Skillsets:

Data Engineering: Proven track record of developing and maintaining data pipelines and data integration projects.

Databases: Strong experience with Oracle, Snowflake, and Databricks.

Data Integration Tools: Proficiency in using Kafka and CDC tools for data ingestion and synchronization.

Orchestration Tools: Expertise in Airflow for managing data pipeline workflows.

Programming: Advanced proficiency in Python and SQL for data processing tasks.

Data Modeling: Understanding of data modeling principles and experience with data warehousing solutions.

Cloud Platforms: Knowledge of cloud infrastructure and services, preferably Azure, as it relates to Snowflake and Databricks integration.

Collaboration Tools: Experience with version control systems (like Git) and collaboration platforms.

CI/CD Implementation: Experience using CI/CD tools to automate the deployment of data pipelines and infrastructure changes, ensuring high-quality data processing with minimal manual intervention.

Communication: Excellent communication and teamwork skills, with a detail-oriented mindset. Strong analytical skills, with the ability to work independently and solve complex problems.

Requirements

  • 8+ years of overall industry experience, specifically in data engineering
  • 5+ years of experience building and deploying large-scale data processing pipelines in a production environment
  • Strong experience in Python, SQL, and PySpark (a small PySpark sketch follows this list)
  • Experience creating and optimizing complex data processing and data transformation pipelines using Python
  • Experience with the Snowflake cloud data warehouse and the dbt tool
  • Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases
  • Understanding of data warehouse (DWH) systems and of migrations from a DWH to data lakes/Snowflake
  • Understanding of ELT and ETL patterns and when to use each; understanding of data models and of transforming data into those models
  • Strong analytic skills related to working with unstructured datasets
  • Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management
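For context, a small sketch of the kind of PySpark transformation work these requirements describe; the paths, columns, and table shapes are invented:

    # Hypothetical PySpark sketch: clean raw orders and compute per-customer
    # daily totals. Paths and column names are invented.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_transform").getOrCreate()

    orders = spark.read.parquet("s3://raw-bucket/orders/")

    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETE")
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("customer_id", "order_date")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"),
        )
    )

    daily_totals.write.mode("overwrite").parquet("s3://curated-bucket/daily_totals/")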

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

Posted 30+ days ago

Senior Data Engineer

Alt - US Remote
Tags: airflow, postgres, Design, c++, python, AWS

Alt is hiring a Remote Senior Data Engineer

At Alt, we're on a mission to unlock the value of alternative assets, and we're looking for talented people who share our vision. Our platform enables users to exchange, invest in, value, securely store, and authenticate their collectible cards. And we envision a world where anything is an investable asset.

To date, we’ve raised over $100 million from thought leaders at the intersection of culture, community, and capital. Some of our investors include Alexis Ohanian’s fund Seven Seven Six, the founders of Stripe, Coinbase co-founder Fred Ehrsam, BlackRock co-founder Sue Wagner, the co-founders of AngelList, First Round Capital, and BoxGroup. We’re also backed by professional athletes including Tom Brady, Candace Parker, Giannis Antetokounmpo, Alex Morgan, Kevin Durant, and Marlon Humphrey.

Alt is a dedicated equal opportunity employer committed to creating a diverse workforce. We celebrate our differences and strive to create an inclusive environment for all. We are focused on fostering a culture of empowerment which starts with providing our employees with the resources needed to reach their full potential.

What we are looking for:

We are seeking a Senior Data Engineer who is eager to make a significant impact. In this role, you'll get the opportunity to leverage your technical expertise and problem-solving skills to solve some of the hardest data problems in the hobby. Your primary focus will be on enhancing and optimizing our pricing engine to support strategic business goals. Our ideal candidate is passionate about trading cards, has a strong sense of ownership, and enjoys challenges. At Alt, data is core to everything we do and is a differentiator for our customers. The team's scope covers data pipeline development, search infrastructure, web scraping, detection algorithms, internal tooling, and data quality. We give our engineers a lot of individual responsibility and autonomy, so your ability to make good trade-offs and exercise good judgment is essential.

The impact you will make:

  • Partner with engineers and cross-functional stakeholders to contribute to all phases of algorithm development, including ideation, prototyping, design, and production
  • Build, iterate, productionize, and own Alt's valuation models
  • Leverage background in pricing strategies and models to develop innovative pricing solutions
  • Design and implement scalable, reliable, and maintainable machine learning systems
  • Partner with product to understand customer requirements and prioritize model features

What you bring to the table:

  • Experience: 5+ years of experience in software development, with a proven track record of developing and deploying models in production. Experience with pricing models preferred.
  • Technical Skills: Proficiency in programming languages and tools such as Python, AWS, Postgres, Airflow, Datadog, and JavaScript.
  • Problem-Solving: A knack for solving tough problems and a drive to take ownership of your work.
  • Communication: Effective communication skills with the ability to ship solutions quickly.
  • Product Focus: Excellent product instincts, with a user-first approach when designing technical solutions.
  • Team Player: A collaborative mindset that helps elevate the performance of those around you.
  • Industry Knowledge: Knowledge of the sports/trading card industry is a plus.

What you will get from us:

  • Ground floor opportunity as an early member of the Alt team; you’ll directly shape the direction of our company. The opportunities for growth are truly limitless.
  • An inclusive company culture that is being built intentionally to foster an environment that supports and engages talent in their current and future endeavors.
  • $100/month work-from-home stipend
  • $200/month wellness stipend
  • WeWork office stipend
  • 401(k) retirement benefits
  • Flexible vacation policy
  • Generous paid parental leave
  • Competitive healthcare benefits, including HSA, for you and your dependent(s)

Alt's compensation package includes a competitive base salary benchmarked against real-time market data, as well as equity for all full-time roles. We want all full-time employees to be invested in Alt and to be able to take advantage of that investment, so our equity grants include a 10-year exercise window. The base salary range for this role is: $194,000 - $210,000. Offers may vary from the amount listed based on geography, candidate experience and expertise, and other factors.

See more jobs at Alt

Apply for this job

FanDuel is hiring a Remote Senior Data Platform Engineer

See more jobs at FanDuel

Apply for this job

Posted 30+ days ago

Lead Data Engineer

Devoteam - Tunis, Tunisia, Remote
Tags: airflow, sql, scrum

Devoteam is hiring a Remote Lead Data Engineer

Job Description

Within the "Data Platform" division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT (a minimal BigQuery sketch follows this list).
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of data processing and ELT workflows using Airflow, DBT, and BigQuery.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
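A minimal sketch of the kind of GCP pipeline step listed above, assuming the google-cloud-bigquery client library and application-default credentials; the project, dataset, and table names are invented:

    # Hypothetical sketch: run a transformation in BigQuery and write the
    # result to a reporting table. All names are invented.
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()

    job_config = bigquery.QueryJobConfig(
        destination="my-project.reporting.daily_sales",
        write_disposition="WRITE_TRUNCATE",
    )

    sql = """
        SELECT order_date, SUM(amount) AS total_amount
        FROM `my-project.raw.orders`
        GROUP BY order_date
    """

    client.query(sql, job_config=job_config).result()  # block until done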

 

Qualifications

  • Master's-level degree (Bac+5) from an engineering school, or an equivalent university degree, with a specialization in computer science.
  • At least 3 years of experience in data engineering, including significant experience in a GCP cloud environment.
  • Advanced command of SQL for data processing and optimization.
  • Google Professional Data Engineer certification is a plus.
  • Very strong written and oral communication skills (high-quality deliverables and reporting).

See more jobs at Devoteam

Apply for this job

Posted 30+ days ago

Sr. Data Engineer

Talent Connection - Pleasanton, CA, Remote
Tags: Design, java

Talent Connection is hiring a Remote Sr. Data Engineer

Job Description

Position Overview: As a Sr. Data Engineer, you will be pivotal in developing and maintaining data solutions that enhance our client's reporting and analytics capabilities. You will leverage a variety of data technologies to construct scalable, efficient data pipelines that support critical business insights and decision-making processes.

Key Responsibilities:

  • Architect and design data pipelines that meet reporting and analytics requirements.
  • Develop robust and scalable data pipelines to integrate data from diverse sources into a cloud-based data platform.
  • Convert business needs into architecturally sound data solutions.
  • Lead data modernization projects, providing technical guidance and setting design standards.
  • Optimize data performance and ensure prompt resolution of issues.
  • Collaborate with cross-functional teams to create efficient data flows.

Qualifications

Required Skills and Experience:

  • 7+ years of experience in data engineering and pipeline development.
  • 5+ years of experience in data modeling for data warehousing and analytics.
  • Proficiency with modern data architecture and cloud data platforms, including Snowflake and Azure.
  • Bachelor's Degree in Computer Science, Information Systems, Engineering, Business Analytics, or a related field.
  • Strong skills in programming languages such as Java and Python.
  • Experience with data orchestration tools and DevOps/DataOps practices.
  • Excellent communication skills, capable of simplifying complex information.

Preferred Skills:

  • Experience in the retail industry.
  • Familiarity with reporting tools such as MicroStrategy and Power BI.
  • Experience with tools like StreamSets and dbt.

See more jobs at Talent Connection

Apply for this job

Posted 30+ days ago

Data Engineer

O'Reilly Media - Remote, United States
Tags: Django, S3, agile, RabbitMQ, slack, scrum, qa, docker, postgresql, jenkins, python

O'Reilly Media is hiring a Remote Data Engineer

Description

About O'Reilly Media

O'Reilly's mission is to change the world by sharing the knowledge of innovators. For over 40 years, we've inspired companies and individuals to do new things—and do things better—by providing them with the skills and understanding that are necessary for success.

At the heart of our business is a unique network of experts and innovators who share their knowledge through us. O'Reilly Learning offers exclusive live training, interactive learning, a certification experience, books, videos, and more, making it easier for our customers to develop the expertise they need to get ahead. And our books have been heralded for decades as the definitive place to learn about the technologies that are shaping the future. Everything we do is to help professionals from a variety of fields learn best practices and discover emerging trends that will shape the future of the tech industry.

Our customers are hungry to build the innovations that propel the world forward. And we help you do just that.

Diversity

At O'Reilly, we believe that true innovation depends on hearing from, and listening to, people with a variety of perspectives. We want our whole organization to recognize, include, and encourage people of all races, ethnicities, genders, ages, abilities, religions, sexual orientations, and professional roles.

About the Team

Our data platform team is dedicated to establishing a robust data infrastructure, facilitating easy access to quality, reliable, and timely data for reporting, analytics, and actionable insights. We focus on designing and building a sustainable and scalable data architecture, treating data as a core corporate asset. Our efforts also include process improvement, governance enhancement, and addressing application, functional, and reporting needs. We value teammates who are helpful, respectful, communicate openly, and prioritize the best interests of our users. Operating across various cities and time zones in the US, our team fosters collaboration to deliver work that brings pride and fulfillment.

About the Role

We are looking for a thoughtful and experienced data engineer to help grow a suite of systems and tools written primarily in Python. The ideal candidate will have a deep understanding of modern data engineering concepts and will have shipped or supported code and infrastructure with a user base in the millions and datasets with billions of records. The candidate will be routinely implementing features, fixing bugs, performing maintenance, consulting with product managers, and troubleshooting problems. Changes you make will be accompanied by tests to confirm desired behavior. Code reviews, in the form of pull requests reviewed by peers, are a regular and expected part of the job as well.

Salary Range: $110,000 - $138,000

What You'll Do
  • Develop data pipelines or features related to data ingestion, transformation, or storage using Python and relational databases (e.g., PostgreSQL) or cloud-based data warehousing (e.g., BigQuery); a short sketch follows this list
  • Collaborate with product managers to define clear requirements, deliverables, and milestones
  • Team up with other groups within O'Reilly (e.g. data science or machine learning) to leverage experience and consult on data engineering best practices
  • Review a pull request from a coworker and pair on a tricky problem
  • Provide a consistent and reliable estimate to assess risk for a project manager
  • Learn about a new technology or paper and present it to the team
  • Identify opportunities to improve our pipelines through research and proof-of-concepts
  • Help QA and troubleshoot a pesky production problem
  • Participate in agile processes and scrum ceremonies
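By way of illustration only (not O'Reilly's actual code), a minimal Python-plus-PostgreSQL rollup step of the kind the first bullet describes; the connection string and table names are invented:

    # Hypothetical sketch: aggregate raw events into a daily rollup table in
    # PostgreSQL. Connection string and table names are invented.
    import psycopg2  # pip install psycopg2-binary

    conn = psycopg2.connect("dbname=analytics user=etl password=... host=db")
    with conn, conn.cursor() as cur:  # the outer `with` commits on success
        cur.execute(
            """
            INSERT INTO daily_usage (day, users, events)
            SELECT date_trunc('day', occurred_at)::date,
                   COUNT(DISTINCT user_id),
                   COUNT(*)
            FROM raw_events
            WHERE occurred_at >= current_date - INTERVAL '1 day'
            GROUP BY 1
            ON CONFLICT (day) DO UPDATE
                SET users = EXCLUDED.users, events = EXCLUDED.events
            """
        )
    conn.close()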
What You'll Have

Required:
  • 3+ years of professional data engineering experience (equivalent education and/or experience may be considered)
  • 2+ years of experience working in an agile environment
  • Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform services and products
  • Experience in building data pipelines using Docker
  • Experience in building data pipelines using tools such as Talend and Fivetran
  • Proficiency in large-scale data platforms and data processing systems such as S3, GCS, Google BigQuery, and Amazon Redshift
  • Excellent Python and PostgreSQL development and debugging skills
  • Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g. RabbitMQ and Pub/Sub); a short sketch follows this list
  • Experience with deployment tools such as Jenkins to build automated CI/CD pipelines
  • Strong drive to experiment, learn, and improve your skills
  • Respect for the craft—you write self-documenting code with modern techniques
  • Great written communication skills—we do a lot of work asynchronously in Slack and Google Docs
  • Empathy for our users—a willingness to spend time understanding their needs and difficulties is central to the team
  • Desire to be part of a compact, fun, and hard-working team
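And purely as a sketch of the event-driven retrieval pattern mentioned in the list above, assuming the pika RabbitMQ client; the queue name and host are invented:

    # Hypothetical sketch: consume events from a RabbitMQ queue and buffer
    # them for downstream aggregation. Queue and host names are invented.
    import json

    import pika  # pip install pika

    buffer = []

    def on_message(channel, method, properties, body):
        buffer.append(json.loads(body))                  # aggregate later
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="user_events", durable=True)
    channel.basic_consume(queue="user_events", on_message_callback=on_message)
    channel.start_consuming()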
     
Preferred:
  • Experience with Google Cloud Dataflow/Apache Beam
  • Experience with Django RESTful endpoints
  • Experience working in a distributed team
  • Knowledge and experience with machine learning pipelines
  • Contributions to open source projects
  • Knack for benchmarking and optimization

     

At this time, O'Reilly Media Inc. is not able to provide visa sponsorship or provide any immigration support (i.e. H-1B, STEM, OPT, CPT, EAD and Permanent Residency process)

See more jobs at O'Reilly Media

Apply for this job

Posted 30+ days ago

Data Engineer

SonderMind - Denver, CO or Remote
Tags: S3, scala, airflow, sql, Design, java, c++, python, AWS

SonderMind is hiring a Remote Data Engineer

About SonderMind

At SonderMind, we know that therapy works. SonderMind provides accessible, personalized mental healthcare that produces high-quality outcomes for patients. SonderMind's individualized approach to care starts with using innovative technology to help people not just find a therapist, but find the right, in-network therapist for them, should they choose to use their insurance. From there, SonderMind's clinicians are committed to delivering best-in-class care to all patients by focusing on high-quality clinical outcomes. To enable our clinicians to thrive, SonderMind defines care expectations while providing tools such as clinical note-taking, secure telehealth capabilities, outcome measurement, messaging, and direct booking.

To follow the latest SonderMind news, get to know our clients, and learn about what it's like to work at SonderMind, you can follow us on Instagram, LinkedIn, and Twitter.

About the Role

In this role, you will be responsible for designing, building, and managing the information infrastructure systems used to collect, store, process, and distribute data. You will also be tasked with transforming data into a format that can be easily analyzed. You will work closely with data engineers on data architectures and with data scientists and business analysts to ensure they have the data necessary to complete their analyses.

    Essential Functions

    • Strategically design, construct, install, test, and maintain highly scalable data management systems
    • Develop and maintain databases, data processing procedures, and pipelines
    • Integrate new data management technologies and software engineering tools into existing structures
    • Develop processes for data mining, data modeling, and data production
    • Translate complex functional and technical requirements into detailed architecture, design, and high-performing software and applications
    • Create custom software components and analytics applications
    • Troubleshoot data-related issues and perform root cause analysis to resolve them
    • Manage overall pipeline orchestration
    • Optimize data warehouse performance

     

    What does success look like?

    Success in this role will be by the seamless and efficient operations of data infrastructure. This includes minimal downtime, accurate and timely data delivery and the successful implementation of new technologies and tools. The individual will have demonstrated their ability to collaborate effectively to define solutions with both technical and non-technical team members across data science, engineering, product and our core business functions. They will have made significant contributions to improving our data systems, whether through optimizing existing processes or developing innovative new solutions. Ultimately, their work will enable more informed and effective decision-making across the organization.

     

Who You Are

  • Bachelor's degree in Computer Science, Engineering, or a related field
  • A minimum of three years of experience as a Data Engineer or in a similar role
  • Experience with data science and analytics engineering is a plus
  • Experience with AI/ML in GenAI or data software - including vector databases - is a plus
  • Proficient with scripting and programming languages (Python, Java, Scala, etc.)
  • In-depth knowledge of SQL and other database-related technologies
  • Experience with Snowflake, DBT, BigQuery, Fivetran, Segment, etc.
  • Experience with AWS cloud services (S3, RDS, Redshift, etc.)
  • Experience with data pipeline and workflow management tools such as Airflow
  • Strong negotiation and interpersonal skills: written, verbal, analytical
  • Motivated and influential - proactive, with the ability to adhere to deadlines and work to "get the job done" in a fast-paced environment
  • Self-starter with the ability to multi-task

Our Benefits

The anticipated salary rate for this role is between $130,000 and $160,000 per year.

As a leader in redesigning behavioral health, we are walking the walk with our employee benefits. We want the experience of working at SonderMind to accelerate people's careers and enrich their lives, so we focus on meeting SonderMinders wherever they are and supporting them in all facets of their life and work.

Our benefits include:

  • A commitment to fostering flexible hybrid work
  • A generous PTO policy with a minimum of three weeks off per year
  • Free therapy coverage benefits to ensure our employees have access to the care they need (must be enrolled in our medical plans to participate)
  • Competitive Medical, Dental, and Vision coverage with plans to meet every need, including HSA ($1,100 company contribution) and FSA options
  • Employer-paid short-term, long-term disability, life & AD&D coverage for life's unexpected events. Not only that, we also cover the difference in salary for up to seven (7) weeks of short-term disability leave (after the required waiting period) should you need to use it.
  • Eight weeks of paid Parental Leave (if the parent also qualifies for STD, this benefit is in addition, which allows between 8-16 weeks of paid leave)
  • 401(k) retirement plan with a 100% match on up to 4% of base salary, vested immediately
  • Travel to Denver once a year for our annual Shift gathering
  • Fourteen (14) company holidays
  • Company shutdown between Christmas and New Year's
  • Supplemental life insurance, pet insurance coverage, commuter benefits and more!

Application Deadline

This position will be an ongoing recruitment process and will be open until filled.

Equal Opportunity
SonderMind does not discriminate in employment opportunities or practices based on race, color, creed, sex, gender, gender identity or expression, pregnancy, childbirth or related medical conditions, religion, veteran and military status, marital status, registered domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition (including genetic information or characteristics), sexual orientation, or any other characteristic protected by applicable federal, state, or local laws.

Apply for this job

Posted 30+ days ago

Data Engineer

Tiger Analytics - London, England, United Kingdom, Remote Hybrid

Tiger Analytics is hiring a Remote Data Engineer

Tiger Analytics is pioneering what AI and analytics can do to solve some of the toughest problems faced by organizations globally. We develop bespoke solutions powered by data and technology for several Fortune 100 companies. We have offices in multiple cities across the US, UK, Canada, India, and Singapore, and a substantial remote global workforce.

If you are passionate about working on business problems that can be solved using structured and unstructured data on a large scale, Tiger Analytics would like to talk to you. We are seeking an experienced and dynamic Data Engineer to play a key role in designing and implementing robust data solutions that help solve our clients' complex business problems.

Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Scala, DBT, and SQL.
  • Implement and optimize distributed data processing solutions using MPP databases and technologies.
  • Build and deploy machine learning models using distributed processing frameworks such as Spark, Glue, and Iceberg.
  • Collaborate with data scientists and analysts to operationalize ML models and integrate them into production systems.
  • Ensure data quality, reliability, and integrity throughout the data lifecycle.
  • Continuously optimize and improve data processing and ML workflows for performance and scalability.

Requirements:

  • 5+ years of experience in data engineering and machine learning.
  • Proficiency in the Scala programming language for building data pipelines and ML models.
  • Hands-on experience with DBT (Data Build Tool) for data transformation and modeling.
  • Strong SQL skills for data querying and manipulation.
  • Experience with MPP (Massively Parallel Processing) databases and distributed processing technologies.
  • Familiarity with distributed processing frameworks such as Spark, Glue, and Iceberg.
  • Ability to work independently and collaboratively in a team environment.

Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, fast-growing, challenging, and entrepreneurial environment, with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

Posted 30+ days ago

Senior Data Engineer

Expression Networks - DC, US - Remote
Tags: agile, Ability to travel, nosql, sql, Design, scrum, java, python, javascript

Expression Networks is hiring a Remote Senior Data Engineer

Expression is looking to hire a Senior Data Engineer (individual contributor) to add to the continued growth we are seeing in our Data Science department. This position reports daily to the program manager and data team manager on projects, and is responsible for the design and execution of high-impact data architecture and engineering solutions for customers across a breadth of domains and use cases.

Location:

  • Remote, with the ability to travel monthly when needed
    • Local (DC/VA/MD metropolitan area) candidates are preferred but not required
    • Relocation assistance is available for highly qualified candidates

Security Clearance:

  • US Citizenship required
  • Ability to obtain a Secret Clearance or higher

Primary Responsibilities:

  • Directly working on, and leading others in, the development, testing, and documentation of software code and data pipelines for data extraction, ingestion, transformation, cleaning, correlation, and analytics
  • Leading the end-to-end architectural design and development lifecycle for new data services/products, and making them operate at scale
  • Partnering with Program Managers, Subject Matter Experts, Architects, Engineers, and Data Scientists across the organization where appropriate to understand customer requirements, design prototypes, and optimize existing data services/products
  • Setting the standard for Data Science excellence in the teams you work with across the organization, and mentoring junior members of the Data Science department

Additional Responsibilities:

  • Participating in the technical development of white papers and proposals to win new business opportunities
  • Analyzing and providing feedback on product strategy
  • Participating in research, case studies, and prototypes on cutting-edge technologies and how they can be leveraged
  • Working in a consultative fashion to improve communication, collaboration, and alignment amongst teams inside the Data Science department and across the organization
  • Helping recruit, nurture, and retain top data engineering talent

Required Qualifications:

  • 4+ years of experience bringing databases, data integration, and data analytics/ML technologies to production with a PhD/MS in Computer Science/Data Science/Computer Engineering or a relevant field, or 6+ years of experience with a Bachelor's degree
  • Mastery in developing software code in one or more programming languages (Python, JavaScript, Java, MATLAB, etc.)
  • Expert knowledge of databases (SQL, NoSQL, Graph, etc.) and data architecture (Data Lake, Lakehouse)
  • Knowledgeable in machine learning/AI methodologies
  • Experience with one or more SQL-on-Hadoop technologies (Spark SQL, Hive, Impala, Presto, etc.)
  • Experience with short release cycles and the full software lifecycle
  • Experience with Agile development methodology (e.g., Scrum)
  • Strong writing and oral communication skills to deliver design documents, technical reports, and presentations to a variety of audiences

Benefits:

  • 401k matching
  • PPO and HDHP medical/dental/vision insurance
  • Education reimbursement
  • Complimentary life insurance
  • Generous PTO and holiday leave
  • Onsite office gym access
  • Commuter Benefits Plan

About Expression:

Founded in 1997 and headquartered in Washington DC, Expression provides data fusion, data analytics, software engineering, information technology, and electromagnetic spectrum management solutions to the U.S. Department of Defense, Department of State, and national security community. Expression's "Perpetual Innovation" culture focuses on creating immediate and sustainable value for our clients via agile delivery of tailored solutions built through constant engagement with our clients. Expression was ranked #1 on Washington Technology's 2018 Fast 50 list of fastest-growing small business Government contractors and a Top 20 Big Data Solutions Provider by CIO Review.

Equal Opportunity Employer/Veterans/Disabled

See more jobs at Expression Networks

Apply for this job

Posted 30+ days ago

Senior Data Engineer

Tiger Analytics - Jersey City, New Jersey, United States, Remote

Tiger Analytics is hiring a Remote Senior Data Engineer

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for several Fortune 100 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

We are seeking an experienced Data Engineer to join our data team. As a Data Engineer, you will be responsible for designing, building, and maintaining data pipelines, data integration processes, and data infrastructure using Dataiku. You will collaborate closely with data scientists, analysts, and other stakeholders to ensure efficient data flow and support data-driven decision making across the organization.

  • 8+ years of overall industry experience, specifically in data engineering
  • Strong knowledge of data engineering principles, data integration, and data warehousing concepts.
  • Strong understanding of the pharmaceutical/life science domain, including knowledge of patient data, commercial data, drug development processes, and healthcare data.
  • Proficiency in data engineering technologies and tools, such as SQL, Python, ETL frameworks, data integration platforms, and data warehousing solutions.
  • Experience with data modeling, database design, and data architecture principles.
  • Familiarity with big data technologies (e.g., Hadoop, Spark) and cloud platforms (AWS, Azure).
  • Strong analytical and problem-solving skills, with the ability to work with large and complex datasets.
  • Strong communication and collaboration abilities.
  • Attention to detail and a focus on delivering high-quality work.

Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, challenging, and entrepreneurial environment, with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

Posted 30+ days ago

Senior Data Engineer

Sendle - Australia (Remote)
Tags: Sales, tableau, sql, Design, git

Sendle is hiring a Remote Senior Data Engineer

Sendle builds shipping that is good for the world. We help small businesses thrive by making parcel delivery simple, reliable, and affordable. We're a B Corp and the first 100% carbon neutral delivery service in Australia, Canada, and the United States, where we harness major courier networks to create a delivery service that levels the playing field for small businesses.

We envision a world where small businesses can compete on a level playing field with the big guys. Sendle is a fast-growing business with bold ambitions and big dreams.

In the last few years, we've made huge strides towards our goal of becoming the largest SMB eCommerce courier in the world, moving from a single-country operation in Australia to a successful launch and operation in the US and Canada. We've also launched major partnerships with Vestiaire Collective, eBay, Shopify, and Etsy!

But most importantly, we're a bunch of good people doing good work. Wanna join us?

A bit about the role

We are looking for a Senior Data Engineer who is passionate about building scalable data systems that will enable our vision of data democratization to drive value for the business.

As a company, data is at the center of every critical business decision we make. With this role, you will work across many different areas of the business, learning about everything from marketing and sales to courier logistics and network performance. Additionally, there is the opportunity to work directly with stakeholders, acting as a technical thought partner and working collaboratively to design and build solutions that address key business questions.

What you'll do

  • Develop, deploy, and maintain data models to support the data needs of various teams across the company
  • Build data models with DBT, utilizing git for source control
  • Ingest data from different sources (via Fivetran, APIs, etc.) into Snowflake for use by the DBT models; a short sketch follows this list
  • Collaborate with the Data Engineering team to brainstorm, scope, and implement process improvements
  • Work with the entire Data and Analytics team to enhance data observability and monitoring
  • Act as a thought partner for stakeholders and peers across the company on ad hoc data requests, and identify the best approach and design for our near-term and long-term growth objectives
  • Understand the tradeoffs between technical possibilities and stakeholder needs, and strive for balanced solutions
  • Hold yourself and others accountable to meet commitments, and act with a clear sense of ownership
  • Demonstrate persistence in the face of obstacles, resolve them effectively, and involve others as needed
  • Contribute to our data literacy efforts by improving the accessibility, discoverability, and interpretability of our data
  • Research industry trends and introduce new methodologies and processes to the team
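Purely as a sketch of the API-to-Snowflake ingestion step mentioned above (not Sendle's actual pipeline, which also uses Fivetran); the endpoint, table, and credentials are invented:

    # Hypothetical sketch: pull records from an API and stage them in
    # Snowflake for downstream DBT models. All names are invented.
    import json

    import requests                 # pip install requests
    import snowflake.connector      # pip install snowflake-connector-python

    records = requests.get("https://api.example.com/shipments", timeout=30).json()

    conn = snowflake.connector.connect(
        user="LOADER", password="...", account="my_account",
        warehouse="LOAD_WH", database="RAW", schema="STAGING",
    )
    with conn.cursor() as cur:
        for record in records:
            cur.execute(
                "INSERT INTO shipments_raw (payload) SELECT PARSE_JSON(%s)",
                (json.dumps(record),),
            )
    conn.commit()
    conn.close()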

What you'll need

  • Experience with data modeling, data warehousing, and building ETL pipelines (Dagster, DBT, and Snowflake experience a plus)
  • Advanced SQL knowledge
  • Experience with source control technologies such as git
  • Strong communication skills and the ability to partner with business stakeholders to translate business requirements into technical solutions
  • Ability to effectively communicate technical approaches to teammates and leaders
  • Ability to thrive in a remote environment through effective async communication and collaboration
  • Ability to manage multiple projects simultaneously
  • A can-do attitude and the flexibility to readily take on new opportunities and assist others
  • The 5Hs (our core values) in your approach to work and to building partnerships with stakeholders and teammates

What we're offering

  • The chance to work with a creative team in a supportive environment
  • A personal development budget
  • The ability to create your own work environment, connecting to a remote team from anywhere in Australia
  • EAP access for you and your immediate family, because we care about your wellbeing
  • Options through participation in Sendle's ESOP

What matters to us

We believe that our culture is one of our most important assets. We have 5 key values that we look for in every member of our team.

  • Humble - We put others first. We embrace and seek feedback from others.
  • Honest - We speak gently but frankly. We take ownership of our mistakes and speak the truth.
  • Happy - We enjoy the journey. We are optimistic and find opportunities in all things.
  • Hungry - We aspire to make a difference. We aim high, step out of our comfort zones, and tackle the hard problems.
  • High-Performing - We relentlessly deliver. We know the goal and work fearlessly towards it.

Legally, we need you to know this:

We are an equal opportunity employer and value diversity. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

If you require accommodations due to a disability to participate in the application or interview process, please get in touch with our team at careers@sendle.com to discuss your needs.

But it's important to us that you know this:

We strongly believe that diversity of experience contributes to a broader collective perspective that will consistently lead to a better company and better outcomes. We are working hard to increase the diversity of our team wherever we can, and we actively encourage everyone to consider becoming a part of it.

If you want to be a part of something remarkable, then we're excited to hear from you.

Interested in knowing more? Check us out on our Careers Page, Being Carbon Neutral, and LinkedIn.

#LI-Remote

See more jobs at Sendle

Apply for this job

FanDuel is hiring a Remote Staff Data Platform Engineer

See more jobs at FanDuel

Apply for this job

Posted 30+ days ago

Data Engineer (Colombia)

Sezzle - Colombia, Remote
Tags: ML, Sales, golang, Bachelor's degree, terraform, sql, Design, ansible, c++, docker, kubernetes, python, AWS

Sezzle is hiring a Remote Data Engineer (Colombia)

About the Role:

We are looking for a Data Engineer who will assist us in building, running, and improving the data infrastructure that our data and engineering teams use to power their services. Your duties will include the development, testing, and maintenance of data tooling and services, using a combination of cloud products, open source tools, and internal applications. You should be able to build high-quality, scalable solutions for a variety of problems. We are seeking a talented and motivated Data Engineer who is best in class, with a high IQ plus a high EQ. This role presents an exciting opportunity to thrive in a dynamic, fast-paced environment within a rapidly growing team, with abundant prospects for career advancement.

About Sezzle:

Sezzle is a cutting-edge fintech company dedicated to financially empowering the next generation. With only one in three millennials owning a credit card and the majority lacking their desired credit scores, Sezzle addresses these challenges through a payment platform that offers interest-free installment plans at online stores. By increasing consumers' purchasing power, Sezzle drives sales and basket sizes for the thousands of eCommerce merchants that it partners with.

Key Responsibilities Include:

  • Work with a team to plan, design, and build tools and services that improve our internal data infrastructure platform and the pipelines that feed it, using Python, Go, AWS, Terraform, and Kubernetes.
  • Develop monitoring and alerting for our data infrastructure to detect problems.
  • Perform ongoing maintenance of our data infrastructure, such as applying upgrades.
  • Assist product developers, data scientists, and machine learning engineers in debugging and triaging production issues.
  • Collaborate with cross-functional teams to integrate machine learning solutions into production systems.
  • Take part in postmortem reviews, suggesting ways we can improve the reliability of our platform.
  • Document the actions you take, and produce both runbooks and automation to reduce day-to-day toil.

Minimum Requirements:

  • Bachelor's degree in Computer Science, Data Science, Machine Learning, or a related field

Preferred Knowledge and Skills:

  • Experience with AWS services like Redshift, Glue, SageMaker, etc.
  • Experience with data-focused languages such as Python or SQL
  • Knowledge of ML model training/deployment is a plus
  • Knowledge of data analysis algorithms (e.g., statistics, machine learning)
  • Familiarity with machine learning frameworks and libraries, such as TensorFlow or PyTorch
  • Experience with MLOps principles is a plus
  • Familiarity with orchestration tools like Dagster/Airflow
  • Basic knowledge of Golang, Docker, and Kubernetes
  • Familiarity with deployment/provisioning tools like Terraform, Helm, and Ansible
  • Experience documenting requirements and specifications

About You:

  • You have relentlessly high standards - many people may think your standards are unreasonably high. You are continually raising the bar and driving those around you to deliver great results. You make sure that defects do not get sent down the line and that problems are fixed so they stay fixed.
  • You're not bound by convention - your success, and much of the fun, lies in developing new ways to do things.
  • You need action - speed matters in business. Many decisions and actions are reversible and do not need extensive study. We value calculated risk-taking.
  • You earn trust - you listen attentively, speak candidly, and treat others respectfully.
  • You have backbone; disagree, then commit - you can respectfully challenge decisions when you disagree, even when doing so is uncomfortable or exhausting. You have conviction and are tenacious. You do not compromise for the sake of social cohesion. Once a decision is determined, you commit wholly.

What Makes Working at Sezzle Awesome?

At Sezzle, we are more than just brilliant engineers, passionate data enthusiasts, out-of-the-box thinkers, and determined innovators. We believe in surrounding ourselves with only the best and the brightest individuals. Our culture is not defined by a certain set of perks designed to give the illusion of the traditional startup culture, but rather, it is the visible example living in every employee that we hire.

Equal Employment Opportunity: Sezzle Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate based on race, color, religion, sex, national origin, age, disability, genetic information, pregnancy, or any other legally protected status. Sezzle recognizes and values the importance of diversity and inclusion in enriching the employment experience of its employees and in supporting our mission.

#LI-Remote

See more jobs at Sezzle

Apply for this job

Posted 30+ days ago

Data Engineer (m/w/d)

Gerresheimer - Essen, Germany, Remote
Tags: azure

Gerresheimer is hiring a Remote Data Engineer (m/w/d)

Job Description

  • You will help us build up and further develop the Data Analytics / Business Intelligence area for the Moulded Glass business unit, in order to generate added value and business insights.
  • You will develop, implement, and maintain ETL processes with Azure Data Factory and other tools, in close coordination with our Gerresheimer Data Science Center.
  • You will convert existing analyses into standardized, automated solutions with a focus on data modeling, and continuously optimize them.
  • You will develop and optimize data models to enable efficient data access and analysis, and integrate various data sources and databases (such as S4 HANA and internal and external sources) to ensure a consistent data basis.
  • You will design, implement, and monitor data pipelines, data architecture, and mechanisms for monitoring and improving data quality, including error detection and resolution.
  • You will deliver AI/ML projects in close cooperation with our Gerresheimer Data Science Center.

The role can be performed remotely if desired.

Qualifications

  • You hold a Bachelor's or Master's degree in computer science, mathematics, engineering, or a related field.
  • You have solid technical knowledge and, ideally, experience working with relevant Azure services such as Azure Data Factory, Microsoft Azure, SQL/DWH/Analysis Services, Azure Data Lake, Azure Synapse, and Azure DevOps.
  • Experience in handling large volumes of data, data modeling, and data processing is essential for this position.
  • You are proficient in at least one programming language, ideally Python.
  • Ideally, you are familiar with the data structures of ERP systems such as SAP FI/CO/MM/SD.
  • Ideally, you also have experience with BI tools such as Power BI and with agile project management methods such as Scrum.
  • You are enthusiastic about topics such as data analytics, data science, AI, and machine and deep learning.
  • You are a continuous learner, with an interest in expanding your skills into other areas as well (e.g., RPA).
  • You speak German and English.

See more jobs at Gerresheimer

Apply for this job

Posted 30+ days ago

Senior Data Engineer, Finance

Instacart - United States, Remote
Tags: airflow, sql, Design

Instacart is hiring a Remote Senior Data Engineer, Finance

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we're building the team to help push our shopping cart forward. If you're ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There's no one-size-fits-all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it's from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

Overview

At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers every year use Instacart to buy their groceries online, and the Data Engineering team is building the critical data pipelines that underpin the myriad ways data is used across Instacart to support our customers and partners.

About the Role

The Finance data engineering team plays a critical role in defining how financial data is modeled and standardized for uniform, reliable, timely, and accurate reporting. This is a high-impact, high-visibility role owning critical data integration pipelines and models across all of Instacart's products. This role is an exciting opportunity to join a key team shaping the post-IPO financial data vision and roadmap for the company.

     

About the Team

Finance data engineering is part of the Infrastructure Engineering pillar, working closely with the accounting, billing, and revenue teams to support the monthly/quarterly book close, retailer invoicing, and internal/external financial reporting. Our team collaborates closely with product teams to capture critical data needed for financial use cases.

About the Job

  • You will be part of a team with a large amount of ownership and autonomy.
  • Large scope for company-level impact working on financial data.
  • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
  • You will ship high-quality, scalable, and robust solutions with a sense of urgency.
  • You will have the freedom to suggest and drive organization-wide initiatives.

     

About You

Minimum Qualifications

  • 8+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
  • Expert-level SQL and working knowledge of Python.
  • Experience building high-quality ETL/ELT pipelines.
  • Past experience with data immutability, auditability, slowly changing dimensions, or similar concepts; a toy illustration of Type 2 slowly changing dimensions follows this list.
  • Experience building data pipelines for accounting/billing purposes.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Adept at fluently communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
  • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high-quality and pragmatic solutions.
  • Experience working with a large codebase on a cross-functional team.
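To make the slowly-changing-dimension idea concrete, here is a toy Type 2 illustration in plain Python (not Instacart's implementation; a warehouse would typically express this as SQL MERGE logic instead):

    # Toy sketch of SCD Type 2 versioning: when an attribute changes, close
    # out the current row and append a new version instead of updating it.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class DimRow:
        customer_id: int
        tier: str
        valid_from: date
        valid_to: Optional[date] = None   # None means "current" version

    def apply_change(history, customer_id, new_tier, as_of):
        current = next(
            (r for r in history
             if r.customer_id == customer_id and r.valid_to is None),
            None,
        )
        if current is not None and current.tier == new_tier:
            return                        # no change, nothing to version
        if current is not None:
            current.valid_to = as_of      # close out the old version
        history.append(DimRow(customer_id, new_tier, valid_from=as_of))

    history = []
    apply_change(history, 42, "gold", date(2024, 1, 1))
    apply_change(history, 42, "platinum", date(2024, 6, 1))
    # history now holds the closed-out "gold" row and the current
    # "platinum" row, preserving an auditable record of past state.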

     

Preferred Qualifications

  • Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent work experience.
  • Experience with Snowflake, dbt (data build tool), and Airflow.
  • Experience with data quality monitoring/observability, either using custom frameworks or tools like Great Expectations, Monte Carlo, etc.

     

#LI-Remote

Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote, and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For US-based candidates, the base pay ranges for a successful candidate are listed below.

CA, NY, CT, NJ: $221,000 - $245,000 USD
WA: $212,000 - $235,000 USD
OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI: $203,000 - $225,000 USD
All other states: $183,000 - $203,000 USD

See more jobs at Instacart

Apply for this job

Posted 30+ days ago

Senior Data Engineer

Kalderos - Remote, United States
Tags: terraform, nosql, sql, mobile, slack, azure, c++, docker, python

Kalderos is hiring a Remote Senior Data Engineer

About Us

At Kalderos, we are building unifying technologies that bring transparency, trust, and equity to the entire healthcare community, with a focus on pharmaceutical pricing. Our success is measured when we can empower all of healthcare to focus more on improving the health of people.

That success is driven by Kalderos' greatest asset, our people. Our team thrives on the problems that we solve, is driven to innovate, and thrives on the feedback of their peers. Our team is passionate about what they do, and we are looking for people to join our company and our mission.

That's where you come in! We are looking for a collaborative Senior Data Engineer. You have a strong inclination to work in rapidly developing and expanding organizations and possess the necessary background to do so. You are well acquainted with the fast-paced, high-volume, and uncertain nature of operations in such an organization, and perceive it as a chance to deliver significant outcomes.

What You'll Do

  • Work with product teams to understand and develop data models that can meet requirements and operationalize well
  • Mentor junior data engineers and provide guidance on best practices, tools, and techniques
  • Collaborate with other Data Engineers to solve problems that directly impact enterprise customers
  • Build data transformations and data flows utilizing Python, SQL, DBT, and Snowflake
  • Build out automated ETL jobs that reliably process large amounts of data, and ensure these jobs run consistently and are well-monitored
  • Build tools that enable other data engineers to work more efficiently
  • Try out new data storage and processing technologies in proofs of concept and make recommendations to the broader team
  • Tune existing implementations to run more efficiently as they become bottlenecks, or migrate existing implementations to new paradigms as needed
  • Learn and apply knowledge about the drug discount space, and become a subject matter expert for internal teams to draw upon

What You'll Bring

  • Bachelor's degree in computer science or a similar field
  • 6+ years of work experience as a Data Engineer in a professional full-time role
  • Advanced knowledge of SQL and Python (along with data analysis libraries such as Pandas and NumPy), with 6+ years of experience
  • Strong data modeling experience (understanding of SQL, NoSQL, and other database types, along with relational modeling)
  • Ability to learn new technologies and work in various components across the full tech stack
  • 3+ years of experience with DBT and Snowflake
  • 6+ years of experience with cloud-native infrastructure (Azure preferred)
  • 6+ years of experience with container applications (Docker)
  • Experience building ETL pipelines and other services for the healthcare industry
  • Professional experience in application programming with an object-oriented language

    Set Yourself Apart

    • Experience with event driven platforms
    • Experience with Spark 
    • Experience with infrastructure as code tools like Terraform or Azure Resource Manager (ARM) templates
    • Knowledge about the drug discount space and health care industry

    Expected Salary Range: $145,000 - $175,000 base + bonus

    This position can be remote in the United States or hybrid in Chicago, IL or Boston, MA. Expected hours will be Eastern or Central time.

    ____________________________________________________________________________________________

    Highlighted Company Perks and Benefits

    • Medical, Dental, and Vision benefits
    • 401k with company match
    • Flexible PTO with a 10 day minimum
    • Opportunity for growth
    • Mobile & Wifi Reimbursement
    • Commuter Reimbursement
    • Donation matching for charitable contributions
    • Travel reimbursement for healthcare services not available near your home
    • New employee home office setup reimbursement

    What It’s Like Working Here

    • We thrive on collaboration, because we believe that all voices matter and we can only put our best work into the world when we work together to solve problems.
    • We empower each other and believe in ensuring all voices are heard.
    • We know the importance of feedback in individual and organizational growth and development, which is why we've embedded it into our practice and culture. 
    • We’re curious and go deep. Our Slack channels fill throughout the day with insightful articles and discussions about our industry and healthcare, and our book club is always bursting with questions.

    To learn more: https://www.kalderos.com/company/culture

    We know that job postings can be intimidating, and research shows that while men apply to jobs when they meet an average of 60% of the criteria, women and other marginalized folks tend to only apply when they check every box. We encourage you to apply if you think you may be a fit and give us both a chance to find out!

    Kalderos is proud to be an equal opportunity workplace.  We are committed to equal opportunity regardless of race, color, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or veteran status.

    Kalderos participates in E-Verify.

    See more jobs at Kalderos

    Apply for this job

    +30d

    Senior Civil Engineer - Data Center

    Olsson - Greenville, SC, Remote
    Bachelor's degree, Design

    Olsson is hiring a Remote Senior Civil Engineer - Data Center

    Job Description

    As a Senior Civil Engineer on our Data Center Civil Team, you will be a part of the firm’s largest and most complex projects. You will serve as a project manager on some projects and as lead design engineer on others, preparing planning and design documents, processing design calculations, and developing and maintaining team and client standards. You may lead quality assurance/quality control and act as an advisor on complex projects. You will also coordinate with other Olsson teams, professional staff, technical staff, clients, and other consultants.

    You may travel to job sites for observation and attend client meetings.

    *This role offers flexible work options, including remote and hybrid opportunities, to accommodate diverse working preferences and promote work-life balance. Candidates can work hybrid schedules, work remotely, or work out of any Olsson office location in these regions/areas.

    Qualifications

    You are passionate about:

    • Working collaboratively with others
    • Having ownership in the work you do
    • Using your talents to positively affect communities
    • Solving problems
    • Providing excellence in client service

    You bring to the team:

    • Strong communication skills
    • Ability to contribute and work well on a team
    • Bachelor's Degree in civil engineering
    • At least 8 years of related civil engineering experience
    • Proficient in Civil 3D software
    • Must be a registered professional engineer

    See more jobs at Olsson

    Apply for this job

    +30d

    Data Engineer (H/F)

    CITECH - Paris, France, Remote
    sql, ansible, git, python, PHP

    CITECH is hiring a Remote Data Engineer (H/F)

    Job Description

    Your main responsibilities will be:

    • Support the application (particularly during monthly closings).
    • Contribute to ongoing maintenance and enhancements.
    • Contribute to the design and implementation of new features.
    • Contribute to the technical redesign.
    • Contribute to the migration from Talend to Spark/Scala (a sketch follows this list).
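
    As a rough illustration of the Talend-to-Spark item, here is a minimal PySpark sketch, assuming hypothetical Parquet sources and column names (the posting targets Scala, but the DataFrame API is near-identical):

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("monthly-close").getOrCreate()

        # Hypothetical sources standing in for the Talend job's inputs.
        invoices = spark.read.parquet("hdfs:///data/raw/invoices")
        accounts = spark.read.parquet("hdfs:///data/raw/accounts")

        # A Talend-style join + aggregate expressed as a Spark DataFrame pipeline.
        monthly = (
            invoices.join(accounts, "account_id")
            .withColumn("month", F.trunc("invoice_date", "month"))
            .groupBy("month", "account_id")
            .agg(F.sum("amount").alias("total_amount"))
        )

        monthly.write.mode("overwrite").partitionBy("month").parquet(
            "hdfs:///data/curated/monthly_close"
        )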

    Qualifications

    With a higher-education background in computer science, you have at least 5 years of experience in a similar role.

    The expected skills are:

    • You have a strong command of Spark, Talend (Data Integration, Big Data), and Scala.
    • You have development skills (Unix shell, Perl, PHP, Python, git, GitHub).
    • You are familiar with the following technical environment: Hadoop (Big Data), Hive, Microsoft Power BI, Microsoft SQL Server Analysis Services (OLAP), Integration Services, Reporting Services, scripting (GitHub, Ansible, AWX, shell, VBA), and SQL Server databases.

    See more jobs at CITECH

    Apply for this job

    +30d

    Sr Big Data Engineer

    Ingenia Agency - Mexico, Remote
    Bachelor's degree, sql, oracle, python

    Ingenia Agency is hiring a Remote Sr Big Data Engineer

    At Ingenia Agency we’re looking for a Data Engineer to join our team.

    Responsible for creating and sustaining pipelines that allow for the analysis of data.

    What will you be doing?

    • Conceptualize and build infrastructure that allows data to be accessed and analyzed in a global setting.
    • Load raw data from our SQL Servers, transform it, and save it into Google Cloud databases (see the sketch after this list).
    • Detect and correct errors in the data, writing scripts to clean it up.
    • Work with scientists and clients in the business to gather requirements and ensure an easy flow of data.
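
    As a minimal sketch of the SQL Server-to-Google Cloud bullet, assuming a hypothetical source database and BigQuery table (connection strings, table, and column names are placeholders):

        import pandas as pd
        import sqlalchemy
        from google.cloud import bigquery  # pip install google-cloud-bigquery

        # Hypothetical source connection; real credentials would come from config.
        engine = sqlalchemy.create_engine(
            "mssql+pyodbc://etl_user:***@sqlserver-host/SalesDB"
            "?driver=ODBC+Driver+17+for+SQL+Server"
        )

        # Extract a table, apply example cleanup, and load it into BigQuery.
        df = pd.read_sql("SELECT * FROM dbo.Orders", engine)
        df = df.dropna(subset=["order_id"])                # drop rows missing the key
        df["order_date"] = pd.to_datetime(df["order_date"])

        client = bigquery.Client()
        job = client.load_table_from_dataframe(df, "example-project.sales.orders")
        job.result()  # block until the load job completes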

    What are we looking for?

    • Age indifferent.
    • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field.
    • Master's degree in a relevant field is advantageous.
    • Proven experience as a Data Engineer.
    • Expert proficiency in Python, ETL and SQL.
    • Familiarity with Google Cloud/ AWS/Azure or suitable equivalent.
    • Excellent analytical and problem-solving skills.
    • A knack for independent and group work.
    • Knowledge of Oracle and MDM Hub.
    • Capacity to successfully manage a pipeline of duties with minimal supervision.
    • Advanced English.
    • Be Extraordinary!

    What are we offering?

    • Competitive salary
    • Fixed-term contract
    • Statutory benefits:
      • 10 days of vacation upon completing the first year
      • IMSS
    • Additional benefits:
      • Contigo Membership (insurance for minor medical expenses)
        • Personal accident policy.
        • Funeral assistance.
        • Dental and vision care assistance.
        • Emotional wellness.
        • Benefits & discounts.
        • Network of medical services and providers at a discount.
        • Medical network with preferential prices.
        • Roadside assistance at preferential prices, among others.
      • 3 special half-day permits per year for personal errands or appointments
      • Half day off on your birthday
      • 5 additional vacation days in case of marriage
      • 50% scholarship for language courses at the Anglo
      • Partial scholarship for graduate or master’s studies at the Tec. de Mty.
      • Agreement with a ticketing company for preferential rates on entertainment events.



    See more jobs at Ingenia Agency

    Apply for this job

    +30d

    Sr Data Engineer GCP

    Ingenia Agency - Mexico, Remote
    Bachelor's degree, 5 years of experience, 3 years of experience, airflow, sql, api, python

    Ingenia Agency is hiring a Remote Sr Data Engineer GCP


    At Ingenia Agency we’re looking for a Sr Data Engineer to join our team.

    Responsible for creating and sustaining pipelines that allow for the analysis of data.

    What will you be doing?

    • Bring a sound understanding of Google Cloud Platform.
    • Work with BigQuery, Workflows, or Composer.
    • Reduce BigQuery costs by reducing the amount of data your queries process (see the sketch after this list).
    • Speed up queries by using denormalized data structures, with or without nested repeated fields.
    • Explore and prepare data using BigQuery.
    • Deliver artifacts such as Python scripts, Dataflow components, SQL, Airflow DAGs, and Bash/Unix scripts.
    • Build and productionize data pipelines using Dataflow.
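
    A minimal sketch of the cost and query-shape points above, using the google-cloud-bigquery client (project, dataset, and column names are hypothetical):

        from google.cloud import bigquery  # pip install google-cloud-bigquery

        client = bigquery.Client()

        # Select only the needed columns and filter on the partition column,
        # so BigQuery prunes partitions instead of scanning the whole table.
        QUERY = """
            SELECT order_id, total_amount
            FROM `example-project.sales.orders`
            WHERE _PARTITIONDATE = '2024-01-01'
        """

        # A dry run reports how many bytes the query would process,
        # without actually running it or incurring cost.
        dry = client.query(QUERY, job_config=bigquery.QueryJobConfig(dry_run=True))
        print("estimated bytes processed:", dry.total_bytes_processed)

        # With nested repeated fields, one-to-many data lives in a single
        # denormalized table; UNNEST flattens the array only when needed.
        NESTED_QUERY = """
            SELECT o.order_id, item.sku, item.quantity
            FROM `example-project.sales.orders` AS o,
                 UNNEST(o.line_items) AS item
            WHERE o._PARTITIONDATE = '2024-01-01'
        """
        rows = client.query(NESTED_QUERY).result()

    Billing scales with bytes scanned, so column pruning and partition filters are the first levers for cost; denormalized nested fields avoid join overhead at query time.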

    What are we looking for?

    • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field.
    • Age indifferent.
    • 3 to 5 years of experience in GCP is required.
    • Must have excellent GCP, BigQuery, and SQL skills.
    • At least 3 years of experience with BigQuery and Dataflow, plus experience with Python and Google Cloud SDK/API scripting to create reusable frameworks.
    • Strong hands-on experience with PowerCenter.
    • In-depth understanding of architecture, table partitioning, clustering, table types, and best practices.
    • Proven experience as a Data Engineer, Software Developer, or similar.
    • Expert proficiency in Python, R, and SQL.
    • Candidates with Google Cloud certification will be preferred.
    • Excellent analytical and problem-solving skills.
    • A knack for independent and group work.
    • Capacity to successfully manage a pipeline of duties with minimal supervision.
    • Advanced English.
    • Be Extraordinary!

    What are we offering?

    • Competitive salary
    • Statutory benefits:
      • 10 days of vacation upon completing the first year
      • IMSS
    • Additional benefits:
      • Contigo Membership (insurance for minor medical expenses)
        • Personal accident policy.
        • Funeral assistance.
        • Dental and vision care assistance.
        • Emotional wellness.
        • Benefits & discounts.
        • Network of medical services and providers at a discount.
        • Medical network with preferential prices.
        • Roadside assistance at preferential prices, among others.
      • 3 special half-day permits per year for personal errands or appointments
      • Half day off on your birthday
      • 5 additional vacation days in case of marriage
      • 50% scholarship for language courses at the Anglo
      • Partial scholarship for graduate or master’s studies at the Tec. de Mty.
      • Agreement with a ticketing company for preferential rates on entertainment events.

    See more jobs at Ingenia Agency

    Apply for this job