Senior Data Engineer

Location
Remote
Work Type
Full-Time
Level
Senior

About Zingly

At Zingly, we build relationships bigger than business. We are a highly ambitious, early-stage (post-seed-round, stealth-mode) tech startup laser-focused on changing how consumers build and grow relationships with their brands, for the benefit of both. We believe that modern technology can empower consumers and put them on an equal footing with brands, and that we can bring people together to build transparent, authentic relationships across the board.

We are looking for individuals who are also passionate about relationships -- and if your skills follow your passions, you are in the right place. Our team is globally distributed, and we welcome diverse backgrounds and walks of life. We challenge assumptions, embrace change, and value divergent perspectives. We put people first, inside and outside of work. Creating a healthy, balanced work culture is a top priority at Zingly.

We are looking for a Senior Data Engineer with 8+ years of experience building and supporting highly scalable distributed databases. Effective, timely communication with team members, in English, is required.

Opportunities for Impact:

  • Design and build the data platform for generative AI (GenAI) models
  • Design and build a tech solution that has the potential to impact all industries in the world
  • Change how billions of people think about customer experience, pre- and post-sale
  • Build a robust, scalable, all-in-one customer engagement platform to be used by hundreds of millions of customers and tens of thousands of brands
  • Inspire and guide your teammates by example, through attitude and craft
  • Be a part of, and contribute to, a people-first company culture
  • Develop and maintain scalable, cloud-based data ingestion, cleaning, organization, and integration processes
  • Collaborate with data and AI engineers to develop and support the data models that feed analytics models
  • Design and implement big data ETL pipelines to train ML models
  • Implement processes and systems to monitor data quality, ensuring production data is accurate and available for all key stakeholders 
  • Select and integrate the big data tools and frameworks required for processing
  • Ensure the availability, scalability, reliability, and performance of the big data platform
  • Collaborate with other engineers on feature design and implementation, including peer reviews
  • Design and build real-time data streams for live reporting and real-time actions
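To give a concrete flavor of the data-quality monitoring work above, here is a minimal sketch in plain Python. The field names, records, and the 1% null-rate threshold are purely illustrative assumptions, not part of any Zingly schema; in production, checks like these would run inside the pipeline orchestrator against real tables.

```python
# Minimal data-quality check sketch: validates a batch of records against a
# simple completeness rule before the batch is published downstream.
# Field names and the null-rate threshold are illustrative, not a real schema.

def check_batch(records, required_fields, max_null_rate=0.01):
    """Return (passed, report) for a list of dict records.

    `report` maps each required field to its observed null rate; the batch
    passes only if every field's null rate is at or below `max_null_rate`.
    """
    total = len(records)
    if total == 0:
        return False, {"error": "empty batch"}
    report = {}
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) is None)
        report[field] = {"null_rate": nulls / total}
    passed = all(v["null_rate"] <= max_null_rate for v in report.values())
    return passed, report

# Hypothetical batch: one of three user_ids is null, so the batch fails.
records = [
    {"user_id": 1, "event": "signup"},
    {"user_id": 2, "event": "purchase"},
    {"user_id": None, "event": "purchase"},
]
passed, report = check_batch(records, ["user_id", "event"])
print(passed)  # False: user_id null rate (1/3) exceeds the 1% threshold
```

In practice this kind of rule would be one of many (ranges, uniqueness, freshness) evaluated per batch, with failures blocking promotion of the data and alerting stakeholders.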

About you:

  • Driven and determined to do your best, at everything you do
  • Passionate about creating the best user experience possible, on schedule
  • Independent but collaborative, clearly communicating and working across design, product, and engineering teams
  • Team player, quick to help others, even when it's not part of your core role
  • Always curious and excited to learn new things, looking for feedback and opportunities to grow 
  • Proactive, anticipating what's around the corner, both in our own product and the bigger industry
  • Innovative and not afraid to (respectfully) question the status quo; an out-of-the-box thinker
  • Leader in character and skill set, setting the bar high for yourself and others

Preferred Experience:

  • 8+ years of experience in data engineering and building big data solutions
  • Proficiency with the big data ecosystem: Spark (PySpark), Hadoop, HDFS, Hive, NoSQL, and modern cloud data lakes (Cloudera Data Platform or Delta Lake)
  • Strong knowledge of AWS data platform and tools
  • Strong SQL expertise, including optimizing complex joins, and a solid grasp of database concepts
  • Strong programming experience in languages such as Python and Java
  • Experience building stream-processing systems using Spark Streaming
  • Experience with Unix/Shell or Python scripting
  • Experience developing REST API interfaces
  • Knowledge of AI/ML and MLOps 
  • Excellent problem solving and troubleshooting skills
  • Excellent understanding of computer science fundamentals, data structures, and algorithms
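For the stream-processing experience above, the core idea behind a tumbling-window aggregation (what Spark Structured Streaming's `groupBy(window(...)).count()` performs at scale) can be sketched in plain Python, with no Spark dependency. The event tuples and 60-second window size are illustrative assumptions.

```python
from collections import defaultdict

# Tumbling-window count sketch: groups timestamped events into fixed,
# non-overlapping windows keyed by (window_start, event_key). This mimics,
# in miniature, a windowed count in Spark Structured Streaming.

def tumbling_window_counts(events, window_seconds=60):
    """events: iterable of (epoch_seconds, key) -> {(window_start, key): count}"""
    counts = defaultdict(int)
    for ts, key in events:
        # Integer division snaps each timestamp to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical events spanning two 60-second windows: [0, 60) and [60, 120).
events = [(0, "click"), (30, "click"), (65, "click"), (70, "view")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'click'): 1, (60, 'view'): 1}
```

A real streaming job would add event-time watermarking to bound state for late-arriving events; this sketch only shows the windowing arithmetic itself.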

Nice to Haves:

  • Experience with Kafka and Splunk
  • Experience with machine learning tools

We offer great benefits, flexible work hours, and a remote-first culture built on trust. If this sounds like you, let's connect; we would love to have you on board!