
Sr. Software Engineer - Data Platform

at Pinecone, NYC

About Pinecone

Pinecone is pioneering a vector database to power modern AI/ML applications. We provide customers with capabilities that until now have been in the hands of only a few tech giants, such as Google's Search and Facebook's feed ranking. Our team includes multiple startup founders, including the core team that created Amazon SageMaker, and is backed by some of Silicon Valley's prominent investors.

We value integrity, passion, pushing boundaries, real-world problem solving, and a sense of humor. We work in collaboration and encourage new ideas and initiatives.

About the Role

As we continue to grow and scale our business, it's essential that we have a highly skilled data engineering team in place to ensure we have robust, reliable data for business operations, analysis, and reporting. The Experience Team is responsible for helping our users manage, observe, and operate their vector databases at scale. You will be the second engineer on the team, dedicated to designing and building the data warehouse and pipelines that enable data discovery, integration, transformation, and analysis across the business. The team works closely with stakeholders including growth marketing, product, finance, and sales operations.

Overview

As a Data Engineer on our Data Platform Team, you'll play a vital role in developing and maintaining the infrastructure that powers our use of data across our organization. You'll collaborate with our software engineers and analysts to ensure that our data is structured, optimized, and readily available to drive our business forward. You'll be at the forefront of designing and building the data platform that our customers rely on to manage their AI/ML applications. If you're passionate about working on complex data challenges and thrive in a dynamic, fast-paced environment, we'd love to have you join our team!

Requirements

  • 6+ years of hands-on experience writing and deploying production-quality code; familiarity with production Rust applications preferred

  • Professional experience using Python, Java, or Scala for data processing

  • Deep understanding of SQL and analytical data warehouses

  • Experience designing, building, and operating BigQuery instances and pipelines

  • Experience implementing ETL (or ELT) best practices at scale

  • Experience with data pipeline and orchestration tools (dbt, Airflow, Prefect)

  • Experience with big data processing concepts (Spark, Kafka, DataFlow)

  • Experience working with cloud infrastructure technologies (GCP, AWS, Kubernetes)

  • Strong data modeling skills and familiarity with the Kimball methodology

Responsibilities

  • Build and maintain data pipelines from internal services and SaaS apps

  • Provide architecture recommendations and implement them

  • Write performant code and define standards for style and maintenance

  • Support and mentor team members to grow technical skills

  • Ship medium to large features independently and with minimal guidance

  • Influence long-range goals and achieve consensus among stakeholders

 
