Position Details
Sendbird's engineering team is tackling enormous challenges to deliver a real-time conversation solution that is reliable, feature-rich, and scalable across a wide range of platforms. By joining us and contributing to the Sendbird messaging platform, you will help countless customers build real-time conversation features into their apps with our SDKs and samples. Together, we will build the world's best real-time conversation solution.
Sendbird's chat, voice, and video platform is the medium through which doctors and patients meet in digital spaces, fans enjoy sports games and performances digitally, and even buyers and sellers transact in digital marketplaces. Join us on Sendbird's journey!
Responsibilities
• Design distributed, high-volume ETL data pipelines that power Sendbird analytics and products
• Support data warehousing of a variety of different datasets at scale
• Build production services using open-source technologies such as Airflow and Spark, AWS cloud infrastructure such as EKS, Lambda, Aurora, S3, and Athena, and GCP services such as BigQuery and Dataflow (an illustrative pipeline sketch follows this list)
• Develop data processing platforms and tools for data collection, discovery, and analytics in Python, Java, or Scala, and define workflows and processes for testing, validation, monitoring, and more
• Collaborate with other teams and work cross-functionally on data-related product initiatives
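For illustration only: a minimal sketch of the kind of ETL pipeline work described above, assuming Airflow 2.x with the TaskFlow API. The DAG name, tasks, data, and targets are hypothetical and do not describe Sendbird's actual pipelines.

# Illustrative only: a minimal daily ETL DAG, assuming Airflow 2.x (TaskFlow API).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule_interval="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_events_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull raw events from a source such as S3 or Aurora.
        return [{"user_id": 1, "event": "message_sent"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Validate and clean records before loading them downstream.
        return [r for r in rows if "user_id" in r]

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would write to a warehouse table (e.g., Athena or BigQuery).
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


example_events_etl()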
Qualifications
• Working knowledge of distributed processing, highly scalable data stores, and developing and maintaining a large variety of datasets
• 5+ years of work experience in data engineering and/or building ETL pipelines in production
• Work experience in the AWS and/or GCP data pipeline ecosystem
• Fluency in several programming languages such as Python, Java, or Scala
• Strong analytical skills related to data models and writing SQL
• Ability to find the optimal solution given resource constraints
• Understanding of the need for data quality and reliability, and the ability to improve both










