San Jose, United States / Seattle, United States
Posted 6 months ago
About The Company
This company pioneers short-form video creation and social engagement, with a vast, engaged user base. Its platform gives users creative tools, filters, and effects, and its diverse content ecosystem makes it a hub of creativity and expression. A proprietary recommendation algorithm delivers personalized content feeds, driving user engagement and satisfaction. The company wields significant influence in digital media, making it a valuable partner for innovative collaborations and marketing efforts.

Team Intro
The Large Model Team is committed to developing the most advanced large-model AI technology in the industry, building a world-class research team, and contributing to technological and social development. The team has a long-term vision and determination in the field of AI, with research directions covering NLP, CV, speech, and other areas. Drawing on the platform's abundant data and computing resources, the team has continued to invest in these fields and has launched its own general-purpose large model with multi-modal capabilities.

The Machine Learning (ML) System sub-team combines systems engineering with the art of machine learning to develop and maintain massively distributed ML training and inference systems and services around the world, providing high-performance, highly reliable, and scalable systems for LLM/AIGC/AGI workloads. On this team, you will have the opportunity to build large-scale heterogeneous systems integrating GPU/NPU/RDMA/storage and keep them running stably and reliably, deepen your expertise in coding, performance analysis, and distributed systems, and be involved in decision-making. You will also be part of a global team, with members in the United States, China, and Singapore working collaboratively toward a unified project direction.
Responsibilities
– Ensure our ML systems operate and run efficiently for large model development, training, evaluation, and inference
– Ensure the stability of offline tasks and services in multi-data-center, multi-region, and multi-cloud scenarios
– Own resource management and planning, including cost and budget, for computing and storage resources
– Drive global system disaster recovery, cluster machine governance, business-service stability, and improvements in resource utilization and operational efficiency
– Build software tools, products, and systems to monitor and manage the ML infrastructure and services efficiently
– Participate in the global on-call roster supporting systems and the business

Minimum Qualifications
– Bachelor's degree or above in computer science, computer engineering, or a related field
– Strong proficiency in at least one programming language such as Go, Python, or shell scripting in a Linux environment
– Strong hands-on Kubernetes and container skills, with more than 2 years of relevant operations and maintenance experience
– Excellent logical analysis ability; able to reasonably abstract and decompose business logic
– Good documentation habits; able to write and update workflow and technical documentation on time as required
– Strong sense of responsibility, learning ability, communication skills, self-drive, and team spirit

Preferred Qualifications
– Experience operating and maintaining large-scale distributed ML systems
– Experience operating and maintaining GPU servers
Job Features
Job Category: DevOps & SRE
Seniority: Junior / Mid IC
Base Salary: $180,000 - $276,000
Recruiter: levana.lyu@ocbridge.ai