ByteDance's Doubao large-model team recently open-sourced an MoE architecture optimization technique called COMET, which improves large-model training efficiency by a factor of 1.7. According to the paper, the technique has already been deployed in ByteDance's production training on clusters of more than 10,000 GPUs, saving millions of GPU hours of training compute.
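For readers unfamiliar with the Mixture-of-Experts (MoE) architecture that COMET optimizes, the sketch below shows a minimal, generic top-k-gated MoE layer in PyTorch. It is background illustration only, not COMET itself or ByteDance's code; all class and parameter names here are hypothetical. The dense per-expert loop is where real large-scale systems instead dispatch tokens to experts spread across GPUs, and the resulting communication cost is the kind of overhead that MoE training optimizations target.

```python
# Minimal illustrative MoE layer (generic background; NOT the COMET implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each token against every expert.
        self.gate = nn.Linear(d_model, num_experts)
        # Experts: independent feed-forward networks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        scores = self.gate(tokens)                      # (n_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(tokens)
        # Dense loop over experts for clarity; distributed systems instead
        # route token batches to experts on other GPUs, incurring the
        # communication cost that MoE training optimizations try to hide.
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(tokens[mask])
        return out.reshape_as(x)

if __name__ == "__main__":
    layer = ToyMoELayer(d_model=64)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```

Because only the top-k experts run per token, an MoE model keeps per-token compute low while scaling total parameters with the number of experts, which is why efficiency gains like the reported 1.7x matter at 10,000-GPU scale.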