Yequan's Academic
Large Model
Masked Structural Growth for 2x Faster Language Model Pre-training
To lower the computational cost of training large models, we focus on speeding up pre-training by progressively growing a small Transformer structure into a large one.
Yiqun Yao, Zheng Zhang, Jing Li, Yequan Wang
PDF
Cite
Code
Project
52B to 1T: Lessons Learned via Tele-FLM Series
As scaling laws underscore the potential of increasing model sizes, the academic community has intensified its investigations into LLMs with capacities exceeding 50 billion parameters. This technical report builds on our prior work with Tele-FLM (also known as FLM-2), a publicly available 52-billion-parameter model.
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, China Telecom, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang
PDF
Cite
Project
Tele-FLM-1T
Tele-FLM
FLM Family
The FLM series is a set of large models developed by the Cofe-AI team, including FLM-2, FLM-101B, and FreeLM. Its core technologies include model growth techniques, loss prediction, and the FreeLM framework, among others.
PDF
Cite