Yequan's Academic
Foundation Model
Masked Structural Growth for 2x Faster Language Model Pre-training
To lower the computational cost of training large models, we focus on speeding up pre-training by progressively growing from a small Transformer structure to a large one (a minimal sketch of the idea follows this entry).
Yiqun Yao, Zheng Zhang, Jing Li, Yequan Wang
PDF
Cite
Code
Project
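For flavor, here is a minimal PyTorch sketch of the function-preserving growth idea: when a linear layer is widened, the newly added rows and columns start at zero (standing in for the paper's masks on new structure), so the grown model reproduces the small model's outputs at the moment of growth. The helper `grow_linear` and all dimensions are hypothetical illustrations, not the released code.

```python
import torch
import torch.nn as nn

def grow_linear(old: nn.Linear, new_in: int, new_out: int) -> nn.Linear:
    """Widen a Linear layer while preserving its function.

    New rows/columns are zero-initialized -- a stand-in for MSG-style
    masking of newly added structure -- so on the original input
    dimensions the grown layer computes exactly what the old one did.
    """
    grown = nn.Linear(new_in, new_out)
    with torch.no_grad():
        grown.weight.zero_()
        grown.bias.zero_()
        grown.weight[: old.out_features, : old.in_features] = old.weight
        grown.bias[: old.out_features] = old.bias
    return grown

# Hypothetical usage: widen a 256-dim layer to 512 dims mid-training.
small = nn.Linear(256, 256)
large = grow_linear(small, 512, 512)
x = torch.randn(4, 256)
x_padded = torch.cat([x, torch.zeros(4, 256)], dim=-1)  # zero-fill new dims
assert torch.allclose(small(x), large(x_padded)[:, :256], atol=1e-6)
```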
52B to 1T: Lessons Learned via Tele-FLM Series
As scaling laws underscore the potential of increasing model sizes, the academic community has intensified its investigations into LLMs with capacities exceeding 50 billion parameters. This technical report builds on our prior work with Tele-FLM (also known as FLM-2), a publicly available 52-billion-parameter model.
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, China Telecom, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang
PDF
Cite
Project
Tele-FLM-1T
Tele-FLM
Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales
We find that Maximal Update Parametrization (μP) enables accurate fitting of scaling laws for hyperparameters close to common loss basins, without any search. Thus, different models can be directly compared at large scales via loss prediction even before training starts. We propose a new paradigm as a first step towards reliable academic research at any model scale without heavy computation (a toy illustration follows this entry).
Yiqun Yao, Yequan Wang
PDF
Cite
Project
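As a toy illustration of the loss-prediction workflow described above: under μP, hyperparameters tuned on small models remain near-optimal at larger widths, so a scaling curve fitted on a few cheap runs can extrapolate a large model's loss before training starts. All numbers below are made up, and the power-law form L(N) = a·N^(−b) + c is one common choice of fit, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up final losses from small proxy models trained under muP, where
# tuned hyperparameters transfer across scales (numbers are illustrative).
sizes = np.array([1e7, 3e7, 1e8, 3e8])       # parameter counts N
losses = np.array([4.10, 3.72, 3.41, 3.18])  # observed training losses

def power_law(n, a, b, c):
    # A common scaling-law form: L(N) = a * N**(-b) + c
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.1, 2.0))

# Predict the loss of a much larger model before training it.
target_n = 1e10
print(f"predicted loss at N={target_n:.0e}: {power_law(target_n, a, b, c):.3f}")
```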