Ya-Lin-Zhang committed
Commit 402252b · verified · 1 Parent(s): e2ed27a

Update README.md

Files changed (1): README.md (+10 -3)
README.md CHANGED
@@ -21,7 +21,7 @@ tags:
 
  We are excited to announce the official open-source release of Ring-flash-linear-2.0!
 
- Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, perfectly balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations like a 1/32 expert activation ratio and MTP layers, Ring-flash-linear achieves the performance of a 40 B dense model while activating only 6.1 B parameters. This model was converted from [Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0), further trained on an additional 1T tokens.
+ Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, perfectly balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations like a 1/32 expert activation ratio and MTP layers, Ring-flash-linear achieves the performance of a 40B dense model while activating only 6.1B parameters. This model was converted from [Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0), further trained on an additional 1T tokens.
  When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own against standard attention models (like Ring-flash-2.0) but also outperforms other open-source MoE and Dense models in its class on several demanding tasks. Plus, with support for a 128k long context, it's faster and more precise than ever, especially when handling long-form inputs and outputs.
 
  <div style="display: flex; justify-content: center;">
@@ -32,6 +32,11 @@ When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own agains
  </div>
 
  ## Evaluation
+
+ To better demonstrate the model's capabilities, we selected representative open-source thinking models and closed-source APIs for comparison.
+ We present results on several challenging reasoning benchmarks spanning domains such as mathematics, coding, and science. Also, we evaluate the model's performance on a creative writing task (Creative Writing v3).
+ We observe that our model achieves performance on par with other models.
+
  <div style="display: flex; justify-content: center;">
  <div style="text-align: center;">
  <img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/mc1wSo7zHV4AAAAARHAAAAgADgCDAQFr/original" width="1000">
@@ -49,8 +54,10 @@ When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own agains
 
  ## Linear Attention, Highly Sparse, High-Speed Generation
 
- Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-flash-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To fully demonstrate this advantage, we conducted a end-to-end comparison between our model and top-tier competitors of similar size or performance.
- What is truly exciting is that in the comparison with Qwen3-32B, Ring-flash-linear-2.0 demonstrates a remarkable advantage in inference efficiency. During the prefill phase, when the context length exceeds 32k, its throughput approaches 5 times that of the former. Its performance in the high-concurrency decoding phase is even more impressive, when generating a length of 32k, Ring-flash-linear-2.0 already boasts a significant throughput advantage of 4 times. When the generated length reaches 64k, this advantage surges to nearly 10 times!
+ Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-flash-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency.
+ To fully demonstrate this advantage, we conducted a comparison between our model and top-tier competitors of similar size or performance.
+ The results clearly demonstrate the advantage of our model in inference efficiency.
+
 
  <div style="display: flex; justify-content: center; align-items: flex-start; gap: 20px;">
  <div style="text-align: center;">
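A note on the updated intro's sparsity wording: a "1/32 expert activation ratio" is the standard sparse-MoE setup in which a router selects a few experts per token, so only a small slice of the total parameters (the 6.1B activated parameters quoted above) does work for any given token. The sketch below is a generic top-k routing layer in PyTorch, not Ring-flash-linear-2.0's actual implementation; the expert count, top-k value, and layer sizes are illustrative assumptions picked so that 2 of 64 experts (1/32) fire per token.

```python
# Illustrative sparse-MoE routing sketch -- NOT the Ring-flash-linear-2.0 code.
# Expert count, top-k and sizes are made-up values chosen so that 2 of 64
# experts (a 1/32 activation ratio) run per token.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySparseMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=64, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model),
                           nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(n_experts)]
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the remaining experts
        # hold parameters but contribute no compute, which is why the activated
        # parameter count, not the total, drives the per-token cost.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out


x = torch.randn(8, 64)
print(TinySparseMoE()(x).shape)  # torch.Size([8, 64]), using 2 of 64 experts per token
```

In a production MoE the per-expert Python loop would be replaced by grouped or batched expert GEMMs, but the scaling argument is the same: compute follows the activated experts rather than the total parameter count.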
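On the revised efficiency section's "near-linear time complexity and constant space complexity": the usual linear-attention argument is that decoding can be written as a recurrence over a fixed-size state, so per-step cost and memory do not grow with the generated length, unlike a softmax-attention KV cache. The sketch below shows only that generic recurrence (one common linear-attention formulation with a positive feature map), not the model's actual hybrid-attention kernels, and the head dimension is an arbitrary assumption.

```python
# Generic single-head linear-attention decode recurrence -- an intuition aid,
# not Ring-flash-linear-2.0's kernel. With feature map phi, step t computes
#   S_t = S_{t-1} + outer(phi(k_t), v_t),   z_t = z_{t-1} + phi(k_t),
#   o_t = (phi(q_t) @ S_t) / (phi(q_t) @ z_t).
# S and z have fixed shape, so memory is constant and each step costs O(d^2),
# independent of how many tokens have already been generated.
import torch
import torch.nn.functional as F

d = 8                                    # head dimension (illustrative)
phi = lambda x: F.elu(x) + 1.0           # a common positive feature map

S = torch.zeros(d, d)                    # running sum of outer(phi(k), v)
z = torch.zeros(d)                       # running sum of phi(k)

for t in range(1000):                    # decode 1000 steps with constant memory
    q, k, v = torch.randn(3, d)          # stand-ins for this step's projections
    S = S + torch.outer(phi(k), v)
    z = z + phi(k)
    o = (phi(q) @ S) / (phi(q) @ z + 1e-6)   # this step's attention output, shape (d,)

print(S.shape, z.shape, o.shape)         # state size never grows with t
```

A softmax-attention layer, by contrast, must keep every past key and value, so its decode-time memory and per-step cost grow with the sequence; that is the gap a hybrid linear/standard design aims to exploit at long generation lengths.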