Update README.md

We are excited to announce the official open-source release of Ring-flash-linear-2.0!

Building on the success of our Ling 2.0 series, this model continues to leverage a powerful hybrid architecture of linear and standard attention, perfectly balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-flash-linear-2.0 achieves the performance of a 40B dense model while activating only 6.1B parameters. The model was converted from [Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) and further trained on an additional 1T tokens.
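
To make the sparsity concrete, here is a minimal sketch of top-k expert routing in PyTorch. All sizes below are illustrative assumptions chosen only so that top_k / num_experts = 1/32; they are not the model's actual configuration.

```python
# Minimal top-k MoE routing sketch. Sizes are illustrative assumptions,
# chosen only so that top_k / num_experts = 8 / 256 = 1/32.
import torch
import torch.nn.functional as F

num_experts, top_k = 256, 8     # assumed: yields the 1/32 activation ratio
hidden, expert_dim = 2048, 512  # assumed layer sizes (not the real config)

router = torch.nn.Linear(hidden, num_experts, bias=False)
experts = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(hidden, expert_dim),
        torch.nn.SiLU(),
        torch.nn.Linear(expert_dim, hidden),
    )
    for _ in range(num_experts)
)

def moe_forward(x: torch.Tensor) -> torch.Tensor:
    """x: (tokens, hidden). Each token runs through only top_k experts."""
    weights, idx = F.softmax(router(x), dim=-1).topk(top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for w, e in zip(weights[t], idx[t]):
            out[t] += w * experts[int(e)](x[t])  # only 1/32 of experts fire
    return out

print(moe_forward(torch.randn(4, hidden)).shape)  # torch.Size([4, 2048])

# Parameters used per token vs. parameters stored overall:
per_expert = sum(p.numel() for p in experts[0].parameters())
print(f"active: {top_k * per_expert:,} / stored: {num_experts * per_expert:,}")
```

The production router is more involved than this sketch (which also omits the MTP layers mentioned above), but the scaling point is the same: per-token compute grows with top_k, not with num_experts.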

When it comes to benchmarks, Ring-flash-linear-2.0 not only holds its own against standard attention models (like Ring-flash-2.0) but also outperforms other open-source MoE and dense models in its class on several demanding tasks. Plus, with support for a 128k long context, it's faster and more precise than ever, especially when handling long-form inputs and outputs.

<div style="display: flex; justify-content: center;">
</div>

## Evaluation

To better demonstrate the model's capabilities, we selected representative open-source thinking models and closed-source APIs for comparison.
We present results on several challenging reasoning benchmarks spanning domains such as mathematics, coding, and science. We also evaluate the model's performance on a creative writing task (Creative Writing v3).
We observe that our model achieves performance on par with these leading models.

<div style="display: flex; justify-content: center;">
<div style="text-align: center;">
<img src="https://mdn.alipayobjects.com/huamei_t783ie/afts/img/mc1wSo7zHV4AAAAARHAAAAgADgCDAQFr/original" width="1000">
</div>
</div>

## Linear Attention, Highly Sparse, High-Speed Generation

Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-flash-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency.
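
The intuition fits in a few lines. Below is a minimal sketch of the basic linear-attention recurrence, in a simplified textbook form that omits the decay/gating and normalization a production kernel would use: the running state has a fixed size, so memory stays constant and each new token costs the same amount of work regardless of how long the context already is.

```python
# Simplified linear-attention recurrence (illustrative only; real kernels
# add decay/gating, normalization, and chunked parallel scans).
import torch

def linear_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):
    """q, k, v: (seq_len, d) -> outputs: (seq_len, d)."""
    d = q.shape[-1]
    S = torch.zeros(d, d)               # constant-size state, independent of seq_len
    outputs = []
    for q_t, k_t, v_t in zip(q, k, v):
        S = S + torch.outer(k_t, v_t)   # S_t = S_{t-1} + k_t v_t^T
        outputs.append(q_t @ S)         # o_t = q_t^T S_t, O(d^2) per token
    return torch.stack(outputs)

q, k, v = (torch.randn(16, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([16, 64])
```

Standard attention, by contrast, must keep a KV cache that grows with the sequence and rescan it for every new token; the hybrid design reserves that cost for only a subset of layers.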

To fully demonstrate this advantage, we compared our model against top-tier competitors of similar size or performance.
The results clearly show our model's advantage in inference efficiency.
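
For a rough sense of generation speed on your own hardware (this is not the benchmark setup used for the figures below), a minimal `transformers` sketch follows. The repo id is assumed from this model card, and `trust_remote_code=True` is assumed to be needed for the custom architecture.

```python
# Rough throughput check; assumes the repo id below and a GPU with enough
# memory (device_map="auto" also requires the accelerate package).
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ring-flash-linear-2.0"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain the advantage of linear attention in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=256)
elapsed = time.perf_counter() - start

new_tokens = output.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```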

<div style="display: flex; justify-content: center; align-items: flex-start; gap: 20px;">
<div style="text-align: center;">