ZzzHelloWorld committed · verified
Commit fabf9c5 · 1 Parent(s): 003c62d

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. README.md +139 -0
  2. Shapegrid/ShapeGrid_area.tsv +0 -0
  3. Shapegrid/ShapeGrid_loc.tsv +0 -0
  4. Sudoku/ShapeGrid_sudoku.tsv +0 -0
  5. VLMEvalKit-sudoku/.env +31 -0
  6. VLMEvalKit-sudoku/.pre-commit-config.yaml +43 -0
  7. VLMEvalKit-sudoku/LICENSE +203 -0
  8. VLMEvalKit-sudoku/README.md +155 -0
  9. VLMEvalKit-sudoku/eval.sh +7 -0
  10. VLMEvalKit-sudoku/requirements.txt +40 -0
  11. VLMEvalKit-sudoku/vlmeval/dataset/CGAVCounting/__init__.py +0 -0
  12. VLMEvalKit-sudoku/vlmeval/dataset/CGAVCounting/__pycache__/utils.cpython-310.pyc +0 -0
  13. VLMEvalKit-sudoku/vlmeval/dataset/CGAVCounting/requirements.txt +2 -0
  14. VLMEvalKit-sudoku/vlmeval/dataset/EgoExoBench/__init__.py +1 -0
  15. VLMEvalKit-sudoku/vlmeval/dataset/EgoExoBench/__pycache__/egoexobench.cpython-310.pyc +0 -0
  16. VLMEvalKit-sudoku/vlmeval/dataset/GUI/__pycache__/screenspot.cpython-310.pyc +0 -0
  17. VLMEvalKit-sudoku/vlmeval/dataset/GUI/screenspot_v2.py +208 -0
  18. VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/__init__.py +0 -0
  19. VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/__pycache__/omnidocbench.cpython-310.pyc +0 -0
  20. VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/metrics.py +486 -0
  21. VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/omnidocbench.py +551 -0
  22. VLMEvalKit-sudoku/vlmeval/dataset/mmmath.py +459 -0
  23. VLMEvalKit-sudoku/vlmeval/dataset/utils/bmmr_grade.py +470 -0
  24. VLMEvalKit-sudoku/vlmeval/dataset/utils/megabench/scoring/xml_nbbox_iou.py +33 -0
  25. VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/__init__.cpython-310.pyc +0 -0
  26. VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/idefics.cpython-310.pyc +0 -0
  27. VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/phi3_vision.cpython-310.pyc +0 -0
  28. VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/points.cpython-310.pyc +0 -0
  29. VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/smolvlm.cpython-310.pyc +0 -0
  30. VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/transcore_m.cpython-310.pyc +0 -0
  31. VLMEvalKit-sudoku/vlmeval/vlm/granite_vision/__pycache__/__init__.cpython-310.pyc +0 -0
  32. VLMEvalKit-sudoku/vlmeval/vlm/granite_vision/__pycache__/granite_vision.cpython-310.pyc +0 -0
  33. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/__pycache__/prompt.cpython-310.pyc +0 -0
  34. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/__init__.py +1 -0
  35. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/__init__.py +1 -0
  36. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/vision_encoder/__init__.py +5 -0
  37. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/vision_encoder/qwen_vit/__init__.py +2 -0
  38. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/vision_encoder/qwen_vit/configuration_qwen_vit.py +56 -0
  39. VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/utils.py +16 -0
  40. VLMEvalKit-sudoku/vlmeval/vlm/internvl/__init__.py +3 -0
  41. VLMEvalKit-sudoku/vlmeval/vlm/internvl/__pycache__/__init__.cpython-310.pyc +0 -0
  42. VLMEvalKit-sudoku/vlmeval/vlm/internvl/__pycache__/internvl_chat.cpython-310.pyc +0 -0
  43. VLMEvalKit-sudoku/vlmeval/vlm/internvl/__pycache__/utils.cpython-310.pyc +0 -0
  44. VLMEvalKit-sudoku/vlmeval/vlm/internvl/utils.py +312 -0
  45. VLMEvalKit-sudoku/vlmeval/vlm/llava/__pycache__/__init__.cpython-310.pyc +0 -0
  46. VLMEvalKit-sudoku/vlmeval/vlm/llava/__pycache__/llava.cpython-310.pyc +0 -0
  47. VLMEvalKit-sudoku/vlmeval/vlm/llava/__pycache__/llava_xtuner.cpython-310.pyc +0 -0
  48. VLMEvalKit-sudoku/vlmeval/vlm/minicpm_v.py +1271 -0
  49. VLMEvalKit-sudoku/vlmeval/vlm/misc/minigpt4_7b_eval.yaml +38 -0
  50. VLMEvalKit-sudoku/vlmeval/vlm/ola/__pycache__/__init__.cpython-310.pyc +0 -0
README.md ADDED
@@ -0,0 +1,139 @@
1
+ <div align="center">
2
+
3
+ # LLaVA-UHD-v3 Pilot Experiment
4
+
5
+ **PROGRESSIVE VISUAL COMPRESSION FOR EFFICIENT NAIVE-RESOLUTION ENCODING IN MLLMS**
6
+
7
+ [📄 OpenReview](https://openreview.net/pdf/3bd376fce3e8ff071bfd2f7b509f651553e2cb38.pdf) | [💻 Github](https://github.com/Sishxo/LLaVA-UHD-v3/tree/master?tab=readme-ov-file)
8
+ </div>
9
+
10
+ Here we introduce the benchmarks used in the preliminary experiments of LLaVA-UHD-v3 (ShapeGrid, ShapeGrid-Sudoku, and Appendix-Sudoku), along with the related plotting code, the inference code for the pilot-experiment models, and the resulting model outputs.
11
+
12
+ ## Summary of Preliminary Experiments
13
+
14
+ The pilot experiment is designed to systematically compare the performance of Global Naive-Resolution Encoding ([GNE](https://huggingface.co/ZzzHelloWorld/llava-uhd-final/tree/main)) against Slice-Based Encoding ([SBE](https://huggingface.co/ZzzHelloWorld/llava_uhd_resampler_query_49)) in multimodal models. Through controlled experiments on general benchmarks and a synthetic dataset (ShapeGrid) created specifically to test spatial perception, the study finds that GNE significantly outperforms SBE in both semantic understanding and spatial reasoning. To further investigate the advantages of GNE, the experiment introduces the ShapeGrid-Sudoku dataset. Querying the model on the position of patterns in a 3x3 grid relative to a central pentagram reveals that SBE exhibits a systematic "cross-shaped" directional bias stemming from its slicing mechanism: partitioning the image disrupts the spatial continuity of attention. This result demonstrates the advantage of global encoding in preserving visual holism and highlights the need for a visual encoding method that is both efficient and global.
15
+
16
+ ## 🔥ShapeGrid benchmark
17
+ The `ShapeGrid` benchmark includes questions about distance, area, location, and count involving various random shapes, aiming to specifically evaluate the model’s spatial perception ability.
18
+
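The benchmark data ships in this commit as TSV files (`Shapegrid/ShapeGrid_area.tsv`, `Shapegrid/ShapeGrid_loc.tsv`, and `Sudoku/ShapeGrid_sudoku.tsv`). Below is a minimal sketch for peeking at them, assuming they follow the usual tab-separated VLMEvalKit layout; the exact column names are not documented here, so the sketch prints them rather than assuming them.

```python
# Minimal sketch: inspect the ShapeGrid TSVs shipped with this commit.
# Typical VLMEvalKit TSVs carry fields such as index, image, question, and answer,
# but verify against the actual files.
import pandas as pd

for path in [
    "Shapegrid/ShapeGrid_area.tsv",
    "Shapegrid/ShapeGrid_loc.tsv",
    "Sudoku/ShapeGrid_sudoku.tsv",
]:
    df = pd.read_csv(path, sep="\t")
    print(path, df.shape)
    print(df.columns.tolist())
```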
19
+ <p align="center">
20
+ <img src="figs/ShapeGrid.png" width="400" height="320">
21
+ </p>
22
+
23
+ Performance comparison between global naive-resolution encoding (GNE) and slice-based encoding (SBE) across general benchmarks and the ShapeGrid subsets. GNE outperforms SBE by a clear margin on both the general benchmarks and the ShapeGrid subsets.
24
+
25
+ <div align="center">
26
+
27
+ <table style="color:black;">
28
+ <thead>
29
+ <tr style="background-color:#D0E8E2">
30
+ <th>Model</th>
31
+ <th>Distance</th>
32
+ <th>Count</th>
33
+ <th>Location</th>
34
+ <th>Area</th>
35
+ </tr>
36
+ </thead>
37
+ <tbody>
38
+ <tr style="background-color:#EDF3F1">
39
+ <td>GNE</td>
40
+ <td>60.4</td>
41
+ <td>71.2</td>
42
+ <td>73.5</td>
43
+ <td>89.2</td>
44
+ </tr>
45
+ <tr style="background-color:#EDF3F1">
46
+ <td>SBE</td>
47
+ <td>51.3</td>
48
+ <td>55.7</td>
49
+ <td>64.7</td>
50
+ <td>78.7</td>
51
+ </tr>
52
+ </tbody>
53
+ </table>
54
+
55
+ </div>
56
+
57
+ <div align="center">
58
+
59
+ <table style="color:black;">
60
+ <thead>
61
+ <tr style="background-color:#C2CAF0">
62
+ <th>Model</th>
63
+ <th>MMStar</th>
64
+ <th>SEED</th>
65
+ <th>MMBench</th>
66
+ <th>MME</th>
67
+ </tr>
68
+ </thead>
69
+ <tbody>
70
+ <tr style="background-color:#EFF1FB">
71
+ <td>GNE</td>
72
+ <td>51.0</td>
73
+ <td>74.0</td>
74
+ <td>74.8</td>
75
+ <td>78.6</td>
76
+ </tr>
77
+ <tr style="background-color:#EFF1FB">
78
+ <td>SBE</td>
79
+ <td>47.7</td>
80
+ <td>72.4</td>
81
+ <td>72.8</td>
82
+ <td>77.3</td>
83
+ </tr>
84
+ </tbody>
85
+ </table>
86
+
87
+ </div>
88
+
89
+ ## 🔥ShapeGrid-Sudoku benchmark
90
+ To precisely evaluate spatial directional awareness, the pilot experiment introduced a "`Sudoku`-style" dataset. Each image consists of a 3x3 grid with a fixed central anchor surrounded by random objects. The model is tasked with identifying the direction of a target object relative to the center, a design that isolates directional localization for a clear and independent assessment.
91
+
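For reference, the cell-to-direction mapping this design implies can be written down directly. The sketch below is only an illustration of that mapping; the indexing convention and the exact answer wording used in `Sudoku/ShapeGrid_sudoku.tsv` are assumptions.

```python
# Sketch: direction of each cell in the 3x3 grid relative to the central anchor.
# Cells are 0-indexed from the top-left; (1, 1) is the center. The actual answer
# strings in the benchmark may be phrased differently.
DIRECTIONS = {
    (0, 0): "top-left",    (0, 1): "top",    (0, 2): "top-right",
    (1, 0): "left",        (1, 1): "center", (1, 2): "right",
    (2, 0): "bottom-left", (2, 1): "bottom", (2, 2): "bottom-right",
}

def direction_of(row: int, col: int) -> str:
    """Return the direction of cell (row, col) relative to the center cell."""
    return DIRECTIONS[(row, col)]
```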
92
+ <p align="center">
93
+ <img src="figs/Sudoku.png" width="270" height="200">
94
+ </p>
95
+ The results revealed a stark contrast between the methods. Global Naive-Resolution Encoding (GNE) achieved high, balanced accuracy across all directions, indicating unbiased spatial understanding. In contrast, Slice-Based Encoding (SBE) exhibited a systematic "cross-shaped" bias, with significantly lower accuracy for objects directly above, below, left, and right of the center. This flaw was attributed to SBE's slicing mechanism disrupting spatial continuity and leading to uneven attention, strongly validating the critical advantage of global encoding in preserving visual holism.
96
+
97
+ <p align="center">
98
+ <img src="figs/sudoku_result.png" width="450" height="250">
99
+ </p>
100
+
101
+ ## 🔥Appendix-Sudoku benchmark
102
+ To verify whether global naive-resolution visual encoding and slice-based encoding show the same patterns on the Sudoku subset as observed in the pilot experiment, we further evaluate widely used models: Qwen2.5-VL, representing GNE, and MiniCPM-o 2.6, representing SBE. Since these models are stronger overall, we adopt the more challenging ShapeGrid-Sudoku subset.
103
+
104
+ <p align="center">
105
+ <img src="figs/appendix_sudoku.png" width="270" height="200">
106
+ </p>
107
+
108
+ Qwen2.5-VL achieves consistently high accuracy across all positions in the Sudoku subset, whereas MiniCPM-o 2.6 exhibits lower accuracy in the top and right positions.
109
+
110
+ <p align="center">
111
+ <img src="figs/appendix_sudoku_result.png" width="450" height="250">
112
+ </p>
113
+
114
+ ## Reproducing the Pilot Experiment
115
+ To reproduce the results of the pilot experiment, first download the checkpoints of [GNE](https://huggingface.co/ZzzHelloWorld/llava-uhd-final) and [SBE](https://huggingface.co/ZzzHelloWorld/llava_uhd_resampler_query_49). The evaluation script is in `VLMEvalKit-sudoku`; add the corresponding files to the official VLMEvalKit project before testing. For details of data organization, please refer to [here](https://github.com/open-compass/VLMEvalKit) for help.
116
+ We provide the script needed to run the same tests.
117
+
118
+ You can start the inference by performing the following steps.
119
+ ```bash
120
+ cd ./VLMEvalKit-sudoku
121
+ bash eval.sh
122
+ ```
123
+
124
+ We also provide code for plotting heatmaps of model answer accuracy: the Sudoku results are generated with `heatmap.py`, and the Appendix-Sudoku results with `heatmap_appendix.py`. The inference results of GNE, SBE, MiniCPM-o 2.6, and Qwen2.5-VL can be found in `eval_results`.
125
+
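`heatmap.py` and `heatmap_appendix.py` themselves are not shown in this 50-file view, so the following is only a minimal sketch of how a per-position accuracy heatmap could be drawn once the accuracy of each outer cell has been computed from the files in `eval_results`; the numbers below are placeholders, not measured results.

```python
# Sketch: 3x3 accuracy heatmap for the Sudoku subset. Values are placeholders;
# real numbers come from the inference results stored in eval_results.
import numpy as np
import matplotlib.pyplot as plt

acc = np.array([
    [0.90, 0.55, 0.88],    # top-left, top, top-right
    [0.60, np.nan, 0.58],  # left, center (anchor cell, no question), right
    [0.87, 0.52, 0.91],    # bottom-left, bottom, bottom-right
])

fig, ax = plt.subplots(figsize=(4, 4))
im = ax.imshow(acc, vmin=0.0, vmax=1.0, cmap="viridis")
for (i, j), v in np.ndenumerate(acc):
    if not np.isnan(v):
        ax.text(j, i, f"{v:.2f}", ha="center", va="center", color="white")
ax.set_xticks([])
ax.set_yticks([])
fig.colorbar(im, ax=ax, shrink=0.8)
fig.savefig("sudoku_heatmap.png", dpi=200)
```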
126
+ ## Citation
127
+
128
+ If you find LLaVA-UHD-v3 useful for your research and applications, please cite using this BibTeX:
129
+ ```bibtex
130
+ @inproceedings{anonymous2025llavauhd,
131
+ title={{LL}a{VA}-{UHD} v3: Progressive Visual Compression for Efficient Naive-Resolution Encoding in {MLLM}s},
132
+ author={Anonymous},
133
+ booktitle={Submitted to The Fourteenth International Conference on Learning Representations},
134
+ year={2025},
135
+ url={https://openreview.net/forum?id=T4pK6ByRit},
136
+ note={under review}
137
+ }
138
+ ```
139
+
Shapegrid/ShapeGrid_area.tsv ADDED
The diff for this file is too large to render. See raw diff
 
Shapegrid/ShapeGrid_loc.tsv ADDED
The diff for this file is too large to render. See raw diff
 
Sudoku/ShapeGrid_sudoku.tsv ADDED
The diff for this file is too large to render. See raw diff
 
VLMEvalKit-sudoku/.env ADDED
@@ -0,0 +1,31 @@
1
+ # # .env file; place it under $VLMEvalKit
2
+ # # API keys for proprietary VLMs
3
+ # # QwenVL APIs
4
+ # DASHSCOPE_API_KEY=
5
+ # # Gemini w. Google Cloud Backends
6
+ # GOOGLE_API_KEY=
7
+ # # OpenAI API
8
+ # # OPENAI_API_KEY=sk-PXKqPaLdZiIOZxeK81D94cC7E27f4d85Aa48Ec458f72A981
9
+ # # OPENAI_API_BASE=https://yeysai.com/v1
10
+ # OPENAI_API_KEY=
11
+ # OPENAI_API_BASE=
12
+ # # StepAI API
13
+ # STEPAI_API_KEY=
14
+ # # REKA API
15
+ # REKA_API_KEY=
16
+ # # GLMV API
17
+ # GLMV_API_KEY=
18
+ # # CongRong API
19
+ # CW_API_BASE=
20
+ # CW_API_KEY=
21
+ # # SenseChat-V API
22
+ # SENSECHAT_AK=
23
+ # SENSECHAT_SK=
24
+ # # Hunyuan-Vision API
25
+ # HUNYUAN_SECRET_KEY=
26
+ # HUNYUAN_SECRET_ID=
27
+ # # LMDeploy API
28
+ # LMDEPLOY_API_BASE=
29
+ # # You can set a proxy for evaluation; API calls made during the evaluation stage will go through this proxy
30
+ # EVAL_PROXY=
31
+ LMUData=/root/LMUData
VLMEvalKit-sudoku/.pre-commit-config.yaml ADDED
@@ -0,0 +1,43 @@
1
+ exclude: |
2
+ (?x)^(
3
+ scripts/|
4
+ assets/|
5
+ vlmeval/config.py |
6
+ vlmeval/dataset/utils/wemath.py |
7
+ vlmeval/dataset/OmniDocBench/ |
8
+ vlmeval/dataset/utils/megabench/ |
9
+ vlmeval/dataset/utils/vgrpbench/ |
10
+ vlmeval/dataset/utils/chartmimic/ |
11
+ vlmeval/vlm/ola/ |
12
+ vlmeval/vlm/ursa/ |
13
+ vlmeval/vlm/ovis/ |
14
+ vlmeval/dataset/utils/mme_reasoning.py
15
+ )
16
+ repos:
17
+ - repo: https://github.com/PyCQA/flake8
18
+ rev: 6.1.0
19
+ hooks:
20
+ - id: flake8
21
+ args:
22
+ [
23
+ "--max-line-length=120",
24
+ "--ignore=F401,F403,F405,E402,E722,E741,W503,E231,E702",
25
+ ]
26
+ exclude: ^configs/
27
+ - repo: https://github.com/pre-commit/mirrors-yapf
28
+ rev: v0.30.0
29
+ hooks:
30
+ - id: yapf
31
+ args: ["--style={column_limit=120}"]
32
+ - repo: https://github.com/pre-commit/pre-commit-hooks
33
+ rev: v3.1.0
34
+ hooks:
35
+ - id: trailing-whitespace
36
+ - id: check-yaml
37
+ - id: end-of-file-fixer
38
+ - id: requirements-txt-fixer
39
+ - id: check-merge-conflict
40
+ - id: fix-encoding-pragma
41
+ args: ["--remove"]
42
+ - id: mixed-line-ending
43
+ args: ["--fix=lf"]
VLMEvalKit-sudoku/LICENSE ADDED
@@ -0,0 +1,203 @@
1
+ Copyright 2023 VLMEvalKit Authors. All rights reserved.
2
+
3
+ Apache License
4
+ Version 2.0, January 2004
5
+ http://www.apache.org/licenses/
6
+
7
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
8
+
9
+ 1. Definitions.
10
+
11
+ "License" shall mean the terms and conditions for use, reproduction,
12
+ and distribution as defined by Sections 1 through 9 of this document.
13
+
14
+ "Licensor" shall mean the copyright owner or entity authorized by
15
+ the copyright owner that is granting the License.
16
+
17
+ "Legal Entity" shall mean the union of the acting entity and all
18
+ other entities that control, are controlled by, or are under common
19
+ control with that entity. For the purposes of this definition,
20
+ "control" means (i) the power, direct or indirect, to cause the
21
+ direction or management of such entity, whether by contract or
22
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
23
+ outstanding shares, or (iii) beneficial ownership of such entity.
24
+
25
+ "You" (or "Your") shall mean an individual or Legal Entity
26
+ exercising permissions granted by this License.
27
+
28
+ "Source" form shall mean the preferred form for making modifications,
29
+ including but not limited to software source code, documentation
30
+ source, and configuration files.
31
+
32
+ "Object" form shall mean any form resulting from mechanical
33
+ transformation or translation of a Source form, including but
34
+ not limited to compiled object code, generated documentation,
35
+ and conversions to other media types.
36
+
37
+ "Work" shall mean the work of authorship, whether in Source or
38
+ Object form, made available under the License, as indicated by a
39
+ copyright notice that is included in or attached to the work
40
+ (an example is provided in the Appendix below).
41
+
42
+ "Derivative Works" shall mean any work, whether in Source or Object
43
+ form, that is based on (or derived from) the Work and for which the
44
+ editorial revisions, annotations, elaborations, or other modifications
45
+ represent, as a whole, an original work of authorship. For the purposes
46
+ of this License, Derivative Works shall not include works that remain
47
+ separable from, or merely link (or bind by name) to the interfaces of,
48
+ the Work and Derivative Works thereof.
49
+
50
+ "Contribution" shall mean any work of authorship, including
51
+ the original version of the Work and any modifications or additions
52
+ to that Work or Derivative Works thereof, that is intentionally
53
+ submitted to Licensor for inclusion in the Work by the copyright owner
54
+ or by an individual or Legal Entity authorized to submit on behalf of
55
+ the copyright owner. For the purposes of this definition, "submitted"
56
+ means any form of electronic, verbal, or written communication sent
57
+ to the Licensor or its representatives, including but not limited to
58
+ communication on electronic mailing lists, source code control systems,
59
+ and issue tracking systems that are managed by, or on behalf of, the
60
+ Licensor for the purpose of discussing and improving the Work, but
61
+ excluding communication that is conspicuously marked or otherwise
62
+ designated in writing by the copyright owner as "Not a Contribution."
63
+
64
+ "Contributor" shall mean Licensor and any individual or Legal Entity
65
+ on behalf of whom a Contribution has been received by Licensor and
66
+ subsequently incorporated within the Work.
67
+
68
+ 2. Grant of Copyright License. Subject to the terms and conditions of
69
+ this License, each Contributor hereby grants to You a perpetual,
70
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
71
+ copyright license to reproduce, prepare Derivative Works of,
72
+ publicly display, publicly perform, sublicense, and distribute the
73
+ Work and such Derivative Works in Source or Object form.
74
+
75
+ 3. Grant of Patent License. Subject to the terms and conditions of
76
+ this License, each Contributor hereby grants to You a perpetual,
77
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
78
+ (except as stated in this section) patent license to make, have made,
79
+ use, offer to sell, sell, import, and otherwise transfer the Work,
80
+ where such license applies only to those patent claims licensable
81
+ by such Contributor that are necessarily infringed by their
82
+ Contribution(s) alone or by combination of their Contribution(s)
83
+ with the Work to which such Contribution(s) was submitted. If You
84
+ institute patent litigation against any entity (including a
85
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
86
+ or a Contribution incorporated within the Work constitutes direct
87
+ or contributory patent infringement, then any patent licenses
88
+ granted to You under this License for that Work shall terminate
89
+ as of the date such litigation is filed.
90
+
91
+ 4. Redistribution. You may reproduce and distribute copies of the
92
+ Work or Derivative Works thereof in any medium, with or without
93
+ modifications, and in Source or Object form, provided that You
94
+ meet the following conditions:
95
+
96
+ (a) You must give any other recipients of the Work or
97
+ Derivative Works a copy of this License; and
98
+
99
+ (b) You must cause any modified files to carry prominent notices
100
+ stating that You changed the files; and
101
+
102
+ (c) You must retain, in the Source form of any Derivative Works
103
+ that You distribute, all copyright, patent, trademark, and
104
+ attribution notices from the Source form of the Work,
105
+ excluding those notices that do not pertain to any part of
106
+ the Derivative Works; and
107
+
108
+ (d) If the Work includes a "NOTICE" text file as part of its
109
+ distribution, then any Derivative Works that You distribute must
110
+ include a readable copy of the attribution notices contained
111
+ within such NOTICE file, excluding those notices that do not
112
+ pertain to any part of the Derivative Works, in at least one
113
+ of the following places: within a NOTICE text file distributed
114
+ as part of the Derivative Works; within the Source form or
115
+ documentation, if provided along with the Derivative Works; or,
116
+ within a display generated by the Derivative Works, if and
117
+ wherever such third-party notices normally appear. The contents
118
+ of the NOTICE file are for informational purposes only and
119
+ do not modify the License. You may add Your own attribution
120
+ notices within Derivative Works that You distribute, alongside
121
+ or as an addendum to the NOTICE text from the Work, provided
122
+ that such additional attribution notices cannot be construed
123
+ as modifying the License.
124
+
125
+ You may add Your own copyright statement to Your modifications and
126
+ may provide additional or different license terms and conditions
127
+ for use, reproduction, or distribution of Your modifications, or
128
+ for any such Derivative Works as a whole, provided Your use,
129
+ reproduction, and distribution of the Work otherwise complies with
130
+ the conditions stated in this License.
131
+
132
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
133
+ any Contribution intentionally submitted for inclusion in the Work
134
+ by You to the Licensor shall be under the terms and conditions of
135
+ this License, without any additional terms or conditions.
136
+ Notwithstanding the above, nothing herein shall supersede or modify
137
+ the terms of any separate license agreement you may have executed
138
+ with Licensor regarding such Contributions.
139
+
140
+ 6. Trademarks. This License does not grant permission to use the trade
141
+ names, trademarks, service marks, or product names of the Licensor,
142
+ except as required for reasonable and customary use in describing the
143
+ origin of the Work and reproducing the content of the NOTICE file.
144
+
145
+ 7. Disclaimer of Warranty. Unless required by applicable law or
146
+ agreed to in writing, Licensor provides the Work (and each
147
+ Contributor provides its Contributions) on an "AS IS" BASIS,
148
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
149
+ implied, including, without limitation, any warranties or conditions
150
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
151
+ PARTICULAR PURPOSE. You are solely responsible for determining the
152
+ appropriateness of using or redistributing the Work and assume any
153
+ risks associated with Your exercise of permissions under this License.
154
+
155
+ 8. Limitation of Liability. In no event and under no legal theory,
156
+ whether in tort (including negligence), contract, or otherwise,
157
+ unless required by applicable law (such as deliberate and grossly
158
+ negligent acts) or agreed to in writing, shall any Contributor be
159
+ liable to You for damages, including any direct, indirect, special,
160
+ incidental, or consequential damages of any character arising as a
161
+ result of this License or out of the use or inability to use the
162
+ Work (including but not limited to damages for loss of goodwill,
163
+ work stoppage, computer failure or malfunction, or any and all
164
+ other commercial damages or losses), even if such Contributor
165
+ has been advised of the possibility of such damages.
166
+
167
+ 9. Accepting Warranty or Additional Liability. While redistributing
168
+ the Work or Derivative Works thereof, You may choose to offer,
169
+ and charge a fee for, acceptance of support, warranty, indemnity,
170
+ or other liability obligations and/or rights consistent with this
171
+ License. However, in accepting such obligations, You may act only
172
+ on Your own behalf and on Your sole responsibility, not on behalf
173
+ of any other Contributor, and only if You agree to indemnify,
174
+ defend, and hold each Contributor harmless for any liability
175
+ incurred by, or claims asserted against, such Contributor by reason
176
+ of your accepting any such warranty or additional liability.
177
+
178
+ END OF TERMS AND CONDITIONS
179
+
180
+ APPENDIX: How to apply the Apache License to your work.
181
+
182
+ To apply the Apache License to your work, attach the following
183
+ boilerplate notice, with the fields enclosed by brackets "[]"
184
+ replaced with your own identifying information. (Don't include
185
+ the brackets!) The text should be enclosed in the appropriate
186
+ comment syntax for the file format. We also recommend that a
187
+ file or class name and description of purpose be included on the
188
+ same "printed page" as the copyright notice for easier
189
+ identification within third-party archives.
190
+
191
+ Copyright 2023 VLMEvalKit Authors.
192
+
193
+ Licensed under the Apache License, Version 2.0 (the "License");
194
+ you may not use this file except in compliance with the License.
195
+ You may obtain a copy of the License at
196
+
197
+ http://www.apache.org/licenses/LICENSE-2.0
198
+
199
+ Unless required by applicable law or agreed to in writing, software
200
+ distributed under the License is distributed on an "AS IS" BASIS,
201
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
202
+ See the License for the specific language governing permissions and
203
+ limitations under the License.
VLMEvalKit-sudoku/README.md ADDED
@@ -0,0 +1,155 @@
1
+ ![LOGO](http://opencompass.openxlab.space/utils/MMLB.jpg)
2
+
3
+ <b>A Toolkit for Evaluating Large Vision-Language Models. </b>
4
+
5
+ [![][github-contributors-shield]][github-contributors-link] • [![][github-forks-shield]][github-forks-link] • [![][github-stars-shield]][github-stars-link] • [![][github-issues-shield]][github-issues-link] • [![][github-license-shield]][github-license-link]
6
+
7
+ English | [简体中文](/docs/zh-CN/README_zh-CN.md) | [日本語](/docs/ja/README_ja.md)
8
+
9
+ <a href="https://rank.opencompass.org.cn/leaderboard-multimodal">🏆 OC Leaderboard</a> •
10
+ <a href="#%EF%B8%8F-quickstart">🏗️Quickstart </a> •
11
+ <a href="#-datasets-models-and-evaluation-results">📊Datasets & Models </a> •
12
+ <a href="#%EF%B8%8F-development-guide">🛠️Development </a>
13
+
14
+ <a href="https://huggingface.co/spaces/opencompass/open_vlm_leaderboard">🤗 HF Leaderboard</a> •
15
+ <a href="https://huggingface.co/datasets/VLMEval/OpenVLMRecords">🤗 Evaluation Records</a> •
16
+ <a href="https://huggingface.co/spaces/opencompass/openvlm_video_leaderboard">🤗 HF Video Leaderboard</a> •
17
+
18
+ <a href="https://discord.gg/evDT4GZmxN">🔊 Discord</a> •
19
+ <a href="https://www.arxiv.org/abs/2407.11691">📝 Report</a> •
20
+ <a href="#-the-goal-of-vlmevalkit">🎯Goal </a> •
21
+ <a href="#%EF%B8%8F-citation">🖊️Citation </a>
22
+ </div>
23
+
24
+ **VLMEvalKit** (the python package name is **vlmeval**) is an **open-source evaluation toolkit** of **large vision-language models (LVLMs)**. It enables **one-command evaluation** of LVLMs on various benchmarks, without the heavy workload of data preparation under multiple repositories. In VLMEvalKit, we adopt **generation-based evaluation** for all LVLMs, and provide the evaluation results obtained with both **exact matching** and **LLM-based answer extraction**.
25
+
26
+ ## Recent Codebase Changes
27
+ - **[2025-09-12]** **Major Update: Improved Handling for Models with Thinking Mode**
28
+
29
+ [PR 1229](https://github.com/open-compass/VLMEvalKit/pull/1175) introduces a feature that improves support for models with thinking mode. VLMEvalKit now allows the use of a custom `split_thinking` function. **We strongly recommend this for models with thinking mode to ensure the accuracy of evaluation**. To use this functionality, enable the setting `SPLIT_THINK=True`. By default, the function parses content within `<think>...</think>` tags and stores it in the `thinking` key of the output. For more advanced customization, you can also create a `split_think` function for your model; see the InternVL implementation for an example. (A rough sketch of the default behaviour appears after this list.)
30
+ - **[2025-09-12]** **Major Update: Improved Handling for Long Responses (More than 16k/32k)**
31
+
32
+ [PR 1229](https://github.com/open-compass/VLMEvalKit/pull/1175) introduces a feature that improves support for models with long responses. VLMEvalKit can now save prediction files in TSV format. **Since individual cells in an `.xlsx` file are limited to 32,767 characters, we strongly recommend using this feature for models that generate long responses (e.g., exceeding 16k or 32k tokens) to prevent data truncation.** To use this functionality, enable the setting `PRED_FORMAT=tsv`.
33
+ - **[2025-08-04]** In [PR 1175](https://github.com/open-compass/VLMEvalKit/pull/1175), we refine `can_infer_option` and `can_infer_text`, which route the evaluation to LLM choice extractors more often and empirically lead to slight performance improvements on MCQ benchmarks.
34
+
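As a rough illustration of the default `<think>...</think>` handling described in the thinking-mode item above, a splitter could look like the sketch below. This is not the VLMEvalKit implementation; the actual helper's name, signature, and output schema may differ.

```python
# Sketch of the default thinking-mode split: extract the <think>...</think>
# segment from a raw response and keep the remainder as the answer.
import re

def split_think(response: str) -> dict:
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return {"thinking": "", "answer": response.strip()}
    thinking = match.group(1).strip()
    answer = (response[:match.start()] + response[match.end():]).strip()
    return {"thinking": thinking, "answer": answer}

print(split_think("<think>count the rows first</think>The answer is B."))
# -> {'thinking': 'count the rows first', 'answer': 'The answer is B.'}
```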
35
+ ## 🆕 News
36
+ - **[2025-07-07]** Supported [**SeePhys**](https://seephys.github.io/), a full-spectrum multimodal benchmark for evaluating physics reasoning across different knowledge levels, thanks to [**Quinn777**](https://github.com/Quinn777) 🔥🔥🔥
37
+ - **[2025-07-02]** Supported [**OvisU1**](https://huggingface.co/AIDC-AI/Ovis-U1-3B), thanks to [**liyang-7**](https://github.com/liyang-7) 🔥🔥🔥
38
+ - **[2025-06-16]** Supported [**PhyX**](https://phyx-bench.github.io/), a benchmark aiming to assess capacity for physics-grounded reasoning in visual scenarios. 🔥🔥🔥
39
+ - **[2025-05-24]** To facilitate faster evaluations for large-scale or thinking models, **VLMEvalKit supports multi-node distributed inference** using **LMDeploy** (supports *InternVL Series, QwenVL Series, LLaMa4*) or **VLLM**(supports *QwenVL Series, LLaMa4*). You can activate this feature by adding the ```use_lmdeploy``` or ```use_vllm``` flag to your custom model configuration in [config.py](vlmeval/config.py) . Leverage these tools to significantly speed up your evaluation workflows 🔥🔥🔥
40
+ - **[2025-05-24]** Supported Models: **InternVL3 Series, Gemini-2.5-Pro, Kimi-VL, LLaMA4, NVILA, Qwen2.5-Omni, Phi4, SmolVLM2, Grok, SAIL-VL-1.5, WeThink-Qwen2.5VL-7B, Bailingmm, VLM-R1, Taichu-VLR**. Supported Benchmarks: **HLE-Bench, MMVP, MM-AlignBench, Creation-MMBench, MM-IFEval, OmniDocBench, OCR-Reasoning, EMMA, ChaXiv,MedXpertQA, Physics, MSEarthMCQ, MicroBench, MMSci, VGRP-Bench, wildDoc, TDBench, VisuLogic, CVBench, LEGO-Puzzles, Video-MMLU, QBench-Video, MME-CoT, VLM2Bench, VMCBench, MOAT, Spatial457 Benchmark**. Please refer to [**VLMEvalKit Features**](https://aicarrier.feishu.cn/wiki/Qp7wwSzQ9iK1Y6kNUJVcr6zTnPe?table=tblsdEpLieDoCxtb) for more details. Thanks to all contributors 🔥🔥🔥
41
+ - **[2025-02-20]** Supported Models: **InternVL2.5 Series, Qwen2.5VL Series, QVQ-72B, Doubao-VL, Janus-Pro-7B, MiniCPM-o-2.6, InternVL2-MPO, LLaVA-CoT, Hunyuan-Standard-Vision, Ovis2, Valley, SAIL-VL, Ross, Long-VITA, EMU3, SmolVLM**. Supported Benchmarks: **MMMU-Pro, WeMath, 3DSRBench, LogicVista, VL-RewardBench, CC-OCR, CG-Bench, CMMMU, WorldSense**. Thanks to all contributors 🔥🔥🔥
42
+ - **[2024-12-11]** Supported [**NaturalBench**](https://huggingface.co/datasets/BaiqiL/NaturalBench), a vision-centric VQA benchmark (NeurIPS'24) that challenges vision-language models with simple questions about natural imagery.
43
+ - **[2024-12-02]** Supported [**VisOnlyQA**](https://github.com/psunlpgroup/VisOnlyQA/), a benchmark for evaluating the visual perception capabilities 🔥🔥🔥
44
+ - **[2024-11-26]** Supported [**Ovis1.6-Gemma2-27B**](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-27B), thanks to [**runninglsy**](https://github.com/runninglsy) 🔥🔥🔥
45
+ - **[2024-11-25]** Create a new flag `VLMEVALKIT_USE_MODELSCOPE`. By setting this environment variable, you can download the video benchmarks supported from [**modelscope**](https://www.modelscope.cn) 🔥🔥🔥
46
+
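For illustration only, a custom model entry in `vlmeval/config.py` might attach the flag via `functools.partial`, in the style of the existing entries. The class name, model path, and keyword below are assumptions; copy an existing entry from the real config file rather than this sketch.

```python
# Illustrative only: attaching use_vllm to a custom model entry. Whether a given
# wrapper class accepts this flag depends on the model; check vlmeval/config.py
# and the corresponding class in vlmeval/vlm before relying on it.
from functools import partial
from vlmeval.vlm import Qwen2VLChat  # assumed import path

custom_models = {
    "my_qwen2_5_vl_vllm": partial(
        Qwen2VLChat,
        model_path="Qwen/Qwen2.5-VL-7B-Instruct",
        use_vllm=True,
    ),
}
```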
47
+ ## 🏗️ QuickStart
48
+
49
+ See [[QuickStart](/docs/en/Quickstart.md) | [快速开始](/docs/zh-CN/Quickstart.md)] for a quick start guide.
50
+
51
+ ## 📊 Datasets, Models, and Evaluation Results
52
+
53
+ ### Evaluation Results
54
+
55
+ **The performance numbers on our official multi-modal leaderboards can be downloaded from here!**
56
+
57
+ [**OpenVLM Leaderboard**](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard): [**Download All DETAILED Results**](http://opencompass.openxlab.space/assets/OpenVLM.json).
58
+
59
+ Check **Supported Benchmarks** Tab in [**VLMEvalKit Features**](https://aicarrier.feishu.cn/wiki/Qp7wwSzQ9iK1Y6kNUJVcr6zTnPe?table=tblsdEpLieDoCxtb) to view all supported image & video benchmarks (70+).
60
+
61
+ Check **Supported LMMs** Tab in [**VLMEvalKit Features**](https://aicarrier.feishu.cn/wiki/Qp7wwSzQ9iK1Y6kNUJVcr6zTnPe?table=tblsdEpLieDoCxtb) to view all supported LMMs, including commercial APIs, open-source models, and more (200+).
62
+
63
+ **Transformers Version Recommendation:**
64
+
65
+ Note that some VLMs may not be able to run under certain transformers versions; we recommend the following settings to evaluate each VLM:
66
+
67
+ - **Please use** `transformers==4.33.0` **for**: `Qwen series`, `Monkey series`, `InternLM-XComposer Series`, `mPLUG-Owl2`, `OpenFlamingo v2`, `IDEFICS series`, `VisualGLM`, `MMAlaya`, `ShareCaptioner`, `MiniGPT-4 series`, `InstructBLIP series`, `PandaGPT`, `VXVERSE`.
68
+ - **Please use** `transformers==4.36.2` **for**: `Moondream1`.
69
+ - **Please use** `transformers==4.37.0` **for**: `LLaVA series`, `ShareGPT4V series`, `TransCore-M`, `LLaVA (XTuner)`, `CogVLM Series`, `EMU2 Series`, `Yi-VL Series`, `MiniCPM-[V1/V2]`, `OmniLMM-12B`, `DeepSeek-VL series`, `InternVL series`, `Cambrian Series`, `VILA Series`, `Llama-3-MixSenseV1_1`, `Parrot-7B`, `PLLaVA Series`.
70
+ - **Please use** `transformers==4.40.0` **for**: `IDEFICS2`, `Bunny-Llama3`, `MiniCPM-Llama3-V2.5`, `360VL-70B`, `Phi-3-Vision`, `WeMM`.
71
+ - **Please use** `transformers==4.42.0` **for**: `AKI`.
72
+ - **Please use** `transformers==4.44.0` **for**: `Moondream2`, `H2OVL series`.
73
+ - **Please use** `transformers==4.45.0` **for**: `Aria`.
74
+ - **Please use** `transformers==latest` **for**: `LLaVA-Next series`, `PaliGemma-3B`, `Chameleon series`, `Video-LLaVA-7B-HF`, `Ovis series`, `Mantis series`, `MiniCPM-V2.6`, `OmChat-v2.0-13B-sinlge-beta`, `Idefics-3`, `GLM-4v-9B`, `VideoChat2-HD`, `RBDash_72b`, `Llama-3.2 series`, `Kosmos series`.
75
+
76
+ **Torchvision Version Recommendation:**
77
+
78
+ Note that some VLMs may not be able to run under certain torchvision versions; we recommend the following settings to evaluate each VLM:
79
+
80
+ - **Please use** `torchvision>=0.16` **for**: `Moondream series` and `Aria`
81
+
82
+ **Flash-attn Version Recommendation:**
83
+
84
+ Note that some VLMs may not be able to run under certain flash-attention versions; we recommend the following settings to evaluate each VLM:
85
+
86
+ - **Please use** `pip install flash-attn --no-build-isolation` **for**: `Aria`
87
+
88
+ ```python
89
+ # Demo
90
+ from vlmeval.config import supported_VLM
91
+ model = supported_VLM['idefics_9b_instruct']()
92
+ # Forward Single Image
93
+ ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
94
+ print(ret) # The image features a red apple with a leaf on it.
95
+ # Forward Multiple Images
96
+ ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images? '])
97
+ print(ret) # There are two apples in the provided images.
98
+ ```
99
+
100
+ ## 🛠️ Development Guide
101
+
102
+ To develop custom benchmarks, VLMs, or simply contribute other codes to **VLMEvalKit**, please refer to [[Development_Guide](/docs/en/Development.md) | [开发指南](/docs/zh-CN/Development.md)].
103
+
104
+ **Call for contributions**
105
+
106
+ To promote the contribution from the community and share the corresponding credit (in the next report update):
107
+
108
+ - All Contributions will be acknowledged in the report.
109
+ - Contributors with 3 or more major contributions (implementing an MLLM, benchmark, or major feature) can join the author list of [VLMEvalKit Technical Report](https://www.arxiv.org/abs/2407.11691) on ArXiv. Eligible contributors can create an issue or dm kennyutc in [VLMEvalKit Discord Channel](https://discord.com/invite/evDT4GZmxN).
110
+
111
+ Here is a [contributor list](/docs/en/Contributors.md) we curated based on the records.
112
+
113
+ ## 🎯 The Goal of VLMEvalKit
114
+
115
+ **The codebase is designed to:**
116
+
117
+ 1. Provide an **easy-to-use**, **opensource evaluation toolkit** to make it convenient for researchers & developers to evaluate existing LVLMs and make evaluation results **easy to reproduce**.
118
+ 2. Make it easy for VLM developers to evaluate their own models. To evaluate a VLM on multiple supported benchmarks, one just needs to **implement a single `generate_inner()` function** (a minimal sketch follows this list); all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase.
119
+
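For orientation, a wrapper honoring that single-function contract might look roughly like the sketch below. The base-class import, class attributes, and message format follow the development guide and are assumptions here rather than verbatim VLMEvalKit code.

```python
# Rough sketch of the generate_inner() contract: `message` is a list of
# {"type": "image"/"text", "value": ...} items, and the method returns a string.
# Consult the development guide for the exact base class and registration step.
from vlmeval.vlm.base import BaseModel  # assumed location of the base class

class MyVLM(BaseModel):
    INSTALL_REQ = False
    INTERLEAVE = True

    def __init__(self, model_path="my-org/my-vlm", **kwargs):
        super().__init__()
        self.model_path = model_path  # load weights / processor here

    def generate_inner(self, message, dataset=None):
        images = [x["value"] for x in message if x["type"] == "image"]
        prompt = "\n".join(x["value"] for x in message if x["type"] == "text")
        # run the underlying model on (images, prompt) and return its text output
        return "model response"
```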
120
+ **The codebase is not designed to:**
121
+
122
+ 1. Reproduce the exact accuracy number reported in the original papers of all **3rd party benchmarks**. The reason can be two-fold:
123
+ 1. VLMEvalKit uses **generation-based evaluation** for all VLMs (optionally with **LLM-based answer extraction**). Meanwhile, some benchmarks may use different approaches (*e.g.*, SEEDBench uses PPL-based evaluation). For those benchmarks, we compare both scores in the corresponding result. We encourage developers to support other evaluation paradigms in the codebase.
124
+ 2. By default, we use the same prompt template for all VLMs to evaluate on a benchmark. Meanwhile, **some VLMs may have their own specific prompt templates** (some may not be covered by the codebase at this time). We encourage VLM developers to implement their own prompt templates in VLMEvalKit if they are not covered currently. That will help improve reproducibility.
125
+
126
+ ## 🖊️ Citation
127
+
128
+ If you find this work helpful, please consider to **star🌟** this repo. Thanks for your support!
129
+
130
+ [![Stargazers repo roster for @open-compass/VLMEvalKit](https://reporoster.com/stars/open-compass/VLMEvalKit)](https://github.com/open-compass/VLMEvalKit/stargazers)
131
+
132
+ If you use VLMEvalKit in your research or wish to refer to published open-source evaluation results, please use the following BibTeX entry and the BibTeX entry corresponding to the specific VLM / benchmark you used.
133
+
134
+ ```bib
135
+ @inproceedings{duan2024vlmevalkit,
136
+ title={Vlmevalkit: An open-source toolkit for evaluating large multi-modality models},
137
+ author={Duan, Haodong and Yang, Junming and Qiao, Yuxuan and Fang, Xinyu and Chen, Lin and Liu, Yuan and Dong, Xiaoyi and Zang, Yuhang and Zhang, Pan and Wang, Jiaqi and others},
138
+ booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
139
+ pages={11198--11201},
140
+ year={2024}
141
+ }
142
+ ```
143
+
144
+ <p align="right"><a href="#top">🔝Back to top</a></p>
145
+
146
+ [github-contributors-link]: https://github.com/open-compass/VLMEvalKit/graphs/contributors
147
+ [github-contributors-shield]: https://img.shields.io/github/contributors/open-compass/VLMEvalKit?color=c4f042&labelColor=black&style=flat-square
148
+ [github-forks-link]: https://github.com/open-compass/VLMEvalKit/network/members
149
+ [github-forks-shield]: https://img.shields.io/github/forks/open-compass/VLMEvalKit?color=8ae8ff&labelColor=black&style=flat-square
150
+ [github-issues-link]: https://github.com/open-compass/VLMEvalKit/issues
151
+ [github-issues-shield]: https://img.shields.io/github/issues/open-compass/VLMEvalKit?color=ff80eb&labelColor=black&style=flat-square
152
+ [github-license-link]: https://github.com/open-compass/VLMEvalKit/blob/main/LICENSE
153
+ [github-license-shield]: https://img.shields.io/github/license/open-compass/VLMEvalKit?color=white&labelColor=black&style=flat-square
154
+ [github-stars-link]: https://github.com/open-compass/VLMEvalKit/stargazers
155
+ [github-stars-shield]: https://img.shields.io/github/stars/open-compass/VLMEvalKit?color=ffcb47&labelColor=black&style=flat-square
VLMEvalKit-sudoku/eval.sh ADDED
@@ -0,0 +1,7 @@
1
+ # Full image (GNE)
2
+ export HF_ENDPOINT=https://hf-mirror.com
3
+ python run.py --data ShapeGrid_sudoku --model llava_uhd_final
4
+
5
+ # # Slices (SBE)
6
+ # export HF_ENDPOINT=https://hf-mirror.com
7
+ # python run.py --data ShapeGrid_sudoku --model llava_uhd_resampler_query_49
VLMEvalKit-sudoku/requirements.txt ADDED
@@ -0,0 +1,40 @@
1
+ accelerate
2
+ dotenv
3
+ einops
4
+ # for gemini api
5
+ google-genai
6
+ gradio
7
+ huggingface_hub
8
+ imageio
9
+ ipdb
10
+ json_repair
11
+ matplotlib
12
+ nltk
13
+ numpy
14
+ omegaconf
15
+ openai
16
+ opencv-python>=4.7.0.72
17
+ openpyxl
18
+ pandas
19
+ pillow
20
+ portalocker
21
+ protobuf
22
+ python-dotenv
23
+ qwen_vl_utils
24
+ requests
25
+ rich
26
+ sentencepiece
27
+ setuptools
28
+ sty
29
+ sympy
30
+ tabulate
31
+ tiktoken
32
+ timeout-decorator
33
+ timm
34
+ torch
35
+ torchvision
36
+ tqdm
37
+ transformers
38
+ typing_extensions
39
+ validators
40
+ xlsxwriter
VLMEvalKit-sudoku/vlmeval/dataset/CGAVCounting/__init__.py ADDED
File without changes
VLMEvalKit-sudoku/vlmeval/dataset/CGAVCounting/__pycache__/utils.cpython-310.pyc ADDED
Binary file (11 kB). View file
 
VLMEvalKit-sudoku/vlmeval/dataset/CGAVCounting/requirements.txt ADDED
@@ -0,0 +1,2 @@
1
+ scipy
2
+ word2number
VLMEvalKit-sudoku/vlmeval/dataset/EgoExoBench/__init__.py ADDED
@@ -0,0 +1 @@
1
+ from .egoexobench import EgoExoBench_MCQ
VLMEvalKit-sudoku/vlmeval/dataset/EgoExoBench/__pycache__/egoexobench.cpython-310.pyc ADDED
Binary file (11 kB). View file
 
VLMEvalKit-sudoku/vlmeval/dataset/GUI/__pycache__/screenspot.cpython-310.pyc ADDED
Binary file (15.2 kB). View file
 
VLMEvalKit-sudoku/vlmeval/dataset/GUI/screenspot_v2.py ADDED
@@ -0,0 +1,208 @@
1
+ import os
2
+ import re
3
+ import tempfile
4
+ from functools import partial
5
+
6
+ import pandas as pd
7
+ import ast
8
+
9
+ from ..image_base import img_root_map
10
+ from .screenspot import ScreenSpot
11
+ from ..utils import build_judge, DEBUG_MESSAGE
12
+ from ...smp import *
13
+ from ...utils import track_progress_rich
14
+ from ipdb import set_trace as st
15
+
16
+ logger = get_logger("RUN")
17
+
18
+ """
19
+ {
20
+ "img_filename": "web_3b0ad239-da6b-4f6f-8f12-f674dc90ff33.png",
21
+ "bbox": [42, 1102, 197, 70],
22
+ "instruction": "view the details of the item",
23
+ "data_type": "text",
24
+ "data_source": "shop"
25
+ },
26
+ {
27
+ "img_filename": "web_3b0ad239-da6b-4f6f-8f12-f674dc90ff33.png",
28
+ "bbox": [93, 74, 86, 132],
29
+ "instruction": "view the previous photo",
30
+ "data_type": "icon",
31
+ "data_source": "shop"
32
+ }
33
+ """
34
+
35
+ SYSTEM_PROMPT = """You are a GUI agent. You are given a task and a screenshot of the screen. You need to perform pyautogui click/moveTo action to complete the task. The answer format is `pyautogui.click(x=?, y=?), x and y is necessary`""" # noqa: E501
36
+
37
+ USER_INSTRUCTION = """Please complete the following tasks by clicking using `pyautogui.click`:\n{instruction}""" # noqa: E501
38
+
39
+ SYSTEM_PROMPT_V2 = """You are a GUI agent. You are given a screenshot of the screen and the description of a target element. You need to click the target element using `pyautogui.click`. The answer format is `pyautogui.click(x=?, y=?), x and y is necessary`""" # noqa: E501
40
+ USER_INSTRUCTION_V2 = """Please click the following target element using `pyautogui.click`:\n{description}"""
41
+
42
+
43
+ def parse_bbox_aguvis(response):
44
+ match = re.search(r"x=([\d.]+), y=([\d.]+)", response)
45
+ if match:
46
+ click_point = [float(match.group(1)), float(match.group(2))]
47
+ else:
48
+ click_point = [0.0, 0.0]
49
+ return click_point
50
+
51
+
52
+ def compute_iou(box1, box2):
53
+ """
54
+ Compute the Intersection over Union (IoU) of two bounding boxes.
55
+
56
+ Parameters:
57
+ - box1 (list of float): Bounding box [x_min, y_min, x_max, y_max].
58
+ - box2 (list of float): Bounding box [x_min, y_min, x_max, y_max].
59
+
60
+ Returns:
61
+ - float: IoU of box1 and box2.
62
+ """
63
+ # Determine the coordinates of the intersection rectangle
64
+ x_left = max(box1[0], box2[0])
65
+ y_top = max(box1[1], box2[1])
66
+ x_right = min(box1[2], box2[2])
67
+ y_bottom = min(box1[3], box2[3])
68
+
69
+ # Compute the area of intersection
70
+ intersection_area = max(0, x_right - x_left) * max(0, y_bottom - y_top)
71
+
72
+ # Compute the area of both bounding boxes
73
+ box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
74
+ box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
75
+
76
+ # Compute the area of the union
77
+ union_area = box1_area + box2_area - intersection_area
78
+
79
+ # Compute the Intersection over Union
80
+ iou = intersection_area / union_area
81
+
82
+ return iou
83
+
84
+
85
+ def compute_accuracy(box1, box2, threshold=0.5):
86
+ """
87
+ Compute the accuracy of two bounding boxes based on a specified threshold.
88
+
89
+ Parameters:
90
+ - box1 (list of float): Bounding box [x_min, y_min, x_max, y_max].
91
+ - box2 (list of float): Bounding box [x_min, y_min, x_max, y_max].
92
+ - threshold (float): Threshold for the IoU to consider the prediction correct.
93
+
94
+ Returns:
95
+ - float: Accuracy of the prediction based on the IoU threshold.
96
+ """
97
+ iou = compute_iou(box1, box2)
98
+ return iou >= threshold
99
+
100
+
101
+ def compute_center_accuracy(box1, box2):
102
+ """
103
+ Compute if the center point of box 2 is within box 1.
104
+
105
+ Parameters:
106
+ - box1 (list of float): Bounding box [x_min, y_min, x_max, y_max].
107
+ - box2 (list of float): Bounding box [x_min, y_min, x_max, y_max].
108
+
109
+ Returns:
110
+ - bool: True if the center point of box 2 is within box 1, False otherwise.
111
+ """
112
+ # Compute the center point of box 2
113
+ center_x = (box2[0] + box2[2]) / 2
114
+ center_y = (box2[1] + box2[3]) / 2
115
+
116
+ # Check if the center point is within box 1
117
+ return box1[0] <= center_x <= box1[2] and box1[1] <= center_y <= box1[3]
118
+
119
+
120
+ def convert_bbox(bbox, image_path):
121
+ new_bbox = bbox if isinstance(bbox, list) else ast.literal_eval(bbox)
122
+ new_bbox = [
123
+ new_bbox[0],
124
+ new_bbox[1],
125
+ new_bbox[0] + new_bbox[2],
126
+ new_bbox[1] + new_bbox[3],
127
+ ]
128
+ image = Image.open(image_path)
129
+ img_size = image.size
130
+ new_bbox = [
131
+ new_bbox[0] / img_size[0],
132
+ new_bbox[1] / img_size[1],
133
+ new_bbox[2] / img_size[0],
134
+ new_bbox[3] / img_size[1],
135
+ ]
136
+ return new_bbox
137
+
138
+
139
+ class ScreenSpotV2(ScreenSpot):
140
+ MODALITY = "IMAGE"
141
+ TYPE = "GUI"
142
+ DATASET_URL = {
143
+ "ScreenSpot_v2_Mobile": "ScreenSpot_v2_Mobile.tsv",
144
+ "ScreenSpot_v2_Desktop": "ScreenSpot_v2_Desktop.tsv",
145
+ "ScreenSpot_v2_Web": "ScreenSpot_v2_Web.tsv",
146
+ } # path
147
+ DATASET_MD5 = {}
148
+ EVAL_TYPE = "point" # point or rectangle
149
+ RE_TYPE = "functional" # type of referring expressions: functional or composite
150
+
151
+ def __init__(
152
+ self,
153
+ dataset="ScreenSpot_Mobile",
154
+ skip_noimg=True,
155
+ skeleton=False,
156
+ re_type="functional",
157
+ ):
158
+ # st()
159
+ ROOT = LMUDataRoot()
160
+ # You can override this variable to save image files to a different directory
161
+ self.dataset_name = dataset
162
+ self.img_root = osp.join(ROOT, "ScreenSpot_v2", "screenspotv2_image")
163
+ self.RE_TYPE = re_type
164
+ if skeleton:
165
+ return
166
+
167
+ data = self.load_data(dataset)
168
+ self.skip_noimg = skip_noimg
169
+ if skip_noimg and "image" in data:
170
+ data = data[~pd.isna(data["image"])]
171
+
172
+ data["index"] = [str(idx + 1) for idx, x in enumerate(data["bbox"])]
173
+
174
+ self.meta_only = True
175
+ self.parse_response_func = parse_bbox_aguvis # TODO: parse function can be specified through kwargs when initializing the dataset # noqa: E501
176
+
177
+ # The image field can store the base64 encoded image or another question index (for saving space)
178
+ if "image" in data:
179
+ data["image"] = [str(x) for x in data["image"]]
180
+ image_map = {x: y for x, y in zip(data["index"], data["image"])}
181
+ for k in image_map:
182
+ if len(image_map[k]) <= 64:
183
+ idx = image_map[k]
184
+ assert idx in image_map and len(image_map[idx]) > 64
185
+ image_map[k] = image_map[idx]
186
+
187
+ images = [toliststr(image_map[k]) for k in data["index"]]
188
+ data["image"] = [x[0] if len(x) == 1 else x for x in images]
189
+ self.meta_only = False
190
+
191
+ if "img_filename" in data:
192
+ paths = [toliststr(x) for x in data["img_filename"]]
193
+ data["image_path"] = [x[0] if len(x) == 1 else x for x in paths]
194
+
195
+ # if np.all([istype(x, int) for x in data["index"]]):
196
+ # data["index"] = [int(x) for x in data["index"]]
197
+
198
+ self.data = data
199
+ self.post_build(dataset)
200
+
201
+ def prepare_tsv(self, url, file_md5=None):
202
+ # st()
203
+ if self.RE_TYPE == "functional":
204
+ data_root = LMUDataRoot()
205
+ data_path = osp.join(data_root, "ScreenSpot_v2", url)
206
+ else:
207
+ data_path = self.DATASET_URL_V2[self.dataset_name]
208
+ return pd.DataFrame(load(data_path))
VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/__init__.py ADDED
File without changes
VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/__pycache__/omnidocbench.cpython-310.pyc ADDED
Binary file (17.2 kB). View file
 
VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/metrics.py ADDED
@@ -0,0 +1,486 @@
1
+ import json
2
+ import time
3
+ import Levenshtein
4
+ import evaluate
5
+ import random
6
+ import pdb
7
+ import copy
8
+ import pandas as pd
9
+
10
+ from .utils import save_paired_result,normalized_table
11
+ from collections import defaultdict
12
+ from apted.helpers import Tree
13
+ from apted import APTED, Config
14
+ from lxml import etree, html
15
+ from collections import deque
16
+ from tqdm import tqdm
17
+ from collections import defaultdict
18
+ from tabulate import tabulate
19
+
20
+ def show_result(results):
21
+ for metric_name in results.keys():
22
+ print(f'{metric_name}:')
23
+ score_table = [[k,v] for k,v in results[metric_name].items()]
24
+ print(tabulate(score_table))
25
+ print('='*100)
26
+
27
+ def sort_nested_dict(d):
28
+ # If it's a dictionary, recursively sort it
29
+ if isinstance(d, dict):
30
+ # Sort the current dictionary
31
+ sorted_dict = {k: sort_nested_dict(v) for k, v in sorted(d.items())}
32
+ return sorted_dict
33
+ # If not a dictionary, return directly
34
+ return d
35
+
36
+ def get_full_labels_results(samples:dict):
37
+ if not samples:
38
+ return {}
39
+ label_group_dict = defaultdict(lambda: defaultdict(list))
40
+ for sample in samples:
41
+ label_list = []
42
+ if not sample.get("gt_attribute"):
43
+ continue
44
+ for anno in sample["gt_attribute"]:
45
+ for k,v in anno.items():
46
+ label_list.append(k+": "+str(v))
47
+ for label_name in list(set(label_list)): # Currently if there are merged cases, calculate based on the set of all labels involved after merging
48
+ for metric, score in sample['metric'].items():
49
+ label_group_dict[label_name][metric].append(score)
50
+
51
+ print('----Anno Attribute---------------')
52
+ result = {}
53
+ result['sample_count'] = {}
54
+ for attribute in label_group_dict.keys():
55
+ for metric, scores in label_group_dict[attribute].items():
56
+ mean_score = sum(scores) / len(scores)
57
+ if not result.get(metric):
58
+ result[metric] = {}
59
+ result[metric][attribute] = mean_score
60
+ result['sample_count'][attribute] = len(scores)
61
+ result = sort_nested_dict(result)
62
+ show_result(result)
63
+ return result
64
+
65
+
66
+ def get_page_split(samples, page_info): # Page level metric
67
+ if not page_info:
68
+ return {}
69
+ result_list = defaultdict(list)
70
+
71
+
72
+ for sample in samples:
73
+ img_name = sample['img_id'] if sample['img_id'].endswith('.jpg') else '_'.join(sample['img_id'].split('_')[:-1])
74
+ page_info_s = page_info[img_name]
75
+ if not sample.get('metric'):
76
+ continue
77
+ for metric, score in sample['metric'].items():
78
+ gt = sample['norm_gt'] if sample.get('norm_gt') else sample['gt']
79
+ pred = sample['norm_pred'] if sample.get('norm_pred') else sample['pred']
80
+ result_list[metric].append({
81
+ 'image_name': img_name,
82
+ 'metric': metric,
83
+ 'attribute': 'ALL',
84
+ 'score': score,
85
+ 'upper_len': max(len(gt), len(pred))
86
+ })
87
+ for k,v in page_info_s.items():
88
+ if isinstance(v, list): # special issue
89
+ for special_issue in v:
90
+ if 'table' not in special_issue: # Table-related special fields have duplicates
91
+ result_list[metric].append({
92
+ 'image_name': img_name,
93
+ 'metric': metric,
94
+ 'attribute': special_issue,
95
+ 'score': score,
96
+ 'upper_len': max(len(gt), len(pred))
97
+ })
98
+ else:
99
+ result_list[metric].append({
100
+ 'image_name': img_name,
101
+ 'metric': metric,
102
+ 'attribute': k+": "+str(v),
103
+ 'score': score,
104
+ 'upper_len': max(len(gt), len(pred))
105
+ })
106
+
107
+ # Page level logic, accumulation is only done within pages, and mean operation is performed between pages
108
+ result = {}
109
+ if result_list.get('Edit_dist'):
110
+ df = pd.DataFrame(result_list['Edit_dist'])
111
+ up_total_avg = df.groupby(["image_name", "attribute"]).apply(lambda x: (x["score"]*x['upper_len']).sum() / x['upper_len'].sum()).groupby('attribute').mean() # At page level, accumulate edits, denominator is sum of max(gt, pred) from each sample
112
+ result['Edit_dist'] = up_total_avg.to_dict()
113
+ for metric in result_list.keys():
114
+ if metric == 'Edit_dist':
115
+ continue
116
+ df = pd.DataFrame(result_list[metric])
117
+ page_avg = df.groupby(["image_name", "attribute"]).apply(lambda x: x["score"].mean()).groupby('attribute').mean()
118
+ result[metric] = page_avg.to_dict()
119
+
120
+ result = sort_nested_dict(result)
121
+ # print('----Page Attribute---------------')
122
+ show_result(result)
123
+ return result
124
+
125
+
126
+ def get_groups(samples, group_info):
127
+ group_samples = defaultdict(list)
128
+ for sample in samples:
129
+ group_samples['all'].append(sample)
130
+ for group in group_info:
131
+ select_flag = True
132
+ for k, v in group.items():
133
+ for gt_attribute in sample['gt_attribute']: # gt_attribute is a list containing all merged gt attributes
134
+ if not gt_attribute: # if no GT attributes, don't include in calculation
135
+ select_flag = False
136
+ elif gt_attribute[k] != v: # if any gt attribute doesn't meet criteria, don't select
137
+ select_flag = False
138
+ if select_flag:
139
+ group_samples[str(group)].append(sample)
140
+ return group_samples
141
+
142
+
143
+ class Registry:
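+ # Minimal name-to-class registry; the metric implementations below register themselves by name
+ # and are looked up via METRIC_REGISTRY.get(<metric name>).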
144
+ def __init__(self):
145
+ self._registry = {}
146
+ def register(self, name):
147
+ def decorator(item):
148
+ if name in self._registry:
149
+ raise ValueError(f"Item {name} already registered.")
150
+ self._registry[name] = item
151
+ return item
152
+ return decorator
153
+ def get(self, name):
154
+ if name not in self._registry:
155
+ raise ValueError(f"Item {name} not found in registry.")
156
+ return self._registry[name]
157
+ def list_items(self):
158
+ return list(self._registry.keys())
159
+
160
+ METRIC_REGISTRY = Registry()
161
+
162
+
163
+ @METRIC_REGISTRY.register("TEDS")
164
+ class call_TEDS():
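+ # Table metric: computes TEDS (structure + content) and structure-only TEDS per sample,
+ # then averages the scores within each attribute group.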
165
+ def __init__(self, samples):
166
+ self.samples = samples
167
+ def evaluate(self, group_info=[], save_name='default'):
168
+ teds = TEDS(structure_only=False)
169
+ teds_structure_only = TEDS(structure_only=True)
170
+
171
+ group_scores = defaultdict(list)
172
+ group_scores_structure_only = defaultdict(list)
173
+
174
+ samples = self.samples
175
+ for sample in samples:
176
+ gt = sample['norm_gt'] if sample.get('norm_gt') else sample['gt']
177
+ pred = sample['norm_pred'] if sample.get('norm_pred') else sample['pred']
178
+
179
+ score = teds.evaluate(pred, gt)
180
+ score_structure_only = teds_structure_only.evaluate(pred, gt)
181
+ # print('TEDS score:', score)
182
+ group_scores['all'].append(score)
183
+ group_scores_structure_only['all'].append(score_structure_only)
184
+
185
+ if not sample.get('metric'):
186
+ sample['metric'] = {}
187
+ sample['metric']['TEDS'] = score
188
+ sample['metric']['TEDS_structure_only'] = score_structure_only
189
+
190
+ for group in group_info:
191
+ select_flag = True
192
+ for k, v in group.items():
193
+ for gt_attribute in sample['gt_attribute']: # gt_attribute is a list containing all merged gt attributes
194
+ if not gt_attribute: # if no GT attributes, don't include in calculation
195
+ select_flag = False
196
+ elif gt_attribute[k] != v: # if any gt attribute doesn't meet criteria, don't select
197
+ select_flag = False
198
+ if select_flag:
199
+ group_scores[str(group)].append(score)
200
+
201
+ result = {}
202
+ for group_name, scores in group_scores.items():
203
+ if len(scores) > 0:
204
+ result[group_name] = sum(scores) / len(scores) # average of normalized scores at sample level
205
+ else:
206
+ result[group_name] = 'NaN'
207
+ print(f'Warning: Empty matched samples for {group_name}.')
208
+
209
+ structure_only_result = {}
210
+ for group_name, scores in group_scores_structure_only.items():
211
+ if len(scores) > 0:
212
+ structure_only_result[group_name] = sum(scores) / len(scores) # average of normalized scores at sample level
213
+ else:
214
+ structure_only_result[group_name] = 'NaN'
215
+ print(f'Warning: Empty matched samples for {group_name}.')
216
+
217
+ return samples,{'TEDS': result, 'TEDS_structure_only': structure_only_result}
218
+
219
+
220
+ @METRIC_REGISTRY.register("BLEU")
221
+ class call_BLEU():
222
+ def __init__(self, samples):
223
+ self.samples = samples
224
+ def evaluate(self, group_info=[], save_name='default'):
225
+ group_samples = get_groups(self.samples, group_info)
226
+ result = {}
227
+ bleu = evaluate.load("bleu", keep_in_memory=True, experiment_id=random.randint(1,1e8))
228
+
229
+ for group_name, samples in group_samples.items():
230
+ predictions, references = [], []
231
+ for sample in samples:
232
+ gt = sample['norm_gt'] if sample.get('norm_gt') else sample['gt']
233
+ pred = sample['norm_pred'] if sample.get('norm_pred') else sample['pred']
234
+ predictions.append(pred)
235
+ references.append(gt)
236
+
237
+ if not predictions or not any(predictions) or not references or not any(references):
238
+ bleu_score = 0
239
+ else:
240
+ try:
241
+ bleu_results = bleu.compute(predictions=predictions, references=references)
242
+ bleu_score = bleu_results["bleu"]
243
+ except ZeroDivisionError:
244
+ bleu_score = 0
245
+
246
+ result[group_name] = bleu_score
247
+
248
+ return self.samples,{'BLEU': result}
249
+
250
+ @METRIC_REGISTRY.register("METEOR")
251
+ class call_METEOR():
252
+ def __init__(self, samples):
253
+ self.samples = samples
254
+ def evaluate(self, group_info=[], save_name='default'):
255
+ group_samples = get_groups(self.samples, group_info)
256
+ result = {}
257
+ for group_name, samples in group_samples.items():
258
+ predictions, references = [], []
259
+ for sample in samples:
260
+ gt = sample['norm_gt'] if sample.get('norm_gt') else sample['gt']
261
+ pred = sample['norm_pred'] if sample.get('norm_pred') else sample['pred']
262
+ predictions.append(pred)  # model outputs are the predictions
262
+ references.append(gt)  # ground-truth strings are the references
264
+ meteor = evaluate.load('meteor', keep_in_memory=True, experiment_id=random.randint(1, int(1e8)))  # randint requires integer bounds
265
+ meteor_results = meteor.compute(predictions=predictions, references=references)
266
+ result[group_name] = meteor_results['meteor']
267
+
268
+ return self.samples,{'METEOR': result}
269
+
270
+
271
+ @METRIC_REGISTRY.register("Edit_dist")
272
+ class call_Edit_dist():
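+ # Normalized Levenshtein distance per sample (edit distance divided by max(len(gt), len(pred))),
+ # accumulated within each page before averaging across pages.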
273
+ def __init__(self, samples):
274
+ self.samples = samples
275
+ def evaluate(self, group_info=[], save_name='default'):
276
+ samples = self.samples
277
+ for sample in samples:
278
+ img_name = sample['img_id'] if sample['img_id'].endswith('.jpg') else '_'.join(sample['img_id'].split('_')[:-1])
279
+ sample['image_name'] = img_name
280
+ gt = sample['norm_gt'] if sample.get('norm_gt') else sample['gt']
281
+ pred = sample['norm_pred'] if sample.get('norm_pred') else sample['pred']
282
+ upper_len = max(len(pred), len(gt))
283
+ sample['upper_len'] = upper_len
284
+ if len(pred) > 0 or len(gt) > 0:
285
+ edit_dist = Levenshtein.distance(pred, gt)
286
+ if not sample.get('metric'):
287
+ sample['metric'] = {}
288
+ sample['metric']['Edit_dist'] = edit_dist / upper_len
289
+ sample['Edit_num'] = edit_dist
290
+
291
+ if isinstance(samples, list):
292
+ saved_samples = samples
293
+ else:
294
+ saved_samples = samples.samples
295
+
296
+ if not saved_samples:
297
+ return samples, {'Edit_dist': {'ALL_page_avg': 'NaN'}}  # keep the (samples, result) return shape expected by callers
298
+
299
+ df = pd.DataFrame(saved_samples)
300
+ up_total_avg = df.groupby("image_name").apply(lambda x: x['Edit_num'].sum() / x['upper_len'].sum()) # page level, sum of edits divided by sum of max(gt,pred) lengths for each sample
301
+ per_img_score = up_total_avg.to_dict()
302
+
303
+ return samples,{'Edit_dist': {'ALL_page_avg': up_total_avg.mean()}}
304
+
305
+
306
+ @METRIC_REGISTRY.register("CDM")
307
+ class call_CDM():
308
+ def __init__(self, samples):
309
+ self.samples = samples
310
+ def evaluate(self, group_info=[], save_name='default'):
311
+ if isinstance(self.samples, list):
312
+ cdm_samples = copy.deepcopy(self.samples)
313
+ else:
314
+ cdm_samples = copy.deepcopy(self.samples.samples)
315
+ for idx, sample in enumerate(cdm_samples):
316
+ sample['img_name'] = sample['img_id']
317
+ sample['img_id'] = str(idx)
318
+ sample['gt'] = sample['gt'].lstrip("$$").rstrip("$$").strip()
319
+ sample['pred'] = sample['pred'].split("```latex")[-1].split("```")[0]
320
+ sample['pred'] = sample['pred'].lstrip("$$").rstrip("$$").strip()
321
+
322
+ return self.samples,False
323
+
324
+
325
+ class TEDS(object):
326
+ ''' Tree Edit Distance based Similarity
327
+ '''
328
+ def __init__(self, structure_only=False, n_jobs=1, ignore_nodes=None):
329
+ assert isinstance(n_jobs, int) and (n_jobs >= 1), 'n_jobs must be an integer greater than or equal to 1'
330
+ self.structure_only = structure_only
331
+ self.n_jobs = n_jobs
332
+ self.ignore_nodes = ignore_nodes
333
+ self.__tokens__ = []
334
+
335
+ def tokenize(self, node):
336
+ ''' Tokenizes table cells
337
+ '''
338
+ self.__tokens__.append('<%s>' % node.tag)
339
+ if node.text is not None:
340
+ self.__tokens__ += list(node.text)
341
+ for n in node.getchildren():
342
+ self.tokenize(n)
343
+ if node.tag != 'unk':
344
+ self.__tokens__.append('</%s>' % node.tag)
345
+ if node.tag != 'td' and node.tail is not None:
346
+ self.__tokens__ += list(node.tail)
347
+
348
+ def load_html_tree(self, node, parent=None):
349
+ ''' Converts HTML tree to the format required by apted
350
+ '''
351
+ global __tokens__
352
+ if node.tag == 'td':
353
+ if self.structure_only:
354
+ cell = []
355
+ else:
356
+ self.__tokens__ = []
357
+ self.tokenize(node)
358
+ cell = self.__tokens__[1:-1].copy()
359
+ new_node = TableTree(node.tag,
360
+ int(node.attrib.get('colspan', '1')),
361
+ int(node.attrib.get('rowspan', '1')),
362
+ cell, *deque())
363
+ else:
364
+ new_node = TableTree(node.tag, None, None, None, *deque())
365
+ if parent is not None:
366
+ parent.children.append(new_node)
367
+ if node.tag != 'td':
368
+ for n in node.getchildren():
369
+ self.load_html_tree(n, new_node)
370
+ if parent is None:
371
+ return new_node
372
+
373
+ def evaluate(self, pred, true):
374
+ ''' Computes TEDS score between the prediction and the ground truth of a
375
+ given sample
376
+ '''
377
+ if (not pred) or (not true):
378
+ return 0.0
379
+ parser = html.HTMLParser(remove_comments=True, encoding='utf-8')
380
+ pred = html.fromstring(pred, parser=parser)
381
+ true = html.fromstring(true, parser=parser)
382
+ if pred.xpath('body/table') and true.xpath('body/table'):
383
+ pred = pred.xpath('body/table')[0]
384
+ true = true.xpath('body/table')[0]
385
+ if self.ignore_nodes:
386
+ etree.strip_tags(pred, *self.ignore_nodes)
387
+ etree.strip_tags(true, *self.ignore_nodes)
388
+ n_nodes_pred = len(pred.xpath(".//*"))
389
+ n_nodes_true = len(true.xpath(".//*"))
390
+ n_nodes = max(n_nodes_pred, n_nodes_true)
391
+ tree_pred = self.load_html_tree(pred)
392
+ tree_true = self.load_html_tree(true)
393
+ distance = APTED(tree_pred, tree_true, CustomConfig()).compute_edit_distance()
394
+ return 1.0 - (float(distance) / n_nodes)
395
+ else:
396
+ return 0.0
397
+
398
+ def batch_evaluate(self, pred_json, true_json):
399
+ ''' Computes TEDS score between the prediction and the ground truth of
400
+ a batch of samples
401
+ @params pred_json: {'FILENAME': 'HTML CODE', ...}
402
+ @params true_json: {'FILENAME': {'html': 'HTML CODE'}, ...}
403
+ @output: {'FILENAME': 'TEDS SCORE', ...}
404
+ '''
405
+ samples = true_json.keys()
406
+ # if self.n_jobs == 1:
407
+ scores = [self.evaluate(pred_json.get(filename, ''), true_json[filename]['html']) for filename in tqdm(samples)]
408
+ # else:
409
+ # inputs = [{'pred': pred_json.get(filename, ''), 'true': true_json[filename]['html']} for filename in samples]
410
+ # scores = parallel_process(inputs, self.evaluate, use_kwargs=True, n_jobs=self.n_jobs, front_num=1)
411
+ scores = dict(zip(samples, scores))
412
+ return scores
413
+
414
+
415
+ class CustomConfig(Config):
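+ # APTED cost model: renaming costs 1 when tag/colspan/rowspan differ; for matching <td> nodes
+ # the cost is the normalized edit distance between cell contents.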
416
+ @staticmethod
417
+ def maximum(*sequences):
418
+ """Get maximum possible value
419
+ """
420
+ return max(map(len, sequences))
421
+
422
+ def normalized_distance(self, *sequences):
423
+ """Get distance from 0 to 1
424
+ """
425
+ return float(Levenshtein.distance(*sequences)) / self.maximum(*sequences)
426
+
427
+ def rename(self, node1, node2):
428
+ """Compares attributes of trees"""
429
+ if (node1.tag != node2.tag) or (node1.colspan != node2.colspan) or (node1.rowspan != node2.rowspan):
430
+ return 1.
431
+ if node1.tag == 'td':
432
+ if node1.content or node2.content:
433
+ return self.normalized_distance(node1.content, node2.content)
434
+ return 0.
435
+
436
+
437
+ class TableTree(Tree):
438
+ def __init__(self, tag, colspan=None, rowspan=None, content=None, *children):
439
+ self.tag = tag
440
+ self.colspan = colspan
441
+ self.rowspan = rowspan
442
+ self.content = content
443
+ self.children = list(children)
444
+
445
+ def bracket(self):
446
+ """Show tree using brackets notation"""
447
+ if self.tag == 'td':
448
+ result = '"tag": %s, "colspan": %d, "rowspan": %d, "text": %s' % \
449
+ (self.tag, self.colspan, self.rowspan, self.content)
450
+ else:
451
+ result = '"tag": %s' % self.tag
452
+ for child in self.children:
453
+ result += child.bracket()
454
+ return "{{{}}}".format(result)
455
+
456
+
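+ # Illustrative usage (sketch, not part of the evaluation pipeline): TEDS returns a similarity in
+ # [0, 1]; identical tables score 1.0, and structure_only=True ignores cell text.
+ #   teds = TEDS(structure_only=False)
+ #   html = '<html><body><table><tr><td>1</td></tr></table></body></html>'
+ #   assert teds.evaluate(html, html) == 1.0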
457
+ class recogition_end2end_base_dataset():
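+ # Thin wrapper that assigns a sequential img_id to samples missing one and exposes indexed access.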
458
+ def __init__(self, samples):
459
+ img_id = 0
460
+ for sample in samples:
461
+ if not sample.get('img_id'):
462
+ sample['img_id'] = img_id
463
+ img_id += 1
464
+ self.samples = samples
465
+ def __getitem__(self, idx):
466
+ return self.samples[idx]
467
+
468
+
469
+ class recogition_end2end_table_dataset(recogition_end2end_base_dataset):
470
+ def __init__(self, samples, table_format):
471
+ self.pred_table_format = table_format
472
+ self.samples = self.normalize_data(samples)
473
+
474
+ def normalize_data(self, samples):
475
+ img_id = 0
476
+ for sample in samples:
477
+ p = sample['pred']
478
+ r = sample['gt']
479
+ p = normalized_table(p, self.pred_table_format)
480
+ r = normalized_table(r)
481
+ sample['norm_gt'] = r
482
+ sample['norm_pred'] = p
483
+ sample['img_id'] = sample['img_id'] if sample.get('img_id') else img_id
484
+ img_id += 1
485
+
486
+ return samples
VLMEvalKit-sudoku/vlmeval/dataset/OmniDocBench/omnidocbench.py ADDED
@@ -0,0 +1,551 @@
1
+ import json
2
+ import os
3
+ import copy
4
+ import pandas as pd
5
+ import tempfile
6
+ import base64
7
+ import numpy as np
8
+ from tqdm import tqdm
9
+ import torch.distributed as dist
10
+ from ..image_base import ImageBaseDataset
11
+ from ...smp import *
12
+ # from ..utils import get_intermediate_file_path, load, dump
13
+
14
+
15
+ class OmniDocBench(ImageBaseDataset):
16
+
17
+ MODALITY = 'IMAGE'
18
+ TYPE = 'QA'
19
+
20
+ DATASET_URL = {'OmniDocBench':'https://huggingface.co/datasets/ouyanglinke/OmniDocBench_tsv/resolve/main/OmniDocBench.tsv'}
21
+ DATASET_MD5 = {'OmniDocBench': '0fa5ccf31e682e219cb9ca83da741a59'}
22
+
23
+
24
+ system_prompt = r'''You are an AI assistant specialized in converting PDF images to Markdown format. Please follow these instructions for the conversion:
25
+
26
+ 1. Text Processing:
27
+ - Accurately recognize all text content in the PDF image without guessing or inferring.
28
+ - Convert the recognized text into Markdown format.
29
+ - Maintain the original document structure, including headings, paragraphs, lists, etc.
30
+
31
+ 2. Mathematical Formula Processing:
32
+ - Convert all mathematical formulas to LaTeX format.
33
+ - Enclose inline formulas with \( \). For example: This is an inline formula \( E = mc^2 \)
34
+ - Enclose block formulas with \\[ \\]. For example: \[ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
35
+
36
+ 3. Table Processing:
37
+ - Convert tables to HTML format.
38
+ - Wrap the entire table with <table> and </table>.
39
+
40
+ 4. Figure Handling:
41
+ - Ignore figures content in the PDF image. Do not attempt to describe or convert images.
42
+
43
+ 5. Output Format:
44
+ - Ensure the output Markdown document has a clear structure with appropriate line breaks between elements.
45
+ - For complex layouts, try to maintain the original document's structure and format as closely as possible.
46
+
47
+ Please strictly follow these guidelines to ensure accuracy and consistency in the conversion. Your task is to accurately convert the content of the PDF image into Markdown format without adding any extra explanations or comments.
48
+ '''
49
+
50
+ def __init__(self,dataset='OmniDocBench',**kwargs):
51
+ super().__init__(dataset,**kwargs)
52
+ print(f'self.img_root:{self.img_root}')
53
+
54
+ def build_prompt(self, line):
55
+
56
+ image_path = self.dump_image(line)[0]
57
+ msg = [
58
+ dict(type='image', value=image_path),
59
+ dict(type='text', value=self.system_prompt)
60
+ ]
61
+ return msg
62
+
63
+ def evaluate(self, eval_file, **judge_kwargs):
64
+ tsv_path=self.data_path
65
+ End2end_evaluator=end2end_evaluator(eval_file,tsv_path)
66
+ Table_evalutor=table_evalutor(eval_file,tsv_path)
67
+
68
+ metrics_all=End2end_evaluator.score()
69
+ metircs_table=Table_evalutor.score()
70
+
71
+ return metrics_all
72
+
73
+
74
+ class end2end_evaluator():
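+ # End-to-end evaluator: matches GT page elements against the model's markdown output, then scores
+ # text blocks, display formulas, tables and reading order with the metrics registered in metrics.py.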
75
+ def __init__(self,
76
+ eval_file,
77
+ tsv_path,
78
+ match_method:str='quick_match',
79
+ filter_types:dict=None):
80
+ self.eval_file=eval_file
81
+ self.match_method=match_method
82
+ self.references=[]
83
+ self.predictions = load(eval_file)['prediction'].tolist()
84
+ self.dafault_metircs_dict={
85
+ 'text_block':
86
+ {'metric': ['Edit_dist', 'BLEU', 'METEOR']},
87
+ 'display_formula':
88
+ {'metric': ['Edit_dist', 'CDM']},
89
+ 'table':
90
+ {'metric': ['TEDS', 'Edit_dist']},
91
+ 'reading_order':
92
+ {'metric': ['Edit_dist']}
93
+ }
94
+
95
+ references = load(tsv_path)['answer'].tolist()
96
+
97
+ load_success,load_fail=0,0
98
+ for i,ans in tqdm(enumerate(references),desc='Loading data'):
99
+ try:
100
+ ans = json.loads(ans)
101
+ load_success+=1
102
+ self.references.append(ans) #[{},{}]
103
+ except json.JSONDecodeError as e:
104
+ load_fail+=1
105
+ continue
106
+ print(f'load_success:{load_success},load_fail:{load_fail}')
107
+
108
+ filtered_gt_samples = []
109
+ if filter_types:
110
+ for gt_sample in self.references:
111
+ select_flag = True
112
+ for k, v in filter_types.items():
113
+ if gt_sample["page_info"]["page_attribute"][k] != v:
114
+ select_flag = False
115
+ if select_flag:
116
+ filtered_gt_samples.append(gt_sample)
117
+ else:
118
+ filtered_gt_samples = self.references #[{},{},{}]
119
+ self.references=filtered_gt_samples
120
+
121
+
122
+ def score(self)->dict:
123
+ samples=self.get_matched_elements(self.references,self.predictions)
124
+ metrics=self.process_generated_metric_results(samples)
125
+ return metrics
126
+
127
+ def get_page_elements(self, selected_annos):
128
+ saved_element_dict = defaultdict(list)
129
+ related_truncated = []
130
+ truncated_all = {}
131
+ for relation in selected_annos["extra"]["relation"]: # Handle truncated text issues
132
+ if relation["relation_type"] == 'truncated':
133
+ truncated_all[relation["source_anno_id"]] = ""
134
+ truncated_all[relation["target_anno_id"]] = ""
135
+ exist_flag = False
136
+ for merge_list in related_truncated:
137
+ if relation["source_anno_id"] in merge_list or relation["target_anno_id"] in merge_list: # Consider cases where three text blocks may need to be merged
138
+ merge_list.append(relation["source_anno_id"])
139
+ merge_list.append(relation["target_anno_id"])
140
+ exist_flag = True
141
+ if not exist_flag:
142
+ related_truncated.append([relation["source_anno_id"], relation["target_anno_id"]])
143
+
144
+ for item in selected_annos['layout_dets']:
145
+ if item['anno_id'] not in truncated_all.keys():
146
+ saved_element_dict[item["category_type"]].append(item)
147
+ else:
148
+ truncated_all[item['anno_id']] = item
149
+
150
+ for merge_list in related_truncated:
151
+ text_block_list = [truncated_all[key] for key in merge_list]
152
+ sorted_block = sorted(text_block_list, key=lambda x: x['order'])
153
+ text = ""
154
+ for block in sorted_block:
155
+ text += block['text']
156
+ merged_block = {
157
+ "category_type": sorted_block[0]["category_type"], # Directly use information from the first block
158
+ "order": sorted_block[0]["order"],
159
+ "anno_id": sorted_block[0]["anno_id"],
160
+ "text": text,
161
+ "merge_list": sorted_block
162
+ }
163
+ saved_element_dict[sorted_block[0]["category_type"]].append(merged_block)
164
+
165
+ return saved_element_dict
166
+
167
+ def get_page_elements_list(self, gt_page_elements, category_list):
168
+ element_list = []
169
+ for category_type in category_list:
170
+ if gt_page_elements.get(category_type):
171
+ element_list.extend(gt_page_elements[category_type])
172
+ return element_list
173
+
174
+ def get_sorted_text_list(self, selected_annos):
175
+ # txt_type: text, latex, html
176
+ text_list = []
177
+ for item in selected_annos:
178
+ if item.get('order'):
179
+ order = item['order']
180
+ else:
181
+ order = 0
182
+ # [txt_type, selected_annos]
183
+ text_list.append((order, item))
184
+ sorted_text_list = sorted(text_list, key=lambda x: x[0])
185
+ return [_[1] for _ in sorted_text_list]
186
+
187
+ def filtered_out_ignore(self, items, ignore_category_list):
188
+ filted_items = []
189
+ for item in items:
190
+ if item['gt_category_type'] not in ignore_category_list:
191
+ filted_items.append(item)
192
+ return filted_items
193
+
194
+ def get_order_paired(self, order_match_s, img_name):
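+ # Reading-order score: build the GT order and the order implied by the matched predictions,
+ # then compute a normalized edit distance between the two index sequences.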
195
+ matched = [(item['gt_position'], item['pred_position']) for item in order_match_s if (item['gt_position'] != [""] and item['pred_position'] != "")]
196
+ gt_idx_all = [item['gt_position'] for item in order_match_s if (item['gt_position'] != [""])]
197
+ read_order_pred = [i[0] for i in sorted(matched, key=lambda x: x[1])]
198
+ read_order_gt = sum(gt_idx_all, []) # Convert to one-dimensional list
199
+ read_order_gt = [x for x in read_order_gt if x]
200
+ gt = sorted(read_order_gt)
201
+ pred = sum(read_order_pred, [])
202
+ pred = [x for x in pred if x]
203
+ if len(pred) > 0 or len(gt) > 0:
204
+ import Levenshtein
205
+ edit = Levenshtein.distance(gt, pred)/ max(len(pred), len(gt))
206
+ return {
207
+ 'gt': gt,
208
+ 'pred': pred,
209
+ 'img_id': img_name,
210
+ 'edit': edit
211
+ }
212
+ else:
213
+ return {} # If both GT and pred are empty for the page, return empty
214
+
215
+ def formula_format(self, formula_matches, img_name):
216
+ # formated_list = []
217
+ for i, item in enumerate(formula_matches):
218
+ item["img_id"] = img_name + '_' + str(i)
219
+ return formula_matches
220
+
221
+ def get_matched_elements(self,references:list,predictions:list)->dict:
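+ # For every page, match GT elements against the parsed prediction and collect matches per element
+ # type; the table format (latex vs. html) is chosen by whichever yields more matched tables.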
222
+ from .metrics import recogition_end2end_base_dataset, recogition_end2end_table_dataset
223
+
224
+ plain_text_match = []
225
+ display_formula_match = []
226
+ html_table_match = []
227
+ latex_table_match = []
228
+ order_match = []
229
+
230
+
231
+ for i,sample in enumerate(references):
232
+ img_name = os.path.basename(sample["page_info"]["image_path"])
233
+ pred_content = predictions[i]
234
+ result = self.process_get_matched_elements(sample, pred_content, img_name)
235
+ [plain_text_match_clean, formated_display_formula, latex_table_match_s, html_table_match_s, order_match_single] = result
236
+
237
+ if order_match_single:
238
+ order_match.append(order_match_single)
239
+ if plain_text_match_clean:
240
+ plain_text_match.extend(plain_text_match_clean)
241
+ if formated_display_formula:
242
+ display_formula_match.extend(formated_display_formula)
243
+ if latex_table_match_s:
244
+ latex_table_match.extend(latex_table_match_s)
245
+ if html_table_match_s:
246
+ html_table_match.extend(html_table_match_s)
247
+
248
+ if len(latex_table_match) > len(html_table_match):
249
+ table_match = latex_table_match
250
+ table_format = 'latex'
251
+ else:
252
+ table_match = html_table_match
253
+ table_format = 'html'
254
+
255
+ matched_samples_all = {
256
+ "text_block": recogition_end2end_base_dataset(plain_text_match),
257
+ "display_formula": recogition_end2end_base_dataset(display_formula_match),
258
+ "table": recogition_end2end_table_dataset(table_match, table_format),
259
+ "reading_order": recogition_end2end_base_dataset(order_match)
260
+ }
261
+
262
+ return matched_samples_all
263
+
264
+ def process_get_matched_elements(self, sample, pred_content, img_name):
265
+ from .utils import match_gt2pred_simple, match_gt2pred_no_split, match_gt2pred_quick, md_tex_filter
266
+ from func_timeout import FunctionTimedOut, func_timeout
267
+
268
+ if self.match_method == 'simple_match': # add match choice
269
+ match_gt2pred = match_gt2pred_simple
270
+ elif self.match_method == 'quick_match':
271
+ match_gt2pred = match_gt2pred_quick
272
+ elif self.match_method == 'no_split':
273
+ match_gt2pred = match_gt2pred_no_split
274
+ else:
275
+ # print('Invalid match method name. The quick_match will be used.')
276
+ match_gt2pred = match_gt2pred_quick
277
+
278
+ pred_dataset = md_tex_filter(pred_content)
279
+ gt_page_elements = self.get_page_elements(sample)
280
+
281
+ text_all = self.get_page_elements_list(gt_page_elements, ['text_block', 'title', 'code_txt', 'code_txt_caption', 'reference', 'equation_caption',
282
+ 'figure_caption', 'figure_footnote', 'table_caption', 'table_footnote', 'code_algorithm', 'code_algorithm_caption',
283
+ 'header', 'footer', 'page_footnote', 'page_number'])
284
+
285
+
286
+ display_formula_match_s = []
287
+ plain_text_match_clean = []
288
+ latex_table_match_s = []
289
+ html_table_match_s = []
290
+ order_match_single = []
291
+ if text_all:
292
+ gt_text_list = self.get_sorted_text_list(text_all)
293
+ try:
294
+ plain_text_match_s = func_timeout(
295
+ 30, match_gt2pred, args=(gt_text_list, pred_dataset['text_all'], 'text', img_name)
296
+ )
297
+ except FunctionTimedOut as e1:
298
+ print(f'Time out for plain text match of {img_name}, match_gt2pred_simple will be used.')
299
+ plain_text_match_s = match_gt2pred_simple(gt_text_list, pred_dataset['text_all'], 'text', img_name)
300
+ except Exception as e:
301
+ print(str(e))
302
+ sys.exit()
303
+
304
+ if not plain_text_match_s:
305
+ print(f'No text match of {img_name}. The plain text match will be empty.')
306
+ else:
307
+ plain_text_match_clean = self.filtered_out_ignore(plain_text_match_s, ['figure_caption', 'figure_footnote', 'table_caption', 'table_footnote', 'code_algorithm', 'code_algorithm_caption', 'header', 'footer', 'page_footnote', 'page_number', 'equation_caption'])
308
+
309
+
310
+ if gt_page_elements.get('equation_isolated'):
311
+ gt_display_list = self.get_sorted_text_list(gt_page_elements['equation_isolated'])
312
+ display_formula_match_s = match_gt2pred(gt_display_list, pred_dataset['equation_isolated'], 'formula', img_name)
313
+ display_formula_match_s = [x for x in display_formula_match_s if x['gt_idx'] != [""]]
314
+ if not display_formula_match_s:
315
+ print(f'No display_formula_match of {img_name}. The display_formula_match will be empty.')
316
+
317
+ if gt_page_elements.get('table'):
318
+ gt_table_list = self.get_sorted_text_list(gt_page_elements['table'])
319
+ if pred_dataset['latex_table']:
320
+ latex_table_match_s = match_gt2pred_simple(gt_table_list, pred_dataset['latex_table'], 'latex_table', img_name)
321
+ latex_table_match_s = [x for x in latex_table_match_s if x['gt_idx'] != [""]]
322
+ if pred_dataset['html_table']:
323
+ html_table_match_s = match_gt2pred_simple(gt_table_list, pred_dataset['html_table'], 'html_table', img_name)
324
+ html_table_match_s = [x for x in html_table_match_s if x['gt_idx'] != [""]]
325
+ else:
326
+ html_table_match_s = match_gt2pred_simple(gt_table_list, [], 'html_table', img_name)
327
+ html_table_match_s = [x for x in html_table_match_s if x['gt_idx'] != [""]]
328
+
329
+
330
+ order_match_s = plain_text_match_clean
331
+ if order_match_s:
332
+ order_match_single = self.get_order_paired(order_match_s, img_name)
333
+
334
+ return [plain_text_match_clean, display_formula_match_s, latex_table_match_s, html_table_match_s, order_match_single]
335
+
336
+ def process_generated_metric_results(self,samples,save_name:str='end2end_quick_match'):
337
+ from .metrics import show_result, get_full_labels_results, get_page_split, METRIC_REGISTRY
338
+
339
+ result_all={}
340
+ page_info={}
341
+ metircs_dict=self.dafault_metircs_dict
342
+ pages=self.references #gt_samples list
343
+
344
+ for page in pages:
345
+ img_path=os.path.basename(page['page_info']['image_path'])
346
+ page_info[img_path]=page['page_info']['page_attribute']
347
+
348
+ for element in metircs_dict.keys():
349
+
350
+ result={}
351
+ group_info=metircs_dict[element].get('group',[])
352
+ # samples = samples.get(element) ##
353
+ cur_samples = samples[element]
354
+
355
+ for metric in metircs_dict[element]['metric']:
356
+ metric_val = METRIC_REGISTRY.get(metric)
357
+
358
+ cur_samples,result_s = metric_val(cur_samples).evaluate(group_info, f"{save_name}_{element}")
359
+ if result_s:
360
+ result.update(result_s)
361
+
362
+ if result:
363
+ print(f"{element}")
364
+ show_result(result)
365
+ result_all[element]={}
366
+
367
+
368
+ group_result=get_full_labels_results(cur_samples)
369
+ page_result=get_page_split(cur_samples,page_info)
370
+
371
+ result_all[element]={
372
+ 'all':result,
373
+ 'group':group_result,
374
+ 'page':page_result
375
+ }
376
+ if isinstance(cur_samples,list):
377
+ saved_samples=cur_samples
378
+ else:
379
+ saved_samples=cur_samples.samples
380
+ # NOTE: The original code has a bug here, it will overwrite the result file in each iteration.
381
+ # I will fix it by adding element to the filename.
382
+ # NOTE: Fixed typo .josn -> .json
383
+ result_file = get_intermediate_file_path(self.eval_file, f'_{save_name}_{element}_result', 'json')
384
+ dump(saved_samples, result_file)
385
+
386
+ metric_result_file = get_intermediate_file_path(self.eval_file, f'_{save_name}_metric_result', 'json')
387
+ dump(result_all, metric_result_file)
388
+
389
+ dict_list = []
390
+ save_dict={}
391
+ en_overall=[]
392
+ ch_overall=[]
393
+ for category_type, metric in [("text_block", "Edit_dist"), ("display_formula", "Edit_dist"), ("display_formula", "CDM"), ("table", "TEDS"), ("table", "Edit_dist"), ("reading_order", "Edit_dist")]:
394
+ if metric == 'CDM':
395
+ save_dict[category_type+'_'+metric+'_EN'] = '-'
396
+ save_dict[category_type+'_'+metric+'_CH'] = '-'
397
+ elif metric == "TEDS":
398
+ save_dict[category_type+'_'+metric+'_EN'] = result_all[category_type]["page"][metric]["language: english"] * 100
399
+ save_dict[category_type+'_'+metric+'_CH'] = result_all[category_type]["page"][metric]["language: simplified_chinese"] * 100
400
+ else:
401
+ save_dict[category_type+'_'+metric+'_EN'] = result_all[category_type]["page"][metric].get("language: english", np.nan)
402
+ save_dict[category_type+'_'+metric+'_CH'] = result_all[category_type]["page"][metric].get("language: simplified_chinese",np.nan)
403
+ if metric == "Edit_dist":
404
+ en_overall.append(result_all[category_type]["page"][metric].get("language: english", np.nan))
405
+ ch_overall.append(result_all[category_type]["page"][metric].get("language: simplified_chinese",np.nan))
406
+
407
+ save_dict['overall_EN'] = sum(en_overall) / len(en_overall)
408
+ save_dict['overall_CH'] = sum(ch_overall) / len(ch_overall)
409
+ dict_list.append(save_dict)
410
+ df = pd.DataFrame(dict_list,index=['end2end',]).round(3)
411
+
412
+ e2e_eval_file = get_intermediate_file_path(self.eval_file, '_End2End_Evaluation', 'json')
413
+ dump(result_all, e2e_eval_file)
414
+
415
+ overall_file = get_intermediate_file_path(self.eval_file, '_overall')
416
+ dump(df, overall_file)
417
+
418
+ print(f"The save path of End2End_Evaluation is: {e2e_eval_file}")
419
+ print(f"The save path of overall metrics is: {overall_file}")
420
+ return df
421
+
422
+
423
+ class table_evalutor():
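+ # Table-only evaluator: pairs each GT table annotation with the page-level prediction and
+ # reports TEDS and Edit_dist, broken down by table attributes.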
424
+ def __init__(self,eval_file,tsv_path):
425
+ self.eval_file = eval_file
426
+ gt_key='html'
427
+ pred_key='pred'
428
+ self.category_filter='table'
429
+ self.category_type='table'
430
+ self.metircs_list=['TEDS','Edit_dist']
431
+ self.gt_samples,self.table_samples=self.load_data(eval_file,tsv_path,pred_key,gt_key)
432
+
433
+ def load_data(self,eval_file,gt_file,pred_key,gt_key):
434
+ from .data_preprocess import clean_string, normalized_formula, textblock2unicode, normalized_table
435
+ samples=[]
436
+ preds=[]
437
+ predictions=load(eval_file)['prediction'].tolist()
438
+ gt_samples=load(gt_file)['answer'].tolist()
439
+ load_success,load_fail=0,0
440
+ for i,gt_sample in tqdm(enumerate(gt_samples),desc='Loading data'):
441
+ try:
442
+ ans=json.loads(gt_sample)
443
+ for item in ans['layout_dets']:
444
+ if item['category_type']=="table":
445
+ item['pred']=predictions[i]
446
+ load_success+=1
447
+ preds.append(ans)
448
+
449
+ except json.JSONDecodeError as e:
450
+ load_fail+=1
451
+ continue
452
+ print(f'load_table_success:{load_success},load_table_fail:{load_fail}')
453
+
454
+ count=0
455
+ for pred in preds:
456
+ img_name = os.path.basename(pred['page_info']['image_path'])
457
+ for i, ann in enumerate(pred['layout_dets']):
458
+ if not ann.get(gt_key):
459
+ continue
460
+ if self.category_filter:
461
+ if ann['category_type'] not in self.category_filter:
462
+ continue
463
+ if not ann.get(pred_key):
464
+ # print(f'Cannot find pred for {img_name}. ann is {ann}')
465
+ # pdb.set_trace()
466
+ count += 1
467
+ continue
468
+ else:
469
+ gt_text = ann[gt_key]
470
+ norm_gt = gt_text
471
+ pred_text = ann[pred_key]
472
+ norm_pred = pred_text
473
+ if self.category_type:
474
+ if self.category_type == 'text':
475
+ norm_gt = clean_string(textblock2unicode(ann[gt_key]))
476
+ norm_pred = clean_string(textblock2unicode(ann[pred_key]))
477
+ elif self.category_type == 'formula':
478
+ norm_gt = normalized_formula(ann[gt_key])
479
+ norm_pred = normalized_formula(ann[pred_key])
480
+ elif self.category_type == 'table':
481
+ norm_gt = normalized_table(ann[gt_key], gt_key)
482
+ norm_pred = normalized_table(ann[pred_key], gt_key)
483
+ else:
484
+ raise ValueError(f'Invalid category type: {self.category_type}')
485
+
486
+ samples.append({
487
+ "gt": gt_text,
488
+ "norm_gt": norm_gt,
489
+ "gt_attribute": [ann['attribute']],
490
+ 'pred': pred_text,
491
+ "norm_pred": norm_pred,
492
+ 'img_id': img_name
493
+ })
494
+
495
+ print(f'Cannot find pred for {count} samples.')
496
+ return preds,samples
497
+
498
+ def score(self)->dict:
499
+ metrics=self.process_generated_metric_results()
500
+ return metrics
501
+
502
+ def process_generated_metric_results(self,save_name:str='OmniDocBench_table'):
503
+ from .metrics import show_result, get_full_labels_results, get_page_split, METRIC_REGISTRY
504
+
505
+ p_scores={}
506
+ page_info={}
507
+ no_page_flag=False
508
+ samples=self.table_samples
509
+ pages=self.gt_samples
510
+
511
+ for page in pages:
512
+ if 'page_info' not in page:
513
+ no_page_flag=True
514
+ break
515
+ img_path=os.path.basename(page['page_info']['image_path'])
516
+ page_info[img_path]=page['page_info']['page_attribute']
517
+
518
+ for metric in self.metircs_list:
519
+ metric_val=METRIC_REGISTRY.get(metric)
520
+ samples, result = metric_val(samples).evaluate({}, save_name)
521
+ if result:
522
+ p_scores.update(result)
523
+ show_result(p_scores)
524
+ group_result=get_full_labels_results(samples)
525
+ if no_page_flag:
526
+ page_result={}
527
+ else:
528
+ page_result=get_page_split(samples,page_info)
529
+
530
+ result_all={
531
+ 'all':p_scores,
532
+ 'group':group_result,
533
+ 'page':page_result
534
+ }
535
+
536
+ metric_result_file = get_intermediate_file_path(self.eval_file, f'_{save_name}_metric_result', 'json')
537
+ dump(result_all, metric_result_file)
538
+
539
+ dict_list=[]
540
+ dict_list.append(result_all["group"]["TEDS"])
541
+
542
+ df4 = pd.DataFrame(dict_list, index=['OmniDocBench_table'])
543
+ df4 = df4 * 100
544
+ df4 = df4.round(1)
545
+ selected_columns = df4[["language: table_en", "language: table_simplified_chinese", "language: table_en_ch_mixed", "line: full_line", "line: less_line", "line: fewer_line", "line: wireless_line",
546
+ "with_span: True", "with_span: False", "include_equation: True", "include_equation: False", "include_background: True", "include_background: False", "table_layout: vertical", "table_layout: horizontal"]]
547
+
548
+ table_attr_file = get_intermediate_file_path(self.eval_file, '_table_attribute')
549
+ dump(selected_columns, table_attr_file)
550
+ print(f'The save path of table_attribute is :{table_attr_file}')
551
+ return selected_columns
VLMEvalKit-sudoku/vlmeval/dataset/mmmath.py ADDED
@@ -0,0 +1,459 @@
1
+ import re
2
+ import json
3
+
4
+ import numpy as np
5
+ import sys
6
+ import math
7
+ import os
8
+ import argparse
9
+ import timeout_decorator
10
+ import logging
11
+
12
+ from .image_base import ImageBaseDataset
13
+ from ..utils import track_progress_rich
14
+ from ..smp import load, dump, get_intermediate_file_path
15
+
16
+ try:
17
+ import sympy as sp
18
+ from sympy import simplify, Eq, sympify, Pow, pi
19
+ from sympy.parsing.latex import parse_latex
20
+ except ImportError:
21
+ logging.warning('sympy is not installed, please install it for MM-Math evaluation.')
22
+
23
+
24
+ class AutoScoringJudge:
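+ # Rule-based answer judge for MM-Math: normalizes LaTeX answers (boxed content, units, pi,
+ # plus/minus signs), then checks equality numerically, symbolically via sympy, or as intervals/equations.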
25
+ def __init__(self):
26
+ # Map of special symbols to their replacements
27
+ self.special_signal_map = {
28
+ "\\left": "",
29
+ "\\right": "",
30
+ "厘米":"",
31
+ # "∶": ":",
32
+ ",": ",",
33
+ "$": "",
34
+ "(":"(",
35
+ ")":")",
36
+ "\\infty":"oo",
37
+ "\\colon ":":",
38
+ # "\\approx": "=",
39
+ # "\\simeq": "=",
40
+ # "\\sim": "=",
41
+ # "^\\prime": "'",
42
+ # "^{\\prime}": "'",
43
+ "+":"+",
44
+ "\\, ": "",
45
+ "\\,":"",
46
+ "^\\circ": "",
47
+ "^{\\circ}": "",
48
+ # "%": "",
49
+ }
50
+ self.pi = parse_latex("\\pi")
51
+ # MM-Math default precision
52
+ self.precision = 1e-2
53
+
54
+ def trans_greater_sign_to_interval(self, expr:str):
55
+ expr_tmp = expr.split("<")
56
+ return "(" + expr_tmp[0] + ", " + expr_tmp[-1] + ")"
57
+
58
+ def split_by_comma(self, expr: str):
59
+ # Splits expressions by commas outside of brackets
60
+ in_bracket_num = 0
61
+ splitted_expr = []
62
+ start_idx = 0
63
+ for i, char in enumerate(expr):
64
+ if char in ["(", "["]:
65
+ in_bracket_num += 1
66
+ elif char in [")", "]"]:
67
+ in_bracket_num -= 1
68
+ elif char == "," and in_bracket_num == 0:
69
+ splitted_expr.append(expr[start_idx:i].strip())
70
+ start_idx = i + 1
71
+
72
+ if start_idx < len(expr):
73
+ splitted_expr.append(expr[start_idx:].strip())
74
+
75
+ return splitted_expr
76
+
77
+ def trans_plus_minus_sign(self, expr_list: list):
78
+ # Translates plus-minus signs into separate expressions
79
+ new_expr_list = []
80
+ for expr in expr_list:
81
+ if "\\pm" in expr:
82
+ new_expr_list.append(expr.replace("\\pm", "+"))
83
+ new_expr_list.append(expr.replace("\\pm", "-"))
84
+ else:
85
+ new_expr_list.append(expr)
86
+
87
+ return new_expr_list
88
+
89
+ def judge(self, expression1, expression2, precision=1e-2):
90
+ # Judge if two expressions are equal (expression1 is considered as the Ground Truth)
91
+ # Default precision is a list for supporting multiple expressions
92
+ precision = precision if isinstance(precision, list) else [precision]
93
+
94
+ try:
95
+ expression1, expression2 = self.preprocess(expression1, expression2)
96
+ except:
97
+ return False
98
+ if expression1 == expression2:
99
+ # print("Exactly equal")
100
+ return True
101
+
102
+ # Remove Chinese characters from the string, as answers like "yes" or "no" in Chinese have been considered
103
+ expression1 = expression1 if re.fullmatch(r"[\u4e00-\u9fff]+", expression1) else re.sub(r'[\u4e00-\u9fff]+', '', expression1) # noqa: E501
104
+ expression2 = expression2 if re.fullmatch(r'[\u4e00-\u9fff]+', expression2) else re.sub(r'[\u4e00-\u9fff]+', '', expression2) # noqa: E501
105
+ # Check if two < or > in expression
106
+ if self.is_two_greater_sign(expression1):
107
+ expression1 = self.trans_greater_sign_to_interval(expression1)
108
+
109
+ if self.is_two_greater_sign(expression2):
110
+ expression2 = self.trans_greater_sign_to_interval(expression2)
111
+
112
+ expression1 = self.split_by_comma(expression1)
113
+ expression2 = self.split_by_comma(expression2)
114
+
115
+ temp_list1 = self.trans_plus_minus_sign(expression1)
116
+ temp_list2 = self.trans_plus_minus_sign(expression2)
117
+
118
+ # Set up a list for allowed errors
119
+ if len(precision) <= 1:
120
+ precision = precision * len(temp_list1)
121
+
122
+ if len(temp_list1) != len(temp_list2):
123
+ return False
124
+
125
+ # Check if elements in both lists can be paired and are equal
126
+ idx = -1
127
+ while len(temp_list1) != 0:
128
+ idx = (idx + 1) % len(temp_list1)
129
+
130
+ item1 = temp_list1[idx]
131
+ self.precision = precision[idx]
132
+
133
+ for item2 in temp_list2:
134
+ try:
135
+ if self.is_equal(item1, item2):
136
+ temp_list1.remove(item1)
137
+ temp_list2.remove(item2)
138
+ precision.remove(self.precision)
139
+ break
140
+ except Exception as err:
141
+ logging.warning(f'{type(err)}: {err}')
142
+ continue
143
+ else:
144
+ # If no match was found, return False
145
+ return False
146
+
147
+ # If all elements are matched, return True
148
+ return True
149
+
150
+ def is_interval(self, expr):
151
+ # Checks if an expression is an interval
152
+ return expr.startswith(("(", "[")) and expr.endswith((")", "]"))
153
+
154
+ def is_two_greater_sign(self, expr):
155
+ match = re.findall(r'<', expr)
156
+ return len(match) == 2
157
+
158
+ def sympy_sub_pi(self, expression_sympy):
159
+ # Replaces the symbol for pi in sympy expressions with its numerical value
160
+ return expression_sympy.subs(self.pi, math.pi)
161
+
162
+ # Set timeout to 30 seconds for is_equal
163
+ @timeout_decorator.timeout(30)
164
+ def is_equal(self, expression1, expression2):
165
+ # Default first expression is ground truth. Check if expressions are equal in different aspects
166
+ if expression1 == expression2 and expression1 != "" and expression2 != "":
167
+ # print("Equivalent natively")
168
+ return True
169
+
170
+ # First check if both are intervals
171
+ if self.is_interval(expression1) and self.is_interval(expression2):
172
+ try:
173
+ if self.interval_equal(expression1, expression2):
174
+ # print("Interval equivalent")
175
+ return True
176
+ except:
177
+ return False
178
+
179
+ # Then check for numerical equality
180
+ try:
181
+ if self.numerical_equal(expression1, expression2):
182
+ # print("Numerically equivalent")
183
+ return True
184
+ except:
185
+ pass
186
+ # Then check if expressions are mathematically equal
187
+ try:
188
+ if self.expression_equal(expression1, expression2) and not ("=" in expression1 and "=" in expression2):
189
+ # print("Expression equivalent")
190
+ return True
191
+ except:
192
+ pass
193
+
194
+ # Lastly, check for equation equality
195
+ try:
196
+ if self.equation_equal(expression1, expression2):
197
+ # print("Equation equivalent")
198
+ return True
199
+ except:
200
+ pass
201
+
202
+ return False
203
+
204
+ def numerical_equal(self, expression1: str, expression2: str, include_percentage: bool = True):
205
+ # Check if two numerical values are equal within an allowed error range
206
+ # Includes possible percentage cases
207
+ reference = float(expression1)
208
+ prediction = float(expression2)
209
+
210
+ if include_percentage:
211
+ gt_result = [reference / 100, reference, reference * 100]
212
+ else:
213
+ gt_result = [reference]
214
+
215
+ for item in gt_result:
216
+ if abs(item - prediction) <= self.precision * 1.01:
217
+ return True
218
+ return False
219
+
220
+ def expression_equal(self, exp1, exp2):
221
+ # Check if two expressions are mathematically equivalent
222
+ # Extract expression and use sympy for equivalence checking
223
+ def extract_expression(expression):
224
+ if "=" in expression:
225
+ expression = expression.split("=")[1]
226
+ return expression.strip()
227
+
228
+ exp1 = extract_expression(exp1)
229
+ exp2 = extract_expression(exp2)
230
+
231
+ exp_too_long = len(exp1) > 300 or len(exp2) > 300
232
+
233
+ expr1_sym = sympify(parse_latex(exp1))
234
+ expr2_sym = sympify(parse_latex(exp2))
235
+ if expr1_sym == expr2_sym:
236
+ return True
237
+ else:
238
+ expr1_sym = self.sympy_sub_pi(expr1_sym)
239
+ expr2_sym = self.sympy_sub_pi(expr2_sym)
240
+
241
+ if (expr1_sym.has(sp.Symbol) and not expr2_sym.has(sp.Symbol)) or \
242
+ (not expr1_sym.has(sp.Symbol) and expr2_sym.has(sp.Symbol)):
243
+ return False
244
+ elif not expr1_sym.has(sp.Symbol) and not expr2_sym.has(sp.Symbol):
245
+ try:
246
+ if not (self.can_compute_power(expr1_sym) and self.can_compute_power(expr2_sym)):
247
+ print("These two numbers cannot be calculated by the current computer for: "
248
+ f"\"{str(expr1_sym)}\" and \"{str(expr2_sym)}\"")
249
+ return False
250
+ if exp_too_long:
251
+ print(f'Expression {exp1} or {exp2} is too long to compute. ')
252
+ return False
253
+ if abs(expr1_sym.evalf() - expr2_sym.evalf()) <= self.precision * 1.01:
254
+ return True
255
+ else:
256
+ return False
257
+ except:
258
+ return False
259
+ elif exp_too_long:
260
+ print(f'Expression {exp1} or {exp2} is too long to compute. ')
261
+ return False
262
+ else:
263
+ try:
264
+ simplified_expr = simplify(expr1_sym - expr2_sym)
265
+ num_value = simplified_expr.evalf()
266
+ return abs(num_value) < 1e-3
267
+ except:
268
+ return False
269
+
270
+ def equation_equal(self, expression1, expression2):
271
+ # Check if two equations are mathematically equivalent
272
+ # Simplify equations and use sympy for equivalence checking
273
+ def simplify_equation(latex_eq):
274
+ lhs, rhs = latex_eq.split('=')
275
+
276
+ lhs_expr = parse_latex(lhs)
277
+ rhs_expr = parse_latex(rhs)
278
+
279
+ equation = Eq(lhs_expr, rhs_expr)
280
+
281
+ simplified_eq = simplify(equation.lhs - equation.rhs)
282
+
283
+ return simplified_eq
284
+
285
+ expr1_sym = simplify_equation(expression1)
286
+ expr2_sym = simplify_equation(expression2)
287
+
288
+ division_result_1 = simplify(expr1_sym / expr2_sym)
289
+ division_result_2 = simplify(expr2_sym / expr1_sym)
290
+
291
+ if ((division_result_1.is_Integer and division_result_1 != 0) or # noqa: W504
292
+ (division_result_2.is_Integer and division_result_2 != 0)):
293
+ return True
294
+ else:
295
+ return False
296
+
297
+ def interval_equal(self, expression1, expression2):
298
+ # Check if two intervals are mathematically equivalent
299
+ def compare_two_interval(inter1, inter2):
300
+ if inter1[0] != inter2[0] or inter1[-1] != inter2[-1]:
301
+ return False
302
+
303
+ inter1 = inter1.strip('[]()')
304
+ inter2 = inter2.strip('[]()')
305
+
306
+ items_1 = inter1.split(',')
307
+ items_2 = inter2.split(',')
308
+
309
+ for item_1, item_2 in zip(items_1, items_2):
310
+ if not self.expression_equal(item_1, item_2):
311
+ return False
312
+ return True
313
+
314
+ interval1 = expression1
315
+ interval2 = expression2
316
+
317
+ if interval1 == interval2:
318
+ return True
319
+ else:
320
+ inter_list1 = interval1.split("\\cup")
321
+ inter_list2 = interval2.split("\\cup")
322
+
323
+ if len(inter_list1) != len(inter_list2):
324
+ return False
325
+ else:
326
+ for inter1, inter2 in zip(inter_list1, inter_list2):
327
+ if not compare_two_interval(inter1, inter2):
328
+ return False
329
+ return True
330
+
331
+ def preprocess(self, expression1, expression2):
332
+ # Preprocess expressions to extract and replace special symbols
333
+ def extract_boxed_content(latex_str):
334
+ boxed_matches = re.finditer(r'\\boxed{', latex_str)
335
+ results = ""
336
+
337
+ for match in boxed_matches:
338
+ start_index = match.end()
339
+ end_index = start_index
340
+ stack = 1
341
+
342
+ while stack > 0 and end_index < len(latex_str):
343
+ if latex_str[end_index] == '{':
344
+ stack += 1
345
+ elif latex_str[end_index] == '}':
346
+ stack -= 1
347
+ end_index += 1
348
+
349
+ if stack == 0:
350
+ content = latex_str[start_index:end_index - 1]
351
+ results += content + ","
352
+ else:
353
+ raise ValueError("Mismatched braces in LaTeX string.")
354
+
355
+ if results == "":
356
+ last_line_ans = latex_str.strip().split("\n")[-1]
357
+ dollar_pattern = r"\$(.*?)\$"
358
+ answers = re.findall(dollar_pattern, last_line_ans)
359
+
360
+ if answers:
361
+ for ans in answers:
362
+ results += ans + ","
363
+ else:
364
+ results = latex_str
365
+
366
+ return results
367
+
368
+ def sepcial_symbol_replace(expression):
369
+
370
+ expression = expression.replace("\\text{cm}^2", '').replace("\\text{cm}", "").replace("\\,cm", '').replace("\\text{ cm}", '').replace("cm", '').replace("\\text{分米}^2", '').replace("cm^{2}", '').replace("60 \\text{ cm}^2",'').replace("\\ \\text{m}", "").replace("\\text{米}","").strip() # noqa: E501
371
+
372
+ expression = re.sub(r"(.+)m$", r"\1", expression)
373
+
374
+ if "\\in " in expression:
375
+ expression = expression.split("\\in ")[1]
376
+
377
+ for signal in self.special_signal_map:
378
+ expression = expression.replace(signal, self.special_signal_map[signal])
379
+
380
+ expression = re.sub(r'(\\sin|\\cos|\\tan)(\d+)', r'\1((\2/180)\\pi)', expression)
381
+
382
+ expression = expression.strip("\n,.:;^_=+`!@#%^&*~,。")
383
+
384
+ pattern = r'\\(?:mathrm|mathbf)\{~?([^}]*)\}'
385
+ expression = re.sub(pattern, r'\1', expression)
386
+
387
+ return expression
388
+
389
+ exp1, exp2 = extract_boxed_content(expression1), extract_boxed_content(expression2)
390
+
391
+ exp1, exp2 = sepcial_symbol_replace(exp1), sepcial_symbol_replace(exp2)
392
+
393
+ return exp1, exp2
394
+
395
+ def can_compute_power(self, expr):
396
+ # Checks if a power expression can be computed
397
+ if isinstance(expr, Pow):
398
+ base, exp = expr.as_base_exp()
399
+ if base.is_number and exp.is_number:
400
+ MAX_EXP = 1000 # Adjust based on computing environment
401
+ if abs(exp.evalf()) > MAX_EXP:
402
+ return False
403
+ else:
404
+ return True
405
+ else:
406
+ return False
407
+ else:
408
+ return True # Not a power expression, can compute
409
+
410
+
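+ # Illustrative usage (sketch): the judge treats the first argument as ground truth, e.g.
+ #   AutoScoringJudge().judge("\\frac{1}{2}", "0.5")  # True within the default 1e-2 precision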
411
+ class MMMath(ImageBaseDataset):
412
+
413
+ TYPE = 'VQA'
414
+
415
+ DATASET_URL = {
416
+ 'MM-Math': 'https://opencompass.openxlab.space/utils/VLMEval/MM-Math.tsv',
417
+ }
418
+ DATASET_MD5 = {
419
+ 'MM-Math': '1f064ed7c4e0e8926a3fa65849419ca5',
420
+ }
421
+
422
+ @classmethod
423
+ def evaluate(self, eval_file, **kwargs):
424
+
425
+ data = load(eval_file)
426
+ judger = AutoScoringJudge()
427
+ func = judger.judge
428
+
429
+ tups = [dict(expression1=x, expression2=y) for x, y in zip(data['answer'], data['prediction'])]
430
+
431
+ res = track_progress_rich(func, tups, nproc=16)
432
+ data['hit'] = res
433
+ dump(data, eval_file)
434
+
435
+ score_file = get_intermediate_file_path(eval_file, '_score', 'json')
436
+ score = {}
437
+ score['overall'] = np.mean(data['hit'])
438
+ # Results by Difficulty
439
+ difficulties = set(data['difficulty'])
440
+ for d in difficulties:
441
+ score[f'Difficulty-{d}'] = np.mean(data[data['difficulty'] == d]['hit'])
442
+
443
+ # Results by Year
444
+ years = set(data['year'])
445
+ for y in years:
446
+ score[f'Year-{y}'] = np.mean(data[data['year'] == y]['hit'])
447
+
448
+ # Results by Knowledge-L1
449
+ points = set(data['knowledge_l1'])
450
+ for p in points:
451
+ score[f'Knowledge-L1-{p}'] = np.mean(data[data['knowledge_l1'] == p]['hit'])
452
+
453
+ # Results by Knowledge-L2
454
+ points = set(data['knowledge_l2'])
455
+ for p in points:
456
+ score[f'Knowledge-L2-{p}'] = np.mean(data[data['knowledge_l2'] == p]['hit'])
457
+
458
+ dump(score, score_file)
459
+ return score
VLMEvalKit-sudoku/vlmeval/dataset/utils/bmmr_grade.py ADDED
@@ -0,0 +1,470 @@
1
+ # flake8: noqa
2
+ # Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ # Copyright (c) Microsoft Corporation.
17
+ #
18
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
19
+ # of this software and associated documentation files (the "Software"), to deal
20
+ # in the Software without restriction, including without limitation the rights
21
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
22
+ # copies of the Software, and to permit persons to whom the Software is
23
+ # furnished to do so, subject to the following conditions:
24
+ #
25
+ # The above copyright notice and this permission notice shall be included in all
26
+ # copies or substantial portions of the Software.
27
+ #
28
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
29
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
30
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
31
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
32
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
33
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
34
+ # SOFTWARE
35
+
36
+ # Copyright (c) 2023 OpenAI
37
+ #
38
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
39
+ # of this software and associated documentation files (the "Software"), to deal
40
+ # in the Software without restriction, including without limitation the rights
41
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
42
+ # copies of the Software, and to permit persons to whom the Software is
43
+ # furnished to do so, subject to the following conditions:
44
+
45
+ # The above copyright notice and this permission notice shall be included in all
46
+ # copies or substantial portions of the Software.
47
+ #
48
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
49
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
50
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
51
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
52
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
53
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
54
+ # SOFTWARE.
55
+
56
+ # Copyright (c) 2021 Dan Hendrycks
57
+ #
58
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
59
+ # of this software and associated documentation files (the "Software"), to deal
60
+ # in the Software without restriction, including without limitation the rights
61
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
62
+ # copies of the Software, and to permit persons to whom the Software is
63
+ # furnished to do so, subject to the following conditions:
64
+ #
65
+ # The above copyright notice and this permission notice shall be included in all
66
+ # copies or substantial portions of the Software.
67
+ #
68
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
69
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
70
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
71
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
72
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
73
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
74
+ # SOFTWARE.
75
+
76
+
77
+ """
78
+ This logic is largely copied from Hendrycks' MATH release (math_equivalence) and borrows from:
79
+ - https://github.com/microsoft/ToRA/blob/main/src/eval/grader.py
80
+ - https://github.com/microsoft/ProphetNet/tree/master/CRITIC
81
+ - https://github.com/openai/prm800k
82
+ """
83
+
84
+
85
+ import contextlib
86
+ import re
87
+ import signal
88
+ import math
89
+ from math import isclose
90
+ from typing import Union
91
+
92
+ import sympy
93
+ from sympy import N, simplify
94
+ from sympy.parsing.latex import parse_latex
95
+ from sympy.parsing.sympy_parser import parse_expr
96
+
97
+
98
+ def is_digit(s):
99
+ try:
100
+ if "{,}" in str(s):
101
+ num = float(str(s).replace("{,}", ""))
102
+ return True, num
103
+
104
+ num = float(str(s).replace(",", ""))
105
+ return True, num
106
+ except ValueError:
107
+ return False, None
108
+
109
+
110
+ def normalize(answer, pi) -> str:
111
+ # checking if answer is $<number> and removing $ in that case to compare
112
+ if isinstance(answer, str) and bool(re.match(r'\$\d+(\.\d+)?', answer)):
113
+ return answer[1:]
114
+
115
+ # checking if answer is <number>% or <number>\\% and removing %
116
+ if isinstance(answer, str) and (
117
+ bool(re.match(r'^\d+(\.\d+)?%$', answer)) or bool(re.match(r'^\d+(\.\d+)?\\%$', answer))
118
+ ):
119
+ return answer.replace("\\%", "").replace("%", "")
120
+
121
+ # handle base
122
+ answer = handle_base(answer)
123
+
124
+ # handle pi
125
+ answer = handle_pi(answer, pi)
126
+
127
+ return answer
128
+
129
+
130
+ def handle_base(x) -> str:
131
+ if isinstance(x, str) and "_" in x:
132
+ try:
133
+ # Due to base
134
+ x = x.split("_")[0]
135
+ x = float(x)
136
+ return int(x)
137
+ except:
138
+ pass
139
+ return x
140
+
141
+
142
+ def handle_pi(string, pi):
143
+
144
+ if isinstance(string, str) and "\pi" in string:
145
+ # Find the first occurrence of "\pi"
146
+ idx = string.find("\pi")
147
+
148
+ # Iterate over the string and find all occurrences of "\pi" with a valid previous character
149
+ while idx != -1:
150
+
151
+ if idx > 0 and string[idx - 1].isdigit():
152
+ # Replace "\pi" with "*math.pi" if the previous character is a digit
153
+ string = string[:idx] + f"*{pi}" + string[idx + 3:]
154
+ else:
155
+ # Replace "\pi" with "1*math.pi" if the previous character is not a digit
156
+ string = string[:idx] + f"1*{pi}" + string[idx + 3:]
157
+
158
+ # Find the next occurrence of "\pi"
159
+ idx = string.find("\pi", idx + 1)
160
+
161
+ # Evaluate the expression using eval() function
162
+ try:
163
+ string = eval(string)
164
+ except:
165
+ pass
166
+
167
+ return string
168
+
169
+
170
+ def math_equal(
171
+ prediction: Union[bool, float, str],
172
+ reference: Union[float, str],
173
+ include_percentage: bool = True,
174
+ tolerance: float = 1e-4,
175
+ timeout: float = 10.0,
176
+ pi: float = math.pi
177
+ ) -> bool:
178
+ """
179
+ Exact match of math if and only if:
180
+ 1. numerical equal: both can convert to float and are equal
181
+ 2. symbolic equal: both can convert to sympy expression and are equal
182
+ """
183
+
184
+ prediction = normalize(prediction, pi)
185
+ reference = normalize(reference, pi)
186
+
187
+ if isinstance(prediction, str) and len(prediction) > 1000: # handling weird corner-cases
188
+ prediction = prediction[:1000]
189
+
190
+ # 0. string comparison
191
+ if isinstance(prediction, str) and isinstance(reference, str):
192
+ if prediction.strip().lower() == reference.strip().lower():
193
+ return True
194
+ if prediction.replace(" ", "") == reference.replace(" ", ""):
195
+ return True
196
+
197
+ try: # 1. numerical equal
198
+ if is_digit(prediction)[0] and is_digit(reference)[0]:
199
+ prediction = is_digit(prediction)[1]
200
+ reference = is_digit(reference)[1]
201
+ # number questions
202
+ if include_percentage:
203
+ gt_result = [reference / 100, reference, reference * 100]
204
+ else:
205
+ gt_result = [reference]
206
+ for item in gt_result:
207
+ try:
208
+ if isclose(item, prediction, rel_tol=tolerance):
209
+ return True
210
+ except Exception:
211
+ continue
212
+ return False
213
+ except Exception:
214
+ pass
215
+
216
+ if not prediction and prediction not in [0, False]:
217
+ return False
218
+
219
+ # 2. symbolic equal
220
+ reference = str(reference).strip()
221
+ prediction = str(prediction).strip()
222
+
223
+ # deal with [], (), {}
224
+ prediction = format_intervals(prediction)
225
+
226
+ pred_str, ref_str = prediction, reference
227
+ if (prediction.startswith("[") and prediction.endswith("]") and not reference.startswith("(")) or (
228
+ prediction.startswith("(") and prediction.endswith(")") and not reference.startswith("[")
229
+ ):
230
+ pred_str = pred_str.strip("[]()")
231
+ ref_str = ref_str.strip("[]()")
232
+ for s in ["{", "}", "(", ")"]:
233
+ ref_str = ref_str.replace(s, "")
234
+ pred_str = pred_str.replace(s, "")
235
+ if pred_str == ref_str:
236
+ return True
237
+
238
+ # [a, b] vs. [c, d], return a==c and b==d
239
+ if (
240
+ prediction
241
+ and reference
242
+ and prediction[0] in "(["
243
+ and prediction[-1] in ")]"
244
+ and prediction[0] == reference[0]
245
+ and prediction[-1] == reference[-1]
246
+ ):
247
+ pred_parts = prediction[1:-1].split(",")
248
+ ref_parts = reference[1:-1].split(",")
249
+ if len(pred_parts) == len(ref_parts):
250
+ if all(
251
+ [
252
+ math_equal(pred_pt, ref_pt, include_percentage, tolerance)
253
+ for pred_pt, ref_pt in zip(pred_parts, ref_parts)
254
+ ]
255
+ ):
256
+ return True
257
+
258
+ if "," in prediction and "," in reference:
259
+ pred_parts = [item.strip() for item in prediction.split(",")]
260
+ ref_parts = [item.strip() for item in reference.split(",")]
261
+
262
+ if len(pred_parts) == len(ref_parts):
263
+ if all(
264
+ [
265
+ math_equal(pred_parts[i], ref_parts[i], include_percentage, tolerance)
266
+ for i in range(len(pred_parts))
267
+ ]
268
+ ):
269
+ return True
270
+ else:
271
+ return False
272
+
273
+ # if we have point == tuple of values
274
+ if len(reference) == 0:
275
+ return False
276
+ if prediction.startswith("Point") and reference[0] == "(" and reference[-1] == ")":
277
+ pred_parts = prediction[prediction.find("(") + 1: -1].split(",")
278
+ ref_parts = reference[1:-1].split(",")
279
+ if len(pred_parts) == len(ref_parts):
280
+ if all(
281
+ [
282
+ math_equal(pred_pt, ref_pt, include_percentage, tolerance)
283
+ for pred_pt, ref_pt in zip(pred_parts, ref_parts)
284
+ ]
285
+ ):
286
+ return True
287
+
288
+ # if reference is a matrix
289
+ if "\\begin{pmatrix}" in reference and prediction.startswith("Matrix"):
290
+ try:
291
+ pred_matrix = parse_expr(prediction)
292
+ ref_matrix_items = reference.split()[1:-1:2]
293
+ if len(pred_matrix) == len(ref_matrix_items):
294
+ if all(
295
+ [
296
+ math_equal(pred, ref, include_percentage, tolerance)
297
+ for ref, pred in zip(ref_matrix_items, pred_matrix)
298
+ ]
299
+ ):
300
+ return True
301
+ except Exception:
302
+ pass
303
+ elif "\\begin{pmatrix}" in reference and prediction.startswith("[") and prediction.endswith("]"):
304
+ if isinstance(eval(prediction), list):
305
+ try:
306
+ pred_matrix = eval(prediction)
307
+ # ref_matrix_items = reference.split()[1:-1:2]
308
+ ref_matrix_items = reference.lstrip("\\begin{pmatrix}").rstrip("\\end{pmatrix}")
309
+ ref_matrix_items = ref_matrix_items.split("\\")
310
+ ref_matrix_items = [row.split("&") if "&" in row else row for row in ref_matrix_items]
311
+ if len(pred_matrix) == len(ref_matrix_items):
312
+ if all(
313
+ [
314
+ math_equal(pred, ref, include_percentage, tolerance)
315
+ for ref, pred in zip(ref_matrix_items, pred_matrix)
316
+ ]
317
+ ):
318
+ return True
319
+ except Exception:
320
+ pass
321
+
322
+ return symbolic_equal(prediction, reference, tolerance, timeout)
323
+
324
+
325
+ def symbolic_equal(a, b, tolerance, timeout=10.0):
326
+ def _parse(s):
327
+ for f in [parse_expr, parse_latex]:
328
+ try:
329
+ with time_limit(timeout):
330
+ return f(s)
331
+ except Exception:
332
+ pass
333
+ return s
334
+
335
+ a = _parse(a)
336
+ b = _parse(b)
337
+
338
+ try:
339
+ with time_limit(timeout):
340
+ if simplify(a - b) == 0:
341
+ return True
342
+ except Exception:
343
+ pass
344
+
345
+ try:
346
+ with time_limit(timeout):
347
+ if isclose(N(a), N(b), rel_tol=tolerance):
348
+ return True
349
+ except Exception:
350
+ pass
351
+ return False
352
+
353
+
354
+ def extract_answer(string):
355
+ """Extract Answer String from \\boxed expression."""
356
+ idx = string.rfind("\\boxed")
357
+ if idx < 0:
358
+ idx = string.rfind("\\fbox")
359
+ if idx < 0:
360
+ return None
361
+
362
+ i = idx
363
+ right_brace_idx = None
364
+ num_left_braces_open = 0
365
+ while i < len(string):
366
+ if string[i] == "{":
367
+ num_left_braces_open += 1
368
+ if string[i] == "}":
369
+ num_left_braces_open -= 1
370
+ if num_left_braces_open == 0:
371
+ right_brace_idx = i
372
+ break
373
+ i += 1
374
+
375
+ if right_brace_idx is None:
376
+ retval = None
377
+ else:
378
+ retval = string[idx : right_brace_idx + 1]
379
+
380
+ if retval:
381
+ left = "\\boxed{"
382
+ try:
383
+ assert retval[: len(left)] == left
384
+ assert retval[-1] == "}"
385
+ return retval[len(left) : -1]
386
+ except AssertionError:
387
+ return None
388
+
389
+ return None
390
+
391
+
392
+ class TimeoutException(Exception):
393
+ pass
394
+
395
+
396
+ @contextlib.contextmanager
397
+ def time_limit(seconds: float):
398
+ def signal_handler(signum, frame):
399
+ raise TimeoutException("Timed out!")
400
+
401
+ signal.setitimer(signal.ITIMER_REAL, seconds)
402
+ signal.signal(signal.SIGALRM, signal_handler)
403
+ try:
404
+ yield
405
+ finally:
406
+ signal.setitimer(signal.ITIMER_REAL, 0)
407
+
408
+
409
+ def format_intervals(prediction):
410
+ patterns = {
411
+ "Interval(": r"^Interval\((.*)\)$",
412
+ "Interval.Ropen(": r"^Interval\.Ropen\((.*)\)$",
413
+ "Interval.Lopen(": r"^Interval\.Lopen\((.*)\)$",
414
+ "Interval.open(": r"^Interval\.open\((.*)\)$",
415
+ }
416
+
417
+ for key, pattern in patterns.items():
418
+ match = re.match(pattern, prediction)
419
+ if match:
420
+ inner_content = match.group(1)
421
+
422
+ if key == "Interval(": # Interval(a, b) == [a, b]
423
+ return f"[{inner_content}]"
424
+ elif key == "Interval.Ropen(": # Interval.Ropen(a, b) == [a, b)
425
+ return f"[{inner_content})"
426
+ elif key == "Interval.Lopen(": # Interval.Lopen(a, b) == (a, b]
427
+ return f"({inner_content}]"
428
+ elif key == "Interval.open(": # Interval.open(a, b) == (a, b)
429
+ return f"({inner_content})"
430
+
431
+ return prediction
432
+
433
+
434
+ # def _test_math_equal():
435
+ # ref = "6,-2"
436
+ # pred = "6"
437
+ # print(math_equal(ref, pred))
438
+
439
+ def _test_math_equal():
440
+ pi = math.pi
441
+ ref = "900\pi"
442
+ pred = 812.0
443
+ print(math_equal(pred, ref, pi=pi))
444
+
445
+ ref = "25\pi"
446
+ pred = 78.5
447
+ print(math_equal(pred, ref, pi=pi))
448
+
449
+ ref = "90\pi"
450
+ pred = 282.6
451
+ print(math_equal(pred, ref, pi=pi))
452
+
453
+ ref = "24+4\pi"
454
+ pred = 36.57142857142857
455
+ print(math_equal(pred, ref, pi=pi))
456
+
457
+ ref = "9\pi"
458
+ pred = 28.274309999999993
459
+ print(math_equal(pred, ref, pi=pi))
460
+
461
+
462
+ # def _test_math_equal():
463
+ # ref = "\\begin{pmatrix}0&1\\1&0\\end{pmatrix}"
464
+ # # ref=ref.split()[1:-1:2]
465
+ # pred = [[0,1], [1,0]]
466
+ # print(math_equal(pred, ref))
467
+
468
+
469
+ if __name__ == "__main__":
470
+ _test_math_equal()
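Usage sketch for the grader above (an editorial illustration, not part of the diff): it assumes VLMEvalKit and its dependencies (including sympy) are installed so that vlmeval.dataset.utils.bmmr_grade is importable, and it only exercises helpers defined in this file. Symbolic comparison uses SIGALRM-based timeouts, so this should run in a Unix main thread.

from vlmeval.dataset.utils import bmmr_grade

# Pull the final answer out of a \boxed{...} span, then grade a few predictions.
answer = bmmr_grade.extract_answer("The area is \\boxed{25\\pi}.")
print(answer)                                               # 25\pi
print(bmmr_grade.math_equal("1/2", "0.5"))                  # True: symbolic vs. decimal form
print(bmmr_grade.math_equal(78.54, "25\\pi"))               # True: numeric value within the 1e-4 relative tolerance
print(bmmr_grade.format_intervals("Interval.open(0, 1)"))   # (0, 1)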
VLMEvalKit-sudoku/vlmeval/dataset/utils/megabench/scoring/xml_nbbox_iou.py ADDED
@@ -0,0 +1,33 @@
1
+ import logging
2
+ from .common.metrics import calculate_iou
3
+ from .common.conversions import parse_bboxes_from_xml
4
+ from numbers import Number
5
+
6
+
7
+ class XmlNbboxIouSingle:
8
+ """Calculates the IoU of bounding boxes.
9
+
10
+ Assumes that co-ordinates are normalized between 0 and 1 and that the bounding boxes
11
+ are of the form <box>top_left_x, top_left_y, bottom_right_x, bottom_right_y</box>
12
+ """
13
+
14
+ @classmethod
15
+ def match(cls, responses, targets) -> float:
16
+
17
+ logging.debug(f"{responses=}, {targets=}")
18
+ if not isinstance(responses, (tuple | list)):
19
+ responses = parse_bboxes_from_xml(responses)
20
+ if not isinstance(targets, (tuple | list)):
21
+ targets = parse_bboxes_from_xml(targets)
22
+
23
+ if len(responses) == 0:
24
+ return 0
25
+ elif isinstance(responses[0], Number) and len(responses) == 4:
26
+ responses = [responses]
27
+
28
+ iou_scores = calculate_iou(responses, targets)
29
+ if not iou_scores:
30
+ return 0
31
+
32
+ # Take the mean IoU score for now.
33
+ return sum(iou_scores) / len(iou_scores)
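For reference, the per-pair IoU that calculate_iou is expected to return can be sketched as below (illustrative only; the real helper lives in scoring/common and is not shown in this diff). Boxes are normalized corner-format lists [top_left_x, top_left_y, bottom_right_x, bottom_right_y].

def bbox_iou(a, b):
    # Intersection rectangle of the two boxes (zero area if they do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

print(bbox_iou([0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.6, 0.6]))  # ~0.391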
VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (4.53 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/idefics.cpython-310.pyc ADDED
Binary file (8.52 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/phi3_vision.cpython-310.pyc ADDED
Binary file (4.46 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/points.cpython-310.pyc ADDED
Binary file (7.97 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/smolvlm.cpython-310.pyc ADDED
Binary file (16.4 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/__pycache__/transcore_m.cpython-310.pyc ADDED
Binary file (6.03 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/granite_vision/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (234 Bytes). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/granite_vision/__pycache__/granite_vision.cpython-310.pyc ADDED
Binary file (5.4 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/__pycache__/prompt.cpython-310.pyc ADDED
Binary file (5.91 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/__init__.py ADDED
@@ -0,0 +1 @@
1
+ from .model import HawkQwenForCausalLM
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/__init__.py ADDED
@@ -0,0 +1 @@
1
+ from .language_model.hawk_qwen import HawkQwenConfig, HawkQwenForCausalLM
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/vision_encoder/__init__.py ADDED
@@ -0,0 +1,5 @@
1
+ from .qwen_vit import QwenVisionModel
2
+
3
+ VISION_TRANSFORMER_CLASSES = {
4
+ 'qwen_vit': QwenVisionModel
5
+ }
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/vision_encoder/qwen_vit/__init__.py ADDED
@@ -0,0 +1,2 @@
1
+ from .modeling_qwen_vit import QwenVisionModel
2
+ from .configuration_qwen_vit import QwenVisionConfig
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/model/vision_encoder/qwen_vit/configuration_qwen_vit.py ADDED
@@ -0,0 +1,56 @@
1
+ # --------------------------------------------------------
2
+ # pandayin: Copied and modified from transformers/models/qwen2_vl/configuration_qwen2_vl.py
3
+ # --------------------------------------------------------
4
+
5
+ import os
6
+ from typing import Union
7
+
8
+ from transformers.configuration_utils import PretrainedConfig
9
+ from transformers.utils import logging
10
+
11
+ logger = logging.get_logger(__name__)
12
+
13
+
14
+ class QwenVisionConfig(PretrainedConfig):
15
+ model_type = "qwen_vit"
16
+ # base_config_key = "vision_config"
17
+
18
+ def __init__(
19
+ self,
20
+ depth=32,
21
+ embed_dim=1280,
22
+ hidden_act="quick_gelu",
23
+ mlp_ratio=4,
24
+ num_heads=16,
25
+ in_channels=3,
26
+ patch_size=14,
27
+ spatial_merge_size=2,
28
+ temporal_patch_size=2,
29
+ **kwargs,
30
+ ):
31
+ super().__init__(**kwargs)
32
+
33
+ self.depth = depth
34
+ self.embed_dim = embed_dim
35
+ self.hidden_act = hidden_act
36
+ self.mlp_ratio = mlp_ratio
37
+ self.num_heads = num_heads
38
+ self.in_channels = in_channels
39
+ self.patch_size = patch_size
40
+ self.spatial_merge_size = spatial_merge_size
41
+ self.temporal_patch_size = temporal_patch_size
42
+
43
+ @classmethod
44
+ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> 'PretrainedConfig':
45
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
46
+
47
+ if 'vision_config' in config_dict:
48
+ config_dict = config_dict['vision_config']
49
+
50
+ if 'model_type' in config_dict and hasattr(cls, 'model_type') and config_dict['model_type'] != cls.model_type:
51
+ logger.warning(
52
+ f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
53
+ f'{cls.model_type}. This is not supported for all configurations of models and can yield errors.'
54
+ )
55
+
56
+ return cls.from_dict(config_dict, **kwargs)
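A minimal instantiation sketch for the config class above (assuming only transformers is installed and QwenVisionConfig is imported from this configuration module; the values simply repeat the defaults):

cfg = QwenVisionConfig(depth=32, embed_dim=1280, num_heads=16, patch_size=14)
print(cfg.model_type)                    # qwen_vit
print(cfg.embed_dim // cfg.num_heads)    # 80 -> per-head hidden size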
VLMEvalKit-sudoku/vlmeval/vlm/hawk_vl/hawk/utils.py ADDED
@@ -0,0 +1,16 @@
1
+ import torch.distributed as dist
2
+
3
+
4
+ def rank0_print(*args):
5
+ if dist.is_initialized():
6
+ if dist.get_rank() == 0:
7
+ print(f"Rank {dist.get_rank()}: ", *args)
8
+ else:
9
+ print(*args)
10
+
11
+
12
+ def rank_print(*args):
13
+ if dist.is_initialized():
14
+ print(f"Rank {dist.get_rank()}: ", *args)
15
+ else:
16
+ print(*args)
VLMEvalKit-sudoku/vlmeval/vlm/internvl/__init__.py ADDED
@@ -0,0 +1,3 @@
1
+ from .internvl_chat import InternVLChat
2
+
3
+ __all__ = ['InternVLChat']
VLMEvalKit-sudoku/vlmeval/vlm/internvl/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (225 Bytes). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/internvl/__pycache__/internvl_chat.cpython-310.pyc ADDED
Binary file (16.2 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/internvl/__pycache__/utils.cpython-310.pyc ADDED
Binary file (10.8 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/internvl/utils.py ADDED
@@ -0,0 +1,312 @@
1
+ import math
2
+ import pandas as pd
3
+ import random
4
+ import re
5
+ import string
6
+ import torch
7
+ import torch.distributed as dist
8
+ import torchvision.transforms as T
9
+ import transformers
10
+ import warnings
11
+ from PIL import Image
12
+ from torchvision.transforms.functional import InterpolationMode
13
+ from transformers import AutoTokenizer, AutoConfig, AutoModel, CLIPImageProcessor
14
+
15
+ from ..base import BaseModel
16
+ from ...dataset import DATASET_TYPE, DATASET_MODALITY
17
+ from ...smp import *
18
+
19
+ IMAGENET_MEAN = (0.485, 0.456, 0.406)
20
+ IMAGENET_STD = (0.229, 0.224, 0.225)
21
+
22
+
23
+ def build_transform(input_size):
24
+ MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
25
+ transform = T.Compose([
26
+ T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
27
+ T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
28
+ T.ToTensor(),
29
+ T.Normalize(mean=MEAN, std=STD)
30
+ ])
31
+ return transform
32
+
33
+
34
+ def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
35
+ best_ratio_diff = float('inf')
36
+ best_ratio = (1, 1)
37
+ area = width * height
38
+ for ratio in target_ratios:
39
+ target_aspect_ratio = ratio[0] / ratio[1]
40
+ ratio_diff = abs(aspect_ratio - target_aspect_ratio)
41
+ if ratio_diff < best_ratio_diff:
42
+ best_ratio_diff = ratio_diff
43
+ best_ratio = ratio
44
+ elif ratio_diff == best_ratio_diff:
45
+ if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
46
+ best_ratio = ratio
47
+ return best_ratio
48
+
49
+
50
+ def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
51
+ orig_width, orig_height = image.size
52
+ aspect_ratio = orig_width / orig_height
53
+
54
+ # calculate the existing image aspect ratio
55
+ target_ratios = set(
56
+ (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
57
+ i * j <= max_num and i * j >= min_num)
58
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
59
+
60
+ # find the closest aspect ratio to the target
61
+ target_aspect_ratio = find_closest_aspect_ratio(
62
+ aspect_ratio, target_ratios, orig_width, orig_height, image_size)
63
+
64
+ # calculate the target width and height
65
+ target_width = image_size * target_aspect_ratio[0]
66
+ target_height = image_size * target_aspect_ratio[1]
67
+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
68
+
69
+ # resize the image
70
+ resized_img = image.resize((target_width, target_height))
71
+ processed_images = []
72
+ for i in range(blocks):
73
+ box = (
74
+ (i % (target_width // image_size)) * image_size,
75
+ (i // (target_width // image_size)) * image_size,
76
+ ((i % (target_width // image_size)) + 1) * image_size,
77
+ ((i // (target_width // image_size)) + 1) * image_size
78
+ )
79
+ # split the image
80
+ split_img = resized_img.crop(box)
81
+ processed_images.append(split_img)
82
+ assert len(processed_images) == blocks
83
+ if use_thumbnail and len(processed_images) != 1:
84
+ thumbnail_img = image.resize((image_size, image_size))
85
+ processed_images.append(thumbnail_img)
86
+ return processed_images
87
+
88
+
89
+ def load_image(image_file, input_size=448, max_num=6, upscale=False):
90
+ image = Image.open(image_file).convert('RGB')
91
+ if upscale:
92
+ image = image.resize((image.width * 2, image.height * 2), Image.BILINEAR)
93
+ transform = build_transform(input_size=input_size)
94
+ images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
95
+ pixel_values = [transform(image) for image in images]
96
+ pixel_values = torch.stack(pixel_values)
97
+ return pixel_values
98
+
99
+
100
+ def get_local_rank_and_local_world_size():
101
+ if not dist.is_available():
102
+ return 0, 1
103
+ if not dist.is_initialized():
104
+ return 0, 1
105
+
106
+ if 'SLURM_LOCALID' in os.environ:
107
+ local_rank = int(os.environ['SLURM_LOCALID'])
108
+ local_world_size = int(os.environ['SLURM_NTASKS_PER_NODE'])
109
+ return local_rank, local_world_size
110
+
111
+ if 'LOCAL_RANK' in os.environ and 'LOCAL_WORLD_SIZE' in os.environ:
112
+ return int(os.environ['LOCAL_RANK']), int(os.environ['LOCAL_WORLD_SIZE'])
113
+
114
+ raise NotImplementedError(
115
+ "Fail to get local_rank and local_world_size! "
116
+ "Please ensure that you set the environment variable "
117
+ "`LOCAL_RANK` and `LOCAL_WORLD_SIZE`"
118
+ )
119
+
120
+
121
+ def build_mcq_cot_prompt(line, prompt, cot_prompt=None):
122
+ if cot_prompt is None:
123
+ cot_prompt = (
124
+ "Answer the preceding multiple choice question. The last line of your response should follow "
125
+ "this format: 'Answer: \\boxed{$LETTER}' (without quotes), where LETTER is one of the options. "
126
+ "If you are uncertain or the problem is too complex, make a reasoned guess based on the "
127
+ "information provided. Avoid repeating steps indefinitely—provide your best guess even if "
128
+ "unsure. Think step by step logically, considering all relevant information before answering."
129
+ )
130
+ prompt = prompt.replace("Answer with the option's letter from the given choices directly.", '').strip()
131
+ prompt = prompt + '\n' + cot_prompt
132
+
133
+ return prompt
134
+
135
+
136
+ def build_qa_cot_prompt(line, prompt, cot_prompt=None):
137
+ if cot_prompt is None:
138
+ cot_prompt = (
139
+ "Answer the preceding question. The last line of your response should follow this format: "
140
+ "'Answer: \\boxed{$FINAL_ANSWER}' (without quotes), where 'FINAL_ANSWER' is your conclusion "
141
+ "based on the reasoning provided. If you are uncertain or the problem is too complex, make "
142
+ "a reasoned guess based on the information provided. Avoid repeating steps indefinitely—"
143
+ "provide your best guess even if unsure. Think step by step logically, considering all "
144
+ "relevant information before answering."
145
+ )
146
+ prompt = prompt + '\n' + cot_prompt
147
+
148
+ return prompt
149
+
150
+
151
+ def build_multi_choice_prompt(line, dataset=None):
152
+ question = line['question']
153
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
154
+ if hint is not None:
155
+ question = hint + '\n' + question
156
+
157
+ options = {
158
+ cand: line[cand]
159
+ for cand in string.ascii_uppercase
160
+ if cand in line and not pd.isna(line[cand])
161
+ }
162
+ for key, item in options.items():
163
+ question += f'\n{key}. {item}'
164
+ prompt = question
165
+
166
+ if len(options):
167
+ prompt += '\n请直接回答选项字母。' if cn_string(
168
+ prompt) else "\nAnswer with the option's letter from the given choices directly."
169
+ else:
170
+ prompt += '\n请直接回答问题。' if cn_string(prompt) else '\nAnswer the question directly.'
171
+
172
+ return prompt
173
+
174
+
175
+ def build_video_prompt(prompt, dataset=None, max_frames=64):
176
+ for start in range(0, max_frames, 8):
177
+ images_to_remove = ''.join([f'<Image-{i}>' for i in range(start + 1, start + 9)])
178
+ prompt = prompt.replace(images_to_remove, '')
179
+ for i in range(max_frames):
180
+ prompt = prompt.replace(f'Image-{i + 1}', f'Frame-{i + 1}')
181
+ if listinstr(['MMBench-Video'], dataset):
182
+ prompt = prompt.replace('\nAnswer:', '')
183
+ elif listinstr(['Video-MME', 'WorldSense'], dataset):
184
+ prompt = prompt.replace('\nAnswer:', '')
185
+ prompt += "\nAnswer with the option's letter from the given choices directly."
186
+ elif listinstr(['MVBench'], dataset):
187
+ prompt = prompt.replace('Best option:(', '')
188
+
189
+ return prompt
190
+
191
+
192
+ def reorganize_prompt(message, image_num, dataset=None):
193
+ if dataset is not None and listinstr(['MUIRBench'], dataset):
194
+ prompt = '\n'.join([x['value'] for x in message if x['type'] == 'text'])
195
+ images_to_remove = ' '.join(['<image>'] * image_num)
196
+ prompt = prompt.replace(images_to_remove, '')
197
+ for i in range(image_num):
198
+ prompt = prompt.replace('<image>', f'<Image-{i + 1}>', 1)
199
+ prompt = ''.join([f'Image-{i + 1}: <image>\n' for i in range(image_num)]) + prompt
200
+ elif dataset is not None and listinstr(["bmmr"], dataset.lower()):
201
+ if image_num == 1:
202
+ prompt = "\n".join([x["value"] for x in message if x["type"] == "text"])
203
+ else:
204
+ prompt, image_idx = "", 1
205
+ for x in message:
206
+ if x["type"] == "text":
207
+ prompt += x["value"]
208
+ elif x["type"] == "image":
209
+ image_idx += 1
210
+ elif image_num == 1:
211
+ prompt = '<image>\n' + '\n'.join([x['value'] for x in message if x['type'] == 'text'])
212
+ else:
213
+ prompt, image_idx = '', 1
214
+ for x in message:
215
+ if x['type'] == 'text':
216
+ prompt += x['value']
217
+ elif x['type'] == 'image':
218
+ prompt += f'<Image-{image_idx}>'
219
+ image_idx += 1
220
+ prompt = ''.join([f'Image-{i + 1}: <image>\n' for i in range(image_num)]) + prompt
221
+ images_to_remove = ''.join([f'<Image-{i + 1}>' for i in range(image_num)])
222
+ prompt = prompt.replace(images_to_remove, '')
223
+ return prompt
224
+
225
+
226
+ mpo_prompt_with_final_answer = (
227
+ "Your task is to answer the question below. "
228
+ "Give step by step reasoning before you answer, and when you're ready to answer, "
229
+ "please use the format \"Final answer: ..\""
230
+ "\n\n"
231
+ "Question:"
232
+ "\n\n"
233
+ "{question}"
234
+ )
235
+
236
+ mpo_prompt_without_final_answer = (
237
+ "Your task is to answer the question below. "
238
+ "Give step by step reasoning. "
239
+ "\n\n"
240
+ "Question:"
241
+ "\n\n"
242
+ "{question}"
243
+ )
244
+
245
+
246
+ def mpo_post_processing(response, dataset):
247
+
248
+ def extract_answer(text):
249
+ match = re.search(r'(Final answer:|Answer:)\s*(.*)', text, re.IGNORECASE)
250
+ if match:
251
+ return match.group(2).strip()
252
+ return text
253
+
254
+ if dataset is not None and (DATASET_TYPE(dataset) in ['Y/N', 'MCQ'] or listinstr(['CRPE'], dataset)):
255
+ response = extract_answer(response).strip()
256
+ return response
257
+
258
+
259
+ def parse_bbox_internvl(response):
260
+ # Use a regular expression to match the bounding box
261
+ # pattern = r"<box>\[\[(\d+), (\d+), (\d+), (\d+)\]\]</box>"
262
+ pattern = r"\[\[(\d+), (\d+), (\d+), (\d+)\]\]"
263
+ match = re.search(pattern, response)
264
+ if match:
265
+ # Extract the matched coordinate values and convert them to integers
266
+ x1, y1, x2, y2 = map(int, match.groups())
267
+ return [(x1 + x2) / 2, (y1 + y2) / 2]
268
+ else:
269
+ return response
270
+
271
+
272
+ def build_mpo_prompt(message, line, dataset):
273
+ if listinstr(['LLaVABench', 'MMVet'], dataset):
274
+ return message
275
+
276
+ question_orig = line['question']
277
+ if listinstr(['MathVerse', 'MathVision'], dataset):
278
+ question_orig = question_orig.split('Question:', 1)[-1].strip()
279
+ question_orig = question_orig.replace('Choices:\n', '').strip()
280
+ if listinstr(['WeMath'], dataset):
281
+ question_orig = question_orig.replace('Regarding the format, please answer following the template below, and be sure to include two <> symbols:\n<Thought process>: <<your thought process>> <Answer>: <<your option>>', '').strip() # noqa: E501
282
+ options = {
283
+ cand: line[cand]
284
+ for cand in string.ascii_uppercase
285
+ if cand in line and not pd.isna(line[cand])
286
+ }
287
+ options_prompt = ''
288
+ for key, item in options.items():
289
+ options_prompt += f'{key}. {item}\n'
290
+
291
+ if options_prompt.strip():
292
+ question_orig = f'{question_orig}\n{options_prompt}'
293
+
294
+ cot_prompt = mpo_prompt_with_final_answer
295
+ prompt = cot_prompt.format(question=question_orig).strip()
296
+ message[0]['value'] = prompt
297
+ return message
298
+
299
+
300
+ def format_nav_prompt(template, placeholders, **kwargs):
301
+ prompt = template
302
+ for placeholder in placeholders:
303
+ value = kwargs.get(placeholder, '')
304
+ prompt = prompt.replace(f"{{{placeholder}}}", str(value))
305
+ return prompt
306
+
307
+
308
+ def pile_action_history(history, max_num=4):
309
+ if len(history) > 0:
310
+ return '\n'.join(history[-max_num:])
311
+ else:
312
+ return 'None'
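A minimal sketch of the dynamic tiling path defined above (assumes Pillow and torch are available and that build_transform and dynamic_preprocess are imported from this module; the synthetic image is just a stand-in):

import torch
from PIL import Image

img = Image.new('RGB', (1344, 896))    # stand-in image with a 3:2 aspect ratio
tiles = dynamic_preprocess(img, image_size=448, max_num=6, use_thumbnail=True)
print(len(tiles))                      # 7: a 3x2 grid of 448x448 crops plus one thumbnail

transform = build_transform(input_size=448)
pixel_values = torch.stack([transform(t) for t in tiles])
print(pixel_values.shape)              # torch.Size([7, 3, 448, 448])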
VLMEvalKit-sudoku/vlmeval/vlm/llava/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (383 Bytes). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/llava/__pycache__/llava.cpython-310.pyc ADDED
Binary file (20.9 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/llava/__pycache__/llava_xtuner.cpython-310.pyc ADDED
Binary file (6.97 kB). View file
 
VLMEvalKit-sudoku/vlmeval/vlm/minicpm_v.py ADDED
@@ -0,0 +1,1271 @@
1
+ import math
2
+ import torch
3
+ import random
4
+ import numpy as np
5
+ from PIL import Image
6
+ from transformers import AutoModel, AutoTokenizer
7
+
8
+ from .base import BaseModel
9
+ from ..smp import *
10
+ from ..dataset import DATASET_TYPE, DATASET_MODALITY
11
+
12
+ import re
13
+
14
+
15
+ class MiniCPM_V(BaseModel):
16
+
17
+ INSTALL_REQ = False
18
+ INTERLEAVE = False
19
+
20
+ def __init__(self, model_path='openbmb/MiniCPM-V', **kwargs):
21
+ assert model_path is not None
22
+ self.model_path = model_path
23
+ print(f'load from {self.model_path}')
24
+ self.model = AutoModel.from_pretrained(self.model_path, trust_remote_code=True)
25
+ self.model = self.model.to(dtype=torch.bfloat16)
26
+ self.model.eval().cuda()
27
+ self.kwargs = kwargs
28
+ self.tokenizer = AutoTokenizer.from_pretrained(self.model_path, trust_remote_code=True)
29
+ torch.cuda.empty_cache()
30
+ self.num_beams = 3
31
+
32
+ def use_custom_prompt(self, dataset):
33
+ assert dataset is not None
34
+ if listinstr(['MMDU', 'MME-RealWorld', 'MME-RealWorld-CN', 'MMAlignBench'], dataset):
35
+ # For Multi-Turn we don't have custom prompt
36
+ return False
37
+ return False
38
+
39
+ def build_prompt(self, line, dataset=None):
40
+ assert dataset is None or isinstance(dataset, str)
41
+ assert self.use_custom_prompt(dataset)
42
+ tgt_path = self.dump_image(line, dataset)
43
+
44
+ question = line['question']
45
+ options = {
46
+ cand: line[cand]
47
+ for cand in string.ascii_uppercase
48
+ if cand in line and not pd.isna(line[cand])
49
+ }
50
+ options_prompt = 'Options:\n'
51
+ for key, item in options.items():
52
+ options_prompt += f'{key}. {item}\n'
53
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
54
+ prompt = ''
55
+ if hint is not None:
56
+ prompt += f'Hint: {hint}\n'
57
+ prompt += f'{question}\n'
58
+ if len(options):
59
+ prompt += options_prompt
60
+ prompt = 'Study the image carefully and pick the option associated with the correct answer. \
61
+ Focus solely on selecting the option and avoid including any other content.\n' + prompt
62
+ message = [dict(type='text', value=prompt)]
63
+ message.extend([dict(type='image', value=p) for p in tgt_path])
64
+
65
+ return message
66
+
67
+ def generate_inner(self, message, dataset=None):
68
+ prompt, image_path = self.message_to_promptimg(message, dataset=dataset)
69
+ image = Image.open(image_path).convert('RGB')
70
+ msgs = [{'role': 'user', 'content': prompt}]
71
+ if DATASET_TYPE(dataset) == 'MCQ':
72
+ max_new_tokens = 20
73
+ elif DATASET_TYPE(dataset) == 'Y/N':
74
+ max_new_tokens = 100
75
+ else:
76
+ max_new_tokens = 1024
77
+
78
+ default_kwargs = dict(
79
+ max_new_tokens=max_new_tokens,
80
+ sampling=False,
81
+ num_beams=self.num_beams
82
+ )
83
+ default_kwargs.update(self.kwargs)
84
+ res, _, _ = self.model.chat(
85
+ image=image,
86
+ msgs=msgs,
87
+ context=None,
88
+ tokenizer=self.tokenizer,
89
+ **default_kwargs
90
+ )
91
+ return res
92
+
93
+
94
+ class MiniCPM_Llama3_V(BaseModel):
95
+
96
+ INSTALL_REQ = False
97
+ INTERLEAVE = True
98
+
99
+ def __init__(self, model_path='openbmb/MiniCPM-Llama3-V-2_5', **kwargs):
100
+ assert model_path is not None
101
+ self.model_path = model_path
102
+ print(f'load from {self.model_path}')
103
+ self.model = AutoModel.from_pretrained(self.model_path, trust_remote_code=True)
104
+ self.model = self.model.to(dtype=torch.float16)
105
+ self.model.eval().cuda()
106
+ self.kwargs = kwargs
107
+ self.tokenizer = AutoTokenizer.from_pretrained(self.model_path, trust_remote_code=True)
108
+ torch.cuda.empty_cache()
109
+ self.num_beams = 3
110
+ self.options_system_prompt = ('Carefully read the following question and select the letter corresponding '
111
+ 'to the correct answer. Highlight the applicable choices without giving '
112
+ 'explanations.')
113
+ self.wo_options_system_prompt = 'Carefully read the following question Answer the question directly.'
114
+ self.detail_system_prompt = 'Answer this question in detail.'
115
+ self.vqa_prompt = 'Answer the question using a single word or phrase.'
116
+
117
+ def use_custom_prompt(self, dataset):
118
+ if listinstr(['MCQ', 'VQA'], DATASET_TYPE(dataset)):
119
+ return True
120
+ elif dataset is not None and listinstr(['HallusionBench'], dataset):
121
+ return True
122
+ return False
123
+
124
+ def build_prompt(self, line, dataset=None):
125
+ if isinstance(line, int):
126
+ line = self.data.iloc[line]
127
+
128
+ tgt_path = self.dump_image(line, dataset)
129
+ system_prompt = ''
130
+
131
+ question = line['question']
132
+ if DATASET_TYPE(dataset) == 'MCQ':
133
+ options = {
134
+ cand: line[cand]
135
+ for cand in string.ascii_uppercase
136
+ if cand in line and not pd.isna(line[cand])
137
+ }
138
+ options_prompt = 'Options:\n'
139
+ for key, item in options.items():
140
+ options_prompt += f'{key}. {item}\n'
141
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
142
+ prompt = ''
143
+ if hint is not None:
144
+ prompt += f'Hint: {hint}\n'
145
+ prompt += f'Question: {question}\n'
146
+ if len(options):
147
+ prompt += options_prompt
148
+ system_prompt = self.options_system_prompt + '\nPlease just indicate your choice.'
149
+ else:
150
+ system_prompt = self.wo_options_system_prompt
151
+ if 'MMMU' in dataset: # Corner Case
152
+ prompt = system_prompt + '\n' + prompt
153
+ system_prompt = ''
154
+ elif dataset is not None and listinstr(['HallusionBench'], dataset):
155
+ question = line['question'] + ' Yes or No?'
156
+ prompt = question
157
+ elif dataset is not None and listinstr(['MME'], dataset):
158
+ question = line['question'] + ' Yes or No?'
159
+ prompt = question
160
+ elif dataset is not None and listinstr(['OCRBench'], dataset):
161
+ system_prompt = self.vqa_prompt
162
+ question = line['question']
163
+ prompt = question
164
+ elif DATASET_TYPE(dataset) == 'VQA':
165
+ if listinstr(['LLaVABench', 'MMLongBench_DOC'], dataset):
166
+ system_prompt = ''
167
+ prompt = question
168
+ elif listinstr(['MMVet'], dataset):
169
+ system_prompt = self.detail_system_prompt
170
+ prompt = question
171
+ else:
172
+ system_prompt = self.vqa_prompt
173
+ prompt = question
174
+
175
+ msgs = []
176
+ if system_prompt:
177
+ msgs.append(dict(type='text', value=system_prompt))
178
+ if isinstance(tgt_path, list):
179
+ msgs.extend([dict(type='image', value=p) for p in tgt_path])
180
+ else:
181
+ msgs = [dict(type='image', value=tgt_path)]
182
+ msgs.append(dict(type='text', value=prompt))
183
+ return msgs
184
+
185
+ def generate_inner(self, message, dataset=None):
186
+ if DATASET_TYPE(dataset) == 'MCQ':
187
+ max_new_tokens = 200
188
+ elif DATASET_TYPE(dataset) == 'Y/N':
189
+ max_new_tokens = 3
190
+ else:
191
+ max_new_tokens = 1024
192
+
193
+ default_kwargs = dict(
194
+ max_new_tokens=max_new_tokens,
195
+ sampling=False,
196
+ num_beams=self.num_beams,
197
+ )
198
+ default_kwargs.update(self.kwargs)
199
+
200
+ content = []
201
+ for x in message:
202
+ if x['type'] == 'text':
203
+ content.append(x['value'])
204
+ elif x['type'] == 'image':
205
+ image = Image.open(x['value']).convert('RGB')
206
+ content.append(image)
207
+ msgs = [{'role': 'user', 'content': content}]
208
+
209
+ res = self.model.chat(
210
+ msgs=msgs,
211
+ context=None,
212
+ image=None,
213
+ tokenizer=self.tokenizer,
214
+ **default_kwargs
215
+ )
216
+
217
+ if isinstance(res, tuple) and len(res) > 0:
218
+ res = res[0]
219
+ return res
220
+
221
+ def chat_inner(self, message, dataset=None):
222
+ max_new_tokens = 1024
223
+
224
+ default_kwargs = dict(
225
+ max_new_tokens=max_new_tokens,
226
+ sampling=False,
227
+ num_beams=self.num_beams,
228
+ )
229
+ default_kwargs.update(self.kwargs)
230
+
231
+ msgs = []
232
+ for msg in message:
233
+ content = []
234
+ if len(msg['content']) == 1 and msg['content'][0]['type'] == 'text':
235
+ msg_new = {'role': msg['role'], 'content': msg['content'][0]['value']}
236
+ msgs.append(msg_new)
237
+ continue
238
+
239
+ for x in msg['content']:
240
+ if x['type'] == 'text':
241
+ content.append(x['value'])
242
+ elif x['type'] == 'image':
243
+ image = Image.open(x['value']).convert('RGB')
244
+ content.append(image)
245
+ msg_new = {'role': msg['role'], 'content': content}
246
+ msgs.append(msg_new)
247
+
248
+ res = self.model.chat(
249
+ msgs=msgs,
250
+ context=None,
251
+ image=None,
252
+ tokenizer=self.tokenizer,
253
+ **default_kwargs)
254
+
255
+ if isinstance(res, tuple) and len(res) > 0:
256
+ res = res[0]
257
+ return res
258
+
259
+
260
+ class MiniCPM_V_2_6(BaseModel):
261
+ INSTALL_REQ = False
262
+ INTERLEAVE = True
263
+
264
+ def __init__(self, model_path='openbmb/MiniCPM-V-2_6', **kwargs):
265
+ random.seed(0)
266
+ np.random.seed(0)
267
+ torch.manual_seed(0)
268
+ torch.cuda.manual_seed_all(0)
269
+ self.use_lmdeploy = kwargs.get('use_lmdeploy', False)
270
+ assert model_path is not None
271
+ self.model_path = model_path
272
+ print(f'load from path {self.model_path}')
273
+ if self.use_lmdeploy:
274
+ logging.warning(
275
+ 'Currently LMDeploy does not support interleaved text-image prompt. '
276
+ 'All images will be placed at the beginning of the prompt, '
277
+ 'which may lead to performance degradation.'
278
+ )
279
+ from lmdeploy import TurbomindEngineConfig, pipeline, ChatTemplateConfig
280
+ num_gpus = torch.cuda.device_count()
281
+ self.model = pipeline(
282
+ model_path,
283
+ backend_config=TurbomindEngineConfig(session_len=32768, cache_max_entry_count=0.1, tp=num_gpus)
284
+ )
285
+ torch.cuda.set_device(0)
286
+ self.device = 'cuda'
287
+ else:
288
+ self.model = AutoModel.from_pretrained(self.model_path, trust_remote_code=True)
289
+ self.model = self.model.to(dtype=torch.bfloat16)
290
+ self.model.eval().cuda()
291
+
292
+ self.kwargs = kwargs
293
+ self.tokenizer = AutoTokenizer.from_pretrained(self.model_path, trust_remote_code=True)
294
+ torch.cuda.empty_cache()
295
+
296
+ self.num_beams = 3
297
+
298
+ self.options_suffix_prompt = '''\nAnswer with the option's letter from the given choices directly.'''
299
+ self.wo_options_system_prompt = 'Carefully read the following question Answer the question directly.'
300
+ self.detail_system_prompt = 'Answer this question in detail.'
301
+ self.vqa_prompt = 'Answer the question using a single word or phrase.'
302
+
303
+ self.multi_choice_cot_prompt = ('''Carefully read the following multichoice question, solve it step '''
304
+ '''by step and finally pick the option associated with the correct '''
305
+ '''answer in the format of "Answer: selected option\n\n''')
306
+ self.short_ans_cot_prompt = ('''Read the following question carefully, solve it step by step, and '''
307
+ '''then output the final answer in the format of "Answer: single number '''
308
+ '''or single word or phrase".\n\n''')
309
+
310
+ def use_custom_prompt(self, dataset=None):
311
+ if dataset is None:
312
+ return False
313
+ if DATASET_TYPE(dataset) in ['MCQ', 'VQA', 'Y/N']:
314
+ return True
315
+ return False
316
+
317
+ def use_cot(self, dataset=None):
318
+ if dataset is None:
319
+ return False
320
+ if listinstr(['MMMU', 'HallusionBench', 'OCRBench', 'ChartQA'], dataset):
321
+ return True
322
+ elif listinstr(['MathVista', 'MMVet', 'MMBench', 'MMStar', 'AI2D', 'RealWorldQA',
323
+ 'POPE', 'ScienceQA', 'TextVQA', 'DocVQA'], dataset):
324
+ return False
325
+ else:
326
+ return False
327
+
328
+ def use_upsize(self, dataset=None):
329
+ if dataset is None:
330
+ return False
331
+ if listinstr(['MMVet', 'MMBench', 'MMStar', 'AI2D', 'OCRBench'], dataset):
332
+ return True
333
+ else:
334
+ return False
335
+
336
+ def build_prompt(self, line, dataset=None):
337
+ if isinstance(line, int):
338
+ line = self.data.iloc[line]
339
+
340
+ tgt_path = self.dump_image(line, dataset)
341
+ system_prompt, prompt = '', ''
342
+
343
+ question = line['question']
344
+
345
+ if not self.use_cot(dataset):
346
+ if DATASET_TYPE(dataset) == 'MCQ':
347
+ options = {
348
+ cand: line[cand]
349
+ for cand in string.ascii_uppercase
350
+ if cand in line and not pd.isna(line[cand])
351
+ }
352
+ options_prompt = 'Options:\n'
353
+ for key, item in options.items():
354
+ options_prompt += f'{key}. {item}\n'
355
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
356
+ if hint is not None:
357
+ prompt += f'Hint: {hint}\n'
358
+ prompt += f'Question: {question}\n'
359
+ if len(options):
360
+ prompt += options_prompt
361
+ prompt += self.options_suffix_prompt
362
+ else:
363
+ system_prompt = self.wo_options_system_prompt
364
+
365
+ if 'MMMU' in dataset:
366
+ if len(system_prompt) > 0:
367
+ prompt = system_prompt + '\n' + prompt
368
+ system_prompt = ''
369
+ elif dataset is not None and listinstr(['HallusionBench'], dataset):
370
+ question += ' Yes or No?'
371
+ prompt = question
372
+ elif dataset is not None and listinstr(['OCRBench'], dataset):
373
+ system_prompt = self.vqa_prompt
374
+ prompt = question
375
+ elif DATASET_TYPE(dataset) == 'VQA':
376
+ if listinstr(['LLaVABench'], dataset):
377
+ system_prompt = ''
378
+ elif listinstr(['MMVet'], dataset):
379
+ system_prompt = self.detail_system_prompt
380
+ else:
381
+ system_prompt = self.vqa_prompt
382
+ prompt = question
383
+ else:
384
+ prompt = question
385
+ else:
386
+ has_options = True
387
+ if DATASET_TYPE(dataset) == 'MCQ':
388
+ options = {
389
+ cand: line[cand]
390
+ for cand in string.ascii_uppercase
391
+ if cand in line and not pd.isna(line[cand])
392
+ }
393
+ options_prompt = ''
394
+ for key, item in options.items():
395
+ options_prompt += f'{key}. {item}\n'
396
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
397
+ if hint is not None:
398
+ prompt += f'Hint: {hint}\n'
399
+ prompt += f'{question}\n'
400
+
401
+ if len(options):
402
+ prompt += options_prompt
403
+ else:
404
+ has_options = False
405
+
406
+ if 'MMMU' in dataset:
407
+ if len(system_prompt) > 0:
408
+ prompt = system_prompt + '\n' + prompt
409
+ system_prompt = ''
410
+ else:
411
+ prompt = question
412
+
413
+ if DATASET_TYPE(dataset) in ['MCQ', 'Y/N', 'VQA']:
414
+ if DATASET_TYPE(dataset) == 'MCQ':
415
+ if has_options:
416
+ prompt = self.multi_choice_cot_prompt + prompt
417
+ else:
418
+ prompt = self.short_ans_cot_prompt + prompt
419
+ elif DATASET_TYPE(dataset) == 'Y/N':
420
+ prompt = self.short_ans_cot_prompt + prompt
421
+ else:
422
+ prompt = self.short_ans_cot_prompt + prompt
423
+
424
+ msgs = []
425
+ if system_prompt:
426
+ msgs.append(dict(type='text', value=system_prompt))
427
+ if isinstance(tgt_path, list):
428
+ msgs.extend([dict(type='image', value=p) for p in tgt_path])
429
+ else:
430
+ msgs = [dict(type='image', value=tgt_path)]
431
+ msgs.append(dict(type='text', value=prompt))
432
+
433
+ return msgs
434
+
435
+ def message_to_lmdeploy(self, messages, system_prompt=None):
436
+ """
437
+ TODO:
438
+ Support interleaved text-image prompt
439
+ after LMDeploy supports it.
440
+ """
441
+ from PIL import Image
442
+ prompt, image_path = '', []
443
+ for msg in messages:
444
+ if msg['type'] == 'text':
445
+ prompt += msg['value']
446
+ elif msg['type'] == 'image':
447
+ image_path.append(msg['value'])
448
+ content = [{'type': 'text', 'text': prompt}]
449
+ for image in image_path:
450
+ img = Image.open(image).convert('RGB')
451
+ b64 = encode_image_to_base64(img)
452
+ img_struct = dict(url=f'data:image/jpeg;base64,{b64}')
453
+ content.append(dict(type='image_url', image_url=img_struct))
454
+ ret = []
455
+ if system_prompt is not None:
456
+ ret.append(dict(role='system', content=system_prompt))
457
+ ret.append(dict(role='user', content=content))
458
+ return [ret]
459
+
460
+ def generate_inner_transformers(self, message, dataset=None):
461
+ if dataset is not None and DATASET_MODALITY(dataset) == 'VIDEO':
462
+ max_slice_nums = 1
463
+ use_image_id = False
464
+ max_inp_length = 2048 * 10
465
+ else:
466
+ max_slice_nums = None
467
+ use_image_id = True
468
+ max_inp_length = 8192
469
+
470
+ max_new_tokens = 2048
471
+ default_kwargs = dict(
472
+ max_new_tokens=max_new_tokens,
473
+ sampling=False,
474
+ num_beams=self.num_beams,
475
+ )
476
+ default_kwargs.update(self.kwargs)
477
+
478
+ content = []
479
+
480
+ for x in message:
481
+ if x['type'] == 'text':
482
+ content.append(x['value'])
483
+ elif x['type'] == 'image':
484
+ image = Image.open(x['value']).convert('RGB')
485
+ if not self.use_upsize(dataset):
486
+ content.append(image)
487
+ else:
488
+ img_width, img_height = image.width, image.height
489
+ if (img_width * img_height) >= (1344 * 1344):
490
+ content.append(image)
491
+ else:
492
+ ratio = math.sqrt((1344 * 1344) / (img_width * img_height))
493
+ max_img_width = int(img_width * ratio)
494
+ new_img_width = random.randint(img_width, max_img_width)
495
+ new_img_height = int(new_img_width / img_width * img_height)
496
+ resized_image = image.resize((new_img_width, new_img_height))
497
+ content.append(resized_image)
498
+ msgs = [{'role': 'user', 'content': content}]
499
+
500
+ res = self.model.chat(
501
+ image=None,
502
+ msgs=msgs,
503
+ context=None,
504
+ tokenizer=self.tokenizer,
505
+ max_inp_length=max_inp_length,
506
+ use_image_id=use_image_id,
507
+ max_slice_nums=max_slice_nums,
508
+ **default_kwargs
509
+ )
510
+
511
+ if isinstance(res, tuple) and len(res) > 0:
512
+ res = res[0]
513
+
514
+ return res
515
+
516
+ def generate_inner_lmdeploy(self, message, dataset=None):
517
+ from lmdeploy import GenerationConfig
518
+ gen_config = GenerationConfig(
519
+ max_new_tokens=2048,
520
+ top_p=0.001,
521
+ top_k=1,
522
+ temperature=0.01,
523
+ repetition_penalty=1.0,
524
+ )
525
+ gen_config.random_seed = None
526
+ messages_list = self.message_to_lmdeploy(message, system_prompt=None)
527
+ assert len(messages_list) == 1
528
+ response = self.model(messages_list, gen_config=gen_config)[0]
529
+ response = response.text
530
+ return response
531
+
532
+ def generate_inner(self, message, dataset=None):
533
+ if self.use_lmdeploy:
534
+ return self.generate_inner_lmdeploy(message, dataset)
535
+ else:
536
+ return self.generate_inner_transformers(message, dataset)
537
+
538
+
539
+ class MiniCPM_o_2_6(BaseModel):
540
+ INSTALL_REQ = False
541
+ INTERLEAVE = True
542
+
543
+ def __init__(self, model_path='openbmb/MiniCPM-o-2_6', **kwargs):
544
+ random.seed(0)
545
+ np.random.seed(0)
546
+ torch.manual_seed(0)
547
+ torch.cuda.manual_seed_all(0)
548
+
549
+ assert model_path is not None
550
+ self.model_path = model_path
551
+ print(f'load from path {self.model_path}')
552
+ self.model = AutoModel.from_pretrained(
553
+ self.model_path,
554
+ trust_remote_code=True,
555
+ attn_implementation='sdpa',
556
+ torch_dtype=torch.bfloat16,
557
+ init_vision=True,
558
+ init_audio=False,
559
+ init_tts=False
560
+ )
561
+
562
+ self.model.eval().cuda()
563
+
564
+ self.kwargs = kwargs
565
+ self.tokenizer = AutoTokenizer.from_pretrained(self.model_path, trust_remote_code=True)
566
+ torch.cuda.empty_cache()
567
+
568
+ self.num_beams = int(os.getenv("NUM_BEAMS", "3"))
569
+
570
+ repetition_penalty = float(os.getenv("PENALTY", "1.2"))
571
+ self.repetition_penalty = repetition_penalty
572
+
573
+ self.options_suffix_prompt = '''\nAnswer with the option's letter from the given choices directly.'''
574
+ self.wo_options_system_prompt = 'Carefully read the following question Answer the question directly.'
575
+ self.detail_system_prompt = 'Answer this question in detail.'
576
+ self.vqa_prompt = 'Answer the question using a single word or phrase.'
577
+
578
+ self.multi_choice_cot_prompt = ('''Carefully read the following multichoice question, solve it step '''
579
+ '''by step and finally pick the option associated with the correct '''
580
+ '''answer in the format of "Answer: selected option\n\n''')
581
+ self.short_ans_cot_prompt = ('''Read the following question carefully, solve it step by step, and '''
582
+ '''then output the final answer in the format of "Answer: single number '''
583
+ '''or single word or phrase".\n\n''')
584
+
585
+ def use_custom_prompt(self, dataset=None):
586
+ if dataset is None:
587
+ return False
588
+ if listinstr(['MCQ', 'VQA', 'Y/N'], DATASET_TYPE(dataset)) and not listinstr(['Video'], DATASET_TYPE(dataset)):
589
+ return True
590
+ return False
591
+
592
+ def use_cot(self, dataset=None):
593
+ if dataset is None:
594
+ return False
595
+ if listinstr(['MMMU', 'MathVista', 'OCRBench', 'ChartQA', 'MathVision', 'MathVerse_MINI_Vision_Only'], dataset):
596
+ return True
597
+ elif listinstr(['MMVet', 'MMBench', 'MMStar', 'HallusionBench', 'AI2D', 'RealWorldQA',
598
+ 'POPE', 'ScienceQA', 'TextVQA', 'DocVQA'], dataset):
599
+ return False
600
+ else:
601
+ return False
602
+
603
+ def use_upsize(self, dataset=None):
604
+ if dataset is None:
605
+ return False
606
+ if listinstr(['MathVista', 'MMBench_TEST_CN', 'MMStar', 'AI2D', 'OCRBench', 'DynaMath'], dataset):
607
+ return True
608
+ else:
609
+ return False
610
+
611
+ def build_prompt(self, line, dataset=None):
612
+ if isinstance(line, int):
613
+ line = self.data.iloc[line]
614
+
615
+ tgt_path = self.dump_image(line, dataset)
616
+ system_prompt, prompt = '', ''
617
+
618
+ question = line['question']
619
+
620
+ if not self.use_cot(dataset):
621
+ if DATASET_TYPE(dataset) == 'MCQ':
622
+ options = {
623
+ cand: line[cand]
624
+ for cand in string.ascii_uppercase
625
+ if cand in line and not pd.isna(line[cand])
626
+ }
627
+ options_prompt = 'Options:\n'
628
+ for key, item in options.items():
629
+ options_prompt += f'{key}. {item}\n'
630
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
631
+ if hint is not None:
632
+ prompt += f'Hint: {hint}\n'
633
+ prompt += f'Question: {question}\n'
634
+ if len(options):
635
+ prompt += options_prompt
636
+ prompt += self.options_suffix_prompt
637
+ else:
638
+ system_prompt = self.wo_options_system_prompt
639
+
640
+ if 'MMMU' in dataset:
641
+ if len(system_prompt) > 0:
642
+ prompt = system_prompt + '\n' + prompt
643
+ system_prompt = ''
644
+ elif dataset is not None and listinstr(['HallusionBench'], dataset):
645
+ question += ' Yes or No?'
646
+ prompt = question
647
+ elif dataset is not None and listinstr(['OCRBench'], dataset):
648
+ system_prompt = self.vqa_prompt
649
+ prompt = question
650
+ elif DATASET_TYPE(dataset) == 'VQA':
651
+ if listinstr(['LLaVABench'], dataset):
652
+ system_prompt = ''
653
+ elif listinstr(['MMVet'], dataset):
654
+ system_prompt = self.detail_system_prompt
655
+ else:
656
+ system_prompt = self.vqa_prompt
657
+ prompt = question
658
+ else:
659
+ prompt = question
660
+ else:
661
+ has_options = True
662
+ if DATASET_TYPE(dataset) == 'MCQ':
663
+ options = {
664
+ cand: line[cand]
665
+ for cand in string.ascii_uppercase
666
+ if cand in line and not pd.isna(line[cand])
667
+ }
668
+ options_prompt = ''
669
+ for key, item in options.items():
670
+ options_prompt += f'{key}. {item}\n'
671
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
672
+ if hint is not None:
673
+ prompt += f'Hint: {hint}\n'
674
+ prompt += f'{question}\n'
675
+
676
+ if len(options):
677
+ prompt += options_prompt
678
+ else:
679
+ has_options = False
680
+
681
+ if 'MMMU' in dataset:
682
+ if len(system_prompt) > 0:
683
+ prompt = system_prompt + '\n' + prompt
684
+ system_prompt = ''
685
+ else:
686
+ prompt = question
687
+
688
+ if DATASET_TYPE(dataset) in ['MCQ', 'Y/N', 'VQA']:
689
+ if DATASET_TYPE(dataset) == 'MCQ':
690
+ if has_options:
691
+ prompt = self.multi_choice_cot_prompt + prompt
692
+ else:
693
+ prompt = self.short_ans_cot_prompt + prompt
694
+ elif DATASET_TYPE(dataset) == 'Y/N':
695
+ prompt = self.short_ans_cot_prompt + prompt
696
+ else:
697
+ prompt = self.short_ans_cot_prompt + prompt
698
+
699
+ msgs = []
700
+ if system_prompt:
701
+ msgs.append(dict(type='text', value=system_prompt))
702
+ if isinstance(tgt_path, list):
703
+ msgs.extend([dict(type='image', value=p) for p in tgt_path])
704
+ else:
705
+ msgs.append(dict(type='image', value=tgt_path))  # append so any system prompt added above is kept
706
+ msgs.append(dict(type='text', value=prompt))
707
+
708
+ return msgs
709
+
710
+ def extract_answer(self, res, dataset=None):
711
+ if dataset is None:
712
+ return res
713
+ if self.use_cot(dataset):
714
+ if DATASET_TYPE(dataset) == 'MCQ':
715
+ pattern = r'Answer:\s*([A-Ia-i])(?![A-Za-z])'
716
+ matches = re.findall(pattern, res, re.DOTALL)
717
+ if matches:
718
+ extracted_res = matches[-1].strip()
719
+ else:
720
+ extracted_res = res
721
+ return extracted_res
722
+ elif DATASET_TYPE(dataset) == 'VQA' and not listinstr(['OCRBench'], dataset):
723
+ pattern = r'Answer:\s*(.*)\s*$'
724
+ match = re.search(pattern, res, re.DOTALL)
725
+ if match:
726
+ extracted_res = match.group(1)
727
+ else:
728
+ extracted_res = res
729
+ return extracted_res
730
+ return res
731
+
732
+ def generate_inner(self, message, dataset=None):
733
+ if dataset is not None and DATASET_MODALITY(dataset) == 'VIDEO':
734
+ max_slice_nums = 1
735
+ use_image_id = False
736
+ max_inp_length = 2048 * 10
737
+ else:
738
+ max_slice_nums = None
739
+ use_image_id = True
740
+ max_inp_length = 8192
741
+
742
+ max_new_tokens = 2048
743
+ default_kwargs = dict(
744
+ max_new_tokens=max_new_tokens,
745
+ sampling=False,
746
+ repetition_penalty=self.repetition_penalty,
747
+ num_beams=self.num_beams,
748
+ )
749
+ default_kwargs.update(self.kwargs)
750
+
751
+ content = []
752
+
753
+ if dataset is not None and DATASET_TYPE(dataset) == 'Video-MCQ':
754
+ message.append(dict(type='text', value=self.options_suffix_prompt))
755
+
756
+ for x in message:
757
+ if x['type'] == 'text':
758
+ content.append(x['value'])
759
+ elif x['type'] == 'image':
760
+ image = Image.open(x['value']).convert('RGB')
761
+ if not self.use_upsize(dataset):
762
+ content.append(image)
763
+ else:
764
+ img_width, img_height = image.width, image.height
765
+ if (img_width * img_height) >= (1344 * 1344):
766
+ content.append(image)
767
+ else:
768
+ ratio = math.sqrt((1344 * 1344) / (img_width * img_height))
769
+ max_img_width = int(img_width * ratio)
770
+ new_img_width = random.randint(img_width, max_img_width)
771
+ new_img_height = int(new_img_width / img_width * img_height)
772
+ resized_image = image.resize((new_img_width, new_img_height))
773
+ content.append(resized_image)
774
+ msgs = [{'role': 'user', 'content': content}]
775
+
776
+ res = self.model.chat(
777
+ image=None,
778
+ msgs=msgs,
779
+ context=None,
780
+ tokenizer=self.tokenizer,
781
+ max_inp_length=max_inp_length,
782
+ use_image_id=use_image_id,
783
+ max_slice_nums=max_slice_nums,
784
+ **default_kwargs
785
+ )
786
+
787
+ if isinstance(res, tuple) and len(res) > 0:
788
+ res = res[0]
789
+
790
+ res = self.extract_answer(res, dataset)
791
+
792
+ return res
793
+
794
+
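A small, self-contained sketch of how the "Answer:" extraction used by extract_answer above behaves on a chain-of-thought response; the sample string is invented for illustration.

import re

sample = 'Comparing the options step by step, the area rules out A and C... Answer: B'
# Same MCQ pattern as extract_answer: one option letter after "Answer:",
# not followed by another letter (so "Answer: Blue" would not be read as "B").
pattern = r'Answer:\s*([A-Ia-i])(?![A-Za-z])'
matches = re.findall(pattern, sample, re.DOTALL)
print(matches[-1].strip() if matches else sample)  # prints: B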
795
+ class MiniCPM_V_4(BaseModel):
796
+ INSTALL_REQ = False
797
+ INTERLEAVE = True
798
+
799
+ def __init__(self, model_path='openbmb/MiniCPM-V-4', **kwargs):
800
+ random.seed(0)
801
+ np.random.seed(0)
802
+ torch.manual_seed(0)
803
+ torch.cuda.manual_seed_all(0)
804
+ assert model_path is not None
805
+ self.model_path = model_path
806
+ print(f'load from path {self.model_path}')
807
+ self.model = AutoModel.from_pretrained(self.model_path, trust_remote_code=True)
808
+ self.model = self.model.to(dtype=torch.bfloat16)
809
+ self.model.eval().cuda()
810
+ self.kwargs = kwargs
811
+ self.tokenizer = AutoTokenizer.from_pretrained(self.model_path, trust_remote_code=True)
812
+ torch.cuda.empty_cache()
813
+
814
+ self.num_beams = 3
815
+ self.max_new_tokens = 2048
816
+ self.options_suffix_prompt = '''\nAnswer with the option's letter from the given choices directly.'''
817
+ self.wo_options_system_prompt = 'Carefully read the following question. Answer the question directly.'
818
+ self.detail_system_prompt = 'Answer this question in detail.'
819
+ self.vqa_prompt = 'Answer the question using a single word or phrase.'
820
+ self.multi_choice_cot_prompt = ('''Carefully read the following multichoice question, solve it step '''
821
+ '''by step and finally pick the option associated with the correct '''
822
+ '''answer in the format of "Answer: selected option".\n\n''')
823
+ self.short_ans_cot_prompt = ('''Read the following question carefully, solve it step by step, and '''
824
+ '''then output the final answer in the format of "Answer: single number '''
825
+ '''or single word or phrase".\n\n''')
826
+ self.ocrbench_cot_prompt = 'Carefully observe the image and answer the OCR-related questions below. \n\n'
827
+
828
+ def use_custom_prompt(self, dataset=None):
829
+ if dataset is None:
830
+ return False
831
+ if listinstr(['MCQ', 'VQA', 'Y/N'], DATASET_TYPE(dataset)):
832
+ return True
833
+ return False
834
+
835
+ def use_cot(self, dataset=None):
836
+ if dataset is None:
837
+ return False
838
+ if listinstr([
839
+ 'MMMU', 'MathVista', 'MMStar', 'HallusionBench', 'OCRBench',
840
+ 'ChartQA', 'MathVision', 'MathVerse_MINI_Vision_Only'
841
+ ], dataset):
842
+ return True
843
+ elif listinstr([
844
+ 'MMVet', 'MMBench', 'AI2D', 'RealWorldQA', 'POPE', 'ScienceQA',
845
+ 'TextVQA', 'DocVQA'
846
+ ], dataset):
847
+ return False
848
+ else:
849
+ return False
850
+
851
+ def use_upsize(self, dataset=None):
852
+ if dataset is None:
853
+ return False
854
+ if listinstr([
855
+ 'MathVista', 'MMVet', 'MMStar', 'AI2D', 'OCRBench', 'ChartQA',
856
+ 'TextVQA'
857
+ ], dataset):
858
+ return True
859
+ else:
860
+ return False
861
+
862
+ def build_prompt(self, line, dataset=None):
863
+ if isinstance(line, int):
864
+ line = self.data.iloc[line]
865
+
866
+ tgt_path = self.dump_image(line, dataset)
867
+ system_prompt, prompt = '', ''
868
+
869
+ question = line['question']
870
+
871
+ if not self.use_cot(dataset):
872
+ if DATASET_TYPE(dataset) == 'MCQ':
873
+ options = {
874
+ cand: line[cand]
875
+ for cand in string.ascii_uppercase
876
+ if cand in line and not pd.isna(line[cand])
877
+ }
878
+ options_prompt = 'Options:\n'
879
+ for key, item in options.items():
880
+ options_prompt += f'{key}. {item}\n'
881
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
882
+ if hint is not None:
883
+ prompt += f'Hint: {hint}\n'
884
+ prompt += f'Question: {question}\n'
885
+ if len(options):
886
+ prompt += options_prompt
887
+ prompt += self.options_suffix_prompt
888
+ else:
889
+ system_prompt = self.wo_options_system_prompt
890
+
891
+ if 'MMMU' in dataset:
892
+ if len(system_prompt) > 0:
893
+ prompt = system_prompt + '\n' + prompt
894
+ system_prompt = ''
895
+ elif dataset is not None and listinstr(['HallusionBench'], dataset):
896
+ question += ' Yes or No?'
897
+ prompt = question
898
+ elif dataset is not None and listinstr(['OCRBench'], dataset):
899
+ system_prompt = self.vqa_prompt
900
+ prompt = question
901
+ elif DATASET_TYPE(dataset) == 'VQA':
902
+ if listinstr(['LLaVABench'], dataset):
903
+ system_prompt = ''
904
+ elif listinstr(['MMVet'], dataset):
905
+ system_prompt = self.detail_system_prompt
906
+ else:
907
+ system_prompt = self.vqa_prompt
908
+ prompt = question
909
+ else:
910
+ prompt = question
911
+ else:
912
+ has_options = True
913
+ if DATASET_TYPE(dataset) == 'MCQ':
914
+ options = {
915
+ cand: line[cand]
916
+ for cand in string.ascii_uppercase
917
+ if cand in line and not pd.isna(line[cand])
918
+ }
919
+ options_prompt = ''
920
+ for key, item in options.items():
921
+ options_prompt += f'{key}. {item}\n'
922
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
923
+ if hint is not None:
924
+ prompt += f'Hint: {hint}\n'
925
+ prompt += f'{question}\n'
926
+
927
+ if len(options):
928
+ prompt += options_prompt
929
+ else:
930
+ has_options = False
931
+
932
+ if 'MMMU' in dataset:
933
+ if len(system_prompt) > 0:
934
+ prompt = system_prompt + '\n' + prompt
935
+ system_prompt = ''
936
+ else:
937
+ prompt = question
938
+
939
+ if DATASET_TYPE(dataset) in ['MCQ', 'Y/N', 'VQA']:
940
+ if DATASET_TYPE(dataset) == 'MCQ':
941
+ if has_options:
942
+ prompt = self.multi_choice_cot_prompt + prompt
943
+ else:
944
+ prompt = self.short_ans_cot_prompt + prompt
945
+ elif DATASET_TYPE(dataset) == 'Y/N':
946
+ prompt = self.short_ans_cot_prompt + prompt
947
+ elif listinstr(['OCRBench'], dataset):
948
+ prompt = self.ocrbench_cot_prompt + prompt
949
+ else:
950
+ prompt = self.short_ans_cot_prompt + prompt
951
+
952
+ msgs = []
953
+ if system_prompt:
954
+ msgs.append(dict(type='text', value=system_prompt))
955
+ if isinstance(tgt_path, list):
956
+ msgs.extend([dict(type='image', value=p) for p in tgt_path])
957
+ else:
958
+ msgs.append(dict(type='image', value=tgt_path))
959
+ msgs.append(dict(type='text', value=prompt))
960
+
961
+ if dataset.startswith('MMMU_'):
962
+ from .. import MMMUDataset
963
+ msgs = MMMUDataset.split_MMMU(msgs)
964
+
965
+ return msgs
966
+
967
+ def extract_answer(self, res, dataset=None):
968
+ if dataset is None:
969
+ return res
970
+ if self.use_cot(dataset):
971
+ if DATASET_TYPE(dataset) == 'MCQ':
972
+ pattern = r'Answer:\s*([A-Ia-i])(?![A-Za-z])'
973
+ matches = re.findall(pattern, res, re.DOTALL)
974
+ if matches:
975
+ extracted_res = matches[-1].strip()
976
+ else:
977
+ extracted_res = res
978
+ return extracted_res
979
+ elif DATASET_TYPE(dataset) == 'VQA' and not listinstr(['OCRBench', 'MMVet'], dataset):
980
+ pattern = r'Answer:\s*(.*)\s*$'
981
+ match = re.search(pattern, res, re.DOTALL)
982
+ if match:
983
+ extracted_res = match.group(1)
984
+ else:
985
+ extracted_res = res
986
+ return extracted_res
987
+ elif DATASET_TYPE(dataset) == 'Y/N':
988
+ pattern = r'Answer:\s*(.*)\s*$'
989
+ match = re.search(pattern, res, re.DOTALL)
990
+ if match:
991
+ extracted_res = match.group(1)
992
+ else:
993
+ extracted_res = res
994
+ return extracted_res
995
+ return res
996
+
997
+ def generate_inner(self, message, dataset=None):
998
+ if self.use_cot(dataset):
999
+ max_new_tokens = self.max_new_tokens
1000
+ else:
1001
+ max_new_tokens = 1024
1002
+ default_kwargs = dict(
1003
+ max_new_tokens=max_new_tokens,
1004
+ sampling=False,
1005
+ num_beams=self.num_beams,
1006
+ )
1007
+ default_kwargs.update(self.kwargs)
1008
+
1009
+ content = []
1010
+
1011
+ for x in message:
1012
+ if x['type'] == 'text':
1013
+ content.append(x['value'])
1014
+ elif x['type'] == 'image':
1015
+ image = Image.open(x['value']).convert('RGB')
1016
+ if not self.use_upsize(dataset):
1017
+ content.append(image)
1018
+ else:
1019
+ img_width, img_height = image.width, image.height
1020
+ if (img_width * img_height) >= (1344 * 1344):
1021
+ content.append(image)
1022
+ else:
1023
+ ratio = math.sqrt((1344 * 1344) / (img_width * img_height))
1024
+ max_img_width = int(img_width * ratio)
1025
+ new_img_width = random.randint(img_width, max_img_width)
1026
+ new_img_height = int(new_img_width / img_width * img_height)
1027
+ resized_image = image.resize((new_img_width, new_img_height))
1028
+ content.append(resized_image)
1029
+ msgs = [{'role': 'user', 'content': content}]
1030
+
1031
+ res = self.model.chat(
1032
+ image=None,
1033
+ msgs=msgs,
1034
+ context=None,
1035
+ tokenizer=self.tokenizer,
1036
+ max_inp_length=8192,
1037
+ **default_kwargs
1038
+ )
1039
+
1040
+ if isinstance(res, tuple) and len(res) > 0:
1041
+ res = res[0]
1042
+ res = self.extract_answer(res, dataset)
1043
+
1044
+ return res
1045
+
1046
+
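The random upsizing branch used in generate_inner above (active when use_upsize(dataset) is True) can be read as the following standalone helper; maybe_upsize and target_area are names introduced here for illustration and are not part of the original file.

import math
import random
from PIL import Image

def maybe_upsize(image: Image.Image, target_area: int = 1344 * 1344) -> Image.Image:
    # Images already at or above ~1344 x 1344 pixels of area are left untouched.
    w, h = image.width, image.height
    if w * h >= target_area:
        return image
    # Otherwise pick a random width between the original width and the width
    # that would bring the area up to target_area, keeping the aspect ratio.
    ratio = math.sqrt(target_area / (w * h))
    new_w = random.randint(w, int(w * ratio))
    new_h = int(new_w / w * h)
    return image.resize((new_w, new_h))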
1047
+ class MiniCPM_V_4_5(MiniCPM_V_4):
1048
+ INSTALL_REQ = False
1049
+ INTERLEAVE = True
1050
+
1051
+ def __init__(self, model_path='openbmb/MiniCPM-V-4_5', **kwargs):
1052
+ super().__init__(model_path, **kwargs)
1053
+ from transformers import AutoProcessor
1054
+ self.processor = AutoProcessor.from_pretrained(self.model_path, trust_remote_code=True)
1055
+ self._original_chat_template = self.tokenizer.chat_template
1056
+ self._long_cot_chat_template = self._original_chat_template.replace(
1057
+ "{{- '<think>\\n' }}", "{{- '<think>\\nI' }}")
1058
+
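The chat-template tweak in __init__ above can be shown in isolation: the long-CoT variant seeds the think block with a leading "I", and generate_inner later restores that token in the returned text. A toy sketch (the template snippet is invented for illustration):

original_template = "{{- '<think>\\n' }}{{- generation }}"
long_cot_template = original_template.replace("{{- '<think>\\n' }}", "{{- '<think>\\nI' }}")
print(long_cot_template)  # the assistant turn now starts with "<think>\nI"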
1059
+ def use_long_cot(self, dataset=None):
1060
+ if dataset is None:
1061
+ return False
1062
+ if listinstr([
1063
+ 'MMMU', 'MathVista', 'MMVet', 'MMBench', 'HallusionBench',
1064
+ 'MMStar', 'MathVision', 'MathVerse_MINI',
1065
+ 'MathVerse_MINI_Vision_Only', 'DynaMath', 'LogicVista'
1066
+ ], dataset):
1067
+ return True
1068
+ else:
1069
+ return False
1070
+
1071
+ def use_cot(self, dataset=None):
1072
+ if dataset is None:
1073
+ return False
1074
+ if listinstr([
1075
+ 'MMMU', 'MathVista', 'MMBench', 'HallusionBench', 'MMStar',
1076
+ 'OCRBench', 'ChartQA', 'MathVision', 'MathVerse_MINI',
1077
+ 'MathVerse_MINI_Vision_Only', 'DynaMath', 'LogicVista'
1078
+ ], dataset):
1079
+ return True
1080
+ else:
1081
+ return False
1082
+
1083
+ def use_upsize(self, dataset=None):
1084
+ if dataset is None:
1085
+ return False
1086
+ if self.use_long_cot(dataset):
1087
+ return True
1088
+ if listinstr(['AI2D', 'OCRBench', 'ChartQA', 'TextVQA'], dataset):
1089
+ return True
1090
+ else:
1091
+ return False
1092
+
1093
+ def build_prompt(self, line, dataset=None):
1094
+ if self.use_long_cot(dataset):
1095
+ self.tokenizer.chat_template = self._long_cot_chat_template
1096
+ else:
1097
+ self.tokenizer.chat_template = self._original_chat_template
1098
+
1099
+ if isinstance(line, int):
1100
+ line = self.data.iloc[line]
1101
+
1102
+ tgt_path = self.dump_image(line, dataset)
1103
+ system_prompt, prompt = '', ''
1104
+ question = line['question']
1105
+
1106
+ if not self.use_cot(dataset):
1107
+ if DATASET_TYPE(dataset) == 'MCQ':
1108
+ options = {
1109
+ cand: line[cand]
1110
+ for cand in string.ascii_uppercase
1111
+ if cand in line and not pd.isna(line[cand])
1112
+ }
1113
+ options_prompt = 'Options:\n'
1114
+ for key, item in options.items():
1115
+ options_prompt += f'{key}. {item}\n'
1116
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
1117
+ if hint is not None:
1118
+ prompt += f'Hint: {hint}\n'
1119
+ prompt += f'Question: {question}\n'
1120
+ if len(options):
1121
+ prompt += options_prompt
1122
+ prompt += self.options_suffix_prompt
1123
+ else:
1124
+ system_prompt = self.wo_options_system_prompt
1125
+
1126
+ if 'MMMU' in dataset:
1127
+ if len(system_prompt) > 0:
1128
+ prompt = system_prompt + '\n' + prompt
1129
+ system_prompt = ''
1130
+ elif dataset is not None and listinstr(['HallusionBench'], dataset):
1131
+ question += ' Yes or No?'
1132
+ prompt = question
1133
+ elif dataset is not None and listinstr(['OCRBench'], dataset):
1134
+ system_prompt = self.vqa_prompt
1135
+ prompt = question
1136
+ elif DATASET_TYPE(dataset) == 'VQA':
1137
+ if listinstr(['LLaVABench'], dataset):
1138
+ system_prompt = ''
1139
+ elif listinstr(['MMVet'], dataset):
1140
+ system_prompt = self.detail_system_prompt
1141
+ else:
1142
+ system_prompt = self.vqa_prompt
1143
+ prompt = question
1144
+ else:
1145
+ prompt = question
1146
+ else:
1147
+ has_options = True
1148
+ if DATASET_TYPE(dataset) == 'MCQ':
1149
+ options = {
1150
+ cand: line[cand]
1151
+ for cand in string.ascii_uppercase
1152
+ if cand in line and not pd.isna(line[cand])
1153
+ }
1154
+ options_prompt = ''
1155
+ for key, item in options.items():
1156
+ options_prompt += f'{key}. {item}\n'
1157
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
1158
+ if hint is not None:
1159
+ prompt += f'Hint: {hint}\n'
1160
+ prompt += f'{question}\n'
1161
+
1162
+ if len(options):
1163
+ prompt += options_prompt
1164
+ else:
1165
+ has_options = False
1166
+
1167
+ if 'MMMU' in dataset:
1168
+ if len(system_prompt) > 0:
1169
+ prompt = system_prompt + '\n' + prompt
1170
+ system_prompt = ''
1171
+ else:
1172
+ prompt = question
1173
+
1174
+ if DATASET_TYPE(dataset) in ['MCQ', 'Y/N', 'VQA']:
1175
+ if DATASET_TYPE(dataset) == 'MCQ':
1176
+ if has_options:
1177
+ prompt = self.multi_choice_cot_prompt + prompt
1178
+ else:
1179
+ prompt = self.short_ans_cot_prompt + prompt
1180
+ elif DATASET_TYPE(dataset) == 'Y/N':
1181
+ prompt = self.short_ans_cot_prompt + prompt
1182
+ elif listinstr(['OCRBench'], dataset):
1183
+ prompt = self.ocrbench_cot_prompt + prompt
1184
+ else:
1185
+ prompt = self.short_ans_cot_prompt + prompt
1186
+
1187
+ msgs = []
1188
+ if system_prompt:
1189
+ msgs.append(dict(type='text', value=system_prompt))
1190
+ if isinstance(tgt_path, list):
1191
+ msgs.extend([dict(type='image', value=p) for p in tgt_path])
1192
+ else:
1193
+ msgs.append(dict(type='image', value=tgt_path))
1194
+ msgs.append(dict(type='text', value=prompt))
1195
+
1196
+ if dataset.startswith('MMMU_'):
1197
+ from .. import MMMUDataset
1198
+ msgs = MMMUDataset.split_MMMU(msgs)
1199
+
1200
+ return msgs
1201
+
1202
+ def generate_inner(self, message, dataset=None):
1203
+ if self.use_long_cot(dataset):
1204
+ default_kwargs = dict(
1205
+ enable_thinking=True,
1206
+ max_new_tokens=8192,
1207
+ sampling=True,
1208
+ temperature=0.7,
1209
+ num_beams=1,
1210
+ top_p=1.0,
1211
+ top_k=0,
1212
+ repetition_penalty=1.0,
1213
+ no_repeat_ngram_size=0
1214
+ )
1215
+ elif self.use_cot(dataset):
1216
+ default_kwargs = dict(
1217
+ max_new_tokens=2048,
1218
+ sampling=False,
1219
+ num_beams=self.num_beams,
1220
+ repetition_penalty=1.2
1221
+ )
1222
+ else:
1223
+ default_kwargs = dict(
1224
+ max_new_tokens=1024,
1225
+ sampling=False,
1226
+ num_beams=self.num_beams,
1227
+ repetition_penalty=1.2
1228
+ )
1229
+
1230
+ default_kwargs.update(self.kwargs)
1231
+
1232
+ content = []
1233
+ for x in message:
1234
+ if x['type'] == 'text':
1235
+ content.append(x['value'])
1236
+ elif x['type'] == 'image':
1237
+ image = Image.open(x['value']).convert('RGB')
1238
+ if not self.use_upsize(dataset):
1239
+ content.append(image)
1240
+ else:
1241
+ img_width, img_height = image.width, image.height
1242
+ if (img_width * img_height) >= (1344 * 1344):
1243
+ content.append(image)
1244
+ else:
1245
+ ratio = math.sqrt((1344 * 1344) / (img_width * img_height))
1246
+ max_img_width = int(img_width * ratio)
1247
+ new_img_width = random.randint(img_width, max_img_width)
1248
+ new_img_height = int(new_img_width / img_width * img_height)
1249
+ resized_image = image.resize((new_img_width, new_img_height))
1250
+ content.append(resized_image)
1251
+ msgs = [{'role': 'user', 'content': content}]
1252
+
1253
+ self.processor.tokenizer = self.tokenizer
1254
+
1255
+ res = self.model.chat(
1256
+ image=None,
1257
+ msgs=msgs,
1258
+ context=None,
1259
+ tokenizer=self.tokenizer,
1260
+ processor=self.processor,
1261
+ max_inp_length=8192,
1262
+ **default_kwargs
1263
+ )
1264
+
1265
+ if isinstance(res, tuple) and len(res) > 0:
1266
+ res = res[0]
1267
+
1268
+ res = res.replace('<think>\n', '<think>\nI ')
1269
+ res = self.extract_answer(res, dataset)
1270
+
1271
+ return res
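A hedged, standalone sketch of invoking MiniCPM-V-4 the same way generate_inner does above; the image path and question are placeholders, and the keyword arguments simply mirror the chat() call in the diff.

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_path = 'openbmb/MiniCPM-V-4'
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
model = model.to(dtype=torch.bfloat16).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

image = Image.open('example.jpg').convert('RGB')  # placeholder image path
question = 'Answer the question using a single word or phrase.\nWhat animal is shown?'
msgs = [{'role': 'user', 'content': [image, question]}]

res = model.chat(
    image=None,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    max_inp_length=8192,
    sampling=False,
    num_beams=3,
    max_new_tokens=1024,
)
print(res[0] if isinstance(res, tuple) else res)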
VLMEvalKit-sudoku/vlmeval/vlm/misc/minigpt4_7b_eval.yaml ADDED
@@ -0,0 +1,38 @@
1
+ model:
2
+ arch: minigpt4
3
+ model_type: pretrain_vicuna_7b
4
+ max_txt_len: 160
5
+ end_sym: "###"
6
+ low_resource: True
7
+ prompt_template: '###Human: {} ###Assistant: '
8
+ ckpt: "please set this value to the path of pretrained checkpoint"
9
+
10
+ # vit encoder
11
+ image_size: 224
12
+ drop_path_rate: 0
13
+ use_grad_checkpoint: False
14
+ vit_precision: "fp16"
15
+ freeze_vit: True
16
+ freeze_qformer: True
17
+
18
+ # Q-Former
19
+ num_query_token: 32
20
+
21
+ # generation configs
22
+ prompt: ""
23
+
24
+ llama_model: "please set this value to the path of vicuna-7b-v0"
25
+
26
+
27
+ datasets:
28
+ cc_sbu_align:
29
+ vis_processor:
30
+ train:
31
+ name: "blip2_image_eval"
32
+ image_size: 224
33
+ text_processor:
34
+ train:
35
+ name: "blip_caption"
36
+
37
+ run:
38
+ task: image_text_pretrain
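A minimal sketch of inspecting the eval config above with plain YAML parsing (not necessarily how the evaluation kit itself loads it); it highlights the two placeholder paths a user has to fill in before running.

import yaml

with open('minigpt4_7b_eval.yaml') as f:
    cfg = yaml.safe_load(f)

print(cfg['model']['ckpt'])         # path to the pretrained MiniGPT-4 checkpoint
print(cfg['model']['llama_model'])  # path to the vicuna-7b-v0 weights
print(cfg['model']['image_size'])   # 224, matching the blip2_image_eval processor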
VLMEvalKit-sudoku/vlmeval/vlm/ola/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (185 Bytes). View file