Dataset Viewer
Auto-converted to Parquet
Columns (name: type):
paper_id: uint32
title: string
authors: list
cvf_url: string
pdf_url: string
supp_url: string
bibtex: string
abstract: large_string
arxiv_id: string
comment: string
github: string
project_page: string
space_ids: list
model_ids: list
dataset_ids: list
embedding: list
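Because the dataset is auto-converted to Parquet, it can be loaded straight from the Hub with the `datasets` library. A minimal sketch, assuming the usual single "train" split (the split name is not shown in this preview):

```python
# Load the ICCV 2025 paper-metadata dataset and inspect one record.
# Assumption (not stated on this page): the default config exposes a "train" split.
from datasets import load_dataset

ds = load_dataset("ai-conferences/ICCV2025", split="train")

row = ds[0]
print(row["title"])           # paper title (string)
print(row["authors"])         # list of author names
print(row["pdf_url"])         # CVF open-access PDF link
print(len(row["embedding"]))  # length of the per-paper embedding vector
```

In the preview below, arxiv_id, comment, github, and project_page are null when no link is listed, and space_ids, model_ids, and dataset_ids are empty lists for most papers.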
Preview (first 10 rows):
paper_id: 0
title: kh: Symmetry Understanding of 3D Shapes via Chirality Disentanglement
authors: [ "Weikang Wang", "Tobias Weißberg", "Nafie El Amrani", "Florian Bernard" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Wang_kh_Symmetry_Understanding_of_3D_Shapes_via_Chirality_Disentanglement_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Wang_kh_Symmetry_Understanding_of_3D_Shapes_via_Chirality_Disentanglement_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Wang_kh_Symmetry_Understanding_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Wang_2025_ICCV, author = {Wang, Weikang and Wei{\ss}berg, Tobias and El Amrani, Nafie and Bernard, Florian}, title = {kh: Symmetry Understanding of 3D Shapes via Chirality Disentanglement}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {28292-28302} }
abstract: Chirality information (i.e. information that allows distinguishing left from right) is ubiquitous for various data modes in computer vision, including images, videos, point clouds, and meshes. While chirality has been extensively studied in the image domain, its exploration in shape analysis (such as point clouds and meshes) remains underdeveloped. Although many shape vertex descriptors have shown appealing properties (e.g. robustness to rigid-body transformations), they are often not able to disambiguate between left and right symmetric parts. Considering the ubiquity of chirality information in different shape analysis problems and the lack of chirality-aware features within current shape descriptors, developing a chirality feature extractor becomes necessary and urgent. Based on the recent Diff3F framework, we propose an unsupervised chirality feature extraction pipeline to decorate shape vertices with chirality-aware information, extracted from 2D foundation models. We evaluated the extracted chirality features through quantitative and qualitative experiments across diverse datasets. Results from downstream tasks including left-right disentanglement, shape matching, and part segmentation demonstrate their effectiveness and practical utility. Project page: https://wei-kang-wang.github.io/chirality/
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.014335653744637966, 0.0011623065220192075, 0.010857202112674713, 0.0178863313049078, 0.020875804126262665, 0.03718924894928932, 0.006471932865679264, 0.011918001808226109, -0.014221075922250748, -0.05570879206061363, -0.011482015252113342, -0.04218422994017601, -0.05513007193803787, 0.05...
paper_id: 1
title: Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy
authors: [ "Yiting Yang", "Hao Luo", "Yuan Sun", "Qingsen Yan", "Haokui Zhang", "Wei Dong", "Guoqing Wang", "Peng Wang", "Yang Yang", "Hengtao Shen" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Yang_Efficient_Adaptation_of_Pre-trained_Vision_Transformer_underpinned_by_Approximately_Orthogonal_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Yang_Efficient_Adaptation_of_Pre-trained_Vision_Transformer_underpinned_by_Approximately_Orthogonal_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Yang_Efficient_Adaptation_of_ICCV_2025_supplemental.zip
bibtex: @InProceedings{Yang_2025_ICCV, author = {Yang, Yiting and Luo, Hao and Sun, Yuan and Yan, Qingsen and Zhang, Haokui and Dong, Wei and Wang, Guoqing and Wang, Peng and Yang, Yang and Shen, Hengtao}, title = {Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {4878-4887} }
abstract: A prevalent approach in Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViT) involves freezing the majority of the backbone parameters and solely learning low-rank adaptation weight matrices to accommodate downstream tasks. These low-rank matrices are commonly derived through the multiplication structure of down-projection and up-projection matrices, exemplified by methods such as LoRA and Adapter. In this study, we observe an approximate orthogonality among any two row or column vectors within any weight matrix of the backbone parameters; however, this property is absent in the vectors of the down/up-projection matrices. Approximate orthogonality implies a reduction in the upper bound of the model's generalization error, signifying that the model possesses enhanced generalization capability. If the fine-tuned down/up-projection matrices were to exhibit this same property as the pre-trained backbone matrices, could the generalization capability of fine-tuned ViTs be further augmented? To address this question, we propose an Approximately Orthogonal Fine-Tuning (AOFT) strategy for representing the low-rank weight matrices. This strategy employs a single learnable vector to generate a set of approximately orthogonal vectors, which form the down/up-projection matrices, thereby aligning the properties of these matrices with those of the backbone. Extensive experimental results demonstrate that our method achieves competitive performance across a range of downstream image classification tasks, confirming the efficacy of the enhanced generalization capability embedded in the down/up-projection matrices. Our code is available at link.
arxiv_id: 2507.13260
comment: This paper is accepted by ICCV 2025
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.01595032773911953, -0.031848710030317307, 0.04327205568552017, 0.010211489163339138, 0.03500578552484512, 0.028599634766578674, 0.016466189175844193, 0.005730366334319115, -0.024307364597916603, -0.037106577306985855, -0.027496790513396263, 0.014403547160327435, -0.07350608706474304, -0...
paper_id: 2
title: MM-IFEngine: Towards Multimodal Instruction Following
authors: [ "Shengyuan Ding", "Shenxi Wu", "Xiangyu Zhao", "Yuhang Zang", "Haodong Duan", "Xiaoyi Dong", "Pan Zhang", "Yuhang Cao", "Dahua Lin", "Jiaqi Wang" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Ding_MM-IFEngine_Towards_Multimodal_Instruction_Following_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Ding_MM-IFEngine_Towards_Multimodal_Instruction_Following_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Ding_MM-IFEngine_Towards_Multimodal_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Ding_2025_ICCV, author = {Ding, Shengyuan and Wu, Shenxi and Zhao, Xiangyu and Zang, Yuhang and Duan, Haodong and Dong, Xiaoyi and Zhang, Pan and Cao, Yuhang and Lin, Dahua and Wang, Jiaqi}, title = {MM-IFEngine: Towards Multimodal Instruction Following}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {1099-1109} }
abstract: The Instruction Following (IF) ability measures how well Multi-modal Large Language Models (MLLMs) understand exactly what users are telling them and do it right. Existing multimodal instruction-following training data is scarce, the benchmarks are simple with atomic instructions, and the evaluation strategies are imprecise for tasks demanding exact output constraints. To address this, we present MM-IFEngine, an effective pipeline to generate high-quality image-instruction pairs. Our MM-IFEngine pipeline yields large-scale, diverse, and high-quality training data MM-IFInstruct-23k, which is suitable for Supervised Fine-Tuning (SFT) and extended as MM-IFDPO-23k for Direct Preference Optimization (DPO). We further introduce MM-IFEval, a challenging and diverse multi-modal instruction-following benchmark that includes (1) both textual constraints for output responses and visual constraints tied to the input images, and (2) a comprehensive evaluation pipeline incorporating rule-based assessment and LLM-as-a-Judge evaluation. We conduct SFT and DPO experiments and demonstrate that fine-tuning MLLMs on MM-IFInstruct-23k and MM-IFDPO-23k achieves notable gains on various IF benchmarks, such as MM-IFEval (+11.8%), MIA (+7.7%), and IFEval (+10.5%).
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.0023685619235038757, 0.0012950691161677241, -0.00007703046867391095, 0.023427100852131844, 0.03204823657870293, -0.003080470021814108, 0.04342794418334961, 0.019115056842565536, -0.04817984253168106, -0.003305742284283042, -0.03213934972882271, 0.06202574819326401, -0.0503186360001564, ...
paper_id: 3
title: Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads
authors: [ "Yingjie Zhou", "Jiezhang Cao", "Zicheng Zhang", "Farong Wen", "Yanwei Jiang", "Jun Jia", "Xiaohong Liu", "Xiongkuo Min", "Guangtao Zhai" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Zhou_Who_is_a_Better_Talker_Subjective_and_Objective_Quality_Assessment_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Zhou_Who_is_a_Better_Talker_Subjective_and_Objective_Quality_Assessment_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Zhou_Who_is_a_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Zhou_2025_ICCV, author = {Zhou, Yingjie and Cao, Jiezhang and Zhang, Zicheng and Wen, Farong and Jiang, Yanwei and Jia, Jun and Liu, Xiaohong and Min, Xiongkuo and Zhai, Guangtao}, title = {Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {12201-12211} }
abstract: Speech-driven methods for portraits are figuratively known as "Talkers" because of their capability to synthesize speaking mouth shapes and facial movements. Especially with the rapid development of Text-to-Image (T2I) models, AI-Generated Talking Heads (AGTHs) have gradually become an emerging form of digital human media. However, challenges persist regarding the quality of these talkers and the AGTHs they generate, and comprehensive studies addressing these issues remain limited. To address this gap, this paper presents THQA-10K, the largest AGTH quality assessment dataset to date, which selects 12 prominent T2I models and 14 advanced talkers to generate AGTHs for 14 prompts. After excluding instances where AGTH generation is unsuccessful, the THQA-10K dataset contains 10,457 AGTHs, which provides rich material for AGTH quality assessment. Then, volunteers are recruited to subjectively rate the AGTHs and assign the corresponding distortion categories. In our analysis of the subjective experimental results, we evaluate the performance of talkers in terms of generalizability and quality, and also expose the distortions of existing AGTHs. Finally, an objective quality assessment method based on the first frame, the Y-T slice, and tone-lip consistency is proposed. Experimental results show that this method achieves state-of-the-art (SOTA) performance in AGTH quality assessment. The work is released at https://github.com/zyj-2000/Talker.
arxiv_id: 2507.23343
comment: null
github: https://github.com/zyj-2000/Talker
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.010329356417059898, -0.020546993240714073, -0.0022839230950921774, 0.04020318016409874, 0.013103203848004341, 0.04535313695669174, 0.04665455222129822, 0.005283264443278313, -0.0022130522411316633, -0.050677791237831116, -0.039543699473142624, 0.031019918620586395, -0.0645647644996643, 0...
paper_id: 4
title: LayerAnimate: Layer-level Control for Animation
authors: [ "Yuxue Yang", "Lue Fan", "Zuzeng Lin", "Feng Wang", "Zhaoxiang Zhang" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Yang_LayerAnimate_Layer-level_Control_for_Animation_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Yang_LayerAnimate_Layer-level_Control_for_Animation_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Yang_LayerAnimate_Layer-level_Control_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Yang_2025_ICCV, author = {Yang, Yuxue and Fan, Lue and Lin, Zuzeng and Wang, Feng and Zhang, Zhaoxiang}, title = {LayerAnimate: Layer-level Control for Animation}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {10865-10874} }
abstract: Traditional animation production decomposes visual elements into discrete layers to enable independent processing for sketching, refining, coloring, and in-betweening. Existing anime video generation methods typically treat animation as a data domain distinct from real-world videos and lack fine-grained control at the layer level. To bridge this gap, we introduce LayerAnimate, a novel video diffusion framework with a layer-aware architecture that empowers the manipulation of layers through layer-level controls. The development of a layer-aware framework faces a significant data scarcity challenge due to the commercial sensitivity of professional animation assets. To address this limitation, we propose a data curation pipeline featuring Automated Element Segmentation and Motion-based Hierarchical Merging. Through quantitative and qualitative comparisons and a user study, we demonstrate that LayerAnimate outperforms current methods in terms of animation quality, control precision, and usability, making it an effective tool for both professional animators and amateur enthusiasts. This framework opens up new possibilities for layer-level animation applications and creative flexibility. Our code is available at https://layeranimate.github.io.
arxiv_id: 2501.08295
comment: Project page: https://layeranimate.github.io
github: null
project_page: https://layeranimate.github.io
space_ids: [ "IamCreateAI/LayerAnimate" ]
model_ids: [ "Yuppie1204/LayerAnimate-Mix" ]
dataset_ids: []
embedding: [ 0.014414518140256405, -0.036979883909225464, 0.005391916260123253, 0.015496291220188141, 0.028710853308439255, -0.0005574686801992357, -0.009399556554853916, -0.0020826796535402536, -0.026559410616755486, -0.051289815455675125, -0.032109182327985764, -0.019442444667220116, -0.017617933452129...
paper_id: 5
title: Towards a Unified Copernicus Foundation Model for Earth Vision
authors: [ "Yi Wang", "Zhitong Xiong", "Chenying Liu", "Adam J. Stewart", "Thomas Dujardin", "Nikolaos Ioannis Bountos", "Angelos Zavras", "Franziska Gerken", "Ioannis Papoutsis", "Laura Leal-Taixé", "Xiao Xiang Zhu" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Wang_Towards_a_Unified_Copernicus_Foundation_Model_for_Earth_Vision_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Wang_Towards_a_Unified_Copernicus_Foundation_Model_for_Earth_Vision_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Wang_Towards_a_Unified_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Wang_2025_ICCV, author = {Wang, Yi and Xiong, Zhitong and Liu, Chenying and Stewart, Adam J. and Dujardin, Thomas and Bountos, Nikolaos Ioannis and Zavras, Angelos and Gerken, Franziska and Papoutsis, Ioannis and Leal-Taix\'e, Laura and Zhu, Xiao Xiang}, title = {Towards a Unified Copernicus Foundation Model for Earth Vision}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {9888-9899} }
abstract: Advances in Earth observation (EO) foundation models have unlocked the potential of big satellite data to learn generic representations from space, benefiting a wide range of downstream applications crucial to our planet. However, most existing efforts remain limited to fixed spectral sensors, focus solely on the Earth's surface, and overlook valuable metadata beyond imagery. In this work, we take a step towards next-generation EO foundation models with three key components: 1) Copernicus-Pretrain, a massive-scale pretraining dataset that integrates 18.7M aligned images from all major Copernicus Sentinel missions, spanning from the Earth's surface to its atmosphere; 2) Copernicus-FM, a unified foundation model capable of processing any spectral or non-spectral sensor modality using extended dynamic hypernetworks and flexible metadata encoding; and 3) Copernicus-Bench, a systematic evaluation benchmark with 15 hierarchical downstream tasks ranging from preprocessing to specialized applications for each Sentinel mission. Our dataset, model, and benchmark greatly improve the scalability, versatility, and multimodal adaptability of EO foundation models, while also creating new opportunities to connect EO, weather, and climate research. Codes at https://github.com/zhu-xlab/Copernicus-FM.
arxiv_id: 2503.11849
comment: Accepted to ICCV 2025. 33 pages, 34 figures
github: https://github.com/zhu-xlab/Copernicus-FM
project_page: null
space_ids: []
model_ids: [ "wangyi111/Copernicus-FM" ]
dataset_ids: [ "wangyi111/Copernicus-Pretrain" ]
embedding: [ 0.027766352519392967, -0.06929122656583786, 0.02689434587955475, 0.015745263546705246, 0.03621228039264679, 0.004305608570575714, -0.004965104162693024, 0.05807843804359436, -0.051789600402116776, -0.055493514984846115, -0.03125998377799988, 0.006975578144192696, -0.07972833514213562, 0.00...
paper_id: 6
title: ROADWork: A Dataset and Benchmark for Learning to Recognize, Observe, Analyze and Drive Through Work Zones
authors: [ "Anurag Ghosh", "Shen Zheng", "Robert Tamburo", "Khiem Vuong", "Juan Alvarez-Padilla", "Hailiang Zhu", "Michael Cardei", "Nicholas Dunn", "Christoph Mertz", "Srinivasa G. Narasimhan" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Ghosh_ROADWork_A_Dataset_and_Benchmark_for_Learning_to_Recognize_Observe_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Ghosh_ROADWork_A_Dataset_and_Benchmark_for_Learning_to_Recognize_Observe_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Ghosh_ROADWork_A_Dataset_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Ghosh_2025_ICCV, author = {Ghosh, Anurag and Zheng, Shen and Tamburo, Robert and Vuong, Khiem and Alvarez-Padilla, Juan and Zhu, Hailiang and Cardei, Michael and Dunn, Nicholas and Mertz, Christoph and Narasimhan, Srinivasa G.}, title = {ROADWork: A Dataset and Benchmark for Learning to Recognize, Observe, Analyze and Drive Through Work Zones}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {6132-6142} }
abstract: Perceiving and autonomously navigating through work zones is a challenging and under-explored problem. Open datasets for this long-tailed scenario are scarce. We propose the ROADWork dataset to learn to recognize, observe, analyze, and drive through work zones. State-of-the-art foundation models fail when applied to work zones. Fine-tuning models on our dataset significantly improves perception and navigation in work zones. With ROADWork, we discover new work zone images with higher precision (+32.5%) at a much higher rate (12.8x) around the world. Open-vocabulary methods fail too, whereas fine-tuned detectors improve performance (+32.2 AP). Vision-Language Models (VLMs) struggle to describe work zones, but fine-tuning substantially improves performance (+36.7 SPICE). Beyond fine-tuning, we show the value of simple techniques. Video label propagation provides additional gains (+2.6 AP) for instance segmentation. For reading work zone signs, composing a detector and a text spotter via crop-scaling improves performance (+14.2% 1-NED). Composing work zone detections to provide context further reduces hallucinations (+3.9 SPICE) in VLMs. We predict navigational goals and compute drivable paths from work zone videos. Incorporating road work semantics ensures that 53.6% of goals have angular error (AE) < 0.5 (+9.9%) and 75.3% of pathways have AE < 0.5 (+8.1%).
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.055151909589767456, 0.015910230576992035, 0.020566178485751152, 0.03400871157646179, 0.033986639231443405, 0.01489538885653019, 0.047573987394571304, 0.043161697685718536, -0.0028116675093770027, -0.05412827059626579, -0.04244190827012062, 0.010571218095719814, -0.07279396802186966, -0.0...
paper_id: 7
title: Gradient Decomposition and Alignment for Incremental Object Detection
authors: [ "Wenlong Luo", "Shizhou Zhang", "De Cheng", "Yinghui Xing", "Guoqiang Liang", "Peng Wang", "Yanning Zhang" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Luo_Gradient_Decomposition_and_Alignment_for_Incremental_Object_Detection_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Luo_Gradient_Decomposition_and_Alignment_for_Incremental_Object_Detection_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Luo_Gradient_Decomposition_and_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Luo_2025_ICCV, author = {Luo, Wenlong and Zhang, Shizhou and Cheng, De and Xing, Yinghui and Liang, Guoqiang and Wang, Peng and Zhang, Yanning}, title = {Gradient Decomposition and Alignment for Incremental Object Detection}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {4486-4495} }
abstract: Incremental object detection (IOD) is crucial for enabling AI systems to continuously learn new object classes over time while retaining knowledge of previously learned categories, allowing the model to adapt to dynamic environments without forgetting prior information. Existing IOD methods primarily employ knowledge distillation to mitigate catastrophic forgetting, yet these approaches overlook class overlap issues, often resulting in suboptimal performance. In this paper, we propose a novel framework for IOD that leverages a decoupled gradient alignment technique on top of a specially designed pseudo-labeling strategy. Our method employs a Gaussian Mixture Model to accurately estimate pseudo-labels of previously learned objects in current training images, effectively functioning as a knowledge-replay mechanism. This strategy reinforces prior knowledge retention and prevents the misclassification of unannotated foreground objects from earlier classes as background. Furthermore, we introduce an adaptive gradient decomposition and alignment method to maintain model stability while facilitating positive knowledge transfer. By aligning gradients from both old and new classes, our approach preserves previously learned knowledge while enhancing plasticity for new tasks. Extensive experiments on two IOD benchmarks demonstrate the effectiveness of the proposed method, achieving performance superior to state-of-the-art methods.
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.0035268301144242287, -0.0004766414931509644, 0.004043825902044773, 0.03530827537178993, 0.028325721621513367, 0.03982969745993614, 0.026156311854720116, -0.01055420283228159, -0.045582883059978485, -0.02777264639735222, -0.024598974734544754, 0.02481015957891941, -0.0692133903503418, -0....
paper_id: 8
title: One Polyp Identifies All: One-Shot Polyp Segmentation with SAM via Cascaded Priors and Iterative Prompt Evolution
authors: [ "Xinyu Mao", "Xiaohan Xing", "Fei Meng", "Jianbang Liu", "Fan Bai", "Qiang Nie", "Max Meng" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Mao_One_Polyp_Identifies_All_One-Shot_Polyp_Segmentation_with_SAM_via_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Mao_One_Polyp_Identifies_All_One-Shot_Polyp_Segmentation_with_SAM_via_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Mao_One_Polyp_Identifies_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Mao_2025_ICCV, author = {Mao, Xinyu and Xing, Xiaohan and Meng, Fei and Liu, Jianbang and Bai, Fan and Nie, Qiang and Meng, Max}, title = {One Polyp Identifies All: One-Shot Polyp Segmentation with SAM via Cascaded Priors and Iterative Prompt Evolution}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {24182-24191} }
abstract: Polyp segmentation is vital for early colorectal cancer detection, yet traditional fully supervised methods struggle with morphological variability and domain shifts, requiring frequent retraining. Additionally, reliance on large-scale annotations is a major bottleneck due to the time-consuming and error-prone nature of polyp boundary labeling. Recently, vision foundation models like Segment Anything Model (SAM) have demonstrated strong generalizability and fine-grained boundary detection with sparse prompts, effectively addressing key polyp segmentation challenges. However, SAM's prompt-dependent nature limits automation in medical applications, since manually inputting prompts for each image is labor-intensive and time-consuming. We propose OP-SAM, a One-shot Polyp segmentation framework based on SAM that automatically generates prompts from a single annotated image, ensuring accurate and generalizable segmentation without additional annotation burdens. Our method introduces Correlation-based Prior Generation (CPG) for semantic label transfer and Scale-cascaded Prior Fusion (SPF) to adapt to polyp size variations as well as filter out noisy transfers. Instead of dumping all prompts at once, we devise Euclidean Prompt Evolution (EPE) for iterative prompt refinement, progressively enhancing segmentation quality. Extensive evaluations across five datasets validate OP-SAM's effectiveness. Notably, on Kvasir, it achieves 76.93% IoU, surpassing the state-of-the-art by 11.44%.
arxiv_id: 2507.16337
comment: accepted by ICCV2025
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.017479155212640762, -0.04860679432749748, 0.013979200273752213, 0.015341182239353657, 0.0326567180454731, 0.019246399402618408, 0.044737037271261215, -0.015183096751570702, -0.03546231612563133, -0.09151994436979294, -0.036008477210998535, -0.017747389152646065, -0.043996911495923996, 0...
paper_id: 9
title: Gradient Extrapolation for Debiased Representation Learning
authors: [ "Ihab Asaad", "Maha Shadaydeh", "Joachim Denzler" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Asaad_Gradient_Extrapolation_for_Debiased_Representation_Learning_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Asaad_Gradient_Extrapolation_for_Debiased_Representation_Learning_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Asaad_Gradient_Extrapolation_for_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Asaad_2025_ICCV, author = {Asaad, Ihab and Shadaydeh, Maha and Denzler, Joachim}, title = {Gradient Extrapolation for Debiased Representation Learning}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {3819-3829} }
abstract: Machine learning classification models trained with empirical risk minimization (ERM) often inadvertently rely on spurious correlations. When absent in the test data, these unintended associations between non-target attributes and target labels lead to poor generalization. This paper addresses this problem from a model optimization perspective and proposes a novel method, Gradient Extrapolation for Debiased Representation Learning (GERNE), designed to learn debiased representations in both known and unknown attribute training cases. GERNE uses two distinct batches with different amounts of spurious correlations and defines the target gradient as a linear extrapolation of the gradients computed from each batch's loss. Our analysis shows that when the extrapolated gradient points toward the batch gradient with fewer spurious correlations, it effectively guides training toward learning a debiased model. GERNE serves as a general framework for debiasing, encompassing ERM and Resampling methods as special cases. We derive the theoretical upper and lower bounds of the extrapolation factor employed by GERNE. By tuning this factor, GERNE can adapt to maximize either Group-Balanced Accuracy (GBA) or Worst-Group Accuracy (WGA). We validate GERNE on five vision benchmarks and one NLP benchmark, demonstrating competitive and often superior performance compared to state-of-the-art baselines. The project page is available at: https://gerne-debias.github.io/.
arxiv_id: 2503.13236
comment: Accepted at International Conference on Computer Vision, ICCV 2025
github: null
project_page: https://gerne-debias.github.io/
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.002792726969346404, 0.029606139287352562, -0.01197305228561163, 0.030782189220190048, 0.024950338527560234, 0.018540887162089348, 0.02460322342813015, -0.014949070289731026, -0.025812873616814613, -0.041030917316675186, -0.006143881939351559, 0.008882169611752033, -0.0672808289527893, 0...
End of preview.
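The embedding column makes simple semantic search over the papers possible. A hedged sketch, assuming each row stores one fixed-length float vector (the page does not state which model produced the embeddings or their dimensionality):

```python
# Nearest-neighbour search over the per-paper embeddings (illustrative only).
# Assumptions: a "train" split exists and all embedding lists share one length.
import numpy as np
from datasets import load_dataset

ds = load_dataset("ai-conferences/ICCV2025", split="train")

emb = np.array(ds["embedding"], dtype=np.float32)    # shape: (num_papers, dim)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)    # L2-normalise each row

query = 0                                            # paper_id 0 from the preview above
scores = emb @ emb[query]                            # cosine similarities to the query paper
nearest = np.argsort(-scores)[1:6]                   # top-5 neighbours, skipping the query itself
for i in nearest:
    print(f"{scores[i]:.3f}  {ds[int(i)]['title']}")
```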
Downloads last month: 29

Spaces using ai-conferences/ICCV2025: 1