# LoRA-fine-tuned climatebert/distilroberta-base-climate-f on climatebert/netzero_reduction_data
This model is a LoRA (Low-Rank Adaptation) fine-tuned version of
`climatebert/distilroberta-base-climate-f` on the dataset `climatebert/netzero_reduction_data`.
It is designed for classifying climate-related text, in particular detecting net-zero and emission-reduction commitments.
## ℹ️ Evaluation (validation set)
| metric | value |
|---|---|
| eval_loss | 0.1172 |
| eval_accuracy | 0.9535 |
| eval_f1 | 0.9535 |
Metrics are computed using the Hugging Face `Trainer` on the validation split.
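For context, the following is a minimal sketch of how such metrics are typically produced with the `Trainer`. The exact training script is not included in this repository, so `model`, `val_dataset`, and the F1 averaging strategy are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from transformers import Trainer, TrainingArguments

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # averaging choice is an assumption
    }

# `model` and `val_dataset` (the validation split of
# climatebert/netzero_reduction_data) are assumed to be prepared elsewhere.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_eval_batch_size=8),
    eval_dataset=val_dataset,
    compute_metrics=compute_metrics,
)
print(trainer.evaluate())  # reports eval_loss, eval_accuracy, eval_f1
```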
## ℹ️ Training configuration
- Epochs: 1
- Batch size: 8
- Learning rate: 0.0001
- Max sequence length: 256
- LoRA r (rank): 8
- LoRA alpha: 16
- LoRA dropout: 0.05
- Seed: 42
LoRA is applied on top of the base model using the PEFT library.
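A minimal sketch of what this setup might look like with PEFT, using the hyperparameters listed above. `target_modules` and `num_labels` are assumptions (the usual attention projections for RoBERTa-style models, and a label count matching the dataset):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# num_labels is an assumption; set it to match the label set of
# climatebert/netzero_reduction_data.
base = AutoModelForSequenceClassification.from_pretrained(
    "climatebert/distilroberta-base-climate-f", num_labels=3
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                    # LoRA rank, as listed above
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # assumption: attention projections
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights train
```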
## ℹ️ Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "cheekeong2025/climatebert-distilroberta-base-climate-f-lora-0da70e39"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a single example and run inference without tracking gradients.
inputs = tokenizer("This company announced an ambitious net-zero plan.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label keys are integers in a loaded config, so index with the int directly.
pred = logits.argmax(-1).item()
label = model.config.id2label[pred]
print(label)
```
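If the repository hosts only the LoRA adapter rather than merged weights (a detail this card does not state), the PEFT auto class can resolve the base model and attach the adapter in one call:

```python
from peft import AutoPeftModelForSequenceClassification

# Resolves the base model from the adapter config and attaches the LoRA weights.
model = AutoPeftModelForSequenceClassification.from_pretrained(model_id)
```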
## Model tree

- Base model: `climatebert/distilroberta-base-climate-f`