DNABERT-2 foundation model at GPT-2 large scale (774M parameters), pre-trained with a masked language modeling objective on human genome DNA sequences from the GRCh38 reference assembly.
Pre-training corpus: GRCh38 human genome reference assembly (https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.26/)
Model Type: dnabert2
Data Type: DNA/Genome
Training Date: 2026-04-24
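Usage example (masked-token prediction):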
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

model = AutoModelForMaskedLM.from_pretrained("kojima-lab/molcrawl-genome-sequence-dnabert2-large")
tokenizer = AutoTokenizer.from_pretrained("kojima-lab/molcrawl-genome-sequence-dnabert2-large")

# Predict a masked DNA token.
# Use tokenizer.mask_token instead of a hardcoded "[MASK]":
# BERT-style tokenizers vary ("[MASK]", "<mask>", etc.).
if tokenizer.mask_token is None:
    raise ValueError("This tokenizer has no mask_token; masked-LM inference is not supported.")

prompt = "ATCGATCG{MASK}ATCGATCG".replace("{MASK}", tokenizer.mask_token)
inputs = tokenizer(prompt, return_tensors="pt")

# Locate the masked position(s) in the tokenized input.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# Take the highest-scoring vocabulary token at each masked position.
predicted_token_id = logits[0, mask_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

result = prompt.replace(tokenizer.mask_token, predicted_token)
print(f"Predicted: {result}")
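Beyond masked-token prediction, the model's final hidden states can serve as sequence embeddings for downstream genomics tasks. The sketch below is illustrative rather than part of the official MolCrawl pipeline: it assumes the checkpoint loads through the standard Transformers API (some DNABERT-2-style repositories additionally require trust_remote_code=True) and mean-pools the token states using the attention mask.

from transformers import AutoModel, AutoTokenizer
import torch

name = "kojima-lab/molcrawl-genome-sequence-dnabert2-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)  # add trust_remote_code=True if the repo requires it

sequences = ["ATCGATCGATCGATCG", "GGCCTTAAGGCCTTAA"]
inputs = tokenizer(sequences, return_tensors="pt", padding=True)

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_dim)

# Mean-pool over real tokens only, excluding padding via the attention mask.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_dim)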
Training pipeline, configuration files, and data preparation scripts are available in the MolCrawl GitHub repository: https://github.com/mmai-framework-lab/MolCrawl
This model is released under the Apache-2.0 license.
If you use this model, please cite:
@misc{molcrawl_genome_sequence_dnabert2_large,
  title={molcrawl-genome-sequence-dnabert2-large},
  author={{RIKEN}},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/kojima-lab/molcrawl-genome-sequence-dnabert2-large}
}