How to use harpertoken/harpertokenConvAI with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("question-answering", model="harpertoken/harpertokenConvAI")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("harpertoken/harpertokenConvAI")
model = AutoModelForQuestionAnswering.from_pretrained("harpertoken/harpertokenConvAI")

A context-aware conversational AI model based on DistilBERT for natural language understanding and generation.
Features:
- Advanced Response Generation
- Flexible Architecture
- Robust Processing
Install the required packages:

pip install transformers torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
# Load model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained('harpertoken/harpertokenConvAI')
tokenizer = AutoTokenizer.from_pretrained('harpertoken/harpertokenConvAI')
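Once loaded, an extractive question-answering model like this one returns a start logit and an end logit for each context token; the predicted answer is the token span between the two argmaxes. A minimal sketch of that decoding step, using dummy tokens and logit values in place of the real model outputs (which in practice come from `model(**inputs)`):

```python
# Dummy per-token logits and tokens (placeholders for real model outputs).
start_logits = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0]
end_logits = [0.0, 0.1, 0.2, 4.0, 0.2, 0.1]
tokens = ["the", "model", "answers", "questions", "about", "context"]

# Pick the most likely start and end positions, then join the span.
start = max(range(len(start_logits)), key=start_logits.__getitem__)
end = max(range(len(end_logits)), key=end_logits.__getitem__)
answer = " ".join(tokens[start:end + 1])
print(answer)  # answers questions
```

The `pipeline("question-answering", ...)` helper performs this span decoding (plus tokenization and offset mapping back to the original text) automatically.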
@misc{harpertoken-convai,
  title={Harpertoken ConvAI},
  author={Niladri Das},
  year={2025},
  url={https://huggingface.co/harpertoken/harpertokenConvAI}
}