AgamP committed
Commit 170e854 · verified · 1 Parent(s): 34b641f

Update README.md

Files changed (1)
1. README.md +6 -149
README.md CHANGED
@@ -1,149 +1,6 @@
- # llm_recommendation_engine
- Recommendation engine for SHL's product catalogue with conversational agents.
-
- ## Quick commands (crawler + export + QA)
- - Install deps (and the Playwright browser): `python -m pip install -r requirements.txt && python -m playwright install chromium`
- - Clean DB: `rm -f data/crawler.db`
- - Crawl (bypass robots if needed): `ALLOW_ROBOTS_BYPASS=1 python -m crawler.run --mode=crawl_all --max-discover=20`
-   - Drop `--max-discover` for a full crawl.
- - Export dataset: `python -m crawler.run --mode=export --limit-export=20`
-   - Outputs: `data/catalog.parquet`, `data/catalog.jsonl` (a loading sketch follows this list)
-   - Drop `--limit-export` for a full export.
- - QA checks: `python -m crawler.qa_checks data/catalog.jsonl > data/qa_summary.json`
-   - The summary JSON is saved to `data/qa_summary.json`
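-
- A minimal sketch of loading the exported dataset for inspection; pandas (with pyarrow for Parquet) is an assumption here, not a stated project dependency:
- ```python
- import pandas as pd
-
- # The Parquet and JSONL exports carry the same records; either works.
- df = pd.read_parquet("data/catalog.parquet")
- # df = pd.read_json("data/catalog.jsonl", lines=True)
- print(df.shape)
- print(df.columns.tolist())
- ```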
-
- ## What’s implemented
- - Playwright-based crawler with catalog pagination, detail fetch, and structured storage in SQLite.
- - Field extraction: url, name, description, test_type (+full), remote/adaptive flags, duration (minutes/hours), job_levels, languages, downloads.
- - Export to Parquet/JSONL plus a QA summary script for downstream sanity checks; an example record is sketched below.
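-
- A hypothetical record shape, inferred from the field list above (exact key names and value types are assumptions about the real export):
- ```python
- record = {
-     "url": "https://www.shl.com/...",  # detail-page URL
-     "name": "Example Assessment",
-     "description": "Measures ...",
-     "test_type": ["K"],                # short codes; a companion field holds expanded labels
-     "remote_testing": True,            # remote flag
-     "adaptive_irt": False,             # adaptive flag
-     "duration_minutes": 30,
-     "job_levels": ["Entry-Level"],
-     "languages": ["English (USA)"],
-     "downloads": [],
- }
- ```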
-
- ## Evaluation harness (Phase 2)
- - Catalog loader with canonical IDs: `python -m data.catalog_loader --input data/catalog.jsonl --output data/catalog_with_ids.jsonl`
- - Train loader + label resolution report: `python -m data.train_loader --catalog data/catalog.jsonl --train <train_file> --report data/label_resolution_report.json`
- - Run eval (dummy baseline): `python -m eval.run_eval --catalog data/catalog.jsonl --train <train_file> --recommender dummy_random`
- - Run eval (BM25 baseline): `python -m eval.run_eval --catalog data/catalog.jsonl --train <train_file> --recommender bm25`
- - Each run writes a folder under `runs/<timestamp>_<recommender>/` containing `metrics.json`, `per_query_results.jsonl`, `worst_queries.csv`, and `label_resolution_report.json`
- - Compare runs: `python -m eval.compare_runs runs/<run_a> runs/<run_b>`
-
- The recommender interface lives in `recommenders/base.py`; a random baseline is in `recommenders/dummy_random.py`. Metrics (Recall@k, MRR@10) live in `eval/metrics.py`; reference definitions are sketched below.
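-
- A minimal sketch of the two metrics under their standard definitions (function names are illustrative, not necessarily the API of `eval/metrics.py`):
- ```python
- def recall_at_k(ranked_ids: list[str], relevant_ids: set[str], k: int = 10) -> float:
-     """Fraction of relevant items that appear in the top-k ranking."""
-     if not relevant_ids:
-         return 0.0
-     hits = sum(1 for rid in ranked_ids[:k] if rid in relevant_ids)
-     return hits / len(relevant_ids)
-
- def mrr_at_10(ranked_ids: list[str], relevant_ids: set[str]) -> float:
-     """Reciprocal rank of the first relevant item within the top 10, else 0."""
-     for rank, rid in enumerate(ranked_ids[:10], start=1):
-         if rid in relevant_ids:
-             return 1.0 / rank
-     return 0.0
- ```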
-
- ## Label probing & backfill (improve label coverage)
- - Probe unmatched label URLs (after a label match run): `python -m scripts.probe_unmatched_labels --labels data/label_resolution_report.json --output reports/label_url_probe.csv`; classifies each label URL (valid detail page vs. 404/blocked).
- - Backfill valid label pages into the DB: `python -m crawler.backfill_labels --probe-csv reports/label_url_probe.csv --allow-robots-bypass`; fetches & inserts DETAIL_PAGE_VALID URLs (see the filtering sketch after this list).
- - Re-export and rematch after backfill:
-   - `python -m crawler.run --mode=export`
-   - `python -m data.catalog_loader --input data/catalog.jsonl --output data/catalog_with_ids.jsonl`
-   - `python -m data.train_loader --catalog data/catalog.jsonl --train <train_file> --sheet "Train-Set" --report data/label_resolution_report.json`
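-
- A quick way to preview which probed URLs the backfill step will ingest; the column names `url` and `classification` are assumptions about the probe CSV:
- ```python
- import csv
-
- with open("reports/label_url_probe.csv", newline="") as f:
-     rows = list(csv.DictReader(f))
-
- # Backfill only ingests URLs classified as valid detail pages.
- valid = [r["url"] for r in rows if r.get("classification") == "DETAIL_PAGE_VALID"]
- print(f"{len(valid)} of {len(rows)} probed URLs are backfill candidates")
- ```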
-
- ## Vector pipeline (semantic retrieval)
- - Build doc_text: `python -m data.document_builder --input data/catalog.jsonl --output data/catalog_docs.jsonl`
- - Generate embeddings: `python -m embeddings.generator --catalog data/catalog_docs.jsonl --model sentence-transformers/all-MiniLM-L6-v2 --output-dir data/embeddings`
- - Build FAISS index: `python -m retrieval.build_index --embeddings data/embeddings/embeddings.npy --ids data/embeddings/assessment_ids.json --index-path data/faiss_index/index.faiss`
- - Vector components:
-   - Model wrapper: `models/embedding_model.py`
-   - Index wrapper: `retrieval/vector_index.py`
-   - Index builder script: `retrieval/build_index.py`
-   - Vector recommender scaffold: `recommenders/vector_recommender.py` (wire with assessment_ids + index; a wiring sketch follows this list)
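-
- A minimal wiring sketch for a single query, using `faiss` and `sentence-transformers` directly rather than the project's own wrappers (normalized embeddings are an assumption about the generator's settings):
- ```python
- import json
- import faiss
- from sentence_transformers import SentenceTransformer
-
- model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
- index = faiss.read_index("data/faiss_index/index.faiss")
- with open("data/embeddings/assessment_ids.json") as f:
-     assessment_ids = json.load(f)
-
- # Embed the query and fetch the 10 nearest assessments.
- query_vec = model.encode(["graduate software engineer screening"], normalize_embeddings=True)
- scores, idxs = index.search(query_vec, 10)
- for score, i in zip(scores[0], idxs[0]):
-     print(f"{assessment_ids[i]}\t{score:.3f}")
- ```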
-
- ## Hybrid retrieval (BM25 + vector with RRF)
- - Run hybrid eval (an RRF sketch follows this list): `python -m eval.run_eval --catalog data/catalog_docs.jsonl --train data/Gen_AI\ Dataset.xlsx --recommender hybrid_rrf --vector-index data/faiss_index/index.faiss --assessment-ids data/embeddings/assessment_ids.json --model sentence-transformers/all-MiniLM-L6-v2 --topn-candidates 200 --rrf-k 60`
- - Run hybrid + cross-encoder rerank: `python -m eval.run_eval --catalog data/catalog_docs.jsonl --train data/Gen_AI\ Dataset.xlsx --recommender hybrid_rrf_rerank --vector-index data/faiss_index/index.faiss --assessment-ids data/embeddings/assessment_ids.json --model sentence-transformers/all-MiniLM-L6-v2 --reranker-model cross-encoder/ms-marco-MiniLM-L-6-v2 --topn-candidates 200 --rrf-k 60`
- - Run hybrid + LGBM rerank: `python -m eval.run_eval --catalog data/catalog_docs.jsonl --train data/Gen_AI\ Dataset.xlsx --recommender hybrid_rrf_lgbm --vector-index data/faiss_index/index.faiss --assessment-ids data/embeddings/assessment_ids.json --model sentence-transformers/all-MiniLM-L6-v2 --topn-candidates 200 --rrf-k 60 --lgbm-model models/reranker/v0.1.0/lgbm_model.txt --lgbm-features models/reranker/v0.1.0/feature_schema.json`
- - Diagnostics (positives in top-N vs top-10): `python -m eval.diagnostic_topk --catalog data/catalog_docs.jsonl --train data/Gen_AI\ Dataset.xlsx --vector-index data/faiss_index/index.faiss --assessment-ids data/embeddings/assessment_ids.json --model sentence-transformers/all-MiniLM-L6-v2 --topn 200`
- - Run ablation (bm25/vector/hybrid across topN): `python -m scripts.run_ablation --catalog data/catalog_docs.jsonl --train data/Gen_AI\ Dataset.xlsx --vector-index data/faiss_index/index.faiss --assessment-ids data/embeddings/assessment_ids.json --model sentence-transformers/all-MiniLM-L6-v2 --topn-list 100,200,377`
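-
- Reciprocal Rank Fusion is small enough to sketch in full; this follows the standard formula, score(d) = Σ 1/(k + rank(d)), with k matching `--rrf-k 60` above (the in-repo implementation may differ in detail):
- ```python
- from collections import defaultdict
-
- def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
-     """Fuse several ranked ID lists (e.g. BM25 and vector) into a single ranking."""
-     scores: dict[str, float] = defaultdict(float)
-     for ranking in rankings:
-         for rank, doc_id in enumerate(ranking, start=1):
-             scores[doc_id] += 1.0 / (k + rank)
-     return sorted(scores, key=scores.get, reverse=True)
-
- # fused = rrf_fuse([bm25_top200, vector_top200])  # ID lists from the two retrievers
- ```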
-
- ## Current findings & next steps
- - Candidate coverage is effectively solved at top-200; ranking is the bottleneck. Use union fusion + rerank.
- - Locked decisions:
-   - Candidate pool (train): top-200
-   - Candidate pool (infer): top-100–200
-   - Base retriever: hybrid (BM25 + vector), union fusion, dual-query (raw + rewritten); see the sketch below.
- - Next: focus on reranking and constraint handling; no more embedding/model swaps.
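-
- A sketch of the dual-query union step, assuming a retriever callable that returns ranked IDs (all names here are illustrative):
- ```python
- from typing import Callable
-
- def dual_query_candidates(
-     retrieve: Callable[[str, int], list[str]],
-     raw_query: str,
-     rewritten_query: str,
-     topn: int = 200,
- ) -> list[str]:
-     """Union the candidate pools of the raw and rewritten queries, keeping first-seen order."""
-     seen: dict[str, None] = {}
-     for query in (raw_query, rewritten_query):
-         for doc_id in retrieve(query, topn):
-             seen.setdefault(doc_id, None)
-     return list(seen)
- ```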
-
- ## Core pipeline (concise commands)
-
- ### Build rich docs, embeddings, index (BGE)
- ```bash
- python -m data.document_builder \
-   --input data/catalog.jsonl \
-   --output data/catalog_docs_rich.jsonl \
-   --variant rich \
-   --version v2_struct
-
- python -m embeddings.generator \
-   --catalog data/catalog_docs_rich.jsonl \
-   --model BAAI/bge-small-en-v1.5 \
-   --batch-size 32 \
-   --output-dir data/embeddings_bge
-
- python -m retrieval.build_index \
-   --embeddings data/embeddings_bge/embeddings.npy \
-   --ids data/embeddings_bge/assessment_ids.json \
-   --index-path data/faiss_index/index_bge.faiss
- ```
-
- ### Build vocab for query rewriter (optional, recommended)
- ```bash
- python -m scripts.build_role_vocab \
-   --catalog data/catalog_docs_rich.jsonl \
-   --out data/catalog_role_vocab.json
- ```
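-
- The vocab file's schema isn't documented here, so the sketch below only illustrates the idea of vocab-driven query rewriting (a flat list of role phrases is an assumption):
- ```python
- import json
-
- def rewrite_query(query: str, vocab_path: str = "data/catalog_role_vocab.json") -> str:
-     """Append catalog role phrases found in the query, anchoring retrieval vocabulary."""
-     with open(vocab_path) as f:
-         vocab = json.load(f)  # assumed: a list of role phrases
-     matched = [p for p in vocab if p.lower() in query.lower()]
-     return f"{query} | roles: {', '.join(matched)}" if matched else query
- ```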
-
- ### Evaluate hybrid + cross-encoder rerank (with rewriting and union fusion)
- ```bash
- python -m eval.run_eval \
-   --catalog data/catalog_docs_rich.jsonl \
-   --train data/Gen_AI\ Dataset.xlsx \
-   --recommender hybrid_rrf_rerank \
-   --vector-index data/faiss_index/index_bge.faiss \
-   --assessment-ids data/embeddings_bge/assessment_ids.json \
-   --model BAAI/bge-small-en-v1.5 \
-   --reranker-model models/reranker_crossenc/v0.1.0 \
-   --topn-candidates 200 --rrf-k 60 \
-   --use-rewriter --vocab data/catalog_role_vocab.json \
-   --out-dir runs/$(date +%Y%m%d_%H%M%S)_hybrid_rrf_rerank_rewrite
- ```
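-
- For reference, cross-encoder reranking of a candidate pool looks roughly like this with `sentence-transformers` (the public `cross-encoder/ms-marco-MiniLM-L-6-v2` checkpoint stands in for the fine-tuned `models/reranker_crossenc/v0.1.0` above):
- ```python
- from sentence_transformers import CrossEncoder
-
- reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
-
- def rerank(query: str, candidates: list[tuple[str, str]], top_k: int = 10) -> list[str]:
-     """Score (query, doc_text) pairs jointly; return the top_k candidate IDs."""
-     scores = reranker.predict([(query, doc_text) for _, doc_text in candidates])
-     ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
-     return [doc_id for (doc_id, _), _ in ranked[:top_k]]
- ```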
-
- ### Candidate coverage (bm25 vs vector vs hybrid; grouped per query)
- ```bash
- python -m scripts.candidate_coverage \
-   --catalog data/catalog_docs_rich.jsonl \
-   --train data/Gen_AI\ Dataset.xlsx \
-   --vector-index data/faiss_index/index_bge.faiss \
-   --assessment-ids data/embeddings_bge/assessment_ids.json \
-   --embedding-model BAAI/bge-small-en-v1.5 \
-   --topn 200 \
-   --use-rewriter --vocab data/catalog_role_vocab.json \
-   --out runs/candidate_coverage.jsonl
-
- python -m scripts.summarize_candidate_coverage \
-   --input runs/candidate_coverage.jsonl \
-   --out runs/candidate_coverage_stats.json
- ```
-
- ### Rewrite impact (optional)
- ```bash
- python -m scripts.eval_rewrite_impact \
-   --catalog data/catalog_docs_rich.jsonl \
-   --train data/Gen_AI\ Dataset.xlsx \
-   --vector-index data/faiss_index/index_bge.faiss \
-   --assessment-ids data/embeddings_bge/assessment_ids.json \
-   --embedding-model BAAI/bge-small-en-v1.5 \
-   --topn 200 \
-   --vocab data/catalog_role_vocab.json \
-   --out runs/rewrite_impact.jsonl
- ```
-
- ## Frontend + backend (Next.js + FastAPI)
-
- Backend (FastAPI):
- - Start: `uvicorn agent.server:app --reload --port 8000`
- - Health: `GET /health`
- - Chat: `POST /chat` (returns a compact top-10, plus an optional summary when `verbose=true`)
- - Recommend: `POST /recommend` with `{"query": "..."}`; returns `{"recommended_assessments": [...]}` (top-10; see the client sketch after this list)
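-
- A minimal client call against the recommend endpoint, assuming the server is running locally as above (`requests` is used purely for illustration):
- ```python
- import requests
-
- resp = requests.post(
-     "http://localhost:8000/recommend",
-     json={"query": "entry-level sales role, 45 minutes max"},
-     timeout=30,
- )
- resp.raise_for_status()
- for item in resp.json()["recommended_assessments"]:
-     print(item)
- ```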
-
- Frontend (Next.js in `frontend/`):
- - Install deps: `cd frontend && npm install`
- - Dev: `npm run dev` (starts on port 3000; make sure the backend is running on port 8000, or set the API base in the UI)
- - Build/start: `npm run build && npm run start`
- - The UI is at `http://localhost:3000/` (the API base defaults to `http://localhost:8000` and is editable in the UI)
 
+ ---
+ title: llm recommendation backend
+ emoji: 🚀
+ sdk: docker
+ pinned: false
+ ---