{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# DBbun Crypto — Machine Learning & Anomaly Detection\n", "\n", "End-to-end notebook that trains baseline ML models and simple anomaly detection\n", "on the **DBbun Crypto Synthetic Dataset**.\n", "\n", "Inspired by the paper:\n", "> *“Beyond Static Datasets: A Behavior-Driven Entity-Specific Simulation to Overcome Data Scarcity and Train Effective Crypto Anti-Money Laundering Models.”*\n", "\n", "This notebook is designed to be memory-friendly and works on large datasets by sampling.\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Reading transactions (sampled)...\n", "Reading labels (all or aligned subset)...\n", "Reading edges (sampled)...\n", "(300000, 9) (300000, 8) (1000000, 4)\n" ] } ], "source": [ "# --- CONFIG ---\n", "DATA_DIR = 'C:/DBbun/Code/Crypto/out/' # change if your folder differs\n", "TX_ROWS = 300_000 # cap rows from transactions.csv\n", "EDGE_ROWS = 1_000_000 # cap rows from edges.csv for fan-in/out features\n", "TEST_SIZE = 0.2\n", "RANDOM_STATE = 2025\n", "\n", "import os, gc, math\n", "import pandas as pd\n", "import numpy as np\n", "from pathlib import Path\n", "import matplotlib.pyplot as plt\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.metrics import classification_report, roc_auc_score, confusion_matrix\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.ensemble import RandomForestClassifier, IsolationForest\n", "from sklearn.impute import SimpleImputer\n", "\n", "DATA_DIR = Path(DATA_DIR)\n", "TX_CSV = DATA_DIR / 'transactions.csv'\n", "TXLBL_CSV = DATA_DIR / 'labels_transactions.csv'\n", "EDGES_CSV = DATA_DIR / 'edges.csv'\n", "\n", "print('Reading transactions (sampled)...')\n", "tx = pd.read_csv(TX_CSV, nrows=TX_ROWS)\n", "print('Reading labels (all or aligned subset)...')\n", "lbl = pd.read_csv(TXLBL_CSV)\n", "if len(lbl) > TX_ROWS:\n", " # Keep only labels for the sampled transaction IDs\n", " lbl = lbl[lbl['tx_id'].isin(tx['tx_id'])]\n", "\n", "print('Reading edges (sampled)...')\n", "edges = pd.read_csv(EDGES_CSV, nrows=EDGE_ROWS, usecols=['tx_id','sender','receiver','value'])\n", "print(tx.shape, lbl.shape, edges.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature Engineering (Transaction-level)\n", "We build compact features that typically help AML classifiers:\n", "\n", "- Structural: **fan-in**, **fan-out** (unique senders/receivers per tx)\n", "- Monetary: `total_in`, `total_out`, **fee**, **avg_edge_value**\n", "- Ratios: `fee_ratio = fee / (total_in + 1)`, `out_in_ratio = total_out / (total_in + 1)`\n", "- Shape: **is_roundish** (amount rounded to bucket), `num_inputs`, `num_outputs`\n", "\n", "You can extend with time-based features (hour, weekday), pattern one-hot, etc." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Feature matrix: (300000, 11)\n" ] } ], "source": [ "# Fan-in / Fan-out and avg edge value from edges.csv subset\n", "agg_edges = edges.groupby('tx_id').agg(\n", " fan_in=('sender','nunique'),\n", " fan_out=('receiver','nunique'),\n", " total_edge_value=('value','sum'),\n", " avg_edge_value=('value','mean')\n", ").reset_index()\n", "\n", "# Join with transactions and labels\n", "df = tx.merge(agg_edges, on='tx_id', how='left')\n", "df = df.merge(lbl[['tx_id','tx_label']], on='tx_id', how='left')\n", "\n", "# Basic cleaning\n", "for c in ['fan_in','fan_out','total_edge_value','avg_edge_value']:\n", " if c in df:\n", " df[c] = df[c].fillna(0)\n", "\n", "# Derived features\n", "df['fee_ratio'] = df['fee'] / (df['total_in'] + 1)\n", "df['out_in_ratio'] = df['total_out'] / (df['total_in'] + 1)\n", "ROUND_STEP = 10_000\n", "df['is_roundish'] = (df['total_out'] // ROUND_STEP) * ROUND_STEP == df['total_out']\n", "\n", "# Encode label\n", "df = df.dropna(subset=['tx_label'])\n", "df['y'] = (df['tx_label'] == 'suspicious').astype(int)\n", "\n", "feature_cols = [\n", " 'num_inputs','num_outputs','total_in','total_out','fee',\n", " 'fan_in','fan_out','avg_edge_value','fee_ratio','out_in_ratio','is_roundish'\n", "]\n", "X = df[feature_cols].copy()\n", "y = df['y'].values\n", "print('Feature matrix:', X.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Supervised Baselines\n", "Two strong starter baselines:\n", "- **Logistic Regression** (with standardization)\n", "- **Random Forest** (nonlinear, robust to scaling)\n", "\n", "We report **ROC AUC** and a compact classification report." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "LogReg ROC AUC: 0.880\n", " precision recall f1-score support\n", "\n", " 0 0.76 0.99 0.86 32667\n", " 1 0.99 0.63 0.77 27333\n", "\n", " accuracy 0.83 60000\n", " macro avg 0.87 0.81 0.81 60000\n", "weighted avg 0.86 0.83 0.82 60000\n", "\n", "RandomForest ROC AUC: 0.883\n", " precision recall f1-score support\n", "\n", " 0 0.76 1.00 0.86 32667\n", " 1 1.00 0.63 0.77 27333\n", "\n", " accuracy 0.83 60000\n", " macro avg 0.88 0.81 0.82 60000\n", "weighted avg 0.87 0.83 0.82 60000\n", "\n", "Top Feature Importances (RF):\n", "out_in_ratio 3.419028e-01\n", "total_out 2.987599e-01\n", "num_outputs 1.329544e-01\n", "total_in 9.291522e-02\n", "fee 6.016838e-02\n", "fee_ratio 4.089986e-02\n", "num_inputs 2.044308e-02\n", "avg_edge_value 7.129035e-03\n", "fan_out 3.111387e-03\n", "fan_in 1.715945e-03\n", "is_roundish 5.993288e-15\n", "dtype: float64\n" ] } ], "source": [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE, stratify=y)\n", "\n", "# Pipeline: impute -> scale -> logistic regression\n", "imp = SimpleImputer(strategy='median')\n", "X_train_imp = imp.fit_transform(X_train)\n", "X_test_imp = imp.transform(X_test)\n", "\n", "scaler = StandardScaler(with_mean=True, with_std=True)\n", "X_train_std = scaler.fit_transform(X_train_imp)\n", "X_test_std = scaler.transform(X_test_imp)\n", "\n", "logreg = LogisticRegression(max_iter=200, n_jobs=None)\n", "logreg.fit(X_train_std, y_train)\n", "lr_proba = logreg.predict_proba(X_test_std)[:,1]\n", "lr_auc = roc_auc_score(y_test, lr_proba)\n", "print(f'LogReg ROC AUC: {lr_auc:.3f}')\n", "print(classification_report(y_test, 
(lr_proba>=0.5).astype(int)))\n", "\n", "# Random Forest (no scaling required)\n", "rf = RandomForestClassifier(n_estimators=200, max_depth=None, n_jobs=-1, random_state=RANDOM_STATE)\n", "rf.fit(X_train_imp, y_train)\n", "rf_proba = rf.predict_proba(X_test_imp)[:,1]\n", "rf_auc = roc_auc_score(y_test, rf_proba)\n", "print(f'RandomForest ROC AUC: {rf_auc:.3f}')\n", "print(classification_report(y_test, (rf_proba>=0.5).astype(int)))\n", "\n", "# Feature importances (RF)\n", "fi = pd.Series(rf.feature_importances_, index=feature_cols).sort_values(ascending=False)\n", "print('Top Feature Importances (RF):')\n", "print(fi.head(15))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Unsupervised: Isolation Forest\n", "Detect unusual transactions without labels.\n", "We compare anomaly scores against the ground-truth `tx_label` for reference." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "IsolationForest (unsupervised) ROC AUC vs labels: 0.736\n", "Anomaly detection (top 5% as anomalies):\n", " precision recall f1-score support\n", "\n", " 0 0.57 0.99 0.72 32667\n", " 1 0.92 0.10 0.18 27333\n", "\n", " accuracy 0.59 60000\n", " macro avg 0.75 0.55 0.45 60000\n", "weighted avg 0.73 0.59 0.48 60000\n", "\n" ] } ], "source": [ "iso = IsolationForest(\n", " n_estimators=300,\n", " max_samples=min(10000, len(X_train)),\n", " contamination=0.05,\n", " random_state=RANDOM_STATE,\n", " n_jobs=-1\n", ")\n", "iso.fit(X_train_imp)\n", "scores = -iso.decision_function(X_test_imp) # higher = more anomalous\n", "auc_unsup = roc_auc_score(y_test, scores)\n", "print(f'IsolationForest (unsupervised) ROC AUC vs labels: {auc_unsup:.3f}')\n", "\n", "# Threshold at top 5% most anomalous\n", "thr = np.quantile(scores, 0.95)\n", "pred_anom = (scores >= thr).astype(int)\n", "print('Anomaly detection (top 5% as anomalies):')\n", "print(classification_report(y_test, pred_anom))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.7" } }, "nbformat": 4, "nbformat_minor": 4 }