⚠️ Warning: This model can produce narratives and RP containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Llama 3 chat template.

Meme-Trix MoE 14B A8B v1


A custom-built Llama 3.1 8B MoE (Mixture of Experts) merge that combines Morpheus v1 with Assistant Pepe. The merge is surprisingly intelligent, detailed, and based. Scores ~15K on Q0 Bench. Fully uncensored and almost as fast as a dense 8B. It also appears to have strong context retrieval: asked for a summary at 16K context, it worked flawlessly (higher contexts not tested yet).
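
A minimal loading sketch with Hugging Face transformers, assuming the merged model loads as a standard causal LM and that your hardware can hold the BF16 weights; the system prompt below is only a placeholder.

```python
# Minimal sketch (assumes a recent transformers release and enough VRAM for BF16 weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Naphula/Meme-Trix-MoE-14B-A8B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer applies the Llama 3 chat template; the system prompt is an
# illustration only -- adjust it to the content you want to allow.
messages = [
    {"role": "system", "content": "You are a creative roleplay narrator."},
    {"role": "user", "content": "Describe the abandoned station in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# min_p requires a recent transformers version; drop it if yours is older.
output = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=1.0, min_p=0.1)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```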

If you want to merge a custom Llama MoE, you can add these scripts to your mergekit environment:

Then set num_experts_per_tok in the resulting config.json (or in the mergekit config.yaml).
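
As a minimal sketch (assuming the merge exports a Mixtral-style config.json, which is where num_experts_per_tok lives; the output path is a placeholder), you could patch the value after the merge like this:

```python
# Hypothetical helper: set num_experts_per_tok in a merged model's config.json.
# Assumes a Mixtral-style config; adjust the path to your merge output directory.
import json
from pathlib import Path

config_path = Path("./Meme-Trix-MoE-14B-A8B-v1/config.json")  # placeholder path

config = json.loads(config_path.read_text())
config["num_experts_per_tok"] = 2  # how many experts are routed per token
config_path.write_text(json.dumps(config, indent=2))
```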

Recommended Settings

(Bold values are KoboldCpp non-defaults; an example API payload follows the list.)

  • Temp 1.0
  • TopNSigma 1.25
  • Min-P 0.1
  • Repetition Penalty 1.08
  • Top-P 1.0
  • Top-K 100
  • Top-A 0
  • Typical Sampling 1
  • Tail-Free Sampling 1
  • Presence Penalty 0
  • Sampler Seed -1
  • Repetition Penalty Range 360
  • Repetition Penalty Slope 0.7
  • Smoothing Factor 0
  • Smoothing Curve 1
  • DynaTemp 0
  • Mirostat Mode OFF (mode "2" enhances creativity but also introduces errors)
  • Mirostat Tau 5
  • Mirostat Eta 0.1
  • DRY Multiplier 0.8
  • DRY Base 1.75
  • DRY A.Len 2
  • DRY L.Len 320
  • XTC Threshold 0.1
  • XTC Probability 0.08 (The "Anti-Cliche" Shield)
  • DynaTemp ON (The "Poor Man's Fading Mirostat")
  • Minimum Temperature 0.65
  • Maximum Temperature 1.35
  • Temperature 1.0
  • DynaTemp-Range 0.35
  • DynaTemp-Exponent 1
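
As a rough illustration of applying the main-list settings, here is a sketch of a request to a local KoboldCpp instance. The field names follow common KoboldCpp /api/v1/generate naming as I understand it, but several are assumptions; verify them against your backend's API docs before relying on this.

```python
# Rough sketch: send the recommended samplers to a local KoboldCpp server.
# Field names are assumptions based on typical KoboldCpp API naming -- check
# your backend's documentation; the prompt below is only a placeholder.
import requests

payload = {
    "prompt": "<your Llama 3 chat-template formatted prompt here>",
    "max_length": 300,
    "temperature": 1.0,
    "min_p": 0.1,
    "rep_pen": 1.08,
    "rep_pen_range": 360,
    "rep_pen_slope": 0.7,
    "top_p": 1.0,
    "top_k": 100,
    "top_a": 0,
    "typical": 1,
    "tfs": 1,
    "mirostat": 0,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "xtc_threshold": 0.1,
    "xtc_probability": 0.08,
    "sampler_seed": -1,
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])
```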
