
Training LyCORIS for Anima

#92
by njs45 - opened

As of right now, has anyone successfully trained a LyCORIS (LoKr/LoHa) for Anima? I tried doing so using Kohya's sd-scripts, but its LyCORIS training scripts don't appear to be compatible with the Anima architecture itself.

As seen below, it basically detects 0 modules for the UNet.

import network module: lycoris.kohya
2026-03-18 13:39:47|[LyCORIS]-WARNING: use_cp is deprecated. Please use use_tucker instead.
2026-03-18 13:39:47|[LyCORIS]-INFO: Weight decomposition is enabled
2026-03-18 13:39:47|[LyCORIS]-INFO: Using rank adaptation algo: lokr
2026-03-18 13:39:47|[LyCORIS]-INFO: Disable conv layer
2026-03-18 13:39:47|[LyCORIS]-INFO: Use Dropout value: 0.0
2026-03-18 13:39:47|[LyCORIS]-INFO: Create LyCORIS Module
2026-03-18 13:39:47|[LyCORIS]-INFO: create LyCORIS for Text Encoder: 0 modules.
2026-03-18 13:39:47|[LyCORIS]-INFO: Create LyCORIS Module
2026-03-18 13:39:47|[LyCORIS]-INFO: create LyCORIS for U-Net: 0 modules.
2026-03-18 13:39:47|[LyCORIS]-INFO: module type table: {}
2026-03-18 13:39:47|[LyCORIS]-INFO: enable LyCORIS for U-Net
prepare optimizer, data loader etc.

However, I see some models on Civitai with the LyCORIS tag, so I am curious how people have actually accomplished this. Perhaps it was a configuration error on my end, or there is some other script for doing this.
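For reference, the "0 modules" symptom usually means the trainer's target-module filter matched nothing in the network: it walks the model and only wraps layers whose class name is on its target list, so a UNet-oriented list finds nothing in a DiT. A minimal sketch of that matching logic (the class names below are illustrative, not the actual sd-scripts/LyCORIS lists):

```python
# Sketch of how a LoRA/LyCORIS trainer typically picks layers to wrap:
# it keeps modules whose class name appears in a target set. If the set
# only contains UNet block names, a DiT-based model yields zero matches.
UNET_TARGETS = {"Attention", "Transformer2DModel", "ResnetBlock2D"}  # typical SD-style names
DIT_MODULES = ["PatchEmbed", "TimestepEmbedding", "MMDiTBlock", "FinalLayer"]  # hypothetical DiT names

def select_targets(module_class_names, targets):
    """Return the module class names the trainer would wrap with adapters."""
    return [name for name in module_class_names if name in targets]

matched = select_targets(DIT_MODULES, UNET_TARGETS)
print(len(matched))  # 0 -> corresponds to "create LyCORIS for U-Net: 0 modules."
```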

This may sound out of nowhere, but I was training with DoRA, another LoRA variant, and I needed to revise sd-scripts and even raised an issue on ComfyUI due to compatibility.

You could also DM the people who trained a LyCORIS for Anima.

No wonder: Anima doesn't use a UNet, it uses a DiT. Did you download those LyCORIS files and extract the metadata from them? They should have their training settings embedded.
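If you have one of those files locally, the embedded settings can be read straight out of the safetensors header with only the stdlib (8-byte little-endian length prefix, then a JSON header whose `__metadata__` block holds the trainer's key/value settings). A quick sketch; the metadata key names shown are the sd-scripts convention and the path is a placeholder:

```python
import json
import struct

def read_training_metadata(path):
    """Read the '__metadata__' block from a .safetensors file header.
    Format: 8-byte little-endian header length, then a JSON header."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

# meta = read_training_metadata("some_anima_lokr.safetensors")  # placeholder path
# print(meta.get("ss_network_module"), meta.get("ss_network_args"))
```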

I use this fork of Lora Easy Training Scripts, and it trains LyCORIS just fine for Anima. For me it shows this when doing LoKr:

import network module: lycoris.kohya
2026-03-21 15:06:56|[LyCORIS]-INFO: Full matrix mode for LoKr is enabled
2026-03-21 15:06:56|[LyCORIS]-INFO: Anima model detected: added ['PatchEmbed', 'TimestepEmbedding', 'FinalLayer'] to target modules
2026-03-21 15:06:56|[LyCORIS]-INFO: Anima model detected: added ['.*(_modulation|_norm|_embedder|final_layer).*'] to target exclude names
2026-03-21 15:06:56|[LyCORIS]-INFO: Using rank adaptation algo: lokr
2026-03-21 15:06:56|[LyCORIS]-INFO: Apply different lora dim for conv layer
2026-03-21 15:06:56|[LyCORIS]-INFO: Conv Dim: 1, Linear Dim: 100000
2026-03-21 15:06:56|[LyCORIS]-INFO: Use Dropout value: 0.0
2026-03-21 15:06:56|[LyCORIS]-INFO: wd_on_output=True
2026-03-21 15:06:56|[LyCORIS]-INFO: Create LyCORIS Module
2026-03-21 15:06:56|[LyCORIS]-INFO: create LyCORIS for Text Encoder: 196 modules.
2026-03-21 15:06:56|[LyCORIS]-INFO: Create LyCORIS Module
2026-03-21 15:06:56|[LyCORIS]-INFO: create LyCORIS for U-Net: 280 modules.
2026-03-21 15:06:56|[LyCORIS]-INFO: module type table: {'LokrModule': 476}
2026-03-21 15:06:56|[LyCORIS]-INFO: enable LyCORIS for U-Net
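For context on that log: `Linear Dim: 100000` is how LyCORIS signals full-matrix LoKr mode — the dimension is set so high that the update is just a Kronecker product of two factors rather than factor-times-low-rank. A rough numpy illustration of why that's still cheap (layer sizes and factor split are illustrative, not Anima's real shapes):

```python
import numpy as np

out_dim, in_dim = 1024, 1024  # illustrative linear layer, not Anima's actual sizes
f_out, f_in = 16, 16          # Kronecker factor split: 1024 = 16 * 64

# LoKr in full-matrix mode: delta_W = kron(A, B), no low-rank decomposition of B
A = np.random.randn(f_out, f_in)                        # small factor
B = np.random.randn(out_dim // f_out, in_dim // f_in)   # large factor, kept full
delta_W = np.kron(A, B)

print(delta_W.shape)    # (1024, 1024) -- covers the whole weight
print(A.size + B.size)  # 4352 trainable params vs 1048576 for a dense delta
```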

Yes, I got it working with Lora Easy Training Scripts. Thanks for the recommendation!

@njs45 @Comuse123 do you mind me asking about the usual VRAM/RAM consumption when training this model? And the training speed? I'm fairly sure LoRAs can easily be trained on a 15GB-VRAM GPU, but I have concerns about whether it's fast or not (I use a T4, which is pretty weak and old nowadays and can't do mixed or full bf16).

I have a 4080, and doing a normal LoRA at batch size 4 on CAME with 75 images (mixed bf16; fp16 just results in NaN) uses around 14-14.5GB, at around 4.2s/it. Overall I'd say it's a little faster than SDXL, and Anima also learns faster, so you shouldn't need to train as long. The trainer linked also has settings like Unsloth Offload and VAE chunking, which can reduce VRAM by around 2GB (and increase RAM usage by 2GB) at a 0.5-1s/it speed penalty.

@sorryhyun
May I ask what tools or options you used to learn Dora?

@nagarago I used a custom sd-scripts fork, implementing a DoRAModule in networks/lora_flux.py. Here is my fork: https://github.com/sorryhyun/sd-scripts
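For anyone curious what such a module computes: DoRA decomposes the merged weight into a learned per-row magnitude and a normalized direction. A minimal numpy sketch of the forward weight (simplified from the DoRA paper, not the fork's actual code — real implementations work on torch `nn.Linear` weights):

```python
import numpy as np

def dora_weight(W, A, B, m, eps=1e-8):
    """DoRA-style reparameterization: merge the low-rank update B @ A into W,
    normalize each output row (the direction), then rescale by the learned
    per-row magnitude m. Simplified for illustration."""
    merged = W + B @ A                                    # base weight + LoRA update
    norm = np.linalg.norm(merged, axis=1, keepdims=True)  # per-output-row norm
    return m[:, None] * merged / (norm + eps)

out_dim, in_dim, rank = 8, 4, 2
W = np.random.randn(out_dim, in_dim)
A = np.zeros((rank, in_dim))       # standard LoRA init: one factor starts at zero
B = np.random.randn(out_dim, rank)
m = np.linalg.norm(W, axis=1)      # magnitude initialized from W's row norms

W_prime = dora_weight(W, A, B, m)
# With A = 0 the update vanishes, so W_prime recovers W (up to eps)
```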

I have been training LoRAs on my RTX 4060 8GB, so you certainly don't need that much VRAM. The limitation is that the max dimension I seem to be able to train is 64, and the text encoder has to be frozen and cached, or else the training time becomes 50+ hours (or my laptop shuts down from overheating). I turn on all the VRAM-saving settings, like gradient checkpointing and gradient accumulation.

VRAM usage is 98-99% based on nvidia-smi, and RAM usage is about the same as what @Comuse123 mentioned above. My previous LoKr attempt (rank 64) took around 5 hours for 30 epochs, with 11-12 s/it. I use bf16 as well.
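Those numbers roughly check out with a back-of-the-envelope step count (the dataset size below is my assumption, since the post doesn't state it):

```python
import math

images, batch, epochs = 200, 4, 30  # dataset size is a guess; batch/epochs for illustration
sec_per_it = 11.5                   # midpoint of the reported 11-12 s/it

steps = math.ceil(images / batch) * epochs
hours = steps * sec_per_it / 3600
print(steps, round(hours, 1))  # 1500 steps, ~4.8 hours -> consistent with "around 5 hours"
```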

The fork I linked has the option to use DoRA as long as you select any LyCORIS option. It works well on SDXL, but I personally don't use it on Anima; it has given me pretty bad results for some reason. It doesn't seem very necessary for stuff like styles anyway (especially with the speed penalty), since Anima learns styles very well compared to SDXL.
