Train a Custom Style (LoRA)

Train a custom style using your own images to generate AI art with consistent visual aesthetics.

Overview

LoRA (Low-Rank Adaptation) is a fine-tuning technique that teaches an AI model a new visual style using a small set of example images. It works by injecting new “style weights” into an existing model, making training both fast and efficient. This guide walks through the complete workflow:

  1. Prepare Training Images: curate and upload high-quality images that represent your desired style.
  2. Submit Training Job: send a POST request to /styles/train with your image URLs and configuration.
  3. Monitor Progress: track your training job status using the returned job_id.
  4. Generate with Your Style: apply your trained style to image generation.

Prepare Training Images

Curating Your Dataset

The quality of your training images directly impacts results. Different training types have different requirements:
| Type | Use Case | Tips |
| --- | --- | --- |
| Style | Artistic styles, visual aesthetics | Consistent style across varied subjects |
| Character | Personal likeness, consistent characters | Varied poses, expressions, lighting |
| Object | Specific items, products | Multiple angles, consistent object |

How Many Images?

Quality matters far more than quantity. A small set of excellent images will outperform a large set of mediocre ones.
| Dataset Size | Guidance |
| --- | --- |
| 5 images | Minimum viable. Can work for simple, consistent styles |
| 10-30 images | Recommended. Best balance of quality and coverage |
| 50+ images | Diminishing returns unless the style has high variation |

Quality > Quantity: 15 high-quality images will produce better results than 50 low-quality ones. Prioritize:
  • High resolution (1024x1024 minimum)
  • Consistent style across all images
  • No watermarks, text overlays, or compression artifacts
  • Varied subjects while maintaining style coherence
Example Datasets
  • Character training: Photos of a person with varied poses, expressions, and lighting conditions. Avoid including other people in the images.
  • Style training: A collection of artwork in a consistent style. For example, The Metropolitan Museum of Art Ukiyo-E Dataset provides Japanese woodblock prints ideal for training an artistic style.

Upload Images

Before training, upload your images to get hosted URLs. Use the /assets endpoint:
import requests
import os
from dotenv import load_dotenv
import mimetypes

load_dotenv()

API_BASE = "https://api.krea.ai"
API_TOKEN = os.getenv("API_TOKEN")

image_dir = "training_images"
uploaded_urls = []

for filename in os.listdir(image_dir):
    if filename.lower().endswith(('.jpg', '.jpeg', '.png', '.webp')):
        filepath = os.path.join(image_dir, filename)
        mime_type, _ = mimetypes.guess_type(filepath)

        with open(filepath, 'rb') as f:
            response = requests.post(
                f"{API_BASE}/assets",
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                files={"file": (filename, f, mime_type)},
                data={"description": f"Training image: {filename}"}
            )

        if response.ok:
            data = response.json()
            uploaded_urls.append(data["image_url"])
            print(f"Uploaded: {filename}")
        else:
            print(f"Failed: {filename}")

print(f"\nUploaded {len(uploaded_urls)} images")
Save the returned image_url values—you’ll pass these to the training endpoint.
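If you upload in one script and train in another, it can help to persist the URLs between steps. A minimal sketch, continuing from the upload loop above (the urls.json filename is just an illustration):

import json

# Persist the hosted URLs so a separate training script can load them
with open("urls.json", "w") as f:
    json.dump(uploaded_urls, f, indent=2)

# Later, in the training script:
with open("urls.json") as f:
    urls = json.load(f)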

Train Your Style

Basic Training Example

Submit your image URLs to start training:
import requests
import os
from dotenv import load_dotenv

load_dotenv()

API_BASE = "https://api.krea.ai"
API_TOKEN = os.getenv("API_TOKEN")

# Training URLs from the upload step
urls = [
    "https://krea.ai/assets/img1.jpg",
    "https://krea.ai/assets/img2.jpg",
    "https://krea.ai/assets/img3.jpg",
    # ... more images
]

response = requests.post(
    f"{API_BASE}/styles/train",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json"
    },
    json={
        "name": "Ukiyo-E Style",
        "urls": urls,
        "model": "flux_dev",
        "type": "Style",
        "max_train_steps": 500
    }
)

response.raise_for_status()
job = response.json()
print(f"Training started! Job ID: {job['job_id']}")

Training Types

The type parameter sets intelligent defaults optimized for your use case:
| Type | Best For |
| --- | --- |
| Style | Artistic styles, visual aesthetics |
| Character | Personal likeness, consistent characters |
| Object | Specific items, products |
| Default | Generic training |

Parameters

Required Parameters

name
string
required
A descriptive name for your custom style. Example: "Ukiyo-E Style", "Product Photos"
urls
array
required
Array of hosted image URLs to train on. See the dataset guidance above for how many to include.

Optional Parameters

model
string
default:"flux_dev"
Base model for training.
Image models:
  • flux_dev - High quality, versatile
  • flux_schnell - BFL’s realtime model
  • qwen - Alibaba’s model
  • wan22 - Image generation only
Video models:
  • wan - Alibaba’s video model
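Switching base models only changes the model field; the rest of the payload is the same. For example, the training request from the Basic Training Example could target qwen instead of the default (payload fragment only, same endpoint):

json={
    "name": "Ukiyo-E Style",
    "urls": urls,
    "model": "qwen",  # any model listed above; defaults to flux_dev
    "type": "Style"
}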
type
string
default:"Default"
Training category: Style, Object, Character, or Default
trigger_word
string
Custom word to activate this style in prompts. When not specified, uses the style name.
Choose unique trigger words that won’t appear in typical prompts. Use underscores for multi-word triggers: ukiyo_style
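For example, the training request shown earlier could set an explicit trigger word (the value here is illustrative):

json={
    "name": "Ukiyo-E Style",
    "urls": urls,
    "model": "flux_dev",
    "type": "Style",
    "trigger_word": "ukiyo_style"  # unique, underscore-separated
}

As noted in Best Practices below, the trigger word is injected into the prompt automatically whenever the style is applied, so you don't need to repeat it in your prompts.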
learning_rate
number
Controls training intensity. Higher values train faster but may overfit. Recommended range: 0.0001 - 0.001
max_train_steps
integer
Maximum training iterations. Range: 1-2000
batch_size
integer
Images processed simultaneously. Larger batches = faster training but more memory.

Tuning Advanced Parameters

Start with defaults set by the type field—they work well for most cases. Only adjust these if you’re seeing specific issues:
learning_rate
Controls how aggressively the model adapts to your training images.

| Value | When to Use |
| --- | --- |
| 0.0001 (lower) | Overfitting issues, complex styles, small datasets |
| 0.0003 (default) | Most use cases |
| 0.0005-0.001 (higher) | Faster training |
Signs you need to adjust:
  • Outputs look identical to training images → lower the rate
  • Style influence is weak after training → raise the rate slightly
max_train_steps
How long the model trains on your images.

| Dataset Size | Recommended Steps |
| --- | --- |
| 5-10 images | 300-500 steps |
| 15-30 images | 500-800 steps |
| 50+ images | 800-1500 steps |
Signs you need to adjust:
  • Outputs are too rigid, ignoring prompts → reduce steps
  • Style influence is weak → increase steps
  • Generated images look exactly like training data → reduce steps (overfitting)
Iterative Tuning: If your first training attempt doesn't produce the results you want:
  1. Start by adjusting max_train_steps (most common fix)
  2. Then try learning_rate if steps alone don't help (see the sketch below)
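For instance, if outputs mirror the training images (overfitting), a second attempt might reduce the step count. A sketch reusing the request from the Basic Training Example (the specific values are illustrative, not recommendations):

# Retrain with fewer steps after observing overfitting
response = requests.post(
    f"{API_BASE}/styles/train",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json"
    },
    json={
        "name": "Ukiyo-E Style v2",
        "urls": urls,
        "model": "flux_dev",
        "type": "Style",
        "max_train_steps": 350     # reduced from 500 to curb overfitting
        # "learning_rate": 0.0001, # lower this next if steps alone don't help
    }
)
response.raise_for_status()
print(f"Retraining started! Job ID: {response.json()['job_id']}")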

Response Format

{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "queued",
  "created_at": "2024-01-15T10:30:00Z"
}

Monitor Training Progress

Training typically takes 5-15 minutes. Poll the Jobs API to check status:
import requests
import time
import os
from dotenv import load_dotenv

load_dotenv()

API_BASE = "https://api.krea.ai"
API_TOKEN = os.getenv("API_TOKEN")
job_id = "your-job-id"

while True:
    response = requests.get(
        f"{API_BASE}/jobs/{job_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    job = response.json()
    status = job["status"]

    print(f"Status: {status}")

    if status == "completed":
        style_id = job["result"]["style_id"]
        print(f"Training complete! Style ID: {style_id}")
        break
    elif status in ["failed", "cancelled"]:
        print(f"Training {status}")
        break

    time.sleep(30)
Training jobs progress through these states:
  1. queued - Waiting in queue
  2. processing - Active training
  3. completed - Training finished successfully
  4. failed - Training encountered an error
  5. cancelled - Job manually cancelled
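The polling loop above can be wrapped in a reusable helper that covers every terminal state and adds a timeout. A minimal sketch, assuming the API_BASE and API_TOKEN setup from the earlier examples (the timeout and interval values are arbitrary):

def wait_for_job(job_id, timeout=1800, interval=30):
    """Poll the Jobs API until the job reaches a terminal state."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        response = requests.get(
            f"{API_BASE}/jobs/{job_id}",
            headers={"Authorization": f"Bearer {API_TOKEN}"}
        )
        job = response.json()
        if job["status"] in ("completed", "failed", "cancelled"):
            return job
        time.sleep(interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")

job = wait_for_job(job_id)
if job["status"] == "completed":
    style_id = job["result"]["style_id"]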

Use Your Trained Style

Once training completes, apply your style to image generation using the styles parameter:
import requests
import time
import os
from dotenv import load_dotenv

load_dotenv()

API_BASE = "https://api.krea.ai"
API_TOKEN = os.getenv("API_TOKEN")
STYLE_ID = "w29t6pvy0"

response = requests.post(
    f"{API_BASE}/generate/image/bfl/flux-1-dev",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json"
    },
    json={
        "prompt": "An abstract, colorful, surreal composition of symmetry and balance. Swirling colors, imagery of trees and coalescing patterns converge. The lantern of light and death. It is as if the world was at once dark, and yet again lit.",
        "width": 1024,
        "height": 1024,
        "steps": 20,
        "styles": [
            {
                "id": STYLE_ID,
                "strength": 0.95
            }
        ]
    }
)

job = response.json()
job_id = job["job_id"]
print(f"Generation started! Job ID: {job_id}")

# Poll for completion
while True:
    check = requests.get(
        f"{API_BASE}/jobs/{job_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    status_data = check.json()

    if status_data["status"] == "completed":
        image_url = status_data["result"]["urls"][0]
        print(f"Image ready: {image_url}")
        break
    elif status_data["status"] == "failed":
        print("Generation failed")
        break

    time.sleep(2)

Style Strength

The strength parameter (0.0-1.0) controls how strongly your style is applied:
| Strength | Effect |
| --- | --- |
| 0.5-0.7 | Subtle influence, maintains prompt flexibility |
| 0.8-0.9 | Strong style application, recommended starting point |
| 0.95-1.0 | Maximum style adherence, may reduce prompt responsiveness |
Start with 0.8 strength and adjust based on results. Lower values give more creative freedom; higher values enforce stricter style adherence.
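One practical way to find the right value is to queue the same prompt at a few strengths and compare the results. A sketch reusing the generation request above (the prompt and strength values are illustrative):

# Queue the same prompt at several strengths for side-by-side comparison
for strength in (0.6, 0.8, 0.95):
    response = requests.post(
        f"{API_BASE}/generate/image/bfl/flux-1-dev",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json"
        },
        json={
            "prompt": "A quiet harbor at dusk",
            "width": 1024,
            "height": 1024,
            "steps": 20,
            "styles": [{"id": STYLE_ID, "strength": strength}]
        }
    )
    print(f"strength={strength}: job {response.json()['job_id']}")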

Combining Multiple Styles

Apply multiple styles by adding them to the styles array:
"styles": [
    {"id": "style-id-1", "strength": 0.6},
    {"id": "style-id-2", "strength": 0.4}
]
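In a complete request this slots into the same generation payload used earlier (the style IDs below are placeholders):

response = requests.post(
    f"{API_BASE}/generate/image/bfl/flux-1-dev",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json"
    },
    json={
        "prompt": "A mountain village in winter",
        "width": 1024,
        "height": 1024,
        "styles": [
            {"id": "style-id-1", "strength": 0.6},  # placeholder style IDs
            {"id": "style-id-2", "strength": 0.4}
        ]
    }
)
print(f"Job ID: {response.json()['job_id']}")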

Best Practices

  • Use as many high-quality images as you have for optimal results
  • Ensure consistent style across all training images
  • Include variety in subjects while maintaining style coherence
  • Avoid watermarks, text overlays, or artifacts
  • Use images at least 1024x1024 resolution
  • Start with default parameters using the type field
  • For styles: 500-1000 steps is usually sufficient
  • Lower learning rates (0.0001-0.0003) prevent overfitting
  • Increase steps if style isn’t strong enough
  • Decrease steps if output is too rigid
  • Use the same trigger word if you plan on combining multiple styles
  • Trigger words are automatically injected into the prompt if you include the style
  • Avoid common words that appear in typical prompts
  • Use underscores for multi-word triggers: my_custom_style