Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To help enable this, we host a research paper club on Fridays called Arxiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our Arxiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, generative AI, and data engineering to best practices for versioning your data. So, dive in and explore – we're excited to share our journey and learnings with you 🚀

How We Cut Inference Costs from $46K to $7.5K Fine-Tuning Qwen-Image-Edit

Running quality inference at scale is something we think about a lot at Oxen.ai. It’s one thing to generate a few decent results with a state-of-the-art image editing model, but it...

Eloy Martinez
10/26/2025
11 min read
How to Set Noise Timesteps When Fine-Tuning Diffusion Models for Image Generation

Fine-tuning diffusion models such as Stable Diffusion, FLUX.1-dev, or Qwen-Image can give you a lot of bang for your buck. Base models may not be trained on a certain concept or st...

Greg Schoeninger
10/10/2025
- Practical ML
6 min read
Fine-Tuned Qwen-Image-Edit vs Nano-Banana and FLUX Kontext Dev

Welcome back to Fine-Tuning Friday, where each week we put some models to the test and see if fine-tuning an open-source model can outperform whatever state-of-the-art (SOTA...

Greg Schoeninger
9/27/2025
12 min read
We Fine-Tuned GPT OSS 20B to Rap Like Eminem

OpenAI came out with GPT-OSS 120B and 20B in August 2025, the first “Open” LLMs from OpenAI since GPT-2 over six years ago. The idea of fine-tuning a frontier OpenAI model was exc...

Greg Schoeninger
9/3/2025
14 min read
How We're Building a “Tab Tab” Code Completion Model

Welcome to Fine-Tuning Fridays, where we share our learnings from fine-tuning open-source models for real-world tasks. We’ll walk you through what models work, what models don’t an...

Greg Schoeninger
7/29/2025
8 min read
How to Fine-Tune a FLUX.1-dev LoRA with Code, Step by Step

FLUX.1-dev is one of the most popular open-weight models available today. Developed by Black Forest Labs, it has 12 billion parameters. The goal of this post is to provide a barebo...

Greg Schoeninger
6/28/2025
- Fine-Tune Fridays
20 min read
How to Fine-Tune PixArt to Generate a Consistent Character

Can we fine-tune a small diffusion transformer (DiT) to generate OpenAI-level images by distilling off of OpenAI images? The end goal is to have a small, fast, cheap model that we ...

Greg Schoeninger
6/19/2025
- Fine-Tune Fridays
21 min read
How to Fine-Tune Qwen3 on Text2SQL to GPT-4o level performance

Welcome to a new series from the Oxen.ai Herd called Fine-Tuning Fridays! Each week we will take an open source model and put it head to head against a closed source foundation mod...

Greg Schoeninger
5/28/2025
- Fine-Tune Fridays
15 min read
Fine-Tuning Fridays

Welcome to a new series from the Oxen.ai Herd called Fine-Tuning Fridays! Each week we will take an open source model and put it head to head against a closed source foundation mod...

Greg Schoeninger
5/16/2025
- Fine-Tune Fridays
4 min read
How RWKV-7 Goose Works 🪿 + Notes from the Author

In this special Arxiv Dive, we're joined by Eugene Cheah (author, lead of the RWKV org, and CEO of Featherless AI) to discuss the development process and key decisions behind these models...

Greg Schoeninger
4/15/2025
- Arxiv Dives
17 min read