# ACE_plus

**Repository Path**: analyzesystem/ACE_plus

## Basic Information

- **Project Name**: ACE_plus
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-03-07
- **Last Updated**: 2025-03-15

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README
Chaojie Mao · Jingfeng Zhang · Yulin Pan · Zeyinzi Jiang · Zhen Han · Yu Liu · Jingren Zhou

Tongyi Lab, Alibaba Group
## ComfyUI Workflows

| Workflow | Description | Setting |
| --- | --- | --- |
| ACE_Plus_FFT_workflow_no_preprocess.json | Uses already-preprocessed images, such as depth or contour maps, as input; also covers super-resolution. | Task_type: no_preprocess (dependencies such as scepter are not required) |
| ACE_Plus_FFT_workflow_controlpreprocess.json | Controllable image-to-image translation. To extract depth and contour information from images, we rely on external models that are typically downloaded from the ModelScope Hub. Because download success can vary with the user's environment, there are two alternatives: either use existing community nodes (a depth extraction node or a contour extraction node) and choose the 'no_preprocess' option, or pre-download the required contour and depth models and adjust the configuration file 'workflow/ComfyUI-ACE_Plus/config/ace_plus_fft_processor.yaml' to specify the models' local paths (see the sketch after this table). | Task_type: contour_repainting/depth_repainting/recolorizing (dependencies such as scepter are required) |
| ACE_Plus_FFT_workflow_reference_generation.json | Reference image generation for a portrait or subject. | Task_type: repainting (dependencies such as scepter are not required) |
| ACE_Plus_FFT_workflow_referenceediting_generation.json | Reference image editing. | Task_type: repainting (dependencies such as scepter are not required) |
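For the pre-download route, the edit to 'workflow/ComfyUI-ACE_Plus/config/ace_plus_fft_processor.yaml' might look like the sketch below. The key names here are hypothetical illustrations, not taken from the actual file; match them to the keys in your copy.

```yaml
# Hypothetical structure -- adapt the key names to the real config file.
depth_model_path: /local/models/depth        # pre-downloaded depth model
contour_model_path: /local/models/contour    # pre-downloaded contour model
```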
*(The example input/output images from the original README are omitted here; only the instructions and task types are retained.)*

| Instruction | Function |
| --- | --- |
| "Maintain the facial features, A girl is wearing a neat police uniform and sporting a badge. She is smiling with a friendly and confident demeanor. The background is blurred, featuring a cartoon logo." | Character ID Consistency Generation |
| "Display the logo in a minimalist style printed in white on a matte black ceramic coffee mug, alongside a steaming cup of coffee on a cozy cafe table." | Subject Consistency Generation |
| "The item is put on the table." | Subject Consistency Editing |
| "The logo is printed on the headphones." | Subject Consistency Editing |
| "The woman dresses this skirt." | Try On |
| "{image}, the man faces the camera." | Face Swap |
| "{image} features a close-up of a young, furry tiger cub on a rock. The tiger, which appears to be quite young, has distinctive orange, black, and white striped fur, typical of tigers. The cub's eyes have a bright and curious expression, and its ears are perked up, indicating alertness. The cub seems to be in the act of climbing or resting on the rock. The background is a blurred grassland with trees, but the focus is on the cub, which is vividly colored while the rest of the image is in grayscale, drawing attention to the tiger's details. The photo captures a moment in the wild, depicting the charming and tenacious nature of this young tiger, as well as its typical interaction with the environment." | Super-resolution |
| "a blue hand" | Regional Editing |
| "Mechanical hands like a robot" | Regional Editing |
| "{image} Beautiful female portrait, Robot with smooth White transparent carbon shell, rococo detailing, Natural lighting, Highly detailed, Cinematic, 4K." | Recolorizing |
| "{image} Beautiful female portrait, Robot with smooth White transparent carbon shell, rococo detailing, Natural lighting, Highly detailed, Cinematic, 4K." | Depth Guided Generation |
| "{image} Beautiful female portrait, Robot with smooth White transparent carbon shell, rococo detailing, Natural lighting, Highly detailed, Cinematic, 4K." | Contour Guided Generation |
*(Example input/output images omitted.)*

| Tuning Method | Instruction |
| --- | --- |
| LoRA + ACE Data | "By referencing the mask, restore a partial image from the doodle {image} that aligns with the textual explanation: '1 white old owl'." |
## 🔥 Applications

*(Example images omitted.)*

| Application | ACE++ Model | Example Instruction |
| --- | --- | --- |
| Try On | ACE++ Subject | "The woman dresses this skirt." |
| Logo Paste | ACE++ Subject | "The logo is printed on the headphones." |
| Photo Editing | ACE++ Subject | "The item is put on the ground." |
| Movie Poster Editor | ACE++ Portrait | "The man is facing the camera and is smiling." |
## ⚙️ Installation

Set the following environment variables to tell the code where to find the models. Either point them at locally downloaded checkpoints:

```bash
export FLUX_FILL_PATH="path/to/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="path/to/ACE++ PORTRAIT PATH"
export SUBJECT_MODEL_PATH="path/to/ACE++ SUBJECT PATH"
export LOCAL_MODEL_PATH="path/to/ACE++ LOCAL EDITING PATH"
```

or let them be resolved remotely (here `${scepter_path}` stands for the scepter-managed model path):

```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="${scepter_path}"
export SUBJECT_MODEL_PATH="${scepter_path}"
export LOCAL_MODEL_PATH="${scepter_path}"
```
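If automatic `hf://` resolution is unreliable in your network environment, one workaround (an assumption on our part, not from the original instructions) is to pre-download the checkpoint with the standard `huggingface-cli` tool and point the variable at the local copy:

```bash
# Manual pre-download; the target directory name is arbitrary.
pip install -U "huggingface_hub[cli]"
huggingface-cli download black-forest-labs/FLUX.1-Fill-dev --local-dir ./models/FLUX.1-Fill-dev
export FLUX_FILL_PATH="./models/FLUX.1-Fill-dev"
```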
## 🚀 Inference
With the environment variables from [Installation](#-installation) set, users can run the provided examples or test their own samples by executing the inference scripts (infer_lora.py / infer_fft.py).
The relevant commands for the LoRA models are as follows:
```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="ms://iic/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
export SUBJECT_MODEL_PATH="ms://iic/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
export LOCAL_MODEL_PATH="ms://iic/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
# Use the model from huggingface
# export PORTRAIT_MODEL_PATH="hf://ali-vilab/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
# export SUBJECT_MODEL_PATH="hf://ali-vilab/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
# export LOCAL_MODEL_PATH="hf://ali-vilab/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
python infer_lora.py
```
The relevant commands for the FFT model are as follows:
```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export ACE_PLUS_FFT_MODEL="ms://iic/ACE_Plus@ace_plus_fft.safetensors.safetensors"
python infer_fft.py
```
## 🚀 Train
We provide training code that allows users to train on their own data. Refer to 'data/train.csv' and 'data/eval.csv' when constructing the training and evaluation data, respectively. Fields are separated by '#;#'.
The six required fields are explained below; an illustrative row follows the block.
```text
"edit_image": the input image for an editing task. For reference generation (not editing), this field can be left empty.
"edit_mask": the input mask for an editing task, specifying the editing area. For reference generation, this field can be left empty.
"ref_image": the input image for a reference-generation task. For a pure editing task, this field can be left empty.
"target_image": the generated target image; it cannot be empty.
"prompt": the prompt for the generation task.
"data_type": the type of data, one of 'portrait', 'subject', or 'local'. This field is not used during the training phase.
```
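For illustration, a single training row might look like the following. The file paths and prompts are made up, and the field order and header row are assumptions; mirror the layout of 'data/train.csv' itself.

```text
edit_image#;#edit_mask#;#ref_image#;#target_image#;#prompt#;#data_type
#;##;#assets/ref_001.png#;#assets/target_001.png#;#Maintain the facial features, a girl wearing a police uniform.#;#portrait
images/src_002.png#;#images/mask_002.png#;##;#images/tgt_002.png#;#a blue hand#;#local
```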
All training-related parameters are stored in 'train_config/ace_plus_lora.yaml'. With the default configuration below, LoRA training uses between 38 GB and 40 GB of GPU memory.
| Hyperparameter | Value | Description |
| --- | --- | --- |
| ATTN_BACKEND | flash_attn / pytorch | Set 'flash_attn' to use flash-attn2 (make sure it is installed correctly). If your PyTorch version is newer than 2.4.0, set 'pytorch' to use PyTorch's native implementation. |
| USE_GRAD_CHECKPOINT | True / False | Gradient checkpointing significantly reduces GPU memory usage, but it may slow down training. |
| MAX_SEQ_LEN | 2048 | The sequence-length limit for a single input image, computed as H/16 * W/16. A larger value means a longer computation sequence and a higher training resolution. The default is 2048. |
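As a quick reference, these three switches might appear in 'train_config/ace_plus_lora.yaml' roughly as sketched below. Their placement within the file is an assumption; only the key names and values come from the table above.

```yaml
ATTN_BACKEND: pytorch       # or flash_attn, when flash-attn2 is installed
USE_GRAD_CHECKPOINT: True   # lowers GPU memory at some speed cost
MAX_SEQ_LEN: 2048           # per-image sequence cap, computed as H/16 * W/16
```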
To run the training code, execute the following command.
```bash
export FLUX_FILL_PATH="{path to FLUX.1-Fill-dev}"
python run_train.py --cfg train_config/ace_plus_lora.yaml
# Training from fft model
export FLUX_FILL_PATH="{path to FLUX.1-Fill-dev}"
export ACE_PLUS_FFT_MODEL="path to ace_plus_fft.safetensors.safetensors"
python run_train.py --cfg train_config/ace_plus_fft.yaml
```
The models trained by ACE++ can be found in ./examples/exp_example/xxxx/checkpoints/xxxx/0_SwiftLoRA/comfyui_model.safetensors.
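To try a freshly trained LoRA at inference time, one option (an assumption, mirroring the environment variables used earlier) is to point the matching model-path variable at the produced checkpoint. The example assumes a portrait-type LoRA; use SUBJECT_MODEL_PATH or LOCAL_MODEL_PATH for the other data types.

```bash
# Hypothetical: reuse the inference entry point with your own LoRA weights.
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="./examples/exp_example/xxxx/checkpoints/xxxx/0_SwiftLoRA/comfyui_model.safetensors"
python infer_lora.py
```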
## 💻 Demo
We have built a Gradio-based GUI demo to help users better utilize the ACE++ models. Execute the following commands:
```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="ms://iic/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
export SUBJECT_MODEL_PATH="ms://iic/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
export LOCAL_MODEL_PATH="ms://iic/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
# Use the model from huggingface
# export PORTRAIT_MODEL_PATH="hf://ali-vilab/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
# export SUBJECT_MODEL_PATH="hf://ali-vilab/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
# export LOCAL_MODEL_PATH="hf://ali-vilab/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
python demo_lora.py
# Use the fft model
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export ACE_PLUS_FFT_MODEL="ms://iic/ACE_Plus@ace_plus_fft.safetensors.safetensors"
python demo_fft.py
```
## 📚 Limitations
* For certain tasks, such as deleting or adding objects, instruction following still has flaws. For adding and replacing objects, we recommend trying the repainting mode of the local editing model.
* Generated results may contain artifacts, especially hands, which still exhibit distortions.
## 📝 Citation
ACE++ is a post-training model based on the FLUX.1-dev series from black-forest-labs; please adhere to its open-source license. The test materials used in ACE++ were collected from the internet and are intended solely for academic research and communication. If any original creator objects to their use, please contact us and we will remove them.
If you use this model in your research, please cite the works of FLUX.1-dev and the following papers:
```bibtex
@article{mao2025ace++,
title={ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling},
author={Mao, Chaojie and Zhang, Jingfeng and Pan, Yulin and Jiang, Zeyinzi and Han, Zhen and Liu, Yu and Zhou, Jingren},
journal={arXiv preprint arXiv:2501.02487},
year={2025}
}
```
```bibtex
@article{han2024ace,
title={ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer},
author={Han, Zhen and Jiang, Zeyinzi and Pan, Yulin and Zhang, Jingfeng and Mao, Chaojie and Xie, Chenwei and Liu, Yu and Zhou, Jingren},
journal={arXiv preprint arXiv:2410.00086},
year={2024}
}
```