ComfyUI ADetailer (Reddit digest)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

ComfyUI in FP8 mode enables SDXL-DPO plus the refiner in under 13 GB RAM and 4 GB VRAM. The amount of control you can have with Comfy is frigging amazing.

I use a denoising strength of about 0.3 in order to get rid of jaggies; unfortunately it diminishes the likeness during Ultimate Upscale.

There are various models for ADetailer, trained to detect different things such as faces, hands, lips, eyes, breasts, and genitalia.

With ADetailer I'm losing a great amount of detail, and it also de-ages faces in a creepy way. Can anyone ELI5 what's wrong?

On the Batch Prompt Schedule issue: I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load.

With TensorRT, generation takes about 0.2 seconds. ComfyUI is pretty amazing, but the documentation could use some TLC, especially on the example front.

FaceDetailer in ComfyUI isn't working for me, and now I'm stuck trying to reproduce the ADetailer step I use in AUTO1111 to fix faces; I'm using the Impact Pack's FaceDetailer node.

Hi guys, ADetailer can easily fix and generate beautiful faces.
Updated ComfyUI Workflow: SDXL. It includes a lot of functions and can be disorienting at first.

This one took 35 seconds to generate in A1111 with a 3070 8GB, with a pass of ADetailer.

I noticed ADetailer is giving me terrible results when trying to auto-inpaint the eyes. Any way to preserve the "LoRA effect" and still fix imperfect faces?

Here's the repo with the install instructions (you'll have to uninstall the wildcards extension you already have): sd-webui-wildcards-ad.

You can use your prompt to describe most features (hair, body type, and so on) for both characters, and then use ADetailer to target the correct face with the LoRA again at full strength.

I call it "The Ultimate ComfyUI Workflow": easily switch from Txt2Img to Img2Img, with a built-in Refiner, LoRA selector, Upscaler & Sharpener.
It's no longer maintained. Do you have a recommendation for a custom node that can be used in ComfyUI (with the same functionality as ADetailer in A1111) besides FaceDetailer? Someone pointed me to ComfyUI-Impact-Pack, but it's too much for me; I can't quite get it right, especially for SDXL.

I got the best results with the "img2img skip" option enabled in ADetailer, but then the rest of the image remains raw.

Btw, A1111's ADetailer seems to do the same thing and is more flexible, so I export the frames from ComfyUI and fix the faces there.

The ADetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit.

ComfyUI is incredibly powerful and quite flexible, but I do think it is missing something vital for a node editor.

You can improve your results by generating, and then using ADetailer on your upscale. I tried my own suggestion and it works pretty well, lol.

Both of my images have the flow embedded, so you can simply drag and drop the image into ComfyUI and it should open up the flow; I've also included the JSON in a zip file. Thanks, man.

Tried ComfyUI just to see.

Heads up: Batch Prompt Schedule does not work with the Python API templates provided on the ComfyUI GitHub.

An issue I have not been able to overcome is that the skin tone is always changed to a really specific shade of greyish-yellow that almost ruins the image.

ComfyUI got attention recently because the developer works for Stability AI and was able to be the first to get SDXL running.
Or do you want to add something similar to the "ADetailer" plugin from Automatic1111?

I tried using inpainting and image weighting in the ComfyUI_IPAdapter_plus example workflow; play around with it.

ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag-and-dropping a picture from that repo.

A1111's installation is complicated and annoying to set up; most people would have to watch YouTube tutorials just to get it installed properly.

This is the first time I've seen a face + hand ADetailer in a ComfyUI workflow.

Any idea if there's a way to see the actual detection confidence? For example, the ADetailer extension for A1111 will show red boxes next to each face with the detection confidence next to them (e.g. 0.47, 0.73). Is it the same? Can these detailers be used when making animations, and not just on a single image?

There's a "force inpaint" option on one of the face nodes that has to be true for it to do anything (I never did look at why it doesn't activate sometimes without that); the node is "Face Detailer (pipe)".

Those detail LoRAs are 100% compatible with ComfyUI, and yes, that's the first, second, and third recommendation I would give.

Have you guys had any success with, or noticed any difference between, the different ADetailer models?
I would love to test them all, but the x/y/z plot scripts to test them don't seem to be working.

Can you give me the best ADetailer workflow?

Is there a way to get ADetailer to do its thing after upscaling has finished? Would really help me out :)

Under the "ADetailer model" menu, select "hand_yolov8n.pt". Put these ADetailer models into the bbox folder.

Hello all, I'm very new to SD.

I am using AnimateDiff + ADetailer + Highres, but when using AnimateDiff + ADetailer in the webui, the face appears unnatural.

What is After Detailer (ADetailer)? ADetailer is an extension for the Stable Diffusion webui, designed for detailed image processing.

It did not pick up the ADetailer settings (expected, though there are nodes out there that can accomplish the same).

Thanks for the reply. I'm familiar with ADetailer, but I'm actually deliberately looking for something that does less.

I think the noise is also generated differently: A1111 uses the GPU by default and ComfyUI uses the CPU.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to discuss whether the image quality is better in each.
I don't find ComfyUI faster; I can make an SDXL image in Automatic 1111 just as quickly.

(I haven't run them through ADetailer or a LoRA yet.)

If you mean something like ADetailer in Auto1111, the node is called "FaceDetailer".

I want to install "adetailer" and "dddetailer", but there is no such thing in ComfyUI.

I've had no NaN errors after doing that.

Write me a DM. I've worked with Comfy for four months now and I love to help people who are interested; I can give you tips and workflows and explain how the stuff works.
ComfyUI is the least user-friendly thing I've ever seen in my life. I just want to be able to select the model, a VAE if necessary, and a LoRA, and that's it. I forgot ComfyUI even existed.

Use reasonable faces -not close-up portraits- and after ReActor you can try ADetailer over it.

Among the models for faces, I found face_yolov8n.

I have a problem with ADetailer in SD.Next.

Hopefully, some of the most important extensions, such as ADetailer, will be ported to ComfyUI.

A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer, Face Detailer, Upscaler, or Face Swapper functions.

I am curious if I can use AnimateDiff and ADetailer simultaneously in ComfyUI without any issues.

It's got little to do with the question, but might help your problem.

I have this bizarre bug in A1111 where, whenever I enable ADetailer and generate more than one batch/image, it will generate the images but not display them in the live preview.

I managed to find a simple SDXL workflow, but nothing else. It saves a lot of time.

I'm wondering if it is possible to use ADetailer within img2img to correct a previously generated AI image that has a garbled face and hands.
More or less a complete beginner with ComfyUI, so sorry if this is a stupid question. I've been working with A1111 and Forge since I started using SD, but I'm trying to dip my toes into ComfyUI. I get the basics.

With this workflow, ADetailer enhances the likeness a lot.

ADetailer model: mediapipe_face_mesh_eyes_only.

There are ways to do those in ComfyUI, but you'll want to find example workflows.

A1111 is REALLY unstable compared to ComfyUI.

A few days ago I installed it; the speed is amazing, but I can hardly do anything.

So you can install and run it, and every other program on your hard disk will stay exactly the same.

Most "ADetailer" files I have found work when placed in the Ultralytics BBox folder.

I always wanted to get into ComfyUI due to the speed.

If you use Luis Quesada's amazing inpainting crop-and-stitch nodes, you can easily build a workflow that will produce better results than FaceDetailer, at the exact resolution you request.

For example, ADetailer is a great extension. Also, allow only one model in memory.

For something similar, I generate images with a low number of steps and no ADetailer/upscaler/etc.

As a non-coder I have to ask: is it possible to implement them in ComfyUI in the same way the devs did?
I uploaded a few scripts that should help you train your own detection models to use with tools like ADetailer or other image tools.

You can use a SEGS detailer in ComfyUI if you create a mask around the area.

I designed a set of custom nodes based on diffusers instead of ComfyUI's own KSampler.

(Just the short version): photograph of a person as a sailor with a yellow raincoat on a ship in the rough ocean with a pipe in his mouth.

After Detailer (ADetailer extension) settings: (1st pic) ADetailer off; (2nd pic) ADetailer on, default settings except denoise set to 0.5; (3rd pic) ADetailer on, default settings.

Impact Pack has SEGS if you want fine control (like filtering for male faces, the largest n faces, or applying a ControlNet to the SEGS), or just a node called FaceDetailer.

The thing that is insane is testing face fixing (I used SD 1.5 just to compare times): the initial image took 127.5 ms to generate, and 9 seconds total to refine it.

At the end I refined the mask and composed the characters in Photoshop, then sent the image back to SD to finetune it, used ADetailer/FaceDetailer to fix and improve the faces and eyes, and called it a day.

A new Prompt Enricher function, able to improve your prompt with the help of GPT-4 or GPT-3.5-Turbo.
As the title suggests, I'm using ADetailer for Comfy (the Impact Pack) and it works well. The problem is that I'm using a LoRA to style the face after a specific person, and the FaceDetailer node makes the face clearly "better" but kind of destroys the similarity and facial traits.

Just tried it again and it worked with an image I generated in A1111 earlier today. It picked up the LoRAs, prompt, seed, etc.

Tweaked a bit and reduced the basic SDXL generation to 6-14 seconds.

The general idea and buildup of my workflow is: create a picture of a person doing things they are known for / that are characteristic of them.

This wasn't the case before updating to the newest version of A1111.

When I run AnimateDiff with ADetailer I get errors. However, the latest update has a "YOLO World" model, and I realised I don't know how to use the yolov8x and related models other than as the pre-defined models above.

Continued with extensions: got ADetailer, ControlNet, etc. with literally a click.

There are some distortions, and faces look more proportional but uncanny.

The creator has recently opted into posting YouTube examples, which have zero audio, captions, or anything to explain to the user what exactly is happening.

I noticed that I could no longer access the ADetailer settings.

I've submitted a bug to both ComfyUI and Fizzledorf, as I'm not sure which side will need to correct it.

I'm actually using ADetailer recognition models in Auto1111, but they are limited and cannot be combined in the same pass.
I'm new to all of this and have been looking online for BBox or Seg models that are not on the models list. I'm not seeing an ADetailer node in Comfy, but I found something called FaceDetailer.

ADetailer is a tool in the toolbox.

To clarify, there is a script in Automatic1111 -> Scripts -> x/y/z plot that promises to let you test each ADetailer model, the same as you would a regular checkpoint or CFG scale.

I use nodes from ComfyUI-Impact-Pack to automatically segment the image, detect hands, create masks, and inpaint. Most of them already are, if you are using the DEV branch, by the way. And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111 WebUI.

I'm new to the Comfy scene, so I don't know much, but I've seen ADetailer pop up about a dozen times in the past week regarding faces; it's probably worth looking into if you haven't already.

Giving me the mask and letting me handle the inpaint myself would give me more flexibility, e.g. for doing one face at a time.

ADetailer works OK for faces, but SD still doesn't know how to draw hands well, so don't expect any miracles.

There's nothing worse than sitting and waiting for 11 minutes for an SDXL render with ADetailer, just to see at the end that it's not what you were looking for.

Automatic1111 is still popular and does a lot of things ComfyUI can't. (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111.

Now, a world of possibilities has opened. Next, I will try to use a segment to separate the face, upscale it, add a LoRA or detailer to fine-tune the face details, rescale to the source image size, and paste it back.

ADetailer is mostly useful for adding extra detail rather than making substantial changes to the image.

I love the LayerDiffuse extension, but the lack of ADetailer makes it impossible to use with human characters.
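The separate-upscale-detail-paste-back idea above is mostly geometry. Here is a minimal sketch of the planning step only (pick a crop around the face with some context, and compute the scale needed to bring it up to a comfortable sampling resolution); the function name, padding fraction, and target size are illustrative, not Impact Pack's actual implementation.

```python
# FaceDetailer-style flow in outline: crop around the detected face, upscale
# the crop, re-sample it, then paste it back. This helper only does the
# geometry: crop box plus upscale factor. All names/numbers are assumptions.

def face_crop_plan(bbox, img_w, img_h, context=0.5, target=512):
    """Return (crop_box, upscale_factor) for a (x1, y1, x2, y2) face bbox.

    context: fraction of the face size added on each side as surrounding pixels.
    target: side length the crop should be upscaled to before re-sampling.
    """
    x1, y1, x2, y2 = bbox
    pad_x = (x2 - x1) * context
    pad_y = (y2 - y1) * context
    crop = (
        max(0, int(x1 - pad_x)),
        max(0, int(y1 - pad_y)),
        min(img_w, int(x2 + pad_x)),
        min(img_h, int(y2 + pad_y)),
    )
    # Scale so the longer crop side reaches the target sampling resolution.
    side = max(crop[2] - crop[0], crop[3] - crop[1])
    return crop, target / side

crop, scale = face_crop_plan((300, 200, 380, 300), 1024, 1024)
print(crop)   # (260, 150, 420, 350)
print(scale)  # 2.56
```

After re-sampling the upscaled crop, you would downscale it by `1/scale` and composite it back at `crop`'s position, ideally with a feathered mask so the seam is invisible.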
After learning Auto1111 for a week, I'm switching to Comfy due to the rudimentary nature of extensions for everything and persisting memory issues with my 6 GB GTX 1660. However, there's something I can't quite understand with regard to using nodes to perform what ADetailer does to faces.

Switched to Comfy recently for fun, but I still miss ADetailer (wasn't there an ADetailer node?).

But if there are several faces in a scene, it is nearly impossible to separate and control each.

I use ADetailer to find and enhance pre-defined features. However, I get subpar results compared to ADetailer from the webui.

Hi everybody, I used to enable ADetailer quite often when doing inpaints, for example to fix mangled hands.

My webui is updated, the ADetailer extension is up to date, and I have all the ADetailer models.

Then I bought a 4090 a couple of weeks ago.

[Whether or not to] use the Refiner, and how it interacts with other "second step" processes, notably HiRes.fix and ADetailer.

ADetailer settings: model: face_yolov8n.pt, denoising strength: 0.3, ADetailer mask only top k largest.

ComfyUI AnyNode now lets local LLMs code nodes for you.

When generating a txt2img and using ADetailer, I have no issues.

I keep hearing that A1111 uses the GPU to feed the noise-creation part, and ComfyUI uses the CPU.

Comfy has clearly taken a smart and logical approach with the workflow GUI, at least from a programmer's point of view.
I've been experimenting with ComfyUI recently, mostly because on paper it offers more flexibility compared to A1111 and SD.Next.

I really like using SD Ultimate Upscale for img2img, but I haven't found a good way to use it with ADetailer, as the tiling makes it so that ADetailer acts on each individual tile.

(The same image takes 5.6 seconds in ComfyUI.) I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.

This is a workflow intended to replicate the BREAK feature from A1111/Forge, ADetailer, and upscaling, all in one go.

It identifies faces in a scene and automatically replaces them according to the settings input. You can either paint the masks manually, or use a BBox detector to automatically detect faces.

Other than that, one thing that no one else mentioned was ADetailer. It has its uses, and many times, especially as you're moving to higher resolutions, it's best just to leverage inpaint; but it never hurts to experiment with the individual inpaint settings within ADetailer. Sometimes you can find a decent denoising setting, and often I can get the results I want from adjusting the custom height and width settings.

With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed.

The ADetailer model is for face/hand/person detection. The detection threshold is how sensitive the detection is (higher = stricter = fewer faces detected; it will ignore blurred faces on background characters); it then masks that part.

Hi, I tried to make a clothing-swap workflow, but perhaps because my knowledge of IPAdapter and ControlNet is limited, I failed to do so.
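The detection-threshold behaviour described above can be sketched in a few lines: every candidate detection carries a confidence score, and only those at or above the threshold get masked. The detection tuples below are made up for the example; real detectors (the YOLO models ADetailer uses) return boxes with scores in the same spirit.

```python
# Higher threshold = stricter = fewer detections kept (blurry background
# faces tend to score low and get ignored first).

def filter_detections(detections, threshold):
    """Keep only detections whose confidence meets the threshold.

    detections: list of (label, confidence, bbox) tuples,
    with bbox as (x1, y1, x2, y2).
    """
    return [d for d in detections if d[1] >= threshold]

candidates = [
    ("face", 0.92, (100, 40, 180, 140)),   # sharp foreground face
    ("face", 0.31, (300, 60, 330, 100)),   # blurry background face
]

# A permissive threshold keeps both faces; a stricter one drops the blurry one.
print(len(filter_detections(candidates, 0.25)))  # 2
print(len(filter_detections(candidates, 0.5)))   # 1
```

This is why raising the threshold is the usual fix when ADetailer keeps "finding" faces in a busy background.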
I want to switch to ComfyUI, but I can't do that until I find a decent ADetailer workflow in ComfyUI.

I am using ADetailer to do batch inpainting, basically, but not enough of the face is being changed, primarily the mouth, nose, eyes, and brows. The area it adjusts is too small; I need the box to be larger, to cover the whole face plus chin, neck, and maybe hair too.

Just make sure you update it if it's already installed.

Hi guys, I'm trying to do a few face swaps for farewell gifts.

I wanted to share a simple ComfyUI workflow I reproduced from my hours spent in A1111, with HiRes, LoRAs, multiple detailers, a final upscaler, and a style-filter selector.

I recently tried out ADetailer.
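Growing the detection box before inpainting, as the batch-inpainting comment above wants, is a simple transform: scale the box around its center and clamp it to the image. ADetailer exposes similar knobs (mask dilation and x/y offsets); this helper is an illustrative sketch, not the extension's actual code.

```python
# Grow a tight face box so the inpaint also covers chin, neck, and hair.

def expand_bbox(bbox, factor, img_w, img_h):
    """Grow a (x1, y1, x2, y2) box around its center by `factor`,
    clamped to the image bounds."""
    x1, y1, x2, y2 = bbox
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    half_w = (x2 - x1) * factor / 2
    half_h = (y2 - y1) * factor / 2
    return (
        max(0, int(cx - half_w)),
        max(0, int(cy - half_h)),
        min(img_w, int(cx + half_w)),
        min(img_h, int(cy + half_h)),
    )

# An 80x100 face box grown 1.5x on a 512x512 image:
print(expand_bbox((200, 150, 280, 250), 1.5, 512, 512))  # (180, 125, 300, 275)
```

A factor around 1.3-1.6 is a reasonable starting point when the default mask keeps missing the chin and hairline.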
Both the detailers in ComfyUI's Impact Pack and A1111's ADetailer operate in that manner.

(In the webui, ADetailer runs after the AnimateDiff generation, making the final video look unnatural.)

I've managed to mimic some of the extension's features in my Comfy workflow, but if anyone knows of a more robust copycat approach to get the extra ADetailer options working in ComfyUI, I'd love to see it.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of thing).

Using a separate checkpoint for ADetailer means it has to reload between txt2img and ADetailer.

The default settings for ADetailer are making faces much worse.

I played with hi-diffusion in ComfyUI with SD 1.5 models, and it easily generated 2K images without any issues.

Upgraded my PC recently to a 4070.

FaceDetailer is basically another KSampler, but instead of rendering the entire image again, it renders just a small area around the detected faces.

Is there a way to have it only do the main (largest) face, or better yet an arbitrary number, like you can in ADetailer? Any time there's a crowd, it'll try to do them all, and it ends up giving them all the expression of the main subject.

The denoise is 0.5 and the prompt is "photo of ohwx man".

Hello there, I have been using ComfyUI for quite a while now, and I've got some pretty decent workflows for 1.5 and SDXL.
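The "only do the main face" request above is what ADetailer's "mask only the top k largest" option does: sort the detected boxes by area and keep the k biggest, so background faces in a crowd are skipped. A sketch with made-up boxes:

```python
# Keep only the k largest detections by pixel area, so a crowd of small
# background faces is ignored and only the main subject(s) get re-sampled.

def top_k_largest(bboxes, k):
    """Keep the k largest (x1, y1, x2, y2) boxes by pixel area."""
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])
    return sorted(bboxes, key=area, reverse=True)[:k]

faces = [
    (100, 100, 300, 350),  # main subject
    (400, 120, 440, 170),  # background face
    (460, 130, 490, 165),  # background face
]
print(top_k_largest(faces, 1))  # [(100, 100, 300, 350)]
```

In ComfyUI, the Impact Pack's SEGS filter nodes offer comparable filtering (largest-n, size thresholds) before the detailer pass.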
This is an old Reddit post; I have already made a better tutorial on how to make animations with AnimateDiff, including workflow files (anime standing girl); see the above comments.

ADetailer (After Detailer) Lips Model: https://civitai.com/models/142240/adetailer-after-detailer-lips-model

tl;dr: just check "enable ADetailer" and generate as usual; it'll work just fine with the default settings. ADetailer can seriously set your level of detail/realism apart from the rest.

Hello guys. Sorry to ask, but I searched for hours through the documentation, the internet, and even the source code of Impact-Pack, and found no way to add a new bbox_detector.

Hey, this is my first ComfyUI workflow, hope you enjoy it! I've never shared a flow before, so if it has problems, please let me know.

I've been reading and playing with it for a few days.

It's called FaceDetailer in ComfyUI, but you'd have to add a few dozen extra nodes to get all the functionality of the ADetailer extension.
UPDATE: the alternative node I found works (with some caveats).

With a hand-detection model selected, it will attempt to automatically detect hands in the generated image and try to inpaint them with the given prompt.

Sometimes, when I struggle with bad-quality deformed faces, I use ADetailer, but it doesn't work perfectly: when img2img destroys the face, ADetailer can't help enough and creates strange, bad results. I'm beginning to ask myself if that's even possible in ComfyUI. The only way is to send it to img2img in A1111 to upscale, and then mask the background in Photoshop.

I'm using face_yolov8n_v2, and that works fine. However, I want to do this with a Python script, using the Auto1111 API.

The 1st pic is without ADetailer and the second is with it.

ADetailer settings: model: face_yolov8n.pt, denoising strength: 0.3, inpaint only masked.

I just checked GitHub and found that ComfyUI can do Stable Cascade image-to-image.

For some time now, when I generate an image with ADetailer enabled, the generation runs smoothly until the last step, when it completely blocks Stable Diffusion. If I disable ADetailer, it goes back to working again.

I have been using ADetailer for a while to get very high-quality faces in generation.

There's also a bunch of BBOX and SEGM detectors on Civitai (search for ADetailer); sometimes it makes sense to combine a BBOX detector (like face) with a SEGM detector (like skin) to really get just the region you want.

I've always been disappointed with FaceDetailer.

Man, you're damn right! I would never be able to do this in A1111; I would be stuck in A1111's predetermined flow order.

The original author of ADetailer was kind enough to merge my changes.
The weights are also interpreted differently.

The only extensions I have installed are ControlNet, Deforum, AnimateDiff, and ADetailer.

This works perfectly; you can fix the faces in any image you upload.

Hi all, we're introducing Inference in v2.0 of Stability Matrix: a built-in Stable Diffusion interface powered by any running ComfyUI package.

I am fairly confident with ComfyUI, but still learning.

Noticed that speed was almost the same with A1111 compared to my 3080.

When I do two passes, the end result is better, although it still falls short of what I got in the webui with ADetailer, which is strange, as they work in the same way from what I understand.

Put them in a separate folder (fixed models) and give it a hard read-only attribute.

3072x1280 using the Tempest Artistic model (edited for missing letters).

Well, no, Ultimate SD Upscale is not on HF, but the face-detection models for ADetailer are.

As far as I saw by reading this sub, the recommended workflow is: adjust faces, then HiRes fix.

And the new interface is also an improvement, as it's cleaner and tighter.
Does anyone know how we can use the Auto1111 API with ADetailer to fix the faces of an already generated image? In the UI, we can use the img2img tab and check the skip-img2img box under ADetailer.
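A sketch of what that request body could look like when scripted against A1111's web API. The `/sdapi/v1/img2img` route, `init_images`, and `alwayson_scripts` are standard A1111 API fields; the ADetailer `args` convention (a leading enable flag, a skip-img2img flag, then one settings dict per detailer unit, with keys like `ad_model`) follows the extension's documented API as I understand it; double-check the exact argument layout against your installed ADetailer version.

```python
# Build an img2img payload that runs only the ADetailer pass on an existing
# image (skip-img2img), instead of re-rendering the whole picture.

import base64

def adetailer_payload(image_bytes, prompt, ad_model="face_yolov8n.pt"):
    """Assemble a request body for POST /sdapi/v1/img2img with ADetailer."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "prompt": prompt,
        "init_images": [image_b64],
        "denoising_strength": 0.3,
        "alwayson_scripts": {
            "ADetailer": {
                "args": [
                    True,   # enable ADetailer
                    True,   # skip the regular img2img pass (skip-img2img box)
                    {"ad_model": ad_model, "ad_denoising_strength": 0.4},
                ]
            }
        },
    }

# Placeholder bytes stand in for a real PNG read from disk.
payload = adetailer_payload(b"\x89PNG...", "photo of ohwx man")
print(sorted(payload))  # ['alwayson_scripts', 'denoising_strength', 'init_images', 'prompt']
```

POST this as JSON to `http://127.0.0.1:7860/sdapi/v1/img2img` (the webui must be started with `--api`); the response contains the fixed image as base64 in its `images` list.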