
Best upscale model for comfyui reddit

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). And when purely upscaling, the best upscaler is called LDSR. I ran some tests this morning.

Jan 13, 2024 · TLDR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales. A pixel upscale using a model like UltraSharp is a bit better, and slower, but it will still be fake detail when examined closely. That's because of the model upscale. (Model: base SD v1.5; see workflow for more info.)

Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu. But I probably wouldn't upscale by 4x at all if fidelity is important. Now I use it only with SDXL (bigger tiles, 1024x1024), and I do it multiple times with decreasing denoise and CFG.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering the image with a neural network to get a sharper, clearer result.

Best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res.

But basically: txt2img, img2img, then a 4x upscale with a few different upscalers. So from VAE Decode you need an "Upscale Image (using Model)" node. But for the other stuff, super small models and good results.

This is the 'latent chooser' node; it works but is slightly unreliable. It turns out lovely results, but I'm finding that when I get to the upscale stage the face changes to something very similar every time. I haven't been able to replicate this in Comfy. An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first-pass (low-resolution) sample.

Welcome to the unofficial ComfyUI subreddit.
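The node chain described above (VAE Decode feeding a pixel-space upscale model) can also be expressed in ComfyUI's API-format JSON, where each node is a dict entry and inputs are wired as [source_node_id, output_index] pairs. A minimal sketch; the node ids ("8", "9", "10"), the upstream ids they reference, and the model filename are placeholder assumptions, so check the class names against your own ComfyUI build:

```python
# Sketch of a ComfyUI API-format fragment: decode latents, then upscale
# with a pixel-space model. Node ids and the model filename are
# arbitrary placeholders, not values from the original posts.
workflow = {
    "8": {  # decode the sampled latent to an image
        "class_type": "VAEDecode",
        "inputs": {"samples": ["7", 0], "vae": ["4", 2]},
    },
    "9": {  # load a pixel-space upscale model such as 4x-UltraSharp
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "4x-UltraSharp.pth"},
    },
    "10": {  # "Upscale Image (using Model)" node
        "class_type": "ImageUpscaleWithModel",
        "inputs": {"upscale_model": ["9", 0], "image": ["8", 0]},
    },
}

# Each input is either a literal or a [source_node_id, output_index] pair.
print(workflow["10"]["inputs"]["image"])  # -> ['8', 0]
```

This JSON shape is what the "Save (API Format)" export produces, so a saved workflow is a good reference for the exact class names and input keys on your install.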
The custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example).

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors. Thanks.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

Hi, is there a tutorial on how to build a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

Aug 5, 2024 · Flux has been out for under a week and we're already seeing some great innovation in the open source community.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Generates an SD 1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

Usually I use two of my workflows. For upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the fast stable diffusion Automatic1111 Google Colab and the Replicate website's super-resolution collection.

Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image. Also, both have a denoise value that drastically changes the result.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer.

I took a 2-4 month hiatus, basically when the OG upscale checkpoints like SUPIR came out, so I have no heckin' idea what the go-to is these days. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

If you want actual detail in a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler. Search for "upscale" and click on Install for the models you want.

I want to upscale my image with a model, and then select the final size of it. Like, I can understand that using Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node. All of this can be done in Comfy with a few nodes. The resolution is okay, but if possible I would like to get something better.

Upscaling: increasing the resolution and sharpness at the same time. You create nodes and "wire" them together. There's "latent upscale by", but I don't want to upscale the latent image. Yep, people do say that Ultimate SD Upscale works for SDXL as well now, but it didn't work for me.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

DirectML (AMD cards on Windows): pip install torch-directml, then you can launch ComfyUI with: python main.py --directml
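Several comments above want a fixed final size out of a fixed-multiplier upscaler (a 4x model, but a 1500px target, say). The usual trick is to run the model at its native factor and then resize the result by target/native, e.g. with an "Upscale Image By" node. The arithmetic, as a small sketch (function name and defaults are mine, not from the posts):

```python
def resize_factor(src: int, target: int, model_factor: int = 4) -> float:
    """Factor for the post-upscale resize so a fixed NxN-multiplier
    model lands on the requested final size."""
    if src <= 0 or target <= 0:
        raise ValueError("sizes must be positive")
    native = src * model_factor   # size straight out of the upscale model
    return target / native        # feed this to a resize/"upscale by" step

# 512px source through a 4x model, aiming for a 1500px final image:
print(resize_factor(512, 1500))  # -> 0.732421875 (downscale from 2048px)
```

The same math explains the "use a 4x upscaler for a 2x upscale" advice: resize_factor(w, 2 * w) is simply 0.5.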
Note: Remember to add your models, VAE, LoRAs, etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation.

I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model. There is no tiling in the default A1111 hires fix.

For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, and choose the Ultimate SD Upscale script to scale it. With a denoise setting of 0.15-0.25 I get a good blending of the face without changing the image too much.

If you want a better grounding for making your own ComfyUI systems, consider checking out my tutorials.

x4-upscaler-ema.safetensors (SD 4X Upscale Model): I decided to pit the two head to head. Here are the results, workflow pasted below (did not bind it to image metadata because I am using a very custom, weird setup).

ComfyUI uses a flowchart diagram model. Tried the llite custom nodes with lllite models and was impressed. Good for depth and OpenPose; so far so good. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

Forgot to mention, you will have to download this inpaint model from Hugging Face (diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main, huggingface.co) and put it in your ComfyUI "unet" folder, which can be found in the models folder.

There are also "face detailer" workflows for faces specifically. Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait for most outputs.

SD 1.5 models such as DreamShaper, or those which provide good details. You could also try a standard checkpoint with, say, 13 and 30.

I was working on exploring and putting together my guide on running Flux on RunPod ($0.34 per hour) and discovered this workflow by @plasm0 that runs locally and supports upscaling as well.

Import times for custom nodes:
0.0 seconds (IMPORT FAILED): R:\diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale

To get the absolute best upscales requires a variety of techniques and often regional upscaling at some points. I am curious both which nodes are best for this, and which models.

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

"Upscaling with model" is an operation on normal images, and we can use a corresponding model, such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. Does anyone have any suggestions? Would it be better to do an iterative…

From what I've generated so far, the model upscale edges out the Ultimate Upscale slightly. I believe it should work with 8GB VRAM provided your SDXL model and upscale model are not super huge; e.g., use a 2X upscaler model.

I'd say it allows a very high level of access and customization, more than A1111, but with added complexity.

In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise. If you don't want the distortion, decode the latent, use "upscale image by", then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

Now go back to img2img, mask the important parts of your images, and upscale that.
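The tiled upscalers discussed here (Ultimate SD Upscale, Tiled KSampler, with 1024x1024 tiles) split the enlarged image into overlapping tiles and sample each one, so total sampling time grows with the tile count. A rough sketch of the tile math; the overlap default and rounding are my assumptions, and real implementations vary in how they pad and blend seams:

```python
import math

def tile_count(width: int, height: int, tile: int = 1024, overlap: int = 64) -> int:
    """Approximate number of tiles an Ultimate-SD-Upscale-style tiler
    samples: tiles advance by (tile - overlap) so seams can be blended."""
    stride = tile - overlap
    cols = max(1, math.ceil((width - overlap) / stride))
    rows = max(1, math.ceil((height - overlap) / stride))
    return cols * rows

# A 2048x2048 result with 1024px tiles and 64px overlap:
print(tile_count(2048, 2048))  # -> 9 (3 columns x 3 rows)
```

This is why "it upscales everything and that is not worth the wait": doubling the output resolution roughly quadruples the number of sampled tiles.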
From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. If you want to use RealESRGAN_x4plus_anime_6B you need to work in pixel space and forget any latent upscale.

The same seed is probably not necessary, and it can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.

Best aesthetic scorer custom node suite for ComfyUI? I'm working on the upcoming AP Workflow 8.0 and want to add an Aesthetic Score Predictor function.

Connect the Load Upscale Model with the Upscale Image (using Model) node to VAE Decode, then from that image to your preview/save image.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. There are also other upscale methods that can upscale latents with less distortion; the standard ones are bicubic, bilinear, and bislerp. That's because latent upscale turns the base image into noise (blur).

Sometimes models appear twice, for example "4xESRGAN" used by chaiNNer and "4x_ESRGAN" used by Automatic1111.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results.

Maybe it doesn't seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. You can also run a regular AI upscale then a downscale (4x * 0.5), with an ESRGAN model. Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

Ultimate SD Upscale is the best for me; you can use it with ControlNet Tile in SD 1.5. This custom node is failing to load, but I think this is a separate issue.

So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2). It's a lot faster than tiling, but the outputs aren't as detailed.

So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control over how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers), but I'm not forced to have it only multiply by 4x.

Then add another node under loaders: the "Load Upscale Model" node. Attach to it a "latent_image"; in this case it's "upscale latent".

You can easily utilize the schemes below for your custom setups.

Fastest would be a simple pixel upscale with lanczos; that's practically instant but doesn't do much either. Though, from what someone else stated, it comes down to use case. This ComfyUI node setup lets you use the Ultimate SD Upscale custom nodes in your ComfyUI AI generation routine.

1 - latent upscale looks much more detailed, but gets rid of the detail of the original image. 2 - image upscale is less detailed, but more faithful to the image you upscale.

Upscale x1.5 ~ x2 - no need for a model; it can be a cheap latent upscale. Sample again, denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run the sample again at low denoise if you want higher resolution.

The downside is that it takes a very long time. FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

If you use Iterative Upscale, it might be better to add noise using techniques like noise injection or an unsampler hook.

Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again. OP: So, this morning, when I left for…

r/StableDiffusion • Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.0-RC; it's taking only 7.5GB of VRAM even while swapping the refiner. Use the --medvram-sdxl flag when starting.

I first create the image with SDXL, then Ultimate Upscale using an SD 1.5 model.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKV, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

Messing around with "upscale by model" is pointless for hires fix.

Then output everything to Video Combine.

That's a good model, but to be very clear, it's not "objectively better" than anything else on that site; OP's entire basis for the post is just wrong. Purpose-built upscale models are NOT "advancing" in the way they seem to believe.

Super late here, but is this still the case? I've got CCSR & TTPlanet. ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them.
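Several comments above converge on the same recipe: a cheap first upscale, then one or more extra sampling passes at progressively lower denoise (and often lower CFG) so each pass adds detail without repainting the image. A hypothetical helper that generates such a schedule; the starting values and decay rates are illustrative assumptions, not settings taken from any of the posts:

```python
def pass_schedule(passes: int, denoise0: float = 0.5, cfg0: float = 7.0,
                  denoise_decay: float = 0.6, cfg_decay: float = 0.85):
    """Hypothetical helper: geometrically decreasing (denoise, cfg) pairs
    for repeated img2img refinement passes. All defaults are illustrative."""
    out = []
    denoise, cfg = denoise0, cfg0
    for _ in range(passes):
        out.append((round(denoise, 3), round(cfg, 2)))
        denoise *= denoise_decay
        cfg = max(1.0, cfg * cfg_decay)  # never drop CFG below 1.0
    return out

# Three refinement passes, each gentler than the last:
for d, c in pass_schedule(3):
    print(f"denoise={d}, cfg={c}")
```

The exact numbers matter less than the shape: the first pass (around 0.5 denoise here) repaints noticeably, while the last sits in the 0.15-0.25 range mentioned above, where detail is added but composition and faces stay put.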