ComfyUI load workflow examples (Reddit)

This repo contains examples of what is achievable with ComfyUI. An all-in-one FluxDev workflow in ComfyUI combines various techniques for generating images with the FluxDev model, including img2img and txt2img. Flux Schnell is a distilled 4-step model.

I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata.

Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want. Of course, with so much power also comes a steep learning curve, but it is well worth it IMHO. Just my two cents.

For ComfyUI there should be license information for each node, in my opinion ("Commercial use: yes, no, needs license"), and a workflow using a non-commercial node should show some warning in red. This could lead users to put increased pressure on developers.

I can load ComfyUI through 192.168.1:8188, but when I try to load a flow through one of the example images, it just does nothing. Any ideas on this?

Just load your image and prompt, and go.

The best workflow examples are the ones on the GitHub examples pages. Adding the same JSONs to the main repo would only add more hell to the commit history and needlessly duplicate the already existing examples repo.

This is a more complex example, but it also shows you the power of ComfyUI. Img2Img ComfyUI workflow.

Same workflow as the image I posted, but with the first image being different. This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. Really happy with how this is working.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads the 001 image from the folder, and for the next gen grabs the 002 image from the same folder?

That's a bit presumptuous, considering you don't know my requirements.

Welcome to the unofficial ComfyUI subreddit.

Starting workflow.

I think it was 3DS Max.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

You can find the Flux Dev diffusion model weights here. Then restart ComfyUI.

The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it. Or go through searching Reddit; the ComfyUI manual needs updating, IMO.

Still working on the whole thing, but I got the idea down.

Yes, 8GB card: my ComfyUI workflow loads both SDXL base & refiner models, a separate XL VAE, 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model (with input from the same base SDXL model), and they all work together.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

The workflow in the example is passed in via the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead. Study this workflow and notes to understand the basics of ComfyUI, SDXL, and the Refiner workflow.
It'll load a basic SDXL workflow that includes a bunch of notes explaining things.

If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow as well.

I actually just released an open source extension that will convert any native ComfyUI workflow into executable Python code that will run without the server.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

https://youtu.be/ppE1W0-LJas - the tutorial.

Help, pls?

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture).

Keyboard shortcuts:
Ctrl + S: Save workflow
Ctrl + O: Load workflow
Ctrl + A: Select all nodes
Alt + C: Collapse/uncollapse selected nodes
Ctrl + M: Mute/unmute selected nodes
Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete/Backspace: Delete selected nodes
Ctrl + Backspace: Delete the current graph

The image blank can be used to copy (clipspace) to both load image nodes; from there you just paint your masks, set your prompts (only the base negative prompt is used in this flow), and go.

I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560MB/s.

I might do an issue in ComfyUI about that.

Using the Comfy image saver node will add EXIF fields that can be read by IIB, so you can view the prompt for each image without needing to drag/drop every single one.

I'll do you one better, and send you a png you can directly load into Comfy.

It is not much of an inconvenience when I'm at my main PC.

Upcoming tutorial: SDXL LoRA + using 1.5 LoRAs with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; looks like the metadata is not complete.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

Merging 2 Images together.

I can load workflows from the example images through localhost:8188; this seems to work fine.

I was confused by the fact that I saw, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, that they simply drop PNGs into an empty ComfyUI. I tried to find either of those two examples, but I have so many damn images I couldn't find them.

It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever.

I couldn't find the workflows to directly import into Comfy.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

So every time I reconnect, I have to load a presaved workflow to continue where I started.

I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.
A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. Ending workflow.

This is done using WAS nodes.

Create animations with AnimateDiff.

You can then load or drag the following image in ComfyUI to get the workflow: Load Image Node.

Here's a quick example where the lines from the scribble actually overlap with the pose. They do overlap.

I use a Google Colab VM to run ComfyUI.

If you have the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box.

You can just use someone else's SDXL 0.9 workflow (just search on YouTube for "sdxl 0.9 workflow"; the one from Olivio Sarikas' video works just fine) and just replace the models with the 1.0 ones.

Breakdown of workflow content. If anyone else is reading this and wants the workflows, here are a few simple SDXL workflows, using the new OneButtonPrompt nodes and saving the prompt to file (I don't guarantee tidiness):

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searg's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

Eh, if you build the right workflow, it will pop out 2k and 8k images without the need for a lot of RAM.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted. It's just not intended as an upscale from the resolution used in the base model stage.

How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Flux.1 ComfyUI install guidance, workflow and example.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated). Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

One trick I learned yesterday that makes sharing workflows easier when they include pictures and videos: use the Load Video (Path) node, post your video source online (on Imgur, for example), and link to it via that node with a simple URL.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

You can encode then decode back to a normal KSampler with a 1.5 model with LCM, with 4 steps and 0.2 denoise, to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

Upscaling ComfyUI workflow.

Thank you u/AIrjen! Love the variant generator, super cool.

You need to select the directory your frames are located in (i.e. wherever you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag'n'dropping a picture from that repo.

Share, discover, & run thousands of ComfyUI workflows.

The EXIF data won't capture the entire workflow, but to quickly see an overview of a generated image, this is the best you can currently get.

And if you copy it into ComfyUI, it will output a text string which you can then plug into your CLIP Text Encode node, and it is then used as your SD prompt.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them :)

Nobody needs all that, LOL. This is just a simple node setup built off what's given and some of the newer nodes that have come out.

I recently switched from A1111 to ComfyUI to mess around with AI-generated images.

But when I'm doing it from a work PC or a tablet, it is an inconvenience to obtain my previous workflow.

I see YouTubers drag images into ComfyUI and they get a full workflow, but when I do it, I can't seem to load any workflows. I can't load workflows from the example images using a second computer.

If you have any of those generated images in original PNG, you can just drop them into ComfyUI and the workflow will load.

You can then load or drag the following image in ComfyUI to get the workflow: ComfyUI Examples.

Also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto a Load Image node to load them more quickly.

And another general difference is that A1111, when you set 20 steps and 0.8 denoise, won't actually run 20 steps but rather decreases that amount to 16.

It's simple and straight to the point.

So, I just made this workflow in ComfyUI.

WAS suite has some workflow stuff in its GitHub links somewhere as well.

SDXL Default ComfyUI workflow.

ComfyUI needs a standalone node manager, IMO: something that can do the whole install process and make sure the correct install paths are being used for modules.
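The 20-steps-becomes-16 remark is just arithmetic: at 0.8 denoise, A1111's img2img runs only the final 80% of the noise schedule, whereas ComfyUI's KSampler runs the count you ask for. A simplified sketch of the A1111 behavior (the exact rounding depends on sampler settings, so treat this as an approximation):

```python
def a1111_effective_steps(steps: int, denoise: float) -> int:
    """A1111 img2img samples only the last `denoise` fraction of the
    schedule, so the requested step count shrinks accordingly."""
    return round(steps * denoise)
```

So 20 steps at 0.8 denoise come out as 16 real sampling steps; in ComfyUI, steps and denoise are independent knobs and nothing is silently rescaled.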
You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Aug 2, 2024 · 6 min read. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: Introduction to Flux.1; Overview of different versions of Flux.1; Flux Hardware Requirements; How to install and use Flux.1 with ComfyUI.

I even have a working SDXL example in raw Python on the readme.

ControlNet Depth ComfyUI workflow.

You can see it's a bit chaotic in this case, but it works.

My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. But let me know if you need help replicating some of the concepts in my process.

After studying the nodes and edges, you will know exactly what Hi-Res Fix is. Jul 6, 2024 · Download the first image on this page and drop it in ComfyUI to load the Hi-Res Fix workflow.

It's nothing spectacular, but it gives good consistent results.

Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow.

Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function).

Run any ComfyUI workflow w/ ZERO setup (free & open source) - try now.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. You should now be able to load the workflow, which is here.

Instead, I created a simplified 2048x2048 workflow.

Besides, by recording the precise "workflow" (= the collection of interconnected nodes), you even get reasonably good reproducibility: namely, if you load the workflow and change nothing (including the seed), you should get exactly the same result.

The API workflows are not the same format as an image workflow: you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button.
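The "Save (API Format)" point matters because the server's /prompt endpoint expects that flattened node map wrapped in a {"prompt": ...} body, not the graph JSON embedded in a PNG. A minimal sketch using only the standard library, with the default server address assumed:

```python
import json
import urllib.request

def build_prompt_request(workflow: dict,
                         server: str = "http://127.0.0.1:8188") -> urllib.request.Request:
    """Wrap an API-format workflow in the JSON body that ComfyUI's
    /prompt endpoint expects."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt", data=body,
        headers={"Content-Type": "application/json"})

def queue_workflow(path: str) -> str:
    """Load a file saved with "Save (API Format)" and queue it,
    returning the server's JSON reply (which contains the prompt id)."""
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    with urllib.request.urlopen(build_prompt_request(workflow)) as resp:
        return resp.read().decode("utf-8")
```

Feeding a PNG-style graph JSON to this endpoint will be rejected, which is the usual cause of "my saved workflow doesn't run through the API".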