
ComfyUI workflow examples from Reddit

You can load or drag the following image into ComfyUI to get the workflow: Flux Schnell.

You can find the Flux Dev diffusion model weights here. Create animations with AnimateDiff. Is there a workflow with all features and options combined that I can simply load and use?

2/ Run the Step 1 workflow once; all you need to change is where the original frames are and the dimensions of the output you want.

If the term "workflow" has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes". If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched. I think the perfect place for them is the Wiki on GitHub.

Potential use cases include:
  • Streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow
  • Creating programmatic experiments for various prompt/parameter values

A higher clipskip in A1111 (lower, or more negative, in ComfyUI's terms) equates to LESS detail from CLIP (not to be confused with detail in the image).

Workflow image with generated image. Standard A1111 inpainting works mostly the same as this ComfyUI example you provided. A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. ComfyUI's inpainting and masking aren't perfect. Please keep posted images SFW. To add content, your account must be vetted/verified. It covers the following topics: introduction to Flux; merging two images together.

It's a simpler setup than u/Ferniclestix uses, but I think he likes to generate and inpaint in one session, whereas I generate several images, then import them and inpaint later (like this). Welcome to the unofficial ComfyUI subreddit.
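The A1111-to-ComfyUI clipskip correspondence described above can be captured in a tiny helper. This is a sketch of my reading of the two UIs (A1111's clip skip counts up from 1, while ComfyUI's CLIP Set Last Layer node takes a negative layer index); verify it against your own install.

```python
def a1111_clipskip_to_comfy(clip_skip):
    """Map an A1111 clip skip value to the negative layer index used by
    ComfyUI's CLIP Set Last Layer node: A1111's 1 (no skip) becomes -1,
    A1111's 2 becomes -2, and so on -- more negative means less CLIP detail."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip

print(a1111_clipskip_to_comfy(2))  # -2
```

As the text notes, some models bake in an expected value (Pony-style checkpoints are commonly run at clip skip 2, i.e. -2 in ComfyUI), so this is a per-model setting rather than a free knob.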
That's where I'd gotten my second workflow I posted from, which got me going.

Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the Preliminary, Base, and Refiner setups.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for about a month and it's finally ready, so I also made a tutorial on how to use it.

The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, but the default workflow CLIPText on the right. AP Workflow 9.0 for ComfyUI. K12sysadmin is open to view and closed to post. About 2.75s/it with the 14-frame model.

If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well. My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. But let me know if you need help replicating some of the concepts in my process. Please share your tips, tricks, and workflows for using this software to create your AI art. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. You can then load or drag the following image into ComfyUI to get the workflow. Or search Reddit; the ComfyUI manual needs updating, in my opinion.
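As noted above, a PNG saved by ComfyUI carries its entire workflow in the file's metadata, which is why dropping the image back onto ComfyUI restores the graph. Here is a minimal, dependency-free sketch of reading that metadata back out; the "workflow" and "prompt" key names are what ComfyUI-saved images typically use, so treat them as assumptions and inspect your own files.

```python
import json
import struct

def extract_workflow(png_path):
    """Pull the workflow JSON that ComfyUI embeds in a saved PNG.
    ComfyUI typically writes the graph as a tEXt chunk keyed 'workflow'
    (and the flattened prompt as 'prompt'); this walks the raw PNG
    chunks directly, so no imaging library is needed."""
    with open(png_path, "rb") as f:
        data = f.read()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    texts = {}
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            texts[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    raw = texts.get("workflow") or texts.get("prompt")
    return json.loads(raw) if raw else None
```

This also explains the complaint later in the thread about a downloaded "workflow picture" loading nothing: if a site re-encodes or strips the PNG text chunks, the metadata is gone and there is nothing for ComfyUI to restore.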
LoRA selector (for example, download the SDXL LoRA example from StabilityAI and put it into ComfyUI\models\lora\). VAE selector (download the default VAE from StabilityAI and put it into \ComfyUI\models\vae\), just in case a better, or mandatory, VAE appears for some models in the future. Restart ComfyUI.

Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4. Inside the workflow, you will find a box with a note containing instructions and specifications on the settings to optimize its use. It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server.

I built a free website where you can share & discover thousands of ComfyUI workflows -- https://comfyworkflows.com/.

Workflow. Aug 2, 2024 · Flux Dev. Seems very hit and miss; most of what I'm getting looks like 2D camera pans. K12sysadmin is for K12 techs. I think it was 3DS Max. Everything else is the same. WAS suite has some workflow stuff in its GitHub links somewhere as well.

The idea of this workflow is to sample different parts of the sigma_min, cfg_scale, and steps space with a fixed prompt and seed. In addition, I provide some sample images that can be imported into the program.

ComfyUI could have workflow screenshots, like the example repo has, to demonstrate possible usage and the variety of extensions. A1111 has great categories like Features and Extensions that simply show what the repo can do, what addons are out there, and all that stuff.

Starting workflow. This is just a simple node built off what's given and some of the newer nodes that have come out. You can't change clipskip and get anything useful from some models (SD2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded. I found it very helpful. It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.
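The sigma_min/cfg_scale/steps sweep described above is just a Cartesian product over a small grid with the prompt and seed held fixed. A minimal sketch of generating those settings (the grid values and the settings-dict field names are illustrative, not a real ComfyUI API):

```python
import itertools

# Illustrative grid values -- pick ranges that suit your sampler.
SIGMA_MIN = [0.01, 0.03, 0.1]
CFG_SCALE = [4.0, 7.0, 10.0]
STEPS = [12, 20, 30]

def make_settings(prompt, seed):
    """Yield one settings dict per point in the grid, with the prompt
    and seed fixed so only the sampler parameters vary."""
    for sigma_min, cfg, steps in itertools.product(SIGMA_MIN, CFG_SCALE, STEPS):
        yield {"prompt": prompt, "seed": seed,
               "sigma_min": sigma_min, "cfg_scale": cfg, "steps": steps}

grid = list(make_settings("a portrait photo", seed=42))
print(len(grid))  # 27 combinations, all sharing one prompt and seed
```

Each resulting dict can then be patched into a queued workflow, which is what makes the comparison images directly comparable: only one axis of the space changes per image.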
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

You can then load or drag the following image into ComfyUI to get the workflow.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is. The workflow posted here relies heavily on useless third-party nodes from unknown extensions. Run any ComfyUI workflow with zero setup (free & open source).

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel.

I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture).

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. That being said, here's a 1024x1024 comparison also.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Mine do include workflows, for the most part, in the video description.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers, in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

If you want to post and aren't approved yet, click on a post, click "Request to Comment", and then you'll receive a vetting form. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. Table of contents.

This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. No LoRAs, no fancy detailing (apart from face detailing); just a base sampler and upscaler. But as a base to start from, it'll work. Breakdown of workflow content.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.? ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. The sample prompt as a test shows a really great result. Second pic. And it got very good results.

It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. I recently switched from A1111 to ComfyUI to mess around with AI-generated images.

Img2Img ComfyUI workflow. For your all-in-one workflow, use the Generate tab. ControlNet Depth ComfyUI workflow. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.
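The "chaining blocks" idea above (checkpoint loader, prompt, sampler) is concrete in ComfyUI's API-format JSON: every node has a class and inputs, and an input that is a ["node_id", output_index] pair is an edge to another node's output. The class names below are core ComfyUI nodes, but the exact wiring and values are a sketch, not a workflow exported from a real session.

```python
# Minimal sketch of an API-format graph: checkpoint -> prompts -> sampler.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",        # positive prompt
          "inputs": {"text": "a cosplayer, highly detailed",
                     "clip": ["1", 1]}},         # wired to loader's CLIP output
    "3": {"class_type": "CLIPTextEncode",        # negative prompt
          "inputs": {"text": "blurry, low quality",
                     "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
}
# Every ["id", index] pair is an edge; rearranging the graph is just
# rewiring these references, which is all the UI's noodles really are.
```

A full workflow would continue with a VAEDecode and SaveImage node, but the shape is the same: more entries, more edges.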
I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. Civitai has a few workflows as well. Just my two cents. Ignore the prompts and setup.

That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. You can find the workflow here and the full image with metadata here. The best workflow examples are through the GitHub examples pages.

This workflow by Antzu is a good example of prompt scheduling, which is working well in Comfy thanks to Fitzdorf's great work. (Same seed, etc., of course.) To make differences somewhat easier to see, the above image is at 512x512.

You can encode, then decode back to a normal KSampler with a 1.5 model using LCM, 4 steps, and 0.2 denoise to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. https://youtu.be/ppE1W0-LJas - the tutorial. Still working on the whole thing, but I got the idea down. And the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. This one by Nathan Shipley didn't use this exact workflow but is a great example of how powerful and beautiful prompt scheduling can be. Share, discover, & run thousands of ComfyUI workflows.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. The examples were generated with the RealisticVision 5.1 checkpoint. If you have any of those generated images in original PNG, you can just drop them into ComfyUI and the workflow will load. Hopefully this will be useful to you.

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but this can be changed to whatever you like. Step 2: Download this sample image. The example pictures do load a workflow, but they don't have a label or text indicating which version they are.
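Prompt scheduling of the kind praised above boils down to keyframed prompts: a mapping from frame numbers to prompt text, resolved per frame. This sketch shows only the lookup idea; the actual node syntax in scheduling extensions (FizzNodes-style) differs, so treat the data shape here as illustrative.

```python
def prompt_at(frame, schedule):
    """Return the prompt of the most recent keyframe at or before `frame`.
    `schedule` maps frame numbers to prompt strings."""
    keys = [k for k in sorted(schedule) if k <= frame]
    return schedule[keys[-1]] if keys else None

# Illustrative keyframes for a 90-frame animation:
schedule = {0: "a forest at dawn",
            30: "the same forest at noon",
            60: "a city street at night"}
print(prompt_at(45, schedule))  # the frame-30 prompt still applies
```

Real schedulers additionally interpolate conditioning between keyframes rather than switching hard, which is what makes the transitions look smooth.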
Here is an example of 3 characters, each with their own pose, outfit, features, and expression:
Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious.
Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use - img2img, ControlNet, upscale, and all. Surprisingly, I got the most realistic images of all so far. It's nothing spectacular, but it gives good, consistent results.

How to install and use Flux.1 with ComfyUI (for 12 GB VRAM, the max is about 720p resolution). Flux Hardware Requirements. Flux Schnell is a distilled 4-step model. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Ending workflow. I then just sort of pasted them together.

Jul 28, 2024 · Over the last few months I have been working on a project with the goal of allowing users to run ComfyUI workflows from devices other than a desktop, as ComfyUI isn't well suited to devices with smaller screens.

Flux.1 ComfyUI install guidance, workflow, and example. Overview of different versions of Flux.1. Upscaling ComfyUI workflow.

In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] Warning.

Step 3: Update ComfyUI.
Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options).
Step 5: Drag and drop a sample image into ComfyUI.
Step 6: The FUN begins!
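The three-character example above is the regional-prompting pattern: the canvas is split into side-by-side regions and each region gets its own prompt via area conditioning (nodes such as ConditioningSetArea). This sketch only computes the region geometry; no real ComfyUI calls, and the canvas size is an assumption.

```python
def region_boxes(width, height, n):
    """Split a canvas into n equal side-by-side regions, returned as
    (x, y, w, h) boxes -- one prompt region per character."""
    w = width // n
    return [(i * w, 0, w, height) for i in range(n)]

# Three characters side by side on an illustrative 1536x512 canvas:
print(region_boxes(1536, 512, 3))
# [(0, 0, 512, 512), (512, 0, 512, 512), (1024, 0, 512, 512)]
```

In practice you would feed each box, with its own encoded prompt, into the sampler's conditioning, and often add a whole-canvas background prompt so the regions blend at their seams.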
If the queue didn't start automatically, press Queue Prompt. An all-in-one workflow would be awesome. The video is just a screenshot of the workflow I used in ComfyUI to get the output files.

Hi everyone, I'm working on a project to generate furnished interiors from images of empty rooms using ComfyUI and Stable Diffusion, but I want to avoid using inpainting.

You can construct an image generation workflow by chaining different blocks (called nodes) together. SDXL default ComfyUI workflow. EDIT: For example, this workflow shows the use of the other prompt windows. How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. I put the workflow to the test by creating people with hands, etc. Only the LCM Sampler extension is needed, as shown in this video.
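Pressing Queue Prompt in the UI corresponds to a POST against the ComfyUI server's /prompt endpoint, which is also how workflows get queued programmatically. A hedged sketch: the default local address and port are assumed, and the graph must be the API-format JSON (saved via "Save (API Format)" in the UI), not the regular workflow file.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # assumed default local ComfyUI address

def build_payload(graph, client_id="example"):
    """Wrap an API-format graph the way POST /prompt expects:
    {"prompt": <graph>, "client_id": <id>}."""
    return json.dumps({"prompt": graph, "client_id": client_id}).encode("utf-8")

def queue_prompt(graph):
    """Queue the graph on a locally running ComfyUI server and return
    the server's response (which includes the assigned prompt id)."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With a server running, `queue_prompt(graph)` behaves like clicking Queue Prompt; with Auto Queue enabled (as in Step 4 above), the UI does the equivalent for you on every change.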
