ComfyUI public link (Reddit)
-
I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, and the refiner?

Generally it's fine to update xformers; as for PyTorch, you can always try an update (a separate install per version is recommended, per the link below), and if there are any problems, roll it back to the version you were on.

Enjoy a comfortable and intuitive painting app. So far this includes 4 custom nodes for ComfyUI that can perform various masking functions like blur, shrink, grow, and mask-from-prompt.

Ah, that might be it. It allows you to put LoRAs and embeddings directly in your prompts. IPAdapter for all.

New ComfyUI Node: Better Image Dimensions. I got some exciting updates to share for One Button Prompt.

POD mockup generator using SDXL Turbo and IPAdapter Plus in ComfyUI. Results and speed will vary depending on the sampler used.

You should be able to drag it anywhere. Workflows are much more easily reproducible and versionable.

4: Once you're in the correct directory (it will say the directory before the typing area for the command), you can enter the command "webui.bat --share", and this will launch the webui with the share argument and generate a public link for your session.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not. Looking forward to seeing your workflow. I will give it a try when it is available; I love staying current with new workflows. And the new interface is also an improvement, as it's cleaner and tighter.

Public Service Announcement: Unveiling the Game-Changing ComfyUI Update. Just upload the JSON file, and we'll automatically download the custom nodes and models for you, plus offer online editing if necessary.

Here is ComfyUI's workflow: Checkpoint: First, download the inpainting model Dreamshaper 8-inpainting and place it in the models/checkpoints folder inside ComfyUI. Then switch to this model in the checkpoint node.

Close ComfyUI if it is running. Go to the ComfyUI root folder, open CMD there and run:
python_embeded\python.exe -m pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless
then reinstall a single build (the commonly recommended pin is opencv-python==4.7.0.72):
python_embeded\python.exe -m pip install opencv-python==4.7.0.72

ComfyUI basics tutorial. An online runner can be useful for people without the latest high-end GPUs who want to try the new model.

Took my 35-step generations down to 10-15 steps. No quality loss that I could see after hundreds of tests. You can now use half or less of the steps you were using before and get the same results.

Once installed, download the required files and add them to the appropriate folders.

Step one: Hook up IPAdapter x2. Step two: Set one to compositional and one to style weight. Step three: Feed your source into the compositional and your style into the style.

The repo's README.md gives a more thorough overview, but essentially the extension adds a menu item `Inspect Image Metadata` accessible in a few places: by right-clicking on an image in the explorer pane, from the image's title tab, or by opening the command palette and typing `> Inspect Image Metadata` (this last method ...).

Latent Vision just released a ComfyUI tutorial on YouTube. VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

Or you could use a photo editor like GIMP (free), Photoshop, or Photopea, make a rough fix of the fingers, and then do an img2img pass in ComfyUI at low denoise (0.3-0.6); you can then run it through another sampler if you want to try and get more detail.

If you do that, try running Comfy with the disable-xformers option and see if it runs any better.

To access InvokeAI over the internet: in the invokeai.yaml (next to the invoke.bat), set "host" to "0.0.0.0". In your router, activate TCP port forwarding for the port that is set in the invokeai.yaml (default: 9090), then connect with [your ip]:9090. Press go.
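On the public-access theme above: if you'd rather not rely on a webui --share flag, a tunnel works for any local UI, ComfyUI included. Below is a minimal sketch using the pyngrok wrapper around ngrok; it assumes `pip install pyngrok`, a configured ngrok auth token, and ComfyUI's default port 8188 (adjust if you launched with --port).

```python
from pyngrok import ngrok  # assumes: pip install pyngrok, plus an ngrok auth token

# ComfyUI listens on 8188 by default; change this if you used --port
tunnel = ngrok.connect(8188, "http")
print(f"Public URL (share only with trusted people!): {tunnel.public_url}")

input("Press Enter to close the tunnel...")
ngrok.kill()  # tears down all tunnels started by this process
```

The usual caveat applies: anyone holding that URL can queue prompts on your GPU, so only hand it to people you trust.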
I have tried adding a "load image" node, converting the ...

A bit late, but it's a top result on Google: you need to run a proxy in front, or use something like ngrok as described above. However, I don't like ngrok and have found Cloudflare Zero Trust to be much more friendly.

Go to the ComfyUI Manager, click "Install Custom Nodes", and search for ReActor. Or try to install the ReActor node directly via ComfyUI Manager.

Hello, I am trying to run Stable Diffusion with a public link. While this is not for image generation (it's the TTS model Tortoise), it's still using Stable Diffusion.

The latest ComfyUI update introduces the "Align Your Steps" feature, based on a groundbreaking NVIDIA paper that takes Stable Diffusion generations to the next level.

I don't know if you clicked on this link, but if you did, you'll see that it's a tutorial for ComfyUI.

May 2, 2023: It will give you a public URL to use.

It's not a big leap from using SDXL to SD 1.5 in ComfyUI. Now you can manage custom nodes within the app.

Upscaling to x2 and changing the sampler/scheduler improved matters somewhat.

ComfyUI is like a car with the hood open or a computer with an open case: you can see everything inside, and you are free to experiment with it, rearrange things, or add/remove parts depending on what you are trying to do. After that you would want to employ tricks within it.

GitHub Repo.

First, create a workflow consisting only of shared checkpoints and LoRAs. Then, build all other workflows as variations based on this initial workflow. This way, you can minimize unloading of models during workflow transitions. Be cautious and make sure to keep nodes that are not used in other workflows as they are, rather than just connecting ...

Use the image of the face you generated in the IP adapter in the load image box. Wire in an IP adapter to the face detailer KSampler.

Nodes in ComfyUI represent specific Stable Diffusion functions. Input images can be any AI-generated art or your own.

From my limited knowledge, you could try to mask the hands and inpaint them after (it will either take longer or you'll get lucky).

A search of the subreddit didn't turn up any answers to my question.

I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image. For example, I can load an image, select a model (4xUltraSharp ...

I don't know why you don't want to use Manager. If you install nodes with Manager, a new folder is created in the custom_nodes folder; if something is messed up after installation, you sort the folders by modification date and remove the last one you installed.

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn ways to use both.

I always run in a window.

Specifically, one of these 3 (if not using xformers), or installing xformers (it'll run by default if installed), are all worth trying. Type --cpu after main.py when launching it in Terminal; this should fix it.

For StableDiffusion/ComfyUI users who don't want to pay Adobe or who can't justify the monthly subscription rate, I'd encourage you to look into Krita 5.2 and Acly's Krita AI Diffusion plugin.

This is how I quickly share my ComfyUI with colleagues without restarting or anything. Here is the .json displaying the action: WkFl_As_It_Stands.json. That's it!

For example, I enjoy mixing different models and seeing the results; with Comfy I just select a few models, then let Comfy generate random weights for each merge and see the results.
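If you want to try that random-weight merging idea outside the graph, a plain linear merge of two checkpoints is only a few lines. A minimal sketch, not anyone's actual node: the file paths and the "state_dict" wrapper key are assumptions (some checkpoints are bare state dicts), and .safetensors files would need safetensors.torch.load_file instead of torch.load.

```python
import random
import torch

# Hypothetical paths; some checkpoints wrap weights under "state_dict",
# others are bare dicts - adjust the indexing accordingly
a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]

w = random.random()  # random merge weight for this round, in [0, 1)
merged = {
    k: w * a[k] + (1.0 - w) * b[k]
    for k in a
    if k in b and a[k].shape == b[k].shape and a[k].is_floating_point()
}

torch.save({"state_dict": merged}, f"merged_{w:.2f}.ckpt")
print(f"Saved merge with weight {w:.2f} ({len(merged)} tensors)")
```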
Acly also has an AI Tools mod for Krita that lets you select with seg...

Hello all! I'd like to present to you the alpha of our company (MineTheFutr.com), where you can earn USDC for your idle GPU - but the big thing I want to share HERE is our container, making it simple for anyone with an NVIDIA card to run ComfyUI. Edit: the container does not have to hook up to our GPU network, to be clear!

ComfyUI installed and running! Such a beautiful interface! Every time I get an application working and starting properly, I feel like a master hacker! HAHAHA! I'll now continue reading the links you folks sent earlier, including the YouTube link too.

It then creates bounding boxes over each mask and upscales the images, then sends them to a combine node that can perform color transfer and then resize and paste the images back into the original. Not bad.

If you're not using --force-fp16, use this.

It now officially supports ComfyUI, and there is now a new Prompt Variant mode.

Install ComfyUI Manager. ComfyUI runs in a browser - try running it full screen or hide those bars from the browser settings menu.

"ComfyUI workflows": no workflows in the video, just self-promotion of a webpage. This is all free information; stop trying to generate junk traffic.

Olivio Sarikas also has an excellent tutorial series.

If you run "python C:\Users\***\ComfyUI\main.py -h", it will list all command-line options.

As far as I understand, as opposed to A1111, ComfyUI has no GPU support for Mac. You have to run it on CPU. It will also be a lot slower this way than A1111, unfortunately.

Uses less VRAM than A1111.

Hi, I'm looking for input and suggestions on how I can improve my output image results, using tips and tricks as well as various workflow setups. It's the same issue as a beginner photographer with an expensive camera.

Best ComfyUI Workflows, Ideas, and Nodes/Settings.

If you have favorite settings, you can turn them into templates for a quick start.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP text encoder as indicated in the diagram I have here from the GitHub page. The diagram doesn't load into ComfyUI, so I can't test it out.

Could you please elaborate how I can use ngrok with ComfyUI?

The first thing I always check when I want to install something is the program's GitHub page. There are always READMEs and instructions. For example, say I want to install ComfyUI: I go to the ComfyUI GitHub and read the specification and installation instructions. Then I may discover that ComfyUI on Windows works out of the box only with Nvidia cards, and AMD needs a different setup.

Lightning LoRA (and/or models with ...

Transform your ComfyUI workflows into fully functional apps on https://cheapcomfyui.com.

ComfyUI, on startup, recognizes the A100 and its VRAM.

Both are great, but with Comfy you have way more flexibility; you can do probably anything, you just need to figure out how.

The title explains it: I am repeating the same action over and over on a number of input images, and instead of having to manually load each image and then press "Queue Prompt", I would like to be able to select a folder and have Comfy process all input images in that folder.
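For that folder question just above: ComfyUI's built-in HTTP API can do this without any manual clicking. A minimal sketch under a few assumptions: the workflow was exported via "Save (API Format)" as workflow_api.json, the server runs at the default 127.0.0.1:8188, and node id "10" is your Load Image node (check your own export for the real id). The /upload/image and /prompt endpoints are the ones ComfyUI's server exposes as of recent versions; verify against your install.

```python
import json
from pathlib import Path

import requests  # assumes: pip install requests

COMFY = "http://127.0.0.1:8188"                      # default ComfyUI address
workflow = json.loads(Path("workflow_api.json").read_text())
LOAD_IMAGE_NODE = "10"                               # hypothetical node id - check your export

for img in sorted(Path("inputs").glob("*.png")):
    # Upload the file so the server's Load Image node can find it
    with img.open("rb") as f:
        requests.post(f"{COMFY}/upload/image", files={"image": f}).raise_for_status()
    # Point the Load Image node at the uploaded file and queue the graph
    workflow[LOAD_IMAGE_NODE]["inputs"]["image"] = img.name
    requests.post(f"{COMFY}/prompt", json={"prompt": workflow}).raise_for_status()
    print("queued", img.name)
```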
Plus a quick run-through of an example ControlNet workflow.

We found quite a few problems with ComfyUI, so we put a lot of effort into optimizing and developing Comflowy (a tutorial plus a better version of ComfyUI), and making it completely open source.

Dude, this is awesome, and kind of inspiring.

Better Image Dimensions is my first developed node for ComfyUI - starting simple to get a feel for how nodes work, and solving a problem I had right away with ComfyUI. Read the node's installation information on GitHub.

Next, install RGThree's custom node pack from the Manager.

The basic workflow is already set up for SD 1.5.

If not, go into settings and see if the option to "remember" (or lock) the manager menu is on. If it is, turn it off.

In this workflow, we try to explore one concept: making T-shirt mockups with some cool input images, using the IP adaptor to convert them into final images.

Input sources: will load images in two ways, 1) direct load from HDD, 2) load from a folder (picks the next image when one is generated). Prediffusion: this creates a very basic image from a simple prompt and sends it as a source.

Watch Scott. I haven't found a close second to the way Scott breaks you into the ComfyUI way of doing things.

Beginners' guide for ComfyUI. We discussed the fundamental ComfyUI workflow in this post. You can express your creativity with ComfyUI. #ComfyUI #CreativeDesign #ImaginativePictures #Jarvislabs.ai

Making Horror Films with ComfyUI - tutorial + full workflow.

Good concept for the tools available to you. I have attached the images and workflow. The ControlNet selector is open so you can see which are included.

To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.

Introducing ComfyUI-Magic. Thank you!

If you see progress in the live preview but the final output is black, it is because your VAE is unable to decode properly (either due to a wrong VAE or memory issues); however, if you see black all throughout the preview, it is an issue with your checkpoint. Try an alternate checkpoint or a pruned version (fp16) to see if it works.

This feature delivers significant quality improvements in half the number of steps, making your image generation process faster and ...

Troubleshooting steps taken:
- Tested local connection: successful on the home PC.
- Checked port forwarding: verified on the router.
- Firewall settings: configured to allow port 8765.
- Scripts: WebSocket server (the surviving fragments: import asyncio, import websockets, and a handler async def echo(websocket, path)).
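To make that server script whole: the scattered import and handler fragments above assemble into a standard websockets echo server. A minimal sketch, assuming `pip install websockets` and the port 8765 mentioned in the firewall step; note that newer versions of the websockets library pass only the connection object to the handler, so you may need to drop the `path` parameter there.

```python
import asyncio
import websockets  # assumes: pip install websockets

# Older websockets versions call handler(websocket, path);
# on recent versions, define this as: async def echo(websocket)
async def echo(websocket, path):
    async for message in websocket:
        await websocket.send(message)  # echo every message back

async def main():
    # Bind to 0.0.0.0 so machines outside localhost can connect,
    # on the same port that was forwarded and allowed above
    async with websockets.serve(echo, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```

If the local test passes but remote connections fail, the usual suspects are binding to 127.0.0.1 instead of 0.0.0.0, or testing with the LAN address instead of the public one.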
I teach you how to build workflows rather than just use them. I ramble a bit, and damn if my tutorials aren't a little long-winded - I go into a fair amount of detail, so maybe you like that kind of thing.

There's plenty of things to try.

To OP: don't be a nerd.

Automatic1111 for multiple workflows and extensions. Auto1111 gives you tons of tools ready out of the box.

Need help: ComfyUI cannot reproduce the image. I am trying to reproduce an image which I got from Civitai. I have installed all missing models and could get the prompt queued, but the resulting image is not something that I expected. Anyone have any ideas?

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Top 10 ComfyUI Workflows To Use in 2024.

Hope this helps you.

I tried torch with CUDA 11.8 and 12. I also tried pointing it towards the correct graphics card with --cuda-device DEVICE_ID, but that didn't help. Also, using --disable-cuda-malloc didn't really help; it's executing the prompt, but it takes a very long time to even generate one step.

There's an SD1.5 and an SDXL version.

ComfyUI Portrait Master. I've published my custom ComfyUI node for semi-automatic generation of portrait prompts on GitHub. The prompt is controlled through selectors and sliders. In the repository you'll find the installation instructions and a description of all the available settings.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones.

It probably depends on how well the hand-drawn/painted character is drawn, how clearly the anatomy is shown, and in what style it's been depicted. To find out, simply drop your image on an Openpose ControlNet and see what happens. If you get a repeatable Openpose skeleton from it, you're good to go. If, however, what you want to do is take a 2D ...

And when you save an image, the plugin settings for the image are included.

Upscale an image using a model, to a certain size.

Play with Stable Cascade with your prompts online. With a compression of 42, you get better results with the following heights and widths: 672, 840, 1008, 1176, 1344, 1512, 1680, 1848, or 2016 (up to 2016x2016).
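Those sizes aren't magic numbers: they step by 168, which is the stated compression factor 42 times 4, running from 672 up to 2016. A one-liner to regenerate the list, assuming that pattern is all there is to it:

```python
# Multiples of 42 * 4 = 168, from 672 up to 2016 (the list quoted above)
sizes = [168 * k for k in range(4, 13)]
print(sizes)  # [672, 840, 1008, 1176, 1344, 1512, 1680, 1848, 2016]
```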
ComfyUI is much better suited for studio use than other GUIs available now.

Hopefully, some of the most important extensions, such as ADetailer, will be ported to ComfyUI. And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111 WebUI. Most of them already are if you are using the dev branch, by the way.

ComfyUI Style Model: Comprehensive Step-by-Step Guide From Installation ... If you follow the video to YouTube, there's a link to a redo of this with sound working.

Tried this on an image I generated in ComfyUI. The default Ultimate Upscaler settings delivered a very poor outcome.

Worked wonders with plain Euler on the initial gen and DPM++ 2M on the second pass for me.

- The areas should overlap a little; this helps ensure that the whole thing is not staggered, but continuous.
- Gradually reduce area weights until they still affect the image, but do not break the cohesion.
- As a final touch, do a 1.5x upscale and another round of KSampler from a roughly 50% ...

Another 5% Photoshop would go a long way though; that is the majority of the criticism I'm seeing of this, despite your hard work.

New Tutorial: How to rent 1-8x GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.). Tutorial on accessing your local ComfyUI server over the internet from anywhere, securely.

ComfyUI is also trivial to extend with custom nodes.

When I just open "start.bat", which launches the webui, in the command lines I get the usual "Could not create share link."

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Something like this.

He makes really good tutorials on ComfyUI and IP Adapters specifically. Check out his channel and show him some love by subscribing.

Dear all, I am new to ComfyUI and just installed it.

ComfyShop has been introduced to the ComfyI2I family. ComfyShop phase 1 is to establish the basic painting features for ComfyUI.

Don't forget to share your link only with trusted sources! (Edit: formatting.)

You are using IP adapter for generating clothes over a mask, which is really unreliable. If you want consistent clothes from image to image, it really helps to set up a Reference Only latent input to your main KSampler, instead of a blank latent image.

The graphic style ... allows you to choose the resolution of all outputs in the starter groups, and will output this resolution to the bus.

Great job! I do something very similar and find creating composites to be the most powerful way to gain control and bring your vision to life.

It's installable through ComfyUI and lets you have a song or other audio files drive the strengths in your prompt scheduling. It uses the amplitude of a frequency band and normalizes it to strengths that you can add to the Fizz nodes. Here's a basic example of using a single frequency band range to drive one prompt: Workflow.
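To make the band-to-strength idea concrete, here is a minimal offline sketch of the same technique (not the plugin's actual code): slice the audio into one chunk per animation frame, take the FFT magnitude inside one band, and normalize to 0..1. The file name, the 12 fps frame rate, and the 60-250 Hz bass band are all assumptions; check your scheduler node for its exact keyframe syntax.

```python
import numpy as np
import soundfile as sf  # assumes: pip install numpy soundfile

audio, sr = sf.read("song.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)       # mix stereo down to mono

fps = 12                             # animation frame rate (assumption)
hop = sr // fps                      # audio samples per animation frame
lo, hi = 60, 250                     # band to follow, in Hz (assumption: bass)

strengths = []
for start in range(0, len(audio) - hop, hop):
    chunk = audio[start:start + hop]
    mags = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / sr)
    band = mags[(freqs >= lo) & (freqs <= hi)]
    strengths.append(band.mean())

strengths = np.asarray(strengths)
strengths /= strengths.max() + 1e-8  # normalize to 0..1

# Emit "frame:(value)" keyframes; adapt to your scheduler's syntax
print(", ".join(f"{i}:({s:.2f})" for i, s in enumerate(strengths)))
```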
This won't require editing batch files or anything.

MOCKUP generator using SDXL Turbo and IPAdapter Plus - workflow.

Fooocus / Fooocus-MRE / RuinedFooocus: quick image generation and a simple, easy-to-use GUI (based on the Comfy backend). Lacks the extensions and other functionality, but is amazing if all you need to do is generate images.

This pack includes a node called "power prompt". The power prompt node replaces your positive and negative prompts in a Comfy workflow.

I have a wide range of tutorials with both basic and advanced workflows. There are tutorials covering upscaling, ...

Model download links + custom node links for all workflows. Run workflows on the site with ZERO setup (no model downloads, custom node setup, etc.). So far, we've only rolled this out to a small set of the existing workflows, and we'll be enabling this for all workflows very soon!

The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face.

Not only was I able to recover a 176x144-pixel, 20-year-old video with this; in addition, it supports the brand-new SD15-to-Modelscope nodes by ExponentialML, an SDXL Lightning upscaler (in addition to the AD LCM one), and a SUPIR second ...

Image Realistic Composite & Refine ComfyUI Workflow.

ComfyUI Lighting Control. Hi guys, my question is whether there is any way to control the light in ComfyUI. I mean that in a scene, I precisely set the angle of the sun to where the sun should shine, at what angle, etc. I'm an ArchViz artist, and it would be important for me to be able to control the lighting.

Prompt: Add a Load Image node to upload the picture you want to modify.

We've built a quick way to share ComfyUI workflows through an API and an interactive widget. The goal is to enable easier sharing, batch processing, and use of workflows in apps/sites. Linked is the article I wrote on Civitai if you want more details, as well as the repo if you want to give it a try.

The concepts are all the same.

I have no affiliation to the channel; I just thought the content was good.

If the IP that you found starts with 192, you got the wrong one (that's the LAN address, not your public one).
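A quick way to see which address is which: the sketch below prints your LAN address (the 192.168.x.x / 10.x.x.x kind). Your public IP is what your router or a what-is-my-IP service reports, and that, plus port forwarding, is what remote users need. This assumes nothing beyond the Python standard library.

```python
import socket

# "Connect" a UDP socket to a public address - no packets are actually
# sent, but the OS picks the local interface it would route through
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    lan_ip = s.getsockname()[0]

print(lan_ip)  # e.g. 192.168.1.23 - the LAN address, not the public one
```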