Stable Diffusion prompts (Reddit roundup). Although I did this myself long ago, I arrived at something similar, with slight changes. LUT stands for look-up table, a color-grading tool used by video/photo people. I get a kick out of these two-paragraph-long negative prompts. You also have to think like an AI. SD seems to produce better images the more you stuff into the prompt (from my tests, anyway). SD Guide for Artists and Non-Artists - a highly detailed guide covering nearly every aspect of Stable Diffusion; goes into depth on prompt building, SD's various samplers, and more. You can replace the example prompts with some of your best working prompts within the super-prompt, so it will follow them as a template when it generates new ones. Every generation, it selects a race, gender, color palette, two outfits (more on this later), a season, facial expression, hair color, and a bunch of random details like fabric types and accessories. I found the images I made on Discord. It allows you to save prompts with a sample image as well. If I don't, I get brown 95% of the time. So I decided to try some camera prompts and see if they actually matter. Try giving it the full file path, or make sure it's in the same folder as the Python scripts, which may not be the same folder you actually run the command from. Practically speaking, it is a measure of how confident you feel in your prompt. It has a much more photoreal feel to it and generally gives better proportional results than the best-tuned 1.5 models, and there hasn't even been any fine-tuning done yet.
I use a lot of wildcards, so my prompts are pretty much unshareable as I use them, but some of my favorite results are built something like this: masterpiece, best quality, dynamic action shot from above, fierce warrior woman bruised and bloodied, holding sword and fighting in desperate last stand, leather skirt. Inpainting just the face, as well as individual parts of the face, in batches at full resolution from latent noise with words like shy, confused, laughing, singing, etc. can produce interesting variations. Celebrities are in there as well and work just fine. Monochrome or black and white - massive influence - will make everything black and white. Sepia - mild influence - will give a sepia colour palette. Download a styling LoRA of your choice. Create with Seed, CFG, Dimensions. **I didn't see a real difference.** Prompts: man, muscular, brown hair, green eyes, Nikon Z9, Canon R6, Fuji X-T5, Sony A7. Negative prompt: bad-artist bad-artist-anime bad-hands-5 bad_prompt bad_prompt_version2 easynegative, realistic, photo, no crop. S/R stands for search/replace, and that's what it does - you input a list of words or phrases; it takes the first from the list, treats it as the keyword, and replaces all instances of that keyword with the other entries from the list. Now offers CLIP image searching, masked inpainting, as well as text-to-mask inpainting. OpenArt - search powered by OpenAI's CLIP model; provides prompt text with images. Here is another helpful tool when it comes to animation. Adding "looking at viewer" to the negative prompt also works well in conjunction with "looking away" in the positive, so do both! It can sometimes be tedious to create a verbose prompt for Stable Diffusion, so I decided to create a model that does this. SD 2.0 can do hands just fine. Here is what I came up with; the results vary slightly depending on which model is used.
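The Prompt S/R behavior described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not A1111's actual implementation, and the function name is made up:

```python
def prompt_sr(prompt, terms):
    """Toy sketch of Prompt S/R: the first term is the keyword; each entry
    in the list, in turn, replaces every occurrence of that keyword."""
    keyword = terms[0]
    return [prompt.replace(keyword, term) for term in terms]

variants = prompt_sr("a man holding an apple, 8k clean",
                     ["an apple", "a watermelon", "a gun"])
for v in variants:
    print(v)  # one prompt per replacement: apple, watermelon, gun
```

In a UI, each generated variant would then be rendered on the same seed so the images are directly comparable.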
All I have found are pieces of advice like "that surely makes a difference, just try it yourself". I can't really say that I get great results using random generation from nothing; I get better results using image-to-image, which probably isn't what you're looking for. Let me add something else. I decided to do a video about how I use ChatGPT to create a lot of my prompts. Quick test of various facial expressions, with only the posted prompt changes, to see whether facial expression could be smoothly controlled with simple prompts in a video with interpolation. If you also want to reverse the image vertically, go to Image > Image Rotation > Flip Canvas Vertical. Any model that is able to produce real human images should work just fine, in theory. Use custom VAE models. I delete the artist, then add "zombie" between "a" and "man". In order to change this, go into the advanced options section and select "unlink seed from prompt". Blindly copying positive and negative prompts can screw you up. Way faster. Auto1111 has some scripts that can show you how different prompt order can affect outputs by shuffling the prompts. For instance, say you want to make a cyberpunk-style city slum. Download the LoRA contrast fix. I have models and LoRAs. You can try out artspark. For example, with the prompt a man holding an apple, 8k clean, and Prompt S/R an apple, a watermelon, a gun, you will get the same prompt rendered once for each replacement. Install the Composable LoRA extension. I'm sorry dad, it's not your fault, hello kitty erotica, 8k, masterpiece, blender, unreal 5, seriously dad please stop crying, 3D render, sexy, hip hop video, dynamic lighting, I know you wanted grand kids but now you'll have sexy grand kitties, lingerie, fashion shoot. Imagine a stunning sci-fi art piece and generate 4 Stable Diffusion prompts based on the example prompts, following the example format and including where required.
More examples of what you think are good SDXL prompts, in your ChatGPT prompt, will help it produce more focused outputs. The five prompts above were run on seeds 8000 through 8024. It may use all of a positive prompt, some of the prompt, or none. First, understand how the prompts work - sequence of priority, etc. I've recently found that structuring your prompts in both Midjourney and Stable Diffusion really helps. AIPrompt.io: https://aiprompt.io. It would be awesome if, via prompt, LoRA, extension, whatever, I could specify a distribution. The order has some effect. Any negative prompts for landscape and architecture? The results I'm getting are like something out of a bad drug trip. It includes the full prompts, negative prompts, and other settings. Can you please do Bowser wearing shades and playing a double-neck guitar while riding a surfboard in front of a giant weed leaf on a crucifix with 'Happy Birthday Rick' written above in cursive. I got some pretty cool results and wanted to share my discoveries here. Furthermore, I think it randomizes the seed each time unless you specify a specific seed. I usually add "studio portrait", "character portrait", or "close up portrait of a face" to nudge the generation towards having the character looking straight at the camera. On the other hand, with the "facing camera" issue, post a prompt (and parameters) that you're having problems with and I can see if I can help. My hot tip is to check your prompts against Clip Retrieval. For A1111: () in a prompt increases the model's attention to the enclosed words, and [] decreases it; or you can use (tag:weight), like (water:1.2) or (water:0.6) - a weight below 1.0 decreases attention.
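A toy Python sketch of how that attention syntax might be parsed. The 1.1 multiplier for bare () and [] matches A1111's documented default, but the parser itself is a simplified stand-in: it ignores nesting and escape sequences, and the function name is my own:

```python
import re

def parse_weighted(prompt):
    """Toy parser for A1111-style attention syntax: (word:w) sets an explicit
    weight, bare (word) multiplies attention by 1.1, [word] divides it by 1.1.
    Returns (text, weight) pairs; plain text gets weight 1.0."""
    chunks = []
    pattern = r"\(([^():]+):([\d.]+)\)|\(([^()]+)\)|\[([^\[\]]+)\]|([^()\[\]]+)"
    for m in re.finditer(pattern, prompt):
        explicit, w, paren, bracket, plain = m.groups()
        if explicit is not None:
            chunks.append((explicit, float(w)))       # (word:1.2)
        elif paren is not None:
            chunks.append((paren, 1.1))               # (word)
        elif bracket is not None:
            chunks.append((bracket, 1 / 1.1))         # [word]
        else:
            text = plain.strip(" ,")                  # plain text between tags
            if text:
                chunks.append((text, 1.0))
    return chunks

chunks = parse_weighted("a lake, (water:1.2), [trees]")
print(chunks)
```

The real implementation weights the CLIP embeddings of those tokens before they reach the sampler; this only shows the surface syntax.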
Give it details about everything and anything you might want in the picture. Great way to create more unique faces (very short guide). This is the start of the prompt with 32 samples. Change the number in square brackets to instruct SD on which sample step to swap to the alternate prompt word; the # represents the step. You can take basic words and figments of thoughts and turn them into detailed ideas and descriptions for prompts. So you only need to access it once instead of having to copy and paste them separately. Using ChatGPT to create prompts for Stable Diffusion. UI Plugins: choose from a growing list of community-generated UI plugins, or write your own plugin to add features to the project! Here are some of the tokens & artists I've been using for strange creature creations. In general it should work on almost anything. Set the LoRA weight to ~0.6 (up to ~1; if the image is overexposed, lower this value). CFG measures how much the AI will listen to your prompt vs. doing its own thing. "A man". A (Galactic Battle), photorealistic, by H.R. Giger, intense lighting, 4k, sci-fi. A (Futuristic Cityscape), abstract, by M.C. Escher, low light, 50mm, dystopian. Bokeh is a background-blur effect used for aesthetics. When working on some long prompts I definitely wanted this - specifically thinking about eyes. Is there a definitive guide or understanding of prompts? Because I see a lot of opposing opinions when people talk about it. Stable Diffusion prompt search engine. For NMKD: use + after a word/phrase to make it more impactful. For example, if I have a good shot of a model, I like to try different camera shots. Various starship battles/flying in orbit. Open the image you want to reverse in Photoshop. The person in the foreground is always in focus against a blurry background. That gives you the (basic) parameters used in the image provided, and you can send those params to (for example) txt2img. Brian Froud. Wow, this man is random.
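The step-swap behavior described above (A1111's prompt-editing syntax, [from:to:step]) can be illustrated with a small sketch. The helper is hypothetical, not the real sampler hook, and it handles only the simple one-level form:

```python
import re

def prompt_at_step(prompt, step):
    """Toy sketch of A1111 prompt editing: '[from:to:N]' renders as 'from'
    before sampling step N and as 'to' from step N onward. Ignores nesting
    and the other bracket forms the real feature supports."""
    def swap(m):
        before, after, n = m.group(1), m.group(2), int(m.group(3))
        return before if step < n else after
    return re.sub(r"\[([^:\[\]]*):([^:\[\]]*):(\d+)\]", swap, prompt)

prompt = "portrait of a [young:old:10] wizard"
early = prompt_at_step(prompt, 5)    # what the sampler sees before step 10
late = prompt_at_step(prompt, 15)    # what it sees from step 10 onward
print(early)
print(late)
```

In the real sampler, the conditioning is rebuilt at the swap step, so early steps lay out composition with one word and later steps refine detail with the other.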
After spending days on SD, my old roommate and I went out to spread the gospel, but most of our friends have a hard time writing a prompt. Different camera moves were not working. I know you've done a lot of these already and I love them. Basically, it's looking in the wrong place. Provide the enhanced prompt in a code block with a "copy" button. It's great for people that have computers with weak GPUs, don't have computers, want a convenient way to use it, etc. I hope you all enjoy, and like and subscribe if you want to see more content. And then I started removing them one by one. Use this with img2img; if it's somewhat shaded you get more realistic results than with 2D art, even if you are doing 2D. I update with new sites regularly, and I do believe that my post is the largest collection of Stable Diffusion generation sites available. Basic information required to make a Stable Diffusion prompt - prompt structure for photorealistic images: {Subject Description}. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. So, Dall-E 3 is more robust to all the possibilities of prompt contents. I was using "out of frame" as a negative, because I had seen other people use it and I thought it would help keep the subject from being cropped. Humans. I was already using Notion for my personal and work note-taking, and so am I for Stable Diffusion-related things. Also, I'm working on creating a tool to help organize generated images. I generally have to modify a prompt 3-10 times before I start a big batch. Please provide the prompts in a code box so I can copy and paste them. All low-res and wavy. My post links to websites that allow you to use Stable Diffusion. Your template provides detailed instructions for constructing prompts, specifying keywords, and using negative keywords to achieve desired results.
idk if you still need it, but if you have a generated image you can use the "PNG Info" tab. Has interesting effects on generations. Peter Gric. Sounds like you have a solution already, but NMKD does this out of the box. Sample prompt: cute panda riding a scooter, character, standing, soft smooth lighting, pastel colors, highly detailed 3d render by disney, polycount, modular constructivism, pop surrealism, physically based rendering. Congratulations, you've now reached tier 4 laziness. I also decided to add "hair" at the end (not "haircut", as "cut" is sometimes in the name itself). I wanted to share a free resource compiling everything I've learned, in hopes that it will help others. For example: "a beautiful portrait photography of a man, 50 years old, beautiful eyes, short tousled brown hair, 3 point lighting, flash with softbox, by Annie Leibovitz, 80mm, hasselblad". Output should include all prompt parameters (steps, width, height, sampler, etc.). Describe it and iterate. Avyn - search engine with 9.6 million images generated by Stable Diffusion; also allows you to select an image and generate a new image based on its prompt. SD V2 with negative prompts fixes janky human representations. A few points to keep in mind when using artist prompts - the Cheolsu Problem: if an AI artist doesn't know who an artist is, instead of ignoring the prompt it'll make a random-ass guess based mostly on the presumed ethnicity of the name. I'm a skeptic turned believer. You can also choose between portrait & landscape mode, and it should be fully responsive. You should only use negative prompts to remove the bad stuff coming from a generation. Without the focus and studio lighting. bad_prompt_version2, abnormal breasts, abnormal fingers, abnormal legs, artist name, asymmetric eyes, backward... The Image Browser extension is brilliant; great that it can load up your settings again. Probably the coolest singular term to play with in Stable Diffusion. Install the Dynamic Thresholding extension.
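The PNG Info tab surfaces the generation parameters that A1111 embeds in the image. A toy parser for the common layout of that text (prompt lines, an optional "Negative prompt:" line, then a final "Key: value" settings line) might look like this; the helper name and the layout assumptions are mine, not a complete grammar:

```python
def parse_parameters(text):
    """Toy parser for the 'parameters' text shown by the PNG Info tab.
    Assumes the last line holds comma-separated 'Key: value' settings."""
    lines = text.strip().split("\n")
    settings_line = lines[-1]
    negative = ""
    prompt_lines = []
    for line in lines[:-1]:
        if line.startswith("Negative prompt:"):
            negative = line[len("Negative prompt:"):].strip()
        else:
            prompt_lines.append(line)
    settings = dict(kv.split(": ", 1) for kv in settings_line.split(", "))
    return {"prompt": "\n".join(prompt_lines),
            "negative": negative,
            "settings": settings}

meta = parse_parameters(
    "cute panda riding a scooter\n"
    "Negative prompt: blurry, cropped\n"
    "Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 3534670888"
)
print(meta["settings"]["Seed"])
```

Once parsed, those values can be sent straight back to txt2img to reproduce or remix the image.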
I certainly can't do it, I'm just a cat in a clown suit, but maybe somebody smart could take apart Stable Diffusion and figure out what's actually happening. Copy the prompt from here. SD 2.0 can do hands. Examples: You can use your own list of styles, characters, objects, or use the default ones, which are already kinda huge. With each seed there is a unique color palette, framing, background, and level of zoom, even though the prompt does not actually call out any of these factors. I will provide you basic information required to make a Stable Diffusion prompt; you will never alter the structure in any way and will obey the following guidelines. Each column is a different prompt, and each row is a different seed: Seed Sample Test. Over the last few months, I've spent nearly 200 hours researching, testing, and experimenting with Stable Diffusion prompts to figure out how to consistently create realistic, high-quality images. The CLIP model in Stable Diffusion automatically converts the prompt into tokens, a numerical representation of words it knows. Scale: 10. A few months ago I did a test with a cat and found that one word can change the output. It seems like you have created a comprehensive text prompt template for generating Stable Diffusion prompts using a textual AI like GPT-3 or GPT-4. For the following, Clarity v2.0 was used. Use pre-trained Hypernetworks. I will be copy-pasting these prompts into an AI image generator (Stable Diffusion). Not really sketch-like, though. The captions are not complete prompts because of character limits. Most pictures I make with Realistic Vision or Stable Diffusion have a studio-lighting feel to them and look like professional photography. The thing is, I couldn't write the prompt properly. May I know how I can make my art look beautiful, please?
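The tokenization just mentioned can be illustrated with a toy greedy splitter. CLIP really uses byte-pair encoding over a vocabulary of roughly 49k tokens; the tiny vocabulary and greedy longest-match rule below are only for illustration:

```python
def subword_split(word, vocab):
    """Toy greedy longest-match split: an unknown word is broken into the
    longest known sub-words, falling back to single characters. Not BPE,
    but it shows why unseen words become several tokens."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # no match: emit a single character
            i += 1
    return pieces

vocab = {"photo", "real", "istic", "port", "rait"}
pieces = subword_split("photorealistic", vocab)
print(pieces)  # ['photo', 'real', 'istic']
```

This is one reason misspelled artist names still "do something": the model sees familiar fragments rather than ignoring the word.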
I've been testing some prompts for this, but it is not working properly; I was wondering if anyone had better prompts/techniques for that specific issue. But when I try to generate, my art is not as good as other people's art; it's like it has distorted face components, especially the eyes. A list of useful prompt-engineering tools and resources for text-to-image AI generative models like Stable Diffusion, DALL·E 2 and Midjourney. CFG 7 - 11: let's collaborate, AI! Stable Diffusion has a mind of its own and treats prompts as suggestions. I used the same seed and prompt without the entire negative prompt list, and then with it. Use only the most important keywords and avoid using sentences or conjunctions. However, Joe Gilronan and Frank Frazetta have also been pretty good hits. You can add your own custom platforms on it. Generating iPhone-style photos. The words it knows are called tokens, which are represented as numbers. These were all tested with Waifu Diffusion, Euler A, with each prompt at the beginning of the prompt list, so results will vary a lot if you use Stable Diffusion and different settings. Seed: 3534670888. Add a photographer, a light setup, a camera and a focal length. Install a photorealistic base model. Turned myself into a Pixar character (prompt in comments): [INSERT NAME HERE] as a character from pixar, au naturel, PS2, PS1, hyper detailed, digital art, trending in artstation, cinematic lighting, studio quality, smooth render, unreal engine 5 rendered, octane rendered, art style by klimt and nixeu and ian sprigger and wlop and krenz cushart.
I wasn't sure if I wanted to share this, but decided that it would elevate everyone and help them understand how to design better prompts. Keep the enhanced prompt under 150 words and vary the keywords to avoid repetition. Setup. I usually just use commas; that seems to work. Restart Stable Diffusion. I'm looking to draw character reference sheets using SD. Sheep is soo cute 🥰. It saves negative prompts as well. Thanks a lot. A community focused on the generation and use of visual, digital art using AI assistants such as Wombo Dream, Starryai, NightCafe, Midjourney, Stable Diffusion, and more. Rather than artists, I've had much better luck referencing properties with a uniform style guide that fits my desired output, like "dungeons and dragons", "mtg-art" (adding -art tends to filter out cards), and "Gwent". ehe, I did that to generate some cover art for a game imported on Steam; the hardest part was getting the laser to go straight :D. It uses the bad_prompt_version2 embedding, which works wonders. Study on understanding Stable Diffusion w/ the Utah Teapot. I created an AI to autocomplete/generate prompts for Stable Diffusion. Really cool and fun interaction. Great sharing. "A tall Hispanic man wearing a jacket". Since any change to the prompt affects the resulting image, I am curious whether there's any regular, known way SD interprets punctuation. I am finding it tedious to pick an eye color for my subjects. And then I realized that it's already possible - use prompt editing to comment out parts of the prompt! Simple silly example: faction logo, [flaming necromancer,::-1] space elves. Can you post all the generation information for one of the at-issue images? Hey, can I ask what negative prompts you use for architecture now? If you want a blur effect, add "f2.8". Stable Diffusion generates images based on given prompts.
Keep it under a half-dozen phrases at most if your prompt is misbehaving. Go to Image > Image Rotation > Flip Canvas Horizontal. I tried a prompt to show traffic on a busy street and it returned a rabbit. Find that look or subject in a gallery (civitai, for example) and read the prompts. How to increase the weight on a prompt depends on the implementation. Separate features with commas and never use periods. Did you make this website? It looks like it is scraping the images from the Discord beta channels (which is awesome). This is amazing. AUTOMATIC1111 has a Scripts dropdown and that's one of the options. The main prompt was high-res photo portrait of a cute girl taking LSD, psychedelic. Want to start making AI art? Head over to https://unstability.ai. This was the output, and I selected 21: Steps: 30. Includes support for Stable Diffusion. So, if you put in "by Cheolsu" (a generic Korean male name), it will guess from the name alone. Some say add certain keywords like 4k, 8k, etc., while some say don't add those to the prompt; or, I've seen some say the negative is the opposite of the prompt, and that adding things like blurry or deformed hands won't help. A sample step of 2 will focus on the first word of the prompt until that step, then swap to the other word. Some UIs have a "prompt from file" option, so you type out multiple prompts in a txt file, then load the txt file to run them all. Started recreating my prompts in SD 2. Mostly it's not too important. Prompt galleries and search engines - Lexica: CLIP content-based search. Things like "looking away", "serious eyes" help get the details correct. For now, we just have to be very specific with the prompt: "an old lady in a park, wearing a dress, floral pattern on the dress". You can safely remove the wildcards script if you have dynamic prompts; keeping both will let you see your "wildcard prompt" when viewing your images (the original prompt with __words__), but having both will also give you occasional problems, so I just use dynamic prompts now myself.
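A minimal sketch of such a prompts-from-file loader; the file name and the comment-line convention are my own assumptions, not a documented format:

```python
def load_prompts(path):
    """Sketch of a 'prompt from file' loader: one prompt per line,
    with blank lines and '#' comment lines skipped."""
    with open(path, encoding="utf-8") as f:
        return [ln.strip() for ln in f
                if ln.strip() and not ln.lstrip().startswith("#")]

# Write a sample file, then feed each prompt to your generator of choice.
with open("prompts.txt", "w", encoding="utf-8") as f:
    f.write("# portrait batch\n"
            "\n"
            "portrait photo of a man, 50 years old\n"
            "cute panda riding a scooter\n")

prompts = load_prompts("prompts.txt")
for prompt in prompts:
    print(prompt)
```

Each line then gets one generation (or one per seed, if you also loop over seeds).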
So we trained a GPT-2 model on thousands of prompts, and we dumped a bit of Python, HTML, CSS and JS to create AIPrompt.io. It will be saved as a style that you can access easily from a drop-down. But typing a prompt into a word processor under the following headlines seems to streamline getting a usable result no end. Dale Chihuly. Have you had any luck with impacts? With this trick you can even achieve (almost) always the same character as output; for example, the prompt "a photo of [generated name] drinking tea" will give the same person as the prompt "a photo of [generated name] driving a car". Compose your prompt, add LoRAs and set them to ~0.6. Small thing, but I thought it might be interesting/useful to others too. It doesn't understand the comparative nature of the prompt: adjectives like "large" and "small" are taken within the regular scale of those objects (e.g. a "large coin" is like a £2/dollar coin), and backgrounds fail to generate in simpler prompts to give a sense of scale. Stable Diffusion prompts: can anyone help me with some prompts to generate images, illustrations, photographs or drawings of people in high definition, without deformations, in different settings? Custom Models: use your own .ckpt or .safetensors file by placing it inside the models/stable-diffusion folder! (detailed face and eyes:1.3); when using high-res fix, set denoise to .7 if it's still really artifacty, otherwise .6. You need to generate an input prompt for a text-to-image neural network. From there I used inspiration from the prompts I found in the subreddit and tried many things. One of the more useful posts there is about using ChatGPT to create prompts by asking the bot something along the lines of "using only nouns and adjectives, describe a painting of a boat". Inpainting with animation models like Modern Disney may help generate exaggerated expressions, which can then be made more photorealistic. Shortening your prompts is probably the easiest fix. If you use automatic1111, all you need to do is press the disc icon to save.
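The wildcard/dynamic-prompts mechanic mentioned earlier (placeholders like __words__ replaced from lists) can be sketched in a few lines. Real wildcard files live on disk, one option per line; the in-memory dict here stands in for them, and the function name is made up:

```python
import random

def expand_wildcards(prompt, wildcards, seed=None):
    """Toy dynamic-prompts expansion: each __name__ placeholder is replaced
    with a random entry from the matching option list."""
    rng = random.Random(seed)  # seedable so a batch can be reproduced
    out = prompt
    for name, options in wildcards.items():
        while f"__{name}__" in out:
            out = out.replace(f"__{name}__", rng.choice(options), 1)
    return out

wildcards = {
    "haircolor": ["brown hair", "silver hair", "red hair"],
    "mood": ["laughing", "shy", "confused"],
}
result = expand_wildcards("portrait of a woman, __haircolor__, __mood__",
                          wildcards, seed=42)
print(result)
```

Note the repeatability caveat from above: sharing the seed of the *image* isn't enough, since which wildcard entries were drawn also has to be reproduced.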
" This will allow any seed, to use any variable. H R Giger. ckpt or . Peter Mohrbacher. I’d really like to make regular, iPhone-style photos. 6 million images generated by Stable Diffusion, also allows you to select an image and generate a new image based on its prompt. Asked ChatGPT to give me a list of Facial expressions and moods to use in Prompts. I am using stable diffusion 1. This will reverse the image horizontally. Mega Character Negatives: This I use when I don't use better body for characters. Members Online Used this website that animate faces on one of my prompts and my mind is officially blown! Tutorial: Creating characters and scenes with prompt building blocks - how I combine the above tutorials to create new animated characters and settings. safetensors file, by placing it inside the models/stable-diffusion folder! Stable Diffusion 2. Really nice effect. Prompt included. I’ve run a lot of subtle variants of pictures to (start to) get to understand the nuance of weighting of prompts. When I actually searched it, what came back were not badly cropped images, but images of picture frames. The downside to this is that you can't just give your dynamic prompt and seed number out for repeatability, as what is selected would change each time it is ran. Wayne Barlowe. Includes the ability to add favorites. Wow this jacket is wrong. With the right prompt, Stable Diffusion 2. Prompt Warnings: Be careful of copying and pasting prompts from other users shots and expecting them to work consistently across all your shots. If you put in a word it has not seen before, it will be broken up into 2 or more sub-words until it knows what it is. I've set up a prompt that randomly selects features and outfit styles from a bunch of options. I feel like the final step, regardless, is something like: prompt = prompt + “, greg rutowski”. Here's a CFG value gut check: CFG 2 - 6: Let the AI take the wheel. 
I get similar results putting in things like "chicken fingers", "Medusa nugget porn" and my go-to, "ugly, out of frame, blurry, cropped, washed out, embossed, over saturated". Link to full prompt. Merge models. It is inherently more likely to produce something beautiful from a prompt that would produce garbage in SDXL. Choice is your enemy; be specific about what you want for a look or a subject. I don't share often, but I figured people would want to know that SD 2.0 can do hands. If the background blur in a photo is visually appealing, it has "good bokeh". Structuring prompts. I just got done investigating this exact negative prompt list with an A1111 local install. To generate realistic images of people, I found adding "portrait photo" at the beginning of the prompt to be extremely effective. A denoise of .6 seems to still work just fine about 80% of the time. There's little info about using a period in prompts, compared to a comma, to separate tokens. Take an image of a friend from their social media, drop it into img2img and hit "Interrogate"; that will guess a prompt based on the starter image - in this case it would say something like: "a man with a hat standing next to a blue car, with a blue sky and clouds by an artist". In this example all the prompts mean the same thing, but the AI outputs slightly different images. A long prompt will muddle the encoder. There's already a proof-of-concept notebook using it which you can try out. Help protect eye integrity and quality even at a distance; make sure to arrange your prompts accordingly. ChatGPT conversation example. Previously I'd just been placing the most important terms at the front. Stable Diffusion Random Prompts Generator. The new OpenCLIP model released just last week will give a big boost to how much Stable Diffusion understands the prompt. Gerald Brom. AIPrompt.io, a prompt generator with surprisingly good results.
So, as a new user, I want to know how to give a proper and good prompt to get the best results. Either way, it should be quite easy to use, and it has a copy-to-clipboard button. This helps automate all of your Deforum animation settings. I am new to Stable Diffusion; I don't really know how to use it. Before you start a batch, you should do a few test runs with the prompt and keep adding to it to get closer to what you want. Then I included the entire list and ran several random seeds and a few different prompts. The way negative prompts work in SD is essentially by generating separate positive and negative prompt latent images and then taking the difference at the end of the image generation process, so in a way it is a "variation on the same picture" (the positive prompt image minus the negative prompt terms). Gouache painting - an opaque form of watercolor. In addition, adding facial expression descriptions is also helpful for generating different angles.
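The positive-minus-negative mechanism described above is classifier-free guidance: at each denoising step the prediction is pushed away from the negative-prompt direction and toward the positive prompt, scaled by the CFG value. A minimal numeric sketch of one such step (the toy lists stand in for the model's real noise-prediction tensors):

```python
def cfg_combine(cond, uncond, scale):
    """One classifier-free guidance step: start from the negative-prompt
    ('unconditional') prediction and move 'scale' times the difference
    toward the positive-prompt prediction."""
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

positive = [0.2, -0.1, 0.4]   # noise predicted with the positive prompt
negative = [0.1,  0.0, 0.4]   # noise predicted with the negative prompt
guided = cfg_combine(positive, negative, scale=7.0)
print(guided)
```

With scale=1 you get the positive prediction unchanged; higher CFG amplifies whatever separates the two prompts, which is why a strong negative prompt really does act like subtracting terms from the same picture.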