*This uses an older version of Vizcom, so pictures may not match the current website.
Now that you can use text descriptions as a communication tool, you can ideate and explore ideas even faster. This opens up an entirely new way to create that wasn’t possible before.
Here is a workflow diagram that breaks down the order of operations used to achieve different results. Starting from a single sketch, you can see all of the design possibilities that can be explored throughout Vizcom.
AI-generated images from a single sketch and text prompts
The middle image is the original input image. The surrounding images were generated with the prompt “A 3D render of red concept car vehicles rendered in Unreal engine, timeless design, great stance, Great proportions.”
AI-generated results from an AI-rendered input image.
In this example, the prompt “3D render of a shiny blue car design concept, by Honda, Unreal engine, artstation.” was applied on top of the input drawing.
All you need is a simple drawing and a text description.
Prompts
A phrase, sentence, or string of words and phrases describing what the image should look like. The AI analyzes these words and uses them to guide the generation toward the image(s) you describe. Prompts can include commas and weights to adjust the relative importance of each element.
E.g. “A 3D render of a sports car design concept by toyota rendered in Unreal Engine.”
Notice that this prompt loosely follows a structure: [subject], [prepositional details], [setting], [meta modifiers and artist]. This is a good starting point for your experiments; a small sketch of the idea follows below.
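As a minimal illustration of that template (not a Vizcom feature; the function name and parts are made up for this example), you could assemble a prompt from its pieces and paste the resulting string into the prompt field:

```python
# Illustrative only: assemble a prompt that follows the
# [subject], [prepositional details], [setting], [meta modifiers] template.
# The helper and its arguments are hypothetical; only the final string matters.

def build_prompt(subject: str, details: str, setting: str, modifiers: str) -> str:
    """Join the template parts into a single comma-separated prompt."""
    return ", ".join([subject, details, setting, modifiers])

prompt = build_prompt(
    subject="A 3D render of a sports car design concept",
    details="by Toyota",
    setting="rendered in Unreal Engine",
    modifiers="timeless design, great proportions",
)
print(prompt)
# A 3D render of a sports car design concept, by Toyota, rendered in Unreal Engine, timeless design, great proportions
```

Swapping out individual parts (the subject, the studio or artist, the rendering style) while keeping the rest fixed is an easy way to explore variations systematically.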
No Photoshop was used on any of these images. They were all rendered with AI.