Using AI to generate concept imagery - fast.

The images below were created using an image-to-image AI workflow: a source image is combined with text prompts, and the AI's parameters steer the generation to produce a number of options. We then select our preferred outcome, either working it up further or sending it to our AI resolution upscaler for final use.
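As a rough illustration of how the image-to-image idea works (a toy sketch of my own, not any particular tool's implementation), a "strength"-style parameter typically controls how much noise is blended into the source image before the model denoises it back out; higher strength means the result drifts further from the original:

```python
import numpy as np

def img2img_start(source: np.ndarray, strength: float, seed: int = 0) -> np.ndarray:
    """Blend random noise into a source image in proportion to `strength`.

    strength = 0.0 -> start from the source image unchanged
    strength = 1.0 -> start from pure noise (the source is ignored)
    A trained denoiser would then iteratively refine this starting point.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(source.shape)
    return (1.0 - strength) * source + strength * noise

# A flat grey "image": low strength keeps it close to the source,
# high strength lets the generation wander much further from it.
image = np.full((4, 4), 0.5)
subtle = img2img_start(image, strength=0.2)
wild = img2img_start(image, strength=0.9)
```

This is why tuning that one parameter takes so much of the time: it trades faithfulness to the source image against the freedom the model has to reinterpret it.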

The outcomes can be rough, with elements misinterpreted or misshapen, but depending on the intended final use they can often communicate enough, or be viable after some post-production editing.

The images below were produced quickly and served as candidates for further post-production (Photoshop et al.). The majority of the time went into tuning the text prompts and the AI's available parameters.

Most AI neural networks are not intelligent

Popular AI image creators do not filter through a combination of a million 'if - then' statements. Their networks are trained to produce realistic images by learning what are effectively mathematical filters that iteratively remove noise from a noisier image. The weighting of each parameter input is tuned automatically during training to maximise the chance of a successful outcome, as defined by the training set.

Asking a neural net why it made a particular choice is next to impossible (unless you use another network to identify the key influencing parameters - good luck). They are by nature enormous aggregations of auto-tuned weighted parameters, flowing from one layer of nodes to the next. Because the process has been reduced to maths, we can leverage our GPUs to give us results startlingly quickly. They are quintessential black boxes, unless you supervised the training yourself, and even then you may have introduced unconscious biases through omission or unseen faults in the training set.
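The iterative denoising idea can be sketched in a few lines. In this toy example (my own illustration, not a real diffusion model) the "denoiser" is a hand-written moving average, where a trained network would use millions of auto-tuned weights; applying it repeatedly strips noise from a corrupted signal step by step:

```python
import numpy as np

def denoise_step(signal: np.ndarray) -> np.ndarray:
    """One denoising pass: a 3-tap moving average (a stand-in for a trained network)."""
    padded = np.pad(signal, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

rng = np.random.default_rng(42)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))    # the underlying "true" signal
noisy = clean + 0.5 * rng.standard_normal(64)    # the noisy starting point

x = noisy
for _ in range(10):          # iterate: each pass removes a little more noise
    x = denoise_step(x)
```

After the loop, `x` sits much closer to the clean signal than `noisy` did. A real image model works the same way in spirit, except the filter at each step is learned from training data rather than written by hand.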