We are in a boom time for artificial intelligence. The much-hyped technology converts user prompts into new digital content, including images, text, and video. The visual results are often astonishing. Photo-realistic images and complex artwork materialize on screen in a matter of seconds.
Aquent Studios believes AI holds huge potential to transform creative work. So our Designers have spent the last several months testing out different tools and exploring the ways they could support and enhance our work. While many practical applications are already available, they require significant direction from skilled Designers. There’s no autopilot setting. Nonetheless, AI tools can make us more efficient, more precise, and even expand our imagination.
Putting AI to the test
As an agency, we understand that source materials and project timelines can vary greatly from what we scope in our initial proposals. It’s important that we’re able to adapt to the unexpected while still delivering high-quality work on time. AI tools can help achieve this by taking on tedious yet necessary tasks.
We saw the opportunity to pilot this on a large batch of images that our Designers would normally spend considerable time cleaning up. We wanted the AI to do three key things:
- Enhance low-resolution photos.
- Isolate objects from their busy backgrounds.
- Apply realistic shadows.
To streamline our workflow, we anticipated that the AI could work within staple creative tools like Photoshop and the collaborative design platform Figma.
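For teams comfortable with a bit of scripting, the same three-step cleanup can also be approximated with open-source tools. The sketch below is illustrative only: it uses the rembg library and Pillow rather than the commercial plugins we actually tested, and the folder names are hypothetical.

```python
# Illustrative batch cleanup: upscale low-res shots, then strip backgrounds.
# rembg and Pillow stand in for the commercial plugins discussed above;
# the folder names are hypothetical.
from pathlib import Path

from PIL import Image
from rembg import remove  # pip install rembg

SOURCE_DIR = Path("raw_product_shots")
OUTPUT_DIR = Path("cleaned_product_shots")
OUTPUT_DIR.mkdir(exist_ok=True)

for path in SOURCE_DIR.glob("*.jpg"):
    image = Image.open(path).convert("RGB")

    # Step 1: enlarge the low-resolution photo (2x) with a high-quality filter.
    upscaled = image.resize((image.width * 2, image.height * 2), Image.LANCZOS)

    # Step 2: isolate the object from its background; the result keeps an
    # alpha channel so Designers can continue editing it in Photoshop or Figma.
    isolated = remove(upscaled)

    isolated.save(OUTPUT_DIR / f"{path.stem}_cut.png")
```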
Initial tests began with plugins in Figma. Unfortunately, promising results were few and far between. Background removal succeeded only with sharp, high-resolution images, while shadow creation was mostly unimpressive. Even the best results were inflexible, meaning our Designers couldn’t make much-needed fixes afterward.
With the Figma tools proving lackluster, we shifted focus to Photoshop, where we expected more precise AI plugins that would allow Designers to improve the output. The RemoveBG plugin performed best. It lacked tools for refining the selected object, but further editing could be done right inside Photoshop, which was a huge benefit.
Finally, we looked at stand-alone applications outside the programs we already use. For enlarging and enhancing images, Topaz Photo AI outpaced Photoshop, producing cleaner enlargements in less time. For isolating products from their backgrounds, Clipping Magic recognized objects without time-consuming prompting and offered highly intuitive features to refine the result. In addition to object selection, Clipping Magic adds impressive shadows with controls for intensity, distance, angle, and blur. Though it didn’t always hit the mark, it surprisingly recognized how far an object’s different areas sit from the ground, allowing it to create accurate shadows.
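To make those shadow controls concrete, here is a rough Pillow-based sketch of what intensity, distance, angle, and blur adjust in a simple drop shadow. It is not Clipping Magic’s algorithm, just an illustration of the parameters applied to a cutout with an alpha channel.

```python
# Rough illustration of shadow controls (intensity, distance, angle, blur)
# applied to an RGBA cutout. A simple drop shadow, not Clipping Magic's method.
import math

from PIL import Image, ImageFilter


def add_drop_shadow(cutout: Image.Image, angle_deg: float = 135.0,
                    distance: int = 30, blur: int = 12,
                    intensity: float = 0.5) -> Image.Image:
    """Composite `cutout` (RGBA) over a soft shadow cast at the given angle."""
    # Offset of the shadow, derived from the angle and distance controls.
    dx = int(math.cos(math.radians(angle_deg)) * distance)
    dy = int(math.sin(math.radians(angle_deg)) * distance)

    # Build a black silhouette from the cutout's alpha, scaled by intensity.
    alpha = cutout.getchannel("A").point(lambda a: int(a * intensity))
    silhouette = Image.new("RGBA", cutout.size, (0, 0, 0, 0))
    silhouette.putalpha(alpha)

    # Pad the canvas so the blurred, offset shadow is not clipped.
    pad = distance + blur * 2
    canvas = Image.new(
        "RGBA", (cutout.width + 2 * pad, cutout.height + 2 * pad), (0, 0, 0, 0)
    )
    canvas.paste(silhouette, (pad + dx, pad + dy), silhouette)
    canvas = canvas.filter(ImageFilter.GaussianBlur(blur))

    # Place the object on top of its shadow.
    canvas.paste(cutout, (pad, pad), cutout)
    return canvas
```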
A great addition to the Designer's toolbox
Did the AI tools expedite our design work? In short, yes. The new AI surpassed the tools we commonly use in Photoshop. After enlarging and enhancing low-quality images, it left cleaner textures and sharper detail, even when zoomed to four times the size. AI tools could discern and cleanly isolate objects without additional direction. Even in complex scenes, the tool mapped out an object’s edge. Combining AI tools in our workflow was even better. Image quality and isolation improved, plus we saved up to 10 minutes per image—a clear win.
There is still progress to be made, however. While AI brilliantly isolated products from backgrounds, at times we needed to edit further in Photoshop, offsetting some of our gains. Using AI to create shadows, though incredibly accurate for most objects, is still challenging for some. Shadows created directly in Figma or Photoshop are still more consistent and realistic.
Making artificial intelligence smarter
To expand the use of AI tools, some areas of development are needed. AI systems for photo retouching are efficient, but there’s room for improvement. With better recognition of the parts of the human face, AI might remove small blemishes and flyaway hairs in one step, leaving the subject looking realistic but neat. Scene retouching could also be enhanced by giving Designers more control to preserve elements or remove objects like leaves, dirt, and cracks.
These new features are coming. It’s difficult to tell how quickly, as the big players are focused on making leaps in generating completely original imagery. Tools like Midjourney, Stable Diffusion, and DALL-E, along with Adobe’s own offerings, are increasingly adding features to address things like:
- Removing an object from an image and having the tool fill the gap using the rest of the image as a reference (an inpainting workflow; see the sketch after this list).
- Adjusting parts of the face to accentuate emotion and even age or de-age subjects.
- Retouching a scene from one or more images and a prompt, adding new objects and rendering a cohesive result where lighting, angle, color, and tone stay consistent between the original input and the AI-generated additions.
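The first item on that list is essentially inpainting, and open-source libraries already expose it. Below is a minimal sketch using Hugging Face’s diffusers library with a publicly available Stable Diffusion inpainting checkpoint; the file names are hypothetical, the mask marks the region to replace in white, and a GPU is assumed.

```python
# Minimal inpainting sketch with Hugging Face diffusers (hypothetical files).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load an inpainting checkpoint (model availability may vary over time).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

# The original scene and a mask whose white pixels mark the object to remove
# and regenerate from the surrounding context.
scene = Image.open("scene.png").convert("RGB").resize((512, 512))
mask = Image.open("object_mask.png").convert("RGB").resize((512, 512))

# The prompt describes what should fill the masked region.
result = pipe(
    prompt="empty wooden table top, soft natural light",
    image=scene,
    mask_image=mask,
).images[0]

result.save("scene_object_removed.png")
```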
Coming up with options for visual composition can be a challenge similar to writer’s block. AI could speed up the production of banner ads and motion graphics. As these tools progress, they may be able to produce 50 options in seconds. A trained Designer could then identify the most promising designs and complete their composition more quickly.
AI image creation sparked imaginations the world over. Hyperreal portraits sprang to life from simple commands, while intricate alien landscapes emerged from meticulously honed prompts. When this image capability is used as part of conceptual development, the applications are almost endless. The current limitation is AI’s ability to faithfully replicate a brand’s identity, from colors to typography to visual signatures. We expect that capability to be just around the corner, with AI fueling storyboards, photo shoots, and more.
Getting better and faster with help from AI
There is more to design than meets the eye. So much goes into each and every project that AI cannot fully replicate. At Aquent Studios, our Designers work closely with clients, absorbing their style guides and learning how to communicate seamlessly with the brand’s customer base. Our creatives collaborate with different teams, apply iterative feedback across new areas, and, of course, quickly adapt to business changes. AI platforms can’t do all the things that Designers do, but they can help them work more efficiently. There’s more time to make creative deliverables better than ever and brainstorm new ideas. In many ways, design has never been more exciting.
This blog discusses generative AI tools and their potential applications. As these technologies rapidly evolve, we encourage you to research the latest developments in terms of their capabilities, safety and security, and ethical use. For more information on the responsible use of generative AI, download our whitepaper, “Using Generative AI for Design: Legal Considerations and Best Practices.”