Testing Midjourney's New Character References for Automotive Rendering

by CGI.Backgrounds

As we've explored in detail before, AI can be a powerful tool for creating 3D renders and exploring concepts. The challenge with directly creating 3D renders using generative AI tools like Midjourney or DALL·E, however, has always been their lack of specificity.
You can easily ask a system like Midjourney to create a render of a vehicle in New York City, or another common automotive rendering theme, and it will create a dramatic and fairly realistic render.
The vehicle that it creates, however, generally looks nothing like any actual vehicle on the market. It's often an amalgam of parts from different cars, combined with Midjourney's own hallucinations.
That makes it hard to use the output from these systems for anything other than ideation. Even demonstrating a concept using a vehicle that looks similar to a client's product, for example, is nearly impossible. A new addition to today's most powerful AI generator may soon change that.
The Launch of Character References
Last month, Midjourney Version 6 rolled out a powerful new feature called character references. The goal of this feature was to allow illustrators to preserve a consistent character between images when using the AI generator.
A children's book illustrator, for example, could create a character once. They could then reuse that character in multiple additional images to illustrate an entire book with a consistent look and feel.
The feature is tuned specifically to duplicate the appearance of people or human-like characters. But we wondered whether the same feature could also be used to preserve the feel of a vehicle between generated automotive renders.
Testing Character References for Automotive Rendering
For our test, we used Midjourney Version 6, the most up-to-date version of the AI image generator. We supplied a render showing the rear of a Porsche Targa as the character reference, then asked for a scene of the car on a coastal road, a common automotive rendering request.
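If you'd like to try a similar experiment, Midjourney attaches a character reference by adding the --cref parameter, followed by the URL of a reference image, to an ordinary prompt; the optional --cw (character weight) parameter, from 0 to 100, controls how much of the reference is carried over. Our exact prompt isn't reproduced here, but something of roughly this shape, with a placeholder image URL, illustrates the idea:
/imagine prompt: a Porsche sports car driving along a winding coastal road at sunset, photorealistic --v 6 --cref https://example.com/porsche-targa-rear.png --cw 100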
The Results
We were pleasantly surprised by the results! It appears the character reference feature works just as well on vehicles as it does on people.
Midjourney captured much of the Porsche Targa's styling in its renders. The distinctive lines of the rear wheel, the slope of the rear window, and even the lettering and spacing of the Porsche name were reproduced fairly accurately.
Amazingly, the system was also able to create a reasonably accurate render of the front of the vehicle, given only the rear view in the reference image.
Some of the images had a painterly rather than photorealistic quality. Still, the system did much better with the character reference than it would have done had we simply asked for an image of a Porsche on a road.
Here are the results from the same prompt without a character reference. As you can see, they look far less like the Targa.
The Takeaways
What does the character reference feature mean for the future of AI-generated automotive renders?
For the time being, human designers are still safe. Although the character reference feature allowed Midjourney to get closer to a photorealistic render of the Porsche, it was still way off on many details. The wheels of Midjourney's imagined Porsche, for example, had strange circular or honeycomb patterns embedded in them. The side windows of the Targa were not rendered properly, and the roof was the wrong shape.
These details are subtle and may not even be obvious to the casual viewer. But to a brand or an enthusiast interested in purchasing such an iconic car, they would be glaringly obvious.
Digital twins have to be incredibly detailed and accurate in order to be useful for automotive rendering. We've seen people call out 3D renders that have the wrong color brake calipers! That's how detail-oriented customers and enthusiasts can be. For that reason, even though features like Midjourney's character reference get designers much closer to a specific vehicle's look and feel, they're still not good enough to be used in a production capacity.
That said, features like these make generative AI an even better tool for internal design processes and for ideation. That's how we've always advocated for the use of generative AI in the automotive rendering space.
The tech isn't there yet for many production use cases, but it's an incredible tool for quickly generating visual concepts to share internally, or even concepts to share with clients.
If character references allow designers to create a more convincing conceptual render of a brand-specific vehicle, even if the details aren't perfect, features like this will make generative AI that much better for ideation.
Want to explore using generative AI to create HDRi Maps and explore visual concepts? Check out our CGI.B Horizon platform, which is trained on our industry-leading HDRi Maps and backplates and can help turbocharge your ideation process with AI.
IMAGES // CGI.BACKGROUNDS
 