After finishing the first model, I wanted to experiment with making a robot with as much AI assistance as possible, as there would be many robotic characters in the game’s story. Again, the model would be very simple: not an AAA-quality polished game model, but a quick yet passable character to populate our tiny little point and click world.

Finished robot model in the game

Prompting

I generated a ton of different robot turnarounds and many of them turned out very usable! But I wanted a little bit more of a challenge.

As I had some success with the odd back angle on the beardy man character, I set out to explore whether AI assisted modeling would work with a reference image that is not good at all as a modeling reference: an image without any clean front, side or back views!

dirty robot, full body, model sheet turnaround, full color, two thirds wiev, front::4 view and back view --v 4 --ar 3:2

This robot looked the way I wanted the robot to look: industrial, quirky and somewhat “delicate”. But I was unable to generate a good modeling reference from it, so this one had to do.

Block-out modeling

Modeling the robot was a breeze. Naturally, I was not able to use the reference imagery for much more than general scale and details. I figured I would just use the texture projection step to force the parts to overlap. Nailing the character proportions for the mesh was not crucial at all: I could fine-tune the shapes and the scale after it was fully textured, when I could better see what I was doing.

I was especially happy about the shape of the head the AI hallucinated up. Part insect, part flying drone: it was perfect.

Getting from zero to the fully finished model took me 4.5 hours. I am purposefully keeping the polygon count low, relying instead on the AI imagery for the details. At this point the model is still missing the UV unwrapping.

UV mapping

My plan was to unwrap only half of the model, then mirror it. After this I added morph maps to pose the character to match the AI imagery, and projected both images onto the UV texture twice: first the left side, then the right side. This way I would end up with 4 overlapping projections for the UV map, giving me the most coverage of the character’s surfaces.
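A morph map of this kind is conceptually just the base mesh plus a per-vertex offset, blended by a weight. Here is a minimal, dependency-free sketch of the idea; the function and variable names are illustrative, not any particular tool’s API:

```python
# A morph (shape key) stores one offset per vertex. Applying it at
# weight 1.0 poses the mesh to match a projection image; weight 0.0
# leaves the original model untouched.

def apply_morph(base_positions, deltas, weight):
    """Return posed vertex positions: base + weight * delta."""
    posed = []
    for (x, y, z), (dx, dy, dz) in zip(base_positions, deltas):
        posed.append((x + weight * dx, y + weight * dy, z + weight * dz))
    return posed

base = [(0.0, 1.0, 0.0), (0.5, 0.8, 0.1)]
# Hypothetical offsets nudging the mesh toward the AI image's pose:
front_ish = [(0.1, 0.0, -0.1), (0.0, 0.2, 0.0)]

posed = apply_morph(base, front_ish, 1.0)
# posed == [(0.1, 1.0, -0.1), (0.5, 1.0, 0.1)]
```

Because the weight can be dialed back to zero, the same mesh serves both as the projection pose and as the final in-game shape.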

The UV unwrapping took a gruelling 1.5 hours to complete. That is not very long, but I absolutely hate doing it. The time this took does not really matter, but since I timed it I might as well report it. Even though I will be using texture projections, I need to make the map clean, as I have to manually blend the different projections together in Photoshop. A clean UV map makes that a lot easier.

Texture projection

Getting this step right is very important. The robot, even at very low poly, is already a complicated mesh. I found it easier to move everything into place when I selected one part of the robot, hid everything else and focused on that individual piece. Some pieces had very little coverage in the AI generated images, so I just rotated the parts around and tried to find a spot that would project something usable onto the surface.

After the projection morphs were done, I was able to project the front-side and back-side views onto both sides of the robot individually. The resulting UV maps, even though I tried to make them somewhat understandable, were still not too clear to look at. It took a while to find the best projection for each surface.
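A planar projection like this boils down to dropping the depth axis and remapping the remaining two coordinates into 0..1 UV space. A minimal sketch, assuming an orthographic front projection and a known bounding box (these names are illustrative, not the actual tool’s API):

```python
def planar_project(point, bounds_min, bounds_max):
    """Orthographic front projection: drop the depth (z) axis and
    remap x/y into the 0..1 UV range using the mesh bounding box."""
    x, y, _ = point
    u = (x - bounds_min[0]) / (bounds_max[0] - bounds_min[0])
    v = (y - bounds_min[1]) / (bounds_max[1] - bounds_min[1])
    return (u, v)

# A point at the center of a 2x2x2 bounding box lands mid-texture,
# regardless of its depth:
uv = planar_project((0.0, 0.0, 1.0), (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
# uv == (0.5, 0.5)
```

This is also why surfaces nearly parallel to the projection direction come out stretchy: many different depths collapse onto the same few texels, which is what rotating poorly covered parts toward the camera works around.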

The final UV map. As you can see, some surfaces are still weird and stretchy. But these characters will not be seen from close up; they will be lit, moving, and partially obstructed, so it will be just fine!

Rigging

On the previous character, I was able to use Mixamo for rigging and the results were great.

But this time the model was not an airtight mesh; it was a selection of separate meshes stuck together. Mixamo’s auto-rigger had no chance against this monstrosity, so I had to do the rigging by hand. For this model, it took me 2 hours. I would have loved an AI that could have done it for me!

After all these steps we have a model that we can throw in Unity and use in the game.

The end result

All in all, this was again a very straightforward process: get a turnaround from the AI, model a mesh, distort the mesh to match the projection, project the textures and Bob’s your uncle.

I am very happy with the silhouette of the robot-guy. It looks very industrial, made from simple shapes. Easy to manufacture. Functional. The insect-like feet and head especially were nice touches. Good boy, AI!

The robot character, while fun, was not nearly as easy to create as the human. Not having any direct front or side shots made the process considerably more painful. Not impossible, but tedious.

Creating the projection morph maps was a pain. Creating the UV maps was also slow, as there were a lot more parts than a human body has.

It took me 8.5 hours, a hair over a work day, to go from AI image to fully usable in-game character. That is way faster than this sort of work has any right to be. In a real professional setting, there would be no reason to go that fast. But this is a hobby project: time is what we do not have, and every shaved minute makes this project more and more likely to succeed.

When I think about it more, there is a possibility that having to create the projection morphs and clean up the UV map actually made the texture creation process slower than it would have been without the AI image texturing step, as this model has very simple textures to begin with and the projection images were from off angles.

6 responses to “Modeling a robot with some added difficulty”

  1. Avatar

    Hi, I am from Spain and I want to play. How can I do it? It looks interesting.

  2. Greg Miller

    What are these morph maps? Is there an equivalent in Blender?

    1. Jussi Kemppainen

      Yes, all programs support these “morph maps”. A morph is just a duplicate of a mesh with the vertices in other locations. It is that simple. The projection morphs I create are just versions of the character that make the projected texture match the form correctly. This needs to be done for all the reference images, in this case for the 2 images the AI generated. The idea is that once I bake the base color of the material to the UV map, the pixels end up in the correct positions for the UV to catch the correct data. Baking is an industry standard way of transferring detail into an image format, for example baking a normal map for a low polygon mesh from a high polygon mesh, or baking a procedurally generated texture.

      A morph for the front-ish projection:
      Projection morph front-ish

      A morph for the back-ish projection:
      Projection morph back-ish

      As you can see, the model is all wonky and messed up. But none of that matters. What matters is that in the 2D view the vertices of the mesh are correctly placed on the projected texture (as seen in the 2D views of the morphs in the blog post).

      1. UnrealN00b

        Thanks for answering this question. So essentially it’s just shape keys utilized to position the model for better UV projection.

  3. T0b7

    Wow, super cool! But I wanted to find out how to make a robot design for a game, not how to put one in. Can you help me with that?

  4. […] was the very first post! with 100 000 reads! This was followed not very closely by the post about modeling a utilitarian robot character and the third place goes to the AI powered fluid simulations piece. I do not expect any post ever […]
