This Is the Best 3D Animation of Putting on Pants Yet

Getting 3D animated characters to dress themselves is surprisingly difficult, but researchers are making progress.

Everyone puts their pants on one leg at a time, but it's not something you see often in 3D animated movies or video games.

For all the progress we've made in special effects and adorable animated movies, you'll rarely see a Pixar character or a Sim getting dressed, because it's surprisingly difficult to get a stiff character model to interact with simulated fabric. However, a group of researchers at the Georgia Institute of Technology recently proposed a method that would allow 3D animated characters to independently manipulate their limbs to put on a jacket and other items of clothing.

"The closest thing to putting clothes on you'll see in an animated movie is The Incredibles putting on capes, and that's still a very simple case because it doesn't really involve a lot of physical contact between the characters and the cloth," Karen Liu, one of the Georgia Tech researchers and co-author of the paper "Animating Human Dressing," told Motherboard. "This is the problem we want to address because generating this kind of scene involves the physical interaction between two very different physical systems."

Cloth, like hair and water, is incredibly difficult to animate by hand, so it's typically generated with physics, the cloth reacting and moving as it would in real life. Character movement, on the other hand, is typically created manually. Getting these two systems to interact is difficult, especially if you're trying to simulate the many small, awkward movements involved in sliding your arm into the constantly shifting shape of a sleeve.
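
To get a feel for why the two regimes clash, here's a minimal Python sketch of each, assuming a toy mass-spring cloth and a keyframed pose track; every name and constant below is illustrative rather than taken from the paper.

import numpy as np

# Toy versions of the two systems the article contrasts: cloth advanced by
# physics every frame, character motion read straight from authored keyframes.

def step_cloth(positions, velocities, springs, rest_lengths,
               dt=1/60, k=80.0, damping=0.98):
    # One explicit-integration step of a mass-spring cloth.
    forces = np.zeros_like(positions)
    forces[:, 1] -= 9.8  # gravity pulls every particle down
    for (i, j), rest in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            f = k * (length - rest) * (d / length)  # Hooke's law
            forces[i] += f
            forces[j] -= f
    velocities = damping * (velocities + forces * dt)
    return positions + velocities * dt, velocities

def character_pose(keyframes, t):
    # Character motion is authored, not simulated: interpolate between
    # hand-made keyframes (a dict mapping time to a pose array).
    times = sorted(keyframes)
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (1 - a) * keyframes[t0] + a * keyframes[t1]
    return keyframes[times[-1]]

The hard part the paper tackles lives between these two functions: the authored motion has to respond to whatever state the physics leaves the cloth in at each frame.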

"The hardest part is to compute or come up with the algorithms that control the characters," Liu said. "The character has to make the right decision on how he or she is going to move the limbs so it has a clear path to the opening."

The solution suggested in "Animating Human Dressing" uses a limited number of "primitive actions," such as putting your hand or foot through an opening, and path planning algorithms that consider the state of the garment only at the brief moments the team identified as crucial to completing the action.
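
As a rough way to picture that control loop, here's a hedged, one-dimensional Python sketch; the Garment stub, the drifting opening, and the tolerance check are stand-ins of mine, not components from the paper.

import random
from dataclasses import dataclass

@dataclass
class Garment:
    opening: float = 0.0  # 1-D stand-in for where the sleeve opening sits

    def simulate_step(self):
        # The cloth always runs under physics, so the opening keeps drifting.
        self.opening += random.uniform(-0.05, 0.05)

def plan_limb_path(target, steps=20):
    # Plan a straight-line "path" toward where the opening was at plan time.
    return [target * (i + 1) / steps for i in range(steps)]

def hand_through_opening(garment, tolerance=0.3):
    # One primitive action: plan against a snapshot of the cloth, then
    # re-check the garment only at the single crucial moment at the end.
    hand = 0.0
    path = plan_limb_path(garment.opening)
    for hand in path:
        garment.simulate_step()  # the cloth keeps moving while the limb does
    return abs(hand - garment.opening) < tolerance

random.seed(1)
garment = Garment(opening=1.0)
while not hand_through_opening(garment):
    pass  # the action failed, so replan from the garment's current state
print("hand is through the sleeve")

Checking the garment only at that crucial moment is what keeps the planning tractable; querying the full cloth state at every frame of every candidate path would be far more expensive.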

For now, the Georgia Tech researchers are using the method to generate pre-rendered animations, but plan to apply it to real-time 3D as well, meaning it could potentially be used in a video game.

More importantly, at a higher level, the team is solving planning and optimization problems that could be useful in the field of robotics.

"If you want a robot to achieve something in the real world there's lot's of real world issues that we don't need to deal with because we're doing animation," Liu said. "On the other hand, the nature of the task is even harder because we're dealing with a highly deformable object in a very constrained space."

Most robots, like the ones researchers fielded at the DARPA Robotics Challenge Finals, try to avoid collisions, but Liu and her team are trying to leverage them.

"Imagine you're putting on a shirt. You're not trying to avoid collisions," Liu said. "You're trying to somehow understand the collision and use the information you get from collisions to make the right decision to move your hand along the right path."

If the team's method can help a 3D character get into a realistically simulated shirt, robots in the future could conceivably use the same method to help disabled or elderly adults get dressed.