Graphics gurus master wispy hair, snowballs, torn paper

The computer graphics industry has an insatiable appetite for realism, and researchers next month will show how they plan to feed it with innovations in computerized hair, snow, cloth, paper, and more.

Those researchers will strut their stuff at Siggraph 2013, the annual conference taking place this year in Anaheim, Calif. Conference organizers have published a video preview of coming attractions for those who want a taste of the technical papers.
Siggraph attendees can relish work from UCLA and Walt Disney Animation Studios that combines "a Lagrangian/Eulerian semi-implicitly solved material-point method with an elasto-plastic constitutive model."
The rest of the world, though, can appreciate that work in the form of cartoons with realistic snowballs.
A snowball hits a wall in this simulation by UCLA and Walt Disney Animation Studios researchers.
(Credit: UCLA and Walt Disney Animation Studios)
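For the curious, here's roughly what that mouthful means in code. The toy 1D time step below is a minimal sketch of a generic material-point method, not the paper's actual solver: particles (the Lagrangian side) carry mass, velocity, and a deformation gradient, while a background grid (the Eulerian side) handles the momentum update. The real method solves its grid step semi-implicitly and swaps the plain elastic model used here for an elasto-plastic snow model; every constant below is invented for illustration.

    import numpy as np

    h, n_nodes = 0.1, 11          # grid spacing; nodes sit at x_i = i*h
    dt, E, g = 1e-4, 1e4, -9.8    # time step, stiffness, gravity (illustrative)

    xp  = np.array([0.42, 0.48, 0.55])  # particle positions
    vp  = np.zeros_like(xp)             # particle velocities
    Fp  = np.ones_like(xp)              # 1D deformation gradients (1 = rest)
    mp  = np.full_like(xp, 0.01)        # particle masses
    Vp0 = np.full_like(xp, 1e-3)        # initial particle volumes

    def mpm_step(xp, vp, Fp):
        m_g, mv_g, f_g = (np.zeros(n_nodes) for _ in range(3))
        # 1. Particle-to-grid: scatter mass, momentum, and elastic force
        #    through linear "hat" weights over each particle's two nodes.
        for p in range(len(xp)):
            P = E * (Fp[p] - 1.0)        # 1D linear-elastic stress
            i0 = int(xp[p] / h)
            for i in (i0, i0 + 1):
                w  = 1.0 - abs(xp[p] - i * h) / h
                dw = -np.sign(xp[p] - i * h) / h
                m_g[i]  += w * mp[p]
                mv_g[i] += w * mp[p] * vp[p]
                f_g[i]  -= Vp0[p] * P * Fp[p] * dw
        # 2. Grid momentum update (the paper does this semi-implicitly,
        #    allowing much larger time steps; this version is plain explicit).
        v_g, active = np.zeros(n_nodes), m_g > 0
        v_g[active] = (mv_g[active] + dt * f_g[active]) / m_g[active] + dt * g
        # 3. Grid-to-particle: gather velocities back, advect particles, and
        #    update each deformation gradient from the local velocity gradient.
        for p in range(len(xp)):
            i0, v_new, dv = int(xp[p] / h), 0.0, 0.0
            for i in (i0, i0 + 1):
                w  = 1.0 - abs(xp[p] - i * h) / h
                dw = -np.sign(xp[p] - i * h) / h
                v_new += w * v_g[i]      # PIC transfer (real solvers blend FLIP)
                dv    += dw * v_g[i]
            Fp[p] *= 1.0 + dt * dv
            vp[p], xp[p] = v_new, xp[p] + dt * v_new
        return xp, vp, Fp

    for _ in range(100):                 # simulate a hundredth of a second
        xp, vp, Fp = mpm_step(xp, vp, Fp)

That scatter-update-gather round trip, run over huge numbers of particles per frame, is what lets simulated snow both flow like a fluid and clump like a solid.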
Other research to be spotlighted at the show:
• "Automated video looping with progressive dynamism," which automates the creation of those eerie videos where only some parts of the scene moves. The automated approach, from researchers at Microsoft and the University of Illinois, can be guided by brushstrokes that indicate parts of the image that should remain stationary.
• "Structure-aware hair capture," a technique for modeling computer-generated hair based on photographs of the wispy, tangled reality of actual hair. It's from researchers at Princeton University, Industrial Light & Magic, and the University of Southern California. Alas, it doesn't yet work with dreadlocks or braids.
Ohio State University researchers bring realism to tearing paper and foil in work to be presented at the Siggraph conference.
(Credit: Ohio State University)
• "Adaptive fracture simulation of multi-layered thin plates," which means a more believable look to tearing foil and paper, from Ohio State University researchers.
• A device called Aireal that provides some tactile feedback to go along with videogames' usual video and audio information. The device directs puffs of air at a person and thereby "enables users to feel virtual objects, experience free-air textures, and receive haptic feedback with free-space gestures," according to the researchers from Disney Research Pittsburgh and the University of Illinois.
The Aireal from Disney Research shoots puffs of air at a person to try to give a more physical feel to videogames. The Aireal puffer is perched atop the screen.
(Credit: Disney Research Pittsburgh and the University of Illinois)
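As for how the video-looping entry works, the rough idea, shown in the brute-force sketch below, is that each unbrushed pixel gets its own little loop: a start frame s and period T chosen so that frames s and s+T look nearly identical at that pixel, letting playback wrap without a visible pop. This sketch covers only that per-pixel temporal term; the actual paper also optimizes agreement between neighboring pixels (the hard part), and its "progressive dynamism" control dials looping regions in and out. The function names here are invented.

    import numpy as np

    def pick_loops(video, static_mask, min_period=8):
        """video: (frames, H, W) grayscale array; static_mask: (H, W) bools,
        True where a user's brushstroke says 'keep this pixel frozen'."""
        n, height, width = video.shape
        start  = np.zeros((height, width), dtype=int)
        period = np.ones((height, width), dtype=int)   # period 1 = static
        for y in range(height):
            for x in range(width):
                if static_mask[y, x]:
                    continue                  # brushed pixels stay frozen
                best = np.inf
                for T in range(min_period, n):
                    for s in range(n - T):
                        # Temporal seam cost: mismatch between the loop's two
                        # endpoints at this pixel. Smaller = smoother wrap.
                        c = (float(video[s, y, x]) - float(video[s + T, y, x])) ** 2
                        if c < best:
                            best, start[y, x], period[y, x] = c, s, T
        return start, period

    def looped_value(video, start, period, y, x, t):
        # Output frame t at pixel (y, x): wrap t into that pixel's own loop.
        s, T = start[y, x], period[y, x]
        return video[s + (t - s) % T, y, x]

Rendering output frame t is then just a lookup per pixel; the researchers' real contribution is making all those per-pixel choices agree with their neighbors so the result doesn't shimmer.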
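The paper-tearing work, meanwhile, hinges on deciding where material connections should break. The toy below shrinks that decision to a 1D chain of springs pulled at one end: any spring stretched past a tear threshold snaps, and the strip separates there. The Ohio State method operates on adaptively refined, multi-layered thin-plate meshes rather than anything this crude, so treat it strictly as an illustration of the break test; every constant is invented.

    import numpy as np

    n, rest, k, tear_strain = 20, 1.0, 50.0, 0.35  # all values illustrative
    x = np.arange(n) * rest              # node positions along the paper strip
    v = np.zeros(n)                      # node velocities
    intact = np.ones(n - 1, dtype=bool)  # which springs still connect
    dt, m = 1e-2, 1.0                    # time step and node mass

    for step in range(500):
        x[-1] += 0.02                # pull the strip's right end outward
        ext = np.diff(x) - rest      # how far each spring is stretched
        # The fracture test: a spring past the tear threshold snaps for good.
        intact &= (ext / rest) < tear_strain
        fs = np.where(intact, k * ext, 0.0)  # tension in surviving springs
        f = np.zeros(n)
        f[:-1] += fs                 # each spring pulls its left node rightward
        f[1:]  -= fs                 # ...and its right node leftward
        v[1:-1] = 0.98 * v[1:-1] + dt * f[1:-1] / m  # damped explicit step
        x[1:-1] += dt * v[1:-1]      # end nodes are pinned/driven by hand

    print("strip tore at spring(s):", np.flatnonzero(~intact))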