An anonymous reader shares a report: A new algorithm creates 3D models of a person using standard video footage shot from a single angle. The system works in three stages. First, it analyzes a few seconds of video of someone moving, ideally turning through 360 degrees so all sides are visible, and extracts a silhouette separating the person from the background in each frame. Using machine learning techniques, in which computers learn a task from many examples, it then roughly estimates the 3D body shape and joint locations for each frame. In the second stage, it “unposes” the virtual human derived from each frame, making them all stand with arms out in a T shape, and combines the information from the T-posed estimates into a single, more accurate model. Finally, in the third stage, it applies color and texture to the model based on the recorded hair, clothing, and skin.
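The geometric core of the pipeline, unposing per-frame estimates into a shared canonical pose and fusing them, can be sketched in a few lines. This is a toy illustration, not the paper's method: real systems fit a parametric body model to silhouettes, while here a small point cloud stands in for the body, each frame's "rough estimate" is the true shape rotated by the subject's turning angle plus noise, and unposing is simply the inverse rotation. The silhouette and texturing stages are represented only by stand-in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Canonical "T-pose" shape: a toy point cloud standing in for the body model.
canonical = rng.normal(size=(50, 3))

def rot_y(yaw):
    """Rotation about the vertical axis (the subject turning in place)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rough_estimate(yaw, noise=0.05):
    """Stage 1 (stand-in): per-frame rough 3D estimate. A real system would
    segment a silhouette and run a learned shape/joint estimator; here it is
    the true shape rotated by the turning angle, plus noise for estimator error."""
    return canonical @ rot_y(yaw).T + rng.normal(scale=noise, size=canonical.shape)

def unpose(points, yaw):
    """Stage 2a: "unpose" a frame's estimate back into the canonical frame."""
    return points @ rot_y(-yaw).T

def fuse(unposed_clouds):
    """Stage 2b: combine per-frame estimates into one, more accurate model.
    (Stage 3, texturing, would then project recorded colors onto this model.)"""
    return np.mean(unposed_clouds, axis=0)

# A few seconds of "video": the subject turning through 360 degrees.
yaws = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
frames = [rough_estimate(y) for y in yaws]
fused = fuse([unpose(f, y) for f, y in zip(frames, yaws)])

# Averaging many noisy, unposed frames beats any single frame's estimate.
single_err = np.abs(unpose(frames[0], yaws[0]) - canonical).mean()
fused_err = np.abs(fused - canonical).mean()
assert fused_err < single_err
```

The point of the sketch is why stage 2 helps: once every frame is expressed in the same canonical pose, per-frame estimation errors are independent and average out, so the fused model is closer to the true shape than any single frame's estimate.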
Source: Slashdot – AI Can Generate a 3D Model of a Person After Watching a Few Seconds of Video
