Body Labs has launched its SOMA Shape API. SOMA provides a backend server that predicts a 3D model and digital measurements from a single photo plus a person's height, weight, and gender. The SOMA Shape API allows third-party apps to easily connect to the SOMA backend. With API access to such technology, third parties can build experiences around the individual, whether that experience is for shopping, gaming, or anything else.
"The SOMA Shape API combines CNNs with our statistical model of human shape and pose to enable developers to easily fit our 3D models to real-world image data in a wide range of clothing and lighting conditions," Body Labs co-founder and CTO, Eric Rachlin, commented in a press release. "Using just a single photo, height, weight, and gender, our API provides state-of-the-art 3D shape estimation suitable for a wide range of real-world use cases. The approach we've taken offers a highly scalable, accurate, and reliable solution that we're confident will continue to improve over time."
On the backend, SOMA uses AI designed to analyze the single photo and body data to predict the 3D model. To do so, SOMA identifies specified landmarks and extracts measurements. Body Labs hopes that SOMA can fill the gap that currently exists when consumers shop for clothing online. API access to such technology could standardize the process across a wide range of e-retailers. If successful, the need to order four sizes of every item only to return three could go away.
The Shape API uses two resource types: files and artifacts. The file endpoint accepts external input: a photo, height, and so on. The artifact endpoint delivers a computed object: the 3D mesh and measurements generated by SOMA for a particular subject. The typical user flow proceeds in this order: (1) create a new file resource, (2) upload an image, (3) set the file version, (4) create a new artifact resource, (5) poll the artifact status, and (6) download the artifact. For more details, check out the API docs and release notes.
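To make the six-step flow concrete, here is a minimal Python sketch. The client below is a stub that simulates server responses so the sequence itself is the point; the method names, payload fields, status values, and the two-poll completion are illustrative assumptions, not the real SOMA API surface (the actual endpoints and schemas are in the API docs).

```python
import time

# Stub of the Shape API's file/artifact flow. All names and payloads
# here are hypothetical, for illustration only -- consult the real
# API docs for actual endpoints and schemas.
class StubShapeClient:
    def __init__(self):
        self._files = {}
        self._artifacts = {}
        self._polls = {}

    # Step 1: create a file resource holding the subject's metadata.
    def create_file(self, height_cm, weight_kg, gender):
        file_id = f"file-{len(self._files) + 1}"
        self._files[file_id] = {"height_cm": height_cm,
                                "weight_kg": weight_kg,
                                "gender": gender,
                                "image": None, "version": None}
        return file_id

    # Step 2: attach the single photo to the file resource.
    def upload_image(self, file_id, image_bytes):
        self._files[file_id]["image"] = image_bytes

    # Step 3: pin the file version so the artifact is computed
    # from a fixed, immutable input.
    def set_file_version(self, file_id, version):
        self._files[file_id]["version"] = version

    # Step 4: request a computed artifact (3D mesh + measurements).
    def create_artifact(self, file_id):
        artifact_id = f"artifact-{len(self._artifacts) + 1}"
        self._artifacts[artifact_id] = {"file_id": file_id,
                                        "status": "processing"}
        self._polls[artifact_id] = 0
        return artifact_id

    # Step 5: report status; this stub "finishes" after two polls.
    def get_artifact_status(self, artifact_id):
        self._polls[artifact_id] += 1
        if self._polls[artifact_id] >= 2:
            self._artifacts[artifact_id]["status"] = "completed"
        return self._artifacts[artifact_id]["status"]

    # Step 6: download the finished artifact.
    def download_artifact(self, artifact_id):
        assert self._artifacts[artifact_id]["status"] == "completed"
        return {"mesh": "<3d-mesh-bytes>",
                "measurements": {"chest_cm": 98.0}}


def fetch_body_model(client, photo, height_cm, weight_kg, gender):
    file_id = client.create_file(height_cm, weight_kg, gender)  # 1
    client.upload_image(file_id, photo)                         # 2
    client.set_file_version(file_id, 1)                         # 3
    artifact_id = client.create_artifact(file_id)               # 4
    while client.get_artifact_status(artifact_id) != "completed":  # 5
        time.sleep(0)  # a real client would back off between polls
    return client.download_artifact(artifact_id)                # 6


result = fetch_body_model(StubShapeClient(), b"<photo-bytes>",
                          178, 75, "male")
```

The split between a file resource (raw inputs) and an artifact resource (computed output) is what makes the polling loop necessary: the 3D fit is asynchronous, so the client creates the artifact, then checks its status until the backend finishes.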