Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.
A company named Naked Labs recently released a new product. It is a mirror bordered by 3D cameras, combined with a small scale that rotates as a person stands on it. As the user is spun around, the cameras take pictures: the result is a full-body 3D scan that is ultimately depicted as a silhouette of the person. As writer Lauren Goode describes it in an article for Wired magazine, “The processed image files are… shared from the mirror to the cloud, and then to the Naked Labs mobile app,” which then displays the user’s silhouette accompanied by additional statistics.
Those include weight, but also body fat, lean mass, and fat mass. “You can swipe through different parts of your body and get measurements for your waist, chest, each thigh, each calf,” explains Goode. She adds that Naked Labs assesses those aspects
algorithmically by comparing your images against a database of body shapes and DEXA scan data (… [which] measures bone density as well as body fat estimates). The Naked Labs approach is supposed to get better over time, which is the promise of a lot of quantified-self products: Feed us more data and we will, in turn, feed you the information you need to be a better you.
However, according to the Naked Labs team, “most of the prediction algorithms for body fat and body muscle currently derive from data from white males.”
Several other articles announced the launch of this new product (see, for example, “Naked Labs’ body-scanning mirror might be the smartest thing in the CNET Smart Home,” or “Naked And a Little Afraid: Testing the Body-Scanning Mirror”), and a number of them address privacy concerns about the data collection involved—but only the Wired piece seems to highlight the data on which the prediction algorithms were trained.
The Wired article also mentions a related NIH-funded study called “Shape Up,” which aims to determine the utility of body scans for health-related purposes. That study is tracking 1,500 participants “from five different ethnic backgrounds,” including men and women of a variety of ages.
Is it ethical to release products that seek to quantify various aspects of people’s physiques but that rely on algorithms trained on data from a particular subset of potential users, and which are supposed to “get better over time” as they ingest and learn from data collected from their (perhaps more diverse) users? Does the answer vary depending on the purpose(s) for which such products are intended?
The question is urgent, and much more fraught in contexts that don’t involve mirrors and fitness settings—including, as one article explains, in “AI-powered systems… [that] are quickly becoming the next frontier in health care.” Dave Gershgorn’s article, titled “If AI Is Going to Be the World’s Doctor, It Needs Better Textbooks,” mentions, among other examples, a 2017 Stanford study of AI used in skin cancer diagnosis. The study was published in Nature. Shortly after its publication,
co-author Brett Kuprel told Quartz that the only datasets they could find [on which to train their AI] were made up mostly of lighter skin samples. … Sebastian Thrun, a Stanford University professor, co-author of the paper, and a legendary name in AI research … told Quartz via email that the team did not test for variation in skin tone. ‘I agree much more work is needed before we can confidently recommend such a technique for field use,’ Thrun wrote.
One key question, then, is that of ethical deployment. When are AI-powered tools ready for field use? What kinds of disclosures and caveats should accompany them, in order to make their deployment more ethical? As Gershgorn puts it, “This isn’t a problem we can table for future ethicists to handle.”
Photo by Mac Jacobs, cropped, used under a Creative Commons license.