Over dinner a few months ago, a fellow Bronco alumna and I were talking about her work. Where my world is marketing and emails, hers is clients and numbers. Though we’ve been friends since our first year at Santa Clara, our academic paths barely crossed. She’s a financial advisor; back in school, most of her time was spent in Leavey while I was typically running around in Arts and Sciences. Our industries are distinct enough from each other that I was shocked to hear her talk about “robo-advisors” stirring up competition in her world of financial services.
Just the word alone – RoboAdvisors! – conjures imagery of RoboCop sitting behind a desk in a suit and tie.
Coming from an environment where every other company proclaims to “disrupt” the way outdated industries work, this is the first time I actually got a little scared of disruption. Never mind the number of things that could go wrong if an automated robo-advisor took over your funds – what’s so bad about human-to-human advisor-to-client relationships that the advisor model even needs to be disrupted?
The whole reason clients choose advisors is so they have an understanding person on their team: someone who has seen financial cycles and knows how current events influence individual and societal spending. Someone they can trust.
Apparently my thinking is passé, because trusting a person with your funds isn’t necessarily what every adult demands. Some adults want automated recommendations based on big data from years past. Algorithm, algorithm, algorithm. But do algorithms really know my best next move based solely on my age, gender, and location?
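To make the worry concrete, here is a deliberately simplistic sketch of what advice-by-demographics looks like. This is my own illustration, not any real robo-advisor’s logic; it uses the well-known “100 minus age” rule of thumb, and the function name is hypothetical.

```python
def naive_allocation(age: int) -> dict:
    """Recommend a stock/bond split based on nothing but age."""
    stocks = max(0, min(100, 100 - age))  # classic "100 minus age" heuristic
    return {"stocks": stocks, "bonds": 100 - stocks}

# Two people with wildly different goals, debts, and lives get
# identical advice as long as they share a birthday:
print(naive_allocation(30))  # {'stocks': 70, 'bonds': 30}
```

The point isn’t that real robo-advisors are this crude; it’s that any purely demographic rule collapses very different people into the same recommendation.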
I know I said “Sky’s the limit!” on my last post imagining dreamy worlds of travel enhanced by technological advances, but maybe we should draw the line when it comes to interpersonal jobs.
We’re surrounded by algorithmic choices every day, just steps away from artificial intelligence built into everyday tools. While I understand that no technological advance is without potential bugs, we’ve got to look at the cost of certain bugs and weigh them differently than others.
Did you make a “Year in Review” video on your Facebook account last year? Capitalizing on people’s collective reflective spirits during the holiday season, Facebook offered to generate a montage of the most “popular” Facebook moments in your Timeline. In this case, “popular” meant the updates and uploads that garnered the most attention from your Facebook network through Likes and comments. Though Facebook ended every video with “It’s been a great year!”, that wasn’t the case for every user.
In 2014, the death of Web design consultant Eric Meyer’s daughter was the most significant event of his year (and probably his lifetime). By the Facebook algorithm’s standards, a picture of her face was Meyer’s most “popular” moment. Moved to grief by imagery of his late child, Meyer described Facebook’s automated post urging him to create a Year-in-Review video as “inadvertent algorithmic cruelty.”
To show me Rebecca’s face and say “Here’s what your year looked like!” is jarring. It feels wrong, and coming from an actual person, it would be wrong. Coming from code, it’s just unfortunate. These are hard, hard problems. It isn’t easy to programmatically figure out if a picture has a ton of Likes because it’s hilarious, astounding, or heartbreaking.
Facebook’s algorithm had engagement metrics, but it simply did not have context.
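That gap is easy to show in miniature. The sketch below (hypothetical data and field names, not Facebook’s actual code) ranks posts purely by engagement, exactly the kind of metric a Year-in-Review picker might use, and the saddest post wins on raw numbers:

```python
# Hypothetical timeline data for illustration only.
posts = [
    {"caption": "New puppy!",             "likes": 120, "comments": 30},
    {"caption": "Graduation day",         "likes": 200, "comments": 45},
    {"caption": "Remembering my daughter", "likes": 450, "comments": 300},
]

def top_moment(posts):
    """Pick the 'most popular' post by engagement count alone."""
    return max(posts, key=lambda p: p["likes"] + p["comments"])

# An outpouring of condolences looks identical to an outpouring of joy:
print(top_moment(posts)["caption"])
```

Nothing in the data tells the ranking function whether those 750 interactions were congratulations or condolences; that distinction lives entirely outside the metric.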
While I know human financial advisors can make gaffes, I’m not convinced robo-advisors are any less fallible. How much can you trust a system that cannot factor in context?