The latest story about Facebook conducting a psychological experiment by tinkering with 600,000 users’ walls has sparked a number of discussions. I would argue that, technically, it’s no big deal: one more A/B test, in which a website shows two slightly different alternatives and measures how users respond to each one.
We know Facebook applies filters to what we see on our Facebook wall: if they showed us everything our friends share, the wall would scroll non-stop. We suspect that they try to pick the stories that seem to interest us most, but we also know that this space is for sale to anyone willing to spend money.
So, no, I’m not shocked that they decided to apply a “sadder” or “happier” filter to some users’ timelines for a while. For all we know, they could have done the same thing to see whether on-line purchases or ad clicks would go up.
For me the real issue is power. And it’s closely related to what I was writing a couple of weeks ago about Google and the right to be forgotten: The power Google, Facebook and a handful of other on-line giants have is beyond anything we’ve seen so far.
Yes, Google and Facebook do affect our understanding of the world. We know that Google can make or break a business when it changes its ranking algorithms, or that it can destroy someone’s personal life or political career by favouring a top result that portrays them in a certain way. And obviously, Facebook could alter the way we think about a political party, a debate, or a social issue, or even the way we feel, by favouring some of our friends’ posts over others.
In all these cases, their answer is always the same:
- It’s not personal, there is no deliberate bias: What we show is automatically generated by algorithms.
- We do what’s best for our users.
Now, the fact that “it’s not personal” is not very reassuring. I’m willing to believe that none of these companies has deliberately put bias into the way they present results, though I wouldn’t be surprised if it turned out that in some countries results are picked in a way that favours, say, US interests.
But what if the algorithm that constantly A/B tests user behaviour by altering font sizes, content ranking, and image sizes discovers that showing 10% more violence or 13.5% simpler texts, or 19.2% fewer results from academic sources, or 9.2% more results that favour left or right political parties, increases the click-through rate (and the revenues) of the company? That’s not personal, it’s just more efficient.
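To make the mechanics concrete, here is a minimal sketch of such an optimizer. The variant names and click rates are entirely made up; the point is only that an engagement-maximizing loop treats the emotional tone of a feed as just another knob, and keeps whatever setting wins.

```python
import random

# Hypothetical content variants an engagement optimizer might test.
# The true click-through rates below are made up for illustration.
VARIANTS = {"neutral_mix": 0.030, "more_negative_mix": 0.034}

def run_ab_test(variants, impressions=300_000, seed=42):
    """Serve each variant to a random split of users and count clicks."""
    rng = random.Random(seed)
    names = list(variants)
    shown = {name: 0 for name in names}
    clicks = {name: 0 for name in names}
    for _ in range(impressions):
        name = rng.choice(names)            # random assignment to a variant
        shown[name] += 1
        if rng.random() < variants[name]:   # simulate the user clicking or not
            clicks[name] += 1
    # The optimizer keeps whichever variant maximizes click-through rate,
    # with no notion of *why* it wins.
    return max(names, key=lambda n: clicks[n] / shown[n])

winner = run_ab_test(VARIANTS)
```

Nothing in this loop encodes an opinion about violence, politics, or academic sources; it simply converges on whatever content mix produces more clicks.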
As for the “we do what’s best for our users” part, well, that’s bull. They don’t know what’s best for you or me; most of the time, we don’t even know ourselves. “What’s best for our users” translates, in the best case, to “what makes users use our service more”, which, don’t get me wrong, is a totally legitimate and acceptable benchmark for any kind of business, but it’s not the same thing.
But the biggest question is not how they use the enormous power they have. It’s who controls it.
When we moved from monarchy to democracy, we didn’t do it because all kings were bad. In most democratic countries, our history is full of kings we hold in high regard for what they did for the country; some of them, even some of the “bad” ones, did amazing things for the arts and sciences, too. We moved to democracy because we wanted more people to have control over that power, to be able to decide for ourselves what’s good for us, and not to be told.
We are at a similar crossroads. We have entities (corporate entities, in this case) with political, social and financial power on a global scale that exceeds anything we’ve seen so far. The question we will have to answer sooner or later is not how they use this power, but who controls it.
This article was originally posted on Medium.