Do algorithms institutionalise prejudice?

When I was a kid, it was operating-system software that ruled the world: CP/M; PC-DOS; MS-DOS; then Windows.

Software allowed you to do stuff with the outside world.

Print to printers.

Save to floppies.

Later, even email and browse.

Humans still took editorial decisions, mind.  I remember (I think) a Netscape project where real people (yes, flesh and blood!) scoured the web for interesting things you might want to find.  A bit like Google today, but a manual operation of grand and creative idiosyncrasy.

They were living algorithms.  Entities which made decisions based on prior experience, tact, intelligence and – hey-ho – intuition.

Mostly quite sensible, but with a dash or two of mysterious procedure, too.  And why not?  After all, that is life as humanity has always experienced it.

Then came movements to bring software out into the open.  Too many liberties were being taken under lock and key.  We didn’t just want to drive our virtual engines; we wanted to have the right to take them apart and rebuild them.

So the open source movement is now grand and important.  It’s had its recent failures – but for every failure uncovered in open source, closed source has surely harboured a hundred or a thousand of its own.

However, Microsoft, the great closed-source software publisher of the late 20th century, no longer rules the web – or our lives – in quite the way that, say, Google and Facebook now do.  Google’s and Facebook’s rules operate through the mysterious actions of the aforementioned algorithms: mathematics substituting for the decisions of once very real editors.

Those editors weren’t infallible.  They were prejudiced; they were ethnocentric; they could hardly be anything but a product of their times and places.  Yet (I assume), despite these imperfections, their practice was used to define their automated cousins.  As this wonderful post points out:

[…]  Like the problem with Google algorithms defining “beauty” as whiteness per layers and years of discrimination, there is no way to amplify marginalized voices if structural inequality is reflected in our algorithms and reinforced in user pageviews.

Now of course, I may be wrong.  I’m not privy to the interior workings of Google or Facebook’s algorithms.  But surely that was exactly the problem we faced all those years ago when operating-system software ruled the roost.  And it was resolved, in great part, by open-sourcing parallel forms of code.
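I’m no engineer at either company, but the feedback loop the quoted post describes is easy enough to sketch.  Here is a toy Python model – every name, count and probability in it is hypothetical, a sketch of the general mechanism rather than anyone’s actual ranking code – showing how a tiny initial head start (standing in for historical bias in the data) snowballs once results are ranked by past clicks:

```python
# A toy "rich get richer" ranking loop: results are ordered by past
# clicks, and whatever ranks first attracts most of the new clicks.
# All items, counts and probabilities here are hypothetical.
import random

random.seed(1)

# Two otherwise identical items; "a" starts with a one-click head start,
# standing in for a bias already present in the historical data.
clicks = {"a": 11, "b": 10}

for _ in range(10_000):
    # Rank by accumulated clicks: the current leader is shown first.
    ranking = sorted(clicks, key=clicks.get, reverse=True)
    # Position bias: the top slot gets ~90% of clicks, regardless of
    # the items' (identical) intrinsic quality.
    chosen = ranking[0] if random.random() < 0.9 else ranking[1]
    clicks[chosen] += 1

print(clicks)  # roughly {'a': 9000+, 'b': 1000+} – the head start has snowballed
```

Run it and the one-click edge turns into an apparent landslide of popularity – the kind of self-reinforcement you could only diagnose, let alone correct, if you could inspect the code and the data it feeds on.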

Curiously, I don’t hear a great clamour to do the same with search or social-network algorithms.  Maybe I’m reading in the wrong places; maybe I’m just a bit too mainstream for my own good.  I do faintly remember calls for Facebook to be a little more transparent about how it automates the decisions that let content into users’ feeds – or keep it out – but nothing as radical or downright as open-sourcing the whole caboodle.

I find this difficult to understand.  Algorithms now control what we see, do, act on and even feel.  They affect our perception of the world around us much more than traditional software and operating systems ever did. In the early days of PCs, the code was still an extension of ourselves.  Now, to a great extent, we have become extensions of the code.

So.  Do algorithms institutionalise prejudice?  And will they continue to do so until they are open-sourced?  I think the answer to both questions is: “Yes!”

And whilst the former continues to be the case, the latter needs to happen.
