Trust in Me?


In the light of today's outages of Facebook, WhatsApp and Instagram, two recent articles I read stand out - the first in the current issue of Wired, the second from this weekend's FT magazine. In the FT, Tim Harford talks of the potential benefits of AI, and of its drawbacks as it stands. In his description, AI is currently neither a murder weapon nor a cure for cancer, but perfectly good for knocking in nails.

Given the experiences of one woman, reported in Wired's October issue by Maia Szalavitz, I don't think I would be quite so generous. The details of her case are grim: a US psychology graduate living with endometriosis, she suffers a very bad episode that sees her in hospital on opioids for the extreme pain. Admitted for observation and placed on intravenous analgesia, all goes normally until her fourth day there. Out of the blue, a different member of staff from the one who had started her treatment effectively accuses her of being an opioid addict; all further treatment and observation is withdrawn and she is discharged from the hospital. At home a fortnight later, she receives news that her gynaecologist is "terminating" their relationship because of "a report from the NarxCare database". At the heart of these erroneous decisions was AI. An algorithm deployed by the database system had flagged her as having "problems", effectively marking her out as an addict (she wasn't), undeserving of the care that it must have been patently obvious from her medical records she desperately needed.

Over-reliance on machine-made or machine-mediated decisions is one of the reasons the modern world of work is so deeply unpleasant for many of those who work for large, corporately driven organisations, whether privately or publicly owned. I speak from personal experience, and know many people within my own social and familial ambit who have experienced bullying and harassment justified by the abusive application of IT. Testimony to the scope of these problems is evident in the huge number of comments - anecdotal and verifiable - online. There is a seemingly endless litany of exposés of deliberate and often malicious misapplications of tech and statistics, reported in both fringe and mainstream news, that are for the most part effectively ignored and go lightly punished, if punished at all: the British Post Office scandal [blog posts passim] being a particularly nefarious case in point.

The world we inhabit now couldn't function without tech. That's a given. But we should be very, very wary of putting all of our decision-making eggs in one basket. AI, like all computer-based tech, is the product of many hundreds of thousands of programming hours by many, many people - all with feet of clay. Anyone who knows coding - better yet, has coded themselves - knows that there is always a trace of the programmer's personality and social biases in the code they produce. It is inherently flawed. Like everything else in life, one needs an overview based on education, experience and human intuition in order to see the reality amongst the ghosts that machines will always - always - throw at us. How flawed we are as a species is the measure of how flawed our tech is, AI included. To keep our tech in check, we first need to learn how to keep ourselves in check.
