Meet in the Meat World

Interesting piece in my Wired feed today, by Steven Levy, about AI and chatbots, which contained the following sub-paragraph:

'Last year, [Blake Lemoine] was fired from Google, essentially for his insistence that its LaMDA chatbot was sentient. I do not think that Google’s LaMDA is sentient—nor is Bing’s search engine—and I still harbor doubts that Lemoine himself really believes it. (For the record, he insists he does.) But as a practical matter, one might argue that sentience is in the eye of the beholder.'

The final sentence contains the clincher: the mere perception or interpretation of sentience in an AI is sufficient to render it sentient to the observer/consumer. No matter that any particular AI is based solely on the ultra-rapid statistical analysis of huge language datasets culled from the internet. I've referred before to Chomsky's now largely defunct, but laudable, concept of deep linguistic/proto-linguistic structures, which was intended to form the basis of machine translation of natural language, given the technological capabilities of the era. The concept failed on a number of fronts, but was philosophically elegant.

The irony now is that the brute-strength, imitative approach that Chomsky's theory sought to debunk is technically in the ascendant. This stuff works. To a point. But, like politicians and the gutter press, whether you can trust what it says is moot in the extreme. Question anything you don't possess the facts about, and certainly don't take any-thing/one that you can't meet face to face in the real world at face value...
