You Can Fool Some of The People...


Here's a thing: idly reading around and watching various YouTubes this afternoon - as you do - I picked up on three things that seem to reflect where we are at the moment in terms of large-scale computing. First off, COBOL: the Common Business-Oriented Language, developed in the late fifties and released in '59, which became the de facto standard for handling significantly large amounts of data - principally financial transaction data - for banks, government departments and other large institutions. It is still widely in use across the world today, with virtually all ATM transactions mediated and processed by it, as I think I've mentioned before - and serviced by a rapidly dwindling number of programmers left in the wild who can understand and modify/repair/extend the existing codebases.

Secondly, I chanced upon an advert for a software product/service that claims to hide the fact that someone has used AI for content creation - of whatever kind, and for whatever purpose - from the intended recipient(s) of that content: for instance, using the widely-known ChatGPT to create a document from scratch from a simple set of instructions. The software/service is called 'undetectable AI'. It takes whatever text you want to pass off as your own - but which was in reality AI-generated - 'humanises' it, and produces a fresh text that will fool AI detection systems into thinking a real person most likely wrote it. The foregoing text most definitely was written by me, a human [at least I think I am: but let's not get too Cartesian at this time of the evening].

So, let's do a bit of an experiment here and ask ChatGPT to write something on the subject of the last paragraph's content. Here's the seed question: 'Write a short summary of the process by which undetectable AI hides the presence of AI by humanising the text'. Here's the output:

Undetectable AI often uses advanced natural language processing techniques to make AI-generated text appear more human-like. This involves several key steps:

  1. Contextual Understanding: The AI comprehends the context and nuances of the conversation or topic, allowing it to generate responses that are relevant and coherent.

  2. Emulating Human Writing Patterns: It mimics human writing styles, including varied sentence structures, informal language, idioms, and emotional undertones, to make the text seem more natural.

  3. Personalization: The AI tailors responses based on the user's previous interactions, preferences, and tone, creating a more personalized and engaging experience.

  4. Error Introduction: Occasionally, subtle errors or imperfections are introduced intentionally to mirror the occasional typos or inconsistencies found in human writing.

  5. Adaptive Learning: The AI continuously learns from interactions, adjusting its responses to better fit the expected human-like patterns over time.

By integrating these techniques, the AI can produce text that feels authentic and indistinguishable from that written by a human.

OK: so the piece was rightly flagged as definitely having been written by an AI. So, let's 'humanise' the output and try again. Which results in: me hitting a paywall, to which I say piss off, people: go find some other mug punter who wants to hide their lack of intellect or writing skills. I'll carry on as I am, thank you very much, and goodbye...
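For what it's worth, the 'humanising' steps in that list could be caricatured in a few lines of Python. This is a toy sketch of steps 2 and 4 only (informal phrasing and deliberate imperfection) - the actual service is proprietary, and none of the names or word lists below come from it:

```python
import random

# Hypothetical illustration only - NOT how 'undetectable AI' actually works.
# A tiny lookup of formal-to-informal rewrites, in the spirit of step 2.
CONTRACTIONS = {
    "do not": "don't",
    "it is": "it's",
    "cannot": "can't",
    "will not": "won't",
}

def humanise(text, typo_rate=0.0, seed=42):
    """Crudely 'humanise' text: contract formal phrases, optionally add typos."""
    rng = random.Random(seed)  # fixed seed so any typos are repeatable
    # Step 2 (caricature): nudge formal phrasing towards contractions.
    for formal, informal in CONTRACTIONS.items():
        text = text.replace(formal, informal)
    # Step 4 (caricature): occasionally swap adjacent letters to mimic a typo.
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < typo_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(humanise("It is clear that we cannot rely on detectors.", typo_rate=0.02))
```

Needless to say, a toy like this would fool nobody: real detectors look at statistical properties of the text as a whole, not the odd contraction or typo.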

Oh, and thirdly? Current IBM Z-series mainframe servers have AI hardwired in at chip level, with which certain kinds of [financial] transactions can be interrogated in near real-time to detect fraudulent activity: a whole different kettle of poisson from using some dumb text generator and a cloaking device to hide the reality of some people's shortcomings/evil intent from the public. Interesting to think of all that 'dumb' but innocent COBOL code still running reliably behind everything, though. Talk later. Cogito ergo sum, I think...

