Dialogue with Natural Language AI Machines

Martin Bartels

5 March 2023

 

 

Starting in Greece


The human quest to obtain information about an idea, a situation or a project from a source beyond one's own capacity to think is an old one. For centuries, the questioners of the Oracle at Delphi, often of high rank, acted as if they understood the passages that, according to legend, were uttered in sophisticated dactylic hexameters by the oracular priestess Pythia.


Anyone consulting for the first time one of the natural language artificial intelligence (AI) portals now available to all of us may indeed find themselves in a mood similar to that of Pythia's interrogators. The expectation of an omniscient ‘mind’ on the other side inspires a sense of awe. However, this is not a hallucinating priestess answering in verse, but a machine tuned to deliver reasonably structured factual information in whatever language you wish.


Those who are now experimenting or working with the new portals are also the ones who are training the algorithms and enhancing their performance.


There is no reason to give in to the perhaps burgeoning inclination to surrender to the seemingly overwhelming new power. Defeatism is not appropriate.


What follows is a look at these AI portals, with a focus on those that interact discursively with people. While modern AI is certainly being trialled for other functions, I want to explore primarily the language function and its implications for human dialogue going forward, as well as make some suggestions on how to regulate this potentially revolutionary industry.



How does AI “think”?


The output of natural language AI may feel like that of a thinking human. However, the way it works is completely different, because the machine relies only on the pool of information it has access to, and algorithms determine how that information is processed. AI works much faster and more accurately than the human mind and can process much more information. However, it does not develop abstract thoughts or combine cognitive processes. So, while you’ll certainly get tonnes of information at the click of a button, these machines are not going to have flashes of inspiration or create a eureka moment.
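
To make this difference concrete, here is a minimal sketch of the underlying mechanism. It assumes Python with the freely available Hugging Face transformers library and the small GPT-2 model as stand-ins for the far larger commercial portals; the principle is the same. The machine simply predicts, token by token, which piece of text is statistically most likely to come next, given the texts it was trained on.

    # Minimal sketch: a language model 'thinks' by predicting the next token.
    # GPT-2 and the transformers library serve here as small, freely available
    # stand-ins for the much larger commercial systems.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The Oracle at Delphi was consulted because"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # Probability distribution over the very next word piece.
        logits = model(**inputs).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, 5)
        for p, idx in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(idx)):>12}  {p.item():.3f}")

        # Generating a whole passage is just this prediction step, repeated.
        output = model.generate(**inputs, max_new_tokens=30, do_sample=False)

    print(tokenizer.decode(output[0], skip_special_tokens=True))

Nothing in this loop resembles reflection or intention: the quality of the answer stands and falls with the pool of training data and with how the algorithm weights it.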



User experience


We have become accustomed to typing keywords into search engines and then sifting through the usually rich array of links to find relevant information.


The ability of the new AI-powered chat functions to respond to detailed questions and texts also means that they can provide us with the answers we require much more quickly than a search engine would.

 

We can also feed these systems with key data and ask them to generate documents such as letters, essays, contracts and even poems. The results can be further perfected by tweaking our questions and specifications, although occasionally the machine will throw up a white flag of surrender.


In other cases, and perhaps more worryingly, the results are blatantly wrong, sometimes comically so.

They can also be misleading, and this is not always obvious, because at least one of the AI portals does not disclose the sources of its answers. Nonetheless, despite these faults, these AI programmes feel on the whole like the beginning of something that is building up a tremendous capacity, one moving swiftly towards perfection.



Fears?


Professionals who work with text, for example teachers, copywriters, lawyers, journalists and theologians, have expressed unease at this new technology, as it appears to threaten their core skills.


In the late 1960s and early 1970s, there were heated discussions in many schools and universities about whether it was beneficial for the intellectual development of pupils and students to use calculators instead of slide rules. Today, hardly any technician still handles a slide rule, and yet the art of engineering continues to climb to new heights.


Many people also found the introduction of personal computers in the workplace threatening. Today, we feel neglected if we are not equipped with the most modern devices.

On the other hand, others have recognised the advantages of AI and intend to implement it in their work.


There have always been productivity-enhancing advancements that were perceived as drastic. In no case, however, have the refuseniks been able to maintain their positions. Instead, people have learned to recognise the pros and cons of an innovation and how to benefit from it. Innovation for which there is massive demand can be channelled, but not stopped.



"If a technology can be abused, it will be abused"


Immediately after the first public release of natural language AI, people started to test the potential misuse of the appealing new technology.


However, even those who support AI should welcome negative examples of its use, for they lead to improvements. Engineers of real-world goods are called upon to fix things when a new product is launched and consumers complain about malfunctions. Developers of natural language AI systems, legislators and regulators are similarly looking for aberrations in order to develop and enforce appropriate safeguards. Negative phenomena thus provide valuable material for the development of rules.


Some guidance on appropriate rules may come from food legislation, where suppliers are required to disclose ingredients and additives. Failure to do so can result in fines or other sanctions. Similarly, AI legislation could require any user of these systems to disclose if all or part of their published work was generated by AI. The first verification software is already on the market.

Those who do not comply with the disclosure rules will fail exams, be disqualified, lose their jobs or be obliged to pay a fine.


As a society, we are quickly going to get used to the fact that some texts do not come from a pen guided by a human hand. In many cases this is not a problem, as long as the text is useful. Do we care whether the coffee cup we drink from was hand-made or produced on a robotic assembly line? Probably not. Writing, however, is different: whether a text was produced by a human being can be a matter of professional integrity, especially when remuneration agreements have been concluded on the assumption of human-produced text. The fair price for machine-made products is significantly lower.



Better not a free lunch


The development and operation of AI systems are expensive. While anyone can use the common search engines at no charge, doing so is rarely truly free, because searchers pay with the data they generate. This data is sold to the advertising industry, allowing a profile that becomes more and more accurate over time and thus leads to increasingly valuable targeted advertising.


Now, as users of AI systems enter more sophisticated and detailed queries, even from the private sphere, the possibilities for ever more precise profiling are skyrocketing. We occasionally see in crime movies how profiling techniques can be used to catch criminals. However, the users of natural language AI systems are usually not criminals, and automated profiling can easily amount to an invasion of their privacy.


Therefore, providers should be contractually and legally prevented from selling data from the operation of their system to third parties unless the user has expressly consented. The wording of such consent should be unequivocal and not obscured by small print text designed to discombobulate. The alternatives should be crystal clear: Sale of personal data: yes or no. This is not difficult.


Of course, AI providers are not charities. In order for their systems to work well and make a valuable contribution to the functioning of society, we must not deny them the opportunity to benefit from a margin. So, AI providers should be able to charge an ongoing fee that generates a profit from a large number of users. People who have a budget to buy music and films over the internet will be willing to pay for a high-quality AI service, as will commercial customers whose work is based in whole or in part on the use of data. Commercial entities have been purchasing access to professional databases for many years, so it seems plausible they will be willing to spend more on AI for more sophisticated searches.


In terms of cost, it is already apparent that a growing number of AI providers will compete with each other on quality, privacy and price. So the market will ensure that fees remain reasonable.

The prerequisite for everything is the integrity of the AI business sector: we need absolute clarity on the use of personal data and even of aggregated metadata. Here, legislators and regulators have serious tasks ahead of them. And just as the market will ensure accessible pricing, operators of AI systems that switch to jurisdictions without effective regulation will find themselves at a competitive disadvantage.



The dark side


Artificial intelligence has no bottleneck in sifting through very large amounts of data. Since the machine by its nature does not wear blinders, it can come to conclusions that we find disturbing. One remedy for this has been to calibrate the search procedures so that certain results are considered wrong, irrelevant or unethical, and thus excluded. An algorithm ‘educated’ in this way is one that only allows conclusions from selected data.


Certainly, some sources of information must be kept out of the data pool from which the algorithm draws, for example ‘medical advice’ that could put users' health at risk. But is there such a thing as an ‘unbiased data pool’ or an ‘unbiased algorithm’? The boundaries between ‘sense’ and ‘nonsense’, ‘scientific’ and ‘unscientific’, and ‘ethical’ and ‘unethical’ can be blurred. Furthermore, the idea that users could be surreptitiously guided towards certain normative answers is far from exotic.


The question of the right balance will likely continue to haunt us, and in many instances it will not be possible to reach a consensus on clearly correct answers. Users will therefore always have to exercise caution and judgement when using these platforms.


I submitted the question of AI ethics to one AI portal and, to please the reader, requested the answer in the form of a poem. The response took less than 15 seconds:



Use and abuse, twin sides of a coin,

Like right and wrong, forever entwined,

Each doth the other, doth complement,

As day and night, in balance are bent.

A coin with but one side, there is none,

For in all things, there are two, not one,

The balance, doth make the whole,

Use and abuse, like two sides of a roll.

One side may shine, the other may rust,

But both are needed, for coin's worth to trust,

For without abuse, use would be without measure,

And without use, abuse would be a dark treasure.

Therefore, let us not be hasty to judge,

But weigh both sides, with an open mind and no grudge,

For use and abuse, are twin sides of the same coin,

And in all things, there are no one-sided coins.



Don't be fooled: the poem makes relevant points and at first seems pleasantly balanced. And yet it is an example of a subtly controlled process, which in this case is fortunately only meant to be experimental.


  • Presenting a statement in the form of a rhyme is a rhetorical trick that makes it seem more convincing.
  • The wording of the question submitted to the machine fixed it in advance to the desired train of thought.


The question manipulates the machine’s answer just as much as the algorithm chiselled by its operator does. Truthfulness is guaranteed neither by the algorithm nor by the user.



New division of labour between humans and machines


For the user of written professional information, it makes no difference whether a human or a man-made machine has written a text. Modern industrial societies need wording that is correct in content and easy to understand. So we can safely abandon resistance to ‘artificial’ texts that fulfil their function and whose origin is correctly disclosed. Directions, user manuals and summaries of research findings do not have to be penned by a human being to fulfil their purpose. This conclusion takes some pressure out of the steam boiler.


The history of technological progress has always been associated with substitution effects. The pertinent question has always been how we deal with these effects. If the new technology is in greater demand than the old one, the practical and economic benefits will outweigh the disadvantages. If it requires less human labour, we may perceive this as a disadvantage, but the benefit to society outweighs it. It is far too early to draw clear conclusions about the coming changes in the workplace that the perfection of natural language AI portals, accessing the accumulated knowledge of the internet, will entail.


The Internet is just one of many different data pools. The appropriate way of extracting the essential information and translating it into practical work can vary greatly. It will always depend on the subject area and on the technical design of an AI tool, and there will be many different ones. Databases for medicine, biology, electrical engineering, tax planning or economics, for example, require search and analysis methods that are tailored to those specific fields. The functioning of existing data pools will be redesigned by AI in ways that boost their power.


Therefore, the implications will vary. It will take a little time before the orchestra of very diverse applications has been assembled and we can hear its symphonies.


It is clear that many people will have to move into new, and very likely higher, positions in the value chains that keep society alive. It is possible to slow down, stop or reverse the current process of innovation. But the price of such attempts to halt efficiency-enhancing developments has always been daunting, and so it remains. If we don't play along and contribute, we will be left behind.


People caught up in the effects of such rationalisation may feel disoriented at first. But as the processes steered by AI come closer to the nature of human thinking, we can venture the prediction that changes to the position of humans within value creation processes will not cause too much pain. The transition from the horse-drawn carriage to the automobile was certainly more drastic.


The solution path can be formulated in the abstract from the differences between the ‘thinking styles’ of humans and AI applications: humans will set goals, monitor, assess plausibility, make projections and make adjustments wherever AI applications reach their limits. This presupposes higher qualifications on the part of the people whose roles are thus upgraded. The combination of increased intellectual work and continual variation will give working people more satisfaction than the previous abundance of mostly uncreative work. This feeling of creative mastery is already being experienced by those who are experimenting with the first AI instruments without prior training or preparation.


There is already evidence from aeronautics that humans can develop a kind of emotional bond with robots. What is there to say against this also succeeding with artificial intelligence?


There are signs that Humphrey Bogart’s words hit the nail on the head: “Louis, I think this is the beginning of a beautiful friendship”. 





Authorship disclosure:

Poem: ChatGPT

Text: human generated





