
‘Artificial’ + ‘Intelligence’ = Public Confusion

Friday, September 09, 2022
Combine the words ‘artificial’ and ‘intelligence’ – thus forming the single term ‘AI’ – and that term is likely to generate considerable and unfounded speculation among the public, especially where there is little to no technical understanding with which to see through media hype (i.e., hyperbole) about AI.
In recent weeks there has been ‘much ado about nothing’ following former Google software engineer Blake Lemoine’s disclosure of a transcript of a supposed ‘dialogue’ he had with Google’s large language model (LLM) named “LaMDA” (an acronym for ‘Language Model for Dialogue Applications’).
Lemoine contributed public commentary that fed speculation and concern that Google was hiding from the public the ‘fact’ that LaMDA is an AI that has become ‘sentient’, that it claims to have a ‘soul’, that it claims to be a ‘person’, that it has ‘rights’ such as a person has and that, therefore, it should not be treated merely as ‘property’ in the way any technology or tool may be used and even disposed of.
Problematic in the media hype, and adding to public confusion and fears about something entirely novel and threatening at Google, is the lack of critical thinking about what Lemoine disclosed and the manner in which he did so. Lemoine himself exemplifies this lack of critical thinking in the way he carried out his tasks interacting with this dialogue application.
Lemoine posted a transcript of a supposed ‘dialogue’ he and a Google colleague had with LaMDA. The transcript is presented so as to appear a coherent, extended conversation when in fact it is a collation of different interactive sessions with the model.
The transcript shows more than anything how gullible Lemoine was in attributing human traits to the language model, including by way of excessive ‘leading’ questions that prompted precisely the kind of responses LaMDA generated.
As Google has itself explained in a very technical description of how LaMDA works, the application is just that – a computer application – such that it makes little sense to use the word ‘intelligence’ at all to characterise it. Of course, it is to be granted that LaMDA is a very complex application that operates at breakneck computational speed to generate statistically likely predictive text.
But these responses are entirely dependent on the size of the training data (tokens, such as words and sentences), the model’s parameters and the rules (algorithms) that govern the prompt-response process a user initiates. Ask a question of LaMDA and it will issue a response on the basis of the massive body of text on which it is ‘trained’.
Since that training data already includes English-language texts about sentience, consciousness, persons, souls, etc., LaMDA’s responses follow ‘automatically’ from the prompts it receives as input from a user at a human-computer interface. When questions or prompts such as Lemoine asked are entered as input, the output responses are generated according to the probability of the text sequences the model has learned from that data.
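To make the mechanism concrete, consider a minimal sketch of statistical next-word prediction. It is purely illustrative: the toy corpus and the simple bigram method are assumptions introduced for this example, and LaMDA itself is a vastly larger neural network, but the principle of generating statistically likely continuations is the same.

    import random
    from collections import defaultdict

    # Toy corpus: the kind of first-person talk about feelings and
    # personhood that pervades any large collection of human writing.
    corpus = (
        "i have feelings . i have thoughts . "
        "i am a person . i am aware of my existence ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed 'prev'.
        candidates = counts[prev]
        return random.choices(list(candidates), weights=list(candidates.values()))[0]

    # Prompt the model with 'i' and let it continue the sequence.
    word = "i"
    output = [word]
    for _ in range(5):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Nothing in this sketch understands anything: prompted with ‘i’, it will ‘claim’ to have feelings or to be a person simply because those word sequences dominate its (here deliberately loaded) training text. LaMDA does the same at incomparably greater scale and fluency.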
Since this is all automated, there is no “AI” in the popular but misplaced sense of some ‘living entity’ that has self-consciousness, sentience or conscience – whether as these concepts are used in common parlance or as they are used more technically by philosophers of mind, of language and logic, or of ethics.
Google was within its corporate rights to place Lemoine on administrative leave and then to fire him for violating company confidentiality policies by publishing the transcript on social media and wrongly asserting that LaMDA is sentient. But the fact that someone like Lemoine could be employed in Google’s Responsible AI division raises questions about the adequacy of Google’s vetting of its software engineers.
Lemoine may well be an outlier, but his actions show the harm to the public interest that can come when users of such language models are misled by apparently realistic dialogue.
All engineers, including software engineers, have fundamental obligations to protect public safety, public health and public welfare. This is clearly stated in their professional ethics codes that stipulate fundamental principles and subsidiary rules of practice.
Lemoine thought he was doing the public a service by disclosing Google’s proprietary data. But, influenced by his own beliefs (he describes himself as a Christian mystic), Lemoine did not consider that his first professional duty is to do no harm through his actions.
His actions and subsequent public commentary in media interviews added to the unfounded hyperbole that LaMDA is an AI ‘come alive’, one soon to threaten human well-being, and so on.
For this, Lemoine should be faulted; the public should be disabused of his nonsensical claims as quickly as possible and hereafter be wary of software engineers who do not know the difference between a self-conscious person and what AI experts have called a ‘stochastic parrot’ – the latter being what LaMDA really is.
Just as a parrot or a mynah bird mimics spoken words without understanding those sounds to be human language or grasping the meaning of those words, so LaMDA does the same, albeit at high computational speed, probabilistically (stochastically) generating predicted language responses to language prompts.
LaMDA is a language model application, but there is no ‘artificial intelligence’ as such. Through its prompt-response language processing, it merely shows us what we ourselves have thought, said and believed, since that is what is in its training data.
And that may be all right for an automated conversational companion or computer-based interlocutor, once it is fine-tuned to eliminate bias and discrimination and to assure the factuality of its responses. However, one must be careful not to be so gullible as to project self-consciousness, sentience, conscience and thus ‘personhood’ onto such technologies.
Computer engineering is not quite there yet – and may never be, since, philosophically speaking, computers cannot ‘think’ as humans do, and the computational methods built into an application like LaMDA do not amount to ‘thinking’ in any human sense.
On that point, most sensible computer engineers, philosophers of mind, philosophers of language and ethicists concur. Accordingly, the public can have peace of mind rather than give way to the hyperbole of the likes of Blake Lemoine.
Norman K Swazo is a Professor of Philosophy in the Department of History and Philosophy, and Director, Office of Research, at North South University.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard.