Tommy Fernando
The article below, which I just found today, discusses the pitfalls of using AI-generated articles, for obvious reasons: in using large language models like ChatGPT, one is exposed to decisions made within the software that are not merely relations between concepts, but also higher-level choices based on human judgments about the truth or validity of meanings. 'AI and Ethics' is a most interesting and important topic today, as most software developers have little idea of what ethics is even about.
The Outputs of Large Language Models are Meaningless. Anandi Hattiangadi & Anders J. Schoubye – forthcoming – in Herman Cappelen & Rachel Sterken (eds.), Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
In this paper, we offer a simple argument for the conclusion that the outputs of large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Gemini, are meaningless. Our argument is based on two key premises: (a) that certain kinds of intentions are needed in order for LLMs’ outputs to have literal meanings, and (b) that LLMs cannot plausibly have the right kinds of intentions. We defend this argument from various types of responses, for example, the semantic externalist argument that deference can be assumed to take the place of intentions and the semantic internalist argument that meanings can be defined purely in terms of intrinsic relations between concepts, such as conceptual roles. We conclude the paper by discussing why, even if our argument is sound, the outputs of LLMs nevertheless seem meaningful and can be used to acquire true beliefs and even knowledge.
I have not read this article, but one can gain access to it with some form of payment.
