
Procedural and semantic models in the description of text-generating functions of linguistic neural networks

S.V. Gusarenko, M.K. Gusarenko

UDC 81'32
DOI 10.20339/PhS.6-25.117

 

Gusarenko Sergey V.,

Doctor of Philology,

Professor of the Linguistics Department

North Caucasus Federal University

e-mail: sgusarenko@mail.ru

https://orcid.org/0009-0000-9245-2255

Gusarenko Marina K.,

Candidate of Philology,

Associate Professor of the Linguistics Department

North Caucasus Federal University

e-mail: mkgusarenko@mail.ru

https://orcid.org/0009-0005-9312-8621


 

Linguistic neural networks are a product of human intelligence, yet the semantic procedure that forms the individual meanings and the overall meaning of a generated text has not yet been fully described either by artificial intelligence specialists or by linguists working in this field. Under these circumstances, it seems advisable to study meaningful interpretations of how neural networks operate, in particular to construct models of the semantic operations minimally necessary for a deep understanding of texts by neural networks. It is concluded that the procedure for generating a direct answer to a question about a text can be represented as a general model comprising the following semantic operations: converting the inverted structure of the question in the prompt into the direct structure of a representative answer sentence; identifying, in the text analyzed by the neural network, descriptions that are coreferential with the descriptions in the prompt; consulting frame structures (or ontologies) to detect semantic links between these descriptions; and identifying in the text a semantic structure that corresponds to the task and to the structure of the previously formed answer sentence. The neural networks were also able to recognize the humorous nature of a text that was unfamiliar to them and had never been published, which indicates an ability to identify a comic device regardless of the material on which it is realized. This, in turn, suggests that the so-called attention mechanisms of the networks studied identify latent connections and dependencies relevant to the task, connections that, under certain conditions in the text and within certain linguistic cultures, can be identified as the basis for a comic effect.
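To make the proposed four-operation model more concrete, the following minimal Python sketch walks a toy question through the pipeline. It is an interpretive illustration only: the function names, the toy ontology, and the naive string matching are assumptions introduced here for exposition and are not taken from the article; simple Python functions stand in for what, in a real network, are distributed transformations over token representations.

# Illustrative sketch of the four semantic operations of the model.
# All names, the toy ontology, and the matching heuristics are hypothetical
# stand-ins for the far richer mechanisms of an actual neural network.

def op1_invert(question: str) -> str:
    """Operation 1: convert the inverted structure of the question in the
    prompt into the direct structure of a representative answer sentence."""
    body = question.rstrip("?").strip()
    prefix = "what is "
    if body.lower().startswith(prefix):
        rest = body[len(prefix):].strip()
        return rest[0].upper() + rest[1:] + " is ___."
    return body + " ___."

def op2_coreferences(prompt_descriptions, text):
    """Operation 2: find descriptions in the analyzed text coreferential with
    descriptions in the prompt (naive substring matching stands in for real
    coreference resolution)."""
    return [d for d in prompt_descriptions if d.lower() in text.lower()]

def op3_ontology_links(descriptions, ontology):
    """Operation 3: consult frame structures (ontologies) to detect semantic
    links involving the coreferential descriptions."""
    return [(a, rel, b) for (a, b), rel in ontology.items()
            if a in descriptions or b in descriptions]

def op4_match_structure(template, sentences):
    """Operation 4: pick the sentence of the text whose structure matches the
    task and the previously formed answer-sentence template."""
    stem = template.replace("___.", "").strip().lower()
    for sentence in sentences:
        if stem in sentence.lower():
            return sentence
    return None

# Toy run on invented data.
text = "The capital of France is Paris. Paris lies on the Seine."
question = "What is the capital of France?"
ontology = {("Paris", "capital"): "instance_of", ("capital", "city"): "is_a"}

template = op1_invert(question)   # -> "The capital of France is ___."
corefs = op2_coreferences(["capital", "France", "Paris"], text)
print(op3_ontology_links(corefs, ontology))   # semantic links via the toy ontology
print(op4_match_structure(template, text.split(". ")))
# -> "The capital of France is Paris"

The sketch fixes only what the model postulates, namely the order of the operations and what each one consumes and produces; the implementations themselves are deliberately trivial.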

Keywords: semantic model, linguistic neural network, generated text, semantic operation, ontologies, deep understanding

 
