Without Ontology, LLMs are Clueless by John Sowa

Science and Technology

John Sowa presenting "Without Ontology, LLMs are clueless" at the Ontology Summit 2024 on 6 March 2024. See bit.ly/3P4YxYw. Abstract: Large Language Models (LLMs) are a powerful technology for processing natural languages. But the results are sometimes good and sometimes disastrous. The methods are excellent for translation, useful for search, but unreliable in generating new combinations. Any results found or generated by LLMs are abductions (hypotheses) that must be tested by deduction. An ontology of the subject matter is necessary for the test. With a good ontology, errors, hallucinations, and deliberate lies can be detected and avoided.
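The abstract's central claim is procedural: whatever an LLM produces is an abduction (a hypothesis), and an ontology supplies the deductive test. Below is a minimal sketch of such a test; the toy entity types, relation signatures, and the check_triple function are hypothetical illustrations, not Sowa's actual formalism.

```python
# Minimal sketch: testing an LLM-generated triple against a toy ontology.
# All names and data here are hypothetical illustrations.

# Toy ontology: each entity has a type, and each relation constrains
# the types of its subject and object.
ENTITY_TYPES = {
    "Yemen": "Country",
    "Zambia": "Country",
    "Berlin": "City",
}

RELATION_SIGNATURES = {
    # relation name -> (required subject type, required object type)
    "capital_of": ("City", "Country"),
}

def check_triple(subject, relation, obj):
    """Deductive test of a generated (subject, relation, object) hypothesis.

    Returns a list of violations; an empty list means the triple is at least
    consistent with the ontology's type constraints.
    """
    violations = []
    sig = RELATION_SIGNATURES.get(relation)
    if sig is None:
        violations.append(f"unknown relation: {relation}")
        return violations
    subject_type, object_type = sig
    if ENTITY_TYPES.get(subject) != subject_type:
        violations.append(f"{subject} is not a {subject_type}")
    if ENTITY_TYPES.get(obj) != object_type:
        violations.append(f"{obj} is not a {object_type}")
    return violations

# Suppose an LLM generates the hypothesis "Zambia is the capital of Yemen".
print(check_triple("Zambia", "capital_of", "Yemen"))
# -> ['Zambia is not a City'], so the hypothesis is rejected before it is used.
```

The point of the sketch is only that the check is cheap and deterministic once the ontology exists, whereas the LLM alone has no such internal test.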

Comments: 70

  • @glasperlinspiel (3 months ago)

    Finally, a community of people who understand. I felt like I was yelling into a well. For an AGI, ontology is destiny. Last year, I published Amaranthine: How to create a regenerative civilization using artificial intelligence, which addresses this and offers an ontological approach that provides a way off humanity’s roller coaster of creation and destruction. What worries me is the failure to recognize implicit ontology in language. The last thing we need is AI that relates to reality the way we do. LLMs aren’t the only hallucinators. 😂

  • @LimabeanStudios (2 months ago)

    Gonna read your book one day. Just the description sent me down really cool thought paths. A future of self-healing wounds.

  • @American_Moon_at_Odysee_com (3 months ago)

    Thank you John. Very informative. So much helpful detail. Thank you very much.

  • @zhangcx93 (3 months ago)

    Does the ontology for LLMs mean the same thing as saying we need the LLMs to have a world model built in? Also, many of the examples mentioned in the video are already solvable by GPT-4 or similar systems, which are built by statistical learning and have no explicit training or design for a world model or "ontology".

  • @Ginto_O (3 months ago)

    I'd say it's already solved. This dude is stuck in 2022.

  • @zacharychristy8928 (2 months ago)

    Really? Because I can still break the shit out of GPT-4 without much effort. Just ask it a question you can't model with pure language data, like a physics question. I even asked it to tell me all the countries that start with 'Y' and it told me Yemen, Zambia, and Zimbabwe, lol

  • @zhangcx93 (2 months ago)

    @@zacharychristy8928 Indeed, you've found the weak point of GPT-4, which is not that it lacks a world model but that it has limited logic capability, or more precisely, that it is weak at tasks requiring a recurrent computation process; for example, getting a raw LLM to multiply large numbers is almost impossible. I would say its learning algorithm (statistical learning) is very inefficient at learning recurrent processes, not that it cannot learn them, but it is inefficient, especially for deep recurrence. A perfect world model requires recurrent computation, but not all tasks do; actually most tasks don't need it, like understanding a joke from a picture or reasoning about a complicated but shallow logic problem. Also, an LLM, as a transformer-based model, cannot trigger recurrence of arbitrary depth at arbitrary points; it uses a fixed amount of computation to approximate every recurrent process in every task. This makes it very weak at deep recurrent processes if the model is not big enough. But none of these problems means that it cannot learn a world model at all, or that it needs an ontology built in.
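A minimal sketch of the recurrence point above: grade-school multiplication is a loop whose number of digit-level steps grows with the length of the operands, whereas a transformer spends a fixed number of layers per token. The long_multiply function and its step counter are illustrative assumptions, not part of the talk.

```python
# Long multiplication as an explicitly recurrent process: the number of
# digit-level steps grows with the operand lengths, while a fixed-depth
# transformer applies the same amount of computation regardless of input size.

def long_multiply(a: str, b: str):
    """Grade-school multiplication of two decimal strings.

    Returns (product, number_of_digit_steps). The step count grows roughly
    as len(a) * len(b), which a fixed-depth network cannot unroll for
    arbitrarily long inputs.
    """
    steps = 0
    result = 0
    for i, da in enumerate(reversed(a)):
        carry = 0
        partial = 0
        for j, db in enumerate(reversed(b)):
            steps += 1
            prod = int(da) * int(db) + carry
            carry, digit = divmod(prod, 10)
            partial += digit * 10 ** (i + j)
        partial += carry * 10 ** (i + len(b))
        result += partial
    return result, steps

print(long_multiply("123456789", "987654321"))
# 81 digit-level steps for 9-digit operands, and the count keeps growing with
# longer inputs, which is why raw LLMs tend to fail at large multiplications.
```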

  • @szebike (1 month ago)

    @@Ginto_O No, it's not solved yet; they trained on those visible edge cases and the best-known obvious proofs. The system doesn't have a clue about what it is "reasoning"; that doesn't mean it understands anything properly. Even Joscha Bach and most OpenAI people acknowledge that current transformer technology is not sufficient for proper world-model building and there is still work to do. That being said, even Gary Marcus agrees that it may be possible to achieve human-like reasoning with this hardware. I think they all have more in common than it seems, but we have to be a bit more patient and open to all approaches rather than brute-force training on trillions of data points. The hype around that one particular approach is overfunding a single way of attacking the problem and is, in my opinion, a bit one-sided; they should spread those insane funds across all approaches.

  • @glasperlinspiel (2 months ago)

    25:07 Not quite, I think. Underlying the images are relationships. The brain maps relationships, which provide the stimulus for hallucinations that draw hieroglyphically on subconsciously and consciously memorialized experience. These form ontological engrams. I conjecture that the relationships are stored as Fourier transforms.

  • @DrGauravThakur38 (3 months ago)

    Insightful

  • @zyzzyva303 (3 months ago)

    Presumably multimodal AI is a step in this direction, where these implicit ontologies are already embedded in the latent space of the AI to some degree. Multimodality (done right) would result in a robust internal model, and perhaps robust ontologies. Though I suppose embedding explicit descriptions has value.

  • @veganphilosopher1975 (3 months ago)

    So powerful and interesting. This is exactly the area of research I'd like to work in.

  • @mltiago (3 months ago)

    "its the imagery that are the foundation of language". It seems that Jacques Lacan has something to say about it in its simbolic, imaginary, real.

  • @LaplacianDalembertian (3 months ago)

    An LLM is just a search engine filter. People should stop calling it "AI".

  • @Gnaritas42 (2 months ago)

    @@LaplacianDalembertian The fact that we can make useful Star Wars-style droids that can speak, reason, and do what we tell them, using that "search engine filter" as a brain, is more than reason enough to call it AI, because it is. So what are you even talking about? Do you have any actual argument?

  • @rumfordc (2 months ago)

    @@Gnaritas42 "What are you even talking about" is exactly why we shouldn't call it AI. It's effectively meaningless.

  • @rumfordc (2 months ago)

    @@LaplacianDalembertian I prefer to call them auto-completers, or auto-incompleters if I'm feeling cheeky.

  • @Gnaritas42 (2 months ago)

    @@rumfordc You make zero sense; it is AI. Sorry that words are hard for you and you don't know what they mean, but LLMs are artificial intelligence; they are literally machine brains that can run robots, exactly what we've always envisioned for AI. You're being foolish.

  • @lesfreresdelaquote1176 (3 months ago)

    I discovered Sowa's theory of Conceptual Graphs back in the 90s (1996, to be exact) and spent more than 20 years trying to make them work. To no avail. Languages are so fluid and incomplete that CGs could never capture their ever-changing nature. I tried to use them for machine translation, text extraction, and many other language-related tasks; it did not work. I tried to use powerful ontologies such as WordNet, and it was a disappointment. I also got interested in the Semantic Web, the so-called RDF representation, combined with graph descriptions in XML. The result was the same. Handcrafted graphs are always leaking, always partially wrong or incomplete, and graph projection is too restricted. This approach is too much of a straitjacket, too much of a prison from which languages always find a way to escape. Of course, he tries to salvage his life's work, but he tries very hard (I'm not surprised that Gary Marcus is around) to find flaws in LLMs that are only a year old and are already much more powerful than anything he has tried in the past.

  • @whowouldntlettheirdogsout (2 months ago)

    You spent 20 years? Jesus! You know, the nature of logic (the axiomatic view of things), in systems that aren't closed, always yields an incompleteness theorem. Always. Cladistics, phylogeny, taxonomy: all of them have had errors and imprecisions in their models of living things. Even economists figured out the problem... Economists! You spent 20 years on a problem that was well documented at the start of the 20th century. He's talking about the usefulness of abstractions. LLMs are machines that interpolate, and to have any verifiable methodology by which these machines can converse with one another, to share notes and insights when they're used to extrapolate or contemplate, you're going to need abstractions, a low-entropy (not reductive) view of things, so the communication channels don't come with a 7-trillion-dollar price tag. Relax, the adults are talking. NLP vocabulary should embarrass your industry, but you spent 20 years on it.

  • @lesfreresdelaquote1176 (2 months ago)

    @@whowouldntlettheirdogsout I have a PhD in Computational Linguistics, and I developed a symbolic parser, XIP (Xerox Incremental Parser), based on my PhD thesis; it was used for 20 years in my laboratory at XRCE (Xerox Research Centre Europe). The team I worked with published about 100 papers and patents in the domain. So yeah!!! I have some clear idea of what I'm talking about. We won many challenges over the years and participated in many European projects with our tools. We even sold our technology to EDF, the French energy company. We worked on medical and legal documents, trying to use CGs to capture meanings and abstractions. Our last success was in 2016, when we scored first at SemEval on sentiment analysis with a combination of our symbolic parser, an SVM classifier, and a CRF model for part-of-speech tagging. We managed to get an accuracy of 88%, when ChatGPT can get close to 99% without breaking a sweat. These models show that most of these behaviours can emerge from data. I don't deny it anymore... The ideas of Marcus and Sowa are so out of touch with what is going on today that it isn't funny anymore.

  • @sgttomas (2 months ago)

    @@whowouldntlettheirdogsout your attempt at dialog here missed the mark. I’ll mimic your mockery to demonstrate what an ineffective means of communicating it is. You could have learned something but used your words like dogma. Irony!!! 😏

  • @sgttomas (2 months ago)

    @lesfreresdelaquote1176 I’ve been looking back at research from the last 20-30 years in computational semantics and other linguistic approaches to AI, and they all seemed to fail on edge cases, using more and more sophisticated means to make distinctions, only to find that there would always be exceptions. Yet clearly humans learn, and now these LLMs are doing a good job of aping understanding. It was always this ambiguity that frustrated the researchers. And that would be the end of the story. I’m curious whether you think the transformer could breathe new life into computational semantics and related fields? Cosine similarity isn’t perfect, but fine-tuned models will still get to the right thing pretty well. This video is just “edge cases are hard” over and over, but make the edges fuzzy and, I dunno… It isn’t problem solved, but it avoids the way that all this earlier research failed. I imagine an ontology to an LLM is more like a magnet pulling in nearby semantics than a rigid basket. Thoughts?

  • @lesfreresdelaquote1176 (2 months ago)

    @@sgttomas I discovered LLMs back in 2022 with Inner Monologue, which was based on the very first instruction-tuned model from OpenAI: InstructGPT. LLMs already have a very complex ontology built in, on which all their training rests: namely the embeddings. Since Mikolov and word2vec, we have known that these models capture an ontology. The more data in the process, the larger the embeddings and the more complex the ontology. When I was working with ontologies some 20 years ago, our goal was to get a kind of summary of a document through a list of concepts. I used WordNet quite a lot for this. But we would always fail because of inherent word ambiguity: how do you distinguish the bank next to a river from the bank that is a financial business? LLMs do not have this problem; they will happily and easily distinguish between the two interpretations. The real reason it works is that an embedding is built on top of a very large context, which captures these ambiguities. This presentation was painful because it was obvious that Sowa didn't actually test any of his ideas on actual LLMs, or he would have discovered that many of his issues are no longer relevant.
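A minimal sketch of the disambiguation argument above, comparing contextual embeddings of "bank" with cosine similarity. The three vectors are made-up stand-ins; a real encoder would produce high-dimensional embeddings, but the comparison works the same way.

```python
# Cosine similarity over (illustrative, made-up) contextual embeddings of the
# word "bank" in three sentences. A real transformer encoder would produce
# high-dimensional vectors; the arithmetic below is the same in either case.
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

bank_river_1 = np.array([0.9, 0.1, 0.2])   # "we sat on the bank of the river"
bank_river_2 = np.array([0.8, 0.2, 0.1])   # "the boat drifted toward the bank"
bank_finance = np.array([0.1, 0.9, 0.7])   # "the bank approved the loan"

print(cosine_similarity(bank_river_1, bank_river_2))  # ~0.99: same sense
print(cosine_similarity(bank_river_1, bank_finance))  # ~0.30: different senses
```

Because each vector is computed from the whole surrounding context, the two senses of "bank" land far apart, which is the commenter's point about why static word lists like WordNet struggled where embeddings do not.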

  • @glasperlinspiel (2 months ago)

    28:28 Cerebellum, of course; old-fashioned magnesium flash bulbs going off in my head.

  • @glasperlinspiel (2 months ago)

    35:25 A CE is unnecessary; however, a validator-auditor is essential. Sentience suggests a CE, but I suspect that is epiphenomenal, which suggests why sentience is so fragile. This is one reason AGI excites me: reducing that fragility by anchoring it ontologically.

  • @lancemarchetti8673 (3 months ago)

    Can a string of zeros and ones develop narcissistic traits?

  • @LaplacianDalembertian (3 months ago)

    LLMs even slip into schizophrenia when two chunks are very close in distance but, looking at their content, are just random pieces of information garbage.

  • @meisherenow (3 months ago)

    Even if LLMs can learn an ontology from enough high-quality text data, you might get gains in sample efficiency and reasoning control from building one in.

  • @glasperlinspiel (2 months ago)

    As for Socratic AI, it’s closer than you think…Amaranthine: How to create a regenerative civilization using artificial intelligence

  • @diga4696 (3 months ago)

    Personally, I use ontology as one of the dimensions when describing the reductionist orthogonality of language to other observational or symbolic modalities. What would be interesting is to find the loss function between these interpretations.

  • @thesleuthinvestor2251 (3 months ago)

    Ontology cannot be a dimension when applying reductionism. The original (Greek) meaning of the word was the question: can the world be entirely apprehended via its attributes / categories / features? Ontology, taken in its widest sense, covers both what can and what cannot be described in symbols, i.e., also what cannot be apprehended by category-dependent AI. No categories, no math; no math, no AI.

  • @thesleuthinvestor2251 (3 months ago)

    The ultimate Turing Test for AGI is: Write a novel that, (1) once a human starts reading it, he/she cannot put it down, and (2) once he/she has finished it, he/she cannot forget it. How many years do you think we have to wait for this task to be accomplished by an AI?

  • @reinerwilhelms-tricarico344 (3 months ago)

    I think a reader decides on the first few pages whether the novel is worth reading. There is also a certain bias here: it matters whether the reader knows or doesn’t know that the novel was written by an AI. The outcome of judging a novel will heavily depend on this.

  • @thesleuthinvestor2251 (3 months ago)

    Give any AI of your choice the task of writing a novel with characters in conflict, and if you have ever written fiction in your life, you'll realize very fast that the AI is clueless. It has no idea what fiction is: what subtext is, conflict, dialogue, revelation of character, or any of the subtler tricks that depend on human ontology, of which AI has not a smidgen of a clue. That's perhaps because the essence of humanity cannot be encapsulated by ink squiggles or screen blips, which is the answer to the Turing Test, too, as well as to Plato's cave parable.

  • @nafg613 (3 months ago)

    I don't think it will ever happen. What I find gripping about a story is what it reveals about the mind of the author. If the story is generated, the drama is devoid of meaning and holds no interest for me.

  • @1dgram (3 months ago)

    @@nafg613 What if the series of prompts started by developing the state of mind of a hypothetical author and then developed the novel from there?

  • @nafg613 (3 months ago)

    ​@@1dgram I used to think about building a game with everything generated by AI. But I realized the same thing. Even if in theory the AI would generate something identical in every relevant way to something a human would create, it would not have much appeal to me. The excitement in discovering how things unfold, it seems, is anchored in the excitement of knowing a person's mind. Procedurally generated data is just arbitrary data at the end of the day. Why would I endure emotional suspense for some machine-generated ending, no matter how surprising or happy? It was a flip of the coin anyway. What we love about the roller coaster of fiction is discovering deeper facets of the human mind, it turns out, IMO.

  • @charlessmyth (2 months ago)

    [15:14] So being a "bird brain" is not so bad :-)

  • @3thinking (2 months ago)

    GPT-4 doesn't totally agree with you, John 😉

    Innate Learning Abilities: Modern LLMs, particularly those employing advanced neural network architectures, have demonstrated remarkable abilities to learn from vast amounts of data without explicit ontological structures. They develop a form of emergent understanding, capturing nuances, contexts, and relationships within the data they are trained on. This capability allows them to generate coherent, contextually appropriate responses across a wide array of topics and questions without relying on predefined ontologies.

    Contextual Understanding Through Massive Data: The training data for state-of-the-art LLMs encompasses a wide range of languages, contexts, and domains, enabling them to develop a broad understanding of the world and how concepts relate to each other. This extensive exposure allows LLMs to perform tasks such as language translation, question answering, and content generation with a high degree of proficiency, challenging the notion that they are "clueless" without traditional ontological frameworks.

    Flexibility and Adaptability: One of the strengths of LLMs is their adaptability to new, unseen data and their ability to learn from context. While ontologies require explicit definitions and relationships to be manually built and maintained, LLMs continuously evolve as they are exposed to new information. This makes them highly flexible and capable of handling emergent knowledge and concepts, which might not yet be codified in existing ontologies.

    Synthetic Ontology Creation: Some argue that through their training and operation, LLMs create a form of "synthetic" ontology. By analyzing relationships between words, phrases, and contexts within their training corpus, they construct an implicit model of the world that functions similarly to an ontology, but is far more extensive and less rigid. This model allows them to infer relationships and generate responses that are surprisingly insightful, even in areas where explicit ontological structures might not exist.

    Complementarity Rather Than Dependency: The role of ontologies in enhancing LLMs should be seen as complementary rather than fundamental. While integrating ontological structures can certainly improve an LLM's performance in specific domains by providing clear definitions and relationships, the absence of such structures does not render state-of-the-art LLMs clueless. Instead, it highlights the remarkable capacity of these models to derive meaning and understanding from the linguistic patterns and knowledge embedded in their training data.

    In conclusion, while ontologies can enhance the performance of LLMs in specific domains, the assertion that state-of-the-art LLMs are clueless without them underestimates the sophistication and capabilities of these models. The emergent understanding, adaptability, and the synthetic ontology created through their operation enable LLMs to navigate a vast array of topics and questions with a high degree of competence.

  • @glasperlinspiel (2 months ago)

    I align with the hypothesis that synthetic ontology is real, because I think the root of cognition is recognizing relationships. Unfortunately, LLMs are constrained by the ontological relationships we provide them. If AGI emerges in that ontological framework, we might be toast. That’s why I wrote Amaranthine: How to create a regenerative civilization using artificial intelligence, which proposes a different ontological relationship with reality.

  • @nsfeliz7825 (2 months ago)

    Okay, since you know so much about language, why not build your own AI?

  • @zacharychristy8928 (2 months ago)

    So you weren't paying attention...
