Can LLMs Clarify the Concept of God?

"God as law, awe and fortune...
LLM analysis reveals that the concept of God is well-defined in semantic space by this three-word combination. It provides a closer match than, for example, Lord, Yahweh and Elohim."
That's what Jordan Peterson posted today, referring to research by Victor Swift at www.collectivelabs.ai/semanti...
In this video, I walk through the claims of the paper to see whether they deliver on the promise of clarifying our concept of God.
Why would we prefer the LLM to other ways of learning about God, like through the study of theology, mysticism, and philosophy? Is the LLM uniquely insightful?
Enjoy!
Brought to you by MillermanSchool.com

Comments: 67

  • @IM2L84F8 · a month ago

    How about ineffable, ineffable, and ineffable? Now that would be something to ponder. But instead we're left with garbage in, garbage out.

  • @notloki3377 · a month ago

    You can keep pondering while other people actually find answers, lol.

  • @Decocoa · a month ago

    God have mercy = Law, God bless you = abundance, God help me = will

  • @bellingdog · a month ago

    I would say God have mercy is more akin to health and healing. Κύριε ελέησον comes from the idea of έλαιον "olive oil". It's a balm for those who have injuries. The Great Physician is another name we give Christ along with the title φιλάνθρωπος.

  • @whatwilliswastalkingabout · a month ago

    “Truck and… guns?” Lol. Damn right, brother.

  • @Decocoa · a month ago

    @13:46 Yeah, you nailed it. Just because certain words cluster together in the LLM's calcified (parametric) knowledge, which is a function of the totality of the text you feed it, doesn't mean that's a definition, or even close. Adding or removing text will alter which words cluster/coalesce. This is paradigmatic of using LLMs and reading way too much into them. They are subordinate to our language, but they do not accurately represent what our knowledge conveys.

  • @tolgonainadyrbekkyzy2159 · a month ago

    99 names of God 🫶 loving your videos, hello from Kyrgyzstan!

  • @Decocoa · a month ago

    @2:31 The concept of "Prince" isn't being understood. The co-ordinates the model "learns", and where it "places" words in this abstract co-ordinate field (which has far more than 3 dimensions), are a function of the corpus of word sequences the model is trained on. Within its training data, "prince" can also appear in other contexts next to other words, like Prince (the musician), Prince of Persia, etc. So while boy + heir + king should come out close to prince, nothing is being learnt; it's merely being memorised and calcified into the model. Only if you fed the model training data in which princes were explicitly referred to as boy, heir, and as belonging to the king would the co-ordinates get about as close as they can. But you've essentially spotted how, under the hood, these things really represent the knowledge they're fed and how they respond to being queried. There isn't any reasoning being done. In fact, there is no reasoning mechanism. Only interpolation is occurring.
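
The vector-arithmetic claim above is easy to probe directly. A minimal sketch, assuming gensim is installed and can download the public glove-wiki-gigaword-50 vectors (an arbitrary off-the-shelf choice, not the model used in the paper):

```python
# Sum three word vectors and list the nearest neighbours of the result.
# This is pure geometry over co-occurrence statistics; no reasoning step.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # 50-d GloVe word vectors

for word, score in vectors.most_similar(positive=["boy", "heir", "king"], topn=5):
    print(f"{word:12s} cosine={score:.3f}")
```

Whether "prince" tops the list depends entirely on the training corpus, which is the commenter's point.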

  • @SlickDissident · a month ago

    Machiavelli ungloved.

  • @Decocoa · a month ago

    @11:36 These models are language models, after all, not knowledge models with the reasoning maps one needs to deal with novelty and to create new knowledge to adapt to it. They model language, which one can exaggerate into the claim that they model a human mind. Hence the polemic there at the end. Just my thoughts :)

  • @RoyalistKev · a month ago

    I just realized that LLMs use the same reasoning as the Torah codes.

  • @regnbuetorsk · a month ago

    Can you elaborate? What you said has piqued my curiosity.

  • @DensityMatrix1 · a month ago

    @regnbuetorsk Written Hebrew mostly omits vowels. So take a made-up word, "THT". In Hebrew it might be read as TOHOT, or TAHAT, or TOHAT, and each reading would have a different meaning. So if you have a sentence, that sentence has multiple meanings. It's not entirely different from assigning multiple meanings to each word, but it is structurally different. Each fully realized word, such as TOHOT and TOHAT, will have a base word, THT, that it is closest to mathematically: the average. It's more technical than that, but that's the gist for a layman.
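
The "average" intuition can be illustrated with a toy sketch. The three 3-d vectors below are invented stand-ins for embeddings of the hypothetical vocalizations; real embeddings have hundreds of dimensions:

```python
# Each vocalized reading stays close to the centroid of all readings,
# which plays the role of the consonantal "base word" in the analogy.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

tohot = np.array([0.9, 0.1, 0.3])  # made-up vector for TOHOT
tahat = np.array([0.7, 0.4, 0.1])  # made-up vector for TAHAT
tohat = np.array([0.8, 0.2, 0.2])  # made-up vector for TOHAT

root = np.mean([tohot, tahat, tohat], axis=0)  # centroid of the readings

for name, v in [("TOHOT", tohot), ("TAHAT", tahat), ("TOHAT", tohat)]:
    print(name, round(cosine(root, v), 3))  # high similarity to the centroid
```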

  • @TheSeeking2know · 14 days ago

    @DensityMatrix1 Very interesting…

  • @balderbrok6438 · a month ago

    Peterson's formulation reveals his misstep: You simply can't "define" the sacred in human language

  • @NessieAndrew · a month ago

    These are exactly the questions we should be asking. Worth looking into how vector spaces behind LLMs nail "understanding" beyond the limit of language.

  • @notloki3377 · a month ago

    Semitic core clarification

  • @brandoncarrera4345 · a month ago

    In The Brothers Karamazov, Ivan's Grand Inquisitor claims humans can satisfy their faith in God when they are allowed miracles (fortune), mystery (awe), and authority (law). Seems Dostoevsky had the best linguistic understanding of what we think of when we talk about God and faith, no?

  • @NA-di3yy · a month ago

    In machine learning, the word "bias" has a well-established meaning, although perhaps in the context of a not entirely scientific article one of its everyday senses was meant, e.g. that both corpora are incomplete and therefore provide different, equally incomplete ontologies. Or even prejudice regarding race, gender and so on, which is a hot topic in language models, if you know what I mean)

    As for the generally accepted academic meaning: the presumption is that there is some function whose product is the data, and we select an objective function that is as close as possible to this unknown one. Since there is a limited amount of data, and data can be noisy (inaccurate measurements), we can never be absolutely sure that we have found the ideal function, but we can check how well it predicts points that were not used in training.

    Roughly speaking, we have several points, and we want an equation whose graph passes through them. We can choose a complex formula that passes **exactly** through all the training points, but if we then look at test points that we saved and did not use during training, it may turn out that our function does not pass through them at all, not even close: low (or even no) bias and high variance, a bad choice. We can instead find a very simple formula that works equally poorly on both the training and the test data: high bias, relatively low variance, also bad, with low predictive ability. If we're lucky, we can find a formula of intermediate complexity that passes **approximately** through the training points and **approximately** through the test points: low bias, low variance, a good result.

    But I got the impression that the article says the training was carried out on a biased corpus, which, however, leaves open the question of how the authors imagine a non-biased corpus...
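
A concrete sketch of the trade-off described above, using the textbook polynomial example (nothing here comes from the paper; the function, noise level, and degrees are illustrative choices):

```python
# Fit polynomials of increasing degree to noisy samples of sin(2*pi*x)
# and compare training error with held-out error.
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: np.sin(2 * np.pi * x)
x_train = np.sort(rng.uniform(0, 1, 15))
x_test = np.sort(rng.uniform(0, 1, 100))
y_train = truth(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = truth(x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 4, 14):  # too simple, about right, too flexible
    coeffs = np.polyfit(x_train, y_train, degree)  # may warn at degree 14
    err = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree:2d}: train MSE {err(x_train, y_train):.3f}, "
          f"test MSE {err(x_test, y_test):.3f}")
```

Degree 1 underfits (high bias), degree 14 interpolates the training points but blows up on the test set (high variance), and the middle degree lands near the truth.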

  • @Smegead · a month ago

    The God Vector is a kickass name for a novel.

  • @liradorfeu · a month ago

    I might be going a bit off-topic, but I think there's a simple and very useful deduction, based on Hermetic principles, that we can all use to recognize the ontological nature of God. Assuming the Whole is God, and that the Whole cannot contain in itself less than any of Its parts, every part must be contained in It. If that's the case, then we as individuals (parts of the Whole) and possessors of consciousness must conclude that consciousness is an attribute of the Whole.

  • @danskiver5909 · a month ago

    LLMs are showing that the collective mind is doing more than just expressing itself with language; it's also trying to solve the puzzle of the human condition. It's hard to recognize this because we only use linear language, and the puzzle of the human condition is multidimensional.

  • @mr.coolmug3181 · a month ago

    Most people never get close to God because they can't accept the ambiguity. It's a failure of understanding, not reasoning.

  • @IM2L84F8 · a month ago

    What about "gratitude"?

  • @matthewgaulke8094 · a month ago

    I don't really get what Jordan Peterson is up to, but they say God meets you where you are, and I don't pretend to know where Jordan Peterson is in his head space. In my experience, I sometimes wonder how to even reach some people on this topic, because I'm reminded of the saying that you can't fill a full cup. A lot of people's conversion to God is first preceded by their cup being knocked off the table. It's written that God chastises those He loves, and so getting your cup knocked off the table is something we try to avoid but may be exactly what God wants for us before He can work with us.

  • @iron5wolf · a month ago

    The problem is that the vector space of LLMs has an incredibly high number of dimensions, and this sort of reductive analysis projects only the faintest and most distorted shadow of what that space contains into a few words.

  • @NessieAndrew · a month ago

    Can a vector space only be explained through another vector space, rather than transposing it into two-dimensional words? In other words, can we experiment with understanding without using words?

  • @iron5wolf · a month ago

    @NessieAndrew Yes, of course you can experiment. And you might even learn something. But anyone who claims that they *know* what a number, vector, or position "means", or what it "means" to tweak anything like that, should immediately be met with suspicion.

  • @NessieAndrew · a month ago

    @iron5wolf Absolutely, it's a black box. But it's sort of like superposition: once you look at it, it's gone. Once you translate the vector space into language, you lose all the complexity of the vector space. It's a kind of understanding that is beyond language and does not intersect meaningfully with language.

  • @iron5wolf · a month ago

    @NessieAndrew It's the nuance that's lost when you "collapse" (project) a vector space into lower dimensions. I'm warning against doing that and then saying you "understand" it. Mostly, you don't.

  • @NessieAndrew · a month ago

    @iron5wolf That is what I'm saying. You can't collapse it. It is "understanding" in higher dimensions, and that is by definition inaccessible to us.
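
The "collapse" this thread is debating can be quantified. A minimal sketch, using synthetic 300-d vectors as crude stand-ins for embeddings (an assumption; real embedding geometry is far less uniform, so real projections keep more variance than this worst case):

```python
# Project 300-d vectors down to 2-d via PCA and measure how much of the
# original variance the 2-d shadow actually retains.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 300))   # 1000 fake 300-d "embeddings"
X -= X.mean(axis=0)                # center the data before PCA

# PCA via SVD: squared singular values give the variance per component.
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

print(f"variance kept by a 2-d projection: {explained[:2].sum():.1%}")
print(f"variance discarded:                {explained[2:].sum():.1%}")
```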

  • @Shaarawim · a month ago

    I find it difficult to take this attempt at definition, or its possible outcome, seriously. It seems easier to redefine the terms than to bring God closer to greed, or to law as the power of authority.

  • @SalvoColli · a month ago

    Guenon's and Evola's critique of science (in "The Reign of Quantity..." and "Ride the Tiger") provides an answer to this issue. LLMs can't be of much help in the search for God or Truth, because the analysis is biased by their quantitative approach and by the methodology of basing the results on statistics drawn from a corpus of texts. They may shed some light on a bunch of other things which are human, all too human.

  • @urbrandnewstepdad · a month ago

    Imagine if Terry Davis was still around

  • @eddof13 · a month ago

    TempleGPT

  • @mattbennett277 · a month ago

    How could concepts of God not be biased!? Are they expecting to get some "objective" perspective on God? Seems like hubris. I think the more realistic question is whose biases are on display: the selected corpus, or the engineers doing the fine-tuning?

    Greed = mammon. This word seems to stand out the most as possibly reflecting a "bias", but there is no way of knowing what that bias is without knowing the contexts in which the word is embedded. If an LLM could genuinely point out a blind spot, instead of reinforcing a particular ideological norm, then there could be value in realizing our implicit biases. However, I haven't seen any indication that LLMs can do that yet.

    "Hugely over-promising and under-delivering." Agreed! Why did they go through the exercise of defining God with three words only to reduce those words to their banal interpretation? Ideally those words were selected from the corpus because their meaning extended out in many directions. As for Peterson's comment to Musk, the three words are meaningless unless elucidated by someone who has had experiences with God.

    If their goal was to provide insight into biases, I think they failed. They also failed to contribute to AI ethics or to explain how AI models "see" the world. This article doesn't ease concerns that our computer programmers are making a Faustian bargain.

  • @verisimlitudesque · 5 days ago

    It seems like all those concepts can also be applied to religion, in which case greed would be somewhat logical.

  • @Epiousios18 · a month ago

    Ipsum Esse Subsistens: I fail to see what more needs to be clarified as a base definition. This is a fascinating topic nowadays simply because the technology exists, but outside of the seemingly inevitable semantic games people like to play, I fail to see why the definitions formulated hundreds of years ago don't suffice. The fact that "being" isn't one of the main words that clusters is interesting to me, though.

  • @lewreed1871 · a month ago

    Maybe a bucket of cold water for Jordan Peterson...?

  • @depiction3435 · a month ago

    This is too language- and culture-dependent to produce anything tangibly definite.

  • @phillipvillani9061 · a month ago

    Funny that LLMs treat language the way Derrida said it functions.

  • @y.v.8803 · a month ago

    Although Western civilisation appears to change, the inability to separate God and man remains:
    - Greeks: gods behaving like humans
    - Christianity: God becoming a man
    - Neo-liberalism: humans becoming gods (determining the law)
    Whereas Islam succeeds in the clear separation between God and man. Can't remember where I read this.

  • @VM-hl8ms · a month ago

    Treading through language so carefully that even bringing up language itself is taboo, because, God forbid, let's not admit that we are dependent on and shaped by language just like those others, barbarians or pagans (or whatever), are. Looks like an issue only possible within Abrahamic religions.

  • @saimbhat6243 · a month ago

    Now train the model in Mandarin on Mandarin texts, or in Sanskrit on Sanskrit texts, and you will learn what the Chinese or Indians say about things. This is just a frequency analysis of words used together in sentences and paragraphs, isn't it: statistics of the words people write and what they write about. As far as I can see, it is just descriptive statistics of vocabulary and its usage. It does NOT reveal hidden causation or hidden meanings; it shows what most people already talk about. Jeez, this AI fever is getting out of control; are we going to have AI lords in the future? It is just a description of literary culture/texts. I have no idea why you would think you found something new in LLMs.
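
The "frequency analysis" framing can be made literal. A toy sketch (the three-sentence corpus is invented for illustration) that counts which words co-occur near "god" within a small window, which is the kind of raw signal embedding models compress:

```python
# Count the neighbours of "god" in a tiny corpus. Embedding models
# factorize enormous versions of exactly this co-occurrence table.
from collections import Counter

corpus = [
    "god gave the law to moses",
    "the law of god inspires awe",
    "fortune and awe before god",
]

neighbours = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok == "god":
            window = tokens[max(0, i - 2):i + 3]  # +/- 2-word window
            neighbours.update(t for t in window if t != "god")

print(neighbours.most_common(5))  # nearest neighbours by raw frequency
```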

  • @pplprsn · a month ago

    I don't normally comment, but your conclusion reminds me of how hard I facepalm when conservatives naively and clumsily reference DNA/science in order to define "man" and "woman". People knew what a "man" and a "woman" were for as long as they have existed. It's not as if they were confused about the matter up until less than a century ago, when DNA was discovered. It's using a derivative of tacit knowledge to reify that same knowledge, in order to sexy up the obvious, albeit not fully clear or articulable. The atheist mind demands the same satisfaction as the religious in filling the gaps with answers. However, with a complete lack of self-awareness, they fill the gaps with something equally "unprovable" yet, unlike the mystic, something stale, mechanistic, and uninspiring.

  • @grosbeak6130 · a month ago

    I saw the debate some years ago between Peterson and Zizek regarding Marxism. It was remarkably embarrassing for Peterson, who basically gave a sophomoric book-report rendition of Karl Marx and Marxism. And I saw his debate with Matt Dillahunty. Again, embarrassing for Peterson. I just never saw what a lot of his fanboys seem to see in him.

  • @morganp7238 · 24 days ago

    Yes, LLMs are "concordancers" on mega-steroids.

  • @chralexNET · a month ago

    This video made me realize that Dr. Jordan is delusional. I paid attention to his work sometime back in 2017-19, and back then he seemed very insightful and to have some good points, but how he can think this thing here is anything useful or worthwhile is beyond me. To me it actually seems like BS.

  • @n0vitski · a month ago

    Peterson has been rapidly spiralling in the last few years; anything of value he had to say is long behind him. By refusing to actually face and address the ideas that challenge his preconceived liberal notions, he has completely joined the controlled neocon opposition of the regime.

  • @pantsonfire2216 · a month ago

    “Hey guys I inserted a couple of terms into my useless AI and it made word soup with no context” 😮😮😮😮😮😮

  • @kittenlang8641 · 19 days ago

    I never think of God/higher power/creator together with law. Law, if not part of the one vibration from the beginning, came along later to maintain order. Frankly, I go with the super-old Sumerian tablets. Literacy is DIFFICULT, much less media that last. But I subscribe to Marcionite Christianity. The Torah/Old Testament makes no sense. A jealous One Almighty? Jealous of whom? So much more like that. As if circumcisions win wars 🙄

  • @NA-di3yy · a month ago

    Peterson is a funny guy, but in my opinion he is a life coach, not a political philosopher or scholar. Sometimes witty, sometimes cringey. But that jacket of his with the icons: in my opinion, that's something beyond taste 🤦‍♂

  • @areyoutheregoditsmedave · a month ago

    Peterson taking the ultimate Protestant take on scripture. Haha, gross. He really needs to stop.

  • @Smegead · a month ago

    Greed associated with Hebrew... in the training data.

  • @opposingshore9322 · a month ago

    No disrespect meant to actual autistic people, but this just feels so… autistic. Needing an overly literal "language equation" as an attempt to dumb down and capture the depth, mystery, and complex meanings of the sacred is something a computer does, but it does not feel human or helpful to make us look at language and the sacred as reducible by an unfeeling machine. This over-enthusiasm about AI and technology, and the need to include Elon Musk in everything, makes me nauseous. If there is a Way, a Truth, and a Life, this ain't it.

  • @Anhedonxia · a month ago

    @opposingshore9322 This 👏

  • @carstenmanz302 · a month ago

    There are people who have had concrete experiences(!) with God and have therefore become believers, and then there are populist philosophical chatterboxes like Jordan Peterson, who offer their expertise almost daily on EVERY topic in the world without any personal EXPERIENCES, let alone spiritual insights. Philosophers and psychologists have never really understood religion, and the more widely educated they were, the less so.

  • @clancynielsen6800 · a month ago

    Man, this is some sketchy shrubbery
