Geoffrey Hinton Unpacks The Forward-Forward Algorithm

Science and technology

In this episode, Geoffrey Hinton, a renowned computer scientist and a leading expert in deep learning, provides an in-depth exploration of his groundbreaking new learning algorithm: the forward-forward algorithm. Hinton argues that this algorithm provides a more plausible model for how the cerebral cortex might learn, and could be the key to unlocking new possibilities in artificial intelligence.
Throughout the episode, Hinton discusses the mechanics of the forward-forward algorithm, including how it differs from traditional deep learning models and what makes it more effective. He also provides insights into the potential applications of this new algorithm, such as enabling machines to perform tasks that were previously thought to be exclusive to human cognition.
Hinton shares his thoughts on the current state of deep learning and its future prospects, particularly in neuroscience. He explores how advances in deep learning may help us gain a better understanding of our own brains and how we can use this knowledge to create more intelligent machines.
Overall, this podcast provides a fascinating glimpse into the latest developments in artificial intelligence and the cutting-edge research being conducted by one of its leading pioneers.
Craig Smith Twitter: / craigss
Eye on A.I. Twitter: / eyeon_ai

Comments: 125

  • @SaftaCatalinMihai
    @SaftaCatalinMihai 1 year ago

    Great interview!! Small constructive feedback: when Geoff Hinton isn't talking, the video shows the "Eye on AI" logo and (for some reason) that's distracting.

  • @paulprescod6150
    @paulprescod6150 1 year ago

    Great interview! I could do without the blinking eye thing.

  • @eruiluvatar236
    @eruiluvatar236 1 year ago

    I think that this will open so many possibilities. When working with small MLPs, ReLU is rarely the best activation function; something like tanh tends to perform much better, but if you try to use more than 4 or 5 layers, backpropagation chokes on it due to vanishing gradients. With this, that wouldn't matter. It doesn't have to be an exclusive-or between forward-forward and backpropagation, either: you could train many small backprop networks and join them with the forward-forward algorithm. That won't be as efficient as pure forward-forward for an analog hardware implementation, but it would likely squeeze more into the same number of weights and provide better accuracy on some tasks. It would also be much less memory-demanding than doing backprop over the full network, which would increase what our current hardware can do by a lot. Backwards connections would be much more trainable, even without the trick of replicating the input data and the layers. With true backwards connections it may still not converge to a stable solution, due to the feedback loop formed, but it won't have the issues of backpropagation through time. If that can be made to work, models could develop something akin to our working memory. Not needing a differentiable model of everything also opens the possibility of inserting things in the middle of the network that wouldn't be easy to integrate normally, like database queries based on the output of previous layers, or fixed-function calculations.
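
[Editor's note] A minimal PyTorch sketch of the hybrid this comment imagines (my own construction under stated assumptions, not Hinton's published recipe or the commenter's code): small tanh networks trained with backprop internally, joined forward-forward-style through local goodness objectives. All sizes, the threshold `theta`, and the learning rate are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Each block is an ordinary small backprop network with its own optimizer and
# a local, forward-forward-style objective: push "goodness" (mean squared
# activation) above a threshold for positive data and below it for negative
# data. Activations are length-normalized and detached before being passed on,
# so no gradient ever crosses a block boundary.
class LocalBlock(nn.Module):
    def __init__(self, d_in, d_out, theta=2.0, lr=0.03):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_out), nn.Tanh(),
            nn.Linear(d_out, d_out), nn.Tanh(),
        )
        self.theta = theta
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def train_step(self, x, positive):
        h = self.net(x)
        goodness = h.pow(2).mean(dim=1)
        sign = 1.0 if positive else -1.0
        loss = F.softplus(sign * (self.theta - goodness)).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Normalize so the next block can't read goodness off the vector length.
        h = h / (h.norm(dim=1, keepdim=True) + 1e-4)
        return h.detach()  # block boundary: no gradient flows further back

blocks = [LocalBlock(784, 256), LocalBlock(256, 256)]
x_pos = torch.randn(32, 784)   # placeholder positive batch
x_neg = torch.randn(32, 784)   # placeholder negative batch
for x, positive in [(x_pos, True), (x_neg, False)]:
    h = x
    for block in blocks:
        h = block.train_step(h, positive)
```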

  • @LudovicGuegan
    @LudovicGuegan 1 year ago

    It makes so much sense intuitively that it's hard to comprehend that it took so long for this idea to hatch. Hinton is a genius.

  • @noomade
    @noomade 1 year ago

    or...since it took so long ... everyone else is just dumb :P

  • @Bronco541
    @Bronco541 1 year ago

    Hopefully our AI children won't be this dumb.

  • @madamedellaporte4214
    @madamedellaporte4214 11 months ago

    @@noomade Yes, especially when he tells us AI will kill us all, something he created.

  • @5pp000
    @5pp000 1 year ago

    Fascinating discussion! Thanks so much for posting it, and extra thanks to Prof. Hinton! He explains things very clearly.

  • @markr9640
    @markr9640 1 year ago

    Fantastic interview. I may well need to listen to it 3 or 4 times!

  • @nullbeyondo
    @nullbeyondo 1 year ago

    Basically, it is training a neural network, but instead of using only positive training data, we're also using negative training data. This can yield high perplexity because no one can get "perfect negative data", whereas we can easily get positive training data; thus I think it will not replace backpropagation, but it will be very useful in many applications, like neuromorphic hardware, or maybe even applications where we don't even know what the positive data should look like! So we're reverse-solving the problem, in a way. This is really very interesting.
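
[Editor's note] For concreteness, here is a sketch of one common way the supervised variant manufactures negative data (following the scheme Hinton describes for MNIST; the array shapes and the label-in-the-first-pixels convention are assumptions for illustration): the image stays real, but a deliberately wrong label is written into it.

```python
import numpy as np

def overlay_label(images, labels, num_classes=10):
    """Write a one-hot label into the first `num_classes` pixels of each image."""
    out = images.copy()
    out[:, :num_classes] = 0.0
    out[np.arange(len(out)), labels] = 1.0
    return out

rng = np.random.default_rng(0)
images = rng.random((32, 784))              # placeholder image batch
true_labels = rng.integers(0, 10, size=32)
# Shift by 1..9 so the wrong label is never the true one.
wrong_labels = (true_labels + rng.integers(1, 10, size=32)) % 10

positive_data = overlay_label(images, true_labels)    # real image + correct label
negative_data = overlay_label(images, wrong_labels)   # real image + incorrect label
```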

  • @MrErick1160
    @MrErick1160 1 year ago

    This is a real AI YouTube channel. I'm sick of all the channels feeding on buzz and popularity over educational content.

  • @sillystuff6247
    @sillystuff6247 1 year ago

    As usual, Geoff Hinton explains everything so clearly (and mercifully free of techno-jargon). This is a tremendous interview.

  • @Xavier-es4gi
    @Xavier-es4gi 1 year ago

    His paper is so clear, even to me without a strong ML background.

  • @caiyu538
    @caiyu538 1 year ago

    Great that we can hear Dr. Hinton's lecture through social media.

  • @BR-hi6yt
    @BR-hi6yt 8 months ago

    This could be a historic interview for all time. Good job.

  • @user-nn8kh1cv4s
    @user-nn8kh1cv4s 1 year ago

    To think that among investment YouTube channels there's a video that resonates this much and makes me nod along!!!

  • @Pokemon00158
    @Pokemon00158 1 year ago

    Such a good talk, thank you for organizing this, Eye on AI! I have been implementing the FF algorithm in Python, and while the training is understandable, the testing becomes tricky for multi-class classification trained with the supervised version Hinton describes. For new examples you don't have any labels, so you need to impose every possible label on top of the example, the same way as in training, and run the network with each to see which gives the highest hidden-layer activation, or "goodness" as Hinton calls it. Since the overlaid label is part of the input, it contributes to the activations, meaning there is currently no way to test all possible labels at once. This leads to scaling problems for ImageNet and other classification problems with a large number of possible predictions, where every possible class-label representation has to be overlaid on the tested input. It will be interesting to see whether this can be overcome, or whether unsupervised learning will be the standard procedure with this technique. Another super-interesting point, in my opinion, is that spiking neural networks have the Heaviside function as their activation, which has no derivative. Traditionally trained SNNs therefore use a Heaviside forward pass and a sigmoid backward pass to tune the weights; using FF, we will be able to tune SNNs without having to "trick" the backward pass into not being a step function, which may yield a better representation of our biological processes.
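
[Editor's note] A sketch of the test-time loop described above, assuming a trained `goodness(x)` function that returns the accumulated goodness (e.g., summed squared hidden activations) across layers; the function, shapes, and label-overlay convention are placeholders. The loop over classes is exactly the linear-in-class-count cost the comment points out.

```python
import numpy as np

def classify(image, goodness, num_classes=10):
    """Try every candidate label overlay and return the one with highest goodness."""
    scores = []
    for label in range(num_classes):
        x = image.copy()
        x[:num_classes] = 0.0   # clear the pixels reserved for the label
        x[label] = 1.0          # impose this candidate label on the input
        scores.append(goodness(x))
    return int(np.argmax(scores))
```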

  • @AZTECMAN
    @AZTECMAN 1 year ago

    Extremely fascinating to hear this after Chomsky's criticisms of the current deep learning paradigm as failing to differentiate between possible and impossible languages.

  • @phoneticalballsack
    @phoneticalballsack 1 year ago

    Chomsky is a dumbass

  • @AZTECMAN
    @AZTECMAN 1 year ago

    @@phoneticalballsack why do you say that?

  • @phoneticalballsack
    @phoneticalballsack 1 year ago

    @@AZTECMAN Have you talked to him in person?

  • @AZTECMAN
    @AZTECMAN 1 year ago

    @@phoneticalballsack Nope. But my lack of a personal encounter doesn't seem very important to understanding your statement. Please explain to me why Chomsky is a dumbass. If you happen to have met him, I'd certainly welcome an anecdote, though I don't consider it crucial.

  • @JerryFederspiel
    @JerryFederspiel 1 year ago

    The discussion at 33:32 immediately suggests the possibility of assigning a "color" to each neuron, where the squared activation of neurons of one color contributes positively to "goodness" and the squared activation of neurons of the other color contributes negatively. Any given layer could have neurons of *both* colors. Of course, that leads to additional questions: 1. Is there a rule for determining each neuron's color that could be applied a priori to give better results? 2. Should there be a rule for changing/updating the color of a neuron, so the distribution of colors can adapt to the problem and the data at hand? Finally, to get even farther afield: something whose squared activation counts as positive sounds like a real number, and something whose squared activation counts as negative sounds like an imaginary number. Instead of choosing between one of two colors for each neuron, should the activations be multiplied by a *complex* number before squaring, with the sums of the real parts of the squares used for the objective? Because the effect of a complex color is continuous and differentiable, it may be trainable: the network could find, through learning, the balance of importance between features and constraints for the problem domain.
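
[Editor's note] A tiny numpy sketch of the complex-"color" variant speculated about here (the commenter's idea, not anything from Hinton's paper): each activation is multiplied by a per-neuron complex color before squaring, and only the real parts of the squares count toward goodness, so a color of `1` contributes positively, `1j` negatively, and intermediate phases in between, all differentiably.

```python
import numpy as np

def colored_goodness(h, colors):
    """Sum of the real parts of (color * activation)^2 for one layer."""
    return float(np.real((colors * h) ** 2).sum())

h = np.array([0.5, 1.2, -0.3])                             # example activations
colors = np.array([1.0 + 0j, 1j, np.exp(1j * np.pi / 4)])  # per-neuron colors
# 0.5^2 counts positively, (1j*1.2)^2 counts negatively, the last ~0
print(colored_goodness(h, colors))
```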

  • @user-hl3zb1xm3g
    @user-hl3zb1xm3g 1 year ago

    Great talk! Also looking forward to seeing the MATLAB code.

  • @user-pd2bd7jp6x
    @user-pd2bd7jp6x 1 year ago

    Mindset really is important.

  • @user-jm6gp2qc8x
    @user-jm6gp2qc8x 1 year ago

    I find the idea of high layer activations only for positive data interesting. The network essentially isn't giving an output as in backpropagation; it's now a property of the network to "light up" for correct labels, thereby indicating whether the data is positive or not. I enjoyed this interview Hinton gave about his paper.

  • @schumachersbatman5094
    @schumachersbatman5094 1 year ago

    I wonder how the forward-forward algorithm, capsules, and "GLOM" connect to building those "world models" from observation. I think I understand Yann when he says that you shouldn't make generative models that predict things like pixels, but instead make predictions about more abstract representations so that you can ignore irrelevant details (like leaves blowing in the trees). Making predictions about higher-order, more abstract concepts like "which car overtakes whom" will make the network start modelling dynamics and gain an understanding of what it sees, including causal reasoning. Is this Hinton's plan too, or does he not think in terms of world models?

  • @ekstrapolatoraproksymujacy412
    @ekstrapolatoraproksymujacy412 1 year ago

    This is obvious; the real question is how to decide what's relevant and what's not. That will change over time as the system learns new concepts, so the generative models have to change too. How do you make such a system stable?

  • @eyeonai3425
    @eyeonai3425 1 year ago

    Schumachers Batman, see the Yann interview I just posted. He addresses your question obliquely.

  • @Jay-kb7if
    @Jay-kb7if 1 year ago

    Would negative-data training be somewhat similar to hypothesis testing? Or at least to what a null hypothesis was originally conceptualized as, before it became obscured: trying to maximize true negatives as opposed to minimizing false positives.

  • @jabowery
    @jabowery 1 year ago

    What was the constraining (low variance) complement to PCA Hinton mentioned?

  • @huyked
    @huyked 1 year ago

    Thank you for this interview. Though I don't understand its technical details, I was able to draw on some simple things and also to appreciate the serious brainpower of Mr. Hinton.

  • @rb8049
    @rb8049 1 year ago

    I'm wondering if the brain isn't using both positive and negative training at the same time. Much of daily brain operation is negative training: surprise generates activity; otherwise, it stays inactive.

  • @toddcooper5077
    @toddcooper5077 9 months ago

    I have been trying to find the podcast where Hinton basically says that the longer length of tokens contributes to hallucinations and variance based on standard ML/DL. Has anybody out there heard the same thing?

  • @Gabcikovo
    @Gabcikovo 1 year ago

    25:18 the hidden layer is asking: "Are my inputs agreeing with each other, in which case I'll be highly active, or are they disagreeing, in which case I won't be?" :)

  • @wi2rd
    @wi2rd 1 year ago

    Makes me wonder: do things like LSD perhaps trigger parts of this 'sleep' state system, but while still awake? It makes quite a bit of sense to me, especially considering how extremely similar 'tripping' hallucinations are to the things AI produces when it is allowed to 'dream away'. Curious.

  • @semtex6412
    @semtex6412 5 months ago

    im high af watching this video and im like, "hooooly shit this vid is one trippy dope" lol

  • @uppner147
    @uppner147 1 year ago

    Groundbreaking!

  • @eduardosuela7291
    @eduardosuela7291 1 year ago

    Let me see if I understand: he is redesigning the black box. The classical black box has explanatory features at the input and labels or variables to be predicted at the output. In this approach, everything is in the input, and the output is the "hint of simultaneity" of blocks of inputs. If that's so, I would stress that this concept is the foundation of all of this: the learning algorithm depends on this structure. One more thought: "idea association" works this way, but "perception-action" must work another way; an action looks like an output. Or can it fit an FF framework?

  • @davedouglass438
    @davedouglass438 1 year ago

    There are other ways to achieve what backprop does without backprop: use complex, not linear, quantities; use Conversation Theory; use Active Inference. "Attenuation" is a term used in neuroscience for enforcing the "fake data" / "real data" discernment.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    31:34 capsules, depth in pixels, and comparison to how babies learn, concentrating on what's odd

  • @AliEP
    @AliEP 1 year ago

    I'm also slow at reading especially when it comes to equations!

  • @bujin5455
    @bujin5455 1 year ago

    44:56. I think it depends on what is meant by "but it doesn't really matter if you can't tell the difference." Do we simply mean, as long as the illusion is convincing? Like a Hollywood special effect? Or do we mean, it's not "possible" to tell the difference, because it's beyond our capacity to interrogate? The former is a matter of laziness, where we are willing to accept the "optical illusion" because we don't want to understand the magic. Whereas the latter, the situation has moved to a point where we've pushed the investigation to a sort of "event horizon" from which we are bounded from making any further inquiry. I think it very much matters which of these situations we find ourselves in; ethically, if nothing else.

  • @urimtefiki226
    @urimtefiki226 1 year ago

    Multitasking is beneficial for the brain, it mixes things up.

  • @fredzacaria
    @fredzacaria 1 year ago

    Very interesting, but not so easy for laypeople to understand; perhaps another FF algorithm video would be very enlightening. Thanks, God bless.

  • @rickybloss8537
    @rickybloss8537 1 year ago

    Fascinating model. His view of consciousness doesn't seem as good as Joscha Bach's work, though. He says there are a million definitions of consciousness, but I believe the meaning most commonly used by philosophers is that consciousness is the feeling of what it's like to be something. Consciousness is a model of a person embedded in a story generated by the neocortex to be stored in memory.

  • @eyeonai3425
    @eyeonai3425 1 year ago

    see what Yann says about consciousness in the latest episode: my full theory of consciousness ... is the idea that we have essentially a single world model in our head. Somewhere in our prefrontal cortex and that world model is configurable to the situation we're facing at the moment. And so we are configuring our brain, including our world model for ... satisfying the objective that we currently set for ourselves. ... And so if you have only one world model that needs to be configured for the situation at hand, you need some sort of meta module that configures it, figures out like what situation am I in? What sub goals should I set myself and how should I configure the rest of my brain to solve that problem? And that module would have to be able to observe the state and capabilities - would have to have a model of the rest of itself, of the agent, and that perhaps is something that gives us the illusion of consciousness.

  • @rogermarin1712
    @rogermarin1712 1 year ago

    ​@@eyeonai3425 it's models all the way down!

  • @fungiside
    @fungiside 1 year ago

    Really enjoyed the talk but I do wish you’d ditch the big blinking eye. It’s distracting.

  • @Xavier-es4gi
    @Xavier-es4gi 1 year ago

    Yes, it's disturbing, please don't do that.

  • @craigsmith8368
    @craigsmith8368 1 year ago

    @@Xavier-es4gi Thanks for the feedback. Won't use it again.

  • @wolfgangpernice2283
    @wolfgangpernice2283 1 year ago

    AI is about to change your world, so pay attention. Love it :)

  • @jonbrand5068
    @jonbrand5068 1 year ago

    Hi Geoff. As infant animal learners, we output a behavior and get almost immediate feedback from a parent on whether that behavioral output of a moment ago was "good" or "bad." Did mom look away, or smile and interact more? This seems like a crude but fair example of backpropagation, no? What do you think, Mr. Hinton?

  • @user-ii3ii6cr4v
    @user-ii3ii6cr4v 1 year ago

    I have been watching and following this man since 2007 and all I have to say is he is an "EXTREMELY SMART FOOLISH MAN".

  • @ste07an
    @ste07an 1 year ago

    Why foolish?

  • @OKBumble
    @OKBumble 1 year ago

    The largest neural network has a trillion connections, which is about what fits in a cubic centimeter of the human cortex; the whole cortex is about 1,000x larger... What a magnificent thing the human brain is!

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 1 year ago

    But transistors are more than 1000x faster than synapses, in some cases billions of times faster. And smaller.

  • @strictnonconformist7369
    @strictnonconformist7369 1 year ago

    @@lucamatteobarbieri2493 And for the same amount of computation as the human brain does, they use many times as much energy. Not a problem for a stationary computer, but it'd never work for biological beings, even if they were born fully formed, given their brains and their sizes.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    12:16 what exactly Hinton means by "negative data"

  • @Gabcikovo
    @Gabcikovo 1 year ago

    13:01 supervised learning with an image with correct/incorrect data

  • @Gabcikovo
    @Gabcikovo 1 year ago

    14:10 subtracting negative (incorrect) data from positive (correct) data

  • @Gabcikovo
    @Gabcikovo 1 year ago

    16:34 example of negative data: in the negative phase you use characters that have been predicted already... you're trying to get low activity because it's negative data.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    17:04 they cancel each other out if your predictions were perfect (positive and negative phase)

  • @Gabcikovo
    @Gabcikovo 1 year ago

    33:11 the basic algorithm for generating negative data effectively from the model should work nicely before you choose to scale it up

  • @harveydent7559
    @harveydent7559 1 year ago

    Can someone explain what he means by real data vs fake data? ~7:30 ish

  • @AliEP
    @AliEP 1 year ago

    I think he means true vs. false predictions.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    54:08 "Yann LeCun's convolutional neural networks are fine for little things like handwritten digits, but they'll never work for real images," says the vision community

  • @Gabcikovo
    @Gabcikovo 1 year ago

    56:17

  • @Gabcikovo
    @Gabcikovo 1 year ago

    When there finally was a big enough data set to show that neural networks would really work well, Yann wanted to take a bunch of different students and make a serious attempt at the convolutional-neural-network image work, but he couldn't find a student who was interested in doing it :( At the same time, Ilya Sutskever and Alex Krizhevsky, who is a superb programmer, became interested and put a lot of hard work into eventually making it work... so Yann LeCun deserves to be mentioned too, according to Geoffrey Hinton.

  • @briancase6180
    @briancase6180 1 year ago

    I agree about consciousness. It's a matter of degree, I think, and that's what I hear Hinton saying.

  • @VivekPatel-sj3up
    @VivekPatel-sj3up 1 year ago

    This comment is for future visitors! ♥️ I was here! 26 January 2023.

  • @RogerBarraud
    @RogerBarraud 1 year ago

    I'll never look at a pink elephant quite the same way again 🙂

  • @marketsqueezer
    @marketsqueezer 1 year ago

    The problem with forward propagation is that it may change its mind about a projection already made and then quickly switch back to the earlier prediction. However, it is still better than backpropagation. Actually "funny": negative data is how you get rid of all the BS you don't want to know 🙂

  • @pensiveintrovert4318
    @pensiveintrovert4318 1 year ago

    Zero explanation of what "high" vs. "low" activity means.

  • @ekstrapolatoraproksymujacy412
    @ekstrapolatoraproksymujacy412 1 year ago

    It means literally that: a high or low magnitude of the output vector.
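
[Editor's note] Concretely, the activity being measured is the layer's goodness; a one-function sketch using the sum-of-squared-activations measure from Hinton's paper:

```python
import numpy as np

def goodness(h):
    # "high activity" = large sum of squared activations; "low" = small
    return float(np.sum(np.square(h)))
```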

  • @csabaczcsomps7655
    @csabaczcsomps7655 4 months ago

    Taken as a whole, the world's data is composed of good, bad, and hallucinatory (half good + half bad) data. You can't make a non-hallucinating AI with current data. Probably far, far in the future AI will somehow solve being non-hallucinating. Or you could make one special AI to filter out the hallucinatory data, but that's not a good idea; lots of things need hallucinatory data to work. My noob opinion.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    8:08

  • @Gabcikovo
    @Gabcikovo 1 year ago

    8:12

  • @Gabcikovo
    @Gabcikovo 1 year ago

    8:43 similar to GANs

  • @Gabcikovo
    @Gabcikovo 1 year ago

    8:51 using the same units

  • @scottmiller2591
    @scottmiller2591 1 year ago

    AI/ML folks only want one thing, and it's disgusting: Hinton's MATLAB code.

  • @artlenski8115
    @artlenski8115 1 year ago

    Mate, don't generalise your own opinion to the whole of AI/ML.

  • @scottmiller2591
    @scottmiller2591 1 year ago

    @@artlenski8115 😆

  • @mmvblog
    @mmvblog 1 year ago

    Hinton's MATLAB code is disgusting?

  • @redpepper74
    @redpepper74 1 year ago

    @@mmvblog 57:43

  • @Flameandfireclan
    @Flameandfireclan 1 year ago

    Lmao, it’s a meme. Calm down nerds

  • @biggestthreattoyourexistence
    @biggestthreattoyourexistence 1 year ago

    I know this guy is smart because none of his shelves are sagging.

  • @craigsmith8368
    @craigsmith8368 1 year ago

    In fact, he's a carpenter and built the shelves himself!

  • @ScottVanKirk
    @ScottVanKirk 1 year ago

    That blinking eye is really annoying. I'd rather see the interviewer.

  • @josephvanname3377
    @josephvanname3377 1 year ago

    All the quantum computing and AI folks are persecuting the very few RC folks by ignoring them and laughing at them when they actually try to make an RC business. No more!

  • @tbraghavendran
    @tbraghavendran 1 year ago

    What is an RC business?

  • @josephvanname3377
    @josephvanname3377 1 year ago

    @@tbraghavendran RC means reversible computing. Reversible computers are the advanced computers of the future.

  • @tbraghavendran
    @tbraghavendran 1 year ago

    Thank you. How is RC unique?

  • @josephvanname3377
    @josephvanname3377 1 year ago

    @@tbraghavendran Um. Landauer's principle states that in order to delete a bit of information, one must spend k*T*ln(2) energy where k is Boltzmann's constant, T is the temperature, and ln(2)=0.69314... The only way to get around this energy cost is to compute while deleting as little information as possible, and this is where RC comes into play. RC is the art of computing while deleting as little information as possible. RC is the only way to get around the k*T*ln(2) energy efficiency limit. And we are approaching this limit quickly since one runs into insurmountable problems with irreversible computation whenever one spends thousands of times k*T using modern irreversible hardware (to reliably delete the information, one must overcome the thermal noise). Without RC, computers will have a limited energy efficiency and limited performance, but RC has no such limits. RC will be much better.
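
[Editor's note] As a quick arithmetic check of the limit quoted above (a sketch; T = 300 K, roughly room temperature, is an assumption):

```python
import math

k = 1.380649e-23           # Boltzmann's constant, J/K
T = 300.0                  # temperature, K
E = k * T * math.log(2)    # Landauer bound: minimum energy to erase one bit
print(f"{E:.3e} J per bit erased")   # ~2.871e-21 J
```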

  • @missionpupa
    @missionpupa 1 year ago

    Bro, just put your logo in the corner or something; no need to flash the whole screen. It's just distracting from the conversation.

  • @ahsanmohammed1
    @ahsanmohammed1 1 year ago

    That eye is used in superstitions.

  • @PaulHigginbothamSr
    @PaulHigginbothamSr 10 months ago

    Geoff chose the wrong example: the pink elephant. The North Vietnamese had pink elephants; they rolled in the red clay and became pink. Geoff seems to be talking of absurdity rather than reality. To me, pink elephants really are a thing in reality.

  • @igormorgado
    @igormorgado 3 months ago

    A logo talking is so creepy.

  • @Rakibrown111
    @Rakibrown111 1 year ago

    the eye thing popping up is ANNOYING, just stop it

  • @samiloom8565
    @samiloom8565 1 year ago

    Talking without slides is a waste of time.

  • @Rakibrown111
    @Rakibrown111 1 year ago

    insanely annoying and pointless

  • @Rakibrown111
    @Rakibrown111 1 year ago

    Sooooo annoying with that eye 😝

  • @Methodinmadness2019
    @Methodinmadness2019 1 year ago

    It was annoying to watch, so I would have to just listen to your program. It's better to see the person who asks the questions instead of some white screen.

  • @5ty717
    @5ty717 1 year ago

    I don't appreciate your flickering eye motif, whose repetition gains frequency in a disturbing way instead of staying on the guest… you are open to accusations of subliminal training, which, regardless of intent, is illegal as well as against YouTube regulations. Not good either way. I've documented it. Desist.
