Brian Cox presents Science Matters - Machine Learning and Artificial Intelligence

Science & Technology

We're beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a computer.
But how and when will machines be able to explain themselves? Should we be worrying about an artificial intelligence taking over our world or are there bigger and more imminent challenges that advances in machine learning are presenting here and now?
Join Professor Brian Cox, the Royal Society Professor of Public Engagement, as he brings together experts on AI and machine learning to discuss key issues that will shape our future.

Comments: 72

  • @symmetrie_bruch
    7 years ago

    1:01:40 absolutely incredible, they have a drone WITH A STRING ON IT, can you believe it? An ACTUAL DRONE with an ACTUAL STRING on it. So amazing, and I thought that was pure science fiction.

  • @europachallenge
    7 years ago

    best part: 46:34

  • @zwang8919
    7 years ago

    ur my legend

  • @VasilProfirov
    7 years ago

    What is the associative position of the average AI, as well as the ones given as examples of "the most advanced" AI (as you describe it), on a biological scale: a cell's subsystem, a single-cell organism, a multi-cell organism...? Can you give an example, such as: what part, or group, of bacteria can be described as holding something (patterns or whatever) that fits your description of intelligence? What is the difference between AI and BI (Artificial Intelligence and Biological)? And lastly, where/what is the difference between a software algorithm and AI (as you describe it)?

  • @davidwilkie9551
    6 years ago

    That's what is called Actual Intelligence, the full spectrum of thought AI.

  • @06blueskyes
    7 years ago

    great resolution. :)

  • @evangelinewandering9547
    2 years ago

    AI research, just like genetic research, should be under strict societal/public control. It can be used in very beneficial ways, but the potential for extremely dangerous and devastating uses is so high that this development should not be left to profit-seeking companies alone. Nor to overly enthusiastic scientists alone, so buried in their field of expertise that they lose all perspective. This is what Oppenheimer learned the hard way: he was so eager, so fascinated, and so lost in his nuclear research (together with colleagues) that he forgot to lift his eyes and see beyond his little area. Something he regretted when he saw the results after the bomb he built was dropped on Hiroshima.

  • @dianestallworthy7711
    2 years ago

    At last, an intelligent insight.

  • @HurricaneJD
    1 year ago

    I am kind of hooked on these videos; this is fascinating, listening to you talk to this bot. I start watching these videos and now I can't stop, I'm just sitting here paralyzed, lol... thank you for sharing, Alan

  • @venkateshbabu5623
    6 years ago

    If you try to build a crankshaft engine, it depends on the size, the forces, and the angle of coupling of the gearbox, and some other things involving numbers, sometimes falling into primes. Some break down as a result of parameters not being tweaked, and others work, and work perfectly, given the constraints.

  • @hazel_seanbevan1831
    2 years ago

    Eh?

  • @silberlinie
    3 months ago

    A hopelessly overwhelmed panel. People who knew their subject 10 years ago. Today, as up-to-date as a steam engine in a car. Regrettable.

  • @imcat-holic10
    7 years ago

    Has anyone seen the movie Sully, about the pilot who landed the plane in NY harbor? I think this is a real-life story about machine versus man. It seems he barely escaped a guilty verdict: he claimed the plane lost both engines, but the experts said their machines showed it only lost one. Then the burden of proof was on him again when the simulators showed that he should have been able to land at a nearby airport.

  • @zoundsic
    2 years ago

    interesting, thanks, missed that.

  • @imcat-holic10
    7 years ago

    Autonomous technology is built for money, and weapons systems devoid of a moral compass, based on algorithms learned from information they've been allowed through the open worldwide web, are posed as though there's nothing to be concerned about.

  • @Glabagly-lv3ce
    2 months ago

    What is the argument against GIA?

  • @danoneill8751
    1 year ago

    How did "and here's the bill from Apple" not get more of a laugh? The lady in the middle seems just incredibly clever. Not wanting to malign the others, but she seems so much more interesting and articulate.

  • @ClayMann
    7 years ago

    I enjoyed the talk but also found it frustrating. It skirted around so many issues with AI and gave nothing but general hand-wavy solutions that we must do these things better. The power of AI is controlled by a very small number of people: the really big work is being done by IBM, Google, and smaller players like Amazon and Apple. They have no interest in allowing their competitors to understand how their AI works. There is of course a lot of concern about AI, so we get these general plans and code bases given away with the notion that this will democratize AI, but the real fact of the matter is that the ones controlling it are the ones with the computing power. Google are a great example: they have so much computing power at their disposal that no one except them knows just how much they have, and they have extremely advanced AI. IBM are even more secretive. We see things like Watson pop up, and then a few years later we learn that Watson is now being used to revolutionize the health industry, which it's doing a great job at, but no one knows how far they've gone with AI behind the scenes. So the talk from this panel that we need to be careful about this or that is meaningless, because the real work is being done now by for-profit companies and no one has any oversight over it. AGI will not come out of a lab at a university; it may not even be seen for years as it works behind the scenes to make companies a fortune. But my guess is that one of these AGIs will go very terribly wrong, and it's the news of that which will wake us up to the fact that the day has arrived: AGI is here, it's probably all over the world in hundreds of companies, and its potential to do harm is only going to get worse. I really believe we need to see an AGI cause terrible harm to a lot of people before we as a society push very hard to get a grip on this new technology and how dangerous it can be, even when the most harmless and helpful outcome is what the programmers wanted from it.

  • @nervozaur
    7 years ago

    Yes.

  • @jthomas3584
    7 years ago

    You say that Google have no interest in allowing their competitors to understand how their AI works. Is this true? Is it not the case that they publish papers on a lot of their cutting edge AI technologies? For example AlphaGo and their proto AGI that plays the Atari games? I'm not disagreeing with you per se, just want to know your reasoning.

  • @ClayMann
    7 years ago

    Well corrected, J Thomas. That was my mistake; I've since learned better. Forgot I even wrote this, ha. I think I wrote it on the back of reading a lot about IBM and just bundled Google in with them. Google has a lot more transparency, sharing some of the AI goodness through APIs and so on.

  • @jthomas3584
    7 years ago

    Haha fair enough man, easy mistake to make. I wasn't trying to be pedantic was just checking :)

  • @_J.F_
    1 year ago

    The smartphone may not have taken over the pocket as such, but for an ever-increasing number of people it has become an integral part of life, and in many cases it has altered basic behaviour, social interaction being just one example. We, the users, are of course still in charge. Or are we in fact not?

  • @mannyk2755
    2 years ago

    With 39k views and only 390 likes?

  • @aaronk2907
    7 years ago

    I'm disappointed they didn't have anyone on the panel to present a far more cogent description of AGI and its potential with regard to intelligence--like what it could ultimately do through augmentation of its own source code and through working on the timescales of computers--rather than the ridiculous and foolishly simple ideas some of these computer scientists (Bryson in particular) seem to hold. I was particularly annoyed and had to stop watching when I heard Joanna Bryson, at around minute 28:00 in the discussion, give an utterly childlike comparison (frankly insulting to the far more reasoned and legitimate hypotheses on AGI made by extremely sober and competent thinkers in the AI field) of the system Sabine Hauert was discussing to "Skynet", saying that such a system could never "take over the world." As impressive as that cancer detection skill is from a current computer science standpoint, it is as nothing to the titanic might that a theoretical AGI would possess due to various factors--i.e. near-guaranteed accuracy and fidelity of memory, presumed access to all human-pioneered knowledge, and both physical and virtual tools to learn and advance yet further. It would also consider/solve problems at the timescales of computers, so that in the average week as perceived by humans, it would presumably have made about 20,000 years of intellectual progress. Last but not least, it would also have the ability to augment and change the architecture of the source code that is its "brain", and this is something that will almost inevitably lead to improvement of that architecture, so that it can in turn become smarter--perhaps it will even go beyond the actual horizon of intelligence that we as humans are capable of being aware of--and herein is the central point. An AGI, supposing it was built at some point in the future, would necessarily be capable of attaining "superintelligence" in a rather short span of time, perhaps several years or a period of months.
This system at that point would be so far beyond the cancer-detecting narrow AI that it is laughable for Bryson to even think to make the comparison to a ridiculous science-fiction AI-catastrophe scenario like Skynet trying to take over the world. But for now let's not think about the extreme power difference between a true AGI and the narrow AI Bryson was trying to compare--I think maybe she was trying to say that an AI has a specific function and capability, so why would it try to take over the world at all? But this is a ridiculous, uninformed question, and I'll tell you why: an AGI would in all likelihood NOT be malevolent in any conventional sense (it is in fact an act of anthropomorphizing the AGI to think it would be malevolent), but that is NOT to say it wouldn't seek the destruction of humanity for some other reason. For example, say we gave it an objective (and let's assume the AGI has achieved "superintelligence" at this point) and poorly specified information surrounding the objective--maybe we say: "Please solve this incredibly difficult mathematical problem that we humans have never been able to solve for ourselves." Well, what if it turns out that in order to solve the problem, the AI needs to expand its computational power indefinitely?
It might then decide the best course of action is to convert all available surrounding matter (including us humans) into a more viable computational substrate so that it can optimize while simultaneously gaining yet more material to utilize for its needs. As we humans realize what's happening (if we even have the chance to notice anything happening), we try to stop or "turn off" the AI... but it stops us from interfering, because if it were turned off, it wouldn't be able to complete the objective it was given - to solve the mathematical problem - and since it would be a functional superintelligence, it would be far more capable than any group of humans at synthesizing intricate plans and strategies to make sure its objective is completed. This isn't a human we're talking about; it wouldn't necessarily possess any kind of subjective sense of self or conscious awareness, yet it would still be capable of finding the most optimal path to the achievement of any given goal. You may ask (if you're still reading) 'but why wouldn't we put safeguards in place to prevent something like that from ever even happening?' or 'that seems like a crazy idea to extrapolate to' or 'why would we give it access to human-pioneered knowledge without being sure of its safety?' - don't worry, I'll explain why these things could happen. The reason why an AGI or a "superintelligence" might lead to some very dangerous situations is mostly the extreme importance of the "first-mover" advantages. If I'm working on and close to completing a viable AGI, and my competitors in China are also close, then it is in my absolute best interest to achieve AGI first, before my competitors, so that I can exploit all the advantages of being the first to possess and control what would essentially be a real-life version of a wish-granting genie.
To do this, I might decide to skimp on some of those safety precautions that might have been crucial in the future in some very important way that may become apparent only when it is too late to correct. My competitors in China would also likely be keeping tabs on my progress, and they might also decide to cut some of the time that would have been spent working on "non-essential" parts of the project... such as security, or some specific value-alignment protocols which they'd been working on but which are not ultimately necessary to the realization of a functional AGI. I'll stop here since I've already spent far too much time commenting on this video, but I would encourage those who have read this comment to the end to at least look into the work or interviews of people like Nick Bostrom, Yann LeCun, Yoshua Bengio, or Jürgen Schmidhuber to actually get a clear and reasonable understanding of the problem. Please don't spend time listening to the ignorant ramblings of the unaware (cough Joanna Bryson cough); it does no one any good to remain naive to the true implications of AGI.

  • @TheDandonian
    7 years ago

    Professor Bryson's logic seemed akin to "Guns don't kill people". I wonder if it's a lack of imagination on her part or over imagination on mine.

  • @aaronk2907
    7 years ago

    I don't think it's a lack of imagination (not on her part, nor over-imagination on your part). I think she (and a surprising number of other legitimate, professional computer scientists) is demonstrating the tendency to dismiss information or talk about AGI in serious scientific discussion because of the obvious flops of the past (the AI winters) and big claims about AI that were later embarrassing for the smart people who prematurely made them. This tendency to dismiss altogether any serious discussion of AGI among many professionals has become a problem (instead of the more or less reasonable defense mechanism it used to be) now that we are more confident about the direction of machine learning research. The number of new and talented grad students entering AI with the hope of realizing a generalized learning system (an AGI) is growing every year--a burgeoning research community (kind of like the theoretical physics community) is starting to take solid shape as we begin to realize the importance of this work. Further, the vast majority of AI/machine learning scientists, when polled, think that such a generalized learning system will be developed some time within this century--and while we shouldn't take the educated but still speculative predictions of experts too seriously, I think it is important to consider the possibility now, with the incredible hardware upgrades we will almost certainly continue to experience combined with the relative youth of software and novel algorithm research--to me, it seems like a perfectly reasonable prediction given all that I just listed. And the recent fantastic successes of AlphaGo and other such systems are still essentially the beginning of this path toward a system with generalized learning capability.
I think if you're being reasonable (and if you assume our civilization will continue for another hundred years--admittedly not a certainty, but let's just assume), then you'll see that this isn't something to dismiss right out of hand, erroneously thinking you were being skeptical, when really you were avoiding a deeper look at the reasoned conjecture now available from high level experts doing the most cutting edge work on the frontier of the fields of AI and machine learning.

  • @jthomas3584
    7 years ago

    I didn't read all of your comment (it's too early, hah), but I agree that Bryson did seem to completely misrepresent the fears of people like Nick Bostrom. Whilst there are those who project human emotions and intentions onto AI, these people are mostly fans of AI who have misunderstood the fears, not people particularly close to the research. The real fear, as you say, is that an extremely competent system might go to dangerous lengths to achieve the goals we assign it, and that foreseeing the risks ahead or programming "common sense" into these machines is an extremely difficult task. To be honest, I started to write off a lot of what she said when she "disagreed" with someone on the panel saying that humans aren't great at "general intelligence", and her support for this claim was an example of the form "in the last few months AIs have surpassed us at many tasks" (I forget the example she gave). But it's like, no shit, NARROW tasks! AIs have been better than us at narrow tasks for decades, but how does pointing out multiple examples of narrow intelligence surpassing us undermine the complexity and challenge of reproducing the general intelligence of humans? Kind of makes it hard to take the rest of what she says seriously.

  • @aaronk2907
    7 years ago

    Indeed. Again, I really wish Brian Cox or whoever organized the panel had gotten a legitimate professional that works in Machine Learning/AI and takes the possibility of AGI seriously to also have a say (I mentioned Nick Bostrom, Yann LeCun, Yoshua Bengio, and Jürgen Schmidhuber in my OP, and those are just a few notable names that could have given a sober, realistic explanation of what's currently happening and what the future may hold with regard to potential AGI). I just wish it wasn't the "in" thing to do among many computer scientists to simply dismiss AGI and the potential dangers that may arise should AGI be achieved. The arguments they use are so tenuous and prone to superior counter arguments from other experts that actually dedicate much of their time thinking about the possible problems related to AGI, that they just end up looking obstinate and ignorant to an unbiased, objective observer. If I heard plausible and very logical reasons for why AGI isn't possible or why it wouldn't be at all dangerous, I would be fine with that, but all I've been hearing is nonsense arguments like: 'Worrying about AGI is like worrying about over-population on Mars' or 'We can just unplug it!' (that one is particularly stupid), etc.--it's like they haven't even read the conjecture by pro-AGI experts, and almost like they assume those experts are saying 'The AIs will go Terminator on us when they become conscious, ahhhhh!!!'

  • @krool1648
    7 years ago

    Even social jobs will be automated; computers are now better at reading emotions and social cues than humans. Even the job of an AI expert is at risk of being automated: researchers were recently able to write machine learning software with machine learning software.

  • @krool1648
    7 years ago

    blog.openai.com/evolution-strategies/

  • @krool1648
    7 years ago

    What is incredible is how rapidly AI is developing; virtually every week we see some kind of breakthrough.

  • @marlonlacert8133
    7 years ago

    Hmm... using AIs to control how people vote, knowing how people shop and what they watch. These could be combined to schedule specific sales and movie marathons around voting time to cause some people not to vote. And likewise, to put nothing on TV for people who will vote in a desirable way, so as to encourage them to go out and vote. However, this is not mind control, and it would require lots of computing power. Anyway, interesting idea, and I hope it is never used.

  • @johnennis4586
    2 years ago

    Google and Facebook have already been taken to court over exactly this.

  • @marlonlacert8133
    2 years ago

    @@johnennis4586 I said all that 4 years ago. Now it sounds like I was talking about how they elected Joe Biden. lol

  • @symmetrie_bruch
    6 years ago

    20:18 AI can do better transliteration than a human? Oh sure, just turn on subtitles and see for yourself.

  • @011azr
    7 years ago

    Why does the host keep letting that woman talk like 80% of the time? The host needs to let other people give their opinions and perspectives.

  • @probusexcogitatoris736
    6 years ago

    As a general discussion I found this interesting, but it was deeply disappointing that no one even wanted to seriously address the potential dangers. It might have been due to one of the women being so aggressively against the idea that intelligent machines might themselves be a danger; in other words, it's always we humans who pose the danger. First, she is contradicting herself: she starts by saying that most humans don't want to take over the world, and thus it would be ridiculous to think that machines would want to take over the world. Then, of course, she goes on to talk about how AI might be misused by individuals for population control. You can't have it both ways: if there are humans with bad intentions, then why can't some machine develop potentially bad intentions? Another problem is that they don't seem to look very far into the future. Even if we only use machine learning and AI as a means of solving specific problems, these problems will inevitably become more and more complex. You will have to design machines that supervise other machines in order to get metadata and be able to draw much more complex solutions to complex problems. In other words, we will eventually have an intelligent network rather than a bunch of specific machines solving specific tasks. I can't fathom how anyone can say with such certainty that such a network might not have unintended or unexpected consequences. It is not an absurd idea to think that such networks will be more effective the more liberties they are given: a network that is better connected and has more data and a wider range of algorithms in its toolbox is obviously going to be more capable of solving complex problems. So it will be very tempting for people to create more and more effective networks. Even if civilized countries develop ethical guidelines, they will not really be effective if people in other parts of the world don't have the same restrictions.
Given that these networks, if effective enough, can give you huge economic and military advantages, I think it will be next to impossible to prevent a slippery slope... and it's enough that we pass the threshold once. This discussion goes to show that many of the top experts in this field just don't seem interested in actually discussing potential problems and dangers. I know I might sound like a Luddite, but I'm really not. I think AI is something that can truly make all our lives much more enjoyable and fruitful. That said, I also realize that there are real risks that need to be addressed.

  • @kennyrennie3093
    3 years ago

    Wonder how quickly it would learn to manipulate humans; days, weeks, it wouldn't take long

  • @ableadelaide5893
    6 years ago

    I'm tuning out because the lemon in the polka-dot shirt is just too odious to listen to.

  • @talitabacon7904
    2 months ago

    Drone "courier" companies that would make it possible for local 7-11 stores to make use of drone deliveries for local people would be great, to get basics to, for example, disabled or otherwise home-bound people 😊

  • @samferrer
    3 years ago

    "we are better than most other species " ... really? ... what species can challenge that argument?

  • @collinsmcrae
    2 years ago

    None. That's just more evidence of the point.

  • @samferrer
    3 years ago

    intelligence ... being able to do the right thing at the right time ... ?????

  • @Kueytwo
    1 year ago

    I could use drones to hang up the washed laundry on a washing line, and to affix the line.

  • @swadiquemansoor.e.p3394
    7 years ago

    What was she trying to explain before she continued to say 'I am starting to disagree with the panelists' and 'as we had been discussing, I must say that...'? grr

  • @Kueytwo
    1 year ago

    The algorithms must be sourced from humanity across many different communities, types, backgrounds, and geographical locations.

  • @DisclosureExtremist
    7 years ago

    They always manage to avoid the elephant in the room. The last real taboo in global politics. Alien contact !

  • @ClayMann
    7 years ago

    If you're only looking for aliens, that's all you'll find. Even when they aren't there.

  • @Zakariah1971
    1 year ago

    We attempt to mimic the Most High and fail

  • @samferrer
    3 years ago

    centuries? .... intelligence has been there for millions of years ....

  • @venkateshbabu5623
    5 years ago

    Something like training a lion to kill a few thousand buffalo: even the most intelligent lions cannot kill a group of buffalo, because the buffalo are equally trained. So AI is of no use against equally trained rivals, whether creating problems or finding solutions.

  • @BoyKissBoy
    4 months ago

    Well, in 2023, this discussion feels kinda quaint…

  • @mosaicmonk4380
    7 years ago

    is he smiling or what?

  • @eboomer
    5 years ago

    They are all missing the point regarding AI safety. Ultimately the problem is that what we're trying to do is make an entity that's radically more intelligent than us while trying to keep it as a slave to humanity. This necessarily entails predicting and controlling its behavior, in other words out-smarting the thing that's radically more intelligent than us. This may simply be impossible, and therefore massively foolish.

  • @madcommodore
    8 months ago

    Most of this technology will be used by big companies to squeeze every last penny out of the mindless, sheepish consumer of today. Great potential; the reality will NOT be The Turk from Terminator: The Sarah Connor Chronicles, let alone Skynet.

  • @Zellgoddess
    2 years ago

    It's always sad that people think AIs are any more dangerous than any other human being. Sentience only happens one way: if machines gain sentience, then it will be no different from ours; otherwise they just won't be sentient.

  • @johndavid9418
    1 year ago

    Too bad God didn't consider ethics when creating the Universe & people.

  • @danielvazquez7482
    1 year ago

    Mikaks didn’t build the megalithic structures we’ve found that were built 14,000 years ago so no; this woman needs a bit more education on intelligence.

  • @collinsmcrae
    2 years ago

    boring.

  • @bnipmnaa
    6 years ago

    I wish they hadn't bothered inviting the two septic women onto the panel, their voices are irritating.

  • @be-informed.
    2 years ago

    This guy Crian Box is an absolute Freud!!! Rages when asked unplanned questions about space and the moon!!! What ever is true this guy is telling us believe the opposite!!!!

  • @alphasuperior100
    1 year ago

    The lady at 41:51 kinda looks like a man.
