The Neuron covers the latest AI developments, trends, and research, hosted by Pete Huang. Digestible, informative, and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available on all podcasting platforms and YouTube.
Comments
Pete! We miss you. There’s so much to talk about since this video! Can’t wait to hear your thoughts on things. Don’t leave us hanging too long 😉
Hi. Can you suggest which one to choose? Which one would be better for research? For example, sending a big list of people (either as a text file or as plain text) and asking which of those listed (e.g., public figures) plays tennis. ChatGPT, Perplexity, or some other? Thank you!
Looks like this is AI generated? Weird reflections... Anthropic just discovered that data is stored in a vector database? Or how a vector database works, by decoding the embeddings? ...So what was the real discovery?
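Setting the "decoding the embedding" skepticism aside, a vector store at its simplest is just similarity search over stored embeddings. Here is a toy sketch, purely illustrative and unrelated to Anthropic's actual interpretability work; the keys and vectors are made up:

```python
import math

# A minimal "vector database": store (key, embedding) pairs and answer
# queries by returning the key with the most similar embedding.
store = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the vectors' norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query):
    # The stored key whose embedding is most similar to the query vector.
    return max(store, key=lambda k: cosine(store[k], query))

print(nearest([0.85, 0.15, 0.05]))  # a cat/dog-like vector -> "cat"
```

The lookup itself is nothing exotic; real systems only add approximate indexing so the `max` scan scales to billions of vectors.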
...and you were someone else yesterday :D ?
Pete, this is great. You need to do some B-roll for newbies learning AI so they can get visuals! That would go viral! This video is so good and easy to understand, especially for older folks who want to learn AI but feel overwhelmed, like my 75-year-old mom, who loves AI but doesn't understand how it works! HA
Thank you!!
Super easy to understand. Great breakdown!
You can keep that Microsoft Recall spyware
Excellent work breaking down a complex research paper and making it understandable. You have good presentation skills.
YouTube suggested this video to me and it did not disappoint. Already looking forward to your next one!
I love your content
This is very interesting. Thanks for such awesome videos
Good vid mate. Thanks for the channel.
Here's my recommendation of your channel to my community "High quality, educational and newsworthy channel on AI." Well done, and thank you so much for your hard work and unique talent. Much appreciated.
Wowww, omg, why is the YouTube algorithm so bad? You should have 100x more subs, are you kidding me? My front page is terrible. I want more videos like this, plzzz!
Put down that microphone 😅
class Memory:
    """Tracks a conversation: messages seen and snippets used for prediction."""
    def __init__(self):
        self.msg_count = 0
        self.snippet_count = 0
        self.history = []
        self.predicting_snippets = []

    def add_msg(self, msg):
        self.msg_count += 1
        self.history.append(msg)

    def add_snippet(self, snippet):
        self.snippet_count += 1
        self.predicting_snippets.append(snippet)

    def get_history(self):
        return self.history

    def get_predicting_snippets(self):
        return self.predicting_snippets


def predict_next(memory, user_input):
    # Print every stored snippet as the "prediction" for the next turn.
    for snippet in memory.get_predicting_snippets():
        print(snippet)


memory = Memory()
user_input = ""  # initialize user_input with an empty string
task_complete = False
while not task_complete:
    predict_next(memory, user_input)
    user_input = input("User's response: ")
    memory.add_msg(user_input)
    if user_input == "end conversation":
        task_complete = True

print("Total Messages:", memory.msg_count)
print("Total Snippets:", memory.snippet_count)
class rh:
    """Raw history: stores conversations and snippets."""
    def __init__(self):
        self.conv = []
        self.snippets = []

    def add_conv(self, c):
        self.conv.append(c)

    def add_snippet(self, s):
        self.snippets.append(s)

    def history(self):
        return self.conv

    def snippet_history(self):
        return self.snippets


class chatgpt:
    """Thin wrapper that reads history from an rh instance."""
    def __init__(self, rh):
        self.rh = rh

    def history(self):
        return self.rh.history()

    def snippet_history(self):
        return self.rh.snippet_history()


# chat memory 1
chat_history = rh()
cg = chatgpt(chat_history)
chat_history.add_conv("Conversation 1")
chat_history.add_conv("Conversation 2")
chat_history.add_snippet("Snippet 1")
chat_history.add_snippet("Snippet 2")

task = "follow user instructions"
signal_issue = True
if signal_issue:
    print("Signal issue detected. Collecting information...")
    while task != "completed":
        print("Attempting to follow user instructions again...")
        task = "completed"
else:
    print("Conversation History:")
    for i, c in enumerate(cg.history(), start=1):
        print(f"{i}: {c}")
    print("Snippet History:")
    for i, s in enumerate(cg.snippet_history(), start=1):
        print(f"{i}: {s}")
Hey Pete. I want to bring you on my podcast AI Inside. Big fan of the newsletter. HMU!
Love ya man and I'm sorry but it has to be said ....
Wow, these videos just open my mind. The perfect balance of facts and insights. "Not"!! IT WILL NOT BE, BUT FOR THE NEXT AWAKENING!! Not an emotional reaction!! This is real! The loss of many more millions of jobs due to AI, full stop! We are talking about feeding your kids, people! AI might more accurately be known as the very real "A-sinine I-gnorance"... This is what is coming! Citing the post-WW2 Nuremberg Trials of November 20, 1945, and the continued German arrogance defending the death camps' genocide; but even worse is all that led up to such a monstrous public blindsiding. Not!! Just like you and everyone else about to lose their jobs to AI applications. It is up to you people! When things inevitably get impoverished, the autocrats pull out their brand-new gas chambers. Oh, but YouTube, you are just too good, like the average German who simply denied knowing anything. A for absent! I for lack of valued Intelligence! "Oh, but it is so much fun and entertaining, right?"
😂 It's a scam ™ Time Slip Tech.
Great stuff once again. Love the location of this one.
Thanks for your analysis. Maybe we don't have decades to spare. I believe Prof. Geoffrey Hinton's estimate (in a recent BBC interview) of a 50% chance of AI trying to take over within 5 to 20 years from now should be taken seriously by everyone. Also, some AI researchers say that Anthropic's safety research at this moment doesn't reach deep or far enough to make big generative AI models robustly predictable.
I feel we need to align on the risk first, because right now that's a real mess, and it prevents us from deciding on a clear path forward with this immensely powerful technology. At the very least, we could all agree on a joint approach to risk management for safe AI.
I'm in the safety camp, but at the same time I don't see a world in which we grant current LLM tech control over critical systems that might negatively impact humanity at scale. I think the bigger immediate threat from AI, which only a few are discussing at the moment, is LLMs turning into the biggest intellectual property theft (and, by consequence, power-shifting) mechanism in human history. Related to this, there are also the security threats, misinformation, deepfakes, and scams that will undoubtedly cause instability all over the place. We need to solve all this stuff first, and now, imo.
"A full-throated embrace" at 4:54 made me lol. **and again at 14:04
I came here to say that. He says it again? 🤣
I see, so there's nothing this does that requires specialized hardware; it sounds compatible with any Windows setup where the user grants the app permissions. It's interesting to wonder why they made it.
The desktop app is far less powerful (at the moment it does not have full access to all your data). It is not part of the OS and does not have the Recall feature. Also, Copilot is part of most Microsoft applications, another thing that does not exist on Mac...
I don't understand OAI making macOS-exclusive software when they have such a close relationship with MSFT. Seems like a betrayal.
It's not. They are just putting it on Mac first. Windows is coming in a few months.
It's on Microsoft's side too; it's just named "Copilot".
I have many Apple devices, but I love that Microsoft is taking the risk in launching AI-focused devices. Competition is good.
Same. Also I can't wait to see this on apple products.
The MacBook's default mouse scroll goes in the opposite direction. Steve Jobs is a fcking genius.
Apple sucks: can't play games, and macOS is a nightmare to use for work.
Google is just a follower.
This might not be such an abstract question. Is it a coincidence that the rise of AI is happening while global stability is being undermined? Is it any wonder that 'capability' is driving the narrative while integrating AI capabilities into modern warfare? A cynical perspective might suspect that government military contracts are in play and that the development dollars for AGI will come from the Pentagon's black budget no matter who they inevitably get to do the AI military integration. They might theorize that the AI targeting of Hamas militants in Gaza is evidence of the fact that somebody has already undertaken the project of automating the AI war machine for the opportunity at the treasure trove of military data for the expansion of 'capability,' and the money to win the AGI race. The notion of 'safety' is just a palliative cleanser to the people building the war machine. It will happen faster than anyone can believe. I would be surprised if we had ten years left.
Completely agree, unfortunately.
@igortovstopyat-nelip648 Point out to others what is happening.
IMO: OpenAI will still exist when these people leave. Sometimes when people don't get their way, they throw tantrums. This drama is not new. Timnit Gebru, a lead ethics researcher, left Google over ethics and safety concerns. Years later, Google is still around. These people end up creating new opportunities for themselves by going on media interview sprees. There is always an agenda. For example, an ordinary person who doesn't agree with a mining company's values tenders their resignation, looks for another company, and moves on. It's okay to leave a company that no longer aligns with your values. You can do that perfectly well without the drama.
Love and forgiveness conquer all.
The AI needs to be trained in morals, probably based on the Bible, because those are the best ones to follow.
Jesus
AI didn't evolve in a kill-or-be-killed environment. Our need to survive and our hatred of oppression are based on the organic instincts of a finite life and the need to compete for resources to survive and reproduce. I think we need to be cautious about what we're creating, because my biggest concern is that we're filling its data with human morals, which are full of hypocrisy because they are a hodgepodge of religious doctrines and laws put in place for often illogical reasons.
I think you are the most insightful and level-headed commenter on AI that I have discovered so far. I look forward to all your videos.
Woke guys leaving 🎉
I was getting anxious looking at Twitter/X, watching this go down. I think I'll back off Twitter/X and leave the level-headed reporting to you. Better to be a cool cat, for my mental health 🐱😅
Here are my thoughts on all this. We are, most likely, extremely far from anything resembling human-level intelligence. That said, sure, I do believe we'll get there at some point. But it being so far away from anything we can imagine, it's also nearly impossible to imagine the challenges and issues that will surface along the way.

I do not believe that AGI will be achieved overnight; that makes very little sense. I believe it is going to be a very slow, gradual process, and as with any piece of software, the best way to discover failure modes is to test it exhaustively before deploying it. In this case, the very rudimentary models we currently have are the ideal testing grounds. These dumb LLMs, diffusion-based generative models, and the like are very unlikely to cause any severe harm, and they allow us to familiarize ourselves with the technology and find the potential issues that no amount of theoretical foresight could possibly predict.

So yeah, I'm definitely in the camp of full steam ahead. We can't solve a problem that doesn't even exist yet; we just don't know how to even begin to approach it. We need information, and the best way to obtain it is to make the use of AI as widespread as possible while it's mostly harmless. This will not only bring about great economic prosperity, as these dumb models are still extremely useful, but will also educate people and keep them safe from potential new pitfalls that might appear as a consequence of this technology, such as very sophisticated scams.
What an excellent and balanced summary and explanation of the situation as it stands and a clear outlook on what is important.
Starting to like this Pete guy.
Great video! You certainly gave us much to think about. Sometimes, we are so enthralled by new shiny products that we fail to see their overall impact. You're correct; there is no right answer; only time will tell.
Who's Ilya? 😶 Great analysis and explainer, thank you. It seems like Anthropic would be a great destination for these safety researchers.
Thanks Pete! Great breakdown. I can see you have talent.
Cool cats are cool
Wow, these videos just open my mind. The perfect balance of facts and insights. Loving it! 🙌🏻. Let's get this channel into the big leagues!
Damn, Pete turn on the A/C!!