No Priors: AI, Machine Learning, Tech, & Startups
Your guide to the AI revolution: co-hosts Elad Gil and Sarah Guo talk to the world's leading engineers, researchers, and founders about the biggest questions:
How far away is AGI? What markets are at risk of disruption? How will commerce, culture, and society change? What's happening at the state of the art in research? Email feedback to [email protected].
Sarah Guo is a startup investor and the founder of Conviction, an investment firm purpose-built to serve intelligent software, or "Software 3.0" companies. She spent nearly a decade incubating and investing at venture firm Greylock Partners.
Elad Gil is a serial entrepreneur and a startup investor. He was a co-founder of Color Health and Mixer Labs (which was acquired by Twitter). He has invested in over 40 companies now worth $1B or more each, and is also the author of the High Growth Handbook.
Comments
here for the elad hats #NOPRIORSGANGGANG
pffft. 200b a year? the US gov is printing 1 trillion every 3 months.
knowledge will, I think, be modules like LoRAs that you download; the core reasoning will be independent of knowledge
People should understand this guy needs to protect the IP of his company at this stage. If he revealed their core model to build Devin, they would be eaten alive in no time.
Thank you for sharing
I'm sorry buddy, but ClosedAI will launch Devin-like agents of their own within 6 months. 😢 They're just too greedy; they'll kill all the new AI startups doing RAG, memory, and everything else. 😢 But even ClosedAI will lose to Google and Microsoft. Claude is already in the best position it will ever achieve. Even achieving top 5 within 2-3 years would be hard work for Anthropic. Only hardware companies have any possibility of making any money in this AI race. Also data centres. 😂
i recently discovered this podcast and i'm really learning a lot :)
❤
anyone else get Nelson "Big Head" Bighetti vibes from this guy?
The sandbagging is real. First you have code completion, now code engineering; next, the frameworks themselves will be written by AI - at that point the human role is done.
They gave him every opportunity to address the criticism of the video and he dodged it the whole time. He didn't say anything of substance. This interview does not inspire any confidence in his company.
OMG !!!! YOU DID IT !!!! It was my request on X (when you asked your audience). You guys are the BEST !!!!!!!!
Great pod
Haven't finished the whole interview yet, but he's just not gonna talk about faking the demo at all?
Wow.
Github copilot just crushed devin😂
There has been some legitimate suspicion that their demo was not entirely authentic. It's all over the Internet, including YouTube, so it would be interesting to hear their side of the story. This interview is a great example of how to speak for 30 minutes without saying ANYTHING about the technology of your product. Feels not truly legit.
Crazy to see so many haters here. Scott has built a crazy good team and is on the way to building a revolutionary product.
how much did they pay you to bootlick?
Probably interns
Devin is early but has been fun to use. The planner is especially interesting, and the dev environments are a good start. Appreciate the community on Slack.
LOL We are inviting scammers and grifters now?
Wowwww
Crypto grifter interviewed by crypto grifters lol
vocal fry contest
Wait wait... Did that Indian guy get a nose job?! 😮
When you know that a man was sentenced to 20 years for car theft, what sentence is appropriate for these three asshole data thieves?
The Matrix basically
As a 3d artist, filmmaker and actor, SORA has me super excited. I can't wait to play around with this tech. It's pretty crazy how all these modalities are happening at once--image, video, voice, sound effect, and music. All the pipelines needed to create media. There will be a time not far off, where we can plug in the prompt, and SORA 5 will create all the needed departments. As the human working with this, I would of course be heavily involved in the iterative generation and direction of each piece of media...and in the end the edit would be mine. I wonder how much 'authorship' a creator will have or be given.
but prior to commercially utilizing the SORA output there must be clarity on the source of the training data. It can't be OpenAI pushing it to creators and the creators saying they trust OpenAI. This is almost the exact same issue as textual generation; for fun and brainstorming, fair use I suppose.
Why would they hype Sora up and then not even have a timeline for releasing a product??
Because they are still working on prevention from misuse
Great interview
Cool interview, awesome to see a glimpse into the innovation being done to develop these video models
Smart! 😊 Personalisation and aesthetics. Cool. But also PRACTICAL worldbuilding, please. How can this help create quality lifestyles? Happy communities? A convivial society?
I'm definitely following these three talented guys on X. Really great interview, and without a doubt Sora is already making an impact in Hollywood like Pixar once did during the Steve Jobs era.
Really great interview. Thanks to all.
our subconscious does a much better job at modeling physics. Your conscious mind imagines the apple falling vaguely; your subconscious mind can learn to juggle several apples without dropping them, so it knows when they will be where.
We perceive possibility (which can be thought of as an extra dimension; the idea is from "imagining extra dimensions"). I would think that if trained on branching "possibilities", it'd have much more consistent physics. But especially with polygon-rendering-to-photoreal image-to-image inference on the horizon, there's more of a focus on speeding up inference these days (see Meta's amazing work on "Imagine Flash" with Emu). With this sort of temporal consistency, if OpenAI manages to get inference speed up, they could just use a traditional videogame physics engine with photoreal inference laid on top. It'll probably sell a lot, especially if they map electrical signals through the spinal cord to touch input and replicate that. Seeing and touching the real world through VR will be epic. It could also train the next gen of engineers (think deep-sea or deep-space repair) in a simulation that looks identical to, and behaves identically to, the real world.
Branching possibilities introduce cost that grows exponentially, so knowing how to (relatively) precisely predict something is also important. Humans certainly learn possibility, and we learn certainty too.
@tianjiancai1118 Certainly. I'm almost sure it'd have a positive effect on modelling what are essentially 4d interactions effectively, but with the sort of inference speed ups we're seeing now, I'm pretty sure image-to-image inference, polygon rendering to photorealistic is the way to go for the easy win.
You mentioned an "easy win". I would argue that any generation without understanding its nature can't be precise enough. Inference speed is important, but inference quality is also important to achieve indistinguishable (or so-called no-mistake) results. Though you can speed up inference and offer real-time generation, there are still cases requiring reasonable results.
@@tianjiancai1118 "Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation" is a really good paper by Meta that you should read; it achieves super-fast inference without really compromising on quality. There are some pretty good demos of the quality they're achieving with real-time inference.
Compute and data are converging on becoming interchangeable sides of the same coin. Flops are all you need.
Really, all these amazing things are possible just with transformers; not much innovation, just apply transformers to X and scale it. The most innovative thing they did was a tokenization method using boxes; the rest is mechanics.
Adding another axis in the form of imaginary numbers improved our ability to model higher dimensional interactions before. That's negative, bordering on bias - if it isn't innovation, then why didn't everyone else do it?
Interesting video! It really highlights the potential of using 3D tokens with time as an added dimension :). My experience with diffusion models and video generation didn't show anything quite like Sora's temporal coherence. Looking ahead, I'm excited about the prospects of evolving from polygon rendering to photorealism via image-to-image inference. While I might be biased due to my interest in this rendering, I think incorporating 'possibility' as an additional dimension, as suggested by "imagining higher dimensions", could address issues like the leg switching effects we currently see. Such physics-consistent behavior could potentially be borrowed from game engine scenarios, where, unlike an apple that behaves predictably when dropped, a leg has specific movement constraints (also affected by perspective shifts). It’s a speculative route, but it might be worth exploring if it promises substantial improvements.
Maybe internal 3D modeling should be introduced to solve the issue you mentioned (leg switching, or so-called "entity inconsistency").
@@tianjiancai1118 How so? (NB: are you familiar with how diffusion models work? It's just learning to denoise an image, or a cube in this case. I'm just suggesting that it learn to denoise the branching possibilities rather than a cube, so it knows what is not a possibility - suggesting, not guaranteeing, that the idea will work. There are things like ControlNets though, so if this internal 3D modeling is a valid idea, please share.)
Sorry, to be clear: internal 3D modeling is hard to achieve in a diffusion model (as far as I know). What I mean is somehow a totally new architecture.
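The "learning to denoise" description in this thread can be sketched minimally. This is a toy illustration of one forward-noising and one reverse-denoising step; the variable names and schedule value are made up for the example, and a real diffusion model would predict the noise with a neural network rather than reuse the true noise (this is emphatically not Sora's actual method):

```python
import math
import random

random.seed(0)
x0 = [random.gauss(0, 1) for _ in range(16)]   # a clean 1D "image"
eps = [random.gauss(0, 1) for _ in range(16)]  # noise added in the forward process
alpha_bar = 0.5                                # assumed cumulative schedule at step t

# Forward process: blend the clean signal with noise.
xt = [math.sqrt(alpha_bar) * a + math.sqrt(1 - alpha_bar) * e
      for a, e in zip(x0, eps)]

# Reverse step: given a (here perfect) noise estimate, recover x0 exactly.
# A trained model's estimate is imperfect, so real sampling takes many steps.
x0_hat = [(v - math.sqrt(1 - alpha_bar) * e) / math.sqrt(alpha_bar)
          for v, e in zip(xt, eps)]

print(max(abs(a - b) for a, b in zip(x0, x0_hat)) < 1e-9)  # True
```

The "denoise a cube" idea from the comment above would just swap the 1D list for a 3D array of spacetime patches; the arithmetic of each step is the same.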
I'm old. these guys look like they just left high school.
Haha, I'm 71. I know exactly what you mean. The average age of the developers of the first Mac was 28 years old. It seems like the average age of the AI community is so young but that gives these super smart people a lot of years to get things straightened out.
They almost have. Peebles is just out of university.
❤
It looks like someone is trying to URL-hijack Playground? playground.ai vs. playground.com | I assume this is the legit one.
They have the biggest chance now just by having OpenAI on their side
I can say only one thing: I hope Ilya keeps up his dreams through this long, challenging journey. I'm writing an essay at this very moment; I will use one quote, a sublime one, from him. Its title is: Artificial Persons: Between Market Value and Humanist Cooperation.
Back then, Mighty browser felt like if you took all the hot tech buzzwords being thrown around on Twitter and merged them into one product. Hope Playground doesn't take that same route! Rooting for him anyway; people making new stuff is always good!
what happened to his face? Are they beating him up in the basement of OpenAI?
Can you lend me some money first? I have no money, so I don't even dare to go out.
good hosts. 6:00 Jensen is clearly hungry and feels a little self-conscious about eating, but they both join him, and I'm sure that made him relax more. Great hosts.
We are at the Model T stage of robotics, and in this reality he's our "Dr. Noonien Soong" and "Henry Ford" at the same time. This is an exciting time to bear witness to history. Watch the films The Creator and, to a lesser extent, I, Robot and Spielberg's A.I. - that's where we are going, hopefully with better, less dramatic results. 👍
good
"Great insights! Thanks for sharing." Love you Guys!
Truly excellent interview. It is a pleasure when these conversations with geniuses are so well executed
Love these conversations and the future thinking of Elad and Sarah!
Absolutely fantastic podcast - really appreciate hearing your views as investors in looking at what’s happening in ai. Would be curious how you see particular public companies as positioned in relation to what you’re seeing or anticipate in startups in ai.