Why OOP is slow - and a stupid idea to fix it

Please watch this to get the corrections/clarifications to this video: kzread.info...
References
[1] react-native Github, github.com/fac... (Accessed 22/06/24)
[2] Object-Oriented Programming is Bad, • Object-Oriented Progra...
[3] 0AD Github, github.com/0ad... (Accessed 09/07/24)
[4] www.catb.org/es... (Accessed 08/06/24)
[5] Linux Kernel Github, github.com/tor... (Accessed 22/06/24)
[6] Zstandard Github, github.com/fac... (Accessed 22/06/24)
[7] Online C++ to C Converter, www.codeconver... (Accessed 22/06/24)
[8] codeconverter, codeconverter.... (Accessed 22/06/24)
[9] The famous red eye of HAL 9000 (User: Cryteria)
en.m.wikipedia...
[10] github.com/tor... (Accessed 09/06/24)
[11] en.wikipedia.o... (Accessed 23/06/24)
[12] Sun SparcStation 10 with 20" CRT by Thomas Kaiser
commons.wikime...
[13] en.wikipedia.o... (Accessed 09/07/24)
The simple examples in this video can be cloned from here:
[14] github.com/Val...

Comments: 345

  • @ImprobableMatter · 1 month ago

    Livestream to correct issues in this video is available here: kzread.info7TbPHExtGnQ

    Edit 2: Fine, I am happy to take the points people have raised. I don't believe in deleting videos even if I'm wrong, but I don't want to leave this up if it misleads people about compiler optimizations. I will leave it up overnight and then either: (1) delete this permanently, (2) reupload later with better examples, or (3) leave it up and make a follow-up, possibly live. Vote for whichever you want here. It would be fun to do a livestream to explain this further if someone wants to come on and critique my ideas live.

    Edit: As people have pointed out, yes, with full optimizations, a compiler would optimize away the overhead in my very simple example. However, I would argue my point still stands: there will be a bunch of cases where OOP causes overheads that would not be optimized away. I will strategically put the "channel owner hearts" on people who I believe have raised good points.

    I'm intending to have guests on for podcast-style livestreams. If you think that yourself, or someone you know, would be worth having on a show, let me know. I also have a more silly gaming channel: kzread.info/dron/A6Xc5VekJ17g86hqP5ATjg.html

  • @markowitzen · 1 month ago

    yeah, I think this is still pretty helpful regardless, and people should spam fryingpan to add it to his AI copilot startup. It's kind of dangerous in my opinion to always assume you can get away with certain things because the compiler will take care of it, and the concept can be extended as you mentioned. Going by OOP principles it makes some amount of sense that hardware should be taken care of by the compiler (increasing compiler efficiency there and possibly allowing for greater hardware optimization), while development stuff can theoretically be safely abstracted away into cloud-based systems on push. Of course such an environment would need to be seamlessly integrated and also work well with testing frameworks etc., which I imagine would add a lot of headaches, but it seems doable.

  • @chudchadanstud · 1 month ago

    No mate, it gets optimised by the compiler anyways. Look at Rust. If Rust code wasn't optimised by the compiler it would have horrid performance. Back to Basics: You are smarter than the compiler. Repeat it 3x in the mirror.

  • @BonsaiBurner · 1 month ago

    To the contrary, keeping these discussions in place and keeping them relevant is a good thing. We should never blindly journey forth with all of our modern crutches without recognizing the tradeoffs of not operating at the lower levels.

  • @michaelrenper796 · 1 month ago

    Recommendation: the real problem with most application performance boils down to data handling: how much copying, composition and decomposition (parse and render) happens, as well as communication between modules. OOP (as well as many other modern patterns) can lead people down the wrong path. But it's rarely the code path as such that's the issue.

  • @markowitzen · 1 month ago

    @@ImprobableMatter community post poll with voting options listed might be more organized

  • @TheEnigmaOf47 · 1 month ago

    What you are describing here sounds best implemented as a compilation prepass. There is no real reason why the programmer should be touching the transpiled code (I say this as someone who works with both C++ and C), and doing so introduces opportunities for bugs that the OOP languages and styles were specifically designed to prevent. Does doubling that work, but making it the editor's job, really make any sense? Instead, it could, and probably should, be a compiler argument, and at most appear as a temporary file during the compilation process.

    Additionally, the issue of struct size is also something that could be, and probably already is, a compiler flag, somewhere amongst the sea of flags available for nearly every compiler. As stated by some of the other commenters, you are touching on the array-of-structs vs struct-of-arrays problem, and data locality. Done sufficiently smartly, you may be able to partially skirt this issue, but in practice this is likely to come down to profiling both the OOP and transpiled versions to see which is faster for the given task at hand.

    In your field, physics, the computations are far more likely to be light in the complexity and interdependence of the data, but heavy in its quantity, in which case the struct-of-arrays mentality makes the most sense. Cases such as being able to compute the x axis and y axis separately for all particles together in a 2D particle simulation would be great here. But if you consider some more complex structures, such as a video game character, where you often find yourself converting between local and world-space coordinate systems, you do __NOT__ want each element of your coordinate frame matrix (in which, for most operations, position and rotation may depend on one another) separated across separate arrays, as the CPU would be idle most of the time, making a series of long-distance jumps back and forth just to compute one formula for one item. (Nor would they even be able to be true arrays, on account of needing to add and remove elements, but that is a completely different can of worms.)

    All of this is missing the forest for the trees though, as the majority of actual optimization work is the problem of picking the right algorithm for the job. A program that completes a job in fewer steps is nearly always going to complete faster than one that takes more steps but does so in some clever way. That being said, at least the transpilation part does sound like an interesting proposal; it would be interesting to see a side-by-side comparison of the same code in some real-world programs.

  • @ImprobableMatter · 1 month ago

    The reason I suggested it this way and not a straight one-to-one mapping is that there would be some leeway in this scheme for the programmer to make some optimizations if they wish. You could simply type code with classes and it gets straightforwardly preprocessed, but you could also type code with classes, click the little plus sign, and then optimize it somewhat by hand.

  • @CakeIsALie99 · 1 month ago

    I hate OOP, but this video doesn't have a good foundation on abstraction, and why we use abstraction in the first place.

  • @asandax6 · 1 month ago

    4:15

  • @madpsyber636 · 1 month ago

    bro made typescript for c

  • @0x0michael · 1 month ago

    C++

  • @user-xo6go4xc3w · 1 month ago

    🤣 LMAO

  • @gawhyrghun1913 · 1 month ago

    3:00 Why are you doing it this way? A getter would almost always return a const reference and will be inlined by the compiler. You can verify there is no assembly difference between these two on release builds. Also, the biggest advantage of getters and setters is debugging, especially in large programs. I'm not saying you haven't made some good points in this video, but this is not one of them. As to whether OOP is a bad or good practice, that is an ill-posed question. It depends on what you're doing with it. As with all things in programming, use the right tool for the right job.

    9:00 If only we had such a standard. Better yet, a language standard, so that people using different IDEs could cooperate. We could even give it some fancy name like "C, but better". C++, maybe?

  • @ImprobableMatter · 1 month ago

    But that's my point - the compiler hasn't inlined it. This is a textbook example (I think you can see an example in the wild scrolling past around 11:50) which has worse performance than if you didn't go the OOP route.

  • @gawhyrghun1913 · 1 month ago

    @@ImprobableMatter It did not, because you are returning a copy, so the compiler has to generate one for you. Had you returned a const reference, it would have been inlined at some -O level. I don't know why anyone would put this in an OOP textbook as an example, but it's certainly very poor practice, not done in real big projects.

  • @Lahha · 1 month ago

    GCC with -O3 gives me the exact same assembly whether I use direct access, a const-reference getter, or a non-reference getter.

  • @gawhyrghun1913 · 1 month ago

    @@Lahha So gcc is even smarter than I thought :)

  • @ImprobableMatter · 1 month ago

    I'm happy you mentioned real projects as opposed to a KZread example, because it's pretty clear that functional code is faster than OOP in the wild. If you think you know differently, you should tell Linus Torvalds and Facebook as just the two examples mentioned in the video.

  • @samuelprice538 · 1 month ago

    As a developer with decades of experience, this video made me wince, for all the reasons raised by other commenters.

  • @CjqNslXUcM · 1 month ago

    I think there are a lot of mistakes in this. Modern compilers use inlining to completely remove the cost of member functions. Iterating over a padded struct with only two items is a worst-case scenario; the struct will almost always be faster for any real use, and you can still tell the compiler to pack it if you want.

  • @ImprobableMatter · 1 month ago

    Try it yourself from the Github link. I used GCC 11.4 from just over a year ago - how much more modern would you like?

  • @CjqNslXUcM · 1 month ago

    @@ImprobableMatter I don't know GCC, but you probably didn't enable release mode and optimizations. In debug mode it'll keep all of that in so you can step through the functions when debugging.

  • @ImprobableMatter · 1 month ago

    It's reportedly the top or second-most used compiler...

  • @gawhyrghun1913 · 1 month ago

    @@ImprobableMatter The point remains. The fact that the user above verified that the assembly you should get with the getter is exactly the same on -O3 means you didn't enable it here either. Also, GCC 11 isn't modern; it's 3 years old at this point. GCC 14/Clang 18 are.

  • @Zpajro · 1 month ago

    @@ImprobableMatter Try `gcc -Wall -Wextra -Wpedantic -Werror -O2 -g -o output_program source_file.c`; it's my go-to baseline for compiling. Note the "-O2".

  • @NomenNescio99 · 1 month ago

    This is far from a new discussion; it has been going on in various forms at least since the '90s. I don't find it to be controversial, but perhaps a little misdirected. Whenever I have been dealing with performance issues, I have never found myself in a real-world situation where I said "Oh dear, if only this piece of code had a loop that ran 50% faster, it would really save the day". Database performance, ORM and cache factors, network latency, and avoiding any form of tertiary storage (even the fastest NVMe drive will still be 1000x slower than memory) - these are all much more likely to have a dramatic impact on overall system performance than code execution speed.

    Rewriting some core functions in an imperative/procedural language or even assembler can in some rare cases be beneficial. I have done that exactly once in my 25+ years of working in IT, and it is still a happy memory: I was able to inline some assembler code that used vector instructions to solve a problem, and the performance increased by orders of magnitude for that part of the application. But most of the time, even faced with performance issues in a problem like image scaling and cropping, it turned out to be much cheaper to throw hardware at the problem and scale up rather than rewrite code to be more performant.

    Finally, faulty and unreliable code is a much bigger real-world problem - never mind bugs that affect user experience or software reliability, just look at all the CVEs with security issues that keep pouring in on a daily basis. They are a real problem. This is where we should focus our effort to improve the software landscape, imho.

  • @ozgurpeynirci4586 · 1 month ago

    What is your experience? Why does it matter in this case? Not all performance problems are SSD bottlenecks.

  • @jhacklack · 1 month ago

    On the contrary, hardware has gotten orders of magnitude faster, but most programs have gotten slower or remained the same speed, which means the software has gotten orders of magnitude slower - and it's getting slower faster than hardware can get faster.

  • @NomenNescio99 · 1 month ago

    @@jhacklack My main points were:

    * Performance in real-world scenarios is not affected by loop unrolling or compiler-level optimizations; bottlenecks are almost always somewhere else.
    * Developer time tends to be more expensive than hardware, and hence it makes more economic sense to throw hardware at a performance issue rather than to write more efficient code.
    * Software performance is by far not the most important problem facing software development; buggy software is.

    What you say has no relevance for any of those points. Besides that, you are factually wrong in what you are saying. I challenge you to dig up a vintage computer and install some legacy software on it; you will be amazed how much faster both software and hardware are today compared to 10-20 years ago. And the problem domains computers address today are much bigger and orders of magnitude more complex. What was considered a big database 15 years ago, requiring the largest servers, can more or less be handled by a mobile phone today with better performance. And it is not only thanks to hardware: the query optimizers, compilers and most other software tools have also improved significantly since then.

  • @jhacklack · 1 month ago

    @@NomenNescio99 Firstly, most slow software is closed source, so we can't say what exactly is causing the slowdown, but that's immaterial, because software like MS Office or Visual Studio should be orders of magnitude faster than 20 years ago, and it is not. Secondly, it's unethical to waste people's time and profit off of the development cost savings, especially when the people using your software don't understand how insanely slow it is compared to how fast it should be. You don't have the right to turn people's machines into sludge even if it earns you more money. Thirdly, I think slow programs and buggy programs are both caused by developers not understanding their code and using too many abstractions and frameworks as a crutch. As for your challenge: I have to use some proprietary software from the '90s for my work on Windows 11, and it is dogshit slow, because Windows 11 is dogshit slow despite the hardware requirements!

  • @jhacklack · 1 month ago

    @@NomenNescio99 Also, code that runs so slow it tanks the framerate, or becomes unresponsive, or takes minutes to load, IS A BUG.

  • @romainvincent7346 · 1 month ago

    I must say I wasn't ready for a video that starts by saying OOP is inefficient and then goes on to recommend a self-operating napkin to replace it.

    8:03 So first, let's skip the AI mention, alright. Developers (and people who can read code in general) should be able to use whatever IDE/text editor they want (or can), be it Notepad or nano. What you propose is a dangerous slope where some code bases would become entirely unusable without the proper software, which, mind you, I'm sure already exists somewhere. But the world does not need more of that.

    15:13 The processor doesn't know about for-loops either, since it's all a bunch of conditions and GOTOs in the end. By definition, programming languages are an abstraction over instructions to machines; OOP is just another layer.

    EDIT: I somehow feel obligated to make it clear that I'm not usually a proponent of OOP, I just wanted to share my 2 cents.

  • @markowitzen · 1 month ago

    1. Reasonable point. 2. My understanding is that formal methods etc. are increasingly fields of study, and all levels are becoming increasingly integrated; chip designers will work with the people who produce assembly instructions and actively try to optimize execution for them. For example, branch prediction has been a thing for decades and is today a core element of design down to the hardware level. The processor might not "know" about for loops, but this doesn't mean people have just completely thrown away any hope of doing optimization work for them - modern compilers will regularly unroll, inline, and outright rewrite loops; I've seen all kinds of crazy hexagonal memory access patterns and various things.

  • @maxweber06 · 1 month ago

    It shouldn't be overlooked that in a lot of situations, code written in OOP is infinitely more performant than code that was never finished (due to lack of time and/or funding).

  • @Arbiteroflife · 1 month ago

    Yeah, a lot of people really don't get why OOP is a thing. Completing the project always comes first; performance is secondary. If you can't complete it, performance does not matter. OOP significantly helps in getting the project completed by organizing the code and helping with cognitive load.

  • @muha0644 · 23 days ago

    fun fact: this can also be done by passing `-O3` to the compiler

  • @Xylos144 · 1 month ago

    I think a lot of comments here are nitpicking, giving knee-jerk responses because IM dared call OOP 'bad'. But the point of this video seems to me quite simple and well reasoned:

    1. OOP has benefits.
    2. Some of these benefits involve forcing good coding practice through the structure of the code itself.
    3. These enforcements can and do add inefficiencies into the code.
    4. Enforcement of writing good and well-coordinated code should be a task assigned to the IDE rather than the code itself, so it doesn't impact performance.

    Of course the examples given are already well-optimized by compilers. They are simple examples in a KZread video, given for explanatory purposes, so it'd be embarrassing if they weren't. That doesn't mean compilers do a good job on all variations of the examples given, or on more complicated examples. And that fact is well-evidenced by the reality that high-performing code in industry is not written in OOP style. We're making our code bad in order to force that it be written well and coordinated across many programmer teams.

    Writing code in a maintainable and coordinated manner is something IDEs long since should have been responsible for, not the code. And arguably, for many applications (performance-dependent or not), teams use OOP not because it logically fits the task, but primarily because it provides this coordination. So IF that important aspect is offloaded to a capable IDE, then the default superiority of OOP is worth questioning. There's plenty of room to disagree with the video's position, but stop missing the forest for the trees and actually try to engage with the idea, not the arbitrary specifics.

  • @something4074 · 1 month ago

    Hating on OOP has actually been pretty popular on the Internet for a while now; this video is just a bad critique. There are plenty of good videos on the subject, "Clean Code, Horrible Performance" by Casey Muratori, for instance. And moving functionality from the compiler to the IDE makes no sense; the compiler can do anything the IDE can do. I also very much disagree with calling the comments "nitpicking" and saying they are about "arbitrary specifics". Using the examples in the video with timings shows a fundamental misunderstanding of the topic. It's a bit like a physics crank saying: "My math might all be wrong, but the idea is correct". No, the math is important, and so are the specifics in this case.

  • @ImprobableMatter · 1 month ago

    Yes. It's annoying that people saw 2 trivial examples and started laying specifically into them, instead of seeing the bigger picture.

  • @markowitzen · 1 month ago

    A lot of these people definitely seem overly self-righteous; you should obviously take everything on the internet with a grain of salt. You should see the old academic papers from some of the "holy wars" in CS history, though; people will argue over the most idiotic things, and the answer usually ends up just being "use both" until one side wins out. History and the community will decide who is ultimately "more right" and who is not, and YouTube comment sections probably aren't a good reflection of this.

  • @ctbur · 1 month ago

    Thanks for putting your idea out there! I'm sure this video was a lot of work, so I'm going to give my input to the best of my abilities.

    First off, most code does not have to be written with performance in mind. One reason you mentioned is that for a lot of software it's more economical to optimize for developer experience rather than performance, which is correct. Another big reason is that, for most software, there are a few "hot" sections of code that run over and over, while others run very rarely. For a backend service it might, for example, be the parsing/writing of JSON data that happens a lot more often than other things. Usually there are ways to address performance just in those sections by writing the code differently; e.g., in C++ you can write C-style code which will be as fast as regular C.

    Modern chip architectures are usually I/O bound, as it's a lot faster to do a calculation with local data than it is to transport data into and out of the chip. To reduce the amount of I/O, any modern CPU uses a cache to keep data closer on the chip. Thus, performance of code generally comes down to how well you use that cache. The cache works on the assumption of time- and space-locality of memory access, i.e., if you access memory address X, it assumes it's likely you will access address X+1 soon. Thus, it keeps memory around the address you access in the cache. If you often jump around in the memory space, the cache has to be emptied and refilled a lot, which amplifies your I/O way beyond the padding/slop you mentioned. (Also see cache lines.)

    The example at 2:24 is a very simple case where you actually don't need the cache at all: you just stream data from memory in order, and touch each address once. Thus the only thing that matters here is the amount of memory you stream, which is lower for the procedural example, which doesn't have the padding. While this access pattern does happen in the real world (e.g., see data-oriented design in game engines), it may not be the norm, and really depends on the use case. If, for example, you were to access the Alices in random order, the OOP way would be faster, as the X and Y of the same Alice will always sit next to each other in memory/cache. With the procedural option you jump around more in the memory space.

    Then, to your programming annotation idea: OOP is not intrinsically slower than procedural. It really comes down to what you write and what it compiles down to. Compiling down to a lower-level language generally already happens: instead of using C, a lot of modern compilers use an intermediate representation as a first stage of optimization (see, e.g., LLVM IR). This alone doesn't help anything unless you give the compiler some freedom to make certain assumptions, so that it can apply optimizations on the IR. This typically means that you have to pass along more information about how the variables/data are used (e.g., with the keyword const). However, this tends to also restrict what the programmer can do, and may impact developer experience in a negative way. Rust is a good example of a language that tries to pass along as much information as possible to the compiler, to allow it to do lots of optimizations (which by itself does not mean that the compiler will always be smart enough to do them in the right way, however).

    So, to tie it together: except for extreme cases, only some sections of some programs need to be optimized. Usually this can be done within any language, especially those that compile to machine code. To help the compiler optimize code, there are special directives or paradigms in languages. These are applied liberally where cheap, but may conflict with developer experience.

  • @ImprobableMatter · 1 month ago

    Fair enough, there are times when it actually is better to pack things as a struct; for example, a complex number is highly likely to require joint operations on both components. It would be more efficient to have one array of complex doubles, say, than two separate arrays. I guess my point here would be that the IDE/copilot/whatever should then help guide the programmer on whether to use a struct, several separate structs, separate arrays, whatever.

  • @ctbur · 1 month ago

    @@ImprobableMatter Yes, complex numbers are a great example. Mathematically speaking, it's even just one number :)

    > the IDE/copilot/whatever should then help guide the programmer whether to use a struct, several separate structs, separate arrays, whatever

    Ideally yes, but this is a question even a human expert may not be able to answer, as it depends not just on the code, but also on the very complex CPU architecture, compiler optimizations, and even the data you put into the program. As an example: when a web service serves two endpoints, but 99% of your users only use one of them, it does not make sense to optimize the other. You only know this after profiling the code with real data. Sometimes you can tell ahead of time, but this can require very general intelligence and domain-specific knowledge, like, e.g., which country most of your users are going to make a booking in.

  • @crono331 · 1 month ago

    "For a lot of software it's more economical to optimize for developer experience rather than performance" - right there is the giant problem with IT nowadays. Code is written for coders, and coders' convenience. And to make the PO happy. That epic, you know... Users? What's that? I have been in IT for over 40 years. What is done today is not more sophisticated than what we did in 1982; it just uses 10000x the memory and CPU, is slower, buggier, and generally sucks, as it is a one-size-fits-nobody solution. The solution, of course, will be to migrate everything to the cloud and add another 10x memory and CPU overhead.

  • @virior · 1 month ago

    Isn't the "DEE" just a compiler and having two windows open? I mean, if you can optimize the code translation, shouldn't it just be a compiler feature instead of a separate program?

  • @Bravo-oo9vd · 1 month ago

    5:51 What you're talking about here is commonly known as the Array of Structs vs Struct of Arrays approach to arranging data in memory. Yes, due to spatial locality, it makes sense to pack the arrays of primitives so that elements are closer together when running in a for loop, but by doing so, the other properties of the object are no longer close, so if we want to touch other properties, we're practically guaranteed a cache miss. I don't think that most for loops over objects only ever touch one property, so I think that if this type of transformation were applied to all OOP code currently running, it would result in a net loss of performance. Which layout is better really depends on the particular access pattern, and there is no right answer for all code. Of course OOP still had to make a choice about which one to use, but arguably it made the right one, because it really isn't that rare, when using an object, to use many of its properties at a time.

    And also, for primitives, we can use them with pointers and dynamic arrays, take their address in memory, and so on. How would that work if all objects were automatically Struct-of-Array'd? Either these operations would have to be defined somehow, increasing the complexity of OOP implementations, or they would have to be forbidden for objects, which dramatically decreases their usefulness. So I don't think we'd necessarily have to restrict OOP for people to optimize their usage patterns. Data-Oriented Design is getting more popular, and in applications where it's necessary to keep cache hit rates in mind, people do tend to use these patterns; e.g. in gamedev, Entity Component Systems are used to quickly query and process different types of entities that have different behaviours, and they can be written in big, bulky OOP languages.

    So I would say that if performance is the target, instead of having IDEs transform OOP into non-OOP code, we could perhaps better integrate profilers and other performance analyzers, so that instead of some system which probably doesn't have enough information transforming the code in the background, programmers can write better code in the first place. The performance loss of bloated code is mostly a result of a lack of feedback from tools, and just a lack of incentives to optimize instead of working on other features.

  • @ImprobableMatter · 1 month ago

    The thing is, OOP effectively forces an array of structs for what could be an enormous number of variables. Sure, in one loop you might use two of them, in another three others. But you would be loading potentially hundreds into your cache for the sake of those examples. I guess my point here would be that the IDE/copilot/whatever should then help guide the programmer whether to use a struct, several separate structs, separate arrays, whatever.

  • @primbin_ · 1 month ago

    On the topic of AoS (Array of Structs) and SoA (Struct of Arrays), what I'd like to see is a programming language with syntax to automatically SoA-ify a struct. For instance, I could declare a variable, e.g. Foo#16 myFoo, and that would represent an array of 16 Foo instances, but with the members of Foo laid out in SoA format, i.e.: Foo::member1[16]; Foo::member2[16]; Foo::member3[16]; ... There could also be special syntax for code which works invariant of the data layout, written as Foo#? or something of the sort. Even better would be if the chunked structs could automatically be made to use SIMD intrinsics where available. It'd probably have to be limited to a subset of types though, something like C#'s unmanaged types, or C++ POD types. I'm aware that all of this can be done with templates, such as the [nalgebra](docs.rs/nalgebra/latest/nalgebra/index.html#modules) Rust crate's SimdX types, but I've found the process of writing generic code using it, and of converting to and from SoA types, to be a major headache.

  • @Bravo-oo9vd · 1 month ago

    Also, addressing your main point of what the IDE should do: at 13:05 you say the compiler should only do hardware optimizations and the IDE should handle "organizational" things like checking that a programmer isn't trying to set a private variable. I'm not sure what the advantage would be of coupling the organizational aspect to a particular GUI program, i.e. an IDE. If an IDE used some higher-level organizational language to generate actual programming-language code, which the compiler then turned into a program, we would have three levels of code in total: we write the first, run the third, and have one extra in between. That "high-level organizational language" would just become another programming language, with two layers of codegen (twice the surface area for either the compiler or the IDE to generate bad code), so we'd have to edit the high-level organizational language first to get good codegen of the programming language, and then, via the compiler, good assembly. And where regular compilers have a frontend that handles the organizational aspect and a backend that does codegen, our frontend would simply be coupled to a particular GUI IDE. At 13:41 I agree that IDEs could do more to make development more visual and repeatable. I very much recommend the "Tomorrow Corporation Tech Demo" and Jack Rusher's "Stop Writing Dead Programs" talk from Strange Loop 2022 for what should be possible in programming languages and development environments of the 21st century. But much more goes into this than just whether the code is OOP or procedural.

  • @brag0001 (1 month ago)

    @@ImprobableMatter Except that OOP doesn't prevent you from organizing your code differently if that actually benefits you. You just chose to present it that way, and then also chose not to tell the compiler to try its hardest to optimize. There are many reasons why you'd sometimes want to break away from OOP patterns, but this video really doesn't propose an improvement. Instead you are proposing to break the entire ecosystem, offloading responsibilities of the compiler onto the IDE, version control, and code-change visualization systems, while breaking debuggers at the same time (or putting more responsibility onto them). If the solution is "make everything way more complex", what was the question again?

  • @hypercomms2001 (1 month ago)

    As someone who grew up with Fortran 77 and Pascal, and now Java and Swift... if I wanted to write a program to calculate a radiation pattern in magnitude and phase, I would use a procedural language like Fortran; but if I were developing a GUI undo/redo capability, I would use an OO language like Java. It comes down to the application's needs...

  • @chudchadanstud (1 month ago)

    > It is agreed...
    That's not how humans work. Murphy's law.
    > Let's get our IDE to enforce the rules...
    Now you're gonna make a sub-project for your project. How are you gonna justify this? More RAM costs a few hundred bucks. How many years will it take before your efforts to build these tools start paying themselves back? You're gonna have to get someone to maintain these tools too. Also, 300 MB saved isn't that big of an issue. By the time memory becomes an issue, you're pretty much working on a solution for clients whose budget is astronomical.

  • @chrimony (1 month ago)

    @1:27: Usually that's called padding, not "slop".

  • @ImprobableMatter (1 month ago)

    Check reference 4. The bytes used to pack are reported to be called "slop" there.

  • @chrimony (1 month ago)

    @@ImprobableMatter Your reference says that's the "old school" term for it, and he calls it padding in the article, and even uses the variable name "pad" in his example code.

  • @ImprobableMatter (1 month ago)

    The exact words I used are "The compiler will add 3 random unused bytes onto the end of the object, apparently these are called 'slop', to pad it out to 8 bytes". "These" in that sentence refer to the bytes, and the verb "to pad" refers to the "usual" definition of the term.

  • @chrimony (1 month ago)

    @@ImprobableMatter Your usage of the term makes no mention that it is an "old school" term. Nobody uses it in modern discussions, and viewers of your video who don't know any better would get a misleading idea. It's padding.

  • @starc0w (1 month ago)

    @@ImprobableMatter It is usually called "padding". The reason for it is that it must also be possible to create an array of these structs: because of pointer arithmetic, there must be no gap between two elements of an array. If there were no padding at the end of your struct (based on the example you showed), then the int of the next struct in an array would not land on a 4-byte-aligned address. Especially on ARM you would pay a pretty significant performance penalty. (You can also simply switch off the padding (packing), at least in GCC.) Structs basically have nothing to do with the concept of OOP; your demand to abolish them is really quite nonsensical. However, I agree with you that OOP is not a good concept and does not deliver what it promises. In my opinion, though, you are addressing the wrong points here. Furthermore, making yourself dependent on a certain type of IDE is certainly not a good idea.
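    The array argument above can be checked directly. With a 1-byte member followed by a 4-byte int, the compiler pads the struct so that in an array every element's int still lands on a 4-byte boundary. A small sketch (the reported sizes assume a typical platform with 4-byte ints and 4-byte int alignment, e.g. x86-64):

```cpp
#include <cassert>
#include <cstddef>

// On typical platforms the compiler inserts 3 padding bytes after `flag`,
// so that in `Padded arr[n]` every element's `value` stays 4-byte aligned.
struct Padded {
    char flag;   // offset 0, then 3 padding bytes
    int  value;  // offset 4 (4-byte aligned)
};

std::size_t padded_size()  { return sizeof(Padded); }            // typically 8, not 5
std::size_t value_offset() { return offsetof(Padded, value); }   // typically 4
```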

  • @0dWHOHWb0 (1 month ago)

    16:03 Wait, "students"? May god have mercy on their souls Or I guess mine, if they ever end up as junior C++ devs on my team... Fuck...

  • @taktoa1 (1 month ago)

    I'm a compiler engineer who works specifically on high-performance, domain-specific compute, and I can say with some confidence that to the extent you run into these kinds of issues, it's a result of path dependence (and to a lesser extent incompetence) on the part of language designers and compiler engineers. In particular, there are a wide variety of techniques around data layout and whole-program optimization that go wholly unused in the imperative languages you've likely ever written in. In short, compiler and language design are far from "closed fields" where no advances are possible, so there is no need to route around the field by recreating it. That said, I think exposing average programmers to compiler IR (which is essentially what is in the right-hand pane of your UI) is a great idea.

  • @ImprobableMatter (1 month ago)

    Having IR in the panel on the right is kind of the point I was getting at, but I alluded to a difference. In this case, it would not be a one-to-one mapping, so the programmer would be free to go in and make optimizations to the lower level code in the right panel. I guess I didn't explain myself well enough, and in retrospect used two terrible examples to motivate what I said, but do you see any merit in that idea? I'm pretty much resigned to either deleting or remaking this video at this point.

  • @taktoa1 (1 month ago)

    @@ImprobableMatter I think it's going to be extremely challenging to maintain a mapping between changes to source code and changes to IR, such that developers can modify either. LLMs can't (currently) be trusted with tasks like this that are highly correctness-dependent. You would need a search technique like program synthesis to prove that the high level code implements the same thing as the low level code. I would say the closest thing that I've seen in the literature that actually works is Halide, in which the user is responsible for choosing a specific sequence of compiler optimizations to be applied to a given piece of code. I think an approach like this is pretty cool and should be more widespread.

  • @ImprobableMatter (1 month ago)

    Right, I agree it would be challenging in practice, but this was my idea: Have the IDE give some more fine-tuned control to the programmer, but then "slap their hands away" when the programmer tried to do something that would actually invalidate the intent of the original OOP/class code. Now, obviously the markup suggestions I gave were very simplistic, but maybe going the other way is better: have comments/directives in the "left panel" above the class/object that then get propagated to the IR directly, or indirectly via a compiler?

  • @taktoa1 (1 month ago)

    @@ImprobableMatterYeah, the former is a challenging formal methods problem, and undecidable in general. The latter is basically the Halide approach.

  • @revengerwizard (27 days ago)

    Are you going to give all faults to the languages and compilers themselves? Have you tried designing one, making a compiler for it? If you want to improve something, do it yourself

  • @marcux83 (1 month ago)

    your momma is slow

  • @zlodevil426 (1 month ago)

    object-oriented parenting

  • @chudchadanstud (1 month ago)

    Just watched the whole thing. Someone tell this guy LLVM exists. Read up on Intermediate Representation.

  • @ImprobableMatter (1 month ago)

    I'm aware, but that's not quite what I'm suggesting.

  • @ArgoIo (1 month ago)

    @@ImprobableMatter It kinda is, though. You are suggesting some kind of transparent and customizable lowering of C++ to C as an intermediate language. Compilers already do that. Though C would be an odd choice, compilation can involve an arbitrary number of intermediate languages. LLVM-IR is just the language used by the LLVM compiler backend.

  • @Skiddings (1 month ago)

    The biggest advantage of OOP is cost. It's just cheaper. It's easier to write, maintain, design, and debug. It's easier to integrate frameworks. It takes less skill and less discipline, so more people can do it, and higher availability makes the labour cheaper. It's easier (and thus cheaper) to teach and learn, which is a big reason most science students learn Python these days. It is much, much harder to create and maintain a large software ecosystem in a language like C. The need for high performance has to justify the cost, and for the majority of businesses, institutions, and hobbyists it isn't justified. This is not to say that sharing ideas is wrong, and everyone is entitled to their own opinion. I just want to raise something you haven't talked about in the video.

  • @boggo3848 (1 month ago)

    Yeah I'd love to have to debug not just someone's OOP code but also the hidden autogen procedural code that came from it that can have completely independent bugs.

  • @raphaeldarley (1 month ago)

    Some of this reminds me of an interesting talk about Zig on memory optimisation and things like struct of arrays vs. array of structs.

  • @AGCipher (1 month ago)

    The thing you're looking for is ECS, and it's not a language but an architecture aimed at writing data-oriented code. Even then, OOP is perfectly fine in most scenarios, because in a world driven by money what really matters more is the time it takes to write the software in the first place, which OOP tends to be better for, as everything is neatly organised. ECS can still do this to an extent, but takes hits in other areas such as debuggability.

    And before you really need to squeeze out every ounce of performance, one should simply profile the code and focus on critical-path bottlenecks; it's not until you reach "death by a thousand cuts" territory that you might have hit a wall. I've worked at a place where the "performance is everything" mindset really just hurt the development of software, because it impacts the UX of your codebase so drastically that actually developing in it becomes a problem, resulting in projects taking up to a decade or more (yes, seriously). By that point, all the extra effort from working in a performant codebase is negated partly by hardware improvements, but entirely by the sheer cost of such a long development cycle (that is, if you're lucky enough to have funding for that long). And then we haven't even touched on the psychological impact on devs, or on talent retention: very few individuals want to work on the same thing for more than a couple of years, let alone ten.

    As someone who has to work with other programmers at large scale and has had to "school" many programmers at all kinds of levels: please don't misguide students into thinking performance is everything. Writing maintainable code is so much more important in this day and age (bar a few exceptions such as one-and-done projects). That is not to say one shouldn't consider performance at all, obviously, but consider your data and its usage first, asking yourself questions such as: how many instances of object X will I have? Will object X be needed in lots of different places? Do we even need dynamic allocation for object X, or can it live on the stack? How do users want to interface with object X? Is object X used in batches or at random? ...

    tl;dr: The compiler's job is to restructure the code a human can understand into something the machine can, and inherently, as part of this process, the resulting code gets tuned for the machine. So let compilers do their thing, and write well-organised, human-readable code with performance in the back of your mind, not the front; and if you do hit a performance problem, just bloody profile the damn thing! If you don't agree with this sentiment, then you're writing in the wrong language, and I would suggest you go write in assembler instead. Just let me know how it's coming along a decade from now.

  • @ImprobableMatter (1 month ago)

    I totally agree with you, and that is exactly what I teach students: that encapsulating things (for example) is generally a good idea and helps you write good code. On the other hand, I do think it wrong to make that a complete blanket statement. There are plenty of genuine reasons to code with performance in mind, CUDA being a good example.

  • @ghoulean (1 month ago)

    A few other points that I haven't seen after a superficial perusal of the other comments:

    1. 0:25 If you had picked any other example, you would be correct (Java, for example, makes a decent number of compromises to get OOP working), but instead you picked C and C++. C++ is a zero-cost abstraction over C: there is zero runtime cost between C++ and the equivalent C code. So why is your C code faster? Because objects are an abstraction layer over structs, not over two variables that happen to be declared close together. In other words, your C code is not equivalent to your C++ code; if you want to write the equivalent C code, you should use an array of structs, not two variables that happen to be declared close together.

    2. 2:20 A 25 ms difference for a 300 MB difference of memory... doesn't matter. If you bumped the number of objects from 100 million to 10 billion, you'd take ~28 seconds instead of 25, at which point the bigger question is why you're trying to process 10 billion objects.

    3. 3:32 No one's saying that OOP is "strictly better", except maybe the one guy in the corner trying to sell his book. This was a pretty disingenuous argument to make, but to provide counterexamples anyway: Bitcoin Core (Bitcoin client), Stockfish (chess engine), and GCC (literally the standard C/C++ compiler) are all written in C++.

    4. 4:15 This is a common argument made by OOP proponents for why getter functions are "necessary", but I would argue they're wrong: just access the field directly. OOP's strengths come from separating data from behavior, which lets you "plug in" different behaviors for the same data. If you're hand-writing getter functions, either you're conforming to idiomatic Java conventions, or you're mixing data with behavior (which is generally bad and should be avoided when possible).

    5. 6:00 Already mentioned, but C++ is a zero-cost abstraction. If your program behavior is different, that means your C and C++ code aren't equivalent.

    6. 9:00 Any language that locks you into an IDE is going to be met with immediate disgust. Tooling is an immensely important aspect of onboarding developers; it's a hard sell to adopt your programming language when I can run `npm init` and have a starter project set up (complete with linter, autoformatter, unit-test setup, various compiler options pre-configured, etc.) right out of the box within 30 seconds, and I could be using Notepad for all npm cares. Requiring a potentially AI-powered IDE just to mark a variable as "private" is akin to telling users they need a Microsoft account just to log into their PC.

    7. 11:40 Inheritance is a common strategy that OOP proponents push for code reuse, and a popular way to illustrate OOP to students. But don't do this. You should never use inheritance for code reuse; you should use it for polymorphism. Interfaces are sufficient for polymorphism and are also less committal to create, and OOP is centered around interfaces and behavior contracts, which enable the "plugging in" of different behaviors mentioned above.

    8. 14:20 Backwards. It is common practice for wikis and such to be generated from comments in code; therefore, you can't get rid of comments but keep the wikis as you propose. See JavaDoc, pydoc, cargo doc. Okay, cargo doc can also pull from markdown files, but you get my point.

    From a high level... apologies, but I'm going to phrase this as politely as I can: please reach out and ask people what they actually need before solving a problem for them. This is just what came to mind off the top of my head. Feel free to ask for clarification if you have questions on certain points.

  • @giovannicristellon3853 (1 month ago)

    I think there is a strong argument that if any software has enough understanding of the behaviour of a piece of code to produce a faster equivalent, then that's just going to become part of the compiler.

  • @hamesparde9888 (1 month ago)

    The thing is people that promote OOP often act as if it's a good fit for every problem, but not every problem maps well to it.

  • @Kelmoir (1 month ago)

    Some interesting points, indeed. But the first example was comparing apples with pears: you were comparing calculating over an array of structs vs. a struct of arrays, and that is itself quite a topic in C++ and how to optimize it. C and C++ are starting to be shunned by governments because they are not safe, as in preventing the undefined behavior granted by pointers and references when badly managed; yet pointer magic is one of the points where C happily shines performance-wise. I understand the point about performance, be it memory or speed. But the other examples actually scare me, e.g. "don't inherit from classes that contain actual executable code/memory" (slicing bugs and undefined behavior), though I guess that was just an unlucky example, as it is the obvious textbook one. And as others have stated, the issues shown don't incur much of a penalty when a well-versed developer uses these features; in fact, if the compiler knows the fundamentals of the architecture well enough through classes etc., it can actually use fancy parallel processing and such. Nvidia writes their graphics-card libraries for heterogeneous programming in C++, according to a C++ podcast. So, the IDE ideas are great; personally, I would prefer the language and compiler to enforce them, because I have seen some mean bugs where the comments stated things, and no one cared, until things crashed. Comments can lie, btw.

  • @kubajackiewicz2 (1 month ago)

    Wooo, controversial subject! Though as a "half-programmer" I thought it was conventional wisdom that OOP is used for clarity, ease of implementation, and organization where convenient (especially when less technical people have to code), while functional programming is used where performance matters but is otherwise just not as needed. The more something needs performance, the lower-level the programming will typically be; when it's not a concern, everyone is fine with some high-level language. I spent years doing functional programming in C, initially as a hobby, before moving on to OOP in Python, since it's just much more convenient for rapidly building something fairly complex and keeping track of what's going on. Naturally, when there is a bottleneck I "outsource" parts of the code to C/C++ implementations that will often do the same thing orders of magnitude faster (I frequently deal with crunching large data sets and image processing, in cases where it needs to run just once to prepare the data for further work, but it's still nice when it doesn't take 10 hours).

  • @Mallchad (1 month ago)

    None of the things that OOP claims to solve actually get solved in reality. Even worse, the concept of "OOP" most people talk about is not really the origin of "object-oriented" code; it's a completely messed-up and harsher version of what it used to be. Think Java and C++ vs. Smalltalk and Lisp...

  • @danielrhouck (15 days ago)

    The difference between array-of-struct and struct-of-array is not the same as OOP; I’d use structs with padding in C too. And there are times that array-of-struct can be faster, but I wish struct-of-array was more widely supported for the times it makes sense. 3:07 Are you doing this with `-O0`? I’d expect that getter to be pretty much always inlined and not change the output code at all.

  • @aaronspeedy7780 (1 month ago)

    I see your point, and others have brought up optimizations, but there is a more major way in which OOP (as opposed to structs, which is what you're describing and which procedural code uses as well) is slow: virtual functions. They are indirection, and using them excessively can make it hard for the computer to predict how you are using your memory, which makes your program much slower. As far as I'm aware, this can't be easily optimized away by the compiler either, so it's a major problem in many scenarios.
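    A minimal sketch (with hypothetical class names) of the indirection being described: a virtual call goes through the object's vtable pointer, so the branch target depends on runtime data that the compiler, and the branch predictor, may not be able to see through.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;  // resolved through the vtable at runtime
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Circle : Shape {
    double r;
    explicit Circle(double r_) : r(r_) {}
    double area() const override { return 3.14159265358979 * r * r; }
};

// Each iteration loads a vtable pointer and jumps through it; with mixed
// concrete types the target changes unpredictably, unlike a direct
// (and inlinable) non-virtual call.
double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double total = 0.0;
    for (const auto& s : shapes) total += s->area();
    return total;
}
```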

  • @ImprobableMatter (1 month ago)

    Yes, I see now I should have used that as an example instead of inlining getter methods.

  • @michaelrenper796 (1 month ago)

    @@aaronspeedy7780 Incorrect: most virtual function calls get optimized away by just-in-time compilers.

  • @aaronspeedy7780 (1 month ago)

    @@michaelrenper796 C++, which is what he used as an example in the video, will not optimize virtual function calls, as that changes the observable behavior of your program. There is a flag that enables this optimization, but it doesn't work everywhere in your program, so in excessive enough cases the problem will still persist.

  • @michaelrenper796 (1 month ago)

    @@aaronspeedy7780 Yes, C++ is probably the worst example of an OO language to pick ....

  • @solidreactor (1 month ago)

    From my point of view, this video is about starting a conversation, not about being right or wrong. Please keep this video up, and hopefully the community will continue the conversation in interesting, insightful ways... and perhaps a future video that continues this discussion?

    With that said, comments are mentioning that there are optimisations; however, sometimes the programmer has to write some of them by hand (bit-field optimisation, const references, inlining (which the compiler should usually do for you), etc.), and sometimes set up the compiler "correctly". I don't really resonate with the discussion at that level, but I really did resonate with your "separating high-level OOP from low-level C". It reminded me of when HTML separated what was content (HTML code) from what was style (which later became CSS), splitting them into different things, and different files for that matter; I'm talking about organisation. It has been interesting to go from monochromatic CRTs and coding with line numbers to IDEs with code completion, IntelliSense, and the ability to hover over a class or function and get a popup with documentation (kind of like your wiki idea... but not really) explaining what the item is for and how to use it. We have come a long way, but there are still more places to go and explore!

    About your wiki idea: this is what is lacking today; there is no bridge between the actual code and the "program specification". It is like two different teams that hardly communicate :D I personally have my own workflow, which includes using a mind map as a documentation, description, and specification tool, covering both "what" the program should do and "how" it should work and behave. It is not the mind map per se that makes it a powerful tool; it is 12+ years of refining my project-development workflow and having a well-thought-out structure in the mind map that helps with coding. I usually design the program specification, and even the initial "code" (the structure), in the mind map before I start writing actual C++ code. Basically I work as an "architect" in the mind map before going into "construction and engineering" in C++ land (borrowing building-engineering terms from my previous work before becoming a programmer), and I structure my mind map with the "6W1H" method, my own specific language classification, and some philosophies, or approaches one might say; think of it as OOP being a coding paradigm and my method being a kind of "structural paradigm". However, currently it is far from ideal: I have many thoughts about developing it further and integrating it into an IDE, but as of today it is a separate tool, unfortunately separated from my IDE, though I see the strength of what it brings and will never work without it.

    To bring it back to your wiki-idea discussion: what I see lacking holistically in programming is that no language or framework uses both text and graphics together to bring a better understanding and overview of the program design and development, from the initial design, through developing the program, to maintaining it later; and also how to communicate about it to others with an interest in the development, both programmers and non-programmers (for example graphic designers, strategists, producers, marketing, planning, UX, CX, etc.). What I am looking for is something similar to what the building-construction world has with their twin models, central models, the USD file format, etc., but for program development, including all the fields around it aside from the programmers themselves (UI, UX, planning, strategy, marketing, etc.). Anyone else resonate with this? What are your approaches or thoughts?

  • @ewerybody (1 month ago)

    Well, I'm "just" a Python person but even I learned where to do OOP and where to keep it functional. Starting off it feels so tempting and smart to make everything an object but with huge amounts of data that'll bite you hard.

  • @ronald3836 (11 days ago)

    You don't mean "functional" but "procedural" (even if you do that with what Python call functions). Functional programming is a different beast (basically a hobby for academics with not much practical benefit). en.wikipedia.org/wiki/Functional_programming

  • @Goofball6386 (1 month ago)

    I applaud you for putting yourself out there. I'll take interpretability and reliability over performance any day, but I'm not working in an environment with harsh time or space constraints. I really enjoy functional reactive styles of programming from a testability and interpretability perspective, while maintaining levels of abstraction.

  • @cheerwizard21 (1 month ago)

    With compiler optimizations, and knowing how to use the language properly, C and C++ code will have ABSOLUTELY the same goddamn performance. OOP is bad if you use it incorrectly. Memory alignment, padding, and tight packing can all be controlled in C++ OOP style as well; with GCC, for instance, use #pragma pack or __attribute__((packed)) if you want to avoid padding bytes in your C++ structs. The only difference is that C++ has many more features than C and more compiler options; that's one of the reasons it's more complicated than C. The SoA principle is also not suitable for everything: it has specific use cases, mostly big data loops and calculations, and is not really convenient for something like a camera controller.
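    A small sketch of the packing being referred to, using the GCC/Clang-specific attribute (note the double parentheses in `__attribute__((packed))`); the concrete sizes assume a typical platform with 4-byte ints:

```cpp
#include <cassert>
#include <cstddef>

struct Normal {
    char c;
    int  i;
};  // typically 8 bytes: 3 padding bytes after `c`

// GCC/Clang packing removes the padding, so sizeof is 5, but `i` may now
// be misaligned: slower, or on some architectures (e.g. older ARM) even
// faulting, loads.
struct __attribute__((packed)) Packed {
    char c;
    int  i;
};

std::size_t normal_size() { return sizeof(Normal); }
std::size_t packed_size() { return sizeof(Packed); }
```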

  • @MrSmertnick (1 month ago)

    As a game developer, I've had thoughts pretty much like these for a while now. OOP is extremely inefficient compared to data-oriented design, especially when there are a lot of entities/game objects/etc. (if you disagree with this, I'd like to introduce you to this cool device called a GPU). Iterating over every NPC individually, for example, to perform almost identical logic just to move them means a lot of memory jumps to various pointers. And memory is slow: about 200-ish CPU cycles per jump if you get a cache miss, so those 200 cycles are essentially wasted by the CPU. Not to mention that if your data is spread across random points in memory, there's no chance of any SIMD happening. Whereas iterating over two arrays of positions and velocities (for example) to move those NPCs is significantly faster, not only because all the data is stored in contiguous memory chunks (so you get more cache hits), but also because it is now easier to write SIMD code.

    The problem is that, because of the way modern languages work, you end up with a lot of public fields that can be written to from anywhere (because they have to be, in order to change them). So debugging becomes quite hard, and you have to invent your own tools to essentially repackage data from the "optimized" version into a "human-friendly" version, and hope that nobody on your team decides to change a value from some unrelated part of the code. So I had exactly the same idea about "why not do access restrictions in the IDE?" Terms like "public" and "private" are irrelevant to the CPU and exist only because we need them to ease the development process, so why not move all this stuff into our development environments? There would have to be a single standard across all IDEs for this to work, though...
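    A minimal sketch (with hypothetical NPC fields, reduced to one dimension for brevity) of the parallel-arrays version described above: positions and velocities live in two contiguous arrays, so the update loop streams through memory and is a straightforward candidate for auto-vectorization.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Two contiguous arrays instead of one heap-allocated object per NPC.
struct Npcs {
    std::vector<float> pos;
    std::vector<float> vel;
};

// Dense, branch-free loop over contiguous data: cache-friendly and easy
// for the compiler to vectorize, unlike chasing one pointer per NPC.
void integrate(Npcs& npcs, float dt) {
    for (std::size_t i = 0; i < npcs.pos.size(); ++i)
        npcs.pos[i] += npcs.vel[i] * dt;
}
```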

  • @evancombs5159 (1 month ago)

    That last part is called a language. If you want a procedural language that restricts the changing of a variable to a specific method you can create that.

  • @ronald3836 (11 days ago)

    Why move compiler functionality to the IDE? In the end it only makes sense to do all those checks on the final compilable program. (Until then it is fine if the IDE provides hints, but it should not prevent you from editing the program in the way and in the order you want. Imagine you can't add a certain line before a co-worker has removed some other line in an entirely different section of the project.)

  • @michaelrenper796 (1 month ago)

    Look at all the downvotes. This video is seriously flawed. Rather than a point against OOP, it's a point for "why most people don't understand compilers."

  • @markowitzen (1 month ago)

    why is refactoring a thing then?

  • @benebene9525 (1 month ago)

    @@markowitzen Because there is a difference between organizing code structure and optimizing data structures and algorithms on the one hand, and localized micro-optimizations on the other. Learn about compilers and software engineering before trying to argue about them.

  • @markowitzen

    @markowitzen

    Ай бұрын

    @@benebene9525 what’s your point other than trying to be pretentious?

  • @gawhyrghun1913

    @gawhyrghun1913

    Ай бұрын

    @@markowitzen Who's being pretentious here? You clearly do not understand what refactoring is, and it has little to do with code optimization.

  • @markowitzen

    @markowitzen

    Ай бұрын

    ​@@gawhyrghun1913 you are clearly missing the point, either on purpose or out of a denseness borne from a desire to criticize

  • @Air-wr4vv
    @Air-wr4vv9 күн бұрын

    "Premature optimisation is the root of all evil." Another thing is that programmers don't really care about performance. Good abstraction and maintainability are much more important and, it's believed, give better performance in the long term (if it's easier to reason about a program, then it's easier to maintain a high-performance design and algorithms).

  • @nathanfranck5822
    @nathanfranck5822Ай бұрын

    Typescript does this for the JavaScript ecosystem to a small extent, I would love a typing layer over c code that would do something similar. Integrating AI validation sounds good too. In the meantime I have been learning and using Zig, which provides great tools for defining useful types and methods to do automatic AOS to SOA conversion, and tonnes of other fun type manipulation tricks. It won't stop you from touching data in a struct though, Zig isn't neurotic about private data, so the idea would be that the programmer reads the code and tests before referencing it, but the code should be super easy to understand

  • @SimGunther
    @SimGuntherАй бұрын

    2:32 You could do the same test for the procedural style, but with separate loops for each array. The point would be slightly different, but the nuance in how the data is accessed makes OOP suitable only when all of an object's data is accessed together in one iteration, rather than as individual arrays that can be processed efficiently in separate loops thanks to cache locality. It might not dissuade those who say "OOP SLOWER BECAUSE DIFFERENT LANGUAGE", despite the two being almost exactly the same kind of language. If you had made the exact same video but used C++ in two different ways, instead of two different languages in two different ways, it would have illustrated the point much better.

  • @kristofferbouchard8395
    @kristofferbouchard8395Ай бұрын

    Didn't expect your pivot from physics to computer science. Good video none the less. Do you have a background in CompSci as well?

  • @ImprobableMatter

    @ImprobableMatter

    Ай бұрын

    Well, the first video on this channel is "How a Computer Works". I just recently started a job teaching CompSci, so have been thinking a lot about this sort of thing.

  • @brag0001

    @brag0001

    Ай бұрын

    It's actually pretty bad. He is essentially proposing a less efficient way of optimizing code than what 10+ year old compilers already offer, while breaking the entire tool chain supporting software development. He wants to offload things that a simple gcc -O3 does onto the IDE, as well as version control and code visualization, while essentially making it harder to use the IDE efficiently. Trying to outsmart the compiler at scale is always a bad idea. That's not how you optimize code. Instead you sample your code during runtime, create a flame graph and then optimize the hot code paths where your code spends 90+% of its time. It's pretty likely that you end up with very few lines of code that you actually need to care about. For those it can actually be useful to hand-optimize, even going so far as writing assembler if needed. There is a reason why Python is so successful in the AI world. Most of the code doesn't need to be efficient. Only the libraries used to do the heavy lifting need to be. And even in those, only a very small portion of the code is actually important for performance.

  • @asandax6

    @asandax6

    Ай бұрын

    ​@@brag0001 Hence why AI is using as much electricity as entire countries. One little optimization on code that gets called a lot can save a lot of compute time and power. It all adds up.

  • @brag0001

    @brag0001

    28 күн бұрын

    @@asandax6 No, it's really not. You didn't understand my point. Your second sentence is hinting at exactly what I've been saying over and over in the threads below this video. But the Python code in the AI world is NOT called a lot. It is mostly called once, and only used to initialize the highly optimized code that actually DOES run a lot, or to evaluate the output of the AI code. That's also the entire reason why Java is actually quite efficient in the real world, despite being perceived as slow: the HotSpot compiler keeps track of code executions and specifically optimizes code that DOES run a lot even further. The real reason AI uses so much electricity is that the current way of training AI, even with perfectly optimized code, is only really bound by the amount of compute you can throw at training your models. Thus the real limit in AI isn't code optimization, it's your wallet. That's why all the big tech companies are either dominating the field already or buying promising startups and throwing money at the problem. That's also why Nvidia is the only one making real money in this field. They sell compute! No one is executing inefficient Python code on Nvidia hardware. The models executed there already represent the peak of optimizations humans are capable of at the time of their execution. If you manage to create more efficient code, you won't use less compute. Instead you will build better models with it. You'll still consume the same amount of compute. Because your wallet is the only limit there ...

  • @nicolasjoulin3004
    @nicolasjoulin3004Ай бұрын

    You kind of glossed over the "inheritance" aspect. The moment you use dynamic runtime behaviour via abstract classes or interfaces, you can't know in advance the size of the "things" you need to allocate. You also don't know in advance which function pointers you will be using. I think if you dig into this problem you either get something inefficient (like OOP) or something tricky to use (like C). As pointed out many times, inlining and transforming objects into simple "flat" data structures are things compilers are already great at; those are solved problems.

  • @Mallchad

    @Mallchad

    Ай бұрын

    If you use inheritance, it generally becomes an explicit memory-layout choice rather than one that's "a side effect" of struct packing, i.e. an array of pointers vs an array of value objects. Also, the "size" of virtual objects is fully known at compile time - it's just that normal arrays don't allow for straightforward packing unless you use unions.

  • @simonfarre4907
    @simonfarre4907Ай бұрын

    Another reason why all the provided "solutions" are terrible, just abhorrent ideas, is that he thinks he can compare computationally heavy algorithms with "normal" (read: "every day kind of") logic. When doing physics simulations, the software development part is astonishingly trivial - that's not the hard part. The *hard* part is where a physicist comes up with the model and how to calculate it; translating this onto hardware is then trivial, because we have a (sort of) 1-to-1 mapping of mathematical computation on paper and in hardware. But as soon as you are to build any system that involves more than just mathematical I/O - which is usually also amenable to parallelization - you will see that performance penalties don't show up because of some getter or setter somewhere. A mixture of data-oriented design and object-oriented design with the emphasis on composition is always preferable. OOP aims to solve some problems which can't be solved at compile time (or AOT, ahead of time). A complex system is *rarely* a million ints here, a million unsigned chars there. That's not how software systems work - that's usually only in code paths where you do heavy computation. And yes, you *should* use DOD there, to utilize the cache and all of the speculative engine, as well as allowing the compiler to unroll loops and vectorize, possibly using SIMD like AVX, that comes with modern CPUs. But it is an incredibly atrocious idea to map that onto *everything, everywhere.* Because these are different problem domains.

  • @skilz8098
    @skilz8098Ай бұрын

    There are always tradeoffs, and the minimal performance gain of the strictly procedural C version compared to the C++ version isn't really enough to be concerned about when the C++ class types, with their CTORs and DTORs, natively have the built-in property of RAII. There's more to it than that with C++ compared to C. Sure, C++ has OOP capabilities, yet one is not bound to use them. It just so happens to be one of the language's many features, as it is a multiparadigm language. One of the things that C++ does have is generics through its template system. I can write a class or function template that will perform a specific or generalized task across many different types, instead of having to write the same class or function over and over again for every type that it has to support. If the mechanism suggested by this video works on basic classes, where it only gives you a marginal performance gain for trivial class types, what would the conversion process end up doing when it runs into a class template? Beyond RAII, there's also CRTP, SFINAE, and many other idioms and paradigms within the capabilities of the C++ language. Is it really worth the trade-off? Don't get me wrong, C does have its uses and merits and is as practical and viable as C++. C is a great language for systems programming, OS development, embedded programming and so on. C++ can do just about everything C can do, and then some, with very little to minimal extra overhead if used and implemented correctly. It's not like using Java, C#, etc., with slow automatic garbage collection making the software performance garbage. It's not like using interpreted languages such as Python, and so on. It is a fast, closer-to-the-metal, lower-level high-level language like C that uses minimal resources where possible. You're also not beholden to use all of the bells and whistles or every single feature of modern C++.
    You can still use a C++ compiler, linker and debugger and are still able to use C library code and C functions such as printf(), malloc() and free(), and you can still write your application or program in C++ in a purely procedural way, just like you would in C. Yet you still have the capability and option to use OOP, inheritance, polymorphism (multiple kinds of polymorphism), generics with templates, and metaprogramming, whether through the macro system, templates, or possibly even lambdas. For me, it's better to look at it as a tool and then just choose the right tool for the job. It depends on the task at hand, its importance, the requirements and criteria it needs to meet, etc. Not every application is going to be time-critical down to each nanosecond as if you were trying to launch a rocket into space. It all depends on the application, the target audience and what you need out of it. Just shaving off a few milliseconds here or there, within the power and capabilities of modern hardware, may not be that big of a deal if the application is a website where someone wants to play a song or look at videos of people making fools of themselves. Now, if you were writing software for flight navigation systems, highway traffic control, bank security, nuclear power plants, etc., then sure, I can see it making a difference - but even in those contexts, speed may sometimes matter less than accuracy and the ability to protect data through security measures.

  • @mouadlouahi9985
    @mouadlouahi9985Ай бұрын

    I am apparently late to the comment section. There are better examples you could have used to make your case for why OOP can be not so great. One of them is that it encourages bad programming practices, such as excessively splitting functions and objects where it's not warranted. For instance, imagine a class "Unit" in an RTS game: a clever programmer would have a building and a soldier be the same type of object, the difference being that buildings have their movement speed set to zero, but a beginner programmer may make a separate class for each that inherits from the parent class "Unit". As far as performance goes, the SOA (Struct of Arrays) example you showed can also end up hurting performance, as you will be taxing the translation lookaside buffer. Optimal performance is a much bigger problem and requires good programming practices as well as higher-level planning and good tooling to achieve. Something no free lunch something..

  • @simonfarre4907
    @simonfarre4907Ай бұрын

    As a software engineer: don't quit your day job. There are so, so, so many things wrong with this video. Yes, OOP has issues, but the issues you raised are not among them. You don't even know how to build optimized code.

  • @ImprobableMatter

    @ImprobableMatter

    Ай бұрын

    That's not true. I write code for most of my work, some of which I have documented on this channel. Most of my publications have involved some sort of high performance code, whether on a local or national cluster, or some set of GPUs. I do indeed use compiler optimizations for actual performant code; granted, I did not for this simple example.

  • @simonfarre4907

    @simonfarre4907

    Ай бұрын

    @@ImprobableMatter Is that true? How have you measured the code to be "high performance"? Because you didn't seem to even know what optimization was just a moment ago. You also get OOP wrong: getters and setters are not the problem, or even where the performance penalty is. Over-reliance on inheritance instead of composition is the absolute largest problem. Yes, I know it is en vogue to rant about OOP, and I will actually gladly join in. But you seem to not really be well versed in many of these topics, because I have seen you conflate functional programming with imperative, and even claim that "functional programming is much more performant than OOP" - the most untrue thing I have seen you say in this video and comments section. FP maps even worse onto the hardware than OOP does.

  • @ImprobableMatter

    @ImprobableMatter

    Ай бұрын

    @@simonfarre4907 I actually do know what optimization is, objectively speaking. Here's a very old example from about 8 years ago, but I wrote, optimized and profiled a CUDA implementation of a realistic physics simulation I needed to run: eprints.whiterose.ac.uk/117841/1/Aslanyan2017CPC.pdf I got up to about 20-25% of the theoretical maximum performance of the cards, which I don't think is too shabby given that it's real physics and can't be perfectly parallelized.

  • @simonfarre4907

    @simonfarre4907

    Ай бұрын

    @@ImprobableMatter Well, you had no clue about optimization flags for your examples. Because had you actually compiled them with any real flags, that code would have been instant: the compiler would be able to see that you are not actually doing anything else with that memory (and it would elide it). Clearly you don't have the experience you think you have. Did you *move* the implementation to CUDA or did you optimize an actual CUDA implementation? Because simply *moving* calculations onto the GFX cards will have *fantastic* results in and of itself (and 20% faster is nowhere *near* the amount of optimization physics simulations can get compared to CPU computations). I quickly scanned that document and saw no code in it. If you optimized the math behind it, that's a physics-related question, not a software one - and that's *not* rarely how things get done faster when calculating on computers (finding better ways to model a problem *before* actually implementing software to do the heavy lifting).

  • @simonfarre4907

    @simonfarre4907

    Ай бұрын

    @@ImprobableMatter Oh great, youtube decided to remove a comment I wrote again, probably because I had the word "doc#m3nt" in it. And you want to leave optimizations in the hands of AI? That's funny. Anyway, you clearly did not know about the flags, because if you did, you would know that the compiler would see through your example and elide it entirely in the end. There are real problems with OOP, but you don't mention them at all here. Anyway, did you move the implementation onto CUDA or did you optimize the CUDA implementation? Because simply moving simulation-like computation onto the graphics card will show *massive* speedups. Measuring whether you saturate the bandwidth of the GFX card doesn't really say much.

  • @nathanbanks2354
    @nathanbanks2354Ай бұрын

    2:20 The speed test is likely unfair because it's a loop and doesn't have to deal with a cache miss every time the program looks for x and there's no y next to it. If you did random access, you'd probably get different numbers. On the other hand, iterating through an array is pretty common; random access is more like a hash table. However I'm usually more concerned about speed of programming than execution. I switched from C++ to Rust recently because C++ reinvents things like memory management every decade or two so it's more difficult to program well. Rust separates data structures from function implementation with traits instead of inheritance, so it isn't really object oriented. I've never come across anything like the borrow checker in other languages, but I've read enough about threads in Java to recognize it's safer. More time to program, but less time to debug.

  • @monad_tcp
    @monad_tcpАй бұрын

    4:02 If they were all C++ classes wasting resources, I wouldn't complain. It's worse: this thing is running JavaScript, which is hashtables of hashtables. Not only are there classes wasting memory, every dynamic dispatch wastes hash lookups, and every property access does too. Classes just waste 2 extra pointers, not an entire hashtable, as they do in JavaScript.

  • @SteinGauslaaStrindhaug
    @SteinGauslaaStrindhaugАй бұрын

    I agree that OOP is slow, but not for the reasons you talk about. Most of the issues you mention could be fixed with a better optimising compiler. My main issue with OOP is that it's slow to write well and hard to reuse code. Inheritance is a horrible way to reuse code; it's much easier, cleaner and, in my practical experience, safer to put reusable code in a free-standing function than to mess about with inheritance. Besides, most non-trivial class structures end up with tons of copy-pasted code even if you descend into the madness of inheritance, because most OOP languages don't support multiple inheritance, so any idea that applies to multiple classes but not all classes in each subtree of inheritance will have to be implemented, in OOP fashion, by manually copy-pasting the code around or by "implementing an interface" - which is just an OOP language feature that enforces copy-pasting code (or you could implement the interface differently, but usually it will be somewhere between 98% and 100% copy-pasted code). Whenever I'm forced to work with a language that is very OOP-focused and doesn't have multiple inheritance, I tend to regularly write classes that are very light on logic - often basically a struct with a couple of attached methods that mainly just plumb the data properties into a set of shared free-standing functions - or a static method of a util class in those crazy languages that insist on having absolutely everything wrapped in a class, like Java. Fortunately, even the insane "everything is an object" dogmatic OOP languages are still usually multi-paradigm, allowing plain procedural code and usually some functional code (even if you are forced to make stupid pointless classes to wrap the functions and procedural code).

  • @ImprobableMatter

    @ImprobableMatter

    Ай бұрын

    Wouldn't you agree that having an IDE handle some of that copy-pasting in a systematic way is a good idea? Fair enough if you don't, copy paste is copy paste...

  • @SteinGauslaaStrindhaug

    @SteinGauslaaStrindhaug

    Ай бұрын

    @@ImprobableMatter I prefer not to copy-paste shared code but rather to actually reuse the implementation via a function/procedure call. I also don't like IDEs; I prefer editors that are lightweight and responsive over heavy, slow IDEs that try to justify their slowness by writing code for me. It's mostly the stupidest OOP languages that "require" using an IDE, because stupid OOP languages often require lots and lots of repetitive, verbose boilerplate code (and copy-pasting, if you try to write pure OOP in a language without multiple inheritance). If you use a better language that doesn't require such huge amounts of boilerplate - code so trivial that an IDE can generate it for you - you don't need an IDE.

  • @theondono
    @theondonoАй бұрын

    Casey (Molly Rocket) made a much better argument (his video makes reference to “clean code”, but covers classical OOP conventions). While I agree you are right about OOP, I think you got the wrong reasons. In any case, your proposed solution, which I’m not a fan of, looks a lot like what the js people do with JSDoc + TS for getting type checking. Because you’re playing with semantics, I’d argue it’s worth it to just create a different language, virtually identical, but with the features you want. Having parts of your build system care a lot about comments and others strip them out completely sounds like the hackish solution that gets people called in a panic to fix a misbehaving system at 2am.

  • @Salabar_
    @Salabar_Ай бұрын

    The problem with OOP is that it never lived up to the promise of modelling the entities of a domain. Say you create a class Fireball with a method explode(). In the constructor you have to provide a reference to World so that Fireball can locate every entity it will hit with the explosion. You also have to provide a reference to the caster to calculate damage using their stats. Now you have World calling fireball.explode(), which in turn queries world.get_characters() and caster.get_stats(). At this point you realize you effectively broke encapsulation, and it would be easier and more maintainable to just have all calculations done by World. And now, since using OOP for actual business logic turned out to be a fool's errand, it is relegated to all sorts of hardware abstraction duties or easing self-imposed limitations. Except it does this in a still-cumbersome way, and everyone teaches OOP with those stupid examples like "Square is a child of Rectangle".

  • @ImprobableMatter

    @ImprobableMatter

    Ай бұрын

    Yes, I agree. I daresay this is the argument made by Brian Will in [2]. If you were to actually follow the OOP principles rigorously, it would definitely be slower than procedural, and a compiler would not optimize that away.

  • @Meech-e8z
    @Meech-e8zАй бұрын

    Almost immediately in this video is a fundamental misunderstanding of what compilers are capable of. Even the most basic compilers have no problem inlining accessors. If you are seeing something to the contrary it is almost certainly because either you have not built with optimization enabled or you have created an artificial inlining barrier through dynamic dispatch or placing the accessor implementation in a separate translation unit without enabling LTO (either of which would just be bad engineering).
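
    To illustrate the point about accessors (a toy sketch; `point_get_x` is a made-up name): with optimization enabled, any mainstream compiler reduces a call like this to a direct field load, identical to reading the member directly, so there is no per-call overhead left to measure.

    ```c
    #include <stdio.h>

    struct point { int x, y; };

    /* A trivial accessor. Built with optimizations (e.g. gcc -O2), this is
       inlined to a plain field load; calling it costs the same as p->x. */
    static inline int point_get_x(const struct point *p) { return p->x; }

    int main(void) {
        struct point p = { 7, 9 };
        printf("%d %d\n", point_get_x(&p), p.x);  /* same value, same generated code */
        return 0;
    }
    ```

    The inlining barriers mentioned above (dynamic dispatch, or a definition in another translation unit without LTO) are exactly the cases where this reduction would not happen.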

  • @RichardLofty

    @RichardLofty

    Ай бұрын

    "Compiler can fix my trash code" Ok. It's still trash, BY YOUR OWN ADMISSION.

  • @benebene9525

    @benebene9525

    Ай бұрын

    ​@@RichardLofty It is pretty obvious that you have no clue about how programming, processors or compilers work. Learn about language design, CPU architecture and compiler design first, before blindly following other people's misguided ideas...

  • @Meech-e8z

    @Meech-e8z

    Ай бұрын

    ​@@RichardLofty lol, I've never heard someone try to argue against compiler inlining before... If you want to hand-inline your code, by all means. When you need to fix something and it's in 20 different places, I hope it goes well.

  • @KingJellyfishII
    @KingJellyfishIIАй бұрын

    I do disagree that you should separate "organisational" issues from the rest of the compiler. It makes sense to me to have strict rules enforced by the compiler, to give the programmer little option but to do things correctly. Your example - having a comment denoting that a value is private - is entirely possible and in fact common in OOP code. Some say you should getter/setter everything, which would be mad; you can have private variables that strictly must be private, and public variables without getters and setters. Since struct padding is only relevant on huge datasets, why not worry about it only when there is significant memory to be gained? A good compiler should optimise away the case of a single stack-allocated variable anyway. Keep things as structs; if you have tens of millions of structs, then consider, for that one case, using a set of arrays. This does not necessitate completely redoing the entire language and tooling.
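
    On the struct-padding point: field order alone changes a struct's size, and sizeof makes it cheap to check whether it matters before reorganising anything. A small sketch (the sizes in the comments assume a typical 64-bit ABI with 8-byte-aligned doubles; the struct names are made up):

    ```c
    #include <stdio.h>

    /* Same three fields, two orders. With 8-byte alignment for double, the
       first layout needs padding around each char; the second packs both
       chars into the tail. */
    struct Padded    { char tag; double v; char flag; };  /* typically 24 bytes */
    struct Reordered { double v; char tag; char flag; };  /* typically 16 bytes */

    int main(void) {
        printf("%zu %zu\n", sizeof(struct Padded), sizeof(struct Reordered));
        return 0;
    }
    ```

    Unless you have millions of instances, the difference is noise - which is the comment's point about only worrying when there is real memory to be gained.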

  • @Air-wr4vv
    @Air-wr4vv9 күн бұрын

    Look into Haskell, a language that has almost nothing in common with the underlying machine architecture and is therefore highly inefficient when translated 1-to-1 to machine code. But because of compiler optimizations unique to it, it can perform as fast as C in some cases. The same goes for Racket, Erlang, OCaml, etc.

  • @dradic9452
    @dradic9452Ай бұрын

    This should be possible now, using something like a linter, an LSP (Language Server Protocol) server, or even JSDoc. You can set rules for type-checking your variables; I don't see why you couldn't limit which function can edit which variable.

  • @AndrejVakrcka
    @AndrejVakrckaАй бұрын

    OOP is not bad or good; there are just some cases where it's inefficient. For describing complex systems of objects, it's surely better than a procedural approach. Encapsulation hides the complexity, and you can nicely separate the various classes of objects. Inheritance and virtual methods simplify everything further. But all this is slower for large numbers of objects - sometimes the threshold is 10k, sometimes it's 1M; it depends on the application. The video jumps between various topics without explaining important details. The IDE part is wild; I think it makes no sense to solve these problems in a 45-year-old language. All the things you describe can be solved in newer languages (Rust, Zig). In my opinion, using AI to compile the code has serious issues, from computational complexity for large projects to variable results and the need to verify the output. You can use ChatGPT to generate whole classes, but there should be someone to check the result; usually, writing everything on your own is faster. I assume you are not a software engineer, but any video is better than none. This is what I like about this platform: there are some comments which are helpful, and in the end that helps the author and the other viewers.

  • @Kapendev
    @KapendevАй бұрын

    OOP is slow if you use it like a crazy person and follow a clean-code style of programming. In C, you might want to use some kind of OOP by having a base struct and then including it in other structs as the first field.
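
    That C pattern looks roughly like this (a minimal sketch with made-up names): because the base struct is the first member, a pointer to the derived struct also points at a valid base, so "base-class" functions work on any embedding struct.

    ```c
    #include <stdio.h>

    /* Base "class": just a struct. */
    struct Entity { float x, y; };

    /* Derived "class": the base struct MUST be the first field, so that
       &p == (struct Entity *)&p for any struct Player p. */
    struct Player {
        struct Entity base;
        int score;
    };

    /* A "method" on the base, usable with any struct that embeds it first. */
    static void entity_move(struct Entity *e, float dx, float dy) {
        e->x += dx;
        e->y += dy;
    }

    int main(void) {
        struct Player p = { {0.0f, 0.0f}, 0 };
        entity_move(&p.base, 1.0f, 2.0f);   /* or: entity_move((struct Entity *)&p, ...) */
        printf("%.0f %.0f\n", p.base.x, p.base.y);
        return 0;
    }
    ```

    This is essentially how the Linux kernel and GObject get polymorphism out of plain C, without a language-level class construct.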

  • @milasudril
    @milasudrilАй бұрын

    Object-oriented programming is much more than classes; classes provide a way of implementing certain aspects of it. The main penalty of traditional object-oriented programming is the tendency to use reference semantics and virtual functions. Reference semantics implies possible aliasing (which prevents optimizations) and cache misses. A virtual function call also acts as an optimization barrier. You are advocating for data-oriented design. This is indeed a useful pattern for some tasks, when you are interested in doing the same operation on a particular field. You can create a class that implements a structure of arrays efficiently.

  • @savyblizzard6481
    @savyblizzard6481Ай бұрын

    Even as someone with many misgivings about oop, this strikes me as bad programming. Embracing structs is fine, as are abstractions. Sugar is great, and should not be removed. It's a question of bad design patterns imo, which I acknowledge is not your target criticisms. Although frankly the performance argument is also pretty weak these days. I think rust exemplifies the ideas I'm contrasting you against best. Good language architecture matters the most, to me at least

  • @0x0michael
    @0x0michaelАй бұрын

    Someone doesn't know how to use his compilers

  • @formbi
    @formbiАй бұрын

    why put that in an IDE and not in something like a preprocessor?

  • @maxqutekerman907
    @maxqutekerman907Ай бұрын

    The Linux kernel uses a lot of OOP, just not OOP that's built into the language. One can easily write OOP in C, with just a bit of extra verbosity.

  • @WDGKuurama
    @WDGKuuramaАй бұрын

    I don't get why you would need specific things to be handled in the code compiled down to C. Isn't the whole point to write C++ and get the "benefit" of your approach? Why would you edit the compiled-to-C code directly and not the source?

  • @ronald3836

    @ronald3836

    11 күн бұрын

    Some people believe they are able to write "hand-optimized" low-level C code better than the compiler can optimize normal code. They are wrong. By writing less abstract C code, they prevent the compiler from recognising the higher-level structure of the program, which prevents many compiler optimizations.

  • @WDGKuurama

    @WDGKuurama

    11 күн бұрын

    @@ronald3836 Yes, the lack of abstraction and the imperative style make it harder to optimize, compiler-wise. Rust's "zero-cost" abstractions are a good counterexample, where method chaining on iterators gets understood and optimized.

  • @fennewald5230
    @fennewald5230Ай бұрын

    As others have laid out, all member functions would be inlined, and the kind of "type-erased memory rearrangement" described has been around since the inception of C++. It is important to remember that C++ began as a "C preprocessor". The point made, as I understand it, is this: SOA is better than AOS, and OOP makes SOA harder. First, SOA is absolutely more performant than AOS _for very specific cases_. Modern CPUs have entire scatter-gather engines built in to make single-field AOS operations more performant. For some data-intensive use cases, though, it does still make sense to opt for SOA - for example, planar video formats. However, SOA's entire problem centres around accessing multiple fields of a class simultaneously. Not only do you now need to make disparate memory accesses and fragment your CPU's data cache (remember, vector engines can do direct main-memory IO, but your CPU must go via the cache), but the context required to calculate these offsets is repeated for every single field of your struct. In practice, this means that SOA really only makes sense in niche use cases.

  • @5cover

    @5cover

    Ай бұрын

    If only there was a way to make SOA/AOS flexible. That is, to define a structure and access instances in whichever way is most performant depending on the context.

  • @fennewald5230

    @fennewald5230

    Ай бұрын

    @@5cover There is! Zig ships this feature afaik, and there are some Rust macros that let you derive new types that implement the behavior. C++ will probably never get it, because, y'know, it's C++.

  • @charliesta.abc123
    @charliesta.abc123Ай бұрын

    As a web developer who is very interested in lower level stuff I found this very very interesting and intriguing. I'm subscribed now. Is there a need for the programmer to even touch the "transpiled" C? Please pardon my ignorance if there's an obvious reason. I'm just a web developer 😅

  • @exotic-gem
    @exotic-gem 1 month ago

    I’m not sure this would be better than just throwing out object oriented programming in favor of procedural.

  • @hhvhhvcz
    @hhvhhvcz 1 month ago

    Abstractions exist for a reason, e.g. to lessen the mental burden on the programmer. What you propose is basically what is standard inside the "compiler magic" box. You could even argue that languages with bytecode such as Java already do this - you get OOP high-level code translated into pseudo-assembly, which you may tinker with and then run. The main problem, however, is that the gains are not worth it. That might sound controversial, but there's only so much; the diminishing returns start to creep in fast. Like yes, everything in JavaScript and Python, or PHP before, that's shit. Rewriting stuff where it matters to something faster would require some effort but offer 10-100x speed increases - but also the effort for devs to change stuff could be a lot larger, or sometimes not, because again, language design matters. However, going as far as rewriting something super high-level to C might offer benefits which won't outweigh the effort and mental burden. And no amount of IDE magic or AI whatever will help you. There's a reason why in programming you're supposed to choose the right tool for the right job, or you'll suffer.

  • @bigbrother1211
    @bigbrother1211 1 month ago

    I like the distinction between hardware concerns and people/organization concerns. This may have some interesting implications (that lead to the evolution of languages and ecosystems). Thanks!

  • @gueratom
    @gueratom 1 month ago

    I hate C++ with all my heart, but it is unfair to compare a C++ class with standalone C variables. If something is organized in classes, it would be organized as structs in C, so the packing problem would remain. This is over-optimization. Programming is a trade-off between speed and organization. Not everything should be directed towards speed. Moreover, you can do OOP in C. OOP is not the problem, C++ is the problem, because it's way too complicated. But OOP itself is very handy.

  • @SnakeEngine
    @SnakeEngine 25 days ago

    OOP is slow for all the other reasons, but not the ones depicted in this video, lol. The penalty of packing is negligible in real-world code, and one-liner functions can be inlined by the compiler. So yeah, this was a pretty silly video ;)

  • @essmene
    @essmene 1 month ago

    If speed is your metric - you choose assembler. If other things enter your metric - your result changes, e.g. maintainability and readability. OOP was not set out to do speed. It was set up to make very large projects _maintainable_ and to reduce hard-to-fix bugs like runaway conditions by limiting access - private variables and functions. If X can only be accessed via Y1, Y2 and Y3 and you discover a problem, you can set a breakpoint on them.

  • @dumb_ptr
    @dumb_ptr 1 month ago

    The struct/class padding is a good example of data oriented programming vs OOP. But everything else is simply something that any C++ compiler will optimize out (e.g. function inlining). As soon as you said c++ preprocessor to convert to c i realized it was bait 😂

  • @chrismcgowan3938
    @chrismcgowan3938 1 month ago

    Yes, OO programming is bad. Most people misuse it. I write embedded code and whilst classes are great, inheritance is forbidden ... new / delete is slow etc etc....

  • @thememesarealive9813
    @thememesarealive9813 1 month ago

    Definitely some food for thought. Also makes me wonder if functional programming has these shortcomings. For example, does currying 5 bytes of data actually mean currying 8 bytes under the hood? The only real issue I have is the use of AI. Kind of feels like a vscode plugin (or equivalent in another editor) could do what you're proposing. Maybe an LSP could as well? Since, at a high level, you're proposing compiling cpp to c (really a subset of c). "Just write a compiler and LSP" is no small task but neither is training an AI model to be 90% or 99% accurate. Cool video!

  • @ronald3836
    @ronald3836 11 days ago

    The shortcomings identified in the video do not actually exist. The author of the video apparently does not know about, or is afraid of using, optimization flags. Functional languages certainly tend to be inefficient. Don't use them to implement a massive physics simulation. (But do use them where it makes sense.)

  • @silicalnz
    @silicalnz 1 month ago

    You sound exactly like these pompous compsci professors I've experienced. Confidently presenting archaic viewpoints as fact. Honestly just a badly presented video. You sound both bored and angry.

  • @MonoBrawI
    @MonoBrawI 28 days ago

    I largely agree that language features intended for improving productivity and even safety or security can be abstracted to a higher level to allow compilers to focus on performance and hardware support. I believe a sensible step along the way to an optimal solution is more solutions akin to TypeScript, which adds type safety (and aggravation) to JS, which in turn can be compiled down to machine code to a large extent, by V8 for example. These multi-layer approaches do come with many drawbacks of their own of course, so careful balancing is always needed.

  • @sciencoking
    @sciencoking 1 month ago

    Average C programmer's idea of OOP (it's accurate)

  • @paul-tz7ld
    @paul-tz7ld 7 days ago

    What about safety and team projects? For example, one of the main reasons I use OOP is polymorphism. I can write my main code against an abstract class and let others write implementation classes without tinkering with my code.

  • @Marxone
    @Marxone 1 month ago

    I wonder if it would be a neat idea to actually have 4 different views on the code:
    1. the standard code we write today, regardless of language
    2. transpiled code
    3. compiled code
    4. a high-level "code description" that would be a human-like definition
    Could be fun to have the ability to modify each of the views, just to see how it could get interpreted at each level of development thinking. Even if it were just some learning tool, it could actually serve as a dynamic dictionary between product managers and coders. Any involvement of the current LLMs would probably result in some hilarious hallucinations :D

  • @kvarok1548
    @kvarok1548 1 month ago

    Why transpile an OO language to a procedural language then compile it instead of just making a better compiler?

  • @ImprobableMatter
    @ImprobableMatter 1 month ago

    I thought I had made it clear, but it seems to have been lost in the discussion: the IDE would allow the programmer to make low-level optimizations within the wiggle room that exists to do so, but would still enforce that the high level code is followed. Of course, this step is completely optional (it might only be used in 10% or even 1% of cases) and kept locked behind the "little plus sign" in the UI.

  • @something4074
    @something4074 1 month ago

    ​@@ImprobableMatter You can already just write C or assembly for those cases, and call into that, right?

  • @ImprobableMatter
    @ImprobableMatter 1 month ago

    Yes, but then you would have blocks of clear high-level code interspersed with difficult to read assembly. By contrast, in this scheme, another programmer reading the code sees seamless classes or whatever, and only needs to see the low level optimizations if they choose to do so by expanding the code.

  • @ronald3836
    @ronald3836 11 days ago

    @@ImprobableMatter Modern compilers are better at making low-level optimizations. If you try to make them yourself by writing "low-level C", you are just hiding information from the compiler that it could and would have used to find better optimizations. I am guessing the code you write has hand-written common subexpression elimination all over the place, which makes it difficult to read, hard to maintain, and results in slower code (unless you insist on -O0, which is utterly silly).

  • @spicy_wizard
    @spicy_wizard 1 month ago

    Using a class is not a very good example here; it is on par with a struct in C, which also has some memory alignment going on.

  • @MisterFanwank
    @MisterFanwank 1 month ago

    Please look at Jonathan Blow and how he talks about compilers, and then look into building compilers yourself. All of this was possible in a much nicer way decades ago.

  • @kezi___
    @kezi___ 1 month ago

    please stick to physics videos, this hardly makes any sense

  • @gawhyrghun1913
    @gawhyrghun1913 1 month ago

    @@kezi___ I sadly have to agree.

  • @nathanbanks2354
    @nathanbanks2354 1 month ago

    As a programmer I found it easy to understand, though I don't agree with his conclusion.

  • @ImprobableMatter
    @ImprobableMatter 1 month ago

    Easily the best comment on this video. It's a shame I can't pin comment replies.

  • @kezi___
    @kezi___ 1 month ago

    @@ImprobableMatter mate, I understood exactly what you meant; this is not a comprehension problem on my part. The video just doesn't make sense in a logical way. Others have explained why thoroughly; it's useless for me to repeat. You have a poor understanding of the matter at hand: you got stuck in a (completely irrelevant to OOP) AoS vs SoA debate, and for the rest of the video you are just describing an optimizing compiler without really getting it. And I hate OOP like the next guy, but these are not at all valid reasons; feel free to google "The Trillion Dollar Disaster" if you want to find more. Keep doing physics and fusion videos, those were great

  • @bbugarschi
    @bbugarschi 1 month ago

    Essentially what you're arguing for is a transpiler (à la TypeScript, CoffeeScript and other common stuff for web). Judging from experience, not a good idea :P

  • @lomiification
    @lomiification 1 month ago

    You'd like TypeScript, but this seems very arbitrary, and assumes that all the ints will be used together, rather than with the char. If you use the char with the int, then your cache will be dropping over and over again to swap between pulling the int array and the char array back and forth. The idea that structs should never be used because they have a certain pack behaviour seems kinda silly. Programmers, especially physics programmers, should understand that they aren't locked to CPUs with specific byte sizes, and that you can instead use an FPGA to set your own data sizes and have #pragma pack structs without having to write that above your struct or suffer any major performance consequences for it

  • @QIZI94
    @QIZI94 1 month ago

    I think oddly sized structs with few members will probably be faster when their members interact with each other, since they can probably fit into a single cache load; if you had those member variables in separate arrays/vectors, it would have to constantly load for each iteration

  • @ronald3836
    @ronald3836 11 days ago

    It'll depend on access patterns. If speed or memory usage is of the utmost importance, then one should indeed be careful in picking the right data structure.

  • @cristiano4949
    @cristiano4949 1 month ago

    No, that's done by the first stages of compilers

  • @milasudril
    @milasudril 1 month ago

    If you inline the getter, there is zero overhead.

  • @0x0michael
    @0x0michael 1 month ago

    You might not even need to do so explicitly, compilers do this

  • @linamishima
    @linamishima 1 month ago

    As a computer science teacher, you are no doubt familiar with the truism "programming languages exist to tell humans what the code does; the output of a compiler is what tells a computer what the code does". Whilst the performance/memory concerns you raise are true, this really is two separate issues - firstly, are programmers able to optimise at an appropriate point, and do they have appropriate skills in order to do so?

    Optimisation at the appropriate point is the crux of why some experienced developers might be reacting badly to this video - a core concept of professional practice is to avoid premature optimisation. It's better to have code that works reliably and can be understood, and only later consider optimisation. And odds are, in most cases an algorithm change will give quicker and much more effective wins than moving away from OOP. From a technical computer science perspective, the performance difference between OOP and procedural is typically O(n) (using Big O notation), and so really only worth it when you've already addressed the rest of the performance challenges.

    However, as I mentioned, there is a much wider issue here that you nearly arrived at - to properly understand what your code will do, you need to have a good understanding of how the internals of a computer and compilers work. As a cyber security professional who has trained a lot of juniors in my field, it has been a little eye-opening to discover how little most people remember about how computers work. Their degree may have covered it, but that information left their operating knowledge base as soon as the exam was done. You rightly call OOP syntactic sugar, and really every single piece of code (compiled or otherwise) is just that over the twiddling of the bits / wiggling of the electrons.

    Whilst many programmers might not need a working knowledge of how CPU caches, TLBs, paging, or OS schedulers work, any who really want to deep-dive into performance (or write specialised systems with tight constraints) really need to be familiar with all of this.

    (Oh, and on the slop discourse - it ain't in the jargon file, despite ESR calling it slop. It's been known as padding since the mid-nineties, and given it's not in the jargon file, I think it's more ESR's social group's term rather than a general one.)

  • @gronki1
    @gronki1 1 month ago

    I disagree with almost everything said in this video. OOP is not perfect and has its flaws, but I don't even feel they were addressed here.

  • @niclash
    @niclash 1 month ago

    In the same spirit: complex types (structs, unions, and such) and many primitive types on various CPU architectures are just "Development Environment Concerns". The idea has very little practical substance, regardless of whether one likes OOP or not.

  • @Dogo.R
    @Dogo.R 1 month ago

    I mean there should be the distinction between system languages and non-system languages here. But at least in non-system languages there are solutions like having a namespace difference between what you do and don't want other parties to touch.

  • @0dWHOHWb0
    @0dWHOHWb0 1 month ago

    LOL! Yeah, don't quit your day job

  • @ronald3836
    @ronald3836 11 days ago

    Congratulations. You succeeded in making the ultimate embarrassing video. I don't know where to start addressing the problems with this video, so I won't.

  • @ronald3836
    @ronald3836 11 days ago

    (And I am in no way a fan of OOP.)

  • @hamesparde9888
    @hamesparde9888 1 month ago

    What about interpreted languages? I would think that they would incur a much greater overhead.

  • @ronald3836
    @ronald3836 11 days ago

    They do. And at the same time they will often be the perfect tool to glue things together. (Or even to do a one-time computation that will take 100 seconds to complete instead of 1 second when written in C, but can be implemented in Python in 5 minutes whereas it would take 15 minutes to get it running in C.)

  • @BonsaiBurner
    @BonsaiBurner 1 month ago

    Hardware is cheap, programmers and bugs are expensive. Optimization is strictly on an as needed basis and you go low level when needed but otherwise it is like chasing the 9's in uptime - ever increasing costs for diminishing returns. Conquering this naturally in tooling is the answer. OOP's biggest benefit is the abstraction model it gives you - objects parallel real life analogies making concepts and rules of best practice easier to understand and share/enforce.

  • @Mallchad
    @Mallchad 1 month ago

    Modern OOP makes programming significantly more taxing on the programmer and more expensive to develop.

  • @peterpodgorski
    @peterpodgorski 1 month ago

    Any and all abstractions are slower. OOP is slower than C, C is slower than assembly (assuming you can beat the compiler...). That's why Rust's penalty free abstractions are such a selling point. Code with lots of abstractions (granted, Rust isn't OOP, but the idea is the same) is often faster in Rust than if you wrote the same thing with no abstractions in C. You pay for that with glacial compilation time. But that's just an initial thought - looking forward to getting to the "stupid idea to fix it" part :D

  • @gawhyrghun1913
    @gawhyrghun1913 1 month ago

    >All abstractions are slower
    >rust penalty free abstractions
    You don't see a contradiction here?

  • @CjqNslXUcM
    @CjqNslXUcM 1 month ago

    @@gawhyrghun1913 Rust's penalty is the long compile time, large binaries and that everything needs to be statically linked.

  • @gawhyrghun1913
    @gawhyrghun1913 1 month ago

    @@CjqNslXUcM so is lto in clang/gcc.

  • @Mallchad
    @Mallchad 1 month ago

    Not always. Some problems are solvable at compile time and can be completely optimized away by the compiler. You should know this, you brought up Rust penalty-free abstractions (although I don't know of any Rust abstractions personally that are actually "zero cost", but it's a nice idea).

  • @gawhyrghun1913
    @gawhyrghun1913 1 month ago

    @@Mallchad
    >rust penalty free abstractions
    >i don't know any that are actually zero cost
    This keeps getting better and better.

  • @julianwalde4810
    @julianwalde4810 29 days ago

    0ad mentioned *rattles shield and spear*