Adding ambient occlusion to my game engine [Voxel Devlog #15]

Games

Try CodeCrafters for free today: app.codecrafters.io/join?via=...
Online demo: github.com/DouglasDwyer/octo-...
In this devlog, I talk about the journey of adding ambient occlusion to my voxel programming project. Ambient occlusion is a lighting effect which darkens the corners and crevices of objects to mimic real life. I review the existing approaches to AO and go over my attempts at implementing them. Finally, I showcase the technique (called volumetric voxel ambient occlusion, or VVAO) that I invented in order to achieve coherent multi-voxel occlusion!
Music used in the video:
C418 - Clumsiness and Innovation
Lofi Geek - Aesthetic Christmas
David Cutter Music - Electronivator
Corbyn Kites - Honey

Comments: 113

  • @DouglasDwyer
    5 months ago

    Do you enjoy digging into the details of how things work while writing high-performance code? Then be sure to check out CodeCrafters using the link below, it would really help the channel out: app.codecrafters.io/join?via=DouglasDwyer They have one project which is completely free to complete during their beta, and you can begin any of their projects for free! Get 40% off if you upgrade to a paid account within three days.

  • @shinobudev
    5 months ago

    You can take your "volume based" AO further by changing the sample size of your voxels from 1 unit to a summation of several sizes. For example, you'd start with the AO you have now at its tiny 1x1x1 resolution, then add another pass on top that calculates occupancy over larger groups of voxels (16x16x16). If you do this with sizes of 1, 16, and 128, you can create macro-scale AO that still preserves small AO details.
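
A rough sketch of the multi-scale idea above, assuming a hypothetical `occupancy` helper: sample solid-voxel occupancy around a point at a few neighborhood sizes and blend the results, so that large-scale darkening and small crevice detail both contribute. The scale list and weights are illustrative, not part of the engine.

```rust
// Hypothetical multi-scale AO. `occupancy(center, half_extent)` is an assumed
// helper returning the fraction of solid voxels in a cube of side
// 2 * half_extent centered on `center`.
fn multi_scale_ao(center: [i32; 3], occupancy: impl Fn([i32; 3], i32) -> f32) -> f32 {
    // Neighborhood half-extents and blend weights (illustrative values).
    let scales = [(1, 0.5), (8, 0.3), (64, 0.2)];
    let mut darkening = 0.0;
    for (half_extent, weight) in scales {
        darkening += weight * occupancy(center, half_extent);
    }
    // Turn the accumulated occlusion into a brightness multiplier in [0, 1].
    (1.0 - darkening).clamp(0.0, 1.0)
}
```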

  • @DouglasDwyer
    5 months ago

    Interesting idea! This might help calculate the smaller shadows that should appear on paper-thin surfaces. The one downside is that you need to store all that ambient occlusion data. For storing it per-voxel you'd probably need some kind of compression scheme :)

  • @chickendoodle32
    5 months ago

    A compression scheme? Sounds like a fun video :D

  • @whoeverseesthatbehappy2722
    5 months ago

    @DouglasDwyer Generalizing this idea, I'd like to add that an iterative approach also gives users easy control over the performance/quality ratio!

  • @frozein
    5 months ago

    Very smart solution, your engine is really taking shape. Makes me want to start another engine myself lol. I also want to add that the quality of your videos has really improved since you first started uploading!

  • @DouglasDwyer
    5 months ago

    Thanks for the kind words! Yes, it is really SLOWLY taking shape haha (still jealous that you managed to fully release a game 🙂). I finally got a real microphone, so I definitely hope that the video quality is better this episode.

  • @justingifford4425
    5 months ago

    It’s amazing what a decent microphone can add.

  • @Bebeu4300
    5 months ago

    When you said "I'd like to get to 6000 subs", I just thought "What? He only has just under 6k?" You deserve more than that!

  • @Gwilo
    5 months ago

    I really do love the look of the new AO, makes the game look cleaner and more sophisticated

  • @sjoerdev
    5 months ago

    i LOVE this engine. you are one of the best voxel programmers alive atm. also your mic now sounds waaaay better. i might try implementing this ao technique myself, but i still don't fully understand how it's done. also, you can do light probes this way too - that way you can have really fast lights in your scene.

  • @DouglasDwyer
    5 months ago

    Thanks Sjoerd. Let me know on Discord if you have questions about the implementation, I would love to see it in your engine too.

  • @Conlexio
    5 months ago

    looks good! really glad you stuck to per voxel lighting

  • @SethDrebitko
    5 months ago

    So glad I found this. I have been wanting to play around with voxels in Godot, and your videos are going to be awesome to learn from.

  • @coleslater1419
    5 months ago

    I've been watching these videos since the start and have always looked forward to new uploads. It's insane what you're able to accomplish man, good work!

  • @shadamethyst1258
    5 months ago

    For reducing noise, you can use averages to reduce the variance until you reach an acceptable window around the true value (this is known as Chebyshev boosting and is a special case of Monte Carlo integration, for when you sample the input uniformly). You can also use medians, which are especially useful for removing abnormally high or low values (also known as Chernoff boosting). Also, don't forget to do all of this in linear space, or you will have bias/skewness making things funnier.

  • @DouglasDwyer
    5 months ago

    Good suggestions. I do fear that taking the median of a whole bunch of values may not be practical on the GPU, whereas taking the mean with a few atomic operations is simple. Maybe there is some parallel algorithm for finding medians that I don't know about.
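
For readers following along, here is a tiny CPU-side sketch of the two estimators discussed in this thread: averaging samples to shrink variance, and taking a median to reject outliers. It does not show the GPU-side atomic accumulation mentioned in the reply; it only illustrates the math.

```rust
/// Mean of a set of AO samples: reduces variance, but outliers still pull it.
fn mean(samples: &[f32]) -> f32 {
    samples.iter().sum::<f32>() / samples.len() as f32
}

/// Median of a set of AO samples: robust against abnormally high or low values.
fn median(samples: &[f32]) -> f32 {
    let mut sorted = samples.to_vec();
    sorted.sort_by(|a, b| a.total_cmp(b));
    sorted[sorted.len() / 2]
}
```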

  • @judgsmith
    5 months ago

    Love to see you continue your work on this. I hope you will be able to finish this.

  • @wilsonwilson3674
    3 months ago

    1:32 this is an excellent analogy that I wouldn't have thought of myself.

  • @letmedevplease
    5 months ago

    Well done! Your engine is looking good!

  • @thomasmcbride2287
    5 months ago

    Happy Holidays! Great video :)

  • @valet_noir
    5 months ago

    This is excellent!! Congratulations 🎉👏 🥳😁

  • @vitulus_
    5 months ago

    Nice stuff Douglas!

  • @etsuns
    5 months ago

    super cool as always

  • @thebumblecrag61
    5 months ago

    I love this channel!

  • @DouglasDwyer
    5 months ago

    Glad that you enjoyed the video!

  • @gmanster_ster
    5 months ago

    very well done!

  • @bargledargle7941
    5 months ago

    Amazing stuff. I've watched lots of your videos for inspiration (I'm also developing a game and running into problems, so it inspires me to find solutions) and also for the aesthetic value. I wonder how Italian architecture would look in your game (specifically my PFP - it would probably look amazing). I'd like to ask: do you think you'll be able to create some of your terrains with "smoothing" of sorts, kind of like marching cubes? Is marching cubes even feasible at the scales you've created (with this huge number of voxels)?

  • @DouglasDwyer
    5 months ago

    Regarding smoothing - it's theoretically possible, but the mesh generation times might not be fast enough for realtime use. On the other hand, smoothing might reduce the total number of faces generated and increase frame rates!

  • @bargledargle7941
    5 months ago

    @@DouglasDwyer Thanks for the answer! I didn't consider the possibility that the number of triangles would decrease.

  • @ThyTrueNightmare
    5 months ago

    Man, this looks interesting. I hope you're looking at possible licensing options in the future, because I could see myself attempting to make a game in this. I've been wanting something like a voxel engine for a game I have been dreaming of, and if possible I would like it to look as real as possible - within the limitations, of course, even if it just means the voxels are smoothed, etc. Or maybe I need to learn how to make my own engine; that would be an experience.

  • @dottedboxguy
    5 months ago

    omg i haven't watched the whole vid yet but the new mic makes the vid so much more enjoyable. Edit: finished the vid and i have to say, this algo is really cool, especially seeing the results it gives! i can't wait to see what you'll figure out for lights hah

  • @DouglasDwyer
    5 months ago

    I'm glad that you noticed the new mic! I am still working out the settings and ensuring that my voice doesn't clip. Regarding lights, I originally wanted to make a Christmas tree with multi-colored lights for this video. But the graphics rewrite is not quite there yet; I still need to make a proper scene manager :)

  • @dottedboxguy
    5 months ago

    @@DouglasDwyer wow that's amazing ! i didn't realize you were that far into the lighting engine !

  • @xx_Ashura_xx
    5 months ago

    makes me wanna make my own engine in rust now

  • @dandymcgee
    5 months ago

    It looks amazing, but I'm really curious about 12:15. Why isn't there any AO at the corner between the desk and the floor in the middle of the screen?

  • @swegdude9235
    5 months ago

    My guess is that because the desk is so thin it gets calculated as having very little occlusion

  • @DouglasDwyer
    5 months ago

    Excellent observation. This is one of the limitations to which I alluded in the video. The desk shown is hollow, so there isn't quite enough volume in that region to cause the shadows to appear. The ambient occlusion doesn't much show on paper-thin surfaces since 50% volume is required. Still, I think that the algorithm's benefits far outweigh this.
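
A minimal sketch of the volume-counting idea described in this reply, assuming a hypothetical `is_solid` lookup into the voxel data: count solid voxels in a cube around the shaded point and only start darkening once more than half of that volume is solid. The neighborhood size and falloff are illustrative, not the engine's actual parameters.

```rust
// Hypothetical volume-based AO: below 50% solid (e.g. a hollow desk) no
// occlusion is applied, which is exactly the thin-geometry limitation
// described above.
fn volumetric_ao(center: [i32; 3], is_solid: impl Fn(i32, i32, i32) -> bool) -> f32 {
    const R: i32 = 4; // 8x8x8 neighborhood around the shaded point
    let mut solid = 0u32;
    let mut total = 0u32;
    for dx in -R..R {
        for dy in -R..R {
            for dz in -R..R {
                total += 1;
                if is_solid(center[0] + dx, center[1] + dy, center[2] + dz) {
                    solid += 1;
                }
            }
        }
    }
    let fraction = solid as f32 / total as f32;
    // Remap the [0.5, 1.0] solid fraction onto [0, 1] occlusion strength.
    let occlusion = ((fraction - 0.5) * 2.0).clamp(0.0, 1.0);
    1.0 - occlusion // brightness multiplier for the shaded point
}
```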

  • @dandymcgee
    5 months ago

    @@DouglasDwyer Hmm, interesting. Thanks for taking the time to reply, and looking forward to seeing your progress!

  • @TinkerHat
    5 months ago

    pretty!

  • @FavoritoHJS
    4 months ago

    using volume does result in some issues for thin walls... maybe use each channel of an RGB triplet to mark how opaque this block is in each direction?

  • @charlieliske3603
    3 months ago

    I have a question about parallax ray marching: How do you deal with overdraw without rendering every fragment invocation? (cases where one object is occluded by another's bounding box but not the actual voxels within that bounding box)

  • @DouglasDwyer
    3 months ago

    There are two optimizations that I used to reduce overdraw. Firstly, I made sure that all of the bounding boxes were as tight as possible, to reduce the number of discarded fragments. Secondly, I used OpenGL's conservative depth feature with depth_greater, which allowed early depth optimizations to function despite the fact that I was setting depth in the fragment shader.

  • @Vextrove
    5 months ago

    You will get increasingly more subs over time

  • @DouglasDwyer
    5 months ago

    I sure hope so!

  • @kaliyuga14surfer88
    5 months ago

    great work as always! one thing i've always wanted to know is how small can we realistically make voxels? for minecraft it's one meter, so is a 1 mm voxel size possible? also MERRY CHRISTMAS DOUGLAS!! hope you are well every year!!!

  • @MichaelPohoreski
    5 months ago

    There are two types of voxel grids: fixed and adaptive. Minecraft uses a fixed sampling of 1 m, but it is possible to have an adaptive grid. Look at old _EverQuest Next_ for micro voxels.

  • @kaliyuga14surfer88
    5 months ago

    @@MichaelPohoreski thank you!

  • @SeanTBarrett
    5 months ago

    with the same technology, 1m voxels with a 1000m view distance is the same as 1mm voxels with a 1m view distance, so it's really the same question as "more voxels". in practice, developers who increase the number of displayable voxels tend to split the gains between shrinking the voxels and increasing the view distance.

  • @DouglasDwyer
    5 months ago

    The other commenters both have good insights! To add my two cents, I want to point out that voxel size depends upon the scope of your project. The way that you represent your voxels - and thus how many you can store - influences the rendering, physics, and simulations that you can do. If your only goal is rendering, then something like point clouds might allow you to achieve the highest resolution of voxels. But that data might not be amenable to collision detection, for instance. I'm trying to design my game for a scale of 1 vox = 5 cm.

  • @kaliyuga14surfer88
    5 months ago

    @@DouglasDwyer understood thank you!

  • @georgehawryluk7976
    4 months ago

    Quite impressive! I tried your demo - objects (trees), once disconnected from the world, become unusable (not interactable?). Have you got any concept of monetization? Are you planning on creating a game like Vintage Story, or are you more interested in creating the engine itself?

  • @DouglasDwyer
    4 months ago

    Yes, I haven't implemented interactions with objects yet but when I get to gameplay that will certainly happen. The way the project has evolved, I want to turn it into a platform where users can create and publish games built on the voxel technology! I would probably charge a fixed price to purchase an account (singleplayer would be free to try, multiplayer would require an account), and would probably take a cut of any user-published content (similar to how app stores work). I haven't done too much business planning yet.

  • @williamheckman4597
    2 months ago

    How do you represent the geometry for a voxel engine? For example, in polygon-based engines, it's a series of vertex points and then line descriptors... How is this stored in a voxel engine?

  • @thalesfm
    5 months ago

    You could probably get your "perfect world" version of ambient occlusion in just about the same amount of time as your approximate solution (if you could spare a bit of memory). If you used a running sum to compute the density around each voxel it wouldn't be necessary to sample the entire 8x8x8 volume each time (though it would require doing 3 passes). The downside being that you'd need to store the density value per voxel rather than per 8x8x8 cube

  • @DouglasDwyer
    5 months ago

    Great observation! Indeed, using a 3D summed-area table would allow for O(n) generation and O(1) sampling. But that would require engineering a compression scheme (since it would not be possible to store that much voxel data in memory) and also would be slower to generate (the current scheme is very quick since I accelerate the counting with voxel octrees). The current scheme yields convincing results for me and uses next to no memory or extra runtime.
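
For the curious, here is a rough sketch of the 3D summed-area table ("running sum") approach discussed in this exchange: one O(n) prefix-sum pass over a dense occupancy grid, after which the number of solid voxels in any axis-aligned box is an O(1) inclusion-exclusion query. It assumes a dense grid fits in memory; as the reply notes, a real engine would need a compression scheme on top of this.

```rust
/// Builds prefix sums padded by one zero layer per axis; entry (x, y, z)
/// holds the number of solid voxels with coordinates strictly below (x, y, z).
fn build_prefix_sums(
    (nx, ny, nz): (usize, usize, usize),
    is_solid: impl Fn(usize, usize, usize) -> bool,
) -> Vec<i64> {
    let idx = move |x: usize, y: usize, z: usize| (z * (ny + 1) + y) * (nx + 1) + x;
    let mut t = vec![0i64; (nx + 1) * (ny + 1) * (nz + 1)];
    for z in 1..=nz {
        for y in 1..=ny {
            for x in 1..=nx {
                // 3D inclusion-exclusion over the already-computed neighbors.
                let v = is_solid(x - 1, y - 1, z - 1) as i64
                    + t[idx(x - 1, y, z)] + t[idx(x, y - 1, z)] + t[idx(x, y, z - 1)]
                    - t[idx(x - 1, y - 1, z)] - t[idx(x - 1, y, z - 1)] - t[idx(x, y - 1, z - 1)]
                    + t[idx(x - 1, y - 1, z - 1)];
                t[idx(x, y, z)] = v;
            }
        }
    }
    t
}

/// Number of solid voxels with coordinates in [lo, hi) on each axis.
fn box_count(t: &[i64], (nx, ny, _nz): (usize, usize, usize), lo: [usize; 3], hi: [usize; 3]) -> i64 {
    let idx = |x: usize, y: usize, z: usize| (z * (ny + 1) + y) * (nx + 1) + x;
    let ([x0, y0, z0], [x1, y1, z1]) = (lo, hi);
    t[idx(x1, y1, z1)] - t[idx(x0, y1, z1)] - t[idx(x1, y0, z1)] - t[idx(x1, y1, z0)]
        + t[idx(x0, y0, z1)] + t[idx(x0, y1, z0)] + t[idx(x1, y0, z0)] - t[idx(x0, y0, z0)]
}
```

With such a table, an 8x8x8 occupancy query becomes a single `box_count` call divided by 512, without touching the voxel data itself.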

  • @BentoGambin
    5 months ago

    Genius

  • @iZulach
    5 months ago

    Did you think about using Vulkan before setting your mind on WebGPU? If so, I would really like to hear the decision process or what the deal breakers were, etc. Edit - also, what about the fact that most browsers (especially on mobile) don't support WebGPU yet? Doesn't that make you worry?

  • @DouglasDwyer
    5 months ago

    Good questions. I chose WebGPU primarily because it is the only API (aside from WebGL) supported in browsers, and I want my game to be web-compatible. Using Vulkan would restrict me to native platforms. The WebGPU project also has a very nice Rust library for it, which is idiomatic and easy to use. I'm not too worried about browsers not supporting WebGPU now that it ships in desktop Chrome/Edge/Opera (which hold the vast majority of desktop market share). It's good to be ahead of the curve, and the technology should be mature by the time that I am finished with this project.

  • @olliveraira6122
    20 days ago

    12:50 How are you getting a rust function like that to execute on the GPU? Do you do all your GPU programming like this, or do you mix in some HLSL/GLSL?

  • @DouglasDwyer
    20 days ago

    I think you may have the incorrect timestamp, but I use WebGPU as my graphics API and WGSL as my shading language. It may look a little similar to Rust, but it's separate :)

  • @olliveraira6122
    20 days ago

    @@DouglasDwyer oooo okay, never heard about WGSL before :) Timestamp was supposed to be 11:50

  • @MultiSdgsg
    5 months ago

    Are you still doing the parallax marching technique? When you finish the transition from OpenGL to WebGPU (I swore you were using Vulkan, guess I'm crazy), it would be cool to get an overview of all the rendering tricks you're currently using and what your pipeline looks like. Or even better, the source code.

  • @DouglasDwyer
    5 months ago

    My current engine uses greedy meshing (it has since devlog #9). It's similar in efficiency and makes it easier to support transparency. Kind of disappointing that parallax rendering didn't outperform greedy meshing by all that much in practice - GPUs are just too good at churning through lots of triangles, even iGPUs! Maybe I can give a quick pipeline overview in the next devlog since things have evolved, but other than greedy meshing most of the tricks are the same as detailed in devlog #7: like using many kinds of face culling and baking shadowmaps whenever they change :)

  • @stormyy_ow
    5 months ago

    on the topic of per-voxel lighting, i was wondering if you had an idea on how to approach per-voxel fog and volumetric lighting. this has been a problem i've been thinking about for my own engine constantly. i'm not at that point of implementation yet, but i have a few ideas, none of them great. the main issue for me is clogging up the multi-level dda optimizations, since i'm using full raymarching for rendering - it seems like a huge performance hit. doing it off the voxel grid is very simple (similar to how AO is very simple off voxel grids), but i don't want to violate the grid for any reason, since that's the gimmick of my engine

  • @DouglasDwyer
    5 months ago

    I haven't studied per-voxel fog yet, but from what I understand it works something like this: for each step along the ray, sample a fog density function at the ray position and add that value to an accumulator. After reaching the end of the ray, return the accumulator. So you're essentially integrating the fog density function along the ray axis. One idea that I have here is to try and choose a fog function that is analytically integrable. Say your fog density function was just a sum of sines and cosines, weighted so that the above algorithm looks convincing. Then you could just replace the accumulation loop with the integral of your fog density function, which should be much faster. Does this approach spark any ideas for you?
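
A small illustration of the two approaches in this reply, using a made-up sinusoidal density function: numerically accumulating fog density along the ray, versus replacing the loop with the closed-form integral when the density is analytically integrable.

```rust
// Illustrative fog density along the ray parameter t: a constant term plus a
// sine wave. Any analytically integrable function would work the same way.
fn fog_density(t: f32) -> f32 {
    0.5 + 0.25 * (0.1 * t).sin()
}

/// Numerical accumulation: sample the density at fixed steps and sum.
fn fog_numeric(ray_length: f32, step: f32) -> f32 {
    let mut accum = 0.0;
    let mut t = 0.0;
    while t < ray_length {
        accum += fog_density(t) * step; // density times segment length
        t += step;
    }
    accum
}

/// Analytic version: the integral of 0.5 + 0.25*sin(0.1*t) from 0 to L,
/// replacing the loop above with a single closed-form expression.
fn fog_analytic(ray_length: f32) -> f32 {
    0.5 * ray_length + 2.5 * (1.0 - (0.1 * ray_length).cos())
}
```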

  • @stormyy_ow
    5 months ago

    wondering if you have any thoughts on the switch to wgsl? i like how explicit it is, makes things less confusing and error prone. translating shaders from shadertoy has been a bit of a pain for me though

  • @stormyy_ow
    5 months ago

    i’ve been working on a wgsl brickmap dda raytracer. setting up the writable storage buffer has been surprisingly satisfying

  • @DouglasDwyer
    5 months ago

    Wonderful question, I'm glad you asked. As a solid Rustacean I like WGSL's syntax, and I like the fact that there is only one standard for WGSL (as opposed to the many versions of the GLSL standard) which means that compilation should be more consistent across devices. I will add that I am a bit disappointed that WGSL exists in the first place, though. If W3C had been able to adopt SPIR-V as the web standard for shaders, we would have gotten a bytecode format which is better suited to runtime distribution.

  • @NoVIcE_Source
    5 months ago

    interesting

  • @RAndrewNeal
    5 months ago

    Looking good. But have you tried it on hollow objects?

  • @DouglasDwyer
    5 months ago

    Good observation about the hollow objects. This is one of the limitations to which I alluded in the video. The ambient occlusion doesn't much show on paper-thin surfaces since 50% volume is required. But this only ever results in lack of shadows (never incorrect shadows). As such, I think that the algorithm's benefits far outweigh this drawback. In addition, another commenter has suggested improvements which could result in better shadows on hollow objects, which I will be trying shortly.

  • @RAndrewNeal
    5 months ago

    @@DouglasDwyer Awesome, glad to hear it's not such an issue! My thoughts were mostly along the lines of things like the corners of walls in a house-type structure; just wondering if such cases had been tested for yet. Love to see how it's coming along in every new video!

  • @yugene-lee
    5 months ago

    Your method for generating AO may also have the side effect of being able to generate highlights. For example, let's return to the more intensive sphere model you were using. If the voxels formed a shape with over 180 degrees, say like the edges of a pyramid as opposed to a corner, you would have more than 50% air within the sphere radius. In this way, the edge can be considered sharper and therefore may have highlights. While I'm not sure this would apply well in implementation, what you therefore have is highlighted edges, or an effect akin to Fresnel. Just something to consider in your quest to create better lighting and maybe even materials.

  • @DouglasDwyer
    5 months ago

    Agreed, the approach could also work the other way around - finding convex edges rather than concave ones. I don't know how highlights would fit into the aesthetic that I currently envision, but maybe it could be a customizable material setting :)

  • @justingifford4425
    5 months ago

    “I’m switching from a mature, proven API to a new really cool one!” Sounds like a JavaScript dev lol. JK though this stuff is awesome and I look forward to the next video!

  • @quantumdeveloper2733
    5 months ago

    Really interesting approach. Have you tried different intensities? For my taste, the effect is a bit too weak. I have actually had some experience with 3D-texture-based ambient occlusion (although on a per-voxel scale), and I would propose a different sampling technique. Instead of sampling at the voxel position, I would sample the position offset by 8 times the normal vector. Then, instead of setting everything to 0 below the 50% mark (which doesn't capture AO on thin walls and small poles), you could capture the full spectrum of values. Also, what are your plans/ideas for lighting? Normal floodfill lighting isn't going to work, and normal point lights could easily get expensive when the player has the ability to place many light sources.

  • @DouglasDwyer
    5 months ago

    Thanks for watching, Quantum. I did play with the intensity and the blending function to find an intensity that I liked. Maybe I can make it a user-configurable value to an extent. Your suggestion about sampling at an offset position is very intriguing! I am going to try it and let you know how it looks. My one worry is that moving 8 voxels away along the normal direction doesn't capture locality as well - that is, if you have two adjacent voxels with different normals, they could have very different AO values and the results would look weird. But it's worth an attempt for sure!
    Regarding lighting, I'm going to first attempt the simple approach: normal point lights. I find that usually the simplest thing is the best thing to try first. To mitigate the cost (because they can get expensive) I'm planning a couple of optimizations:
    - Separating the scene into static (the terrain) and dynamic (the entities) components, and baking the lighting for the static part, so that I only re-render it when it changes.
    - Maintaining a data structure to determine when entities have moved or the scene has changed, and only re-rendering the lights which are affected by changes each frame.
    - Merging light sources which are very close together to ensure that there are no more than a constant number (say 16) of lights per chunk (see the sketch below).
    Let me know if you have any other ideas!
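
A minimal sketch of the light-merging idea in the list above, with an assumed `Light` struct and an illustrative per-chunk budget; the engine's actual data layout and merge heuristic may differ. It greedily merges the closest pair of lights until the count fits the budget.

```rust
#[derive(Clone, Copy)]
struct Light {
    position: [f32; 3],
    intensity: f32,
}

/// Greedily merges the closest pair of lights until at most `max_lights`
/// remain, blending positions by intensity.
fn merge_lights(mut lights: Vec<Light>, max_lights: usize) -> Vec<Light> {
    fn dist_sq(a: [f32; 3], b: [f32; 3]) -> f32 {
        (0..3).map(|i| (a[i] - b[i]).powi(2)).sum()
    }
    let max_lights = max_lights.max(1);
    while lights.len() > max_lights {
        // Find the closest pair of lights (fine for small per-chunk counts).
        let (mut best, mut best_d) = ((0, 1), f32::INFINITY);
        for i in 0..lights.len() {
            for j in (i + 1)..lights.len() {
                let d = dist_sq(lights[i].position, lights[j].position);
                if d < best_d {
                    best = (i, j);
                    best_d = d;
                }
            }
        }
        // Replace the pair with a single intensity-weighted light.
        let (a, b) = (lights[best.0], lights[best.1]);
        let total = a.intensity + b.intensity;
        let blend = |i: usize| (a.position[i] * a.intensity + b.position[i] * b.intensity) / total;
        lights[best.0] = Light { position: [blend(0), blend(1), blend(2)], intensity: total };
        lights.swap_remove(best.1);
    }
    lights
}
```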

  • @quantumdeveloper2733
    5 months ago

    @@DouglasDwyer
    > "if you have two adjacent voxels with different normals, they could have very different AO values and the results would look weird."
    On the contrary - normals matter for AO. If you for example have a small crack in your terrain, then the pixels inside the crack should naturally have more occlusion than the pixels that are facing the open air. But you have to try it out; it might not work everywhere. Also keep in mind that you can always use a mixed approach, offsetting maybe by 4× the normal and cutting off at 20-ish%.
    > "Separating the scene into static (the terrain) and dynamic (the entities) components, and baking the lighting for the static part, so that I only re-render it when it changes."
    Not sure how well that works, but you could maybe use 2 shadow maps per light source, one for static geometry and one for entities. Then you could pregenerate the static one with a good amount of detail, and for the entities you could just use a tiny shadow map on low-end hardware, maybe 16×16×6 or something like that. Then you only need to update a small shadow map each frame. Another thing to consider is whether to use deferred rendering. It's usually a lot faster for point lights, but especially on integrated GPUs I found that the memory bandwidth requirement of handling many, often large framebuffers like this is quite significant. On my Ryzen 3200G, for example, it cost 1.5 ms to read a 1440p screen of 4×16-bit floats. Also, deferred rendering just doesn't work for transparency, so the game might still lag in transparent scenes.
    > Let me know if you have any other ideas!
    Generally I have thought about a bunch of things that could make floodfill more efficient. For example, you could do floodfill on some coarse graph of the air volume. Not sure how to generate such a graph, though. After all, you would want light to propagate through tight tunnels, while no light should tunnel through walls. And the biggest open question is of course how to map the light info from the graph back onto the terrain. Every voxel face would need to store the nearest 3-4 nodes of the graph, but how do you interpolate these while keeping the result smooth?

  • @theodorlundqvist8174
    5 months ago

    Hi! Have you switched to meshing instead of parallax rendering?

  • @DouglasDwyer
    5 months ago

    Yes, my current engine uses greedy meshing. It's similar in efficiency and makes it easier to support transparency. Kind of disappointing that parallax rendering didn't outperform greedy meshing by all that much in practice - GPUs are just too good at churning through lots of triangles, even iGPUs!

  • @theodorlundqvist8174
    4 months ago

    @@DouglasDwyer Ok! Thanks for the response. Too bad really... Is it heavy to build the mesh?

  • @nou5440
    5 months ago

    won't thin walls and stuff confuse it and give bad shadows, though?

  • @DouglasDwyer
    5 months ago

    Excellent observation. This is one of the limitations to which I alluded in the video. The ambient occlusion doesn't much show on paper-thin surfaces since 50% volume is required. But this only ever results in lack of shadows (never incorrect shadows). As such, I think that the algorithm's benefits far outweigh this drawback.

  • @diadetediotedio6918
    5 months ago

    Wait, you weren't using raymarching for drawing the voxels?

  • @DouglasDwyer
    5 months ago

    Yes, my current engine uses greedy meshing (it has since devlog #9). It's similar in efficiency and makes it easier to support transparency. Kind of disappointing that parallax rendering didn't outperform greedy meshing by all that much in practice - GPUs are just too good at churning through lots of triangles, even iGPUs!

  • @diadetediotedio6918
    5 months ago

    @@DouglasDwyer Oh, interesting to know - I think I missed this devlog. I was thinking that meshing in general with these little chunks would lose compared to ray marching, especially in more complex scenes where greedy meshing would not be so useful, but it is an interesting approach nonetheless.

  • @NeoShameMan
    5 months ago

    Basically light volume

  • @EndroEndro
    5 months ago

    It would be nice if the online demo got an update and a multiplayer fix - it shows me errors. Oh, and the ability to look 90 degrees down; it's hard to build pillars (those basic controls as well, of course, for testing purposes xd)

  • @DouglasDwyer
    5 months ago

    Thank you for the feedback on camera controls. Like I said, the graphics system is being completely rewritten and so basic functionality (like loading the world and rendering entities) is still incomplete. As soon as this functionality is done, I will be able to release an updated demo! As for the broken multiplayer... well, I just haven't been maintaining the server. In fact, I think that Oracle Cloud might have deleted my server instance due to inactivity lol. I will put it back up when I do the next demo update.

  • @fabien7123
    5 months ago

    Essentially your solution was to use light probes

  • @DouglasDwyer
    5 months ago

    I'm not very familiar with how light probes in Unity and other engines work. If you have some resources or a specific instance you were thinking about, I'd love to see it! But yes, I am dynamically calculating some data about the scene which gets baked into a texture.

  • @NeoShameMan
    5 months ago

    More like a light volume, i.e. a 3D array of light data; probes are directional data, generally cubemaps or spherical harmonics coefficients.

  • @streetware-games
    5 months ago

    Greedy meshing on the gpu sounds like a nightmare to implement. GG!

  • @DouglasDwyer
    5 months ago

    To be clear, the greedy meshing happens on the CPU. Then, the data is uploaded to the GPU. I use sparse voxel octrees on the CPU to represent things, and SVOs aren't really efficient for GPU algorithms. But I can very efficiently mesh my SVOs on the CPU side, so that's what I do :)
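
For anyone wondering what a CPU-side representation along these lines can look like, here is a minimal, hypothetical sketch of a sparse voxel octree node (not the engine's actual data structures): uniform regions collapse into a single leaf, and only mixed regions keep eight children.

```rust
#[derive(Clone, Copy, PartialEq)]
enum Material {
    Air,
    Solid(u16), // e.g. an index into a material palette
}

enum OctreeNode {
    /// An entire cube filled with one material (possibly air).
    Uniform(Material),
    /// A cube subdivided into eight equal octants.
    Branch(Box<[OctreeNode; 8]>),
}

impl OctreeNode {
    /// Looks up the material at local coordinates within a node whose side
    /// length is `size` (a power of two; branches are assumed to have size >= 2).
    fn sample(&self, x: u32, y: u32, z: u32, size: u32) -> Material {
        match self {
            OctreeNode::Uniform(m) => *m,
            OctreeNode::Branch(children) => {
                let half = size / 2;
                // Pick the octant from the high bit of each coordinate.
                let octant = ((x >= half) as usize)
                    | (((y >= half) as usize) << 1)
                    | (((z >= half) as usize) << 2);
                children[octant].sample(x % half, y % half, z % half, half)
            }
        }
    }
}
```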

  • @GhostlyOnion
    5 months ago

    I wouldn't mind testing this game for free to be honest. I have time on my hands

  • @DouglasDwyer
    5 months ago

    Thanks for the interest! There is a tech demo available to try on GitHub, but it is a bit outdated. Once I am done with the graphics rewrite, I will be posting a new version of the demo. At that point, hopefully I can start adding some gameplay :)

  • @user-gm6qf1ph4n
    5 months ago

    1:10 - Am I living in a simulation, or is it just not a thing in my apartment? I mean, besides small gaps between the walls and ceiling tiles (yes, I have a tiled ceiling, don't judge me), I can't even see it in a darker room with very dim light. Are my eyes a crime against videogames now? I mean, I have a long-standing grudge against AAA designers who like to spam shadows everywhere, to the point where I can't see a thing in a sufficiently lit pseudo-dark place.

  • @DouglasDwyer
    5 months ago

    Haha the intensity of ambient occlusion and whether it occurs depends upon how your room is lit. It doesn't always happen in real life, but AAA designers (and myself) use it in order to achieve the goals shown in the video: to highlight detail and help users perceive the relative locations of objects :)

  • @SnakeEngine
    5 months ago

    Nice :) But why is OpenGL trash?

  • @DouglasDwyer
    5 months ago

    Haha. I was exaggerating a bit in the video - OpenGL remains useful for learning and for quick, small projects. However, it has a number of problems (all stemming from the fact that it is a legacy API from the early 2000s):
    - The API is state-machine based rather than object-based, making it difficult to write clean code (since state-changing calls have side effects) and making multithreading impossible.
    - Things like resource synchronization and buffer usages are implicit, meaning that it is harder to use the OpenGL API efficiently.
    - OpenGL error handling (in my experience) is unhelpful and hard to use. Vulkan has validation layers which tell you instantly when you've made a mistake, but if you do something wrong in OpenGL, oftentimes all that happens is... nothing. Maybe glGetError returns an enum, but there is no textual description of the problem, and the program continues to run, just not doing what you want.
    Modern graphics APIs solve all of these problems. Vulkan, WebGPU, and the like use an object-oriented API to ensure that side effects don't happen and multithreading is possible. Synchronization, object creation, and binding are more explicit, which allows the API to be used efficiently. Plus, the error handling is more user-friendly. Which is why I prefer the more modern APIs :)

  • @SnakeEngine
    5 months ago

    @@DouglasDwyer Ok, but have you run into limits with OpenGL? Do you benefit from multithreaded rendering?

  • @DouglasDwyer
    5 months ago

    @@SnakeEngine yes, I've run into the issues that I mentioned above. I've had problems with lag due to synchronous buffer and texture uploads, and glBlitFramebuffer causing pipeline flushes. I've had my code break because changing OpenGL state in one place affects it everywhere else. I've had problems with inconsistencies between native and web GL because the specs are different and OpenGL gets second class support next to the newer APIs. Most importantly, with OpenGL ES 3.0, I haven't had access to fancy new features like compute shaders. Switching to WebGPU for graphics has been a breath of fresh air :)
