ALL Software Development Is Incremental

Science & Technology

Kent Beck talks to Dave about incremental software development and how building software this way enables software engineers to receive fast feedback on their work. That fast feedback is one of the keys to building better software faster.
This is a clip taken from Kent's full appearance, which you can see HERE ➡️ • Kent Beck On The FIRST...
___________________________________________
🙏The Engineering Room series is SPONSORED BY EQUAL EXPERTS
Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ bit.ly/3ASy8n0
___________________________________________
#softwareengineer #developerproductivity

Comments: 31

  • @trappedcat3615
    3 months ago

    This only works in a world with no gatekeepers, whether it is the absent reviewers or the awful tests preventing any changes.

  • @gabrielpauna62
    3 months ago

    Let's not forget the client, who often doesn't understand you can deploy at any time

  • @marcotroster8247
    3 months ago

    @@gabrielpauna62 Clients are so confused when they have flexibility 😂

  • @juliansegura5507
    28 days ago

    Or management wanting to follow a "procedure" for deployment.... GOD...

  • @marcotroster8247
    28 days ago

    @@juliansegura5507 Nothing wrong with that when you script it.

  • @juliansegura5507
    28 days ago

    @@marcotroster8247 It is when the procedure involves multiple authorization steps and can last weeks.

  • @horsethi3f
    3 months ago

    Could you talk about the recent spike in tech layoffs?

  • @HemalVarambhia
    3 months ago

    Hi, Dave. I am sorry I keep asking this: are you closer to interviewing Tim Mackinnon?

  • @dennistucker1153
    3 months ago

    I think this video would have benefitted from some context in the beginning.

  • @esra_erimez
    3 months ago

    What are your thoughts regarding "squashing" commits?

  • @IronCandyNotes
    3 months ago

    People who want a beautiful commit history are dangerous.

  • @rothbardfreedom
    3 months ago

    A bigger commit is riskier to deploy than a smaller one (not solely in lines of code, but more holistically).

  • @jorenchik8731
    3 months ago

    @@IronCandyNotes Can you please explain why?

  • @trappedcat3615
    3 months ago

    If you branch from a PR that gets squashed, your commit history is then wrecked with it.

  • @trappedcat3615
    3 months ago

    Squashing is not terrible. The history is still available.

  • @pauligrossinoz
    3 months ago

    In the context of *_fixing technical debt_* - where poor structure has persisted in the code for a long time - the ideal of _"must be minute by minute deliverable"_ fails. Fixing technical debt can easily be several hours of coding, and often takes days and days if the debt has become very deep. But, yeah, in the context of well structured code that is well understood by all the developers, moving forward this fast is possible in theory.

  • @gabrielpauna62
    3 months ago

    Can't you break it down into modular updates: first fix basic debt, then architecture debt?

  • @pauligrossinoz
    3 months ago

    @@gabrielpauna62 - you can fix it however you please, but some ways might just take a very long time. Technical debt is always complicated both to analyse and to fix, and additionally burdening the fix so that the system must still work exactly the same at each step just makes it far more complicated than it needs to be. Technical debt usually turns the system's internal structure into the proverbial "ball of mud": an impenetrable mess. To make an analogy, it's like trying to change the angle of an engine's pistons while the engine is still running. That's just too hard. It's much quicker to stop the engine, make the change, and restart it than to leave it running.

  • @Fanmade1b
    3 months ago

    @@pauligrossinoz I think it's not easy, but doable. Of course, the worse the legacy code is, the harder it gets, but you should be able to refactor it in very small steps. One of the first things to understand is what is valuable. Just today I worked on a piece of code where I added one small method to add a property to a construct. In that same file, I also saw a piece of code that had been growing for a while and was nested three levels deep. So I quickly refactored it to use early returns, collapsing the structure from three levels to one. It was a very small change that did not alter any functionality, but it was already valuable. And if the tests are properly written, you are pretty much guaranteed that everything still works.

    Of course it takes hours to implement a "big bang" refactoring. But you are already working incrementally; the trick is to keep the code in an executable state and commit often. What happens to me a lot is big "switch" statements that need refactoring. Most times, these end up as some version of the strategy pattern. You can do this in one large step, or in a few small ones: first write small tests for whatever happens in that switch, adding one test after another until most of the code (and all of the most relevant parts) is covered. Then extract one of the code blocks within a case into a method (if the separate cases contain a lot of logic). Then extract the next case. Continue this and find the overlaps in your new methods, their parameters, and what they affect or return. Maybe create interfaces and/or DTOs for their input and output, depending on what fits best. Then create some kind of central handler for that logic, create a class for each case, register them with the handler, and finally replace the switch with the handler. In theory, if you do everything right, you can deploy at least once between each of those steps.

    It is not trivial, but it gets easier the more you do it, and it helps in a lot of ways. Just don't think a change has to be fully finished before it is worthwhile to push; just make sure the code still works when you're only half finished, and don't try to fix everything at once. I have been working towards this for quite a while, and even though I am far from an expert on the subject, I have improved a lot, and I am getting a lot of positive feedback and seeing others adopt this methodology as well :)
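The switch-to-strategy sequence this commenter describes might look roughly like the following Python sketch. All names here (`handle_event`, `HANDLERS`, the `"create"`/`"delete"` kinds) are invented for illustration; the point is that the before and after versions behave identically, so every intermediate commit stays deployable.

```python
from typing import Callable, Dict

# Starting point: the kind of dispatch block the comment describes,
# kept here for comparison.
def handle_event_v1(kind: str, payload: dict) -> str:
    if kind == "create":
        return f"created {payload['name']}"
    elif kind == "delete":
        return f"deleted {payload['name']}"
    else:
        raise ValueError(f"unknown kind: {kind}")

# End state after the small steps: each branch is its own handler,
# registered centrally, and the switch becomes a registry lookup.
HANDLERS: Dict[str, Callable[[dict], str]] = {}

def register(kind: str):
    def wrap(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        HANDLERS[kind] = fn
        return fn
    return wrap

@register("create")
def handle_create(payload: dict) -> str:
    return f"created {payload['name']}"

@register("delete")
def handle_delete(payload: dict) -> str:
    return f"deleted {payload['name']}"

def handle_event(kind: str, payload: dict) -> str:
    if kind not in HANDLERS:
        raise ValueError(f"unknown kind: {kind}")
    return HANDLERS[kind](payload)
```

Between those two states, each extraction (one case into one function, one function into the registry) is a tiny commit that leaves both entry points passing the same tests.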

  • @pauligrossinoz
    3 months ago

    @@Fanmade1b - you are missing my whole point. What _can_ be done and what is a _good idea_ to do can be completely different. To insist that the system be minute-by-minute deliverable while trying to fix a ball-of-mud structure problem means you would very likely sacrifice a massive amount of time and money at the altar of this minute-by-minute ideology. It's absurd on its face to choose that ... but if you think it's a good ideology, I'm not going to stop you! 🙄

  • @Fanmade1b
    3 months ago

    @@pauligrossinoz Well, what's your counterpoint? So far you have only said that if the code is really bad, this method may not work. I gave an example of something I face regularly, and so far it has worked far better than the alternative, where big refactorings created a lot of issues because they introduced too many changes at once, which resulted in bugs landing on the production environment. I actually faced three of those bugs in one system this evening, because too many changes had been deployed together. Large changes are hard to review, often result in conflicts, and lead to a higher chance of bugs, which in turn can be harder to debug, because you can't say something like "it has to be one of the five files we changed in the last hour". I don't know you, and you may have more experience than me, but what the people in the video (who probably have more coding experience than the two of us together ^^) say has worked really well for me. I have worked on several very different projects, and I can't think of a single incident within the last ten years where I would have preferred a larger change over multiple smaller ones. Apart from completely throwing away a few systems and rebuilding them on a green field, of course ^^

  • @chrisdams
    3 months ago

    Hmmm... I think there has to be some optimum here. Working in increments that are too big certainly makes things difficult, but I think it is also possible to work in increments that are too small. After reading Kent Beck's Test-Driven Development I tried to go to the extreme and keep my steps as small as possible in a personal project, but I found that it took more time than taking bigger steps would have. Say you want to do a refactor that touches several places in the code. I could do it in one step, or I could find a way to do it first in one place and then in the others, running the tests in between. The problem with the latter approach is that you have to find a way to support both approaches in the interim. While that may be possible, it may also be that the cost is higher than the benefit.
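The "support both approaches in the interim" this comment weighs is commonly called parallel change, or expand/contract. A minimal sketch of that interim state, with invented names (`area`, `area_of`), showing the cost the comment points at: the old entry point survives as a wrapper until every caller has migrated.

```python
# Interim state of a parallel change: the old signature is kept as a thin
# wrapper over the new one, so call sites can migrate one at a time while
# the tests stay green at every commit.
def area(width: float, height: float) -> float:
    # Old signature, kept temporarily for un-migrated callers.
    return area_of((width, height))

def area_of(dimensions: tuple) -> float:
    # New signature; once every caller uses it, the wrapper above is deleted.
    width, height = dimensions
    return width * height
```

Whether carrying that wrapper is worth it is exactly the trade-off the comment describes: for a two-caller refactor it may cost more than a single bigger step.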

  • @AnonymousAccount514
    3 months ago

    Is it me or is he nervous around Kent Beck?
