NDC Conferences aims to find the greatest minds and leaders in the field of software development, striving to make the best and most updated knowledge available to developers everywhere.
Join these innovative speakers on our KZread channel and at our year-round conferences in Oslo, London, Porto, Copenhagen and Sydney!
We hope to see you in person at our live events:
ndcoslo.com/
ndclondon.com/
ndcporto.com/
cphdevfest.com/
ndcsydney.com/
Comments
Good talk aside from the lazy Marxist analysis at the start
"Capitalism only works when the output of labour is sold for more than its cost" ... um. This is just business not "capitalism". It's what all businesses, organisations, humans and lifeforms do: they create a "profit" by using energy to reduce local entropy at the cost of global entropy. What is this utopia you dream of in which business or even humans operate at a loss?
OK but what *font* is that in his editor? so clean...
Treating warnings as errors is a headache during the development phase. In this phase you don't need perfect code; you want to experiment and find answers as fast as possible, which involves having imperfect code that we're not ready to commit yet but might still want to debug or test. So when do we need to have perfect code? Easy: when we're ready to merge our changes. We can add a flag to the build step of our CI process, `dotnet build -warnaserror`. That way we don't add speed bumps during the development phase, but we ensure we don't merge dirty warnings into our code base.
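As a sketch of what that could look like (assuming a GitHub Actions pipeline; the workflow layout here is illustrative, only the `-warnaserror` flag comes from the comment), the flag lives solely in the CI build step, so local `dotnet build` stays permissive:

```yaml
# Hypothetical CI workflow: warnings are fatal only at the merge gate,
# never during local development builds.
name: ci
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      # The strict flag is applied here and nowhere else.
      - run: dotnet build -warnaserror
```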
💜💜
Sorry, couldn't make it to more than 1/3 of that... You should have practiced watching yourself first before standing up in front of people and the camera.
Actually good talk 🙂
IDL (of CORBA) keeps getting re-invented in myriad interesting ways from gRPC to WIT 😀
27:53 injecting variables into SQL is not a security vulnerability if it is properly escaped, which it appears to be. So what is the vulnerability?
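For readers wondering what the risk is when escaping is forgotten: a toy illustration (in Python with sqlite3, not the code from the talk) showing why parameter binding is the safer default than escaping by hand:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Hostile input: only a problem if it is spliced unescaped into the SQL text.
user_input = "alice' OR '1'='1"

# String interpolation without escaping: the payload rewrites the query logic.
injected = conn.execute(
    f"SELECT count(*) FROM users WHERE name = '{user_input}'"
).fetchone()[0]

# Parameter binding: the input is treated purely as data, never as SQL.
bound = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(injected, bound)  # 1 0 - the interpolated query matched every row
```

Proper escaping does neutralize this, as the commenter says; binding just makes it impossible to forget.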
Topic "enable" vs. "true": regarding "nullable", you can enable/disable it with a preprocessor directive in a file, and you can "restore" it. So the value of "nullable" is more an enum than a bool. I prefer enums over bools for clarity and readability.
IaC and App in the same repo presents permissions problems. A dev may not be allowed to change infrastructure resources, such as firewall rules, replica counts, ingresses, etc.
Then you have an organizational problem with devops if that is the case
@Mig440 Not really.
@josefromspace So who, then, is responsible? An ops guy who knows nothing about the app in question and is thus ill-equipped to handle any issues that a developer would have spotted immediately?
TL;DR: use password generators and store the passwords somewhere secure, because the big danger is how easily weak ones get cracked.
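A minimal sketch of that advice using Python's standard `secrets` module (the function name and parameters are my own, not from the talk); the point is that length times alphabet size puts the entropy far beyond brute-force range:

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length: int = 20) -> str:
    """Pick each character with a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
# Entropy in bits = length * log2(alphabet size); roughly 131 bits here,
# versus the ~30-40 bits of a typical human-chosen password.
entropy_bits = len(password) * math.log2(len(ALPHABET))
print(len(password), round(entropy_bits))
```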
"When it comes to cloud programming, history is on the side of functional programming. I'm sorry. OOP is not made for that" Man... that hurts.
So, you're saying we should use Serilog because of maintainability? Are you seriously claiming that using NLog will make .NET apps unmaintainable?
Tbh, much of this advice reminds me of bad advice on LinkedIn: "do this instead of that". For example, the classes-with-more-than-200-LOC advice. Lol. Don't forget: there are many ways to do things correctly.
it's true and all the software engineers in the audience switched their codebase to Serilog when they came back from the conference. 😂
He didn't say that at all. He said he preferred Serilog and disclaimed the entire talk with "these are just opinions"
NLog 5 is just as good as Serilog now. NLog gets a bad rep because of pre-5.0.0 versions.
I think this must be highlighted more, because I've heard so many people at work and on KZread say "switch to Serilog", but I'm still waiting for a compelling argument why an up-to-date NLog should be thrown out in favor of Serilog. It seems to be more of a preference choice than a technical one. Both are good; use the one you like.
Nice one!!!! Cheers mate. Come on over to Archethic? A node in #Elixir.
12:34 In a nutshell: you have to be sure of yourself, but be ready to be wrong. The optimal, but hardest, MO is open-minded but skeptical. So think in probabilities and get more data. F around and find out.
10 opinions on .NET plus a bonus opinion on the name of X
I love this! ❤ I never knew about anything past the functional pattern. I know about async/await, but had no idea it is a monad.
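For anyone else surprised by the "async/await is a monad" remark: `await` plays the role of monadic bind, sequencing a computation wrapped in a context. A minimal non-async illustration using an Option/Maybe-style wrapper in Python (the names and example are mine, not from the talk):

```python
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

@dataclass
class Maybe(Generic[T]):
    """A value that may be absent: the simplest monadic context."""
    value: Optional[T]

    def bind(self, f: Callable[[T], "Maybe[U]"]) -> "Maybe[U]":
        # Like `await`: unwrap the value and feed it to the next step,
        # short-circuiting when there is nothing to unwrap.
        return Maybe(None) if self.value is None else f(self.value)

def parse_int(s: str) -> Maybe[int]:
    return Maybe(int(s)) if s.isdigit() else Maybe(None)

def reciprocal(n: int) -> Maybe[float]:
    return Maybe(None) if n == 0 else Maybe(1 / n)

# Chained the way sequential awaits chain Tasks in an async workflow.
print(parse_int("4").bind(reciprocal).value)     # 0.25
print(parse_int("oops").bind(reciprocal).value)  # None
```

Replace "value may be absent" with "value arrives later" and you have Task/Promise, which is why await fits the same shape.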
This just looks like another way to make a risk-analysis table and to see if the risks can be catered for and mitigated. The presenter seems a bit unhinged, drinking his own Kool-Aid without being able to take any questions that dare challenge his theory. That is not what science is about: you should be able to enter into discussions and debate the merits of something, instead of just blindly following.
What is it that you want to debate? The idea differs from standard risk analysis by not using probability or impact guesses and discards the actual inputs as irrelevant at the end, which is a pretty radical approach that hasn’t been used before. On a scientific level all the information required for replication and refutation has been made available through peer reviewed sources.
@Barry-ru9kf This assumes that risk analysis is done separately and is just presented as a checklist of things to "CYA" in case of a future problem, and how to handle it at that point. I am referring to doing these risk analyses, or "what if" scenarios, up front, embedded in and part of the design process and the design output. That seems very much aligned with this?
This is very different from a risk analysis, because it's not analyzing risk or interested in protecting against risk; it's looking for gaps in our understanding of a problem, how that relates to any possible solution, and any weaknesses in an architecture that will be revealed in uncertain environments. Simply assessing risk during a design process is still laboured with ideas about probability that stifle exploration. The key is the use of random simulation, which gives the weird result that architectures get stronger than when we employ traditional risk strategies. My research shows that this weird result is actually to be expected theoretically and is replicable experimentally; this is a long way from risk management, and it makes the people who work with that very angry. It is a very different way of thinking than traditional approaches, but it's not for everyone. If it feels uncomfortable, or feels like something else, it's probably best not to adopt it. A good read is the article "The It's Just Like… Heuristic", which you can find on the web.
@Barry-ru9kf "The key is the use of random simulation - which gives the weird result that architectures get stronger than when we employ traditional risk strategies." That's exactly what I said. Implementing risk analysis at the start of and during the design of a system is a concept very similar to what you are discussing. It's not new, and many companies already practice this. So far, all you're doing is trying to sell us on your theory. My question is: where is the empirical validation of this theory? I would like to review those results and, as scientists do, reproduce them. We don't want another LK-99 situation on our hands.
I guess you didn't read the "It's Just Like…" article then. There's a huge difference between random simulation and risk analysis, and a huge body of literature behind that statement, so if you like to do what scientists do, you should start by reading the literature. Now, certain senior architects, as I mention in the talk, eventually figure out that random simulation gives better results, but they've never written it down, never understood the implications, never formulated a theory; instead we stumble around with half-baked definitions of risk and risk analysis. That some people have figured this out intuitively is actually part of the talk, so I'm not sure what point you think you're adding. If you're already doing this, then you should know that it works. It seems you're making two arguments: one is that this isn't replicable, and the other is that you're already doing it. Contradicting yourself isn't really the basis for a good discussion, and it seems like you're attacking the idea for the sake of it. Since you say you already do this and it doesn't work, I'd suggest you spend some time thinking about things. I'm not trying to sell you on the theory, but I would love to see an actual argument against it.
I've been programming since 4th grade in 1978. Got my first pro gig in 1992. I can't imagine doing anything else. I understand being angry at Jira and "agile", but I do not understand burning out. So much to learn. So many smart people to work with. Every day pecking the keys is a gift.
Similar background here. I think the closest I’ve come to burning out was when I was part of a team or company culture that was toxic. The correct course of action is to move on to a better place. However, that may not be easy or straightforward for everybody’s situation.
Ha, hello old man, about the same era… I've played games my whole life. Please check out Elixir and Archethic, a node in Elixir. I think it will take over in this 20-year period; it's perfect for internet building. Discord is built on a core of Elixir, with only 5 engineers maintaining it, and Rust on top AFAIK. Sasa Juric's "The Soul of Erlang and Elixir" is the only video you need! Jose has a lot of great videos too.
It's a gift to never encounter over-work :)
@@DavidWhitneycouk Uh huh. I was in the US Army, so my idea of over work and your idea of over work probably aren't aligned.
@@7th_CAV_Trooper I imagine not - there is no universal experience of either.
5 years too late. This is why I got off the MS train.
great talk, the letter at the end was just a beautiful encouragement, thank you David
Thank you 🖤
I'll take capitalism and free markets every day of the year.
100%
You have no choice.
Great talk!
If you just followed this pattern, you'd have a functional core with an imperative shell that deals with IO. But then you have to do more functional work that you couldn't do before, because it requires the result of the IO. So you do it again: you accept the input from IO, do some pure stuff, and then the result goes to IO. And then again. Scott is completely ignoring how real-world applications work by showing toy examples. The reason is that real applications get way more complicated, and that makes this functional style a lot more complicated too. You're not getting rid of complexity.
Next: "You never have to mock pure code." Hogwash. You mock the things that are slow. If I'm doing heavy math that takes 10 seconds to calculate, I'm not going to run it in every test simply because it's "pure"; I'll mock it when testing other code that depends on it. Scott knows this. He's been doing this long enough. But he loves to trivialize things to make them seem more stable, and then pull out the "it depends" card later. It just comes across as disingenuous.
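For context on the pattern this thread is arguing about, here is a minimal sketch of "functional core, imperative shell" (the example, names, and data are mine, not from the talk): the pricing rule is pure and needs no mocks, while the shell wires IO steps around it through plain function parameters.

```python
# Functional core: pure and trivially testable without any mocking framework.
def apply_discount(total_cents: int, loyalty_years: int) -> int:
    """Pure pricing rule: 1% off per loyalty year, capped at 20%."""
    pct = min(loyalty_years, 20)
    return total_cents * (100 - pct) // 100

# Imperative shell: all IO lives here and just wires pure steps together.
def checkout(order_id: str, fetch_order, fetch_customer, save_invoice) -> int:
    order = fetch_order(order_id)             # IO in
    customer = fetch_customer(order["cust"])  # IO in
    due = apply_discount(order["total"], customer["years"])  # pure step
    save_invoice(order_id, due)               # IO out
    return due

# In tests the shell takes plain dict lookups and a lambda in place of real IO.
orders = {"o1": {"cust": "c1", "total": 10_000}}
customers = {"c1": {"years": 5}}
invoices = {}
due = checkout("o1", orders.get, customers.get,
               lambda oid, amt: invoices.__setitem__(oid, amt))
print(due, invoices)  # 9500 {'o1': 9500}
```

The commenter's objection is about what happens when pure and IO steps interleave many times; this sketch only shows the simplest one-pass case.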
So, all in all, this is just advertising for the Deunde server product?
How have you guys not talked about the best part of F#? TYPE INFERENCE. Seriously. I'd take Hindley-Milner type inference over almost any other feature; not having to babysit the compiler is the most amazing speed multiplier in code.
This is also an objectively superior feature; it's a landslide victory in F#'s favor, and C# has nothing to answer it.
We had to cut it out because of time constraints ;)
@maxarshinov You've gotta pick what seem like the most useful issues to bring up; I understand. I just think it might be more of a win than most people realize, and it's the thing I miss most when I use C#. The ability to let the compiler do the mental gymnastics of figuring out how to make the function signatures fit together is incredibly helpful for composability. C#'s compiler doesn't help you make composition easier; it's a pedantic "uhm ackshually" jerk that will refuse to do anything until you've got its magic secret code words spelled out and pronounced correctly. It will gladly tell you you're wrong, though it won't exhaustively check everything you'd like. It will tell you you're not passing the right thing around, but it's up to you to figure out what will make it happy, when in the same situation it's clear to F#'s compiler what you wanted from the beginning, and, like a true pal, it just says: "Alright, I see what you're getting at. Let me adjust this automatically for you and make sure things are good to go and safe under the hood. You just keep exploring the actual problem you set out to solve; I'll do the drudgery for you. I'm a computer, after all; that's what I'm good for."
Irina's talk made it to the last issue of Tech Talks Weekly newsletter 🎉 Congrats!
This is a really nice intro to vector databases and it has been featured in the last issue of Tech Talks Weekly newsletter 🎉 Congrats Erik!
Nationality: Azerbaijani Accent: Russian Company: Norway Place: United Kingdom Hotel: Trivago, of course
Some of us were rendering client side using Dynamic HTML and JavaScript in the early 2000's.
We need to move past the web and start coding dreams for angels that can save humankind.
This is not technical enough. Not NDC worthy. Unfortunately waste of time for devs.
"Okay, we'll give a UX guy one shot to have a talk this year. What will it be about?" "Pickles!"
When you have to rely on heavy mutability => C# For pretty much everything else, F# is what Microsoft wishes C# could be.
Half of the presentation is advertising for Azure.
I lost interest in the video when he claimed pickup truck drivers dislike electric car drivers based on their voting preferences. For someone who should rely on science and principles, this is disappointing.
2x speed is perfect to watch this video
interesting talk, thanks for putting me on to residuality theory
It is interesting to see bridging various technologies like that. Fantastic Steve Sanderson, as always. I love his presentations and the level of detail and complexity he usually goes into. However, the presented concept of combining technologies is making things on the backend much more complicated than most teams will want to accept unless they really have to integrate some legacy code/functionalities/externalities. The more tech interfaces you have to deal with, the more points of failure for your system and issues that will be hard to overcome. It's really difficult for me to come up with a valid real-world use case. Any specific ideas?
This is it. This is the future and I've been searching & experimenting for MONTHS and this is literally the FIRST instance I have found of this type of implementation of Multi-Modality!
GCP Pub/Sub is missing, the biggest messaging system in the world lol :)
Very good!
I liked the disclaimer at 46:40. However, the Handle method must be marked async, and the SingleOrDefaultAsync extension method should be used for the code to actually compile. Other than that, I like the idea of vertical slices, and I've worked on projects that had grown huge and suffered from what Chris described. I'd just love to see some examples, or some discussion of good practices, for how to lay out the features and their borders, because "feature" is a pretty vague term tbh. I'm also quite curious what the design decision-making will be for huge projects when a change order arrives with a new feature that overlaps various existing features. It can become pretty hard to tell where one feature ends and another begins. Or do you create a new one? This can get messy quite quickly if it affects existing views or other end-user experiences.
Really??? People are afraid of computation expressions? Unfortunate.
TIL the old weird flashy lights are the equivalent of modern RGB LEDs.
First!
Do your events include the whole aggregate root and everything within it? If not, how do you handle out-of-order events?
There are tools based on tracking cookies, deployed by state actors and bad guys, to obtain the physical location of an individual. Whoever says they don't care about tracking is out of touch.
The Facebook diss about “300 billion” was such a miss lol