Java gets faster and faster as it learns real-world usage to optimize the compiled code. This is why the idea that Java is slower than even C is just BS: for a complex system, no compile-time analysis can really know the best optimizations to use. This is where Java's just-in-time compilation shines, because at run time it sees which code paths are used most and has the best view of where to optimize. Yes, it takes some cycles for Java to optimize everything. But if you have a long-running program, there is simply nothing better than Java, neither for speed nor for how quickly you can build the system. The rest of the languages are only good for either systems-level programming or short-running programs. If you are running a batch application that uses 100% CPU for 10 seconds and shuts down, then Java won't have the time to optimize; in that case just use GraalVM and compile Java to native code directly.
@rnrn7127 · 13 hours ago
thank you, man! you really are on a different level! keep up the good work!
@AntonPutra · 11 hours ago
❤️
@evgenylikhonosov572 · 14 hours ago
Thank you, great tutorial!
@AntonPutra · 11 hours ago
❤️
@enjoy5941 · 15 hours ago
I didn't understand Kubernetes related technologies when I read about it on the internet, but your videos made it very easy to understand. I really appreciate it.
@AntonPutra · 11 hours ago
❤️
@mendoncaangelo · 15 hours ago
Dude, I see you are doing good... Keep up the good work :) ... Davai davai... Let the JUNIP people know you are doing well :)
@AntonPutra · 11 hours ago
😂😂
@spasham74 · 19 hours ago
You had created an EKS cluster in the past. What can we expect in this new series? How is it different from the previous EKS cluster you created?
@AntonPutra · 11 hours ago
There have been a few new developments on the EKS side:
1. The Kubernetes aws-auth ConfigMap is deprecated; the recommended approach is to use the new EKS access entries API to add new users to the cluster.
2. There is a new way to grant permissions to applications (we no longer use an IAM OIDC provider and IAM roles for service accounts; instead we use Pod Identities).
3. Some other small features in certain controllers, like the AWS Load Balancer Controller, etc.
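A minimal Terraform sketch of the access-entry approach mentioned in point 1 (not from the video; the cluster name and the principal ARN are placeholders):

```hcl
# Hypothetical example: granting a user cluster-admin access via EKS access
# entries instead of the deprecated aws-auth ConfigMap.
resource "aws_eks_access_entry" "developer" {
  cluster_name  = "my-cluster"                               # placeholder
  principal_arn = "arn:aws:iam::111122223333:user/developer" # placeholder
}

resource "aws_eks_access_policy_association" "developer_admin" {
  cluster_name  = "my-cluster"
  principal_arn = aws_eks_access_entry.developer.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}
```

The access scope can also be narrowed to specific namespaces instead of the whole cluster.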
@arunreddy3844 · 22 hours ago
you are awesome buddy, very clear, concise... covered 1 hour of stuff in 5 mins. that's really great.
@AntonPutra · 11 hours ago
❤️
@padmavatitallapareddy8148 · 1 day ago
This is a really nice and clear video. Where can we practice SQL? Thank you 🙂
@AntonPutra · 1 day ago
Just updated the readme. I have built a custom Docker image based on Postgres that has all the data. Run the following commands to pull and use it:
docker run --detach --name my-postgres --env POSTGRES_PASSWORD=devops123 aputra/postgres-169:15.3
docker exec -it my-postgres psql -U postgres
@padmavatitallapareddy8148 · 1 day ago
@@AntonPutra thank you
@arunreddy3844 · 1 day ago
Hi Anton, thank you! I had a quick question which is out of context. I have been trying to set up an on-prem k8s cluster using kubeadm on Ubuntu servers (through Oracle VirtualBox). I'm getting an issue while deploying the network plugin (Calico in my case): the pod is not spinning up. Below are the events I found; the same issue occurs on other OS flavors (CentOS) too. Could you please help me with the resolution? FYI, I chose the MAC address policy "Generate new MAC addresses for all network adapters" while creating the VM through Oracle VirtualBox. Am I missing something here?
Type: Normal | Reason: Scheduled | Age: 36s | From: default-scheduler | Message: Successfully assigned kube-system/calico-node-b8r5j to osboxes
Type: Warning | Reason: FailedMount | Age: 4s (x7 over 35s) | From: kubelet | Message: MountVolume.SetUp failed for volume "bpffs" : hostPath type check failed: /sys/fs/bpf is not a directory
@AntonPutra · 1 day ago
I have a script; take a look at how to provision an on-prem cluster:

## Control Plane

### Preparing the hosts
sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/control-plane-00/' /etc/hostname
sudo sed -i 's/ubuntu/control-plane-00/' /etc/hosts
sudo reboot

sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/node-00/' /etc/hostname
sudo sed -i 's/ubuntu/node-00/' /etc/hosts
sudo reboot

sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/node-01/' /etc/hostname
sudo sed -i 's/ubuntu/node-01/' /etc/hosts
sudo reboot

sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/node-02/' /etc/hostname
sudo sed -i 's/ubuntu/node-02/' /etc/hosts
sudo reboot

sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/node-03/' /etc/hostname
sudo sed -i 's/ubuntu/node-03/' /etc/hosts
sudo reboot

sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/node-04/' /etc/hostname
sudo sed -i 's/ubuntu/node-04/' /etc/hosts
sudo reboot

sudo apt update && sudo apt -y upgrade
sudo sed -i 's/ubuntu/node-05/' /etc/hostname
sudo sed -i 's/ubuntu/node-05/' /etc/hosts
sudo reboot

### Disable swap
sudo swapoff -a
sudo sed -i 's/\/swap.img/#\/swap.img/' /etc/fstab
free -h

### Installing a container runtime (containerd)
curl -L github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz -o containerd-1.7.3-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.3-linux-amd64.tar.gz
sudo curl -L raw.githubusercontent.com/containerd/containerd/main/containerd.service -o /lib/systemd/system/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

#### Installing runc
curl -L github.com/opencontainers/runc/releases/download/v1.1.8/runc.amd64 -o runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

#### Installing CNI plugins
curl -L github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz -o cni-plugins-linux-amd64-v1.3.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
sudo mkdir /etc/containerd/
sudo sh -c 'containerd config default > /etc/containerd/config.toml'
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
stat -fc %T /sys/fs/cgroup/

### Install and configure prerequisites
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

### Install kubeadm (on all the hosts)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

### Initializing your control-plane node
sudo kubeadm init --pod-network-cidr=10.0.0.0/16

### Installing a Pod network add-on
kubectl create -f raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl apply -f gist.githubusercontent.com/antonputra/e2f5e86d3574604b8ee4f61a53c31865/raw/96e03c744e6ee049c3f6ecb3be3ec2c8b5ee0d2c/calico
watch kubectl get pods -n calico-system

sudo kubeadm join 192.168.50.135:6443 --token 7832ex.xhpe15rhj4px3atg \
  --discovery-token-ca-cert-hash sha256:70b41c1422bd0658e664663f62919c46cffe32a6526c4b58327c93895c866dcd
kubectl label node node-00 node-role.kubernetes.io/worker=
@arunreddy3844 · 1 day ago
@@AntonPutra thank you, Sir, will try and let you know.
@hgn213 · 1 day ago
Welp... you just got a new sub. Keep making short tutorials on kube that perform a common action on k3s/k8s etc.
@AntonPutra · 1 day ago
thanks! will do between full courses
@tkb1234 · 12 days ago
Can we add custom metrics like the one below: autoscale on an increasing user count? If so, can you share the Prometheus query and what HPA scaler needs to be configured?
@AntonPutra · 1 day ago
I have an updated version of that tutorial, please take a look - github.com/antonputra/tutorials/tree/main/lessons/181/1-hpa/custom-metrics
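For the question above, a rough sketch of what such an HPA could look like; this is not from the linked repo, and it assumes a custom per-pod metric (hypothetically named `active_users`) is already exposed to the custom metrics API, e.g. via prometheus-adapter:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                      # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                    # placeholder
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: active_users        # hypothetical metric name
        target:
          type: AverageValue
          averageValue: "100"       # add pods when avg users per pod exceeds ~100
```

The prometheus-adapter rule that backs `active_users` would wrap your own Prometheus query (e.g. a sum or rate over a counter your app exports); the exact query depends on how the app exposes the user count.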
@tkb1234 · 12 days ago
Hi Anton, can you share the configuration for the below requirement: I need to autoscale based on an increase in the number of users accessing my UI application when its data traffic is above the threshold.
@AntonPutra · 1 day ago
I have an updated version of that tutorial, please take a look - github.com/antonputra/tutorials/tree/main/lessons/181/1-hpa/custom-metrics
@burhanuddinasgarali7678 · 2 days ago
the video was great but if you could just look away when teaching, kinda creeps me out
@AntonPutra · 2 days ago
ok :)
@amuthansakthivel3648 · 2 days ago
Give this man an award!
@AntonPutra · 2 days ago
❤
@MihaiLupoiu · 2 days ago
Thank you very much Anton for all the videos you make! I learned a lot from your content!
@AntonPutra · 2 days ago
thanks!
@MatthewKennedyUK · 2 days ago
This is fantastic, I’m loving that you’ve broken this into multiple videos and are going into a more detail helping me to build my production EKS cluster. This is worth my subscription, keep up the good work.
@AntonPutra · 2 days ago
thank you!
@GabrielPozo · 2 days ago
Great video!!! Now I am waiting for the next part. 😁
@AntonPutra · 2 days ago
thank you!
@arunreddy1436 · 2 days ago
masterpiece Sir, curiously waiting for the rest of EKS videos and thank you for the great job.
@AntonPutra · 2 days ago
thank you!
@diegonayalazo · 2 days ago
❤
@diegonayalazo · 2 days ago
❤
@user-pc1pm1vb7p · 3 days ago
What would be a condition where 2 pods with similar configuration should always be scheduled on 2 different nodes?
@AntonPutra · 2 days ago
use podAntiAffinity - kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#more-practical-use-cases
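As a sketch of that podAntiAffinity approach (the app name and image are placeholders), a hard requiredDuringScheduling rule keyed on the node hostname forces the two replicas onto different nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname   # at most one such pod per node
      containers:
        - name: app
          image: nginx          # placeholder image
```

With `required...` the second pod stays Pending if no other node is available; use `preferredDuringSchedulingIgnoredDuringExecution` if spreading should be best-effort instead.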
@felipedonadon7039 · 3 days ago
I've been into DevOps for barely two years now, and have just started using GCP due to a new client that we now have. Your videos are a lifesaver brother, thank you so much for the clear explanations and all the tutorials!
@AntonPutra · 2 days ago
thank you so much, I'll refresh them soon
@raghuveer120 · 3 days ago
Another great video. Keep sharing your knowledge.
@AntonPutra · 2 days ago
❤️
@ZergStylexDD · 3 days ago
In this video you create all resources using direct terraform resources. But we also have publicly available modules from Anton Babenko to create EKS and simplify the terraform layer, in my opinion. What do you think is better to use in production cases? Is it worth using such public terraform modules or is it better to create all the resources yourself?
@AntonPutra · 3 days ago
I respect him, I just generally don't like using open-source modules. For example, that open-source module still uses the auth ConfigMap to manage users. It's very easy for them to start using the API, but it will break your infra, and you would have to keep using old versions until you create new EKS clusters (just from my personal experience). Modules are great for consulting and temporary envs, when you don't need to maintain clusters for over a year. I know, a lot of copy-pasting, but when you have 20+ clusters, updating a module in all envs can take months or even a year :)
@ZergStylexDD · 3 days ago
Great content!
@AntonPutra · 3 days ago
❤️
@diegonayalazo · 3 days ago
❤
@AntonPutra · 3 days ago
thanks :)
@diegonayalazo · 3 days ago
❤
@AntonPutra · 3 days ago
thanks again :)
@Daveooooooooooo0 · 3 days ago
Audio bug at 29:45...here you just define x2
@AntonPutra · 3 days ago
thanks, probably missed it
@Daveooooooooooo0 · 2 days ago
@@AntonPutra 💪keep on rocking!
@AntonPutra · 2 days ago
@@Daveooooooooooo0 will do :)
@agun21st · 3 days ago
Great to learn with the latest version. Thank you, sir.
@dineshparva · 3 days ago
Thanks for the video. Could you explain in layman's terms what exactly the OIDC provider is and its role in EKS? Does it act like an authentication broker between IAM and k8s in AWS?
@AntonPutra · 3 days ago
The OIDC provider allows you to establish a relationship between AWS IAM and Kubernetes RBAC:
1. You create an IAM role and define a trust relationship with a Kubernetes service account.
2. You create the Kubernetes service account and LINK the IAM role to it.
Finally, you can assign IAM permissions to Kubernetes pods. BUT you no longer need it at all; the new, better way is Pod Identities. A video comparing all approaches is coming in a few days.
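For illustration, the IAM role's trust policy in step 1 typically looks roughly like this (the account ID, OIDC provider ID, region, namespace, and service-account name are all placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:sub": "system:serviceaccount:my-namespace:my-service-account"
        }
      }
    }
  ]
}
```

The service-account side of the link is an annotation on the Kubernetes ServiceAccount (`eks.amazonaws.com/role-arn`) pointing back at this role.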
@soufiane22v · 3 days ago
Amazing stuff . This is the right moment l to deep dive into EKS . Thank you so much for the effort 🙏🏻🙏🏻🙏🏻
@AntonPutra · 3 days ago
thanks!
@soufiane22v · 1 day ago
Is it possible to use OpenTofu instead of Terraform?
@AntonPutra · 3 days ago
🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - [email protected]
@AntonPutra · 3 days ago
Part 3 will be released in 2 days. Playlist - kzread.info/head/PLiMWaCMwGJXnKY6XmeifEpjIfkWRo9v2l&si=Ku0ay7zUEKgfcVcb
1. Create AWS VPC using Terraform
2. Create AWS EKS Cluster using Terraform
3. Add IAM User & IAM Role to AWS EKS
4. Horizontal Pod Autoscaler (HPA) on AWS EKS
5. EKS Pod Identities Tutorial (vs. IRSA & OIDC)
6. Cluster Autoscaler Tutorial
7. AWS Load Balancer Controller Tutorial (TLS)
8. Nginx Ingress Controller Tutorial (Cert-Manager & TLS)
9. CSI Driver Tutorial (ReadWriteOnce)
10. EFS CSI Driver Tutorial (ReadWriteMany)
11. AWS Secrets Manager Tutorial (Env & Files)
Based on the feedback, I'll add the following sections (let me know if anything else is missing):
- Autoscaling with Karpenter
- Autoscaling with Keda
- Private Ingress with Private DNS & VPN
- Monitoring with Prometheus
- EKS self-managed group
- EKS Fargate
- EKS Pod Identities vs. EKS IRSA (OIDC) vs. Node roles
@dineshparva · 3 days ago
A few more sections to add: Cilium integration, plus VPC Lattice and Gateway API advantages.
@AntonPutra · 3 days ago
@@dineshparva ok, added cilium, will take a look at the second one
@rafalkita884 · 2 days ago
I would add cluster upgrades too. Every few months we have a new EKS version. How would you do an upgrade when you have all these dependent cluster components deployed with Terraform? Best practices, zero downtime, etc. when going from one EKS version to the next.
@AntonPutra · 2 days ago
@@rafalkita884 thanks, it heavily depends on the Kubernetes version itself, not EKS. For example, K8s can deprecate some APIs, for example the old Ingress v1beta1, and you would need to upgrade all your YAML files. So it's very hard to come up with a general recommendation.
@ValeriiVasianovych · 3 days ago
Thank you, Anton. You are helping me a lot to grow and keep moving forward in DevOps.
@AntonPutra · 3 days ago
you're welcome :)
@irwin_a · 3 days ago
You're gold to the K8s community!!!
@AntonPutra · 3 days ago
thanks :)
@KX3DEX · 3 days ago
My work just did this. Can't wait to watch it all.
@AntonPutra · 3 days ago
thanks, next section will be released in 1-2 hrs
@kgck15 · 4 days ago
One of the Best video on docker networks..nicely explained.
@AntonPutra · 4 days ago
❤
@diegonayalazo · 4 days ago
❤ Thanks Grand Teacher
@AntonPutra · 4 days ago
🙏
@diegonayalazo · 4 days ago
❤
@AntonPutra · 4 days ago
🙏
@George-mk7lp · 4 days ago
great content as always
@AntonPutra · 4 days ago
thanks!!
@good_vibes_20 · 4 days ago
Nice job
@AntonPutra · 4 days ago
thank you! It's a little bit outdated, I'll soon make a new one.
@good_vibes_20 · 4 days ago
@@AntonPutra Sounds good. Subbed!
@diegonayalazo · 4 days ago
❤
@AntonPutra · 4 days ago
thanks
@Daveooooooooooo0 · 5 days ago
EBS is supported
@AntonPutra · 4 days ago
"You can't mount Amazon EBS volumes to Fargate Pods." AWS Fargate considerations - docs.aws.amazon.com/eks/latest/userguide/fargate.html
@XenoZeduX · 5 days ago
Would like to see Pulumi content in the future
@AntonPutra · 4 days ago
ok, will do, as well as the SDK
@acokmez · 5 days ago
wow better than udemy thanks man
@AntonPutra · 4 days ago
thanks!
@diegonayalazo · 5 days ago
❤
@AntonPutra · 5 days ago
thanks :)
@shokhrukhbekyursunjonov6203 · 5 days ago
Good day Anton, thank you so much for the user-friendly content! Could you please make a tutorial video on properly deploying a secure (SASL/SCRAM) Confluent-based full Kafka stack (2024 edition)? For several weeks I have been attempting to deploy the full Kafka stack (zoo+kafka+schema-registry+kafka-connect+rest-proxy+ksqldb+conduktor-console) using the SASL/SCRAM_SHA_256 method, but I keep getting errors in additional components such as the registry, proxy, connect, and ksqldb during SASL SCRAM authorization... (docker compose solution). I am sure it would be really helpful to the Kafka devops community here... Sincerely, Shokhrukh Yursunjonov
@AntonPutra · 5 days ago
Sure, I can do it. A couple of questions: Is it Kubernetes-based? Also, why not use Kafka without ZooKeeper (KRaft)? Do you have any legacy applications that require ZooKeeper?
@shokhrukhbekyursunjonov6203 · 5 days ago
@@AntonPutra it is Docker-stack based (because I was given one server to deploy Kafka on, and I am writing a compose file to deploy it all on one server). Ah, I almost forgot about KRaft, good idea, I might try that mode; I heard it is more intelligent and faster! According to my info (what the devs told me), the apps are not legacy (mostly containerized .NET apps), so I can try using a single/double-broker KRaft mode, thank you. But I am afraid of having the same issues configuring SASL/SCRAM auth in it.
@AntonPutra · 5 days ago
@@shokhrukhbekyursunjonov6203 yes, it's faster since Kafka does not need to keep offsets in ZooKeeper, and it scales better without ZK. I'll get to it, maybe after the EKS playlist, we'll see.
@69k_gold · 5 days ago
Let's say I'm using a shared-nothing architecture, and there's a relationship between the customers, payments, and orders tables. Customers and orders are linked by the foreign key customers->id ~ orders->customer_id, and orders and payments by the foreign key orders->id ~ payments->order_id. Now how would you shard this database? You can't use a single shard key, because both customer_id and order_id are important to ensure all the related rows land in a single shard. So how would you solve this problem?
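One common resolution (my own illustration, not an answer from the thread) is to denormalize: carry customer_id into payments as well, so customer_id becomes the single shard key and every related row hashes to the same shard. A small sqlite-based sketch, with all table and column names mirroring the question:

```python
import sqlite3

NUM_SHARDS = 4  # hypothetical shard count

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY);
CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                        customer_id INTEGER REFERENCES customers(id));
-- payments redundantly stores customer_id so it shards like its parent rows
CREATE TABLE payments  (id INTEGER PRIMARY KEY,
                        order_id INTEGER REFERENCES orders(id),
                        customer_id INTEGER REFERENCES customers(id));
INSERT INTO customers VALUES (7);
INSERT INTO orders    VALUES (10, 7);
INSERT INTO payments  VALUES (100, 10, 7);
""")

def shard_for(customer_id: int) -> int:
    """Every table routes on customer_id, so related rows co-locate."""
    return customer_id % NUM_SHARDS

(c,) = conn.execute("SELECT id FROM customers").fetchone()
(oc,) = conn.execute("SELECT customer_id FROM orders").fetchone()
(pc,) = conn.execute("SELECT customer_id FROM payments").fetchone()
assert shard_for(c) == shard_for(oc) == shard_for(pc)
print("all related rows land on shard", shard_for(c))
```

The price of this design is the redundant column (it must be written at insert time and never changes), which is a common trade-off in sharded schemas.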
THANK YOU for a great video!
❤️
Thank you and God Bless!
❤️