Error Correction - Computerphile
What good is knowing you have a problem if you can't fix it? Professor Brailsford explains Hamming codes and how errors can be not just detected but also corrected.
Original Compression film: • Compression - Computer...
Entropy in Compression: • Entropy in Compression...
Error Detection: • Error Detection and Fl...
Home-made Video Arcade: • Homemade Video Arcade ...
BASIC & Sorting: • Programming BASIC and ...
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. See the full list of Brady's video projects at: bit.ly/bradychannels
Comments: 314
I could seriously sit and listen to this guy all day, he's so goddamn interesting.
The only issue I have with this channel is that videos which are part of a series aren't labelled. Sometimes you give them "continued..", but that doesn't help if there are more than 2.
Professor Brailsford is video GOLD for education about this field and its history!
This is my favorite series in computerphile so far.
This channel and Numberphile, I love them both. I'm doing my computer engineering degree, and sometimes when I get frustrated with my hectic schedule and want a productive break, these are the two channels I come to. I hope you guys make more and more videos. They make my course more interesting, encourage young minds to pursue a career in computing, and help ordinary people understand how the devices they use so casually actually work. Guys, please make more videos. I love them all!
Definitely my favorite series on computerphile so far! Would love it if we could venture further into the 'hairy' stuff, even if we'd only scratch the surface in such a short format :)
A different idea is used in CD error correction: Reed-Solomon codes, from the 1960s I believe. I'd love to see him demonstrate that correction code. If memory serves it uses a 16x16 grid, though it's years since I studied the theory.
Flashbacks to 12 months ago and my grad course. This is one of the foundation pieces, guys! Can't have our lovely Wi-Fi without it.
It'd be great if we could have "Part 1" and so on in the title, I often stumble onto Part 2 from a video that interested me, without knowing it'd been there
Just programmed a hamming encoder/decoder for my programming class! Really cool stuff, I'd like to learn how multiple bit error detection/correction works
I'm feeling proud that I know what he was talking about at the end.
You should make a video about creating Hamming codes for larger numbers/multiple errors. You said it was very complicated but I disagree. It is one of the simpler algorithms, it is absolutely beautiful and anyone who ever took algebra should be able to understand it.
Just wanted to say that prof. Brailsford has amazingly soothing voice, it's pure pleasure to listen to. I'd love to see more videos about this other lovely stuff :)
I have to say that in general the Computerphile videos are not all that interesting to me (a programmer), which surprised me a lot because I liked the idea. But the videos with Professor Brailsford are always really, really good, and I feel they are explained well even for those who are not deeply familiar with the things he talks about. Keep it up!
I like this clear step by step explanation. A well done video!
Loved the way he ended it off. Gets me excited to learn more mathematics!
I love this channel so much. Could you do an episode (or several) on file systems and RAID?
Very well explained, thanks! I had wondered about this before, but I'm not a good learner by reading just text. Visual explanations I grasp much quicker.
To break down a concept and demonstrate it in basic terms, it's always good to go back many generations before modern error correction algorithms, like the ones protecting the streaming video data in this very video.
Why use the cube at all? Why over complicate this? If you got two zeroes, it's an ACK. If you got two ones, it's a NAK. The point is that, minus the perfect 000/111 case, you're tolerating at most one bit of corruption. So 110, 101, 011 = the only possible ways to corrupt one bit of three ones. Likewise, 001, 010, 100 = the only possible ways to corrupt one bit of three zeroes. Keep It Simple, Smarty.
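The repetition scheme the comment above describes (send each bit three times, take whatever appears most often) can be sketched in a few lines of Python. This is a minimal illustration, not code from the video:

```python
def encode(bit):
    # Repeat the bit three times: 0 -> "000", 1 -> "111"
    return str(bit) * 3

def decode(word):
    # Majority vote over the three received bits:
    # any single flipped bit is corrected automatically
    return 1 if word.count("1") >= 2 else 0

# A single corrupted bit still decodes correctly
assert decode("101") == 1   # one bit of "111" flipped
assert decode("010") == 0   # one bit of "000" flipped
```

Note that, exactly as the later comments point out, this only guards against one flipped bit per triple; two flips in the same triple decode to the wrong value.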
This man has to be one of the best teachers in the world.
In any case, the point of the ACK and NAK is so that the receiver can say to the transmitter "Got your message, thanks" or "Could you send that again, please?" respectively. So, on decoding that received ACK or NAK the transmitter will know what happened to the original message. The original message could also implement a similar system to what the Professor describes here for error correction, and so the receiver could attempt to correct errors before giving up and sending a NAK.
Please more videos about this!!!!!!!
Man now I want a video on each of the things he mentioned at the end!
You can also use methods like cyclic coding, where the information is encoded into more bits. This allows you to detect and correct wrong bits at the receiving end.
This way of explaining it converts very well into the mathematics behind it.
The arrangement of bits in that cube has a very special property that makes the detection possible: notice how every edge in the cube connects two points that differ in exactly one bit. The cube looks a little pointless here because there are only 3 bits, but with a larger number of bits, when you're sending n bits with a larger correction distance, you can construct an n-dimensional hypercube to help visualize it, and correction simply becomes finding the nearest "good neighbour" in the hypercube.
Just discovered the channel! Loved it!
he is an amazing teacher. thank you
Fantastic video!
RAID 1 and 10 have error correction in that there's at least one identical copy of everything. RAID 4 and 5 reserve one disk's worth of space to store parity bits (which follow the same rules that Professor Brailsford has been describing), and if one disk fails, you can use those parity bits along with the contents of the surviving disks to figure out what would have been on the disk that failed.
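The RAID-4/5 parity idea in the comment above comes down to a bytewise XOR: the parity block is the XOR of all data blocks, so any one missing block is the XOR of the survivors plus the parity. A minimal sketch (the block contents here are made-up sample data):

```python
from functools import reduce

def parity(blocks):
    # Parity block is the bytewise XOR of all the given blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three "disks" worth of sample data
data = [b"\x0f\x10", b"\xf0\x01", b"\x33\x44"]
p = parity(data)

# Simulate losing disk 1 and rebuilding it from the survivors plus parity
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

XOR works here because it is its own inverse: XORing the parity with all surviving blocks cancels them out, leaving exactly the lost block.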
Fascinating stuff!
As he mentioned at the end, modern error correction methods are based on sophisticated versions of this method. This channel is more about the origins and foundations of CS than about teaching cutting edge methods.
Lol, everything is understandable and fascinating when your teacher's got a british accent! I just love this channel! ^_^
(cont.) Say you want to send 32-bit data with a correction distance of 4 (i.e. it can correct when up to 4 bits are flipped). Given this requirement, all you need to do is create a hypercube that can fit 2^32 good points, where each good point is at least 9 steps away from all the others. How to create the smallest hypercube that satisfies that property and can be easily transformed back and forth from the original 32 bits is more involved, but it's easy to visualize why it works.
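The "nearest good neighbour" decoding described in these two comments is just minimizing Hamming distance, the number of positions in which two words differ. A tiny sketch using the video's 000/111 code (my own illustration, not from the video):

```python
def hamming_distance(a, b):
    # Count the bit positions where two equal-length words differ
    return sum(x != y for x, y in zip(a, b))

def nearest_codeword(received, codewords):
    # Decode to the valid "good point" closest in Hamming distance
    return min(codewords, key=lambda c: hamming_distance(received, c))

codewords = ["000", "111"]   # minimum distance 3, so 1 flip is correctable
assert nearest_codeword("100", codewords) == "000"
assert nearest_codeword("110", codewords) == "111"
```

The general rule the comment uses: to correct up to t flipped bits, every pair of codewords must be at least 2t + 1 apart, which is why distance-4 correction needs points 9 steps apart.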
Please do a video on ECC RAM vs. non-ECC RAM, and DDR RAM vs. GDDR RAM
yyyeeeaaaa new computerphile video!!
Loved it Sir.
Very nice answer!
Professor Brailsford is WIN!
I'd like to spend a day with this man.
First, it actually is possible to have signals with different lengths (look up "variable-length code" for some examples). Second, a "0" signal is not the absence of power, because then a "1" signal would be indistinguishable from a "01" signal or a "10" signal. A "0" signal would be a low-voltage signal and a "1" signal would be a high-voltage signal (or some other representation of what is technically a 3 character language with "no power" being a "space" or null character).
Great teacher
What's the name of the videotool the animations are made with?
Computers are amazing D: And Professor Brailsford is really good at explaining how they work. More please :V
Interesting; single-qubit quantum error correction uses two additional bits as well. The qubit has the superposition state Y = a|0> + b|1>, where the information in the qubit is contained in the complex numbers "a" and "b". We entangle the state with two additional qubits, which gives us Y' = a|000> + b|111>. Then no matter which bit gets damaged, we can recover the original state with 100% certainty by applying a set procedure. The bonus is that we can actually lose 2 bits and still recover the original state.
Thanks, very clear and simple explanation :)
If you liked the set of integers from 0 to 255, well, Beyonce can tell you how you should have used it for error correction (and also encryption).
Would it not be more efficient to send maybe n bits for ACK and n+1 bits for NACK? Then you don't have to check the contents of either response just the length. So if a bit gets distorted it becomes irrelevant to the problem.
This chap is epic at explaining stuff
Would it be more beneficial to ACK that it is cloudy outside (101) rather than sending back a Boolean state (111) or (000)? Or even go back to (10) but instead do a double ACK? A: 01 -> B B: 01 -> A A: 01 -> B B: 01 -> A
Also the CD player hides some smaller errors from the user. So even if some data samples have been lost, the user usually can't detect the difference. Only if the error rate gets too high, there will be audible crackle or a short silence.
awesome video! SUBSCRIBED!
Wow, where does this guy get old school computer printout fanfold paper? I haven't seen that stuff since I was writing Fortran 77 code for our CDC Cyber 170 something like 35 years ago.
What video is he referring to in the beginning?
I hope there will be video on fields, rings, groups and cosets, either here or on Numberphile.
wow - that was very very good!!! thank you for that so much!!! you explain so very very well Professor :)
My point is (was), it's just an extremely complicated way of saying "we send the bit three times and use whatever pops up the most times". The schematic used, while accurate, over-complicates it, especially with the (YouTube) audience in mind, as it should be described in as simple and general terms as possible. It's also portrayed as providing something like flawless accuracy, which it does not (unless the channel is near-flawless to begin with).
Only if you only need to do it a small number of times. Speaking is a very slow method of transferring data; I could spend all day calling places and get only a few hundred weather reports. But if I need to know the wind speed and visibility around a weather drone every few milliseconds to make sure I don't crash, this method will do better than a phone call.
I love this guy!
I love this old reel feed paper!
fantastic voice
Love the videos, Brady. Have you heard of the Oculus Rift headset? Very cool piece of hardware.
I was working on sending small packets of no more than 256 bytes of data over IR using an NRZ scheme really meant for TV remotes. I wanted good error correction, but I think a few brain cells upped and left me in the effort. I had to settle for error detection only in the end. Excellent work and thumbs up!
Anyone else love this guy's voice?
Great video! You should make one on how graphics/physics engines for games do 3D rendering, or how the GUI uses resources and displays information from the system and its directories. Hope you like my suggestions :)
good starting point. thanks. I'm going to go deeper and hope my brain doesn't smoke.
I made an example in another comment. You can count the steps needed to reach any expected combination without a graphic representation.
Hi Computerphile, I just started programming OpenGL shaders and I would love it if you could make a video on graphics cards.
Here are a few great ideas for the next Computerphile videos: Field theory, rings, groups and/or cosets :D
Yea, I did watch the entire thing. Caught that, and the 'simple' remark. The thing was more... is this still useful? i.e. is it used in memory cache controllers, or anything anymore?
Perhaps go into explaining the difference between Back Error Correction and Forward Error Correction? I think it could be interesting to hear your explanation of this.
Drawing it as a cube makes me think that it's just a distance calculation from each corner: errors have a distance of 1, sqrt(2) or sqrt(3). That seems like it would make large problems easier.
It's in the description under "Error Detection".
CDs use so-called Reed-Solomon codes. With them you are able to restore up to 25% corrupted info. The idea behind it is basically to distribute the information, so that if you scratch the surface, the erased bits can still be rebuilt from other parts of the CD. The same principle is used in QR codes.
Could you please make a video about the Reed-Solomon algorithm?
Not quite, I think you would just be adding faces to the 3-dimensional object. Four bits, for instance, would probably resemble a truncated icosahedron, and so on. I would assume the shape of an 8-bit spherical polyhedron would need a single point (00000000) with six other geometric points of interest around the sphere before hitting the far point (11111111), all points of course converging in. Essentially it would need eight paths and six or seven points between on each path.
Do hamming codes next!
It may seem weird, but I am genuinely interested in the way harder stuff. To be honest, for me personally it was fatiguing that he took like 5 minutes to explain that there is no way you can get from 000 to 111 with 1 error, although he is an amazingly interesting speaker. You said you read the comments, so I'd like to leave my feedback :) I'd really watch maybe 30-minute videos which go in depth when it's such an excited teacher. Maybe here or on Numberphile.
I want all the other stuff he mentions!
Can anyone give me the names that he said at the end? "Some theory, rings, groups"???
Because that only works for the simplest case, where your valid bit patterns are all 0s or all 1s. For complex messages you probably have multiple messages with the same bit counts but that are still 3 edges away from each other in your graph.
I'd be interested to see a video on networking TCP/IP and how it relates.
There is a carrier signal agreed by both (or all) ends of the network. This signal is then manipulated to represent 1 or 0.
Yeah, definitely helpful, ty
First semester Computer Science here and I have my information theory test next week! Wish me luck!
What do you mean by "isn't either like 010"? And If the ack/nak is warped then yes the message probably is warped. I thought the whole ack/nack thing was for like single characters, so each letter in the text file would have their own ack/nak test.
You should do a video on RAID 0,1,5,10 etc and explain how the error correction is done.
What you are talking about though is synonymous with geometry. Steps are essentially distance. The reason why he used a cube is because it's a nice analog that works even better when the systems get much more complicated.
I'll admit I'm only at 3:19 at the time of writing, but what would happen if your acknowledge code got scrambled from say: "00" to "11"?
A parity bit is a kind of checksum, just a very simple case.
I like these videos
On second thought (it's been 3 weeks): yes, perhaps each character could be warped individually, so yes.
I'm making my own game engine so I know a bit about how graphics rendering works in game engines and you'd need a lot more than a video on that. Every rendering effect, from shadows to glare to reflections etc., could do with its own video.
You don't have to physically draw the cube. That was just to illustrate a point. However many problems are simpler to solve when taking into account geometry. For example netflix's suggestion system. They compare you to other members and suggest movies that people have rated highly and those people are close to you in Euclidean space i.e. people who rated movies you've rated similarly.
A parity bit is the easiest form of a checksum. A checksum is any method of changing the data to allow error detection to occur at the receiving end. In this example, he did just use a parity bit, but he wasn't incorrect in calling it a checksum.
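The parity bit that the two comments above discuss is indeed the simplest checksum: append one bit so the total count of 1s is even, and any single flipped bit makes the count odd. A minimal sketch (my own illustration):

```python
def add_parity(bits):
    # Even parity: append a bit so the total number of 1s is even
    return bits + str(bits.count("1") % 2)

def check_parity(word):
    # Valid iff the number of 1s (including the parity bit) is even
    return word.count("1") % 2 == 0

word = add_parity("1011010")     # four 1s -> parity bit "0"
assert check_parity(word)

# Flip one bit: the check fails, so the error is detected
corrupted = word[:3] + ("0" if word[3] == "1" else "1") + word[4:]
assert not check_parity(corrupted)
```

Note this only detects an odd number of flips; two flipped bits cancel out, which is exactly why the video moves on to codes with more redundancy.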
For a distance-one correction of any bit length, couldn't you just convert the codes to decimal form and check if the wrong code has a difference of 2^n (1, 2, 4, 8, etc.) compared to the correct codes? That should be only one bit of difference.
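One caveat to the comment above: an arithmetic difference of 2^n doesn't always mean a single differing bit (8 and 4 differ by 4, yet differ in two bit positions). The reliable version of the idea uses XOR, which isolates exactly the differing bits. A small sketch of that check:

```python
def differs_in_one_bit(a, b):
    # XOR keeps a 1 exactly in the positions where a and b differ;
    # a single differing bit means the XOR is a power of two.
    x = a ^ b
    return x != 0 and (x & (x - 1)) == 0   # power-of-two test

assert differs_in_one_bit(0b101, 0b111)        # one flipped bit
assert not differs_in_one_bit(0b1000, 0b0100)  # difference is 4, but two bits differ
```

The `x & (x - 1)` trick clears the lowest set bit, so the result is zero exactly when only one bit was set.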
Is it possible for all the bits to be distorted?
With a 3 bit message and 3 bit confirm, why not send back the code you receive as confirm?
Usually these things are time-based, so an inserted bit would likely result in further data looking like garbage. Presumably there would be a mechanism in place for the hardware to re-align with the communication stream, and then it reports a bad value or similar?