In this article we are going to explore error correcting codes in computer networks. We will discuss the main types of error correcting codes, how they work, and where they are applied.
Table of contents:
- Introduction to Errors
- Error Correcting Codes
- Hamming Codes
- Binary Convolution Code
- Reed-Solomon Code
- Low-Density Parity Check Code
Introduction to Errors
Before moving on to the main topic, let us first understand what errors are in a computer network. An error is a fault or defect in a message. In computer networks, a message is the collection of information that is transferred from one source to another. If the message received is not identical to the message that was sent, an error has occurred. In simple words, when the message at the sender's side does not match the message at the receiver's side, that is an error. These errors need to be corrected to allow the smooth transfer of information from sender to receiver, and there are various error correcting codes and methods in computer networks that do exactly this. We will study each of them in detail, so let's get started.
Error Correcting Codes
Error correcting codes are sequences of bits produced by an algorithm to detect and correct the errors that occur while information is transferred. As we have seen above, errors can corrupt a message; these codes find the exact position at which an error has occurred and then correct it so that the information is transferred properly.
There are two types of error correcting codes in computer networks:
- Block codes
- Convolutional codes
Block codes, as the name suggests, encode information in blocks: the information bits are followed by parity bits. They are memoryless, since each block is encoded independently.
Convolutional codes convolve the information bit sequence: the information bits are spread along the sequence. Unlike block codes, convolutional codes have memory, because the output depends on previous input bits. For the same error-correcting quality, they use shorter codewords than block codes.
Having seen these two broad types, we will now discuss four standard error correcting codes, how they work, and how they detect and correct errors:
- Hamming codes
- Binary Convolution Code
- Reed-Solomon Code
- Low-Density Parity Check Code
We will study each of them in detail.
Hamming Codes
Hamming code is an error correcting code capable of both error detection and correction: it can detect two simultaneous bit errors and correct single-bit errors. In Hamming code, the sender encodes the message by adding redundant bits, which are extra bits placed at certain positions among the data bits to detect errors. At the receiver's end, the code is decoded to detect and correct errors and recover the original message. Now we will see how the whole process of detection and correction works in Hamming code.
Message Encoding in Hamming Code:
There are three standard steps that a sender uses to encode a message.
The steps are:
Calculating the total number of redundant bits - Assume there are n data bits and p redundant bits to be added. Each of the (n + p) bit positions can be the location of an error, and one extra state is needed to indicate no error. Since p bits can indicate 2^p states, 2^p must be at least (n + p + 1). Mathematically:
2^p >= (n + p + 1)
Placing the redundant bits - All the redundant bits to be added must be placed at the bit positions that are powers of 2, i.e., positions 1, 2, 4, 8, and so on.
Calculating the value of the redundant bits - Each redundant bit makes the number of 1's it covers even or odd. This property is known as parity: even parity means there is an even number of 1's, while odd parity means there is an odd number of 1's. Each redundant bit p_i, placed at position 2^(i-1), is computed as the parity of all bit positions whose binary representation has a 1 in the i-th bit, excluding its own position.
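The three encoding steps above can be sketched in Python. This is a minimal illustration using even parity; the function name and list-based representation are ours, not a standard API:

```python
def hamming_encode(data_bits):
    """Encode a list of data bits with even-parity Hamming redundant bits."""
    n = len(data_bits)
    # Step 1: find the smallest p with 2^p >= n + p + 1
    p = 0
    while 2 ** p < n + p + 1:
        p += 1
    total = n + p
    code = [0] * (total + 1)          # 1-indexed; index 0 is unused
    # Step 2: place data bits, skipping positions that are powers of two
    it = iter(data_bits)
    for pos in range(1, total + 1):
        if pos & (pos - 1) != 0:      # not a power of two -> data position
            code[pos] = next(it)
    # Step 3: the parity bit at position 2^i covers every position whose
    # binary representation has a 1 in bit i
    for i in range(p):
        parity_pos = 2 ** i
        parity = 0
        for pos in range(1, total + 1):
            if pos & parity_pos and pos != parity_pos:
                parity ^= code[pos]
        code[parity_pos] = parity
    return code[1:]

print(hamming_encode([1, 0, 1, 1]))   # -> [0, 1, 1, 0, 0, 1, 1]
```

For 4 data bits the loop settles on p = 3 redundant bits (2^3 = 8 >= 4 + 3 + 1), giving the familiar 7-bit Hamming(7,4) codeword.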
Message Decoding in Hamming Code:
When the receiver gets a message, it must perform recalculations to detect and correct any errors present. Here too there are three standard steps. The steps are:
Counting the number of redundant bits - The concept is the same as in encoding: we apply the same relation to find the number of redundant bits, namely
2^p >= (n + p + 1), where n is the number of data bits and p is the number of redundant bits.
Placing the redundant bits correctly - Just as in encoding, the redundant bits sit at the positions that are powers of 2: 1, 2, 4, 8, 16, and so on.
Parity check - The receiver recomputes the same even or odd parities over the received bits. The positions of the unsatisfied parity checks, read together as a binary number, give the position of the erroneous bit.
Overall, we can see that the steps for encoding and decoding a message are essentially the same.
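The decoding side can be sketched as follows. This is an even-parity illustration with names of our own choosing; the key point is that the sum of the failing parity positions (the syndrome) equals the error position:

```python
def hamming_correct(code_bits):
    """Recompute even parities over a received codeword (1-indexed positions).
    The syndrome, read as a binary number, is the error position (0 = no error)."""
    total = len(code_bits)
    code = [0] + list(code_bits)      # shift to 1-indexed
    syndrome = 0
    i = 0
    while 2 ** i <= total:
        parity_pos = 2 ** i
        parity = 0
        for pos in range(1, total + 1):
            if pos & parity_pos:      # every position this parity bit covers
                parity ^= code[pos]
        if parity:                    # this parity equation is not satisfied
            syndrome += parity_pos
        i += 1
    if syndrome:
        code[syndrome] ^= 1           # flip the erroneous bit back
    return code[1:], syndrome

sent = [0, 1, 1, 0, 0, 1, 1]          # a valid Hamming(7,4) codeword
received = sent.copy()
received[4] ^= 1                      # corrupt bit position 5
fixed, pos = hamming_correct(received)
print(pos, fixed == sent)             # -> 5 True
```

Here parity checks at positions 1 and 4 fail, so the syndrome is 1 + 4 = 5, pointing exactly at the flipped bit.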
After seeing all the working we will see some applications, advantages and disadvantages of hamming code.
Some of the important applications of a hamming code are:
- Open connectors
- Computer memory
- Embedded processors
Generally, Hamming codes are beneficial in networks where only single-bit errors occur, as they can correct single-bit errors only. A Hamming code not only detects a bit error but also identifies the bit containing the error so that it can be corrected. Given the applications above, it is a very useful error correcting code.
However, since Hamming code can correct only single-bit errors, in any situation where multiple bits are corrupted it cannot recover the message. This is its major limitation.
Binary Convolution Code
Convolutional code is another type of error-correcting code, in which the output bits are obtained by performing a logical operation on the present input bit together with some bits of the previous stream. Rather than depending on a block of bits, this coding technique depends on the bitstream. It is also a linear code, built from a shift register (used for temporarily storing and shifting bits) together with XOR logic circuits that produce the output code. Convolutional coding is one of the most frequently used error correction techniques in digital wireless communication.
The major elements of the convolutional coding technique are a shift register, which acts as temporary storage over which a sliding window of bits is shifted, and a logic circuit that performs modulo-2 addition using the XOR function. Before discussing the approach used in convolutional coding, let us understand the two parameters that define a convolutional code:
Constraint length - The constraint length is the length of the convolutional encoder, i.e., the overall window size in bits within the shift register. It is denoted by K (sometimes by L). A related parameter, m, is the number of input bits retained in the shift register after a bit enters the encoder.
Code rate - It is the ratio of the number of input bits shifted into the encoder at once (k) to the number of output bits produced per step in the encoded message (n). Mathematically:
r = k/n
Now assume the current message bit is x[n], while x[n-1], x[n-2], ... are the bits already held in the encoder; each x[n-i] is an encoder state. The parity (output) bits are computed from the encoder state together with the input bit. If we assume two state bits and a single input bit, the constraint length is 3. Thus, for a convolutional code, the output stream depends on the previously stored bits in memory along with the present input bits.
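A rate-1/2, constraint-length-3 encoder of this kind can be sketched as below. The generator polynomials 111 and 101 are a common textbook choice, not something fixed by the text above:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length K = 3,
    generator polynomials G1 = 111 and G2 = 101."""
    s1 = s2 = 0                      # shift-register state: two previous bits
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)      # G1 = 111: XOR of input and both stored bits
        out.append(b ^ s2)           # G2 = 101: XOR of input and the oldest stored bit
        s2, s1 = s1, b               # shift the register by one position
    return out

print(conv_encode([1, 0, 1, 1]))     # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit yields two output bits (code rate r = 1/2), and every output depends on the input bit plus the two bits remembered in the register, exactly the memory property described above.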
Reed-Solomon Code
It is a linear block code. It takes a block of data and adds redundant bits before transferring the message; the receiver decodes the message and corrects the errors. A Reed-Solomon code is described by three parameters: n, the block length in symbols; k, the message length in symbols; and q, the size of each symbol in bits. Each message and code symbol in the block corresponds to an element of a Galois field: a finite set of numbers with addition, subtraction, multiplication, and division operations that are associative, commutative, and closed. It is written as GF(n), where n is the number of elements in the field.
In this code we encode the message as a polynomial p(x) and then multiply it by a code generator polynomial g(x), using the parameters n, k, q defined above.
We construct the code generator polynomial g(x) with n - k factors, each root being a consecutive element of the Galois field. Mathematically,
g(x) = (x - a)(x - a^2)...(x - a^(n-k))
where a is a primitive element of the field, i.e., the nonzero field elements can all be written as successive powers a^0, a^1, ..., a^(N-1), with N = 2^q - 1.
After that, we map the message vector [x1, ..., xk] to a polynomial p(x) of degree < k; this can be done using Lagrange interpolation. Once the polynomial is found, we evaluate it at the remaining points a^(k+1), ..., a^n.
In this process, the sender calculates s(x) = p(x) * g(x) and sends the coefficients of s(x). The receiver gets r(x). If r(x) == s(x), then r(x) / g(x) has no remainder; otherwise r(x) = p(x) * g(x) + e(x), where e(x) is an error polynomial. Since the receiver knows g(x) and its roots, it can evaluate r(x) at every root to obtain a system of equations that determines which coefficients of r(x) are in error, and the value of each error.
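The sender/receiver arithmetic can be sketched over the small prime field GF(7) (a prime field is also a Galois field; the polynomials below are toy values of our own, not a real Reed-Solomon parameter set):

```python
P = 7  # toy prime field GF(7)

def poly_mul(a, b):
    """Multiply polynomials (coefficient lists, lowest degree first) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_mod(a, b):
    """Remainder of a divided by b, with coefficients mod P."""
    a = a[:]
    inv = pow(b[-1], P - 2, P)               # inverse of b's leading coefficient
    while len(a) >= len(b) and any(a):
        coef = (a[-1] * inv) % P
        shift = len(a) - len(b)
        for i, y in enumerate(b):
            a[shift + i] = (a[shift + i] - coef * y) % P
        while a and a[-1] == 0:              # drop the eliminated leading term
            a.pop()
    return a or [0]

p = [2, 5, 1]            # message polynomial p(x) = 2 + 5x + x^2
g = [3, 1]               # toy generator g(x) = 3 + x
s = poly_mul(p, g)       # transmitted s(x) = p(x) * g(x)
print(poly_mod(s, g))    # -> [0]: no error, g(x) divides s(x) exactly
r = s[:]
r[0] = (r[0] + 1) % P    # corrupt one coefficient in transit
print(poly_mod(r, g))    # -> [1]: nonzero remainder exposes the error
```

This mirrors the check described above: an uncorrupted r(x) is an exact multiple of g(x), while any error polynomial e(x) leaves a nonzero remainder.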
Some important applications of Reed-Solomon codes are:
- Digital communication protocols such as DSL
- Data storage media such as DVDs and CDs
- Barcode formats such as QR codes
Low-Density Parity Check Code
It is a linear block code capable of correcting errors in blocks of large size. It is constructed using a sparse Tanner graph. A low-density parity-check (LDPC) code is a code specified by a parity-check matrix.
As we have seen above, it is specified by a parity-check matrix, one that contains mostly 0's and very few 1's. The rows and columns of the parity-check matrix represent parity equations and bits of the code symbols, respectively. There are basically three parameters, n, i, j, where n is the block size, i is the fixed number of 1's in each column, and j is the fixed number of 1's in each row.
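As a small sketch of this idea (the matrix below is tiny and not actually low-density; the `syndrome` helper is our own name, for illustration only):

```python
# Toy parity-check matrix H: rows are parity equations, columns are code bits.
# A real LDPC matrix is far larger and far sparser; this only shows the check.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, c):
    """Evaluate each parity equation mod 2; all zeros means a valid codeword."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]
print(syndrome(H, codeword))   # -> [0, 0, 0]: every equation satisfied

bad = codeword.copy()
bad[1] ^= 1                    # flip one bit
print(syndrome(H, bad))        # -> [1, 1, 0]: equations 1 and 2 now fail
```

The pattern of failing equations matches the flipped bit's column of H, which is what lets a decoder locate the error.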
There are two methods through which a message can be decoded.
When the number of parity check bits is small, we use decoders that directly evaluate the parity checks. In this process, if we come across any bit involved in more than a fixed number of unsatisfied parity equations, we simply flip that bit. With the new values, the parity equations are recalculated, and the whole process keeps running until all parity equations are satisfied.
The above process is easy to understand and implement, but a problem arises when the number of parity check bits is large. In that case we use probabilistic, message-passing algorithms, applied to the low-density parity-check graph. As we have already seen, an LDPC code is constructed using a sparse Tanner graph, and the same graph is used here. The graph has two parts: one contains the code-symbol nodes and the other the parity-equation (check) nodes, with an edge connecting a symbol node to every check it participates in. The message is decoded by passing values along these edges: values travel from symbol nodes to check nodes and back, and in this process the parity values are evaluated. This method is harder to implement, but it gives better results than the method above.
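The first, simpler decoding method can be sketched as hard-decision bit flipping. This is a minimal illustration on a tiny matrix of our own choosing, with a simplified rule that always flips the bit failing the most checks:

```python
def bit_flip_decode(H, r, max_iters=10):
    """Hard-decision bit flipping: repeatedly flip the bit involved in the
    most unsatisfied parity equations until all equations are satisfied."""
    r = list(r)
    for _ in range(max_iters):
        syn = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
        if not any(syn):
            return r                          # all parity checks satisfied
        # count the unsatisfied equations each bit participates in
        counts = [sum(H[i][j] for i in range(len(H)) if syn[i])
                  for j in range(len(r))]
        r[counts.index(max(counts))] ^= 1     # flip the worst offender
    return r

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
received = [1, 1, 1, 1, 1, 0]   # valid codeword [1,0,1,1,1,0] with bit 1 flipped
print(bit_flip_decode(H, received))   # -> [1, 0, 1, 1, 1, 0]
```

After one flip every parity equation is satisfied and the decoder stops, exactly the iterate-until-satisfied loop described above.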
We have now seen all the standard error correcting codes, along with their working, applications, pros, and cons. I hope everyone reading this article has gained some useful knowledge from it.