LDPC - Low Density Parity Check Codes


DESCRIPTION

LDPC - Encoding. An LDPC code is a linear error-correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse bipartite graph. Encoding of the LDPC code was done in MATLAB and implemented in hardware on an FPGA (Field-Programmable Gate Array) using Verilog.


SPARTANS - ENCODING

TOPIC: LOW DENSITY PARITY CHECK MATRIX - ENCODING FOR BIT FLIPPING

TECHNICAL DETAILS

This section consists of the technical terms, formulas and working of LDPC:

TECHNICAL TERMS

1. Code Efficiency/Rate: The rate of a block code is defined as the ratio between its message length and its block length: Rate = k/n.

2. Modulo-2 Addition: Add two bits and take modulo 2 of their sum; it is the same as an XOR gate.

3. Parity bit: A bit that acts as a check on a set of binary values.

4. Generator Matrix: An (n, k) linear block code (LBC) can be specified by any set of k linearly independent codewords c0, c1, ..., c(k-1). If we arrange the k codewords into a k x n matrix G, then G is called a generator matrix for the code C. G consists of an identity matrix and a parity matrix.

5. Parity Check Matrix: The matrix H that consists of an identity matrix and the transpose of the parity matrix.

6. Received word: The word received at the receiving end of the system.

7. Error Vector: A vector showing the positions of wrong/flipped bits in the received word.

8. Syndrome: The syndrome is the received sequence multiplied by the transpose of the parity-check matrix H.

9. Bit Flipping: If the generated bit is not equal to the required bit, we flip the bit (change it from 1 to 0 or from 0 to 1); this changing of bits is called bit flipping.

10. Cycle of 4: A cycle of 4 occurs when two columns of H have ones in the same two rows.

11. Tanner Graph: A bipartite graph used to state the constraints or equations that specify an error-correcting code. Its nodes are partitioned into sub-code nodes and digit nodes. For linear block codes, the sub-code nodes correspond to the rows of the parity-check matrix H and the digit nodes to its columns. An edge connects a sub-code node to a digit node if there is a nonzero entry at the intersection of the corresponding row and column. [1]

12. Column Weight: The number of ones in a column is the column weight of that column.

FORMULAS:

1. s = H * r^T: The syndrome s is calculated by multiplying the parity-check matrix H with the transpose of the received word r.

2. find(z): Locates all non-zero elements of the array z.

3. max(a): Returns the maximum value within the array a.

4. x = [u p]: x is the codeword; it consists of the message bits u and the parity bits p.

5. mod(val, 2): Returns the modulo-2 value of the variable named val.

6. p = ((B^-1) * (A * u^T))^T: p stands for the parity bits. The matrices A and B are the two halves of H, related as H = [A B], and u holds the message bits to be transmitted. Through this formula we can find the parity bits.
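As an illustration, the following minimal MATLAB sketch applies formulas 1-3 to a toy (7,4) Hamming-style code; the H matrix, codeword and flipped bit here are illustrative assumptions, not the project's 5K*10K matrices.

% Toy example of the syndrome formulas on a small (7,4) code.
% This H is only for illustration; the project uses a 5K*10K sparse H.
H = [1 1 0 1 1 0 0;
     1 0 1 1 0 1 0;
     0 1 1 1 0 0 1];             % parity-check matrix
x = [1 0 1 1 0 1 0];             % a valid codeword, x = [u p]
r = x;  r(3) = mod(r(3) + 1, 2); % received word with bit 3 flipped

s = mod(H * r', 2);              % formula 1: s = H * r^T (modulo 2)
failed = find(s);                % formula 2: indices of failed checks
[~, suspect] = max(H' * s);      % formula 3: bit involved in the most failed checks

Running this gives a non-zero syndrome, and the most suspect bit is bit 3, the one that was flipped.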

WORKING:

ENCODING

The working module consists of two sub-parts: creating the parity-check matrix and generating the codeword.

Parity Check Matrix (H): The matrix to be created is of dimension 5K*10K, with the condition that each column has three ones in it while avoiding cycles of 4. For this, we place three ones in each column at random positions using the randperm() function of MATLAB, which results in every column having three ones. However, it may also produce cycles of 4, so we must detect and remove them. To detect them we take the logical AND of two rows; if the AND of two rows contains more than one 1, those two rows form a cycle of 4. Once a cycle of 4 is detected, we identify the positions within the rows that cause it and flip one of those bits in the row having the greater number of ones. Thus we remove the cycles of 4 and obtain the H matrix.
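A minimal MATLAB sketch of this construction step, using small dimensions for readability; the sizes and the choice of which shared bit to clear are illustrative assumptions consistent with the description above.

% Sketch of the H construction described above (small sizes for clarity;
% the project uses m = 5K rows, n = 10K columns, column weight 3).
m = 10;  n = 2*m;  colWeight = 3;
H = zeros(m, n);
for j = 1:n
    p = randperm(m);                  % random row order
    H(p(1:colWeight), j) = 1;         % place three ones in this column
end

% Detect and break cycles of 4: if the logical AND of two rows has more
% than one 1, those rows share two columns.  Clear one shared bit in the
% row with more ones (note: this can drop a column weight from 3 to 2,
% the drawback mentioned in the conclusion).
for i = 1:m-1
    for k = i+1:m
        shared = find(H(i,:) & H(k,:));
        while numel(shared) > 1
            if sum(H(i,:)) >= sum(H(k,:))
                H(i, shared(end)) = 0;
            else
                H(k, shared(end)) = 0;
            end
            shared = find(H(i,:) & H(k,:));
        end
    end
end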

Generate Codeword (x): H is a rectangular matrix (m*2m), so we write it as a combination of two square matrices A (m*m) and B (m*m), such that H = [A B]. Let x be the codeword to be generated. The codeword consists of input bits (u) and parity bits (p), such that x = [u p]. The dimension of x is 1*10K and that of u and p is 1*5K. Thus, to generate a codeword we only need to find the parity bits p. The syndrome formula gives the relation between H and x: H * x^T = 0. Expanding this relation we get [A B] * [u p]^T = 0, which evaluates to (A*u^T) xor (B*p^T) = 0 and finally leads to B*p^T = A*u^T. From this relation we get p^T = (B^-1) * (A*u^T), and thus p = ((B^-1) * (A*u^T))^T. Knowing the parity bits, we can compute the codeword as the augmentation of the input bits (provided from the client side) and the parity bits (calculated above). The codeword is then passed on to the decoders for the decoding module. Decoding partner: BLACK KNIGHT.
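The MATLAB sketch below computes p and x for the small H built above. Rather than forming B^-1 explicitly, it solves B*p^T = A*u^T by modulo-2 Gauss-Jordan elimination, which gives the same result when B is invertible over GF(2); the variable names and the invertibility assumption are ours, not the project's exact code.

% Codeword generation for H = [A B] (continuing from the H sketched above).
m = size(H, 1);
A = H(:, 1:m);                    % left half of H
B = H(:, m+1:end);                % right half of H
u = randi([0 1], 1, m);           % example message bits

% Solve B * p^T = A * u^T over GF(2) by eliminating on the augmented matrix
% [B | A*u^T].  Assumes B is invertible over GF(2); otherwise the random
% columns would have to be re-drawn.
M = [B, mod(A * u', 2)];
for col = 1:m
    piv = find(M(col:end, col), 1) + col - 1;      % row holding a pivot 1
    M([col piv], :) = M([piv col], :);             % move pivot into place
    for row = [1:col-1, col+1:m]
        if M(row, col)
            M(row, :) = mod(M(row, :) + M(col, :), 2);   % clear the column
        end
    end
end
p = M(:, end)';                   % parity bits, i.e. p = ((B^-1)*(A*u^T))^T
x = [u p];                        % codeword; mod(H*x', 2) is all zeros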

DECODING

A brief description from the encoders: at first we compute the syndrome; received_word * H^T = 0 means that the received word is correct, while a non-zero syndrome implies that the received word is still in error. To count the error connections we use H^T * syndrome^T; this gives, for each received bit, the number of failed checks it is connected to. We then flip the bit connected to the maximum number of error bits, which is why this method came to be known as the bit-flipping method. After flipping the bit we recompute the syndrome; if it is zero the corrected codeword is accepted, otherwise we perform the entire bit-flipping logic again, continuing until the syndrome is zero or the maximum iteration limit is reached.
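For completeness, a minimal MATLAB sketch of the bit-flipping loop described above; the decoding itself belongs to the partner group, and the iteration cap maxIter is an assumed parameter.

% Bit-flipping decoding loop (r is the received word, H the parity-check matrix).
maxIter = 20;                     % assumed iteration limit
for iter = 1:maxIter
    s = mod(H * r', 2);           % syndrome of the current word
    if ~any(s)
        break;                    % zero syndrome: r is a valid codeword
    end
    counts = H' * s;              % failed checks each bit participates in
    [~, bit] = max(counts);       % bit connected to the most error checks
    r(bit) = mod(r(bit) + 1, 2);  % flip it and try again
end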

HISTORY AND ADVANCES:

HISTORY:

LDPC codes were invented by Gallager in 1962, but for several decades they were neglected. Turbo codes were introduced by C. Berrou, A. Glavieux and P. Thitimajshima, who showed that performance in terms of bit error rate can approach the Shannon limit when using turbo codes. During those decades, R. Tanner introduced the Tanner graph to represent LDPC codes. MacKay and Neal rediscovered LDPC codes and showed that their performance can reach the Shannon limit like turbo codes, or even exceed it. [2]

ADVANCES IN CODING

For large block sizes, LDPC codes are constructed mainly by studying the behaviour of decoders. As the block size tends to infinity, decoding is reliably achieved below the noise threshold and not achieved above it; this is known as the cliff effect. The threshold is optimized by finding the best proportion of arcs from check nodes and from variable nodes. The EXIT chart is an approximate graphical approach to visualizing this threshold.

After this optimization, we can construct a specific LDPC code with two main types of techniques:

Pseudorandom approaches: For large block sizes, a random construction provides good decoding performance.

Combinatorial approaches: Used to create codes with simple encoders, and also to optimize the properties of smaller block-size LDPC codes.

The LDPC codes used in the 10-gigabit Ethernet standard are RS-LDPC codes, constructed from Reed-Solomon codes. The LDPC codes used in the DVB-S2 standard are structured rather than randomly generated, so they allow simpler and lower-cost hardware. Another way of constructing LDPC codes is to use finite geometries. [3]

Right now, efforts are being put into combining implementation-oriented code design (Block-LDPC) with sub-optimal decoding algorithms (corrected Min-Sum decoding), which would make LDPC codes a viable option for next-generation wireless systems. [4]

WHY AND HOW IT IS USEFUL TO SOCIETY

The Internet, communication systems and digital circuits are huge sources of data sharing. Sometimes we want to transmit bulky data, and at other times very confidential data.

Thus, in either case we want to encode our data for the transmission channel so that we can reduce the transmission cost and maintain the privacy of the transferred data. Often, noise in the channel disturbs the data, which may result in wrong transmission of information.

Thus, we need a methodology that encodes the data and transmits it over the channel, and that, if an error occurs in the transmitted data, is capable of finding and correcting that error.

Low Density Parity Check (LDPC) codes satisfy the needs raised above. LDPC is not only a theoretical concept but has found application in varied fields; some of them are listed below along with examples.

1. In Wi-Fi and mobile networks: The main advantage of LDPC codes is the performance gain (in dB), which can be used to lower transmit power, increase data throughput, and transmit over longer distances. So with LDPC, mobile networks and Wi-Fi networks operated in noisy environments can run more reliably, more efficiently and at higher data rates. [5]

2. For DVB: LDPC generates a sparse parity-check matrix that can serve as a forward error-correcting code [6]. It helps in digital communication, where there can be errors in packets of information; here LDPC enables clear satellite communication.

3. Apart from this, LDPC codes have been adopted in satellite-based digital video broadcasting, are used for 10GBase-T Ethernet (10 Gbit/s over twisted-pair cables), and are highly likely to be adopted in the IEEE wireless local area network standard.

Hence, we find that the concept of LDPC is ingrained in the electrical equipment we use. This concept lets us transmit encoded data and correct errors, which increases the accuracy of the data we receive, resulting in effective communication and data sharing.

CONCLUSION

LDPC was a great project to work on, as it not only demonstrated a practical application of linear algebra in real life, but we also came to know how it works by programming it in MATLAB and implementing it on an FPGA. It was a hands-on experience developing a code that is widely used in the field of signals, systems and communication devices.

The project was carried out over a time frame of 8 weeks. The main aim of the encoding part was to develop a large-scale parity-check matrix without cycles of 4, and at completion we had developed code that removes cycles of 4 and creates as sparse an H matrix as possible. Along with this accomplishment, the system also has a drawback: on removing a cycle of 4 we flip a bit, and sometimes this flipping reduces the column weight from 3 to 2. The system is also slow, as it takes a lot of time to compute the inverse of B. We also found some drawbacks of the LDPC system as a whole, mentioned below:

a. LDPC codes have complex encoders, which results in a great delay to find the inverse of part of the system (the inverse of B).

b. LDPC codes fail to deliver in small-scale cases.

There are possible ways to counter these drawbacks:

a. Devise a formula in which there is no dependency on finding the inverse matrix.

Certain advantages of LDPC:

(i) A randomly generated LDPC code has higher efficiency and fewer cycles than a structurally or pattern-wise generated code.

(ii) The hardware implementation of LDPC encoding is compact and easy to implement.

REFERENCES

1. http://en.wikipedia.org/wiki/Tanner_graph
2. http://www.ece.uic.edu/~devroye/courses/ECE534/project/project_Hao-Chih_Chang.pdf
3. http://en.wikipedia.org/wiki/Low-density_parity-check_code
4. https://mns.ifn.et.tu-dresden.de/Lists/nPublications/Attachments/397/Zimmermann_E_WWRF_05.pdf
5. http://sysmasteronline.com/pdf/ANNEX%20VIII.pdf
6. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4536677&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4536677
