Channel Coding: Theory, Algorithms, and Applications

This book reviews the principles, methods, and techniques of important and emerging research topics in channel coding, covering theory, algorithms, and applications. It is edited by leading researchers in the field, who have commissioned experts to write on each topic. With this reference source you will:

* Quickly grasp a new area of research
* Understand the underlying principles of a topic and its applications
* Learn how a topic relates to other areas and which research issues remain unresolved

Editors: David Declercq, Marc Fossorier, Ezio Biglieri
Edition: 1
Publisher: Academic Press
Year: 2014

Language: English
Pages: 690

Half Title
Title Page
Copyright
Preface
Contributors
1 Turbo Codes: From First Principles to Recent Standards
1 Introduction
2 History of turbo codes
2.1 The origins of turbo codes
2.2 Concatenation
2.3 Negative feedback in the decoder and recursive systematic convolutional codes
2.4 Extrinsic information and iterative decoding
2.5 Parallel concatenation
3 Fundamentals of turbo coding
3.1 Recursive systematic convolutional (RSC) component codes
3.2 Block coding with turbo codes
3.3 The permutation
3.3.1 Regular permutation
3.3.2 Irregular permutations
4 Fundamentals of turbo decoding
4.1 The turbo principle
4.2 Soft-input soft-output decoding
4.2.1 Definitions
4.2.2 The MAP algorithm
4.2.3 The MAP algorithm in the logarithmic domain: Log-MAP and Max-Log-MAP
5 Industrial impacts of turbo codes
5.1 The very first implementations of turbo codecs
5.1.1 The CAS 5093 circuit
5.1.2 The Turbo4 circuit
5.2 Early applications of turbo codes
5.3 Turbo codes in standards
5.3.1 Mobile communication systems
5.3.2 Digital video broadcasting (DVB) standards
5.3.3 Other standards
5.3.4 Summary
6 Conclusion
References
2 Turbo-Like Codes Constructions
1 Introduction and bibliography survey
1.1 Introduction
2 Structure of concatenated codes
2.1 Main characteristics of turbo encoding structures
2.2 Trellis encoders
2.3 Mapper
2.3.1 Repeater
2.3.2 Parity-check bit generator
2.3.3 Constellation labeling
2.4 Interleaver
2.5 Rate conversion modules
2.6 Puncturer
2.7 Summary of encoding modules
2.8 Some turbo encoder structures and their main properties
2.8.1 Serially concatenated convolutional codes
2.8.2 Duobinary PCCC
2.8.3 (Irregular) repeat-accumulate codes
2.8.4 Self-concatenation
2.8.5 Other turbo coding structures
2.9 Convolutional versus block encoding
2.9.1 Periodic state enforcing of trellis encoders
3 ML analysis and design of constituent codes
3.1 Maximum-likelihood analysis
3.1.1 Word and bit error probabilities
3.1.2 Uniform interleaver
3.1.3 Analysis of PCCC
3.1.4 Analysis of SCCC
3.1.5 Analysis of hybrid concatenated codes with interleavers
3.1.6 More refined upper bounds
3.2 Design criteria for constituent encoders
3.2.1 Design of parallel concatenated convolutional codes with interleaver
3.2.2 A heuristic explanation of the interleaver gain
3.2.3 Design of serially concatenated convolutional codes with interleavers
3.3 Comparison between parallel and serially concatenated codes
3.4 Finding the optimum constituent encoders
4 Iterative decoding
4.1 Messages in iterative decoders and independence assumption
4.2 Soft-input soft-output modules
4.3 The SISO for the data ordering encoding modules
4.4 The SISO module for the trellis encoder
4.4.1 The SISO algorithm for computing the extrinsic LLRs
4.4.2 Trellis with multiple symbol labels
4.5 The SISO module for the mapper
4.5.1 Computation of SISO updating with the minimal trellis
4.6 Multiple code representations
5 Interleaver designs
5.1 Interleaver theory
5.2 Interleavers: basic definitions
5.2.1 Causal and canonical interleavers
5.3 Connections among interleaver parameters
5.4 Convolutional and block interleavers
5.4.1 Convolutional interleavers
5.4.2 Implementation of convolutional interleavers with minimal memory requirement
5.4.3 Block interleavers
5.4.4 Block interleaver causalization
5.4.5 The canonical causal interleaver of a block interleaver
5.4.6 The two-register causal interleaver of a block interleaver
5.5 Some practical interleavers
5.5.1 Random interleaver
5.5.2 Spread interleaver
5.5.3 Pseudo-random interleavers
5.5.4 Congruential-type interleavers
5.5.5 Multidimensional interleaver
5.5.6 More interleaver classes
5.5.7 Pruning and extending interleavers
6 Performances
6.1 Introduction
6.2 Iterative decoding versus ML upper bounds
6.3 Optimality of constituent encoders
6.4 Performance versus number of iterations
6.5 Comparison between PCCC and SCCC
6.6 Other concatenated structures
6.6.1 Simulation results for variations of PCCC codes
6.6.2 Simulation results for variations of SCCC codes
6.6.3 Simulation results for multiple turbo codes (DPCCC)
6.6.4 Simulation results for hybrid concatenated codes (HCCC)
6.6.5 Simulation results for self-concatenated codes
6.6.6 Simulation results for repeat-accumulate (RA) codes
6.6.7 Simulation results for repeat-accumulate-accumulate (RAA) codes
References
3 Low-Density Parity-Check Code Constructions
1 Introduction
2 LDPC codes and ensembles
2.1 Gallager ensemble
2.2 Unstructured irregular ensemble
2.3 Repeat-accumulate code ensemble
2.4 Multi-edge type ensemble
2.5 Protograph ensemble
2.6 LDPC convolutional code ensemble
3 Asymptotic analysis and optimization
3.1 Asymptotic threshold
3.2 Weight distribution
3.3 Optimization via differential evolution
4 Finite-length construction
4.1 Unstructured codes
4.1.1 Progressive edge-growth (PEG) algorithm
4.1.2 Improved PEG algorithms based on extrinsic cycle connectivity
4.1.3 Improved PEG algorithm for avoidance of small stopping sets
4.1.4 Improved PEG algorithms for error rate minimization
4.1.5 Randomized PEG algorithm and cage design
4.2 Structured codes
4.2.1 QC-LDPC codes via circulant PEG
5 LDPC codes in standards
5.1 IEEE 802.16-2009 LDPC codes
5.1.1 Construction of parity-check matrices
5.1.2 Efficient encoding
5.2 ITU-T G.9960 LDPC codes
5.3 CCSDS LDPC codes
5.4 Other standards including LDPC codes
5.5 Details of parity-check matrices
5.5.1 Exponent matrices for IEEE 802.16-2009 LDPC codes with n=2304
5.5.2 Exponent matrices for ITU-T G.9960 LDPC codes
5.5.3 Exponent matrices for IEEE 802.11-2012 LDPC codes
Acknowledgments
References
4 LDPC Decoders
1 Introduction
1.1 A decoding-oriented approach to coding
1.2 Decoding complexity, long codes, and Shannon limit
1.3 Evolution of LDPC decoding from Gallager to our days
1.3.1 Belief-propagation decoding
1.3.2 Min-sum and min-sum-based decoding
1.3.3 Stochastic decoding
1.3.4 Decoding over erasure channels
1.3.5 Non-binary LDPC decoding
1.4 Chapter's organization
2 Notation and terminology
3 Binary LDPC decoders
3.1 Bit-flipping decoding
3.2 Hard-decision MP decoders
3.2.1 Gallager-B decoding
3.2.2 Gallager-A decoding
3.2.3 Majority-voting decoding
3.2.4 Gallager-B decoding with extended alphabet
3.3 Belief-propagation decoding
3.4 Min-sum decoding
3.5 Min-sum-based decoding
3.5.1 Normalized/offset-min-sum
3.5.2 Self-corrected min-sum
3.5.3 Finite alphabet iterative decoders
3.6 Erasure decoding
3.7 Further readings
3.7.1 Other decoding algorithms
3.7.2 Implementation-related issues
4 Non-binary LDPC decoders
4.1 Notation update
4.2 Belief-propagation decoding
4.3 Min-sum decoding
4.4 Extended-min-sum decoding
4.5 Min-max decoding
4.6 Erasure decoding
4.7 Further readings
4.7.1 Other decoding algorithms
4.7.2 Implementation-related issues
Appendix
References
5 Code Design with EXIT Charts
1 Introduction
1.1 System model
1.2 Notation
2 Parallel concatenated codes
2.1 Encoder and decoder
2.2 The EXIT method
2.2.1 Decoding trajectory
2.2.2 Decoding model and transfer function
2.3 Code analysis and design
3 Serially concatenated codes
3.1 Encoder and decoder
3.2 EXIT analysis
3.3 Code analysis and design
4 LDPC codes
4.1 Decoder and decoding models
4.1.1 Variable-node decoder
4.1.2 Check-node decoder
4.1.3 Discussion of decoding models
4.1.4 Discussion of the area properties
4.2 Analysis and design for the BEC
4.2.1 EXIT functions
4.2.2 Code design
4.3 Analysis and design for the AWGN channel
5 Comments and generalizations
5.1 Estimation of mutual information
5.2 Theory of EXIT analysis
5.3 EXIT analysis for other codes or coded systems
6 Summary
References
6 Failures and Error Floors of Iterative Decoders
1 Introduction
2 Preliminaries
2.1 LDPC codes
2.2 Channel assumptions
2.3 Message passing decoding algorithms
2.3.1 Gallager A/B message passing algorithm
2.3.2 The sum-product algorithm
2.4 Bit flipping algorithms
3 Overview of decoding failures
4 Combinatorial characterization of decoding failures
4.1 Trapping sets on the BEC
4.2 Trapping sets on the BSC
4.3 Trapping sets on the AWGNC
5 Case study: Column-weight-three codes with the Gallager A/B algorithm on the BSC
5.1 Graphical representation
5.2 Topological relation
5.3 Evolution of trapping sets
5.4 FER estimation
6 Combating error floors
6.1 Improving error floor performance by constructing better Tanner graphs
6.1.1 Constructing better Tanner graphs by increasing girth
6.1.2 Constructing better Tanner graphs by avoiding trapping sets
6.1.3 Relative harmfulness
6.2 Improving error floor performance by designing better decoders
6.2.1 Finite precision of messages, quantized belief propagation, and low-complexity modifications
6.2.2 Decoding beyond belief propagation
6.2.3 Approaching ML performance via iterative decoding
7 Connections to LP decoding
7.1 Pseudo-codewords for LP decoders
8 Conclusion
Acknowledgments
References
7 Rate-Compatible LDPC and Turbo Codes for Link Adaptivity and Unequal Error Protection
1 Unequal error protection Turbo codes
1.1 Puncturing and pruning
1.2 Hybrid Turbo codes and their convergence
1.2.1 Local EXIT charts
1.2.2 Relation between local and global EXIT charts
1.3 Interleaver structures
2 Unequal error protection LDPC codes based on puncturing and pruning
2.1 Density evolution for general RC-LDPC codes
2.2 Design of good puncturing patterns for a given mother code
2.3 Pruning for creating irregular UEP check-node profiles
2.4 Structured RC-LDPC codes
3 Unequal error protection LDPC codes based on degree distribution optimization
3.1 Multi-edge-type UEP LDPC codes
References
8 Rateless Coding
1 Introduction
2 The fountain paradigm
2.1 Fountain coding and decoding: definitions and principles
2.2 The random binary fountain code
3 Rateless sparse-graph codes for the binary erasure channel: LT and Raptor codes
3.1 LT codes
3.2 Raptor codes
3.3 Decoding algorithms for the BEC
3.4 Raptor codes in standards
4 Extensions to noisy channels
4.1 The additive white Gaussian noise channel
4.2 Fading channels
4.3 Other channels
4.4 Link to fixed rate counterparts
5 Advanced sparse-graph based rateless coding schemes
5.1 LT/Raptor codes extensions
5.1.1 Improved decoding algorithms and sequence designs
5.1.2 Generalization of LT/Raptor codes
5.1.3 Distributed LT codes
5.1.4 Non-binary fountain codes
5.1.5 "Turbo"-based fountain coding
5.2 Other rateless coding schemes: beyond sparse-graph codes
6 Applications of rateless coding
6.1 Rateless coding versus IR-HARQ
6.2 Multimedia communications and broadcasting
6.2.1 Unequal error protection
6.2.2 Multimedia video communications
6.3 Wireless networks and relays
6.4 Distributed storage and data dissemination
6.5 Source and source-channel coding
References
9 An Introduction to Distributed Channel Coding
1 Introduction
1.1 Distributed channel coding
1.2 Outline of this chapter
1.3 Notations
2 The three-node relay channel
2.1 Basic model
2.1.1 AWGN relay channel
2.1.2 Binary erasure relay channel
2.2 Relaying strategies
2.3 Fundamental coding strategies for decode-and-forward relaying
2.3.1 Full-duplex relaying
2.3.2 Half-duplex relaying
2.3.3 Design objectives: achieving the optimal decode-and-forward rates
3 Distributed coding for the three-node relay channel
3.1 LDPC code designs for the relay channel
3.1.1 Code structures for decode-and-forward relaying
3.1.2 Irregular LDPC codes
3.1.3 Spatially coupled LDPC codes
3.2 Distributed turbo-codes and related code structures
3.2.1 Code optimization
3.2.2 Noisy relay
4 Relaying with uncertainty at the relay
4.1 Compress-and-forward relaying
4.2 Soft-information forwarding and estimate-and-forward
5 Cooperation with multiple sources
5.1 Two-user cooperative network: coded cooperation
5.2 Multi-source cooperative relay network
5.2.1 Spatially coupled LDPC codes
6 Summary and conclusions
Acknowledgment
References
10 Space-Time Block Codes
1 Introduction and preliminaries
2 STBCs with low ML decoding complexity
2.1 Encoding complexity, decoding complexity and diversity gain
2.1.1 Measure of ML decoding complexity
2.1.2 Full diversity
2.1.3 Problem statement of optimal rate-ML decoding complexity trade-off
3 Full-rate full-diversity STBCs
3.1 STBCs from division algebras
3.1.1 Division algebras—an introduction
3.1.2 Codes from the left regular representation of division algebras
3.1.3 Cyclic division algebras
3.2 Embedding cyclic division algebras into matrices over the maximal cyclic subfield
4 Perfect space-time block codes
5 Diversity and multiplexing gain trade-off of space-time codes
6 Space-time codes for asymmetric MIMO systems
6.1 Fast-decodable MIDO codes with large coding gain
6.2 DMT-optimal LSTBC-schemes for asymmetric MIMO systems
6.2.1 Fast-decodable STBCs
7 Distributed space-time codes
7.1 Communication with relays
7.1.1 Distributed space-time coding for asynchronous relay networks
7.2 Space-time codes for wireless two-way relaying
8 Conclusion
References
11 Coded Modulation
1 Introduction
2 Preliminaries
2.1 Gaussian and Rayleigh fading channels
2.2 Capacity of signal sets over the Gaussian channel
2.3 Overview of coded modulation schemes
2.4 Performance measure
3 Trellis coded modulation
3.1 Set partitioning
3.2 Encoder and decoder for trellis codes
3.2.1 Design of trellis coded modulation
3.3 Concatenated trellis coded modulation
4 Multilevel codes and multistage decoding
4.1 Multilevel codes
4.2 Multistage decoding
4.3 Code design for multilevel codes and multistage decoding
4.3.1 Component codes for low error probability
4.3.2 Design for capacity-approaching component codes
4.4 Iterative decoding of multilevel codes
4.5 Multilevel codes for unequal error protection
5 Bit-interleaved coded modulation
5.1 Encoding of BICM
5.2 Decoding BICM
5.3 BICM capacity and capacity-approaching codes
5.4 BICM with iterative decoding
5.5 Shaping and BICM
References
12 Joint Source-Channel Coding and Decoding
1 Why joint source-channel coding/decoding
2 Joint source-channel decoding basics
2.1 Various types of redundancy
2.1.1 Redundancy due to the syntax of source coders
2.1.2 Redundancy due to semantic of the source and of the source coder
2.1.3 Redundancy due to the packetization of compressed data
2.2 Decoders to exploit the redundancy
2.3 Reducing the decoding complexity
2.3.1 Aggregated trellises
2.3.2 Projected trellises
2.3.3 Sequential decoders
2.3.4 Iterative decoders
3 Joint source-channel coding basics
3.1 OPTA
3.2 Simple (counter-)example
3.3 To code or not to code
4 Modified source encoders
4.1 Variable-length error-correcting codes
4.1.1 Distance properties of variable-length codes
4.1.2 Code construction
4.2 Redundant signal representations
4.2.1 Multiple description scalar quantization
4.2.2 Correlating transforms
4.2.3 Frame expansion
4.2.4 Channel coding
4.3 Hierarchical and high-density constellations
4.3.1 Hierarchical modulations
4.3.2 High-density constellations
5 Accounting for the presence of a network
5.1 Redundancy in the protocol stack
5.2 Protocol-assisted channel decoding
5.2.1 Optimal estimator
5.2.2 Suboptimal estimator
5.3 Reliable header recovery
5.4 Reliable burst segmentation
5.4.1 Aggregated packets within a burst
5.4.2 Estimators for the number of packets and their boundaries
5.4.3 Example: WiMax burst segmentation
6 Conclusion
References
13 Hardware Design and Realization for Iteratively Decodable Codes
1 Introduction
2 Standard implementation
2.1 Quantization of input LLR
2.2 Standard turbo decoder architecture
2.2.1 Global architecture
2.2.2 Standard MAP architecture
2.2.3 MAP architecture for tail-biting code
2.3 Standard LDPC decoder architecture
3 Low complexity decoder
3.1 Internal precision optimization
3.2 Precision optimization in turbo decoder
3.3 Sliding-window technique in turbo decoder
3.4 Algorithm simplifications
3.5 Scheduling of Turbo-Code
4 High throughput architectures
4.1 Basic solutions to increase decoding speed
4.2 Average vs. worst case number of iterations
4.3 Parallel error-free memory access
4.4 High-speed Turbo-Code
4.4.1 Radix-4 architecture
4.4.2 Forward-Backward parallelism
4.4.3 Half iteration parallelism
4.5 High-speed LDPC code
5 Energy efficient architectures
5.1 General purpose methods
5.2 Application-specific methods
6 Exotic designs
6.1 Analog decoders
6.2 Stochastic decoders
6.3 Conclusion
7 A survey of relevant implementations
7.1 High throughput implementations
7.2 Low energy and low area implementations
7.3 Flexible decoders
7.4 Hardware accelerators
8 Conclusion
References
Subject Index