error control in data link layer and transport layer

Post by vins » Sun, 05 Feb 2006 15:33:42

why is error control implemented both at data link layer and transport


error control in data link layer and transport layer

Post by stephe » Mon, 06 Feb 2006 00:31:30

It isn't, strictly speaking.

TCP/IP runs transport-level error "stuff" (checksums) to pick up some end-to-end problems.

IP runs across lots of different data link types. Some of those (in fact
most) happen to include error detection (which may be a subset of what you
mean by error control). Most links are not IP-specific and also carry
other protocols, and the error checking is there as part of the link
protocol's design.

But there isn't any reason that has to be true in all cases.

I suspect that when your app runs over a loopback interface (i.e. IP packets
never leave the local machine), there aren't any link-layer error checks at all.
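The end-to-end check mentioned above is the 16-bit one's-complement Internet checksum (RFC 1071) used by IP, TCP and UDP headers. A minimal sketch in Python:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

# Well-known IPv4 header example: with the checksum field zeroed, the
# computed checksum is 0xB861; with it filled in, the header verifies to 0.
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
assert internet_checksum(hdr) == 0xB861
```

Note that this runs end to end regardless of what (if anything) each link underneath does about errors.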



error control in data link layer and transport layer

Post by vins » Tue, 07 Feb 2006 05:55:32

Can you clarify further if we talk about the issue w.r.t. the OSI model?

Anyway, thanks for taking the pains to reply and for having the spirit of
sharing your knowledge.


error control in data link layer and transport layer

Post by stephe » Tue, 07 Feb 2006 08:16:23

Sort of. You need to remember it is only a model, and that a lot of IP was
designed before the model existed.

What you also need to remember is that there are lots of protocols, and many are
multifunction. A lot of them don't fit all that well into the OSI scheme.

So if you use Ethernet as an example of "data link" - there is Ethernet
itself, which has a CRC for error checking (and there can be 802.1, which is
another optional layer in Ethernet, but notionally still "data link" in OSI terms).

But the Ethernet CRC doesn't help with "end to end" issues, since it only
exists across a single bridged set of Ethernets - error checking is also needed
end to end, since things can go wrong inside the routers that join the various
data links together.

Also there are other error checks anyway - in Ethernet, if a packet contains a
fraction of a byte, there is a problem; likewise if the length field is
larger than the packet size, or the packet is too small, and so on. In a
sense this is all error checking.
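As an aside, the Ethernet frame check sequence is a CRC-32, and Python's `zlib.crc32` uses the same polynomial, so a toy demonstration of the link-level check is easy (the payload bytes are, of course, made up):

```python
import zlib

frame = b"example Ethernet payload"
fcs = zlib.crc32(frame)  # same CRC-32 polynomial as the 802.3 FCS

corrupted = bytearray(frame)
corrupted[3] ^= 0x01  # a single bit flipped in transit
# The receiver recomputes the CRC, sees a mismatch, and drops the frame.
assert zlib.crc32(bytes(corrupted)) != fcs
```

As noted above, this only protects one hop; corruption inside a router happens after the CRC has been checked and stripped.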



error control in data link layer and transport layer

Post by sidd » Tue, 07 Feb 2006 14:44:29

"Error control" is a very broad concept, and it occurs in some form on
just about every level of an IP network. Just as it will occur in
almost any form of protocol,hardware device, or application level

As said, Ethernet can check that everything is going smoothly at the
data link level (higher-level protocols are simply unable to do this),
but it still needs to be robust enough to allow all of the higher
layers to be implemented however we want them to be. It would be
silly for your NIC to have knowledge of what TCP/IP headers should look
like, so that sort of error checking has to be done at the
appropriate level.

error control in data link layer and transport layer

Post by robertwess » Wed, 08 Feb 2006 08:02:39

The amount of error control at the link layer depends on the type
of service the link layer is intended to provide. As an extreme case
consider SLIP, which was intended to provide the simplest possible way
to run IP packets over a dial-up line, and does *nothing* in the way of
error control at the link level, leaving users hoping that the TCP/IP
checksums catch any errors. That's at least one good reason SLIP is
not used much anymore.
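For the curious, RFC 1055 SLIP framing really is just byte-stuffing plus a terminator; a sketch makes the absence of any error check visible:

```python
# RFC 1055 SLIP special bytes
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    """Frame an IP packet for SLIP: escape END/ESC bytes, terminate with END.
    Note what is missing: no checksum, no CRC, not even a length field."""
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)
```

Any line noise inside the frame is delivered upward unmodified, and only the TCP/IP checksums stand a chance of catching it.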

TCP/IP's end-to-end error control allows a certain amount of error in
the link layer. Other protocols, for example old sub-area SNA, largely
assumed that communication between adjacent nodes was error free, so
error correction had to happen at the link layer.

Even with TCP/IP's end-to-end error control, you want to limit the
number of errors that occur. First, you want a very small number of
actually damaged packets to be passed to the IP stack because the
TCP/IP checksums are not all that strong. So most link level protocols
use at least a 32 bit CRC. In TCP/IP implementations packets (frames)
that fail the link level tests are just discarded and are dealt with as
lost packets by the stack.
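A toy illustration of why the strong link-level CRC matters even though TCP has its own checksum: the one's-complement sum is insensitive to the order of 16-bit words, so some corruptions slip straight past it while a CRC-32 catches them (the sketch below reuses a minimal RFC 1071-style checksum):

```python
import zlib

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

original = b"ABCDEFGH"
swapped = b"CDABEFGH"  # two 16-bit words reordered in transit
# The one's-complement sum is order-insensitive, so this error goes unnoticed...
assert internet_checksum(original) == internet_checksum(swapped)
# ...while a 32-bit CRC catches it.
assert zlib.crc32(original) != zlib.crc32(swapped)
```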

Second, you want to minimize the number of lost packets due to
transmission errors, because lost packets are terrible for
performance. Packets lost due to network congestion are also bad for
performance, but if you're losing packets due to network congestion,
you probably want to slow down anyway. So the amount of error control
designed into the link layer will be dependent on the physical error
rate. On a link where actual bit errors are rare, like Ethernet, a
simple CRC to catch errors is sufficient. A very noisy radio link
may require a very substantial amount of error-correcting capability
to get the "good packet rate" up to a high enough value. This does not
require a zero error rate, nor does it imply a backward error-correcting
scheme involving retransmissions (although those can be
used). For many high-noise links a forward error-correcting scheme is
used, and some residual level of uncorrectable (link-level) errors is tolerated.
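As a toy illustration of forward error correction, a Hamming(7,4) code corrects any single flipped bit per codeword with no retransmission (real radio links use far stronger codes; this is only a sketch):

```python
def hamming74_encode(d):
    """4 data bits -> 7-bit codeword (positions 1..7, parity at 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[5] ^= 1                           # noise flips one bit on the link
assert hamming74_decode(sent) == word  # receiver repairs it, no retransmit
```

If two bits in the same codeword flip, the correction fails; that is the residual uncorrectable error rate the paragraph above refers to.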

For passable TCP performance, you need to limit your lost packets to no
more than 2-3%, and even that will put a rather substantial cap on
window sizes. Ideally you want a much lower loss rate. And remember
that this loss rate is end-to-end, so it's essentially the
cumulative loss rate of all the links between the end nodes.
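To put rough numbers on that cap: the well-known Mathis et al. approximation, throughput ≈ MSS / (RTT · √p), shows how quickly loss eats TCP throughput (the MSS and RTT values below are just illustrative):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Rough TCP throughput bound (bytes/s): MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))

# 1460-byte MSS over a 50 ms RTT path at various end-to-end loss rates
for p in (0.0001, 0.01, 0.03):
    bps = 8 * mathis_throughput(1460, 0.05, p)
    print(f"loss {p:.2%}: ~{bps / 1e6:.1f} Mbit/s")
```

With these assumed numbers, going from 0.01% to 3% loss costs well over an order of magnitude in achievable throughput, which is why that 2-3% figure is really a ceiling, not a target.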