DATA COMMUNICATION AND COMPUTER NETWORKS
Network Layer; Congestion Control; TCP, UDP
B.Tech CSE V Semester
For educational purposes only; Source: Forouzan & Tanenbaum
Unit 3
Network layer:
Design Issues: store-and-forward,
Services to transport layer - Connection less and
Connection oriented services
Routing Algorithms:
The optimality principle, shortest path routing,
Flooding,
Distance vector and Link state,
Multicast Routings.
Unit 4
Congestion Control:
Principles, congestion prevention policies,
congestion control in virtual circuits and
datagram subnets, load shedding, jitter control.
Internetworking:
Tunneling, Internetwork routing,
Fragmentation.
The IP protocol, IP address,
Gateway routing protocols: OSPF, BGP.
Unit 5
Transport Layer:
UDP, TCP- service model, protocol, segment
header,
connection management, Transmission Policy.
Application Layer:
The DNS Name Space,
Resource Records, Name Servers.
Network Layer: Design Issues
[Figure (Forouzan): source host A communicates with destination host B through routers R1, R3 and R4; the network, data link and physical layers are involved at each hop, and each datagram consists of data (D) plus a header (H).]
Source: Forouzan
Network Layer: Design Issues
Internetworking: process of connecting different networks by
using networking devices such as routers, gateways etc.,
Packetizing: encapsulate packets received from upper
layer protocols
Addressing: identify each device uniquely to allow
global communication
Routing: determine optimal route for sending a packet
from one host to another
Fragmenting: decapsulate packets from one and
encapsulate them for another network
Source: Forouzan
Network layer at the source
Packetizer
encapsulate packet from upper layer
Add universal source and destination address
Processing Module
verify whether destination address is host address.
If so routing is not needed
Routing Module
find interface from which packet must be sent
Fragmentation Module
Breaking packets into smaller pieces(fragments) such that
resulting pieces can pass through a link
Source: Forouzan
Network layer at a router
Processing Module
Checks if the packet has reached its destination or
needs to be forwarded
Routing Module
finds the interface from which packet must be sent
Source: Forouzan
Network layer at the destination
Source: Forouzan
Store and Forward Packet Switching
Source: Tanenbaum
Services Provided to Transport Layer
 Designing goals
 Independent of subnet technology
 Transport layer shielded from number, type, and
topology of subnets
 Uniform network address numbering
 Two Types of Services
 Connectionless
 Connection-oriented
Implementation of Connectionless Service
Source: Tanenbaum
Implementation of Connection-Oriented Service
Routing within a virtual-circuit network
Source: Tanenbaum
Comparison of Virtual-Circuit
and Datagram Networks
Routing
 Routing is the process of forwarding of a packet in
a network so that it reaches its intended destination.
 The main goals of routing are:
 Correctness
 low overhead
 robustness
 stability
 fairness
Routing Algorithms
•The Optimality Principle
•Shortest Path Routing
•Flooding
•Distance Vector Routing
•Link State Routing
•Hierarchical Routing
•Broadcast Routing
•Multicast Routing
The Optimality Principle
If router J is on the optimal path from router I
to router K, then the optimal path from J to K
also falls along the same route.
Hierarchical Routing
Source: Tanenbaum
Hierarchical Routing
Broadcast Routing
 Sending a packet to all destinations simultaneously is
called broadcasting.
 Ex: when a company needs to push a new software version to all users.
 1. Make ‘n’ copies of the packet, one for every destination, and send a copy to each:
 Drawback: Uses lots of bandwidth and source needs to have
complete list of all destinations
Broadcast Routing
 2. When a router receives a packet that is to be broadcast, it simply floods it out of all interfaces.
 Drawback: This method is easy on the router's CPU but may cause the problem of duplicate packets being received from peer routers.
 Reverse path forwarding is a technique, in which router
knows in advance about its predecessor from where it
should receive broadcast.
 Advantage: Detects and discards duplicates.
Multicast Routing
 In broadcast routing, packets are sent to all nodes even if they do not want them. In multicast routing, the data is sent only to the nodes that want to receive the packets.
 Multicast routing uses spanning tree to avoid looping.
 Multicast routing also uses reverse path Forwarding
technique, to detect and discard duplicates and loops.
Link State Routing
The following five steps are followed to implement LSR.
1. Learning about the Neighbors
2. Measuring Line Cost.
3. Building Link State Packets.
4. Distributing the Link State Packets.
5. Computing the New Routes.
Step 1:Learning about the Neighbors
 Upon router booting, discover who the neighbours are, using
HELLO packet.
 The receiving router replies with a message identifying itself.
Source: Tanenbaum
Step 2: Measuring Line Cost
 Line cost or delay is measured by sending ECHO message
and measure return time.
 For better results, this step can be repeated several times and the average is taken.
 Waiting time in the router queue can be added to account for line traffic load.
Step 3: Building Link State Packets:
 Packet containing:
 Identity of the sender
 Sequence number
 Age
 List of neighbours
 When to build the link state packets?
 Periodically
 When significant events occur
Source: Tanenbaum
Step 4: Distributing Link State Packets:
 Distributing link state packets reliably
 Arrival time for packets different
 How to keep consistent routing tables?
 Flooding is used
 each packet contains seq no. that is incremented for each new packet sent
 Routers keep track of source router, seq
 When new LSP arrives, it is checked against LSP already seen:
 New: forwarded on all lines except the one it arrived on; duplicates are discarded
 A packet with a seq. no. lower than the highest one already seen is rejected (a sketch of this check follows)
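A minimal sketch in Java of the acceptance check just described; the per-source table of highest sequence numbers and the method name accept are illustrative assumptions, not an API from the slides.
import java.util.HashMap;
import java.util.Map;

public class LspFloodingSketch {
    // Highest link-state-packet sequence number seen so far, per source router.
    private final Map<String, Long> highestSeq = new HashMap<>();

    // Returns true if the LSP is new and should be forwarded on all lines
    // except the one it arrived on; false if it is a duplicate or stale.
    boolean accept(String sourceRouter, long seq) {
        Long best = highestSeq.get(sourceRouter);
        if (best == null || seq > best) {       // new information from this source
            highestSeq.put(sourceRouter, seq);
            return true;                        // flood it onward
        }
        return false;                           // duplicate or lower sequence number: reject
    }

    public static void main(String[] args) {
        LspFloodingSketch r = new LspFloodingSketch();
        System.out.println(r.accept("B", 7));   // true  (first LSP from B)
        System.out.println(r.accept("B", 7));   // false (duplicate)
        System.out.println(r.accept("B", 5));   // false (lower than the highest seen)
    }
}
A real implementation would also use the LSP's Age field, mentioned above, to time out stale entries.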
Distributing the Link State Packets
Source: Tanenbaum
Step 5: Computing new routes:
 With a full set of link state packets, a router can:
 Construct the entire subnet graph
 Run Dijkstra’s algorithm to compute the shortest
path to each destination
 Problems for large subnets
 Memory to store data
 Compute time for developing these tables.
Adaptive and Non adaptive Routing
 Adaptive
 Routing is based on current measurements of traffic
 Routers exchange updates and router table information.
 Router may select a new route for each packet.
 Ex: DVR, LSR
 Non Adaptive
 Routing is computed in advance
 N/w Admin manually enters routing paths into router.
 Once the path to dest has been selected, router sends all packets along the route
 Ex: Flooding, SPR
Distance Vector Vs. Link State Routing
 Distance Vector
 Sends entire routing table
 Slow convergence
 Susceptible to routing loops
 Does not know network topology
 Simple to configure
 Periodic updates (30/60 sec)
 Ex: RIP; BGP
 Link State
 Sends only link state info
 Fast convergence
 Less susceptible to routing loops
 Knows entire n/w topology
 Hard to configure
 Updates are triggered, not periodic
 Ex: OSPF; IS-IS
Single-Source Shortest Path Problem
The problem of finding shortest paths from a source
vertex v to all other vertices in the graph.
Dijkstra's algorithm
Dijkstra's algorithm is a solution to the single-source shortest path problem in graph theory.
It works on both directed and undirected graphs. However, all edges must have nonnegative weights.
Input: Weighted graph G = {E, V} and source vertex v ∈ V, such that all edge weights are nonnegative
Output: Lengths of shortest paths (or the shortest paths themselves) from the given source vertex v ∈ V to all other vertices
Dijkstra's algorithm - Pseudocode
dist[s] ← 0 (distance to the source vertex is zero)
for all v ∈ V – {s}
    do dist[v] ← ∞ (set all other distances to infinity)
S ← ∅ (S, the set of visited vertices, is initially empty)
Q ← V (Q, the queue, initially contains all vertices)
while Q ≠ ∅ (while the queue is not empty)
    do u ← mindistance(Q, dist) (select the element of Q with the minimum distance)
       S ← S ∪ {u} (add u to the list of visited vertices)
       for all v ∈ neighbors[u]
           do if dist[v] > dist[u] + w(u, v) (if a new shortest path is found)
                  then dist[v] ← dist[u] + w(u, v) (set the new value of the shortest path)
return dist
Dijkstra(Graph, source):
for each vertex v in Graph.Vertices:
dist[v] ← INFINITY
prev[v] ← UNDEFINED
add v to Q
dist[source] ← 0
while Q is not empty:
u ← vertex in Q with min dist[u]
remove u from Q
for each neighbor v of u still in Q:
alt ← dist[u] + Graph.Edges(u, v)
if alt < dist[v]:
dist[v] ← alt
prev[v] ← u
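A runnable Java sketch of the pseudocode above; the adjacency-list representation, the class name and the small three-node graph in main() are illustrative assumptions, not part of the original slides.
import java.util.*;

public class DijkstraSketch {
    // Returns dist[v] = length of the shortest path from source to v.
    // adj.get(u) is a list of edges {v, w} meaning an edge u -> v with weight w.
    static int[] shortestPaths(List<List<int[]>> adj, int source) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Priority queue ordered by tentative distance; entries are {vertex, distance}.
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0];
            if (cur[1] > dist[u]) continue;                // stale entry, skip
            for (int[] edge : adj.get(u)) {                // edge = {neighbour v, weight w(u,v)}
                int v = edge[0], w = edge[1];
                if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v]) {  // relaxation step
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Illustrative 3-node graph: 0->1 (cost 2), 0->2 (cost 7), 1->2 (cost 3)
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < 3; i++) adj.add(new ArrayList<>());
        adj.get(0).add(new int[]{1, 2});
        adj.get(0).add(new int[]{2, 7});
        adj.get(1).add(new int[]{2, 3});
        System.out.println(Arrays.toString(shortestPaths(adj, 0)));  // prints [0, 2, 5]
    }
}
The priority queue plus the stale-entry check plays the role of "remove u from Q" in the pseudocode above.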
Dijkstra’s example 2
Shortest Path Routing
SPR: Example 2
[Figure: a weighted graph with eight nodes A–H and the labelled edge weights (values between 1 and 7) as drawn in the original slide.]
Solution: SPR Example 2
[Figure: the same graph labelled by Dijkstra’s algorithm with A as the source; the (distance, previous-node) labels on the slide include (2,A), (4,B), G(5,E), (6,E), H(8,F), (9,B) and D(10,H).]
Explanation: SPR Example 2
[Figure: step-by-step working of the shortest-path computation on the same graph.]
Shortest Path Routing
SPR: Example 3
Solution: SPR Example 3
(Example 3 and its solution appear as figures in the original slides.)
Distance Vector Routing
 c(x,v) = cost of the direct link from x to v
 Dx(y) = estimate of the least cost from x to y
 Node x maintains its neighbors’ distance vectors
 For each neighbor v, x maintains Dv = [Dv(y): y ∈ N]
 Each node v periodically sends Dv to its neighbors
 And neighbors update their own distance vectors:
Dx(y) ← minv{ c(x,v) + Dv(y) } for each node y ∈ N
 Over time, the distance vector Dx converges
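A minimal Java sketch of the Bellman–Ford update Dx(y) ← minv{ c(x,v) + Dv(y) } above; the cost-matrix representation and method name are assumptions, and the three-node topology in main() matches Example 1 below.
import java.util.Arrays;

public class DistanceVectorSketch {
    static final int INF = 1_000_000;   // placeholder for "infinity"

    // One round of the DV update at node x:
    // Dx(y) = min over neighbours v of { c(x,v) + Dv(y) }
    static int[] updateVector(int x, int[][] linkCost, int[][] dv) {
        int n = linkCost.length;
        int[] newDx = new int[n];
        Arrays.fill(newDx, INF);
        newDx[x] = 0;
        for (int y = 0; y < n; y++) {
            if (y == x) continue;
            for (int v = 0; v < n; v++) {
                if (v == x || linkCost[x][v] == INF) continue;       // v must be a neighbour
                newDx[y] = Math.min(newDx[y], linkCost[x][v] + dv[v][y]);
            }
        }
        return newDx;
    }

    public static void main(String[] args) {
        // Three nodes x=0, y=1, z=2 with costs c(x,y)=2, c(y,z)=1, c(x,z)=7
        int[][] c  = {{0, 2, 7}, {2, 0, 1}, {7, 1, 0}};
        int[][] dv = {{0, 2, 7}, {2, 0, 1}, {7, 1, 0}};   // initial vectors = direct costs
        System.out.println(Arrays.toString(updateVector(0, c, dv)));  // [0, 2, 3]: x reaches z via y
    }
}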
Distance Vector Routing: Example1
[Figure: a three-node network with nodes X, Y, Z and link costs c(x,y) = 2, c(y,z) = 1, c(x,z) = 7.]
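A worked step for this topology, as a sketch of the update rule: x computes Dx(z) = min{ c(x,y) + Dy(z), c(x,z) + Dz(z) } = min{ 2 + 1, 7 + 0 } = 3, so x learns that reaching z via y (cost 3) is cheaper than the direct link (cost 7).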
Distance Vector Routing
Distance Vector Routing: Example 2
[Figure: a three-node network with nodes X, Y, Z and link costs c(x,y) = 4, c(y,z) = 1, c(x,z) = 50; the slide also marks a changed cost of 1 on the x–y link.]
Distance Vector Routing
 (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.
Flooding
 Flooding adapts the technique in which every
incoming packet is sent on every outgoing line
except the one on which it arrived.
 One problem with this method is that packets may
go in a loop. As a result of this a node may receive
several copies of a particular packet which is
undesirable.
Flooding
 Some techniques adapted to overcome these
problems are as follows:
 Sequence Numbers
 Hop Count
 Spanning Tree
 A flooding attack occupies the host memory buffer,
making it impossible to make new connections,
resulting in a denial of service.
Flooding: Advantages
 Simple to setup and implement, since a router may know only
its neighbours.
 Robust, i.e., even in case of malfunctioning of a large number of routers, the packets find a way to reach the destination.
 All nodes which are directly or indirectly connected are visited, so there is no chance of any node being left out. This is a main criterion in case of broadcast messages.
 The shortest path is always chosen by flooding
Flooding: Disadvantages
 Network congestion
 Wastage of network resources
 Security risks
 Difficulty in network troubleshooting
Flooding
 Every incoming packet is sent out on every outgoing
line except for the input line
 Problem
 Large number of packets are generated
 Solutions (see the sketch after this list)
 Hop counter
 Avoiding duplicates
 Selective flooding
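A minimal Java sketch of the first two solutions listed above – a hop counter plus duplicate suppression using (source, sequence number) pairs; the Packet class, the seen set and the method name are illustrative assumptions, not part of the original slides.
import java.util.HashSet;
import java.util.Set;

public class FloodingSketch {
    // A flooded packet carries its source, a per-source sequence number and a hop counter.
    static class Packet {
        final String source;
        final long seq;
        int hopsLeft;
        Packet(String source, long seq, int hopsLeft) {
            this.source = source; this.seq = seq; this.hopsLeft = hopsLeft;
        }
    }

    // (source, seq) pairs this router has already flooded.
    private final Set<String> seen = new HashSet<>();

    // Decide whether an incoming packet should be forwarded on all lines except inLine.
    boolean shouldForward(Packet p, int inLine) {
        if (p.hopsLeft <= 0) return false;      // hop counter exhausted: drop
        String key = p.source + "#" + p.seq;
        if (!seen.add(key)) return false;       // duplicate already flooded: drop
        p.hopsLeft--;                           // decrement the hop counter
        return true;                            // forward on every line except inLine
    }

    public static void main(String[] args) {
        FloodingSketch router = new FloodingSketch();
        Packet p = new Packet("A", 1, 3);
        System.out.println(router.shouldForward(p, 0));  // true  (first copy)
        System.out.println(router.shouldForward(p, 2));  // false (duplicate)
    }
}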
Flooding - Conclusion
 Optimal
 Shortest path is always chosen
 No other algorithm can produce a shorter delay
 Robust
 Not practical in most applications
 Useful in some applications
 Military application: robustness
 Distributed database applications
 Concurrent update
 Wireless networks
Leaky Bucket Algorithm (congestion control)
 The leaky bucket is an algorithm related to
congestion control, based on an analogy of how a
bucket with a constant leak will overflow if either
the average rate at which water is poured in
exceeds the rate at which the bucket leaks or if
more water than the capacity of the bucket is
poured in all at once
No matter the rate at which water enters the bucket, the outflow is at a constant rate
Once the bucket is full, any additional water entering it spills over the sides and is lost
Leaky bucket
Source: Tanenbaum
Leaky bucket
Source: Forouzan
 Consider that, each network interface has a leaky bucket.
 Now, when the sender wants to transmit packets, the packets are thrown into the
bucket. These packets get accumulated in the bucket present at the network
interface.
 If the bucket is full, the packets are discarded by the bucket and are lost.
Leaky bucket
 This bucket will leak at a constant rate. This means that the packets will be
transmitted to the network at a constant rate. This constant rate is known as the
Leak Rate or the Average Rate.
 In this way, bursty traffic is converted into smooth, fixed traffic by the leaky
bucket.
 Queuing and releasing the packets at different intervals help in reducing network
congestion and increasing overall performance.
 Consider leaky bucket capacity = 5, fixed data flow
rate = 2, packet sizes = 5, 4, 3
Leaky bucket Example
Second | Packets Received | Packets Sent | Packets Left | Packets Dropped
-------|------------------|--------------|--------------|----------------
   1   |        5         |      2       |      3       |        0
   2   |        4         |      2       |      3       |        2
(bucket capacity = 5, fixed data flow rate = 2, packet sizes = 5, 4, 3)
Leaky bucket Example 2
 consider Bucket size capacity = 1000 bytes, leaky
bucket rate (fixed data flow rate) = 500
bytes/second.
 Packet sizes [600,900,800,1000,1900]
 Solve using leaky bucket
 Enter The Bucket Size capacity: 1000
 Enter fixed data flow rate : 500
 Enter no. of seconds to simulate: 5
 Enter The Size Of The Packet Entering At 1sec: 600
 Enter The Size Of The Packet Entering At 2sec: 900
 Enter The Size Of The Packet Entering At 3sec: 800
 Enter The Size Of The Packet Entering At 4sec: 1000
 Enter The Size Of The Packet Entering At 5sec: 1900
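A Java sketch of the leaky-bucket computation behind these examples, under the assumption that in each second the arriving packet is first added to the bucket (dropping any overflow) and the bucket then leaks at the fixed rate; the variable names and the printed table are illustrative, and the slides' own tables may use a slightly different per-second convention.
public class LeakyBucketSketch {
    public static void main(String[] args) {
        int capacity = 1000;                           // bucket size in bytes (Example 2)
        int leakRate = 500;                            // bytes drained per second
        int[] arrivals = {600, 900, 800, 1000, 1900};  // packet sizes arriving at seconds 1..5

        int bucket = 0;                                // bytes currently queued in the bucket
        System.out.println("Sec | Arrived | Sent | Left | Dropped");
        for (int sec = 0; sec < arrivals.length; sec++) {
            int arrived = arrivals[sec];
            int space = capacity - bucket;
            int accepted = Math.min(arrived, space);   // what fits into the bucket
            int dropped = arrived - accepted;          // overflow is lost
            bucket += accepted;
            int sent = Math.min(leakRate, bucket);     // constant-rate leak
            bucket -= sent;
            System.out.printf("%3d | %7d | %4d | %4d | %7d%n",
                              sec + 1, arrived, sent, bucket, dropped);
        }
    }
}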
Unit 4: Congestion Control & Internetworking
 Congestion control principles
 Congestion control prevention policies
 Congestion control in VC and datagram subnets
 Load shedding
 Jitter control
 Internetworking:
 Concatenated VCs
 Connection less internetworking
 Tunneling, Fragmentation, IP
 IP Address, Gateway Routing protocols
Congestion?
Congestion refers to a network state where a node, router or link carries so much data that it may degrade the performance of the network.
Too many packets in (a part of) the subnet.
May occur if the load on the network – the number of packets sent to the network – is greater than the capacity of the network – the number of packets a network can handle.
The network layer and transport layer share the responsibility of handling congestion.
Congestion control refers to mechanisms and techniques to control the congestion and keep the load below the capacity.
[Figure: packet flow through a router’s input queues and output queue.]
Congestion
 On packet arrival:
 Packet is put at the end of input queue
 The processing module of the router removes the packet from the front of the queue and makes a routing decision using the routing table.
 The packet is put into the appropriate output queue and waits its turn to be sent.
 If the rate of packet arrival > the packet processing rate, the input queue size will increase.
 If the rate of processing > the rate of departure, the output queue size will increase.
Congestion
[Figure: two sources – one on a 100-Mbps FDDI ring and one on a 10-Mbps Ethernet – feed a router whose outgoing 1.5-Mbps T1 link leads to the destination.]
Flow Control Vs. Congestion Control
 Flow control: slow down the sender if the sender sends the data at a
faster rate than the receiving capacity of the receiver.
 Flow control relates to traffic between two machines, while congestion control is more global. Flow control makes sure that a fast sender cannot continually transmit data faster than the receiver is able to absorb it.
 Congestion control has to make sure that the subnet is able to carry the offered traffic; it is a global issue.
 Congestion control is a mechanism and techniques to control the
congestion and keep the load below the capacity.
Causes of Congestion
 limited resources
 insufficient memory
 Slow processors
 Low bandwidth
 Mismatch in updates of parts of system
Congestion Control
 Congestion control refers to techniques and mechanisms
that can either prevent congestion, before it happens,
or remove congestion, after it has happened.
 In general, congestion control mechanisms can be
divided into two broad categories:
 open-loop congestion control (prevention) and
 closed-loop congestion control (removal).
Congestion control: Principles
open loop (Prevention)
 Makes an attempt to prevent congestion from occurring; make sure the problem does not occur
 Tools:
 Decide when to accept traffic
 Decide when to discard packets, and which ones
 Make scheduling decisions in the subnet
 Two types: source based & destination based
 Decisions are made without considering the current state of the network
closed loop (Removal)
 Monitor: where and when does congestion occur?
 % packets discarded
 average queue length
 number of packets that time out
 average packet delay
 A rising number indicates growing congestion
 Pass collected info to places where actions can be taken = the source of traffic
 Adjust system operation
 Increase resources: bandwidth
 Decrease load: deny or degrade service
Congestion: Prevention Policies
 Open-loop solutions minimize congestion; they try to achieve their goals by using appropriate policies at various layers:
Transport layer:
 Retransmission policy
 Acknowledgement policy
 Flow control policy
 Timeout determination (transit time over the network is hard to predict)
Network layer:
 Virtual circuits vs. datagrams in the subnet (many congestion control algorithms work only with VCs)
 Packet queueing and service policy (one queue per input/output line, round robin)
 Packet discard policy
 Routing algorithm (spreading traffic over all lines)
 Packet lifetime management
Data link layer:
 Retransmission policy (Go-Back-N puts a heavier load on the network than Selective Reject)
 Acknowledgement policy (piggyback ACKs onto reverse traffic)
 Flow control policy (a small window reduces traffic and thus congestion)
 Several techniques can be employed for congestion
control. These include:
 Warning bit
 Choke packets
 Load shedding
 Random early discard
 Traffic shaping
 The first 3 deal with congestion detection and recovery.
The last 2 deal with congestion avoidance.
Congestion Control: Virtual circuit subnets
 Admission control is a validation process in
communication systems where a check is performed
before a connection is established to see if current
resources are sufficient for the proposed connection
 No new virtual circuits are set up when congestion is signalled
 Route new virtual circuits around problem areas.
Congestion Control: VC contd..
Reserving resources for every virtual circuit even when there is no congestion leads to wastage of resources, so a balanced reservation of resources is preferred.
 Negotiation when the virtual circuit is set up
 About kind of traffic + service desired
 Resource reservation in subnet
Line capacity
Buffers in routers
Congestion Control in Datagram
Subnets: Warning Bit
A warning bit is sent back in the acknowledgement to the source in case of congestion.
Every router on the path can set the warning bit.
u_new = a · u_old + (1 − a) · f
Where a indicates how fast the router forgets recent history
f indicates line utilization
‘u’ value ranges from 0.0 to 1.0
If u is above a threshold, a warning state is reached.
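A small Java illustration of the utilization estimate above; the constant a = 0.9 and the warning threshold 0.8 are arbitrary illustrative values, not taken from the slides.
public class UtilizationSketch {
    static double u = 0.0;                  // smoothed line utilization, 0.0 .. 1.0
    static final double A = 0.9;            // the slide's constant a (history weighting)
    static final double THRESHOLD = 0.8;    // illustrative warning threshold

    // f is the instantaneous utilization sample for the last interval (0.0 .. 1.0)
    static boolean sample(double f) {
        u = A * u + (1 - A) * f;            // u_new = a * u_old + (1 - a) * f
        return u > THRESHOLD;               // true => warning state, set the warning bit
    }

    public static void main(String[] args) {
        for (int i = 0; i < 30; i++) {
            if (sample(1.0)) {              // line fully busy every interval
                System.out.println("warning state after interval " + (i + 1));
                break;
            }
        }
    }
}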
Warning Bit
 A special bit in the packet header is set by the
router to warn the source when congestion is
detected.
 The bit is copied and piggy-backed on the ACK and
sent to the sender.
 The sender monitors the number of ACK packets it
receives with the warning bit set and adjusts its
transmission rate accordingly.
Choke Packets
 Used in both VC and datagram subnets
• Control packet
 Generated at congested node
 Sent to source node
 e.g. ICMP source quench
 From router or destination
 The source cuts back until no more source quench messages arrive
 Sent for every discarded packet, or in anticipation of congestion
Choke Packet
 Source, upon receiving a choke packet
 Reduces traffic by a percentage after receiving choke packet
 After time interval expired, listens
 If choke packet received then
goto the step of reducing traffic
else increase traffic
 Typically
 First choke packet causes data rate reduced to 50%, then 25%, …
 Traffic is increased in smaller increments
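A Java sketch of the sender-side behaviour described above: halve the rate on each choke packet, ignore further increases during the listening interval, and increase in small steps only after a quiet interval; the class and field names are assumptions.
public class ChokeResponseSketch {
    private double rate;                        // current sending rate, e.g. in packets/sec
    private boolean chokedThisInterval = false;

    ChokeResponseSketch(double initialRate) { this.rate = initialRate; }

    // Called when a choke packet arrives from the network.
    void onChokePacket() {
        rate = rate * 0.5;                      // cut traffic by a percentage (50% here, then 25%, ...)
        chokedThisInterval = true;
    }

    // Called when the listening interval expires.
    void onIntervalExpired() {
        if (!chokedThisInterval) {
            rate = rate + 1.0;                  // no choke heard: increase traffic in small increments
        }
        chokedThisInterval = false;             // start listening again
    }

    public static void main(String[] args) {
        ChokeResponseSketch sender = new ChokeResponseSketch(100.0);
        sender.onChokePacket();                 // rate drops to 50
        sender.onIntervalExpired();             // choked interval: no increase
        sender.onIntervalExpired();             // quiet interval: rate creeps up to 51
        System.out.println(sender.rate);        // prints 51.0
    }
}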
Hop-by-Hop Choke Packets
 Choke packet takes too long to get back to source
 Choke packet affect each hop along the path
 The goal is to address congestion quickly at the point of greatest need and to propagate the “relief” back toward the source
 This generates greater need for buffers at the router
 Required to reduce output
 Meanwhile the input continues full blast until the choke
packet propagates to the next hop
Load Shedding
 When routers are being inundated by packets that they can not handle, they just
throw them away
 Dropping packets randomly may not reduce congestion.
 Select the right packets to drop is very important
 For file transfer, old packet is worth more than a new one
 For multimedia, a new packet is more important than an old one
 Senders must mark packets in priority classes to indicate how important they are
 A full frame is more important than a difference frame in compressed video
transmission
 The routers can first drop packets from the lowest class, then the next lowest class,
and so on.
Random Early Discard (RED)
 This is a proactive approach in which the router discards one
or more packets before the buffer becomes completely full.
 Each time a packet arrives, the RED algorithm computes the
average queue length, avg.
 Congestion is minimal when avg < the lower threshold.
 Congestion is severe if avg > the upper threshold; the packet is discarded.
 Onset of congestion: avg is between the two thresholds.
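A compact Java sketch of the RED decision described above; the two thresholds, the averaging weight, and the linearly increasing drop probability between the thresholds are common RED choices assumed here, not values from the slides.
import java.util.Random;

public class RedSketch {
    static final double MIN_TH = 5.0;            // lower threshold (illustrative)
    static final double MAX_TH = 15.0;           // upper threshold (illustrative)
    static final double W = 0.2;                 // weight for the moving average (illustrative)
    static double avg = 0.0;                     // average queue length
    static final Random rnd = new Random();

    // Returns true if the arriving packet should be dropped.
    static boolean onArrival(int instantQueueLen) {
        avg = (1 - W) * avg + W * instantQueueLen;       // exponentially weighted average
        if (avg < MIN_TH) return false;                  // minimal congestion: enqueue
        if (avg >= MAX_TH) return true;                  // severe congestion: drop
        double p = (avg - MIN_TH) / (MAX_TH - MIN_TH);   // onset: drop with growing probability
        return rnd.nextDouble() < p;
    }

    public static void main(String[] args) {
        int queue = 0;
        for (int i = 0; i < 50; i++) {
            if (!onArrival(queue)) queue++;      // crude demo: queue never drains here
        }
        System.out.printf("avg = %.2f, queue length = %d%n", avg, queue);
    }
}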
Jitter Control
The variation (i.e., standard deviation) in the packet arrival times is called jitter.
High jitter, for example, having some packets taking 20 msec and others taking 30 msec to arrive will give an
uneven quality to the sound or movie.
When a packet arrives at a router, the router checks to see how much the packet is behind or
ahead of its schedule.
This information is stored in the packet and updated at each hop.
If the packet is ahead of schedule, it is held just long enough to get it back on schedule.
If it is behind schedule, the router tries to get it out the door quickly.
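A Java sketch of the per-hop decision described above; the schedule times and the method name are illustrative assumptions.
public class JitterControlSketch {
    // Decide how long (in ms) to hold a packet at this router.
    // expectedArrival and actualArrival are times in ms relative to the flow's schedule.
    static long holdTimeMs(long expectedArrival, long actualArrival) {
        long aheadBy = expectedArrival - actualArrival;
        if (aheadBy > 0) {
            return aheadBy;   // ahead of schedule: hold it just long enough to get back on schedule
        }
        return 0;             // behind schedule: forward it immediately ("out the door quickly")
    }

    public static void main(String[] args) {
        System.out.println(holdTimeMs(100, 80));   // 20 ms early -> hold for 20 ms
        System.out.println(holdTimeMs(100, 130));  // 30 ms late  -> hold 0 ms, send at once
    }
}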
Traffic Shaping
• Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that enters the network.
 Traffic shaping controls the rate at which packets
are sent. Used in ATM and Integrated Services
networks.
 At connection set-up time, the sender and carrier
negotiate a traffic pattern (shape).
UDP
 The User Datagram Protocol (UDP) is called a
connectionless, unreliable transport protocol.
 It does not add anything to the services of IP except
to provide process-to-process communication
instead of host-to-host communication.
Well-known ports used with UDP
UDP header format
TCP
 TCP is a connection-oriented protocol; it creates a
virtual connection between two TCPs to send data.
 In addition, TCP uses flow and error control
mechanisms at the transport level.
TCP Segment Header
TCP: Transmission Control Protocol
 provides a reliable end-to-end byte stream over an unreliable internetwork.
 TCP was designed to dynamically adapt to properties of the
internetwork and to be robust in the face of many kinds of failures.
 TCP service is obtained by both the sender and receiver creating
end points, called sockets.
 Each socket has a socket number (address) consisting of the IP
address of the host and a 16-bit number local to that host, called a
port. A port is the TCP name for a TSAP.
TCP Connection Establishment
(a) TCP connection establishment in the normal case.
(b) Simultaneous connection establishment on both sides.
Connection establishment using three-way handshaking
TCP Connection Release
 Each simplex connection is released independently of its sibling.
 To release a connection, either party can send a TCP segment with
FIN bit set, which means that it has no more data to transmit.
 When the FIN is acknowledged, that direction is shut down for new data.
 Data may continue to flow indefinitely in the other direction, however.
When both directions have been shut down, the connection is released.
 Normally, four TCP segments are needed to release a connection, one FIN
and one ACK for each direction. However, it is possible for the first ACK and
second FIN to be contained in same segment, reducing total count to three.
TCP Connection management: States for TCP
State Description
CLOSED There is no connection.
LISTEN The server is waiting for calls from the client.
SYN-SENT A connection request is sent; waiting for acknowledgment.
SYN-RCVD A connection request is received.
ESTABLISHED Connection is established.
FIN-WAIT-1 The application has requested the closing of the connection.
FIN-WAIT-2 The other side has accepted the closing of the connection.
TIME-WAIT Waiting for retransmitted segments to die.
CLOSE-WAIT The server is waiting for the application to close.
LAST-ACK The server is waiting for the last acknowledgment.
TCP Connection Management
Recall: TCP sender and receiver establish a “connection” before exchanging data segments
 initialize TCP variables:
 seq. #s
 buffers, flow control info
 client: connection initiator
Socket clientSocket = new Socket("hostname", "port number");
 server: contacted by client
Socket connectionSocket = welcomeSocket.accept();
Three-way handshake:
Step 1: client host sends TCP SYN segment to server
 specifies initial seq #
 no data
Step 2: server host receives SYN, replies with SYNACK segment
 server allocates buffers
 specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
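The socket calls quoted on this slide can be placed into a minimal runnable Java client/server pair; the port number, message and echo behaviour below are illustrative choices, and the three-way handshake itself is performed by the OS inside new Socket(...) and accept().
import java.io.*;
import java.net.*;

public class TcpEchoExample {
    // Minimal server: accept one connection, echo one line, close.
    static void runServer(int port) throws IOException {
        try (ServerSocket welcomeSocket = new ServerSocket(port);
             Socket connectionSocket = welcomeSocket.accept();           // handshake completes here
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(connectionSocket.getInputStream()));
             PrintWriter out = new PrintWriter(connectionSocket.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        }
    }

    // Minimal client: connect, send one line, print the reply.
    static void runClient(String host, int port) throws IOException {
        try (Socket clientSocket = new Socket(host, port);               // SYN / SYNACK / ACK
             PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(clientSocket.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());                           // prints "echo: hello"
        }
    }

    public static void main(String[] args) throws Exception {
        int port = 6789;                                                 // illustrative port
        Thread server = new Thread(() -> { try { runServer(port); } catch (IOException ignored) {} });
        server.start();
        Thread.sleep(200);                                               // crude wait for the server
        runClient("localhost", port);
        server.join();
    }
}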
TCP Connection Management (cont.)
Closing a connection:
Client closes socket: clientSocket.close();
Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.
[Figure: client–server timeline showing the close, closed and timed-wait states.]
TCP Connection Management (cont.)
Step 3: client receives FIN, replies with ACK.
 Enters “timed wait” – will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.
Note: with a small modification, this can handle simultaneous FINs.
[Figure: client–server timeline showing the closing, closed and timed-wait states.]
TCP Connection Management (cont)
TCP server
lifecycle
TCP client
lifecycle
DNS: Domain Name System
 DNS is a service that translates the domain name into IP
addresses.
 Generic Domain - It defines the category of the domain.
For example - (.com- commercial), (.org - non-profit
organization), (.edu - educational).
 Country Domain - It categorizes according to the
country. For example - (.in - India), (.uk - United
Kingdom).
The DNS Name Space (1)
A portion of the Internet domain name space.
The DNS Name Space (2)
Generic top-level domains
DNS: Resource Records
 Resource records are used to store data about domain names and IP
addresses.
 A DNS zone database is made up of a collection of resource records.
 Each resource record specifies information about a particular object.
 A record - The record that holds the IP address of a domain.
 AAAA record - The record that contains the IPv6 address for a
domain
 MX record - Directs mail to an email server
 NS record - Stores the name server for a DNS entry
Domain Resource Records (1)
The principal DNS resource record types
Domain Resource Records (2)
A portion of a possible DNS database for cs.vu.nl.
Name Servers (1)
Part of the DNS name space divided into zones
(which are circled).
Name Servers
Example of a resolver looking up a remote name in
10 steps.

More Related Content

PDF
DCCN Unit 1.pdf
PDF
Data Communication and Computer Networks
PPTX
Instruction sets of 8086
PPT
Support Vector Machines
DOCX
UNIT-III-DIGITAL SYSTEM DESIGN
PPTX
K means clustering
PPT
Network Layer,Computer Networks
PPT
Unit 3 Network Layer PPT
DCCN Unit 1.pdf
Data Communication and Computer Networks
Instruction sets of 8086
Support Vector Machines
UNIT-III-DIGITAL SYSTEM DESIGN
K means clustering
Network Layer,Computer Networks
Unit 3 Network Layer PPT

What's hot (20)

PDF
Data Communication & Computer Networks
PDF
HIGH SPEED NETWORKS
PDF
Data communication and network Chapter - 2
PPTX
Unit 2 data link control
PPTX
Controlled Access Protocols
PPT
data-link layer protocols
PPTX
Media Access Control
PPTX
Quality of Service
PPT
Data link control
PPTX
Networking devices
PPT
Wan technologies
PPTX
Computer Network - Network Layer
PPTX
Zone Routing Protocol
PPTX
TCP/IP Protocol Architeture
PDF
Data Communication & Networking
PPTX
Introduction to Data-Link Layer
PDF
Data communications and networking(DCN)
PPTX
Network Layer
PPT
chapter 11(Data link Control)in CN .ppt
PPT
Chapter 4 data link layer
Data Communication & Computer Networks
HIGH SPEED NETWORKS
Data communication and network Chapter - 2
Unit 2 data link control
Controlled Access Protocols
data-link layer protocols
Media Access Control
Quality of Service
Data link control
Networking devices
Wan technologies
Computer Network - Network Layer
Zone Routing Protocol
TCP/IP Protocol Architeture
Data Communication & Networking
Introduction to Data-Link Layer
Data communications and networking(DCN)
Network Layer
chapter 11(Data link Control)in CN .ppt
Chapter 4 data link layer
Ad

Similar to DCCN Network Layer congestion control TCP (20)

PPT
Routing
PDF
IT6601 MOBILE COMPUTING
PPTX
Network Layer
PPTX
Computer networks for cse Unit-3 (1).pptx
PPTX
Routing algorithms
PDF
4af46e43-4dc7-4b54-ba8b-3a2594bb5269 j.pdf
PPT
Wireless routing protocols
PPTX
PPT on Project Report Ashutosh Kumar.pptx
PPTX
Routing Presentation
PDF
COMPUTER NETWORKS CHAPTER 3 NETWORK LAYER NOTES CSE 3RD year sem 1
PPTX
Routing algorithms
PPTX
Comparative Analysis of Distance Vector Routing & Link State Protocols
PPTX
Network layer new
PPT
Unit-3-Part-1 [Autosaved].ppt
PPTX
Network Layer
PDF
distance-vector-routing-3.pdf
PPTX
Module 3- transport_layer .pptx
PPTX
NETWORK LAYER PRESENTATION IP ADDRESSING UNIT-3.pptx
PPTX
Module_3_Part_3.pptx
Routing
IT6601 MOBILE COMPUTING
Network Layer
Computer networks for cse Unit-3 (1).pptx
Routing algorithms
4af46e43-4dc7-4b54-ba8b-3a2594bb5269 j.pdf
Wireless routing protocols
PPT on Project Report Ashutosh Kumar.pptx
Routing Presentation
COMPUTER NETWORKS CHAPTER 3 NETWORK LAYER NOTES CSE 3RD year sem 1
Routing algorithms
Comparative Analysis of Distance Vector Routing & Link State Protocols
Network layer new
Unit-3-Part-1 [Autosaved].ppt
Network Layer
distance-vector-routing-3.pdf
Module 3- transport_layer .pptx
NETWORK LAYER PRESENTATION IP ADDRESSING UNIT-3.pptx
Module_3_Part_3.pptx
Ad

More from Sreedhar Chowdam (20)

PDF
DBMS Nested & Sub Queries Set operations
PDF
DBMS Notes selection projection aggregate
PDF
Database management systems Lecture Notes
PPT
Advanced Data Structures & Algorithm Analysi
PDF
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
PPTX
Design and Analysis of Algorithms Lecture Notes
PDF
Design and Analysis of Algorithms (Knapsack Problem)
PDF
PPS Notes Unit 5.pdf
PDF
PPS Arrays Matrix operations
PDF
Programming for Problem Solving
PDF
Big Data Analytics Part2
PDF
Python Programming: Lists, Modules, Exceptions
PDF
Python Programming by Dr. C. Sreedhar.pdf
PDF
Python Programming Strings
PDF
Python Programming
PDF
Python Programming
PDF
C Recursion, Pointers, Dynamic memory management
PDF
C Programming Storage classes, Recursion
PDF
Programming For Problem Solving Lecture Notes
PDF
Big Data Analytics
DBMS Nested & Sub Queries Set operations
DBMS Notes selection projection aggregate
Database management systems Lecture Notes
Advanced Data Structures & Algorithm Analysi
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms Lecture Notes
Design and Analysis of Algorithms (Knapsack Problem)
PPS Notes Unit 5.pdf
PPS Arrays Matrix operations
Programming for Problem Solving
Big Data Analytics Part2
Python Programming: Lists, Modules, Exceptions
Python Programming by Dr. C. Sreedhar.pdf
Python Programming Strings
Python Programming
Python Programming
C Recursion, Pointers, Dynamic memory management
C Programming Storage classes, Recursion
Programming For Problem Solving Lecture Notes
Big Data Analytics

Recently uploaded (20)

PDF
Well-logging-methods_new................
PPTX
Fundamentals of safety and accident prevention -final (1).pptx
PPTX
bas. eng. economics group 4 presentation 1.pptx
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
Construction Project Organization Group 2.pptx
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PPTX
UNIT 4 Total Quality Management .pptx
DOCX
573137875-Attendance-Management-System-original
PPTX
Geodesy 1.pptx...............................................
PDF
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
PDF
composite construction of structures.pdf
PPTX
CH1 Production IntroductoryConcepts.pptx
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PDF
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
Safety Seminar civil to be ensured for safe working.
PDF
PREDICTION OF DIABETES FROM ELECTRONIC HEALTH RECORDS
PDF
BIO-INSPIRED HORMONAL MODULATION AND ADAPTIVE ORCHESTRATION IN S-AI-GPT
Well-logging-methods_new................
Fundamentals of safety and accident prevention -final (1).pptx
bas. eng. economics group 4 presentation 1.pptx
Model Code of Practice - Construction Work - 21102022 .pdf
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
CYBER-CRIMES AND SECURITY A guide to understanding
Construction Project Organization Group 2.pptx
R24 SURVEYING LAB MANUAL for civil enggi
UNIT 4 Total Quality Management .pptx
573137875-Attendance-Management-System-original
Geodesy 1.pptx...............................................
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
composite construction of structures.pdf
CH1 Production IntroductoryConcepts.pptx
Automation-in-Manufacturing-Chapter-Introduction.pdf
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
Safety Seminar civil to be ensured for safe working.
PREDICTION OF DIABETES FROM ELECTRONIC HEALTH RECORDS
BIO-INSPIRED HORMONAL MODULATION AND ADAPTIVE ORCHESTRATION IN S-AI-GPT

DCCN Network Layer congestion control TCP

  • 1. DATA COMMUNICATION AND Network Layer; Congestion Control; TCP, UDP AND COMPUTER NETWORKS B.Tech CSE V Semester For educational purpose only; Source: Forouzan & Tanenbaum
  • 2. Unit 3 Network layer: Design Issues: store-and-forward, Services to transport layer - Connection less and Connection oriented services Connection oriented services Routing Algorithms: The optimality principle, shortest path routing, Flooding, Distance vector and Link state, Multicast Routings.
  • 3. Unit 4 Congestion Control: Principles, congestion prevention policies, congestion control in virtual circuits and datagram subnets, load shedding, jitter control. datagram subnets, load shedding, jitter control. Internetworking: Tunneling, Internet work routing, Fragmentation. The IP protocol, IP address, Gateway routing protocols: OSPF, BGP.
  • 4. Unit 5 Transport Layer: UDP, TCP- service model, protocol, segment header, connection management, Transmission Policy. connection management, Transmission Policy. Application Layer: The DNS Name Space, Resource Records, Name Servers.
  • 5. Network Layer: Design Issues A Physical Physical Data link Data link R1 R3 R4 B Network Network Source Destination Data D Header H Legend Source: Forouzan Datagram D3 H3 Datagram D3 H3
  • 6. Network Layer: Design Issues Internetworking: process of connecting different networks by using networking devices such as routers, gateways etc., Packetizing: encapsulate packets received from upper layer protocols Addressing: identify each device uniquely to allow global communication Routing: determine optimal route for sending a packet from one host to another Fragmenting: decapsulate packets from one and encapsulate them for another network Source: Forouzan
  • 7. Network layer at the source Packetizer encapsulate packet from upper layer Add universal source and destination address Processing Module verify whether destination address is host address. If so routing is not needed Routing Module find interface from which packet must be sent Fragmentation Module Breaking packets into smaller pieces(fragments) such that resulting pieces can pass through a link Source: Forouzan
  • 8. Network layer at a router Processing Module Checks if the packet has reached its destination or needs to be forwarded Routing Module finds the interface from which packet must be sent 8 Source: Forouzan
  • 9. Network layer at the destination 9 Source: Forouzan
  • 10. Store and Forward Packet Switching Source: Tanenbaum
  • 11. Services Provided to Transport Layer  Designing goals  Independent of subnet technology  Transport layer shielded from number, type, and  Transport layer shielded from number, type, and topology of subnets  Uniform network address numbering  Two Types of Services  Connectionless  Connection-oriented
  • 12. Implementation of Connectionless Service Source: Tanenbaum
  • 13. Implementation of Connection-Oriented Service Routing within a virtual-circuit network Source: Tanenbaum
  • 15. Routing  Routing is the process of forwarding of a packet in a network so that it reaches its intended destination.  The main goals of routing are:  The main goals of routing are:  Correctness  low overhead  robustness  stability  fairness
  • 16. Routing Algorithms •The Optimality Principle •Shortest Path Routing •Flooding •Flooding •Distance Vector Routing •Link State Routing •Hierarchical Routing •Broadcast Routing •Multicast Routing
  • 17. If router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route.
  • 20. Broadcast Routing  Sending a packet to all destinations simultaneously is called broadcasting. Ex: When a company needs to update version.  Ex: When a company needs to update version.  Make ‘n’ copies for every user and then broadcast packet, to each destination:  Drawback: Uses lots of bandwidth and source needs to have complete list of all destinations
  • 21. Broadcast Routing  2. when router receives a packet that is to be broadcasted, it simply floods those packets out of all interfaces.  Drawback: This method is easy on router's CPU but may cause  Drawback: This method is easy on router's CPU but may cause the problem of duplicate packets received from peer routers.  Reverse path forwarding is a technique, in which router knows in advance about its predecessor from where it should receive broadcast.  Advantage: Detects and discards duplicates.
  • 22. Multicast Routing  In broadcast routing, packets are sent to all nodes even if they do not want it. But in Multicast routing, the data is sent to only nodes which wants to receive the packets. to only nodes which wants to receive the packets.  Multicast routing uses spanning tree to avoid looping.  Multicast routing also uses reverse path Forwarding technique, to detect and discard duplicates and loops.
  • 23. Link State Routing Following 5 steps are followed to implement LSR. 1. Learning about the Neighbors 2. Measuring Line Cost. 2. Measuring Line Cost. 3. Building Link State Packets. 4. Distributing the Link State Packets. 5. Computing the New Routes.
  • 24. Step 1:Learning about the Neighbors  Upon router booting, discover who the neighbours are, using HELLO packet.  Receiving packet router replies back who am i Receiving packet router replies back who am i Source: Tanenbaum
  • 25. Step 2: Measuring Line Cost  Line cost or delay is measured by sending ECHO message and measure return time.  For better results, the above step can be repeated several  For better results, the above step can be repeated several times and average is considered  Waiting time in router queue can be added to include line traffic load
  • 26. Step 3: Building Link State Packets:  Packet containing:  Identity of sender  Sequence number  age List of neighbours  When to build the link state packets?  Periodically  when significant events  List of neighbours when significant events occur Source: Tanenbaum
  • 27. Step 4: Distributing Link State Packets:  Distributing link state packets reliably  Arrival time for packets different  How to keep consistent routing tables?  Flooding is used  each packet contains seq no. that is incremented for each new packet sent  Routers keep track of source router, seq  When new LSP arrives, it is checked against LSP already seen:  new forwarded on all lines except the one it arrived on; Duplicate, discarded  Packet with seq no lower than highest one, rejected
  • 28. Distributing the Link State Packets Source: Tanenbaum
  • 29. Step 5: Computing new routes:  With a full set of link state packets, a router can:  Construct the entire subnet graph  Run Dijkstra’s algorithm to compute the shortest path to each destination path to each destination  Problems for large subnets  Memory to store data  Compute time for developing these tables.
  • 30. Adaptive and Non adaptive Routing  Adaptive  Routing is based on current measurements of traffic  Non Adaptive  Routing is computed in advance  N/w Admin manually enters measurements of traffic  Routers exchange updates and router table information.  Router may select a new route for each packet.  Ex: DVR, LSR  N/w Admin manually enters routing paths into router.  Once the path to dest has been selected, router sends all packets along the route  Ex: Flooding, SPR
  • 31. Distance Vector Vs. Link State Routing  Distance Vector  Sends entire routing table  Slow convergence  Link State  Sends only link state info  Fast convergence  Susceptible to routing loops  Does not know network topology  Simple to configure  Periodic updates (30/60 sec)  Ex: RIP; BGP  Less susceptible to routing loops  Knows entire n/w topology  Hard to configure  Updates are triggered not periodic  Ex: OSPF; IS-IS
  • 32. Single-Source Shortest Path Problem The problem of finding shortest paths from a source vertex v to all other vertices in the graph.
  • 33. Dijkstra's algorithm Dijkstra's algorithm - is a solution to the single-source shortest path problem in graph theory. Works on both directed and undirected graphs. However, all edges must have nonnegative weights. all edges must have nonnegative weights. Input: Weighted graph G={E,V} and source vertex v V, such that all edge weights are nonnegative Output: Lengths of shortest paths (or the shortest paths themselves) from a given source vertex v V to all other vertices
  • 34. Dijkstra's algorithm - Pseudocode dist[s] ←0 (distance to source vertex is zero) for all v V–{s} do dist[v] ←∞ (set all other distances to infinity) S← (S, the set of visited vertices is initially empty) Q←V (Q, the queue initially contains all vertices) while Q ≠ (while the queue is not empty) while Q ≠ (while the queue is not empty) do u ← mindistance(Q,dist) (select the element of Q with the min. distance) S←S {u} (add u to list of visited vertices) for all v neighbors[u] do if dist[v] > dist[u] + w(u, v) (if new shortest path found) then d[v] ←d[u] + w(u, v) (set new value of shortest path) return dist
  • 35. Dijkstra(Graph, source): for each vertex v in Graph.Vertices: dist[v] ← INFINITY prev[v] ← UNDEFINED add v to Q dist[source] ← 0 while Q is not empty: u ← vertex in Q with min dist[u] remove u from Q for each neighbor v of u still in Q: alt ← dist[u] + Graph.Edges(u, v) if alt < dist[v]: dist[v] ← alt prev[v] ← u
  • 43. SPR: Example 2 B C 2 7 3 3 2 2 A D E G F H 3 2 2 3 2 4 2 6 1
  • 44. Solution: SPR Example 2 A B C D(10,H) 2 7 3 (6,E) 2 (9,B) (4,B) 2 3 (2,A) A D(10,H) E F 2 (6,E) 4 2 6 1 (4,B) G(5,E) H(8,F) 2 2
  • 45. Explanation: SPR Example 2 A B C D E F 2 7 3 2 2 3 2 4 2 6 1 G H 2 4 6
  • 53. SPR: Example 2 B C 2 7 3 3 2 2 A D E G F H 3 2 2 3 2 4 2 6 1
  • 54. Solution: SPR Example 2 A B C D(10,H) 2 7 3 (6,E) 2 (9,B) (4,B) 2 3 (2,A) A D(10,H) E F 2 (6,E) 4 2 6 1 (4,B) G(5,E) H(8,F) 2 2
  • 55. Explanation: SPR Example 2 A B C D E F 2 7 3 2 2 3 2 4 2 6 1 G H 2 4 6
  • 56. Distance Vector Routing  c(x,v) = cost for direct link from x to v  Dx(y) = estimate of least cost from x to y  Node x maintains its neighbors’ distance vectors For each neighbor v, x maintains D = [D (y): y є N ] 56  For each neighbor v, x maintains Dv = [Dv(y): y є N ]  Each node v periodically sends Dv to its neighbors  And neighbors update their own distance vectors Dx(y) ← minv{c(x,v) + Dv(y)} for each node y N  Over time, the distance vector Dx converges
  • 57. Distance Vector Routing: Example1 Y 2 1 X Z 2 1 7
  • 61. Distance Vector Routing: Example 2 Y 1 X Z 4 1 50
  • 62. Distance Vector Routing  (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.
  • 63. Flooding  Flooding adapts the technique in which every incoming packet is sent on every outgoing line except the one on which it arrived. except the one on which it arrived.  One problem with this method is that packets may go in a loop. As a result of this a node may receive several copies of a particular packet which is undesirable.
  • 64. Flooding  Some techniques adapted to overcome these problems are as follows:  Sequence Numbers  Sequence Numbers  Hop Count  Spanning Tree  A flooding attack occupies the host memory buffer, making it impossible to make new connections, resulting in a denial of service.
  • 65. Flooding: Advantages  Simple to setup and implement, since a router may know only its neighbours.  Robust ie., Even in case of malfunctioning of a large number Robust ie., Even in case of malfunctioning of a large number routers, the packets find a way to reach the destination.  All nodes which are directly or indirectly connected are visited. So, there are no chances for any node to be left out. This is a main criteria in case of broadcast messages.  The shortest path is always chosen by flooding
  • 66. Flooding: Disadvantages  Network congestion  Wastage of network resources Security risks  Security risks  Difficulty in network troubleshooting
  • 67. Flooding  Every incoming packet is sent out on every outgoing line except for the input line  Problem  Problem  Large number of packets are generated  Solutions  Hop counter  Avoiding duplicates  Selective flooding
  • 68. 68 Flooding - Conclusion  Optimal  Shortest path is always chosen  No other algorithm can produce a shorter delay Robust  Robust  Not practical in most applications  Useful in some applications  Military application: robustness  Distributed database applications  Concurrent update  Wireless networks
  • 69. Leaky Bucket Algorithm (congestion control)  The leaky bucket is an algorithm related to congestion control, based on an analogy of how a bucket with a constant leak will overflow if either bucket with a constant leak will overflow if either the average rate at which water is poured in exceeds the rate at which the bucket leaks or if more water than the capacity of the bucket is poured in all at once
  • 70. No matter the rate at which water enters the bucket, the outflow is at a constant rate once the bucket is full, any additional water entering it spills over the sides and is lost Leaky bucket Source: Tanenbaum
  • 73.  Consider that, each network interface has a leaky bucket.  Now, when the sender wants to transmit packets, the packets are thrown into the bucket. These packets get accumulated in the bucket present at the network interface.  If the bucket is full, the packets are discarded by the buckets and are lost. Leaky bucket  This bucket will leak at a constant rate. This means that the packets will be transmitted to the network at a constant rate. This constant rate is known as the Leak Rate or the Average Rate.  In this way, bursty traffic is converted into smooth, fixed traffic by the leaky bucket.  Queuing and releasing the packets at different intervals help in reducing network congestion and increasing overall performance.
  • 74.  Consider leaky bucket capacity = 5, fixed data flow rate = 2, packet sizes = 5, 4, 3 Leaky bucket Example
  • 75. Second | Packet Recieved| Packet Sent| PacketLeft |Packet Dropped| ------------------------------------------------------------------------------------------------------- 1 5 2 3 0 2 4 2 3 2
  • 76. bucket capacity = 5, fixed data flow rate = 2, packet sizes = 5, 4, 3
  • 77. Leaky bucket Example 2  consider Bucket size capacity = 1000 bytes, leaky bucket rate (fixed data flow rate) = 500 bytes/second. bytes/second.  Packet sizes [600,900,800,1000,1900]  Solve using leaky bucket
  • 78.  Enter The Bucket Size capacity: 1000  Enter fixed data flow rate : 500  Enter no. of seconds to simulate: 5  Enter The Size Of The Packet Entering At 1sec: 600 Enter The Size Of The Packet Entering At 1sec: 600  Enter The Size Of The Packet Entering At 2sec: 900  Enter The Size Of The Packet Entering At 3sec: 800  Enter The Size Of The Packet Entering At 4sec: 1000  Enter The Size Of The Packet Entering At 5sec: 1900
  • 81. Unit 4: Congestion Control & Internetworking  Congestion control principles  Congestion control  Internetworking:  Concatenated VCs  Connection less Congestion control prevention policies  Congestion control in VC and datagram subnets  Load shedding  Jitter control  Connection less internetworking  Tunneling, Fragmentation, IP  IP Address, Gateway Routing protocols
  • 82. Congestion? Congestion refers to a network state, where a node or router or link carries so much of data that it may degrade the performance of network. Too many packets in (a part of) the subnet. May occur if the load on the network – the number of packets sent to the Packet flow Input Queue Input Queue Output Queue May occur if the load on the network – the number of packets sent to the network – is greater than the capacity of the network – the number of packets a network can handle. The network layer and transport layer share the responsibility of handling congestion. Congestion control refers to mechanisms and techniques to control the congestion and keep the load below the capacity. Packet flow Input Queue
  • 83. Congestion? Congestion refers to a network state, where a node or router or link carries so much of data that it may degrade the performance of network. Too many packets in (a part of) the subnet. May occur if the load on the network – the number of packets sent to the Packet flow May occur if the load on the network – the number of packets sent to the network – is greater than the capacity of the network – the number of packets a network can handle. The network layer and transport layer share the responsibility of handling congestion. Congestion control refers to mechanisms and techniques to control the congestion and keep the load below the capacity. Packet flow
  • 84. 84 Congestion  On packet arrival:  Packet is put at the end of input queue  Processing module of the router removes the packet from front of queue and make routing decisions using routing table.  Packet is put into respective output queue and waits its turn to be sent.  If rate of packet arrival > packet processing rate, Input queue size will increase. If rate of processing > rate of departure, output queue increases  If rate of processing > rate of departure, output queue increases
  • 86. Flow Control Vs. Congestion Control  Flow control: slow down the sender if the sender sends the data at a faster rate than the receiving capacity of the receiver.  Flow control relates to traffic between two machines, while congestion Flow control relates to traffic between two machines, while congestion control is more global. Flow control makes sure that fast sender cannot continually transmit data faster than the receiver is able to absorb it.  Congestion control has to make sure that subnet is able to carry the offered traffic and is global issue.  Congestion control is a mechanism and techniques to control the congestion and keep the load below the capacity.
  • 87. Causes of Congestion  limited resources  insufficient memory  insufficient memory  Slow processors  Low bandwidth  Mismatch in updates of parts of system
  • 88. Congestion Control  Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. or remove congestion, after it has happened.  In general, congestion control mechanisms can be divided into two broad categories:  open-loop congestion control (prevention) and  closed-loop congestion control (removal).
  • 89. Congestion Control: Principles
Open loop (prevention):
 attempts to prevent congestion from occurring – make sure the problem does not occur in the first place
 tools: decide when to accept traffic, decide when to discard packets and which ones, make scheduling decisions in the subnet
 two types: source based and destination based
 decisions are made without considering the current state of the network
Closed loop (removal):
 monitor the system to detect where and when congestion occurs: % packets discarded, average queue length, number of packets that time out, average packet delay (a rising number indicates growing congestion)
 pass the collected information to places where action can be taken (typically the source of the traffic)
 adjust system operation: increase resources (e.g., bandwidth) or decrease the load (deny or degrade service)
  • 91. Congestion: Prevention Policies  Open-loop solutions minimize congestion; they try to achieve their goals by using appropriate policies at various layers.
Transport layer:  Retransmission policy  Acknowledgement policy  Flow control policy  Timeout determination (transit time over the network is hard to predict)
Network layer:  Virtual circuits vs. datagrams inside the subnet (many congestion control algorithms work only with VCs)  Packet queueing and service policy (e.g., one queue per input/output line, round robin)  Packet discard policy  Routing algorithm (spreading traffic over all lines)  Packet lifetime management
Data link layer:  Retransmission policy (Go-Back-N puts a heavier load on the network than Selective Reject)  Acknowledgement policy (piggyback ACKs onto reverse traffic)  Flow control policy (a small window reduces traffic and thus congestion)
  • 92.  Several techniques can be employed for congestion control, including:  Warning bit  Choke packets  Load shedding  Random early discard  Traffic shaping  The first three deal with congestion detection and recovery; the last two deal with congestion avoidance.
  • 93. Congestion Control: Virtual-Circuit Subnets  Admission control is a validation process in communication systems in which a check is performed before a connection is established to see whether the current resources are sufficient for the proposed connection.  No new virtual circuits are set up once congestion is signalled.  Alternatively, new virtual circuits are routed around the problem areas.
  • 94. Congestion Control: VC (contd.)  Reserving large amounts of resources wastes them if congestion never occurs, so a balanced reservation of resources is preferred.  Negotiation when the virtual circuit is set up:  about the kind of traffic and the service desired  resource reservation in the subnet: line capacity, buffers in routers
  • 95. Congestion Control in Datagram Subnets: Warning Bit  A warning bit is sent back to the source (in the acknowledgement) in case of congestion; every router on the path can set the warning bit.  Each router keeps a smoothed estimate of its line utilization: u_new = a * u_old + (1 - a) * f, where f is the instantaneous line utilization (0 or 1) and a determines how fast the router forgets recent history.  u ranges from 0.0 to 1.0; if u rises above a threshold, the output line enters a warning state.
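As a rough illustration, a router could maintain this estimate as follows. This is a sketch only; the class name, the value of a, and the warning threshold are assumptions for illustration:

public class WarningBitEstimator {
    private double u = 0.0;               // smoothed line utilization, 0.0 .. 1.0
    private final double a = 0.9;         // how strongly old history is weighted (assumed value)
    private final double threshold = 0.8; // warning threshold (assumed value)

    // Call once per sample period with f = 1 if the line was busy, f = 0 if idle.
    public boolean sample(int f) {
        u = a * u + (1 - a) * f;          // u_new = a * u_old + (1 - a) * f
        return u > threshold;             // true => set the warning bit on packets using this line
    }
}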
  • 96. Warning Bit  A special bit in the packet header is set by the router to warn the source when congestion is detected.  The bit is copied and piggybacked on the ACK sent back to the sender.  The sender monitors the number of ACK packets it receives with the warning bit set and adjusts its transmission rate accordingly.
  • 97. Choke Packets  Used in both VC and datagram subnets.  A choke packet is a control packet generated at the congested node and sent to the source node (e.g., the ICMP Source Quench), either from a router or from the destination.  The source cuts back its traffic until no more source quench messages arrive.  A choke packet may be sent for every discarded packet, or in anticipation of congestion.
  • 98. Choke Packet  On receiving a choke packet, the source reduces the traffic to the given destination by a percentage.  After a time interval expires, the source listens: if another choke packet arrives, it reduces traffic again; otherwise it increases traffic.  Typically the first choke packet cuts the data rate to 50%, the next to 25%, and so on; traffic is increased again in smaller increments, as sketched below.
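A minimal sketch of the source-side reaction just described (multiplicative decrease on a choke packet, small additive increase while no further choke packets arrive). The class name, the 50% cut, and the 5% increment are illustrative assumptions:

public class ChokeReaction {
    private final double maxRate;         // packets per second the application would like to send
    private double rate;                  // current permitted sending rate

    public ChokeReaction(double maxRate) {
        this.maxRate = maxRate;
        this.rate = maxRate;
    }

    public void onChokePacket() {
        rate = rate * 0.5;                // 100% -> 50% -> 25% -> ...
    }

    public void onQuietInterval() {       // listening interval expired with no further choke packet
        rate = Math.min(maxRate, rate + 0.05 * maxRate);  // recover in smaller increments
    }

    public double currentRate() { return rate; }
}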
  • 99. Hop-by-Hop Choke Packets  A choke packet may take too long to get back to the source, so it is made to take effect at each hop along the path.  The goal is to relieve congestion quickly at the point of greatest need and then propagate the "relief" back toward the source.  This creates a greater need for buffers at intermediate routers: each router must reduce its output while its input continues at full blast until the choke packet propagates to the next hop.
  • 101. Load Shedding  When routers are being inundated by packets that they cannot handle, they simply throw packets away.  Dropping packets at random may not reduce congestion, so selecting the right packets to drop is very important:  for file transfer, an old packet is worth more than a new one;  for multimedia, a new packet is more important than an old one.  Senders must mark packets with priority classes to indicate how important they are; for example, a full frame is more important than a difference frame in compressed video transmission.  The routers can then drop packets from the lowest class first, then the next lowest class, and so on.
  • 102. Random Early Discard (RED)  A proactive approach in which the router discards one or more packets before its buffer becomes completely full.  Each time a packet arrives, the RED algorithm computes the average queue length, avg:  congestion is minimal when avg is below the lower threshold;  congestion is severe, and the packet is discarded, when avg is above the upper threshold;  an avg between the two thresholds indicates the onset of congestion.
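The decision rule above can be sketched as follows. The averaging weight, the two thresholds, and the maximum drop probability are assumed values; real RED implementations tune these carefully and, as the sketch does, grow the drop probability between the two thresholds:

import java.util.Random;

public class RedQueue {
    private double avg = 0.0;                    // exponentially weighted average queue length
    private final double w = 0.002;              // averaging weight (assumed value)
    private final double minTh = 5;              // lower threshold, in packets (assumed value)
    private final double maxTh = 15;             // upper threshold, in packets (assumed value)
    private final double maxP = 0.1;             // maximum drop probability (assumed value)
    private final Random rnd = new Random();

    // Returns true if the arriving packet should be discarded.
    public boolean onPacketArrival(int currentQueueLength) {
        avg = (1 - w) * avg + w * currentQueueLength;
        if (avg < minTh) return false;                       // minimal congestion: accept
        if (avg >= maxTh) return true;                       // severe congestion: discard
        double p = maxP * (avg - minTh) / (maxTh - minTh);   // onset of congestion
        return rnd.nextDouble() < p;                         // discard with growing probability
    }
}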
  • 103. Jitter Control  The variation (i.e., standard deviation) in the packet arrival times is called jitter. High jitter (for example, some packets taking 20 msec and others 30 msec to arrive) gives an uneven quality to the sound or movie. When a packet arrives at a router, the router checks how far the packet is behind or ahead of its schedule; this information is stored in the packet and updated at each hop. If the packet is ahead of schedule, it is held just long enough to get it back on schedule; if it is behind schedule, the router tries to get it out the door quickly.
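A sketch of the hold-or-hurry rule just described. The class and field names are assumptions, and a real router would use its output scheduler rather than Thread.sleep:

public class JitterControl {
    // expectedArrivalMillis travels with the packet and is updated at each hop.
    public void schedule(long expectedArrivalMillis) throws InterruptedException {
        long ahead = expectedArrivalMillis - System.currentTimeMillis();
        if (ahead > 0) {
            Thread.sleep(ahead);   // ahead of schedule: hold it just long enough
        }
        forward();                 // behind (or on) schedule: send it out as quickly as possible
    }

    private void forward() { /* hand the packet to the output queue */ }
}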
  • 104. Traffic Shaping  Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that enters the network.  It controls the rate at which packets are sent and is used in ATM and Integrated Services networks.  At connection set-up time, the sender and the carrier negotiate a traffic pattern (shape).
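The slide does not name a specific shaping algorithm; a token bucket is one common way to bound both the average rate and the burstiness, so the following sketch is offered under that assumption (all names and parameter values are illustrative):

public class TokenBucket {
    private final double capacity;         // bucket size in bytes: bounds the burst
    private final double ratePerMilli;     // token refill rate: bounds the average rate
    private double tokens;                 // bytes that may currently be sent
    private long lastRefill = System.currentTimeMillis();

    public TokenBucket(double capacity, double ratePerMilli) {
        this.capacity = capacity;
        this.ratePerMilli = ratePerMilli;
        this.tokens = capacity;
    }

    // Returns true if a packet of packetBytes conforms to the negotiated shape and may be sent now.
    public synchronized boolean tryConsume(int packetBytes) {
        long now = System.currentTimeMillis();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * ratePerMilli);
        lastRefill = now;
        if (tokens >= packetBytes) {
            tokens -= packetBytes;
            return true;
        }
        return false;                      // hold or queue the packet until enough tokens accumulate
    }
}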
  • 105. UDP  The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.  It adds nothing to the services of IP except providing process-to-process communication instead of host-to-host communication.
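The "nothing beyond IP except ports" point is visible in code: sending a UDP datagram requires no connection set-up and receives no acknowledgement. A minimal sketch using the standard java.net classes (destination host and port are assumed example values):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpSendDemo {
    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes();
        InetAddress host = InetAddress.getByName("localhost");  // host-to-host part (IP)
        int port = 9876;                                         // process-to-process part (UDP port)

        try (DatagramSocket socket = new DatagramSocket()) {
            // The datagram is handed to IP and sent; no handshake, no retransmission.
            socket.send(new DatagramPacket(data, data.length, host, port));
        }
    }
}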
  • 108. TCP  TCP is a connection-oriented protocol; it creates a virtual connection between two TCPs to send data.  In addition, TCP uses flow and error control mechanisms at the transport level.
  • 111. TCP: Transmission Control Protocol  TCP provides a reliable end-to-end byte stream over an unreliable internetwork.  TCP was designed to adapt dynamically to the properties of the internetwork and to be robust in the face of many kinds of failures.  TCP service is obtained by both the sender and the receiver creating end points, called sockets.  Each socket has a socket number (address) consisting of the IP address of the host and a 16-bit number local to that host, called a port. A port is the TCP name for a TSAP.
  • 112. TCP Connection Establishment (a) TCP connection establishment in the normal case. (b) Simultaneous connection establishment on both sides.
  • 113. Connection establishment using three-way handshaking
  • 114. TCP Connection Release  Each simplex connection is released independently of its sibling.  To release a connection, either party can send a TCP segment with the FIN bit set, which means that it has no more data to transmit.  When the FIN is acknowledged, that direction is shut down for new data.  Data may continue to flow indefinitely in the other direction, however; when both directions have been shut down, the connection is released.  Normally four TCP segments are needed to release a connection, one FIN and one ACK for each direction, but it is possible for the first ACK and the second FIN to be contained in the same segment, reducing the total count to three.
  • 115. TCP Connection Management: States for TCP
State: Description
CLOSED: There is no connection.
LISTEN: The server is waiting for calls from the client.
SYN-SENT: A connection request is sent; waiting for acknowledgment.
SYN-RCVD: A connection request is received.
ESTABLISHED: Connection is established.
FIN-WAIT-1: The application has requested the closing of the connection.
FIN-WAIT-2: The other side has accepted the closing of the connection.
TIME-WAIT: Waiting for retransmitted segments to die.
CLOSE-WAIT: The server is waiting for the application to close.
LAST-ACK: The server is waiting for the last acknowledgment.
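For reference, the states in the table map directly onto a simple enumeration; this is only a reading aid and implements no transition logic:

public enum TcpState {
    CLOSED,        // there is no connection
    LISTEN,        // the server is waiting for calls from the client
    SYN_SENT,      // a connection request is sent; waiting for acknowledgment
    SYN_RCVD,      // a connection request is received
    ESTABLISHED,   // connection is established
    FIN_WAIT_1,    // the application has requested the closing of the connection
    FIN_WAIT_2,    // the other side has accepted the closing of the connection
    TIME_WAIT,     // waiting for retransmitted segments to die
    CLOSE_WAIT,    // the server is waiting for the application to close
    LAST_ACK       // the server is waiting for the last acknowledgment
}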
  • 116. TCP Connection Management  Recall: the TCP sender and receiver establish a "connection" before exchanging data segments and initialize TCP variables: sequence numbers, buffers, flow control info.  Client (connection initiator): Socket clientSocket = new Socket("hostname","port number");  Server (contacted by the client): Socket connectionSocket = welcomeSocket.accept();
Three-way handshake:
Step 1: the client host sends a TCP SYN segment to the server; it specifies the initial sequence number and carries no data.
Step 2: the server host receives the SYN and replies with a SYNACK segment; the server allocates buffers and specifies its own initial sequence number.
Step 3: the client receives the SYNACK and replies with an ACK segment, which may contain data.
  • 117. TCP Connection Management (cont.)  Closing a connection: the client closes its socket: clientSocket.close();
Step 1: the client end system sends a TCP FIN control segment to the server.
Step 2: the server receives the FIN and replies with an ACK; it then closes the connection and sends its own FIN.
  • 118. TCP Connection Management (cont.)
Step 3: the client receives the FIN and replies with an ACK; it enters a "timed wait" state, during which it will respond with an ACK to any retransmitted FINs.
Step 4: the server receives the ACK and the connection is closed.
Note: with a small modification, this scheme can handle simultaneous FINs. A minimal client-side socket sketch follows below.
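The sketch below ties the snippets from these slides together on the client side: constructing the Socket performs the three-way handshake, and close() starts the FIN/ACK release. The host name, port, and payload are assumed example values:

import java.io.OutputStream;
import java.net.Socket;

public class TcpClientDemo {
    public static void main(String[] args) throws Exception {
        // Active open: blocks until SYN, SYNACK, ACK complete and the connection is ESTABLISHED.
        Socket clientSocket = new Socket("localhost", 8080);

        OutputStream out = clientSocket.getOutputStream();
        out.write("hello".getBytes());     // data flows over the established connection
        out.flush();

        // Close this direction: TCP sends a FIN and the release exchange described above follows.
        clientSocket.close();
    }
}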
  • 119. TCP Connection Management (cont.)  Figures: the TCP server lifecycle and the TCP client lifecycle.
  • 120. DNS: Domain Name System  DNS is a service that translates domain names into IP addresses.  Generic domain - defines the category of the domain, for example .com (commercial), .org (non-profit organization), .edu (educational).  Country domain - categorizes domains according to the country, for example .in (India), .uk (United Kingdom).
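A minimal sketch of the translation DNS provides, using the resolver built into java.net; the domain name is an assumed example:

import java.net.InetAddress;

public class DnsLookupDemo {
    public static void main(String[] args) throws Exception {
        // The local resolver queries the configured name servers and returns the address record.
        InetAddress address = InetAddress.getByName("www.example.org");
        System.out.println(address.getHostName() + " -> " + address.getHostAddress());
    }
}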
  • 121. The DNS Name Space (1) A portion of the Internet domain name space.
  • 122. The DNS Name Space (2) Generic top-level domains
  • 123. DNS: Resource Records  Resource records are used to store data about domain names and IP addresses.  A DNS zone database is made up of a collection of resource records, each of which specifies information about a particular object.  A record - holds the IPv4 address of a domain.  AAAA record - holds the IPv6 address of a domain.  MX record - directs mail to an email server.  NS record - stores the name server for a DNS entry.
  • 125. Domain Resource Records (1) The principal DNS resource record types
  • 126. Domain Resource Records (2) A portion of a possible DNS database for cs.vu.nl.
  • 127. Name Servers (1) Part of the DNS name space divided into zones (which are circled).
  • 128. Name Servers Example of a resolver looking up a remote name in 10 steps.