Unix keystrokes

Adding this here just to memorize these good and very often unused keystrokes. I think anyone working with the CLI for at least an hour a day should know them instead of leaning on the arrow keys 🙂

CTRL + B – Move the cursor left

CTRL + F – Move the cursor right

CTRL + A – Move the cursor to beginning of line

CTRL + E – Move the cursor to the end of the line

CTRL + W – Erase the preceding word

CTRL + U – Erase from cursor to beginning of the line

CTRL + K – Erase from cursor to end of the line

CTRL + Y – Paste erased text

CTRL + P – View the previous command (cursor up)

CTRL + N – View the next command (cursor down)

I would also like to recommend a great book for Linux newbies who want to learn a bit more:

How Linux Works: What Every Superuser Should Know

 

More about TCP

Just finished the TCP course from INE:

CCIE R&S: Understanding Transmission Control Protocol (TCP) 

I really recommend it to anyone interested in a slightly deeper dive into how TCP works – great quality content, and Keith Bogart is doing an amazing job as instructor!

 

Some TCP facts which I didn't mention in my previous post:

  • Before a TCP connection is established, TCP goes through the following steps:
  • Closed initial state
  • Active Open (the application notifies the CPU and requests service time) – client side
  • Transmission Control Block setup – the CPU creates a TCB to keep track of each new TCP connection; for every TCP session a new TCB is created with a unique identifier (the socket – SRC IP, SRC TCP port, DST IP, DST TCP port)
  • The initial sequence number is random to prevent attacks on TCP
  • When the TCB is ready we send a SYN (with random Seq# x) – this is the SYN-SENT state
  • We receive a SYN (random Seq# y from the server) + ACK (Seq# x + 1)
  • We send back an ACK (Seq# y + 1)
  • The TCP client-side state changes to Established

This is the Active Open – the TCB creation process, initiated only when the client needs it.

We also have the Passive Open (the server's listening process) – a small socket sketch of both sides follows the list below.

 

  • The initial state – Closed
  • A TCB is created with a partially filled socket – the so-called unspecified passive open
  • The server listens on a specific port; when it receives a SYN from a client
  • the state changes to SYN-RECEIVED and it replies with SYN+ACK
  • After the client's ACK arrives, the state changes to Established
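To tie these states to something you can poke at: below is a minimal Python socket sketch of both sides (the 127.0.0.1:5000 endpoint is just a made-up example). connect() performs the active open and returns once the client side is Established; listen()/accept() is the passive open, with the OS maintaining the TCBs behind the scenes.

import socket

# Passive open (server side): the OS allocates a listening TCB - state LISTEN
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))     # made-up address/port
server.listen()

# Active open (client side): connect() triggers SYN -> SYN/ACK -> ACK
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5000))  # returns once the client is Established

conn, addr = server.accept()         # server side moves to Established too
conn.close(); client.close(); server.close()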

A bit about Nagle:

  • On by default on most operating systems
  • Can be harmful when congestion occurs – storage vendors advise disabling it (a small sketch of how to do that per socket follows this list)
  • Basically what Nagle does after getting CPU time is gather the data from the send buffer and hold it – it doesn't send anything new while there are bytes in flight = unacknowledged bytes
  • If there are, it keeps buffering until those bytes are acknowledged; after receiving the ACKs it sends out the segments collected in the buffer.
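If you do follow the storage-vendor advice and disable Nagle for a latency-sensitive application, it is a per-socket option; a minimal Python sketch:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# TCP_NODELAY switches Nagle off: small writes go out immediately
# instead of being coalesced while unacknowledged bytes are in flight
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)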

MSS vs MTU

TCP checks the MTU and sets the MSS accordingly. With a jumbo MTU of 9216, the MSS will be 9216 – 20 (IP header) – 20 (TCP header) = 9176.
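Just to make the arithmetic explicit (assuming no IP or TCP options, which would otherwise shrink the payload further):

def mss_from_mtu(mtu, ip_header=20, tcp_header=20):
    return mtu - ip_header - tcp_header

print(mss_from_mtu(1500))   # 1460 - standard Ethernet
print(mss_from_mtu(9216))   # 9176 - jumbo frames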

Relative Sequence Numbers in Wireshark and the Subdissector

  • By default Wireshark shows relative sequence numbers, not the real ones – this can be changed in the preferences
  • When troubleshooting long HTTP responses, be aware that "Allow subdissector to reassemble TCP streams" is enabled by default in Wireshark – this can cause a lot of confusion, because it reassembles the packets and waits for the full web page to load; instead of seeing the response arrive in 1 ms, Wireshark can make it look like it took 30 seconds even though there was no problem at all. I recommend watching a video on this: Wireshark Tutorial Series #2. Tips and tricks used by insiders and veterans

Sequence Numbers 

  • Built to count the bytes in a segment
  • When a segment is sent with only an ACK and no data, you might see that the Seq# is not incrementing – there is nothing to count
  • It can also increment even when the segment is empty – this is called the phantom byte and adds +1 to the Seq# (a tiny sketch of the counting follows this list)
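A tiny sketch of that counting rule – the payload bytes advance the sequence number, and SYN/FIN each add the phantom byte:

def next_seq(seq, payload_len, syn=False, fin=False):
    # SYN and FIN are the "phantom bytes": they advance the sequence
    # number by one even though they carry no data
    return seq + payload_len + int(syn) + int(fin)

print(next_seq(1000, 0, syn=True))   # 1001 - SYN only
print(next_seq(1001, 500))           # 1501 - 500 data bytes
print(next_seq(1501, 0))             # 1501 - pure ACK, nothing to count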

About the Missed ACKs 

  • Not every sent segment receives its own ACK – we might be ACKing every 5 segments
  • RTO – the re-transmission timeout value – depends on the RTT, or rather on SRTT (smoothed RTT), where multiple RTT samples are combined into an average (a small sketch of the calculation follows this list)
  • Why is TCP reliable? Because it puts every segment into the re-transmission queue before sending it, and only removes it from there after receiving an ACK.
  • Duplicate ACKs appear when one segment has been delivered but another was lost – the receiver keeps repeating the last in-order ACK – and this is where selective repeat comes into play, using the option field (SACK) to retransmit only what is missing.
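A minimal sketch of the SRTT/RTO idea along the lines of RFC 6298 (the alpha/beta constants are the standard ones, the RTT samples are made up):

ALPHA, BETA = 1/8, 1/4                 # RFC 6298 default smoothing factors

def update_rto(srtt, rttvar, rtt_sample):
    if srtt is None:                   # first measurement
        srtt, rttvar = rtt_sample, rtt_sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    return srtt, rttvar, srtt + 4 * rttvar   # the last value is the RTO

srtt = rttvar = None
for sample in [0.100, 0.120, 0.095, 0.300]:  # made-up RTTs in seconds
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"RTT={sample:.3f}  SRTT={srtt:.3f}  RTO={rto:.3f}")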

Sliding Window Rules:

  • Bytes sent and acknowledged – removed from the retransmission queue
  • Bytes sent but not yet acknowledged
  • Bytes not yet sent that the receiver is ready for – the usable window
  • Bytes not yet sent that the receiver is not ready to receive (a small sketch of these categories follows this list)
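These four categories map directly onto the pointers a sender keeps; a small sketch with made-up byte offsets:

# Hypothetical sender-side pointers (byte offsets in the stream)
snd_una = 1000   # oldest byte sent but not yet acknowledged
snd_nxt = 1400   # next byte to be sent
snd_wnd = 1000   # window advertised by the receiver

sent_and_acked = range(0, snd_una)                   # left the retransmission queue
sent_not_acked = range(snd_una, snd_nxt)             # still in the retransmission queue
usable_window  = range(snd_nxt, snd_una + snd_wnd)   # may be sent right now
# everything from byte snd_una + snd_wnd onwards must wait - receiver not ready

print(len(sent_not_acked), "bytes in flight,", len(usable_window), "bytes usable")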

Urgent Bit 

  • No priorities here – the segment is just flagged as urgent. TCP does nothing special with it, it only lets the upper-layer application identify the urgent data.
  • So no VIP delivery from the TCP side (a brief sketch of the socket-level view follows this list).
  • The urgent pointer field points to the first byte of non-urgent data.
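For completeness, this is roughly what urgent data looks like from the socket API – the flag only sets the URG bit and the urgent pointer, the bytes are not delivered any faster (the endpoint below is a placeholder):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# s.connect(("192.0.2.10", 5000))     # placeholder address - assume an open connection
# MSG_OOB marks the byte as urgent; the receiving application can pull it
# out of band with recv(..., socket.MSG_OOB), but there is no VIP delivery
# s.send(b"!", socket.MSG_OOB)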

 

Flow control – IEEE 802.3x

Well-known stuff, probably, for everybody who is in any way related to networking, but putting it here anyway.

IEEE 802.3x – Wikipedia link

If QoS is enabled and you want to prioritize traffic, flow control needs to be disabled, because it doesn't care about any higher-level prioritization: whenever ingress traffic comes in faster than the receiver can accept it, flow control kicks in and sends pause frames until the ingress and egress rates are equal, or the ingress rate drops below the egress rate of that interface.

A bit more info from the Dell FTOS 9 documentation about flow control:

 

[flowcontrol – excerpt from the Dell FTOS 9 documentation]

I would use it only for storage – for example iSCSI traffic in a separate network – then it won't do any harm.. probably 🙂

But of course there is no way you would use it on trunk links, other switch-facing links, etc.

 

Mounting an iSCSI target and checking performance

Installing – let's use a Debian-based system and its package manager:

apt-get install open-iscsi

apt-get install iperf

apt-get install nload

Scanning for targets:

iscsiadm -m discovery -t st -p target_ip

iscsiadm -m node -u          (logs out of existing sessions, if needed)

iscsiadm -m node --login

Check that you can see the new block device:

lsblk

The new disk should appear under /dev – something like sdx / sdx1

How to partition it (if it wasn't partitioned yet):

fdisk /dev/sdc

n          (new partition)

p          (primary)

Enter      (accept the defaults)

w          (write the partition table)

mkfs.ext4 /dev/sdx1

Connecting the drive to the system:

fsck /dev/sdc1

mount /dev/sdc1 /mnt

Creating a big file with random data:

dd if=/dev/urandom of=rnd.20G count=1024 bs=20M

You can also watch the copy speed live via rsync (when copying from a local location to the mount point, for example):

rsync --progress source destination

Live interface monitoring can be done via

nload – like top, but for NICs

And at the end, if you'd like to check the raw network performance (without storage), connect a second host to the same switch and run iperf:

iperf -s          (on the server side)

iperf -c x.x.x.x -d          (-d runs a simultaneous bi-directional bandwidth measurement)

TCP congestion control

As we know, TCP is called a reliable, connection-oriented protocol. But why?

Basically because it keeps the data in its buffer before and after sending it, makes sure the data is delivered in sequence, and the receiver sends confirmation for the received data (an ACK). Otherwise the data is re-transmitted – and there are different ways to re-send it, so let's explore some of them.

So what is happening under the hood of this massive protocol (big overhead compared to UDP)?

Some ideas about TCP congestion mechanisms:

  1. They were created with the assumption that the (small) buffers of devices would overflow, packet loss would occur, and TCP would react. All good and cool, but in 2017 we have huge buffers, and this can cause problems: it takes time before they overflow, which creates delays.
  2. Very small buffers are also a problem: if we get a burst of packets and one or some of them are dropped because the buffer is too small, TCP can treat that as congestion – it will reduce its congestion window, and as a result the links can never be filled completely.
  3. Link flapping – common in campus networks – flapping or fading links can trick TCP into thinking there is extreme congestion in the network and that it should therefore back off with exponential re-transmissions.

To avoid a lot of this bad stuff we have mechanisms like:

  • sliding window – changing the window size depending on how much traffic was received successfully
  • stop and wait – one frame per ACK; basically a fixed amount of data is sent and nothing more until the next ACK arrives
  • cumulative ACKs – if I acknowledge packet 3, that means I also acknowledge the 2 packets I received before it (a toy sketch follows this list)
  • Go-back-N – if a single packet is lost we re-transmit everything from that segment onwards (good when there are bursts of losses); when the sender's window is larger than the receiver's, the protocol uses go-back-N.
  • Selective repeat – we re-transmit only the packet that was actually lost
  • It is very important to make sure that we are not re-transmitting too early.
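A toy sketch of the cumulative ACK behaviour from the list above (segment numbers made up): the receiver only acknowledges the highest in-order segment, so a single loss makes every later arrival generate the same duplicate ACK.

def cumulative_ack(received_segments):
    # Return the highest in-order segment number that can be acknowledged
    expected = 1
    for seg in sorted(received_segments):
        if seg != expected:
            break                       # gap - everything past it stays unacknowledged
        expected += 1
    return expected - 1

print(cumulative_ack({1, 2, 3}))        # 3 - everything arrived in order
print(cumulative_ack({1, 2, 4, 5}))     # 2 - segment 3 lost, 4 and 5 trigger dup ACKs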

Mainly, TCP is an end-to-end, host-based congestion control mechanism.

  • It reacts to events observable at the end host
  • It uses TCP's sliding window and flow control
  • It tries to figure out how many packets can safely be outstanding in the network at a time.

You can memorize a very simple form of the TCP congestion mechanism and build everything on top of it – at least that's what I'm doing, and I might be wrong 🙂

AIMD – Additive increase, multiplicative decrease

  • Basically, if the packet was received without errors and we got the ACK, we increase the window: w = w + 1/w (roughly +1 segment per RTT)
  • If a packet was dropped we use w = w/2 – so after the first dropped packet we cut the window in half.

AIMD also helps us fully use the links – the window size expands according to AIMD to probe how many bytes the pipe can hold.
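A minimal sketch of that rule, with the window measured in segments and a made-up sequence of ACKs and one loss:

def on_ack(w):
    return w + 1.0 / w        # additive increase: roughly +1 segment per RTT

def on_loss(w):
    return max(w / 2.0, 1.0)  # multiplicative decrease: cut the window in half

w = 1.0
for event in ["ack"] * 10 + ["loss"] + ["ack"] * 5:
    w = on_ack(w) if event == "ack" else on_loss(w)
    print(f"{event:5s} -> window {w:5.2f} segments")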

Summary for AIMD:

  • The throughput of an AIMD flow is sensitive to the drop probability and very sensitive to the RTT – round trip time.
  • With many flows, each flow follows its own AIMD rule.

We have several TCP congestion control variants:

TCP TAHOE

  • slow start (on connection startup, or after a packet timeout, to quickly find the network capacity)
  1. the window starts at one MSS
  2. the window increases for each ACKed packet
  3. the congestion window grows exponentially to sense the network capacity
  • congestion avoidance state – to probe carefully when close to the maximum network capacity
  • triple duplicate ACKs
  • fast re-transmission means: don't wait for a timeout to re-transmit a missing segment if you receive a triple duplicate ACK.

FSM for the Tahoe mechanism:

[Figure: Tahoe FSM]

TCP RENO.

Behaves identically to Tahoe on a timeout

  • On a triple duplicate ACK it:
  1. sets the threshold to congestion window / 2
  2. sets the congestion window to congestion window / 2 – fast recovery
  3. inflates the congestion window size (fast recovery)
  4. retransmits the missing segments (fast retransmit)
  5. stays in the congestion avoidance state
  • TCP Reno adds an additional optimization: three duplicate ACKs don't cause TCP to lose an RTT's worth of transmission – it waits for the missing segments to be ACKed.

FSM for the Reno mechanism:

[Figure: Reno FSM]

Basically, the difference between Tahoe and Reno is fast recovery.
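A hedged sketch of exactly that difference – how each variant reacts to a triple duplicate ACK versus a timeout, with cwnd and ssthresh in segments and everything else stripped away (real Reno also inflates the window by the three duplicate ACKs, which is omitted here):

def on_timeout(cwnd, ssthresh):
    # Tahoe and Reno behave the same on a timeout: back to slow start
    return 1, max(cwnd // 2, 2)

def on_triple_dup_ack(cwnd, ssthresh, variant):
    ssthresh = max(cwnd // 2, 2)
    if variant == "tahoe":
        return 1, ssthresh            # Tahoe: restart slow start from cwnd = 1
    return ssthresh, ssthresh         # Reno: fast recovery, continue near ssthresh

print(on_timeout(20, 64))                   # (1, 10) - same for both variants
print(on_triple_dup_ack(20, 64, "tahoe"))   # (1, 10)
print(on_triple_dup_ack(20, 64, "reno"))    # (10, 10)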

Observation signals:

  • increasing ACKs : transfer is going well
  • duplicate ACKs : something was lost or delayed
  • timeout – bad stuff 🙂

In TCP we also use self-clocking – with the help of this, the sender knows that a packet has left the network.

Credits to Stanford University for providing such a great course – almost all the info here is taken from their self-paced Networking course.