Performance Tuning a Web Server: Part 5

by Mike on February 19, 2009

Network Evaluation
Performance can also be tied directly to the network or to the network cards. A great deal of information can be gathered with the netstat command.

netstat -s



The interface counters in the first block actually come from netstat -i:

netstat -i
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500   0 56231717      0      0      0 61216720      0      0      0 BMRU
lo        16436   0    44310      0      0      0    44310      0      0      0 LRU

The per-protocol statistics that follow are the netstat -s output.

Ip:
56170680 total packets received
0 forwarded
0 incoming packets discarded
56029482 incoming packets delivered
61155737 requests sent out
3 fragments failed
Icmp:
771 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
destination unreachable: 700
timeout in transit: 71
434 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 434
IcmpMsg:
InType3: 700
InType11: 71
OutType3: 434
Tcp:
749419 active connections openings
1612880 passive connection openings
6730 failed connection attempts
145270 connection resets received
2 connections established
55023039 segments received
59333078 segments send out
813947 segments retransmited
280 bad segments received.
535941 resets sent
Udp:
1038100 packets received
431 packets to unknown port received.
0 packet receive errors
1041188 packets sent
TcpExt:
856 invalid SYN cookies received
6536 resets received for embryonic SYN_RECV sockets
22 ICMP packets dropped because they were out-of-window
1425458 TCP sockets finished time wait in fast timer
58 time wait sockets recycled by time stamp
1990 packets rejects in established connections because of timestamp
28155 delayed acks sent
64 delayed acks further delayed because of locked socket
Quick ack mode was activated 13325 times
143 packets directly queued to recvmsg prequeue.
354 packets directly received from backlog
30488 packets directly received from prequeue
15086856 packets header predicted
40 packets header predicted and directly queued to user
21233474 acknowledgments not containing data received
8545221 predicted acknowledgments
8637 times recovered from packet loss due to fast retransmit
146765 times recovered from packet loss due to SACK data
40 bad SACKs received
Detected reordering 3428 times using FACK
Detected reordering 1329 times using SACK
Detected reordering 1366 times using reno fast retransmit
Detected reordering 19226 times using time stamp
10663 congestion windows fully recovered
158555 congestion windows partially recovered using Hoe heuristic
TCPDSACKUndo: 3221
13795 congestion windows recovered after partial ack
153292 TCP data loss events
TCPLostRetransmit: 272
1301 timeouts after reno fast retransmit
22488 timeouts after SACK recovery
7854 timeouts in loss state
515452 fast retransmits
54993 forward retransmits
99955 retransmits in slow start
50791 other TCP timeouts
TCPRenoRecoveryFail: 1336
20161 sack retransmits failed
10893 DSACKs sent for old packets
1 DSACKs sent for out of order packets
155229 DSACKs received
3474 DSACKs for out of order packets received
44453 connections reset due to unexpected data
55028 connections reset due to early user close
4796 connections aborted due to timeout
IpExt:
InMcastPkts: 15
OutMcastPkts: 17
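The counter worth the closest look is segment retransmissions. A quick back-of-the-envelope check on the numbers above:

```shell
# 813947 segments retransmitted out of 59333078 segments sent
awk 'BEGIN { printf "%.2f%%\n", 813947 / 59333078 * 100 }'
# prints 1.37% -- a sustained rate much above 1% usually points at
# congestion, packet loss, or a duplex mismatch
```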

Under TCP there are several troubling stats. One of those is the count of failed connection attempts, which is directly related to the SYN request queue filling up. This is a good indicator that net.ipv4.tcp_max_syn_backlog should be increased.

6730 failed connection attempts

This is supported by the connection resets that you see.

55028 connections reset due to early user close
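If the SYN queue is overflowing, the backlog can be raised with sysctl. The value below is purely illustrative, not a recommendation:

```shell
# Check the current SYN backlog limit
sysctl net.ipv4.tcp_max_syn_backlog

# Raise it for the running kernel
sysctl -w net.ipv4.tcp_max_syn_backlog=2048

# Persist across reboots
echo 'net.ipv4.tcp_max_syn_backlog = 2048' >> /etc/sysctl.conf
```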

One other thing to verify with your network cards is that they are running at full capacity. ethtool is a good way to check that information.

ethtool eth0
Settings for eth0:
Supported ports: [ TP MII ]
Supported link modes:   10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Supports auto-negotiation: Yes
Advertised link modes:  10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: MII
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes

If you needed to force the Ethernet card to 100 Mb/s full duplex, you could do that with:

ethtool -s eth0 speed 100 duplex full
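Settings applied with ethtool -s are lost on reboot. One common workaround is to reapply the command at boot; the /etc/rc.local hook below is an assumption, since the right place is distro-specific:

```shell
# /etc/rc.local -- runs at the end of boot on many distributions.
# Note: many drivers require autoneg off when forcing speed and duplex.
ethtool -s eth0 speed 100 duplex full autoneg off
```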

There seem to be two issues in this evaluation. First, although RAID 1 provides redundancy, the server is now too busy to afford it and should be moved to RAID 0, or simply a single ext3 volume, to increase speed. Drive speed, bus type, etc. should also be considered to get the best performance. The other issue is related to networking and the TCP sockets. Increasing the TCP socket memory will ensure that there are fewer problems related to network connections.
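The TCP socket memory limits also live in sysctl. A sketch of inspecting and raising them; the values shown are illustrative assumptions to be tuned against available RAM:

```shell
# tcp_mem is measured in pages; tcp_rmem/tcp_wmem in bytes (min default max)
sysctl net.ipv4.tcp_mem net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Raise the per-socket receive and send buffer ceilings
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
```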
