Is it normal to have half Swap used?
Here is the output of free -m:
total used free shared buffers cached
Mem: 360 353 6 0 5 109
-/+ buffers/cache: 238 121
Swap: 255 124 131
You can see that half of the swap is in use. I've checked several times a day, and swap usage stays in the range of roughly 99 to 130 MB.
Is this normal?
5 Replies
I think you'll want to post at least some 'ps' output if you need help diagnosing your memory usage.
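For example, something like this sorts by resident memory so the biggest consumers show up first (the --sort flag assumes a GNU procps ps, which most Linux distros ship):

# list processes by resident memory use, largest first
ps aux --sort=-rss | head -n 15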
Here's my ps output:
PID TTY TIME CMD
1 ? 00:00:00 init
2 ? 00:00:00 migration/0
3 ? 00:00:00 ksoftirqd/0
4 ? 00:00:00 migration/1
5 ? 00:00:00 ksoftirqd/1
6 ? 00:00:00 migration/2
7 ? 00:00:00 ksoftirqd/2
8 ? 00:00:00 migration/3
9 ? 00:00:00 ksoftirqd/3
10 ? 00:00:00 events/0
11 ? 00:00:00 events/1
12 ? 00:00:00 events/2
13 ? 00:00:00 events/3
14 ? 00:00:00 khelper
15 ? 00:00:00 kthread
17 ? 00:00:00 xenwatch
18 ? 00:00:00 xenbus
27 ? 00:00:00 kblockd/0
28 ? 00:00:00 kblockd/1
29 ? 00:00:00 kblockd/2
30 ? 00:00:00 kblockd/3
31 ? 00:00:00 cqueue/0
32 ? 00:00:00 cqueue/1
33 ? 00:00:00 cqueue/2
34 ? 00:00:00 cqueue/3
36 ? 00:00:00 kseriod
110 ? 00:00:06 kswapd0
111 ? 00:00:00 aio/0
112 ? 00:00:00 aio/1
113 ? 00:00:00 aio/2
114 ? 00:00:00 aio/3
116 ? 00:00:00 jfsIO
117 ? 00:00:00 jfsCommit
118 ? 00:00:00 jfsCommit
119 ? 00:00:00 jfsCommit
120 ? 00:00:00 jfsCommit
121 ? 00:00:00 jfsSync
122 ? 00:00:00 xfslogd/0
123 ? 00:00:00 xfslogd/1
124 ? 00:00:00 xfslogd/2
125 ? 00:00:00 xfslogd/3
126 ? 00:00:00 xfsdatad/0
127 ? 00:00:00 xfsdatad/1
128 ? 00:00:00 xfsdatad/2
129 ? 00:00:00 xfsdatad/3
738 ? 00:00:00 net_accel/0
739 ? 00:00:00 net_accel/1
740 ? 00:00:00 net_accel/2
741 ? 00:00:00 net_accel/3
748 ? 00:00:00 kpsmoused
752 ? 00:00:00 kcryptd/0
753 ? 00:00:00 kcryptd/1
754 ? 00:00:00 kcryptd/2
755 ? 00:00:00 kcryptd/3
756 ? 00:00:00 kmirrord
765 ? 00:00:03 kjournald
800 ? 00:00:00 kauditd
834 ? 00:00:00 udevd
2221 ? 00:00:00 dhclient
2296 ? 00:00:00 syslogd
2299 ? 00:00:00 klogd
2446 ? 00:00:00 crond
2467 ? 00:00:00 atd
2473 tty1 00:00:00 mingetty
13661 ? 00:00:00 proftpd
13909 ? 00:00:00 sshd
5924 ? 00:00:00 master
5928 ? 00:00:00 qmgr
5935 ? 00:00:00 saslauthd
5936 ? 00:00:00 saslauthd
5937 ? 00:00:00 saslauthd
5938 ? 00:00:00 saslauthd
5940 ? 00:00:00 saslauthd
5948 ? 00:00:00 dovecot
5950 ? 00:00:00 dovecot-auth
5952 ? 00:00:00 pop3-login
5953 ? 00:00:00 pop3-login
5954 ? 00:00:00 pop3-login
5956 ? 00:00:00 imap-login
5957 ? 00:00:00 imap-login
6011 ? 00:00:00 tlsmgr
8829 ? 00:00:00 screen
8830 pts/2 00:00:00 bash
8840 pts/2 00:00:00 su
8841 pts/2 00:00:00 bash
8861 pts/2 00:00:00 su
8862 pts/2 00:00:00 bash
9175 pts/2 00:00:00 su
9176 pts/2 00:00:00 bash
11166 ? 00:00:00 imap-login
12998 ? 00:00:00 pdflush
17719 ? 00:00:00 mysqld_safe
17763 ? 00:00:48 mysqld
5063 ? 00:00:00 pdflush
6754 ? 00:00:00 lighttpd
6755 ? 00:00:00 php-cgi
6757 ? 00:00:00 php-cgi
6758 ? 00:00:00 php-cgi
6759 ? 00:00:00 php-cgi
6760 ? 00:00:00 php-cgi
6761 ? 00:00:00 php-cgi
6762 ? 00:00:00 php-cgi
6763 ? 00:00:00 php-cgi
6764 ? 00:00:00 php-cgi
6765 ? 00:00:00 php-cgi
6766 ? 00:00:00 php-cgi
6767 ? 00:00:00 php-cgi
6768 ? 00:00:00 php-cgi
6769 ? 00:00:00 php-cgi
6770 ? 00:00:00 php-cgi
6771 ? 00:00:00 php-cgi
6772 ? 00:00:00 php-cgi
6773 ? 00:00:00 php-cgi
6774 ? 00:00:02 php-cgi
6775 ? 00:00:02 php-cgi
6776 ? 00:00:03 php-cgi
6777 ? 00:00:02 php-cgi
6778 ? 00:00:03 php-cgi
6779 ? 00:00:02 php-cgi
6780 ? 00:00:02 php-cgi
6781 ? 00:00:02 php-cgi
6782 ? 00:00:02 php-cgi
6783 ? 00:00:02 php-cgi
6784 ? 00:00:03 php-cgi
6785 ? 00:00:02 php-cgi
6786 ? 00:00:02 php-cgi
6787 ? 00:00:02 php-cgi
6788 ? 00:00:04 php-cgi
6789 ? 00:00:03 php-cgi
7792 ? 00:00:00 pickup
9115 ? 00:00:00 sshd
9117 ? 00:00:00 sshd
9118 pts/0 00:00:00 bash
9138 pts/0 00:00:00 su
9139 pts/0 00:00:00 bash
9269 pts/0 00:00:00 ps
I have no idea what that means, though.
You can run 'man ps' to view the manual for the ps command. It's a complicated but powerful tool.
A 'ps aux' will show us approximate memory usage per process, but I think it's fairly obvious that this is just a case of too much running at once. See if stopping mysql and then lighttpd (separately) improves the situation.
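If you want to measure the effect, here's a rough sketch (assuming SysV-style init scripts; the script names are guesses and vary by distro):

# stop MySQL, compare memory, then bring it back
/etc/init.d/mysqld stop
free -m
/etc/init.d/mysqld start

# repeat the same comparison for lighttpd
/etc/init.d/lighttpd stop
free -m
/etc/init.d/lighttpd start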
The normal steps at this point are to 1) run fewer processes, and 2) scale back the resources used by the processes you do need (see the sketch below). As I recall, there are a few good articles in the wiki about dealing with this.
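For instance, your ps output shows roughly 36 php-cgi processes, which is a lot for 360 MB of RAM. One illustrative way to cap that in lighttpd's fastcgi config (a sketch only; the path and numbers are assumptions, not taken from your setup):

# in lighttpd.conf -- spawn fewer PHP children
fastcgi.server = ( ".php" =>
  (( "bin-path" => "/usr/bin/php-cgi",
     "socket"   => "/tmp/php.socket",
     "max-procs" => 2,                  # fastcgi spawner processes
     "bin-environment" => (
       "PHP_FCGI_CHILDREN"     => "4",  # PHP workers per spawner
       "PHP_FCGI_MAX_REQUESTS" => "500"
     )
  ))
)

MySQL also ships a my-small.cnf example config that trims its buffer sizes way down for low-memory machines.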
mt-elbert ~ $ vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 84 42764 45300 87132 0 0 0 1 1 0 0 0 100 0
0 0 84 42764 45300 87132 0 0 0 0 110 211 0 0 100 0
0 0 84 42764 45300 87132 0 0 0 0 108 211 0 0 100 0
0 0 84 42764 45300 87132 0 0 0 1 110 214 0 0 100 0
0 0 84 42764 45300 87132 0 0 0 0 108 211 0 0 100 0
0 0 84 42764 45300 87132 0 0 0 0 107 211 0 0 100 0
0 0 84 42764 45300 87132 0 0 0 0 110 212 0 0 100 0
The first line of vmstat output shows averages since boot, and each following line shows activity averaged over the preceding sampling interval. As you can see, my average swap i/o since boot is zero, which is good. If, however, you're seeing a high average, or if the swap operations spike significantly during periods of high load, you probably need to free up memory by running fewer processes or upgrading your Linode.
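If you want a window of data to look at after the fact, something like this (the interval and count are arbitrary) takes a sample every 5 seconds for 5 minutes and saves it:

# 60 samples, 5 seconds apart
vmstat 5 60 > vmstat-sample.log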
There are several tools available to monitor and graph your swap i/o over time that will help you determine if there is a problem that needs to be addressed. From what I've seen, Munin is pretty good for getting detailed data without too much setup hassle, but I haven't used it in a while. YMMV.
So, having said all of that, what really matters is monitoring your system over time and looking for trends; tools like Munin make those easy to spot. The trend to watch for on a Linode, obviously, is anything that causes disk i/o to spike, since that's your most expensive resource from a performance perspective.
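On a Debian or Ubuntu Linode, getting Munin going is typically just the distro packages (an assumption on my part; package names and the web-server wiring vary by distro):

# the grapher plus the node agent it polls
apt-get install munin munin-node
# then point your web server at Munin's html output directory
# (location varies by distro; check the htmldir setting in /etc/munin/munin.conf)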
–James
I only have six low-traffic websites hosted on my Linode, though.
Below is the output of vmstat:
li27-127 ~: vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 141464 18520 2024 96432 0 0 5 12 13 17 0 0 100 0 0
Do the i/o values look all right? I'll try Munin for monitoring my Linode 360.
Below is the output of free -m:
li27-127 ~: free -m
total used free shared buffers cached
Mem: 360 342 17 0 2 94
-/+ buffers/cache: 246 113
Swap: 255 138 117
The output of ps axu is:
li27-127 ~: ps axu
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 2068 520 ? Ss Aug24 0:00 init [3]
root 2 0.0 0.0 0 0 ? S Aug24 0:00 [migration/0]
root 3 0.0 0.0 0 0 ? SN Aug24 0:00 [ksoftirqd/0]
root 4 0.0 0.0 0 0 ? S Aug24 0:00 [migration/1]
root 5 0.0 0.0 0 0 ? SN Aug24 0:00 [ksoftirqd/1]
root 6 0.0 0.0 0 0 ? S Aug24 0:00 [migration/2]
root 7 0.0 0.0 0 0 ? SN Aug24 0:00 [ksoftirqd/2]
root 8 0.0 0.0 0 0 ? S Aug24 0:00 [migration/3]
root 9 0.0 0.0 0 0 ? SN Aug24 0:00 [ksoftirqd/3]
root 10 0.0 0.0 0 0 ? S< Aug24 0:00 [events/0]
root 11 0.0 0.0 0 0 ? S< Aug24 0:00 [events/1]
root 12 0.0 0.0 0 0 ? S< Aug24 0:00 [events/2]
root 13 0.0 0.0 0 0 ? S< Aug24 0:00 [events/3]
root 14 0.0 0.0 0 0 ? S< Aug24 0:00 [khelper]
root 15 0.0 0.0 0 0 ? S< Aug24 0:00 [kthread]
root 17 0.0 0.0 0 0 ? S< Aug24 0:00 [xenwatch]
root 18 0.0 0.0 0 0 ? S< Aug24 0:00 [xenbus]
root 27 0.0 0.0 0 0 ? S< Aug24 0:00 [kblockd/0]
root 28 0.0 0.0 0 0 ? S< Aug24 0:00 [kblockd/1]
root 29 0.0 0.0 0 0 ? S< Aug24 0:00 [kblockd/2]
root 30 0.0 0.0 0 0 ? S< Aug24 0:00 [kblockd/3]
root 31 0.0 0.0 0 0 ? S< Aug24 0:00 [cqueue/0]
root 32 0.0 0.0 0 0 ? S< Aug24 0:00 [cqueue/1]
root 33 0.0 0.0 0 0 ? S< Aug24 0:00 [cqueue/2]
root 34 0.0 0.0 0 0 ? S< Aug24 0:00 [cqueue/3]
root 36 0.0 0.0 0 0 ? S< Aug24 0:00 [kseriod]
root 110 0.0 0.0 0 0 ? S< Aug24 0:07 [kswapd0]
root 111 0.0 0.0 0 0 ? S< Aug24 0:00 [aio/0]
root 112 0.0 0.0 0 0 ? S< Aug24 0:00 [aio/1]
root 113 0.0 0.0 0 0 ? S< Aug24 0:00 [aio/2]
root 114 0.0 0.0 0 0 ? S< Aug24 0:00 [aio/3]
root 116 0.0 0.0 0 0 ? S< Aug24 0:00 [jfsIO]
root 117 0.0 0.0 0 0 ? S< Aug24 0:00 [jfsCommit]
root 118 0.0 0.0 0 0 ? S< Aug24 0:00 [jfsCommit]
root 119 0.0 0.0 0 0 ? S< Aug24 0:00 [jfsCommit]
root 120 0.0 0.0 0 0 ? S< Aug24 0:00 [jfsCommit]
root 121 0.0 0.0 0 0 ? S< Aug24 0:00 [jfsSync]
root 122 0.0 0.0 0 0 ? S< Aug24 0:00 [xfslogd/0]
root 123 0.0 0.0 0 0 ? S< Aug24 0:00 [xfslogd/1]
root 124 0.0 0.0 0 0 ? S< Aug24 0:00 [xfslogd/2]
root 125 0.0 0.0 0 0 ? S< Aug24 0:00 [xfslogd/3]
root 126 0.0 0.0 0 0 ? S< Aug24 0:00 [xfsdatad/0]
root 127 0.0 0.0 0 0 ? S< Aug24 0:00 [xfsdatad/1]
root 128 0.0 0.0 0 0 ? S< Aug24 0:00 [xfsdatad/2]
root 129 0.0 0.0 0 0 ? S< Aug24 0:00 [xfsdatad/3]
root 738 0.0 0.0 0 0 ? S< Aug24 0:00 [net_accel/0]
root 739 0.0 0.0 0 0 ? S< Aug24 0:00 [net_accel/1]
root 740 0.0 0.0 0 0 ? S< Aug24 0:00 [net_accel/2]
root 741 0.0 0.0 0 0 ? S< Aug24 0:00 [net_accel/3]
root 748 0.0 0.0 0 0 ? S< Aug24 0:00 [kpsmoused]
root 752 0.0 0.0 0 0 ? S< Aug24 0:00 [kcryptd/0]
root 753 0.0 0.0 0 0 ? S< Aug24 0:00 [kcryptd/1]
root 754 0.0 0.0 0 0 ? S< Aug24 0:00 [kcryptd/2]
root 755 0.0 0.0 0 0 ? S< Aug24 0:00 [kcryptd/3]
root 756 0.0 0.0 0 0 ? S< Aug24 0:00 [kmirrord]
root 765 0.0 0.0 0 0 ? S< Aug24 0:03 [kjournald]
root 800 0.0 0.0 0 0 ? S< Aug24 0:00 [kauditd]
root 834 0.0 0.0 2148 340 ? S