Wednesday, December 11, 2013

Duplicate Mount Point in df output

Hi Friends,

Today we got an opportunity to work on a problem we had not faced in the last 3 years: the output of df was showing two entries for the same mount point. Below is an example:


[root@ ~]# df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VG00-LogVol00
                     198384608  13753896 174390712   8% /
/dev/sda1               194442     17776    166627  10% /boot
tmpfs                 17425156         0  17425156   0% /dev/shm
xyz:/tools          67291072  57243200   6629632  90% /tools
10.x.x.x:/fs1_user
                     2114715648 648047488 1466668160  31% /users
10.x.x.y:/fs2_DVTools
                     1586036736 236312640 1349724096  15% /dvtools
10.x.x.x:/fs1_user
                     2114715648 648047488 1466668160  31% /users
[root@ ~]#
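A quick way to pick out any mount point that appears more than once, instead of eyeballing the df output, is a one-liner like the one below (just a sketch; the /users line is simply the duplicate from our case):

[root@ ~]# mount | awk '{print $3}' | sort | uniq -d
/users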

Even when we unmounted the file system and then mounted it again with mount -a, or by giving the complete path (this was an NFS file system), df kept showing two entries for the same mount point. It was not causing an outage, but it was new to us: users were still able to access the files without any problem.
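The sequence we kept repeating looked roughly like this (a sketch; the server and export names are the same as in the df output above):

[root@ ~]# umount /users
[root@ ~]# mount -a            # or: mount 10.x.x.x:/fs1_user /users
[root@ ~]# df | grep /users    # still showed two entries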

1. We checked /etc/fstab; it was fine and correct.
2. We checked /var/log/messages; there were no related messages.
3. We checked /etc/mtab; it had duplicate entries for the same mount points.
4. So we decided to unmount the file system, forcing users to release it with fuser -kuc /users, and then unmount the same FS again.
5. We did this four times, but umount still reported that the /users file system was busy.
6. The sixth time we ran fuser -kuc /users, our session suddenly got disconnected; on re-login the host was unreachable and we got a network timed-out error.
7. At the same time, we contacted the DC team either to give us the DRAC IP or to run the following for us:

/etc/init.d/sshd status

This reported that the service was stopped: sshd dead, but the pid file still existed.

We asked the DC engineer and guided him to restart the SSHD service, after which we were able to open a connection over PuTTY again:

/etc/init.d/sshd start

8. Now, on running mount -a, the file system mounted with the proper output, i.e. a single df entry for the file system.

After the above steps, things came back to normal. The full recovery sequence is summarized below.
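For reference, the full recovery sequence was roughly the following (a sketch from memory; run the fuser step only with DRAC or local console access at hand, because it kills every process using the mount, possibly including your own SSH session and sshd itself):

# repeat until umount stops complaining that the device is busy
[root@ ~]# fuser -cu /users        # list the processes holding /users
[root@ ~]# fuser -kuc /users       # kill them
[root@ ~]# umount /users

# if the SSH session drops, continue from the DRAC / local console
[root@ ~]# /etc/init.d/sshd status
[root@ ~]# /etc/init.d/sshd start

# remount and verify that only one entry is shown
[root@ ~]# mount -a
[root@ ~]# df -hT | grep /users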


Unmounting the same file system once may not work; we may need to unmount the same FS twice, thrice, or more.
We may also need to use fuser -kuc, but that can also stop the SSHD server, so the host will not be reachable over the network. We should therefore be ready with the DRAC IP, the VM console, or server support from local IT before running it.
After starting the SSHD service again, the file system mounted correctly.
There were no issues with the packages or with /etc/fstab.
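One more check worth keeping in mind for next time (we did not run it during this incident): df normally takes its list from /etc/mtab, which is maintained by the mount command in userspace, while /proc/mounts is the kernel's own view of what is mounted. Counting how often the mount point appears in each tells you whether the duplicate is only a stale mtab record or a real stacked mount (the numbers below are only illustrative):

[root@ ~]# grep -c ' /users ' /etc/mtab
2
[root@ ~]# grep -c ' /users ' /proc/mounts
1

If /etc/mtab shows 2 but /proc/mounts shows 1, only the userspace record is duplicated; if both show 2, the export really is mounted twice, one mount stacked on top of the other.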
         

 
[root@ ~]# df -hT

Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VG00-LogVol00
              ext3    190G   14G  167G   8% /
/dev/sda1     ext3    190M   18M  163M  10% /boot
tmpfs        tmpfs     17G     0   17G   0% /dev/shm
xyz:/tools   nfs     65G   55G  6.4G  90% /tools
10.x.x.y:/fs2_DVTools
               nfs    1.5T  226G  1.3T  15% /dvtools
10.x.x.x:/fs1_user
               nfs    2.0T  620G  1.4T  31% /users
[root@ ~]# cat /etc/fstab
/dev/VG00/LogVol00      /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VG00/LogVol01      swap                    swap    defaults        0 0
xyz:/tools /tools nfs defaults 1 2
 .....:/fs2_DVTools       /dvtools        nfs     defaults        1 2
....:/fs1_user  /users                  nfs     defaults        1 2
[root@ ~]# cat /etc/mtab
/dev/mapper/VG00-LogVol00 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sda1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
:/tools /tools nfs rw,addr= 0 0
:/fs2_DVTools /dvtools nfs rw,addr= 0 0
nfsd /proc/fs/nfsd nfsd rw 0 0
:/fs1_user /users nfs rw,addr= 0 0
[root@~]#


Cheers Happy Sharing
Amit Chopra

1 comment:

Reinventor Of Wheels said...

So what's the root cause?