
Check if a system (RHEL/CentOS) is UEFI or BIOS

Run the following script to find out whether your system boots with UEFI or legacy BIOS.

 

#!/bin/bash
# Detect the firmware type by checking for the EFI directory in sysfs
[ -d /sys/firmware/efi ] && fw="UEFI" || fw="BIOS"
echo -e "$fw"
if [ "$fw" == "UEFI" ] ; then
    mygrub='/boot/efi/EFI/redhat/grub.cfg'
    echo -e "\n\tUEFI detected, this is a ($fw) system, and the boot config is located at ($mygrub)\n"
else
    mygrub='/boot/grub2/grub.cfg'
    echo -e "\n\t($fw) system detected, your boot config is located at ($mygrub)\n"
fi
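For example, saved as check_boot_mode.sh (a hypothetical file name) and run on a legacy BIOS machine, the script prints something like:

# bash check_boot_mode.sh
BIOS

	(BIOS) system detected, your boot config is located at (/boot/grub2/grub.cfg)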

Yum Update hung or ssh disconnected during yum update

Yum gives error: “There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them”

 

Environment:

Red Hat Enterprise Linux / CentOS 5, 6, and 7

 

Issue

  • While updating the system using yum, it throws the error: “There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them”.
  • yum update was interrupted and yum-complete-transaction wants to remove 253 packages
  • Running yum-complete-transaction renders the system inoperable after removing system-critical packages

 

 

Resolution

  • While updating the system using yum, if yum finds incomplete or aborted transactions on the system, it displays the message “There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.”

If a previous transaction was left incomplete or was aborted, the next yum install or update command prints this message. The “yum-complete-transaction” command helps complete the previous incomplete or aborted transactions.

On execution, “yum-complete-transaction” lists the packages that will be installed and removed to complete the previous transaction and asks for confirmation to continue. Before entering ‘Y’, it is important to verify the list of packages that “yum-complete-transaction” will install or remove.

Note: At the time of writing, there is a bug with “yum-complete-transaction” where, depending on the list of packages to install or remove from the previous transaction, it might offer to remove almost all the packages on the system. It is important to review the Transaction Summary and package list before hitting ‘Y’ to continue. If you are experiencing this issue, please run the following:

 

# package-cleanup --dupes
# yum-complete-transaction --cleanup-only
# yum update yum yum-utils
# yum-complete-transaction  (Note: At this point, it should not offer to remove so many packages. Please do check to make sure though.)
# yum clean all
# yum update

Example:
   # yum-complete-transaction
    Loaded plugins: rhnplugin
    There are 1 outstanding transactions to complete. Finishing the most recent one
    The remaining transaction had 3 elements left to run
    There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
    --> Running transaction check
    ---> Package dmidecode.i386 1:2.9-1.el5 set to be erased
    --> Processing Dependency: dmidecode >= 2.7 for package: hal
    ---> Package dnsmasq.i386 0:2.45-1.el5_2.1 set to be erased
    [..snip..]

    --> Finished Dependency Resolution

    ==================================================================================================================
     Package                               Arch               Version                     Repository             Size
    ==================================================================================================================
    Removing:
     dmidecode                             i386               1:2.9-1.el5                 installed             148 k
     dnsmasq                               i386               2.45-1.el5_2.1              installed             343 k
    Removing for dependencies:
     NetworkManager                        i386               1:0.7.0-9.el5               installed             3.3 M
     NetworkManager-glib                   i386               1:0.7.0-9.el5               installed             154 k

    Transaction Summary
    ==================================================================================================================
    Install      0 Package(s)        
    Update       0 Package(s)        
    Remove      4 Package(s)        

    Is this ok [y/N]:
If you do not want to complete the previous transaction, then use the "--cleanup-only" option. This would clean up only transaction journal files and exit.

Example :

  # yum-complete-transaction --cleanup-only
    Loaded plugins: rhnplugin
    Cleaning up unfinished transaction journals
    Cleaning up 2009-10-06.15:46.06
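On RHEL 6 and later, the yum history command can also help you review what the interrupted transaction was doing before deciding how to proceed; a short hedged example (transaction IDs will differ on your system):

# yum history list
# yum history info <transaction ID>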



Linux Rescue Mode

Reinstall Corrupted Bootloader

Overview

The Linux rescue mode is a special boot mode for fixing critical issues, such as reinstalling a corrupted stage 1 bootloader or accessing a problematic /boot file system, that have rendered the system unbootable. In this training session, you will:
  • Boot the system into the rescue mode using the RHEL installation DVD with networking disabled and file systems mounted in read-write mode in the chroot environment
  • Reinstall Corrupted Bootloader using grub-install
  • Reinstall Corrupted Bootloader from the grub Shell
  • Install Lost/Corrupted System Files

Boot RHEL into the Rescue Mode

  • 1. Boot the system with the installation DVD
  • 2. Choose Rescue Installed System from the boot menu and press Enter.
  • 3. Select a language and keyboard on subsequent screens.
  • 4. Choose Local CD/DVD and press OK.
  • 5. Choose no when asked whether you want network interfaces to start in the rescue mode.
  • 6. Choose Continue and press Enter.
  • 7. The installer searches for a RHEL installation and, if found, mounts its / file system on /mnt/sysimage in read/write mode.
  • 8. Press OK again to go to the command prompt.
  • 9. Run the chroot command to make all file systems appear as if they are mounted directly under /:

# chroot /mnt/sysimage
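The rescue environment normally sets up /dev, /proc, and /sys inside /mnt/sysimage for you. If it has not (for example, grub-install later complains about missing device nodes), a minimal sketch of adding them manually is to exit the chroot, create the bind mounts, and chroot again:

# mount --bind /dev /mnt/sysimage/dev
# mount --bind /proc /mnt/sysimage/proc
# mount --bind /sys /mnt/sysimage/sys
# chroot /mnt/sysimage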

 

Reinstall Corrupted Bootloader using grub-install

  • 1. Boot the system into rescue mode with all file systems mounted read/write in the chroot environment.
  • 2. Issue the grub-install command to reinstall the bootloader on the boot disk’s MBR:

# grub-install --root-directory=/ /dev/sda
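Optionally, before exiting, you can sanity-check that GRUB stage 1 was written to the MBR. This is a hedged quick check that relies on the legacy GRUB boot sector containing the string “GRUB”:

# dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB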

  • 3. Issue exit twice to go back, then select Reboot to restart the system.

Reinstall Corrupted Bootloader from the grub Shell

  • 1. Boot the system into rescue mode with all file systems mounted read/write in the chroot environment.
  • 2. Issue the grub command to invoke the grub shell:

# grub

  • 3. Run the root command to set the current root device:

grub> root (hd0,0)

  • 4. Execute the setup command to install the bootloader on the /dev/sda disk:

grub> setup (hd0)
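If the command succeeds, output similar to the following appears (illustrative only; the stage file paths and sector counts vary with your layout):

 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.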

  • 5. Issue quit to exit out of the grub shell, and then exit twice to go back.

  • 6. Select Reboot to restart the system.

 

Install a Lost or Corrupted System File

  • 1. Boot the system into rescue mode with all file systems mounted read/write in the non-chroot environment.
  • 2. Copy the lost/corrupted file (in this case /bin/mount) to the /mnt/sysimage/bin directory:

# cp /bin/mount /mnt/sysimage/bin

  • 3. Confirm the file copy:

# ls -l /mnt/sysimage/bin

  • 4. Issue the exit command twice to go back.
  • 5. Select Reboot to restart the system.

What Is an Inode?

An inode is a data structure used to store the metadata of a file. The inode count represents the collective number of files and folders present on a file system (for example, in your web hosting account).
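For example, ls -i prints a file’s inode number and stat shows the metadata stored in that inode (size, owner, permissions, timestamps, link count):

# ls -i /etc/hosts
# stat /etc/hosts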

 

There are no free file system inodes left

It’s quite easy for a disk to have a large number of inodes used even if the disk is not very full.

Each file and folder uses an inode. When the file system is created, a fixed pool of inodes is created for that file system. If many small files are present, this pool can be consumed prematurely.

It’s also possible that deleting files will not reduce the inode count if the files have multiple hard links: inodes belong to the file, not the directory entry.

If a file has two directory entries linked to it, deleting one will not free the inode.
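A quick way to see this behaviour, run in any scratch directory:

# touch a
# ln a b          # create a second hard link; "a" and "b" share one inode
# ls -li a b      # both entries show the same inode number and a link count of 2
# rm a            # the inode stays allocated because "b" still references it
# ls -li b        # link count drops to 1; the inode frees only when "b" is removed too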

 

  • My file system has plenty of space left, but I’m getting errors about a lack of inodes. How do I detect and fix this?
  • To find out “what directory has a large number of files in it?”

To find which directories contain a large number of files:

# find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n

To see the number of used and free inodes for an entire device or file system:

# df -i /dev/(device)

Determine what is creating all the small files, and delete them if that is practical.

Alternatively, add space to the device: the inodes-per-block ratio stays the same, so additional inodes are added to the file system as it grows.
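As a sketch, assuming the affected file system sits on an LVM logical volume (the vg00/lv_data names below are hypothetical) and the volume group still has free extents, growing an ext3/ext4 file system also grows its inode pool at the same ratio:

# lvextend -L +5G /dev/vg00/lv_data
# resize2fs /dev/vg00/lv_data
# df -ih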

/var/lock/lvm/V_vg00: open failed: No space left on device

# vgs
/var/lock/lvm/V_vg00: open failed: No space left on device
Can't lock volume group vg00: skipping

  • Red Hat Enterprise Linux 4/5/6

 

[root@shebanglinux subsys]# pvscan
/var/lock/lvm/P_global: open failed: No space left on device
Unable to obtain global lock.

 

[root@shebanglinux subsys]# pvscan -vvv
Processing: pvscanvvv
O_DIRECT will be used
Setting global/locking_type to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Locking /var/lock/lvm/P_global WB
/var/lock/lvm/P_global: open failed: No space left on device
Unable to obtain global lock.

Solution

The problem is that there are no free inodes left on the file system.

  • Use the command “df -ih” and check whether there are free inodes on the particular file system.
  • The fix is to extend the maximum inode count on the affected ext3/ext4 file system.
# df -ih
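Illustrative output (the numbers are made up) for a file system that has run out of inodes while still having free blocks, which is what produces the locking errors above:

Filesystem            Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg00-var    1.3M  1.3M     0  100% /var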

 

NFS Stale File Handle error

NFS (Network File System) versions 3 and 4

Sometimes NFS can cause odd problems. For example, NFS-mounted directories sometimes contain stale file handles. If you run a command such as ls or vi there, you will see an error like the one shown below.

Issue

  • Clients mounting NFS filesystems report stale file handles
  • What are some causes of stale file handles and how can they be prevented?
  • I am seeing application logs stating read or write operations on an NFS file, or operations on an NFS directory, complete with errno = 116 (ESTALE)
  • After changing the file ID size from 64 bits to 32 bits on the NFS server, there is a ‘stale file handle’ error and the messages below appear in /var/log/messages

# ls -l
.: Stale File Handle

# grep -i nfs /var/log/messages

kernel: NFS: server lhub.nas error: fileid changed
kernel: fsid 0:13: expected fileid 0x1015734f5, got 0x15734f5
A file handle becomes stale whenever the file or directory referenced by the handle is removed by another host while your client still holds an active reference to the object. A typical example occurs when the current directory of a process running on your client is removed from the server (either by a process running on the server or on another client). It can also occur if the directory is modified on the NFS server but the directory’s modification time is not updated.

Solution

  • Any change to the file system or storage that the NFS export resides on may cause stale file handles. Examples:
    • If no FSID is specified for the export, then moving the export to a new block device, or the block device being assigned a new major/minor number, will cause stale file handles (see the example exports entry after this list)
    • Changing the underlying file systems or inode map will cause stale file handles regardless of FSID
  • Modifying an export’s explicitly defined FSID will cause stale file handles and require remount
  • Adding an explicitly defined FSID to an export that does not yet have one will not cause stale file handles
  • Incorrectly configured clustered NFS may lead to stale file handles during failover events
  • Certain bugs in NFS servers or NFS clients can cause stale file handles
  • Mounting filesystems from a very large number of NFS servers (several hundred or more) can cause port exhaustion of ‘reserved ports’ (under 1024). Beginning with RHEL 6, the mount option “noresvport” is available, and may help by allowing non-reserved ports to be used. Before RHEL 6, this mount option is not available.
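For the FSID points above, a hedged example of an /etc/exports entry that pins an explicit fsid (the export path, client network, and fsid value are placeholders), so the file handle survives a move to a new block device:

/export/data   192.168.1.0/24(rw,sync,fsid=1001)

After editing /etc/exports, apply the change with:

# exportfs -ra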

A possible solution is to remount the directory from the NFS client:

# umount -f /path/to/mountpoint
# mount -t nfs nfsserver:/path/to/share /path/to/mountpoint
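If the forced unmount hangs because processes are still using the mount point, you can identify them first and fall back to a lazy unmount (fuser is part of the psmisc package; the paths are placeholders):

# fuser -vm /path/to/mountpoint     # list processes holding the mount
# umount -l /path/to/mountpoint     # lazy unmount as a last resort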