All posts by Steven A

Data Privacy & Federal Government Priorities

What is the Federal Data Strategy?

The use of data is transforming the world. The way the Federal Government provides, maintains, and uses data has a unique place in society and maintaining trust in federal data is pivotal to a democratic process. The Federal Government needs a coordinated and integrated approach to using data to deliver on mission, serve the public, and steward resources while respecting privacy and confidentiality.

The Federal Data Strategy will define Principles, Practices, and a Year 1 Action Plan to deliver a more consistent approach to federal data stewardship, use, and access. The Federal Data Strategy development team will also test solutions and assumptions along the way with The Data Incubator Project, which will help identify priority use cases and methods that should be replicated or scaled.

The Facebook – Cambridge Analytica fiasco may have grabbed headlines, but in reality, this is but one example of the data misuse and data privacy issues currently impacting nearly every industry sector. Consider for a moment the potential impact of a cyberattack on a federal government agency. In the face of ever-evolving, sophisticated cyber threats, federal agencies require increasingly robust data security solutions. Here are the primary data security concerns we’re currently hearing about from clients in the federal space and our recommendations to address them.

Compromised Communications

Another area in which Washington seeks to outsmart digital criminals is the prevention of eavesdropping. For example, IMSI catchers (international mobile subscriber identity catchers), also referred to as stingrays, are rogue mobile cell towers that intercept a phone’s voice and data transmissions, thereby giving the adversary full access to the individual’s phone conversations and text messages.

Stingray hardware is portable and can easily fit inside a backpack. Any member of Congress or government employee using their cellphone in the street could have their conversation intercepted. Even if the person is in their office, a nearby stingray could capture the call as long as it’s within range. The reality is anyone can easily listen in on official government conversations and messages. The Department of Homeland Security publicly acknowledged this activity in April 2018, but the existence of these devices has been known for years – maybe a decade. The issue has only recently appeared on the public radar, but addressing it is a serious matter of national security.

Standard cell phone service is highly vulnerable to hacking, and even carrier-grade cell services aren’t designed with extensive levels of security. Anytime data is archived with a third party, the chances for a breach increase substantially. For this reason, Silent Circle’s secure communications products use “peer-to-peer” encryption. For phones equipped with our Silent Phone application, any voice or text communication is encrypted from the sender’s device to the other party’s device. End-to-end encryption is truly an ideal defense against stingray interception because even if the conversation gets routed through a cell tower simulator, the communication remains encrypted.

Linux Rescue Mode

Reinstall Corrupted Bootloader

Overview

  • The Linux rescue mode is a special boot mode for fixing critical issues, such as reinstalling a corrupted stage 1 bootloader or accessing a problematic /boot file system, that have rendered the system unbootable. In this training session, you will:
  • Boot the system into the rescue mode using the RHEL installation DVD with networking disabled and file systems mounted in read-write mode in the chroot environment
  • Reinstall Corrupted Bootloader using grub-install
  • Reinstall Corrupted Bootloader from the grub Shell
  • Install Lost/Corrupted System Files

Boot RHEL into the Rescue Mode

  • 1. Boot the system with the installation DVD
  • 2. Choose Rescue Installed System from the boot menu and press Enter.
  • 3. Select a language and keyboard on subsequent screens.
  • 4. Choose Local CD/DVD and press OK.
  • 5. Choose no when asked whether you want network interfaces to start in the rescue mode.
  • 6. Choose Continue and press Enter.
  • 7. The rescue environment then searches for a RHEL installation and, if one is found, mounts its / file system on /mnt/sysimage in read/write mode.
  • 8. Press OK again to go to the command prompt.
  • 9. Run the chroot command to make all file systems appear as if they are mounted directly under /:

# chroot /mnt/sysimage
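Once inside the chroot, a quick way to confirm you are working on the installed system rather than the rescue environment (a minimal check, not part of the official steps):

# cat /etc/redhat-release
# ls /boot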

 

Reinstall Corrupted Bootloader using grub-install

  • 1. Boot the system into rescue mode with all file systems mounted read/write in the chroot environment.
  • 2. Issue the grub-install command to reinstall the bootloader on the boot disk’s MBR:

# grub-install --root-directory=/ /dev/sda

  • 3. Issue exit twice to go back, then select Reboot to restart the system.
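Before issuing exit, an optional sanity check (assuming the boot disk is /dev/sda as above) is to read back the disk’s first sector with the file utility, which should now report a GRUB boot sector:

# file -s /dev/sda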

Reinstall Corrupted Bootloader from the grub Shell

  • 1. Boot the system into rescue mode with all file systems mounted read/write in the chroot environment.
  • 2. Issue the grub command to invoke the grub shell:

# grub

  • 3. Run the root command to set the current root device:

grub> root (hd0,0)

  • 4. Execute the setup command to install the bootloader on the /dev/sda disk:

grub> setup (hd0)

  • 5. Issue quit to exit out of the grub shell, and then exit twice to go back.

  • 6. Select Reboot to restart the system.
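If you are not sure which device to pass to the root command, the grub shell’s find command can locate stage1 for you (this assumes GRUB legacy with a separate /boot partition; without one the path is typically /boot/grub/stage1):

grub> find /grub/stage1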

 

Install a Lost or Corrupted System File

  • 1. Boot the system into rescue mode with all file systems mounted read/write in the non-chroot environment.
  • 2. Copy the lost/corrupted file (in this case /bin/mount) to the /mnt/sysimage/bin directory:

# cp /bin/mount /mnt/sysimage/bin

  • 3. Confirm the file copy:

# ls -l /mnt/sysimage/bin

  • 4. Issue the exit command twice to go back.
  • 5. Select Reboot to restart the system.
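Before rebooting, a simple way to double-check the restored binary (not part of the documented steps) is to compare checksums between the rescue environment’s copy and the one placed on the installed system; the two sums should match:

# md5sum /bin/mount /mnt/sysimage/bin/mount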

Find and remove older files in Linux

With the Linux find command, you can find all files older than 30 days and then execute the rm command on them.

The find utility on Linux allows you to pass in arguments, including one to execute another command on each file. We’ll use this to figure out which files are older than a certain number of days, then use the ls command to list them first (to be on the safe side) and the rm command to remove them.

List:

find /path/to/files* -mtime +30 -exec ls -tl {} \;

Note that there are spaces between ls, {}, and \;

Remove:

find /path/to/files* -mtime +30 -exec rm {} \;

  1. The first argument is the path to the files. This can be a path, a directory, or a wildcard as in the example above. I would recommend using the full path, and make sure you run the command without the -exec rm part first to confirm you are getting the right results (a slightly safer variant is also sketched after this list).
  2. The second argument, -mtime, is used to specify the number of days old the file is. If you enter +30, it will find files older than 30 days.
  3. The third argument, -exec, allows you to pass in a command such as ls -tl. The {} \; at the end is required to terminate the command.
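As a slightly safer variant (a sketch assuming GNU find and that you only want regular files, not directories), add -type f and use find’s built-in -delete action once the listing looks right:

find /path/to/files* -type f -mtime +30 -exec ls -tl {} \;
find /path/to/files* -type f -mtime +30 -delete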

What Is an Inode?

An inode is a data structure used to store the metadata of a file. The inode count represents the total number of files and folders present in your web hosting account.

 

There are no free file system inodes left

It’s quite easy for a disk to have a large number of inodes used even if the disk is not very full.

Each file and folder uses an inode. When the file system is created, a fixed number of inodes is allocated for that file system. If many small files are present, this can cause the pool of inodes to be consumed prematurely.

It’s also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry.

If a file has two directory entries linked to it, deleting one will not free the inode.
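A minimal illustration of this (the file names are made up for the example): create a file, give it a second hard link, and note that both directory entries show the same inode number in the first column of ls -li; removing one entry leaves the inode allocated until the last link is gone.

# touch /tmp/original
# ln /tmp/original /tmp/second-link
# ls -li /tmp/original /tmp/second-link    # both names show the same inode number and a link count of 2
# rm /tmp/original
# ls -li /tmp/second-link                  # the inode is still allocated; the link count drops to 1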

 

  • My file system has plenty of space left, but I’m getting errors about a lack of inodes. How do I detect and fix this?
  • To find out which directory has a large number of files in it:

# find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n

The number of used/free inodes can be seen with:

# df -i /dev/(device)   (for the entire device or file system)

Determine what is creating all the small files, and delete them if that is practical.

Add additional space to the device. The ratio will stay the same, but additional inodes will be added to the file system.
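For example, if the affected ext3/ext4 file system sits on an LVM logical volume, growing the volume and resizing the file system adds inodes along with the new blocks (the volume group and logical volume names below are assumptions for this sketch):

# lvextend -L +5G /dev/vg00/lv_data
# resize2fs /dev/vg00/lv_data
# df -i /dev/vg00/lv_data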

/var/lock/lvm/V_vg00: open failed: No space left on device

# vgs
/var/lock/lvm/V_vg00: open failed: No space left on device
Can’t lock volume group vg00: skipping

  • Red Hat Enterprise Linux 4/5/6

 

[root@shebanglinux subsys]# pvscan
/var/lock/lvm/P_global: open failed: No space left on device
Unable to obtain global lock.

 

[root@shebanglinux subsys]# pvscan -vvv
Processing: pvscan -vvv
O_DIRECT will be used
Setting global/locking_type to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Locking /var/lock/lvm/P_global WB
/var/lock/lvm/P_global: open failed: No space left on device
Unable to obtain global lock.

Solution

Extend the maximum inode count on the ext3/ext4 file system.

The problem is that there are no free inodes on the file system.

  • Use the command df -ih and check whether there are free inodes left on the particular file system.
# df -ih
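In this case the error messages point at /var/lock, so check the file system that holds it and then look for the directories with the most files on that mount (this assumes /var is its own mount point; adjust the path if it is part of /):

# df -ih /var/lock
# find /var -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head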

 

NFS Stale File Handle error

NFS (Network File System) version 3 and 4

Sometimes NFS can result in weird problems. For example, NFS-mounted directories sometimes contain stale file handles. If you run a command such as ls or vi, you will see an error:

Issue

  • Clients mounting NFS filesystems report stale file handles
  • What are some causes of stale file handles and how can they be prevented?
  • I am seeing application logs stating that read or write operations on an NFS file, or operations on an NFS directory, complete with errno = 116 (ESTALE)
  • After changing the file ID size from 64 bits to 32 bits on the NFS server, there is a ‘stale file handle’ error and the messages below appear in /var/log/messages

# ls -l
.: Stale File Handle

# grep -i nfs /var/log/messages

kernel: NFS: server lhub.nas error: fileid changed
kernel: fsid 0:13: expected fileid 0x1015734f5, got 0x15734f5
A file handle becomes stale whenever the file or directory referenced by the handle is removed by another host while your client still holds an active reference to the object. A typical example occurs when the current directory of a process running on your client is removed from the server (either by a process running on the server or on another client). This can also occur if the directory is modified on the NFS server but the directory’s modification time is not updated.

Solution

  • Any change to the file system or storage that the NFS export resides on may cause stale file handles. Examples:
    • If no FSID is specified for the export, then moving the export to a new block device or the block device being assigned a new major/minor number will cause stale file handles (see the example export entry after this list)
    • Changing the underlying file systems or inode map will cause stale file handles regardless of FSID
  • Modifying an export’s explicitly defined FSID will cause stale file handles and require remount
  • Adding an explicitly defined FSID to an export that does not yet have one will not cause stale file handles
  • Incorrectly configured clustered NFS may lead to stale file handles during failover events
  • Certain bugs in NFS servers or NFS clients can cause stale file handles
  • Mounting filesystems from a very large number of NFS servers (several hundred or more) can cause port exhaustion of ‘reserved ports’ (under 1024). Beginning with RHEL 6, the mount option “noresvport” is available, and may help by allowing non-reserved ports to be used. Before RHEL 6, this mount option is not available.
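As a minimal sketch of what an explicitly defined FSID looks like in /etc/exports on the server (the export path, client host name, and fsid value are assumptions for the example), pinning the fsid keeps the file handle stable even if the underlying block device changes:

/export/data    client.example.com(rw,sync,fsid=1234)

After editing /etc/exports, re-export the file systems with:

# exportfs -ra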

A possible solution is to remount the directory from the NFS client:

# umount -f /path/to/mountpoint
# mount -t nfs nfsserver:/path/to/share /path/to/mountpoint
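After remounting, a quick check from the client (using the same placeholder path as above) confirms the mount is responding again:

# df -h /path/to/mountpoint
# ls /path/to/mountpoint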