Friday 27 November 2009

Unix Admin's Outcry!!

" I need power, give me a shell! " by. Chitambira
" sed -i s/wife/girlfriend/ /my.household " by. anonymous
" find /wholewideworld -name "Mr.Right" -exec marry.me {} \; " by miss *nix
" telnet Barclays 34512301 && mv £1500 /MyFavouritePub" guys this is not secure i lost my money!!
":(){:|:};:" - I was angered by my slow server and hit my keyboard randomly! (do u knw what that is?)
"grep -r 'love' /world  >>  wife"
"grep -r 'hate' /world > politicians"

Friday 20 November 2009

Zenoss: Simple deduplication of events for clustered filesystems

Imagine you have a huge cluster filesystem (e.g. GPFS or GFS) which is mounted on a thousand nodes.
All nodes (hosts) are monitored in Zenoss.
You want to monitor this filesystem (which appears the same on all nodes).
By default, all nodes will create events/alerts for this filesystem.
You might end up with a thousand emails about the same GPFS filesystem mounted on a thousand nodes.

If, however, you choose to monitor this filesystem on one node only, what happens when that node goes down while the filesystem is OK?... no monitoring.

To mitigate this, here is a simple solution (not very smart, but it works).

In /Events/Perf/Filesystem, go to More ---> Transform.
Add this code:

# Rewrite the dedupid for cluster filesystems so that the same event
# from different nodes collapses into a single event.
comp = evt.component
sev = str(evt.severity)
if comp.find("gpfs") >= 0:
    evt.dedupid = "cluster|" + comp + "|/Perf/Filesystem|usedBlocks_usedBlocks|high disk usage|" + sev


This assumes that your filesystems follow a particular naming convention, e.g. in my case gpfs_data, gpfs_finance, gpfs_images, etc.
The approach is to use a single unique dedupid for each clustered filesystem, so only the first node to notice the event will alert.
However, a CLEAR would need to come from that particular node for the event to be cleared.
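
To confirm the transform is biting, you can peek at the dedupids Zenoss stores for these events. A hedged sketch, assuming the stock Zenoss 2.x MySQL events database; the database, table and column names here are from memory, so verify them against your install first:

mysql -u zenoss -p events -e "select device, component, dedupid from status where component like '%gpfs%';"

If the deduplication is working, you should see a single row per clustered filesystem rather than one per node.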

Tuesday 10 November 2009

Zenoss: Setting IP interface speed using a script

The Zenoss modeler sometimes returns wrong interface speeds for some interfaces. This is usually not straightforward to rectify when the monitored device is a hardware appliance (environment monitors and the like). Zenoss has, as a matter of policy or best practice, not allowed modification of interfaces on automatically discovered devices; however, they have suggested that it can be done via zendmd. Punching in the same commands every time you want to set an interface speed gets monotonous, so the script below can be used as a one-liner to set the interface speed for matching devices.

#!/usr/bin/env python
import sys
import Globals
from Products.ZenUtils.ZenScriptBase import ZenScriptBase
from transaction import commit

dmd = ZenScriptBase(connect=True).dmd

# argv[1]: device name prefix, argv[2]: interface name prefix,
# argv[3]: speed in bits per second
for dev in dmd.Devices.getSubDevices():
    if dev.id.startswith(sys.argv[1]):
        for interface in dev.os.interfaces():
            if interface.id.startswith(sys.argv[2]):
                interface.speed = float(sys.argv[3])
                interface.lockFromUpdates()  # stop the modeler reverting it
commit()


Save the script as script.py and make it executable:
[zenoss@server ~]$ chmod 755 script.py
You can then use the script on the command line as the zenoss user. Usage:

./script.py <device-name-prefix> <interface-name-prefix> <speed-in-bps>
e.g. [zenoss@server ~]$ ./script.py cisco Fast 100000000

##sets the speed to 100Mbps on all FastEthernet interfaces on ALL devices that begin with cisco (like cisco-002.net.myorg.com, cisco-003. ... ...)
###NOTE: Python is indentation-sensitive, so keep the script's indentation exactly as shown
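
To sanity-check what a device itself reports before (or after) overriding the modeler, you can walk ifSpeed over SNMP. This assumes SNMP v2c and a community string of "public"; adjust both for your environment:

snmpwalk -v 2c -c public cisco-002.net.myorg.com IF-MIB::ifSpeed

ifSpeed is in bits per second, the same unit the script expects (100000000 = 100Mbps).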

Tuesday 3 November 2009

Adding diskspace to root VG on Linux guest VM

....I'm increasing my root VG from 4GB to 7GB....
(aim: to grow the root filesystem, /, from 3.3GB to about 6GB)

Summary (a condensed shell sketch follows the list):
-make the vmdk bigger,
-use fdisk to turn the new space into a partition,
-use pvcreate/pvresize to grow the PV,
-use lvextend/lvresize to grow the LV, then finally,
-use resize2fs to expand the filesystem
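
In shell terms the whole procedure condenses to roughly this (the device, VG and size names are the ones from the walkthrough below; substitute your own, and have a backup before touching partition tables):

fdisk /dev/sda                            # n: new partition, t: type 8e (Linux LVM), w: write
pvcreate /dev/sda3                        # initialise the new partition as a PV
vgextend VolGroup00 /dev/sda3             # add it to the volume group
lvextend -L+2.96GB /dev/VolGroup00/root   # grow the logical volume
resize2fs /dev/VolGroup00/root            # grow the ext3 filesystem online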

Make vmdk bigger
Make the vmdk bigger using any VMware client (VirtualCenter etc). For example, if you have ESX 3.5 or newer:
1. Open VMware Infrastructure Client and connect to VirtualCenter or the ESX host.
2. Right-click the virtual machine.
3. Click Edit Settings.
4. Select Virtual Disk.
5. Increase the size of the disk.
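
Alternatively, if you have service console access on the ESX host, vmkfstools can grow the vmdk in place. A minimal sketch, assuming the VM is powered off; the datastore path is hypothetical:

vmkfstools -X 7G /vmfs/volumes/datastore1/guestvm/guestvm.vmdk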

Using fdisk
##check current usage
[root@guestvm ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-root
3.3G 2.6G 506M 84% /
/dev/sda1 251M 29M 210M 12% /boot
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/mapper/myapp-lvol0
188G 7.5G 171G 5% /opt/myapp

##check current space available
[root@guestvm ~]# fdisk -l
Disk /dev/sda: 4096 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 522 3927892+ 8e Linux LVM

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

##Reboot the VM so that the kernel sees the resized disk...
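
A reboot is the blunt but reliable way to make the kernel notice the new disk size. On reasonably recent kernels a SCSI rescan may achieve the same thing without a reboot; treat this as something to test on your setup, not a guarantee:

echo 1 > /sys/block/sda/device/rescan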

##check that the disk now has free space
[root@guestvm ~]# fdisk -l

Disk /dev/sda: 7516 MB, 7516192768 bytes
255 heads, 63 sectors/track, 913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 522 3927892+ 8e Linux LVM

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

##configure free space with fdisk
[root@guestvm ~]# fdisk /dev/sda

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4): 3
First cylinder (523-913, default 523):
Using default value 523
Last cylinder or +size or +sizeM or +sizeK (523-913, default 913):
Using default value 913

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@guestvm ~]#
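
The warning above means the kernel may not see /dev/sda3 yet. partprobe (from the parted package) asks it to re-read the partition table without a reboot, though on a busy root disk even that can fail, in which case a reboot is needed before the new partition appears:

partprobe /dev/sda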

##check that you now have a new partition carrying the free space
[root@guestvm ~]# fdisk -l
Disk /dev/sda: 7516 MB, 7516192768 bytes
255 heads, 63 sectors/track, 913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 522 3927892+ 8e Linux LVM
/dev/sda3 523 913 3140707+ 8e Linux LVM

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

LV Management
[root@guestvm ~]# pvcreate /dev/sda3 ##create a physical volume

[root@guestvm ~]# vgdisplay ##current volume group settings
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.72 GB
PE Size 32.00 MB
Total PE 119
Alloc PE / Size 119 / 3.72 GB
Free PE / Size 0 / 0
VG UUID n20DW3-UjK4-tY7q-tYmR-G4nQ-5LH3-AXivlS

[root@guestvm ~]# vgextend VolGroup00 /dev/sda3
Volume group "VolGroup00" successfully extended

[root@guestvm ~]# pvscan
PV /dev/sdb VG myapp lvm2 [200.00 GB / 10.00 GB free]
PV /dev/sda2 VG VolGroup00 lvm2 [3.72 GB / 0 free]
PV /dev/sda3 VG VolGroup00 lvm2 [2.97 GB / 2.97 GB free]
Total: 3 [206.68 GB] / in use: 3 [206.68 GB] / in no VG: 0 [0 ]

[root@guestvm ~]# vgdisplay VolGroup00 | grep "Free"
Free PE / Size 95 / 2.97 GB

[root@guestvm ~]# lvextend -L+2.96GB /dev/VolGroup00/root
Rounding up size to full physical extent 2.97 GB
Extending logical volume root to 6.28 GB
Logical volume root successfully resized
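
If your LVM2 version supports it, you can skip the size arithmetic and hand the LV every remaining free extent instead:

lvextend -l +100%FREE /dev/VolGroup00/root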

[root@guestvm ~]# resize2fs /dev/VolGroup00/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/root to 1646592 (4k) blocks.
The filesystem on /dev/VolGroup00/root is now 1646592 blocks long.
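
A quick sanity check that 1646592 4k blocks matches the 6.28 GB reported by lvextend:

echo $((1646592 * 4096)) # 6744440832 bytes, i.e. about 6.28 GiB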

[root@guestvm ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-root
6.1G 2.6G 3.3G 45% /
/dev/sda1 251M 29M 210M 12% /boot
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/mapper/myapp-lvol0
188G 7.6G 170G 5% /opt/myapp