" I need power, give me a shell! " by. Chitambira
" sed -i s/wife/girlfriend/ /my.household " by. anonymous
" find /wholewideworld -name "Mr.Right" -exec marry.me {} \; " by miss *nix
" telnet Barclays 34512301 && mv £1500 /MyFavouritePub" guys this is not secure i lost my money!!
":(){:|:};:" - I was angered by my slow server and hit my keyboard randomly! (do u knw what that is?)
"grep -r 'love' /world >> wife"
"grep -r 'hate' /world > politicians"
Friday, 20 November 2009
Zenoss: Simple deduplication of events for clustered filesystems
Imagine you have a huge clustered filesystem (e.g. GPFS or GFS) which is mounted on a thousand nodes.
All nodes (hosts) are monitored in Zenoss.
You want to monitor this filesystem (which appears the same on all nodes).
By default, all nodes will create events/alerts for this filesystem.
You might end up with a thousand emails about the same GPFS filesystem mounted on a thousand nodes.
If, however, you choose to monitor the filesystem on only one node, what happens when that node goes down even though the filesystem is OK?... no monitoring.
To mitigate this, here is a simple solution (not very smart, but it works).
In /Events/Perf/Filesystem, go to More ---> Transform.
Add this code:
comp = evt.component
sev = str(evt.severity)
# match clustered filesystems by their naming convention (gpfs_*)
if comp.find("gpfs") >= 0:
    # one shared dedupid, so every node's event collapses into the same one
    evt.dedupid = "cluster|" + comp + "|/Perf/Filesystem|usedBlocks_usedBlocks|high disk usage|" + sev
This assumes that your filesystems follow a particular naming convention, e.g. in my case gpfs_data, gpfs_finance, gpfs_images, etc.
The approach is to use a single, shared dedupid for each clustered filesystem, so only the first node to notice the event will alert.
However, a CLEAR from that particular node is needed for the event to be cleared.
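Outside Zenoss, the effect of a shared dedupid can be sketched in plain Python. The dict-based collapse below is only an illustration of the idea; the field names mimic the event attributes used above, and the `dedupe` helper is hypothetical, not part of the Zenoss API:

```python
def dedupe(events):
    """Collapse events by dedupid, counting repeats."""
    seen = {}
    for evt in events:
        comp = evt["component"]
        sev = str(evt["severity"])
        if comp.find("gpfs") >= 0:
            # same key regardless of which node reported it
            key = ("cluster|" + comp + "|/Perf/Filesystem|"
                   "usedBlocks_usedBlocks|high disk usage|" + sev)
        else:
            # non-clustered events keep the device in the key as usual
            key = evt["device"] + "|" + comp + "|" + sev
        if key in seen:
            seen[key]["count"] += 1
        else:
            seen[key] = dict(evt, count=1)
    return seen

# a thousand nodes reporting the same clustered filesystem -> one event
events = [{"device": "node%04d" % i, "component": "gpfs_data", "severity": 4}
          for i in range(1000)]
collapsed = dedupe(events)
print(len(collapsed))                        # 1
print(list(collapsed.values())[0]["count"])  # 1000
```

This is why only the first node to notice the condition generates an alert: every later event lands on the same key and just bumps the count.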
Tuesday, 10 November 2009
Zenoss: Setting IP interface speed using a script
The Zenoss modeler sometimes returns wrong interface speeds for some interfaces. This is usually not straightforward to rectify if the monitored device is a hardware appliance, such as an environment monitor. As a matter of policy/best practice, Zenoss does not allow modification of interfaces on automatically discovered devices. However, they have suggested that this can be done via zendmd. Punching in the commands again and again every time you want to set an interface speed gets monotonous, so the script below can be used from the command line to set the interface speed for a specific set of devices.
Save the script as script.py (NOTE: use appropriate indentation in the script):
#!/usr/bin/env python
# Usage: ./script.py <device-prefix> <interface-prefix> <speed-in-bps>
import sys
import Globals
from Products.ZenUtils.ZenScriptBase import ZenScriptBase
from transaction import commit

dmd = ZenScriptBase(connect=True).dmd
for dev in dmd.Devices.getSubDevices():
    if dev.id.startswith(sys.argv[1]):
        for interface in dev.os.interfaces():
            if interface.id.startswith(sys.argv[2]):
                interface.speed = float(sys.argv[3])
                # stop the modeler from overwriting the value
                interface.lockFromUpdates()
commit()
[zenoss@server ~]$ chmod 755 script.py
You can then use this script on the command line as the zenoss user.
Usage: ./script.py <device-prefix> <interface-prefix> <speed-in-bps>
e.g. [zenoss@server ~]$ ./script.py cisco Fast 100000000
##sets the speed to 100Mbps on all FastEthernet interfaces on ALL devices whose names begin with "cisco" (like cisco-002.net.myorg.com, cisco-003. ... ...)
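The selection logic of the script (prefix-match devices, then their interfaces, then set the speed) can be exercised outside zendmd with stand-in objects. The classes below are illustrative substitutes for the real dmd objects, not the Zenoss API:

```python
class Interface:
    """Stand-in for a Zenoss IP interface."""
    def __init__(self, id, speed=0):
        self.id, self.speed = id, speed

class Device:
    """Stand-in for a Zenoss device with some interfaces."""
    def __init__(self, id, interfaces):
        self.id, self.interfaces = id, interfaces

def set_speed(devices, dev_prefix, if_prefix, speed):
    """Apply the script's matching rule; return how many interfaces changed."""
    changed = 0
    for dev in devices:
        if dev.id.startswith(dev_prefix):
            for iface in dev.interfaces:
                if iface.id.startswith(if_prefix):
                    iface.speed = float(speed)
                    changed += 1
    return changed

devs = [Device("cisco-002", [Interface("FastEthernet0_1"), Interface("Serial0")]),
        Device("cisco-003", [Interface("FastEthernet0_1")]),
        Device("hp-switch", [Interface("FastEthernet0_1")])]
# both cisco boxes match, the HP switch does not; Serial0 is skipped
print(set_speed(devs, "cisco", "Fast", 100000000))  # 2
```

Note that the prefix match is deliberately broad: every device whose id starts with the first argument is touched, which is exactly why the example above sets all the cisco-* FastEthernet interfaces in one go.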
Tuesday, 3 November 2009
Adding diskspace to root VG on Linux guest VM
....I'm increasing my root VG from 4GB to 7GB....
(aim: to increase the root filesystem, /, from 3.3GB to about 6GB)
Summary:
-make the vmdk bigger,
-use fdisk to create a partition on the new space,
-use pvcreate to create the PV and vgextend to grow the VG,
-use lvextend/lvresize to increase the LV, then finally,
-use resize2fs to expand the filesystem
Make the vmdk bigger
Make the vmdk bigger using any VMware client (VirtualCenter etc.), e.g.
If you have ESX 3.5 or newer:
1. Open VMware Infrastructure Client and connect to VirtualCenter or the ESX host.
2. Right-click the virtual machine.
3. Click Edit Settings.
4. Select Virtual Disk.
5. Increase the size of the disk.
using fdisk
##check current usage
[root@guestvm ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-root
3.3G 2.6G 506M 84% /
/dev/sda1 251M 29M 210M 12% /boot
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/mapper/myapp-lvol0
188G 7.5G 171G 5% /opt/myapp
##check current space available
[root@guestvm ~]# fdisk -l
Disk /dev/sda: 4096 MB, 4294967296 bytes
255 heads, 33 sectors/track, 512 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 522 3927892+ 8e Linux LVM
Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
##Reboot the VM...
##check whether the disk now has free space
[root@guestvm ~]# fdisk -l
Disk /dev/sda: 7516 MB, 7516192768 bytes
255 heads, 63 sectors/track, 913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 522 3927892+ 8e Linux LVM
Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
##configure free space with fdisk
[root@guestvm ~]# fdisk /dev/sda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4): 3
First cylinder (523-913, default 523):
Using default value 523
Last cylinder or +size or +sizeM or +sizeK (523-913, default 913):
Using default value 913
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@guestvm ~]#
##check that you now have a new partition carrying the free space
[root@guestvm ~]# fdisk -l
Disk /dev/sda: 7516 MB, 7516192768 bytes
255 heads, 63 sectors/track, 913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 522 3927892+ 8e Linux LVM
/dev/sda3 523 913 3140707+ 8e Linux LVM
Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
LV Management
[root@guestvm ~]#pvcreate /dev/sda3 ##create a physical volume
[root@guestvm ~]#vgdisplay ##current volume group setting
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.72 GB
PE Size 32.00 MB
Total PE 119
Alloc PE / Size 119 / 3.72 GB
Free PE / Size 0 / 0
VG UUID n20DW3-UjK4-tY7q-tYmR-G4nQ-5LH3-AXivlS
[root@guestvm ~]#vgextend VolGroup00 /dev/sda3
Volume group "VolGroup00" successfully extended
[root@guestvm ~]# pvscan
PV /dev/sdb VG myapp lvm2 [200.00 GB / 10.00 GB free]
PV /dev/sda2 VG VolGroup00 lvm2 [3.72 GB / 0 free]
PV /dev/sda3 VG VolGroup00 lvm2 [2.97 GB / 2.97 GB free]
Total: 3 [206.68 GB] / in use: 3 [206.68 GB] / in no VG: 0 [0 ]
[root@guestvm ~]# vgdisplay VolGroup00 | grep "Free"
Free PE / Size 95 / 2.97 GB
[root@guestvm ~]# lvextend -L+2.96GB /dev/VolGroup00/root
Rounding up size to full physical extent 2.97 GB
Extending logical volume root to 6.28 GB
Logical volume root successfully resized
[root@guestvm ~]# resize2fs /dev/VolGroup00/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/root to 1646592 (4k) blocks.
The filesystem on /dev/VolGroup00/root is now 1646592 blocks long.
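As a quick sanity check, the block count reported by resize2fs agrees with the ~6.28 GB figure from the lvextend step. A couple of lines of Python confirm the arithmetic:

```python
# resize2fs reported 1646592 blocks of 4 KiB each; convert that to GiB
blocks = 1646592
block_size = 4096  # bytes, the "(4k)" in the resize2fs output
size_bytes = blocks * block_size
print(round(size_bytes / 2**30, 2))  # 6.28
```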
##confirm the new size
[root@guestvm ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-root
6.1G 2.6G 3.3G 45% /
/dev/sda1 251M 29M 210M 12% /boot
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/mapper/myapp-lvol0
188G 7.6G 170G 5% /opt/myapp
Monday, 7 September 2009
Summer of community ZenPack Contest
My MGE-UPS ZenPack won the runner-up award in the Summer of Community ZenPack Contest 2009. My prize is a Chumby media player (http://www.chumby.com).
Check out the Zenoss announcement/blog here
The MGE UPS ZenPack provides monitoring for MGE UPS devices and MIBs for translating traps. It provides performance graphs for:
* Battery Levels
* Battery Times
* UPS Currents
* UPS Voltages
Wednesday, 26 August 2009
My ~]# ./Shell One Liners
This is going to be a long list of some of the shell one-liners that I have used at one time or another in my *nix life.
1. ps axho args | awk /ssh/'{print $1}'
2. for i in AIX*; do mv $i BSD${i:3}; done ## or u can use {i#AIX}
3. sed -i s/AIX/BSD/ BSD*
4. kill `ps aux | awk /$1/'{if ($3+0>'"$limit"') {print $2} }'`
5. for i in *.frm; do touch ${i%.frm}.{MYD,MYI}; done
6. awk '/^iface bar/,/^$/ {print}'
7. find /home/ -name '*:*' -exec rename s/:/-/ {} \;
8. sed '/^$/d' /path/to/currentfile | sed '1,8d' | sed 'N;$!P;$!D;$d' > /path/to/newfile
9. find /mytopdir -depth -type d -empty -exec rmdir {} \;
10. find . -type f -newermt 2007-06-07 ! -newermt 2007-06-08
11. find ./ -type f -ls |grep '19 Jun'
12. find . -type f -mtime $(( ( $(date +%s) - $(date -d '2010-06-25' +%s) ) / 60 / 60 / 24 - 1 ))
13. ifconfig -a | grep inet | grep -v '127.0.0.1' | awk '{ print $2}'
14. sed -i 's/StringA/StringB/g' `grep -ril 'StringA' /homes/myhome/*`
15. find / -cmin -1440 -size +5000000c -print
16. for i in `ls /homes/`; do find /homes/$i/ -name "*R3D*" -mtime +730 -exec rm {} \;; done
17. cd /home/user && rename : _ *:*
18. for i in `find /home/user/ -name '*:*' -print`; do mv $i ${i//:/_}; done
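One-liners 17 and 18 both replace ':' with '_' in filenames, and which one works depends on whether your rename is the Perl or util-linux flavour. Where neither behaves, the same job can be done in Python; the `derename` helper below is just an illustration, not a standard tool:

```python
import os
import tempfile

def derename(topdir, bad=":", good="_"):
    """Rename every file under topdir, replacing bad with good in the name."""
    renamed = []
    for root, dirs, files in os.walk(topdir):
        for name in files:
            if bad in name:
                src = os.path.join(root, name)
                dst = os.path.join(root, name.replace(bad, good))
                os.rename(src, dst)
                renamed.append(dst)
    return renamed

# demo on a throwaway directory
d = tempfile.mkdtemp()
open(os.path.join(d, "a:b:c.txt"), "w").close()
open(os.path.join(d, "clean.txt"), "w").close()
print([os.path.basename(p) for p in derename(d)])  # ['a_b_c.txt']
```

Unlike one-liner 18, this does not break on filenames containing spaces, since no word-splitting is involved.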
Wednesday, 19 August 2009
Handy reference to Linux process status codes
D Uninterruptible sleep (usually IO)
R Running or runnable (on run queue)
S Interruptible sleep (waiting for an event to complete)
T Stopped, either by a job control signal or because it is being traced.
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z Defunct ("zombie") process, terminated but not reaped by its parent.
For BSD formats and when the stat keyword is used, additional characters may
be displayed:
< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+ is in the foreground process group
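The table above can be turned into a tiny decoder for the STAT column of `ps aux` output (e.g. "Ss+" means sleeping, session leader, foreground). The function below is just an illustration of the codes, not part of ps:

```python
STATE = {"D": "uninterruptible sleep", "R": "running/runnable",
         "S": "interruptible sleep", "T": "stopped",
         "W": "paging", "X": "dead", "Z": "zombie"}
FLAGS = {"<": "high-priority", "N": "low-priority", "L": "pages locked",
         "s": "session leader", "l": "multi-threaded", "+": "foreground"}

def decode_stat(stat):
    """Translate a ps STAT field: first char is the state, the rest are flags."""
    parts = [STATE.get(stat[0], "unknown")]
    parts += [FLAGS[c] for c in stat[1:] if c in FLAGS]
    return ", ".join(parts)

print(decode_stat("Ss+"))  # interruptible sleep, session leader, foreground
print(decode_stat("Z"))    # zombie
```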
Tuesday, 28 July 2009
Zenoss - Open Source Monitoring
Zenoss is a fantastic open source monitoring solution. I have been running Zenoss Core, the free (no support contract) version, for about 4 months now. I am currently monitoring pretty much everything, including but not limited to Linux, Windows, AIX, Mac, and BSD servers. Most services on these servers are monitored, like databases, mail, web, proxy, SAN, VMware, you name it. The hardware aspects of these devices are also monitored, including blade centers, SAN, network devices, etc.
Zenoss has proved to be the solution to all my monitoring. It has made the use of SNMP really worth it.
For more information about Zenoss, check the site http://www.zenoss.com/community/docs
What's good about Zenoss is that it has very active forums, with users contributing "ZenPacks", which are packages that add new functionality to Zenoss Core and Enterprise.
With ZenPacks, you don't need any special skills to implement monitoring for a certain type of device. You just install the ZenPack, follow its simple instructions, and you are done.
I recently added a ZenPack to monitor MGE UPS devices to the community. You can find the zenpack here
vmware home lab up and running
After fiddling with various hardware, I finally managed to get my ESXi server up and running.
I installed the additional 4GB of RAM in my HP ML115 box. My total memory is now 5GB.
NOTE: you have to install identical DIMMs in alternating slots, e.g. slot1 -> 2GB type A; slot2 -> other type B; slot3 -> 2GB type A; slot4 -> other type B.
The ESXi CD installed like a dream. It detected everything, including the hard drives and network card, and I didn't have to do anything. Within seconds ESXi was installed and I was configuring it.
I had not bothered to check how I was going to manage my configs; I had hoped the VI Client would be cool. However, connecting to my ESX host from my Vista laptop using the VI Client was slow, for two reasons. First, my Vista laptop (Dell Vostro 1400, Vista Business, 160GB, 3GB RAM, Intel Core 2 Duo T56) has always had performance problems and it sucks! I don't have time to fiddle with Vista and I have always contemplated throwing it away; only, my loved ones sometimes want to play Windows games, movies and music on it, and it is loaded with proprietary software for managing my TV, etc. etc. Secondly, I am connecting via my home wireless network, and my wireless hub is a modest 100MB Huawei unmanaged wireless router. I know this sucks and I intend to get serious very soon, but for now that's it, and it is "painfully" working.
I discovered the magic juju to enable the limited SSH service console. Now I connect to my ESX host using an SSH client and can edit my config files easily. If I had time in this world, oh boy, I tell you I would hack this "unsupported" console and build my own custom management plugins. I suspect it is also possible to "upgrade" this minimalistic console, although a great deal of work and prior VM-kernel knowledge is needed.
The 60-day evaluation licence has more features than the free licensed version, so if you want to play around with ESXi, it's a good idea to delay licensing it until towards the expiration (if you don't want to re-install again).
I grabbed a cheap 19" widescreen HP monitor (sub-£100) but I am not using it yet. I will probably keep it, as my cluster will grow beyond this solo setup.
I also bought a Logitech Cordless Desktop S520 wireless desktop, and it worked out of the box with my ESX; it was just a plug-and-play thing.
Thursday, 16 July 2009
Hard drive and memory
I ordered an additional hard drive for my datastore.
I ordered the Western Digital Caviar Black 640GB. It's said to be the best sub-terabyte drive on the market. I got it cheap for £51 inc. VAT and shipping from ebuyer.com.
From http://compreviews.about.com/od/storage/tp/SATAHardDrives.htm
"If you don't need a huge hard drive but are looking for a solid performing drive that is a bit more affordable, then the Western Digital Caviar Black 640GB is the choice. The drive actually uses the same platters used by the terabyte Caviar Black version, but instead uses two rather than three platters. The result, a drive with the same level of performance but uses a bit less energy and is more affordable to those that don't necessarily need to store as much data."
This is a SATA II, 3Gb/s, 7200rpm drive with an 8.5ms average seek time and a 32MB cache.
I have also received my memory from www.overclockers.co.uk. I ordered the Corsair XMS2 4GB (2x2GB) DDR2 PC2-6400C5 TwinX Dual Channel kit (TWIN2X4096-6400C5).
http://www.overclockers.co.uk/showproduct.php?prodid=MY-136-CS&groupid=701&catid=8&subcat=
Note that these have heat sinks for performance and reliability. I got them for £42 including VAT and shipping.
Wednesday, 15 July 2009
Vmware Home Lab Project
My HP ML115 G5 arrived. I am yet to unbox it because lately I haven't had time to set it up. I have also ordered more memory and a bigger hard drive, and I will scavenge for a good monitor to use.
Tuesday, 7 July 2009
First things first - Hardware
I am planning to buy a very low-cost HP ProLiant ML115 quad-core server. I am looking at prices of £200 and below. If anybody knows where I can get good offers, please holla. My main goal is to run ESXi for my home lab. I am hoping to host about 10 VMs: 4 Windows and 6 Linux.
I will populate this box with 4x2GB of RAM for a max of 8GB.
I am also considering hardware RAID, but at this moment I am not sure whether it will really be important.
One other thing I will fiddle around with is the HDDs. I have seen a very lucrative offer online for a Samsung EG 5400rpm 1TB drive for £60. Guys, advise??
This is basically the summary of my new project.