Archive for the ‘work’ Category

Linux to the rescue again

I have to keep a Windows XP VM kicking around that I almost never use, just for those nagging few Windows apps that the smart developers didn’t make at least a web version for. Well, I needed to do some modifications to a project on MS Project Server (Firefox/Linux compatibility in the next release, BTW), so I fired up the XP VM to find that it was effectively out of disk space. Back when I built it 2+ years ago I only made it an 8GB VM, and with all the little proprietary apps added over the years it has just gotten full.

A quick Google search on the subject showed that I could, indeed, increase the drive space in the vmdk with the “vmware-vdiskmanager” command (VMware Server 1 – I told you this VM was old). I simply went to my virtual machines directory (where the vmdk files are stored) and issued “vmware-vdiskmanager -x 12gb -t 1 winxp.vmdk”. This says to extend (-x) the volume to 12GB and that the volume type (-t 1) is split into 2GB files. The command did its job in just a few seconds and presented me with a warning that I would need a third-party program in the virtual machine to expand the partition there to get use of the new free space.
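
For the record, the whole fix was that one command, run from the directory holding the vmdk files (the path below is just an example):

cd ~/vmware/winxp                              # example path; use wherever your vmdk files live
vmware-vdiskmanager -x 12gb -t 1 winxp.vmdk    # -x extends to 12GB, -t 1 means split 2GB files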

I learned from my favorite Windows admin that there is a disk management utility in XP that *can* do this, however, not on the system partition, which is what I needed. I just happened to have an Ubuntu 9.10 ISO handy and told the XP VM to boot that up instead. From there I started up GParted and quickly told it to extend the partition to fill all the remaining free space on the volume. I clicked on the green checkmark to tell GParted to “Go” and off it went. The entire resize took maybe 10 seconds. It’s just amazing to me. I remember when Linux couldn’t even figure out what an NTFS partition was, and here I was fixing one in mere seconds.

Needless to say, only a minute later I had my Windows XP VM booting up and working in its newly extended NTFS partition. Once again, Linux saved the day!

Saturday, November 21st, 2009

Throw some Rocks at it!

[Image: Ganglia cluster monitoring]
One of the parts of my day job is dealing with and managing our HPC cluster. This is an 8-node Rocks cluster that was installed maybe a week after I started. Now, I was still a bit green at that point and failed to get a good grasp on some things at the time, like how to maintain and upgrade the thing, and I have recently been paying for that 🙂

Apparently, the install we have doesn’t have a clear-cut way to apply errata and bug fixes; it was an early version of the cluster software. Well, after some heated discussions with our Dell rep about this, I decided what I really needed was a bit of research to see what the deal was and whether I could get us upgraded to something better and more current.

Along came my June 2009 issue of Linux Journal, which just happened to have a GREAT article in it about installing your very own Rocks cluster (YAY!). Well, I hung on to that issue with the full intention of setting up a development/testing cluster when I had the chance. And that chance came just the other day.

Some of you probably don’t have a copy of the article, and I needed to do some things a bit differently anyhow, so I am going to try to summarize here what I did to get my new dev cluster going.

Now, what I needed is probably a little different from what most people will need, so you will have to adjust things accordingly, and I’ll try to mention the differences as I go along where I can. First off, I needed to run the cluster on RedHat proper rather than CentOS, which is much easier to get going with. I am also running my entire dev cluster virtually on an ESX box, while most of you will be doing this with physical hardware.

To start things off I headed over to the Rocks Cluster website, where I went to the download section and then to the page for Rocks 5.2 (Chimichanga) for Linux. At this point, those of you who don’t specifically need RedHat should pick the appropriate version of the Jumbo DVD (either 32- or 64-bit). What I did was grab the ISOs for the Kernel and Core Rolls. Those two CD images plus my DVD image for RHEL 5.4 are the equivalent of the one Jumbo DVD ISO on the website, which uses CentOS as the default Linux install.

Now at this point you can follow the installation docs there (which are maybe *slightly* outdated?), or just follow along here, as the install is really pretty simple. You will need a head node and one or more compute nodes for your cluster. Your head node should have 2 network interfaces and each compute node 1. The idea here is that the head node will be the only node of your cluster directly accessible on your local area network, and it will communicate with the compute nodes on a separate private network. Plug the eth0 interface of every node, head and compute alike, into a separate switch, and plug eth1 of your head node into your LAN. Turn on your head node and boot it from the Jumbo DVD or, for the RHEL people, from the Kernel CD.

The Rocks installer is really quite simple. Enter “build” at the welcome screen and soon you will be at the configuration screen. There you will choose the “CD/DVD Based Rolls” selection, where you can pick from your Rolls. I chose everything except the Sun-specific stuff (descriptions of which Rolls do what are in the download section). Since I was using RHEL instead of CentOS from the Jumbo DVD, I had to push that “CD/DVD” button once per CD/DVD and select what I needed from each one.

Once the selections are made, the installer asks you for information about the cluster. Only the FQDN and cluster name are really necessary. After that you are given the chance to configure your public (LAN) and private network settings, your root password, time zone and disk partitioning. My best advice here would be to go with the defaults where possible, although I did change my private network address settings and they worked perfectly. Letting the partitioner handle your disks is probably best too.

A quick note about disk space: if you are going to have a lot of disk space anywhere, put it on the head node, as that space goes into a partition that is shared with the compute nodes. Also, each node should have at least 30GB of hard drive space to get the install done correctly. I tried with 16GB on one compute node and the install failed!

After all that (which really is not much at all), you just sit back and wait for your install to complete. After completion, the install docs tell you to wait a few minutes for all the post-install configs (running behind the scenes, I guess) to finish up before logging in.

Once you are at that point and logged into your head node, it is absolutely trivial to get a compute node running. First, from the command line on your head node, run “insert-ethers” and select “Compute”. Then power on your compute nodes, one at a time, making sure each is set to network boot (PXE). You will see the MAC address and compute node name pop up on your insert-ethers screen, and shortly thereafter the node will install itself from the head node, reboot, and you’ll be rockin’ and rollin’!

Once your nodes are going, you can get to that shared drive space at /state/partition1. You can run commands across the hosts by doing “rocks run host uptime”, which gives you an uptime for every host in the cluster. “rocks help” will help you out with more commands. You can also ssh into any one of the nodes by simply doing “ssh compute-0-1”, or whichever node you want.
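
Spelled out as you’d type them on the head node, those are:

rocks run host uptime    # run uptime on every host in the cluster
rocks help               # list the rest of the rocks command set
ssh compute-0-1          # log straight into an individual compute node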

Now, the only problem I have encountered so far was a compute node that didn’t want to install correctly (probably because I was impatient). I tried reinstalling it and it somehow got a new node name from insert-ethers. In order to delete the bad info in the node database that insert-ethers maintains, I needed to do a “rocks remove host compute-0-1” and then a “rocks sync config” before I was able to make a new compute-0-1 node.
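
So, for anyone in the same boat, the cleanup amounted to:

rocks remove host compute-0-1    # drop the stale node record from the database
rocks sync config                # push the change out to the cluster configs

After that, insert-ethers happily handed out the compute-0-1 name again.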

So now you and I have a functional cluster. What do you do with it? Well, you can do anything on there that requires the horsepower of multiple computers. Some things come to mind, like graphics rendering, and there are programs and instructions on the web for doing those. I ran Folding@home on mine: with a simple shell script (sketched below) I was able to set up and start Folding@home on all my nodes. You could probably do most anything the same way. If any of you find something fantastic you like to run on your cluster, be sure to pass it along and let us know!
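
My script was nothing fancy. Here’s a minimal sketch of the idea, assuming three nodes named in the usual Rocks style and the Folding@home client unpacked on the shared partition (the fah directory and fah6 binary names here are just illustrative; adjust to whatever you downloaded):

#!/bin/bash
# Start Folding@home on each compute node.
# Adjust NODES and FAH to match your own cluster - these values are examples.
NODES="compute-0-0 compute-0-1 compute-0-2"
FAH=/state/partition1/fah

for n in $NODES; do
    # launch the client in the background on the node, logging per-host
    ssh "$n" "cd $FAH && nohup ./fah6 > fah.\$HOSTNAME.log 2>&1 &"
done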

Friday, November 13th, 2009

Tools of the trade


Just what kinds of tools do you need to do a systems administrator’s job? I am talking about actual hand tools here, not fancy laptops, big brains or large amounts of your favorite caffeinated beverage and pizza. Surprisingly, I use very few.

The first thing I picked up is a little toolkit. I don’t think you need to spend a lot of cash on it (mine was less than $6, from Microcenter btw), but it needs a few important pieces in it. The most important, probably, is a halfway decent screwdriver with at least a small selection of bits. The kit I bought has a regular screwdriver with an extension and bit set, plus a small selection of jeweler’s screwdrivers. Although I hardly ever use jeweler’s screwdrivers, if your glasses happen to fall apart or something, they sure are handy to have around! The kit also includes a pair of tweezers, which I have never touched, and two more quite important tools: a pair of side cutters and a pair of needle-nose pliers. You’d be surprised how handy both of those are.


The other must-have is a pocket knife. Really. I cannot tell you how many times a day I reach for my pocket knife. I use it for everything from opening boxes and cutting strapping/cable ties/old wires to perforating the film on my lunch before I pop it in the microwave (yes, I wash it off first). Some guys carry around a Leatherman or a Swiss Army knife with all kinds of screwdrivers and other things attached, but my preference is single-task tools. They just seem more rugged, easier to use and better suited to daily use. To that end I picked a decent, inexpensive little pocket knife, a Winchester Parfive, which was well under $20.

The only other tool I can think of that might enjoy wide use among systems administrators is a good ratcheting Telemaster cable crimper. While I don’t really use one at my current sysadmin job, I used to use one almost daily at my old job. Do yourself a favor and buy a good quality tool here, with a comfortable handle. After you squeeze it a few hundred times you’ll understand why 🙂

I’m not the final authority on the subject, though, so I am also interested to know what you use yourself, and whether you think I may have forgotten something. Just let me know by leaving a comment here or sending me an email in the usual manner.

Monday, November 9th, 2009

FIRE!

Even though it’s a day late, I thought I would share that my train caught on fire on the way into work on the 4th. Nobody was hurt (that I know of) but plenty of people were ticked off and it sure was stinky! For those that don’t already know, it just so happens that SEPTA is on strike, making the Regional Rail trains (the ones I ride) completely packed to overflowing and, of course, it’s tough to get to your destination when your train is out of commission and there are no buses running because of the strike. SEPTA seems to make money hand over fist while providing the poorest service you can imagine. It really gets under your skin after a while.

Anyway, the pics are here: Train Fire

Thursday, November 5th, 2009

Who needs work?

I remember several people at OLF recently telling me that they were hurting for / looking for some work. Well, I get hounded by headhunters quite often and would gladly pass stuff on to those people who are interested, but I need to know who you are 🙂 Send me an email at linc dot fessenden at G mail dot com and let me know.

Monday, October 26th, 2009

Command Line Mail

Here’s one for the books:

I have a script that monitors a process, and I want it to email my cellphone (to page me) if things don’t look just right. The problem is that just using “mail” or “mailx” in a script fails because my carrier divines whether or not my return address is real. Obviously, a From field that looks like “root@localhost” is just not getting through.

What’s the solution? Enter “mutt”.

Mutt, it seems, will let you specify your From field in the ~/.muttrc file. Also, it works pretty much the same on the command line as mail or mailx. So, I set up my ~/.muttrc like so:

set realname = "menotyou"
set from = "menotyou@myrealdomain.com"
set hostname = "myrealdomain.com"
set use_from = yes

And then, in the script I send mails like so:

echo "Wow I can send mail!" | /usr/bin/mutt -s "A present for you" myphoneaddr@provider.com

All on one line, of course, but BINGO, all of a sudden my cell phone springs to life at all hours of the night with information I don’t want to know 🙂
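
In case it helps, the watchdog part of the script is equally simple. A rough sketch, with the process name as a placeholder (the phone address is the same one from above):

#!/bin/bash
# Page me if the monitored process disappears.
# "mydaemon" is a made-up name - substitute the real process you watch.
if ! pgrep mydaemon > /dev/null; then
    echo "mydaemon died on $(hostname)" | \
        /usr/bin/mutt -s "ALERT: mydaemon down" myphoneaddr@provider.com
fi

Drop that in cron every few minutes and you too can be woken up at 3am.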

Enjoy!

Thursday, June 4th, 2009

New Laptop


Ahh, the heresy! Yup, I finally got off my butt and bought a new laptop and IT’S A MAC! A MacBook 5,2, dual-core 2GHz, initially with 2GB of RAM.

How can this be, you may ask. I thought you were a Linux guy! Well, rest assured, I still am.

Although I am no Apple fanboy (you know, the kind that wears a turtleneck with their suit), I do admire the hardware and have for some time. They make a nice looking machine. Their machines also retain their value better than any other manufacturer’s, which is a big bonus.

The other thing that helped me make this decision is that I am trying hard to leverage myself into doing some Apple server support at work as well. You know the drill: the more I can offer my employer, the longer, easier and more lucrative my stay there will be. That’s how I stayed 13 years at my last job. I was the go-to guy.

My first impressions after having it for almost a week? It’s pretty fast. In fact, there was some wow factor the first time I loaded my intranet page. It popped up so fast it was as if it were a local document! I also think OS X 10.5mumble is better than 10.2, 3 or 4. It just seems a little slicker – it’s hard to quantify, it just does.

Of course, the first thing I did was install the apps on OS X that make it livable for me. The short list is Firefox (Safari? Ick, although it’s MUCH better now than under 10.4), Thunderbird (Mail.app can’t hold a candle to it), OpenOffice (best office suite out there), VLC (hey, a guy has gotta be able to watch his vids, and QuickTime doesn’t cut the mustard), and Cisco VPN (gotta be able to work). After those, things started to get livable on the machine.

My future plans, of course, involve installing a dual boot of Linux on this machine, and this is where I can use your help. I am looking for opinions and up-to-date howtos on different distributions to try on here. Everyone always jumps right on the Ubuntu bandwagon, but perhaps there are some other fun ones out there to try as well 🙂 Just shoot me an email and let me know what you are using and how it works!

Sunday, March 15th, 2009

Building an rpm to install script files

On an rpm-based system, say CentOS, first make sure that the rpm-build package is installed.
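
On CentOS that’s a one-liner (run as root):

yum install rpm-build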

In your user account, not as root (bad form and all), make the following directories:


mkdir -p ~/rpm
mkdir -p ~/rpm/BUILD
mkdir -p ~/rpm/RPMS
mkdir -p ~/rpm/SOURCES
mkdir -p ~/rpm/SPECS
mkdir -p ~/rpm/SRPMS
mkdir -p ~/rpm/tmp

And create an ~/.rpmmacros file with the following in it:


%packager Your Name
%_topdir /home/YOUR HOME DIR/rpm
%_tmppath /home/YOUR HOME DIR/rpm/tmp

And now comes the fun part. Go to the ~/rpm/SOURCES directory and create a working package directory under it, named with the package name, a dash, and the major revision number; for example, ~/rpm/SOURCES/linc-1. Into that directory, copy all the scripts/files that you wish to have in your package. For example, I might have a script in there called myscript.sh that I want installed as part of the linc package.
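
Using my example names, that boils down to:

mkdir ~/rpm/SOURCES/linc-1
cp /path/to/myscript.sh ~/rpm/SOURCES/linc-1/    # copy in whatever scripts belong in the package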

Once that is done, make a tarball of that directory, in ~/rpm/SOURCES, named programname-revision.tar.gz. Using my previous example, it would be:

tar czvf linc-1.tar.gz linc-1/

Now for the glue that makes this all stick together. Go to your ~/rpm/SPECS directory and create a spec file for your package. We’ll call mine linc.spec and it’ll look like this:


Summary: My first rpm script package
Name: linc
Version: 1
Release: 1
Source0: linc-1.tar.gz
License: GPL
Group: MyJunk
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-buildroot
%description
Make some relevant package description here
%prep
%setup -q
%build
%install
install -m 0755 -d $RPM_BUILD_ROOT/opt/linc
install -m 0755 myscript.sh $RPM_BUILD_ROOT/opt/linc/myscript.sh
%clean
rm -rf $RPM_BUILD_ROOT
%post
echo " "
echo "This will display after rpm installs the package!"
%files
%dir /opt/linc
/opt/linc/myscript.sh

A lot of that file is pretty self-explanatory, except the install lines and the lines after %files. The install lines tell rpm what to install where, and with what permissions. You also have to do any directory creation there (the line with -d in it). The entries after %files are similar, in that they tell rpm’s database which files belong to this package. The %dir marks a directory; otherwise the files are listed with their complete paths.

Now that you have all that together, the last thing you need to do is create the package. Just go to ~/rpm and do an “rpmbuild -ba SPECS/linc.spec”. You will end up with ~/rpm/RPMS/noarch/linc-1-1.noarch.rpm if all goes well.
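
Putting it all together, the build and a quick test install look like this (the rpm install needs root):

cd ~/rpm
rpmbuild -ba SPECS/linc.spec
rpm -ivh RPMS/noarch/linc-1-1.noarch.rpm    # as root; you should see the %post message print

If the %post echo lines show up and myscript.sh lands in /opt/linc, the package works.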

Monday, February 16th, 2009

The Mothman Lives!


Many of you know that I am strangely fascinated by the sorts of things people name their servers after. I, personally, use cryptids. I have machines named things like Sasquatch, Nessie, Yeti, Chupacabras and the like.

Last night I had to do some work. One of the things I needed to take care of was getting some sort of development environment going at home so I could work on some work-related projects in a less confined atmosphere. A lot of those projects need an rpm-based machine, which I didn’t have.

I decided I would set up VMware Server, which I use all the time at work, but this time I would use the Server 2 product. I have been using Server 1.6 for a long time and love it. It’s fast, easy to use, and reliable. Server 2 came out some time ago, but I hadn’t had a need to upgrade, so this seemed like the perfect time.

I used the tutorial over at HowtoForge, which steps you through things really well. The only real problems I encountered were that I couldn’t get to the license page for VMware for some reason (I did happen to have a couple of spares from a previous one, though), and that during the install I was prompted that my gcc version didn’t match my kernel version, but I chose to continue anyhow and all was well.

My initial impressions were mixed. I kind of like the interface being web based now, which is pretty convenient. It is, however, slower. The other bothersome thing was that running VMware Server on my 3GHz machine with 3GB of RAM used *all* of its resources and brought the machine to its knees. This really frustrated me until I decided to just reboot the machine… For some reason this cleared up a lot of my problems with resource utilization and things started behaving better. I am not sure why, but my advice to anyone trying a new install would be to reboot after the install, before you actually start trying to use VMware 🙂

Once that was all taken care of, I set about getting a VM running. I picked CentOS, for obvious reasons. Unfortunately, I only had a CentOS 5.1 DVD image available (I usually try to get the latest and greatest), but I decided to use it anyhow rather than spend time downloading the newest one. I started setting up the new VM, which I called Mothman, got to the installation media section, and hit a small speed bump. I specified that I wanted to use an ISO image, but the browse function directed me only to some strange volume where there was nothing; I couldn’t pick my home directory for the ISO file. As it turns out, the default volume that VMware looks in is the directory you picked during the install to hold your VMs. In my case, that was the default /var/lib/vmware/Virtual Machines/. Once I dropped the ISO there, I could find and use it.

The install went off without a hitch. The new popout console is pretty slick and works well. All in all, I liked it and would recommend it. I still think I need a way faster machine to host this stuff, but that’s another story altogether. Even so, with my host and VM both running, top currently reports my system usage as “load average: 0.12, 0.14, 0.09” and I haven’t touched swap either. Not too shabby!

Thursday, February 12th, 2009

Pictures anyone?

A buddy of mine at work is a very good budding photographer. Occasionally he will put one of his pics up as wallpaper on his machine so I can catch a glimpse. I have been bugging him to get some online presence going so I can see more of his work, and finally, this morning, he threw me a URL.

Check it out!

http://www.mike-wagner.com/

and his newest stuff:

http://www.mike-wagner.com/new/index.html

Friday, December 19th, 2008