Archive for the ‘Security’ Category

Review: Penetration Testing with the Bash shell by Keith Makan – Packt Pub.

Penetration Testing with the Bash shell

I’ll have to say that, for some reason, I thought this book was going to be some kind of guide to using only bash itself to do penetration testing. It’s not that at all. It’s really more like doing penetration testing FROM the bash shell, or command line if you like.

The first two chapters take you through a solid amount of background bash shell information. You cover topics like directory manipulation, grep, find and understanding some regular expressions: all the sorts of things you will appreciate knowing if you are going to be spending some time at the command line, or at least a good topical smattering. There is also some time spent on customizing your environment, like prompts and colorization and that sort of thing. I am not sure it’s really terribly relevant to the book’s topic, but still, as I mentioned before, if you are going to be spending time at the command line, this is stuff that’s nice to know. I’ll admit that I got a little charge out of it, because my foray into the command line was long ago on an amber phosphor serial terminal. We’ve come a long way, Baby 🙂

The remainder of the book deals with some command line utilities and how to use them in penetration testing. At this point I really need to mention that you should be using Kali Linux or BackTrack Linux, because some of the utilities they reference are not immediately available as packages in other distributions. If you are into this topic, then you probably already know that, but I just happened to be reviewing this book on a Mint system, away from my test machine, and could not immediately find a package for dnsmap.

The book gets topically heavier as you go through, which is a good thing IMHO, and by the time you are nearing the end you have covered standard bash-arsenal commands like dig and nmap. You have spent some significant time with Metasploit, and you end up with the really technical subjects of disassembly (reverse engineering code) and debugging. Once you are through that you dive right into network monitoring, attacks and spoofs. I think the networking info should have come before the code hacking, but I can see the logic in their roadmap as well. Either way, the information is solid and sensible, it’s well written and the examples work. You are also given plenty of topical reference information should you care to continue your research, and this is something I think people will really appreciate.

To sum it up, I like the book. Again, it wasn’t what I thought it was going to be, but it surely will prove to be a valuable reference, especially combined with some of Packt’s other fine books like those on BackTrack. Buy your copy today!

Wednesday, July 16th, 2014

BackTrack 5 Cookbook: Quick answers to common problems

BackTrack 5 Cookbook


You know, sometimes, just sometimes something fortuitous happens to me. This was one of those times.

I was contacted by my friends over at Packt Publishing to review their new book on BackTrack. Of course I said sure. Hey, I am a Linux junkie after all! It had actually been quite a while since I had played with BackTrack and this gave me *just* the incentive I needed, but let me tell you a bit about the book…

The book is a “cookbook” style book which gives you “recipes”, or guided examples, of common problems/scenarios and their fixes. The book is well written, a good reference for a pro, and a great tutorial for the beginner. By beginner I am assuming that the person *does* have Linux experience, just not BackTrack experience, as some command-line comfort is pretty much a necessity for this kind of work. The first two chapters start you out exactly the way they should: by installing and customizing the distribution. What they don’t tell you is that it takes a good while to actually download the distro, but that is beside the point.

Once you actually get things running well, you can follow the book through some really decent examples from Information Gathering all the way through Forensics. The book covers all manner of subjects and applications in between, such as using NMAP, Nessus, Metasploit, UCSniff and more. I mentioned that this was fortuitous for me, and that was because one of the things the book covered was the Hydra program, and, as it turns out, that was the perfect tool for me to use in remediating some password synchronization issues across several hundred servers.

Anyone using a computer should have at least a basic understanding of keeping their valuable data safe, whether that data is for a multi-million dollar company or your own invaluable family photographs. This book goes to great efforts to not only explain how to detect, analyze and remedy such issues, but also gives important background about just how systems become vulnerable to begin with. For that reason alone, it’s worth the read. If you are actually a sysadmin, this information is a must. For $23 for the ebook version, it’s a no-brainer. Good book. It helped me out and I’ll wager that if you give it a read it’ll do the same for you!

Monday, February 18th, 2013

Lost your Mint password?

First time this happened! A coworker asked me today how to get into his Linux Mint box after he forgot his password. Of course I rattled off the old GRUB way to get things done, but, what?? This is GRUB 2! Not so fast there! Turns out it’s quite different.

You hold down the shift key while booting to get to the grub menu.
You hit ‘e’ to edit your boot options.
You append “rw init=/bin/bash” to the very end of the kernel line (the one beginning with “linux”).
You press F10 to boot.

Once booted you are dropped immediately into a shell prompt where you can change your password with the “passwd username” command. Reboot and you’re home free!
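For reference, the edit amounts to changing the tail end of the “linux” line in the boot entry. A sketch (the kernel version and UUID here are made-up examples; your entry will differ):

```
linux /boot/vmlinuz-3.2.0-23-generic root=UUID=1234-abcd ro quiet splash
```

becomes

```
linux /boot/vmlinuz-3.2.0-23-generic root=UUID=1234-abcd rw init=/bin/bash
```

The “rw” matters: it gets the root filesystem mounted read-write so passwd can actually update /etc/shadow.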

Monday, August 22nd, 2011

Book Review – BackTrack 4: Assuring Security by Penetration Testing


Right after I got this book, BackTrack 5 was released. My intention was to go through the book and compare/contrast things to BackTrack 5. Well, we all know the saying about the best laid plans…

That being said, I believe the information in this book to be directly applicable to BackTrack 5 and a good reference for it!

The book is a great tutorial and walk-through on how to use BackTrack for security and penetration testing, but, more than that, it offers good information about the field in general. You will go through software installations, software overviews, methodologies, tests and testing, and my favorite part, reporting and deliverables, a MUST for professional computer people.

I think this is an excellent book to add to your knowledge arsenal, and you may be surprised at just how much you didn’t know. I know I was. This really is an important subject for computer professionals and I can’t think of a better way to brush up than by grabbing a copy today. Thumbs up!

Saturday, June 11th, 2011

Why I use OSSEC

There are some great reasons to use OSSEC. One of them is you get emails like these I received this morning:

Jun 10 09:24:51 pukwudgie sshd[28651]: Failed password for invalid user pureftp from port 45542 ssh2
Jun 10 09:24:48 pukwudgie sshd[28651]: Invalid user pureftp from
Jun 10 09:24:29 pukwudgie sshd[28630]: Failed password for invalid user tom from port 37388 ssh2
Jun 10 09:24:28 pukwudgie sshd[28630]: Invalid user tom from
Jun 10 09:24:11 pukwudgie sshd[28628]: Failed password for invalid user peter from port 57468 ssh2
Jun 10 09:24:09 pukwudgie sshd[28628]: Invalid user peter from
Jun 10 09:23:52 pukwudgie sshd[28610]: Failed password for invalid user thom from port 49315 ssh2
Jun 10 09:26:39 pukwudgie sshd[28730]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost= user=root
Jun 10 09:25:43 pukwudgie sshd[28690]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Jun 10 09:25:24 pukwudgie sshd[28672]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Jun 10 09:25:05 pukwudgie sshd[28653]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Jun 10 09:24:48 pukwudgie sshd[28651]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Jun 10 09:24:28 pukwudgie sshd[28630]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Jun 10 09:24:09 pukwudgie sshd[28628]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Jun 10 09:44:08 pukwudgie sshd[29440]: pam_succeed_if(sshd:auth): error retrieving information about user recruit
Jun 10 09:44:46 pukwudgie sshd[29478]: pam_succeed_if(sshd:auth): error retrieving information about user office
Jun 10 09:45:25 pukwudgie sshd[29497]: pam_succeed_if(sshd:auth): error retrieving information about user tomcat
Jun 10 09:45:05 pukwudgie sshd[29480]: pam_succeed_if(sshd:auth): error retrieving information about user samba
Jun 10 09:45:42 pukwudgie sshd[29514]: pam_succeed_if(sshd:auth): error retrieving information about user webadmin
Jun 10 09:47:02 pukwudgie sshd[29555]: Failed password for invalid user spam from port 45351 ssh2
Jun 10 09:46:59 pukwudgie sshd[29555]: Invalid user spam from
Jun 10 09:46:43 pukwudgie sshd[29538]: Failed password for invalid user ssh2 from port 37198 ssh2
Jun 10 09:46:40 pukwudgie sshd[29538]: Invalid user ssh2 from
Jun 10 09:46:03 pukwudgie sshd[29518]: Failed password for invalid user jambo from port 49116 ssh2
Jun 10 09:46:01 pukwudgie sshd[29518]: Invalid user jambo from
Jun 10 09:45:45 pukwudgie sshd[29514]: Failed password for invalid user webadmin from port 40961 ssh2

Etcetera, etcetera…

Friday, June 10th, 2011

Server Build

Last night on the TechShow I was asked about providing some info on a decent default server build. Here are some quick notes to get people going. Adjust as necessary.

Just for ease here, let’s assume you are installing CentOS 5, a nice robust enterprise-class Linux for your server needs.

CentOS 5 / RHEL 5 / Scientific Linux, etc., does a really great job picking the defaults, so sticking with those is just fine and has worked well for me on literally hundreds of servers.

  • Let the partitioner remove all existing partitions and choose the default layout without modification.
  • Configure your networking appropriately, and make sure to set your system clock for the appropriate timezone (no, I do not generally leave my hardware clock set to UTC).
  • When picking general server packages I go for web server and software devel. I do not, generally, pick virtualization unless there is a specific reason to. I find that the web and devel meta server choices provide a robust background with all the tools I need to set up almost any kind of server I want without having to dredge for hundreds of packages later on.
  • The install itself at this point should take you about 15 minutes depending on the speed of your hardware.
  • Once installed, reboot the server and you should come to a setup agent prompt. Select the firewall configuration. Disable the firewall and SELinux completely (trust me here). Once that is done, exit the setup agent (no need to change anything else here), login to the machine as root and reboot it. This is necessary to completely disable SELinux.
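For reference, the persistent half of that SELinux change comes down to one line in /etc/selinux/config. A sketch of the file after the edit (the setup agent does the same thing for you):

```
# /etc/selinux/config
SELINUX=disabled        # was: SELINUX=enforcing
SELINUXTYPE=targeted
```

The reboot is what makes it stick, since SELinux state is set at boot.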

From this point on it’s all post install config…:

  • Add any software repositories you need to.
    I not only have my own repo for custom applications, but also have a local RedHat repo for faster updates and lower network strain/congestion.
  • Install your firewall.
    I use an ingress and egress firewall built on iptables. While mine is a custom written app, there are several iptables firewall generator apps out there you can try.
  • Install your backup software.
    Doesn’t matter if this is a big company backup software like TSM or CommVault, or you are just using tar in a script. Make sure your system is not only being backed up regularly, but that you can actually restore data from those backups if you need to.
  • Add your local admin account(s).
    Don’t be an idiot and log into your server all the time as root. Make a local account and give yourself sudo access (and use it).
  • Fix your mail forwarding.
    Create a .forward file in root’s home directory (/root) and put your email address in it. Your server’s root emails will then be delivered to you, so you can watch the logwatch reports and any cron results and errors. This is important sysadmin stuff to look at when it hits your inbox.
  • Stop unnecessary services.
    Yes, if you are running a server you can probably safely stop the bluetooth and cups services. Check through what you are running with a “service --status-all” or a “chkconfig --list” (according to your runlevel) and turn off / stop those services you are not and will not be using. This will go a long way toward securing your server as well.
  • Install OSSEC and configure it to email you alerts.
  • No root ssh.
    Change your /etc/ssh/sshd_config and set “PermitRootLogin no”. Remember, you just added an admin account for yourself, you don’t need to ssh into this thing as root anymore. Restart your sshd service after making the change in order to apply it.
  • Set runlevel 3 as default.
    You do not need to have a GUI desktop running on your server. Run the gui on your workstation and save your server resources for serving stuff. Make the change in /etc/inittab “id:3:initdefault:”.
  • Fix your syslog.
    You really should consider having a separate syslog server. They are easy to set up (hey, Splunk is FREE up to so much usage) and it makes keeping track of what’s happening on multiple servers much easier (try that Splunk stuff – you’ll like it).
  • Set up NTPD.
    Your server needs to know what time it is. ‘Nuff said.
  • Install ClamAV.
    Hey, it’s free and it works. If you do ANYTHING at all with handling emails or fileshares for windows folks on this machine, you owe it to yourself and your users to run Clam on there to help keep them safer.
  • Do all your updates now.
    Before you go letting the world in on your new server, make sure to run all the available updates. No sense starting a new server instance with out of date and potentially dangerous software.
  • Lastly, update your logbook.
    You should have SOME mechanism for keeping track of server changes, whether it be on paper or in a wiki or whathaveyou. Use it RELIGIOUSLY. You will be glad someday you did.
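A few of the file edits above can be sketched in shell. This is an illustrative sketch, not a tested production script: it deliberately works against a staging tree so you can inspect the changes first, and the email address and seeded file contents are placeholders. Point STAGE at / and run as root only once you are happy with it.

```shell
#!/bin/sh
# Sketch of a few post-install edits, applied to a staging tree first.
# STAGE=/ (run as root) would apply them for real; all values are examples.
STAGE=${STAGE:-/tmp/postinstall-stage}
mkdir -p "$STAGE/etc/ssh" "$STAGE/root"

# Seed stand-in copies of the files (in real use these already exist).
[ -f "$STAGE/etc/ssh/sshd_config" ] || echo 'PermitRootLogin yes' > "$STAGE/etc/ssh/sshd_config"
[ -f "$STAGE/etc/inittab" ]        || echo 'id:5:initdefault:'   > "$STAGE/etc/inittab"

# Forward root's mail (use your real address).
echo 'you@example.com' > "$STAGE/root/.forward"

# No root ssh (restart sshd afterward on a real system).
sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' "$STAGE/etc/ssh/sshd_config"

# Default to runlevel 3, no GUI.
sed -i 's/^id:[0-9]:initdefault:/id:3:initdefault:/' "$STAGE/etc/inittab"
```

Running it with STAGE left at the default lets you diff the staged files against the real ones before committing to anything.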

Thursday, February 24th, 2011

Diagnosis: Paranoia

You know, there are just some things you do not need first thing on a Monday morning. This was one of them…

I came in and started reviewing my reports and was looking at an access report, which is basically a “last | grep $TheDateIWant” from over the weekend. I keep a pretty tight ship and want to know who is accessing what servers and when (and sometimes why). What I saw was monstrously suspicious! I saw MYSELF logged in to 3 different servers 3 times each around 5am on Sunday morning – while I was sleeping.

This is the kind of thing to throw you into an immediate panic first thing on a Monday morning, but I decided to give myself 10 minutes to investigate before completely freaking out.

The first thing I noticed was that the access/login times looked suspiciously like the same times I ran my daily reports on the machines. However, the previous week I had changed the user that runs those reports, and this was still saying it was me. I double, triple and quadruple checked and searched all the report programs to make absolutely sure there was no indication that they were still using my personal account (which was probably bad practice to begin with, btw). Then I scoured all the cron logs to see what was actually running at those times, and oddly enough, it was just those reports.

I looked through the command line history on those machines and checked again with “last | head” to see who was logging on to those machines. Nothing out of place, BUT with the “last | head” I was NOT listed as being on the machine on that date! So I ran the entire report command again, “last | grep $TheDateIWant”, and there I was, listed right under the logins of the report user.

Anyone catching this yet?

What I had stumbled upon were a few machines that are used so infrequently that the wtmp file, which is what the “last” command uses for data, had over a year of entries. My search of “last | grep ‘Oct 31’” was returning not only this year, but my own logins from last year as well.
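The gotcha is easy to reproduce, because last(1) prints no year column, so a grep for a month and day matches entries from every year still in wtmp. A contrived sketch (the username and host are made up; on util-linux systems, “last -F” prints full timestamps and sidesteps the ambiguity):

```shell
# Two fake "last" output lines: they could be a year apart,
# but nothing in the text distinguishes them.
cat > /tmp/last.out <<'EOF'
me   pts/0   10.0.0.5   Sun Oct 31 05:02
me   pts/0   10.0.0.5   Sun Oct 31 05:02
EOF

# The report-style grep happily counts both:
grep -c 'Oct 31' /tmp/last.out
```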


Moral of the story? Mondays stink – Just stay home!

Monday, November 1st, 2010


Prey is a lightweight application that will help you track and find your laptop if it ever gets stolen. It works in all operating systems and not only is it Open Source but also completely free.

That’s what their website says anyway.

You have to admit that it sounds quite intriguing. There are a lot of utilities around that you can *pay* for that offer some reasonable facsimile of helping you track your stolen laptop and get it back, but this is the first open source one I have come across.

Further inspection shows this to be “the real deal”. At least as far as I am concerned. I cannot yet comment on the mac/win versions of the software, but the Linux version is pretty slick.

Essentially, Prey runs through cron every 10 minutes by default, completely in the background, hidden from view. It checks for the existence of a specific website and if it doesn’t find this website (gets a 404 message), it starts grabbing information from your machine like ip addresses, screenshots, pics from your webcam, etc., and sends them either to Prey’s website for you to view, or directly to your email account. This is all information designed to help you track down where your laptop is, and identify who might have it.
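That check-in logic is easy to picture. The sketch below is purely illustrative and is not Prey’s actual code: it fakes the status check with a variable and only gathers a timestamp and hostname.

```shell
# Illustrative only: the "check a URL, report if missing" pattern.
# A real client would fetch its control URL; we fake the result here.
status=404   # pretend the control URL returned 404, i.e. "laptop is missing"

if [ "$status" = "404" ]; then
    report=/tmp/prey-style-report.txt
    {
        date        # when the report was generated
        uname -n    # which machine generated it
        # a real client would also grab IP addresses, screenshots, webcam pics
    } > "$report"
fi
```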

I tried it on my Ubuntu work laptop and the client is literally a drop-in deb package. It installed and asked me to run a control panel applet for configuration. This only really asked me for 2 pieces of identifying information, the API key and the device key, both of which were available to me after I registered (for free) on Prey’s website.

Once you are registered and get your device (laptop) listed on the website, you can tell Prey at any time that your laptop is missing by logging in, clicking on the appropriate device listing (they let you have 3 for free, btw), changing the “Missing” slide switch to “on” and hitting the update button at the bottom of the page. There are other options in there you can change to suit your needs as well. The next time your laptop can find an internet connection and check in, Prey will have it sending reports out so you can find it. I was pretty happy and impressed with how well it actually worked.

The only con I can think of with this program is the fact that I run Linux. Not that people won’t steal laptops with Linux on them, but I imagine that anyone who would steal one of my laptops would immediately install windows on it, thus rendering Prey useless. If I were to employ that auto-login stuff, it could perhaps stave off a would-be thief long enough for Prey to do its job, but I do like having to log in to my machines (just makes me feel more secure). It’s something to think about, and I will look into what other people have to say on the subject in Prey’s forums. That being said, however, I am still putting the software on my laptops. Hey, it can’t hurt, right?

Wednesday, January 13th, 2010


Late last week I noticed that my new Nagios server was not responding anymore. I checked it and it was down. Not only that, it was a VM on my test server, and the entire server was down as well. Arrrgh.

Usually I would use this as a segue to tell you all to remember to do your backups. Well, in this case I didn’t do them either. Hey, it’s a test VM server, right? Yeah, well, I am kicking myself about that anyhow. I had just gotten Nagios working really well, the way I wanted. Oh well, I guess I get to practice some more, right 🙂

Well, as it turns out, my server had a catastrophic drive failure. I did EVERYTHING to try and resuscitate this thing. To start with, it had no partition table at all. Luckily I had bought 2 of these servers, identically configured, so I checked the partition table of the good one and used fdisk to apply it to the broken one. After that I was able to fsck one partition, but as it would happen, that partition was only /boot. Feh. The other partition had lost all its superblock info. I couldn’t even use a backup superblock. Nada. I noticed that mkfs has a -S switch, which writes the superblock info on a partition without formatting or touching the inodes. I tried that and it appeared to be successful. At least I could run fsck on the partition now, and it was fixing inodes. YAY! Except that after a few hours of fixing, I still got nothing but a few system files in a pile under the lost+found directory. Shortly thereafter the drive lost its partition info again anyway. That’s life I guess.

So, it was off to Microcenter to get a new hdd. I brought that home and did a fresh CentOS 5.3 32 bit install and played with it a bit and thought to myself, hey, maybe I should run some kind of burn-in test on this server before I go investing a lot of time into it again.

That is where Sys_Basher comes in. Sys_Basher is a multithreaded memory and disk exerciser. That’s what the website says. It makes a pretty good burn in program by continually testing your memory and disk (which pushes on your cpu as well) for any length of time you specify. I kinda like it actually, and that is a good thing because there are woefully few burn-in or stress test type programs available to the Linux community. In fact, if you are a programmer and looking for a great project, you could generate a lot of traffic and interest by making one. Not that I don’t like Sys_Basher, mind you, but variety is the spice of life and certainly the way of open source!
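For the curious, a very crude write-and-verify loop in the same spirit can be sketched in shell. This is not Sys_Basher, just an illustration with the sizes scaled way down; a real burn-in would run for hours with far larger writes:

```shell
# Crude disk exerciser sketch: write random data, sync it out,
# then checksum the file twice and compare (a failing drive or bad
# RAM can surface as a mismatch or an I/O error here).
f=/tmp/burnin.dat
passes=3

for i in $(seq 1 $passes); do
    dd if=/dev/urandom of="$f" bs=1M count=8 conv=fsync 2>/dev/null
    sum1=$(md5sum "$f" | awk '{print $1}')
    sum2=$(md5sum "$f" | awk '{print $1}')
    if [ "$sum1" != "$sum2" ]; then
        echo "checksum mismatch on pass $i"
    fi
done
rm -f "$f"
```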

Anyway, I ran Sys_Basher overnight on my new machine, which passed with flying colors. Then, this morning, I decided that maybe I should run 64-bit Linux on this box. Some days I am so fickle, but I decided it would be in my best interest to change the OS before building a bunch of new test VMs on there 🙂

Maybe this time I’ll even back the darn thing up too! Wish me luck and, btw, do your backups!

Sunday, July 12th, 2009


Even though I wrote and use OSM, I also use Nagios at work alongside it. Actually, I administer Nagios there; however, I have never actually installed and configured it. It was in place before I started there.

That being said, my manager asked me how to get it installed and running today, as he wants to try using it at home. This sort of spurred me into setting it up at home tonight. It’s really nice having a server that can handle a few test VMs, by the way 🙂

I decided I would install it on CentOS, because I need to be able to get it running on RedHat for work, so off to Google I went. After a bit of searching I finally came across a WONDERFUL site which provides a quick and dirty script for getting Nagios installed and working lickety-split. It works perfectly, and the only adjustment I made to the script, other than changing the passwords in it, was to comment out the SELinux lines, because I already have SELinux disabled.

That really was it. Pretty simple. Of course the rub here is actually getting Nagios to monitor your systems, and that is probably beyond the scope of this post, which was really meant as a reference for that install script. Configuring Nagios from the command line is not for the faint of heart. The files you need to pay attention to end up in /usr/local/nagios/etc and /usr/local/nagios/etc/objects. Just keep in mind that the configs reference each other in a cyclical way, and you really need to pay attention. I found some good starter help at the bottom of this website for adding your first non-local machine. Once you get that working you’ll understand how to add more, but I still found it a bit of a frustrating experience for a few minutes.
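To give a flavor of what lives in that objects directory, here is a minimal, hypothetical host-plus-service definition for a first remote machine. The file name, host name and address are made up; the linux-server and generic-service templates and the check_ping command come with the stock sample configs:

```
# /usr/local/nagios/etc/objects/testbox.cfg (hypothetical file)
define host {
    use        linux-server          ; inherit the stock host template
    host_name  testbox
    alias      My first remote box
    address    192.168.1.50
}

define service {
    use                  generic-service   ; inherit the stock service template
    host_name            testbox
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
```

Remember to add a cfg_file= line for the new file in nagios.cfg and restart Nagios, or it will never be read. This is where the cyclical referencing bites you.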

I did note, however, that there are quite a few projects out there which claim to configure Nagios for you via a web interface. I hope to give them a shot or two in the coming days/nights. Let me know if any of you have tried any, and how they fare.

Monday, May 4th, 2009