
Monday, January 23, 2017

Installing and Using Google Authenticator for Two Factor Auth on a Checkpoint 750



Hi all, in Installing Kali Linux on a Checkpoint 750 SMB Gaia Embedded Firewall I dropped a hint about a reason to do this. Well, here is an interesting use case: we can create a free, stand-alone two factor authentication system for VPN users using Google Authenticator. BTW, I updated the Kali install write up. I forgot about mounting proc and sys. Head over there and check the update if you've not already.

For example, say you don't have an external radius server and/or user directory (LDAP, MS DC, etc.). Using this method you can have a working two factor authentication system that doesn't require connectivity to an external radius server. Granted, you can always just pull the radius config out of this write up as well. By the end of this the goal is to show how to put all this together.

Here are the moving parts we'll be using (FreeRADIUS, the Radius PAM module, and Google's libpam module), and the way this will all be tied together is the following.

A request comes into the firewall in the following form:

username
unix password + OTP


  1. The firewall forwards the request to FreeRADIUS (which is installed on the 750 in this case).
  2. FreeRADIUS passes the username and password to PAM via the Radius PAM module.
  3. PAM passes the username and password to the Google libpam module.
  4. Google's libpam module strips the OTP off the end of the password string and verifies it against the secret in that user's ~/.google_authenticator file (the check is local, no call out to Google needed). If the OTP is good, the module hands the remaining password (without the OTP) back to PAM.
  5. PAM then checks the password using the normal unix checks.

If everything is good, FreeRADIUS sends an Access-Accept back to the firewall and then you're golden!
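For example (these credentials are made up just to illustrate), if testuser's unix password is UnixPassword and the app currently shows 123456, the VPN client sends:

username: testuser
password: UnixPassword123456

The Google module peels the 123456 off the end, checks it, and hands UnixPassword to pam_unix.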

Now before we get too much further into this, let me give you a little warning: this will require some hacking. Why is that, you ask? Well.. Kali (and I'm guessing most if not all Debian based OSes) assumes the Linux kernel has audit support enabled, but the kernel on the 750 does not, as seen by:

[Expert@FW750]# gzip -dc /proc/config.gz | egrep -i audit
# CONFIG_AUDIT is not set
[Expert@FW750]# 

This causes Google's pam module to fail when creating a network connection. Found this with strace. Basically you'll see a socket(..., NETLINK_AUDIT) call fail with EPROTONOSUPPORT, or something like that.

There might be a better way of dealing with this, but the current workaround is to recompile the pam package. What is kind of a pain is that if there is an update to pam and you update the chroot with, say, apt-get upgrade, then you'll need to recompile the package with the new pam version. You can always make more than one chroot, so you could make one just for FreeRADIUS and a different one for all the things Kali can do.

Right, so anyway... let's configure some stuff. I'm going to assume you already have the chroot set up, so I'll be going right into that, but first add a loopback interface. This will be a private interface we'll be telling FreeRADIUS to use. You should also create 2 firewall rules for this under the "Incoming, Internal and VPN traffic" section: one to allow radius from loop00 to loop00 and a second rule to deny all other radius. BTW, the radius service object has a timeout of 3600 seconds (seems high for UDP), so if you've already passed traffic the deny rule won't take effect until radius falls out of the connections table. I lowered the radius timeout to 30 seconds. OK OK OK.. configure stuff.

From clish run the following to create a loop interface for radius.

FW750> add interface-loopback ipv4-address 172.16.31.1

Next we'll log in to the chroot, update the apt system, then install some packages (FreeRADIUS and Google's libpam module). BTW, make sure proc and sys are mounted inside the chroot. I updated the Kali write up about that.

[Expert@FW750]# chroot /mnt/sd/kali-chroot bash -l
root@FW750:/# apt-get update
!stuff happens
root@FW750:/# apt-get install libpam-google-authenticator freeradius
! lots of output
!don't worry about java errors. freeradius must have Java support enabled by default.
Do you want to continue? [Y/n] y
root@FW750:/#

A lot of things will start downloading. Let's queue up some music while we wait.

After a few minutes you'll have almost everything you need. 

############
# Start hacking
############

# This part is only needed for installing on the 750. If you're by chance running through this for an external radius server you can skip this.

So now we have all our apps installed. Let's rebuild pam!

First you'll need to tell apt you'll be downloading source. If you're not sure how to change /etc/apt/sources.list, basically just copy what's there and change the starting 'deb' to 'deb-src'. The first command below will do that for you if you're super lazy. We'll also be installing everything needed to build pam.

root@FW750:/# egrep -q '^deb-src' /etc/apt/sources.list || sed 's/^deb /deb-src /' /etc/apt/sources.list >> /etc/apt/sources.list
root@FW750:/# mkdir pam ; cd pam
root@FW750:/pam# apt-get update    
more stuff
Reading package lists... Done
root@FW750:/pam# apt-get build-dep pam
!more output stuff
root@FW750:/pam# export CONFIGURE_OPTS="--disable-audit" ; apt-get source --compile pam
root@FW750:/pam# dpkg -i libpam-modules_1.1.8-3.5_armhf.deb libpam-modules-bin_1.1.8-3.5_armhf.deb libpam-runtime_1.1.8-3.5_all.deb libpam0g_1.1.8-3.5_armhf.deb
root@FW750:/pam#

OK, all done. You can delete the entire /pam dir now if you want.
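If you want a quick sanity check that the rebuilt packages really dropped the audit dependency (the library path below is what I'd expect on an armhf chroot; yours may differ), look for libaudit in libpam's dependencies. No output means you're good:

root@FW750:/# ldd /lib/arm-linux-gnueabihf/libpam.so.0 | grep -i audit
root@FW750:/#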

###########
# End hacking
###########

OK right.. so let's configure everything!

###########
# Start Radius config!
###########

Edit /etc/pam.d/radiusd. This is what we want it to look like. This is the PAM stack radiusd will use; note that the user FreeRADIUS runs as has to be able to read each user's Google Authenticator configuration file, which we'll deal with further down.

auth       required     pam_google_authenticator.so forward_pass
auth       required     pam_unix.so use_first_pass
#@include common-auth
@include common-account
@include common-password
@include common-session

Now edit

/etc/freeradius/3.0/users

Add this to the top of it. This basically says: if the unix user is a member of the "disabled" group in /etc/group, then reject the radius request (there's an example of creating that group right after the config). The next part says pass the user login to the PAM backend.

DEFAULT Group == "disabled", Auth-Type := Reject
        Reply-Message = "Your account has been disabled."
DEFAULT Auth-Type := PAM
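If you actually want to use that disabled trick, the group has to exist and the user has to be in it. A quick sketch (baduser is just a placeholder name):

root@FW750:/# groupadd disabled
root@FW750:/# usermod -a -G disabled baduser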

Run the following to enable the pam module:

ln -s /etc/freeradius/3.0/mods-available/pam /etc/freeradius/3.0/mods-enabled/pam

Set up the client IP and password for radius packets from the firewall.

edit 

/etc/freeradius/3.0/clients.conf

# Use the loop address we created earlier.
client firewall {
        ipaddr = 172.16.31.1
        secret = somepw
}


Now let's edit the main radius server site config.

/etc/freeradius/3.0/sites-enabled/default

replace all
ipaddr = *

with
ipaddr = 172.16.31.1


Comment out ALL the IPv6 sections (the entire section)

uncomment the pam section (around line 489)
        #  Pluggable Authentication Modules.
        pam


edit

/etc/freeradius/3.0/proxy.conf

uncomment src_ipaddr and set it to the loop00 interface IP as well.

src_ipaddr = 172.16.31.1


Geez.. are we done yet? As it turns out.. almost! We're now going to make the FreeRADIUS server run as root. This is needed because the google pam module switches to the user's uid before reading that user's config file. I did try the module's option to allow 0660 permissions, but because of the uid switch I couldn't get it to work.

edit

/etc/freeradius/3.0/radiusd.conf

and change user and group to root.

        user = root
        group = root


###########
# End Radius config!
###########


###########
# Start of google authenticator config.
###########

First add a unix user. This will be the user account you configure for the VPN. I'll make a testuser account. Afterwards, log in as testuser and run google-authenticator. BTW, you might want to maximize your ssh session so you can see the full QR code on the console (yes, that works.. amazing).

root@FW750:/pam# adduser testuser
Adding user `testuser' ...
Adding new group `testuser' (1000) ...
Adding new user `testuser' (1000) with group `testuser' ...
Creating home directory `/home/testuser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for testuser
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] y
root@FW750:/pam# su - testuser
testuser@FW750:~$ google-authenticator

Answer yes to all the questions, open the app and scan the QR code with it (I used the iOS version). This will fully configure the OTP app. Side note.. that is so cool..
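At this point google-authenticator has written the secret and its options to ~/.google_authenticator, readable only by that user. That's the file the radius/PAM chain needs to get at later, which is the whole reason radiusd runs as root. Worth a quick check that it's there:

testuser@FW750:~$ ls -l ~/.google_authenticator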

###########
# End of google config
###########

OK, let's fire it up!

Log out of the chroot jail and start the radius server.

root@FW750:/# exit
logout
[Expert@FW750]# chroot /mnt/sd/kali-chroot freeradius
[Expert@FW750]# ps axuw | egrep '[U]SER|[f]reeradius'
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     24025  0.5  0.8  55624  8332 ?        Ssl  06:27   0:00 freeradius
[Expert@FW750]# 
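Before pointing the firewall at it, you can sanity check the whole radius -> PAM -> Google Authenticator chain with radtest. radtest isn't installed by the steps above; it lives in the freeradius-utils package, so this assumes you add that inside the chroot first. The password is the unix password with the current 6 digit code stuck on the end (both placeholders here), and somepw is the secret from the clients config. You want to see an Access-Accept come back.

[Expert@FW750]# chroot /mnt/sd/kali-chroot bash -l
root@FW750:/# apt-get install freeradius-utils
root@FW750:/# radtest testuser 'UnixPassword123456' 172.16.31.1 0 somepw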

##########
# Firewall config (ok well I created the loop00 interface way above but firewall config!)
##########

Now to wrap everything up, make your firewall rules look like this.



Then go to VPN -> Authentication Servers -> Primary Radius -> Configure (this is where the 172.16.31.1:1812 goes; mine doesn't say Configure because I already configured it). Also make sure you're using the same secret key from the radius config.


Lastly, hit the "permissions for RADIUS users" and fill everything out. Make sure the check mark is enabled. Send all users. I'm not sure that the role matters. I think it's just adding an extra A/V pair that the radius server is ignoring, but I went with network admin.


And then, to wrap up the config, set up freeradius to start on boot.

Here is my startup script.

[Expert@FW750]# cat /pfrm2.0/etc/userScript
mount /dev/sda1 /mnt/sd
mount -t proc proc /mnt/sd/kali-chroot/proc
mount -t sysfs sysfs /mnt/sd/kali-chroot/sys
ln -s /bin/busybox /bin/crond
mkdir -p /mnt/sd/backups
mkdir -p /var/spool/cron/crontabs/
cp /storage/*.zip /mnt/sd/backups
echo '1 1 * * *  cp /storage/*.zip /mnt/sd/backups' >> /var/spool/cron/crontabs/root
chmod 600 /var/spool/cron/crontabs/root
/bin/crond
chroot /mnt/sd/kali-chroot freeradius
[Expert@FW750]#


########
# If something goes wrong!
########

Most of the problems I had were with the radius configs. If you want to debug radius, run it with -X and you'll get a decent amount of debug output. It can also be useful to start rsyslog inside the chroot for more logs. The google pam module also has a debug option; follow the link at the top for more info. In addition, you can add the word "debug" to the pam_google_authenticator.so line in the /etc/pam.d/radiusd file. This will spit out helpful info on what Google Authenticator is doing. Make sure NTP is enabled and that the clock is synced. If not, you'll have to sync the clock and possibly recreate the token.
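For reference, this is what that pam line looks like with debug turned on:

auth       required     pam_google_authenticator.so forward_pass debug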

#########
# I want more users!
#########
So log in to the chroot and add them! Just run through the Google Authenticator config section for every user.

That's all for now!

Tuesday, January 17, 2017

Installing Kali Linux on a Checkpoint 750 SMB Gaia Embedded Firewall

UPDATE!!!

This blog has moved to Spikefish Solutions Blog

UPDATE!!!


Hi all! It's been a little while since I posted something. I've had a little side project I've been working on and I just got everything set up. I have a different write up describing how to install a Debian (ehem, stable) chroot on an SD card in a Checkpoint SMB 750 running Gaia Embedded. Well, I found an easier way to do this. Basically, install your favorite Debian based OS (Debian or.. um.. how about Kali Rolling!? OK OK, I'm pretty sure Ubuntu would work also) on a VM and install the debootstrap package (apt-get install debootstrap).

BTW, I'm assuming you have an SD card and that it's formatted with a Linux file system. Oh right... sorry, I forgot the 750 doesn't have a file system util; we'll cover how to address that also.

First let's assume you've installed Kali somewhere.. maybe a VM, booted up and logged in. Take a look at /etc/apt/sources.list. I see this.

deb http://http.kali.org/kali kali-rolling main non-free contrib

The important part is the URL and the kali-rolling. This is where to get the files and which version.

The following command will grab everything and put it in ~/kali-chroot, extract it, but not complete the install (--foreign). Also note it's downloading the arm binaries.

debootstrap --arch armhf --foreign kali-rolling kali-chroot http://http.kali.org/kali

After a few minutes and a lot of stuff on the screen about packages, you'll have a folder called ~/kali-chroot.

# Formatting the SD card. Skip to "# finish installing kali!" if you already have a linux file system on the sd card.

Now.. the first thing we need to do is reformat that pesky sd card if you haven't already.

Grab, hopefully, all the files needed to run mke2fs from kali-chroot:

tar -zcvf mke2fs.tgz kali-chroot/sbin/ kali-chroot/bin/ kali-chroot/lib/ kali-chroot/etc/ kali-chroot/root/

Copy mke2fs.tgz to your firewall and put it in /storage using scp or whatever. We're just using this as a temp holding area so that we can run mke2fs.

On the firewall run the following.

cd /storage
tar -zxvf mke2fs.tgz
mkdir /storage/kali-chroot/dev
cp -a /dev/sda* /storage/kali-chroot/dev/
umount /dev/sda1
chroot /storage/kali-chroot

At this point you should see

I have no name!@FW750:/#

That's OK. fdisk -l /dev/sda should show some info about the sd card, most likely an msdos/FAT filesystem. If the umount gives "filesystem busy" or something like that, open the webui on your firewall, go to "Logs and Monitoring" -> Options -> "Eject SD card safely", and the umount should work.. or it will already be unmounted.

Now you can format the sd card with ext3 or ext4. I went with ext3 for basically no good reason (or because I thought this was an 1100, which doesn't support ext4; take your pick).

Assuming everything is unmounted, run the following to format with ext3 (or change it to ext4).

mke2fs -t ext3 /dev/sda1
exit
mount /dev/sda1 /mnt/sd

Lots of stuff later and you have an ext3 (or 4) filesystem! This is good because it can repair itself (angry look for no fsck) and it's a real linux filesystem.

You can now delete /storage/kali-chroot if you want.

# finish installing kali!

OK, back on your kali install VM or wherever you installed it.

Make a new tar file that will include the full kali we downloaded earlier.

tar -zcvf kali-chroot.tgz kali-chroot/

upload kali-chroot.tgz to /mnt/sd/

Back to the firewall.. and uncompress everything.

cd /mnt/sd
tar -zxvf kali-chroot.tgz

Log in to the chroot

chroot /mnt/sd/kali-chroot bash -l

If you see something like this.. it's game on!

[Expert@FW]# chroot /mnt/sd/kali-chroot bash -l
root@FW:/#

Now finish the installer!

root@FW:/# ./debootstrap/debootstrap --second-stage

lots of stuff will fly by.. unpacking, installing, etc.

That's basically it! Now you have a Kali install on your firewall. I should point out it's a very minimal install. Also, there may be utilities that come with Kali that won't work for a lot of reasons (no memory being a big one).
UPDATE:
I left out a final step. You need to mount proc and sysfs!
If you're inside the jail, run this:
mount -t proc proc /proc
mount -t sysfs sysfs /sys
If you're outside of the jail:
mount -t proc proc /mnt/sd/kali-chroot/proc
mount -t sysfs sysfs /mnt/sd/kali-chroot/sys
Also be sure to add the following to the startup script. This way it gets mounted when the firewall starts (assuming you want that, which I do).

/pfrm2.0/etc/userScript
# mount sda1 because mounting happens after startup script.
mount /dev/sda1 /mnt/sd
mount -t proc proc /mnt/sd/kali-chroot/proc
mount -t sysfs sysfs /mnt/sd/kali-chroot/sys


Now I'm sure you're thinking.. what is the point of this? I'll get to that real soon My G^2.

Saturday, September 10, 2016

How to install Debian on Gaia Embedded - 700/1400 (not 1200R (ok and not 600/1100*))

UPDATE: Turns out this doesn't work on the 600/1100 (wah waaaah). Need some more testing (yeah, I totally tested this on a 600/1100 before posting) to see if I can work around the libc issue.

I recently... well maybe not that recently.. spent a few months working on building cross compilers that matched up 100% to a given Checkpoint Gaia Embedded system. Meaning, same libc (glibc 2.5, what a pain!), compiler version (based on glibc output) and kernel headers version.

I thought this was needed so that everything would be compatible. Well, it turns out I made things way harder than they needed to be. I recently found out that glibc is basically backwards compatible. There may be edge cases where things don't end up right, but for the most part, it seems pretty darn backwards compatible.

So that got me thinking. I started downloading .deb files and extracting them on my 750 and pretty much everything worked. Granted, there was a lot of tracing library dependencies. So knowing that all worked, I switched gears. I bought a 32 GB microsd card and installed it. I did format it to ext4 since vfat isn't a linux friendly file system. Side note: of course the 7xx doesn't have mkfs.ext4. Sigh... I'll have to map out all the needed libraries for that and point out the download links.

So the next idea was: can we just install Debian on the 750 in a chroot environment? It turns out, yeah. I used debootstrap to create the chroot. It took a little while as it needs perl and wget and a few other things. The default wget on Gaia Embedded doesn't support https, so just to be safe I pulled wget down also.

Before continuing, this is not supported by anyone. I would only do this on a test box, and not on a production firewall.

Basically I downloaded all these utilities on a spare Linux box in our Miami office:

ca-certificates_20141019+deb8u1_all.deb
debootstrap_1.0.67_all.deb
gzip_1.6-4_armhf.deb
libblkid1_2.20.1-5.3_armhf.deb
libdb5.3_5.3.28-9_armhf.deb
libffi-dev_3.1-2+b2_armhf.deb
libffi6_3.1-2+b2_armhf.deb
libgdbm3_1.8.3-13.1_armhf.deb
libgmp10_6.0.0+dfsg-6_armhf.deb
libgnutls-deb0-28_3.3.8-6+deb8u3_armhf.deb
libhogweed2_2.7.1-5+deb8u1_armhf.deb
libicu52_52.1-8+deb8u3_armhf.deb
libidn11_1.29-1+deb8u2_armhf.deb
liblzma5_5.1.1alpha+20120614-2+b3_armhf.deb
libnettle4_2.7.1-5+deb8u1_armhf.deb
libp11-2_0.2.8-5_armhf.deb
libp11-kit-dev_0.20.7-1_armhf.deb
libp11-kit0_0.20.7-1_armhf.deb
libpsl0_0.5.1-1_armhf.deb
libssl1.0.0_1.0.1t-1+deb8u2_armhf.deb
libstdc++6_4.9.2-10_armhf.deb
libtasn1-3-bin_4.2-3+deb8u2_all.deb
libtasn1-6_4.2-3+deb8u2_armhf.deb
libuuid1_2.20.1-5.3_armhf.deb
libuuid1_2.25.2-6_armhf.deb
perl-base_5.20.2-3+deb8u6_armhf.deb
perl-modules_5.20.2-3+deb8u6_all.deb
wget_1.16-1_armhf.deb
xz-utils_5.1.1alpha+20120614-2+b3_armhf.deb
zlib1g_1.2.8.dfsg-2+b1_armhf.deb


I put them on a linux box and extracted them using this... somewhat nasty process:


for x in `ls *.deb` ; do ar xv $x ; tar -zxvf data.tar.gz ; tar -Jxvf data.tar.xz ; done


What I'm doing is expanding the .deb archive, which contains 3 or more files. The binaries are in a file called data.tar.gz (gzipped) or data.tar.xz (lzma/xz). I would have done this on Checkpoint Gaia Embedded, but it doesn't include anything that can uncompress lzma. Kind of a brute force method to extract everything, but it worked. After that the raw files are ready to install on your Checkpoint firewall.


Next I just moved the files over to the Miami Checkpoint firewall, so now I have this:


[Expert@FWCKP750]# ls -l
drwxr-xr-x 2 root root 4096 Sep 10 10:49 bin
drwxr-xr-x 5 root root 4096 Sep 10 10:13 etc
drwxr-xr-x 3 root root 4096 Sep 10 09:40 lib
drwxr-xr-x 2 root root 4096 Sep 10 09:33 sbin
drwxr-xr-x 7 root root 4096 Sep 10 09:47 usr
drwxr-xr-x 3 root root 4096 Sep 10 09:33 var
[Expert@FWCKP750]# pwd
/mnt/sd/cnf/debian/bootstrap
[Expert@FWCKP750]#


Debootstrap is really just a shell script so once you have everything you can just run it. You also don't have to run it in Miami, I won't tell anyone if you do.


I did make a small script to set up the library and path variables so the debootstrap files are used first. The last item was to tell debootstrap where its shell include files were.


I put this in setup.sh:

[Expert@FWCKP750]# pwd
/mnt/sd/cnf/debian

[Expert@FWCKP750]# cat setup.sh
declare -x DEBOOTSTRAP_DIR="/mnt/sd/cnf/debian/bootstrap/usr/share/debootstrap/"
declare -x LD_LIBRARY_PATH="/mnt/sd/cnf/debian/bootstrap/usr/lib/arm-linux-gnueabihf:/mnt/sd/cnf/debian/bootstrap/usr/lib:/mnt/sd/cnf/debian/bootstrap/lib/arm-linux-gnueabihf:/mnt/sd/cnf/debian/bootstrap/lib:.:/pfrm2.0/lib:/pfrm2.0/lib/iptables:"
declare -x PATH="/mnt/sd/cnf/debian/bootstrap/usr/bin:/mnt/sd/cnf/debian/bootstrap/usr/sbin:/mnt/sd/cnf/debian/bootstrap/sbin:/usr/local/bin:/usr/bin:/bin:/pfrm2.0/bin:/pfrm2.0/bin/cli:/pfrm2.0/bin/cli/provisioning:.:/usr/local/sbin:/usr/sbin:/sbin:/opt/fw1/bin"
[Expert@FWCKP750]#


This will suck in those settings for this login session on the Miami Checkpoint firewall.


source setup.sh


That should be about all that is needed to run debootstrap. Next, just make the dir you want to install the OS into and run debootstrap.


mkdir /mnt/sd/stable-chroot


Then fire off debootstrap.


debootstrap --arch armhf stable /mnt/sd/stable-chroot http://httpredir.debian.org/debian/

With luck and about 15 min you'll have a fully installed OS. We'll need a few little tweaks to wrap this up.

We need to mount proc and sysfs inside the chroot.

I added these statements to my userScript to handle this at bootup (yes the Miami Checkpoint firewall):

[Expert@FWCKP750]# ls -l userScript
-rwxr-xr-x 1 root root 120 Sep 10 12:01 userScript
[Expert@FWCKP750]# cat userScript
mount /dev/mmcblk1 /mnt/sd
mount proc /mnt/sd/stable-chroot/proc -t proc
mount sysfs /mnt/sd/stable-chroot/sys -t sysfs
[Expert@FWCKP750]#

You can also just run those mount commands by hand if you want. The mount of /mnt/sd isn't needed if the system is already up and running, as it should auto mount. However, the auto mount happens after userScript, so adding the mount to userScript is the workaround.

Now you're ready to jump in. Here I log in to the chroot and then show the python and perl versions installed.

[Expert@FWCKP750]# chroot /mnt/sd/stable-chroot bash -l
root@FWCKP750:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root@FWCKP750:/# cat /etc/debian_version
8.5
root@FWCKP750:/# python3 -V
Python 3.4.2
root@FWCKP750:/# perl -v

This is perl 5, version 20, subversion 2 (v5.20.2) built for arm-linux-gnueabihf-thread-multi-64int
(with 81 registered patches, see perl -V for more detail)

Copyright 1987-2015, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

root@FWCKP750:/#


The install is about 500 MB. It fits just great on a 32 GB sd card, but is way too big to fit without one.

I should point out things will work just fine inside the chroot. Once you logout, some things will work outside of the chroot (/mnt/sd/stable-chroot/usr/bin/lsof for example); for others you may need to create a shell script to add library search and path statements, or in the case of perl and python, do something to tell them where the modules are located.

I hope I didn't lose you with "inside the chroot" and "outside the chroot". chroot is a linux command that CHanges the ROOT dir.

So outside the chroot means the dir structure would look like this for example:

/mnt/sd/stable-chroot/
inside the chroot it would look like this.
/

Here is an example:
First I login to the chroot (now I'm inside)
[Expert@FWCKP750]# chroot /mnt/sd/stable-chroot bash -l
I create a file called TestFile
root@FWCKP750:/# touch TestFile
Notice how i'm in /
root@FWCKP750:/# pwd
/
And we see the TestFile
root@FWCKP750:/# ls
TestFile boot etc lib mnt proc run srv tmp var
bin dev home media opt root sbin sys usr
root@FWCKP750:/# exit
Now I log out. Notice how the directory changes? I'm now outside the chroot.
[Expert@FWCKP750]# pwd
/mnt/sd/stable-chroot
[Expert@FWCKP750]# ls
TestFile dev lib opt run sys var
bin etc media proc sbin tmp
boot home mnt root srv usr
[Expert@FWCKP750]#
Hope that clears things up!

One interesting thing I noticed: the default ip utilities package on Gaia Embedded says it doesn't support netns (Network Name Space (think VSX)), but using the Debian ip utility I was able to create a netns. I haven't looked into this any further.
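For anyone curious, this is roughly what I mean ("test" is just a throwaway name, and I'm running the Debian ip from outside via chroot):

[Expert@FWCKP750]# chroot /mnt/sd/stable-chroot ip netns add test
[Expert@FWCKP750]# chroot /mnt/sd/stable-chroot ip netns list
test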


I'll have to run through the whole process again to make sure I documented it correctly.

Wait what? You would like a tar file of the debootstrap dir?

ok ok ok. Here you go.

debootstrap - 600 / 1100 / 700 / 1400

Saturday, April 16, 2016

Gaia Embedded - How it works.


Hi everyone, I know you've been having these strange urges. You have these new feelings and you're not sure what to do about them. Everyone goes through this at one point. It's part of growing up. Meet me at camera three.

Ok, so we're here to talk about Gaia Embedded of course. Gaia Embedded is the OS that runs the SMB Checkpoint firewalls. It's a combo of a u-boot image, busybox, lua, sqlite3 databases and then all the normal stuff you would expect on a firewall: your fw commands, environment variables and whatnot.

Another major difference is Gaia Embedded doesn't currently run on any x86/x64 CPU. As of right now it only runs on an ARM or MIPS CPU (that I know of). Meaning you can't just take an executable from, say, R77.20 Gaia and expect it to work on R77.20 for the 1100.

First let's talk about boot up. Gaia Embedded uses an image created via u-boot. This loads the kernel and the root file system, which is a rootfs (shocking!), and all the normal file systems.

Let's take a look! This is a small portion of /logs/boot_log. This provides a little hint of what is happening. Oh, BTW, this is an 1100 running R75.20.71.

Creating 11 MTD partitions on "nand_mtd":
0x00000000-0x000a0000 : "u-boot"
i2c driver was not initialized yet.
0x000a0000-0x00100000 : "bootldr-env"
0x00100000-0x00900000 : "kernel-1"
0x00900000-0x07a00000 : "rootfs-1"
0x07a00000-0x08200000 : "kernel-2"
0x08200000-0x0f300000 : "rootfs-2"
0x0f300000-0x16c00000 : "default_sw"
0x16c00000-0x18400000 : "logs"
0x18400000-0x18500000 : "preset_cfg"
0x18500000-0x18600000 : "adsl"
0x18600000-0x20000000 : "storage"


So it looks like one partition has something related to the boot loader environment, then we have kernel and root (-1 and -2), default_sw, logs, preset_cfg (maybe factory default boots here?), adsl (who uses that still?) and storage.

Let's compare to what we have running.
[Expert@FW]# df -h
Filesystem                Size      Used Available Use% Mounted on
tmpfs                    20.0M    620.0k     19.4M   3% /tmp
tmpfs                    40.0M      7.4M     32.6M  18% /fwtmp
/dev/mtdblock7           24.0M      8.5M     15.5M  35% /logs
/dev/mtdblock10         122.0M     27.8M     94.2M  23% /storage
/dev/mtdblock5          113.0M     79.4M     33.6M  70% /pfrm2.0
tmpfs                    40.0M      1.1M     38.9M   3% /tmp/log/local
[Expert@FW]#


Looks like we found logs and storage. Maybe default_sw is /pfrm2.0 (this is basically where most of the appliance lives).

So what else do we see? /tmp, /fwtmp and /tmp/log/local are tmpfs file systems, meaning RAM based file systems. Technically virtual memory based file systems, but these boxes don't have a swap file so everything is for sure in RAM.

Now I want to point out there is no "/" in that listing. I'm pretty sure this is because / is a rootfs, which is loaded by the kernel. It's kind of like a tmpfs, only it gets a list of files inserted into it beforehand.

OK, let's view /

[Expert@FW]# cd / ; ls -l
drwxr-xr-x    2 105      80              0 Apr 12 10:30 bin
lrwxrwxrwx    1 105      80              6 Dec 31  1969 data -> /flash
lrwxrwxrwx    1 root     root            8 Apr 13 05:20 dbg -> /tmp/dbg
drwxr-xr-x    5 5031     80              0 Apr 13 05:21 dev
drwxr-xr-x    7 105      80              0 Apr 15 07:07 etc
lrwxrwxrwx    1 root     root           16 Apr 13 05:20 flash -> /pfrm2.0/config1
drwxrwxrwt   12 root     root          860 Apr 16 11:20 fwtmp
lrwxrwxrwx    1 105      80             10 Dec 31  1969 init -> /sbin/init
drwxr-xr-x    3 105      80              0 Apr 13 05:21 lib
lrwxrwxrwx    1 105      80             11 Dec 31  1969 linuxrc -> bin/busybox
drwxr-xr-x    8 root     root            0 Apr 12 21:27 logs
drwxr-xr-x    8 105      80              0 Apr 13 05:20 mnt
lrwxrwxrwx    1 root     root           10 Apr 13 05:20 opt -> /fwtmp/opt
drwxr-xr-x   12 root     root            0 Dec 31  1969 pfrm2.0
dr-xr-xr-x   70 root     root            0 Dec 31  1969 proc
drwxr-xr-x    2 105      80              0 Apr 13 05:20 sbin
drwxr-xr-x    8 root     root            0 Apr 16 11:20 storage
drwxr-xr-x   10 root     root            0 Dec 31  1969 sys
drwxrwxrwt    4 root     root          320 Apr 16 11:20 tmp
drwxr-xr-x    2 root     root            0 Apr 13 05:21 usb
drwxr-xr-x    8 105      80              0 Apr 13 05:21 usr
drwxrwxrwx    8 105      80              0 Apr 16 10:46 var
drwxr-xr-x    2 root     root            0 Apr 13 05:21 web
[Expert@FW]#


Now.. something to notice. flash is a symbolic link (not a 3rd party app) to /pfrm2.0/config1. We'll come back to that.

Storage - this seems to be where online updates go? Not 100% sure. /logs is pretty much what it looks like. Logs..

Ok, so let's talk black magic now...

Busybox. Busybox is a single application that will act differently based on how it's called. You might be thinking, what do you mean by "how it's called?". Let me show you with a shell script.

[Expert@FW]# echo "echo My Argument is \$1" > /tmp/script.sh
[Expert@FW]# cat /tmp/script.sh
echo My Argument is $1
[Expert@FW]# bash /tmp/script.sh hello!
My Argument is hello!
[Expert@FW]#


In this script I'm saying show me the first argument to this script and print it after the "is".

Guess what? There is also a $0, which is the name of the command (script in this case).

[Expert@FW]# echo "echo My Argument is \$0" > /tmp/script.sh
[Expert@FW]# bash /tmp/script.sh hello!
My Argument is /tmp/script.sh
[Expert@FW]#


See how that changed? Using this logic, a script with the exact same contents could act differently based purely on how it was called.

Ok so I changed the script and now we have:

[Expert@FW]# cat /tmp/script.sh
if [ $0 == "hello" ] ; then
    echo My Argument is $1!
else
    echo "i don't know how $0 acts!"
fi
[Expert@FW]# bash script.sh
i don't know how script.sh acts!
[Expert@FW]#


Now lets make a copy of the script and call it hello.

[Expert@FW]# cp script.sh hello
[Expert@FW]# bash hello howdy
My Argument is howdy!
[Expert@FW]#


Ok so we've proven we can change how something reacts based purely on its file name!

In comes Busybox! Busybox is a swiss army knife. It's a single binary that has a lot of programs built into it. This is done for massive disk space savings.

[Expert@FW]# ls -l /bin/busybox
-rwxr-xr-x    1 105      80         745216 Dec 31  1969 /bin/busybox
[Expert@FW]#


About 745k. So what's in there?

[Expert@FW]# /bin/busybox
BusyBox v1.8.1 (2015-04-26 16:47:09 IDT) multi-call binary
Copyright (C) 1998-2006 Erik Andersen, Rob Landley, and others.
Licensed under GPLv2. See source distribution for full notice.

Usage: busybox [function] [arguments]...
   or: [function] [arguments]...

        BusyBox is a multi-call binary that combines many common Unix
        utilities into a single executable.  Most people will create a
        link to busybox for each function they wish to use and BusyBox
        will act like whatever it was invoked as!

Currently defined functions:
        [, [[, addgroup, adduser, adjtimex, ar, arp, arping, ash,
        awk, basename, bunzip2, bzcat, bzip2, cal, cat, catv,
        chattr, chgrp, chmod, chown, chpasswd, chpst, chroot,
        chrt, chvt, cksum, clear, cmp, comm, cp, cpio, crond,
        crontab, cryptpw, cut, date, dc, dd, deallocvt, delgroup,
        deluser, df, dhcprelay, diff, dirname, dmesg, dnsd, dos2unix,
        du, dumpkmap, dumpleases, echo, ed, egrep, eject, env,
        envdir, envuidgid, ether-wake, expand, expr, fakeidentd,
        false, fbset, fdflush, fdformat, fdisk, fgrep, find, fold,
        free, freeramdisk, fsck, fsck.minix, ftpget, ftpput, fuser,
        getopt, getty, grep, gunzip, gzip, halt, hdparm, head,
        hexdump, hostid, hostname, httpd, hwclock, id, ifconfig,
        ifdown, ifup, inetd, init, insmod, install, ip, ipaddr,
        ipcalc, ipcrm, ipcs, iplink, iproute, iprule, iptunnel,
        kbd_mode, kill, killall, killall5, klogd, last, length,
        less, linux32, linux64, linuxrc, ln, loadfont, loadkmap,
        logger, login, logname, logread, losetup, ls, lsattr,
        lsmod, lzmacat, makedevs, md5sum, mdev, mesg, mkdir, mkfifo,
        mkfs.minix, mknod, mkswap, mktemp, modprobe, more, mount,
        mountpoint, mt, mv, nameif, netstat, nice, nmeter, nohup,
        nslookup, od, openvt, passwd, patch, pidof, ping, ping6,
        pipe_progress, pivot_root, poweroff, printenv, printf,
        pscan, pwd, raidautorun, rdate, readahead, readlink, readprofile,
        realpath, reboot, renice, reset, resize, rm, rmdir, rmmod,
        route, rpm, rpm2cpio, run-parts, runlevel, runsv, runsvdir,
        rx, sed, seq, setarch, setconsole, setkeycodes, setlogcons,
        setsid, setuidgid, sh, sha1sum, slattach, sleep, softlimit,
        sort, split, start-stop-daemon, stat, strings, stty, su,
        sulogin, sum, sv, svlogd, swapoff, swapon, switch_root,
        sync, sysctl, syslogd, tail, tar, taskset, tcpsvd, tee,
        telnet, telnetd, test, tftp, time, top, touch, tr, traceroute,
        true, tty, ttysize, udhcpc, udhcpd, udpsvd, umount, uname,
        uncompress, unexpand, uniq, unix2dos, unlzma, unzip, uptime,
        usleep, uudecode, uuencode, vconfig, vi, vlock, watch,
        watchdog, wc, wget, which, who, whoami, xargs, yes, zcat,
        zcip

[Expert@FW]#



That is a lot of programs! So how does busybox know how to act? Symbolic links!

[Expert@FW]# ls -l mv
lrwxrwxrwx    1 105      80              7 Dec 31  1969 mv -> busybox
[Expert@FW]#


So as you can see mv is a symbolic link to busybox.
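If a function is built in but has no link, you can make one yourself. This is exactly the trick used later in the backup posts to get crond going:

[Expert@FW]# ln -s /bin/busybox /bin/crond
[Expert@FW]# /bin/crond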

Feel free to poke around in there and see what else you can learn. Let's move on.

I started talking about the boot up process. Normally unix uses /etc/init.d/ stuff for booting. There are files in there but most of the heavy lifting is done in

/pfrm2.0/etc/cpInit


This is where firewall kernel modules are loaded and all kinds of things happen.

If you need to run a script at boot up, you'll need to create the following.

/pfrm2.0/etc/userScript


Ok, so what else can we talk about? Where do your configuration changes go that are made from clish or the webui?

Right here! ( /flash )

[Expert@FW]# ls -l
drwxr-xr-x    2 root     root            0 Dec 27 11:42 ace
-rw-r--r--    1 root     root           35 Dec 27 11:45 expert_pass_
drwxr-xr-x   10 root     root            0 Dec 27 11:42 fw1
-r--r--r--    1 root     root          373 Apr 12 10:28 passwd
-r-xr-xr-x    1 105      80            950 Sep  2  2015 restore_future_settings_hook.sh
-rw-------    1 root     root          255 Apr 12 10:28 shadow
drwxr-xr-x    4 root     root            0 Dec 27 11:43 sofaware
-rw-r--r--    1 root     root       760832 Apr 16 10:58 system.db
drwxr-xr-x    2 root     root            0 Dec 27 11:42 tmp
-rw-r--r--    1 root     root         1122 Dec 28 05:45 top_last_day_report.json
-rw-r--r--    1 root     root         1123 Dec 26 18:01 top_last_hour_report.json
[Expert@FW]#


Notice a few things. shadow and passwd? These files are copied over to /etc on boot up or when changes are made via clish/webui.

The interesting one is system.db. This is a sqlite3 database. Want to read it? SURE!

echo .dump | sqlite3 system.db > /logs/system-db.txt


Now you can view all the table schemes.. schema?.. whatever.. database output!
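You can also poke at it interactively. For example, listing the tables (run from /flash, and I'd treat the database as read-only unless you like restoring from backup):

[Expert@FW]# sqlite3 system.db ".tables"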

Something else interesting. Gaia Embedded on all platforms has a built in switch, fully managed switch!
I'm not going to dive into that right now, but you can split the ports and do basically anything a normal layer 3 switch would do. Cool stuff.

Now.. something odd I've noticed. If for some reason you're doing dynamic routing on Gaia Embedded, keep this in mind when troubleshooting routing issues. If for some reason routed crashes, it won't be restarted (well, it depends on which routed process crashes, but let's just say all of them crash). BUT!! If you log in via clish and issue a show route, it will pause, then restart routed under the sheets, THEN show you the output.

This can be VERY confusing as it will look like all of a sudden an issue fixed itself before you've had a chance to look at it. Want to see this in action? Set up a lab, get OSPF running, kill routed, then do a show route from clish.

Speaking of crashes!

When a process crashes on Gaia Embedded the kernel will use this sysctl to figure out how to generate the core file.

[Expert@FW]# sysctl kernel.core_pattern
kernel.core_pattern = |/pfrm2.0/bin/core_dump.sh
[Expert@FW]#


This means the core file will be piped into the shell script /pfrm2.0/bin/core_dump.sh.

Lets look at that shell script.

[Expert@FW]# cat /pfrm2.0/bin/core_dump.sh
#!/bin/sh

cat > /logs/core
[Expert@FW]#


So... it pipes the core into cat and writes a file called /logs/core. I'm not following why they didn't just set kernel.core_pattern = /logs/core, but sometimes it's best to not ask questions. :)

Ok, two things! Because of this you will only ever have the core file from a single process crash, and only the latest one. That being said, how do you know what process crashed? We only have a file called /logs/core.

We use the magic file command!

Let's tell sleep to go away most violently and check out the core file. I'm going to tell sleep to sleep for 1000 seconds, then kill it with -6 (SIGABRT, which dumps core). %1 is the first job running in the background.

[Expert@FW]# sleep 1000 &
[1] 18008
[Expert@FW]# kill -6 %1
[Expert@FW]#
[1]+  Aborted                 (core dumped) sleep 1000
[Expert@FW]#

[Expert@FW]# ls -l /logs/core
-rw-r--r--    1 root     root       274432 Apr 16 11:56 /logs/core
[Expert@FW]# file /logs/core
/logs/core: ELF 32-bit LSB core file ARM, version 1 (SYSV), SVR4-style, from 'sleep'
[Expert@FW]#


If you were troubleshooting something at this point I would say: create a cpinfo (cpinfo -o /logs/`hostname`.cpinfo.gz -z) and then download that core file also. If you aren't faint of heart, I would also say do a backtrace on said core file as well. You'll need gdb to do that. You can request it from Checkpoint or use mine from the tools page. More on doing a backtrace later.
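As a teaser, a backtrace looks roughly like this (assuming you've dropped the gdb binary from the tools page somewhere in your path):

[Expert@FW]# gdb /bin/sleep /logs/core
(gdb) bt
(gdb) quit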

I'll update this with anything else I can think of, but for now that's all folks!

Tuesday, April 12, 2016

Strace / Backsup / how magical strace is - Part 3 - The final!

UPDATE: as of R77.20 HFA 20 Checkpoint has added a scheduled backup option in the webui. It's under Device -> System Operations -> Periodic backup is OFF | Settings..

So I've had a fun journey going slightly insane trying to figure out why one backup method creates a meta string on the zip file and the other doesn't.

Well... so.... I made an important discovery. The zip file always had a meta header on it. Technically it's called the comment field (--archive-comment). What happened? Well, I used unzip -l from cygwin and compared it to the output of unzip on Gaia Embedded. The unzip on Gaia Embedded doesn't print the meta header!!

ARG! Well that was a waste. Note to self, saving a prompt would have been useful.
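For what it's worth, if you want to print just that comment field, Info-ZIP's unzip (the cygwin one or the one on your linux box, not the firewall's) can do it directly:

unzip -z FW_R75.20.71_983004120_2016-Apr-09-21_11_10.zip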



So.. down to details..


Here is the command to back up a centrally managed firewall.



/pfrm2.0/bin/backup_settings.sh full pc "Making cron jobs stuff" admin


And here is the command to back up a locally managed firewall.



/pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin


Here is an option to back up without the policy.



/pfrm2.0/bin/backup_settings.sh  pc "Making cron jobs stuff" admin


I don't really understand the "pc" argument. Seems like it has control over where the backup gets stored locally. Not sure I see a point in changing it.

Right.. so I'm on a firewall with local policy.. SOOOOO... here is my current userScript. I moved crond down the list because it looks like if you make any crontab changes you'll need to restart crond. So to make everything work right we need to create all crontabs before starting cron on boot up.

So this crontab creates a backup every 5 minutes. I did this because I was debugging and wanted to be able to show all the correct times. If you wanted to use this in production you would use a different schedule (see the example below). If you're not sure how to write a crontab, this looks like a pretty good site: Crontab Examples
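For example, a once a day version (02:00 here, pick whatever you like) would be:

echo '0 2 * * * /pfrm2.0/bin/backup_settings.sh local_policy pc "Nightly backup" admin' >> /var/spool/cron/crontabs/root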

[Expert@FW]# cat /pfrm2.0/etc/userScript
ln -s /bin/busybox /bin/crond
mkdir -p /var/spool/cron/crontabs/
echo '*/5 * * * * /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin' >> /var/spool/cron/crontabs/root
/bin/crond
[Expert@FW]# ls -l /storage/Gateway-ID-7F70949E_R75.20.71_983004120_2016-Apr-12-22_10_02.zip
-rw-r--r--    1 root     root      3089494 Apr 12 22:10 /storage/Gateway-ID-7F70949E_R75.20.71_983004120_2016-Apr-12-22_10_02.zip
[Expert@FW]# egrep -i cron /var/log/messages
2016 Apr 12 10:30:22 FW cron.notice crond[1832]: crond 1.8.1 started, log level 8
2016 Apr 12 22:02:23 FW cron.notice crond[7131]: crond 1.8.1 started, log level 8
2016 Apr 12 22:05:01 FW cron.notice crond[7131]: USER root pid 7147 cmd /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin
2016 Apr 12 22:10:01 FW cron.notice crond[7131]: USER root pid 7290 cmd /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin

Hurray! It works.. up next.. make this look less terrible.

Saturday, April 9, 2016

Strace / Backsup / how magical strace is - Part 2

UPDATE: as of R77.20 HFA 20 Checkpoint has added a scheduled backup option in the webui. It's under Device -> System Operations -> Periodic backup is OFF | Settings..

In the previous write up I showed how I think I found the backup command used in Gaia Embedded. How can we be sure this is the backup command? Well, this is what I did. I downloaded a backup via the webui then issued my backup command and compared the md5sums. Guess what I found? They're different! aaahh crap.

So how bad is it? The file listing in the zip is the same. However, when I list the archive with unzip -l I found this. The one from the webui has a meta section. The one created via clish (backup settings to tftp server 127.0.0.1) does not.

Check this out... I've never seen this before (of course that means very little). This is on the top of the webui .zip:

Archive:  FW_R75.20.71_983004120_2016-Apr-09-21_11_10.zip
<meta_data_record>
<UID>MAC_HERE</UID>
<BoardModel>L50</BoardModel>
<Hostname>FW</Hostname>
<Version>R75.20.71_983004120</Version>
<Date>Apr 09, 2016 09:11:10 PM</Date>
<HasPolicy>2</HasPolicy>
<HasPassword>0</HasPassword>
<User>admin</User>
<Comment> </Comment>
</meta_data_record>
  Length      Date    Time    Name

Well... that is odd.. So the backup created via clish is different from the backup created by the webui on R75. Hopefully we haven't stumbled onto a bug that only affects R75, as R77 is out.

I'm going to go out on a limb and say the backup made via the webui is a better backup than the one from clish.

Back to strace!

We know the webui runs on port 4434. Let's see what process is on that port:

[Expert@FW]# lsof -nni | grep 4434
-bash: lsof: command not found
[Expert@FW]#


Oh.. right.. no package.. sigh.. Well hopefully support doesn't see this. I uploaded lsof to /logs.

[Expert@FW]# tar -zxvf lsof_4.89.tgz
cnf/bin/lsof
[Expert@FW]# cd cnf/bin/
[Expert@FW]# ./lsof -nni | grep 4434
thttpd     910   root    1u  IPv4   3439      0t0  TCP *:4434 (LISTEN)
[Expert@FW]#

Boom! Now we know what process to strace.

This time I'm going to attach strace to a live process (910).
One thing I want to point out is that I'm going to log in to the webui and navigate all the way to the backup section first. Then I started the strace and hit the "Create Backup" button. I also did NOT download the file via the webui, so as to not pollute the strace output with all the stuff for the download. After the backup completed I hit CTRL-C on the strace. This is the full output of strace on the console:

[Expert@FW]# strace -s 1024 -f -p 910 -o /storage/thttpd.txt
strace: Process 910 attached
strace: Process 17366 attached
strace: Process 17367 attached
strace: Process 17368 attached
strace: Process 17369 attached
strace: Process 17491 attached
strace: Process 910 detached
[Expert@FW]#


Now... let's review our strace log.
Let's go right to egrep -i backup /storage/thttpd.txt.

[Expert@FW]# egrep -i backup /storage/thttpd.txt
17367 send(0, "<31>Apr  9 21:42:11 thttpd[17367]: POST data: backup.full_backup=false&backup.comments=&backup.password=&button.create_backup=apply&thispage=lm_backupRestore\n", 158, MSG_NOSIGNAL) = 158
17367 write(1, "backup.full_backup=false&backup.comments=&backup.password=&button.create_backup=apply&thispage=lm_backupRestore", 111) = 111
17369 read(0, "backup.full_backup=false&backup.comments=&backup.password=&button.create_backup=apply&thispage=lm_backupRestore", 1024) = 111
[Expert@FW]#



Ok, so the first line looks like it is us hitting the backup button, so the backup command has to be close to this line. However, I can't figure out what thttpd is doing using the unfiltered log (not shown because of how long it is). It's like strace isn't seeing it. It's very possible strace has dorked something up and I need to reboot. I can't do that right now because the wife is watching Ill Tempered Masters of Tattooing on Hulu.

I think we're done for tonight.


Thursday, April 7, 2016

Strace / Backsup / how magical strace is - Part 1

UPDATE: as of R77.20 HFA 20 Checkpoint has added a scheduled backup option in the webui. It's under Device -> System Operations -> Periodic backup is OFF | Settings..

So... I made a blog post about how to use a symbolic link to enable crond. I'll follow this up with a blog post about busybox based on the feedback I got from that posting.

I think this will be an interesting post. I'm going to point out I'm doing this on a live firewall.

So.. strace.. this tool does not ship on any Checkpoint firewall. That being said, it's pretty easy to get it on Gaia: just install CentOS 5.11, install strace there and copy over the binary. Gaia Embedded however.. that's a little more difficult. So if you check the tools page you'll find a download link for the 600, 1100, and 1200R. I should point out this is only for learning and most likely should not be used on a production firewall. Use this in a lab to learn how things really work, because let's face it: if you don't know how something works when it's working, it's much harder to understand why it's broken when it's broken.

So back to backsup. So you're thinking you spelled that wrong. I'll explain that later, it will make sense, just stick with me.

OK, rock and roll. So we've already discussed how to enable crond so that we can schedule jobs on Gaia Embedded. The next step is to set up a backsup job. Let's see what options we have in clish and then try to figure out what command to use.



Well.. that's not very cool. TFTP or USB? TFTP?? What year is this? Should I hook up a serial connection, encode the backup via uuencode and pipe the backsup into hylafax over slip (I don't really know if that's possible but I'm guessing so). Ok, joking aside...

Let's just finish the backsup command and see what happens. Let's send it to 127.0.0.1.

[Expert@FW]# clish
FW> backup settings to ?
usb  - Save the backup file on a USB device
tftp - Send the backup file to a TFTP server
FW> backup settings to tftp server 127.0.0.1


OK, so what happened? The tftp timed out.. no surprise there. But what is interesting is the file name. Let's get out of clish (because I already set bashUser on in expert) and see if we can find that file.

FW> exit
Terminated
[Expert@FW]# find / -name FW_R75.20.71_983004120_2016-Apr-07-21_03_43.zip
/storage/FW_R75.20.71_983004120_2016-Apr-07-21_03_43.zip
[Expert@FW]#


Ok! Now we're cooking with gas! So we know we could stop here. We could add a cronjob that simply calls clish -c "backup settings to tftp server 127.0.0.1" and then upload said backup from there, but come on. We don't want to fire off a needless tftp command.

Here is where strace comes in, IN YOUR LAB!.. ehem...

Let's upload strace for the 600/1100, since this is an 1100 firewall. I've put strace in /logs/.

[Expert@FW]# ls -l strace-4.11.tgz
-rw-r--r--    1 root     root       293698 Apr  7 21:13 strace-4.11.tgz
[Expert@FW]# tar -zxvf strace-4.11.tgz
cnf/bin/strace
[Expert@FW]#


OK, so strace is installed.

strace has a lot of arguments. I should also point out it does some things that can cause problems on a production system. I would only use this in a lab environment, as it has a chance of crashing or causing unexpected bad things to happen. But this is our lab! So what do we care?

So full steam ahead!

[Expert@FW]# /logs/cnf/bin/strace -f -o /logs/strace-output.txt  -s 1024 clish -c "backup settings to tftp server 127.0.0.1"



This should complete without issue (assuming you're running in bash because you issued a bashUser on from expert, logged out and back in).

So the arguments are as follows.
-f == Means follow any child processes. What is a child process? Well.. you can't do it with one process, so many programs will create sub processes to handle small tasks and then return to the main process. This option means trace those child processes as well.
-o == This is where our strace output file will go. In this case /logs/strace-output.txt.
-s == This is the max size of each string logged, so each string should be no longer than 1024 characters. Might be a little excessive.
The final arguments are the command we want to trace, which in this case is clish -c "backup settings to tftp server 127.0.0.1"

You'll need to wait a little while. This eats a lot of cpu. After about 5-10 mins we have the following.


ok.. yeah.. -s 1024 was a bit much but we'll just work with what we have.

[Expert@FW]# ls -lh /logs/strace-output.txt
-rw-r--r--    1 root     root        15.4M Apr  7 21:28 /logs/strace-output.txt
[Expert@FW]#



So, shortcut: I know from playing with this beforehand that we're looking mostly for execve calls.

[Expert@FW]# egrep '^[0-9]+ +execve' /logs/strace-output.txt > /logs/execve.txt
[Expert@FW]#


Ok so what do we have in /logs/execve.txt?

[Expert@FW]# wc -l /logs/execve.txt
      133 /logs/execve.txt
[Expert@FW]#



Not too bad.. Let's get to the good part.


577   execve("/bin/clish", ["clish", "-c", "backup settings to tftp server 127.0.0.1"], [/* 23 vars */]) = 0
579   execve("/usr/bin/id", ["id", "-u"], [/* 21 vars */]) = 0
581   execve("/pfrm2.0/bin/pt", ["/pfrm2.0/bin/pt", "--list"], [/* 22 vars */]) = 0
583   execve("/pfrm2.0/bin/lua", ["lua", "-e", "require ('cli.pt')('--list',  nil)"], [/* 21 vars */]) = 0
585   execve("/usr/bin/awk", ["awk", "-F:", "-v", "U=admin", "$1==U { print $7; exit; }", "/etc/passwd"], [/* 22 vars */]) = 0
587   execve("/usr/bin/tty", ["tty"], [/* 22 vars */]) = 0
588   execve("/pfrm2.0/bin/is_under_fw.sh", ["is_under_fw.sh", "577"], [/* 22 vars */]) = 0
589   execve("/pfrm2.0/bin/ppnames.sh", ["ppnames.sh", "577"], [/* 21 vars */]<unfinished ...>
590   execve("/bin/grep", ["grep", "^fw$"], [/* 21 vars */] <unfinished ...>
592   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/] <unfinished ...>
593   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "575"], [/* 21 vars*/]) = 0
595   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
596   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "32500"], [/* 21 vars */]) = 0
598   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
599   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "32499"], [/* 21 vars */]) = 0
601   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
602   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "906"], [/* 21 vars*/]) = 0
604   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
605   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "1"], [/* 21 vars */]) = 0
607   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
608   execve("/usr/bin/tty", ["tty"], [/* 23 vars */]) = 0
610   execve("/bin/grep", ["grep", "sfwsh\\.bin"], [/* 23 vars */] <unfinished...>
611   execve("/usr/bin/tty", ["tty"], [/* 23 vars */] <unfinished ...>
609   execve("/bin/ps", ["ps", "--noheaders", "-t", "/dev/pts/0"], [/* 23 vars*/]) = 0
613   execve("/pfrm2.0/bin/pt", ["pt", "users", "-f", "username", "admin", "-F", "role"], [/* 23 vars */] <unfinished ...>
614   execve("/usr/bin/head", ["head", "-n", "1"], [/* 23 vars */] <unfinished...>
615   execve("/bin/grep", ["grep", "-v", "{}"], [/* 23 vars */] <unfinished ...>
617   execve("/pfrm2.0/bin/lua", ["lua", "-e", "require ('cli.pt')('users', '-f', 'username', 'admin', '-F', 'role',  nil)"], [/* 22 vars */] <unfinished ...>
577   execve("/pfrm2.0/bin/sfwsh.bin", ["/pfrm2.0/bin/sfwsh.bin", "-c", "backup settings to tftp server 127.0.0.1"], [/* 25 vars */]) = 0
620   execve("/bin/sh", ["sh", "-c", "BKUP_TARGET=tftp backup_settings_cli.sh"], [/* 26 vars */]) = 0
620   execve("/pfrm2.0/bin/cli/backup_settings_cli.sh", ["backup_settings_cli.sh"], [/* 27 vars */]) = 0
623   execve("/bin/sh", ["sh", "-c", "export CPDIR=/opt/fw1 ; export FWDIR=/opt/fw1 ; PATH=/usr/sbin:/opt/fw1/bin:${PATH}; export PATH;/opt/fw1/bin//cpprod_util FwIsLocalMgmt 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
624   execve("/opt/fw1/bin//cpprod_util", ["/opt/fw1/bin//cpprod_util", "FwIsLocalMgmt"], [/* 28 vars */]) = 0
634   execve("/bin/sh", ["sh", "-c", "/pfrm2.0/bin/backup_settings.sh local_policy pc \" \" admin \"\" \"\" 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
635   execve("/pfrm2.0/bin/backup_settings.sh", ["/pfrm2.0/bin/backup_settings.sh", "local_policy", "pc", " ", "admin", "", ""], [/* 28 vars */]) = 0
638   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
639   execve("/bin/rm", ["/bin/rm", "-rf", "/fwtmp/backup_settings_status"], [/* 28 vars */]) = 0
641   execve("/bin/df", ["df", "/logs", "-m"], [/* 28 vars */] <unfinished ...>
642   execve("/usr/bin/tr", ["tr", "-s", " "], [/* 28 vars */] <unfinished ...>
643   execve("/usr/bin/cut", ["cut", "-f4", "-d "], [/* 28 vars */] <unfinished ...>
644   execve("/usr/bin/tail", ["tail", "-n", "1"], [/* 28 vars */] <unfinished...>
658   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
660   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
662   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
663   execve("/opt/fw1/bin/cp_write_syslog.sh", ["/opt/fw1/bin/cp_write_syslog.sh", "[System", "Operations]", "Starting", "settings", "backup", "process..."], [/* 28 vars */]) = 0
663   execve("/usr/bin/logger", ["logger", "-t", "CHECKPOINT", "-p", "info", "--", "[System Operations] Starting settings backup process..."], [/* 27 vars */]) = 0
665   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
667   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "activePartition"], [/* 28 vars */]) = 0
669   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
671   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "hw_mac_addr"], [/* 28 vars */]) = 0
673   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
675   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "unitModel"], [/* 28 vars */]) = 0
677   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
678   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
679   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
681   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
682   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
683   execve("/bin/rm", ["/bin/rm", "-rf", "/storage/*.zip"], [/* 28 vars */])= 0
693   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "activeConfig"], [/* 28 vars */]) = 0
694   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
695   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
696   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
697   execve("/bin/date", ["/bin/date", "+%b %d, %Y %r"], [/* 28 vars */]) = 0
698   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
699   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
700   execve("/bin/mkdir", ["/bin/mkdir", "-p", "/pfrm2.0/config1/addtional_settings_tmp"], [/* 29 vars */]) = 0
701   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
702   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
703   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
705   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
706   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/user.dhcpd.conf.*"], [/* 29 vars */]) = 0
707   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
708   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/user.dhcpd.conf.*", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
709   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
710   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
711   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/opt/fw1/boot/modules/*.conf"], [/* 29 vars */]) = 0
712   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
713   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/opt/fw1/boot/modules/*.conf", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/opt/fw1/boot/modules/"], [/* 29 vars */]) = 0
714   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
715   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
716   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/logging.config"], [/* 29 vars */]) = 0
717   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
718   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/logging.config", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
719   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
720   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
721   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/userScript"], [/* 29 vars */]) = 0
722   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
723   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/userScript", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
724   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
728   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
729   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/dropbear_rsa_host_key"], [/* 29 vars */]) = 0
730   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
731   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/dropbear_rsa_host_key", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
732   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
733   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
734   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/webManifest"], [/* 29 vars */]) = 0
735   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
736   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/webManifest", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
737   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
738   execve("/bin/cp", ["/bin/cp", "-a", "/pfrm2.0/bin/restore_future_settings_hook.sh", "/pfrm2.0/config1"], [/* 29 vars */]) = 0
739   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
740   execve("/pfrm2.0/bin/firmTool", ["/pfrm2.0/bin/firmTool", "-c", "/dev/mtd4"], [/* 29 vars */]) = 0
743   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "1"], [/* 29 vars */]) =0
746   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "2"], [/* 29 vars */]) =0
749   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "3"], [/* 29 vars */]) =0
752   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "4"], [/* 29 vars */]) =0
753   execve("/bin/hostname", ["hostname"], [/* 29 vars */]) = 0
754   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
757   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
758   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
759   execve("/pfrm2.0/bin/zip", ["/pfrm2.0/bin/zip", "-ry", "/storage/settings_backup.zip", "ace", "addtional_settings_tmp", "expert_pass_", "fw1", "passwd","restore_future_settings_hook.sh", "shadow", "sofaware", "system.db", "tmp", "top_last_day_report.json", "top_last_hour_report.json", "restore_future_settings_hook.sh", "-x", "./fw1/state/local/FW1/*", "-x", "./sofaware/gui/logs.properties", "-qz"], [/* 29 vars */]) = 0
763   execve("/pfrm2.0/bin/unzip", ["/pfrm2.0/bin/unzip", "-qz", "/storage/settings_backup.zip"], [/* 29 vars */]) = 0
764   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
765   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
766   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
767   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
768   execve("/bin/mv", ["/bin/mv", "/storage/settings_backup.zip", "/storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip"], [/* 29 vars */]) = 0
769   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
770   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
771   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
772   execve("/bin/rm", ["/bin/rm", "-rf", "/pfrm2.0/config1/addtional_settings_tmp"], [/* 29 vars */]) = 0
773   execve("/bin/sh", ["sh", "-c", "cat /fwtmp/backup_file_location 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
774   execve("/bin/cat", ["cat", "/fwtmp/backup_file_location"], [/* 28 vars */]) = 0
775   execve("/bin/sh", ["sh", "-c", "cat /fwtmp/backup_file_location 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
776   execve("/bin/cat", ["cat", "/fwtmp/backup_file_location"], [/* 28 vars */]) = 0
777   execve("/bin/sh", ["sh", "-c", "echo `/bin/date +%Y-%b-%d-%T`: 'Uploading FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip to the TFTP server 127.0.0.1'>> /logs/backup_settings 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
779   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
780   execve("/bin/sh", ["sh", "-c", "ls /storage//FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
781   execve("/bin/ls", ["ls", "/storage//FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip"], [/* 28 vars */]) = 0
782   execve("/bin/sh", ["sh", "-c", "cd /storage/; tftp -pl FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip 127.0.0.1 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
783   execve("/usr/bin/tftp", ["tftp", "-pl", "FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip", "127.0.0.1"], [/* 29 vars */]) = 0
784   execve("/bin/sh", ["sh", "-c", "echo `/bin/date +%Y-%b-%d-%T`: 'tftp: timeout' >> /logs/backup_settings 2>&1 ; echo RC=$?"], [/* 28 vars */]) = 0
786   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0

Ok.. that was super long. So what are we looking at? These are the subprocesses created by the clish command!

So we're pretty sure our magic backsup command is somewhere in here, because I'm guessing the subsystem that uploads the backsup to a tftp server is separate from the process that creates the backsup.

So the first few pages of lines look like validation checks.
Then we see this:

577   execve("/pfrm2.0/bin/sfwsh.bin", ["/pfrm2.0/bin/sfwsh.bin", "-c", "backup settings to tftp server 127.0.0.1"]



This looks like our clish command! We must be getting close now..

623   execve("/bin/sh", ["sh", "-c", "export CPDIR=/opt/fw1 ; export FWDIR=/opt/fw1 ; PATH=/usr/sbin:/opt/fw1/bin:${PATH}; export PATH;/opt/fw1/bin//cpprod_util FwIsLocalMgmt 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
624   execve("/opt/fw1/bin//cpprod_util", ["/opt/fw1/bin//cpprod_util", "FwIsLocalMgmt"], [/* 28 vars */]) = 0


This looks like a check for local vs. central management. Hmm, so the backup command may be different based on how the firewall is managed. This firewall is locally managed, FYI.
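If you're curious what that check returns on your own box, you should be able to run the same utility straight from expert mode (on a locally managed unit I'd expect it to print 1, but verify on yours):

/opt/fw1/bin/cpprod_util FwIsLocalMgmt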

634   execve("/bin/sh", ["sh", "-c", "/pfrm2.0/bin/backup_settings.sh local_policy pc \" \" admin \"\" \"\" 2>&1 ; echo RC=$?"]


ok.. this is what we're looking for.

Let's back up.

[Expert@FW]# ls -lh /storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip
-rw-r--r--    1 root     root         2.9M Apr  7 21:28 /storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip
[Expert@FW]#


So this is our backsup file, created by the clish command that backs up to tftp? Let's remove it first.

[Expert@FW]# rm /storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip
[Expert@FW]#


Ok now.. that backsup command.

635   execve("/pfrm2.0/bin/backup_settings.sh", ["/pfrm2.0/bin/backup_settings.sh", "local_policy", "pc", " ", "admin", "", ""]


How do we run this from the CLI? Well, each comma-separated string in that execve array is a separate argument to /pfrm2.0/bin/backup_settings.sh.

ok ok ok.

So it looks like: the script, followed by local_policy pc " " admin "" "".
hmmm

Let's see what that does..

[Expert@FW]# /pfrm2.0/bin/backup_settings.sh local_policy pc " " admin "" ""
[Expert@FW]#


Well... something happened; that command took a few seconds to run. Do we have anything special in the /storage dir?

[Expert@FW]# ls -l /storage/
-rw-r--r--    1 root     root      3089185 Apr  7 22:27 FW_R75.20.71_983004120_2016-Apr-07-22_27_06.zip
drwxr-xr-x    2 root     root            0 Apr  5 21:30 lib
[Expert@FW]#


Oh nice! Is that a valid backup? We'll have to do more testing to find out. It's getting a bit late and I'm running out of steam.
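One quick sanity check before trusting it would be to list the archive contents with the same unzip binary the backup script itself calls (per the trace above), something like:

/pfrm2.0/bin/unzip -l /storage/FW_R75.20.71_983004120_2016-Apr-07-22_27_06.zip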




* I misspelled backup. See, I told you everything would be explained!

Tail!

So I added a tool to the list of unsupported tools: tail! I'm fighting the urge to be completely juvenile. Oh well.. :). See the tools page for why this is useful.

Tools page updated!

Tools Tools Tools!

Wednesday, April 6, 2016

Tools Tools Tools!

I know I said the next post was going to be on backups, but I found out that one of my tools was broken (strace on the 600/1100), so I decided to write up a blog on tools. This is kind of a rehash but I wanted to put this in a final resting place. I'll be updating this as other tools are compiled as well.

No 700 packages yet. - April/06/2016

I think Gaia Embedded is missing a few tools. I know, you're thinking, just these 4? Well, these were just kind of my starting point. I have others that I haven't posted, but if someone wants something compiled I'll be more than happy to do so. I'll be releasing the compilers once I get everything packaged up nicely. I lost my source code for the 1200R compiler, so I'll need to restart that mess again. (ugh)

All files in the tars are prefixed with cnf/, so there's no need to worry about trashing a local binary. They also don't require any libraries that aren't already installed, which is a big plus (so far!).

I would expand them under /logs/ or /mnt/sd if you have an external SD card formatted with EXT4 (stick to /logs if you're using dos/vfat).

So, for example: upload the file to /logs/, then run

tar -zxvf FILENAME_HERE
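Concretely (the filename below is just a placeholder for whichever tar you grab from the links further down; everything extracts under cnf/):

cd /logs
tar -zxvf strace_600_1100.tar.gz
ls /logs/cnf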

Note: These packages are not supported by Checkpoint (or anyone, really). Also note that using 3rd-party apps is not supported on Checkpoint firewalls.

dosfstools - this is for people running a 600/1100/1200R with an external SD card installed. If you reboot (reboot command) these devices, there is a good chance the dos/vfat file system on the SD card will eventually become corrupted, because the Checkpoint shutdown process is not so great. I would reformat with EXT4 if possible. If you need access to the data on a hosed dos file system and you're remote, this gives you a way. It's a bit of a gamble; there is no guarantee that repairing the file system will give you access to your data. A second option would be to dd an image off the unmounted SD card, upload it to a remote server, mount it via a loop device, and use a native dos file system checker.
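A rough sketch of that second option (the SD device name here is a guess; check your dmesg/mount output, and work against the unmounted card):

# on the firewall: unmount the card and image it
umount /mnt/sd
dd if=/dev/mmcblk1p1 of=/logs/sd.img
# copy sd.img to a Linux box (scp/tftp), then on that box:
fsck.vfat -a sd.img             # attempt an automatic repair on the image copy
mkdir -p /mnt/recover
mount -o loop,ro sd.img /mnt/recover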

strace - strace is a magical tool. You can watch what a process does in real time and catch errors when debugs aren't useful. If you know what procmon on Windows is, it's basically the same thing: you get to log almost everything a process does, like the files it opens (or tries to), the libraries it loads, and the external processes it spawns. All that jazz. How many times have you thought, "how should xyz work normally?" when something is broken in production but works in a lab? This will shed light. The next blog will be on how to use this, so stay tuned!
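A minimal taste of what that looks like (the PID and paths are just examples; use the full path to wherever you extracted strace):

strace -f -s 256 -o /logs/trace.out -p 1234   # attach to a running process and follow its children
strace -o /logs/trace.out ls /storage         # or trace a command from the start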

gdb - if you find you have a core file (/logs/core), run file against it to find out which process crashed, then use gdb to create a backtrace. This, along with the core file and a cpinfo, will help speed along any support ticket.
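Something along these lines (the binary path depends on what file reports for your particular core):

file /logs/core                       # tells you which program dumped core
gdb /path/to/that/binary /logs/core   # then at the (gdb) prompt type bt for the backtrace, and quit when done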

lsof - LiSt Open Files. This is another great tool for seeing all the files/network connections/devices a given process has open. Want to see every process that has a network connection open, without resolving IPs? lsof -in

tail - I know, I know.. John, everything has tail. Oh, but does your tail include -F support? Well, this one does. If you know that a -F is more than the grade a passive-aggressive teacher gives you, then this is for you!
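For example, this keeps following /var/log/messages even after the file is rotated or recreated, which plain -f won't do (it sticks with the original file):

tail -F /var/log/messages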

600 / 1100 downloads

600 / 1100 dosfstools

600 / 1100 gdb

600 / 1100 lsof

600 / 1100 strace

600 / 1100 tail

1200R downloads

1200R dosfstools

1200R gdb

1200R lsof

1200R strace

1200R tail

Sunday, April 3, 2016

Enabling cron, the scheduling service on 600 / 700 / 1100 / 1200R


NOTE: Gaia Embedded has been updated to enable cron by default. This should no longer be needed.


If you're familiar with UNIX, skip down to step 6 for the commands to add to userScript (note the capital "S"). Also see the note at the bottom about the crontab command and writing your cron jobs by hand.

In this post I'll show you what is required to configure cron so that jobs can be scheduled. This allows you to create custom jobs that run at a certain time/day. For whatever reason this service isn't enabled by default, but it's easy to address and should be upgrade-safe, meaning if you upgrade the firmware this setting should be preserved. It also means we'll need to create/edit the userScript file. This is a file read during boot; each line is executed as the device boots up. For Gaia users this is like a custom rc.local file.

So, let's start off. This is a quick list of what we're going to do.

  1. Create the symbolic link so crond can be found (this command will later go in the startup file). 
  2. Create the directory for crontabs. These are scripts that tell cron when and what to run. 
  3. Start cron (to make sure it will run correctly). 
  4. Create/Edit userScript file. 
  5. Reboot (to make sure cron starts on boot up). 
  6. Create a test cronjob!  (edit userScript again and reboot again).

So let's begin!

Step 1. Here we are going to create a symbolic link (think of this as a shortcut) to busybox for cron.

[Expert@FW]# ln -s /bin/busybox /bin/crond
The link should look something like this now.
[Expert@FW]# ls -l /bin/crond
lrwxrwxrwx 1 root root 12 Apr 3 19:08 /bin/crond -> /bin/busybox
[Expert@FW]#

Step 2. Create the crontabs directory for crond. This is where your scheduled jobs will need to be placed.

[Expert@FW]# mkdir -p /var/spool/cron/crontabs/
Directory should look like this.
[Expert@FW]# ls -ld /var/spool/cron/crontabs/
drwxr-xr-x 2 root root 40 Apr 3 19:12 /var/spool/cron/crontabs/
[Expert@FW]#

Step 3. Start cron!

[Expert@FW]# /bin/crond
Once it's running, you should see something like this in the ps output.
[Expert@FW]# ps aux | egrep '[c]rond'
root 1822 0.0 0.1 4748 656 ? Ss 16:38 0:00 /bin/crond
[Expert@FW]#

Step 4. Let's do this on boot up!
Add each startup command to /pfrm2.0/etc/userScript. This file isn't created by default. This is what it should look like.

[Expert@FW]# ls -l /pfrm2.0/etc/userScript
-rw-r--r-- 1 root root 76 Apr 3 19:24 /pfrm2.0/etc/userScript
[Expert@FW]# cat /pfrm2.0/etc/userScript
ln -s /bin/busybox /bin/crond
mkdir -p /var/spool/cron/crontabs/
/bin/crond
[Expert@FW]#

Step 5. Reboot!

Once the system is done rebooting, you should see something like this when you check from expert mode.
[Expert@FW]# ps aux | egrep '[c]rond'
root 1822 0.0 0.1 4748 656 ? Ss 16:38 0:00 /bin/crond
[Expert@FW]#


Step 6. Setup a test cronjob!

I've created a file called /var/spool/cron/crontabs/root. This is the file that root's cron jobs will be run from. In this example I created a job that simply writes a message to the /var/log/messages file every minute.

[Expert@FW]# ls -l /var/spool/cron/crontabs/root
-rw------- 1 root root 44 Apr 3 19:34 /var/spool/cron/crontabs/root
[Expert@FW]# cat /var/spool/cron/crontabs/root
* * * * * echo "testing123testing" | logger
[Expert@FW]# egrep 'testing' /var/log/messages
2016 Apr 3 19:37:01 FW cron.notice crond[1822]: USER root pid 1957 cmd echo "testing123testing" | logger
2016 Apr 3 19:37:01 FW user.notice root: testing123testing
[Expert@FW]#
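For reference, the five leading fields in that line are minute, hour, day of month, month, and day of week (a * means "every"). So a hypothetical job at 11:30 PM every Sunday would start like this:

30 23 * * 0 /bin/some_command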

Everything looks good. Now create your own crontab via an echo statement in /pfrm2.0/etc/userScript; this way your cronjob will be recreated at boot up. You'll need to be careful to only use " " inside your command if you need quoting as part of your cronjob, and be sure to escape $ and !, otherwise odd things may happen. Here is our final /pfrm2.0/etc/userScript. Note I used an append statement ( >> ) so that if you add more than one job the last line won't clobber the previous ones.

[Expert@FW]# cat /pfrm2.0/etc/userScript
ln -s /bin/busybox /bin/crond
mkdir -p /var/spool/cron/crontabs/
/bin/crond
echo '* * * * * echo "testing123testing" | logger' >> /var/spool/cron/crontabs/root
chmod 600 /var/spool/cron/crontabs/root
[Expert@FW]#


Next up, we'll use cron and userScript to set up a job to automate backups! Hopefully this wasn't too long-winded.
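As a completely untested sketch of where that's headed (combining the backup_settings.sh command dug out of the strace in the backup post above with the userScript trick here; the schedule and arguments are just examples), you'd append something like this to /pfrm2.0/etc/userScript:

echo '0 2 * * * /pfrm2.0/bin/backup_settings.sh local_policy pc " " admin "" ""' >> /var/spool/cron/crontabs/root
chmod 600 /var/spool/cron/crontabs/root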

NOTE: Do not use the crontab command. It will create your crontab, however it won't be saved, because /var lives on /, which is a rootfs. This means nothing written there survives a reboot.