Saturday, April 16, 2016

Gaia Embedded - How it works.


Hi everyone, I know you've been having these strange urges. You have these new feelings and you're not sure what to do about them. Everyone goes through this at one point. It's part of growing up. Meet me at camera three.

Ok, so we're here to talk about Gaia Embedded, of course. Gaia Embedded is the OS that runs the SMB checkpoint firewalls. It's a combo of a u-boot image, busybox, lua, sqlite3 databases, and then all the normal stuff you would expect on a firewall: your fw commands, environment variables and whatnot.

Another major difference is that Gaia Embedded doesn't currently run on any x86/x64 CPU. As of right now it only runs on ARM or MIPS CPUs (that I know of), meaning you can't just take an executable from, say, R77.20 Gaia and expect it to work on R77.20 for the 1100.
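
If you're not sure what architecture a given box is, uname will tell you (on an 1100 I'd expect something ARM-flavored, though the exact output string is a guess on my part):

[Expert@FW]# uname -m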

First let's talk boot up. Gaia Embedded uses an image created via u-boot. This loads the kernel and the root file system, which is a rootfs (shocking!), plus all the normal file systems.

Let's take a look! This is a small portion of /logs/boot_log, which provides a little hint of what is happening. Oh, btw, this is an 1100 running R75.20.71.

Creating 11 MTD partitions on "nand_mtd":
0x00000000-0x000a0000 : "u-boot"
i2c driver was not initialized yet.
0x000a0000-0x00100000 : "bootldr-env"
0x00100000-0x00900000 : "kernel-1"
0x00900000-0x07a00000 : "rootfs-1"
0x07a00000-0x08200000 : "kernel-2"
0x08200000-0x0f300000 : "rootfs-2"
0x0f300000-0x16c00000 : "default_sw"
0x16c00000-0x18400000 : "logs"
0x18400000-0x18500000 : "preset_cfg"
0x18500000-0x18600000 : "adsl"
0x18600000-0x20000000 : "storage"


So it looks like one partition holds the boot loader environment, then we have kernel and rootfs (-1 and -2), default_sw, logs, preset_cfg (maybe factory default boots from here?), adsl (who still uses that?) and storage.

Let's compare that to what we have mounted.
[Expert@FW]# df -h
Filesystem                Size      Used Available Use% Mounted on
tmpfs                    20.0M    620.0k     19.4M   3% /tmp
tmpfs                    40.0M      7.4M     32.6M  18% /fwtmp
/dev/mtdblock7           24.0M      8.5M     15.5M  35% /logs
/dev/mtdblock10         122.0M     27.8M     94.2M  23% /storage
/dev/mtdblock5          113.0M     79.4M     33.6M  70% /pfrm2.0
tmpfs                    40.0M      1.1M     38.9M   3% /tmp/log/local
[Expert@FW]#


Looks like we found logs and storage. Maybe default_sw is /pfrm2.0 (this is basically where most of the appliance lives).

So what else do we see? /tmp, /fwtmp, and /tmp/log/local are tmpfs file systems, meaning RAM-based file systems. Technically they're virtual-memory-backed file systems, but these boxes don't have a swap file, so everything is for sure in RAM.

Now I want to point out there is no "/" in that listing. I'm pretty sure this is because / is a rootfs which is loaded by the kernel. It's kind of like a tmpfs, only it gets a set of files inserted into it beforehand.
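
If you want to see the rootfs (and the tmpfs mounts) for yourself, /proc/mounts should show them, assuming the kernel exposes a rootfs entry there like most do:

[Expert@FW]# grep -e rootfs -e tmpfs /proc/mounts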

OK, let's view /

[Expert@FW]# cd / ; ls -l
drwxr-xr-x    2 105      80              0 Apr 12 10:30 bin
lrwxrwxrwx    1 105      80              6 Dec 31  1969 data -> /flash
lrwxrwxrwx    1 root     root            8 Apr 13 05:20 dbg -> /tmp/dbg
drwxr-xr-x    5 5031     80              0 Apr 13 05:21 dev
drwxr-xr-x    7 105      80              0 Apr 15 07:07 etc
lrwxrwxrwx    1 root     root           16 Apr 13 05:20 flash -> /pfrm2.0/config1
drwxrwxrwt   12 root     root          860 Apr 16 11:20 fwtmp
lrwxrwxrwx    1 105      80             10 Dec 31  1969 init -> /sbin/init
drwxr-xr-x    3 105      80              0 Apr 13 05:21 lib
lrwxrwxrwx    1 105      80             11 Dec 31  1969 linuxrc -> bin/busybox
drwxr-xr-x    8 root     root            0 Apr 12 21:27 logs
drwxr-xr-x    8 105      80              0 Apr 13 05:20 mnt
lrwxrwxrwx    1 root     root           10 Apr 13 05:20 opt -> /fwtmp/opt
drwxr-xr-x   12 root     root            0 Dec 31  1969 pfrm2.0
dr-xr-xr-x   70 root     root            0 Dec 31  1969 proc
drwxr-xr-x    2 105      80              0 Apr 13 05:20 sbin
drwxr-xr-x    8 root     root            0 Apr 16 11:20 storage
drwxr-xr-x   10 root     root            0 Dec 31  1969 sys
drwxrwxrwt    4 root     root          320 Apr 16 11:20 tmp
drwxr-xr-x    2 root     root            0 Apr 13 05:21 usb
drwxr-xr-x    8 105      80              0 Apr 13 05:21 usr
drwxrwxrwx    8 105      80              0 Apr 16 10:46 var
drwxr-xr-x    2 root     root            0 Apr 13 05:21 web
[Expert@FW]#


Now... something to notice: flash is a symbolic link (not a 3rd-party app) to /pfrm2.0/config1. We'll come back to that.

Storage - this seems to be where online updates go? Not 100% sure. /logs is pretty much what it looks like. Logs.

OK, so let's talk black magic now...

Busybox. Busybox is a single application that will act differently based on how it's called. You might be thinking, what do you mean by "how it's called"? Let me show you with a shell script.

[Expert@FW]# echo "echo My Argument is \$1" > /tmp/script.sh
[Expert@FW]# cat /tmp/script.sh
echo My Argument is $1
[Expert@FW]# bash /tmp/script.sh hello!
My Argument is hello!
[Expert@FW]#


In this script I'm saying take the first argument to the script and print it after the "is".

Guess what? There is also a $0, which is the name of the command (script in this case).

[Expert@FW]# echo "echo My Argument is \$0" > /tmp/script.sh
[Expert@FW]# bash /tmp/script.sh hello!
My Argument is /tmp/script.sh
[Expert@FW]#


See how that changed? Using this logic, a script with the exact same contents can act differently based purely on how it was called.

OK, so I changed the script and now we have:

[Expert@FW]# cat /tmp/script.sh
if [ $0 == "hello" ] ; then
    echo My Argument is $1!
else
    echo "i don't know how $0 acts!"
fi
[Expert@FW]# bash script.sh
i don't know how script.sh acts!
[Expert@FW]#


Now let's make a copy of the script and call it hello.

[Expert@FW]# cp script.sh hello
[Expert@FW]# bash hello howdy
My Argument is howdy!
[Expert@FW]#


OK, so we've proven we can change how something behaves based purely on its file name!

In comes Busybox! Busybox is a Swiss Army knife. It's a single binary that has a lot of programs built into it. This is done for massive disk space savings.

[Expert@FW]# ls -l /bin/busybox
-rwxr-xr-x    1 105      80         745216 Dec 31  1969 /bin/busybox
[Expert@FW]#


About 745k. So what's in there?

[Expert@FW]# /bin/busybox
BusyBox v1.8.1 (2015-04-26 16:47:09 IDT) multi-call binary
Copyright (C) 1998-2006 Erik Andersen, Rob Landley, and others.
Licensed under GPLv2. See source distribution for full notice.

Usage: busybox [function] [arguments]...
   or: [function] [arguments]...

        BusyBox is a multi-call binary that combines many common Unix
        utilities into a single executable.  Most people will create a
        link to busybox for each function they wish to use and BusyBox
        will act like whatever it was invoked as!

Currently defined functions:
        [, [[, addgroup, adduser, adjtimex, ar, arp, arping, ash,
        awk, basename, bunzip2, bzcat, bzip2, cal, cat, catv,
        chattr, chgrp, chmod, chown, chpasswd, chpst, chroot,
        chrt, chvt, cksum, clear, cmp, comm, cp, cpio, crond,
        crontab, cryptpw, cut, date, dc, dd, deallocvt, delgroup,
        deluser, df, dhcprelay, diff, dirname, dmesg, dnsd, dos2unix,
        du, dumpkmap, dumpleases, echo, ed, egrep, eject, env,
        envdir, envuidgid, ether-wake, expand, expr, fakeidentd,
        false, fbset, fdflush, fdformat, fdisk, fgrep, find, fold,
        free, freeramdisk, fsck, fsck.minix, ftpget, ftpput, fuser,
        getopt, getty, grep, gunzip, gzip, halt, hdparm, head,
        hexdump, hostid, hostname, httpd, hwclock, id, ifconfig,
        ifdown, ifup, inetd, init, insmod, install, ip, ipaddr,
        ipcalc, ipcrm, ipcs, iplink, iproute, iprule, iptunnel,
        kbd_mode, kill, killall, killall5, klogd, last, length,
        less, linux32, linux64, linuxrc, ln, loadfont, loadkmap,
        logger, login, logname, logread, losetup, ls, lsattr,
        lsmod, lzmacat, makedevs, md5sum, mdev, mesg, mkdir, mkfifo,
        mkfs.minix, mknod, mkswap, mktemp, modprobe, more, mount,
        mountpoint, mt, mv, nameif, netstat, nice, nmeter, nohup,
        nslookup, od, openvt, passwd, patch, pidof, ping, ping6,
        pipe_progress, pivot_root, poweroff, printenv, printf,
        pscan, pwd, raidautorun, rdate, readahead, readlink, readprofile,
        realpath, reboot, renice, reset, resize, rm, rmdir, rmmod,
        route, rpm, rpm2cpio, run-parts, runlevel, runsv, runsvdir,
        rx, sed, seq, setarch, setconsole, setkeycodes, setlogcons,
        setsid, setuidgid, sh, sha1sum, slattach, sleep, softlimit,
        sort, split, start-stop-daemon, stat, strings, stty, su,
        sulogin, sum, sv, svlogd, swapoff, swapon, switch_root,
        sync, sysctl, syslogd, tail, tar, taskset, tcpsvd, tee,
        telnet, telnetd, test, tftp, time, top, touch, tr, traceroute,
        true, tty, ttysize, udhcpc, udhcpd, udpsvd, umount, uname,
        uncompress, unexpand, uniq, unix2dos, unlzma, unzip, uptime,
        usleep, uudecode, uuencode, vconfig, vi, vlock, watch,
        watchdog, wc, wget, which, who, whoami, xargs, yes, zcat,
        zcip

[Expert@FW]#



That is a lot of programs! So how does busybox know how to act? symbolic links!

[Expert@FW]# ls -l mv
lrwxrwxrwx    1 105      80              7 Dec 31  1969 mv -> busybox
[Expert@FW]#


So as you can see mv is a symbolic link to busybox.
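
If you want a quick inventory of which applets are actually linked on the box, something like this should do it (the directory list is just a guess; adjust to taste):

[Expert@FW]# ls -l /bin /sbin /usr/bin /usr/sbin 2>/dev/null | grep busybox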

Feel free to poke around in there and see what else you can learn. Let's move on.

Earlier I started talking about the boot-up process. Normally Unix uses /etc/init.d/ stuff for booting. There are files in there, but most of the heavy lifting is done in

/pfrm2.0/etc/cpInit


This is where firewall kernel modules are loaded and all kinds of things happen.

If you need to run a script at boot up, you'll need to create the following file.

/pfrm2.0/etc/userScript
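
A bare-bones example, just to show the shape of it (the logger line is only a placeholder so you can confirm the script ran by checking /var/log/messages after a reboot):

[Expert@FW]# cat /pfrm2.0/etc/userScript
logger "userScript executed at boot"
[Expert@FW]#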


OK, so what else can we talk about? Where do the configuration changes you make from clish or the webui go?

Right here! ( /flash )

[Expert@FW]# ls -l
drwxr-xr-x    2 root     root            0 Dec 27 11:42 ace
-rw-r--r--    1 root     root           35 Dec 27 11:45 expert_pass_
drwxr-xr-x   10 root     root            0 Dec 27 11:42 fw1
-r--r--r--    1 root     root          373 Apr 12 10:28 passwd
-r-xr-xr-x    1 105      80            950 Sep  2  2015 restore_future_settings_hook.sh
-rw-------    1 root     root          255 Apr 12 10:28 shadow
drwxr-xr-x    4 root     root            0 Dec 27 11:43 sofaware
-rw-r--r--    1 root     root       760832 Apr 16 10:58 system.db
drwxr-xr-x    2 root     root            0 Dec 27 11:42 tmp
-rw-r--r--    1 root     root         1122 Dec 28 05:45 top_last_day_report.json
-rw-r--r--    1 root     root         1123 Dec 26 18:01 top_last_hour_report.json
[Expert@FW]#


Notice a few things. shadow and passwd? These files are copied over to /etc on boot up or when changes are made via clish/webui.
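
A quick way to convince yourself (assuming nothing has changed the running copy since boot; diff is one of the busybox applets, and no output means the files match):

[Expert@FW]# diff /flash/passwd /etc/passwd
[Expert@FW]#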

The interesting one is system.db. This is a sqlite3 database. Want to read it? SURE!

echo .dump | sqlite3 system.db > /logs/system-db.txt


Now you can view all the table schemes.. schema?.. whatever.. database output!
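
If you just want the table names without the full dump, the usual sqlite3 dot commands should work too (the .dump above suggests the on-box sqlite3 supports them):

echo .tables | sqlite3 /flash/system.db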

Something else interesting: Gaia Embedded on all platforms has a built-in, fully managed switch!
I'm not going to dive into that right now, but you can split the ports and do basically anything a normal layer 3 switch would do. Cool stuff.

Now... something odd I've noticed. If for some reason you're doing dynamic routing on Gaia Embedded, keep this in mind when troubleshooting routing issues. If routed crashes it won't be restarted (well, it depends on which routed process crashes, but let's just say all of them crash). BUT!! If you log in via the CLI and issue a show route, it will pause, restart routed under the sheets, THEN show you the output.

This can be VERY confusing, as it will look like an issue suddenly fixed itself before you've had a chance to look at it. Want to see this in action? Set up a lab and get OSPF running, then kill routed and do a show route from clish.
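
A rough lab recipe, just as a sketch (the routed process name and the clish syntax are taken from the description above, so don't treat the exact commands as gospel):

[Expert@FW]# pidof routed
[Expert@FW]# kill -9 `pidof routed`
[Expert@FW]# clish -c "show route"

The last command is the one that pauses, quietly restarts routed, and then prints the routing table.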

Speaking of crashes!

When a process crashes on Gaia Embedded the kernel will use this sysctl to figure out how to generate the core file.

[Expert@FW]# sysctl kernel.core_pattern
kernel.core_pattern = |/pfrm2.0/bin/core_dump.sh
[Expert@FW]#


This means the core file will be piped into the shell script /pfrm2.0/bin/core_dump.sh.

Let's look at that shell script.

[Expert@FW]# cat /pfrm2.0/bin/core_dump.sh
#!/bin/sh

cat > /logs/core
[Expert@FW]#


So... it pipes the core into cat, which writes a file called /logs/core. I'm not following why they didn't just set kernel.core_pattern = /logs/core, but sometimes it's best not to ask questions. :)

OK, two things! First, because the script always writes to the same path, you will only ever have the core from the latest crash; each new crash overwrites it. Second, how do you know what process crashed? We only have a file called /logs/core.

We use the magic file command!

Let's tell sleep to go away most violently and check out the core file. I'm going to tell sleep to sleep for 1000 seconds, then kill it with -6 (SIGABRT, which dumps core). %1 is the first job running in the background.

[Expert@FW]# sleep 1000 &
[1] 18008
[Expert@FW]# kill -6 %1
[Expert@FW]#
[1]+  Aborted                 (core dumped) sleep 1000
[Expert@FW]#

[Expert@FW]# ls -l /logs/core
-rw-r--r--    1 root     root       274432 Apr 16 11:56 /logs/core
[Expert@FW]# file /logs/core
/logs/core: ELF 32-bit LSB core file ARM, version 1 (SYSV), SVR4-style, from 'sleep'
[Expert@FW]#
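
Side note: if you wanted each crash to keep its own core file in a lab, the kernel will pass %-specifiers as extra arguments when core_pattern is a pipe (on reasonably modern kernels), so a tweaked script could look something like this. To be clear, this is NOT the shipped script, just a sketch:

[Expert@FW]# sysctl -w "kernel.core_pattern=|/pfrm2.0/bin/core_dump.sh %e %p"
[Expert@FW]# cat /pfrm2.0/bin/core_dump.sh
#!/bin/sh
# $1 = name of the crashing executable, $2 = its pid (both passed in via core_pattern above)
cat > /logs/core.$1.$2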


If you were troubleshooting something at this point, I would say create a cpinfo (cpinfo -o /logs/`hostname`.cpinfo.gz -z) and then download that core file as well. If you aren't faint of heart, I would also do a backtrace on said core file. You'll need gdb to do that. You can request it from checkpoint or use mine from the tools page. More on doing a backtrace later.
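
Just to give a flavor of that backtrace step, it's roughly this (paths assumed: the gdb from the tools page extracted under /logs, and since sleep is just a busybox applet you point gdb at whatever binary the file command said the core came from):

[Expert@FW]# /logs/cnf/bin/gdb /bin/sleep /logs/core
(gdb) bt
(gdb) quit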

I'll update this with anything else I can think of, but for now that's all, folks!

Tuesday, April 12, 2016

Strace / Backsup / how magical strace is - Part 3 - The final!

UPDATE: as of R77.20 HFA 20 checkpoint has added a scheduled backup option in the webui. It's under Device -> System Operations -> Periodic backup is OFF | Settings.

So I've had a fun journey going slightly insane trying to figure out why one backup method creates a meta string on the zip file and the other doesn't.

Well... so... I made an important discovery. The zip file always had a meta header on it. Technically it's called the comment field (--archive-comment). What happened? Well, I used unzip -l from cygwin and compared it to the output of unzip on Gaia Embedded. The unzip on Gaia Embedded doesn't print the meta header!!
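
For what it's worth, the comment field is exactly what unzip -z prints, at least with the Info-ZIP unzip on a desktop (the on-box one apparently just skips it when listing):

unzip -z FW_R75.20.71_983004120_2016-Apr-09-21_11_10.zip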

ARG! Well that was a waste. Note to self, saving a prompt would have been useful.



So.. down to details..


Here is the command to back up a centrally managed firewall.



/pfrm2.0/bin/backup_settings.sh full pc "Making cron jobs stuff" admin


And here is the command to back up a locally managed firewall.



/pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin


Here is an option to back up without the policy.



/pfrm2.0/bin/backup_settings.sh  pc "Making cron jobs stuff" admin


I don't really understand the "pc" argument. Seems like it has control over where the backup gets stored locally. Not sure I see a point in changing it.

Right... so I'm on a firewall with local policy... SOOOOO... here is my current userScript. I moved crond down the list because it looks like if you make any crontab changes you'll need to restart crond, so to make everything work right we need to create all crontabs before starting crond on boot up.

So this crontab creates a backup every 5 minutes. I did this because I was debugging and wanted to be able to show all the correct times. If you wanted to use this in production you would use a different schedule (see the example after the log output below). If you're not sure how to write a crontab, this looks like a pretty good site: Crontab Examples

[Expert@FW]# cat /pfrm2.0/etc/userScript
ln -s /bin/busybox /bin/crond
mkdir -p /var/spool/cron/crontabs/
echo '*/5 * * * * /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin' >> /var/spool/cron/crontabs/root
/bin/crond
[Expert@FW]# ls -l /storage/Gateway-ID-7F70949E_R75.20.71_983004120_2016-Apr-12-22_10_02.zip
-rw-r--r--    1 root     root      3089494 Apr 12 22:10 /storage/Gateway-ID-7F70949E_R75.20.71_983004120_2016-Apr-12-22_10_02.zip
[Expert@FW]# egrep -i cron /var/log/messages
2016 Apr 12 10:30:22 FW cron.notice crond[1832]: crond 1.8.1 started, log level 8
2016 Apr 12 22:02:23 FW cron.notice crond[7131]: crond 1.8.1 started, log level 8
2016 Apr 12 22:05:01 FW cron.notice crond[7131]: USER root pid 7147 cmd /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin
2016 Apr 12 22:10:01 FW cron.notice crond[7131]: USER root pid 7290 cmd /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin

Hurray! It works. Up next: make this look less terrible.
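
One last thought: for production you probably don't want a backup every 5 minutes. A nightly run is just a different schedule on the same line, for example 2 AM every day:

echo '0 2 * * * /pfrm2.0/bin/backup_settings.sh local_policy pc "Making cron jobs stuff" admin' >> /var/spool/cron/crontabs/root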

Saturday, April 9, 2016

Strace / Backsup / how magical strace is - Part 2

UPDATE: as of R77.20 HFA 20 checkpoint has added a scheduled backup option in the webui. It's under Device -> System Operations -> Periodic backup is OFF | Settings.

In the previous write-up I showed how I think I found the backup command used in Gaia Embedded. How can we be sure this is the backup command? Well, this is what I did: I downloaded a backup via the webui, then issued my backup command and compared the md5sums. Guess what I found? They're different! Aaahh, crap.

So how bad is it? The file listing in the zip is the same. However, when I list the archive with unzip -l I found this: the one from the webui has a meta section; the one created via clish (backup settings to tftp server 127.0.0.1) does not.

Check this out... I've never seen this before (of course that means very little). This is at the top of the webui .zip:

Archive:  FW_R75.20.71_983004120_2016-Apr-09-21_11_10.zip
<meta_data_record>
<UID>MAC_HERE</UID>
<BoardModel>L50</BoardModel>
<Hostname>FW</Hostname>
<Version>R75.20.71_983004120</Version>
<Date>Apr 09, 2016 09:11:10 PM</Date>
<HasPolicy>2</HasPolicy>
<HasPassword>0</HasPassword>
<User>admin</User>
<Comment> </Comment>
</meta_data_record>
  Length      Date    Time    Name

Well... that is odd. So the backup created via clish is different from the backup created by the webui on R75. Hopefully we haven't stumbled onto a bug that only affects R75, as R77 is out.

I'm going to go out on a limb and say the backup made via the webui is a better backup than the one from clish.

Back to strace!

We know the webui runs on port 4434. Let's see what process is on that port:

[Expert@FW]# lsof -nni | grep 4434
-bash: lsof: command not found
[Expert@FW]#


Oh... right... no package... sigh. Well, hopefully support doesn't see this. I uploaded lsof to /logs.

[Expert@FW]# tar -zxvf lsof_4.89.tgz
cnf/bin/lsof
[Expert@FW]# cd cnf/bin/
[Expert@FW]# ./lsof -nni | grep 4434
thttpd     910   root    1u  IPv4   3439      0t0  TCP *:4434 (LISTEN)
[Expert@FW]#

Boom! Now we know what process to strace.

This time I'm going to attach strace to a live process (910).
One thing I want to point out: I logged in to the webui and navigated all the way to the backup section first. Then I started the strace and hit the "Create Backup" button. I also did NOT download the file via the webui, so as not to pollute the strace output with all the stuff for the download. After the backup completed I hit CTRL-C on the strace. This is the full output of strace on the console:

[Expert@FW]# strace -s 1024 -f -p 910 -o /storage/thttpd.txt
strace: Process 910 attached
strace: Process 17366 attached
strace: Process 17367 attached
strace: Process 17368 attached
strace: Process 17369 attached
strace: Process 17491 attached
strace: Process 910 detached
[Expert@FW]#


Now... let's review our strace log.
Let's go right to egrep -i backup /storage/thttpd.txt.

[Expert@FW]# egrep -i backup /storage/thttpd.txt
17367 send(0, "<31>Apr  9 21:42:11 thttpd[17367]: POST data: backup.full_backup=false&backup.comments=&backup.password=&button.create_backup=apply&thispage=lm_backupRestore\n", 158, MSG_NOSIGNAL) = 158
17367 write(1, "backup.full_backup=false&backup.comments=&backup.password=&button.create_backup=apply&thispage=lm_backupRestore", 111) = 111
17369 read(0, "backup.full_backup=false&backup.comments=&backup.password=&button.create_backup=apply&thispage=lm_backupRestore", 1024) = 111
[Expert@FW]#



OK, so the first line looks like it is us hitting the backup button, so the backup command has to be close to this line. However, I can't figure out what thttpd is doing from the unfiltered log (not shown because of how long it is). It's like strace isn't seeing it. It's very possible strace has dorked something up and I need to reboot. I can't do that right now because the wife is watching Ill Tempered Masters of Tattooing on Hulu.
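
One thing worth trying next time (untested here, so consider it a guess) is grepping the same log for process creation instead of the word backup, since whatever thttpd spawns to actually build the zip should show up as a fork/execve:

[Expert@FW]# egrep 'execve|fork|vfork|clone' /storage/thttpd.txt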

I think we're done for tonight.


Thursday, April 7, 2016

Strace / Backsup / how magical strace is - Part 1

UPDATE: as of R77.20 HFA 20 checkpoint has added a scheduled backup option in the webui. It's under Device -> System Operations -> Periodic backup is OFF | Settings.

So... I made a blog post about how to use a symbolic link to enable crond. I'll follow that up with a blog post about busybox, based on the feedback I got from that posting.

I think this will be an interesting post. I'm going to point out I'm doing this on a live firewall.

So... strace... this tool does not ship on any checkpoint firewall. That being said, it's pretty easy to get it on Gaia: just install CentOS 5.11, install strace, and copy over the binary. Gaia Embedded, however... that's a little more difficult. If you check the tools page you'll find a download link for the 600, 1100, and 1200R. I should point out this is only for learning and most likely should not be used on a production firewall. Use this in a lab to learn how things really work because, let's face it, if you don't know how something works when it's working, it's much harder to understand why it's broken when it's broken.

So back to backsup. You're thinking I spelled that wrong. I'll explain that later; it will make sense, just stick with me.

OK, rock and roll. We've already discussed how to enable crond so that we can schedule jobs on Gaia Embedded. The next step is to set up a backsup job. Let's see what options we have in clish and then try to figure out what command to use.



Well... that's not very cool. TFTP or USB? TFTP?? What year is this? Should I hook up a serial connection, encode the backup via uuencode and pipe the backsup into hylafax over SLIP? (I don't really know if that's possible, but I'm guessing so.) OK, joking aside...

Let's just finish the backsup command and see what happens. Let's send it to 127.0.0.1.

[Expert@FW]# clish
FW> backup settings to ?
usb  - Save the backup file on a USB device
tftp - Send the backup file to a TFTP server
FW> backup settings to


OK, so what happened? The TFTP timed out... no surprise there. But what is interesting is the file name. Let's get out of clish (because I already set bashUser on in expert) and see if we can find that file.

FW> exit
Terminated
[Expert@FW]# find / -name FW_R75.20.71_983004120_2016-Apr-07-21_03_43.zip
/storage/FW_R75.20.71_983004120_2016-Apr-07-21_03_43.zip
[Expert@FW]#


OK! Now we're cooking with gas! We could stop here: we could add a cronjob that simply calls clish -c "backup settings to tftp server 127.0.0.1" and then upload said backup from there, but come on, we don't want to fire off a needless tftp command.

Here is where strace comes in, IN YOUR LAB!... ahem...

Let's upload strace for the 600 / 1100, since this is an 1100 firewall. I've put strace in /logs/.

[Expert@FW]# ls -l strace-4.11.tgz
-rw-r--r--    1 root     root       293698 Apr  7 21:13 strace-4.11.tgz
[Expert@FW]# tar -zxvf strace-4.11.tgz
cnf/bin/strace
[Expert@FW]#


OK, so strace is installed.

strace has a lot of arguments. I should also point out it does some things that can cause problems on a production system. I would only use this in a lab environment, as it has a chance of crashing the traced process or causing unexpected bad things to happen. But this is our lab! So what do we care?

So full steam ahead!

[Expert@FW]# /logs/cnf/bin/strace -f -o /logs/strace-output.txt  -s 1024 clish -c "backup settings to tftp server 127.0.0.1"



This should complete without issue (assuming you're running in bash because you issued a bashUser on from expert, logged out and back in).

So the arguments are as follows.
-f == Follow any child processes. What is a child process? Well, many programs can't do everything in one process, so they create sub-processes to handle small tasks and then return to the main process. This option means trace those child processes as well.
-o == This is where our strace output file will go. In this case /logs/strace-output.txt.
-s == This is the maximum length of each string strace will print, so strings get truncated at 1024 characters. Might be a little excessive.
The final arguments are the command we want to trace, which in this case is clish -c "backup settings to tftp server 127.0.0.1"

You'll need to wait a little while. This eats a lot of cpu. After about 5-10 mins we have the following.


ok.. yeah.. -s 1024 was a bit much but we'll just work with what we have.

[Expert@FW]# ls -lh /logs/strace-output.txt
-rw-r--r--    1 root     root        15.4M Apr  7 21:28 /logs/strace-output.txt
[Expert@FW]#



So, a shortcut. I know from playing with this beforehand that we're mostly looking for execve calls.

[Expert@FW]# egrep '^[0-9]+ +execve' /logs/strace-output.txt > /logs/execve.txt
[Expert@FW]#


Ok so what do we have in /logs/execve.txt?

[Expert@FW]# wc -l /logs/execve.txt
      133 /logs/execve.txt
[Expert@FW]#



Not too bad... Let's get to the good part.


577   execve("/bin/clish", ["clish", "-c", "backup settings to tftp server 127.0.0.1"], [/* 23 vars */]) = 0
579   execve("/usr/bin/id", ["id", "-u"], [/* 21 vars */]) = 0
581   execve("/pfrm2.0/bin/pt", ["/pfrm2.0/bin/pt", "--list"], [/* 22 vars */]) = 0
583   execve("/pfrm2.0/bin/lua", ["lua", "-e", "require ('cli.pt')('--list',  nil)"], [/* 21 vars */]) = 0
585   execve("/usr/bin/awk", ["awk", "-F:", "-v", "U=admin", "$1==U { print $7; exit; }", "/etc/passwd"], [/* 22 vars */]) = 0
587   execve("/usr/bin/tty", ["tty"], [/* 22 vars */]) = 0
588   execve("/pfrm2.0/bin/is_under_fw.sh", ["is_under_fw.sh", "577"], [/* 22 vars */]) = 0
589   execve("/pfrm2.0/bin/ppnames.sh", ["ppnames.sh", "577"], [/* 21 vars */]<unfinished ...>
590   execve("/bin/grep", ["grep", "^fw$"], [/* 21 vars */] <unfinished ...>
592   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/] <unfinished ...>
593   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "575"], [/* 21 vars*/]) = 0
595   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
596   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "32500"], [/* 21 vars */]) = 0
598   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
599   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "32499"], [/* 21 vars */]) = 0
601   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
602   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "906"], [/* 21 vars*/]) = 0
604   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
605   execve("/bin/ps", ["ps", "--noheader", "-o", "comm", "1"], [/* 21 vars */]) = 0
607   execve("/usr/bin/awk", ["awk", "$1==\"PPid:\" { print $2}"], [/* 21 vars*/]) = 0
608   execve("/usr/bin/tty", ["tty"], [/* 23 vars */]) = 0
610   execve("/bin/grep", ["grep", "sfwsh\\.bin"], [/* 23 vars */] <unfinished...>
611   execve("/usr/bin/tty", ["tty"], [/* 23 vars */] <unfinished ...>
609   execve("/bin/ps", ["ps", "--noheaders", "-t", "/dev/pts/0"], [/* 23 vars*/]) = 0
613   execve("/pfrm2.0/bin/pt", ["pt", "users", "-f", "username", "admin", "-F", "role"], [/* 23 vars */] <unfinished ...>
614   execve("/usr/bin/head", ["head", "-n", "1"], [/* 23 vars */] <unfinished...>
615   execve("/bin/grep", ["grep", "-v", "{}"], [/* 23 vars */] <unfinished ...>
617   execve("/pfrm2.0/bin/lua", ["lua", "-e", "require ('cli.pt')('users', '-f', 'username', 'admin', '-F', 'role',  nil)"], [/* 22 vars */] <unfinished ...>
577   execve("/pfrm2.0/bin/sfwsh.bin", ["/pfrm2.0/bin/sfwsh.bin", "-c", "backup settings to tftp server 127.0.0.1"], [/* 25 vars */]) = 0
620   execve("/bin/sh", ["sh", "-c", "BKUP_TARGET=tftp backup_settings_cli.sh"], [/* 26 vars */]) = 0
620   execve("/pfrm2.0/bin/cli/backup_settings_cli.sh", ["backup_settings_cli.sh"], [/* 27 vars */]) = 0
623   execve("/bin/sh", ["sh", "-c", "export CPDIR=/opt/fw1 ; export FWDIR=/opt/fw1 ; PATH=/usr/sbin:/opt/fw1/bin:${PATH}; export PATH;/opt/fw1/bin//cpprod_util FwIsLocalMgmt 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
624   execve("/opt/fw1/bin//cpprod_util", ["/opt/fw1/bin//cpprod_util", "FwIsLocalMgmt"], [/* 28 vars */]) = 0
634   execve("/bin/sh", ["sh", "-c", "/pfrm2.0/bin/backup_settings.sh local_policy pc \" \" admin \"\" \"\" 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
635   execve("/pfrm2.0/bin/backup_settings.sh", ["/pfrm2.0/bin/backup_settings.sh", "local_policy", "pc", " ", "admin", "", ""], [/* 28 vars */]) = 0
638   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
639   execve("/bin/rm", ["/bin/rm", "-rf", "/fwtmp/backup_settings_status"], [/* 28 vars */]) = 0
641   execve("/bin/df", ["df", "/logs", "-m"], [/* 28 vars */] <unfinished ...>
642   execve("/usr/bin/tr", ["tr", "-s", " "], [/* 28 vars */] <unfinished ...>
643   execve("/usr/bin/cut", ["cut", "-f4", "-d "], [/* 28 vars */] <unfinished ...>
644   execve("/usr/bin/tail", ["tail", "-n", "1"], [/* 28 vars */] <unfinished...>
658   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
660   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
662   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
663   execve("/opt/fw1/bin/cp_write_syslog.sh", ["/opt/fw1/bin/cp_write_syslog.sh", "[System", "Operations]", "Starting", "settings", "backup", "process..."], [/* 28 vars */]) = 0
663   execve("/usr/bin/logger", ["logger", "-t", "CHECKPOINT", "-p", "info", "--", "[System Operations] Starting settings backup process..."], [/* 27 vars */]) = 0
665   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
667   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "activePartition"], [/* 28 vars */]) = 0
669   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
671   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "hw_mac_addr"], [/* 28 vars */]) = 0
673   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
675   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "unitModel"], [/* 28 vars */]) = 0
677   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
678   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
679   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
681   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
682   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
683   execve("/bin/rm", ["/bin/rm", "-rf", "/storage/*.zip"], [/* 28 vars */])= 0
693   execve("/usr/sbin/fw_printenv", ["/usr/sbin/fw_printenv", "-n", "activeConfig"], [/* 28 vars */]) = 0
694   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
695   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
696   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
697   execve("/bin/date", ["/bin/date", "+%b %d, %Y %r"], [/* 28 vars */]) = 0
698   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
699   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
700   execve("/bin/mkdir", ["/bin/mkdir", "-p", "/pfrm2.0/config1/addtional_settings_tmp"], [/* 29 vars */]) = 0
701   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
702   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
703   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
705   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
706   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/user.dhcpd.conf.*"], [/* 29 vars */]) = 0
707   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
708   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/user.dhcpd.conf.*", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
709   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
710   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
711   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/opt/fw1/boot/modules/*.conf"], [/* 29 vars */]) = 0
712   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
713   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/opt/fw1/boot/modules/*.conf", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/opt/fw1/boot/modules/"], [/* 29 vars */]) = 0
714   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
715   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
716   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/logging.config"], [/* 29 vars */]) = 0
717   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
718   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/logging.config", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
719   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
720   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
721   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/userScript"], [/* 29 vars */]) = 0
722   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
723   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/userScript", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
724   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
728   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
729   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/dropbear_rsa_host_key"], [/* 29 vars */]) = 0
730   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
731   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/dropbear_rsa_host_key", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
732   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
733   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
734   execve("/usr/bin/dirname", ["dirname", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/webManifest"], [/* 29 vars */]) = 0
735   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
736   execve("/bin/cp", ["/bin/cp", "-a", "///pfrm2.0/etc/webManifest", "/pfrm2.0/config1/addtional_settings_tmp//pfrm2.0/etc/"], [/* 29 vars */]) = 0
737   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
738   execve("/bin/cp", ["/bin/cp", "-a", "/pfrm2.0/bin/restore_future_settings_hook.sh", "/pfrm2.0/config1"], [/* 29 vars */]) = 0
739   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
740   execve("/pfrm2.0/bin/firmTool", ["/pfrm2.0/bin/firmTool", "-c", "/dev/mtd4"], [/* 29 vars */]) = 0
743   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "1"], [/* 29 vars */]) =0
746   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "2"], [/* 29 vars */]) =0
749   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "3"], [/* 29 vars */]) =0
752   execve("/usr/bin/cut", ["cut", "-d", "_", "-f", "4"], [/* 29 vars */]) =0
753   execve("/bin/hostname", ["hostname"], [/* 29 vars */]) = 0
754   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
757   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
758   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
759   execve("/pfrm2.0/bin/zip", ["/pfrm2.0/bin/zip", "-ry", "/storage/settings_backup.zip", "ace", "addtional_settings_tmp", "expert_pass_", "fw1", "passwd","restore_future_settings_hook.sh", "shadow", "sofaware", "system.db", "tmp", "top_last_day_report.json", "top_last_hour_report.json", "restore_future_settings_hook.sh", "-x", "./fw1/state/local/FW1/*", "-x", "./sofaware/gui/logs.properties", "-qz"], [/* 29 vars */]) = 0
763   execve("/pfrm2.0/bin/unzip", ["/pfrm2.0/bin/unzip", "-qz", "/storage/settings_backup.zip"], [/* 29 vars */]) = 0
764   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
765   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
766   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
767   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
768   execve("/bin/mv", ["/bin/mv", "/storage/settings_backup.zip", "/storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip"], [/* 29 vars */]) = 0
769   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
770   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
771   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 29 vars */]) = 0
772   execve("/bin/rm", ["/bin/rm", "-rf", "/pfrm2.0/config1/addtional_settings_tmp"], [/* 29 vars */]) = 0
773   execve("/bin/sh", ["sh", "-c", "cat /fwtmp/backup_file_location 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
774   execve("/bin/cat", ["cat", "/fwtmp/backup_file_location"], [/* 28 vars */]) = 0
775   execve("/bin/sh", ["sh", "-c", "cat /fwtmp/backup_file_location 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
776   execve("/bin/cat", ["cat", "/fwtmp/backup_file_location"], [/* 28 vars */]) = 0
777   execve("/bin/sh", ["sh", "-c", "echo `/bin/date +%Y-%b-%d-%T`: 'Uploading FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip to the TFTP server 127.0.0.1'>> /logs/backup_settings 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
779   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0
780   execve("/bin/sh", ["sh", "-c", "ls /storage//FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
781   execve("/bin/ls", ["ls", "/storage//FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip"], [/* 28 vars */]) = 0
782   execve("/bin/sh", ["sh", "-c", "cd /storage/; tftp -pl FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip 127.0.0.1 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
783   execve("/usr/bin/tftp", ["tftp", "-pl", "FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip", "127.0.0.1"], [/* 29 vars */]) = 0
784   execve("/bin/sh", ["sh", "-c", "echo `/bin/date +%Y-%b-%d-%T`: 'tftp: timeout' >> /logs/backup_settings 2>&1 ; echo RC=$?"], [/* 28 vars */]) = 0
786   execve("/bin/date", ["/bin/date", "+%Y-%b-%d-%T"], [/* 28 vars */]) = 0

OK... that was super long. So what are we looking at? These are sub-processes created by the clish command!

So we're pretty sure our magic backsup command is somewhere in here, because I'm guessing the subsystem that uploads the backsup to a TFTP server is different from the process that creates the backsup.

So the first few pages of lines seem like validation tests. 
Then we see this..

577   execve("/pfrm2.0/bin/sfwsh.bin", ["/pfrm2.0/bin/sfwsh.bin", "-c", "backup settings to tftp server 127.0.0.1"]



This looks like our clish command! We must be getting close now..

623   execve("/bin/sh", ["sh", "-c", "export CPDIR=/opt/fw1 ; export FWDIR=/opt/fw1 ; PATH=/usr/sbin:/opt/fw1/bin:${PATH}; export PATH;/opt/fw1/bin//cpprod_util FwIsLocalMgmt 2>&1 ; echo RC=$?"], [/* 28 vars */] <unfinished ...>
624   execve("/opt/fw1/bin//cpprod_util", ["/opt/fw1/bin//cpprod_util", "FwIsLocalMgmt"], [/* 28 vars */]) = 0


This looks like a check for local vs. central management. Hmm, so the backup command may be different based on how the firewall is managed. This firewall is locally managed, FYI.
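
You can run that same check by hand if you're curious (my assumption is it prints 1 on a locally managed box and 0 on a centrally managed one):

[Expert@FW]# /opt/fw1/bin/cpprod_util FwIsLocalMgmt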

634   execve("/bin/sh", ["sh", "-c", "/pfrm2.0/bin/backup_settings.sh local_policy pc \" \" admin \"\" \"\" 2>&1 ; echo RC=$?"]


ok.. this is what we're looking for.

Let's back up.

[Expert@FW]# ls -lh /storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip
-rw-r--r--    1 root     root         2.9M Apr  7 21:28 /storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip
[Expert@FW]#


So this is our backsup file created from the clish command to back up to TFTP. Let's remove it first.

[Expert@FW]# rm /storage/FW_R75.20.71_983004120_2016-Apr-07-21_26_10.zip
[Expert@FW]#


Ok now.. that backsup command.

635   execve("/pfrm2.0/bin/backup_settings.sh", ["/pfrm2.0/bin/backup_settings.sh", "local_policy", "pc", " ", "admin", "", ""]


How do we run this from the CLI? Well, each "," separates an argument to /pfrm2.0/bin/backup_settings.sh.

ok ok ok.

So it looks like it's: the script, then local_policy pc " " admin "" ""
hmmm

Let's see what that does...

[Expert@FW]# /pfrm2.0/bin/backup_settings.sh local_policy pc " " admin "" ""
[Expert@FW]#


Well... something happened, that command took a few seconds to run. Do we have anything special in the /storage dir?

[Expert@FW]# ls -l /storage/
-rw-r--r--    1 root     root      3089185 Apr  7 22:27 FW_R75.20.71_983004120_2016-Apr-07-22_27_06.zip
drwxr-xr-x    2 root     root            0 Apr  5 21:30 lib
[Expert@FW]


Oh nice! Is that a valid backup? We'll have to do more testing to find out. It's getting a bit late and I'm running out of steam.




* I misspelled backup. See, I told you everything would be explained!

Tail!

So I added a tool to the list of unsupported tools: tail! I'm fighting the urge to be completely juvenile. Oh well... :) See the tools page for why this is useful.

Tools page updated!

Tools Tools Tools!

Wednesday, April 6, 2016

Tools Tools Tools!

I know I said the next post was going to be on backups, but I found out that one of my tools was broken (strace on the 600/1100), so I decided to write up a blog on tools. This is kind of a rehash, but I wanted to put this in a final resting place. I'll be updating this as other tools are compiled.

No 700 packages yet. - April/06/2016

Gaia Embedded, I think, is missing a few tools. I know, you're thinking: just these few? Well, these were just kind of my starting point. I have others that I haven't posted, but if someone wants something compiled I'll be more than happy to do so. I'll be releasing the compilers once I get everything packaged up nicely. I lost my source code for the 1200R compiler, so I'll need to restart that mess again. (Ugh.)

All files in the tars are prefixed with cnf/, so there's no need to worry about trashing a local binary. They also don't require any libraries not already installed, which is a big plus (so far!).

I would expand them under /logs/ or /mnt/sd if you have an external SD card formatted with EXT4 (stick to /logs if you're using dos/vfat).

So, for example: upload the files to /logs/,
then run

tar -zxvf FILENAME_HERE

Note: These packages are not supported by checkpoint (or anyone, really). Also note that using 3rd-party apps is not supported on checkpoint firewalls.

dosfstools - this is for people running a 600/1100/1200R with an external SD card installed. If you reboot (reboot command) these devices, there is a good chance the dos/vfat file system on the SD will become corrupted at some point, because the checkpoint shutdown process is not so great. I would reformat with EXT4 if possible. If you need access to the data on a hosed dos file system and you're remote, this gives you a way. It's a bit of a gamble; there is no guarantee that repairing the file system will give you access to your data. A second option would be to dd off the unmounted SD card, upload the image to a remote server, mount it via a loop device and use a native dos file system checker (rough sketch after the tail entry below).

strace - strace is a magical tool. You can watch what happens in real time and catch errors when debugs aren't useful. If you know what procmon on Windows is, it's basically the same thing: you get to log almost everything a process does, like files it opens (or tries to), libraries it loads, external processes it might spawn. All that jazz. How many times have you thought "how should xyz work normally?" when something is broken in production but works in a lab? This will shed light. The next blog will be on how to use this, so stay tuned!

gdb - if you find you have a core file (/logs/core), run file against it to find out what process crashed, then use gdb to create a backtrace. This, in addition to the core file and a cpinfo, will help speed along any support ticket.

lsof - LiSt Open Files. This is another great tool for seeing all the files/network connections/devices a given process has open. Want to see all processes that have a network connection open, without resolving IPs? lsof -in

tail - I know, I know... John... everything has tail. Oh, but does your tail include -F support? Well, this one does. If you know that -F is more than a grade a passive-aggressive teacher gives you, then this is for you!
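
Roughly what that dd-and-loopback option from the dosfstools entry looks like (device name, image location and mount point are examples only; make sure the card really is unmounted first and that you have somewhere with enough free space for the image):

[Expert@FW]# umount /mnt/sd
[Expert@FW]# dd if=/dev/mmcblk0p1 of=/storage/sdcard.img bs=1M
# copy sdcard.img off the box, then on a regular Linux machine:
$ fsck.vfat -n sdcard.img
$ mount -o loop,ro sdcard.img /mnt/recovery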

600 / 1100 downloads

600 / 1100 dosfstools

600 / 1100 gdb

600 / 1100 lsof

600 / 1100 strace

600 / 1100 tail

1200R downloads

1200R dosfstools

1200R gdb

1200R lsof

1200R strace

1200R tail

Sunday, April 3, 2016

Enabling cron, the scheduling service on 600 / 700 / 1100 / 1200R


NOTE: Gaia Embedded has been updated to enable cron by default. This should no longer be needed.


If you're familiar with UNIX, skip down to step 6 for the commands to add to userScript (note the capital "S"). Also see the note at the end about the crontab command and writing your cron jobs by hand.

In this post I'll show you what is required to configure cron so that jobs can be scheduled. This allows you to create custom jobs that run at a certain time/day. For whatever reason this service isn't enabled by default, but it is easy to address and should be upgrade safe, meaning if you upgrade the firmware this setting should be preserved. It also means we'll need to create/edit the userScript file. This is a file read during boot up; each line will be executed as the device boots. For Gaia users this is like a custom rc.local file.

So, let's start off. This is a quick list of what we're going to do.

  1. Create the symbolic link in the startup file so crond can be found. 
  2. Create the directory for crontabs. These are scripts that tell cron when and what to run. 
  3. Start cron (to make sure it will run correctly). 
  4. Create/Edit userScript file. 
  5. Reboot (to make sure cron starts on boot up). 
  6. Create a test cronjob!  (edit userScript again and reboot again).

So let's begin!

Step 1. Here we are going to create a symbolic link (think of this as a shortcut) to busybox for cron.

[Expert@FW]# ln -s /bin/busybox /bin/crond
The link should look something like this now.
[Expert@FW]# ls -l /bin/crond
lrwxrwxrwx 1 root root 12 Apr 3 19:08 /bin/crond -> /bin/busybox
[Expert@FW]#

Step 2. Create the crontabs directory for crond. This is where your scheduled jobs will need to be placed.

[Expert@FW]# mkdir -p /var/spool/cron/crontabs/
Directory should look like this.
[Expert@FW]# ls -ld /var/spool/cron/crontabs/
drwxr-xr-x 2 root root 40 Apr 3 19:12 /var/spool/cron/crontabs/
[Expert@FW]#

Step 3. Start cron!

[Expert@FW]# /bin/crond
once running you should see something like this for output.
[Expert@FW]# ps aux | egrep '[c]rond'
root 1822 0.0 0.1 4748 656 ? Ss 16:38 0:00 /bin/crond
[Expert@FW]#

Step 4. Let's do this on boot up!
Add each start-up command to /pfrm2.0/etc/userScript. This file isn't created by default. This is what it should look like.

[Expert@FW]# ls -l /pfrm2.0/etc/userScript
-rw-r--r-- 1 root root 76 Apr 3 19:24 /pfrm2.0/etc/userScript
[Expert@FW]# cat /pfrm2.0/etc/userScript
ln -s /bin/busybox /bin/crond
mkdir -p /var/spool/cron/crontabs/
/bin/crond
[Expert@FW]#

Step 5. Reboot!

Once the system is done rebooting, you should see something like this from expert mode.
[Expert@FW]# ps aux | egrep '[c]rond'
root 1822 0.0 0.1 4748 656 ? Ss 16:38 0:00 /bin/crond
[Expert@FW]#


Step 6. Set up a test cronjob!

I've created a file called /var/spool/cron/crontabs/root. This is the file that cron jobs will be run from. In this example I created a job that will simply create a message in the /var/log/messages file every min.

[Expert@FW]# ls -l /var/spool/cron/crontabs/root
-rw------- 1 root root 44 Apr 3 19:34 /var/spool/cron/crontabs/root
[Expert@FW]# cat /var/spool/cron/crontabs/root
* * * * * echo "testing123testing" | logger
[Expert@FW]# egrep 'testing' /var/log/messages
2016 Apr 3 19:37:01 FW cron.notice crond[1822]: USER root pid 1957 cmd echo "testing123testing" | logger
2016 Apr 3 19:37:01 FW user.notice root: testing123testing
[Expert@FW]#

Everything looks good. Now create your own crontab via an echo statement in /pfrm2.0/etc/userScript; this way your cronjob will be created at boot up. You'll need to be careful about quoting if your command itself needs quotes, and be sure to escape $ and ! or odd things may happen (see the example after the listing below). Here is our final /pfrm2.0/etc/userScript. Note I used append statements ( >> ) so that if you add more than one job the later lines won't clobber the previous ones.

[Expert@FW]# cat /pfrm2.0/etc/userScript
ln -s /bin/busybox /bin/crond
mkdir -p /var/spool/cron/crontabs/
/bin/crond
echo '* * * * * echo "testing123testing" | logger' >> /var/spool/cron/crontabs/root
chmod 600 /var/spool/cron/crontabs/root
[Expert@FW]#
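
To illustrate the note about escaping: if the cron command itself needs quotes or a $, escape them so they survive the echo at boot time. For example (date and logger are both busybox applets; the line is just an illustration):

echo "0 1 * * * logger \"nightly check at \$(date)\"" >> /var/spool/cron/crontabs/root

Without the backslash in front of the $, $(date) would be expanded once when userScript runs at boot, and every log line would show the boot time instead of the time the job actually ran.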


Next up, we'll use cron and userScript to set up a job to automate backups! Hopefully this wasn't too long-winded.

NOTE: Do not use the crontab command. It will create your crontab; however, it won't survive a reboot, as /var lives on /, which is a rootfs, so nothing written there persists.