Saturday, April 10, 2010

Crontab error "/bin/sh: root: command not found"
********************************************************

Today I struggled to get crontab working on my system; I am using cron jobs for the first time. I had always wanted to understand how they work, especially as I had heard they are good for periodic backups. But making them work was quite frustrating, especially if, like me, you prefer to google without reading the man pages thoroughly. Let me explain what I was trying to achieve and how the error got resolved. Now I realize I could have saved a lot of time had I read the man pages :(

But sometimes we are in a hurry, not interested in understanding how things work, just in making them work as quickly as possible.


For those who want a quick look at the resolution of this error: check your cron syntax.

1. If you are making changes in a per-user cron file using crontab -e, the job entry should contain 6 fields (no username),
like this:

* * * * * /home/build_auto/echo.sh

A wrong entry like this:
* * * * * root /home/build_auto/echo.sh

would cause cron to interpret "root" as the command to run.

The syntax "* * * * * root /home/build_auto/echo.sh" is valid only for the system crontab file /etc/crontab.

Most syntax-related examples can be found in the man page for crontab files:

man 5 crontab

Creating a simple cron job to run a shell script
***************************************************
I simply want to create a cron job that executes a shell script for me at regular intervals.
So first I read through a simple tutorial to learn the basic syntax and the fields.


For my simple cron job, I create a small shell script that outputs some data into another text file.
For simplicity I run it every minute, so that I can quickly confirm how it works.

So here is my simple shell script, which appends the string "test" to a text file (test.txt):
echo.sh

#!/bin/sh

echo "test" >> /home/build_auto/test.txt


This way, every time the script echo.sh is executed, it appends "test" on a new line in test.txt.
So if our cron job executes perfectly, i.e. every minute, we see "test" appear on a new line each minute.

Say I save my echo.sh in a location : /home/build_auto/
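Before handing the script over to cron, it is worth exercising the append logic once by hand. A minimal sketch that simulates two runs in a throwaway directory (the temp path is illustrative, not the one from the post):

```shell
#!/bin/sh
# simulate two runs of the echo.sh logic in a scratch directory
tmp=$(mktemp -d)
echo "test" >> "$tmp/test.txt"
echo "test" >> "$tmp/test.txt"
cat "$tmp/test.txt"    # prints "test" twice, one per line
rm -rf "$tmp"
```

If this prints two "test" lines, the script itself is fine and any remaining trouble lies with the cron entry.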

Now you can add a cron job in two places:

1. In the system cron file /etc/crontab.
2. In a per-user crontab file, managed with the crontab command.

The per-user file is stored under /var/spool/cron with the same name as the username.


Editing the System cron file /etc/crontab

This way is not advisable, as you would be directly editing the system cron file required by the cron daemon.
Still, if you would like to add an entry, open /etc/crontab in an editor and add a line like this:

* * * * * root /home/build_auto/echo.sh

There are seven fields separated by whitespace; for details, read the man page.
The fields are, in order: minute, hour, day of month, month, day of week, the user account under which the command runs, and the command itself (here the full path of our shell script).


The *s indicate the job matches every minute, every hour, and so on, so it runs once a minute. Save /etc/crontab and your job should execute every minute; there is no need to restart any service.
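For reference, a few /etc/crontab-style schedules; echo.sh is the script from this post, while backup.sh and weekly.sh are hypothetical names used only to illustrate the fields:

```shell
# min hour dom mon dow user command
*     *    *   *   *   root /home/build_auto/echo.sh   # every minute
*/5   *    *   *   *   root /home/build_auto/echo.sh   # every 5 minutes
0     2    *   *   *   root /usr/local/bin/backup.sh   # daily at 02:00 (hypothetical script)
30    6    *   *   1   root /usr/local/bin/weekly.sh   # Mondays at 06:30 (hypothetical script)
```

For a per-user crontab (crontab -e), the same lines apply with the "root" user field removed.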


Editing the user level crontab file using the crontab command

The other way is to create a per-user crontab using the -e (edit) option of the crontab command, which is the method meant for ordinary use, including non-root users.
This file has the same name as the username and can be found under /var/spool/cron.


The crontab syntax is similar to the previous one, except that there are only 6 fields instead of 7: the username is not required.

Create a new crontab file using the command:

crontab -u root -e

or simply

crontab -e

and add an entry like this:

* * * * * /home/build_auto/echo.sh

Remember: no username here. The crontab command has already taken care of it, through the -u option (or through the current user if -u is omitted).
Save the file and your cron script should now be executed every minute.
Confirm your entry by listing the crontab for user root:

99EP68903:/home/build_auto # crontab -u root -l
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.XXXXosSNdV installed on Mon Apr 5 22:03:11 2010)
# (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $)
* * * * * /home/build_auto/echo.sh



You can also see the same entry in the file /var/spool/cron/tabs/root (the tabs subdirectory is this distribution's layout; on many others the file is /var/spool/cron/root).


Making mistakes

If, as a noob, you create an entry "* * * * * root /home/build_auto/echo.sh" using the crontab -e command, you will get mail with error messages like this one:


From root@linux.local Mon Apr 5 22:01:01 2010
Return-Path:
X-Original-To: root
Delivered-To: root@linux.local
Received: by linux.local (Postfix, from userid 0)
id CC5ED320408; Mon, 5 Apr 2010 22:01:01 +0530 (IST)
From: root@linux.local
To: root@linux.local
Subject: Cron root /home/build_auto/echo.sh
X-Cron-Env:
X-Cron-Env:
X-Cron-Env:
X-Cron-Env:
X-Cron-Env:
Message-Id: <20100405163101.cc5ed320408@linux.local>
Date: Mon, 5 Apr 2010 22:01:01 +0530 (IST)
Status: R

/bin/sh: root: command not found

This can be misleading: it is easy to read as if cron were unable to locate /bin/sh. In fact, /bin/sh is complaining that it cannot find a command named "root", because in a per-user crontab cron expects the command to begin at the sixth field.

After a few minutes of successful executions of the cron job, test.txt should look like this:

99EP68903:/home/build_auto # cat test.txt
test
test
test
test
test
test
test
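A small variation makes the timing visible: have the script append a timestamp instead of a fixed string. A sketch (/tmp/cron_test.txt is an illustrative path, not one from the post):

```shell
#!/bin/sh
# like echo.sh, but records when each run happened
echo "run at $(date '+%Y-%m-%d %H:%M:%S')" >> /tmp/cron_test.txt
```

Run from cron every minute, this produces one dated line per minute, so any gap in the schedule is easy to spot.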


And one more thing: make sure every path in your shell script is absolute. A relative path like ./test.txt would be resolved against the working directory of the cron job, which is the home directory of the user it runs as.
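The pitfall is easy to reproduce outside of cron by running a relative redirect from a different working directory; a minimal sketch using a scratch directory:

```shell
#!/bin/sh
# cron starts jobs in the user's home directory; emulate an unexpected cwd
tmp=$(mktemp -d)
cd "$tmp"
echo "test" >> ./test.txt   # relative path: the file lands in $tmp
ls                          # prints: test.txt
cd / && rm -rf "$tmp"
```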


#end of post


LABELS: "/BIN/SH: ROOT: COMMAND NOT FOUND", COMMAND, CRON, CRONTAB, CRONTAB -E, ERROR, GETCH LINUX, JOB
















Failed to find VM - aborting Red Hat
If you are using Red Hat 5.* Linux and you see a message like this during installation:

Failed to find VM - aborting


You need to disable Selinux.
Go to /etc/selinux directory, open the file config, which would look like:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=enforcing
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted

Change the line SELINUX=enforcing to SELINUX=disabled, then reboot for the change to take effect.
Posted by k3w13r at 3:58 PM 0 comments
Monday, June 29, 2009
Exceeding Windows Remote Desktop Limit
While making a Remote Desktop connection, the maximum number of allowed concurrent connections is 2. When this limit is reached, you see an error message saying the terminal server has exceeded the maximum allowed number of connections.

When you close the remote desktop window using the 'x' in the top right corner, you DISCONNECT from the Windows session, but Windows keeps your session alive in memory, so that when you try to log in again it assigns the active session back to you. Closing the window with the 'x' button does not log you off: your session remains active, only its state becomes 'DISCONNECTED'. So when the number of sessions is 2, even though both are disconnected, Windows still shows you this message. You can use a third, reserved connection to log in to Windows remotely:

type this command in your command prompt:

mstsc /v:xx.xx.xx.xx /f /console

and this will open the third (console) session. You can use this connection to kill the other disconnected sessions through Task Manager. xx.xx.xx.xx is the IP of the Windows machine.
Posted by k3w13r at 11:00 PM 0 comments
Set the Date on Linux
date -s "May 08 2009 02:40:51"
Posted by k3w13r at 10:59 PM 0 comments
Saturday, June 27, 2009
tcpdump
This is for reference; it's not a guide, just a list of usage commands that I picked from various sources. Yeah, I admit, I am one of those lamers who prefers to google rather than read the man page. :/ Most are picked from this tutorial:

http://openmaniak.com/tcpdump.php

1. tcpdump
2. tcpdump -v                 # verbose
3. tcpdump -D                 # list devices
4. tcpdump -n                 # avoid DNS lookups
5. tcpdump -q                 # quick output
6. tcpdump udp                # capture UDP packets only :: useful
7. tcpdump -w capture.cap     # save the capture to a file named capture.cap :: useful
8. tcpdump -r capture.cap     # read the dump from capture.cap
9. tcpdump host abc.com       # packets coming from or going to abc.com :: useful
10. tcpdump src xx.xx.xx.aa and dst xx.xx.xx.bb
11. tcpdump -A                # display packet contents :: useful
12. tcpdump -i eth1           # capture on interface eth1
13. tcpdump -v -A -i eth1 'udp and (dst 192.168.69.238 or dst 192.168.69.242)'
    (note: options such as -i must come before the filter expression)
14. tcpdump -n -S -s 15000 -vv -X 'host 192.168.0.159 and udp and port 1717'
    -S  print absolute TCP sequence numbers (not relative)
    -n  no address resolution
    -s  snap length: bytes captured per packet (15000 should be enough to hold
        the data returned by a query; tune it depending on what kind of query you issue)
    -X  print hex and ASCII versions of each packet matching
        'host 192.168.0.159 and udp and port 1717'

for an exhaustive list, see the man page

http://linux.die.net/man/8/tcpdump
Posted by k3w13r at 1:54 PM 0 comments
Installing a Module in Perl through source
I am very new to perl. No idea how to make things work in perl, I mean resolving errors and that kind of stuff; I can write programs with some google help. Two days back I wanted to generate a malformed UDP packet: one with an invalid UDP length field. This kind of packet was notorious for causing DoS attacks on older Unix systems (I don't know the current status). Sure, it was fun. And I found a useful tip for a perl beginner like me. It applies when your code requires a perl module that is not available in your current perl installation. In such cases you see errors like:

Can't locate Socket6.pm in @INC (@INC contains: /usr/lib/perl5/5.10.0/s390x-linux-thread-multi /usr/lib/perl5/5.10.0 /usr/lib/perl5/site_perl/5.10.0/s390x-linux-thread-multi /usr/lib/perl5/site_perl/5.10.0 /usr/lib/perl5/vendor_perl/5.10.0/s390x-linux-thread-multi /usr/lib/perl5/vendor_perl/5.10.0 /usr/lib/perl5/vendor_perl .) at /etc/ha.d/resource.d/ldirectord line 721.
BEGIN failed--compilation aborted at /etc/ha.d/resource.d/ldirectord line 721.

Obviously it means that my Linux doesn't have the perl module named Socket6.pm. It happens many times that googling this error string may or may not find a quick solution. The better way is to go to the CPAN search site

http://search.cpan.org/

and search for Socket6.pm

This will tell you which package contains Socket6.pm. There are two ways of installing it: either through CPAN, or from source. I preferred the second method, as my Linux machine had some internet connectivity issues.

So download the tar.gz package from the results returned by search.cpan.org, extract it, and install it using these commands:

tar -xvzf package.tar.gz
cd package
perl Makefile.PL
make
make test
make install
Posted by k3w13r at 10:54 AM 1 comments
Friday, June 26, 2009
Heartbeat problem
Related to Heartbeat package for High Availability Clusters (SLES 11)
The apache resource script was failing, and for this reason the whole cluster wasn't working. I searched a lot but couldn't find the reason.

node242:/etc/ha.d/resource.d # ./apache status

2009/05/08_02:41:04 ERROR: command failed: sh -c wget -O- -q -L --bind-address=127.0.0.1 http://localhost:80/server-status | tr '\012' ' ' | grep -Ei "[[:space:]]*" >/dev/null
2009/05/08_02:41:04 ERROR: Generic error
ERROR: Generic error

Then I set the debug flag (set -x) in the shell script, and it showed me the actual file where the command was failing:

/usr/lib/ocf/resource.d/heartbeat

Here, in the apache script, I saw the following code, which prepares the wget command's parameters.

#
# It's difficult to figure out whether the server supports
# the status operation.
# (we start our server with -DSTATUS - just in case :-))
#
# Typically (but not necessarily) the status URL is /server-status
#
# For us to think status will work, we have to have the following things:
#
# - $WGET has to exist and be executable
# - The server-status handler has to be mapped to some URL somewhere
#
# We assume that:
#
# - the "main" web server at $PORT will also support it if we can find it
# somewhere in the file
# - it will be supported at the same URL as the one we find in the file
#
# If this doesn't work for you, then set the statusurl attribute.
#
if
  [ "X$STATUSURL" = "X" ]
then
  if
    have_binary $WGET
  then
    StatusURL=`FindLocationForHandler $1 server-status | tail -1`
    if
      [ "x$Listen" != "x" ]
    then
      echo $Listen | grep ':' >/dev/null ||   # Listen can be only a port spec
        Listen="localhost:$Listen"
      STATUSURL="http://${Listen}$StatusURL"
      case $WGET in
        *wget*) WGETOPTS="$WGETOPTS --bind-address=127.0.0.1";;
      esac
    else
      STATUSURL="${LOCALHOST}:${PORT}$StatusURL"
    fi
  fi
fi
test "$PidFile"
}


From the comments I figured out that the server-status check wasn't required in my case, so it was best to comment it out; the problem seemed to be that the wget command itself wasn't being executed successfully by the shell.

monitor_apache() {
  if
    ! have_binary $WGET
  then
    ocf_log err "Monitoring not supported by $OCF_RESOURCE_INSTANCE"
    ocf_log info "Please make sure that wget is available"
    return $OCF_ERR_CONFIGURED
  elif [ -z "$STATUSURL" ]; then
    ocf_log err "Monitoring not supported by $CONFIGFILE"
    ocf_log info "Please set the statusurl parameter"
    return $OCF_ERR_CONFIGURED
  fi

  if
    silent_status
  then
    :   # no-op keeps the then-branch valid with the check commented out
    #ocf_run sh -c "$WGET $WGETOPTS $STATUSURL | tr '\012' ' ' | grep -Ei \"$TESTREGEX\" >/dev/null"
  else
    ocf_log info "$CMD not running"
    return $OCF_NOT_RUNNING
  fi
}


So I commented out the line (adding a no-op ":" in its place, since a then-branch cannot be empty in shell):

#ocf_run sh -c "$WGET $WGETOPTS $STATUSURL | tr '\012' ' ' | grep -Ei \"$TESTREGEX\" >/dev/null"

and my problem was fixed.


node242:/etc/ha.d/resource.d # ./apache status
Script name is : /usr/lib/ocf/resource.d//heartbeat/apache
2009/05/08_02:46:29 INFO: Running OK
INFO: Running OK
