Wednesday, October 10, 2012

Script to check the status of application and send email notification if application is down



This script checks the status of an application and sends an email notification if the application is down. Define the application URLs in url.properties.

#!/usr/bin/bash

PATH=${PATH}:/opt/csw/bin; export PATH

cd /home/charan

date=`date '+%Y%m%d%H%M'`
mdate=`date '+%b-%d(%A)-%Y_%H.%M%p'`
home=/home/charan
logdir=$home/logs/application/$1.app.log$date
emaillist=

if [ $# -ne 1 ]; then
    echo
    echo "Enter an env name for the script to execute, or pass ALL as the argument to check the status of all environments specified in url.properties"
    echo
    exit 0
fi

echo " Checking application status of $1 Server "

envlist=" DEV SIT UAT "

if [[ $1 == ALL ]]; then
    for envvar in $envlist; do
        PropsPath="/home/ccbuild/charan/url.properties"
        URL=`grep "$envvar"= $PropsPath | awk -F= '{print $2}'`
        echo " $envvar URL --> $URL "
        wget -q -O $logdir $URL
        # An empty response file means the application did not answer
        if [ ! -s $logdir ]; then
            mailx -s "$envvar Application is DOWN" $emaillist < $logdir
            rm -rf $logdir
        fi
    done
fi

for env in $envlist; do
    if [[ $env == $1 ]]; then
        PropsPath="/home/charan/url.properties"
        URL=`grep "$env"= $PropsPath | awk -F= '{print $2}'`
        echo " $env URL --> $URL "
        wget -q -O $logdir $URL
        if [ ! -s $logdir ]; then
            mailx -s "$env Application is DOWN" $emaillist < $logdir
            rm -rf $logdir
        fi
    fi
done

rm -rf $logdir



--------------------------------------------------------------



DEV=

SIT=

UAT=
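The URL lookup in the script above is a plain grep/awk over this properties file. A minimal sketch of that lookup, using a hypothetical url.properties (the hostnames below are placeholders, not values from the original post; the grep here is anchored with ^ for safety):

```shell
# Hypothetical url.properties -- hostnames are illustrative placeholders
cat > /tmp/url.properties <<'EOF'
DEV=http://devhost:9080/app/index.html
SIT=http://sithost:9080/app/index.html
UAT=http://uathost:9080/app/index.html
EOF

# The lookup the script performs: match the "ENV=" line, print everything after "="
env=SIT
URL=$(grep "^${env}=" /tmp/url.properties | awk -F= '{print $2}')
echo "$URL"
```

Because the URL itself contains no "=" character, awk's second field is the whole URL.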

Web server plug-in routing to SAME application in DIFFERENT clusters

Question


If I install the same Web application into more than one WebSphere Application Server cluster, is it possible to configure the Web server plug-in to properly route requests to the application in both clusters?



Cause



The most common use of the WebSphere Application Server Web server plug-in is to load balance requests for an application installed to a single cluster. For that environment you should not use the instructions in this technote.



In some rare cases, you might want to install the exact same application into multiple clusters. The purpose of this technote is to describe how to configure the WebSphere Application Server Web server Plug-in to work properly in that specific case.



Answer



Note: The Web server plug-in does not support load balancing or fail-over between multiple clusters. Also the configuration described below requires manually changing the plugin-cfg.xml file, so you should turn off automatic propagation in the WebSphere Administrative Console so that the plugin-cfg.xml file does not get automatically overwritten.



Yes, it is possible for a single Web server plug-in to properly route requests when the same application is installed into more than one WebSphere Application Server cluster. To make this work you will need to use different hostnames or port numbers for each of the clusters. You will also need to do some manual cut and paste of information in the plugin-cfg.xml file.



The example below shows exactly how to accomplish this.



* The IBM HTTP Server machine is called ihsbox.

* cluster1 has two members, cl1_member1 and cl1_member2.

* cluster2 has two members, cl2_member1 and cl2_member2.

* Both member1 appservers are on a machine called was1.

* Both member2 appservers are on a machine called was2.





So, for this simple example, it would look like the following:



           ------ was1 --- cl1_member1
          /            \-- cl2_member1
         /
  ihsbox
         \
          \------ was2 --- cl1_member2
                       \-- cl2_member2



If I install my snoop application (context-root /snoop) into both clusters, how is the plug-in supposed to distinguish which ServerCluster to use?



In the plug-in, there are only 3 things that can distinguish between requests:



* hostname

* port number

* URI





For example, these URLs are unique requests that can be routed independently of each other:



http://host1/snoop

http://host1:83/snoop

http://host2/snoop

http://host2:81/snoop



In each of these examples, the URI part /snoop remains the same. It is the hostname or port number that makes the difference.



Back to the example, in Application Server admin, you would create a virtual host called "vhost1" which would have a host alias of host1:80. You would also need to include other host aliases for the internal ports used by appservers in cluster1 (for example: ports 9080, 9081, 9443, 9444). You would use this virtual host (vhost1) in all of the members of cluster1 (cl1_member1 and cl1_member2).



In addition, you would create a virtual host called "vhost2" which would have a host alias of host2:80. You would need to include other host aliases for the internal ports used by appservers in cluster2. You would use this virtual host (vhost2) in all of the members of cluster2 (cl2_member1 and cl2_member2).



In order to maintain session affinity it is essential to use different affinity cookie names for each different cluster. For example, the appservers in cluster1 can use the cookie name "JSESSIONIDC1". And the appservers in cluster2 can use the cookie name "JSESSIONIDC2". By using different cookie names for the different clusters, session affinity will be preserved within each cluster. For information about how to change the cookie names, see Cookie settings in the Information Center.



You must map the application modules to the newly created virtual hosts. Since the same application is installed to both clusters, you will need to map the application modules to both vhosts. However, there currently is a limitation in the Application Server administrative console in that it only allows the application modules to be mapped to a single vhost. Consequently, you must use a trick to map the modules twice and manually copy and paste the configs into a single plugin-cfg.xml file.



Here are the steps to use:



1. Map the application modules to the first vhost (for example: vhost1).



2. Generate the plug-in.



3. From the plugin-cfg.xml file, manually copy the VirtualHostGroup and UriGroup and Route that correspond to vhost1.



4. Map the application modules to the second vhost (for example: vhost2).



5. Generate the plug-in.



6. In the new plugin-cfg.xml file you will see that the VirtualHostGroup and UriGroup for vhost1 are gone, and there are new VirtualHostGroup and UriGroup for vhost2.



7. Manually paste the VirtualHostGroup and UriGroup and Route for vhost1 back into the plugin-cfg.xml file.



8. Save the plugin-cfg.xml file and propagate it to the Web server.





The plugin-cfg.xml file should now have a VirtualHostGroup and UriGroup for vhost1 with a Route that points to cluster1. Also there should be a VirtualHostGroup and UriGroup for vhost2 with a Route that points to cluster2.
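A rough sketch of what that merged section of plugin-cfg.xml might look like after step 7. The element names follow the plugin-cfg.xml schema, but the group names, URIs, and attributes below are illustrative only; generate the real file and merge as described in steps 1-8:

```xml
<!-- Illustrative fragment only -->
<VirtualHostGroup Name="vhost1">
    <VirtualHost Name="host1:80"/>
</VirtualHostGroup>
<VirtualHostGroup Name="vhost2">
    <VirtualHost Name="host2:80"/>
</VirtualHostGroup>

<UriGroup Name="vhost1_cluster1_URIs">
    <Uri Name="/snoop/*"/>
</UriGroup>
<UriGroup Name="vhost2_cluster2_URIs">
    <Uri Name="/snoop/*"/>
</UriGroup>

<Route ServerCluster="cluster1" UriGroup="vhost1_cluster1_URIs" VirtualHostGroup="vhost1"/>
<Route ServerCluster="cluster2" UriGroup="vhost2_cluster2_URIs" VirtualHostGroup="vhost2"/>
```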



You need to account for these new hostnames in your IBM HTTP Server config (httpd.conf). The ServerName for IBM HTTP Server is ihsbox. Create a VirtualHost in IBM HTTP Server to account for the other valid hostnames, like this:





<VirtualHost *:80>

ServerName ihsbox

ServerAlias host1

ServerAlias host2

</VirtualHost>





Add host1 and host2 into your DNS config so that they resolve to the IP address of ihsbox.
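If you cannot touch DNS, a hosts-file entry on the client machines achieves the same resolution. The IP address below is a documentation placeholder standing in for ihsbox's real address:

```
# /etc/hosts (illustrative IP)
192.0.2.10   ihsbox host1 host2
```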



Now, this URL http://host1/snoop will go to the snoop application in cluster1.



And, this URL http://host2/snoop will go to the snoop application in cluster2.



If you want to use different port numbers instead of different hostnames, the same idea applies there as well.



Jython Script to check the status of listener ports of all servers of cluster



import java

lineSeparator = java.lang.System.getProperty('line.separator')

clusterName = 'CLUSTER_NAME'
cId = AdminConfig.getid("/ServerCluster:" + clusterName + "/")
cList = AdminConfig.list("ClusterMember", cId).split(lineSeparator)

for sId in cList:
    server = AdminConfig.showAttribute(sId, "memberName")
    node = AdminConfig.showAttribute(sId, "nodeName")
    cell = AdminControl.getCell()

    s1 = AdminControl.completeObjectName('cell=' + cell + ',node=' + node + ',name=' + server + ',type=Server,*')
    if len(s1) > 0:
        print server + " state is started"
    else:
        print server + " is down"

    print "Server " + server + " has the following Listener Ports"
    lPorts = AdminControl.queryNames('type=ListenerPort,cell=' + cell + ',node=' + node + ',process=' + server + ',*')
    lPortsArray = lPorts.split(lineSeparator)
    for lPort in lPortsArray:
        lpcfgId = AdminControl.getConfigId(lPort)
        lpName = AdminConfig.showAttribute(lpcfgId, "name")
        lpstate = AdminControl.getAttribute(lPort, 'started')
        if lpstate == 'true':
            print lpName + " is started "
        else:
            print lpName + " is stopped "

    print ""

Useful one-liner scripts for the WAS administrator:


1. AdminTask.reportConfiguredPorts() : Lists every server in your cell and shows all the ports each server uses.

2. AdminTask.reportConfigInconsistencies() : Checks the configuration repository and reports any structural inconsistencies.

3. AdminApp.list() : Lists every application installed in your cell.

4. AdminApp.view('appName') : Replace appName with one of the names returned by AdminApp.list(). The name must be surrounded by quotes.

5. AdminTask.generateSecConfigReport() : Shows every setting related to security in your entire cell, the current value of each setting, and the menu path through the Admin Console to that setting. The printout is a little confusing at first, but it is very useful once you get used to reading it.

6. AdminTask.createApplicationServer('Node1', '[-name serverNode1 ]') : Creates a new application server called "serverNode1" on a node called "Node1".

Tuesday, October 9, 2012

Unix script to monitor TCP port statistics

One simple and very useful indicator of process health and load is its TCP activity. The following script takes a set of ports and summarizes how many TCP sockets are established, opening, and closing for each port. It has been tested on Linux and AIX. Example output:




$ portstats.sh 80 443
PORT   ESTABLISHED  OPENING  CLOSING
80     3            0        0
443    10           0        2
====================================
Total  13           0        2

portstats.sh:



#!/bin/bash
# bash, not sh: the script uses arrays, which POSIX sh does not support

usage() {
    echo "usage: portstats.sh PORT_1 PORT_2 ... PORT_N"
    echo "  Summarize network connection statistics coming into a set of ports."
    echo ""
    echo "  OPENING represents SYN_SENT and SYN_RECV states."
    echo "  CLOSING represents FIN_WAIT1, FIN_WAIT2, TIME_WAIT, CLOSED, CLOSE_WAIT,"
    echo "  LAST_ACK, CLOSING, and UNKNOWN states."
    echo ""
    exit
}

NUM_PORTS=0
OS=`uname`

for c in $*
do
    case $c in
        -help|--help|-usage|--usage|-h|-\?)
            usage
            ;;
        *)
            PORTS[$NUM_PORTS]=$c
            NUM_PORTS=$((NUM_PORTS + 1))
            ;;
    esac
done

if [ "$NUM_PORTS" -gt "0" ]; then
    date
    NETSTAT=`netstat -an | grep tcp`
    i=0
    TOTESTABLISHED=0
    TOTOPENING=0
    TOTCLOSING=0
    for PORT in ${PORTS[@]}
    do
        # AIX netstat separates the port with a dot, Linux with a colon
        if [ "$OS" = "AIX" ]; then
            PORT="\.$PORT\$"
        else
            PORT=":$PORT\$"
        fi
        ESTABLISHED[$i]=`echo "$NETSTAT" | grep ESTABLISHED | awk '{print $4}' | grep "$PORT" | wc -l`
        OPENING[$i]=`echo "$NETSTAT" | grep SYN_ | awk '{print $4}' | grep "$PORT" | wc -l`
        WAITFORCLOSE[$i]=`echo "$NETSTAT" | grep WAIT | awk '{print $4}' | grep "$PORT" | wc -l`
        WAITFORCLOSE[$i]=$((${WAITFORCLOSE[$i]} + `echo "$NETSTAT" | grep CLOSED | awk '{print $4}' | grep "$PORT" | wc -l`))
        WAITFORCLOSE[$i]=$((${WAITFORCLOSE[$i]} + `echo "$NETSTAT" | grep CLOSING | awk '{print $4}' | grep "$PORT" | wc -l`))
        WAITFORCLOSE[$i]=$((${WAITFORCLOSE[$i]} + `echo "$NETSTAT" | grep LAST_ACK | awk '{print $4}' | grep "$PORT" | wc -l`))
        WAITFORCLOSE[$i]=$((${WAITFORCLOSE[$i]} + `echo "$NETSTAT" | grep UNKNOWN | awk '{print $4}' | grep "$PORT" | wc -l`))
        i=$((i + 1))
    done

    printf '%-6s %-12s %-8s %-8s\n' PORT ESTABLISHED OPENING CLOSING
    i=0
    for PORT in ${PORTS[@]}
    do
        printf '%-6s %-12s %-8s %-8s\n' $PORT ${ESTABLISHED[$i]} ${OPENING[$i]} ${WAITFORCLOSE[$i]}
        TOTESTABLISHED=$(($TOTESTABLISHED + ${ESTABLISHED[$i]}))
        TOTOPENING=$(($TOTOPENING + ${OPENING[$i]}))
        TOTCLOSING=$(($TOTCLOSING + ${WAITFORCLOSE[$i]}))
        i=$((i + 1))
    done

    printf '%36s\n' | tr " " "="
    printf '%-6s %-12s %-8s %-8s\n' Total $TOTESTABLISHED $TOTOPENING $TOTCLOSING
else
    usage
fi

To automatically read the ports for IHS, use:


$ portstats.sh `grep Listen /opt/IBM/HTTPServer/conf/httpd.conf | grep -v "\#" | awk '{print $2}' | tr '\n' ' '`


It should also be possible to extract WAS ports from .../WebSphere/AppServer/profiles/*/config/cells/*/nodes/*/serverindex.xml.
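A hedged sketch of that serverindex.xml extraction, run here against a hand-made sample file. The element layout below mirrors the endPoint entries in serverindex.xml, but the real file carries many more attributes, and the server names and ports are illustrative:

```shell
# Minimal sample with the endPoint/port shape serverindex.xml uses (illustrative)
cat > /tmp/serverindex.xml <<'EOF'
<serverindex:ServerIndex xmlns:serverindex="http://www.ibm.com/websphere/appserver/schemas/5.0/serverindex.xmi">
  <serverEntries serverName="server1">
    <specialEndpoints endPointName="WC_defaulthost">
      <endPoint host="*" port="9080"/>
    </specialEndpoints>
    <specialEndpoints endPointName="WC_defaulthost_secure">
      <endPoint host="*" port="9443"/>
    </specialEndpoints>
  </serverEntries>
</serverindex:ServerIndex>
EOF

# Pull out the port numbers, one per line (prints 9080 then 9443)
grep -o 'port="[0-9]*"' /tmp/serverindex.xml | sed 's/[^0-9]//g'
```

The resulting list could be fed straight to portstats.sh the same way the httpd.conf Listen lines are.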

Moving WebSphere dmgr from one host to another

Steps involved in moving WebSphere dmgr from one host (machine1) to another host (machine2) with the hostname change :

1) As a precaution, please make a backup of the following so that it is easy to restore if something goes wrong:

a) Back up the configuration of all the profiles (DMGR and AppSrv) on machine1 that are involved in the cell.

(eg) basically run WAS_ROOT/bin/backupConfig.sh, which will create a file like WebSphereConfig_2007-11-16.zip

b) (Optional) Also take a filesystem backup of the profile directory if possible, to avoid any surprises.



2) Install WebSphere ND on the new box and create a new dmgr profile, machine2_dmgr_profile.



3) Extract WebSphereConfig_2007-11-16.zip into the config directory of the new dmgr profile.

(eg) jar -xvf WebSphereConfig_2007-11-16.zip



4) If the dmgr profile location on machine2 is different from machine1, change the USER_INSTALL_ROOT "value" in config/cells/<cell_name>/nodes/<dmgr_node_name>/variables.xml to point to the new dmgr profile location.



5) Change the following properties in the new dmgr profile's bin/setupCmdLine.sh to point to the machine1 dmgr cell name and node name.



(eg) WAS_CELL=machine1Cell01

WAS_NODE=machine1CellManager01



6) Copy the custom keyfiles (*.jks) from the old dmgr profile's etc directory to the new one's etc directory; or skip this step if dmgr is using the default keys.



7) Follow the instructions in http://www-1.ibm.com/support/docview.wss?rs=180&context=SSEQTP&q1=best+practices&uid=swg27007419&loc=en_US&cs=utf-8&lang=en, pages 4-6, sections 2 and 2.1.



8) Check that sync works for all the nodes and that you can see all the configurations from the previous dmgr.



Certificate Chk Scripts

#!/bin/sh

################## cert_checker #############################
#
# Usage
# 1 - Copy to the WAS user's home directory (~)
# 2 - Find out the locations of the keystores
# 3 - Find out the keystore passwords - these will be in ssl.client.props in the properties directory
# 4 - Run the script with the following syntax: ./certcheck.sh directoryname keystore_password
#     eg ./certcheck.sh /wload/w6fc/app/profiles/base/config/cells/WLTHDR password01
#
####################################################################

rm -f ~/certlist.txt
fname=listcerts.tmp
pname=listcerts2.tmp
directory=$1
password=$2

for a in `ls $directory/*jks $directory/*kdb $directory/*p12`
do
    echo "" >> ~/certlist.txt
    echo "################ private certificates #######################" >> ~/certlist.txt
    echo $a >> ~/certlist.txt
    /usr/opt/ibm/gskta/bin/gsk7cmd -cert -list personal -pw $password -db $a 2>>/dev/null | grep -v Certificates > ${fname}
    while read b
    do
        echo "######" >> ~/certlist.txt
        /usr/opt/ibm/gskta/bin/gsk7cmd -cert -details -pw $password -db $a -label "$b" 2>>/dev/null >> ~/certlist.txt
    done < ${fname}
    echo "#######################################"
done

for a in `ls $directory/*jks $directory/*kdb $directory/*p12`
do
    echo "" >> ~/certlist.txt
    echo "################ public certificates #######################" >> ~/certlist.txt
    echo $a >> ~/certlist.txt
    /usr/opt/ibm/gskta/bin/gsk7cmd -cert -list CA -pw $password -db $a 2>>/dev/null | grep -v Certificates > ${pname}
    while read c
    do
        echo "######" >> ~/certlist.txt
        /usr/opt/ibm/gskta/bin/gsk7cmd -cert -details -pw $password -db $a -label "$c" 2>>/dev/null >> ~/certlist.txt
    done < ${pname}
    echo "#######################################"
done

rm ${fname}
rm ${pname}