Wednesday, September 8, 2010

Configure SSL for WebSphere Application Server

 


See these topics for instructions on configuring SSL for WebSphere Application Server:

Note: For these steps, it is assumed that you have a network drive mapped from your workstation to your iSeries system.

Configure SSL for WebSphere plug-ins

A WebSphere plug-in interfaces with a Web server to handle client requests for server-side resources and routes them to the application server for processing. WebSphere Application Server - Express includes plug-ins for IBM HTTP Server for i5/OS and Domino Web Server for iSeries.

After SSL is working between your browser and Web server, proceed to configure SSL between the Web server plug-in and the WebSphere Application Server - Express product. This is not required if the link between the plug-in and application server is known to be secure or if your applications are not sensitive. If privacy of application data is a concern, however, this connection should be an SSL connection.

Using the product-provided certificates to configure SSL for WebSphere plug-ins

Each WebSphere Application Server - Express Version 5.1 (and later) application server instance contains an SSL key file. The pathname for the key file is /QIBM/UserData/WebASE51/ASE/instance/etc/plugin-key.kdb, where instance is the name of your server instance.

The plugin-key.kdb file contains a digital certificate. The digital certificate is required for the Web server plug-in to trust the signer of the Web container's certificate when an HTTPS transport is configured with the default SSL repertoire. The default Web container is created with such an HTTPS transport.

This default HTTPS transport should be removed or reconfigured to replace the product-provided certificates before putting the server into production. Using the product-provided certificates to configure SSL for the WebSphere plug-ins significantly reduces configuration complexity, but they should not be used for production servers. The tasks below demonstrate how to create your own certificates. Alternatively, you can obtain certificates from a commercial certificate authority.

Creating an SSL key file for the WebSphere Web server plug-in

The following is an example of how to create an SSL key file for your WebSphere plug-in:

  1. Start the Digital Certificate Manager.
    Procedures vary depending on the release of Digital Certificate Manager (DCM) you have installed on your iSeries system. The release of DCM used in this article is V5R1M0.

  2. Create a local certificate authority.
    Skip this step if you already have a certificate authority (CA) created on your iSeries system.

  3. Create a key store for the HTTP server plug-in:
    1. In the left pane, click Create New Certificate Store.
    2. Select Other System Certificate Store and click Continue.
    3. On the Create a Certificate in New Certificate Store page, select Yes - Create a certificate in the certificate store, and click Continue.
    4. On the Select a Certificate Authority (CA) page, select Local Certificate Authority and click the Continue button.
    5. Fill in the form to create a certificate and certificate store. Use this pathname for the certificate store:
      /QIBM/UserData/WebASE51/ASE/instance/etc/plugin-key.kdb

      where instance is the name of your instance. (The remainder of these instructions refers to the directory above etc as USER_INSTALL_ROOT.)

      Use MyPluginCert as the key label. Fill in the other required fields, and then click Continue.

  4. Set the default system certificate:
    1. In the left pane, click to expand Fast Path.
    2. Select Work with server and client certificates.
    3. Select certificate MyPluginCert.
    4. Click Set default.

  5. Remove all trusted signers except the Local CA:
    1. On the left pane, click Select a Certificate Store.
    2. Select Other System Certificate Store and click Continue.
    3. On the Certificate Store and Password page, enter the Certificate store path and filename (USER_INSTALL_ROOT/etc/plugin-key.kdb) and the password. Click Continue.
    4. On the left pane, click Fast Path.
    5. Select Work with CA certificates and click Continue.
    6. On the Work with CA Certificates page, for all CA certificates except the LOCAL_CERTIFICATE_AUTHORITY, select the certificate and then click Delete. Respond with Yes when asked if you are sure you want to delete this certificate.

  6. Extract the Local CA certificate so that you can import the certificate into the application server key file later:
    1. In the left pane, click Install CA certificate on your PC.
    2. In the right pane, click Copy and paste certificate.
    3. Create text file USER_INSTALL_ROOT/etc/myLocalCA.txt on your workstation's mapped drive to the iSeries, then paste the CA certificate into myLocalCA.txt and save the file.
    4. Click Done.

Use SSL configuration repertoires to manage SSL settings for resources in the administrative domain. The default repertoire is DefaultNode/DefaultSSLSettings. You can use DefaultNode/DefaultSSLSettings for testing or create new SSL configuration repertoires for production applications and associate them with individual resources. For more information, see Use SSL configuration repertoires.

Configuring SSL for the application server's HTTPS transport

To configure SSL for the application server's HTTPS transport, you must first create an SSL key file. The contents of this file depend on whom you want to allow to communicate directly with the application server over the HTTPS port (in other words, you are defining the HTTPS server security policy).

This topic presents a restrictive security policy, in which only a well-defined set of clients (those whose certificates are signed by your local certificate authority) are allowed to connect to the application server HTTPS port. It is recommended that you follow this security policy when your application's deployment descriptor specifies the use of the client certificate authentication method. The procedure for creating an SSL key file without the default signer certificates conforms to this policy.

To configure SSL for the application server's HTTPS transport, follow these steps:

Step 1: Create an SSL key file without the default signer certificates.

  1. Start iKeyman on your workstation. For more information, see IBM Key Management Tool (iKeyman).

  2. Create a new key database file:
    1. Click Key Database File and select New.
    2. Specify settings:
      • Key database type: JKS
      • File Name: appServerKeys.jks
      • Location: your etc directory, such as USER_INSTALL_ROOT/etc
    3. Click OK.
    4. Enter a password (twice for confirmation) and click OK.

  3. Delete all of the signer certificates.

  4. Click Signer Certificates and select Personal Certificates.

  5. Add a new self-signed certificate:
    1. Click New Self-Signed to add a self-signed certificate.
    2. Specify settings:
      • Key Label: appServerTest
      • Common Name: use the DNS name for your iSeries server
      • Organization: IBM
    3. Click OK.

  6. Extract the certificate from this self-signed certificate so that it can be imported into the plug-in's SSL key file:
    1. Click Extract Certificate.
    2. Specify settings:
      • Data Type: Base64-encoded ASCII data
      • Certificate file name: appServer.arm
      • Location: the path to your etc directory
    3. Click OK.

  7. Import the Local CA public certificate:
    1. Click Personal Certificates and select Signer Certificates.
    2. Click Add.
    3. Specify settings:
      • Data Type: Base64-encoded ASCII data
      • Certificate file name: myLocalCA.txt
      • Location: the path to your etc directory
    4. Click OK.

  8. Enter plug-in for the label and click OK.

  9. Click Key Database File.

  10. Select Exit.
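If you prefer the command line, the GUI steps above can also be sketched with the JDK's keytool utility. This is only an illustrative sketch, not the article's procedure: the store password, alias, and host name in the DN are example values, and your environment's keytool may differ in detail.

```shell
# Create the JKS key store with a self-signed certificate (steps 2 and 5 above);
# myPassword and the CN/DN values are examples - substitute your own.
keytool -genkeypair -keystore appServerKeys.jks -storetype JKS -storepass myPassword \
        -alias appServerTest -dname "CN=myiseries.example.com,O=IBM" -keyalg RSA

# Extract the certificate so it can be imported into the plug-in's key file (step 6 above)
keytool -exportcert -keystore appServerKeys.jks -storepass myPassword \
        -alias appServerTest -rfc -file appServer.arm

# Import the Local CA public certificate as a trusted signer (steps 7 and 8 above)
keytool -importcert -keystore appServerKeys.jks -storepass myPassword \
        -alias plug-in -file myLocalCA.txt -noprompt
```

Unlike the GUI procedure, keytool does not start from a store full of default signers, so there is nothing to delete in step 3.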

Step 2: Add the signer certificate of the application server to the plug-in's SSL key file.

  1. Start the Digital Certificate Manager (DCM).
  2. On the left pane, click Select a Certificate Store.
  3. Select Other System Certificate Store and click Continue.
  4. On the Certificate Store and Password page, enter the Certificate store path and filename (USER_INSTALL_ROOT/etc/plugin-key.kdb) and the password, then click Continue.
  5. On the left pane, click Fast Path.
  6. Select Work with CA certificates and click Continue.
  7. Click Import.
  8. Specify USER_INSTALL_ROOT/etc/appServer.arm for the Import file field value and click Continue.
  9. Specify appServer for the CA certificate label field value and click Continue.

Step 3: Grant access to the key files.

It is very important to protect your key files from unauthorized access. Set the following protections by using the i5/OS Change Authority (CHGAUT) command:

  • appServerKeys.jks

    PROFILE ACCESS
    *PUBLIC *EXCLUDE
    QEJBSVR *R

  • plugin-key.kdb

    PROFILE ACCESS
    *PUBLIC *EXCLUDE
    QTMHHTTP *RX

  • All other files you created in the USER_INSTALL_ROOT/etc directory should have *EXCLUDE authority set for *PUBLIC.

Note: QTMHHTTP is the default user profile for the IBM HTTP Server for i5/OS. If your Web server runs under another profile, grant that profile (rather than QTMHHTTP) *RX authority for plugin-key.kdb.

For example, to grant read and execute (*RX) authority for plugin-key.kdb to the QTMHHTTP user profile, run the Change Authority (CHGAUT) command:

  CHGAUT OBJ('/QIBM/UserData/WebASE51/ASE/myInstance/etc/plugin-key.kdb') USER(QTMHHTTP) DTAAUT(*RX)
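Taken together, the authorities listed above can be set with a short sequence of CHGAUT commands. This is a sketch; myInstance stands in for your instance name, and the paths assume the default UserData layout shown earlier.

```shell
# Lock down the key files (myInstance is a placeholder for your instance name)
CHGAUT OBJ('/QIBM/UserData/WebASE51/ASE/myInstance/etc/appServerKeys.jks') USER(*PUBLIC) DTAAUT(*EXCLUDE)
CHGAUT OBJ('/QIBM/UserData/WebASE51/ASE/myInstance/etc/appServerKeys.jks') USER(QEJBSVR) DTAAUT(*R)
CHGAUT OBJ('/QIBM/UserData/WebASE51/ASE/myInstance/etc/plugin-key.kdb') USER(*PUBLIC) DTAAUT(*EXCLUDE)
CHGAUT OBJ('/QIBM/UserData/WebASE51/ASE/myInstance/etc/plugin-key.kdb') USER(QTMHHTTP) DTAAUT(*RX)
```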

Step 4: (Optional) Configure an alias for the SSL port

If you have not already configured an alias for your Web server's SSL port in your WebSphere virtual host, do so now.

Step 5: Configure HTTPS transport for the Web container

For more information, see Configure HTTPS transport for your application server's Web container.

Manual update of the plug-in configuration file is required if you are using a key file other than the one that is provided with the product to configure SSL for the Web server plug-in. Before updates are applied, your regenerated plug-in configuration file should contain an entry that is similar to the following:

<Transport Hostname="MYISERIES" Port="10175" Protocol="https">
  <Property name="keyring" value="/QIBM/UserData/WebASE51/ASE/myinst/etc/plugin-key.kdb"/>
  <Property name="stashfile" value="/QIBM/UserData/WebASE51/ASE/myinst/etc/plugin-key.sth"/>
</Transport>

When you use your own key file, you must manually update your plug-in configuration file with the name of your key file and remove the stashfile property definition. For example:

<Transport Hostname="MYISERIES" Port="10175" Protocol="https">
  <Property name="keyring" value="/QIBM/UserData/WebASE51/ASE/myinst/etc/myplugin-key.kdb"/>
</Transport>

Note: Configuring the WebSphere Web plug-in for SSL can require manual updates to the plug-in configuration file. Manual changes can be lost when the plug-in configuration file is regenerated. If you have manually changed the plug-in configuration file, check the file to determine whether your changes have been lost, and reapply them if necessary.
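A quick way to check for lost edits is to grep the regenerated file for your custom key file name. The sketch below fakes a freshly regenerated file (using the entry shown earlier in this article) and then tests for the custom keyring:

```shell
# Recreate the entry a regeneration would produce (sample file for illustration only)
cat > plugin-cfg-check.xml <<'EOF'
<Transport Hostname="MYISERIES" Port="10175" Protocol="https">
  <Property name="keyring" value="/QIBM/UserData/WebASE51/ASE/myinst/etc/plugin-key.kdb"/>
  <Property name="stashfile" value="/QIBM/UserData/WebASE51/ASE/myinst/etc/plugin-key.sth"/>
</Transport>
EOF

# If the custom key file name is gone, the manual edits were overwritten
if grep -q 'myplugin-key.kdb' plugin-cfg-check.xml; then
  echo "custom keyring present"
else
  echo "manual edits lost - reapply them"
fi
```

Run the same grep against your real plugin-cfg.xml after every regeneration.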

The configuration is complete.

As an alternative, you can implement an even more restrictive security policy by configuring the plug-in to use a self-signed certificate for authenticating to the application server's Web container. Assuming you have successfully completed all of the steps in the above task, follow these steps to implement this more restrictive policy:

  1. Use iKeyman to create a keystore.

  2. Create a self-signed certificate in the keystore.

  3. Export the self-signed certificate (with the private key) from the keystore.

  4. Extract the self-signed certificate (also known as a signer certificate, since it does not contain the private key) from the keystore.

  5. Again using iKeyman, add the extracted signer certificate to the HTTPS transport's trust store (appServerKeys.jks in the above example).

  6. Remove all other signer certificates from the HTTPS transport's trust store.

  7. Using DCM, import the self-signed certificate (with the private key) into the plug-in's key store (plugin-key.kdb). Record the label you use when importing the certificate.

    Note: DCM treats self-signed certificates as signer certificates and adds the certificate to the list of signer certificates, even though the certificate contains a private key.

  8. Restart the application server.

  9. Regenerate the Web plug-in configuration file.

  10. Specify the certificate the plug-in is to use for authenticating to the Web container by manually adding the certLabel property to the HTTPS transport in the Web plug-in configuration file (USER_INSTALL_ROOT/config/cell/plugin-cfg.xml). Set the certLabel property value to the label you used when importing the self-signed certificate into the plug-in's key store. For example:

    <Transport Hostname="MYISERIES" Port="10175" Protocol="https">
      <Property name="keyring" value="/QIBM/UserData/WebASE51/ASE/myinst/etc/plugin-key.kdb"/>
      <Property name="certLabel" value="selfsigned"/>
    </Transport>
  11. Restart the Web server.

WebSphere Application Server Configurables for managing HTTP Session Cookie Vulnerability

IBM WebSphere Application Server provides configurables to progressively secure the session cookie information passed between the application server and clients.

The configurables are listed below.
1) httpOnlyCookies - PK98436
The WebContainer code was modified to add the HTTPOnly attribute when generating a session cookie if the following WebContainer custom property is set.

Note: This feature is not available with Fixpacks earlier than 6.1.0.31 or 7.0.0.9

Property name:
com.ibm.ws.webcontainer.httpOnlyCookies

The HttpOnly attribute prevents client-side scripts from capturing or manipulating session cookie information.

2) Security integration - Session Manager Option
Specifies that when security integration is enabled, the session management facility associates the identity of users with their HTTP sessions.

This ties session cookie information to the userid for which the session was created.

3) Restrict cookies to HTTPS sessions - Session Manager Option
Specifies that the session cookies include the secure field. Enabling the feature restricts the exchange of cookies to HTTPS sessions only.

The check box is available through the WebSphere Admin Console > Session management > Enable Cookies link. It requires the use of the SSL protocol.

4) Enable SSL ID Tracking - Session Manager Option
Specifies that session tracking uses Secure Sockets Layer (SSL) information as a session ID. The sessionID cannot be captured from the browser. Requires use of SSL protocol.
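For illustration (this is not actual WebSphere output), a session cookie with options 1 and 3 enabled carries the HttpOnly and Secure attributes in the Set-Cookie response header. The cookie value below is made up:

```shell
# Hypothetical session cookie header; the JSESSIONID value is fabricated.
# Secure  = only sent over HTTPS; HttpOnly = not readable by scripts.
COOKIE='Set-Cookie: JSESSIONID=0000A1B2C3D4E5F6; Path=/; Secure; HttpOnly'
echo "$COOKIE"
```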

Tuesday, July 6, 2010

Change time zone in AIX

Stop xntp service if your server is an ntp client.

# stopsrc -s xntpd
Change the time zone to IST (UTC+05:30) using the command

# chtz IST-5:30
Log out and log back in. Then nullify your drift file after taking a backup

# > /etc/ntp.drift

Set correct time using

# smitty date
Restart xntpd

# startsrc -s xntpd

AIX commands

Displaying the top 10 CPU-consuming processes

# ps aux | head -1; ps aux | sort -rn +2 | head -10
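A side note on these pipelines: the `+2` form is historical sort syntax meaning "skip two fields" (so the sort key is the third field, %CPU in `ps aux` output). On systems where `+N` is not accepted, `-k` does the same job. A small sketch on made-up sample data:

```shell
# sort -rn +2 (skip 2 fields) is equivalent to sort -rn -k3 (key starts at field 3).
# Sample rows stand in for ps output: user, pid, %cpu.
printf 'u1 10 5.0\nu2 20 9.0\nu3 30 1.0\n' | sort -rn -k3
```

The highest %CPU row (`u2 20 9.0`) sorts to the top, just as the `+2` form does on AIX.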

Displaying number of processors in the system

# lsdev -Cc processor

Displaying the top 10 memory-consuming processes

# ps aux | head -1 ; ps aux | sort -rn +3 | head

Displaying the top 10 memory-consuming processes using SZ

# ps -ealf | head -1 ; ps -ealf | sort -rn +9 | head

Displaying the processes in order of being penalized

# ps -eakl | head -1 ; ps -eakl | sort -rn +5

Displaying the processes in order of priority

# ps -eakl | sort -n +6 | head

Displaying the processes in order of nice value

# ps -eakl | sort -n +7

Displaying the processes in order of time

# ps vx | head -1 ; ps vx | grep -v PID | sort -rn +3 | head -10

Displaying the processes in order of real memory use

# ps vx | head -1 ; ps vx | grep -v PID | sort -rn +6 | head -10

Displaying the processes in order of I/O

# ps vx | head -1 ; ps vx | grep -v PID | sort -rn +4 | head -10

Displaying WLM classes

# ps -a -o pid,user,class,pcpu,pmem,args

Determining the PID of wait processes

# ps vg | head -1 ; ps vg | grep -w wait

Wait processes bound to CPUs

# ps -mo THREAD -p 516,774,1032,1290

AIX: Unlock a user account

Procedure to change the password on a server

# passwd

Note: The account needs to be reset if the following message is received when trying to log in:

3004-303 There have been too many unsuccessful login attempts; please see
the system administrator.

Procedure to reset the account:

# chsec -f /etc/security/lastlog -a \
"unsuccessful_login_count=0" -s username

# chuser "account_locked=false" username

(where username is the account being unlocked)


Alternatively, edit this file:
# vi /etc/security/lastlog

and reset this field to zero: unsuccessful_login_count = 0
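To see the counter before resetting it, you can pull the field out of the stanza with awk. The stanza below is a fabricated sample, not a real lastlog:

```shell
# Sample /etc/security/lastlog stanza (made up for illustration)
cat > lastlog.sample <<'EOF'
jdoe:
        time_last_login = 1278412800
        unsuccessful_login_count = 7
EOF

# Print the current count of failed logins for the user
awk -F'= ' '/unsuccessful_login_count/ {print $2}' lastlog.sample
```

A count at or above the loginretries limit is what triggers the 3004-303 message.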

Tuesday, June 8, 2010

To add a jvm to Windows Service

Today one of my colleagues was trying to add a JVM as a Windows service.
This is the syntax for the command -
WASService.exe -add service_name
-serverName server_name
-profilePath server_profile_directory
[-wasHome app_server_root]
[-configRoot configuration_repository_directory]
[-startArgs additional_start_arguments]
[-stopArgs additional_stop_arguments]
[-userid user_id -password password]
[-logFile service_log_file]
[-logRoot server_log_directory]
[-restart true | false]
[-startType automatic | manual | disabled]

This guy was struggling with the parameters. He runs the command, it succeeds, and he finds that specific JVM in the services. But when he starts it, it simply says *started*; upon right-clicking on the service, he sees only the Start option, with the rest grayed out.

So, what's the problem?
Go through the command options properly -
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/rins_wasservice.html
He was trying to add the JVM to Windows services like this -
WASService.exe -add blabla -logFile path/to/a/location/server.log -logRoot path/to/a/location/

yata yata yata

The problem is this: you should pass the SERVER LOG FILE as -logFile and the SERVER LOG LOCATION as -logRoot, not some other location.
The WASService command looks for a file named server_name.pid in the log root to determine if the server is running.

So, if you want to add a JVM named server1 to Windows services, you need to pass server1's log location.
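For example, a working invocation might look like this. All paths here are hypothetical; substitute your own profile path, and note that -logRoot must be the directory where server1 writes its logs, since that is where server_name.pid is looked for:

```shell
REM Sketch only - profile path and log locations are example values
WASService.exe -add "server1" -serverName server1 ^
  -profilePath "C:\IBM\WebSphere\AppServer\profiles\AppSrv01" ^
  -logFile "C:\IBM\WebSphere\AppServer\profiles\AppSrv01\logs\server1\startServer.log" ^
  -logRoot "C:\IBM\WebSphere\AppServer\profiles\AppSrv01\logs\server1" ^
  -restart true -startType manual
```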

Have Fun

Enable Trace in Plugin-cfg.xml

The Web server plug-in writes a log; by default it is named http_plugin.log and placed under PLUGIN_HOME/logs/.
The plug-in writes error messages into this log. The element that controls this is <Log> in plugin-cfg.xml. For example:

<Log LogLevel="Error" Name="/usr/IBM/WebSphere/Plugins/logs/http_plugin.log"/>

According to the above line, all Error messages will be written into http_plugin.log.

How do you enable trace in plugin-cfg.xml? If that is the question, do it like this -

<Log LogLevel="Trace" Name="/usr/IBM/WebSphere/Plugins/logs/http_plugin.log"/>

From the InfoCenter -
Plug-in Problem Determination Steps
The plug-in provides very readable tracing which can be beneficial in helping to figure out the problem. By setting the LogLevel attribute in the config/plugin-cfg.xml file to Trace, you can follow the request processing to see what is going wrong.
Note: If you are using a Veritas File System with large file support enabled, file sizes up to two terabytes are allowed. In this case, if you set the LogLevel attribute in the plugin-cfg.xml file to LogLevel=Trace, then the http_plugin.log file might grow quickly and consume all available space on your file system. Therefore, you should set the value of the LogLevel attribute to ERROR or DEBUG to prevent high CPU utilization.
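If you prefer not to edit the file by hand, a sed one-liner can flip the level. This is a sketch on a sample line standing in for the real file; always back up your actual plugin-cfg.xml first:

```shell
# Sample stand-in for the <Log> line in plugin-cfg.xml
printf '<Log LogLevel="Error" Name="/usr/IBM/WebSphere/Plugins/logs/http_plugin.log"/>\n' > log-line.xml

# Switch Error -> Trace (write to a new file so the original is untouched)
sed 's/LogLevel="Error"/LogLevel="Trace"/' log-line.xml > log-line-trace.xml
cat log-line-trace.xml
```

Run the same substitution in reverse to drop back from Trace once you have the data you need.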
At a high level, the plug-in handles a request in these steps.
The plug-in gets a request.
The plug-in checks the routes defined in the plugin-cfg.xml file.
It finds the server group.
It finds the server.
It picks the transport protocol, HTTP or HTTPS.
It sends the request.
It reads the response.
It writes it back to the client.


Here is the URL for Web server plug-in troubleshooting tips

AIX Command Tips

Displaying top CPU-consuming processes:
#ps aux | head -1; ps aux | sort -rn +2 | head -10
Displaying top 10 memory-consuming processes:
#ps aux | head -1; ps aux | sort -rn +3 | head
Displaying process in order of being penalized:
#ps -eakl | head -1; ps -eakl | sort -rn +5
Displaying process in order of priority:
#ps -eakl | sort -n +6 | head
Displaying process in order of nice value
#ps -eakl | sort -n +7
Displaying the process in order of time
#ps vx | head -1;ps vx | grep -v PID | sort -rn +3 | head -10
Displaying the process in order of real memory use
#ps vx | head -1; ps vx | grep -v PID | sort -rn +6 | head -10
Displaying the process in order of I/O
#ps vx | head -1; ps vx | grep -v PID | sort -rn +4 | head -10
Displaying WLM classes
#ps -a -o pid,user,class,pcpu,pmem,args
Determining the process ID of wait processes:
#ps vg | head -1; ps vg | grep -w wait
Wait processes bound to CPUs:
#ps -mo THREAD -p
CPU usage with priority levels:
#topas -P

#svmon -Put 10
This gives the memory mapping for the top ten memory-consuming processes.

#top


Remember, some commands need you to be root, so you su to root.
Two important things here -
1. From your profile, if you say
$ su root
it takes you to root with your current shell. That means that, though you are root, you still carry your .profile and your environment variables.
2. If you want to have root's environment variables, use
su - root
or, after getting into root with su root, source root's profile:
. ./.profile
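The difference is just environment inheritance. You can see the same effect without root by comparing an inherited environment with a cleaned one. This is an illustration only; MYVAR is a throwaway variable:

```shell
# Inherited environment (like 'su root'): the caller's variable is visible
a=$(MYVAR=from_caller /bin/sh -c 'echo ${MYVAR:-unset}')

# Clean environment (like 'su - root' login shell): the variable is gone
b=$(MYVAR=from_caller env -i /bin/sh -c 'echo ${MYVAR:-unset}')

echo "inherited=$a clean=$b"
```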

How can we configure Remote Plugin?

This is the procedure
Machine A: WAS
Machine B: IHS
Thumb rule: Install the plug-ins on the IHS machine and propagate them to WAS.
Procedure
Machine A : Install WAS.
Machine B: Install IHS
Machine B: Install the plug-ins. In the installation process, you have to select remote WAS and a name for your Web server configuration, say webserver1. After installation, in IHS_INST/conf/httpd.conf, check for the IBM module entry and the plug-in installation paths.
Go to the plug-in installation path/bin and check for configurewebserver1.sh/bat.
Now,
Copy that file, configurewebserver1.sh/bat, to the WAS box, that is, Machine A. The file contains this info:
./wsadmin.sh -f configureWebserverDefinition.jacl webserver1 IHS '/software/IBM/IHS' '/software/IBM/IHS/conf/httpd.conf' 7700 MAP_ALL '/software/IBM/Plugins' unmanaged webserver1 hostname solaris
(This is an example; 7700 is the port number.)
If you have already enabled global security on WAS, you need to add -username adminusername and -password adminpassword at the end of the above command.

Then run the script, which creates a Web server definition in the application server.

You need to configure WAS to remotely administer the Web server.

Security.xml file is corrupted

If the security.xml file is corrupted, how will you restore it?
First, what is file corruption?
Corrupted files are files that suddenly become inoperable or unusable. There are several reasons why a file may become corrupted. In some cases, it is possible to recover and fix the corrupted file, while at other times it may be necessary to delete the file and replace it with an earlier saved version.

What are the chances of security.xml becoming corrupted? There are chances for any config file to become corrupted.

Things to understand:-
1. How to avoid this?
2. What to do when this happens?

1. How to avoid this?
When you plan to edit security.xml or any configuration file, it is better to take a copy as a backup or to run the backupConfig script. For a copy backup, cp the file as security_bak.xml, then make changes to security.xml.

2. What to do when this happens?
Say, on Tuesday, Feb 5th, you made changes to your security and it got fat-fingered or corrupted. Go to your system admin and revert it back to the last working copy.
I would do it like this: I would talk to my system admin and ask him to load security.xml from last night's backup. At our office, we have nightly backups and weekly backups, and we retain a month's historical backups.

OR - If you know your security model completely, you can manually go to the security.xml file, set security to false, save, and recycle your server. Setting security to false means no security. Now, set up your security again.

Again, when will you modify security.xml? This is not an everyday task. You will edit your security at the time of setting up a new installation, when you have a change in LDAP info, or when there is a need to add a new user or group. So, it's always a good practice to take a security.xml backup before you modify it.

There is an even better way to do this, especially in production environments: let your versioning system take care of it. Meaning, check your configuration into a version control system. If you make any change, it can be tracked.

WAS Migration

Migration
This term has many definitions and a broad scope. In this document, the meaning of migration is limited to the actions associated with moving Java™ 2 Enterprise Edition (J2EE™) applications (EARs) and Application Server configuration data (such as resources and security settings) from a previous version of Application Server to V6.

WASPreUpgrade (tool)
Refers to the first step of the two-step migration process. The tool associated with this step will extract information from the previous version of Application Server and store it in a backup directory. This tool can be run by itself from the command line or as part of the migration wizard.

WASPostUpgrade (tool)
Refers to the second step of the two-step migration process. The tool associated with this step will take information from a directory created by the WASPreUpgrade tool and import it into a V6 profile. This tool can be run by itself from the command line or as part of the migration wizard.

Backup directory
Refers to a directory structure created by the WASPreUpgrade tool that contains all the information necessary for migration from the previous version of Application Server.

Migration wizard
Refers to the graphical user interface (GUI) that interactively performs the migration. This GUI tool performs the WASPreUpgrade and WASPostUpgrade steps.

FirstSteps (tool)
Tool provided in V6 to simplify and organize many actions that customers may wish to perform with a newly installed system. It can be found in the firststeps directory under each profile and can be used to launch the migration wizard.

Profiles
This concept expands on the idea of "instances" in V5. It refers to the collection of all the configuration data for an Application Server in V6. Application Server V6 provides for multiple profiles with only one install of the binaries. A single profile is required as the destination for the data being migrated from a previous version. (See Installing Application Server V6.)

Cell
Refers to the collection of one or more nodes controlled by a single deployment manager.

Federate or Federated
Refers to the action of adding a node to a cell; also refers to a node that is part of a cell. This term has been expanded to also refer to a node in a multi-node V4 domain.

Deployment manager profile (dmgr profile)
This profile acts as the deployment manager, and is the destination for the migration of the V5 deployment manager, and as a new deployment manager for V4 migrations. There can be only one deployment manager profile for each cell.

Standalone or Application Server profile
Refers to a profile that is analogous to a single node install of Application Server. This type of profile is the destination for the migrations of a node either in a cell or not in a cell.

Clusters
This term replaces the idea of ServerGroups from V4. Clusters are sets of servers that are used for distributing workload within a cell.


A quick guide for migrating to IBM WebSphere Application Server V6

IKEYMAN

ikeyman is a UI tool that comes with IHS/WAS with which one can create certificates, extract them, import them, export them, create self-signed certificates, and so on.

When to use ikeyman?
When your certificates expire, you need new certificates. You use ikeyman to import the new certificates.
When you want to create self-signed certificates, you use ikeyman.
When you have to establish trust between different clients and your server, you use ikeyman.

Here is a technote on Creating Custom Secure Socket Layer (SSL) Key Files using a CA Certificate

Here is the ikeyman doc.


Install SSL Certificate using IBM ikeyman

Performance boost through disabling file system caching

In a recent engagement to troubleshoot lousy performance, colleagues of mine were able to "fix it".

Usually it is a good thing, when an operating system (here: kernel and file system code) caches data in buffers at the file system level and flushes it out to disk when appropriate (and in the right chunks) to minimize the amount of physical I/O operations.

When placing database files on such filesystems however, the filesystem's caching algorithms can be extremely counterproductive.

In the case here, Solaris, the Veritas file system, and Oracle were involved. Remounting the file system with the parameter

mincache=direct

reduced the time for SQL inserts into the SIB tables from up to 11 seconds down to 0.03 seconds.
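On Solaris with VxFS, the remount might look like the sketch below. The device and mount point are placeholders, not values from the engagement described here:

```shell
# Remount the database filesystem with direct I/O semantics (VxFS);
# /dev/vx/dsk/datadg/oravol and /oradata are example names.
mount -F vxfs -o remount,mincache=direct /dev/vx/dsk/datadg/oravol /oradata
```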

When dealing with the combination of AIX, JFS2, and DB2, the DB2 command

db2 alter tablespace tablespace_name no file system caching

(where tablespace_name is the table space to change) has more or less the same effect. Data caching on the filesystem level is disabled and the data is persisted as fast as possible. According to the AIX documentation, the same effect can be reached on the file system level by using "mount ... -o dio". Read performance for non-DB files might suffer, because caching is reduced. DB data will still be cached in the DB bufferpools.

HTTP tuning

I often have to tune the performance of the HTTP side of the conversation, and here is a link to a decent introduction to the basics:

Best Practices for Speeding Up Your Web Site

I also like the fact that they used IBM's Page Detailer (http://alphaworks.ibm.com/tech/pagedetailer), an awesome application that my colleague, performance whiz Phil Theiller, showed me. It is really useful for getting a good visual feel for the HTTP side of things.

Friday, April 30, 2010

Trace port in AIX

AIX Command

1. netstat -Aan | grep <port>
- This shows if the specified port is being used. The hex number in the first column is the address of the protocol control block (PCB).

2. rmsock <PCB address> tcpcb
- This shows the process that is holding the socket. Note that this command must be run as root.

AIX Example

Let's set SVCENAME to 30542, so that the listener will use this port. Then, use the commands above to check if the port is indeed being used by DB2 LUW.

$ db2 update dbm cfg using svcename 30542
$ db2start
$ netstat -Aan | grep 30542
f10000f303321b58 tcp4 0 0 *.30542 *.* LISTEN

The netstat command above shows that port 30542 is being used for listening. To confirm that it is DB2 LUW that's using the port, run rmsock as root, as follows.

$ rmsock f10000f303321b58 tcpcb
The socket 0x3321800 is being held by proccess 692476 (db2sysc).

This shows that it's db2sysc process that's using the port, and its PID is 692476.

Note that rmsock, despite its name, does not remove the socket if the socket is being used by any process. Instead of removing the socket, it just reports the process holding the socket. Also note that the second argument of rmsock is the protocol; it is tcpcb in the example, to indicate that the protocol is TCP.
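As a portable illustration of the lookup step, the PCB address can be pulled out of `netstat -Aan` output with a scrap of awk (the sample line is the one from the transcript above; rmsock itself still needs AIX and root):

```shell
# Extract the first-column PCB address of the LISTEN line for a given port.
pcb_for_port() {
  awk -v port="$1" '$0 ~ ("\\." port " ") && /LISTEN/ { print $1 }'
}

# Sample netstat -Aan line from above:
echo 'f10000f303321b58 tcp4 0 0 *.30542 *.* LISTEN' | pcb_for_port 30542
# → f10000f303321b58   (feed this to: rmsock f10000f303321b58 tcpcb)
```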

Windows commands for port

Windows Command

netstat -an |find /i "listening"
netstat -an |find /i "established"
netstat -ao |find /i "listening"

1. netstat -aon | findstr "<port>"

This shows if the specified port is being used. The number in the last column is the process ID (PID) of the process holding the socket. Once the PID is determined, you can refer to "Windows Task Manager" to determine which application corresponds to the PID.

Windows Example

C:\>netstat -aon | findstr "50000"
TCP 0.0.0.0:50000 0.0.0.0:0 LISTENING 2564

C:\>pslist 2564

pslist v1.28 - Sysinternals PsList
Copyright (C) 2000-2004 Mark Russinovich
Sysinternals

Process information for MACHINENAME:

Name Pid Pri Thd Hnd Priv CPU Time Elapsed Time
db2syscs 2564 8 15 366 30912 0:00:02.859 2:12:08.564

-------------------------

To find and trace open ports in unix

Listing all the pids:
---------------------
/usr/bin/ps -ef | sed 1d | awk '{print $2}'


Mapping the files to ports using the PID:
-------------
/usr/proc/bin/pfiles <pid> 2>/dev/null | /usr/xpg4/bin/grep <port>
or
/usr/bin/ps -o pid -o args -p <pid> | sed 1d


Mapping the sockname to port using the port number:
----------------------
for i in `ps -e|awk '{print $1}'`; do echo $i; pfiles $i 2>/dev/null | grep 'port: 8080'; done
or
pfiles -F /proc/* | nawk '/^[0-9]+/ { proc=$2} ; /[s]ockname: AF_INET/ { print proc "\n " $0 }'
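The nawk one-liner above can be wrapped as a small function for readability; plain awk works too, so the parsing can be tried on captured output anywhere:

```shell
# Reads `pfiles -F /proc/*` style output on stdin: process header lines
# start with the pid, and socket lines contain "sockname: AF_INET".
# Prints the owning command followed by each AF_INET sockname line.
map_inet_sockets() {
  awk '/^[0-9]+/           { proc = $2 }
       /sockname: AF_INET/ { print proc; print "   " $0 }'
}

# On Solaris you would feed it the real thing:
#   pfiles -F /proc/* 2>/dev/null | map_inet_sockets
```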


There were two explanations why "lsof" did not show what was expected:

1) One thing that might prevent lsof from printing everything is if the ports are controlled by inetd
or some such (i.e. there is nothing actively listening on them until you try talking to them).

Also, try telneting to the port and then run lsof while the telnet session is connected.

2) On Solaris 10, using "lsof -i" to show the mapping of processes to TCP ports incorrectly shows all
processes that have a socket open as using port 65535, for example:

sshd 8005 root 8u IPv4 0x60007ebdac0 0t0 TCP *:65535
(LISTEN)
sendmail 1116 root 5u IPv4 0x60007ecce00 0t0 TCP *:65535
(LISTEN)

This is a known bug in lsof that cannot be fixed because of differences between Solaris 10
and previous versions. So the useful "lsof -i :<port>" is now not useful.

Thursday, April 8, 2010

EJB Container tuning

If you use applications that affect the size of the EJB Container Cache, it is possible that the performance of your applications can be impacted by an incorrect size setting. Monitoring Tivoli Performance Viewer (TPV) is a great way to diagnose whether the EJB Container Cache size setting is tuned correctly for your application. If the application has filled the cache, causing evictions to occur, TPV will show a very high rate of ejbStores() being called and probably a lower than expected CPU utilization on the application server machine.

Managed and unmanaged Web servers

Unmanaged web server
Unmanaged web servers reside on a system without a node agent. This is the only option in a standalone server environment and is a common option for Web servers installed outside a firewall. With this topology, each time the plug-in configuration file is generated it must be copied from the machine where WebSphere Application Server is installed to the machine where the Web server is running.

If the Web server is defined to an unmanaged node, you can do the following:

1. Check the status of the Web server.

2. Generate a plug-in configuration file for that Web server.

3. If the Web server is an IBM HTTP Server and the IHS Administration server is
installed and properly configured, you can also:

Display the IBM HTTP Server Error log (error.log) and Access log (access.log) files.

Start and stop the server.

Display and edit the IBM HTTP Server configuration file (httpd.conf).

Propagate the plug-in configuration file after it is generated.

You cannot propagate an updated plug-in configuration file to a non-IHS Web server that is defined to an unmanaged node. You must install an updated plug-in configuration file manually on a Web server that is defined to an unmanaged node.
-------------------------------------------------------------------------------------
Managed Web Server

In a distributed server environment, you can define multiple Web servers. These
Web servers can be defined on managed or unmanaged nodes. A managed node
has a node agent. If the Web server is defined to a managed node, you can do
the following:

1. Check the status of the Web server.

2. Generate a plug-in configuration file for that Web server.

3. Propagate the plug-in configuration file after it is generated.

If the Web server is an IBM HTTP Server (IHS) and the IHS Administration
server is installed and properly configured, you can also:

Display the IBM HTTP Server Error log (error.log) and Access log
(access.log) files.

Start and stop the server.

Display and edit the IBM HTTP Server configuration file (httpd.conf).

Tuning data source - Connection pool tuning

You can tune the Connection pool from WAS Admin Console

Maximum Connections: Specifies the maximum number of physical connections that can be created in this pool. These are the physical connections to the backend datastore. When this number is reached, no new physical connections are created; requestors must wait until a physical connection that is currently in use is returned to the pool. For optimal performance, set the value for the connection pool lower than the value for the Web container thread pool size. Lower settings, such as 10 to 30 connections, might perform better than higher settings, such as 100.

Minimum Connections: Specifies the minimum number of physical connections to maintain. Until this number is exceeded, the pool maintenance thread does not discard physical connections. If you set this property for a higher number of connections than your application ultimately uses at run time, you do not waste application resources. WebSphere Application Server does not create additional connections to achieve your minimum setting. Of course, if your application requires more connections than the value you set for this property, application performance diminishes as connection requests wait for fulfillment.

Connection Timeout : Specifies the interval, in seconds, after which a connection request times out and a ConnectionWaitTimeoutException is thrown.

This value indicates the number of seconds a request for a connection waits when there are no connections available in the free pool and no new connections can be created, usually because the maximum value of connections in the particular connection pool has been reached. For example, if Connection Timeout is set to 300, and the maximum number of connections are all in use, the pool manager waits for 300 seconds for a physical connection to become available. If a physical connection is not available within this time, the pool manager initiates a ConnectionWaitTimeout exception. It usually does not make sense to retry the getConnection() method; if a longer wait time is required you should increase the Connection Timeout setting value. If a ConnectionWaitTimeout exception is caught by the application, the administrator should review the expected connection pool usage of the application and tune the connection pool and database accordingly.
If the Connection Timeout is set to 0, the pool manager waits as long as necessary until a connection becomes available. This happens when the application completes a transaction and returns a connection to the pool, or when the number of connections falls below the value of Maximum Connections, allowing a new physical connection to be created.
If Maximum Connections is set to 0, which enables an infinite number of physical connections, then the Connection Timeout value is ignored.

Reap Time: Specifies the interval, in seconds, between runs of the pool maintenance thread.

For example, if Reap Time is set to 60, the pool maintenance thread runs every 60 seconds. The Reap Time interval affects the accuracy of the Unused Timeout and Aged Timeout settings. The smaller the interval, the greater the accuracy. If the pool maintenance thread is enabled, set the Reap Time value less than the values of Unused Timeout and Aged Timeout. When the pool maintenance thread runs, it discards any connections remaining unused for longer than the time value specified in Unused Timeout, until it reaches the number of connections specified in Minimum Connections. The pool maintenance thread also discards any connections that remain active longer than the time value specified in Aged Timeout.

The Reap Time interval also affects performance. Smaller intervals mean that the pool maintenance thread runs more often and degrades performance.

To disable the pool maintenance thread set Reap Time to 0, or set both Unused Timeout and Aged Timeout to 0. The recommended way to disable the pool maintenance thread is to set Reap Time to 0, in which case Unused Timeout and Aged Timeout are ignored. However, if Unused Timeout and Aged Timeout are set to 0, the pool maintenance thread runs, but only physical connections which timeout due to non-zero timeout values are discarded.


Unused Timeout: Specifies the interval in seconds after which an unused or idle connection is discarded.

Set the Unused Timeout value higher than the Reap Time value for optimal performance. Unused physical connections are only discarded if the current number of connections exceeds the Minimum Connections setting. For example, if the Unused Timeout value is set to 120, and the pool maintenance thread is enabled (Reap Time is not 0), any physical connection that remains unused for two minutes is discarded.


Aged Timeout: Specifies the interval in seconds before a physical connection is discarded.

Setting Aged Timeout to 0 supports active physical connections remaining in the pool indefinitely. Set the Aged Timeout value higher than the Reap Time value for optimal performance. For example, if the Aged Timeout value is set to 1200, and the Reap Time value is not 0, any physical connection that remains in existence for 1200 seconds (20 minutes) is discarded from the pool. The only exception is if the connection is involved in a transaction when the aged timeout is reached; if it is, the connection is closed immediately after the transaction completes.


Purge Policy: Specifies how to purge connections when a stale connection or fatal connection error is detected.



EntirePool: All connections in the pool are marked stale. Any connection not in use is immediately closed. A connection in use is closed and issues a stale connection exception during the next operation on that connection. Subsequent getConnection() requests from the application result in new connections to the database opening. When using this purge policy, there is a slight possibility that some connections in the pool are closed unnecessarily when they are not stale. However, this is a rare occurrence. In most cases, a purge policy of EntirePool is the best choice.

FailingConnectionOnly: Only the connection that caused the stale connection exception is closed. Although this setting eliminates the possibility that valid connections are closed unnecessarily, it makes recovery from an application perspective more complicated. Because only the currently failing connection is closed, there is a good possibility that the next getConnection() request from the application can return a connection from the pool that is also stale, resulting in more stale connection exceptions.
The connection pretest function attempts to insulate an application from pooled connections that are not valid. When a backend resource, such as a database, goes down, pooled connections that are not valid might exist in the free pool. This is especially true when the purge policy is failingConnectionOnly; in this case, the failing connection is removed from the pool. Depending on the failure, the remaining connections in the pool might not be valid.
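As a sketch of where these knobs live programmatically, the settings above map onto attributes of the ConnectionPool configuration object in wsadmin. The Jython below is an assumption-laden example (it grabs the first ConnectionPool in the cell purely for illustration; a real script should locate the pool of a specific data source) and is written to a file for review rather than executed:

```shell
# Hypothetical wsadmin sketch; attribute names follow the ConnectionPool
# configuration type, but the "first pool in the cell" selection and all
# values are examples only.
cat > /tmp/tune_pool.py <<'EOF'
pool = AdminConfig.list('ConnectionPool').splitlines()[0]
AdminConfig.modify(pool, [['maxConnections', '30'],
                          ['minConnections', '5'],
                          ['connectionTimeout', '300'],
                          ['reapTime', '60'],
                          ['unusedTimeout', '120'],
                          ['agedTimeout', '1200'],
                          ['purgePolicy', 'EntirePool']])
AdminConfig.save()
EOF
# Then, from the dmgr profile's bin directory:
# ./wsadmin.sh -lang jython -f /tmp/tune_pool.py
```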

Memory-CPU in unix

Task I - Identifying a memory DOS and responding
In this task, you will start a memory denial of service against yourself and then add
swap on the fly to attempt to buy more time. If you were the client from the module 4
exercise, you will have to create the memory DOS script (step 16) from the module 4 lab.
1) Open 3 separate terminal windows. In the first terminal, start a vmstat with an interval of one second.
[1] # vmstat 1
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd f0 s5 s1 in sy cs us sy id
0 0 25 811888 326896 3 11 8 5 5 0 5 2 0 0 0 311 564 89 2 1 97
0 0 25 781816 264256 13 8 680 0 0 0 0 119 0 0 0 634 6097 360 1 21 78
0 1 25 781816 263288 20 0 896 0 0 0 0 135 0 0 0 742 3591 421 2 12 86

2) In the second terminal, invoke the "hog" script in the /export/home/guest directory.
[2] # cd /export/home/guest
[2] # ./hog
3) In the third terminal window, create a 128 MB swap file and add it on the fly.
[3] # mkfile 128m /export/swapfile
[3] # swap -a /export/swapfile
4) Observe the vmstat output in terminal 1. Did the "swap" column grow?
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd f0 s5 s1 in sy cs us sy id
1 1 25 1560 8584 612 6607 624 8 8 200 0 69 0 0 0 514 7979 1086 27 72 1
1 1 25 1280 7680 534 2471 4536 3656 3744 40 439 145 0 0 0 841 3537 671 14 40
0 1 25 127608 8232 415 513 4640 5648 5688 0 202 100 0 0 0 546 615 245 1 11 88
0 3 25 123104 8464 449 558 5008 5080 5080 0 0 113 0 0 0 557 974 322 2 13 85
5) Observe the hog output in terminal 2. It appears that the script continued to run
even though /tmp was full. As soon as the swap space was added, the script started writing
files into the /tmp space.
cat: write error: No space left on device <---script still running even though /tmp is full
+ let x=x+1
+ [ 1805 -eq 100000 ]
+ cat /var/sadm/install/contents
+ 1>> /tmp/file.1805
cat: write error: No space left on device <---script still running even though /tmp is full
+ let x=x+1
+ [ 1806 -eq 100000 ]
+ cat /var/sadm/install/contents <---script appears to resume writing to /tmp
+ 1>> /tmp/file.1806
+ let x=x+1
6) Stop the script by issuing a ^C in terminal 2. Clean up the /tmp directory.
[3] # cd /tmp
[3] # rm -r /tmp/file*
7) Stop the vmstat by issuing a ^C in terminal 1.
Task II - Limiting the size of /tmp in the /etc/vfstab
In this exercise, you will limit the size of the /tmp filesystem and then run a memory DOS
against yourself to see if the filesystem limit worked.
1) Observe the size of the /tmp file system with the df command.
# df -k /tmp
Filesystem kbytes used avail capacity Mounted on
swap 900360 16 900344 1% /tmp
2) Edit the /etc/vfstab and configure /tmp to have a maximum size of 128m
# vi /etc/vfstab
swap - /tmp tmpfs - yes size=128m
3) Since /tmp cannot be unmounted, you will have to reboot the workstation.
# init 6
4) After the workstation has rebooted, check the size of /tmp again.
Notice that it is much smaller than the previous size.
# df -k /tmp
Filesystem kbytes used avail capacity Mounted on
swap 131072 344 130728 1% /tmp
5) Open 3 terminal windows. In the first window, start a vmstat at 1 second
intervals.
[1] # vmstat 1
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd f0 s5 s1 in sy cs us sy id
0 0 0 856456 426696 29 122 134 0 0 0 0 19 0 0 0 363 662 157 3 8 89
0 0 0 878352 414024 0 8 0 0 0 0 0 0 0 0 0 356 171 91 0 0 100
6) In a second terminal window, start the hog script.
[2] # cd /export/home/guest
[2] # ./hog
7) Notice in terminal 2 that the hog script errors out more quickly with "No space left on
device" while in terminal 1, the vmstat reports plenty of virtual memory in the "swap"
column. Since the /tmp file system has been limited, the workstation is now protected
against a /tmp DOS. However, the /tmp filesystem is still full. Any applications
that need to write to the /tmp space will be unable to do so.
Task III - Identifying a CPU DOS and responding to it.
The following task teaches how to detect a CPU DOS and prevent future CPU DOS.
The task requires you to fork bomb your own system. This may cause the system
to stop responding. Be sure to save all of your work.
1) Open 4 terminal windows. In the first terminal window, use the sar command to monitor
the process table size. Have sar monitor every second for 1000 seconds. Notice the proc-sz
value.
[1] # sar -v 1 1000
19:40:20 proc-sz ov inod-sz ov file-sz ov lock-sz
19:40:21 63/7914 0 1955/33952 0 392/392 0 0/0
19:40:22 63/7914 0 1955/33952 0 392/392 0 0/0
19:40:23 63/7914 0 1955/33952 0 392/392 0 0/0
19:40:24 63/7914 0 1955/33952 0 392/392 0 0/0
2) In the second terminal window, use the vmstat command to monitor the run queue (r) field.
[2] # vmstat 1
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd f0 s5 s1 in sy cs us sy id
0 0 0 800712 352024 26 102 40 0 0 0 0 5 0 0 0 334 432 124 2 3 95
0 0 0 875544 420648 0 8 0 0 0 0 0 0 0 0 0 431 317 120 1 0 99
3) In the third terminal window, login via telnet to the localhost as the user guest.
[3] # telnet localhost
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SunOS 5.8
login: guest
Password:
Last login: Tue Jul 30 15:58:55 from localhost
Sun Microsystems Inc. SunOS 5.8 Generic Patch October 2001
$
4) As user guest, create a fork bomb by editing two scripts called "a" and "b". These
scripts will do nothing but call each other and execute sleep processes. They will
continue in an infinite loop until the process table fills to capacity.
[3] $ vi a
./b &
sleep 20 &
[3] $ vi b
./a &
sleep 20 &
5) Make the scripts executable.
[3] $ chmod 555 a b
6) Execute the scripts. As soon as these scripts execute, look immediately at terminal
windows 1 and 2 and notice the drastic change.
[3] $ ./a &
7) If the system is still responsive, monitor the vmstat and sar output. Also, in terminal
window 4, issue a ps -ef command.
[4] # ps -ef
<>
guest 9723 9722 0 19:51:18 ?? 0:00 -sh
guest 9767 9766 0 19:51:18 ?? 0:00 -sh
guest 8427 1 0 0:00
guest 9709 9708 0 19:51:18 ?? 0:00 -sh
guest 9800 9774 0 19:51:18 ?? 0:00 -sh
8) As the system administrator, stop the CPU DOS by killing all of the user guest's
processes.
[4] # pkill -u guest
9) Search the user guest's home directory for any files created in the last day.
[4] # find /export/home/guest -mtime -1
/export/home/guest
/export/home/guest/a
/export/home/guest/b
Task IV - Preventing CPU DOS
The purpose of this task is to configure the /etc/system file on the server to limit
the amount of processes a user can take.
1) As root on the server, open up the kernel using the mdb command in read mode. The
mdb utility is used for core dump analysis and information gathering on a live kernel.
All of the features of mdb are covered in "ST-350 - System Fault Analysis". The following
mdb command will display the current value for the maximum amount of user processes allowed
on the server.
# mdb -k
Loading modules: [ unix krtld genunix ip ufs_log nfs isp ipc random ptm logindmux ]
>maxuprc/D <-----Ask the kernel how many processes a user can own.
maxuprc:
maxuprc: 7909 <-----The kernel reports that a user can own 7909 process table slots
>max_nprocs/D <-----Ask the kernel what the total process table size can be.
max_nprocs:
max_nprocs: 7914 <-----The kernel reports that the maximum table size is 7914. This means
that a user can reserve almost the entire process table
$q <----- exit mdb
2) Since a regular user can consume the entire process table, set a kernel tuning parameter
in the /etc/system file to limit the maximum user processes.
# vi /etc/system
<>
set maxuprc=100
3) Reboot the workstation.
# init 6
4) After the workstation has rebooted, verify with mdb that the kernel tuning setting worked.
# mdb -k
Loading modules: [ unix krtld genunix ip nfs ipc ptm logindmux ]
> maxuprc/D
maxuprc:
maxuprc: 100
>
5) Open three terminal windows. In the first terminal window, use the sar command
to monitor the process table size.
[1] # sar -v 1 1000
SunOS gabriel 5.8 Generic_108528-13 sun4u 08/01/02
18:48:57 proc-sz ov inod-sz ov file-sz ov lock-sz
18:48:58 42/7914 0 1430/33952 0 264/264 0 0/0
6) In the second terminal window, use the su command to assume the identity of guest. Run
the fork bomb.
[2] # su - guest
[2] $ id
uid=1001(guest) gid=10(staff)
[2] $ ./a
7) Observe the output in terminal window #1. Did the process table continue to grow or
did it level off?
sar -v 1 1000
SunOS gabriel 5.8 Generic_108528-13 sun4u 08/01/02
18:48:57 proc-sz ov inod-sz ov file-sz ov lock-sz
18:48:58 42/7914 0 1430/33952 0 264/264 0 0/0
18:48:59 42/7914 0 1430/33952 0 264/264 0 0/0
18:49:00 42/7914 0 1430/33952 0 264/264 0 0/0
18:49:01 141/7914 0 1430/33952 0 461/461 0 0/0
18:49:02 141/7914 0 1430/33952 0 461/461 0 0/0
18:49:03 141/7914 0 1430/33952 0 461/461 0 0/0
8) The "guest" user was limited to 100 processes by tuning the kernel.

Wednesday, March 31, 2010

Disabling global security

If you are running a Deployment Manager
There are two security.xml files you need to change:
WSAS_install_root/AppServer/config/cells/cellname/security.xml
WSAS_install_root/DeploymentManager/config/cells/cellname/security.xml
Always store a copy of the security.xml file in a temporary directory before making any changes.
Open each security.xml file and search for the very first occurrence of enabled="true". This is located inside the <security> tag.
Change enabled="true" to enabled="false", then save the file.
You must restart the Deployment Manager, the node agent, and then the Application Servers, in that order.
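A hedged sketch of the edit itself: GNU sed's `0,/regexp/` range replaces only the first occurrence of enabled="true", which in a stock security.xml is the one on the <security> element. The path below is an example, and the guard keeps the snippet from doing anything if the file isn't there:

```shell
# Example path -- substitute your own cell's security.xml. Back up first!
SECXML=/opt/WebSphere/AppServer/config/cells/mycell/security.xml
if [ -f "$SECXML" ]; then
  cp "$SECXML" "$SECXML.bak"
  # GNU sed: replace only the FIRST enabled="true" in the file.
  sed '0,/enabled="true"/s//enabled="false"/' "$SECXML.bak" > "$SECXML"
fi
```

The same one-line sed edit, run against each security.xml, saves hand-editing when you are locked out of the admin console.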

Monday, March 15, 2010

A quick way of navigating to WebSphere logs

A quick way of navigating to WebSphere logs. If you are using bash as a shell, then you know about the tab key to auto-complete; however, in the WebSphere file system there are many subfolders within subfolders to get to the logs. To avoid tabbing through folders, you can go direct using a UNIX alias as a shortcut.

alias, unalias: Assign a name, or an abbreviated name that makes sense or is shorter, to a command.

Description:
alias - Lists the aliases that are currently defined.
alias "dir=ls" - Creates an alias; dir will output the same contents as the ls command.
unalias name - Removes an alias, e.g. unalias dir

Example: in my .bash_profile I have created two lines as follows:
alias dlogs='ls -ltr /apps/was/ws61/profiles/dmgr/logs'
alias glogs='cd /apps/was/ws61/profiles/dmgr/logs'

Demo: to list the deployment manager logs:
[root@websphere ~]# dlogs
total 2076
-rw-r--r-- 1 root root 508 Oct 14 16:09 AboutThisProfile.txt
-rw-r--r-- 1 root root 2561 Dec 15 12:59 iscinstall.log
-rw-r--r-- 1 root root 0 Dec 15 15:03 wsadmin.valout
-rw-r--r-- 1 root root 6635 Dec 15 15:03 wsadmin.traceout
drwxr-xr-x 2 root root 4096 Dec 15 22:22 dmgr
drwxr-xr-x 2 root root 4096 Jan 28 21:59 ffdc
-rw-r--r-- 1 root root 2097152 Jan 28 22:01 activity.log

To go to the deployment manager logs:
[root@websphere ~]# glogs
[root@websphere logs]#

WASX7023E Error creating SOAP connection to host localhost exception

Problem:

WASX7023E: Error creating "SOAP" connection to host "localhost"; exception information: com.ibm.websphere.management.exception.ConnectorNotAvailableException: com.ibm.websphere.management.exception.ConnectorNotAvailableException: Failed to get a connection with IP address associated with hostname localhost
WASX7213I: This scripting client is not connected to a server process; please refer to the log file /E50/was/inst50/profiles/e50crm_cell/e50crm_dmgr/logs/wsadmin.traceout for additional information.
WASX8011W: AdminTask object is not available.
WASX7017E: Exception received while running file "./my script.jacl"; exception information: com.ibm.ws.scripting.ScriptingException: WASX7070E: The configuration service is not available.

Solution: The Deployment Manager is not started. Start it:
/profiles/dmgr/bin/startManager.sh or startManager.bat

Discussion on umask options for Installing WebSphere

Well, depending on how paranoid you may be in your environment, and if you're a true Unix die-hard, you would probably not like the default suggested umask of 022 for WebSphere installs. Often your hosting provider will set a default umask of 077 for the OS; however, you can set a umask in the profile of the user you are installing WAS as. It is recommended that you install WebSphere with the umask 022, but you could go to 027 to stop third parties from reading your logs.

A umask is your access mode NOT'd: each bit set in the mask is removed from the permissions of newly created files. So a umask of 022 means that all new directories will have 755 (777 & ~022) as their permissions, and new files will have 644 (666 & ~022). Note: 755 means rwx for owner, r-x for group, and r-x for other.

Before installing WebSphere you can verify the umask setting by issuing the following command:
umask
To set the umask setting to 022, issue the following command:
umask 022

A umask of 022 will allow logs to be created where third parties (other) can read the logs. Some WebSphere/Unix admins consider logs source code and thus use 077 or 027. Some more security-conscious WebSphere installs may even go for 027, where access for other is not allowed until the administrator grants it upon request. This means no third party can read the WAS logs unless the file permissions are changed with chmod.
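A quick demonstration of the arithmetic (safe to run; it works in a scratch directory):

```shell
# Demo of what a umask actually does to new files and directories.
cd "$(mktemp -d)"           # scratch directory for the demo
umask 022
mkdir demo.dir && touch demo.file
ls -ld demo.dir demo.file   # drwxr-xr-x (755) and -rw-r--r-- (644)

# The more restrictive masks discussed above:
umask 027                   # new dirs 750, new files 640: "other" gets nothing
umask 077                   # new dirs 700, new files 600: owner only
```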

How to find number of JVM in Websphere

To list all the JVM processes that WebSphere is running, you can do several things.

1. ps -ef | grep java

$ ps -ef | grep java
wasadm 18445 18436 0 13:48:33 pts/9 0:00 grep java
wasadm 9959 1 0 Feb 18 ? 4:17 /java/bin/java -XX:MaxPermSize=256m -Dwas.status.socket=49743 -X
wasadm 9927 1 0 Feb 18 ? 5:10 /java/bin/java -XX:MaxPermSize=256m -Dwas.status.socket=49611 -X

2. pgrep -f -u $WASUSER $ENVPATH
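A related trick worth knowing: putting brackets around one letter of the pattern keeps the grep process itself out of the listing, because grep's own command line then contains "[j]ava", which the pattern does not match:

```shell
ps -ef | grep '[j]ava' || true   # lists the JVMs without the grep line
                                 # (|| true: quiet when no JVMs are running)

# Why it works, shown on captured text: only the real java line matches.
printf '/java/bin/java -Xms256m\ngrep [j]ava\n' | grep '[j]ava'
# → /java/bin/java -Xms256m
```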

Application files

EJB application JAR files — An EJB application JAR file contains one or more EJBs.

Web application WAR files — A WAR file contains a single Web application. Because an EAR file can contain multiple Web applications, each Web application in an EAR file must have a unique deployment context. The deployment mechanism for EAR files allows just such a specification of different contexts.

Application client JAR files — The application client JAR file contains a single, standalone Java application that's intended to run within an application client container. The application client JAR file contains a specialized deployment descriptor and is composed similarly to an EJB JAR file. The JAR file also contains the classes required to run the standalone client as well as any client libraries needed to access JAAS, JAXP, JDBC, JMS, or an EJB client.

Resource adapter RAR files — The resource adapter RAR file contains Java classes and native libraries required to implement a Java Connector Architecture (JCA) resource adapter to an enterprise information system. Resource adapters don't execute within a container; rather, they're designed to execute as a bridge between an application server and an external enterprise information system.

Each of these components is developed and packaged individually apart from the EAR file and its own deployment descriptor. A J2EE EAR file combines one or more of these components into a unified package with a custom deployment descriptor.

Packaging Roles

What is WebSphere?

WebSphere is a set of Java-based tools from IBM that allows customers to create and manage sophisticated business Web sites. The central WebSphere tool is the WebSphere Application Server (WAS), an application server that a customer can use to connect Web site users with Java applications or servlets. Servlets are Java programs that run on the server rather than on the user's computer as Java applets do. Servlets can be developed to replace traditional common gateway interface (CGI) scripts, usually written in C or Practical Extraction and Reporting Language, and run much faster because all user requests run in the same process space. In addition to Java, WebSphere supports open standard interfaces such as the Common Object Request Broker Architecture (CORBA) and Java Database Connectivity (JDBC) and is designed for use across different operating system platforms. One edition of WebSphere is offered for small-to-medium size businesses and another edition for larger businesses with a higher number of transactions.
WAS provides a servlet server that is installed as a "bolt-on" to a Web (HTTP) server. The HTTP server provides static Web pages and when it is equipped with a servlet server such as WebSphere, it can provide dynamic Web pages that are modified on the fly by data that is on your iSeries. Servlets are the Java programs that communicate directly with the servlet server and send it the formatted data to enable Web transactions and data access. The Java programs in turn can do it all or they can call high level language (HLL) programs such as those written in RPG and COBOL to fetch and/or update the necessary data.
With a servlet server such as WebSphere, the code called by the server is always based on Java. The bottom line is that if you want the IBM mainstream method for serving dynamic data to a Web browser, you need a servlet server such as WebSphere in any of its versions and packages. WebSphere is currently at version 6.0 and there are a few numbers following it such as 6.0.2.1 which better depict its version and micro release and PTF levels.
WebSphere is in many ways a Web operating system that operates under control of the system's operating system. Just like i5/OS, it is very large, complex, and sophisticated. Though you may be able to do simple things with just a bit of WebSphere knowledge, to become proficient in it you have to invest quite a bit of time. To give you an idea of the complexity of WebSphere and the things you need to know to make it work effectively in your shop, there are a number of manuals that IBM provides for you to learn and use the product. In fact, there are seven separate PDF manuals for version 6.0 that address the many aspects of hosting a WebSphere server, from installation to administration to troubleshooting.
The following list contains the purpose of each of these PDF format manuals and the number of pages in each. Considering that the manuals are two weeks behind the Web version, for me they are much easier to work with than playing the hypertext game on the Web. They are all downloadable to your PC. Additionally, they are being updated all the time so these page counts are how they look right now at the beginning of the fourth quarter, 2005. The WebSphere manuals include the following:
Installation, 66 pages
Administration, 2690 pages
Performance, 300 pages
Security, 1196 pages
Troubleshooting, 336 pages
Migration, 170 pages
Program Development, 1366 pages

That's about 6,000 pages. Though you don't need all of those pages to set up and get some simple things running with WAS Express, if those pages we