How to protect Cognos 10 app server

Problem: Protecting Cognos 10 App Server

The Cognos 10 application runs within an application server. As a result, it is vulnerable to attacks over the Internet through the ports left open for web traffic.

Here are some notes on the IBM Cognos Application Firewall (CAF).

http://publib.boulder.ibm.com/infocenter/caapps/v8r4m0/topic/com.ibm.swg.im.cognos.inst_apps.8.4.0.doc/inst_apps_i_cnfg_CAF.html

You can track firewall activity by checking the log file, which contains rejected requests only. If firewall validation fails, you can check the log file to find where the failure occurred. By default, log messages are stored in the c8_location/logs/cogserver.log file. In a gateway-only installation, the file is named caf.log. If you configure a destination for log messages, IBM Cognos Application Firewall log messages are sent to the specified destination.
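For example, to pull the rejected requests out of the log, something like the following can be used. The log excerpt below is simulated and its line format is only illustrative; check your actual cogserver.log for the real format:

```shell
# Simulated excerpt -- real entries live in c8_location/logs/cogserver.log
# (or caf.log on a gateway-only install); the line format here is illustrative.
log=$(mktemp)
cat > "$log" <<'EOF'
2011-03-01 10:00:01 CAF rejection: input validation failed for parameter 'ui'
2011-03-01 10:00:05 request served OK
2011-03-01 10:02:11 CAF rejection: XSS pattern detected in 'searchPath'
EOF

# Show only the firewall rejections, then count them
grep -i 'CAF rejection' "$log"
rejections=$(grep -ci 'CAF rejection' "$log")
echo "rejected requests: $rejections"
rm -f "$log"
```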

IBM Cognos Application Firewall also has a Secure Error feature, which gives administrators control over which groups or users can view detailed error messages. For more information, see the IBM Cognos 8 Administration and Security Guide.


Migrating a directory to a separate filesystem on AIX

Problem

You have a directory that deserves its own file system for some reason. This could be because you need to increase throughput, manage backups separately, manage quotas separately, or just to have a cleaner data architecture.

Solution

  • Create a new filesystem using mkfs
  • Mount the new filesystem temporarily to /mnt
  • Stop all processes that access the directory to move
  • Move all contents to the new filesystem using mv
  • umount /mnt
  • mount /new/filesystem /path/to/directory

This principle is pretty much the same on any Unix operating system.
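The move step is the one with the classic trap: a plain mv /path/* misses dot-files. Here is a small sketch of that part, using temporary directories as stand-ins for the old directory and the newly mounted filesystem (the mkfs/mount/umount steps themselves need root and real devices, so they are not shown):

```shell
# Stand-ins for the real paths (a real run would move into the /mnt mount point)
olddir=$(mktemp -d)   # the directory to migrate
newfs=$(mktemp -d)    # pretend this is the new filesystem mounted on /mnt

touch "$olddir/data.txt" "$olddir/.hidden"

# 'mv "$olddir"/*' would skip dotfiles; find catches everything at depth 1.
# (Stock AIX find may lack -mindepth/-maxdepth; there you could use
#  (cd "$olddir" && mv -- * .[!.]* "$newfs"/) instead.)
find "$olddir" -mindepth 1 -maxdepth 1 -exec mv {} "$newfs"/ \;

ls -A "$newfs"
```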


iSCSI target on NetApp for AIX 6.1 and VIOS 2.1

Problem

iSCSI is much more flexible and easier to manage than a fibre channel (FC) infrastructure. There is a performance penalty, of course, but in many cases it is well worth the management benefits. This article shows how to set up the NetApp as the target and the AIX or VIOS host as the initiator (client) for iSCSI LUNs.

Solution

Setting up the NetApp

iscsi nodename
igroup create -i -t aix aixgroup iqn.1992-08.com.ibm:aix-rtp.00000000
lun create -s 200g /vol/iscsi/aix1
lun map /vol/iscsi/aix1 aixgroup
lun show -g aixgroup

Setting up the AIX host

oslevel -r
lslpp -l | grep -i iscsi
echo "10.10.1.10 3260 iqn.1992-08.com.netapp:sn.00000000" >> /etc/iscsi/targets
lsdev -C | grep iscsi
lsattr -El iscsi0
chdev -l iscsi0 -a initiator_name=iqn.1992-08.com.ibm:aix-rtp.00000000
lsdev -Cc disk
cfgmgr -l iscsi0
lsdev -Cc disk
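A note on the /etc/iscsi/targets line above: the format is "target-IP port target-IQN", and since > truncates the file, >> is safer when the file already has entries. A small sketch that writes and sanity-checks such a line against a temporary stand-in file (the IP and IQN are the example values from above):

```shell
# Stand-in for /etc/iscsi/targets (writing to the real file needs root)
targets=$(mktemp)

# Format: <target IP> <port> <target IQN>; use >> so existing entries survive
echo "10.10.1.10 3260 iqn.1992-08.com.netapp:sn.00000000" >> "$targets"

# Sanity check: every line should have exactly three fields and port 3260
awk 'NF != 3 || $2 != 3260 { bad = 1 } END { exit bad }' "$targets" \
  && echo "targets file format OK"
```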


AIX 6.1 OS Patches the Easy Way

Summary

Patching AIX can be intimidating at first for someone coming from the Windows or Linux world. AIX has capabilities that natively support production-quality operations. One of these is that you can install patches on an alternate volume, make that volume bootable for testing, and keep an easy way out if something doesn’t work right. Since AIX 5.3, it is also possible to install patches on the same volume as the boot volume and define boot profiles to boot at a certain patch level. In this post we will just look at the basics of patching, emulating the equivalent of “yum update” or the Windows express update (taking Linux / Windows as an analogy).

Steps

  • Download available patches for current technology level:
    smitty suma <Enter>
    Download Updates Now (Easy) <Enter>
    Download All Latest Fixes <Enter> <Enter>
  • Install patches:
    smitty update_all
    specify INPUT device: /usr/sys/inst.images/installp/ppc <Enter>
    go down to "ACCEPT new license agreements?" <Tab> (to switch to "yes") <Enter> <Enter>

That’s it. You can reboot in case there were kernel updates or APARs that recommend a reboot. To check the current patch level, you can run oslevel -s.


Change Host Name of DB2 9.7 Server on AIX

Synopsis

After changing the name of the host on which DB2 9.7 is running, the following error message is received when trying to start the database:
09/28/2009 02:32:50 0 0 SQL6048N A communication error occurred during START or STOP DATABASE MANAGER processing.
SQL1032N No start database manager command was issued. SQLSTATE=57019

Solution

For each database instance on the machine where the name changed, and on each federated server instance, the file $INST_HOME/sqllib/db2nodes.cfg needs to be edited, and the old host name changed to the new name for each occurrence. The format of the node lines is:
0 host.domain.tld 0
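Editing db2nodes.cfg by hand works fine, but the change can also be scripted. A minimal sketch, operating on a temporary copy (the host names and file path are examples, not taken from an actual system):

```shell
# Example db2nodes.cfg -- in a real instance this is $INST_HOME/sqllib/db2nodes.cfg
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
0 oldhost.example.com 0
EOF

# Replace every occurrence of the old host name with the new one.
# (AIX sed has no -i option, hence the temp-file-and-move dance.)
sed 's/oldhost\.example\.com/newhost.example.com/g' "$cfg" > "$cfg.new" \
  && mv "$cfg.new" "$cfg"

cat "$cfg"
```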


Fiber Channel (FC / SAN) Performance Tuning on AIX 6.1

Introduction

In large data-warehousing applications, the efficiency of the storage system is critical. With the same hardware and software release, we have seen queries that would never terminate (i.e. still running after a week) under one configuration finish within hours after optimization.

Of course in the case of a database like DB2, there are many other factors to consider, and in this article, I’m focusing on the OS level tuning advice from my friend Ben.

About FSCSI

When using a SAN over FC, it is possible to get close to wire speed, as the FSCSI protocol has very low overhead. It is usually more the nature of the I/O that slows down performance. Synchronous I/O is going to be slower – more on that later.

Testing I/O

A good way to test I/O is to compile the latest version of IOtest on AIX:
ftp://ftp.soliddata.com/iotest/
Use it on a file or a junk raw LV while supplying it with various parameters.

Things you will usually see:

  • Synchronous I/O is going to be far slower than you would expect, no matter what you do. Synchronous I/O is produced by commands like dd, cp, mv, cat, etc., as well as by databases and applications that do not use AIO (the Unidata database, WebSphere, Apache, etc. all use synchronous I/O).
  • The more writes you have, the slower you will go. Most SAN disk manufacturers design their arrays (and even their spindles) to handle 80% read and 20% write optimally, so even if you have a large amount of disk cache, you will fill up the allotted “slot” for writes very quickly and ultimately be limited by the ability of the back-end disks to absorb writes.
  • The smaller the I/O sizes, the lower your throughput.
  • The fewer process threads performing I/O, the lower your throughput. The more process threads you are running, the better the saturation and I/O consolidation, and so the better your throughput (but only up to a point).

Detecting a back-end disk problem

You can tell if you really have a back-end disk problem with filemon by running these three commands:
filemon -T 20000000 -o /tmp/filemon.out -O lv,pv
sleep 120
trcstop

That should write out a file, /tmp/filemon.out, with a lot of information in it. We extract what we are looking for with this command:
grep times /tmp/filemon.out

This should give us a set of read and write times. If an “avg” value is greater than 20 (milliseconds), then the back-end disk is not fast enough to support the load on it. Usually the solution is to add more disks and spread the load over more spindles.
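To show what the extraction looks like, here is a sketch running the same grep filter over a simulated filemon.out excerpt and flagging averages over the 20 ms threshold (the line layout is approximated from memory and may differ slightly between filemon versions):

```shell
# Simulated excerpt of /tmp/filemon.out (layout approximated, not authoritative)
out=$(mktemp)
cat > "$out" <<'EOF'
  read times (msec):    avg  4.312   min  0.105  max  31.221   sdev  2.114
  write times (msec):   avg  27.880  min  0.412  max  190.334  sdev  14.702
EOF

# Same filter as in the text
grep times "$out"

# Count how many avg values exceed 20 ms (field 5 holds the avg value)
slow=$(awk '/times/ && ($1 == "read" || $1 == "write") { if ($5 > 20) n++ } END { print n+0 }' "$out")
echo "averages over 20 ms: $slow"
rm -f "$out"
```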

Increasing I/O performance: general rules

  • Use AIO if possible (asynchronous I/O)
  • Increase queue depths if necessary (especially if you are representing large arrays of disks as a single LUN on the host)
  • Do as much work in a single disk operation as possible
  • Run as many I/O generating threads in parallel as you can. This is done in DB2 by increasing the number of I/O servers (the NUM_IOSERVERS database configuration parameter). We had good results by having one I/O server per spindle with a RAID10 LUN.
  • Make sure dynamic tracking is on (fscsi device attribute, dyntrk). More on this later.

Practical Steps

In reality, since there are other constraints to observe, these are usually about all you can actually do:

  • On each disk, set the queue depth to approximately: 8 * n / h

    where n = number of spindles in the biggest RAID array you are using
    where h = number of HBAs(fibre adapters) that can see the disk

    Read the current value with “lsattr -El hdiskX” and set it with “chdev -l hdiskX -a queue_depth=<value>”.

  • On each HBA (fibre adapter) set your SCSI queue to 2048 with “chdev -l fcsX -a num_cmd_elems=2048”. If the disks on it are in use, you will have to use “chdev -Pl fcsX -a num_cmd_elems=2048” instead and then reboot for it to take effect.
  • Make sure you are using dynamic tracking (same commands as above, but with the fscsiX devices and the dyntrk attribute, i.e. “chdev -Pl fscsi0 -a dyntrk=yes”).
  • If you are using AIX 5.x make sure lru_file_repage=0 in vmo. (set it with “vmo -po lru_file_repage=0“)
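As a worked example of the queue-depth rule of thumb above (the n and h values here are made up for illustration):

```shell
# Rule of thumb from the text: queue_depth = 8 * n / h
n=16   # spindles in the biggest RAID array (example value)
h=2    # HBAs (fibre adapters) that can see the disk (example value)
queue_depth=$(( 8 * n / h ))
echo "suggested queue_depth: $queue_depth"

# A real change on AIX would then be (shown as a comment only):
#   chdev -l hdiskX -a queue_depth=$queue_depth
```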

Use this advice at your own risk. Please be very careful when making any changes, and make sure you have backups.


How to Disable Web Based System Manager on AIX 6.1

AIX has a neat feature that allows you to manage a system completely from a browser interface. Since AIX 6.1 TL3, this interface runs on port 80, which makes it easy to find. However, there are several reasons one might not want to run the AIX Web Based System Manager. For example:

  • Something else is already running on port 80
  • You want to save the memory and CPU overhead associated with the System Manager
  • You are already using IBM Systems Director

So, if you decide that you do not need the Web Based System Manager, you can disable it very easily with these two simple commands:

stopsrc -s http4websm
rmitab webserverstart


Extending a Filesystem in AIX 6.1

AIX allows extending a filesystem on the fly. This feature has been around for over 15 years; I first used it with AIX 3.1. When an AIX system is installed, it can be set up with minimal requirements, and then each filesystem can be grown on the fly as needed. In fact, the installer has an option to automatically grow any filesystem as needed during the install of a new package.

However, there always comes a time when the physical limits of the available storage are reached. In that case, you need to either add a physical disk to the machine, add a LUN from the SAN, or add a virtual disk to the LPAR. Here is what to do once the additional physical storage has been added:

  • First, let’s make sure that the new storage is recognized: lspv will list the recognized storage units. If the new storage is recognized, it will be visible on a line like:
    hdiskX               none             none

    “X” is a placeholder for a number, like “hdisk3.” If the new storage isn’t visible yet, then we need to tell AIX to look for it with cfgmgr. This command should take only a few seconds to rescan the system. Once it returns, list the physical volumes again, and you should see the new volume.

  • Now we need to decide which volume group to extend. The list of volume groups is shown by lsvg. On simple installs, all you will see is rootvg. To add the new disk to rootvg, all you have to do is:
    extendvg rootvg hdiskX
  • Finally, we can grow the filesystem we need to grow, either with smit, smitty, or the direct command chfs. For example:
    chfs -a size=+1048576 /opt
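A note on the units in that chfs example: without a unit suffix, the size is given in 512-byte blocks, so +1048576 adds 512 MB (newer AIX levels also accept suffixes like size=+512M). A tiny sketch of the conversion, with the 512 MB target as an example value:

```shell
# chfs sizes without a suffix are in 512-byte blocks, so growing a filesystem
# by 512 MB takes 512 * 1024 * 1024 / 512 = 1048576 blocks.
grow_mb=512
blocks=$(( grow_mb * 1024 * 1024 / 512 ))
echo "chfs -a size=+$blocks /opt"
```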


Installing IBM Systems Director on an AIX 6.1 LPAR with a DB2 back-end

IBM Systems Director is a very powerful management tool that comes free with PowerVM (that is, free to anyone who purchases a Power System). I have gathered the different steps here. Most of the information comes from the official documentation: Installing IBM Systems Director on the management server. In this guide I focus on installation using AIX 6.1 in an LPAR with DB2 as the back-end database server.

  • First, prepare the database as described in Preparing the IBM DB2 Universal Database. The DB2 client can be installed following the steps described in Installing the DB2 9.5 Client on AIX 6.1
  • Then, make a note of the following configuration parameters for the Director installer:


    DbmsApplication = DB2
    DbmsServerName = fqdn_of_db2_server
    DbmsTcpIpListenerPort=database_port (for example 50000)
    DbmsDatabaseName = database_name
    DbmsDatabaseAppHome = path_to_sqllib (for example /home/db2inst1/sqllib)
    DbmsUserId = database_user_name
    DbmsPassword = (encrypted_user_password populated by /opt/ibm/director/bin/configDB.sh)

    Create a database for IBM Systems Director (ISD), and make sure you can connect to the DB2 database from the AIX LPAR that will serve for the ISD install.

  • Next, the network is prepared, as described in Preparing firewalls and proxies for IBM Systems Director. Remember to also open up access from the management server to the other LPARs and servers that ISD will have to access.
  • Make sure the ports needed for ISD are available:
    netstat -an | grep LISTEN | egrep "951(0|4|5)"
    If any of these ports are in use, check whether the alternative ports are free:
    netstat -an | grep LISTEN | egrep "991(0|4|5)"
    If there is no output, move the agent to the alternative ports:
    /var/opt/tivoli/ep/runtime/agent/toolkit/bin/configure.sh -unmanaged -port 9910 -jport 9914 -nport 9915 -force
  • Next, AIX needs to be patched if necessary, as described in Preparing to install IBM Systems Director Server on AIX. Since we are on AIX 6.1, there is little to do if all the fixpacks are up to date. Installing CSM is one step I recommend if you have IVM managed systems (having csm.hc_utils already installed from the install CD is necessary):


    mkdir csm
    cd csm
    wget 'ftp://ftp.software.ibm.com/software/server/csm/csm-aix-1.7.0.19.power.tar.gz'
    gtar xvzf csm-aix-1.7.0.19.power.tar.gz
    cd installp/ppc
    inutoc .
    installp -acgXYd . csm.hc_utils
    cd ../../../director (where you unpacked the Director download; see below if you haven’t done that part yet)
    installp -acgXYd . Director.Server.ext.FSPProxy.rte

  • Download IBM Systems Director from IBM if you haven’t already, unpack it, and run the installer:
    mkdir director
    cd director
    gtar xvzf path_to_download/SysDir6_1_Server_AIX.tar.gz
    server/dirinstall.server
  • Configure the database access:
    cd /opt/ibm/director/proddata/
    cp cfgdbcmd.rsp cfgdbcmd.rsp-dist
    vi cfgdbcmd.rsp

    Edit the file cfgdbcmd.rsp and select the lines that apply to DB2. Then populate the password with:
    /opt/ibm/director/bin/configDB.sh
    /opt/ibm/director/bin/cfgdbcmd.sh -dbAdmin db2_instance_user -dbAdminPW db2_instance_user_pass
    This will take a while to complete, as there are over 1,000 tables to create, with constraints and indexes, and then the tables are pre-populated.
  • Then a final configuration step creates the resource manager user ID:
    /opt/ibm/director/bin/configAgtMgr.sh
    This is very simple, just provide a user id and a password for the ISD to use internally.
  • Now you are ready to start the ISD:
    /opt/ibm/director/bin/smstart
    This will take a while, and it will probably hang if you don’t have enough memory. A minimum of 2GB is necessary.
  • Follow the startup progress:
    /opt/ibm/director/bin/smstatus -r
    The output should be as follows:
    Inactive
    Starting
    Active
    I didn’t time it, but it takes over 10 minutes for the ISD to become ready in a small LPAR on a p5.

That’s it, from here, you can continue on with the ISD documentation: Configuring IBM Systems Director Server after installation.
Let me know how it goes and how you like ISD.
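The port check from the steps above can be illustrated with a simulated netstat listing (the sample output below is fabricated for the example; on the real server you would pipe the actual netstat -an):

```shell
# Fabricated 'netstat -an' output: two default ISD ports are taken
sample='tcp4  0  0  *.9510   *.*  LISTEN
tcp4  0  0  *.9514   *.*  LISTEN
tcp4  0  0  *.22     *.*  LISTEN'

# Same filter as in the steps above: which default ISD ports are in use?
in_use=$(printf '%s\n' "$sample" | grep LISTEN | egrep -c "951(0|4|5)")
echo "default ISD ports in use: $in_use"

# And are the 991x alternatives free?
alt_in_use=$(printf '%s\n' "$sample" | grep LISTEN | egrep -c "991(0|4|5)")
echo "alternative ports in use: $alt_in_use"
```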


Mount CD/DVD in an AIX or Linux LPAR

To mount a CD or DVD in an LPAR, you first need to use the media library to assign one of the CDs in the library to the LPAR. For example, using the IVM interface:

  1. Click on the LPAR name in the “View/Modify Partitions” section
  2. Select the optical devices tab
  3. Create a virtual optical device if there isn’t one yet
  4. Click modify under current media
  5. Select the CD or DVD from the library
  6. Click OK

Then, you need to mount the media inside the AIX or Linux partition:

  1. Create the /mnt/cdrom directory if it doesn’t exist yet: mkdir /mnt/cdrom
  2. Mount the media device: mount -v cdrfs -r /dev/cd0 /mnt/cdrom (on Linux the mount command is slightly different)

Note: on AIX you can edit the file “/etc/cdromd.conf” and add the line “device cd0 /mnt/cdrom” to have the CD or DVD mounted automatically.