Saturday, November 15, 2008

Solaris 10 Sysadmin Howto

Enable/disable IP forwarding

Solaris 10
#routeadm -e ipv4-forwarding (use -d to disable)
#routeadm -e ipv6-forwarding (use -d to disable)
#ifconfig qfe0 router (ipv4, enable forwarding on one interface; use -router to disable)
#ifconfig qfe0 inet6 router (ipv6; use -router to disable)
Note: In Solaris 8/9, use `ndd -set /dev/ip ip_forwarding 1` for the same purpose (0 to disable), and `ndd -get /dev/ip ip_forwarding` to check the status

Processor
#psrinfo
#which psradm
#psradm -f processor_id (take a processor offline)
#psradm -n processor_id (bring a processor back online)


From Sun docs

To create a zone (container) in Solaris 10, a few things need to be done.
Here are the steps to follow:
a) Create a new processor set, defining the minimum and maximum number of processors for the set.

b) Create a new resource pool that contains the processor set defined in a). Enable, save and activate this pool.

c) Once the resource pool is created, create a new zone using zonecfg. In this new zone, define the zone name, the directory or file system where the zone files will be located, the network address and the network interface used, and finally assign the zone to the resource pool from b). Make sure you verify and commit the zonecfg configuration.

d) Your zone is now ready. Install the zone OS. Once finished, boot the zone.
e) Once the zone has booted successfully, log in to the zone and do the initial setup for the zone.
f) Done


Solaris 10 : Creating a New Resource Pool

1. global# pooladm -e (Enable the resource pools )
2. global# pooladm -s (Save the current configuration )
3. global# pooladm (See if any pools already exist on the system)
Create a processor set (pset) called “email-pset” with a min. 1 CPU and max. of 1 CPU.
4. global# poolcfg -c 'create pset email-pset (uint pset.min=1; uint pset.max=1)'
5. global# poolcfg -c 'create pool email-pool' (Create a resource pool for the processor set.)
Link the pool to the processor set:
6. global# poolcfg -c 'associate pool email-pool (pset email-pset)'
7. global# pooladm -c (Activate the configuration. )
8. global# pooladm ( Verify the existence of the resource pool)


Creating the Zone on the New Resource Pool

Four steps are required. A zone using FSS will have a few more options.
• Configuration—Define the zone properties (fs, network interfaces, etc.)
• Installation—Create the zone (by installing and populating parameters for the zone)
• Virtual platform management—Use zone tools to boot, halt, or reboot the zone
• Zone login—Move in and out of the zone to perform administrative tasks

Configuration
To configure and define a new zone:
1. global# zonecfg -z email-zone (Enter the zone configuration tool.)
2. zonecfg:email-zone> create (Create new zone definition)
3. zonecfg:email-zone> set zonepath=/export/home/zones/email-zone (Assign to fs)
4. zonecfg:email-zone> set autoboot=true
5. Configure networking parameters, using the add net command and its subcommands.
zonecfg:email-zone> add net
zonecfg:email-zone:net> set address=10.0.0.1
zonecfg:email-zone:net> set physical=eri0
zonecfg:email-zone:net> end
6. zonecfg:email-zone> set pool=email-pool (Assign the zone to the email pool.)
7. zonecfg:email-zone> verify (Verify that the configuration syntax is correct)
8. Write the in-memory configuration to stable storage using the commit command, and then exit the shell.
zonecfg:email-zone> commit
zonecfg:email-zone> exit (or ^D [Ctrl-d])


A standard zone automatically shares the /usr, /lib, /platform, and /sbin file systems with the global zone. It is important to note that a standard zone configuration mounts all of these global file systems read-only.
As a result, an attempt to install an application into any of these directories will fail. See the section Creating the First Web Server Container to learn how to mount a global zone file system with write permissions in the directory in which the application is installed.


Installation
9. Install the zone.
global# zoneadm -z email-zone install


Virtual Platform Management
When the installation is complete, the zone is ready to be booted. Although the zone is now installed, the system identification internal to the zone has not yet run. At this point the administrator can configure things like the zone's root password and the name server it should use. The first time the zone is booted after installation, the system interacts with the user via the zone's console: the standard system identification questions must be answered there.

10. Boot the zone, using the zoneadm(1M) boot command.
global# zoneadm -z email-zone boot

Zone Login
11. Log in to the zone console using zlogin, to answer the system identification questions (the zone reboots once identification is complete).
global# zlogin -C email-zone
[Connected to zone email-zone console]
(This will now show the same type of output as when a normal system boots.)
...
root password = (your choice)

12. Disconnect from the console using ~. (tilde dot) as in tip(1).
The zone can now be accessed over the network using the telnet(1), rlogin(1) or ssh(1) commands, like a standard Solaris OS system.

Note that each zone created on the system must be installed, configured, and booted. In addition, a sysidcfg(4) file can be used to automate the identification process. See the http://docs.sun.com site for details.
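As noted above, a sysidcfg(4) file can pre-answer the identification questions. A minimal sketch, placed in the zone at <zonepath>/root/etc/sysidcfg before first boot; the hostname, timezone, and password hash here are placeholders, not values from the original:

```
system_locale=C
terminal=vt100
timezone=US/Eastern
network_interface=primary {
    hostname=email-zone
}
name_service=NONE
security_policy=NONE
root_password=<encrypted-password-hash>
```

Note that root_password takes the encrypted hash (as found in /etc/shadow), not a cleartext password.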



Enabling the FSS on the Web Server Resource Pool
Once the Container for the email server application is created, installed, and booted, you will create another Container for the first Web server. While this new Container is similar to the one created for the email server application, it also utilizes the Fair Share Scheduler to set CPU usage guarantees.

To set the Fair Share Scheduler:

1. Set the scheduler for the default pool to the Fair Share Scheduler.
global# poolcfg -c 'modify pool pool_default (string pool.scheduler="FSS")'
2. Create an instance of the configuration
global# pooladm -c
3. Move all the processes in the default pool and its assoc. zones under the FSS.
global# priocntl -s -c FSS -i class TS
global# priocntl -s -c FSS -i pid 1

This step could also be done by rebooting the system; use priocntl(1) if you don't want to reboot.


Creating the First Web Server Container
Installing this zone will be slightly more sophisticated. You will assign three Fair Share shares to it as well as provide read-write access to the /usr/local file system.

To create the zone:
1. Define the zone for the first Web server.
global# zonecfg -z Web1-zone
Web1-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:Web1-zone> create
zonecfg:Web1-zone> set zonepath=/export/home/zones/Web1-zone
zonecfg:Web1-zone> add net
zonecfg:Web1-zone:net> set address=10.0.0.2
zonecfg:Web1-zone:net> set physical=eri0
zonecfg:Web1-zone:net> end
zonecfg:Web1-zone> set pool=pool_default


Remember, the two Web servers share the CPU resources of the default pool with each other as well as the global zone, so you need to specify how those resources should be shared using the Fair Share Scheduler (FSS).

With FSS, the relative importance of applications is expressed by allocating CPU resources based on shares—a portion of the system's CPU resources assigned to an application. The larger the number of shares assigned to an application, the more CPU resources it receives from the FSS software relative to other applications. The number of shares an application receives is not absolute—what is important is how many shares it has relative to other applications, and whether they will compete with it for CPU resources.

2. Assign three shares to this zone
zonecfg:Web1-zone> add rctl
zonecfg:Web1-zone:rctl> set name=zone.cpu-shares
zonecfg:Web1-zone:rctl> add value (priv=privileged,limit=3,action=none)
zonecfg:Web1-zone:rctl> end
zonecfg:Web1-zone> exit

In the case of a standard zone install—like the email server—the /usr directory is configured to be read-only. In some cases an application may need to be installed into a subdirectory under /usr, such as /usr/local (e.g., open source software often installs there). A standard zone install will not allow this. However, it can be done by changing the zone configuration so that the zone mounts an additional directory, read-write, on /usr/local.

In this example, the first Web server is installed in /usr/local/bin, which means we need to configure the zone to support this.

To configure a read-write /usr/local directory:
3. In the global zone, create the directory to be exported to the zone.
global# mkdir -p /export/home/zones/Web1-zone/local
4. Set the permissions such that only root in the global zone can enter this directory.
global# chmod 700 /export/home/zones/Web1-zone
5. Create the dir on which the file system is to be mounted if it doesn't already exist (otherwise skip).
global# mkdir /usr/local
6. Enter the zone configuration tool for this zone.
global# zonecfg -z Web1-zone
7. Add a file system to the zone, using the add fs command.
zonecfg:Web1-zone> add fs
8. Specify a directory in the zone on which the file system can be mounted.
zonecfg:Web1-zone:fs> set dir=/usr/local
9. Export the directory from the global zone to the new zone.
zonecfg:Web1-zone:fs> set special=/export/home/zones/Web1-zone/local
10. Set the file system type to the loopback file system.
zonecfg:Web1-zone:fs> set type=lofs
11. Set the directory to have read and write permissions.
zonecfg:Web1-zone:fs> set options=[rw,nodevices]
12. End the configuration.
zonecfg:Web1-zone:fs> end
13. Be sure to verify and commit the configuration, and then install and boot the zone.
zonecfg:Web1-zone> verify
zonecfg:Web1-zone> commit
zonecfg:Web1-zone> exit
global# zoneadm -z Web1-zone install
global# [output omitted here for brevity]
global# zoneadm -z Web1-zone boot
global# zlogin -C Web1-zone

global# zoneadm list -cv is a quick way to see what state the zone is in.

Creating the Second Web Server Container
Once the Container for the first Web server is created, installed, and booted, a Container can be created for the second Web server. This Container is similar to the one just created, but will be assigned a different amount of FSS shares, and also includes access to a CD-ROM device and a raw disk partition.

To create the second Container:

1. Create the zone for the second Web site using the same process used to create the Web1-zone zone. Be sure to change the name of the zone, its location, the name of the pool used, and the IP address.
#zonecfg -z Web2-zone
Web2-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:Web2-zone> create
zonecfg:Web2-zone> set zonepath=/export/home/zones/Web2-zone
zonecfg:Web2-zone> add net
zonecfg:Web2-zone:net> set address=10.0.0.3
zonecfg:Web2-zone:net> set physical=eri0
zonecfg:Web2-zone:net> end
zonecfg:Web2-zone> set pool=pool_default
2. Specify the use of the Fair Share Scheduler, and assign two shares to the zone.
zonecfg:Web2-zone> add rctl
zonecfg:Web2-zone:rctl> set name=zone.cpu-shares
zonecfg:Web2-zone:rctl> add value (priv=privileged,limit=2,action=none)
zonecfg:Web2-zone:rctl> end
To give the users of the Container access to the CD-ROM device:
3. Add a file system to the zone, using add fs.
zonecfg:Web2-zone> add fs
4. Specify the CD-ROM directory for the zone.
zonecfg:Web2-zone:fs> set dir=/cdrom
5. Export the directory from the global zone to the new zone.
zonecfg:Web2-zone:fs> set special=/cdrom
6. Use the loopback file system.
zonecfg:Web2-zone:fs> set type=lofs
7. Leave the directory read-only, since the CD-ROM device itself is read-only.
zonecfg:Web2-zone:fs> set options=[nodevices]
8. End the configuration.
zonecfg:Web2-zone:fs> end


To configure the zone to access a raw device (raw disk partition) perform the following steps:
9. Add the block device for the raw partition to the zone.
zonecfg:Web2-zone> add device
zonecfg:Web2-zone:device> set match=/dev/dsk/c0t0d0s6
zonecfg:Web2-zone:device> end
10. Add the character device for the raw partition to the zone.
zonecfg:Web2-zone> add device
zonecfg:Web2-zone:device> set match=/dev/rdsk/c0t0d0s6
zonecfg:Web2-zone:device> end
zonecfg:Web2-zone> verify
zonecfg:Web2-zone> commit
zonecfg:Web2-zone> exit

The global zone administrator must ensure the disk partition is not exported to other zones for the duration of this process. Failure to do so may result in data corruption.

11. Install, boot, and configure the zone as before.

The email server will run on its own guaranteed CPU, protected from the other applications on this system, while the Web servers share the remaining three CPUs. To clarify the FSS share usage: the first Web server application holds three of the six total shares, entitling it to 1.5 CPUs' worth of the three CPUs (3 x 3/6 = 1.5); the second holds two of the six shares, giving it one CPU's worth; and the global zone gets the remaining 0.5 CPUs' worth.

Tuesday, June 24, 2008

Sun's Innotek VirtualBox

VirtualBox by Innotek GmbH
Then VirtualBox by Sun Microsystems
Then VirtualBox by Oracle

Pretty much like VMWare.

Wednesday, March 26, 2008

NFSv4 on FedoraCoreX

On NFS client
/etc/fstab - used on the NFS client
/etc/auto.master - used on the NFS client
/etc/init.d/rpcgssd - required on the client when RPCSEC_GSS is used

On NFS Server
/etc/exports - used on the NFS server
/etc/sysconfig/nfs - used on the NFS server
/etc/init.d/nfs - required on the server
/etc/init.d/rpcsvcgssd - required on the server when RPCSEC_GSS is used

On both NFS Client/Server
/etc/idmapd.conf - used on the NFS client and server
/etc/gssapi_mech.conf - used on the NFS client and server
/etc/init.d/portmap - used on the client and server
/etc/init.d/rpcidmapd - required on both client and server

Others such as:
/etc/hosts, /etc/hosts.allow, /etc/hosts.deny, /etc/sysconfig/iptables (if firewall enabled, port tcp/2049) etc

To mount NFS during boot, put appropriate NFS entry at /etc/fstab
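A typical client /etc/fstab entry and the matching server /etc/exports line might look like this (the hostname, paths, and network range are placeholders, not from the original):

```
# client /etc/fstab  (Fedora Core era: filesystem type "nfs4")
nfsserver:/    /mnt/nfs4    nfs4    rw,hard,intr    0 0

# server /etc/exports  (fsid=0 marks the NFSv4 pseudo-root)
/export        192.168.1.0/24(rw,sync,fsid=0)
```

Note that with NFSv4 the client mounts relative to the pseudo-root, so `nfsserver:/` here refers to the export marked fsid=0, not the server's real root directory.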
For more info : http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html#intro

Wednesday, January 30, 2008

How to check backup and to restore on HP OpenView Omniback

(From SuhailaAbdGhani SITI)

# /usr/omni/bin/omnistat <--Check for stats

Check detail reports for current running session
# /usr/omni/bin/omnidb -ses 2007/05/28-2 -rep|more

Check session details :(Backup contain; Object Name,Size, Device name, etc)
omnidb -session -detail
# /usr/omni/bin/omnidb -sess 2008/01/17-4 -det

omnidownload -list_devices
# /usr/omni/bin/omnidownload -list_devices

To Restore filesystem
-------------------------
omnir -filesystem <object> '<label>' -session <session-id> -tree <mountpoint> -into <restore target> -no_monitor
# /usr/omni/bin/omnir -filesystem mlbs0048_cs1:/local/apps/ctm '/local/apps/ctm' -session 2008/01/17-4 -tree /local/apps/ctm -into /local/apps/ctm -no_monitor


To handle mount request:
Check session report, look for media label,device,slot, etc eg:
# /usr/omni/bin/omnidb -sess 2008/01/26-12 -rep|more

Command to confirm mount request.
omnimnt -dev <device> -ses <session-id>
# /usr/omni/bin/omnimnt -dev LTO_2_mlbsv146 -ses 2008/01/26-12

Monday, January 28, 2008

Jérôme Kerviel

Is he 'better' than Yasuo Hamanaka and Nick Leeson, or merely made a scapegoat for Société Générale's bad investment decisions?

Monday, January 21, 2008

To Configure Solaris OS to Generate Core Files

#mkdir -p /var/cores
#coreadm -g /var/cores/%f.%n.%p.%t.core -e global -e global-setid -e log -d process -d proc-setid

# coreadm <---Display the core configuration.
global core file pattern: /var/cores/%f.%n.%p.%t.core
init core file pattern: core
global core dumps: enabled
per-process core dumps: disabled
global setid core dumps: enabled
per-process setid core dumps: disabled
global core dump logging: enabled

#man coreadm <---for further details
# ulimit -c unlimited <--- Set the size of the core dumps to unlimited.
# ulimit -a <----Verify coredump size
coredump(blocks) unlimited

Test/Verify core file creation.
# cd /var/cores
# sleep 100000 &
[1] PID
# kill -8 PID
# ls

Friday, January 18, 2008

Backup/Restore (Solaris) - ufsdump/ufsrestore, cpio, tar

#dmesg | grep st <---- Checking tape device
#mt -f /dev/rmt/0 status <---- Checking tape drive status

UFSDUMP (Backup file system)
#ufsdump 0cvf /dev/rmt/0 /dev/rdsk/c0t0d0s0 <-- using raw devices
#ufsdump 0cvf /dev/rmt/0 /usr <--- using mounted fs
Disk to disk copy
#ufsdump 0f - /dev/rdsk/c0t0d0s7 |(cd /mnt/backup ;ufsrestore xf -)

UFSRESTORE (restore fs)
#ufsrestore rvf /dev/rmt/0 <---- restore a dump to a current dir
# ufsrestore if /dev/rmt/0 <--- interactive mode, on the selected tape drive

TAR
#tar cvf /dev/rmt/0 * <---- Backup all (*) using tar for the current dir
#tar tvf /dev/rmt/0 <---- listing a tar backup on a tape
#tar xvf /dev/rmt/0 <---- Extracting tar backup from tape to a current dir

CPIO
#find . -depth -print | cpio -ovcB > /dev/rmt/0 <---- Backup using cpio
#cpio -ivtB < /dev/rmt/0 <---- Viewing cpio files on a tape
#cpio -ivcB < /dev/rmt/0 <---- Restoring a cpio backup

Compress/uncompress
#compress -v file (produces file.Z) or #gzip file (produces file.gz)
#uncompress file.Z or #gunzip file.gz
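Both tools replace the original file with the compressed one and restore the name on decompression. A quick round trip with gzip shows the naming convention (gzip is used here because compress is not always available; the file names are placeholders):

```shell
#!/bin/sh
# Round-trip a file through gzip/gunzip: gzip turns "demo.txt" into
# "demo.txt.gz"; gunzip restores the original name and contents.
echo "sample data" > /tmp/demo.txt
cp /tmp/demo.txt /tmp/demo.orig
gzip -f /tmp/demo.txt             # /tmp/demo.txt -> /tmp/demo.txt.gz
gunzip -f /tmp/demo.txt.gz        # /tmp/demo.txt.gz -> /tmp/demo.txt
cmp -s /tmp/demo.txt /tmp/demo.orig && echo "round trip OK"
```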

Thursday, January 17, 2008

OS Backup/Restore using ufsdump in Solaris

Backing Up and Restoring the Solaris OS With "ufsdump"
(http://www.sun.com/bigadmin/content/submitted/backup_restore_ufsdump.html)

Yazid Mohamed, August 2006

This Tech Tip describes a backup and restore procedure for the Solaris 8 or 9 Operating System using the ufsdump command.

Backing Up the OS

1. For this example, we are using c0t0d0s0 as a root partition. Bring the system into single-user mode (recommended).
# init s

2. Check the partition consistency.
# fsck -m /dev/rdsk/c0t0d0s0

3. Verify the tape device status:
# mt status

Or use this command when you want to specify the raw tape device, where x is the interface:
# mt -f /dev/rmt/x status

4. Back up the system:
a) When the tape drive is attached to your local system, use this:
# ufsdump 0uf /dev/rmt/0n /

b) When you want to back up from disk to disk, for example, if you want to back up c0t0d0s0 to c0t1d0s0:
# mkdir /tmp/backup
# mount /dev/dsk/c0t1d0s0 /tmp/backup
# ufsdump 0f - / | (cd /tmp/backup;ufsrestore xvf -)

c) When you want to back up to a remote tape, use this. On a system that has a tape drive, add the following line to its /.rhosts file:

hostname root

where hostname is the name or IP of the system that will run ufsdump to perform the backup. Then run the following command:
# ufsdump 0uf remote_hostname:/dev/rmt/0n /

Restoring the OS

1. For this example, your OS disk is totally corrupted and replaced with a new disk. Go to the ok prompt and boot in single-user mode from the Solaris CD.
ok> boot cdrom -s

2. Partition your new disk in the same way as your original disk.

3. Format all slices using the newfs command. For example:
# newfs /dev/rdsk/c0t0d0s0

4. Make a new directory in /tmp:
# mkdir /tmp/slice0

5. Mount c0t0d0s0 into /tmp/slice0:
# mount /dev/dsk/c0t0d0s0 /tmp/slice0

6. Verify the status of the tape drive:
# mt status

If the tape drive is not detected, issue the following command:
# devfsadm -c tape
or
# drvconfig
# tapes
# devlinks

Verify the status of tape drive again and make sure the backup tape is in the first block or file number is zero. Use the following command to rewind the backup tape:
# mt rewind

7. Go into the /tmp/slice0 directory and you can start restoring the OS.
# cd /tmp/slice0
# ufsrestore rvf /dev/rmt/0n

If you want to restore from another disk (such as c0t1d0s0), use the following command:
# mkdir /tmp/backup
# mount /dev/dsk/c0t1d0s0 /tmp/backup
# ufsdump 0f - /tmp/backup | (cd /tmp/slice0;ufsrestore xvf -)

8. After restoring all the partitions successfully, install bootblock to make the disk bootable. This example assumes your /usr is located inside the "/" partition:
# cd /tmp/slice0/usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s0

9. To finish restoring your OS, reboot the system.

Example:
To create full dump of a root fs on c0t3d0, on 150MB tape unit 0, use:
example# ufsdump 0cfu /dev/rmt/0 /dev/rdsk/c0t3d0s0

To make and verify an incremental dump at level 5 of the usr partition of c0t3d0, on a tape unit 1,
example# ufsdump 5fuv /dev/rmt/1 /dev/rdsk/c0t3d0s6

Solaris NIC type

The types of Device and Network Interfaces

hme - SUNW,hme Fast-Ethernet device driver
bge - SUNW,bge Gigabit Ethernet driver for Broadcom BCM5704
ce - Cassini Gigabit-Ethernet device driver
dmfe - Davicom Fast Ethernet driver for Davicom DM9102A
dnet - Ethernet driver for DEC 21040, 21041, 21140 Ethernet cards
elx - 3COM EtherLink III Ethernet device driver
elxl - 3Com Ethernet device driver
eri - eri Fast-Ethernet device driver
ge - GEM Gigabit-Ethernet device driver
ieef - Intel Ethernet device driver
le - Am7990 (LANCE) Ethernet device driver
pcelx - 3COM EtherLink III PCMCIA Ethernet Adapter
pcn - AMD PCnet Ethernet controller device driver
qfe - SUNW,qfe Quad Fast-Ethernet device driver
sk98sol - SysKonnect Gigabit Ethernet SK-98xx device driver
spwr - SMC EtherPower II 10/100 (9432) Ethernet device driver

Control-M Connect:Direct

Configuration directory: /local/bin/cdunix/ndm/cfg/
Configuration Files: netmap.cfg
Userfile.cfg
Receiving server:
Sending Server:


Steps for Processing a Connect:Direct Request
* Receive request to add a user mapping.

At the RECEIVING SERVER
1. Log in to the receiving server.
a.) Go to the Connect:Direct configuration directory: /local/bin/cdunix/ndm/cfg/

2. Check the netmap.cfg file, which maps the remote system.
a.) Is the sending server information already in netmap.cfg?
b.) If yes, do nothing and exit;
else copy an existing server entry and change the following values:
b.1.) Node name - the same as the server name, usually at the top of the entry.
b.2.) comm.info - the IP address of the sending server.

3. In the userfile.cfg file, map the remote user to the local user:
user2@:\ #Remote user @Remote system
:local.id=user2:\ #Local user
:descrip=:

Add the local users with the values
user2:\
:admin.auth=n:\
:cmd.chgproc=y:\
:cmd.delproc=y:\
:cmd.flsproc=y:\
:cmd.selproc=y:\
:cmd.submit=y:\
:cmd.selstats=y:\
:cmd.stopndm=n:\
:cmd.trace=n:\
:pstmt.copy=y:\
:pstmt.runjob=y:\
:pstmt.runtask=y:\
:pstmt.submit=y:\
:snode.ovrd=y:\
:pstmt.copy.ulimit=y:\
:pstmt.upload=y:\
:pstmt.upload_dir=:\
:pstmt.download=y:\
:pstmt.download_dir=:\
:pstmt.run_dir=:\
:pstmt.submit_dir=:\
:name=:\
:phone=:\
:descrip=:


B) At the SENDING SERVER
. Log in to the sending server
. su - user2 (the local user)
. Set the Connect:Direct environment: # . /local/etc/cdunix/cdunixenv
. Create the process file: test.cd

E.g.
user2@ $ cat test.cd
test process snode=

step01: copy from (file=/local/users/user2/whatevername.tst pnode) to (file=SNAZU1.GOOF snode disp=new)

· Run the command to start the Direct application:
user2@ $ direct

********************************************
* CONNECT:Direct for UNIX *
*-----------------------------------------------------------*
* Copyright (c) 1983, 1997 Sterling Commerce, Inc. *
* Version 3.1.00 GA *
********************************************

Enter a ';' at the end of a command to submit it. Type 'quit;' to exit CLI.

Submit the process
Direct> submit file=/local/users/user2/test.cd;
Process Submitted, Process Number = 1036

Check the status of the process
Direct> sel stat pnum=1036;
===============================================================================
SELECT STATISTICS
===============================================================================
P RECID LOG TIME PNAME PNUMBER STEPNAME CCOD FDBK MSGID
E RECID LOG TIME MESSAGE TEXT
-------------------------------------------------------------------------------
P PSTR 06/13/2005 05:05:40 test 1036 0 XSMG200I
P CTRC 06/13/2005 05:05:40 test 1036 step01 0 SCPA000I
P PRED 06/13/2005 05:05:40 test 1036 0 XSMG252I
===============================================================================
Select Statistics Completed Successfully.

Note: if the CCOD (completion code) is any number other than 0 (zero), the transfer was not successful.
E.g.

Direct> sel stat pnum=1035;
===============================================================================
SELECT STATISTICS
===============================================================================
P RECID LOG TIME PNAME PNUMBER STEPNAME CCOD FDBK MSGID
E RECID LOG TIME MESSAGE TEXT
-------------------------------------------------------------------------------
P PSTR 06/13/2005 05:01:35 test 1035 0 XSMG200I
P CTRC 06/13/2005 05:01:35 test 1035 step01 8 XCPR014I
P PRED 06/13/2005 05:01:35 test 1035 8 XCPR014I
===============================================================================
Select Statistics Completed Successfully.
Direct> sel stat pnum=1035 detail=yes; <----For a detailed output on the status
Direct> quit;

# /local/bin/cdunix/work/hostname/ <-----Log files location

Wednesday, January 16, 2008

Tivoli TSM

# lslpp -l tivoli* <-- To check TSM version and installed fileset on MC TSM client
Path: /usr/lib/objrepos
tivoli.tivguid 1.1.0.0 COMMITTED IBM Tivoli GUID on AIX
tivoli.tsm.client.api.64bit 5.2.0.0 COMMITTED TSM Client - 64bit API
tivoli.tsm.client.ba.64bit.base 5.2.0.0 COMMITTED TSM Client - Backup/Archive Base
tivoli.tsm.client.ba.64bit.common 5.2.0.0 COMMITTED TSM Client-Backup/ArchiveCommon
tivoli.tsm.client.ba.64bit.web 5.2.0.0 COMMITTED TSM Client - Backup/Archive WEB
tivoli.tsm.tdpr3.ora.64bit 3.3.2.0 COMMITTED Data Protection for SAP
Path: /etc/objrepos
tivoli.tivguid 1.1.0.0 COMMITTED IBM Tivoli GUID on AIX


TSM Backup/Archive Client Config Files
* v3.x systems: /usr/lpp/adsm/bin/dsm.sys and /usr/lpp/adsm/bin/dsm.opt
* v4.x systems: /usr/tivoli/tsm/client/ba/bin/dsm.sys and dsm.opt

TSM Scheduler logs
· Directory: /local/etc/tsm/schedule_log
· schedlogname: /var/adm/dsmsched.log <---- EU installation
· errorlogname: /var/adm/dsmerror_log <----EU installation
· inclexcl: /usr/tivoli/tsm/client/ba/bin/inclexcl (AIX)
· inclexcl: /opt/tivoli/tsm/client/ba/bin/inclexcl (Solaris)
· dsm.sys: /usr/tivoli/tsm/client/ba/bin/dsm.sys <--Backup param
· dsm.opt: /usr/tivoli/tsm/client/ba/bin/dsm.opt

/local/etc/tsm/bin/dsm.opt <---specify client processing options


dsm.sys
· dsm.sys - Used to specify one/more servers to contact for services, and communications options for each server. This file can also include authorization options, backup and archive processing options, and scheduling options.
· Example
$more /usr/tivoli/tsm/client/ba/bin/dsm.sys
/local/etc/tsm/bin/dsm.sys <---which refer to schedule_log and inclexcl
* dsm.sys file V 1.0 01/05/2001
Servername ICB0008_TSM
TCPPort 1500
TCPServeraddress icb0008
TCPWindowsize 640
TCPBuffsize 512
TXNBytelimit 2097152
schedmode prompted
schedlogretention 5
schedlogname /local/etc/tsm/schedule_log
inclexcl /local/etc/tsm/inclexcl/inclexcl
tcpnodelay YES
passwordaccess generate
SCHEDLOGRetention 7
schedlogname /var/adm/dsmsched.log
errorlogname /var/adm/dsmerror_log
inclexcl /usr/tivoli/tsm/client/ba/bin/inclexcl (AIX)
inclexcl /opt/tivoli/tsm/client/ba/bin/inclexcl (Sol)

dsm.opt
· dsm.opt / inclexcl - used to specify client processing options, including the TSM server to use for filesystem backups.
· Example: /usr/tivoli/tsm/client/ba/bin/dsm.opt
/local/etc/tsm/bin/dsm.opt
...
SERVERNAME icb0008_TSM
tapeprompt no
subdir yes

inclexcl
· Used to include/exclude a specific file or groups of files from backup services, and to assign specific management classes to files. Tivoli Storage Manager backs up any file that is not explicitly excluded. Because Tivoli Storage Manager processes your include-exclude list from the bottom of the list up, it is important to enter all your include-exclude statements in the proper order. You can use the query inclexcl command to display the include and exclude statements in the order they are examined.
· /local/etc/tsm/inclexcl/inclexcl
By default, all data is backed up to the default management class. The inclexcl file can be used to:
1.Exclude files from backup.
2.Backup files to a different management class
· Example:
root@icb0044 # dsmc q inclexcl
*** FILE INCLUDE/EXCLUDE ***
Mode Function Pattern (match from top down) Source File
---- --------- ------------------------------ -----------------
--snip--
Excl All /usr/sap/???/.../data/ROLLFL?? /local/etc/tsm/inclexcl/inclexcl
Excl All /usr/sap/???/.../data/PAGFIL?? /local/etc/tsm/inclexcl/inclexcl
Excl All /usr/sap/tmp/.../* /local/etc/tsm/inclexcl/inclexcl
Incl All /usr/sap/trans/.../* /local/etc/tsm/inclexcl/inclexcl
Incl All /usr/sap/.../* /local/etc/tsm/inclexcl/inclexcl
Incl All /oracle/???/????log?/.../cntrl???.dbf /local/etc/tsm/inclexcl/inclexcl
Incl All /oracle/???/sapdata*/.../cntrl???.dbf /local/etc/tsm/inclexcl/inclexcl
Excl All /oracle/stage/.../* /local/etc/tsm/inclexcl/inclexcl
Excl All /oracle/???/mirrlog?/.../* /local/etc/tsm/inclexcl/inclexcl
Excl All /oracle/???/origlog?/.../* /local/etc/tsm/inclexcl/inclexcl
Excl All /oracle/???/sapdata*/.../* /local/etc/tsm/inclexcl/inclexcl
Excl All /oracle/???/saparch/*.dbf /local/etc/tsm/inclexcl/inclexcl
Incl All /sapmnt/???/???JOBLG/.../* /local/etc/tsm/inclexcl/inclexcl
Incl All /local/backup/sap/logs/* /local/etc/tsm/inclexcl/inclexcl
Excl All /unix /local/etc/tsm/inclexcl/inclexcl

TSM Scheduler logs (/local/etc/tsm/schedule_log)

· The client scheduler logs are located in /local/etc/tsm/schedule_log* or in the file specified by the schedlogname parameter in the dsm.sys file.
· The scheduler log contains the messages from all events scheduled from the TSM Server, which is generally the filesystem backups.
· For MegaCentre, the scheduler logs are rotated daily to reduce the size of the file that is appended to during filesystems backups. The scheduler log only contains information about scheduled backups. It does NOT contain information about dsmc commands entered interactively (e.g. dsmc incr). The file should be examined when working on filesystem backup failures.
11/23/03 04:18:10 ANS1312E Server media mount not possible
11/23/03 04:18:10 Retry # 2 Normal File--> --snip-- ** Unsuccessful **
11/23/03 04:18:12 --- SCHEDULEREC STATUS BEGIN
11/23/03 04:18:12 Total number of objects inspected: 70,058
11/23/03 04:18:12 Total number of objects backed up: 212
11/23/03 04:18:12 Total number of objects updated: 0
11/23/03 04:18:12 Total number of objects rebound: 6,421
11/23/03 04:18:12 Total number of objects deleted: 0
11/23/03 04:18:12 Total number of objects expired: 51
11/23/03 04:18:12 Total number of objects failed: 5
11/23/03 04:18:12 Total number of bytes transferred: 111.96 MB
11/23/03 04:18:12 Data transfer time: 85.86 sec
11/23/03 04:18:12 Network data transfer rate: 1,335.24 KB/sec
11/23/03 04:18:12 Aggregate data transfer rate: 107.61 KB/sec
11/23/03 04:18:12 Objects compressed by: 0%
11/23/03 04:18:12 Elapsed processing time: 00:17:45
11/23/03 04:18:12 --- SCHEDULEREC STATUS END
11/23/03 04:18:12 ANS1312E Server media mount not possible
11/23/03 04:18:12 --- SCHEDULEREC OBJECT END FS_BACKUP_1 11/23/03 04:00:00
11/23/03 04:18:12 ANS1512E Scheduled event 'FS_BACKUP_1' failed. Return code = 12.
11/23/03 04:18:12 Sending results for scheduled event 'FS_BACKUP_1'.
11/23/03 04:18:12 Results sent to server for scheduled event 'FS_BACKUP_1'.


TSM Inactive Files
· Active vs. inactive files are a unique feature of TSM when performing regular filesystem backups.
· TSM considers the most recent backup version to be the active version. The most recent backup version is considered an inactive version if the file was deleted or excluded at the time the last incremental backup was run. Any other backup version is considered an inactive version.
· When performing restores, you must specify inactive to restore any inactive files. By default, all TSM restore and query commands assume the files are active.
· Command Line: add the -inactive (or -ina) option, e.g. # dsmc restore -ina ...
· WEB/GUI: View ---> Select “active and inactive files”


TSM Backup/Archive Client
Sample dsmc restore/retrieve Commands
# dsmc restore /tmp/file.to.restore
    Restore the active version of an individual file to its original location
# dsmc restore -pick -ina /oracle/DC0/saparch/archDC0.log
    Use the pick option to select a particular version of an inactive file to restore
# dsmc restore "filename*"
    Restore the latest backup version of filenames matching a wildcard
# dsmc restore filename newfilename
    Restore the latest backup version of a file to a renamed file or different location
# dsmc restore -ina -pick -replace=prompt -subdir=yes directory/ newdirectory/
    Restore an inactive version of a directory with a pick list and rename it (directories must end in a /)
# dsmc restore -subdir=yes -pitdate=MM/DD/YYYY -pittime=15:00 directory/
    Restore a directory to a particular point in time (directories must end in a /)
NOTE: If you need to restore a file that was backed up with the "archive" function, use the "dsmc retrieve" command. It has options similar to the "dsmc restore" command.
# dsmc archive directory/ -subdir=yes -archmc=db_online_1 -desc="save before restore"
    Archive a directory (e.g. before a restore)
# dsmc help
    Access the dsmc help menus

Sample dsmc backup/query Commands


# dsmc incr <--- incremental backup of the entire system
# dsmc incr /local/logs/brreports/scripts/ <--- incremental backup of one directory (must end in a /)
# dsmc selective /local/logs/brreports/scripts/ <--- selective (full) backup of one directory (must end in a /)
# dsmc q fi <--- query filesystems to see the last successful backup
# dsmc q sched <--- query to see when the next scheduled backup runs
# dsmc q backup -ina /local/backup/image/bos.obj.icb0006 <--- query to see all versions of a backed-up file
# dsmc q backup -ina "/local/backup*/image*/bos.obj.icb0006" <--- use quotes if the query contains a wildcard
# dsmc q inclexcl <--- query to display the files included/excluded

NOTE
You can also archive files with the dsmc archive command.
Archives are retention-based instead of version-based.


Example
====================================================================
1. 17th August, 06:00 - Restore files from the oldest available until 31st Dec 2004
tsm> restore -inactive -subdir=yes -todate=12/31/2004 -fromowner=cdusr02 "/local/data/archive/*" /restore_temp/

2. 18th August, 06:00 - Restore files from 31st Dec 2004 until 30th June 2005
tsm> restore -inactive -subdir=yes -fromdate=12/31/2004 -todate=06/30/2005 -fromowner=cdusr02 "/local/data/archive/*" /restore_temp/

3. 19th August, 06:00 - Restore files from 30th June 2005 until 1st Jan 2006
tsm> restore -inactive -subdir=yes -fromdate=06/30/2005 -todate=01/01/2006 -fromowner=cdusr02 "/local/data/archive/*" /restore_temp/
====================================================================
The destination (/restore_temp/) is used because testing showed that TSM creates only one parent directory from the asterisk (*) specified in the source (/local/data/archive/*) parameter.
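The three passes above can be scripted in sequence. A minimal sketch, using the dates, owner, and paths from the example (the echo acts as a dry-run guard; replace it with eval, or drop it, to actually run the restores):

```shell
#!/bin/sh
# Run the date-ranged restores above in sequence ("-" means no -fromdate,
# i.e. restore from the oldest available). Dry run: commands are printed.
CMDS=""
for r in "-:12/31/2004" "12/31/2004:06/30/2005" "06/30/2005:01/01/2006"; do
    from=${r%%:*}; to=${r##*:}
    opts="-todate=$to"
    if [ "$from" != "-" ]; then
        opts="-fromdate=$from $opts"
    fi
    cmd="dsmc restore -inactive -subdir=yes $opts -fromowner=cdusr02 \"/local/data/archive/*\" /restore_temp/"
    CMDS="$CMDS$cmd
"
    echo "$cmd"
done
```

Running each range as its own pass keeps every restore window small, which matches the one-pass-per-morning schedule shown above.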


Check for a backup file (date format MM/DD/YYYY):
#dsmc query backup "/local/pbridge/data/acq_file_20" -fromdate=08/14/2005 -todate=08/14/2005 -inactive

Restore file acq_file_20 with a new name, acq_file_20_aug14, in directory /local/pbridge/data/:

#dsmc restore -fromdate=08/14/2005 -todate=08/14/2005 -inactive "/local/pbridge/data/acq_file_20" /local/pbridge/data/acq_file_20_aug14




TSM Central Scheduler
Used for scheduling TSM administrative commands and all filesystem backups.
What can be scheduled :
• Client commands (Backup / restore)
• Administrative commands.
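On the server side, a client backup schedule and its node association are created with the define schedule and define association administrative commands. A sketch only, with values mirroring the DOM_UNIX schedule shown in the sample output below (node list truncated):

```
tsm> define schedule DOM_UNIX FS_BACKUP_1 action=incremental \
       starttime=22:30 duration=2 durunits=hours period=1 perunits=days
tsm> define association DOM_UNIX FS_BACKUP_1 HOUICBUX1003,HOUICBUX1004
```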

Sample Filesystem Backup Schedule

tsm: ICB0008_TSM>q sched
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
DOM_SAP FS_BACKUP_1 Inc Bk 08/28/01 04:00:00 4 H 1 D Any
DOM_UNIX FS_BACKUP_1 Inc Bk 09/27/01 22:30:00 2 H 1 D Any

tsm: ICB0008_TSM>q assoc
Policy Domain Name: DOM_SAP
Schedule Name: FS_BACKUP_1
Associated Nodes: ICB0001 ICB0003 ICB0004 ICB0006 ICB0008 ICB0009 ICB0010 ICB0011 ICB0015 ICB0016 ICB0017 ICB0018 ICB0030 ICB0031 ICB0032 ICB0033 ICB0034 ICB0035 ICB0036 ICB0037 ICB0038 ICB0039 ICB0040 ICB0041 ICB0042 ICB0043 ICB0044 ICB0045 ICB0046 ICB0118 ICB0119

Policy Domain Name: DOM_UNIX
Schedule Name: FS_BACKUP_1
Associated Nodes: HOUICBUX1003 HOUICBUX1004 HOUICBUX1005 HOUICBUX1006 HOUICBUX1007 HOUICBUX1008 HOUICBUX1009 HOUICBUX1010 HOUICBUX1011 HOUICBUX1012 HOUICBUX1014 HOUICBUX1015 HOUICBUX1016


Support problems
“dsmc sched” not running
Restart the scheduler either via the host-specific method (check the boot method) or /local/backup/sap/tsm/scripts/TSMclient_Start
Verify that the scheduler is active: ps -aef | grep sched


Correcting Filesystem Backup Failures
# su - root
# ps -ef | grep sched <--- check the TSM scheduler
# dsmc q fe
# dsmc q fi
# dsmc q se <--- if there is no password prompt, continue; otherwise correct the stored password
# dsmc q se
ok>
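The scheduler check above can be wrapped in a small sketch. The restart path is the site-specific script mentioned earlier; the actual restart is left as printed advice rather than executed:

```shell
#!/bin/sh
# Report whether the TSM client scheduler is running; sets STATUS=up/down.
START=/local/backup/sap/tsm/scripts/TSMclient_Start   # site-specific path
if ps -ef | grep '[d]smc sched' > /dev/null 2>&1; then
    STATUS=up
    echo "dsmc scheduler is running"
else
    STATUS=down
    echo "dsmc scheduler is NOT running - restart it, e.g.:"
    echo "  nohup $START > /dev/null 2>&1 &"
fi
```

The bracketed grep pattern '[d]smc sched' keeps the grep process itself from matching.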



TSM Retention Periods
Retention is set per the requirements of individual clients, or follows the default standard for the organization.

TSM Client Daemon Start
AIX:Usually in /local/etc/scripts/rc.local
For example:
nohup /usr/tivoli/tsm/client/ba/bin/dsmc sched -password=`hostname` 1> /dev/null 2>&1 &
(or it may execute the TSMclient_Start script)

Solaris:
/etc/rc3.d/S30tivoli
nohup dsmc sched -optfile=dsm.opt2 1> /dev/null 2>&1 &

NIS

NIS (Network Information System)
*Provides central management of users, groups, email aliases, hostnames, MAC address mappings, RPC/port service lookups etc.
Exploring NIS Maps
# ypwhich <--- verify domain binding
# ypwhich -m <--- show maps available on the domain
# ypcat mapname <--- show text format of a map
# ypcat -k netgroup <--- show the key in addition to the data
# ypcat -x <--- show aliases available for maps

Automount Master Map
*Config file is /etc/auto.master or /etc/auto_master, used to start automount processes to monitor mount points.
Example:
# cat /etc/auto.master
/misc /etc/auto.misc

The indirect directory to watch is /misc. The details about which resources to mount under /misc are contained in /etc/auto.misc.

# /etc/init.d/autofs start
# ps -ef | grep automount
root 2050 1 0 21:22 pts/1 00:00:00 /usr/sbin/automount /misc file /etc/auto.misc


Automount


The automount process is started with a directory to watch and a map of resources to manage under the mount point (by default, mounts expire after 5 minutes of inactivity).
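The expiry timeout is tunable. On Solaris 10, for example, it lives in /etc/default/autofs (a sketch; 600 is an example value in seconds, not necessarily your system's default):

```
# /etc/default/autofs (Solaris 10)
AUTOMOUNT_TIMEOUT=600    # seconds a filesystem may stay idle before unmount
```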

Indirect maps:
*Names of directories under the master mount point being watched.
*Mounting options.
*Resource to mount.
*Example: Indirect auto.misc
# cat /etc/auto.misc
cd -fstype=iso9660,ro :/dev/cdrom
emacs -r bogus.host.com:/emacs

Direct maps (not in Linux):
*Fully qualified mount points.
*Mounting options.
*Resource to mount.
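A direct map might look like this (a sketch; docserver and the paths are hypothetical; on Solaris the master map references direct maps through the /- mount point):

```
# /etc/auto_master
/-    auto_direct

# /etc/auto_direct  (direct map: fully qualified mount points)
/usr/share/docs    -ro    docserver:/export/docs
```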


NIS and automount
*automount can facilitate a common network directory through NFS.
*The NIS network directories will correspond to an indirect map and mount point.
*The Master map and associated indirect maps can be pushed through NIS


NIS auto.master


$ ypcat -k auto.master
/home auto.home

$ ypcat -k auto.home
* cg1:/home/& <--- "*" matches any directory reference under /home (indirect dir)
                   "&" substitutes the key into the target mount

Setting Up an NIS Client
# domainname ten.nis
# vi /etc/yp.conf
ypserver 192.168.1.98
# vi /etc/nsswitch.conf
passwd: compat
group: compat

Others to look at: automount, hosts, ethers, networks, aliases. Use compat if you want to simulate typical Unix behavior, mandating the inclusion of "+/-" lines in /etc/passwd and /etc/group. Use files nis to avoid having to use "+/-" lines, but lose the ability to restrict NIS authentication.
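The two styles look like this in /etc/nsswitch.conf (a sketch of the alternatives just described):

```
# /etc/nsswitch.conf
# compat style: honors +/- entries in /etc/passwd and /etc/group
passwd: compat
group:  compat

# files-then-nis style: no +/- lines needed, but no per-user restriction
# passwd: files nis
# group:  files nis
```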
# ps -ef | grep portmap

Verify portmap is running.
# ypbind
# ypwhich
192.168.1.98
# ypcat passwd
ccox:ZBaMuOdCZStAE:500:10:Chris Cox:/home/ccox:/bin/ksh
# /etc/init.d/autofs start <--Start if not running

Common NIS Maps
You can see the maps being advertised/pushed from the Master with ypwhich -m.
$ ypwhich -m
netid.byname server1
passwd.byuid server1
services.byname server1
services.byservicename server1
auto.home server1
netgroup server1
passwd.byname server1
group.byname server1
netgroup.byuser server1
netgroup.byhost server1
group.bygid server1
ypservers server1
rpc.byname server1
auto.master server1

Note: There is no single passwd map; rather there are two passwd maps, one keyed by name and one keyed by UID. The passwd map name is an alias for passwd.byname.
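The byname/byuid pairing shows up for several maps. A small sketch that groups a ypwhich -m listing into map families, using an embedded sample in place of live ypwhich output:

```shell
#!/bin/sh
# Group map names by family (the part before the first '.').
# The sample variable stands in for `ypwhich -m` output on a live client.
sample='netid.byname server1
passwd.byuid server1
passwd.byname server1
group.byname server1
group.bygid server1'
FAMILIES=$(echo "$sample" | awk -F'[. ]' '{print $1}' | sort -u)
echo "$FAMILIES"
```

On a bound client, replace the sample with `ypwhich -m` to see the real map families.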

Setting up NIS Master
# domainname ten.nis
# /usr/lib/yp/ypinit -m
# ypserv <---- start the server; to also join this domain as a client, run ypbind as well.
# rpc.yppasswdd -D /etc
# /usr/lib/yp/ypxfrd <----need this if you support NIS Slaves.

Note: the -D option to yppasswdd specifies the directory containing the original source passwd for the passwd.* maps.


Updating Maps


# useradd -m newuser
# passwd newuser
# ypcat passwd | grep newuser
# cd /var/yp
# make
updated passwd
pushed passwd
# ypcat passwd | grep newuser
newuser:axPwTTAWjfk/Y:4448:4444::/home/newuser:/bin/ksh


General NIS Problems
· ypcat lets anyone view maps, so encrypted password strings are visible in clear text.
· RPC is insecure.
· No /etc/shadow map support, so no password aging.
· Clients can hang on boot if the NIS server is not available.

NIS Confusion
· Originally, NIS was designed to handle host resolution; with DNS, pushing an NIS host map is redundant and can create consistency problems. Solution: do not push a hosts map; let host resolution use DNS instead.
· May have to run ypserv with the -b option.
· Using an NIS domain name that is the same as the Internet domain name causes confusion and is generally not recommended today.

NIS+
· Sun’s upgrade to NIS
o Secure rpc support.
o Password aging.
o Better replication support for servers.
o Very granular security features.


NIS Security - sort of

# /var/yp/securenets
#
# allow connections from the local host -- necessary
# ("host" is shorthand for netmask 255.255.255.255)
host 127.0.0.1
# allow anyone on the 192.168.1.0 net
255.255.255.0 192.168.1.0