Saturday, March 24, 2012

ODM

ODM: The Object Data Manager (ODM) is a repository in which the OS keeps information regarding your system, such as devices, software, or TCP/IP configuration.

ODM information is stored in /usr/lib/objrepos, /usr/share/lib/objrepos, /etc/objrepos.

ODM commands: odmadd, odmchange, odmcreate, odmshow, odmdelete, odmdrop, odmget.
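odmget prints each object as a stanza of attribute = value pairs. The sketch below shows a typical query against the customized devices class; the stanza content is an illustrative, made-up hdisk0 entry, and the parsing runs against a captured sample so the awk part works anywhere:

```shell
# On AIX you would run (commented out here):
#   odmget -q "name=hdisk0" CuDv
# Sample stanza in the format odmget produces; the values are made up.
sample='CuDv:
        name = "hdisk0"
        status = 1
        chgstatus = 2
        location = "00-08-00"'
# Extract the quoted value of the name attribute.
name=$(printf '%s\n' "$sample" | awk -F'"' '/name =/ {print $2}')
echo "$name"
```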

To start smit in graphical mode: smit -m

Creating alias: alias rm=/usr/sbin/linux/rm

export PATH=/usr/linux/bin:$PATH; print $PATH
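The same pattern in a runnable form; /opt/freeware/bin is only an example directory, not one the system must have:

```shell
# Prepend a directory to the search path and export it.
PATH=/opt/freeware/bin:$PATH
export PATH
# In ksh you could also use: print $PATH
echo "$PATH"
```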

Troubleshooting on boot process

Accessing a system that will not boot: Press F5 on a PCI-based system to boot from tape/CD-ROM → insert volume 1 of the installation media → select maintenance mode for system recovery → access a root volume group → select the volume group.

Damaged boot image: Access the system in maintenance mode as above → check the / and /tmp file system sizes → determine the boot disk using lslv -m hd5 → recreate the boot image using bosboot -a -d /dev/hdiskN → check the error log for CHECKSTOP errors (if such errors are found, hardware is probably failing) → shut down and restart the system.

Corrupted file system or corrupted JFS log: Access the system in maintenance mode as above → run fsck on all file systems → format the JFS log using /usr/sbin/logform /dev/hd8 → recreate the boot image using bosboot -a -d /dev/hdiskN.

Corrupted superblock: If fsck indicates that block 8 is corrupted, the superblock for the file system is corrupted and needs to be repaired by copying the backup superblock over it (dd count=1 bs=4k skip=31 seek=1 if=/dev/hdN of=/dev/hdN) → rebuild the JFS log using /usr/sbin/logform /dev/hd8 → mount the root and /usr file systems (mount /dev/hd4 /mnt; mount /usr) → copy the system configuration to a backup directory (e.g. cp /mnt/etc/objrepos/Cu* /mnt/etc/objrepos/backup) → copy the configuration from the RAM file system (cp /etc/objrepos/Cu* /mnt/etc/objrepos) → unmount all file systems → save the clean ODM to the BLV using savebase -d /dev/hdiskN → reboot.
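The dd repair above copies the backup superblock (block 31, with 4 KB blocks) over the damaged primary one (block 1). The mechanics can be rehearsed safely on a scratch file instead of a raw device; conv=notrunc is added here only because a regular file, unlike a disk device, would otherwise be truncated:

```shell
# Build a 32-block (4 KB blocks) scratch image standing in for the disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=4k count=32 2>/dev/null
# Stamp a recognizable marker where the backup superblock lives (block 31).
printf 'BACKUP_SB' | dd of="$img" bs=4k seek=31 conv=notrunc 2>/dev/null
# The repair: copy block 31 over block 1, same shape as the command above.
dd count=1 bs=4k skip=31 seek=1 if="$img" of="$img" conv=notrunc 2>/dev/null
# Block 1 now holds the backup copy.
restored=$(dd if="$img" bs=4k skip=1 count=1 2>/dev/null | head -c 9)
echo "$restored"
rm -f "$img"
```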

Corrupted /etc/inittab file: Check for an empty or missing /etc/inittab file; also check for problems with /etc/environment, /bin/sh, /bin/bsh, /etc/fsck, and /etc/profile → reboot.

Run level → a selected group of processes. Run level 2 is multi-user and the default; S, s, M, and m are for maintenance mode.

Identifying the current run level → cat /etc/.init.state

Displaying the history of previous run levels: /usr/lib/acct/fwtmp < /var/adm/wtmp | grep run-level

Changing system run levels: telinit M

Run level scripts allow users to start and stop selected applications while changing the run level. Scripts beginning with K are stop scripts and those beginning with S are start scripts.
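A minimal sketch of that convention, using a throwaway directory and an invented script name (myapp): K* scripts are run with the stop argument first, then S* scripts with start:

```shell
rcdir=$(mktemp -d)
# Each rc script receives "start" or "stop" as its first argument.
printf '#!/bin/sh\necho "$1 myapp"\n' > "$rcdir/Kmyapp"
printf '#!/bin/sh\necho "$1 myapp"\n' > "$rcdir/Smyapp"
chmod +x "$rcdir/Kmyapp" "$rcdir/Smyapp"
out=$(
  for f in "$rcdir"/K*; do [ -x "$f" ] && "$f" stop;  done
  for f in "$rcdir"/S*; do [ -x "$f" ] && "$f" start; done
)
echo "$out"
rm -rf "$rcdir"
```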

Go to maintenance mode by using shutdown -m

The rc.boot file: The /sbin/rc.boot file is a shell script called by init. It configures devices, boots from disk, varies on the root volume group, enables file systems, and calls the BOS installation programs.

/etc/rc file: It performs normal startup initialization: varies on all volume groups, activates all paging spaces (swapon -a), configures all dump devices (sysdumpdev -q), performs file system checks (fsck -fp), and mounts all file systems.

/etc/rc.net: It contains network configuration information.

/etc/rc.tcpip: It starts all network-related daemons (inetd, gated, routed, timed, rwhod).

AIX Boot Process

When the server is powered on, the power-on self test (POST) runs and checks the hardware
On successful completion of POST, the boot logical volume is located using the bootlist
The AIX boot logical volume contains the AIX kernel, rc.boot, a reduced ODM and boot commands. The AIX kernel is loaded into RAM.
The kernel takes control and creates a RAM file system.
The kernel starts /etc/init from the RAM file system
init runs rc.boot 1 (rc.boot phase one), which configures the base devices.
rc.boot 1 calls the restbase command, which copies the ODM files from the boot logical volume to the RAM file system
rc.boot 1 calls the cfgmgr -f command to configure the base devices
rc.boot 1 calls the bootinfo -b command to determine the last boot device
Then init starts rc.boot 2, which activates rootvg
rc.boot 2 calls the ipl_varyon command to activate rootvg
rc.boot 2 runs fsck -f /dev/hd4 and mounts the partition on / of the RAM file system
rc.boot 2 runs fsck -f /dev/hd2 and mounts the /usr file system
rc.boot 2 runs fsck -f /dev/hd9var, mounts the /var file system, runs the copycore command to copy the core dump, if available, from /dev/hd6 to the /var/adm/ras/vmcore.0 file, and then unmounts /var
rc.boot 2 runs swapon /dev/hd6 and activates the paging space
rc.boot 2 runs migratedev and copies the device files from the RAM file system to the / file system
rc.boot 2 runs cp /../etc/objrepos/Cu* /etc/objrepos, copying the ODM files from the RAM file system to the / file system
rc.boot 2 runs mount /dev/hd9var and mounts the /var file system
rc.boot 2 copies the boot log messages to alog
rc.boot 2 removes the RAM file system
The kernel starts the /etc/init process from the / file system
/etc/init reads the /etc/inittab file and rc.boot 3 is started. rc.boot 3 configures the rest of the devices
rc.boot 3 runs fsck -f /dev/hd3 and mounts the /tmp file system
rc.boot 3 runs syncvg rootvg &
rc.boot 3 runs cfgmgr -p2 or cfgmgr -p3 to configure the rest of the devices; cfgmgr -p2 is used when the physical key on an MCA system is in normal mode, and cfgmgr -p3 when it is in service mode
rc.boot 3 runs the cfgcon command to configure the console
rc.boot 3 runs the savebase command to copy the ODM files from /dev/hd4 to /dev/hd5
rc.boot 3 starts syncd 60 and the error daemon (errdemon)
rc.boot 3 turns off the LEDs
rc.boot 3 removes the /etc/nologin file
rc.boot 3 checks CuDv for chgstatus=3 and displays the missing devices on the console
The next line of /etc/inittab is executed


/etc/inittab file format: identifier:runlevel:action:command
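Those four fields can be pulled apart with a colon-separated read; the sample line is adapted from a typical AIX entry (run levels simplified to 2 for the example):

```shell
# identifier:runlevel:action:command -- the last field keeps any remaining text.
line='rctcpip:2:wait:/etc/rc.tcpip > /dev/console 2>&1'
IFS=: read -r ident runlevel action cmd <<EOF
$line
EOF
echo "id=$ident runlevel=$runlevel action=$action"
echo "command=$cmd"
```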

mkitab → adds records to the /etc/inittab file

lsitab → lists records in the /etc/inittab file

chitab → changes records in the /etc/inittab file

rmitab → removes records from the /etc/inittab file

To display a boot list: bootlist -m normal -o

To change a boot list: bootlist -m normal cd0 hdisk0

PV, LV, VG ODM Commands

VG type    Max PVs  Max LVs  Max PPs/VG  Max PP size
Normal     32       256      32512       1 GB
Big        128      512      130048      1 GB
Scalable   1024     4096     2097152     128 GB
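An upper bound on addressable space per VG type follows from the table: max PPs/VG times max PP size (the limit actually available also depends on the -s PP size chosen at mkvg time):

```shell
# type, max PPs per VG, max PP size in GB (from the table above)
out=$(
  for spec in "Normal 32512 1" "Big 130048 1" "Scalable 2097152 128"; do
    set -- $spec
    echo "$1 VG: $(( $2 * $3 )) GB max"
  done
)
echo "$out"
```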

PVIDs are stored in the ODM.

Creating a PVID: chdev -l hdisk3 -a pv=yes

Clearing the PVID: chdev -l hdisk3 -a pv=clear

Display the allocation of PPs to LVs: lspv -p hdisk0

Display the layout of a PV: lspv -M hdisk0

Disabling partition allocation for a physical volume: chpv -an hdisk2 (Allocatable=no)

Enabling partition allocation for a physical volume: chpv -ay hdisk2 (Allocatable=yes)

Change the disk to unavailable: chpv -vr hdisk2 (PV state=removed)

Change the disk to available: chpv -va hdisk2 (PV state=active)

Clean the boot record: chpv -c hdisk1

To define hdisk3 as a hot spare: chpv -hy hdisk3

To remove hdisk3 as a hot spare: chpv -hn hdisk3

Migrating two disks: migratepv hdisk1 hdisk2

Migrate only the PPs that belong to a particular LV: migratepv -l testlv hdisk1 hdisk5

Move data from one partition located on a physical disk to another physical partition on a different disk: migratelp testlv/1/2 hdisk5/123

Logical track group (LTG) size is the maximum allowed transfer size for a disk I/O operation; display it with lquerypv -M hdisk0.


VOLUME GROUPS

For each VG, two device driver files are created under /dev.

Creating a VG: mkvg -y vg1 -s 64 -V 99 hdisk4

Creating a big VG: mkvg -B -y vg1 -s 128 -f -n -V 101 hdisk2

Creating a scalable VG: mkvg -S -y vg1 -s 128 -f hdisk3 hdisk4 hdisk5

Adding disks that require more than 1016 PPs/PV: chvg -t 2 vg1

Information about a VG read from the VGDA located on a disk: lsvg -n vg1

Set the auto-varyon flag for a VG: chvg -ay newvg

Clear the auto-varyon flag for a VG: chvg -an newvg
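The t-factor trades PVs for PPs: on a normal VG, the 1016 PPs/PV limit is multiplied by the factor while the 32-PV limit is divided by it. A quick check of the arithmetic for chvg -t 2:

```shell
factor=2
max_pps_per_pv=$(( 1016 * factor ))  # per-PV PP limit scales up
max_pvs=$(( 32 / factor ))           # normal-VG PV limit scales down
echo "t=$factor: $max_pps_per_pv PPs/PV, at most $max_pvs PVs"
```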

Quorum ensures data integrity in the event of disk failure. A quorum is a state in which 51 percent or more of the PVs in a VG are accessible. When quorum is lost, the VG varies itself off.
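The 51 percent rule can be sketched as a simple check; the counts below are made-up examples (on a real system lsvg reports the VGDA totals):

```shell
# With 3 VGDA copies in the VG and 2 still accessible, quorum holds (66% >= 51%).
total=3
accessible=2
if [ $(( accessible * 100 / total )) -ge 51 ]; then state=held; else state=lost; fi
echo "quorum $state"
```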

Turn off quorum: chvg -Qn testvg

Turn on quorum: chvg -Qy testvg

To change the maximum number of PPs per PV: chvg -t 16 testvg

To change a normal VG to a scalable VG: 1. varyoffvg ttt  2. chvg -G ttt  3. varyonvg ttt

Change the LTG size: chvg -L 128 testvg (VGs are created with a variable logical track group size)

Hot spare: every PP on the designated physical volume must be free. A PP located on a failing disk will be copied from its mirror copy to one or more disks from the hot spare pool.

Designate hdisk4 as a hot spare: chpv -hy hdisk4

Migrate data from a failing disk to a spare disk: chvg -hy vgname

Change the synchronization policy: chvg -sy testvg (the synchronization policy controls automatic synchronization of stale partitions within the VG)

Change the maximum number of PPs within a VG: chvg -P 2048 testvg

Change the maximum number of LVs per VG: chvg -v 4096 testvg

To remove the VG lock: chvg -u testvg

Extending a volume group: extendvg testvg hdisk3 (if the disk already has a PVID, use extendvg -f testvg hdisk3)

Reducing a disk from the VG: reducevg testvg hdisk3

Synchronize the ODM information: synclvodm testvg

To move data from one system to another, use the exportvg command. The exportvg command only removes the VG definition from the ODM and does not delete any data from the physical disks: exportvg testvg

importvg: recreates the reference to the VG data and makes that data available. This command reads the VGDA of one of the PVs that are part of the VG. It uses redefinevg to find all other disks that belong to the VG, adds the corresponding entries to the ODM database, and updates /etc/filesystems with the new values: importvg -y testvg hdisk7
Server A: lsvg -l app1vg
Server A: umount /app1
Server A: varyoffvg app1vg
Server B: lspv | grep app1vg
Server B: exportvg app1vg
Server B: importvg -y app1vg -n -V 90 vpath0
Server B: chvg -an app1vg
Server B: varyoffvg app1vg
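The move can be rehearsed as a dry run that only prints each command; the run() wrapper is an invention for the sketch, and the VG/device names are those used above:

```shell
# Dry-run helper: print the command instead of executing it.
run() { echo "+ $*"; }

# Server A: stop using the VG and deactivate it.
run umount /app1
run varyoffvg app1vg
# Server B: drop any stale definition, then import the disks.
run exportvg app1vg
run importvg -y app1vg vpath0
run chvg -an app1vg
```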

Varying on a volume group : varyonvg testvg

Varying off a volume group : varyoffvg testvg

Reorganizing a volume group: This command is used to reorganize physical partitions within a VG. The PPs will be rearranged on the disks according to the intra-physical and inter-physical allocation policies: reorgvg testvg

Synchronize the VG: syncvg -v testvg; syncvg -p hdisk4 hdisk5

Mirroring a volume group: lsvg -p rootvg; extendvg rootvg hdisk1; mirrorvg rootvg; bosboot -ad /dev/hdisk1; bootlist -m normal hdisk0 hdisk1

Splitting a volume group: splitvg -y newvg -c 1 testvg

Rejoin the two copies: joinvg testvg


Logical Volumes:

Create an LV: mklv -y lv3 -t jfs2 -a im testvg 10 hdisk5

Remove an LV: umount /fs1; rmlv lv1

Delete all data belonging to logical volume lv1 on physical volume hdisk7: rmlv -p hdisk7 lv1

Display the number of logical partitions and their corresponding physical partitions: lslv -m lv1

Display information about logical volume testlv read from the VGDA located on hdisk6: lslv -n hdisk6 testlv

Display the LVCB: getlvcb -AT lv1

Increasing the size of an LV: extendlv -a ie -e x lv1 3 hdisk5 hdisk6

Copying an LV: cplv -v dumpvg -y lv8 lv1

Creating copies of an LV: mklvcopy -k lv1 3 hdisk7 &

Splitting an LV: umount /fs1; splitlvcopy -y copylv testlv 2

Removing a copy of an LV: rmlvcopy testlv 2 hdisk6

Changing the maximum number of logical partitions to 1000: chlv -x 1000 lv1


Installation :


New and complete overwrite installation: for a new machine, for overwriting an existing installation, or for reassigning your hard disks.

Migration: upgrades between AIX versions (for example, from 5.2 to 5.3). This method preserves most file systems, including the root volume group.

Preservation installation: use this if you want to preserve the user data; see /etc/preserve.list. This installation overwrites the /usr, /tmp, /var, and / file systems by default. The /etc/filesystems file is listed in /etc/preserve.list by default.

AIX Short Notes


LVM:

VG: One or more PVs can make up a VG.

Within each volume group one or more logical volumes can be defined.

VGDA (Volume Group Descriptor Area) is an area on the disk that contains information pertinent to the VG to which the PV belongs. It also includes information about the properties and status of all physical and logical volumes that are part of the VG.

VGSA (Volume Group Status Area) describes the state of all PPs on all physical volumes within a volume group. The VGSA indicates whether a physical partition contains accurate or stale information.

LVCB (Logical Volume Control Block) contains important information about the logical volume, such as the number of logical partitions and the disk allocation policy.