VIOS LAN Configuration for Jumbo Frames

1) CHECK physical adapter parameters

Verify that flow_ctrl, jumbo_frames, large_receive and large_send are set to "yes" on the physical adapters that make up the EtherChannel:

lsattr -El entX | egrep "flow_ctrl|jumbo|large"

2) CONFIGURE jumbo frames on the EtherChannel

Set use_jumbo_frame=yes on the EC device after taking it offline:

rmdev -l entX   # where entX represents the EC device if one exists

chdev -l entX -a use_jumbo_frame=yes

mkdev -l entX   # where entX represents the EC device

Verify:

lsattr -El entX | egrep "mode|jumbo"

3) SET virtual Ethernet parameters

Set the following parameters on the interface that carries the IP address:

$ chdev -dev enX -attr mtu_bypass=on rfc1323=1 mtu=9000 

Set the following parameters on the virtual Ethernet (trunk) adapters that make up the SEA (they are not needed on the control channel):

max_buf_huge=128 

max_buf_large=256 

max_buf_medium=2048 

max_buf_small=4096 

max_buf_tiny=4096 

min_buf_huge=127 

min_buf_large=255 

min_buf_medium=2047 

min_buf_small=4095 

min_buf_tiny=4095 

chdev -dev entX -perm -attr max_buf_huge=128 max_buf_large=256 max_buf_medium=2048 max_buf_small=4096 max_buf_tiny=4096 min_buf_huge=127 min_buf_large=255 min_buf_medium=2047 min_buf_small=4095 min_buf_tiny=4095 
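
To read the values back once the adapter has been reconfigured, a quick check from the padmin shell (entX is the trunk virtual adapter):

lsdev -dev entX -attr | grep buf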

4) CONFIGURE the SEA

Configure the SEA, enabling largesend, large_receive and jumbo_frames.

In this example, having a single trunk, auto mode was set (ha_mode=auto); for a SEA with multiple trunk adapters, set load sharing (ha_mode=sharing):

mkvdev -sea entXX -vadapter entYY -default entYY -defaultid 1 -attr ha_mode=auto ctl_chan=entZZ largesend=1 jumbo_frames=yes large_receive=yes adapter_reset=no thread=0 

If the SEA already exists, it can be modified instead (put it in standby, take it offline, change the attributes, bring it back):

chdev -l entX -a ha_mode=standby

rmdev -l entX

chdev -l entX -a jumbo_frames=yes -a largesend=1 -a large_receive=yes

mkdev -l entX

chdev -l entX -a ha_mode=auto

5) SET network parameters on the client LPAR virtual Ethernet adapters

Set the following parameters on the virtual network adapter:

max_buf_huge=128 

max_buf_large=256 

max_buf_medium=2048 

max_buf_small=4096 

max_buf_tiny=4096 

min_buf_huge=127 

min_buf_large=255 

min_buf_medium=2047 

min_buf_small=4095 

min_buf_tiny=4095 

chdev -l entX -a max_buf_huge=128 -a max_buf_large=256 -a max_buf_medium=2048 -a max_buf_small=4096 -a max_buf_tiny=4096 -a min_buf_huge=127 -a min_buf_large=255 -a min_buf_medium=2047 -a min_buf_small=4095 -a min_buf_tiny=4095 -P 

chdev -l enX  -a mtu_bypass=on -a tcp_nodelay=1 -a rfc1323=1 -a mtu=9000

Reboot 
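
After the reboot, a quick check that the interface really came up with MTU 9000:

netstat -in | grep en
lsattr -El enX -a mtu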

AIX alt_disk_clone

Below are the steps used to clone the rootvg of the LPAR LPAR1 during its migration from a P6-570 to a P7-770.
I add a 50GB disk (hdisk27) that will host the rootvg of the new partition; on the old one it will be called altinst_rootvg
at the end of the operation.

root@lpar1:/#lspv
hdisk0 00cff683554eac45 rootvg active
hdisk1 00cff683554eac7b rootvg active
hdisk4 00cff683e9f9313f vgappl30 active
hdisk2 00cff68379f89cc7 vgappl31 active
hdisk3 00cff68379f89dbe vgappl32 active
hdisk5 00cff68379f89e28 vgappl33 active
hdisk6 00cff68379f89e74 vgappl34 active
hdisk7 00cff68379f89eb4 vgappl35 active
hdisk8 00cff68379f89ef9 vgappl36 active
hdisk9 00cff68379f89f36 vgappl30 active
hdisk10 00cff68379f89f7a vgappl31 active
hdisk11 00cff68379f89fbb vgappl32 active
hdisk12 00cff68379f89ffc vgappl33 active
hdisk13 00cff68379f8a046 vgappl34 active
hdisk14 00cff68379f8a08b vgappl35 active
hdisk15 00cff68379f8a0d1 vgappl36 active
hdisk16 00cff6837a2fb38a tempvg active
hdisk17 00cff683ad734158 tempvg active
hdisk18 00cff6837b91c30d vgappl30 active
hdisk19 00cff683d38e1716 swapvg active
hdisk20 00cff6830a99978f vgappl30 active
hdisk21 00cff6830a999885 vgappl31 active
hdisk22 00cff6830a9998f1 vgappl32 active
hdisk23 00cff6830a999958 vgappl33 active
hdisk24 00cff6830a9999c5 vgappl34 active
hdisk25 00cff6830a999a4e vgappl35 active
hdisk26 00cff6830a999ace vgappl36 active
hdisk27 00cff6830915c025 None

 

Note that if rootvg is mirrored the operation will fail for lack of space; remove the mirror before proceeding:

root@lpar1:/#unmirrorvg rootvg hdisk1
0516-1246 rmlvcopy: If hd5 is the boot logical volume, please run 'chpv -c <diskname>'
as root user to clear the boot record and avoid a potential boot
off an old boot image that may reside on the disk from which this
logical volume is moved/removed.
0516-1804 chvg: The quorum change takes effect immediately.
0516-1144 unmirrorvg: rootvg successfully unmirrored, user should perform
bosboot of system to reinitialize boot records. Then, user must modify
bootlist to just include: hdisk0.

 

Let's check the mirror status:

root@lpar1:/#lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 981 588 08..00..188..196..196
hdisk1 active 981 981 197..196..196..196..196

 

OK, now I remove hdisk1 from rootvg:

root@lpar1:/#reducevg rootvg hdisk1
root@lpar1:/#lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 981 588 08..00..188..196..196

 

Another issue that can prevent the operation is the presence of LVs whose names are longer than 12 characters,
since the procedure adds a prefix to the original logical volume names.
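
A quick way to spot offending names before starting (a one-liner sketch that skips the two header lines of the lsvg output):

lsvg -l rootvg | awk 'NR>2 && length($1)>12 {print $1}'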

root@lpar1:/#smitty alt_clone

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
* Target Disk(s) to install [hdisk27] +
Phase to execute all +
image.data file [] /
Exclude list [] /

Bundle to install [] +
-OR-
Fileset(s) to install []

Fix bundle to install []
-OR-
Fixes to install []

Directory or Device with images []
(required if filesets, bundles or fixes used)

installp Flags
COMMIT software updates? yes +
SAVE replaced files? no +
AUTOMATICALLY install requisite software? yes +
EXTEND file systems if space needed? yes +
OVERWRITE same or newer versions? no +
VERIFY install and check file sizes? no +
ACCEPT new license agreements? yes +

Customization script [] /
Set bootlist to boot from this disk
on next reboot? no +
Reboot when complete? no +
Verbose output? yes +
Debug output? no +

 

Be careful not to select the bootlist change or the "reboot when complete" option.

Selecting verbose output will show the list of files being copied to the new disk; not really useful.
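
The same clone can also be run non-interactively; a minimal command-line equivalent of the smitty panel above (the -B flag, which should leave the bootlist untouched, is worth double-checking on your AIX level):

alt_disk_copy -d hdisk27 -B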

At the end we will have:

root@lpar1:/#lspv
hdisk0 00cff683554eac45 rootvg active
hdisk1 00cff683554eac7b None
hdisk4 00cff683e9f9313f vgappl30 active
hdisk2 00cff68379f89cc7 vgappl31 active
hdisk3 00cff68379f89dbe vgappl32 active
hdisk5 00cff68379f89e28 vgappl33 active
hdisk6 00cff68379f89e74 vgappl34 active
hdisk7 00cff68379f89eb4 vgappl35 active
hdisk8 00cff68379f89ef9 vgappl36 active
hdisk9 00cff68379f89f36 vgappl30 active
hdisk10 00cff68379f89f7a vgappl31 active
hdisk11 00cff68379f89fbb vgappl32 active
hdisk12 00cff68379f89ffc vgappl33 active
hdisk13 00cff68379f8a046 vgappl34 active
hdisk14 00cff68379f8a08b vgappl35 active
hdisk15 00cff68379f8a0d1 vgappl36 active
hdisk16 00cff6837a2fb38a tempvg active
hdisk17 00cff683ad734158 tempvg active
hdisk18 00cff6837b91c30d vgappl30 active
hdisk19 00cff683d38e1716 swapvg active
hdisk20 00cff6830a99978f vgappl30 active
hdisk21 00cff6830a999885 vgappl31 active
hdisk22 00cff6830a9998f1 vgappl32 active
hdisk23 00cff6830a999958 vgappl33 active
hdisk24 00cff6830a9999c5 vgappl34 active
hdisk25 00cff6830a999a4e vgappl35 active
hdisk26 00cff6830a999ace vgappl36 active
hdisk27 00cff6830915c025 altinst_rootvg

 

At this point we have the new rootvg (altinst_rootvg); to access it we must use the alt_rootvg_op command.
For example, we can "wake up" the VG:

root@lpar1:/#alt_rootvg_op -W -d hdisk27
Waking up altinst_rootvg volume group …

root@lpar1:/#lsvg -l altinst_rootvg
altinst_rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
alt_hd5 boot 1 1 1 closed/syncd N/A
alt_hd6 paging 16 16 1 closed/syncd N/A
alt_hd8 jfs2log 1 1 1 open/syncd N/A
alt_hd4 jfs2 8 8 1 open/syncd /alt_inst
alt_hd2 jfs2 37 37 1 open/syncd /alt_inst/usr
alt_hd9var jfs2 16 16 1 open/syncd /alt_inst/var
alt_hd3 jfs2 32 32 1 open/syncd /alt_inst/tmp
alt_hd10opt jfs2 64 64 1 open/syncd /alt_inst/opt
alt_hd11admin jfs2 2 2 1 open/syncd /alt_inst/admin
alt_dumplv0 sysdump 64 64 1 closed/syncd N/A
alt_livedump jfs2 4 4 1 open/syncd /alt_inst/var/adm/ras/livedump
alt_netbackup jfs2 64 64 1 open/syncd /alt_inst/opt/netbackup
alt_ocsinv jfs2 2 2 1 open/syncd /alt_inst/opt/ocsinventory
alt_nagios jfs2 2 2 1 open/syncd /alt_inst/opt/nagios
alt_logs jfs2 80 80 1 open/syncd /alt_inst/logs

 

and put it back to sleep:

root@lpar1:/#alt_rootvg_op -S
Putting volume group altinst_rootvg to sleep …
forced unmount of /alt_inst/var/adm/ras/livedump
forced unmount of /alt_inst/var/adm/ras/livedump
forced unmount of /alt_inst/var
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt/ocsinventory
forced unmount of /alt_inst/opt/ocsinventory
forced unmount of /alt_inst/opt/netbackup
forced unmount of /alt_inst/opt/netbackup
forced unmount of /alt_inst/opt/nagios
forced unmount of /alt_inst/opt/nagios
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/logs
forced unmount of /alt_inst/logs
forced unmount of /alt_inst/admin
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
forced unmount of /alt_inst
Fixing LV control blocks…
Fixing file system superblocks…

root@lpar1:/#lsvg
rootvg
altinst_rootvg
vgappl30
vgappl31
vgappl32
vgappl33
vgappl34
vgappl35
vgappl36
stagevg
swapvg

root@lpar1:/#lsvg -l altinst_rootvg
0516-010 : Volume group must be varied on; use varyonvg command.

 

Now all that remains is to remove every trace of the VG and the disk from this LPAR, so it can be moved to the new one.

root@lpar1:/#exportvg altinst_rootvg

root@lpar1:/#rmdev -dl hdisk27
hdisk27 deleted

 

SEA Failover definition in a dual-VIOS setup

Scenario: a dual-VIOS environment on which we want to build a Shared Ethernet Adapter with failover and multiple VLANs.

On both VIOS, define two virtual adapters on which the required VLANs will be configured, in this case VLAN 1 for the production network and VLAN 606 for management. 802.1Q must be enabled in order to assign multiple VLANs to the adapter. On the first VIOS of the pair we also set the trunk priority to 1.

On the second VIOS we replicate the same configuration, but with the trunk priority set to 2.

At this point we configure the two virtual adapters that will act as the control channel for the SEA failover. Here we use an unused VLAN ID, e.g. 999, without enabling 802.1Q or bridging.

Finally we define the SEA:

 

mkvdev -sea ent2 -vadapter ent3 -default ent3 -defaultid 1 -attr ha_mode=auto ctl_chan=ent4

ent5 Available

en5

et5

 

where ent2 is the physical adapter, in our case an EtherChannel built on two physical ports;

ent3 is the virtual adapter that carries the prod and mgmt VLANs;

defaultid is the default VLAN ID; ctl_chan is the virtual adapter on VLAN 999.
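
Once the SEA is up, the failover state can be checked on each VIOS: the entstat output includes a High Availability section showing whether the adapter is currently PRIMARY or BACKUP (a quick check; the output layout varies with the VIOS level):

entstat -all ent5 | grep -i state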

 

Configure the IP address on the SEA itself, or add a further virtual adapter so as to stay independent from the SEA and not lose connectivity during its maintenance.

 

mkvdev -vlan ent5 -tagid 606

ent6 Available

en6

et6

 

and assign the IP address to the new adapter:

 

mktcpip -hostname viosXX1 -inetaddr XX.XX.XX.XX -interface en5 -netmask 255.255.255.0 -gateway XX.XX.XX.XX

mktcpip -hostname viosXX2 -inetaddr XX.XX.XX.XX -interface en5 -netmask 255.255.255.0 -gateway XX.XX.XX.XX

 

 

 

Scripted LPAR building

Scenario: we have to build many LPARs, but we are too lazy to do the job manually.
Here comes the magic of the HMC and its tools, which give us the chance to think first and relax later... while the script does the job.
From the HMC we can type:

mksyscfg -r lpar -m [MANAGEDSYSTEM] -i name=[LPARNAME], profile_name=[PROFILENAME], lpar_id=10, lpar_env="aixlinux", min_mem=4096, desired_mem=8192, max_mem=12800, proc_mode=shared, min_procs=1, desired_procs=2, max_procs=4, min_proc_units=0.2, desired_proc_units=0.4, max_proc_units=1, sharing_mode=uncap, uncap_weight=128, conn_monitoring=1, boot_mode=norm, max_virtual_slots=200, virtual_eth_adapters=2/0/603//0/0/

This will create the LPAR profile on the HMC; only the disk adapters are missing.

Now we can tell the NIM server about the new LPARs and their IP addresses, writing them into /etc/hosts manually or using the hostent command:

hostent -a [IP] -h [HOSTNAME]

At this point we can define the NIM object:

nim -o define -t standalone -a platform=chrp -a if1="find_net [HOSTNAME] 0" -a cable_type1=tp -a net_settings1="auto auto" -a netboot_kernel=64 [HOSTNAME]

and start the installation from NIM; to do so we must have a SPOT and a mksysb resource defined:

nim -o bos_inst -a source=mksysb -a spot=spot-[HOSTNAME] -a mksysb=mksysb-[HOSTNAME] -a accept_licenses=yes -a installp_flags=-acNgXY -a no_client_boot=yes -a preserve_res=yes [HOSTNAME]
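
Since the whole point is scripting, the NIM-side steps can be driven from a loop as well; a minimal sketch, assuming a hypothetical file lpars.txt with one "HOSTNAME IP" pair per line:

#!/usr/bin/ksh
# register each client on the NIM master: /etc/hosts entry + NIM machine object
while read NAME IP; do
    hostent -a $IP -h $NAME
    nim -o define -t standalone -a platform=chrp -a if1="find_net $NAME 0" -a cable_type1=tp -a net_settings1="auto auto" -a netboot_kernel=64 $NAME
done < lpars.txt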

Starting sshd … PRNG is not seeded

Yesterday I applied TL7 SP5 to an AIX 6.1 LPAR, and after the reboot it was simply unreachable on the network...

I logged on to the LPAR through the HMC and... surprise, all daemons were stopped.
I thought: just start them... but first of all, let's start sshd.

root@pprctest:/#startsrc -s sshd
0513-059 The sshd Subsystem has been started. Subsystem PID is 4128828.
root@pprctest:/#PRNG is not seeded
PRNG is not seeded
PRNG is not seeded

I needed to investigate. While searching I went to check the permissions on /dev/random and /dev/urandom...
but the /dev/*random* devices were missing entirely, and sshd really needs them!

# odmget CuDvDr | grep -p random
CuDvDr:
        resource = "ddins"
        value1 = "random"
        value2 = "32"
        value3 = ""

root@pprctest:/# mknod /dev/random c 32 0
root@pprctest:/# mknod /dev/urandom c 32 1
root@pprctest:/# randomctl -l

root@pprctest:/# stopsrc -s sshd

root@pprctest:/# startsrc -s sshd
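
A quick sanity check afterwards; the major number (32) comes from the ODM query above, with minors 0 and 1 for random and urandom:

ls -l /dev/random /dev/urandom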

DSH POWER

Sometimes we need to execute a command on many AIX systems at once; here comes the Distributed SHell (dsh) utility.
DSH can rely on RSH or SSH to connect to the clients and execute commands; nowadays RSH is deprecated, so the choice must be SSH. For example, a customer asked me to change the timezone format from Olson to POSIX on a number of systems, because of application compatibility issues.

I wrote the node list in a text file called nodelist (yes, I know, very imaginative).
Then I created a command file called... let's try to guess... cmdfile:

#cat cmdfile.chgtz
cp /etc/environment /etc/environment.$(date +%Y-%m-%d_%T)
sed 's/^TZ.*$/TZ=NFT-1DFT,M3.5.0,M10.5.0/g' /etc/environment > /etc/environment.new
grep TZ /etc/environment.new
mv /etc/environment.new /etc/environment
echo $TZ

and the shell script dsh_do.sh that does the work: it reads the node list and runs the command file against one node at a time.

#cat dsh_do.sh

#!/usr/bin/ksh
while read node; do dsh -n $node -e ./cmdfile.chgtz; done < nodelist

The command file simply makes a backup of /etc/environment, then does the substitution with sed, redirecting the output to a temporary file; the temporary .new file is then renamed to replace the original.
This solution gives me the flexibility to run what I want (specified in cmdfile) on the nodes I want (listed in nodelist),
without visiting the clients.
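
Depending on the dsh build in use, the per-node loop may not even be necessary: the classic dsh reads the whole node list itself through the WCOLL environment variable (an assumption worth checking against your dsh version):

export WCOLL=./nodelist
dsh -e ./cmdfile.chgtz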

Upgrading RHEL 5.6 to 5.8 with YUM

Recently I was asked to update a lab machine from RHEL 5.6 to RHEL 5.8,
in order to make it compliant with a new version of FLARE that the storage team
has scheduled to deploy on the CLARiiON.
To do this I used the yum method.
We are using an internal yum server that supplies updates for all RHEL releases from 4.6 to 6.2.
So I first ran a yum update with the original repo file, which still pointed to the u6 directory on the yum server,
to bring the installed packages to the latest available level.
Then I ran yum clean all to clear the locally cached yum files, replaced u6 with u8 in the repo file,
and started to evaluate the real upgrade.
I simulated it by executing "yum update" and answering N at the confirmation prompt.
After reviewing the proposed packages, I was ready to go.
So let's start the operation, measuring the elapsed time as well.
Please note that the following yum output is not complete, because of its size.
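
The preparatory steps described above boil down to roughly the following (the repo file name is hypothetical):

yum clean all
sed -i 's/u6/u8/g' /etc/yum.repos.d/internal.repo    # point the repo at the u8 tree
yum update                                           # answer N here, just to review the transaction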

[root@LABSERVER01 backup_config]# time yum -y update

Loaded plugins: rhnplugin, security

This system is not registered with RHN.

RHN support will be disabled.

Skipping security plugin, no data

Setting up Update Process

Resolving Dependencies

Skipping security plugin, no data

--> Running transaction check


Dependencies Resolved


Transaction Summary
================================================================================================================================================================================
Install       4 Package(s)
Upgrade     315 Package(s)

Total download size: 428 M
Downloading Packages:


--------------------------------------------------------------------------------
Total                                                                                                                                            18 MB/s | 428 MB     00:23
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction

Complete!

real    10m30.912s
user    4m0.240s
sys     1m8.269s

[root@LABSERVER01 backup_config]#

After the completion of the yum update we also need to install the kernel-debug package, because it is a dependency for oracleasm:
cd /root/ASM/2.6.18-308.8.2/
yum install kernel-debug
rpm -Uvh oracleasm-2.6.18-308.8.2.el5-2.0.5-1.el5.x86_64.rpm oracleasm-2.6.18-308.8.2.el5debug-2.0.5-1.el5.x86_64.rpm oracleasmlib-2.0.4-1.el5.x86_64.rpm oracleasm-support-2.1.7-1.el5.x86_64.rpm
reboot && exit
After the reboot we can verify that the version really changed; as we can see, redhat-release is now 5.8.
[root@LABSERVER01 ~]# uptime
 11:36:47 up 4 min,  1 user,  load average: 0.03, 0.07, 0.03
[root@LABSERVER01 ~]# uname -a
Linux LABSERVER01 2.6.18-308.8.2.el5 #1 SMP Tue May 29 11:54:17 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@LABSERVER01 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.8 (Tikanga)
[root@LABSERVER01 ~]#
At this point we had to start the EMC PowerPath service manually, which was really strange...
some investigation was needed!
/etc/init.d/PowerPath start
After the PowerPath modules load, we can interact with the powermt facility:
powermt save  file=powermt.save.$(date +%Y%m%d).postupgrade
powermt display dev=all > powermt.display.$(date +%Y%m%d).postupgrade
The modules were not loaded because PowerPath did not start at boot.
In our case the problem was caused by the indentation of the following block
in /etc/rc.sysinit (!).
I had noticed that this block of lines was "tabbed" to the right, but did not pay attention to it:
###BEGINPP
# Configure and initialize PowerPath.
if [ -f /etc/init.d/PowerPath ]; then
/etc/init.d/PowerPath start
fi
###ENDPP
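
A quick way to spot this kind of problem is to check for unexpected leading whitespace around the PowerPath block (a simple grep sketch):

grep -n '^[[:space:]].*PowerPath' /etc/rc.sysinit
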
So I removed the unneeded characters (spaces or tabs) and, magically, PowerPath now starts normally.
After the reboot I verified that PowerPath is loaded and that the devices are visible and working correctly:
lsmod | grep -i emc
emcpvlumd              69472  0
emcpxcrypt            166376  0
emcpdm                 75528  0
emcpgpx                55376  3 emcpvlumd,emcpxcrypt,emcpdm
emcpmpx               201160  8
emcp                 2170976  5 emcpvlumd,emcpxcrypt,emcpdm,emcpgpx,emcpmpx
[root@LABSERVER01 ~]# powermt display dev=all
Pseudo name=emcpowerb
CLARiiON ID=CKM00084800353 [SG_LABSERVER]
Logical device ID=600601605FF0220044BF43B27710E111 [LUN 140]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0;
Owner: default=SP B, current=SP B       Array failover mode: 1
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   1 qla2xxx                  sdb       SP A2     active  alive       0      0
   1 qla2xxx                  sdd       SP B2     active  alive       0      0
   2 qla2xxx                  sdf       SP B7     active  alive       0      0
   2 qla2xxx                  sdh       SP A7     active  alive       0      0
Pseudo name=emcpowera
CLARiiON ID=CKM00084800353 [SG_LABSERVER]
Logical device ID=600601605FF0220068B2FDD2810FE111 [LUN 135]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0;
Owner: default=SP B, current=SP B       Array failover mode: 1
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   1 qla2xxx                  sda       SP A2     active  alive       0      0
   1 qla2xxx                  sdc       SP B2     active  alive       0      0
   2 qla2xxx                  sde       SP B7     active  alive       0      0
   2 qla2xxx                  sdg       SP A7     active  alive       0      0

WWN quick report on AIX

 

Often we need just a simple piece of information, but we would like to have it in a report-ready format.

The following (just-two-lines) script greps the Fibre Channel information (in this case the location code and the WWN)

and displays it.

 

#!/usr/bin/ksh

lscfg | grep fcs

for i in `lscfg -vp | grep fcs | awk '{print $1}'`

do

echo $i && lscfg -vp -l $i | grep 'Network Address'

done

 

These very simple loops can be adapted to any other greppable parameter on the server.
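
For example, a hypothetical variant of the same loop that also reports the adapter firmware level (the ZA vital-product-data field; worth verifying on your adapter type):

for i in `lscfg -vp | grep fcs | awk '{print $1}'`
do
echo $i && lscfg -vp -l $i | egrep 'Network Address|ZA'
done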

VIOS – Adding disks

Scenario: two p770 systems with 4 VIOS each (2 dedicated to SCSI and 2 dedicated to network).

ON ALL 4 VIOS DEDICATED TO SCSI RESOURCES:

oem_setup_env + cfgmgr
or
cfgdev (from the restricted shell)

Verify that the VIOS sees the new LUNs:
lsdev | grep -i hdisk

The following commands change some disk parameters, if needed.
In my case the disks are on Hitachi storage.

from the restricted shell:
chdev -dev hdiskXXX -attr pv=yes
chdev -dev hdiskXXX -attr reserve_policy=no_reserve
chdev -dev hdiskXXX -attr queue_depth=2

ONLY ON THE VIOS OF THE SYSTEM THAT OWNS THE LPAR
Now we need to find out which vhost corresponds to the LPAR to which the disks must be made available.
In the HMC I look up the controller number Cxx,
then in the VIOS shell:
lsdev -slots | grep Cxx

This gives the virtual adapter; in the following example I create the vdevs for 3 physical volumes
that will become part of the rootvg and pagingvg of an LPAR being installed.

mkvdev -vdev hdisk300 -vadapter vhost11 -dev soa02_rootvg
mkvdev -vdev hdisk301 -vadapter vhost11 -dev soa02paging1
mkvdev -vdev hdisk302 -vadapter vhost11 -dev soa02paging2
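
To double-check the new mappings, from the restricted shell:

lsmap -vadapter vhost11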

WPARs

Notes on the most common WPAR-related tasks.

WPARs are IBM's answer to Solaris zones or to Linux containers.

WPARs can be System WPARs or Application WPARs; they were introduced in AIX 6.1.

Some of the most common WPAR operations are listed below.

– create a System WPAR from the command line:

mkwpar -c -l -D rootvg=yes devname=hdisk3 -n syswpar -N address=11.22.33.44

– list WPAR details:

lswpar

lswpar -N – adds details of the network configuration

lswpar -L – long listing, very detailed output

– modify WPAR details:

chwpar -A syswpar – sets the WPAR to start automatically when the global environment boots.
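
– start, stop and log in to a System WPAR (a short sketch using the standard AIX WPAR commands):

startwpar syswpar
clogin syswpar
stopwpar syswpar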

 

Of course, WPARs can also be managed through smitty, using the fast path smitty wpar.