VIOS LAN configuration for jumbo frames

1) CHECK the physical adapter parameters

Verify that flow_ctrl, jumbo_frames, large_receive and large_send are set to "yes" on the physical adapters that make up the EtherChannel:

lsattr -El entX | egrep "flow_ctrl|jumbo|large"

2) ENABLE jumbo frames on the EtherChannel

Set use_jumbo_frame=yes on the EC device, after taking it offline:

rmdev -l entX   # where entX is the EC device, if one exists

chdev -l entX -a use_jumbo_frame=yes

mkdev -l entX   # where entX is the EC device

Check the result:

lsattr -El entX | egrep "mode|jumbo"

3) SET the virtual Ethernet parameters

Set the following parameters on the interface that carries the IP address:

$ chdev -dev enX -attr mtu_bypass=on rfc1323=1 mtu=9000

Set the following parameters on the virtual Ethernet (trunk) adapters that make up the SEA (they are not needed on the control channel):

max_buf_huge=128 

max_buf_large=256 

max_buf_medium=2048 

max_buf_small=4096 

max_buf_tiny=4096 

min_buf_huge=127 

min_buf_large=255 

min_buf_medium=2047 

min_buf_small=4095 

min_buf_tiny=4095 

chdev -dev entX -perm -attr max_buf_huge=128 max_buf_large=256 max_buf_medium=2048 max_buf_small=4096 max_buf_tiny=4096 min_buf_huge=127 min_buf_large=255 min_buf_medium=2047 min_buf_small=4095 min_buf_tiny=4095 

4) SEA CONFIGURATION

Configure the SEA, enabling largesend, large_receive and jumbo_frames.

In this example there is a single trunk, so failover mode is used (ha_mode=auto); with a SEA that has multiple trunks, use load sharing (ha_mode=sharing):

mkvdev -sea entXX -vadapter entYY -default entYY -defaultid 1 -attr ha_mode=auto ctl_chan=entZZ largesend=1 jumbo_frames=yes large_receive=yes adapter_reset=no thread=0 

If the SEA already exists, switch it to standby, take it offline, change the attributes, then bring it back:

chdev -l entX -a ha_mode=standby

rmdev -l entX

chdev -l entX -a largesend=1 -a jumbo_frames=yes -a large_receive=yes

mkdev -l entX

chdev -l entX -a ha_mode=auto

5) SET network parameters on the virtual Ethernet adapters of the client LPARs

Set the following parameters on the network adapter:

max_buf_huge=128 

max_buf_large=256 

max_buf_medium=2048 

max_buf_small=4096 

max_buf_tiny=4096 

min_buf_huge=127 

min_buf_large=255 

min_buf_medium=2047 

min_buf_small=4095 

min_buf_tiny=4095 

chdev -l entX -a max_buf_huge=128 -a max_buf_large=256 -a max_buf_medium=2048 -a max_buf_small=4096 -a max_buf_tiny=4096 -a min_buf_huge=127 -a min_buf_large=255 -a min_buf_medium=2047 -a min_buf_small=4095 -a min_buf_tiny=4095 -P 

chdev -l enX  -a mtu_bypass=on -a tcp_nodelay=1 -a rfc1323=1 -a mtu=9000

Reboot the LPAR (the changes made with -P take effect at the next boot).
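After the reboot it's worth confirming that every en interface really came up with MTU 9000. A quick sketch, shown here on canned `netstat -in` output (the interface data is made up); on a live LPAR, pipe the real `netstat -in` into the same awk:

```shell
#!/bin/sh
# Sketch: flag any en interface whose MTU is not 9000.
# netstat.sample stands in for real `netstat -in` output.
cat > netstat.sample <<'EOF'
Name  Mtu    Network     Address           Ipkts Ierrs Opkts Oerrs Coll
en0   9000   link#2      fa.ce.b0.0c.20.2  1234  0     5678  0     0
en0   9000   10.0.0      10.0.0.11         1234  0     5678  0     0
lo0   16896  link#1                        100   0     100   0     0
EOF
awk '$1 ~ /^en/ && $2 != 9000 { print $1 " has MTU " $2; bad = 1 }
     END { exit bad }' netstat.sample && echo "all en interfaces at MTU 9000"
```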

Scripted LPAR building

Scenario: we have to build many LPARs, but we are too lazy to do the job manually.
Here comes the magic of the HMC and its tools, which give us the chance to think first and relax later, while the script does the job.
From the HMC we can type:

mksyscfg -r lpar -m [MANAGEDSYSTEM] -i "name=[LPARNAME],profile_name=[PROFILENAME],lpar_id=10,lpar_env=aixlinux,min_mem=4096,desired_mem=8192,max_mem=12800,proc_mode=shared,min_procs=1,desired_procs=2,max_procs=4,min_proc_units=0.2,desired_proc_units=0.4,max_proc_units=1,sharing_mode=uncap,uncap_weight=128,conn_monitoring=1,boot_mode=norm,max_virtual_slots=200,virtual_eth_adapters=2/0/603//0/0/"
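If many LPARs differ only by name and ID, the mksyscfg call can be driven from a list. A minimal sketch: the managed-system name p770-A, the lparlist contents and the profile naming are assumptions, and the commands are only echoed (dry run); drop the echo on the HMC to execute them.

```shell
#!/bin/sh
# Dry-run batch wrapper around mksyscfg: one "name:id" pair per line.
# p770-A is a hypothetical managed-system name.
cat > lparlist <<'EOF'
web01:10
web02:11
EOF
while IFS=: read name id; do
  echo "mksyscfg -r lpar -m p770-A -i \"name=$name,profile_name=${name}_prof,lpar_id=$id,lpar_env=aixlinux,min_mem=4096,desired_mem=8192,max_mem=12800,proc_mode=shared,min_procs=1,desired_procs=2,max_procs=4,min_proc_units=0.2,desired_proc_units=0.4,max_proc_units=1,sharing_mode=uncap,uncap_weight=128,conn_monitoring=1,boot_mode=norm,max_virtual_slots=200,virtual_eth_adapters=2/0/603//0/0/\""
done < lparlist
```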

This will create the LPAR profile on the HMC; only the disk adapters are missing.

Now we can tell the NIM master about the new LPARs and their IP addresses, writing them in /etc/hosts manually or using the hostent command:

hostent -a [IP] -h [HOSTNAME]

At this point we can define the NIM object:

nim -o define -t standalone -a platform=chrp -a if1="find_net [HOSTNAME] 0" -a cable_type1=tp -a net_settings1="auto auto" -a netboot_kernel=64 [HOSTNAME]

and start the installation from NIM. To do so we must have a SPOT and a mksysb resource defined:

nim -o bos_inst -a source=mksysb -a spot=spot-[HOSTNAME] -a mksysb=mksysb-[HOSTNAME] -a accept_licenses=yes -a installp_flags=-acNgXY -a no_client_boot=yes -a preserve_res=yes [HOSTNAME]
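The three steps (hostent, nim define, nim bos_inst) chain naturally into one loop per host. A sketch with made-up hostnames and IPs; the commands are echoed as a dry run, so remove the echo on the NIM master to execute them:

```shell
#!/bin/sh
# Dry run: print the hostent / nim define / nim bos_inst commands
# for each "hostname ip" pair (hosts and IPs below are invented).
cat > newlpars <<'EOF'
web01 10.0.0.11
web02 10.0.0.12
EOF
while read host ip; do
  echo "hostent -a $ip -h $host"
  echo "nim -o define -t standalone -a platform=chrp -a if1=\"find_net $host 0\" -a cable_type1=tp -a net_settings1=\"auto auto\" -a netboot_kernel=64 $host"
  echo "nim -o bos_inst -a source=mksysb -a spot=spot-$host -a mksysb=mksysb-$host -a accept_licenses=yes -a no_client_boot=yes -a preserve_res=yes $host"
done < newlpars
```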

DSH POWER

Sometimes we need to execute a command on many AIX systems at once; here comes the Distributed SHell utility.
DSH can rely on RSH or SSH to connect to clients and execute commands. Nowadays RSH is deprecated, so the choice must be SSH. For example, a customer asked me to change the timezone format (POSIX vs Olson) for some application compatibility issues.

I wrote the node list in a text file called nodelist (yes I know, I have a lot of imagination, LOL).
I created a command file called, mhh, let's try to guess... cmdfile:

#cat cmdfile.chgtz
cp /etc/environment /etc/environment.$(date +%Y-%m-%d_%T)
sed 's/^TZ.*$/TZ=NFT-1DFT,M3.5.0,M10.5.0/g' /etc/environment > /etc/environment.new
grep TZ /etc/environment.new
mv /etc/environment.new /etc/environment
echo $TZ   # note: still shows the old value; /etc/environment is only read at login

and the shell script dsh_do.sh that does the work: it reads the cmdfile and runs it against the nodelist, one node at a time.

#cat dsh_do.sh

#!/usr/bin/ksh
while read node; do dsh -n $node -e ./cmdfile.chgtz; done < nodelist

The command file simply makes a backup of /etc/environment, then does the substitution with sed, redirecting the output to a temporary file. The temporary .new file is renamed to replace the original.
This solution gives me the flexibility to do what I want (specified in cmdfile) on the nodes I want (listed in nodelist)
without logging in to each client.
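When dsh itself is not installed, a plain ssh loop over the same nodelist gives a comparable result. A sketch: the node names are made up and the echo makes it a dry run; remove the echo to actually execute over ssh.

```shell
#!/bin/sh
# dsh-less fallback: stream the command file to a remote shell on
# each node of nodelist. Dry run -- remove the echo to run for real.
cat > nodelist <<'EOF'
aixnode1
aixnode2
EOF
while read node; do
  echo "ssh $node 'sh -s' < cmdfile.chgtz"
done < nodelist
```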

WWN quick report on AIX

Often we need just a simple piece of info, but we'd like to have it in a report-ready format.

The following (just-2-lines) script greps the Fibre Channel information (in this case location code and WWN) and displays it.

 

#!/usr/bin/ksh

lscfg | grep fcs

for i in `lscfg -vp | grep fcs | awk '{print $1}'`

do

echo $i && lscfg -vp -l $i | grep 'Network Address'

done

 

These very simple loops can be adapted for any other greppable parameter on the server.
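The same grep/awk pattern can also emit a CSV-style report. Shown here on canned, simplified `lscfg -vp` output so the parsing can be followed offline (the adapter data below is invented); on a real server, replace the sample file with the live lscfg output:

```shell
#!/bin/sh
# Turn location code + WWN pairs into one CSV line per adapter.
# lscfg.sample mimics (simplified) real `lscfg -vp` output.
cat > lscfg.sample <<'EOF'
  fcs0             U78A0.001.DNWKF6Y-P1-C3-T1  FC Adapter
        Network Address.............10000000C9A1B2C3
  fcs1             U78A0.001.DNWKF6Y-P1-C4-T1  FC Adapter
        Network Address.............10000000C9A1B2C4
EOF
awk '/fcs/             { adp = $1; loc = $2 }
     /Network Address/ { sub(/.*\.+/, ""); print adp "," loc "," $0 }' lscfg.sample
```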

VIOS – Adding disks

Scenario: two p770 systems with 4 VIOS each (2 dedicated to SCSI and 2 dedicated to network)

ON ALL 4 VIOS DEDICATED TO SCSI RESOURCES:

oem_setup_env + cfgmgr
or
cfgdev (from the restricted shell)

check that the VIOS sees the new LUNs
lsdev | grep -i hdisk

The following commands change some disk parameters, if needed.
In my case the disks are on Hitachi storage.

from the restricted shell
chdev -dev hdiskXXX -attr pv=yes
chdev -dev hdiskXXX -attr reserve_policy=no_reserve
chdev -dev hdiskXXX -attr queue_depth=2

ONLY ON THE VIOS OF THE SYSTEM THAT OWNS THE LPAR
What remains is to find which vhost corresponds to the LPAR that must receive the disks.
In the HMC I see the controller number Cxx;
in the VIOS shell:
lsdev -slots | grep Cxx

this gives me the vadapter. In the following example I create the vdevs for 3 physical volumes
that will become part of the rootvg and paging vg of an LPAR being installed.

mkvdev -vdev hdisk300 -vadapter vhost11 -dev soa02_rootvg
mkvdev -vdev hdisk301 -vadapter vhost11 -dev soa02paging1
mkvdev -vdev hdisk302 -vadapter vhost11 -dev soa02paging2
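With more than a handful of disks, the mkvdev calls can be generated from a small table. A sketch reusing the disks above (vhost11 as in the example); the commands are echoed as a dry run, so drop the echo on the VIOS to execute them:

```shell
#!/bin/sh
# Dry run: print one mkvdev mapping per "hdisk vdev-name" line.
cat > diskmap <<'EOF'
hdisk300 soa02_rootvg
hdisk301 soa02paging1
hdisk302 soa02paging2
EOF
while read hd name; do
  echo "mkvdev -vdev $hd -vadapter vhost11 -dev $name"
done < diskmap
```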

Wpars

Notes on the most common WPAR operations.

WPARs are IBM's answer to Solaris Zones and Linux Containers.

WPARs can be System WPARs or Application WPARs; they were introduced in AIX 6.1.

Some of the most common WPAR operations are listed below.

– create a system WPAR from the command line:

mkwpar -c -l -D rootvg=yes devname=hdisk3 -n syswpar -N address=11.22.33.44

– list WPAR details:

lswpar

lswpar -N – adds network configuration details

lswpar -L – long listing, very detailed output

– change WPAR settings:

chwpar -A syswpar – sets the WPAR to boot automatically when the global environment starts.

 

Of course, WPARs can also be managed from smitty, through the fast path smitty wpar.
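For completeness, the day-to-day lifecycle of the syswpar created above is handled by startwpar, stopwpar and clogin (a sketch; all commands are run from the global environment):

```shell
startwpar syswpar   # boot the WPAR
clogin syswpar      # log in to it from the global environment
stopwpar syswpar    # shut it down
rmwpar -F syswpar   # remove it (forced)
```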