I've been quite busy lately and finding time to write the blog is getting harder and harder with the amount of work I have, but I try to stick to it: writing these blog posts is almost the only thing I feel I do properly, so why stop? My workplace is one of the craziest places I have ever worked in, for the good... and the bad (I'll not talk here about how things are organized or how your work gets recognized, but be sure it will probably be one of the main reasons I leave this place one day or another). The PowerSystems growth here is crazy and the number of AIX partitions we manage with PowerVC never stops increasing; I think we are one of the biggest PowerVC customers in the whole world (I don't know if that is a good thing or not). Just to give you a couple of examples: we have the biggest Power Enterprise Pool I have ever seen (384 Power8 mobile cores), around 2600 partitions managed by PowerVC, and a single PowerVC managing almost 30 hosts. You read that right... these numbers are huge. It may sound funny, but it's not; this growth is a problem, a technical problem, and we are facing issues most of you will never hit. I'm speaking about density and scalability.

Fortunately for us, the "vertical" design of PowerVC can now be replaced by what I call a "horizontal" design: instead of putting all the nova instances on one single machine, we now have the possibility to spread the load across the hosts by using NovaLink. As we needed to solve these density and scalability problems, we decided to move all the P8 hosts to NovaLink (the process is still ongoing, but most of the engineering work is already done). As you know by now, we are not deploying one host a year but generally a couple per month, and that's why we needed to find a way to automate this. So this blog post covers the things and best practices I have learned using and implementing NovaLink in a huge production environment (automated installation, tips and tricks, post-install, migration and so on). But we will not stop there: I'll also talk about the new things I have learned about PowerVC (1.3.1.2 and 1.3.0.1) and give more tips and tricks to get the best out of the product. Before going any further I first want to say a big thank you to the whole PowerVC team for their kindness and the precious time they spent advising and educating the OpenStack noob I am (a special thanks to Drew Thorstensen for the long discussions we had about OpenStack and PowerVC; he is probably one of the most passionate guys I have ever met at IBM).
NovaLink automated installation
I'll not write a big introduction; let's get to work and start with NovaLink and how to automate its installation process. Copy the content of the installation CD-ROM to a directory that can be served by an HTTP server on your NIM server (I'm using my NIM server for the bootp and tftp parts). Note that I'm doing this with a tar command because there are symbolic links in the ISO and a simple cp would end up filling the filesystem.
# loopmount -i ESD_-_PowerVM_NovaLink_V1.0.0.3_062016.iso -o "-V cdrfs -o ro" -m /mnt
# tar cvf iso.tar /mnt/*
# tar xvf iso.tar -C /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
# ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
total 320
dr-xr-xr-x  2 root system    256 Jul 28 17:54 .disk
-r--r--r--  1 root system    243 Apr 20 21:27 README.diskdefines
-r--r--r--  1 root system   3053 May 25 22:25 TRANS.TBL
dr-xr-xr-x  3 root system    256 Apr 20 11:59 boot
dr-xr-xr-x  3 root system    256 Apr 20 21:27 dists
dr-xr-xr-x  3 root system    256 Apr 20 21:27 doc
dr-xr-xr-x  2 root system   4096 Aug 09 15:59 install
-r--r--r--  1 root system 145981 Apr 20 21:34 md5sum.txt
dr-xr-xr-x  2 root system   4096 Apr 20 21:27 pics
dr-xr-xr-x  3 root system    256 Apr 20 21:27 pool
dr-xr-xr-x  3 root system    256 Apr 20 11:59 ppc
dr-xr-xr-x  2 root system    256 Apr 20 21:27 preseed
dr-xr-xr-x  4 root system    256 May 25 22:25 pvm
lrwxrwxrwx  1 root system      1 Aug 29 14:55 ubuntu -> .
dr-xr-xr-x  3 root system    256 May 25 22:25 vios
Prepare the PowerVM NovaLink repository. The content of the repository can be found in the NovaLink iso image in pvm/repo/pvmrepo.tgz:
# ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/
total 720192
-r--r--r--  1 root system       223 May 25 22:25 TRANS.TBL
-rw-r--r--  1 root system      2106 Sep 05 15:56 pvm-install.cfg
-r--r--r--  1 root system 368722592 May 25 22:25 pvmrepo.tgz
Extract the content of this tgz file in a directory that can be served by the http server:
# mkdir /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
# cp /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/pvmrepo.tgz /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
# cd /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
# gunzip pvmrepo.tgz
# tar xvf pvmrepo.tar
[..]
x ./pool/non-free/p/pvm-core/pvm-core-dbg_1.0.0.3-160525-2192_ppc64el.deb, 54686380 bytes, 106810 media blocks.
x ./pool/non-free/p/pvm-core/pvm-core_1.0.0.3-160525-2192_ppc64el.deb, 2244784 bytes, 4385 media blocks.
x ./pool/non-free/p/pvm-core/pvm-core-dev_1.0.0.3-160525-2192_ppc64el.deb, 618378 bytes, 1208 media blocks.
x ./pool/non-free/p/pvm-pkg-tools/pvm-pkg-tools_1.0.0.3-160525-492_ppc64el.deb, 170700 bytes, 334 media blocks.
x ./pool/non-free/p/pvm-rest-server/pvm-rest-server_1.0.0.3-160524-2229_ppc64el.deb, 263084432 bytes, 513837 media blocks.
# rm pvmrepo.tar
# ls -l
total 16
drwxr-xr-x  2 root system  256 Sep 11 13:26 conf
drwxr-xr-x  2 root system  256 Sep 11 13:26 db
-rw-r--r--  1 root system  203 May 26 02:19 distributions
drwxr-xr-x  3 root system  256 Sep 11 13:26 dists
-rw-r--r--  1 root system 3132 May 24 20:25 novalink-gpg-pub.key
drwxr-xr-x  4 root system  256 Sep 11 13:26 pool
Copy the NovaLink boot files in a directory that can be served by your tftp server (I’m using /var/lib/tftpboot):
# mkdir /var/lib/tftpboot
# cp -r /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/* /var/lib/tftpboot
# ls -l /var/lib/tftpboot
total 1016
-r--r--r--  1 root system   1120 Jul 26 20:53 TRANS.TBL
-r--r--r--  1 root system 494072 Jul 26 20:53 core.elf
-r--r--r--  1 root system    856 Jul 26 21:18 grub.cfg
-r--r--r--  1 root system  12147 Jul 26 20:53 pvm-install-config.template
dr-xr-xr-x  2 root system    256 Jul 26 20:53 repo
dr-xr-xr-x  2 root system    256 Jul 26 20:53 rootfs
-r--r--r--  1 root system   2040 Jul 26 20:53 sample_grub.cfg
I still don't know why this is the case on AIX, but the tftp server looks for grub.cfg in the root directory of your AIX system. It's not the case for my Red Hat Enterprise Linux installations, but it is for the NovaLink/Ubuntu installation. Copy sample_grub.cfg to /grub.cfg and modify the content of the file:
- As the gateway, netmask and nameserver will be provided by the pvm-install.cfg file (the configuration file of the NovaLink installer, we will talk about it later), comment out those three lines.
- The hostname will still be needed.
- Modify the linux line and point to the vmlinux file provided in the NovaLink iso image.
- Modify the live-installer to point to the filesystem.squashfs provided in the NovaLink iso image.
- Modify the pvm-repo line to point to the pvm-repository directory we created before.
- Modify the pvm-installer line to point to the NovaLink install configuration file (we will modify this one after).
- Don't do anything with the pvm-vios line as we are installing NovaLink on a system that already has Virtual I/O Servers installed (I'm not installing Scale Out systems, only high-end models).
- I'll talk later about the pvm-disk line (this line is not present by default in the pvm-install-config.template provided in the NovaLink ISO image).
# cp /var/lib/tftpboot/sample_grub.cfg /grub.cfg
# cat /grub.cfg
# Sample GRUB configuration for NovaLink network installation
set default=0
set timeout=10

menuentry 'PowerVM NovaLink Install/Repair' {
 insmod http
 insmod tftp
 regexp -s 1:mac_pos1 -s 2:mac_pos2 -s 3:mac_pos3 -s 4:mac_pos4 -s 5:mac_pos5 -s 6:mac_pos6 '(..):(..):(..):(..):(..):(..)' ${net_default_mac}
 set bootif=01-${mac_pos1}-${mac_pos2}-${mac_pos3}-${mac_pos4}-${mac_pos5}-${mac_pos6}
 regexp -s 1:prefix '(.*)\.(\.*)' ${net_default_ip}
 # Setup variables with values from Grub's default variables
 set ip=${net_default_ip}
 set serveraddress=${net_default_server}
 set domain=${net_ofnet_network_domain}
 # If tftp is desired, replace http with tftp in the line below
 set root=http,${serveraddress}
 # Remove comment after providing the values below for
 # GATEWAY_ADDRESS, NETWORK_MASK, NAME_SERVER_IP_ADDRESS
 # set gateway=10.10.10.1
 # set netmask=255.255.255.0
 # set namserver=10.20.2.22
 set hostname=nova0696010
 # In this sample file, the directory novalink is assumed to exist on the
 # BOOTP server and has the NovaLink ISO content
 linux /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/vmlinux \
  live-installer/net-image=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/filesystem.squashfs \
  pkgsel/language-pack-patterns= \
  pkgsel/install-language-support=false \
  netcfg/disable_dhcp=true \
  netcfg/choose_interface=auto \
  netcfg/get_ipaddress=${ip} \
  netcfg/get_netmask=${netmask} \
  netcfg/get_gateway=${gateway} \
  netcfg/get_nameservers=${nameserver} \
  netcfg/get_hostname=${hostname} \
  netcfg/get_domain=${domain} \
  debian-installer/locale=en_US.UTF-8 \
  debian-installer/country=US \
  # The directory novalink-repo on the BOOTP server contains the content
  # of the pvmrepo.tgz file obtained from the pvm/repo directory on the
  # NovaLink ISO file.
  # The directory novalink-vios on the BOOTP server contains the files
  # needed to perform a NIM install of VIOS server(s)
  # pvmdebug=1
  pvm-repo=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/novalink-repo/ \
  pvm-installer-config=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg \
  pvm-viosdir=http://${serveraddress}/novalink-vios \
  pvmdisk=/dev/mapper/mpatha
 initrd /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/install/netboot_initrd.gz
}
Modify pvm-install.cfg, the NovaLink installer configuration file. We just need to modify the [SystemConfig], [NovaLinkGeneralSettings], [NovaLinkNetworkSettings], [NovaLinkAPTRepoConfig] and [NovaLinkAdminCredentials] stanzas. My advice is to configure one NovaLink by hand (do an installation directly with the ISO image; after the installation your configuration file, filled with the answers you gave during the NovaLink installation, is saved in /var/log/pvm-install/novalink-install.cfg, and you can copy it to your installation server and use it as your template).
# more /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
[SystemConfig]
serialnumber = XXXXXXXX
lmbsize = 256

[NovaLinkGeneralSettings]
ntpenabled = True
ntpserver = timeserver1
timezone = Europe/Paris

[NovaLinkNetworkSettings]
dhcpip = DISABLED
ipaddress = YYYYYYYY
gateway = ZZZZZZZZ
netmask = 255.255.255.0
dns1 = 8.8.8.8
dns2 = 8.8.9.9
hostname = WWWWWWWW
domain = lab.chmod666.org

[NovaLinkAPTRepoConfig]
downloadprotocol = http
mirrorhostname = nimserver
mirrordirectory = /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/
mirrorproxy =

[VIOSNIMServerConfig]
novalink_private_ip = 192.168.128.1
vios1_private_ip = 192.168.128.2
vios2_private_ip = 192.168.128.3
novalink_netmask = 255.255.128.0
viosinstallprompt = False

[NovaLinkAdminCredentials]
username = padmin
password = $6$N1hP6cJ32p17VMpQ$sdThvaGaR8Rj12SRtJsTSRyEUEhwPaVtCTvbdocW8cRzSQDglSbpS.jgKJpmz9L5SAv8qptgzUrHDCz5ureCS.
userdescription = NovaLink System Administrator
Finally modify the /etc/bootptab file and add a line matching your installation:
# tail -1 /etc/bootptab
nova0696010:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.16:ht=ethernet:sa=10.255.228.37:gw=10.20.65.1:sm=255.255.255.0:
Don't forget to set up an HTTP server serving all the needed files. I know this configuration is super insecure, but honestly I don't care: my NIM server is on a very locked-down network only reachable from the VIOS and NovaLink partitions. So I'm good:
# cd /opt/freeware/etc/httpd/
# grep -Ei "^Listen|^DocumentRoot" conf/httpd.conf
Listen 80
DocumentRoot "/"
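If your HTTP server is not on an isolated network like mine, a minimal hardening step is to at least restrict who can fetch the installation tree. Here is a sketch of what that could look like in httpd.conf, assuming Apache 2.4 and a 10.20.65.0/24 install network (the path and subnet are just my examples, adapt them to your layout):

# DocumentRoot stays "/" so the full paths referenced in grub.cfg keep working,
# but only clients from the install network may read the NovaLink tree
<Directory "/export/nim/lpp_source/powervc/novalink">
    Options Indexes FollowSymLinks
    Require ip 10.20.65.0/24
</Directory>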
Instead of doing this over and over at every NovaLink installation, I have written a custom script that prepares my NovaLink installation files. What I do in this script is:
- Preparing the pvm-install.cfg file.
- Modifying the grub.cfg file.
- Adding a line to the /etc/bootptab file.
# ./custnovainstall.ksh nova0696010 10.20.65.16 10.20.65.1 255.255.255.0

#!/usr/bin/ksh
novalinkname=$1
novalinkip=$2
novalinkgw=$3
novalinknm=$4
cfgfile=/export/nim/lpp_source/powervc/novalink/novalink-install.cfg
desfile=/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
grubcfg=/export/nim/lpp_source/powervc/novalink/grub.cfg
grubdes=/grub.cfg
echo "+--------------------------------------+"
echo "NovaLink name: ${novalinkname}"
echo "NovaLink IP: ${novalinkip}"
echo "NovaLink GW: ${novalinkgw}"
echo "NovaLink NM: ${novalinknm}"
echo "+--------------------------------------+"
echo "Cfg ref: ${cfgfile}"
echo "Cfg file: ${cfgfile}.${novalinkname}"
echo "+--------------------------------------+"
typeset -u serialnumber
serialnumber=$(echo ${novalinkname} | sed 's/nova//g')
echo "SerNum: ${serialnumber}"
cat ${cfgfile} | sed "s/serialnumber = XXXXXXXX/serialnumber = ${serialnumber}/g" | sed "s/ipaddress = YYYYYYYY/ipaddress = ${novalinkip}/g" | sed "s/gateway = ZZZZZZZZ/gateway = ${novalinkgw}/g" | sed "s/netmask = 255.255.255.0/netmask = ${novalinknm}/g" | sed "s/hostname = WWWWWWWW/hostname = ${novalinkname}/g" > ${cfgfile}.${novalinkname}
cp ${cfgfile}.${novalinkname} ${desfile}
cat ${grubcfg} | sed "s/ set hostname=WWWWWWWW/ set hostname=${novalinkname}/g" > ${grubcfg}.${novalinkname}
cp ${grubcfg}.${novalinkname} ${grubdes}
# nova1009425:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.15:ht=ethernet:sa=10.255.248.37:gw=10.20.65.1:sm=255.255.255.0:
echo "${novalinkname}:bf=/var/lib/tftpboot/core.elf:ip=${novalinkip}:ht=ethernet:sa=10.255.248.37:gw=${novalinkgw}:sm=${novalinknm}:" >> /etc/bootptab
NovaLink installation: vSCSI or NPIV?
NovaLink is not designed to be installed on top of NPIV, it's a fact. As it is designed to be installed on a totally new system without any Virtual I/O Servers configured, the NovaLink installation by default creates the Virtual I/O Servers, and using these VIOS the installation process creates backing devices on top of logical volumes carved out of the default VIOS storage pool. The NovaLink partition is then installed on top of these two logical volumes and mirrored at the end. This is how NovaLink does it on Scale Out systems.
For High End systems NovaLink assumes you're going to install the NovaLink partition on top of vSCSI (I have personally tried with hdisk-backed and SSP Logical Unit-backed devices and both work fine). For those like me who want to install NovaLink on top of NPIV (I know this is not a good choice, but once again I was forced to do it) there is still a possibility to do so. (In my humble opinion the NPIV design is meant for high performance, and the NovaLink partition is not going to be an I/O-intensive partition. Even worse, our whole new design is based on NPIV for LPARs... it's a shame, as NPIV is not a solution designed for high density and high scalability. Every PowerVM system administrator should remember this: NPIV IS NOT A GOOD CHOICE FOR DENSITY AND SCALABILITY, USE IT FOR PERFORMANCE ONLY! The story behind this is funny. I'm 100% sure that SSP is ten times a better choice to achieve density and scalability. I decided to open a poll on Twitter asking the question "Will you choose SSP or NPIV to design a scalable AIX cloud based on PowerVC?". I was 100% sure SSP would win and made a bet with a friend (I owe him beers now) that I'd be right. What was my surprise when I saw the results: 90% of people voted for NPIV. I'm sorry to say it guys, but there are two possibilities: 1/ you don't really know what scalability and density mean because you never faced them, and that's why you made the wrong choice; 2/ you know it and you're just wrong. This little story is another proof that IBM is not responsible for the slow death of AIX and PowerVM... unfortunately you are, by not understanding that the only way to survive is to embrace highly scalable solutions the way Linux does with OpenStack and Ceph. It's a fact. Period.)
This said... if you try to install NovaLink on top of NPIV you'll get an error. A workaround to this problem is to add the following line to the grub.cfg file:
pvmdisk=/dev/mapper/mpatha \
If you do that you'll be able to install NovaLink on your NPIV disk, but you'll still get an error the first time you install it, at the grub-install step. Just re-run the installation a second time and the grub-install command will work fine (I'll explain later how to avoid this second issue).
One workaround to this second issue is to recreate the initrd after adding a line to the debian-installer preseed file (see the deep dive into the initrd further down in this post).
Fully automated installation by example
- Here the core.elf file is downloaded by tftp. You can see in the capture below that the grub.cfg file is searched for in /:
- The installer is starting:
- The vmlinux is downloaded (http):
- The root.squashfs is downloaded (http):
- The pvm-install.cfg configuration file is downloaded (http):
- pvm services are started. At this time if you are running in co-management mode you’ll see the Red lock in the HMC Server status:
- The Linux and NovaLink installation is ongoing:
- System is ready:
NovaLink code auto update
When adding a NovaLink host to PowerVC, the powervc packages coming from the PowerVC management host are installed on the NovaLink partition. You can check this during the installation. Here is what's going on when adding the NovaLink host to PowerVC:
# cat /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
################################################################################
Starting the IBM PowerVC Novalink Installation on: 2016-09-11T16:42:05+02:00
################################################################################
LOG file is /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
2016-09-11T16:42:05.18+02:00 Installation directory is /opt/ibm/powervc
2016-09-11T16:42:05.18+02:00 Installation source location is /tmp/powervc_img_temp_1473611916_1627713/powervc-1.3.1.2
[..]
Setting up python-neutron (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
Setting up neutron-common (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
Setting up neutron-plugin-ml2 (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
Setting up ibmpowervc-powervm-network (1.3.1.2) ...
Setting up ibmpowervc-powervm-oslo (1.3.1.2) ...
Setting up ibmpowervc-powervm-ras (1.3.1.2) ...
Setting up ibmpowervc-powervm (1.3.1.2) ...
W: --force-yes is deprecated, use one of the options starting with --allow instead.
***************************************************************************
IBM PowerVC Novalink installation successfully completed at 2016-09-11T17:02:30+02:00.
Refer to /opt/ibm/powervc/log/powervc_install_2016-09-11-165617.log for more details.
***************************************************************************
Installing the missing deb packages if the NovaLink host was added before a PowerVC upgrade
If the NovaLink host was added in PowerVC 1.3.1.1 and you then updated to PowerVC 1.3.1.2, you have to update the packages by hand because there is a little bug in the update of some of them:
- From the PowerVC management host copy the latest packages to the NovaLink host:
# scp /opt/ibm/powervc/images/powervm/powervc-powervm-compute-1.3.1.2.tgz padmin@nova0696010:~
padmin@nova0696010's password:
powervc-powervm-compute-1.3.1.2.tgz
# tar xvzf powervc-powervm-compute-1.3.1.2.tgz
# cd powervc-1.3.1.2/packages/powervm
# dpkg -i nova-powervm_2.0.3-160816-48_all.deb
# dpkg -i networking-powervm_2.0.1-160816-6_all.deb
# dpkg -i ceilometer-powervm_2.0.1-160816-17_all.deb
# /opt/ibm/powervc/bin/powervc-services restart
rsct and pvm deb update
Never forget to install the latest rsct and pvm packages after the installation. You can clone the official IBM repositories for the pvm and rsct files (check my previous post about NovaLink for more details about cloning the repositories). Then create two files in /etc/apt/sources.list.d, one for pvm, the other for rsct:
# vi /etc/apt/sources.list.d/pvm.list
deb http://nimserver/export/nim/lpp_source/powervc/novalink/nova/debian novalink_1.0.0 non-free
# vi /etc/apt/sources.list.d/rsct.list
deb http://nimserver/export/nim/lpp_source/powervc/novalink/rsct/ubuntu xenial main
# dpkg -l | grep -i rsct
ii  rsct.basic       3.2.1.0-15300           ppc64el  Reliable Scalable Cluster Technology - Basic
ii  rsct.core        3.2.1.3-16106-1ubuntu1  ppc64el  Reliable Scalable Cluster Technology - Core
ii  rsct.core.utils  3.2.1.3-16106-1ubuntu1  ppc64el  Reliable Scalable Cluster Technology - Utilities
# dpkg -l | grep -i pvm
ii  pvm-cli          1.0.0.3-160516-1488     all      Power VM Command Line Interface
ii  pvm-core         1.0.0.3-160525-2192     ppc64el  PVM core runtime package
ii  pvm-novalink     1.0.0.3-160525-1000     ppc64el  Meta package for all PowerVM Novalink packages
ii  pvm-rest-app     1.0.0.3-160524-2229     ppc64el  The PowerVM NovaLink REST API Application
ii  pvm-rest-server  1.0.0.3-160524-2229     ppc64el  Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink
# apt-get install rsct.core rsct.basic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  docutils-common libpaper-utils libpaper1 python-docutils python-roman
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
  rsct.core.utils src
The following packages will be upgraded:
  rsct.core rsct.core.utils src
3 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
Need to get 9,356 kB of archives.
After this operation, 548 kB disk space will be freed.
[..]
# apt-get install pvm-novalink
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  docutils-common libpaper-utils libpaper1 python-docutils python-roman
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
  pvm-core pvm-rest-app pvm-rest-server pypowervm
The following packages will be upgraded:
  pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
5 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 287 MB of archives.
After this operation, 203 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
[..]
After the installation, here is what you should have if everything was updated properly:
# dpkg -l | grep rsct
ii  rsct.basic       3.2.1.4-16154-1ubuntu1  ppc64el  Reliable Scalable Cluster Technology - Basic
ii  rsct.core        3.2.1.4-16154-1ubuntu1  ppc64el  Reliable Scalable Cluster Technology - Core
ii  rsct.core.utils  3.2.1.4-16154-1ubuntu1  ppc64el  Reliable Scalable Cluster Technology - Utilities
# dpkg -l | grep pvm
ii  pvm-cli          1.0.0.3-160516-1488     all      Power VM Command Line Interface
ii  pvm-core         1.0.0.3.1-160713-2441   ppc64el  PVM core runtime package
ii  pvm-novalink     1.0.0.3.1-160714-1152   ppc64el  Meta package for all PowerVM Novalink packages
ii  pvm-rest-app     1.0.0.3.1-160713-2417   ppc64el  The PowerVM NovaLink REST API Application
ii  pvm-rest-server  1.0.0.3.1-160713-2417   ppc64el  Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink
NovaLink post-installation (my Ansible way to do it)
You all know by now that I'm not very fond of doing the same things over and over again; that's why I have created an Ansible post-install playbook especially for the NovaLink post-installation. You can download it here: nova_ansible. Then install Ansible on a host that has ssh access to all your NovaLink partitions and run the playbook (a sketch of what one of its tasks can look like follows the steps below):
- Untar the ansible playbook:
# mkdir /srv/ansible
# cd /srv/ansible
# tar xvf novalink_ansible.tar
# cat group_vars/novalink.yml
ntpservers:
  - ntpserver1
  - ntpserver2
dnsservers:
  - 8.8.8.8
  - 8.8.9.9
dnssearch:
  - lab.chmod666.org
vepa_iface: ibmveth6
repo: nimserver
# cat inventories/hosts.novalink
[novalink]
nova65a0cab
nova65ff4cd
nova10094ef
nova06960ab
# ansible-playbook -i inventories/hosts.novalink site.yml
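For reference, here is a minimal sketch of what one task file of such a playbook could look like, assuming the group_vars shown above (ntpservers, dnsservers, repo). This is not the actual content of nova_ansible, just an illustration of the idea; the ntp.conf.j2 template and the "restart ntp" handler are assumed to exist in the role:

# roles/novalink/tasks/main.yml (illustrative sketch only)
- name: Point apt at the internal NovaLink repository (repo comes from group_vars)
  apt_repository:
    repo: "deb http://{{ repo }}/export/nim/lpp_source/powervc/novalink/nova/debian novalink_1.0.0 non-free"
    state: present

- name: Make sure ntp is installed
  apt:
    name: ntp
    state: present

- name: Deploy /etc/ntp.conf from a template fed by the ntpservers list
  template:
    src: ntp.conf.j2
    dest: /etc/ntp.conf
  notify: restart ntp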
More details about NovaLink
MGMTSWITCH vswitch automatic creation
Do not try to create the MGMTSWITCH by yourself: the NovaLink installer does it for you. As my Virtual I/O Servers are installed using the IBM Provisioning Toolkit for PowerVM, I used to create the MGMTSWITCH at that stage, but I was wrong. You can see this in the file /var/log/pvm-install/pvminstall.log on the NovaLink partition:
# cat /var/log/pvm-install/pvminstall.log
Fri Aug 12 17:26:07 UTC 2016: PVMDebug = 0
Fri Aug 12 17:26:07 UTC 2016: Running initEnv
[..]
Fri Aug 12 17:27:08 UTC 2016: Using user provided pvm-install configuration file
Fri Aug 12 17:27:08 UTC 2016: Auto Install set
[..]
Fri Aug 12 17:27:44 UTC 2016: Auto Install = 1
Fri Aug 12 17:27:44 UTC 2016: Validating configuration file
Fri Aug 12 17:27:44 UTC 2016: Initializing private network configuration
Fri Aug 12 17:27:45 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o c
Fri Aug 12 17:27:46 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o n -i 3 -n MGMTSWITCH -p 4094 -t 1
Fri Aug 12 17:27:49 UTC 2016: Start setupinstalldisk operation for /dev/mapper/mpatha
Fri Aug 12 17:27:49 UTC 2016: Running updatedebconf
Fri Aug 12 17:56:06 UTC 2016: Pre-seeding disk recipe
NPIV LPAR creation problem!
As you know my environment is crazy. Every LPAR we create has 4 virtual Fibre Channel adapters: obviously two on fabric A and two on fabric B, and obviously again each fabric must be present on each Virtual I/O Server. So to sum up, an LPAR must have access to fabric A and B through VIOS1 and to fabric A and B through VIOS2. Unfortunately there was a little bug in the current NovaLink (1.0.0.3) code and all the LPARs were created with only two adapters. The PowerVC team gave me a patch for this particular issue, patching the npiv.py file. This patch needs to be installed on the NovaLink partition itself:
# cd /usr/lib/python2.7/dist-packages/powervc_nova/virt/ibmpowervm/pvm/volume
# sdiff npiv.py.back npiv.bck
I'm intentionally not giving you the solution here (just by copying/pasting code) because the issue has been addressed: an APAR (IT16534) was opened for it and it is resolved in version 1.3.1.2.
From NovaLink to HMC …. and the opposite
One of the challenges for me was to be sure everything was working fine regarding LPM and NovaLink. So I decided to test different cases:
- From NovaLink host to NovaLink host (didn't have any trouble)
- From NovaLink host to HMC host (didn't have any trouble)
- From HMC host to NovaLink host (had trouble)
Once again, this issue preventing HMC to NovaLink LPM from working correctly is related to storage. A patch is on the way, but let me explain the issue a little bit (only useful if you absolutely have to move an LPAR from an HMC host to a NovaLink host and you are in the same case as I am):
PowerVC does not do the mapping to the destination Virtual I/O Servers correctly and tries to map fabric A twice on VIOS1 and fabric B twice on VIOS2. Luckily for us you can do the migration by hand:
- Do the LPM operation from PowerVC and check on the HMC side how PowerVC is doing the mapping (log on the HMC to check this):
# lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
time=08/31/2016 18:53:27,"text=HSCE2124 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios1/2//fcs2,3/vios2/1//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command failed."
- Re-run the migrlpar command by hand with the corrected mapping (each fabric spread across both Virtual I/O Servers) and check that it now succeeds:
# migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i '"virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"',shared_proc_pool_id=0 -o m
# lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
time=08/31/2016 19:13:00,"text=HSCE2123 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command was executed successfully."
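To double-check the result, I like to list the NPIV mappings directly on the destination Virtual I/O Servers; a quick check run as padmin on each VIOS (output omitted here) could be:

$ lsmap -all -npiv
$ lsmap -all -npiv | grep "FC name"

After the manual migrlpar, the moved LPAR should show one client adapter backed by fcs1 and one backed by fcs2 on each VIOS, which is exactly the spread PowerVC fails to produce on its own in this case.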
One more time, don't worry about this issue, a patch is on the way. But I thought it was interesting to talk about it, just to show you how PowerVC handles this (user, key sharing, checks on the HMC).
Deep dive into the initrd
I am curious and there is no way to change that. As I wanted to know how the NovaLink installer works, I had to look into the netboot_initrd.gz file. There is a lot of interesting stuff to check in this initrd. Run the commands below on a Linux partition if you also want to have a look:
# scp nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz .
# gunzip netboot_initrd
# cpio -i < netboot_initrd
185892 blocks
The installer is located in opt/ibm/pvm-install:
# ls opt/ibm/pvm-install/data/
40mirror.pvm  debpkgs.txt  license.txt  nimclient.info  pvm-install-config.template  pvm-install-preseed.cfg  rsct-gpg-pub.key  vios_diagram.txt
# ls opt/ibm/pvm-install/bin
assignio.py        envsetup        installpvm                    monitor        postProcessing    pvmwizardmain.py  restore.py        switchnetworkcfg  vios
cfgviosnetwork.py  functions       installPVMPartitionWizard.py  network        procmem           recovery          setupinstalldisk  updatedebconf     vioscfg
chviospasswd       getnetworkinfo  ioadapter                     networkbridge  pvmconfigdata.py  removemem         setupviosinstall  updatenimsetup    welcome.py
editpvmconfig      initEnv         mirror                        nimscript      pvmtime           resetsystem       summary.py        user              wizpkg
You can for instance check what exactly the installer is doing. Take again the example of the MGMTSWITCH creation: a quick look at the switchnetworkcfg script confirms what I said above, the installer creates that vswitch itself.
Remember that I told you before that I had problems with the installation on NPIV. You can avoid installing NovaLink twice by modifying the debian-installer preseed directly in the initrd, adding a line in opt/ibm/pvm-install/data/pvm-install-preseed.cfg (you have to rebuild the initrd after doing this):
# grep bootdev opt/ibm/pvm-install/data/pvm-install-preseed.cfg
d-i grub-installer/bootdev string /dev/mapper/mpatha
# find | cpio -H newc -o > ../new_initrd_file
# gzip -9 ../new_initrd_file
# scp ../new_initrd_file.gz nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz
You can also find good examples of pvmctl commands in there:
# grep -R pvmctl *
pvmctl lv create --size $LV_SIZE --name $LV_NAME -p id=$vid
pvmctl scsi create --type lv --vg name=rootvg --lpar id=1 -p id=$vid --stor-id name=$LV_NAME
Troubleshooting
NovaLink is not PowerVC, so here is a little reminder of what I do to troubleshoot NovaLink:
- Installation troubleshooting:
# cat /var/log/pvm-install/pvminstall.log
# cat /var/log/neutron/neutron-powervc-pvm-sea-agent.log
# cat /var/log/nova/nova-compute.log
# cat /var/log/pvm/pvmctl.log
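Beyond the log files, a few pvmctl queries give a quick overview of what the NovaLink partition sees. These are the generic list actions (double-check the exact object names with pvmctl --help on your code level):

# pvmctl sys list
# pvmctl vios list
# pvmctl lpar list
# pvmctl sea list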
One last thing to add about NovaLink. One thing I like a lot is that NovaLink takes hourly and daily backups of the system and of the VIOS. These backups are stored in /var/backups/pvm:
# crontab -l
# VIOS hourly backups - at 15 past every hour except for midnight
15 1-23 * * * /usr/sbin/pvm-backup --type vios --frequency hourly
# Hypervisor hourly backups - at 15 past every hour except for midnight
15 1-23 * * * /usr/sbin/pvm-backup --type system --frequency hourly
# VIOS daily backups - at 15 past midnight
15 0 * * * /usr/sbin/pvm-backup --type vios --frequency daily
# Hypervisor daily backups - at 15 past midnight
15 0 * * * /usr/sbin/pvm-backup --type system --frequency daily
# ls -l /var/backups/pvm
total 4
drwxr-xr-x 2 root pvm_admin 4096 Sep  9 00:15 9119-MME*0265FF47B
More PowerVC tips and tricks
Let's finish this blog post with more PowerVC tips and tricks. Before giving them to you I have to warn you: none of these tricks are supported by PowerVC, so use them at your own risk OR contact your support before doing anything else. You may break and destroy everything if you are not aware of what you are doing, so please be very careful using all these tricks. YOU HAVE BEEN WARNED!
Accessing and querying the database
This first trick is funny and will allow you to query and modify the PowerVC database. Once again, do this at your own risk. One of the issues I had was strange: I do not remember exactly how it happened, but some of my LUNs that were not attached to any host were still showing an attachment count of 1, and I had no way to remove them. Even worse, someone had deleted these LUNs on the SVC side, so they had become what I call "ghost LUNs": non-existing but non-deletable (I also had to remove the storage provider related to these LUNs). The only way to fix this was to change the state to detached directly in the cinder database. Be careful, this trick only works with MariaDB.
First get the database password. Get the encrypted password from /opt/ibm/powervc/data/powervc-db.conf file and decode it to have the clear password:
# grep ^db_password /opt/ibm/powervc/data/powervc-db.conf
db_password = aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==
# python -c "from powervc_keystone.encrypthandler import EncryptHandler; print EncryptHandler().decode('aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==')"
OhnhBBS_gvbCcqHVfx2N
# mysql -u root -p cinder
Enter password:
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| backups                    |
| cgsnapshots                |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
[..]
Then get the uuid of the LUN you want to change from the PowerVC GUI, and follow the commands below:
MariaDB [cinder]> select * from volume_attachment where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
*************************** 1. row ***************************
   created_at: 2016-05-26 08:52:51
   updated_at: 2016-05-26 08:54:23
   deleted_at: 2016-05-26 08:54:23
      deleted: 1
           id: ce4238b5-ea39-4ce1-9ae7-6e305dd506b1
    volume_id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
attached_host: NULL
instance_uuid: 44c7a72c-610c-4af1-a3ed-9476746841ab
   mountpoint: /dev/sdb
  attach_time: 2016-05-26 08:52:51
  detach_time: 2016-05-26 08:54:23
  attach_mode: rw
attach_status: attached
1 row in set (0.01 sec)

MariaDB [cinder]> select * from volumes where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
*************************** 1. row ***************************
                  created_at: 2016-05-26 08:51:57
                  updated_at: 2016-05-26 08:54:23
                  deleted_at: NULL
                     deleted: 0
                          id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
                      ec2_id: NULL
                     user_id: 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9
                  project_id: 1471acf124a0479c8d525aa79b2582d0
                        host: pb01_mn_svc_qual
                        size: 1
           availability_zone: nova
                      status: available
               attach_status: attached
                scheduled_at: 2016-05-26 08:51:57
                 launched_at: 2016-05-26 08:51:59
               terminated_at: NULL
                display_name: dummy
         display_description: NULL
           provider_location: NULL
               provider_auth: NULL
                 snapshot_id: NULL
              volume_type_id: e49e9cc3-efc3-4e7e-bcb9-0291ad28df42
                source_volid: NULL
                    bootable: 0
           provider_geometry: NULL
                    _name_id: NULL
           encryption_key_id: NULL
            migration_status: NULL
          replication_status: disabled
 replication_extended_status: NULL
     replication_driver_data: NULL
         consistencygroup_id: NULL
                 provider_id: NULL
                 multiattach: 0
             previous_status: NULL
1 row in set (0.00 sec)

MariaDB [cinder]> update volume_attachment set attach_status='detached' where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update volumes set attach_status='detached' where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
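As a small sanity check of my own (not part of the original procedure), I re-read the two rows before restarting anything:

MariaDB [cinder]> select attach_status from volumes where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
MariaDB [cinder]> select attach_status from volume_attachment where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';

Both should now report detached; if they don't, the update did not match the row you intended.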
The second issue I had was about machines showing up in a "deleted" state when the reality was that the HMC had just rebooted and, for an unknown reason, these machines were seen as deleted... but they were not. Using this trick I was able to force a re-evaluation of each machine in this state:
# mysql -u root -p nova
Enter password:
MariaDB [nova]> select * from instance_health_status where health_state='WARNING';
+---------------------+---------------------+------------+---------+--------------------------------------+--------------+--------------------------------------------------------------+------------------------+
| created_at          | updated_at          | deleted_at | deleted | id                                   | health_state | reason                                                       | unknown_reason_details |
+---------------------+---------------------+------------+---------+--------------------------------------+--------------+--------------------------------------------------------------+------------------------+
| 2016-07-11 08:58:37 | NULL                | NULL       |       0 | 1af1805c-bb59-4bc9-8b6d-adeaeb4250f3 | WARNING      | [{"resource_local": "server", "display_name": "p00ww6754398", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "1af1805c-bb59-4bc9-8b6d-adeaeb4250f3"}] | |
| 2015-07-31 16:53:50 | 2015-07-31 18:49:50 | NULL       |       0 | 2668e808-10a1-425f-a272-6b052584557d | WARNING      | [{"resource_local": "server", "display_name": "multi-vol", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "2668e808-10a1-425f-a272-6b052584557d"}] | |
| 2015-08-03 11:22:38 | 2015-08-03 15:47:41 | NULL       |       0 | 2934fb36-5d91-48cd-96de-8c16459c50f3 | WARNING      | [{"resource_local": "server", "display_name": "clouddev-test-754df319-00000038", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "2934fb36-5d91-48cd-96de-8c16459c50f3"}] | |
| 2016-07-11 09:03:59 | NULL                | NULL       |       0 | 3fc42502-856b-46a5-9c36-3d0864d6aa4c | WARNING      | [{"resource_local": "server", "display_name": "p00ww3254401", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "3fc42502-856b-46a5-9c36-3d0864d6aa4c"}] | |
| 2015-07-08 20:11:48 | 2015-07-08 20:14:09 | NULL       |       0 | 54d02c60-bd0e-4f34-9cb6-9c0a0b366873 | WARNING      | [{"resource_local": "server", "display_name": "p00wb3740870", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "54d02c60-bd0e-4f34-9cb6-9c0a0b366873"}] | |
| 2015-07-31 17:44:16 | 2015-07-31 18:49:50 | NULL       |       0 | d5ec2a9c-221b-44c0-8573-d8e3695a8dd7 | WARNING      | [{"resource_local": "server", "display_name": "multi-vol-sp5", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "d5ec2a9c-221b-44c0-8573-d8e3695a8dd7"}] | |
+---------------------+---------------------+------------+---------+--------------------------------------+--------------+--------------------------------------------------------------+------------------------+
6 rows in set (0.00 sec)

MariaDB [nova]> update instance_health_status set health_state='PENDING',reason='' where health_state='WARNING';
Query OK, 6 rows affected (0.00 sec)
Rows matched: 6  Changed: 6  Warnings: 0
The ceilometer issue
When updating from PowerVC 1.3.0.1 to 1.3.1.1, PowerVC changes the database backend from DB2 to MariaDB. This is a good thing, but the update works by exporting all the data to flat files and then re-inserting it into the MariaDB database record by record. I had a huge problem with this, simply because my ceilometer database (ceilodb2) was huge, given the number of machines I have and the number of operations we have run on PowerVC since it went into production. The DB insert took more than 3 days and never finished. If you don't need the ceilometer data, my advice is to change the retention from the default 270 days to 2 hours:
# powervc-config metering event_ttl --set 2 --unit hr
# ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf
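If you want the purge to keep running without having to think about it, one option (my own habit, not something the product configures for you; adjust the binary path to wherever ceilometer-expirer lives on your installation) is a nightly cron entry on the PowerVC host:

# run the ceilometer expirer every night at 03:00 (path assumed, check it with "which ceilometer-expirer")
0 3 * * * /usr/bin/ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf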
If this is not enough and you are still experiencing problems with the update, the best way is to flush the entire table before the update:
# /opt/ibm/powervc/bin/powervc-services stop
# /opt/ibm/powervc/bin/powervc-services db2 start
# /bin/su - pwrvcdb -c "db2 drop database ceilodb2"
# /bin/su - pwrvcdb -c "db2 CREATE DATABASE ceilodb2 AUTOMATIC STORAGE YES ON /home/pwrvcdb DBPATH ON /home/pwrvcdb USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM PAGESIZE 16384 RESTRICTIVE"
# /bin/su - pwrvcdb -c "db2 connect to ceilodb2 ; db2 grant dbadm on database to user ceilometer"
# /opt/ibm/powervc/bin/powervc-dbsync ceilometer
# /bin/su - pwrvcdb -c "db2 connect TO ceilodb2; db2 CALL GET_DBSIZE_INFO '(?, ?, ?, 0)' > /tmp/ceilodb2_db_size.out; db2 terminate" > /dev/null
Multi-tenancy ... how to deal with a huge environment
As my environment grows bigger and bigger, I have faced a couple of people trying to force me to multiply the number of PowerVC machines we have. As OpenStack is a solution designed to handle both density and scalability, I said that doing this is just nonsense. Seriously, people who still believe in this have not understood anything about the cloud, OpenStack and PowerVC. Fortunately, we found a solution acceptable to everybody. As we are creating what we call "building blocks", we had to find a way to isolate one block from another. The solution for host isolation is called multi-tenancy isolation. For the storage side we are just going to play with quotas. By doing this, a user will be able to manage a couple of hosts and the associated storage (storage templates) without having the right to do anything on the others:
Before doing anything create the tenant (or project) and a user associated with it:
# cat /opt/ibm/powervc/version.properties | grep cloud_enabled
cloud_enabled = yes
# cat ~/powervcrc
export OS_USERNAME=root
export OS_PASSWORD=root
export OS_TENANT_NAME=ibm-default
export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_CACERT=/etc/pki/tls/certs/powervc.crt
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_COMPUTE_API_VERSION=2.25
export OS_NETWORK_API_VERSION=2.0
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
# source powervcrc
# openstack project create hb01
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 90d064b4abea4339acd32a8b6a8b1fdf |
| is_domain   | False                            |
| name        | hb01                             |
| parent_id   | default                          |
+-------------+----------------------------------+
# openstack role list
+----------------------------------+---------------------+
| ID                               | Name                |
+----------------------------------+---------------------+
| 1a76014f12594214a50c36e6a8e3722c | deployer            |
| 54616a8b136742098dd81eede8fd5aa8 | vm_manager          |
| 7bd6de32c14d46f2bd5300530492d4a4 | storage_manager     |
| 8260b7c3a4c24a38ba6bee8e13ced040 | deployer_restricted |
| 9b69a55c6b9346e2b317d0806a225621 | image_manager       |
| bc455ed006154d56ad53cca3a50fa7bd | admin               |
| c19a43973db148608eb71eb3d86d4735 | service             |
| cb130e4fa4dc4f41b7bb4f1fdcf79fc2 | self_service        |
| f1a0c1f9041d4962838ec10671befe33 | vm_user             |
| f8cf9127468045e891d5867ce8825d30 | viewer              |
+----------------------------------+---------------------+
# useradd hb01_admin
# openstack role add --project hb01 --user hb01_admin admin
Then associate with this tenant each host group (aggregate, in OpenStack terms) that it is allowed to use, by adding the filter_tenant_id metadata to the host group (you have to put your allowed hosts in a host group to enable this feature). For each allowed host group, add this field to its metadata (first find the tenant id):
# openstack project list
+----------------------------------+-------------+
| ID                               | Name        |
+----------------------------------+-------------+
| 1471acf124a0479c8d525aa79b2582d0 | ibm-default |
| 90d064b4abea4339acd32a8b6a8b1fdf | hb01        |
| b79b694c70734a80bc561e84a95b313d | powervm     |
| c8c42d45ef9e4a97b3b55d7451d72591 | service     |
| f371d1f29c774f2a97f4043932b94080 | project1    |
+----------------------------------+-------------+
# openstack aggregate list
+----+---------------+-------------------+
| ID | Name          | Availability Zone |
+----+---------------+-------------------+
|  1 | Default Group | None              |
| 21 | aggregate2    | None              |
| 41 | hg2           | None              |
| 43 | hb01_mn       | None              |
| 44 | hb01_me       | None              |
+----+---------------+-------------------+
# nova aggregate-set-metadata hb01_mn filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf
Metadata has been successfully updated for aggregate 43.
| Id | Name    | Availability Zone | Hosts             | Metadata
| 43 | hb01_mn | -                 | '9119MME_1009425' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=4', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70'
# nova aggregate-set-metadata hb01_me filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf
Metadata has been successfully updated for aggregate 44.
| Id | Name    | Availability Zone | Hosts             | Metadata
| 44 | hb01_me | -                 | '9119MME_0696010' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=2', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70'
To make this work, add AggregateMultiTenancyIsolation to scheduler_default_filters in the nova.conf file and restart the nova services:
# grep scheduler_default_filter /etc/nova/nova.conf
scheduler_default_filters = RamFilter,CoreFilter,ComputeFilter,RetryFilter,AvailabilityZoneFilter,ImagePropertiesFilter,ComputeCapabilitiesFilter,MaintenanceFilter,PowerVCServerGroupAffinityFilter,PowerVCServerGroupAntiAffinityFilter,PowerVCHostAggregateFilter,PowerVMNetworkFilter,PowerVMProcCompatModeFilter,PowerLMBSizeFilter,PowerMigrationLicenseFilter,PowerVMMigrationCountFilter,PowerVMStorageFilter,PowerVMIBMiMobilityFilter,PowerVMRemoteRestartFilter,PowerVMRemoteRestartSameHMCFilter,PowerVMEndianFilter,PowerVMGuestCapableFilter,PowerVMSharedProcPoolFilter,PowerVCResizeSameHostFilter,PowerVCDROFilter,PowerVMActiveMemoryExpansionFilter,PowerVMNovaLinkMobilityFilter,AggregateMultiTenancyIsolation
# powervc-services restart
We are done regarding the hosts.
Enabling quotas
To allow one user/tenant to create volumes on only one storage provider, we first need to make sure quotas are enabled, which you can check in the cinder policy file:
# grep quota /opt/ibm/powervc/policy/cinder/policy.json
    "volume_extension:quotas:show": "",
    "volume_extension:quotas:update": "rule:admin_only",
    "volume_extension:quotas:delete": "rule:admin_only",
    "volume_extension:quota_classes": "rule:admin_only",
    "volume_extension:quota_classes:validate_setup_for_nested_quota_use": "rule:admin_only",
Then set the quota to 0 for all the storage templates that are not allowed for this tenant and leave the only one you want at 10000. Easy:
# cinder --service-type volume type-list
+--------------------------------------+----------------------+-------------+-----------+
| ID                                   | Name                 | Description | Is_Public |
+--------------------------------------+----------------------+-------------+-----------+
| 53434872-a0d2-49ea-9683-15c7940b30e5 | svc2 base template   | -           | True      |
| e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 | svc1 base template   | -           | True      |
| f45469d5-df66-44cf-8b60-b226425eee4f | svc3                 | -           | True      |
+--------------------------------------+----------------------+-------------+-----------+
# cinder --service-type volume quota-update --volumes 0 --volume-type "svc2" 90d064b4abea4339acd32a8b6a8b1fdf
# cinder --service-type volume quota-update --volumes 0 --volume-type "svc3" 90d064b4abea4339acd32a8b6a8b1fdf
+-------------------------------------------------------+----------+
| Property                                              | Value    |
+-------------------------------------------------------+----------+
| backup_gigabytes                                      | 1000     |
| backups                                               | 10       |
| gigabytes                                             | 1000000  |
| gigabytes_svc2 base template                          | 10000000 |
| gigabytes_svc1 base template                          | 10000000 |
| gigabytes_svc3                                        | -1       |
| per_volume_gigabytes                                  | -1       |
| snapshots                                             | 100000   |
| snapshots_svc2 base template                          | 100000   |
| snapshots_svc1 base template                          | 100000   |
| snapshots_svc3                                        | -1       |
| volumes                                               | 100000   |
| volumes_svc2 base template                            | 100000   |
| volumes_svc1 base template                            | 0        |
| volumes_svc3                                          | 0        |
+-------------------------------------------------------+----------+
# powervc-services stop
# powervc-services start
By doing this you have enabled the isolation between two tenants. Then use the appropriate user to do the appropriate task.
PowerVC cinder above the Petabyte
Now that quotas are enabled, use this command if you want to be able to have more than one petabyte of data managed by PowerVC:
# cinder --service-type volume quota-class-update --gigabytes -1 default
# powervc-services stop
# powervc-services start
PowerVC cinder above 10000 luns
Change osapi_max_limit in cinder.conf if you want to go above the 10000 LUN limit (check every cinder configuration file; the one in cinder.conf is for the global number of volumes):
# grep ^osapi_max_limit cinder.conf
osapi_max_limit = 15000
# powervc-services stop
# powervc-services start
Snapshot and consistency group
There is a new cool feature available in the latest version of PowerVC (1.3.1.2). It allows you to create snapshots of volumes (only on SVC and Storwize for the moment). You now have the possibility to create consistency groups (groups of volumes) and to snapshot these consistency groups, allowing you for instance to take a backup of a whole volume group directly from OpenStack. I'm doing the example below with the command line because I think it is easier to understand this way than showing you the same thing with the REST API:
First create a consistency group:
# cinder --service-type volume type-list
+--------------------------------------+----------------------+-------------+-----------+
| ID                                   | Name                 | Description | Is_Public |
+--------------------------------------+----------------------+-------------+-----------+
| 53434872-a0d2-49ea-9683-15c7940b30e5 | svc2 base template   | -           | True      |
| 862b0a8e-cab4-400c-afeb-99247838f889 | p8_ssp base template | -           | True      |
| e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 | svc1 base template   | -           | True      |
| f45469d5-df66-44cf-8b60-b226425eee4f | svc3                 | -           | True      |
+--------------------------------------+----------------------+-------------+-----------+
# cinder --service-type volume consisgroup-create --name foovg_cg "svc1 base template"
+-------------------+-------------------------------------------+
| Property          | Value                                     |
+-------------------+-------------------------------------------+
| availability_zone | nova                                      |
| created_at        | 2016-09-11T21:10:58.000000                |
| description       | None                                      |
| id                | 950a5193-827b-49ab-9511-41ba120c9ebd      |
| name              | foovg_cg                                  |
| status            | creating                                  |
| volume_types      | [u'e49e9cc3-efc3-4e7e-bcb9-0291ad28df42'] |
+-------------------+-------------------------------------------+
# cinder --service-type volume consisgroup-list
+--------------------------------------+-----------+----------+
| ID                                   | Status    | Name     |
+--------------------------------------+-----------+----------+
| 950a5193-827b-49ab-9511-41ba120c9ebd | available | foovg_cg |
+--------------------------------------+-----------+----------+
Create volumes in this consistency group:
# cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol1 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
# cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol2 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
+------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property                     | Value |
+------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments                  | [] |
| availability_zone            | nova |
| bootable                     | false |
| consistencygroup_id          | 950a5193-827b-49ab-9511-41ba120c9ebd |
| created_at                   | 2016-09-11T21:23:02.000000 |
| description                  | None |
| encrypted                    | False |
| health_status                | {u'health_value': u'PENDING', u'id': u'8d078772-00b5-45fc-89c8-82c63e2c48ed', u'value_reason': u'PENDING', u'updated_at': u'2016-09-11T21:23:02.669372'} |
| id                           | 8d078772-00b5-45fc-89c8-82c63e2c48ed |
| metadata                     | {} |
| migration_status             | None |
| multiattach                  | False |
| name                         | foovg_vol2 |
| os-vol-host-attr:host        | None |
| os-vol-tenant-attr:tenant_id | 1471acf124a0479c8d525aa79b2582d0 |
| replication_status           | disabled |
| size                         | 200 |
| snapshot_id                  | None |
| source_volid                 | None |
| status                       | creating |
| updated_at                   | None |
| user_id                      | 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 |
| volume_type                  | svc1 base template |
+------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
You're now able to attach these two volumes to a machine from the PowerVC GUI:
# lsmpio -q
Device      Vendor Id  Product Id   Size  Volume Name
------------------------------------------------------------------------------
hdisk0      IBM        2145          64G  volume-aix72-44c7a72c-000000e0-
hdisk1      IBM        2145         100G  volume-snap1-dab0e2d1-130a
hdisk2      IBM        2145         100G  volume-snap2-5e863fdb-ab8c
hdisk3      IBM        2145         200G  volume-foovg_vol1-3ba0ff59-acd8
hdisk4      IBM        2145         200G  volume-foovg_vol2-8d078772-00b5
# cfgmgr
# lspv
hdisk0          00c8b2add70d7db0          rootvg          active
hdisk1          00f9c9f51afe960e          None
hdisk2          00f9c9f51afe9698          None
hdisk3          none                      None
hdisk4          none                      None
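With the two disks visible on the LPAR, you can build the volume group that the consistency group is meant to protect. Here is a minimal sketch on the AIX side (foovg and the /foovg_data filesystem are illustrative names of mine, size them to your needs):

# create the volume group on the two cinder volumes and add a jfs2 filesystem on it
# mkvg -y foovg hdisk3 hdisk4
# crfs -v jfs2 -g foovg -m /foovg_data -a size=100G -A yes
# mount /foovg_data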
Then you can create a snapshot of these two volumes. It's that easy:
# cinder --service-type volume cgsnapshot-create 950a5193-827b-49ab-9511-41ba120c9ebd
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
| created_at          | 2016-09-11T21:31:12.000000           |
| description         | None                                 |
| id                  | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
| name                | None                                 |
| status              | creating                             |
+---------------------+--------------------------------------+
# cinder --service-type volume cgsnapshot-list
+--------------------------------------+-----------+------+
| ID                                   | Status    | Name |
+--------------------------------------+-----------+------+
| 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f | available | -    |
+--------------------------------------+-----------+------+
# cinder --service-type volume cgsnapshot-show 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
| created_at          | 2016-09-11T21:31:12.000000           |
| description         | None                                 |
| id                  | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
| name                | None                                 |
| status              | available                            |
+---------------------+--------------------------------------+
Conclusion
Please keep in mind that the content of this blog post comes from real-life, production examples. I hope it helps you better understand that scalability, density, fast deployment, snapshots and multi-tenancy are features that are absolutely needed in the AIX world. As you can see, the PowerVC team is moving fast, probably faster than any customer I have ever seen, and I must admit they are right: doing this is the only way to face the Linux x86 offering. And I must confess it is damn fun to work on these things; I'm so happy to have the best of two worlds, AIX/PowerSystems and OpenStack. This is the only direction we can take if we want AIX to survive. So please stop being scared of, or unconvinced by, these solutions: they are damn good and production ready. Please face and embrace the future and stop looking at the past. As always, I hope it helps.