[Bug 2009141] Re: [UBUNTU 22.04] OS installer exits for zfcp 32G adapter with an unknown error. An error occurred during installation

Frank Heimes 2009141 at bugs.launchpad.net
Wed Apr 19 11:53:03 UTC 2023


After being back from a few days of PTO, I'm catching up on this now ...

I redid my installation (on z/VM with FCP disks, but on a non-DPM system)
and am leaving some details here:

▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
  Willkommen! Bienvenue! Welcome! Добро пожаловать!┌──────────────────[ Help ]┐
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀│ Help choosing a language │▀
  Use UP, DOWN and ENTER keys to select your langua│ Keyboard shortcuts       │
                                                   │ Enter shell              │
                [ Asturianu                        │ View error reports       │
                [ Bahasa Indonesia                 ├──────────────────────────┤
                [ Català                           │ About this installer     │
                [ Deutsch                          │ Help on SSH access       │
                [ English                          └──────────────────────────┘
                [ English (UK)                              ▸ ]█              
                [ Español                                   ▸ ]█              
                [ Français                                  ▸ ]█              
                [ Galego                                    ▸ ]█              
                [ Hrvatski                                  ▸ ]█              
                [ Latviski                                  ▸ ]               
                [ Lietuviškai                               ▸ ]               
                [ Magyar                                    ▸ ]               
                [ Nederlands                                ▸ ]               
                [ Norsk bokmål                              ▸ ]               
                [ Occitan (aprèp 1500)                      ▸ ]               
                [ Polski                                    ▸ ]               
                [ Português                                 ▸ ]▾              
                                                                              

...
Installer shell session activated.

This shell session is running inside the installer environment.  You
will be returned to the installer when this shell is exited, for
example by typing Control-D or 'exit'.

Be aware that this is an ephemeral environment.  Changes to this
environment will not survive a reboot. If the install has started, the
installed system will be mounted at /target.
root at ubuntu-server:/# cat /proc/cmdline 
%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
root at ubuntu-server:/# uname -a
Linux ubuntu-server 5.15.0-60-generic #66-Ubuntu SMP Fri Jan 20 14:30:43 UTC 2023 s390x s390x s390x GNU/Linux
root at ubuntu-server:/# snap list subiquity
Name       Version  Rev   Tracking         Publisher    Notes
subiquity  23.02.1  4383  latest/stable/…  canonical**  classic
root at ubuntu-server:/# python3 --version
Python 3.10.6
root at ubuntu-server:/# ls -lad /var/log 
drwxrwxr-x 1 root syslog 320 Apr 19 10:18 /var/log
root at ubuntu-server:/# lszdev --online
TYPE         ID                          ON   PERS  NAMES
qeth         0.0.0600:0.0.0601:0.0.0602  yes  no    enc600
generic-ccw  0.0.0009                    yes  no    
root at ubuntu-server:/# 

▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
  Zdev setup                                           ┌──────────────[ Help ]┐
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀│ Help on this screen  │▀
  0.0.0400                                             │ Keyboard shortcuts   │
  0.0.0592                                             │ Enter shell          │
                                                       │ View error reports   │
  qeth                                                 ├──────────────────────┤
  0.0.0600:0.0.0601:0.0.0602                     enc600│ About this installer │
  0.0.0603:0.0.0604:0.0.0605                           │ Help on SSH access   │
                                                       └──────────────────────┘
  dasd-eckd                                                                   
  0.0.1607                                                ▸                  █
                                                                             █
  zfcp-host                                                                  █
  0.0.f00b                               online           ▸                  █
  0x50050763060b16b6:0x4026400600000000          sdb sg1                     █
  0x50050763061b16b6:0x4026400600000000          sda sg0                     █
  0.0.f10b                               online           ▸                  █
  0x50050763060b16b6:0x4026400600000000          sdc sg2                     █
  0x50050763061b16b6:0x4026400600000000          sdd sg3                     ▾
                                                                              
                                 [ Continue   ]                               
                                 [ Back       ]                               
                                                                              
root at ubuntu-server:/# lszdev --online
TYPE         ID                                              ON   PERS  NAMES
zfcp-host    0.0.f00b                                        yes  yes   
zfcp-host    0.0.f10b                                        yes  yes   
zfcp-lun     0.0.f00b:0x50050763060b16b6:0x4026400600000000  yes  no    sdb sg1
zfcp-lun     0.0.f00b:0x50050763061b16b6:0x4026400600000000  yes  no    sda sg0
zfcp-lun     0.0.f10b:0x50050763060b16b6:0x4026400600000000  yes  no    sdc sg2
zfcp-lun     0.0.f10b:0x50050763061b16b6:0x4026400600000000  yes  no    sdd sg3
qeth         0.0.0600:0.0.0601:0.0.0602                      yes  no    enc600
generic-ccw  0.0.0009                                        yes  no    
root at ubuntu-server:/# multipath -ll
mpatha (36005076306ffd6b60000000000002606) dm-0 IBM,2107900
size=64G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 0:0:1:1074151462 sdb 8:16 active ready running
  |- 0:0:0:1074151462 sda 8:0  active ready running
  |- 1:0:1:1074151462 sdd 8:48 active ready running
  `- 1:0:0:1074151462 sdc 8:32 active ready running
root at ubuntu-server:/#

▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
  Guided storage configuration                                        [ Help ]
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
  Configure a guided storage layout, or create a custom one:                  
                                                                              
  (X)  Use an entire disk                                                    ▴
                                                                             █
       [ 0x6005076306 multipath device 64.000G                            ▾ ]█
         ffd6b6000000                                                        █
         0000002606                                                          █
                                                                             █
       [X]  Set up this disk as an LVM group                                 █
                                                                             █
            [ ]  Encrypt the LVM group with LUKS                             █
                                                                             █
                         Passphrase:                                         █
                                                                             █
                                                                              
                 Confirm passphrase:                                          
                                                                             ▾
                                                                              
                                 [ Done       ]                               
                                 [ Back       ]                               
                                                                              

▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
  Storage configuration                                               [ Help ]
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
  FILE SYSTEM SUMMARY                                                        ▴
                                                                             █
    MOUNT POINT     SIZE    TYPE      DEVICE TYPE                            █
  [ /              30.996G  new ext4  new LVM logical volume            ▸ ]  █
  [ /boot           2.000G  new ext4  new partition of multipath device ▸ ]  █
                                                                             █
                                                                             █
  AVAILABLE DEVICES                                                          █
                                                                             █
    DEVICE                                   TYPE                 SIZE        
  [ ubuntu-vg (new)                          LVM volume group    61.996G  ▸ ] 
    free space                                                   31.000G  ▸   
                                                                              
  [ Create software RAID (md) ▸ ]                                             
  [ Create volume group (LVM) ▸ ]                                             
                                                                             ▾
                                                                              
                                 [ Done       ]                               
                                 [ Reset      ]                               
                                 [ Back       ]                               
                                                                              
▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
  Storage configuration                                               [ Help ]
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
  FILE SYSTEM SUMMARY                                                        ▴
                                                                             █

   ┌────────────────────── Confirm destructive action ──────────────────────┐
   │                                                                        │
   │  Selecting Continue below will begin the installation process and      │
   │  result in the loss of data on the disks selected to be formatted.     │
   │                                                                        │
   │  You will not be able to return to this or a previous screen once the  │
   │  installation has started.                                             │
   │                                                                        │
   │  Are you sure you want to continue?                                    │
   │                                                                        │
   │                             [ No         ]                             │
   │                             [ Continue   ]                             │
   │                                                                        │
   └────────────────────────────────────────────────────────────────────────┘

                                 [ Reset      ]                               
                                 [ Back       ]                               
                                                                              
▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
  Install complete!                                                   [ Help ]
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
  ┌──────────────────────────────────────────────────────────────────────────┐
  │            configuring multipath                                        ▴│
  │            updating packages on target system                            │
  │            configuring pollinate user-agent on target                    │
  │            updating initramfs configuration                              │
  │            configuring target system bootloader                          │
  │final system configuration                                                │
  │  configuring cloud-init                                                  │
  │  calculating extra packages to install                                   │
  │  installing openssh-server                                               │
  │    curtin command system-install                                         │
  │  downloading and installing security updates                             │
  │    curtin command in-target                                              │
  │  restoring apt configuration                                             │
  │    curtin command in-target                                             █│
  │subiquity/Late/run                                                       ▾│
  └──────────────────────────────────────────────────────────────────────────┘

                               [ View full log ]
                               [ Reboot Now    ]

So all good here.

But a couple of observations / comments:

You mentioned that wiping out older or additional LUNs is not an option.
And I don't think that's needed; I was only thinking about wiping the LUN for the OS itself, and about enabling only this OS LUN during the installation. Any additional LUNs can easily be added post-install and should not be enabled at install time (here, for testing and to be on the safe side).
(I think that's easier compared to disabling them on the SAN side ...)
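
(Just as a rough sketch of what I mean - the bus-ID/WWPN/LUN triplet below is simply taken from my lszdev output further up, purely as a placeholder - adding a further zfcp LUN after the installation boils down to a couple of chzdev calls, e.g.:

 chzdev --enable zfcp-host 0.0.f10b
 chzdev --enable zfcp-lun 0.0.f10b:0x50050763060b16b6:0x4026400600000000
 lszdev --online

If I remember correctly, chzdev by default applies the change to both the active and the persistent configuration, so the LUN is back after a reboot.)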

If you do a 'normal' install, there is no need to add anything to the parmfile,
and by default the parmfile only contains a single line like this (without the quotes):
" --- "
And make sure the parmfile has the correct record format (fixed length "F 80", or variable with "Trunc=80"):
 PARMFILE UBUNTU   O1  F 80  Trunc=80 Size=4 Line=0 Col=1 Alt=0                                                                           
00000 * * * Top of File * * *
00001  ---
00002 * * * End of File * * *

The 3 dashes (" --- ") separate the installer arguments from the kernel arguments:
"<installer> --- <kernel>"
And if you do such a plain and simple install, you will be asked at the console to specify some basic network-related data, including the qeth device that should be used for the installation.
(I guess I should double-check if this is clearly stated in the docs ...)
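
To make the split concrete (the "quiet" on the right-hand side is just an arbitrary placeholder for a kernel argument; the ip= line on the left is the one discussed just below):
" ip=9.11.116.213::9.11.116.1:255.255.255.0:ilabg13:ence0f:none:9.11.227.25 --- quiet "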

If you don't want to specify this data manually at the early boot stage, you can add it to the parmfile (which I guess is what you tried).
Looking at your parmfile:
"ip=9.11.116.213::9.11.116.1:24:255.255.255.0:ilabg13:ence0f:none:9.11.227.25" 
I think it's not fully correct: the ":24" is not needed there and is actually wrong in that position.
It should be like:
"ip=9.11.116.213::9.11.116.1:255.255.255.0:ilabg13:ence0f:none:9.11.227.25 --- " 
(so in your case the gateway and DNS are different systems, right?)
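
(For reference, the field order of the kernel's/initramfs' "ip=" option is:
" ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip> "
so in the line above 9.11.116.213 is the client IP, the server IP is left empty, 9.11.116.1 is the gateway, 255.255.255.0 the netmask, ilabg13 the hostname, ence0f the interface, "none" disables autoconfiguration, and 9.11.227.25 is the DNS server - which is also why the extra ":24" prefix length doesn't fit in there.)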

But again, for figuring out any install issues, I would go with a
standard installation first (meaning: specify this data at the early boot
stage at the console).

At the early boot stage it's about "interactive netboot", which asks for network information only,
and all network devices there are qeth (except RoCE, of course) - so don't specify any other devices here (like HBAs or LUNs or whatever).
And the term "zdev" comes from the lszdev and chzdev commands, hence it was also used for the "Zdev setup" installer screen, so in my case:
"
Attempt interactive netboot from a URL?
yes no (default yes): yes
Available qeth devices:
0.0.0600 0.0.0603
zdev to activate (comma separated, optional): 
"
If you would like to see some changes in the terminology that is used here, we are of course open to that ...

I believe that the kernel command line content with all the "@" characters
(the /proc/cmdline output above) is more of a representation issue
(nevertheless, not very nice ...), especially since it works for me on my
system - even with the much larger set of kernel arguments that is needed
for a fully non-interactive "autoinstall".

What I noticed in the crash file is the following snippet:
"
 2023-04-06 19:17:21,139 DEBUG subiquitycore.utils:77 run_command ['udevadm', 'settle', '-t', '0'] exited with code 0
 2023-04-06 19:17:21,139 ERROR subiquity.server.server:424 top level error
 Traceback (most recent call last):
   File "/snap/subiquity/4383/usr/lib/python3.8/asyncio/events.py", line 81, in _run
     self._context.run(self._callback, *self._args)
   File "/snap/subiquity/4383/lib/python3.8/site-packages/subiquity/server/controllers/filesystem.py", line 682, in _udev_event
     action, dev = self._monitor.receive_device()
   File "/snap/subiquity/4383/lib/python3.8/site-packages/pyudev/monitor.py", line 400, in receive_device
     device = self.poll()
   File "/snap/subiquity/4383/lib/python3.8/site-packages/pyudev/monitor.py", line 358, in poll
     if eintr_retry_call(poll.Poll.for_events((self, "r")).poll, timeout):
   File "/snap/subiquity/4383/lib/python3.8/site-packages/pyudev/_util.py", line 164, in eintr_retry_call
     return func(*args, **kwargs)
   File "/snap/subiquity/4383/lib/python3.8/site-packages/pyudev/_os/poll.py", line 94, in poll
     return list(self._parse_events(eintr_retry_call(self._notifier.poll, timeout)))
   File "/snap/subiquity/4383/lib/python3.8/site-packages/pyudev/_os/poll.py", line 109, in _parse_events
     raise IOError("Error while polling fd: {0!r}".format(fd))
 OSError: Error while polling fd: 20
 2023-04-06 19:17:21,142 DEBUG subiquity.common.errorreport:384 generating crash report
 2023-04-06 19:17:21,143 INFO subiquity.common.errorreport:406 saving crash report 'unknown error crashed with OSError' to /var/crash/1680808641.142762184.unknown.crash
 2023-04-06 19:17:21,143 INFO root:37 start: subiquity/ErrorReporter/1680808641.142762184.unknown/add_info: 
 2023-04-06 19:17:21,143 INFO root:37 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
 2023-04-06 19:17:21,144 INFO root:37 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
 2023-04-06 19:17:21,144 INFO root:37 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
 2023-04-06 19:17:21,144 INFO root:37 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
 2023-04-06 19:17:21,144 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
InstallerServerLogInfo:
 2023-04-06 19:04:23,061 INFO subiquity:161 Starting Subiquity server revision 4383
"

That could be a problem with asyncio (I remember that there was an issue with asyncio in the past) or a race condition.
I'll ask my installer colleague to have a look at this ...
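
For reference, this is roughly the pyudev pattern that the traceback points at - a minimal sketch only (the plain pyudev API used in the same non-blocking way, not the actual subiquity code), and the buffer-overflow remark in the comments is just one possible explanation:

 # Minimal pyudev sketch - not the subiquity implementation itself.
 import pyudev

 context = pyudev.Context()
 monitor = pyudev.Monitor.from_netlink(context)
 monitor.filter_by(subsystem='block')   # only care about block device events
 monitor.start()

 # Non-blocking poll: returns a Device if an event is already queued, else None.
 # If poll() sees an error condition on the underlying netlink fd (for example
 # after the socket buffer overflowed while a large number of SCSI LUNs showed
 # up at once - an assumption, not a confirmed cause), pyudev raises
 # "OSError: Error while polling fd: ..." just like in the crash report above.
 device = monitor.poll(timeout=0)
 if device is not None:
     print(device.action, device.device_node)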

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to subiquity in Ubuntu.
https://bugs.launchpad.net/bugs/2009141

Title:
  [UBUNTU 22.04] OS installer exits for zfcp 32G adapter with an unknown
  error. An error occurred during installation

Status in Ubuntu on IBM z Systems:
  Triaged
Status in subiquity package in Ubuntu:
  Triaged

Bug description:
  Bug Description:
  Installation of Ubuntu 22.04 on s390x failed with an unknown error just after having successfully activated a zfcp HBA with Fibre-Channel-attached SCSI disks.

  I do see 0.0.100d successfully being online and having paths attached:
                     zfcp-host
                     0.0.100d                               online             ?
                     0x500173800cef0111:0x0000000000000000          sg16                                                                                               
                     0x500173800cef0111:0x0001000000000000          sdp sg17                                                                                           
                     0x500173800cef0111:0x0002000000000000          sdq sg18
  [host.ilabg13_9.11.116.213_ubuntu-22.04_ssh_installer.txt lines 41-41/9373 byte 238696/1146881 21%]

  But immediately after that, the installer reports an error:

                     An error occurred during installation

  ┌──────────────────────────────────────────────────────────────────────────┐
  │subiquity/Early/apply_autoinstall_config
  │subiquity/Reporting/apply_autoinstall_config
  │subiquity/Error/apply_autoinstall_config
  │subiquity/Userdata/apply_autoinstall_config
  │subiquity/Package/apply_autoinstall_config
  │subiquity/Debconf/apply_autoinstall_config
  │subiquity/Kernel/apply_autoinstall_config
  │subiquity/Late/apply_autoinstall_config
  ┌──────────────────────────────────────────────────────────────────────────┐
  │  Sorry, an unknown error occurred.                                     │
  │  Information is being collected from the system that will help the     │
  │  developers diagnose the report.                                       │

  It looks like there was some ascii art progress bar while data was
  collected and then the dialog updated to:

  │                       [ View full report      ]                        │
  │  If you want to help improve the installer, you can send an error      │
  │  report.                                                               │
  │                       [ Send to Canonical     ]                        │
  │                       [ Close report          ]                        │

  Next "View full report" was selected?

  ProblemType: Bug
  Architecture: s390x
  CrashDB: {'impl': 'launchpad', 'project': 'subiquity'}
  CurrentDmesg:
   [    0.093997] Linux version 5.15.0-43-generic (buildd at bos02-s390x-005) (gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #46-Ubuntu SMP Tue Jul 12 12:40:17 UTC 2022 (Ubuntu 5.15.0-43.46-generic 5.15.39)
   [    0.094001] setup: Linux is running as a z/VM guest operating system in 64-bit mode

   [    0.360759] Kernel command line:
  ip=9.11.116.213::9.11.116.1:255.255.255.0:ilabg13:ence0f:none:9.11.227.25
  https://cdimage.ubuntu.com/releases/jammy/release/ubuntu-22.04.1-live-
  server-s39

   [ 2677.003438] zfcp 0.0.100d: qdio: ZFCP on SC 10 using AI:1 QEBSM:1 PRI:1 TDD:1 SIGA: W 
   [ 2677.032397] scsi host0: zfcp
   [ 2677.060025] scsi 0:0:0:0: RAID              IBM      2810XIV-LUN-0    0000 PQ: 0 ANSI: 5
   [ 2677.061097] scsi 0:0:0:0: alua: disable for non-disk devices
   [ 2677.061134] scsi 0:0:0:0: Attached scsi generic sg0 type 12
   [ 2677.062988] scsi 0:0:0:1: Direct-Access     IBM      2810XIV          0000 PQ: 0 ANSI: 5
   [ 2677.064628] scsi 0:0:0:1: alua: supports implicit TPGS
   [ 2677.064632] scsi 0:0:0:1: alua: device naa.6001738cfc900cef0000000000013596 port group 0 rel port 301
   [ 2677.064679] sd 0:0:0:1: Attached scsi generic sg1 type 0

  ==> zfcp could successfully set 0.0.100d online and automatic LUN scan
  worked.

   [ 2677.513028] sd 0:0:16:1074020357: [sdac] Attached SCSI disk
   [ 2677.522641] sd 0:0:16:1074020356: [sdab] Attached SCSI disk
   [ 2677.522677] sd 0:0:15:1074020357: [sdaa] Attached SCSI disk
   [ 2677.522693] sd 0:0:15:1074020356: [sdz] Attached SCSI disk
   [ 2678.660373] device-mapper: multipath service-time: version 0.3.0 loaded
  Date: Wed Jan 18 21:04:03 2023
  DistroRelease: Ubuntu 22.04
  ExecutablePath: /snap/subiquity/3699/lib/python3.8/site-packages/subiquity/cmd/server.py
  InstallerServerLog:
   2023-01-18 20:23:57,901 INFO subiquity:112 Starting Subiquity server revision 3699

   2023-01-18 21:02:53,381 DEBUG root:39 start: subiquity/Zdev/GET: 
   2023-01-18 21:02:53,381 DEBUG subiquitycore.utils:64 run_command called: ['lszdev', '--quiet', '--pairs', '--columns', 'id,type,on,exists,pers,auto,failed,names']
   2023-01-18 21:02:53,405 DEBUG subiquitycore.utils:77 run_command ['lszdev', '--quiet', '--pairs', '--columns', 'id,type,on,exists,pers,auto,failed,names'] exited with code 0
   2023-01-18 21:02:53,406 DEBUG root:39 finish: subiquity/Zdev/GET: SUCCESS: 200 [{"id": "0.0.0009", "type": "generic-ccw", "on": true, "exists": true, "pers"...
   2023-01-18 21:02:53,414 INFO aiohttp.access:233  [18/Jan/2023:21:02:53 +0000] "GET /zdev HTTP/1.1" 200 2059 "-" "Python/3.8 aiohttp/3.6.2"
   2023-01-18 21:03:06,492 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
   2023-01-18 21:03:06,534 DEBUG subiquitycore.utils:77 run_command ['udevadm', 'settle', '-t', '0'] exited with code 0
   2023-01-18 21:03:06,534 DEBUG probert.network:585 event for addr_change: CHANGE {'ifindex': 2, 'flags': 768, 'family': 10, 'scope': 0, 'local': b'2002:90b:e006:116:76:54ff:fe00:5/64'}
   2023-01-18 21:03:06,534 DEBUG probert.network:717 addr_change CHANGE {'ifindex': 2, 'flags': 768, 'family': 10, 'scope': 0, 'local': b'2002:90b:e006:116:76:54ff:fe00:5/64'}
   2023-01-18 21:03:06,534 DEBUG root:39 start: subiquity/Network/_send_update: CHANGE ence0f
   2023-01-18 21:03:06,534 DEBUG subiquity.server.controllers.network:354 dev_info ence0f {'addresses': ['9.11.116.213/24'], 'gateway4': '9.11.116.1', 'nameservers': {'addresses': ['9.11.227.25']}}
   2023-01-18 21:03:06,535 DEBUG root:39 finish: subiquity/Network/_send_update: SUCCESS: CHANGE ence0f
   2023-01-18 21:03:19,283 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
   2023-01-18 21:03:19,314 DEBUG subiquitycore.utils:77 run_command ['udevadm', 'settle', '-t', '0'] exited with code 0
   2023-01-18 21:03:19,314 DEBUG probert.network:585 event for addr_change: CHANGE {'ifindex': 2, 'flags': 768, 'family': 10, 'scope': 0, 'local': b'2002:90b:e006:116:76:54ff:fe00:5/64'}
   2023-01-18 21:03:19,314 DEBUG probert.network:717 addr_change CHANGE {'ifindex': 2, 'flags': 768, 'family': 10, 'scope': 0, 'local': b'2002:90b:e006:116:76:54ff:fe00:5/64'}
   2023-01-18 21:03:19,314 DEBUG root:39 start: subiquity/Network/_send_update: CHANGE ence0f
   2023-01-18 21:03:19,314 DEBUG subiquity.server.controllers.network:354 dev_info ence0f {'addresses': ['9.11.116.213/24'], 'gateway4': '9.11.116.1', 'nameservers': {'addresses': ['9.11.227.25']}}
   2023-01-18 21:03:19,314 DEBUG root:39 finish: subiquity/Network/_send_update: SUCCESS: CHANGE ence0f
   2023-01-18 21:03:59,037 DEBUG root:39 start: subiquity/Zdev/chzdev_POST: 
   2023-01-18 21:03:59,038 DEBUG subiquitycore.utils:92 arun_command called: ['chzdev', '--enable', '0.0.100d']
   2023-01-18 21:03:59,357 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
   2023-01-18 21:03:59,373 DEBUG subiquitycore.utils:77 run_command ['udevadm', 'settle', '-t', '0'] exited with code 1
   2023-01-18 21:03:59,373 DEBUG subiquity.server.controller.filesystem:495 waiting 0.1 to let udev event queue settle
  ... <<< some time with repeated udevadm settle >>> ...
   2023-01-18 21:04:02,984 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
   2023-01-18 21:04:02,986 DEBUG subiquitycore.utils:77 run_command ['udevadm', 'settle', '-t', '0'] exited with code 1
   2023-01-18 21:04:02,986 DEBUG subiquity.server.controller.filesystem:495 waiting 0.1 to let udev event queue settle
   2023-01-18 21:04:03,029 DEBUG subiquitycore.utils:101 arun_command ['chzdev', '--enable', '0.0.100d'] exited with code 0
   2023-01-18 21:04:03,029 DEBUG subiquitycore.utils:64 run_command called: ['lszdev', '--quiet', '--pairs', '--columns', 'id,type,on,exists,pers,auto,failed,names']
   2023-01-18 21:04:03,047 DEBUG subiquitycore.utils:77 run_command ['lszdev', '--quiet', '--pairs', '--columns', 'id,type,on,exists,pers,auto,failed,names'] exited with code 0
   2023-01-18 21:04:03,052 DEBUG root:39 finish: subiquity/Zdev/chzdev_POST: SUCCESS: 200 [{"id": "0.0.0009", "type": "generic-ccw", "on": true, "exists": true, "pers"...
   2023-01-18 21:04:03,052 INFO aiohttp.access:233  [18/Jan/2023:21:03:59 +0000] "POST /zdev/chzdev?action=%22enable%22&zdev=%7B%22id%22:+%220.0.100d%22,+%22type%22:+%22zfcp-host%22,+%22on%22:+false,+%22exists%22:+true,+%22pers%22:+false,+%22auto%22:+false,+%22failed%22:+false,+%22names%22:+%22%22%7D HTTP/1.1" 200 7956 "-" "Python/3.8 aiohttp/3.6.2"
   2023-01-18 21:04:03,087 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
   2023-01-18 21:04:03,114 DEBUG subiquitycore.utils:77 run_command ['udevadm', 'settle', '-t', '0'] exited with code 0
   2023-01-18 21:04:03,114 ERROR subiquity.server.server:416 top level error
   2023-01-18 21:04:03,114 ERROR subiquity.server.server:416 top level error
   Traceback (most recent call last):
     File "/snap/subiquity/3699/usr/lib/python3.8/asyncio/events.py", line 81, in _run
       self._context.run(self._callback, *self._args)
     File "/snap/subiquity/3699/lib/python3.8/site-packages/subiquity/server/controllers/filesystem.py", line 506, in _udev_event
       action, dev = self._monitor.receive_device()
     File "/snap/subiquity/3699/lib/python3.8/site-packages/pyudev/monitor.py", line 397, in receive_device
       device = self.poll()
     File "/snap/subiquity/3699/lib/python3.8/site-packages/pyudev/monitor.py", line 357, in poll
       if eintr_retry_call(poll.Poll.for_events((self, 'r')).poll, timeout):
     File "/snap/subiquity/3699/lib/python3.8/site-packages/pyudev/_util.py", line 163, in eintr_retry_call
       return func(*args, **kwargs)
     File "/snap/subiquity/3699/lib/python3.8/site-packages/pyudev/_os/poll.py", line 97, in poll
       return list(self._parse_events(eintr_retry_call(self._notifier.poll, timeout)))
     File "/snap/subiquity/3699/lib/python3.8/site-packages/pyudev/_os/poll.py", line 112, in _parse_events
       raise IOError('Error while polling fd: {0!r}'.format(fd))
   OSError: Error while polling fd: 20
   2023-01-18 21:04:03,116 DEBUG subiquitycore.common.errorreport:384 generating crash report
   2023-01-18 21:04:03,116 INFO subiquitycore.common.errorreport:406 saving crash report 'unknown error crashed with OSError' to /var/crash/1674075843.116781473.unknown.crash
   2023-01-18 21:04:03,117 INFO root:39 start: subiquity/ErrorReporter/1674075843.116781473.unknown/add_info: 
   2023-01-18 21:04:03,117 INFO root:39 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
   2023-01-18 21:04:03,117 INFO root:39 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
   2023-01-18 21:04:03,117 INFO root:39 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
   2023-01-18 21:04:03,118 INFO root:39 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "ERROR", "confirming_tty": "", "error": {"state": "INCOMPLETE", "ba...
   2023-01-18 21:04:03,118 DEBUG subiquitycore.utils:64 run_command called: ['udevadm', 'settle', '-t', '0']
  InstallerServerLogInfo:

  InterpreterPath: /snap/subiquity/3699/usr/bin/python3.8

  UdevDb:
  ...
   P: /devices/css0/0.0.0010/0.0.100d
   L: 0
   E: DEVPATH=/devices/css0/0.0.0010/0.0.100d
   E: SUBSYSTEM=ccw
   E: DRIVER=zfcp
   E: CU_TYPE=1731
   E: CU_MODEL=03
   E: DEV_TYPE=1732
   E: DEV_MODEL=03
   E: MODALIAS=ccw:t1731m03dt1732dm03
  ...
   P: /devices/css0/0.0.0010/0.0.100d/host0/rport-0:0-0/target0:0:0/0:0:0:1/block/sda
   N: sda
   L: 0
   S: disk/by-path/ccw-0.0.100d-fc-0x500173800cef0131-lun-1
   S: disk/by-id/scsi-0IBM_2810XIV_host=ilabg13_tuc3_fcp_32G
   S: disk/by-id/scsi-36001738cfc900cef0000000000013596
   S: disk/by-id/wwn-0x6001738cfc900cef0000000000013596
   S: disk/by-uuid/da904d37-e306-4223-8fbb-9670357ca708
   S: disk/by-id/scsi-SIBM_2810XIV_6000CEF0000000000013596
   S: disk/by-id/scsi-1IBM_2810XIV_6000CEF0000000000013596
   ...
   E: DM_MULTIPATH_DEVICE_PATH=1
  ...
   P: /devices/css0/0.0.0010/0.0.100d/host0/rport-0:0-17/target0:0:17/0:0:17:1/block/sdae
   N: sdae
   L: 0
   S: disk/by-id/scsi-1IBM_FlashSystem-984026c6702d26c6-0000-007d-000294
   S: disk/by-id/scsi-SIBM_FlashSystem-9840_26c6702d26c6-0000-007d-000294
   S: disk/by-path/ccw-0.0.100d-fc-0x500507605e8b7271-lun-1
   S: disk/by-uuid/97c7452a-2f45-4f7b-9127-d63b4a34e1e4
   S: disk/by-id/scsi-36005076b19bcb49b180000007d000294
   S: disk/by-id/wwn-0x6005076b19bcb49b180000007d000294
   E: DEVPATH=/devices/css0/0.0.0010/0.0.100d/host0/rport-0:0-17/target0:0:17/0:0:17:1/block/sdae
   E: SUBSYSTEM=block
   E: DEVNAME=/dev/sdae
   E: DEVTYPE=disk
   E: DISKSEQ=43
   E: MAJOR=65
   E: MINOR=224
   E: USEC_INITIALIZED=2677514875
   E: DM_MULTIPATH_DEVICE_PATH=1
  ...
   P: /devices/css0/0.0.0010/0.0.100d/host0/rport-0:0-9/target0:0:9/0:0:9:1/block/sdo
   N: sdo
   L: 0
   S: disk/by-path/ccw-0.0.100d-fc-0x500507680b2541ba-lun-1
   S: disk/by-id/wwn-0x6005076400818089b000000000000157
   S: disk/by-id/scsi-36005076400818089b000000000000157
   S: disk/by-id/scsi-SIBM_2145_010020c0226cXX00
   S: disk/by-uuid/0fbe15bc-49fb-4a09-b9f2-269129fa9913
   E: DEVPATH=/devices/css0/0.0.0010/0.0.100d/host0/rport-0:0-9/target0:0:9/0:0:9:1/block/sdo
   E: SUBSYSTEM=block
   E: DEVNAME=/dev/sdo
   E: DEVTYPE=disk
   E: DISKSEQ=27
   E: MAJOR=8
   E: MINOR=224
   E: USEC_INITIALIZED=2677616364
   E: DM_MULTIPATH_DEVICE_PATH=1

  ==> Probably the last discovered SCSI LUN.

   P: /devices/virtual/block/dm-0
   N: dm-0
   L: 50
   S: disk/by-id/scsi-360050762198c1fc2180000000b000132
   S: disk/by-id/wwn-0x60050762198c1fc2180000000b000132
   S: disk/by-id/dm-name-mpatha
   S: disk/by-id/dm-uuid-mpath-360050762198c1fc2180000000b000132
   S: mapper/mpatha
   E: DEVPATH=/devices/virtual/block/dm-0
   E: SUBSYSTEM=block
   E: DEVNAME=/dev/dm-0
   E: DEVTYPE=disk
   E: DISKSEQ=44
   E: MAJOR=253
   E: MINOR=0
   E: USEC_INITIALIZED=2678671661
   E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
   E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
   E: DM_ACTIVATION=1
   E: DM_NAME=mpatha
   E: DM_UUID=mpath-360050762198c1fc2180000000b000132
   E: DM_SUSPENDED=0
   E: DM_UDEV_RULES=1
   E: DM_UDEV_RULES_VSN=2
   E: MPATH_SBIN_PATH=/sbin
   E: MPATH_DEVICE_READY=1

  ==> Multipath devices also got assembled.

  Canonical, why did the installer get an error?

  Does the installer really have a busy(!) waiting loop calling udevadm settle with zero timeout?
  But even if so, with the number of discovered devices and the settle finally returning with success errorlevel 0, it should just work?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/2009141/+subscriptions



