[Bug 1540407] Comment bridged from LTC Bugzilla
bugproxy
bugproxy at us.ibm.com
Fri Mar 11 13:45:11 UTC 2016
------- Comment From thorsten.diehl at de.ibm.com 2016-03-11 05:05 EDT-------
(In reply to comment #27)
> Here's my initial test with the merged multipath-tools package.
> I had the FCP devices enabled with the 0.5.0+git<hash>-1ubuntu2 package
> installed from the merge PPA and rebooted the system.
>
> After booting, I confirmed the paths were up, then used a vmcp command
> to disconnect the devices.
> I then queried multipath over a number of minutes to ensure the paths
> remain (but show faulty).
> After 15 minutes or so I contacted an admin to re-enable the FCP
> devices and observed the paths being restored in multipath.
Ryan,
congrats, well done. :-) I did an extended test with this multipathd/kpartx on a z/VM guest and in an LPAR overnight, and it ran very well. Even after 300 "off/on" cycles, all paths were "active ready running" at the end.
So, please go ahead with this. I will continue testing this when it is in xenial/main, since some of my other tests rely on this.
I will close this defect when this fix is in xenial/main.
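While a device is detached, the affected paths should stay in the
topology in a "failed faulty" state rather than disappear; a simple
watch loop (illustrative, assuming the map is named mpatha) makes this
easy to verify during such a test:
# while true; do multipath -ll mpatha; sleep 30; done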
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to multipath-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1540407
Title:
multipathd drops paths of a temporarily lost device
Status in multipath-tools package in Ubuntu:
Triaged
Bug description:
== Comment: #0 - Thorsten Diehl <thorsten.diehl at de.ibm.com> - 2016-02-01 08:57:28 ==
# uname -a
Linux s83lp31 4.4.0-1-generic #15-Ubuntu SMP Thu Jan 21 22:19:04 UTC 2016 s390x s390x s390x GNU/Linux
# dpkg -s multipath-tools|grep ^Version:
Version: 0.5.0-7ubuntu9
# cat /etc/multipath.conf
defaults {
    default_features "1 queue_if_no_path"
    user_friendly_names yes
    path_grouping_policy multibus
    dev_loss_tmo 2147483647
    fast_io_fail_tmo 5
}
blacklist {
    devnode '*'
}
blacklist_exceptions {
    devnode "^sd[a-z]+"
}
---------------------------------------
On a z Systems LPAR with a single LUN, 2 zfcp devices, 2 storage ports, and the following multipath topology (as shown by multipath -ll):
mpatha (36005076304ffc3e80000000000003050) dm-0 IBM,2107900
size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 0:0:0:1079001136 sda 8:0 active ready running
|- 0:0:1:1079001136 sdb 8:16 active ready running
|- 1:0:0:1079001136 sdc 8:32 active ready running
`- 1:0:1:1079001136 sdd 8:48 active ready running
I observed the following:
When I deconfigure one of the two zfcp devices (e.g. via chchp -c 0, or directly on the HMC), multipathd removes the two paths through that device from the path group after 10 seconds. When the zfcp device comes back, it runs through zfcp error recovery and is set up properly again, and the SCSI mid-layer objects also look fine. However, multipathd does not add the paths back to the path group.
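For the LPAR case, one off/on cycle looks roughly like this (CHPID 0.50
is only an example; use the CHPID of one of the two zfcp devices):
# chchp -c 0 0.50
(wait until multipathd reacts, then configure the CHPID back on)
# chchp -c 1 0.50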
Expected behaviour: multipathd does not remove the paths from the
topology list, but holds them as "failed faulty offline" until the
dev_loss_tmo timeout is reached (which is effectively infinite here).
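Whether the configured dev_loss_tmo actually reached the FC transport
layer can be checked in sysfs; the rport name below is only an example
and differs per system:
# cat /sys/class/fc_remote_ports/rport-0:0-0/dev_loss_tmo
This should report the configured value (2147483647 here).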
I have already discussed this with zfcp development, and it looks most
likely like a problem in multipathd rather than in zfcp or the mid-layer.
This is easy to reproduce: you need two zfcp devices, one LUN, and
preferably two ports on the storage server (WWPNs). Configure the LUN
via 2 zfcp devices * 2 WWPNs = 4 paths, as sketched below.
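A minimal setup sketch (the device numbers, WWPNs, and LUN below are
placeholders; on kernels with zfcp automatic LUN scanning the unit_add
steps are unnecessary):
# chccwdev -e 0.0.1700
# chccwdev -e 0.0.1800
# echo 0x4010401030000000 > /sys/bus/ccw/drivers/zfcp/0.0.1700/0x500507630403c3e8/unit_add
(repeat the unit_add for each zfcp device / WWPN combination to get 4 paths)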
This can also be reproduced on a z/VM guest. Instead of configuring the
CHPID off, just detach one zfcp device and re-attach it after 30-60
seconds; the problem is the same.
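On z/VM, one detach/re-attach cycle can be driven with vmcp (FCP device
number 1700 is only an example; the exact CP commands depend on the
guest configuration):
# vmcp detach 1700
# sleep 60
# vmcp 'attach 1700 to *'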
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1540407/+subscriptions