
ceph-iscsi's Introduction

ceph-iscsi

This project provides the common logic and CLI tools for creating and managing LIO gateways for Ceph.

It includes the rbd-target-api daemon, which is responsible for restoring the state of LIO following a gateway reboot/outage and for exporting a REST API used to configure the system with tools like gwcli. It replaces the existing 'target' service.

There is also a second daemon, rbd-target-gw, which exports a REST API for gathering statistics.

It also includes the CLI tool gwcli, which can be used to configure and manage the Ceph iSCSI gateway and which replaces the existing targetcli tool. gwcli utilizes the rbd-target-api server daemon to configure multiple gateways concurrently.
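As a quick illustration of that API, the running configuration can be fetched with an authenticated GET. A minimal Python sketch follows; the port (5001), the admin/admin credentials and the plain-HTTP endpoint are assumptions taken from the examples further down this page, not fixed defaults:

import requests

# Fetch the current gateway configuration from rbd-target-api.
# Port 5001 and the admin/admin credentials are assumptions based on
# the curl example shown later on this page.
response = requests.get('http://localhost:5001/api/config',
                        auth=('admin', 'admin'))
response.raise_for_status()
config = response.json()
print(config['version'], list(config['targets']))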

Usage

This package should be installed on each node that is intended to be an iSCSI gateway. The Python ceph_iscsi_config modules are used by:

  • the rbd-target-api daemon to restore LIO state at boot time
  • API/CLI configuration tools

Installation

Repository

A YUM repository with the latest releases is available at https://download.ceph.com/ceph-iscsi/{version}/rpm/{distribution}/noarch/. For example: https://download.ceph.com/ceph-iscsi/latest/rpm/el7/noarch/

Alternatively, you may download the YUM repo description at https://download.ceph.com/ceph-iscsi/latest/rpm/el7/ceph-iscsi.repo
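For example, to place it in yum's standard repository directory (the path below assumes a default yum setup):

curl -o /etc/yum.repos.d/ceph-iscsi.repo https://download.ceph.com/ceph-iscsi/latest/rpm/el7/ceph-iscsi.repo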

Packages are signed with the following key: https://download.ceph.com/keys/release.asc

Via RPM

Simply install the provided rpm with: rpm -ivh ceph-iscsi-<ver>.el7.noarch.rpm

Manually

The following packages are required by ceph-iscsi-config and must be installed before starting the rbd-target-api and rbd-target-gw services:

python-rados python-rbd python-netifaces python-rtslib python-configshell python-cryptography python-flask
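On CentOS/RHEL 7 these can typically be installed in one step, e.g. (assuming the packages are available in your configured repositories):

yum install python-rados python-rbd python-netifaces python-rtslib python-configshell python-cryptography python-flask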

To install the python package that provides the CLI tool, daemons and application logic, run the provided setup.py script, i.e. python setup.py install

If using systemd, copy the following unit files into their equivalent places on each gateway:

  • <archive_root>/usr/lib/systemd/system/rbd-target-gw.service --> /lib/systemd/system
  • <archive_root>/usr/lib/systemd/system/rbd-target-api.service --> /lib/systemd/system

Once the unit files are in place, reload the configuration with:

systemctl daemon-reload
systemctl enable rbd-target-api
systemctl enable rbd-target-gw
systemctl start rbd-target-api
systemctl start rbd-target-gw
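To verify that both daemons came up cleanly, you can check their status, e.g.:

systemctl status rbd-target-api rbd-target-gw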

Features

The functionality provided by each module in the python package is summarised below:

Module    Description
client    logic handling the create/update and remove of a NodeACL from a gateway
config    common code handling the creation and update mechanisms for the rados configuration object
gateway   definition of the iSCSI gateway (target plus target portal groups)
lun       rbd image management (create/resize), combined with mapping to the OS and LIO instance
utils     common code called by multiple modules

The rbd-target-api daemon performs the following tasks:

  1. At start-up, remove any osd blocklist entry that may apply to the running host
  2. Read the configuration object from Rados
  3. Process the configuration:
     3.1 map rbd images to the host
     3.2 add rbd images to LIO
     3.3 create the iSCSI target, TPGs and portal IPs
     3.4 define clients (NodeACLs)
     3.5 add the required rbd images to clients
  4. Export a REST API for system configuration.
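For illustration, step 2 amounts to reading a JSON object out of RADOS. Below is a minimal sketch using the python-rados bindings; the object name gateway.conf and the rbd pool match the defaults referenced elsewhere on this page, but this is not the daemon's actual code path:

import json
import rados

# Read the gateway configuration object (step 2 above) and parse it.
with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('rbd') as ioctx:
        size, _mtime = ioctx.stat('gateway.conf')
        config = json.loads(ioctx.read('gateway.conf', size))

print('config version:', config['version'])
print('targets:', list(config['targets']))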

ceph-iscsi's People

Contributors

adk3798, alexander-bauer, gangbiao, guits, hobadee, hxrforcode, idryomov, javacruft, jkjameson, kevinzs2048, ktdreyer, lenzgr, leseb, lxbsz, matthewoliver, mikechristie, pcuzner, phvalguima, ricardoasmarques, rjerk, rjfd, sathieu, smithfarm, swiftgist, tangwenji, vatelzh, vshankar, wuxingyi, xin3liang


ceph-iscsi's Issues

opensuse and centos iscsi targets

Hello,

I need to add a few openSUSE iSCSI targets because I need persistent reservations. This worked well, but when I open gwcli on the openSUSE nodes I get:

SVR-ISCSI1:~ # gwcli
/iscsi-target...w2:iscsi-igw2> ls
AttributeError: 'Gateway' object has no attribute 'portal_ip_address'

I have checked the versions of gwcli and targetcli between the two and they are the same:
gwcli - 2.7
/usr/bin/targetcli version 2.1.fb49

gwcli is working on the CentOS servers.

Any ideas?

Glen

Failed to reconfigure : Unhandled exception: 'dict_keys' object is not subscriptable

Hey guys,

My environment is the same as in #87, so this is a follow-up (but separate) issue.
I'm seeing the following error when trying to change a value in gwcli (or through curl -X PUT) on a target:

/iscsi-target...-gw:iscsi-igw> reconfigure cmdsn_depth 512
Issuing reconfigure request: controls={"cmdsn_depth": "512"}
Failed to reconfigure : Unhandled exception: 'dict_keys' object is not subscriptable
/iscsi-target...-gw:iscsi-igw>

As mentioned in #87, I'm using Fedora 30, which ships with Python 3.7.
The error seems to hint at an attempt to index into the return value of a dictionary's keys() method, which in Python 3 returns a view object that is no longer subscriptable the way a list was in Python 2.
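For reference, a minimal reproduction of that Python 2 vs. 3 difference (plain language behaviour, not ceph-iscsi code):

controls = {'cmdsn_depth': 512}
try:
    controls.keys()[0]           # works on Python 2, TypeError on Python 3
except TypeError as err:
    print(err)                   # 'dict_keys' object is not subscriptable
print(list(controls.keys())[0])  # portable on both versions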

#2 seems to be adding Python3 support. What's the test coverage on Python 3?
I can see that there is a fix for an occurrence of what I mentioned above at https://github.com/ceph/ceph-iscsi/pull/2/files#diff-b97b10799eed7439f92d29f3e043232eL1309

I tried looking through the code to find more instances of what I described, but I got lost pretty quickly as I'm not really familiar with the code structure.

Thanks!

can't add disk to client via gwcli

Error I get:
Unhandled exception: object of type 'NoneType' has no len()

From the rbd-target-api log:

2019-05-28 10:19:50,036 INFO [_internal.py:87:_log()] - ::1 - - [28/May/2019 10:19:50] "PUT /api/targetlun/iqn.2018-10.com.cruisesystem.ho:ho-ceph1 HTTP/1.1" 200 -
2019-05-28 10:19:50,047 DEBUG [common.py:402:refresh()] - config refresh - current config is {u'updated': u'2019/05/28 14:19:49', u'created': u'2019/05/28 14:04:41', u'disks': {u'rbd/data': {u'allocating_host': u'ho-ceph1', u'updated': u'2019/05/28 14:19:49', u'created': u'2019/05/28 14:15:56', u'image': u'data', u'pool_id': 4, u'controls': {}, u'backstore': u'user:rbd', u'owner': u'ho-ceph1', u'wwn': u'f9c1c24f-38ad-4c83-a4ec-dbead9297008', u'backstore_object_name': u'rbd.data', u'pool': u'rbd'}}, u'epoch': 7, u'version': 9, u'gateways': {u'ho-ceph1': {u'updated': u'2019/05/28 14:19:49', u'active_luns': 1, u'created': u'2019/05/28 14:11:39'}}, u'targets': {u'iqn.2018-10.com.cruisesystem.ho:ho-ceph1': {u'portals': {u'ho-ceph1': {u'portal_ip_addresses': [u'172.16.0.10'], u'gateway_ip_list': [u'172.16.0.10'], u'inactive_portal_ips': [], u'tpgs': 1}}, u'clients': {u'iqn.2018-10.com.cruisesystem.ho:storage01': {u'luns': {}, u'auth': {u'username': u'hoiscsiuser', u'password_encryption_enabled': True, u'mutual_username': u'', u'mutual_password_encryption_enabled': False, u'mutual_password': u'', u'password': None}, u'group_name': u''}}, u'acl_enabled': True, u'created': u'2019/05/28 14:08:51', u'disks': [u'rbd/data'], u'updated': u'2019/05/28 14:19:49', u'controls': {}, u'groups': {}, u'ip_list': [u'172.16.0.10']}}, u'discovery_auth': {u'username': u'', u'password_encryption_enabled': False, u'mutual_username': u'', u'mutual_password_encryption_enabled': False, u'mutual_password': u'', u'password': u''}}
2019-05-28 10:19:50,048 DEBUG [common.py:127:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2019-05-28 10:19:50,048 DEBUG [common.py:134:_open_ioctx()] - (_open_ioctx) connection opened
2019-05-28 10:19:50,048 DEBUG [common.py:106:_read_config_object()] - _read_config_object reading the config object
2019-05-28 10:19:50,049 DEBUG [common.py:156:_get_ceph_config()] - (_get_rbd_config) config object contains '{
"created": "2019/05/28 14:04:41",
"discovery_auth": {
"mutual_password": "",
"mutual_password_encryption_enabled": false,
"mutual_username": "",
"password": "",
"password_encryption_enabled": false,
"username": ""
},
"disks": {
"rbd/data": {
"allocating_host": "ho-ceph1",
"backstore": "user:rbd",
"backstore_object_name": "rbd.data",
"controls": {},
"created": "2019/05/28 14:15:56",
"image": "data",
"owner": "ho-ceph1",
"pool": "rbd",
"pool_id": 4,
"updated": "2019/05/28 14:19:49",
"wwn": "f9c1c24f-38ad-4c83-a4ec-dbead9297008"
}
},
"epoch": 7,
"gateways": {
"ho-ceph1": {
"active_luns": 1,
"created": "2019/05/28 14:11:39",
"updated": "2019/05/28 14:19:49"
}
},
"targets": {
"iqn.2018-10.com.cruisesystem.ho:ho-ceph1": {
"acl_enabled": true,
"clients": {
"iqn.2018-10.com.cruisesystem.ho:storage01": {
"auth": {
"mutual_password": "",
"mutual_password_encryption_enabled": false,
"mutual_username": "",
"password": null,
"password_encryption_enabled": true,
"username": "hoiscsiuser"
},
"group_name": "",
"luns": {}
}
},
"controls": {},
"created": "2019/05/28 14:08:51",
"disks": [
"rbd/data"
],
"groups": {},
"ip_list": [
"172.16.0.10"
],
"portals": {
"ho-ceph1": {
"gateway_ip_list": [
"172.16.0.10"
],
"inactive_portal_ips": [],
"portal_ip_addresses": [
"172.16.0.10"
],
"tpgs": 1
}
},
"updated": "2019/05/28 14:19:49"
}
},
"updated": "2019/05/28 14:19:49",
"version": 9
}'
2019-05-28 10:19:50,051 INFO [_internal.py:87:_log()] - ::1 - - [28/May/2019 10:19:50] "GET /api/config HTTP/1.1" 200 -
2019-05-28 10:19:50,061 DEBUG [utils.py:164:get_remote_gateways()] - this host is ho-ceph1
2019-05-28 10:19:50,061 DEBUG [utils.py:167:get_remote_gateways()] - all gateways - [u'ho-ceph1']
2019-05-28 10:19:50,061 DEBUG [utils.py:174:get_remote_gateways()] - remote gateways: []
2019-05-28 10:19:50,061 ERROR [rbd-target-api:114:unhandled_exception()] - Unhandled Exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 107, in decorated
File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 1918, in clientlun
File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/client.py", line 759, in init
if len(self.password_str) > 0 and encryption_enabled:
TypeError: object of type 'NoneType' has no len()
2019-05-28 10:19:50,064 INFO [_internal.py:87:_log()] - ::1 - - [28/May/2019 10:19:50] "PUT /api/clientlun/iqn.2018-10.com.cruisesystem.ho:ho-ceph1/iqn.2018-10.com.cruisesystem.ho:storage01 HTTP/1.1" 500 -
2019-05-28 10:23:53,102 DEBUG [utils.py:164:get_remote_gateways()] - this host is ho-ceph1
2019-05-28 10:23:53,103 DEBUG [utils.py:167:get_remote_gateways()] - all gateways - [u'ho-ceph1']
2019-05-28 10:23:53,103 DEBUG [utils.py:174:get_remote_gateways()] - remote gateways: []
2019-05-28 10:23:53,103 ERROR [rbd-target-api:114:unhandled_exception()] - Unhandled Exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 107, in decorated
File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 1918, in clientlun
File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/client.py", line 759, in init
if len(self.password_str) > 0 and encryption_enabled:
TypeError: object of type 'NoneType' has no len()
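For what it's worth, the config dump above shows "password": null reaching len(); a defensive truthiness check along these lines would avoid the crash (an illustrative sketch, not the project's actual fix):

# password_str stands in for the self.password_str that is None in the
# traceback above.
password_str = None          # i.e. "password": null in the config object
encryption_enabled = True

# len(password_str) > 0 raises TypeError when password_str is None;
# a plain truthiness test treats both None and "" as "no password set".
if password_str and encryption_enabled:
    print('password needs decrypting')
else:
    print('no password set')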

Network issues possibly causing ceph rbd cmd timeouts

Hi, can anyone tell me what is going on here?
Sorry for multiple questions in a row, but I'm really tired of this.

I was using ceph-iscsi with multipath and got errors; after googling I found out about the multipath issues. I reconfigured everything, and now I have one ESXi host, one gateway, and 3 LUNs.

The ESXi version is 6.5u3.

The following is my system info and logs.
I should mention my iSCSI server is an HP DL380 G7.

I've been struggling with the same kind of behavior for over a month, and after so much reconfiguring and tweaking I'm really, really tired.
Do I have to drop Ceph because of VMware compatibility?

Has anyone successfully used Ceph for VMware in a production environment?

uname -a
Linux ceph-client02 3.10.0-1062.4.3.el7.x86_64 #1 SMP Wed Nov 13 23:58:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

lspci
00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 13)
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
00:02.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 2 (rev 13)
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
00:04.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 4 (rev 13)
00:05.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 (rev 13)
00:06.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 6 (rev 13)
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
00:08.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 8 (rev 13)
00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 13)
00:0a.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 10 (rev 13)
00:0d.0 Host bridge: Intel Corporation Device 343a (rev 13)
00:0d.1 Host bridge: Intel Corporation Device 343b (rev 13)
00:0d.2 Host bridge: Intel Corporation Device 343c (rev 13)
00:0d.3 Host bridge: Intel Corporation Device 343d (rev 13)
00:0d.4 Host bridge: Intel Corporation 7500/5520/5500/X58 Physical Layer Port 0 (rev 13)
00:0d.5 Host bridge: Intel Corporation 7500/5520/5500 Physical Layer Port 1 (rev 13)
00:0d.6 Host bridge: Intel Corporation Device 341a (rev 13)
00:0e.0 Host bridge: Intel Corporation Device 341c (rev 13)
00:0e.1 Host bridge: Intel Corporation Device 341d (rev 13)
00:0e.2 Host bridge: Intel Corporation Device 341e (rev 13)
00:0e.3 Host bridge: Intel Corporation Device 341f (rev 13)
00:0e.4 Host bridge: Intel Corporation Device 3439 (rev 13)
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 13)
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
00:1c.2 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 3
00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.3 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIB (ICH10) LPC Interface Controller
00:1f.2 IDE interface: Intel Corporation 82801JI (ICH10 Family) 4 port SATA IDE Controller #1
01:03.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ES1000 (rev 02)
02:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 04)
02:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 04)
02:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 01)
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
0b:00.0 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 01)
0b:00.1 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 01)
0b:00.2 Fibre Channel: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) (rev 01)
0b:00.3 Fibre Channel: Emulex Corporation OneConnect 10Gb FCoE Initiator (be3) (rev 01)
3e:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
3e:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
3e:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
3e:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
3e:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
3e:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
3e:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
3e:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
3e:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
3e:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
3e:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
3e:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
3e:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
3e:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
3e:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
3e:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
3e:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
3e:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
3e:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
3e:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
3e:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
3e:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
3e:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
3e:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)
3f:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
3f:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
3f:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
3f:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
3f:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
3f:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
3f:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
3f:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
3f:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
3f:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
3f:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
3f:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
3f:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
3f:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
3f:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
3f:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
3f:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
3f:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
3f:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
3f:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
3f:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
3f:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
3f:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
3f:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 9000
inet x.x.x.x netmask 255.255.255.0 broadcast x.x.x.x
inet6 fe80::82f1:ba05:ad23:a878 prefixlen 64 scopeid 0x20
ether x:x:x:x:x txqueuelen 1000 (Ethernet)
RX packets 19094018 bytes 85766194981 (79.8 GiB)
RX errors 0 dropped 34014 overruns 1794 frame 0
TX packets 20622318 bytes 86347023115 (80.4 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_notify_conn_lost:201 rbd/rbd.cdm-03: Handler connection lost (lock state 1)
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:245: Disabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:28:42 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:271: Enabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:28:46 ceph-client02 tcmu-runner: alua_implicit_transition:570 rbd/rbd.cdm-03: Starting lock acquisition operation.
Nov 25 11:28:46 ceph-client02 tcmu-runner: tcmu_rbd_lock:762 rbd/rbd.cdm-03: Acquired exclusive lock.
Nov 25 11:28:46 ceph-client02 tcmu-runner: tcmu_acquire_dev_lock:441 rbd/rbd.cdm-03: Lock acquisition successful
Nov 25 11:29:11 ceph-client02 kernel: ABORT_TASK: Found referenced iSCSI task_tag: 4909859
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_notify_conn_lost:201 rbd/rbd.cdm-03: Handler connection lost (lock state 1)
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 kernel: ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 4909859
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:245: Disabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:29:16 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:271: Enabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:29:20 ceph-client02 tcmu-runner: alua_implicit_transition:570 rbd/rbd.cdm-03: Starting lock acquisition operation.
Nov 25 11:29:20 ceph-client02 tcmu-runner: tcmu_rbd_lock:762 rbd/rbd.cdm-03: Acquired exclusive lock.
Nov 25 11:29:20 ceph-client02 tcmu-runner: tcmu_acquire_dev_lock:441 rbd/rbd.cdm-03: Lock acquisition successful
Nov 25 11:29:50 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:29:50 ceph-client02 tcmu-runner: tcmu_notify_conn_lost:201 rbd/rbd.cdm-03: Handler connection lost (lock state 1)
Nov 25 11:29:50 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:245: Disabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:29:50 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:271: Enabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:29:54 ceph-client02 tcmu-runner: alua_implicit_transition:570 rbd/rbd.cdm-03: Starting lock acquisition operation.
Nov 25 11:29:54 ceph-client02 tcmu-runner: tcmu_rbd_lock:762 rbd/rbd.cdm-03: Acquired exclusive lock.
Nov 25 11:29:54 ceph-client02 tcmu-runner: tcmu_acquire_dev_lock:441 rbd/rbd.cdm-03: Lock acquisition successful
Nov 25 11:30:12 ceph-client02 kernel: ABORT_TASK: Found referenced iSCSI task_tag: 4910855
Nov 25 11:30:24 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:24 ceph-client02 tcmu-runner: tcmu_notify_conn_lost:201 rbd/rbd.cdm-03: Handler connection lost (lock state 1)
Nov 25 11:30:24 ceph-client02 kernel: ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 4910855
Nov 25 11:30:24 ceph-client02 kernel: ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: 4910855
Nov 25 11:30:24 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:245: Disabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:30:24 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:271: Enabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:30:28 ceph-client02 tcmu-runner: alua_implicit_transition:570 rbd/rbd.cdm-03: Starting lock acquisition operation.
Nov 25 11:30:28 ceph-client02 tcmu-runner: tcmu_rbd_lock:762 rbd/rbd.cdm-03: Acquired exclusive lock.
Nov 25 11:30:28 ceph-client02 tcmu-runner: tcmu_acquire_dev_lock:441 rbd/rbd.cdm-03: Lock acquisition successful
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_notify_conn_lost:201 rbd/rbd.cdm-03: Handler connection lost (lock state 1)
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tcmu_rbd_handle_timedout_cmd:995 rbd/rbd.cdm-03: Timing out cmd.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:245: Disabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:30:58 ceph-client02 tcmu-runner: tgt_port_grp_recovery_thread_fn:271: Enabled iscsi/iqn.2019-10.com.redhat.iscsi-gw:iscsi-igw/tpgt_1.
Nov 25 11:31:02 ceph-client02 tcmu-runner: alua_implicit_transition:570 rbd/rbd.cdm-03: Starting lock acquisition operation.
Nov 25 11:31:02 ceph-client02 tcmu-runner: tcmu_rbd_lock:762 rbd/rbd.cdm-03: Acquired exclusive lock.
Nov 25 11:31:02 ceph-client02 tcmu-runner: tcmu_acquire_dev_lock:441 rbd/rbd.cdm-03: Lock acquisition successful
Nov 25 11:31:04 ceph-client02 kernel: ABORT_TASK: Found referenced iSCSI task_tag: 4913273
Nov 25 11:31:12 ceph-client02 kernel: INFO: task jbd2/rbd0-8:30759 blocked for more than 120 seconds.
Nov 25 11:31:12 ceph-client02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 25 11:31:12 ceph-client02 kernel: jbd2/rbd0-8 D ffffa0ec6a3fd230 0 30759 2 0x00000080
Nov 25 11:31:12 ceph-client02 kernel: Call Trace:
Nov 25 11:31:12 ceph-client02 kernel: [] ? task_rq_unlock+0x20/0x20
Nov 25 11:31:12 ceph-client02 kernel: [] schedule+0x29/0x70
Nov 25 11:31:12 ceph-client02 kernel: [] jbd2_journal_commit_transaction+0x23c/0x19f0 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] ? account_entity_dequeue+0xae/0xd0
Nov 25 11:31:12 ceph-client02 kernel: [] ? dequeue_task_fair+0x41e/0x660
Nov 25 11:31:12 ceph-client02 kernel: [] ? __switch_to+0xce/0x580
Nov 25 11:31:12 ceph-client02 kernel: [] ? wake_up_atomic_t+0x30/0x30
Nov 25 11:31:12 ceph-client02 kernel: [] ? __schedule+0x448/0x9c0
Nov 25 11:31:12 ceph-client02 kernel: [] ? try_to_del_timer_sync+0x5e/0x90
Nov 25 11:31:12 ceph-client02 kernel: [] kjournald2+0xc9/0x260 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] ? wake_up_atomic_t+0x30/0x30
Nov 25 11:31:12 ceph-client02 kernel: [] ? commit_timeout+0x10/0x10 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] kthread+0xd1/0xe0
Nov 25 11:31:12 ceph-client02 kernel: [] ? insert_kthread_work+0x40/0x40
Nov 25 11:31:12 ceph-client02 kernel: [] ret_from_fork_nospec_begin+0x21/0x21
Nov 25 11:31:12 ceph-client02 kernel: [] ? insert_kthread_work+0x40/0x40
Nov 25 11:31:12 ceph-client02 kernel: INFO: task kworker/u65:0:30791 blocked for more than 120 seconds.
Nov 25 11:31:12 ceph-client02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 25 11:31:12 ceph-client02 kernel: kworker/u65:0 D ffffa0f84a37c1c0 0 30791 2 0x00000080
Nov 25 11:31:12 ceph-client02 kernel: Workqueue: writeback bdi_writeback_workfn (flush-252:0)
Nov 25 11:31:12 ceph-client02 kernel: Call Trace:
Nov 25 11:31:12 ceph-client02 kernel: [] schedule+0x29/0x70
Nov 25 11:31:12 ceph-client02 kernel: [] wait_transaction_locked+0x85/0xd0 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] ? wake_up_atomic_t+0x30/0x30
Nov 25 11:31:12 ceph-client02 kernel: [] add_transaction_credits+0x268/0x2f0 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] ? ___slab_alloc+0x3ac/0x4f0
Nov 25 11:31:12 ceph-client02 kernel: [] start_this_handle+0x1a1/0x430 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] ? kmem_cache_alloc+0x1c2/0x1f0
Nov 25 11:31:12 ceph-client02 kernel: [] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Nov 25 11:31:12 ceph-client02 kernel: [] ? ext4_writepages+0x43c/0xcf0 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_writepages+0x43c/0xcf0 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ? generic_writepages+0x58/0x80
Nov 25 11:31:12 ceph-client02 kernel: [] do_writepages+0x21/0x50
Nov 25 11:31:12 ceph-client02 kernel: [] __writeback_single_inode+0x40/0x260
Nov 25 11:31:12 ceph-client02 kernel: [] ? wake_up_bit+0x25/0x30
Nov 25 11:31:12 ceph-client02 kernel: [] writeback_sb_inodes+0x1c4/0x430
Nov 25 11:31:12 ceph-client02 kernel: [] __writeback_inodes_wb+0x9f/0xd0
Nov 25 11:31:12 ceph-client02 kernel: [] wb_writeback+0x263/0x2f0
Nov 25 11:31:12 ceph-client02 kernel: [] ? get_nr_inodes+0x4c/0x70
Nov 25 11:31:12 ceph-client02 kernel: [] bdi_writeback_workfn+0x2cb/0x460
Nov 25 11:31:12 ceph-client02 kernel: [] process_one_work+0x17f/0x440
Nov 25 11:31:12 ceph-client02 kernel: [] worker_thread+0x126/0x3c0
Nov 25 11:31:12 ceph-client02 kernel: [] ? manage_workers.isra.26+0x2a0/0x2a0
Nov 25 11:31:12 ceph-client02 kernel: [] kthread+0xd1/0xe0
Nov 25 11:31:12 ceph-client02 kernel: [] ? insert_kthread_work+0x40/0x40
Nov 25 11:31:12 ceph-client02 kernel: [] ret_from_fork_nospec_begin+0x21/0x21
Nov 25 11:31:12 ceph-client02 kernel: [] ? insert_kthread_work+0x40/0x40
Nov 25 11:31:12 ceph-client02 kernel: INFO: task fio:31269 blocked for more than 120 seconds.
Nov 25 11:31:12 ceph-client02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 25 11:31:12 ceph-client02 kernel: fio D ffffa0f8690ce2a0 0 31269 1 0x00000084
Nov 25 11:31:12 ceph-client02 kernel: Call Trace:
Nov 25 11:31:12 ceph-client02 kernel: [] schedule+0x29/0x70
Nov 25 11:31:12 ceph-client02 kernel: [] schedule_timeout+0x221/0x2d0
Nov 25 11:31:12 ceph-client02 kernel: [] ? blk_flush_plug_list+0xce/0x230
Nov 25 11:31:12 ceph-client02 kernel: [] io_schedule_timeout+0xad/0x130
Nov 25 11:31:12 ceph-client02 kernel: [] wait_for_completion_io+0xfd/0x140
Nov 25 11:31:12 ceph-client02 kernel: [] ? wake_up_state+0x20/0x20
Nov 25 11:31:12 ceph-client02 kernel: [] blkdev_issue_zeroout+0x270/0x280
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_issue_zeroout+0x32/0x40 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_ext_zeroout+0x2f/0x40 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_ext_handle_unwritten_extents+0xa0b/0xc00 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_ext_map_blocks+0x8cf/0xf60 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_map_blocks+0x155/0x6e0 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ? kmem_cache_alloc+0x1c2/0x1f0
Nov 25 11:31:12 ceph-client02 kernel: [] ? alloc_buffer_head+0x21/0x60
Nov 25 11:31:12 ceph-client02 kernel: [] _ext4_get_block+0x1df/0x220 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_get_block+0x16/0x20 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] __block_write_begin_int+0x198/0x650
Nov 25 11:31:12 ceph-client02 kernel: [] ? kmem_cache_alloc+0x1c2/0x1f0
Nov 25 11:31:12 ceph-client02 kernel: [] ? _ext4_get_block+0x220/0x220 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ? ext4_write_begin+0x116/0x440 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] __block_write_begin+0x11/0x20
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_write_begin+0x18f/0x440 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_da_write_begin+0x2ae/0x360 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] generic_file_buffered_write+0x10f/0x270
Nov 25 11:31:12 ceph-client02 kernel: [] __generic_file_aio_write+0x1e2/0x400
Nov 25 11:31:12 ceph-client02 kernel: [] generic_file_aio_write+0x59/0xa0
Nov 25 11:31:12 ceph-client02 kernel: [] ext4_file_write+0xd2/0x1e0 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] ? __sb_start_write+0x58/0x120
Nov 25 11:31:12 ceph-client02 kernel: [] ? security_file_permission+0x27/0xa0
Nov 25 11:31:12 ceph-client02 kernel: [] ? ext4_write_checks.isra.8+0x150/0x150 [ext4]
Nov 25 11:31:12 ceph-client02 kernel: [] do_io_submit+0x3e3/0x8a0
Nov 25 11:31:12 ceph-client02 kernel: [] SyS_io_submit+0x10/0x20
Nov 25 11:31:12 ceph-client02 kernel: [] system_call_fastpath+0x25/0x2a
Nov 25 11:31:12 ceph-client02 kernel: [] ? system_call_after_swapgs+0xae/0x146
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 4913273
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Found referenced iSCSI task_tag: 4913274
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 4913274
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Found referenced iSCSI task_tag: 4913275
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: 4913275
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Found referenced iSCSI task_tag: 4913279
Nov 25 11:31:28 ceph-client02 kernel: ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: 4913279
Nov 25 11:31:30 ceph-client02 kernel: Unable to locate ITT: 0x004af879 on CID: 0
Nov 25 11:31:30 ceph-client02 kernel: Unable to locate RefTaskTag: 0x004af879 on CID: 0.
Nov 25 11:31:30 ceph-client02 kernel: Unable to locate ITT: 0x004af87a on CID: 0
Nov 25 11:31:30 ceph-client02 kernel: Unable to locate RefTaskTag: 0x004af87a on CID: 0.

Cannot create image: Image rbd/new3 does not exist

I used the REST API to create an image as below, but it failed with the feedback below:

[root@node11 ~]# curl --insecure --user admin:admin -d mode=create -d size=10g -d pool=rbd -X PUT http://10.10.10.10:5001/api/disk/rbd/new3
{
  "message": "Image rbd/new3 does not exist"
}

All functions were normal with the old versions:

ceph-iscsi-cli.noarch = 2.6-1.el7.centos
ceph-iscsi-config.noarch = 2.5-1.el7.centos
python-rtslib.noarch = 2.1.67-1
targetcli-fb.noarch = 2.1.fb48-1
tcmu-runner = 1.3.0

I updated tools recently and now they are:

ceph-iscsi=3.0
rtslib-fb=2.1.fb69
targetcli-fb=2.1.fb49

`gwcli` is not working with ceph octopus

Since PR ceph/ceph#29493, the output of ceph status --format json has changed.

When trying to use gwcli with ceph octopus (master branch), I get the following error:

vagrant@node1:~> sudo gwcli -d
Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
Traceback (most recent call last):
  File "/usr/bin/gwcli", line 4, in <module>
    __import__('pkg_resources').run_script('ceph-iscsi==3.2', 'gwcli')
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 661, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1448, in run_script
    exec(script_code, namespace, namespace)
  File "/usr/lib/python3.6/site-packages/ceph_iscsi-3.2-py3.6.egg/EGG-INFO/scripts/gwcli", line 194, in <module>
  File "/usr/lib/python3.6/site-packages/ceph_iscsi-3.2-py3.6.egg/EGG-INFO/scripts/gwcli", line 99, in main
  File "/usr/lib/python3.6/site-packages/ceph_iscsi-3.2-py3.6.egg/gwcli/gateway.py", line 57, in __init__
  File "/usr/lib/python3.6/site-packages/ceph_iscsi-3.2-py3.6.egg/gwcli/ceph.py", line 37, in __init__
  File "/usr/lib/python3.6/site-packages/ceph_iscsi-3.2-py3.6.egg/gwcli/ceph.py", line 89, in __init__
  File "/usr/lib/python3.6/site-packages/ceph_iscsi-3.2-py3.6.egg/gwcli/ceph.py", line 325, in __init__
KeyError: 'osdmap'
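For context, the KeyError is consistent with gwcli hard-coding the old nesting of the osdmap section. The shapes below are an assumption for illustration only, not the exact before/after output of ceph status --format json:

# Hypothetical before/after layouts showing how a hard-coded nested
# lookup raises KeyError once the wrapper key is dropped.
old_status = {'osdmap': {'osdmap': {'num_osds': 3, 'num_up_osds': 3}}}
new_status = {'osdmap': {'num_osds': 3, 'num_up_osds': 3}}

def num_osds(status):
    osdmap = status['osdmap']
    # Tolerate both layouts instead of assuming the nested one.
    return osdmap.get('osdmap', osdmap)['num_osds']

assert num_osds(old_status) == num_osds(new_status) == 3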

Python3 - KeyError: 'pool' when 'refresh' is invoked

Hey Guys,

Creating a disk in gwcli succeeds with "ok" but then nosedives with a KeyError: 'pool'.

/disks> create rbd/esxi 100G
user provided pool/image format request
CMD: /disks/ create pool=rbd image=esxi size=100G count=1
pool 'rbd' is ok to use
Creating/mapping disk rbd/esxi
Issuing disk create request
- LUN(s) ready on all gateways
ok
Updating UI for the new disk(s)
Traceback (most recent call last):
  File "/usr/local/bin/gwcli", line 4, in <module>
    __import__('pkg_resources').run_script('ceph-iscsi==3.0', 'gwcli')
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 666, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1453, in run_script
    exec(script_code, namespace, namespace)
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/EGG-INFO/scripts/gwcli", line 194, in <module>
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/EGG-INFO/scripts/gwcli", line 125, in main
  File "/usr/lib/python3.7/site-packages/configshell_fb/shell.py", line 905, in run_interactive
    self._cli_loop()
  File "/usr/lib/python3.7/site-packages/configshell_fb/shell.py", line 734, in _cli_loop
    self.run_cmdline(cmdline)
  File "/usr/lib/python3.7/site-packages/configshell_fb/shell.py", line 848, in run_cmdline
    self._execute_command(path, command, pparams, kparams)
  File "/usr/lib/python3.7/site-packages/configshell_fb/shell.py", line 823, in _execute_command
    result = target.execute_command(command, pparams, kparams)
  File "/usr/lib/python3.7/site-packages/configshell_fb/node.py", line 1406, in execute_command
    return method(*pparams, **kparams)
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 261, in ui_command_create
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 355, in create_disk
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 601, in __init__
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 605, in refresh
KeyError: 'pool'

The odd thing is that, from that point on, invoking gwcli immediately crashes with the same error.

[root@ceph-igw01 ~]# gwcli -d
Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
Refreshing disk information from the config object
- Scanning will use 8 scan threads
- rbd image scan complete: 0s
Traceback (most recent call last):
  File "/usr/local/bin/gwcli", line 4, in <module>
    __import__('pkg_resources').run_script('ceph-iscsi==3.0', 'gwcli')
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 666, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1453, in run_script
    exec(script_code, namespace, namespace)
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/EGG-INFO/scripts/gwcli", line 194, in <module>
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/EGG-INFO/scripts/gwcli", line 105, in main
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/gateway.py", line 65, in refresh
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 139, in refresh
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 601, in __init__
  File "/usr/local/lib/python3.7/site-packages/ceph_iscsi-3.0-py3.7.egg/gwcli/storage.py", line 605, in refresh
KeyError: 'pool'

It's only fixable by completely destroying the pool and starting over.

Another snag I hit with Python 3 is the systemd files:

python3 setup.py install

installs the binaries under /usr/local/bin, but the provided systemd units invoke binaries in /usr/bin. It's an easy manual fix; just wanted to let you know.
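For reference, a workaround consistent with the install log shown in a later report on this page is to point the scripts at /usr/bin explicitly:

python3 setup.py install --install-scripts=/usr/bin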

A bug when sending multiple requests to clientlun in parallel.

If you send multiple requests to add or delete disks for a client in parallel, only one disk is added or deleted and the other requests are lost. Each thread builds its own image_list and writes it back to the client, so the last update overrides the previous ones. I suggest adding a lock to the clientlun function; or is there another way?
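For illustration, one way to serialize the handler as suggested above; a minimal sketch in which the function name and arguments are stand-ins for the real clientlun endpoint, not the daemon's actual code:

import threading
from functools import wraps

# Module-level lock so concurrent clientlun requests are applied one at a
# time, instead of each thread writing back its own stale image_list.
_clientlun_lock = threading.Lock()

def serialized(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        with _clientlun_lock:
            return func(*args, **kwargs)
    return wrapper

@serialized
def clientlun(target_iqn, client_iqn):
    # read the config, rebuild the image_list, update the client ...
    pass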

can't handle discovery auth requests when discovery auth is disabled

The Ceph iSCSI gateway doesn't allow target discovery when discovery_auth is disabled but the client still sends discovery auth requests:

Apr 18 13:05:01 ceiscsi0 kernel: CHAP user or password not set for Initiator ACL
Apr 18 13:05:01 ceiscsi0 kernel: Security negotiation failed.
Apr 18 13:05:01 ceiscsi0 kernel: iSCSI Login negotiation failed.

This is especially annoying with oVirt, where you can only give one set of credentials for both discovery and target auth, and discovery auth requests are always sent. When I don't want the same credentials for both phases on the gateway and I disable discovery_auth, oVirt can't log in. Other iSCSI vendors (e.g. FreeNAS) don't have this limitation.

I don't know if this is the right place to report it, but I would very much appreciate if this could be resolved.

thank you

Upgrading to format 4 fails because config doesn't have "controls" entry

Trying to upgrade a test cluster to the 3.0 release and it fails here: https://github.com/ceph/ceph-iscsi/blob/3.0/ceph_iscsi_config/common.py#L202

Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]: Traceback (most recent call last):
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:   File "/usr/bin/rbd-target-api", line 4, in <module>
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:     __import__('pkg_resources').run_script('ceph-iscsi==3.0', 'rbd-target-api')
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:   File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 666, in run_script
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:     self.require(requires)[0].run_script(script_name, ns)
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:   File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1453, in run_script
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:     exec(script_code, namespace, namespace)
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:   File "/usr/local/lib/python2.7/dist-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 2691, in <module>
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:     
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:   File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/common.py", line 87, in __init__
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]:   File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/common.py", line 202, in _upgrade_config
Apr 02 20:49:58 new-croit-host-C0DE01 rbd-target-api[2735]: KeyError: 'controls'
Apr 02 20:49:58 new-croit-host-C0DE01 systemd[1]: rbd-target-api.service: Main proces

My gateway.conf is version 3 and doesn't have a 'controls' field.

I cannot stop rbd-target-gw after I stopped tcmu-runner.

centos 7.6.1810
tcmu-runner 1.4.0
When I killed tcmu-runner, rbd-target-gw could not be stopped.
rbd-target-gw hangs in the method lio.drop_lun_maps(config, False).

[store@server1 ~]$ sudo cat /proc/73563/stack 
[<ffffffffc0426bc4>] tcmu_netlink_event+0x334/0x4a0 [target_core_user]
[<ffffffffc0427adc>] tcmu_destroy_device+0x5c/0x90 [target_core_user]
[<ffffffffc03d8334>] target_free_device+0xb4/0x120 [target_core_mod]
[<ffffffffc03d22d5>] target_core_dev_release+0x15/0x20 [target_core_mod]
[<ffffffff8e8d131a>] config_item_release+0x6a/0xf0
[<ffffffff8e8d13cc>] config_item_put+0x2c/0x30
[<ffffffff8e8cf4bb>] configfs_rmdir+0x1eb/0x310
[<ffffffff8e84e1dc>] vfs_rmdir+0xdc/0x150
[<ffffffff8e853681>] do_rmdir+0x1f1/0x220
[<ffffffff8e8548b6>] SyS_rmdir+0x16/0x20
[<ffffffff8ed74ddb>] system_call_fastpath+0x22/0x27
[<ffffffffffffffff>] 0xffffffffffffffff

run rbd-target-api, "No module named gateway" error

[root@node3 ~]# rbd-target-api
Traceback (most recent call last):
File "/usr/bin/rbd-target-api", line 5, in <module>
pkg_resources.run_script('ceph-iscsi==3.2', 'rbd-target-api')
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1462, in run_script
exec_(script_code, namespace, namespace)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 41, in exec_
exec("""exec code in globs, locs""")
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.2-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 38, in <module>
File "/root/ceph-iscsi/build/scripts-2.7/gwcli.py", line 15, in <module>
from gwcli.gateway import ISCSIRoot
ImportError: No module named gateway

Use hosts, host-groups in iscsi target

Hi all.

I set up 2 ceph-iscsi gateways and 1 iSCSI initiator.

Ceph and ceph-iscsi versions:

ceph-iscsi-config-2.6-2.6.el7.noarch
python-ceph-argparse-14.1.0-0.el7.x86_64
libcephfs2-14.1.0-0.el7.x86_64
ceph-iscsi-cli-2.7-2.7.el7.noarch
python-cephfs-14.1.0-0.el7.x86_64
ceph-common-14.1.0-0.el7.x86_64
ceph-iscsi-tools-2.1-2.1.el7.noarch

tcmu, rtslib, targetcli versions:

python-rtslib-2.1.fb67-2.5.noarch
libtcmu-1.4.0-106.gd17d24e.el7.x86_64
targetcli-2.1.fb47-0.1.el7.noarch
tcmu-runner-1.4.0-106.gd17d24e.el7.x86_64


After creating 2 hosts, I discovered the iSCSI target and logged in. The iSCSI initiator maps all LUNs.


I want each initiator to use one host entry.
How can I restrict LUNs based on hosts and host-groups?

Thanks.

I got an error when using gwcli to create an RBD image

I tried to create an RBD image in gwcli but I get this error:
/disks> create pool=rbd image=rr size=100G
Failed : disk create/update failed on node1. Unhandled exception: __init__() got an unexpected keyword argument 'control'
My Ceph cluster version is Mimic 13.2.5 stable,
with ceph-iscsi 3.0 plus patch e8550d7.

My OS: kernel 3.10.0-957.1.3.el7.x86_64, CentOS 7.6.1810.

Upgrading from 2.7 to 3.1 fails

Upgrading from 2.7 to 3.1 leaves me with a gateway.conf with no gateway entries:

This problem is new with config v10; our builds on 4fb1f0c work just fine.

{
    "created": "2019/06/28 15:13:19",
    "discovery_auth": {
        "mutual_password": "",
        "mutual_password_encryption_enabled": false,
        "mutual_username": "",
        "password": "",
        "password_encryption_enabled": false,
        "username": ""
    },
    "disks": {
        "rbd/test": {
            "backstore": "user:rbd",
            "backstore_object_name": "rbd.test",
            "controls": {},
            "created": "2019/06/28 15:14:36",
            "image": "test",
            "owner": "server-2",
            "pool": "rbd",
            "pool_id": 1,
            "updated": "2019/06/28 21:27:18",
            "wwn": "ca848c29-00b3-423a-ac79-3273e5b62075"
        },
        "rbd/test2": {
            "backstore": "user:rbd",
            "backstore_object_name": "rbd.test2",
            "controls": {},
            "created": "2019/06/28 15:14:42",
            "image": "test2",
            "owner": "server-1",
            "pool": "rbd",
            "pool_id": 1,
            "updated": "2019/06/28 21:27:18",
            "wwn": "8bf6c1f0-2fba-497f-8281-1d6b1b933711"
        }
    },
    "epoch": 1381,
    "gateways": {},
    "targets": {
        "iqn.2017-01.io.croit.iscsi:ceph-gateway": {
            "acl_enabled": true,
            "clients": {
                "iqn.1998-01.com.vmware:esxi-0e1df5fb": {
                    "auth": {
                        "mutual_password": "",
                        "mutual_password_encryption_enabled": false,
                        "mutual_username": "",
                        "password": "testtesttest",
                        "password_encryption_enabled": false,
                        "username": "testtest"
                    },
                    "group_name": "",
                    "luns": {
                        "rbd/test": {
                            "lun_id": 0
                        }
                    }
                },
                "iqn.2111-02.0.0.2.4:lol2": {
                    "auth": {
                        "mutual_password": "",
                        "mutual_password_encryption_enabled": false,
                        "mutual_username": "",
                        "password": "testtesttest",
                        "password_encryption_enabled": false,
                        "username": "testtest"
                    },
                    "group_name": "",
                    "luns": {
                        "rbd/test": {
                            "lun_id": 0
                        }
                    }
                }
            },
            "controls": {},
            "created": "2019/06/28 21:03:52",
            "disks": [
                "rbd/test2",
                "rbd/test"
            ],
            "groups": {},
            "ip_list": [
                "10.0.0.101",
                "10.0.0.102"
            ],
            "portals": {},
            "updated": "2019/06/28 21:27:18"
        }
    },
    "updated": "2019/06/28 21:27:18",
    "version": 10
}

ceph-iscsi on Ubuntu / no attribute cluster

Trying to get it running on Ubuntu 19.10 / Eoan.

The gw service is running, but the API has some problems:

server03 /opt/ceph-iscsi # /usr/bin/rbd-target-api
Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 4, in <module>
    __import__('pkg_resources').run_script('ceph-iscsi==3.3', 'rbd-target-api')
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 666, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1469, in run_script
    exec(script_code, namespace, namespace)
  File "/usr/local/lib/python3.7/dist-packages/ceph_iscsi-3.3-py3.7.egg/EGG-INFO/scripts/rbd-target-api", line 2916, in <module>
  File "/usr/local/lib/python3.7/dist-packages/ceph_iscsi-3.3-py3.7.egg/ceph_iscsi_config/common.py", line 80, in __init__
  File "/usr/local/lib/python3.7/dist-packages/ceph_iscsi-3.3-py3.7.egg/ceph_iscsi_config/common.py", line 34, in __init__
  File "rados.pyx", line 625, in rados.Rados.__init__
  File "rados.pyx", line 516, in rados.requires.wrapper.validate_func
  File "rados.pyx", line 665, in rados.Rados.__setup
rados.Error: rados_initialize failed with error code: -22
Exception ignored in: <function CephCluster.__del__ at 0x7f28c80210e0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/ceph_iscsi-3.3-py3.7.egg/ceph_iscsi_config/common.py", line 42, in __del__
AttributeError: 'CephCluster' object has no attribute 'cluster'

Install Log:

server03 /opt/ceph-iscsi # python3 setup.py install --install-scripts=/usr/bin
running install
running bdist_egg
running egg_info
writing ceph_iscsi.egg-info/PKG-INFO
writing dependency_links to ceph_iscsi.egg-info/dependency_links.txt
writing top-level names to ceph_iscsi.egg-info/top_level.txt
reading manifest file 'ceph_iscsi.egg-info/SOURCES.txt'
writing manifest file 'ceph_iscsi.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/gateway_object.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/group.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/settings.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/common.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/target.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/client.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/lio.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/alua.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/__init__.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/backstore.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/discovery.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/gateway.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/metrics.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/utils.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/lun.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
copying build/lib/ceph_iscsi_config/gateway_setting.py -> build/bdist.linux-x86_64/egg/ceph_iscsi_config
creating build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/ceph.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/client.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/node.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/__init__.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/storage.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/gateway.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/utils.py -> build/bdist.linux-x86_64/egg/gwcli
copying build/lib/gwcli/hostgroup.py -> build/bdist.linux-x86_64/egg/gwcli
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway_object.py to gateway_object.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/group.py to group.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/settings.py to settings.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/common.py to common.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/target.py to target.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/client.py to client.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/lio.py to lio.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/alua.py to alua.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/backstore.py to backstore.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/discovery.py to discovery.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py to gateway.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/metrics.py to metrics.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/utils.py to utils.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/lun.py to lun.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway_setting.py to gateway_setting.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/ceph.py to ceph.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/client.py to client.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/node.py to node.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/storage.py to storage.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/gateway.py to gateway.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/utils.py to utils.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/gwcli/hostgroup.py to hostgroup.cpython-37.pyc
installing package data to build/bdist.linux-x86_64/egg
running install_data
creating build/bdist.linux-x86_64/egg/EGG-INFO
installing scripts to build/bdist.linux-x86_64/egg/EGG-INFO/scripts
running install_scripts
running build_scripts
creating build/bdist.linux-x86_64/egg/EGG-INFO/scripts
copying build/scripts-3.7/rbd-target-gw.py -> build/bdist.linux-x86_64/egg/EGG-INFO/scripts
copying build/scripts-3.7/rbd-target-api.py -> build/bdist.linux-x86_64/egg/EGG-INFO/scripts
copying build/scripts-3.7/gwcli.py -> build/bdist.linux-x86_64/egg/EGG-INFO/scripts
changing mode of build/bdist.linux-x86_64/egg/EGG-INFO/scripts/rbd-target-gw.py to 755
changing mode of build/bdist.linux-x86_64/egg/EGG-INFO/scripts/rbd-target-api.py to 755
changing mode of build/bdist.linux-x86_64/egg/EGG-INFO/scripts/gwcli.py to 755
copying ceph_iscsi.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying ceph_iscsi.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying ceph_iscsi.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying ceph_iscsi.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating 'dist/ceph_iscsi-3.3-py3.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing ceph_iscsi-3.3-py3.7.egg
Removing /usr/local/lib/python3.7/dist-packages/ceph_iscsi-3.3-py3.7.egg
Copying ceph_iscsi-3.3-py3.7.egg to /usr/local/lib/python3.7/dist-packages
ceph-iscsi 3.3 is already the active version in easy-install.pth
Installing gwcli script to /usr/bin
Installing rbd-target-api script to /usr/bin
Installing rbd-target-gw script to /usr/bin

Installed /usr/local/lib/python3.7/dist-packages/ceph_iscsi-3.3-py3.7.egg
Processing dependencies for ceph-iscsi==3.3
Finished processing dependencies for ceph-iscsi==3.3

rbd-target-api hangs if DNS resolution times out

rbd-target-api becomes unresponsive if a DNS server is configured that doesn't reply.

Example: set nameserver 8.8.8.8 in /etc/resolv.conf while having a default route installed that doesn't forward packets to the internet. Removing either the nameserver config or the whole default route fixes rbd-target-api instantly. That means only DNS timeouts are a problem as "no route to host" errors for DNS resolution are handled properly.
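
A minimal sketch (not the daemon's actual code) of how a lookup could be bounded so that a silent nameserver cannot stall the whole process; note the stuck worker thread still lingers until the underlying resolver call returns:

    import socket
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    def resolve(hostname, timeout=3.0):
        """Resolve hostname, raising TimeoutError instead of blocking forever."""
        pool = ThreadPoolExecutor(max_workers=1)
        try:
            return pool.submit(socket.gethostbyname, hostname).result(timeout=timeout)
        finally:
            pool.shutdown(wait=False)  # don't wait for a stuck resolver thread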

Failed to start rbd-target-api

Hi,

I built a Ceph iSCSI gateway service and created an iSCSI target with gwcli. I got the error message “at least 2 gateways must exist before disk mapping operations are permitted” when adding a Ceph RBD image to the target's disks. So I set minimum_gateways to 1 in iscsi-gateway.cfg and restarted the rbd-target-api service, but it fails to start. I then removed the minimum_gateways option from iscsi-gateway.cfg, but I still can't start the rbd-target-api service. Checking the messages, I can see an “IndexError: list index out of range” exception from ceph-iscsi.
Below is error from /var/log/messages

Dec  6 14:19:21 localhost systemd: Started Ceph iscsi target configuration API.
Dec  6 14:19:21 localhost systemd: Starting Ceph iscsi target configuration API...
Dec  6 14:19:22 localhost journal: Started the configuration object watcher
Dec  6 14:19:22 localhost journal: Checking for config object changes every 1s
Dec  6 14:19:22 localhost journal: Processing osd blacklist entries for this node
Dec  6 14:19:22 localhost journal: No OSD blacklist entries found
Dec  6 14:19:22 localhost journal: Reading the configuration object to update local LIO configuration
Dec  6 14:19:22 localhost journal: Processing Gateway configuration
Dec  6 14:19:22 localhost journal: Setting up iqn.2019-01.com.redhat.iscsi-gw:iscsi-igw
Dec  6 14:19:22 localhost rbd-target-api: Traceback (most recent call last):
Dec  6 14:19:22 localhost rbd-target-api: File "/usr/bin/rbd-target-api", line 5, in <module>
Dec  6 14:19:22 localhost rbd-target-api: pkg_resources.run_script('ceph-iscsi==3.3', 'rbd-target-api')
Dec  6 14:19:22 localhost rbd-target-api: File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script
Dec  6 14:19:22 localhost rbd-target-api: self.require(requires)[0].run_script(script_name, ns)
Dec  6 14:19:22 localhost rbd-target-api: File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1462, in run_script
Dec  6 14:19:22 localhost rbd-target-api: exec_(script_code, namespace, namespace)
Dec  6 14:19:22 localhost rbd-target-api: File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 41, in exec_
Dec  6 14:19:22 localhost rbd-target-api: exec("""exec code in globs, locs""")
Dec  6 14:19:22 localhost rbd-target-api: File "<string>", line 1, in <module>
Dec  6 14:19:22 localhost rbd-target-api: File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.3-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 2939, in <module>
Dec  6 14:19:22 localhost rbd-target-api: File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.3-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 2854, in main
Dec  6 14:19:22 localhost rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py", line 239, in define
Dec  6 14:19:22 localhost rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py", line 212, in define_targets
Dec  6 14:19:22 localhost rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py", line 169, in define_target
Dec  6 14:19:22 localhost rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/target.py", line 586, in manage
Dec  6 14:19:22 localhost rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/target.py", line 388, in load_config
Dec  6 14:19:22 localhost rbd-target-api: IndexError: list index out of range
Dec  6 14:19:22 localhost systemd: rbd-target-api.service: main process exited, code=exited, status=1/FAILURE
Dec  6 14:19:22 localhost systemd: Unit rbd-target-api.service entered failed state.
Dec  6 14:19:22 localhost systemd: rbd-target-api.service failed.

How to fix the issue and start rbd-target-api service?

Thanks
Ray

cannot list rbd snapshots via rbd-target-api

I hope I am wrong, but I cannot figure out how to list snapshots directly through the rbd-target-api.

I see that gwcli is able to fetch them, but I would like to request that they be viewable directly through the API itself.

It seems this should be a GET at /api/disksnap/pool/image and/or a GET at /api/disk/pool/image.

If this is already available I would appreciate being shown how. Thank you.
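
For reference, the proposed call (a hypothetical endpoint as sketched above, not a documented API; assumes the default admin/admin API credentials and a self-signed certificate):

    curl --insecure --user admin:admin -X GET https://localhost:5000/api/disksnap/rbd/image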

Missing log warning when using old werkzeug<0.10

When using rbd-target-api on CentOS 7, SSL verification fails because the API server doesn't send a cert chain, despite one being provided in /etc/ceph/iscsi-gateway.crt. We traced it to:

ceph-iscsi/rbd-target-api.py

Lines 2822 to 2830 in 3a7dcf4

else:
    logger.info("API server using TLSv1 (older version of werkzeug)")
    context = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_METHOD)
    try:
        context.use_certificate_file(cert_files[0])
        context.use_privatekey_file(cert_files[1])
    except OpenSSL.SSL.Error as err:
        logger.critical("SSL Error : {}".format(err))

When using werkzeug==0.9.1, context.use_certificate_file does not send the full certificate chain. Ideally, this should at least result in a warning in the logs that it may cause verification problems. The verification issue isn't present with an upgraded werkzeug>=0.10.
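
A sketch of a possible mitigation, assuming pyOpenSSL and a .crt file that contains the full chain (this is only the shape of a fix, not the project's actual code):

    import logging
    import OpenSSL

    logger = logging.getLogger(__name__)

    def build_context(cert_file, key_file):
        context = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_METHOD)
        # use_certificate_chain_file() loads every certificate in the PEM
        # file, unlike use_certificate_file() which only loads the first one
        context.use_certificate_chain_file(cert_file)
        context.use_privatekey_file(key_file)
        logger.warning("old werkzeug (<0.10) in use; if %s lacks the CA "
                       "chain, clients may fail verification", cert_file)
        return context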

Using curl to get the error:

# curl -I https://ceph-gw.example.com:5000
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.

Using openssl to show what the server is actually sending:
openssl s_client -showcerts -connect ceph-gw.example.com:5000

is it possible to map a LUN to multiple targets?

I read the code and found that mapping a LUN to multiple targets is prohibited in the following code snippet:

return jsonify(message="Disk {} cannot be used because it is already mapped on "

I tried commenting the above lines out and successfully mapped a single LUN to two targets, but ONLY when I set minimum_gateways to 1 and ran the gateway locally. When I ran two gateways, it failed with the error message: Failed to add the LUN - Existing ALUA group tag for group ao in invalid state.

Need some clarity on Manual installation

Hi,

Under the installation section, it's mentioned that for the daemons we have to copy the files to the respective paths.

<archive_root>/usr/lib/systemd/system/rbd-target-gw.service --> /lib/systemd/system
<archive_root>/usr/lib/systemd/system/rbd-target-api.service --> /lib/systemd/system
<archive_root>/usr/bin/rbd-target-gw --> /usr/bin
<archive_root>/usr/bin/rbd-target-api --> /usr/bin
<archive_root>/usr/bin/gwcli --> /usr/bin

But looking at the repo, there's no usr/bin folder; only usr/lib/systemd/system/ exists.
The files to be placed at /usr/bin are rbd-target-gw, rbd-target-api and gwcli. Are these the same files as the ones present in the repo as rbd-target-gw.py, rbd-target-api.py and gwcli.py respectively? I need some clarity on this.

TPG level CHAP authentication and LUN id can't be configured

I'm trying to represent the following simple targetcli configuration in ceph-iscsi syntax, but two things appear to be unsupported:

  • explicit LUN1 at a TPG level, instead of a client mapping with explicit LUN id
  • TPG level CHAP authentication settings
# targetcli ls
  o- backstores ..................................................... [...]
...
  | o- user:rbd ...................................... [Storage Objects: 1] 
  |   o- rbd-lrbd_test ................ [rbd/lrbd_test (10.0GiB) activated]
  |     o- alua .......................................... [ALUA Groups: 1] 
  |       o- default_tg_pt_gp .............. [ALUA state: Active/optimized]
  o- iscsi .................................. [1-way disc auth, Targets: 1] 
  | o- iqn.2016-11.org.linux-iscsi.igw:stuff .................... [TPGs: 1]
  |   o- tpg1 ............................ [gen-acls, tpg-auth, 1-way auth]
  |     o- acls ................................................. [ACLs: 0] 
  |     o- luns ................................................. [LUNs: 1] 
  |     | o- lun1 ................. [user/rbd-lrbd_test (default_tg_pt_gp)]
  |     o- portals ........................................... [Portals: 1] 
  |       o- 192.168.1.1:3260 ........................................ [OK]

gwcli can hang due to hang of rbd-target-api

I've seen rbd-target-api go into an uninterruptible sleep (a state that is very difficult to debug).
Starting gwcli in this circumstance causes it to hang completely on:

        api = APIRequest(endpoint + "/config")
        api.get()

The hang in gwcli, at least, is avoidable. My quick fix was to add these lines to gwcli/utils.py APIRequest.__init__():

        if 'verify' not in self.kwargs:
            self.kwargs['verify'] = settings.config.api_ssl_verify
        ##### added this: #####
        if 'timeout' not in self.kwargs:
            self.kwargs['timeout'] = 20

(Though the error handling would also best be improved so that a timeout leads to an appropriate message rather than an "unknown error"; see the sketch below.)
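
A minimal sketch of that error handling, assuming the standard requests library that APIRequest wraps (the endpoint and credentials below are placeholders):

    import requests

    ENDPOINT = "https://localhost:5000/api"  # hypothetical gateway API endpoint

    def get_config(timeout=20):
        try:
            r = requests.get(ENDPOINT + "/config", timeout=timeout,
                             auth=("admin", "admin"), verify=False)
            return r.json()
        except requests.exceptions.Timeout:
            # surface a specific message instead of a generic "unknown error"
            raise SystemExit("rbd-target-api did not respond within "
                             "{}s - is the daemon hung?".format(timeout))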

How can I delete the host of iscsi gateway under gwcli?

Hi,
How can I delete a host from the iSCSI gateway under gwcli?
I just did not find a command like "remove" under the gateway node.
My need is to delete a node under the iSCSI gateway. If you know how to delete one, please let me know. Thank you.
E.g.
/iscsi-target...ceph-target-3> pwd
/iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways/ceph-target-3
/iscsi-target...ceph-target-3>
/ @gateways @host-groups @hosts bookmarks
cd exit get goto help
info ls pwd refresh set

Unable to access the configuration object : REST API failure, code : 503

Following this guide https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/

I reached the point where I am supposed to use gwcli, but it just doesn't work:

TEST [root@ceph1 ceph-iscsi]# gwcli
Unable to access the configuration object : REST API failure, code : 503
GatewayError:

All services required by the manual are installed and running:

TEST [root@ceph1 ceph-iscsi]# service rbd-target-gw status
Redirecting to /bin/systemctl status rbd-target-gw.service
● rbd-target-gw.service - Setup system to export rbd images through LIO
   Loaded: loaded (/usr/lib/systemd/system/rbd-target-gw.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-08-30 15:41:49 CEST; 3min 47s ago
 Main PID: 25769 (rbd-target-gw)
   Memory: 22.1M
   CGroup: /system.slice/rbd-target-gw.service
           └─25769 /usr/bin/python /usr/bin/rbd-target-gw

Aug 30 15:41:49 ceph1.cz.nonprod systemd[1]: Started Setup system to export rbd im...O.
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]: Integrated Prometheus export...d
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]: * Serving Flask app "rbd-tar...)
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]: * Environment: production
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]: WARNING: Do not use the deve....
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]: Use a production WSGI server....
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]: * Debug mode: off
Aug 30 15:41:49 ceph1.cz.nonprod rbd-target-gw[25769]:  * Running on http://[::]:92...)
Hint: Some lines were ellipsized, use -l to show in full.
TEST [root@ceph1 ceph-iscsi]#
TEST [root@ceph1 ceph-iscsi]# service rbd-target-api status
Redirecting to /bin/systemctl status rbd-target-api.service
● rbd-target-api.service - Ceph iscsi target configuration API
   Loaded: loaded (/usr/lib/systemd/system/rbd-target-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-08-30 15:37:52 CEST; 7min ago
 Main PID: 25421 (rbd-target-api)
   Memory: 45.1M
   CGroup: /system.slice/rbd-target-api.service
           └─25421 /usr/bin/python /usr/bin/rbd-target-api

Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: Processing osd blacklist en...e
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: No OSD blacklist entries found
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: Reading the configuration o...n
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: Configuration does not have...O
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: * Serving Flask app "rbd-ta...)
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: * Environment: production
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: WARNING: Do not use the dev....
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: Use a production WSGI serve....
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]: * Debug mode: off
Aug 30 15:37:53 ceph1.cz.nonprod rbd-target-api[25421]:  * Running on http://[::]:5...)
Hint: Some lines were ellipsized, use -l to show in full.

What is the problem?

Rbd-target-api.service failed to start

I have run into an error using gwcli:
[root@node1 tan]# gwcli
Unable to access the configuration object : REST API failure, code : 404
GatewayError:

404 - Not Found: Use when object does not exists. This will also be returned in the case where it might exist in SpringCM, but the user does not have access to it based on their security profile. In this case the error message returned will be "Object does not exist or the user does not have access rights".

[root@node1 tcmu-runner]# systemctl status rbd-target-api.service
● rbd-target-api.service - Ceph iscsi target configuration API
Loaded: loaded (/usr/lib/systemd/system/rbd-target-api.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-05-15 13:37:36 CST; 3min 45s ago
Process: 81570 ExecStart=/usr/bin/rbd-target-api (code=exited, status=1/FAILURE)
Main PID: 81570 (code=exited, status=1/FAILURE)
May 15 13:37:36 node1 systemd[1]: Unit rbd-target-api.service entered failed state.
May 15 13:37:36 node1 systemd[1]: rbd-target-api.service failed.
May 15 13:37:36 node1 systemd[1]: rbd-target-api.service holdoff time over, scheduling restart.
May 15 13:37:36 node1 systemd[1]: Stopped Ceph iscsi target configuration API.
May 15 13:37:36 node1 systemd[1]: start request repeated too quickly for rbd-target-api.service
May 15 13:37:36 node1 systemd[1]: Failed to start Ceph iscsi target configuration API.
May 15 13:37:36 node1 systemd[1]: Unit rbd-target-api.service entered failed state.
May 15 13:37:36 node1 systemd[1]: rbd-target-api.service failed.

The rpm version I am using is:

ceph-iscsi-cli-2.6-1.el7.centos.noarch.rpm
python-rtslib-2.1.67-1.noarch.rpm
tcmu-runner-1.3.0-rc4.el7.centos.x86_64.rpm
ceph-iscsi-config-2.5-1.el7.centos.noarch.rpm
targetcli-fb-2.1.fb48-1.noarch.rpm

thanks

Error: "Disk 'rbd.esx' is not defined to the configuration" on "info" and "delete" commands

Hey guys,

thanks a ton for all your work on this fantastic project. I love almost everything about how the iSCSI GW and its API are designed.
The only issue I could find which kinda relates to my problem is in the old ceph-iscsi-cli project under ceph/ceph-iscsi-cli#108

OS/Kernel

[root@ceph-osd01 ~]# uname -a
Linux ceph-osd01 5.0.16-300.fc30.x86_64 #1 SMP Tue May 14 19:33:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

TCMU Runner

[root@ceph-osd01 ~]# tcmu-runner -V
tcmu-runner 1.4.0

NOTE: I was pulling my hair out a bit over this one because I didn't realize that the tcmu-runner version that ships with Fedora 30 is still v1.1.3, which was first supplied with Fedora 26. Is that because of licensing changes for tcmu-runner?
So I compiled from master to benefit from commits targeting performance and ESXi fixes, and tcmu-runner should be v1.4.1+, but I used the ./make_runnerrpms.sh --without glfs --without qcow --without zbc --without fbo script, which calls git describe --tags, and that only returns v1.4.0. Not sure why it doesn't return the v1.4.1 tag.
The first time I ran into this, I compiled without any exclusion parameters, with the same issue.

RTS Lib

[root@ceph-osd01 ~]# pip3 list | grep rts
rtslib-fb       2.1.69

Ceph iSCSI

[root@ceph-osd01 ~]# pip3 list | grep ceph-iscsi
ceph-iscsi      3.0

ceph-iscsi is actually installed from recent master to benefit from post 3.0 fixes

Target CLI (this is not used in ceph-iscsi 3.0+ anymore, right? Can I drop this from the requirements?)

[root@ceph-osd01 ~]# targetcli -v
/usr/bin/targetcli version 2.1.fb49

After creating an image in gwcli under /disks, issuing commands that require an image ID, like info <image_id> or delete <image_id>, errors out with Disk 'rbd.esx' is not defined to the configuration and disk name provided does not exist.

Here is the output from gwcli

/> ls
o- / .................................................................... [...]
  o- cluster .................................................... [Clusters: 1]
  | o- ceph ....................................................... [HEALTH_OK]
  |   o- pools ..................................................... [Pools: 2]
  |   | o- .rgw.root ...... [(x3), Commit: 0.00Y/3710570752K (0%), Used: 0.00Y]
  |   | o- rbd ................ [(x3), Commit: 1G/3710570752K (0%), Used: 768K]
  |   o- topology ........................................... [OSDs: 3,MONs: 3]
  o- disks ..................................................... [1G, Disks: 1]
  | o- rbd ......................................................... [rbd (1G)]
  |   o- esx ................................................... [rbd/esx (1G)]
  o- iscsi-targets .......................... [DiscoveryAuth: None, Targets: 0]
/disks/rbd/esx> info
Image                 .. esx
Ceph Cluster          .. ceph
Pool                  .. rbd
Wwn                   .. 225c7825-0b5d-4454-9a59-fe5043f150a3
Size H                .. 1G
Feature List          .. RBD_FEATURE_LAYERING
                         RBD_FEATURE_EXCLUSIVE_LOCK
                         RBD_FEATURE_OBJECT_MAP
                         RBD_FEATURE_FAST_DIFF
                         RBD_FEATURE_DEEP_FLATTEN
Snapshots             ..
Owner                 ..
Backstore             .. user:rbd
Backstore Object Name .. rbd.esx
Control Values
- hw_max_sectors .. 1024
- max_data_area_mb .. 8
- osd_op_timeout .. 30
- qfull_timeout .. 5
/disks> info rbd.esx
CMD: /disks/ info rbd.esx
disk name provided does not exist
/disks> delete rbd.esx
Disk 'rbd.esx' is not defined to the configuration

Here is the output from rbd info esx

[root@ceph-osd01 ~]# rbd info esx
rbd image 'esx':
	size 1 GiB in 1024 objects
	order 20 (1 MiB objects)
	snapshot_count: 0
	id: 25a1abe28e9f6
	block_name_prefix: rbd_data.25a1abe28e9f6
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features:
	flags:
	create_timestamp: Thu May 23 12:23:51 2019
	access_timestamp: Thu May 23 12:23:51 2019
	modify_timestamp: Thu May 23 12:23:51 2019

I tried both the image ID format suggested by the gwcli help text (rbd.esx) and the ID displayed by rbd info esx; both result in the same errors.

I know that I haven't defined any gateways or targets yet. I scrapped my old config in order to start clean and see if the issue still happens when I try these operations without any other config items like gateways or targets configured. It is the same behavior.

Thanks to anyone who takes a stab at this!

rbd-target-api service fails to start

ceph-iscsi version 3.0 - pulled from git repo May 23rd.

After initial config, this was working:

o- / ......................................................................................................................... [...]
  o- cluster ......................................................................................................... [Clusters: 1]
  | o- ceph .......................................................................................................... [HEALTH_WARN]
  |   o- pools .......................................................................................................... [Pools: 1]
  |   | o- rbd ................................................................... [(x3), Commit: 17.0T/18117898M (98%), Used: 192K]
  |   o- topology ............................................................................................... [OSDs: 12,MONs: 3]
  o- disks ....................................................................................................... [17.0T, Disks: 1]
  | o- rbd ........................................................................................................... [rbd (17.0T)]
  |   o- windata ............................................................................................. [rbd/windata (17.0T)]
  o- iscsi-targets ............................................................................... [DiscoveryAuth: None, Targets: 1]
    o- iqn.2018-10.com.cruisesystem.ho:ho-ceph1 ...................................................................... [Gateways: 1]
      o- disks .......................................................................................................... [Disks: 1]
      | o- rbd/windata ........................................................................................... [Owner: ho-ceph1]
      o- gateways ............................................................................................ [Up: 1/1, Portals: 1]
      | o- ho-ceph1 ............................................................................................. [172.16.0.10 (UP)]
      o- host-groups .................................................................................................. [Groups : 0]
      o- hosts .............................................................................................. [Hosts: 1: Auth: CHAP]
        o- iqn.1991-05.com.microsoft:storage01.ho.cruisesystem.com ................................... [Auth: CHAP, Disks: 1(17.0T)]
          o- lun 0 ........................................................................... [rbd/windata(17.0T), Owner: ho-ceph1]

After rebooting the server, I get these errors in /var/log/messages:

May 29 12:42:24 ho-ceph1 systemd: Started Ceph iscsi target configuration API.
May 29 12:42:24 ho-ceph1 journal: Processing osd blacklist entries for this node
May 29 12:42:24 ho-ceph1 journal: Started the configuration object watcher
May 29 12:42:24 ho-ceph1 journal: Checking for config object changes every 1s
May 29 12:42:25 ho-ceph1 journal: No OSD blacklist entries found
May 29 12:42:25 ho-ceph1 journal: Reading the configuration object to update local LIO configuration
May 29 12:42:25 ho-ceph1 journal: Processing Gateway configuration
May 29 12:42:25 ho-ceph1 journal: Setting up iqn.2018-10.com.cruisesystem.ho:ho-ceph1
May 29 12:42:25 ho-ceph1 journal: (Gateway.load_config) successfully loaded existing target definition
May 29 12:42:25 ho-ceph1 journal: Processing LUN configuration
May 29 12:42:25 ho-ceph1 journal: iqn.2018-10.com.cruisesystem.ho:ho-ceph1 - Processing client configuration
May 29 12:42:25 ho-ceph1 rbd-target-api: Traceback (most recent call last):
May 29 12:42:25 ho-ceph1 rbd-target-api: File "/usr/bin/rbd-target-api", line 5, in <module>
May 29 12:42:25 ho-ceph1 rbd-target-api: pkg_resources.run_script('ceph-iscsi==3.0', 'rbd-target-api')
May 29 12:42:25 ho-ceph1 rbd-target-api: File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script
May 29 12:42:25 ho-ceph1 rbd-target-api: self.require(requires)[0].run_script(script_name, ns)
May 29 12:42:25 ho-ceph1 rbd-target-api: File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1462, in run_script
May 29 12:42:25 ho-ceph1 rbd-target-api: exec_(script_code, namespace, namespace)
May 29 12:42:25 ho-ceph1 rbd-target-api: File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 41, in exec_
May 29 12:42:25 ho-ceph1 rbd-target-api: exec("""exec code in globs, locs""")
May 29 12:42:25 ho-ceph1 rbd-target-api: File "<string>", line 1, in <module>
May 29 12:42:25 ho-ceph1 rbd-target-api: File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 2849, in <module>
May 29 12:42:25 ho-ceph1 rbd-target-api: File "/usr/lib/python2.7/site-packages/ceph_iscsi-3.0-py2.7.egg/EGG-INFO/scripts/rbd-target-api", line 2764, in main
May 29 12:42:25 ho-ceph1 rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py", line 239, in define
May 29 12:42:25 ho-ceph1 rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py", line 212, in define_targets
May 29 12:42:25 ho-ceph1 rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/gateway.py", line 186, in define_target
May 29 12:42:25 ho-ceph1 rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/client.py", line 283, in define_clients
May 29 12:42:25 ho-ceph1 rbd-target-api: File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/client.py", line 759, in __init__
May 29 12:42:25 ho-ceph1 rbd-target-api: TypeError: object of type 'NoneType' has no len()
May 29 12:42:25 ho-ceph1 systemd: rbd-target-api.service: main process exited, code=exited, status=1/FAILURE
May 29 12:42:25 ho-ceph1 systemd: Unit rbd-target-api.service entered failed state.
May 29 12:42:25 ho-ceph1 systemd: rbd-target-api.service failed.
May 29 12:42:26 ho-ceph1 systemd: rbd-target-api.service holdoff time over, scheduling restart.
May 29 12:42:26 ho-ceph1 systemd: Stopped Ceph iscsi target configuration API.

The system is running CentOS 7-1810 with kernel 3.10.0-957.12.2.el7.x86_64.

The only changes to the default iscsi-gateway.cfg are the trusted IP list, securing the API with a self-signed cert, and:
minimum_gateways = 1

The health warn in the status above was because I forgot to enable the rbd for the pool - and has been corrected since.

I had everything working with the settings outlined above for iscsi-gateway.cfg

Also, if I remove both the rbd disk image and the rbd pool and reconfigure from scratch, I can get everything working again... it just never survives a reboot.

Migrate away from pycrypto?

Hi. I noticed that ceph-iscsi has a runtime dependency on pycrypto 2.6.1 [1]

This is problematic for openSUSE because this package has been removed from the distribution. It is slated to be dropped from SLES as well, though that has not happened just yet.

Apparently, this package is only used here:

from Crypto.PublicKey import RSA

The openSUSE maintainers suggest that we migrate to https://pypi.org/project/pycryptodomex/ or https://pypi.org/project/pyOpenSSL/ - is this feasible?
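
If pycryptodomex is viable, the change could be as small as switching the import to its Cryptodome namespace (a sketch, assuming pip install pycryptodomex and that only key generation/serialization is needed):

    from Cryptodome.PublicKey import RSA  # pycryptodomex drop-in namespace

    key = RSA.generate(2048)                   # 2048-bit RSA key pair
    private_pem = key.export_key()             # PEM-encoded private key
    public_pem = key.publickey().export_key()  # PEM-encoded public key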

[1] https://www.dlitz.net/software/pycrypto/

Create a dedicated "nautilus" branch?

As the iSCSI management functionality of Ceph Dashboard is tightly coupled with ceph-iscsi's features, I think it would make sense to have git branches for ceph-iscsi that correlate with the Ceph version they have been developed against and tested with. I'd like to suggest that we branch off the current 3.x version in the master branch into a "nautilus" branch and bump up the version number in the master branch accordingly.

Create target gateway failure

Hi

I am trying to create a target gateway, but I get the error message below:

/iscsi-target...-igw/gateways> create iscsi-gw-1 192.168.2.98
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : ip_addresses query to iscsi-gw-1 failed - check rbd-target-api log. Is the API server running and in the right mode (http/https)?

I am sure the rbd-target-api service is running at 192.168.2.98. What is wrong with the target API configuration?

Thanks
Ray

Ceph ISCSI Gateway Build

Hello.
I'm trying to set up a Ceph cluster with an iSCSI gateway.

My first question is: do I need two gateways for that, and if so, can I run them on one node?

My issue is that when I try to create my first gateway (I tried it before and purged the config with cleanconfig confirm=true), I get this terrible error:

CMD: ../gateways/ create ceph-iscsi-gateway IP-Address nosync=False skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway creation failed on IP-Address. Failed to create the gateway

If I try the second IP address of the node, I get the same error with the same IP address.

Here's the rbd-target-api log:

2019-07-10 08:35:02,110 DEBUG [common.py:123:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2019-07-10 08:35:02,111 DEBUG [common.py:130:_open_ioctx()] - (_open_ioctx) connection opened
2019-07-10 08:35:02,115 DEBUG [common.py:171:init_config()] - (init_config) using pre existing config object
2019-07-10 08:35:02,115 DEBUG [common.py:123:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2019-07-10 08:35:02,115 DEBUG [common.py:130:_open_ioctx()] - (_open_ioctx) connection opened
2019-07-10 08:35:02,116 DEBUG [common.py:102:_read_config_object()] - _read_config_object reading the config object
2019-07-10 08:35:02,117 DEBUG [common.py:152:_get_ceph_config()] - (_get_rbd_config) config object contains '{ "clients": {}, "created": "2019/07/09 12:41:21", "disks": {}, "epoch": 2, "gateways": { "created": "2019/07/09 12:41:21", "ip_list": [ "10.0.1.72" ], "iqn": "iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw" }, "groups": {}, "updated": "2019/07/09 12:51:34", "version": 3 }'
2019-07-10 08:35:02,165 DEBUG [gateway.py:278:create_target()] - (Gateway.create_target) Added iscsi target - iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
2019-07-10 08:35:02,165 DEBUG [gateway.py:287:create_target()] - Creating tpgs
2019-07-10 08:35:02,166 DEBUG [gateway.py:238:create_tpg()] - (Gateway.create_tpg) Added tpg for portal ip 10.0.1.72
2019-07-10 08:35:02,166 DEBUG [gateway.py:244:create_tpg()] - (Gateway.create_tpg) Added tpg for portal ip 10.0.1.72 is enabled
2019-07-10 08:35:02,166 INFO [gateway.py:266:create_tpg()] - (Gateway.create_tpg) created TPG '1' for target iqn 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw'
2019-07-10 08:35:02,167 DEBUG [gateway.py:238:create_tpg()] - (Gateway.create_tpg) Added tpg for portal ip 10.0.1.72
2019-07-10 08:35:02,167 CRITICAL [gateway.py:292:create_target()] - Unable to create the TPG for 10.0.1.72 - Could not create NetworkPortal in configFS
2019-07-10 08:35:02,167 DEBUG [gateway.py:155:update_tpg_controls()] - (GWGateway.update_tpg_controls) {}
2019-07-10 08:35:02,176 ERROR [rbd-target-api:555:_gateway()] - manage(target) logic failed for ceph-iscsi-gateway: Could not create NetworkPortal in configFS
2019-07-10 08:35:02,178 INFO [_internal.py:87:_log()] - ::ffff:10.0.1.72 - - [10/Jul/2019 08:35:02] "PUT /api/_gateway/ceph-iscsi-gateway HTTP/1.1" 500 -
2019-07-10 08:35:02,179 ERROR [rbd-target-api:1796:call_api()] - _gateway change on 10.0.1.72 failed with 500
2019-07-10 08:35:02,180 DEBUG [rbd-target-api:1818:call_api()] - failed on 10.0.1.72. Failed to create the gateway
2019-07-10 08:35:02,180 INFO [_internal.py:87:_log()] - ::1 - - [10/Jul/2019 08:35:02] "PUT /api/gateway/ceph-iscsi-gateway HTTP/1.1" 500 -

And here is the gwcli log:

2019-07-10 09:11:40,600 ERROR [gateway.py:664:ui_command_create()] Failed : Gateway creation failed on 10.0.1.72. Failed to create the gateway
2019-07-10 09:11:50,860 DEBUG [gateway.py:612:ui_command_create()] CMD: ../gateways/ create ceph-iscsi-gateway 10.0.1.133 nosync=False skipchecks=true
2019-07-10 09:11:50,863 WARNING [gateway.py:647:ui_command_create()] OS version/package checks have been bypassed
2019-07-10 09:11:50,864 INFO [gateway.py:649:ui_command_create()] Adding gateway, sync'ing 0 disk(s) and 0 client(s)
2019-07-10 09:11:50,993 ERROR [gateway.py:664:ui_command_create()] Failed : Gateway creation failed on 10.0.1.72. Failed to create the gateway
2019-07-10 09:12:08,277 DEBUG [gateway.py:612:ui_command_create()] CMD: ../gateways/ create ceph-iscsi-gateway-2 10.0.1.133 nosync=False skipchecks=true
2019-07-10 09:12:08,277 ERROR [gateway.py:623:ui_command_create()] The first gateway defined must be the local machine
2019-07-10 09:12:21,085 DEBUG [gateway.py:612:ui_command_create()] CMD: ../gateways/ create ceph-iscsi-gateway localhost nosync=False skipchecks=true
2019-07-10 09:12:21,088 WARNING [gateway.py:647:ui_command_create()] OS version/package checks have been bypassed
2019-07-10 09:12:21,088 INFO [gateway.py:649:ui_command_create()] Adding gateway, sync'ing 0 disk(s) and 0 client(s)
2019-07-10 09:12:21,224 ERROR [gateway.py:664:ui_command_create()] Failed : Gateway creation failed on 10.0.1.72. Failed to create the gateway
2019-07-10 09:12:27,299 DEBUG [gateway.py:612:ui_command_create()] CMD: ../gateways/ create ceph-iscsi-gateway 0.0.0.0 nosync=False skipchecks=true
2019-07-10 09:12:27,302 WARNING [gateway.py:647:ui_command_create()] OS version/package checks have been bypassed
2019-07-10 09:12:27,302 INFO [gateway.py:649:ui_command_create()] Adding gateway, sync'ing 0 disk(s) and 0 client(s)
2019-07-10 09:12:27,441 ERROR [gateway.py:664:ui_command_create()] Failed : Gateway creation failed on 10.0.1.72. Failed to create the gateway
2019-07-10 09:28:42,217 DEBUG [ceph.py:51:__init__()] Adding ceph cluster 'ceph' to the UI
2019-07-10 09:28:42,467 DEBUG [ceph.py:289:populate()] Fetching ceph osd information
2019-07-10 09:28:42,485 DEBUG [ceph.py:198:update_state()] Querying ceph for state information
2019-07-10 09:28:42,514 DEBUG [storage.py:99:refresh()] Refreshing disk information from the config object
2019-07-10 09:28:42,514 DEBUG [storage.py:103:refresh()] - Scanning will use 8 scan threads
2019-07-10 09:28:42,551 DEBUG [storage.py:128:refresh()] - rbd image scan complete: 0s
2019-07-10 09:28:42,551 DEBUG [gateway.py:372:refresh()] Refreshing gateway & client information
2019-07-10 09:28:42,570 DEBUG [ceph.py:198:update_state()] Querying ceph for state information
2019-07-10 09:28:42,585 DEBUG [ceph.py:309:refresh()] Gathering pool stats for cluster 'ceph'
2019-07-10 09:28:47,267 DEBUG [gateway.py:612:ui_command_create()] CMD: ../gateways/ create ceph-iscsi-gateway 10.0.1.133 nosync=False skipchecks=true
2019-07-10 09:28:47,270 WARNING [gateway.py:647:ui_command_create()] OS version/package checks have been bypassed
2019-07-10 09:28:47,270 INFO [gateway.py:649:ui_command_create()] Adding gateway, sync'ing 0 disk(s) and 0 client(s)
2019-07-10 09:28:47,420 ERROR [gateway.py:664:ui_command_create()] Failed : Gateway creation failed on 10.0.1.72. Failed to create the gateway
2019-07-10 10:05:14,954 DEBUG [ceph.py:51:__init__()] Adding ceph cluster 'ceph' to the UI
2019-07-10 10:05:15,208 DEBUG [ceph.py:289:populate()] Fetching ceph osd information
2019-07-10 10:05:15,225 DEBUG [ceph.py:198:update_state()] Querying ceph for state information
2019-07-10 10:05:15,255 DEBUG [storage.py:99:refresh()] Refreshing disk information from the config object
2019-07-10 10:05:15,255 DEBUG [storage.py:103:refresh()] - Scanning will use 8 scan threads
2019-07-10 10:05:15,288 DEBUG [storage.py:128:refresh()] - rbd image scan complete: 0s
2019-07-10 10:05:15,288 DEBUG [gateway.py:372:refresh()] Refreshing gateway & client information
2019-07-10 10:05:15,307 DEBUG [ceph.py:198:update_state()] Querying ceph for state information
2019-07-10 10:05:15,321 DEBUG [ceph.py:309:refresh()] Gathering pool stats for cluster 'ceph'
2019-07-10 10:06:17,869 DEBUG [gateway.py:612:ui_command_create()] CMD: ../gateways/ create ceph-iscsi-gateway 10.0.1.72 nosync=False skipchecks=true
2019-07-10 10:06:17,873 WARNING [gateway.py:647:ui_command_create()] OS version/package checks have been bypassed
2019-07-10 10:06:17,873 INFO [gateway.py:649:ui_command_create()] Adding gateway, sync'ing 0 disk(s) and 0 client(s)
2019-07-10 10:06:18,032 ERROR [gateway.py:664:ui_command_create()] Failed : Gateway creation failed on 10.0.1.72. Failed to create the gateway

And this is the message I get when I try to clear the config: "Executor(ceph-iscsi-gateway) must be in gateway list: []"

I hope someone can help me

Best regards
~Felix

rpm: use systemd scriptlets macros

The RPM packaging for RHEL and Fedora currently hard-codes some systemd things, like

BuildRequires:  systemd

or

/bin/systemctl --system daemon-reload &> /dev/null || :
/bin/systemctl --system enable rbd-target-gw &> /dev/null || :
/bin/systemctl --system enable rbd-target-api &> /dev/null || :

We can switch these out for scriptlets instead: https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/#_scriptlets
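
A sketch of what that could look like, using the stock systemd scriptlet macros (the service list is taken from the snippet above; the exact BuildRequires package name varies by distro version):

    BuildRequires:  systemd-rpm-macros

    %post
    %systemd_post rbd-target-gw.service rbd-target-api.service

    %preun
    %systemd_preun rbd-target-gw.service rbd-target-api.service

    %postun
    %systemd_postun_with_restart rbd-target-gw.service rbd-target-api.service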

This will make it easier to get ceph-iscsi into Fedora if we follow the package guidelines here.

It will also make it easier to drop the dependency on systemd eventually when we run inside containers. Fedora has a new %systemd_ordering macro for this, but I've not played around with it yet.

upgrading to ceph-iscsi-3.0 with installed version of ceph-iscsi-tools

When upgrading to ceph-iscsi from a ceph-iscsi-cli & ceph-iscsi-config environment, I had to work around ceph-iscsi-tools requiring ceph-iscsi-config >= 2.3 (which is removed/obsoleted by the ceph-iscsi replacement). Going forward, will there be a ceph-iscsi-tools package that requires ceph-iscsi >= 3.0, or will that functionality be integrated into the new ceph-iscsi? It appears ceph-iscsi-tools is not being built in shaman, so I'm generally wondering what the future looks like for this utility.

Thanks.

Error: Package: ceph-iscsi-tools-2.1-2.1.el7.noarch (installed)
Requires: ceph-iscsi-config >= 2.3
Removing: ceph-iscsi-config-2.6-80.g24deeb2.el7.noarch (installed)
ceph-iscsi-config = 2.6-80.g24deeb2.el7
Obsoleted By: ceph-iscsi-3.0-17.ge6a0067.el7.noarch (/ceph-iscsi-3.0-17.ge6a0067.el7.noarch)
Not found
