
=================================================================

                Linux* Open-iSCSI

=================================================================

                                                   Jun 6, 2022
Contents
========

- 1. In This Release
- 2. Introduction
- 3. Installation
- 4. Open-iSCSI daemon
- 5. Open-iSCSI Configuration and Administration Utility
- 6. Configuration
- 7. Getting Started
- 8. Advanced Configuration
- 9. iSCSI System Info


1. In This Release
==================

This file describes the Linux* Open-iSCSI Initiator. The software was
tested on AMD Opteron (TM) and Intel Xeon (TM).

The latest development release is available at:

	https://github.com/open-iscsi/open-iscsi

For questions, comments, or contributions, raise an issue on GitHub, or
send e-mail to:

	[email protected]


1.1. Features
=============

- highly optimized and very small-footprint data path
- persistent configuration database
- SendTargets discovery
- CHAP
- PDU header Digest
- multiple sessions


1.2. Licensing
==============

The daemon and other top-level commands are licensed as GPLv3, while the
libopeniscsiusr library used by some of those commands is licensed as LGPLv3.


2. Introduction
===============

The Open-iSCSI project is a high-performance, transport independent,
multi-platform implementation of RFC3720 iSCSI.

Open-iSCSI is partitioned into user and kernel parts.

The kernel portion of Open-iSCSI was originally part of this project
repository, but now is built into the linux kernel itself. It
includes loadable modules: scsi_transport_iscsi.ko, libiscsi.ko and
scsi_tcp.ko. The kernel code handles the "fast" path, i.e. data flow.

User space contains the entire control plane: configuration
manager, iSCSI Discovery, Login and Logout processing,
connection-level error processing, Nop-In and Nop-Out handling,
and (perhaps in the future:) Text processing, iSNS, SLP, Radius, etc.

The user space Open-iSCSI consists of a daemon process called
iscsid, and a management utility iscsiadm. There are also helper
programs, and iscsiuio, which is used for certain iSCSI adapters.


3. Installation
===============

NOTE:	You will need to be root to install the Open-iSCSI code, and
	you will also need to be root to run it.

As of today, the Open-iSCSI Initiator requires a host running the
Linux operating system.

The userspace components iscsid, iscsiadm and iscsistart require the
open-isns library, unless open-isns use is disabled when building (see
below).

If this package is not available for your distribution, you can download
and install it yourself.  To install the open-isns headers and library
required for Open-iSCSI, download the current release from:

	https://github.com/open-iscsi/open-isns

Then, from the top-level directory, run:

	./configure [<OPTIONS>]
	make
	make install

For the open-iscsi project and iscsiuio, the original build
system used make and autoconf to build the project. These
build systems are being deprecated in favor of meson (and ninja).
See below for how to build using make and autoconf, but
migrating to meson as soon as possible is recommended.

Building open-iscsi/iscsiuio using meson
----------------------------------------
For Open-iSCSI and iscsiuio, the system is built using meson and ninja
(see https://github.com/mesonbuild/meson). If these packages aren't
available to you on your Linux distribution, you can download
the latest release from: https://github.com/mesonbuild/meson/releases.
The README.md file describes in detail how to build it yourself, including
how to get ninja.

To build the open-iscsi project, including iscsiuio, first run meson
to configure the build, from the top-level open-iscsi directory, e.g.:

	rm -rf builddir
	mkdir builddir
	meson [<MESON-OPTIONS>] builddir

Then, to build the code:

	ninja -C builddir

If you change any code and want to rebuild, you simply run ninja again.

When you are ready to install:

	[DESTDIR=<SOME-DIR>] ninja -C builddir install

This will install the iSCSI tools, configuration files, interfaces, and
documentation. If you do not set DESTDIR, it defaults to "/".
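As a convenience, the configure/build/install steps above can be driven by
a small wrapper script. The sketch below is illustrative only (the DRY_RUN
guard, the variable names, and the chosen meson option are not project
conventions); by default it just prints the commands it would run:

```shell
#!/bin/sh
# Sketch of the meson/ninja workflow described above.
# DRY_RUN, BUILDDIR and STAGEDIR are illustrative names, not project
# conventions; set DRY_RUN=0 to actually execute the commands.
set -e

BUILDDIR=${BUILDDIR:-builddir}
STAGEDIR=${STAGEDIR:-/tmp/open-iscsi-stage}
DRY_RUN=${DRY_RUN:-1}

run() {
    # In dry-run mode just show what would be executed.
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run rm -rf "$BUILDDIR"
run mkdir "$BUILDDIR"
run meson -Dno_systemd=true "$BUILDDIR"          # configure (option from table below)
run ninja -C "$BUILDDIR"                         # build
run env DESTDIR="$STAGEDIR" ninja -C "$BUILDDIR" install   # staged install
```

With DRY_RUN=0 this performs the same sequence as typing the commands by hand.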


MESON-OPTIONS:
--------------
One can override several default values when building with meson:


Option			Description
=====================	=====================================================

--libdir=<LIBDIR>	Where library files go [/lib64]
--sbindir=<DIR>		Meson 0.63 or newer: Where binaries go [/usr/sbin]
-Dc_flags="<C-FLAGS>"	Add in addition flags to the C compiler
-Dno_systemd=<BOOL>	Disable systemd usage [false]
			(set to "true" to disable systemd)
-Dsystemddir=<DIR>	Set systemd unit directory [/usr/lib/systemd]
-Dhomedir=<DIR>		Set config file directory [/etc/iscsi]
-Ddbroot=<DIR>		Set Database directory [/etc/iscsi]
-Dlockdir=<DIR>		Set Lock directory [/run/lock/iscsi]
-Drulesdir=<DIR>	Set udev rules directory [/usr/lib/udev/rules.d]
-Discsi_sbindir=<DIR>	Where binaries go [/usr/sbin]
			(for use when sbindir can't be set, in older versions
			 of meson)
-Disns_supported=<BOOL>	Enable/disable iSNS support [true]
			(set to "false" to disable use of open-isns)


Building open-iscsi/iscsiuio using make/autoconf (Deprecated)
-------------------------------------------------------------
If you wish to build using the older deprecated system, you can
simply run:

	make [<MAKE-OPTIONS>]
	make [DESTDIR=<SOME-DIR>] install

Where MAKE-OPTIONS are from:
	* SBINDIR=<some-dir>  [/usr/bin]   for executables
	* DBROOT=<some-dir>   [/etc/iscsi] for iscsi database files
	* HOMEDIR=<some-dir>  [/etc/iscsi] for iscsi config files


4. Open-iSCSI daemon
====================

The iscsid daemon implements the control path of the iSCSI protocol,
plus some management facilities. For example, the daemon can be
configured to automatically re-start discovery at startup, based on the
contents of the persistent iSCSI database (see next section).

For help, run:

	iscsid --help

The output will be similar to the following (assuming a default install):

Usage: iscsid [OPTION]

  -c, --config=[path]     Execute in the config file (/etc/iscsi/iscsid.conf).
  -i, --initiatorname=[path]     read initiatorname from file (/etc/iscsi/initiatorname.iscsi).
  -f, --foreground        run iscsid in the foreground
  -d, --debug debuglevel  print debugging information
  -u, --uid=uid           run as uid, default is current user
  -g, --gid=gid           run as gid, default is current user group
  -n, --no-pid-file       do not use a pid file
  -p, --pid=pidfile       use pid file (default /run/iscsid.pid).
  -h, --help              display this help and exit
  -v, --version           display version and exit


5. Open-iSCSI Configuration and Administration Utility
======================================================

Open-iSCSI persistent configuration is stored in a number of
directories under a configuration root directory, using a flat-file
format. This configuration root directory is /etc/iscsi by default,
but may also commonly be in /var/lib/iscsi (see "dbroot" in the meson
options discussed earlier).

Configuration is contained in directories for:

	- nodes
	- isns
	- static
	- fw
	- send_targets
	- ifaces

The iscsiadm utility is a command-line tool to manage (update, delete,
insert, query) the persistent database, as well manage discovery,
session establishment (login), and ending sessions (logout).

This utility presents a set of operations that a user can perform
on iSCSI node, session, connection, and discovery records.

Open-iSCSI does not use the term node as defined by the iSCSI RFC,
where a node is a single iSCSI initiator or target. Open-iSCSI uses the
term node to refer to a portal on a target, so tools like iscsiadm
require that the '--targetname' and '--portal' arguments be used when
in node mode.

For session mode, a session id (sid) is used. The sid of a session can be
found by running:

	iscsiadm -m session -P 1

The session id is not currently persistent and is partially determined by
when the session is set up.
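Since the sid appears in brackets in the session listing, it can be pulled
out with standard text tools. A minimal sketch, using an invented sample
listing rather than live iscsiadm output:

```shell
#!/bin/sh
# Extract session ids (sids) from "iscsiadm -m session" style output.
# The sample listing below is illustrative; on a live system, pipe the
# real command output into the same sed expression.
sample='tcp: [1] 192.168.1.1:3260,1 iqn.2005-03.org.example:disk1 (non-flash)
tcp: [2] 192.168.1.2:3260,1 iqn.2005-03.org.example:disk2 (non-flash)'

# The sid is the number inside the first pair of square brackets.
sids=$(printf '%s\n' "$sample" | sed -n 's/^[^[]*\[\([0-9]*\)\].*/\1/p')
printf '%s\n' "$sids"
```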

Note that some of the iSCSI Node and iSCSI Discovery operations
do not require the iSCSI daemon (iscsid) to be running.

For help on the command, run:

	iscsiadm --help

The output will be similar to the following.

iscsiadm -m discoverydb [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-Dl]] | [[-p ip:port -t type] [-o operation] [-n name] [-v value] [-lD]]
iscsiadm -m discovery [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-l]] | [[-p ip:port] [-l | -D]] [-W]
iscsiadm -m node [-hV] [-d debug_level] [-P printlevel] [-L all,manual,automatic,onboot] [-W] [-U all,manual,automatic,onboot] [-S] [[-T targetname -p ip:port -I ifaceN] [-l | -u | -R | -s]] [[-o operation ] [-n name] [-v value]]
iscsiadm -m session [-hV] [-d debug_level] [-P printlevel] [-r sessionid | sysfsdir [-R | -u | -s] [-o operation] [-n name] [-v value]]
iscsiadm -m iface [-hV] [-d debug_level] [-P printlevel] [-I ifacename | -H hostno|MAC] [[-o operation ] [-n name] [-v value]] [-C ping [-a ip] [-b packetsize] [-c count] [-i interval]]
iscsiadm -m fw [-d debug_level] [-l] [-W] [[-n name] [-v value]]
iscsiadm -m host [-P printlevel] [-H hostno|MAC] [[-C chap [-x chap_tbl_idx]] | [-C flashnode [-A portal_type] [-x flashnode_idx]] | [-C stats]] [[-o operation] [-n name] [-v value]]
iscsiadm -k priority


The first parameter specifies the mode to operate in:

  -m, --mode <op>	specify operational mode op =
			<discoverydb|discovery|node|session|iface|fw|host>

Mode "discoverydb"
------------------

  -m discoverydb --type=[type] --interface=[iface…] --portal=[ip:port] \
			--print=[N] \
			--op=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover

			  This command will use the discovery record settings
			  matching the record with type=type and
			  portal=ip:port. If a record does not exist, it will
			  create a record using the iscsid.conf discovery
			  settings.

			  By default, it will then remove records for
			  portals no longer returned, and if a portal is
			  returned by the target, the discovery command will
			  create a new record or modify an existing one with
			  values from iscsid.conf and the command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the node DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB. This is
			  only useful with the --login command.

			  See the example section for more info.

			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

			  For the above commands, "print" is optional. If
			  used, N can be 0 or 1.
			  0 = The old flat style of output is used.
			  1 = The tree style with the interface info is used.

			  If print is not used, the old flat style is used.
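The [op] semantics above boil down to a set comparison between the node DB
and the portals returned by discovery. A sketch of that reconciliation,
using invented portal lists (no real DB or target is involved):

```shell
#!/bin/sh
# Model the discoverydb op semantics: compare portals already in the
# node DB against portals returned by a discovery run. The portal
# lists are invented sample data.
db='192.168.1.1:3260
192.168.1.2:3260'
discovered='192.168.1.2:3260
192.168.1.3:3260'

# True if portal $1 appears in the newline-separated list $2.
in_list() { printf '%s\n' "$2" | grep -Fqx "$1"; }

for p in $discovered; do
    if in_list "$p" "$db"; then
        echo "op=update would refresh $p"   # returned and already in DB
    else
        echo "op=new would add $p"          # returned, not yet in DB
    fi
done
for p in $db; do
    in_list "$p" "$discovered" || echo "op=delete would remove $p"  # stale
done
```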

  -m discoverydb --interface=[iface...] --type=[type] --portal=[ip:port] \
			--print=[N] \
			--op=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover --login

			  This works like the previous discoverydb command,
			  except that with the --login argument passed in, it
			  will also log into the portals that are found.

  -m discoverydb --portal=[ip:port] --type=[type] \
			--op=[op] [--name=[name] --value=[value]]

			  Perform specific DB operation [op] for
			  discovery portal. It could be one of:
			  [new], [delete], [update] or [show]. In case of
			  [update], you have to provide the [name] and [value]
			  you wish to update.

			  Setting op=NEW will create a new discovery record
			  using the iscsid.conf discovery settings. If it
			  already exists, it will be overwritten using
			  iscsid.conf discovery settings.

			  Setting op=DELETE will delete the discovery record
			  and records for the targets found through
			  that discovery source.

			  Setting op=SHOW will display the discovery record
			  values. The --show argument can be used to
			  force the CHAP passwords to be displayed.

Mode "discovery"
----------------

  -m discovery --type=[type] --interface=iscsi_ifacename \
			--portal=[ip:port] --login --print=[N] \
			--op=[NEW | UPDATE | DELETE | NONPERSISTENT]

			  Perform [type] discovery for target portal with
			  ip-address [ip] and port [port].

			  This command will not use the discovery record
			  settings. It will use the iscsid.conf discovery
			  settings and it will overwrite the discovery
			  record with iscsid.conf discovery settings if it
			  exists. By default, it will then remove records for
			  portals no longer returned, and if a portal is
			  returned by the target, the discovery command will
			  create a new record or modify an existing one with
			  values from iscsid.conf and the command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB.

			  See the example section for more info.

			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

  -m discovery --print=[N]

			  Display all discovery records from internal
			  persistent discovery database.

Mode "node"
-----------

  -m node		  display all discovered nodes from internal
			  persistent discovery database

  -m node --targetname=[name] --portal=[ip:port] \
			--interface=iscsi_ifacename \
			[--login|--logout|--rescan|--stats] [-W]

  -m node --targetname=[name] --portal=[ip:port]
			--interface=[driver,HWaddress] \
			--op=[op] [--name=[name] --value=[value]]

  -m node --targetname=[name] --portal=[ip:port]
			--interface=iscsi_ifacename \
			--print=[level]

			  Perform the specified DB operation [op] for a
			  specific interface on the host that will connect to
			  the portal on the target. targetname, portal and
			  interface are optional.
			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  The op could be one of [new], [delete], [update] or
			  [show]. In case of [update], you have to provide
			  [name] and [value] you wish to update.
			  For [delete], note that if a session is using the
			  node record, the session will be logged out then
			  the record will be deleted.

			  Using --rescan will perform a SCSI layer scan of the
			  session to find new LUNs.

			  Using --stats prints the iSCSI stats for the session.

			  Using --login sends a login request to the
			  specified target and waits for the results. If
			  -W/--no_wait is supplied, iscsiadm returns success
			  if it was able to send the login request, and does
			  not wait for the response. The user will have to
			  poll for success.

			  Print level can be 0 to 1.

  -m node --logoutall=[all|manual|automatic]
			  Log out of "all" running sessions, or just the ones
			  with a node startup value of manual or automatic.
			  Nodes marked as ONBOOT are skipped.

  -m node --loginall=[all|manual|automatic] [-W]
			  Log in to "all" nodes, or just the ones with a
			  node startup value of manual or automatic.
			  Nodes marked as ONBOOT are skipped.

			  If -W is supplied then do not wait for the login
			  response for the target, returning success if we
			  are able to just send the request. The client
			  will have to poll for success.

Mode "session"
--------------

  -m session		  display all active sessions and connections

  -m session --sid=[sid] [ --print=level | --rescan | --logout ]
			--op=[op] [--name=[name] --value=[value]]

			  Perform operation for specific session with
			  session id sid. If no sid is given, the operation
			  will be performed on all running sessions if possible.
			  --logout and --op work like they do in node mode,
			  but in session mode targetname and portal info
			  is not passed in.

			  Print level can be 0 to 3.
			  0 = Print the running sessions.
			  1 = Print basic session info like node we are
			  connected to and whether we are connected.
			  2 = Print iSCSI params used.
			  3 = Print SCSI info like LUNs, device state.

			  If no sid and no operation is given, the running
			  sessions are printed.

Mode "iface"
------------

  -m iface --interface=iscsi_ifacename --op=[op] [--name=[name] --value=[value]]
			--print=level

			  Perform operation on given interface with name
			  iscsi_ifacename.

			  See below for examples.

  -m iface --interface=iscsi_ifacename -C ping --ip=[ipaddr] --packetsize=[size]
			--count=[count] --interval=[interval]

Mode "host"
-----------

  -m host [--host=hostno|MAC] --print=level -C chap --op=[SHOW]

			  Display information for a specific host. The host
			  can be passed in by host number or by MAC address.
			  If a host is not passed in, then info
			  for all hosts is printed.

			  Print level can be 1 to 4.
			  1 = Print info for the host, like its state, MAC,
			      and netinfo if possible.
			  2 = Print basic session info for nodes the host
			      is connected to.
			  3 = Print iSCSI params used.
			  4 = Print SCSI info like LUNs, device state.

  -m host --host=hostno|MAC -C chap --op=[DELETE] --index=[chap_tbl_idx]

			  Delete chap entry at the given index from chap table.

  -m host --host=hostno|MAC -C chap --op=[NEW | UPDATE] --index=[chap_tbl_idx] \
			--name=[name] --value=[value]

			  Add a new chap entry or update an existing one at
			  the given index, with the given username and
			  password pair. If an index is not passed, the entry
			  is added at the first free index in the chap table.
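The "first free index" placement described above can be sketched as
follows (the helper name and index list are invented sample data; the
real chap table lives in the adapter):

```shell
#!/bin/sh
# Given a list of occupied chap-table indices, find the first free one,
# mimicking the "first free index" placement described above.
first_free_index() {
    occupied=" $* "
    i=0
    while :; do
        case "$occupied" in
            *" $i "*) i=$((i + 1)) ;;    # index taken, try the next one
            *)        echo "$i"; return ;;
        esac
    done
}

first_free_index 0 1 3    # index 2 is the first gap
```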

  -m host --host=hostno|MAC -C flashnode

			  Display the list of all the targets in the adapter's
			  flash (flash nodes), for the specified host,
			  with ip, port, tpgt and iqn.

  -m host --host=hostno|MAC -C flashnode --op=[NEW] --portal_type=[ipv4|ipv6]

			  Create new flash node entry for the given host of the
			  specified portal_type. This returns the index of the
			  newly created entry on success.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[UPDATE] --name=[name] --value=[value]

			  Update the params of the specified flash node.
			  The [name] and [value] pairs must be provided for the
			  params that need to be updated. Multiple params can
			  be updated using a single command.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[SHOW | DELETE | LOGIN | LOGOUT]

			  Setting op=DELETE|LOGIN|LOGOUT will perform the
			  delete/login/logout operation on the specified
			  flash node.

			  Setting op=SHOW will list all params with the values
			  for the specified flash node. This is the default
			  operation.

			  See the iscsiadm example section below for more info.

Other arguments
---------------

  -d, --debug debuglevel  print debugging information

  -V, --version		  display version and exit

  -h, --help		  display this help and exit


5.1 iSCSI iface setup
=====================

The next sections describe how to set up iSCSI ifaces so you can bind
a session to a NIC port when using software iSCSI (section 5.1.1), and
how to set up ifaces for use with offload cards from Chelsio
and Broadcom (section 5.1.2).


5.1.1 How to setup iSCSI interfaces (iface) for binding
=======================================================

If you wish to allow the network subsystem to figure out
the best path/NIC to use, then you can skip this section. For example,
if you have set up your portals and NICs on different subnets, then
the following is not needed for software iSCSI.

Warning!!!!!!
This feature is experimental. The interface may change. When reporting
bugs, if you cannot do a "ping -I ethX target_portal", then check your
network settings first. Make sure the rp_filter setting is set to 0 or 2
(see Prep section below for more info). If you cannot ping the portal,
then you will not be able to bind a session to a NIC.

What is a scsi_host and iface for software, hardware and partial
offload iSCSI?

Software iSCSI, like iscsi_tcp and iser, allocates a scsi_host per session
and does a single connection per session. As a result
/sys/class_scsi_host and /proc/scsi will report a scsi_host for
each connection/session you have logged into. Offload iSCSI, like
Chelsio cxgb3i, allocates a scsi_host for each PCI device (each
port on an HBA will show up as a different PCI device, so you get
a scsi_host per HBA port).

To manage both types of initiator stacks, iscsiadm uses the interface
(iface) structure. For each HBA port, or for software iSCSI for each
network device (ethX) or NIC that you wish to bind sessions to, you
must create an iface config in /etc/iscsi/ifaces.

Prep
----

The iface binding feature requires the sysctl setting
net.ipv4.conf.default.rp_filter to be set to 0 or 2.
This can be set in /etc/sysctl.conf by having the line:
	net.ipv4.conf.default.rp_filter = N

where N is 0 or 2. Note that when setting this you may have to reboot
for the value to take effect.


rp_filter information from Documentation/networking/ip-sysctl.txt:

rp_filter - INTEGER
	0 - No source validation.
	1 - Strict mode as defined in RFC3704 Strict Reverse Path
	    Each incoming packet is tested against the FIB and if the interface
	    is not the best reverse path the packet check will fail.
	    By default failed packets are discarded.
	2 - Loose mode as defined in RFC3704 Loose Reverse Path
	    Each incoming packet's source address is also tested against the FIB
	    and if the source address is not reachable via any interface
	    the packet check will fail.
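The acceptable-value rule above (0 and 2 work for iface binding, strict
mode 1 does not) can be captured in a small helper. A sketch, with an
illustrative helper name:

```shell
#!/bin/sh
# Check whether an rp_filter value is compatible with iface binding,
# per the text above: only 0 (no validation) and 2 (loose mode) work.
rp_filter_ok() {
    case "$1" in
        0|2) return 0 ;;   # binding works
        *)   return 1 ;;   # strict mode (1) or junk: fix sysctl first
    esac
}

# On a live system one would feed in the real setting, e.g.:
#   rp_filter_ok "$(sysctl -n net.ipv4.conf.default.rp_filter)"
for v in 0 1 2; do
    if rp_filter_ok "$v"; then
        echo "rp_filter=$v: ok for iface binding"
    else
        echo "rp_filter=$v: must be changed to 0 or 2"
    fi
done
```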

Running
-------

The command:

	iscsiadm -m iface

will report iface configurations that are setup in /etc/iscsi/ifaces:

	iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax
	iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

For software iSCSI, you can create the iface configs by hand, but it is
recommended that you use iscsiadm's iface mode. There is an iface.example in
/etc/iscsi/ifaces which can be used as a template for the daring.

For each network object you wish to bind a session to, you must create
a separate iface config in /etc/iscsi/ifaces and each iface config file
must have a unique name which is less than or equal to 64 characters.

Example
-------

If you have NIC1 with MAC address 00:0F:1F:92:6B:BF and NIC2 with
MAC address 00:C0:DD:08:63:E7, and you wanted to do software iSCSI over
TCP/IP, then in /etc/iscsi/ifaces/iface0 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:0F:1F:92:6B:BF

and in /etc/iscsi/ifaces/iface1 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:C0:DD:08:63:E7
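As an illustration, the two files above can be generated from a script.
The sketch below writes them into a scratch directory rather than the
real /etc/iscsi/ifaces (the MACs are the example values from this
section):

```shell
#!/bin/sh
# Write the two example iface configs from the text into a scratch
# directory; on a real system the files live in /etc/iscsi/ifaces.
set -e
IFACE_DIR=$(mktemp -d)

cat > "$IFACE_DIR/iface0" <<'EOF'
iface.transport_name = tcp
iface.hwaddress = 00:0F:1F:92:6B:BF
EOF

cat > "$IFACE_DIR/iface1" <<'EOF'
iface.transport_name = tcp
iface.hwaddress = 00:C0:DD:08:63:E7
EOF

# Each config must have a unique name of 64 characters or less,
# and must not be called "default" or "iser" (reserved names).
ls "$IFACE_DIR"
```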

Warning: Do not name an iface config file "default" or "iser".
They are special values/files that are used by the iSCSI tools for
backward compatibility. If you name an iface default or iser, then
the behavior is not defined.

To use iscsiadm to create an iface0 similar to the above example, run:

	iscsiadm -m iface -I iface0 --op=new

(This will create a new empty iface config. If there was already an iface
with the name "iface0", this command will overwrite it.)

Next, set the hwaddress:

	iscsiadm -m iface -I iface0 --op=update \
		-n iface.hwaddress -v 00:0F:1F:92:6B:BF

If you have sessions logged in, iscsiadm will not update or overwrite
an iface; you must log out first. If you have an iface bound to a
node/portal but have not logged in, then iscsiadm will update the
config and all existing bindings.

You should now skip to 5.1.3 to see how to log in using the iface, and for
some helpful management commands.


5.1.2 Setting up an iface for an iSCSI offload card
===================================================

This section describes how to setup ifaces for use with Chelsio, Broadcom and
QLogic cards.

By default, iscsiadm will create an iface for each Broadcom, QLogic and Chelsio
port. The iface name will be of the form:

	$transport/driver_name.$MAC_ADDRESS

Running the following command:

	iscsiadm -m iface

will report iface configurations that are setup in /etc/iscsi/ifaces:

	default tcp,<empty>,<empty>,<empty>,<empty>
	iser iser,<empty>,<empty>,<empty>,<empty>
	cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>
	qla4xxx.00:0e:1e:04:8b:2e qla4xxx,00:0e:1e:04:8b:2e,<empty>,<empty>,<empty>

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

where:	iface_name:		name of iface
	transport_name:		name of driver
	hwaddress:		MAC address
	ipaddress:		IP address to use for this port
	net_iface_name:		will be <empty> because it can change between
				reboots. It is used for software iSCSI's vlan
				or alias binding.
	initiatorname:		Initiatorname to be used if you want to override the
				default one in /etc/iscsi/initiatorname.iscsi.
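Because the one-line format is comma-separated after the iface name, it
can be split with plain shell. A sketch, using the cxgb3i example line
from above as sample input:

```shell
#!/bin/sh
# Split an "iscsiadm -m iface" listing line into its named fields:
#   iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname
line='cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>'

iface_name=${line%% *}          # up to the first space
rest=${line#* }                 # the comma-separated remainder
IFS=, read -r transport hwaddress ipaddress net_ifacename initiatorname <<EOF
$rest
EOF

echo "iface:     $iface_name"
echo "transport: $transport"
echo "hwaddress: $hwaddress"
echo "ipaddress: $ipaddress"
```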

To display these values in a more friendly way, run:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07

Example output:

	# BEGIN RECORD 2.0-871
	iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
	iface.net_ifacename = <empty>
	iface.ipaddress = <empty>
	iface.hwaddress = 00:07:43:05:97:07
	iface.transport_name = cxgb3i
	iface.initiatorname = <empty>
	# END RECORD

Before you can use the iface, you must set the IP address for the port.
We determine the corresponding variable name that we want to update from
the output above, which is "iface.ipaddress".
Then we fill this empty variable with the value we desire, with this command:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update \
		-n iface.ipaddress -v 20.15.0.66
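To see which fields of a record still need values, look for "<empty>".
A sketch that scans a saved record for unset fields (the record text is
the example output from above, embedded as sample data):

```shell
#!/bin/sh
# List the iface record fields that are still unset (value "<empty>"),
# using the example record from the text as input.
record='# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
iface.net_ifacename = <empty>
iface.ipaddress = <empty>
iface.hwaddress = 00:07:43:05:97:07
iface.transport_name = cxgb3i
iface.initiatorname = <empty>
# END RECORD'

# Split each "name = value" line and keep names whose value is <empty>.
unset_fields=$(printf '%s\n' "$record" | awk -F' = ' '$2 == "<empty>" {print $1}')
printf '%s\n' "$unset_fields"
```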

Note for QLogic ports: After updating the iface record, you must apply or
applyall the settings for the changes to take effect:

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2e -o apply
	iscsiadm -m iface -H 00:0e:1e:04:8b:2e -o applyall

With "apply", the network settings for the specified iface will take effect.
With "applyall", the network settings for all ifaces on a specific host will
take effect. The host can be specified using the -H/--host argument by either
the MAC address of the host or the host number.

Here is an example of setting multiple IPv6 addresses on a single iSCSI
interface port.
First interface (no need to set iface_num, it is 0 by default):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9392

Create the second interface if it does not exist (iface_num is mandatory here):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 --op=new
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.iface_num -v 1
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9393
	iscsiadm -m iface -H 00:0e:1e:04:8b:2a --op=applyall

Note: If there are settings common to multiple interfaces, then the
settings from the 0th iface are considered valid.

Now we can use this iface to log in to targets, which is described in
the next section.


5.1.3 Discovering iSCSI targets/portals
========================================

Be aware that iscsiadm will use the default route to do discovery. It will
not use the iface specified. So if you are using an offload card, you will
need a separate network connection to the target for discovery purposes.

*This should be fixed in some future version of Open-iSCSI*

For compatibility reasons, when you run iscsiadm to do discovery, it
will check for interfaces in /etc/iscsi/ifaces that are using
tcp for the iface.transport, and it will bind the portals that are discovered
so that they will be logged in through those ifaces. This behavior can also
be overridden by passing in the interfaces you want to use. For the case
of offload, like with cxgb3i and bnx2i, this is required because the transport
will not be tcp.

For example if you had defined two interfaces but only wanted to use one,
you can use the --interface/-I argument:

	iscsiadm -m discoverydb -t st -p ip:port -I iface1 --discover -P 1

If you had defined interfaces but wanted the old behavior, where we do not
bind a session to an iface, then you can use the special iface "default":

	iscsiadm -m discoverydb -t st -p ip:port -I default --discover -P 1

And if you did not define any interfaces in /etc/iscsi/ifaces and do
not pass anything into iscsiadm, running iscsiadm will do the default
behavior, allowing the network subsystem to decide which device to use.

If you later want to remove the bindings for a specific target and
iface, then you can run:

	iscsiadm -m node -T my_target -I iface0 --op=delete

To do this for a specific portal on a target, run:

	iscsiadm -m node -T my_target -p ip:port -I iface0 --op=delete

If you wanted to delete all bindings for iface0, then you can run:

	iscsiadm -m node -I iface0 --op=delete

And for EqualLogic targets it is sometimes useful to remove just by portal:

	iscsiadm -m node -p ip:port -I iface0 --op=delete


Now logging into targets is the same as with software iSCSI. See section 7
for how to get started.


5.2 iscsiadm examples
=====================

Usage examples using the one-letter options (see iscsiadm man page
for long options):

Discovery mode
--------------

- SendTargets iSCSI Discovery using the default driver and interface and
		using the discovery settings for the discovery record with the
		ID [192.168.1.1:3260]:

	iscsiadm -m discoverydb -t st -p 192.168.1.1:3260 --discover

  This will search /etc/iscsi/send_targets for a record with the
  ID [portal = 192.168.1.1:3260, type = sendtargets]. If found, it
  will perform discovery using the settings stored in the record.
  If a record does not exist, it will be created using the iscsid.conf
  discovery settings.

  The argument to -p may also be a hostname instead of an address:

		iscsiadm -m discoverydb -t st -p somehost --discover

  For the ifaces, iscsiadm will first search /etc/iscsi/ifaces for
  interfaces using software iSCSI. If any are found, then nodes found
  during discovery will be set up so that they can be logged in through
  those interfaces. To specify a specific iface, pass the
  -I argument for each iface.

- SendTargets iSCSI Discovery updating existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update --discover

  If there is a record for targetX, and portalY exists in the DB, and
  is returned during discovery, it will be updated with the info from
  iscsid.conf. No new portals will be added, and stale portals
  will not be removed.

- SendTargets iSCSI Discovery deleting existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o delete --discover

  If there is a record for targetX, and portalY exists in the DB, but
  is not returned during discovery, it will be removed from the DB.
  No new portals will be added and existing portal records will not
  be changed.

  Note: If a session is logged into a portal whose record is being
  deleted, the session will be logged out and then the record will be
  deleted.

- SendTargets iSCSI Discovery adding new records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new --discover

  If targetX and portalY are returned during discovery and do not
  have a record, one will be added. Existing records are not modified.

- SendTargets iSCSI Discovery using multiple ops:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new -o delete --discover

  This command will add new portals and delete records for portals
  no longer returned. It will not change the record information for
  existing portals.

- SendTargets iSCSI Discovery in nonpersistent mode:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o nonpersistent --discover

  This command will perform discovery, but not manipulate the node DB.

- SendTargets iSCSI Discovery with a specific interface.  If you wish
  to only use a subset of the interfaces in
  /etc/iscsi/ifaces, then you can pass them in during discovery:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		--interface=iface0 --interface=iface1 --discover

  Note that for software iSCSI, we let the network layer select
  which NIC to use for discovery, but for later logins iscsiadm
  will use the NIC defined in the iface configuration.

  qla4xxx support is very basic and experimental. It does not store
  the record info in the card's FLASH or the node DB, so you must
  rerun discovery every time the driver is reloaded.

- Manipulate SendTargets DB: Create new SendTargets discovery record or
  overwrite an existing discovery record with iscsid.conf
  discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o new

- Manipulate SendTargets DB: Display discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o show

- Manipulate SendTargets DB: Display hidden discovery settings like
		 CHAP passwords:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o show --show

- Manipulate SendTargets DB: Set discovery setting.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update -n name -v value

- Manipulate SendTargets DB: Delete discovery record. This will also delete
  the records for the targets found through the discovery source.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o delete

- Show all records in discovery database:

	iscsiadm -m discovery

- Show all records in discovery database and show the targets that were
  discovered from each record:

	iscsiadm -m discovery -P 1

Node mode
---------

In node mode you can specify which records you want to log
into by specifying the targetname, ip address, port or interface
(if specifying the interface it must already be setup in the node db).
iscsiadm will search the node db for records which match the values
you pass in, so if you pass in the targetname and interface, iscsiadm
will search for records with those values and operate on only them.
Passing in none of them will result in all node records being operated on.

- iSCSI Login to all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -l

- iSCSI login to all portals on a node/target through each interface set
  in the db, but do not wait for the login response:

	iscsiadm -m node -T iqn.2005-03.com.max -l -W

- iSCSI login to a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -l

  To specify an IPv6 address, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p 2001:c90::211:9ff:feb8:a9e9 -l

  The above command would use the default port, 3260. To specify a
  port, use the following:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p [2001:c90::211:9ff:feb8:a9e9]:3260 -l

  To specify a hostname, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max -p somehost -l
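The bracketing rule above can be sketched in shell (a hypothetical helper, not
part of iscsiadm; it assumes an address containing two or more colons is IPv6):

```shell
# Build the -p portal argument: IPv6 addresses must be bracketed when a
# port is appended; IPv4 addresses and hostnames must not be.
addr=2001:c90::211:9ff:feb8:a9e9
port=3260
case "$addr" in
  *:*:*) portal="[$addr]:$port" ;;  # two or more colons: treat as IPv6
  *)     portal="$addr:$port"   ;;  # IPv4 address or hostname
esac
echo "$portal"
```

The resulting $portal string can then be passed as the -p argument.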

- iSCSI Login to a specific portal through the NIC setup as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0  -l

- iSCSI Logout of all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -u

  Warning: this does not check startup values like the logout/login all
  option. Do not use this if you are running iSCSI on your root disk.

- iSCSI logout of all portals on a node/target through each interface set
  in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -u

- iSCSI logout of a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -u

- iSCSI Logout of a specific portal through the NIC setup as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0 -u

- Changing iSCSI parameter:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].iscsi.MaxRecvDataSegmentLength -v 65536

  You can also change parameters for multiple records at once, by
  specifying different combinations of target, portal and interface
  like above.

- Adding custom iSCSI portal:

	iscsiadm -m node -o new -T iqn.2005-03.com.max \
		-p 192.168.0.1:3260,2 -I iface4

  The -I/--interface is optional. If not passed in, "default" is used.
  For tcp or iser, this would allow the network layer to decide what is
  best.

  Note that for this command, the Target Portal Group Tag (TPGT) should
  be passed in. If it is not passed in on the initial creation command,
  then the user must run iscsiadm again to set the value. Also,
  if the TPGT is not initially passed in, the old behavior of not
  tracking whether the record was statically or dynamically created
  is used.

- Adding custom NIC config to multiple targets:

	iscsiadm -m node -o new -I iface4

  This command will add an interface config using the iSCSI and SCSI
  settings from iscsid.conf to every target that is in the node db.

- Removing iSCSI portal:

	iscsiadm -m node -o delete -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also delete multiple records at once, by specifying different
  combinations of target, portal and interface like above.

- Display iSCSI portal configuration:

	iscsiadm -m node [-o show] -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also display multiple records at once, by specifying different
  combinations of target, portal and interface like above.

  Note: running "iscsiadm -m node" will only display the records. It
  will not display the configuration info. For the latter, run:

	iscsiadm -m node -o show

- Show all node records:

	iscsiadm -m node

  This will print the nodes using the old flat format where the
  interface and driver are not displayed. To display that info
  use the -P option with the argument "1":

	iscsiadm -m node -P 1

Session mode
------------

- Display session statistics:

	iscsiadm -m session -r 1 --stats

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

- Perform a SCSI scan on a session

	iscsiadm -m session -r 1 --rescan

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

  Note: Rescanning does not delete old LUNs. It will only pick up new
  ones.

- Display running sessions:

	iscsiadm -m session -P 1

Host mode with flashnode submode
--------------------------------

- Display list of flash nodes for a host

	iscsiadm -m host -H 6 -C flashnode

  This will print a list of all the flash node entries for the given
  host along with their ip, port, tpgt and iqn values.

- Display all parameters of a flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -x 0

  This will list all the parameter name/value pairs for the
  flash node entry at index 0 of host 6.

- Add a new flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -o new -A [ipv4|ipv6]

  This will add a new flash node entry for the given host 6 with a
  portal type of either ipv4 or ipv6. The new operation returns the
  index of the newly created flash node entry.

- Update a flashnode entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o update \
		-n flashnode.conn[0].ipaddress -v 192.168.1.12 \
		-n flashnode.session.targetname \
		-v iqn.2002-03.com.compellent:5000d310004b0716

  This will update the values of ipaddress and targetname params of
  the flash node entry at index 1 of host 6.

- Login to a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o login

- Logout from a flash node entry
	Logout can be performed either using the flash node index:

	iscsiadm -m host -H 6 -C flashnode -x 1 -o logout

  or by using the corresponding session index:

	iscsiadm -m session -r $sid -u

- Delete a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o delete

Host mode with chap submode
---------------------------

- Display list of chap entries for a host

	iscsiadm -m host -H 6 -C chap -o show

- Delete a chap entry for a host

	iscsiadm -m host -H 6 -C chap -o delete -x 5

  This will delete any chap entry present at index 5.

- Add/Update a local chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 4 -n username \
			-v value -n password -v value

  This will update the local chap entry present at index 4. If index 4
  is free, then a new entry of type local chap will be created at that
  index with the given username and password values.

- Add/Update a bidi chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 5 -n username_in \
		-v value -n password_in -v value

  This will update the bidi chap entry present at index 5. If index 5
  is free, then a new entry of type bidi chap will be created at that
  index with the given username_in and password_in values.

Host mode with stats submode
----------------------------

- Display host statistics:

	iscsiadm -m host -H 6 -C stats

  This will print the aggregate statistics on the host adapter port.
  This includes MAC, TCP/IP, ECC & iSCSI statistics.


6. Configuration
================

The default configuration file is /etc/iscsi/iscsid.conf, but the
directory is configurable with the top-level make option "homedir".
The remainder of this document will assume the /etc/iscsi directory.
This file contains only configuration that could be overwritten by iSCSI
discovery, or manually updated via the iscsiadm utility. It is OK if this
file does not exist, in which case the compiled-in default configuration
will be used for newly discovered target nodes.

See the man page and the example file for the current syntax.
The manual pages for iscsid and iscsiadm are in the doc subdirectory. They
are not installed automatically, and need to be manually copied into the
appropriate man page directory, e.g. /usr/local/share/man8.
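As an illustration, a minimal iscsid.conf fragment using settings discussed
later in this document (the values are examples taken from sections 7.3,
8.1.1 and 8.1.3, not recommendations for every setup):

```
# Log into discovered nodes automatically
node.startup = automatic

# Send an iSCSI ping (NOP-Out) every 5 seconds; fail the connection
# if no response arrives within 10 seconds
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10

# How long to wait for session re-establishment before failing
# SCSI commands upwards (the default)
node.session.timeo.replacement_timeout = 120
```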


7. Getting Started
==================

There are three steps needed to set up a system to use iSCSI storage:

7.1. iSCSI startup using the systemd units or manual startup.
7.2. Discover targets.
7.3. Automate target logins for future system reboots.

The systemd startup units will start the iSCSI daemon and log into any
portals that are set up for automatic login (discussed in 7.3)
or discovered through the discovery daemon iscsid.conf params
(discussed in 7.4).

If your distro does not have systemd units for iSCSI, then you will have
to start the daemon and log into the targets manually.


7.1.1 iSCSI startup using the systemd units
===========================================

Red Hat or Fedora:
-----------------
To start Open-iSCSI in Red Hat/Fedora you can do:

	systemctl start open-iscsi

To get Open-iSCSI to automatically start at run time you may have to
run:
	systemctl enable open-iscsi

And, to automatically mount a file system during startup
you must have the partition entry in /etc/fstab marked with the "_netdev"
option. For example this would mount an iSCSI disk sdb:

	/dev/sdb /mnt/iscsi ext3 _netdev 0 0

SUSE or Debian:
---------------
The Open-iSCSI service is socket activated, so there is no need to
enable the Open-iSCSI service. Likewise, the iscsi.service login
service is enabled automatically, so setting 'startup' to 'automatic'
will enable automatic login to Open-iSCSI targets.


7.1.2 Manual Startup
====================

7.1.2.1 Starting up the iSCSI daemon (iscsid) and loading modules
=================================================================

If there are no systemd units or init scripts, you must start the tools by
hand. First load the iSCSI modules:

	modprobe -q iscsi_tcp

After that, start iSCSI as a daemon process:

	iscsid

or alternatively, start it with debug enabled, in a separate window,
which will force it into "foreground" mode:

	iscsid -d 8


7.1.2.2 Logging into Targets
============================

Use the configuration utility, iscsiadm, to add/remove/update Discovery
records, iSCSI Node records or monitor active iSCSI sessions (see above or the
iscsiadm man files and see section 7.2 below for how to discover targets):

	iscsiadm -m node

This will print out the nodes that have been discovered as:

	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311

The format is:

	ip:port,target_portal_group_tag targetname

If you are using the iface argument or want to see the driver
info, use the following:

	iscsiadm -m node -P 1

Example output:

	Target: iqn.1992-08.com.netapp:sn.33615311
	        Portal: 10.15.84.19:3260,2
	                Iface Name: iface2
	        Portal: 10.15.85.19:3260,3
	                Iface Name: iface2

The format is:

	Target: targetname
		Portal: ip_address:port,tpgt
			Iface Name: ifacename

Here, targetname is the name of the target, and ip_address:port
is the address and port of the portal. tpgt is the Target Portal Group
Tag of the portal, and is not used in iscsiadm commands except for static
record creation. ifacename is the name of the iSCSI interface
defined in /etc/iscsi/ifaces. If no interface was defined in
/etc/iscsi/ifaces or passed in, the default behavior is used:
iscsi_tcp/tcp over whichever NIC the network layer decides is best.

To login, take the ip, port and targetname from above and run:

	iscsiadm -m node -T targetname -p ip:port -l

In this example we would run:

	iscsiadm -m node -T iqn.1992-08.com.netapp:sn.33615311 \
		-p 10.15.84.19:3260 -l

Note: when forming the -p argument, drop the portal group tag from the
"iscsiadm -m node" output.

If you wish, for example to login to all targets represented in the node
database, but not wait for the login responses:

	iscsiadm -m node -l -W

After this, you can use "session" mode to detect when the logins complete:

	iscsiadm -m session
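The record lines above can be split with plain shell parameter expansion to
build the login command; a minimal sketch using the sample record from this
section (no iscsiadm invocation, just string handling):

```shell
# A record line has the format: ip:port,target_portal_group_tag targetname
# Login takes only -T targetname and -p ip:port, so the tpgt is dropped.
record="10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311"
portal=${record%%,*}   # cut at the first comma -> ip:port
target=${record##* }   # cut at the last space  -> targetname
echo "iscsiadm -m node -T $target -p $portal -l"
```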


7.2. Discover Targets
=====================

Once the iSCSI service is running, you can perform discovery using
SendTargets with:

	iscsiadm -m discoverydb -t sendtargets -p ip:port --discover

Here, "ip" is the address of the portal and "port" is the port.

To use iSNS you can run the discovery command with the type as "isns"
and pass in the ip:port:

	iscsiadm -m discoverydb -t isns -p ip:port --discover

Both commands will print out the list of all discovered targets and their
portals, e.g.:

	iscsiadm -m discoverydb -t st -p 10.15.85.19:3260 --discover

This might produce:

	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

The format for the output is:

	ip:port,tpgt targetname

In this example, for the first portal the IP address is 10.15.84.19,
the port is 3260, and the target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311.

If you would also like to see the iSCSI interface which will be used
for each session, then use the --print=[N]/-P [N] option:

	iscsiadm -m discoverydb -t sendtargets -p ip:port -P 1 --discover

This might print:

    Target: iqn.1992-08.com.netapp:sn.33615311
        Portal: 10.15.84.19:3260,2
           Iface Name: iface2
        Portal: 10.15.85.19:3260,3
           Iface Name: iface2

In this example, the IP address of the first portal is 10.15.84.19,
the port is 3260, and the target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311. The iface being used is iface2.

While discovery targets are kept in the discovery db, they are
useful only for re-discovery. The discovered targets (a.k.a. nodes)
are stored as records in the node db.

The discovered targets are not logged into yet. Rather than logging
into the discovered nodes (making LUs from those nodes available as
storage), it is better to automate the login to the nodes we need.

If you wish to log into a target manually now, see section
"7.1.2.2 Logging into Targets" above.


7.3. Automate Target Logins for Future System Startups
======================================================

Note: this may only work for distros with systemd iSCSI login scripts.

To automate login to a node, use the following with the record ID
(record ID is the targetname and portal) of the node discovered in the
discovery above:

	iscsiadm -m node -T targetname -p ip:port --op update -n node.startup -v automatic

To set the automatic setting for all portals on a target through every
interface set up for each portal, the following can be run:

	iscsiadm -m node -T targetname --op update -n node.startup -v automatic

Or to set the "node.startup" attribute to "automatic" as default for
all sessions add the following to the /etc/iscsi/iscsid.conf:

	node.startup = automatic

Setting this in iscsid.conf will not affect existing nodes. It will only
affect nodes that are discovered after setting the value.

To login to all automated nodes, simply restart the iSCSI login service, e.g. with:

	systemctl restart iscsi.service

On your next startup the nodes will be logged into automatically.


7.4 Automatic Discovery and Login
=================================

Instead of running the iscsiadm discovery command and editing the
startup setting, iscsid can be configured to perform discovery every X
seconds, logging into portals that are returned and out of portals that
are no longer returned. In this mode, when iscsid starts it will check
the discovery db for iSNS records with:

	discovery.isns.use_discoveryd = Yes

and for SendTargets discovery records with:

	discovery.sendtargets.use_discoveryd = Yes

If set, iscsid will perform discovery to the address every
discovery.isns.discoveryd_poll_inval or
discovery.sendtargets.discoveryd_poll_inval seconds,
and it will log into any portals found from the discovery source using
the ifaces in /etc/iscsi/ifaces.

Note that for iSNS the poll_interval does not have to be set. If not set,
iscsid will only perform rediscovery when it gets a SCN from the server.

iSNS note: For servers like Microsoft's that allow SCN registrations but
do not send SCN events, discovery.isns.discoveryd_poll_inval should be
set to a non-zero value to auto-discover new targets. This is also useful
for servers like linux-isns (SLES's iSNS server), which sometimes does
not send SCN events in the proper format, so they may not get handled.

Examples
--------

SendTargets
-----------

- Create a SendTargets record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260 -o new

  On success, this will output something like:

	New discovery record for [20.15.0.7,3260] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.use_discoveryd -v Yes

- Set the polling interval:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iSCSI services.

NOTE:	When iscsiadm is run with the -o new argument, it will use the
	discovery.sendtargets.use_discoveryd and
	discovery.sendtargets.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.

iSNS
----

- Create an iSNS record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o new

  Response on success:

	New discovery record for [20.15.0.7,3205] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205  -o update \
		-n discovery.isns.use_discoveryd -v Yes

- [OPTIONAL: see iSNS note above] Set the polling interval if needed:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o update \
		-n discovery.isns.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iscsi services.

Note:	When iscsiadm is run with the -o new argument, it will use the
	discovery.isns.use_discoveryd and discovery.isns.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.


8. Advanced Configuration
=========================

8.1 iSCSI settings for dm-multipath
===================================

When using dm-multipath, the iSCSI timers should be set so that commands
are quickly failed up to the dm-multipath layer. In dm-multipath you should
then set options like queue_if_no_path or no_path_retry, so that IO is
queued and retried in the multipath layer if all paths have failed.


8.1.1 iSCSI ping/Nop-Out settings
=================================
To quickly detect problems in the network, the iSCSI layer will send iSCSI
pings (iSCSI NOP-Out requests) to the target. If a NOP-Out times out, the
iSCSI layer will respond by failing the connection and starting the
replacement_timeout. It will then tell the SCSI layer to stop the device queues
so no new IO will be sent to the iSCSI layer and to requeue and retry the
commands that were running if possible (see the next section on retrying
commands and the replacement_timeout).

To control how often a NOP-Out is sent, the following value can be set:

	node.conn[0].timeo.noop_out_interval = X

Where X is in seconds and the default is 10 seconds. To control the
timeout for the NOP-Out the noop_out_timeout value can be used:

	node.conn[0].timeo.noop_out_timeout = X

Again X is in seconds and the default is 15 seconds.

Normally for these values you can use:

	node.conn[0].timeo.noop_out_interval = 5
	node.conn[0].timeo.noop_out_timeout = 10

If there are a lot of IO error messages like

	detected conn error (22)

in the kernel log then the above values may be too aggressive. You may need to
increase the values for your network conditions and workload, or you may need
to check your network for possible problems.


8.1.2 SCSI command retries
==========================

SCSI disk commands get 5 retries by default. In newer kernels this can be
controlled via the sysfs file:

	/sys/block/$sdX/device/scsi_disk/$host:$bus:$target:LUN/max_retries

by writing an integer lower than 5 to reduce retries, or setting it to -1
for infinite retries.

The number of actual retries a command gets may be less than 5 or what is
requested in max_retries if the replacement timeout expires. When that timer
expires it tells the SCSI layer to fail all new and queued commands.


8.1.3 replacement_timeout
=========================

The iSCSI layer timer:

	node.session.timeo.replacement_timeout = X

controls how long to wait for session re-establishment before failing all SCSI
commands:

	1. commands that have been requeued and awaiting a retry
	2. commands that are being operated on by the SCSI layer's error handler
	3. all new commands that are queued to the device

up to a higher level like multipath, filesystem layer, or to the application.

The setting is in seconds. Zero means fail immediately. -1 means an
infinite timeout, which will wait until iscsid does a relogin, the user
runs the iscsiadm logout command, or the node.session.reopen_max limit
is hit.

When this timer is started, the iSCSI layer will stop new IO from executing
and requeue running commands to the Block/SCSI layer. The new and requeued
commands will then sit in the Block/SCSI layer queue until the timeout has
expired, there is userspace intervention like an iscsiadm logout command, or
there is a successful relogin. If the command has run out of retries, the
command will be failed instead of being requeued.

After this timer has expired iscsid can continue to try to relogin. By default
iscsid will continue to try to relogin until there is a successful relogin or
until the user runs the iscsiadm logout command. The number of relogin retries
is controlled by the Open-iSCSI setting node.session.reopen_max. If that is set
too low, iscsid may give up and forcefully logout the session (equivalent to
running the iscsiadm logout command on a failed session) before replacement
timeout seconds. This will result in all commands being failed at that time.
The user would then have to manually relogin.

This timer starts when you see the connection error message:

	detected conn error (%d)

in the kernel log. The %d will be an integer with the following mappings
and meanings:

Int     Kernel define           Description
value
------------------------------------------------------------------------------
1	ISCSI_ERR_DATASN	Low level iSCSI protocol error where a data
				sequence value did not match the expected value.
2	ISCSI_ERR_DATA_OFFSET	There was an error where we were asked to
				read/write past a buffer's length.
3	ISCSI_ERR_MAX_CMDSN	Low level iSCSI protocol error where we got an
				invalid MaxCmdSN value.
4	ISCSI_ERR_EXP_CMDSN	Low level iSCSI protocol error where the
				ExpCmdSN from the target didn't match the
				expected value.
5	ISCSI_ERR_BAD_OPCODE	The iSCSI Target has sent an invalid or unknown
				opcode.
6	ISCSI_ERR_DATALEN	The iSCSI target has sent a PDU with a data
				length that is invalid.
7	ISCSI_ERR_AHSLEN	The iSCSI target has sent a PDU with an invalid
				Additional Header Length.
8	ISCSI_ERR_PROTO		The iSCSI target has performed an operation that
				violated the iSCSI RFC.
9	ISCSI_ERR_LUN		The iSCSI target has requested an invalid LUN.
10	ISCSI_ERR_BAD_ITT       The iSCSI target has sent an invalid Initiator
				Task Tag.
11	ISCSI_ERR_CONN_FAILED   Generic error that can indicate the transmission
				of a PDU, like a SCSI cmd or task management
				function, has timed out. Or, we are not able to
				transmit a PDU because the network layer has
				returned an error, or we have detected a
				network error like a link down. It can
				sometimes be an error that does not fit the
				other error codes, e.g. a kernel function has
				returned a failure and there is no way to
				recover except to drop the existing session
				and relogin.
12	ISCSI_ERR_R2TSN		Low level iSCSI protocol error where the R2T
				sequence numbers do not match.
13	ISCSI_ERR_SESSION_FAILED
				Unused.
14	ISCSI_ERR_HDR_DGST	iSCSI Header Digest error.
15	ISCSI_ERR_DATA_DGST	iSCSI Data Digest error.
16	ISCSI_ERR_PARAM_NOT_FOUND
				Userspace has passed the kernel an unknown
				setting.
17	ISCSI_ERR_NO_SCSI_CMD	The iSCSI target has sent an ITT for an unknown
				task.
18	ISCSI_ERR_INVALID_HOST	The iSCSI Host is no longer present or being
				removed.
19	ISCSI_ERR_XMIT_FAILED	The software iSCSI initiator or cxgb was not
				able to transmit a PDU because of a network
				layer error.
20	ISCSI_ERR_TCP_CONN_CLOSE
				The iSCSI target has closed the connection.
21	ISCSI_ERR_SCSI_EH_SESSION_RST
				The SCSI layer's Error Handler has timed out
				the SCSI cmd, tried to abort it and possibly
				tried to send a LUN RESET, and it's now
				going to drop the session.
22	ISCSI_ERR_NOP_TIMEDOUT	An iSCSI NOP-Out, used as a ping, has timed out.


8.1.4 Running Commands, the SCSI Error Handler, and replacement_timeout
=======================================================================

Each SCSI command has a timer controlled by:

	/sys/block/sdX/device/timeout

The value is in seconds and the default ranges from 30 - 60 seconds
depending on the distro's udev scripts.

When a command is sent to the iSCSI layer the timer is started, and when it's
returned to the SCSI layer the timer is stopped. This could be for successful
completion or due to a retry/requeue due to a conn error like described
previously. If a command is retried the timer is reset.

When the command timer fires, the SCSI layer will ask the iSCSI layer to abort
the command by sending an ABORT_TASK task management request. If the abort
is successful the SCSI layer retries the command if it has enough retries left.
If the abort times out, the iSCSI layer will report failure to the SCSI layer
and will fire a ISCSI_ERR_SCSI_EH_SESSION_RST error. In the logs you will see:

	detected conn error (21)

The ISCSI_ERR_SCSI_EH_SESSION_RST will cause the connection/session to be
dropped and the iSCSI layer will start the replacement_timeout operations
described in that section.

The SCSI layer will then eventually call the iSCSI layer's target/session reset
callout which will wait for the replacement timeout to expire, a successful
relogin to occur, or for userspace to logout the session.

- If the replacement timeout fires, then commands will be failed upwards as
described in the replacement timeout section. The SCSI devices will be put
into an offline state until iscsid performs a relogin.

- If a relogin occurs before the timer fires, commands will be retried if
possible.

To check if the SCSI error handler is running, iscsiadm can be run as:

	iscsiadm -m session -P 3

and you will see:

	Host Number: X State: Recovery

To modify the timer that starts the SCSI EH, you can either write
directly to the device's sysfs file:

	echo X > /sys/block/sdX/device/timeout

where X is in seconds.
Alternatively, on most distros you can modify the udev rule.

To modify the udev rule open /etc/udev/rules.d/50-udev.rules, and find the
following lines:

	ACTION=="add", SUBSYSTEM=="scsi" , SYSFS{type}=="0|7|14", \
		RUN+="/bin/sh -c 'echo 60 > /sys$$DEVPATH/timeout'"

And change the "echo 60" part of the line to the value that you want.

The default timeout for normal File System commands is 30 seconds when udev
is not being used. If udev is used the default is the above value which
is normally 60 seconds.


8.1.5 Optimal replacement_timeout Value
=======================================

The default value for replacement_timeout is 120 seconds, but because
multipath's queue_if_no_path and no_path_retry setting can prevent IO errors
from being propagated to the application, replacement_timeout can be set to a
shorter value like 5 to 15 seconds. By setting it lower, pending IO is quickly
sent to a new path and executed while the iSCSI layer attempts
re-establishment of the session. If all paths end up being failed, then the
multipath and device mapper layer will internally queue IO based on the
multipath.conf settings, instead of the iSCSI layer.
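For example, with multipath queueing configured, the timeout could be
shortened in iscsid.conf (15 seconds, the upper end of the range suggested
above):

```
node.session.timeo.replacement_timeout = 15
```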


8.2 iSCSI settings for iSCSI root
=================================

When accessing the root partition directly through an iSCSI disk, the
iSCSI timers should be set so that iSCSI layer has several chances to try to
re-establish a session and so that commands are not quickly requeued to
the SCSI layer. Basically you want the opposite of when using dm-multipath.

For this setup, you can turn off iSCSI pings (NOPs) by setting:

	node.conn[0].timeo.noop_out_interval = 0
	node.conn[0].timeo.noop_out_timeout = 0

And you can turn the replacement_timer to a very long value:

	node.session.timeo.replacement_timeout = 86400


8.3 iSCSI settings for iSCSI tape
=================================

It is possible to use open-iscsi to connect to a remote tape drive,
making it available locally. In such a case, you need to disable NOP-Outs,
since tape drives don't handle those well at all. See above (section 8.2)
for how to disable these NOPs.


9. iSCSI System Info
====================

To get information about the running sessions, including the session and
device state, session ids (sid) for session mode, and some of the
negotiated parameters, run:

	iscsiadm -m session -P 2

If you are looking for something shorter, like just the sid to node mapping,
run:

	iscsiadm -m session [-P 0]

This will print the list of running sessions with the format:

	driver [sid] ip:port,target_portal_group_tag targetname

Example output of "iscsiadm -m session":

	tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

To print the hw address info use the -P option with "1":

	iscsiadm -m session -P 1

This will print the sessions with the following format:

	Target: targetname
		Current Portal: portal currently logged into
		Persistent Portal: portal we would fall back to if we had got
				   redirected during login
			Iface Transport: driver/transport_name
			Iface IPaddress: IP address of iface being used
			Iface HWaddress: HW address used to bind session
			Iface Netdev: netdev value used to bind session
			SID: iscsi sysfs session id
			iSCSI Connection State: iscsi state

Note: if an older kernel is being used or if the session is not bound,
then the keyword "default" is printed to indicate that the default
network behavior is being used.

Example output of "iscsiadm -m session -P 1":

	Target: iqn.1992-08.com.netapp:sn.33615311
		Current Portal: 10.15.85.19:3260,3
		Persistent Portal: 10.15.85.19:3260,3
			Iface Transport: tcp
			Iface IPaddress: 10.11.14.37
			Iface HWaddress: default
			Iface Netdev: default
			SID: 7
			iSCSI Connection State: LOGGED IN
			Internal iscsid Session State: NO CHANGE

The connection state is currently not available for qla4xxx.

To get an HBA/host view of the session, there is the host mode:

	iscsiadm -m host

This prints the list of iSCSI hosts in the system with the format:

	driver [hostno] ipaddress,[hwaddress],net_ifacename,initiatorname

Example output:

	cxgb3i: [7] 10.10.15.51,[00:07:43:05:97:07],eth3 <empty>

To print this info in a more user friendly way, the -P argument can be used:

	iscsiadm -m host -P 1

Example output:

	Host Number: 7
		State: running
		Transport: cxgb3i
		Initiatorname: <empty>
		IPaddress: 10.10.15.51
		HWaddress: 00:07:43:05:97:07
		Netdev: eth3

Here, you can also see the state of the host.

You can also pass in any value from 1 to 4 to print more info, like the
sessions running through the host, which ifaces are being used, and which
devices are accessed through it.

To print the info for a specific host, you can pass in the -H argument
with the host number:

	iscsiadm -m host -P 1 -H 7


rtslib-fb's Issues

load modules only when needed

Don't load all kernel modules when targetcli comes up. Instead, change class FabricModule to overload _check_self, so that it loads the kmod and creates the configfs group only when needed:

def _check_self(self):
    if not self._is_loaded:
        self._load()
        print("LOADED %s" % self.name)
        self._is_loaded = True
    super(FabricModule, self)._check_self()

handle new attributes better, based on kernel version

Whenever an attribute is added to configfs, we need to be able to cope with it being present or not (on older kernels). Right now we just catch exceptions and ignore them when the attribute isn't there, but this is bad because it may mask cases where we do want to raise, on newer kernels.

Implement a cleaner way to do reads/writes that looks at the kernel version and acts appropriately.

dpkg: error processing python-rtslib-fb (--install): SyntaxError

After a make deb and a dpkg -i dist/*.deb, I get:

Setting up python-rtslib-fb (2.1.fb49) ...
SyntaxError: ('invalid syntax', ('/usr/lib/python2.6/dist-packages/rtslib/fabric.py', 120, 36, 'version_attributes = {"lio_version", "version"}\n'))

dpkg: error processing python-rtslib-fb (--install):
 subprocess installed post-installation script returned error exit status 101
Setting up python-rtslib-fb-docs (2.1.fb49) ...
Errors were encountered while processing:
 python-rtslib-fb

convert to python 3

Support Python 3. I did some test 2to3 runs and it seemed to catch pretty much everything, but more examination of the results is needed before pulling the trigger.

setup() really wants to be a generator

err_func is a hack -- it's more natural to return nonfatal error messages with yield!

Unfortunately, py 2.7 doesn't have 'yield from', which would let us convert without ugly for loops every time we call another setup(). This will be a nice cleanup once we are py3-only.
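As an illustrative sketch (the function names here are hypothetical, not rtslib's actual API), this is the difference: on Python 2.7 a delegating generator needs an explicit loop, while Python 3 can use yield from:

```python
def setup_luns():
    # A sub-setup step that yields non-fatal error messages.
    yield "lun 0: missing backing file"

def setup_target_py2_style():
    # Python 2.7: no 'yield from', so delegate with an explicit loop.
    for msg in setup_luns():
        yield msg
    yield "target: restored"

def setup_target_py3_style():
    # Python 3: 'yield from' delegates directly.
    yield from setup_luns()
    yield "target: restored"

print(list(setup_target_py3_style()))
# → ['lun 0: missing backing file', 'target: restored']
```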

Adding RBD disk with ValueError: No JSON object could be decoded

After creating iSCSI gateways successfully, I added an RBD disk using

create pool=rbd image=iscsi-test-16T size=16T

Then, I got the following error

ValueError: No JSON object could be decoded

The rbd-target-api daemon is

[root@ceph-gw-1 ~]# journalctl -u rbd-target-api
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: (LUN.allocate) created rbd/iscsi-test-16T successfully
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: (LUN.add_dev_to_lio) Adding image 'rbd.iscsi-test-16T' to LIO
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/_disk/rbd.iscsi-test-16T HTTP/1.1" 500 -
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: Traceback (most recent call last):
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1997, in __call__
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.wsgi_app(environ, start_response)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1985, in wsgi_app
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.handle_exception(e)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1540, in handle_exception
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.full_dispatch_request()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_req
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.handle_user_exception(e)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1517, in handle_user_excep
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_req
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.dispatch_request()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.view_functions[rule.endpoint](**req.view_args)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/lun.py", line 426, in allocate
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     lun = self.add_dev_to_lio()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/lun.py", line 612, in add_dev_to_l
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     wwn=in_wwn)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 815, in __init__
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     self._configure(config, size, wwn, hw_max_sectors)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 831, in _configure
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     self._enable()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 172, in _enable
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     fwrite(path, "1\n")
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/utils.py", line 79, in fwrite
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     file_fd.write(str(string))
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: IOError: [Errno 2] No such file or directory
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: _disk change on 127.0.0.1 failed with 500
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/disk/rbd.iscsi-test-16T HTTP/1.1" 500 -
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: Traceback (most recent call last):
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1997, in __call__
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.wsgi_app(environ, start_response)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1985, in wsgi_app
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.handle_exception(e)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1540, in handle_exception
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.full_dispatch_request()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_req
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.handle_user_exception(e)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1517, in handle_user_excep
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_req
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.dispatch_request()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.view_functions[rule.endpoint](**req.view_args)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/requests/models.py", line 866, in json
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return complexjson.loads(self.text, **kwargs)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return _default_decoder.decode(s)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     raise ValueError("No JSON object could be decoded")
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: ValueError: No JSON object could be decoded

And the log file: /var/log/rbd-target-api.log

[root@ceph-gw-1 log]# tail rbd-target-api.log
2017-11-10 12:00:18,133    DEBUG [lun.py:323:allocate()] - rados pool 'rbd' contains the following - [u'iscsi-test-8T']
2017-11-10 12:00:18,133    DEBUG [lun.py:328:allocate()] - Hostname Check - this host is ceph-gw-1, target host for allocations is ceph-gw-1
2017-11-10 12:00:18,594    DEBUG [common.py:256:add_item()] - (Config.add_item) config updated to {u'updated': u'2017/11/10 01:50:09', u'disks': {'rbd.iscsi-test-16T': {'created': '2017/11/10 04:00:18'}}, u'created': u'2017/11/08 07:36:52', u'clients': {}, u'epoch': 4, u'version': 3, u'gateways': {u'iqn': u'iqn.2017-11.com.ctcloud.iscsi-gw:ceph-igw', u'created': u'2017/11/09 08:56:41', u'ceph-gw-1': {u'gateway_ip_list': [u'192.168.100.248', u'192.168.100.246'], u'active_luns': 0, u'created': u'2017/11/10 00:30:31', u'updated': u'2017/11/10 01:20:25', u'iqn': u'iqn.2017-11.com.ctcloud.iscsi-gw:ceph-igw', u'inactive_portal_ips': [u'192.168.100.246'], u'portal_ip_address': u'192.168.100.248', u'tpgs': 2}, u'ip_list': [u'192.168.100.248', u'192.168.100.246'], u'ceph-gw-2': {u'gateway_ip_list': [u'192.168.100.248', u'192.168.100.246'], u'active_luns': 0, u'created': u'2017/11/10 01:50:09', u'updated': u'2017/11/10 01:50:09', u'iqn': u'iqn.2017-11.com.ctcloud.iscsi-gw:ceph-igw', u'inactive_portal_ips': [u'192.168.100.248'], u'portal_ip_address': u'192.168.100.246', u'tpgs': 2}}, u'groups': {}}
2017-11-10 12:00:18,594     INFO [lun.py:344:allocate()] - (LUN.allocate) created rbd/iscsi-test-16T successfully
2017-11-10 12:00:18,595    DEBUG [lun.py:384:allocate()] - Check the rbd image size matches the request
2017-11-10 12:00:18,595    DEBUG [lun.py:407:allocate()] - Begin processing LIO mapping
2017-11-10 12:00:18,595     INFO [lun.py:598:add_dev_to_lio()] - (LUN.add_dev_to_lio) Adding image 'rbd.iscsi-test-16T' to LIO
2017-11-10 12:00:18,633     INFO [_internal.py:87:_log()] - 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/_disk/rbd.iscsi-test-16T HTTP/1.1" 500 -
2017-11-10 12:00:18,640    ERROR [rbd-target-api:1266:call_api()] - _disk change on 127.0.0.1 failed with 500
2017-11-10 12:00:18,651     INFO [_internal.py:87:_log()] - 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/disk/rbd.iscsi-test-16T HTTP/1.1" 500 -

I also tried to touch a new file in /sys/kernel/config/target/, but that failed too.

What is going on, and how can I solve this problem? Thanks!

recent changes about renaming README to README.md cause "make rpm" to fail

Source : current git
make rpm
Exporting the repository files...
Cleaning up the target tree...
Fixing version string...
( too long text cut for clarity )
cp: cannot stat 'README': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.qZMxBU (%doc)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.qZMxBU (%doc)
make: *** [build/rpm-stamp] Error 1

Build host : Fedora 18, kernel-3.8.11-200.fc18.x86_64
Same thing for the other packages (targetcli and configshell)

Raised exception has changed when a storage object is not found: RTSLibNotInCFS became RTSLibError

After upgrading rtslib-fb in a NAS firmware at work that uses a custom software on top of rtslib-fb, I observed that the exception raised when a storage object is not found in lookup mode is now RTSLibError. It used to be RTSLibNotInCFS prior to rtslib-fb v2.1.fb44:

Behaviour up to rtslib-fb v2.1.fb43:

>>> import rtslib
>>> rtslib.tcm.FileIOStorageObject('foo', None, None, None, False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "build/bdist.linux-x86_64/egg/rtslib/tcm.py", line 566, in __init__
  File "build/bdist.linux-x86_64/egg/rtslib/tcm.py", line 48, in __init__
  File "build/bdist.linux-x86_64/egg/rtslib/tcm.py", line 798, in __init__
  File "build/bdist.linux-x86_64/egg/rtslib/node.py", line 53, in _create_in_cfs_ine
rtslib.utils.RTSLibNotInCFS: No such _Backstore in configfs: /sys/kernel/config/target/core/fileio_0.

Since rtslib-fb v2.1.fb44:

>>> import rtslib_fb
>>> rtslib_fb.tcm.FileIOStorageObject('foo', None, None, None, False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "build/bdist.linux-x86_64/egg/rtslib/tcm.py", line 566, in __init__
  File "build/bdist.linux-x86_64/egg/rtslib/tcm.py", line 48, in __init__
  File "build/bdist.linux-x86_64/egg/rtslib/tcm.py", line 785, in __init__
rtslib.utils.RTSLibError: Storage object fileio/foo not found

@grover, I can provide a patch if you consider this an issue.

Debian packet can not be build due to pythonpath error

Hello,

There is a problem with the building of the Debian packet.
Here is my output:

make[1]: Entering directory `/root/rtslib-fb/build/rtslib-2.1.fb40.10.g3fbe7e4'
dh_testdir
/usr/bin/python ./setup.py --quiet build --build-base build install \
                --no-compile --install-purelib debian/tmp/lib/rtslib \
                --install-scripts debian/tmp/bin
TEST FAILED: debian/tmp/lib/rtslib/ does NOT support .pth files
error: bad install directory or PYTHONPATH

You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from.  The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:

    debian/tmp/lib/rtslib/

and your PYTHONPATH environment variable currently contains:

    ''

regards,
Alex

latest releases not on pypi

Hi, I'd like to update our package of this, but it looks like the 2.1.61 release at least isn't on pypi. Was it missed?

fabric wwns() calls shouldn't use from_fabric_wwn

Both from_fabric_wwn and wwns() are converting a wwn in some format to rtslib's canonical format, but they need not be the SAME format.

For example, with fcoe, the possible wwns are listed in /sys/class/fc_host/host*/port_name as 0x123456. The wwns() method should strip the 0x and add the "naa." type at the front. But the fcoe LIO fabric uses 12:34:56, so from_fabric_wwn should strip colons and add the type. These are close but not identical conversions, and trying to do them both in from_fabric_wwn is confusing.

The obstacle to fixing this right away is we want to be able to test and verify for each fabric that we fixed it right. Fabrics have evolved from_fabric_wwn's that handle both, and we don't want to keep breaking stuff.
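A hedged sketch of the two distinct conversions (function names are illustrative, not rtslib's actual code):

```python
def normalize_sysfs_wwn(port_name):
    # /sys/class/fc_host/host*/port_name style: "0x21000024ff369652".
    # wwns() should strip the "0x" and prepend the "naa." type.
    if port_name.startswith("0x"):
        port_name = port_name[2:]
    return "naa." + port_name.lower()

def normalize_fabric_wwn(wwn):
    # fcoe LIO fabric style: "21:00:00:24:ff:36:96:52".
    # from_fabric_wwn() should strip colons and prepend the type.
    return "naa." + wwn.replace(":", "").lower()

print(normalize_sysfs_wwn("0x21000024ff369652"))        # → naa.21000024ff369652
print(normalize_fabric_wwn("21:00:00:24:ff:36:96:52"))  # → naa.21000024ff369652
```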

problem with qla2xxx wwn conversion to naa.*

Problem: in recent releases, the wwn format 21:00:00:XX:XX:XX:XX:XX has been dropped in favor of the NAA format naa.210000XXXXXXXXXX (I don't know why?), and there are some conversions missing in the code.

Version tested :

  • configshell-fb-1.1.fb7
  • rtslib-fb-2.1.fb33
  • targetcli-fb-2.1.fb24

downloaded from https://fedorahosted.org/released/targetcli-fb/.
kernel: kernel-3.8.11-200.fc18.x86_64

Symptoms :

PYTHONPATH=. targetcli-fb-2.1.fb24/scripts/targetcli
targetcli shell version 2.1.24
Copyright 2011 by RisingTide Systems LLC and others.
For help on commands, type 'help'.

/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> /qla2xxx
/qla2xxx> info
Fabric module name: qla2xxx
ConfigFS path: /sys/kernel/config/target/qla2xxx
Allowed WWN types: naa
Allowed WWNs list: naa.21000024ff369652, naa.21000024ff369653, naa.21000024ff369874, naa.21000024ff369875
Fabric module features: acls
Corresponding kernel module: tcm_qla2xxx
/qla2xxx> create naa.21000024ff369652
Created target naa.21000024ff369652.
/qla2xxx> /
/> saveconfig
WWN not valid as: naa
/> exit
Global pref auto_save_on_exit=true
Traceback (most recent call last):
File "targetcli-fb-2.1.fb24/scripts/targetcli", line 88, in <module>
main()
File "targetcli-fb-2.1.fb24/scripts/targetcli", line 84, in main
root_node.ui_command_saveconfig()
File "/root/downloads/t/targetcli/ui_root.py", line 69, in ui_command_saveconfig
f.write(json.dumps(RTSRoot().dump(), sort_keys=True, indent=2))
File "/root/downloads/t/rtslib/root.py", line 132, in dump
d['targets'] = [t.dump() for t in self.targets]
File "/root/downloads/t/rtslib/root.py", line 76, in _list_targets
for target in fabric_module.targets:
File "/root/downloads/t/rtslib/fabric.py", line 161, in _list_targets
yield Target(self, self.from_fabric_wwn(wwn), 'lookup')
File "/root/downloads/t/rtslib/target.py", line 72, in __init__
self.wwn, self.wwn_type = fabric_module.to_normalized_wwn(wwn)
File "/root/downloads/t/rtslib/fabric.py", line 181, in to_normalized_wwn
return normalize_wwn(self.wwn_types, wwn, self.wwns)
File "/root/downloads/t/rtslib/utils.py", line 322, in normalize_wwn
raise RTSLibError("WWN not valid as: %s" % ", ".join(wwn_types))
rtslib.utils.RTSLibError: WWN not valid as: naa

The current code seems to be calling normalize_wwn with these parameters :
normalize_wwn(('naa',),'naa.:00:00:24:ff:36:96:52')

As you can see, the WWN is kinda broken (missing '21', replaced by 'naa.')

Documentation/Examples

Can you please add some example code to the repo? I was unable to figure out even how to create a backstore and link it to a target after a day of reading code from both rtslib-fb and targetcli-fb. Some simple example code would be sufficient to understand how the API is to be used.

Thanks

add return types to epydoc docstrings

Working on the session API, I realized that most properties and functions don't list a return type.
The format is @return: description and @rtype: type for functions, and @type: type for properties (and instance variables). This is especially helpful because we can even link the types: L{NodeACL}.

epydoc is already used and there is also an old documentation online. There is a lot of epydoc-specific parameter documentation in the docstrings of functions, but somehow return types are missing.

I documented my session branch (JonnyJD/rtslib-fb@d95ecc6) additions.

I note that we don't really have to document private functions and variables this way.

Issue warning when using fileio bs for block device

Nick says we can't defeature fileio support for block devices, but the use case is a pretty advanced one.

To protect against users inadvertently using fileio when block backstore would be better, issue a warning and recommend block.

OSError: [Errno 16] Device or resource busy: '/sys/kernel/config/target/core/fileio_2/tmp-test3.raw'

/backstores/fileio> ls
o- fileio ..................................................................................................... [Storage Objects: 3]
  o- tmp-test3.raw ................................................................. [/tmp/test3.raw (50.0MiB) write-thru ACTIVATED]
  o- tmp-youpi.raw .................................................................. [/tmp/youpi.raw (1.0GiB) write-thru ACTIVATED]
  o- youpi ........................................................................ [/tmp/youpi.raw (1.0GiB) write-back DEACTIVATED]
/backstores/fileio> delete tmp-test3.raw 
Traceback (most recent call last):
  File "/usr/bin/targetcli", line 100, in <module>
    main()
  File "/usr/bin/targetcli", line 90, in main
    shell.run_interactive()
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 948, in run_interactive
    self._cli_loop()
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 777, in _cli_loop
    self.run_cmdline(cmdline)
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 891, in run_cmdline
    self._execute_command(path, command, pparams, kparams)
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 866, in _execute_command
    result = target.execute_command(command, pparams, kparams)
  File "/usr/lib/python3/dist-packages/configshell/node.py", line 1413, in execute_command
    return method(*pparams, **kparams)
  File "/usr/lib/python3/dist-packages/targetcli/ui_backstore.py", line 150, in ui_command_delete
    child.rtsnode.delete()
  File "/usr/lib/python3/dist-packages/rtslib/tcm.py", line 235, in delete
    super(StorageObject, self).delete()
  File "/usr/lib/python3/dist-packages/rtslib/node.py", line 199, in delete
    os.rmdir(self.path)
OSError: [Errno 16] Device or resource busy: '/sys/kernel/config/target/core/fileio_2/tmp-test3.raw'

Maybe this operation should generate an error instead of a traceback?

code deduplication (LUN)

There is loop.LUN and target.LUN.

I see that they are different, but they do share a LOT of code. `__init__` is quite similar; `_configure`, `_get_alias` and `_get_storage_object` are completely the same. Maybe even more, I didn't check.

This needs a single LUN class and LoopLUN and TargetLUN that inherit from it.
That LUN class should probably go in tcm.py.

If we don't want to change the API we might keep the class names, but we should still have a super class. I would prefer changing the names, because having two (or 3) different LUN classes in the lib is not optimal.

get_disk_size failure on partitioned drives

get_disk_size(path) fails on partitioned drives. e.g. it will look for the size of /dev/sda1 in /sys/block/sda1/size instead of /sys/block/sda/sda1/size. This causes targetcli-fb to crash on many commands. Especially on re-starting of the software.
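A hedged sketch of the fix (the helper name is hypothetical): partition names need the parent disk inserted into the sysfs path. The digit-stripping heuristic below works for sdXN-style names but would need more care for names like nvme0n1p1:

```python
def sysfs_size_path(dev_name):
    # Whole disk:  "sda"  -> /sys/block/sda/size
    # Partition:   "sda1" -> /sys/block/sda/sda1/size
    parent = dev_name.rstrip("0123456789")
    if parent and parent != dev_name:
        return "/sys/block/%s/%s/size" % (parent, dev_name)
    return "/sys/block/%s/size" % dev_name

print(sysfs_size_path("sda1"))  # → /sys/block/sda/sda1/size
print(sysfs_size_path("sda"))   # → /sys/block/sda/size
```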

The "pr" database root directory /var/target should be moved from /var

The target_core_mod module allows changing the target "database" root directory from the current default of /var/target to another location.

The target_core_mod module allows changing the db directory if and only if
no target modules have been loaded.
Since rtslib loads the target_core_mod module, it is the place that needs to be fixed to allow changing this directory.

I looked for a config option for rtslib, but I don't see any. I have a branch that unconditionally changes the directory to /etc/target, but I'm not sure that's the best approach for everybody, even if that will work for me.

Ideas?

qla2xxx broken

commit 76b7af4 changed the name of the qla2xxx module to qla2xxxx, which is IMHO a bad thing.

YAML instead of JSON for config savefile

Should we use YAML instead of JSON for the config savefile? It's supposed to be a little more human-readable, but when I tried it the results were a little messy (using PyYAML). It's not clear to me yet what the usage model for the savefile is - if it will ever really be modified by users directly, or always via targetcli.

targetcli may fail to start since eecb633 "Display a more meaningful error when targetcli cannot change dbroot"

At revision eecb633, I added some code to display a more meaningful error when rtslib cannot change the "dbroot" value. This error can be raised when the new "dbroot" value points to a directory that does not exist.

However, there is at least another situation where the "dbroot" value cannot be changed: when the target drivers are already loaded:

# dmesg
db_root: cannot be changed: target drivers registered

This second situation should not prevent targetcli from starting.

mount configfs with rtslib / Could not create RTSRoot in configFS

Most packages include target rc script or systemd service file to make sure configfs is not only loaded as a kernel module, but also /sys/kernel/config is mounted.

However, targetcli/rtslib also works halfway without it.
When targetcli starts it uses rtslib to check that target_core_mod is mounted. If configfs is loaded and mounted then modprobe target_core_mod works fine.

When configfs is not mounted then modprobe target_core_mod will also load configfs as a dependency, but /sys/kernel/config is not mounted, which then leads to:

Could not create RTSRoot in configfs

since /sys/kernel/config is there, but you can't write to it.

rtslib could possibly also check for the configfs mount and mount it if it isn't present.
A service/rc file is still necessary to actually start and load the target configuration at boot, but targetcli would be available for testing without these scripts already.
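A hedged sketch of the check rtslib could perform, written as a pure function over /proc/mounts content; the mount(8) call itself would need root and is only shown as a comment:

```python
def configfs_mounted(mounts_text):
    # /proc/mounts lines look like:
    #   configfs /sys/kernel/config configfs rw,relatime 0 0
    # The third field is the filesystem type.
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "configfs":
            return True
    return False

sample = "configfs /sys/kernel/config configfs rw,relatime 0 0\n"
print(configfs_mounted(sample))  # → True

# On a real system:
#   with open("/proc/mounts") as f:
#       if not configfs_mounted(f.read()):
#           # run: mount -t configfs configfs /sys/kernel/config
```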

Add support for emulex ocs_fc_lio

Hi,

So far we have used the standard "rtslib" by providing a spec file in /var/target/fabric/ to configure our Emulex OCS LIO targets. Now we want to use the ocs_fc_lio kernel module driver with the "rtslib-fb" module. We added code to support the Emulex ocs_fc_lio driver in "fabric.py" and verified the code changes. Could you please review the attached patch and let me know if any modifications are required?

I also wanted to know the code check-in process; I'm not sure if I can commit at this time or not.

Thanks,
Ravindar
rtslib-fb.txt

set emulate_model_alias

Nick posted that he'd like to accept a feature to set the SCSI model name based on the name of the backstore (I think). This sounds like a cool feature, but Nick wants it disabled by default so existing people relying on the current name won't have issues.

I think we want this on by default, and we will need to set

/sys/kernel/config/target/core/$HBA/$DEV/attrib/emulate_model_alias

to 1.

Invalid distutils versioning

If you try to create a Python package that depends on rtslib using distutils, it isn't possible to require a specific version or range of versions, such as >=2.1.fb27. When processing setup.py, you see:
<...>
File "/usr/lib64/python2.7/distutils/version.py", line 40, in __init__
self.parse(vstring)
File "/usr/lib64/python2.7/distutils/version.py", line 107, in parse
raise ValueError, "invalid version number '%s'" % vstring
ValueError: invalid version number '2.1.fb27'

This is because of the rules as described in http://epydoc.sourceforge.net/stdlib/distutils.version.StrictVersion-class.html .

rtslib on PyPI should use a different scheme such as 2.1.27.
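A sketch of why it fails: StrictVersion only accepts dotted numeric versions with an optional a/b pre-release tag, which "fb"-style components don't satisfy. The regex below is an approximation of StrictVersion's actual pattern, shown instead of importing distutils (which is removed in recent Pythons):

```python
import re

# Approximation of distutils.version.StrictVersion's accepted format:
# N.N[.N][aN|bN]
STRICT = re.compile(r"^\d+\.\d+(\.\d+)?([ab]\d+)?$")

print(bool(STRICT.match("2.1.fb27")))  # → False (rejected by StrictVersion)
print(bool(STRICT.match("2.1.27")))    # → True
```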

targetctl restore should activate tcmu-runner if needed

If there are any tcmu-backed storage objects, then tcmu-runner likely needs to be running to handle them. This can be done by making a call to the DBus interface, which will activate the daemon.

We should only activate tcmu-runner this way if tcmu backstores are actually in use.

AttributeError: 'UUID' object has no attribute 'get_hex'

/loopback> create
Traceback (most recent call last):
  File "/usr/bin/targetcli", line 100, in <module>
    main()
  File "/usr/bin/targetcli", line 90, in main
    shell.run_interactive()
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 948, in run_interactive
    self._cli_loop()
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 777, in _cli_loop
    self.run_cmdline(cmdline)
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 891, in run_cmdline
    self._execute_command(path, command, pparams, kparams)
  File "/usr/lib/python3/dist-packages/configshell/shell.py", line 866, in _execute_command
    result = target.execute_command(command, pparams, kparams)
  File "/usr/lib/python3/dist-packages/configshell/node.py", line 1413, in execute_command
    return method(*pparams, **kparams)
  File "/usr/lib/python3/dist-packages/targetcli/ui_target.py", line 183, in ui_command_create
    target = Target(self.rtsnode, wwn, mode='create')
  File "/usr/lib/python3/dist-packages/rtslib/target.py", line 74, in __init__
    self.wwn = generate_wwn(fabric_module.wwn_types[0])
  File "/usr/lib/python3/dist-packages/rtslib/utils.py", line 284, in generate_wwn
    return "naa.5001405" + uuid.uuid4().get_hex()[-9:]
AttributeError: 'UUID' object has no attribute 'get_hex'
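Python 2's uuid.UUID had a get_hex() method; in Python 3 only the .hex property exists, so the generate_wwn() line in the traceback needs the attribute form:

```python
import uuid

# .hex works on both Python 2 and 3; get_hex() was removed in Python 3.
wwn = "naa.5001405" + uuid.uuid4().hex[-9:]
print(wwn)  # e.g. naa.5001405 followed by 9 random hex digits
```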

move saveconfig to rtslib

Now that there is more than one entity using rtslib (targetcli and targetd), in order to properly save state for both, this function must be centralized in rtslib.

fail to save due to invalid literal for attributes

issue

When I save (exit) targetcli shell, I am getting following error.

Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Traceback (most recent call last):
  File "/usr/bin/targetcli", line 121, in <module>
    main()
  File "/usr/bin/targetcli", line 117, in main
    root_node.ui_command_saveconfig()
  File "/usr/lib/python3.6/site-packages/targetcli/ui_root.py", line 90, in ui_command_saveconfig
    self.rtsroot.save_to_file(savefile)
  File "/usr/lib/python3.6/site-packages/rtslib_fb/root.py", line 270, in save_to_file
    f.write(json.dumps(self.dump(), sort_keys=True, indent=2))
  File "/usr/lib/python3.6/site-packages/rtslib_fb/root.py", line 160, in dump
    d['storage_objects'] = [so.dump() for so in self.storage_objects]
  File "/usr/lib/python3.6/site-packages/rtslib_fb/root.py", line 160, in <listcomp>
    d['storage_objects'] = [so.dump() for so in self.storage_objects]
  File "/usr/lib/python3.6/site-packages/rtslib_fb/tcm.py", line 839, in dump
    d = super(UserBackedStorageObject, self).dump()
  File "/usr/lib/python3.6/site-packages/rtslib_fb/tcm.py", line 294, in dump
    d = super(StorageObject, self).dump()
  File "/usr/lib/python3.6/site-packages/rtslib_fb/node.py", line 217, in dump
    attrs[item] = int(self.get_attribute(item))
ValueError: invalid literal for int() with base 10: 'qcow//tmp/test.img'

step to reproduce

NOTE: these steps assume tcmu-runner is in use.

1. create dummy file

qemu-img create -f qcow2 /tmp/test.img 3G

2. start targetcli shell

# targetcli 
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/>

3. create backstore with string attributes and exit

/> backstores/user:qcow create cfgstring=/tmp/test.img name=test.img size=3G
Created user-backed storage object test.img size 3221225472.
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Traceback (most recent call last):
  File "/usr/bin/targetcli", line 121, in <module>
    main()
  File "/usr/bin/targetcli", line 117, in main
    root_node.ui_command_saveconfig()
  File "/usr/lib/python3.6/site-packages/targetcli/ui_root.py", line 90, in ui_command_saveconfig
    self.rtsroot.save_to_file(savefile)
  File "/usr/lib/python3.6/site-packages/rtslib_fb/root.py", line 270, in save_to_file
    f.write(json.dumps(self.dump(), sort_keys=True, indent=2))
  File "/usr/lib/python3.6/site-packages/rtslib_fb/root.py", line 160, in dump
    d['storage_objects'] = [so.dump() for so in self.storage_objects]
  File "/usr/lib/python3.6/site-packages/rtslib_fb/root.py", line 160, in <listcomp>
    d['storage_objects'] = [so.dump() for so in self.storage_objects]
  File "/usr/lib/python3.6/site-packages/rtslib_fb/tcm.py", line 839, in dump
    d = super(UserBackedStorageObject, self).dump()
  File "/usr/lib/python3.6/site-packages/rtslib_fb/tcm.py", line 294, in dump
    d = super(StorageObject, self).dump()
  File "/usr/lib/python3.6/site-packages/rtslib_fb/node.py", line 217, in dump
    attrs[item] = int(self.get_attribute(item))
ValueError: invalid literal for int() with base 10: 'qcow//tmp/test.img'
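A hedged sketch of a more tolerant dump loop: keep string-valued attributes (like tcmu's config string) as strings instead of crashing on int(). The attribute names below are illustrative, not the real node.py code:

```python
# Try int() first, fall back to the raw string for non-numeric
# attributes so dump()/saveconfig survives user-backed backstores.
def dump_attrs(get_attribute, items):
    attrs = {}
    for item in items:
        raw = get_attribute(item)
        try:
            attrs[item] = int(raw)
        except ValueError:
            attrs[item] = raw
    return attrs

values = {"queue_depth": "128", "dev_config": "qcow//tmp/test.img"}
print(dump_attrs(values.get, ["queue_depth", "dev_config"]))
# {'queue_depth': 128, 'dev_config': 'qcow//tmp/test.img'}
```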

Race condition mounting configfs

On systems where loading the configfs module (modprobe configfs) automatically mounts /sys/kernel/config, a race condition can occur in the mount_configfs() function in utils.py.

As that function is called immediately after the modprobe call, the os.path.ismount() check can run before configfs has finished mounting. The mount command is then run, but fails because configfs is already mounted by that point.

One possible workaround is to make a second os.path.ismount() check if the mount command fails, and only raise an exception if configfs is still not mounted:

    if process.returncode != 0 and not os.path.ismount("/sys/kernel/config"):
        raise RTSLibError("Cannot mount configfs")

I submitted a pull request #73 with this workaround. There are other ways this could be handled as well, this is just one fix that worked for me.
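A minimal sketch of the whole function with that re-check folded in (standalone, using RuntimeError in place of RTSLibError):

```python
import os
import subprocess

# Hedged sketch of the re-check workaround: if the mount command fails,
# look again -- the modprobe-triggered auto-mount may have won the race.
def mount_configfs(mount_point="/sys/kernel/config"):
    if os.path.ismount(mount_point):
        return
    proc = subprocess.run(
        ["mount", "-t", "configfs", "none", mount_point],
        capture_output=True)
    if proc.returncode != 0 and not os.path.ismount(mount_point):
        raise RuntimeError("Cannot mount configfs")
```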

MappedLUN constructor fails

I wanted to create a mapped LUN. I had a LUN object (lun) and a NodeACL object (acl). First I tried this:

MappedLUN(acl, lun.lun, lun)

and it fails with:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python2.7/site-packages/rtslib/target.py", line 1028, in __init__
    + "a NodeACL object")
rtslib.utils.RTSLibError: The parent_nodeacl parameter must be a NodeACL object

With type(acl) I could verify that it is of type:
<class 'rtslib_fb.target.NodeACL'>

But doing it the other way, with acl.mapped_lun(lun.lun, lun), it worked.

restore from json - failed auth from iscsi initiators

If I run these commands my initiator can login:

targetcli "/backstores/block create name=mptarget4 dev=/dev/vg_storage/station4mp"
targetcli "/iscsi set discovery_auth enable=1"
targetcli "/iscsi set discovery_auth mutual_userid=target"
targetcli "/iscsi set discovery_auth mutual_password=itsreallyme"
targetcli "/iscsi set discovery_auth userid=initiator"
targetcli "/iscsi set discovery_auth password=letmein"
targetcli "/iscsi create wwn=iqn.2003-01.org.linux-iscsi.storage:mptarget4"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/luns create storage_object=/backstores/block/mptarget4"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/acls create wwn=iqn.1994-05.com.redhat:station4"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/acls/iqn.1994-05.com.redhat:station4 set auth mutual_userid=target"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/acls/iqn.1994-05.com.redhat:station4 set auth mutual_password=itsreallyme"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/acls/iqn.1994-05.com.redhat:station4 set auth userid=station4"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/acls/iqn.1994-05.com.redhat:station4 set auth password=letmein"
targetcli "/iscsi/iqn.2003-01.org.linux-iscsi.storage:mptarget4/tpgt1/portals create 10.100.0.199 ip_port=3260"

And on my initiator:

iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage:mptarget4 -l

Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage:mptarget4, portal: 10.100.0.199,3260]
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage:mptarget4, portal: 10.100.0.199,3260] successful.

But if I then do a:

targetcli "/ saveconfig"

and

targetcli restoreconfig clear_existing=true (or a reboot)

Then the initiator can't login and I get "iSCSI Login negotiation failed" in dmesg on the target.

Here is the saveconfig.json that saveconfig creates above and that the initiator can't login with:

[root@storage ~]# cat /etc/target/saveconfig.json
{
  "fabric_modules": [
    {
      "discovery_enable_auth": true,
      "discovery_mutual_password": "itsreallyme",
      "discovery_mutual_userid": "target",
      "discovery_password": "letmein",
      "discovery_userid": "initiator",
      "name": "iscsi"
    }
  ],
  "storage_objects": [
    {
      "attributes": {
        "block_size": 512,
        "emulate_dpo": 0,
        "emulate_fua_read": 0,
        "emulate_fua_write": 1,
        "emulate_rest_reord": 0,
        "emulate_tas": 1,
        "emulate_tpu": 0,
        "emulate_tpws": 0,
        "emulate_ua_intlck_ctrl": 0,
        "emulate_write_cache": 0,
        "enforce_pr_isids": 1,
        "is_nonrot": 0,
        "max_sectors": 1024,
        "max_unmap_block_desc_count": 0,
        "max_unmap_lba_count": 0,
        "optimal_sectors": 1024,
        "queue_depth": 128,
        "task_timeout": 0,
        "unmap_granularity": 0,
        "unmap_granularity_alignment": 0
      },
      "dev": "/dev/vg_storage/station4mp",
      "name": "mptarget4",
      "plugin": "block",
      "wwn": "6be30fb6-3bc9-43c4-a866-4d8633af5cf2"
    }
  ],
  "targets": [
    {
      "fabric": "iscsi",
      "tpgs": [
        {
          "attributes": {
            "authentication": 1,
            "cache_dynamic_acls": 0,
            "default_cmdsn_depth": 16,
            "demo_mode_write_protect": 1,
            "generate_node_acls": 0,
            "login_timeout": 15,
            "netif_timeout": 2,
            "prod_mode_write_protect": 0
          },
          "luns": [
            {
              "index": 0,
              "storage_object": "/backstores/block/mptarget4"
            }
          ],
          "node_acls": [
            {
              "attributes": {
                "dataout_timeout": 3,
                "dataout_timeout_retries": 5,
                "default_erl": 0,
                "nopin_response_timeout": 5,
                "nopin_timeout": 5,
                "random_datain_pdu_offsets": 0,
                "random_datain_seq_offsets": 0,
                "random_r2t_offsets": 0
              },
              "chap_mutual_password": "itsreallyme",
              "chap_mutual_userid": "target",
              "chap_password": "letmein",
              "chap_userid": "station4",
              "mapped_luns": [
                {
                  "index": 0,
                  "write_protect": false
                }
              ],
              "node_wwn": "iqn.1994-05.com.redhat:station4",
              "tcq_depth": 16
            }
          ],
          "portals": [
            {
              "ip_address": "10.100.0.199",
              "port": 3260
            }
          ],
          "tag": 1
        }
      ],
      "wwn": "iqn.2003-01.org.linux-iscsi.storage:mptarget4"
    }
  ]
}

synchronize access to configfs

As raised on target-devel, it is possible that more than one agent could be changing configfs at the same time. This is very bad. We should establish a locking convention that all target configfs-accessing libs (not just this one) will follow in order to make configfs changes atomic.

Some kind of lockfile would be the normal way to do this, I guess.
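One possible shape for that convention, as a hedged sketch: every configfs-modifying tool takes an exclusive flock on an agreed-upon lockfile before touching /sys/kernel/config. The lock path below is a hypothetical choice, not an established standard, and flock is advisory, so it only works if every agent cooperates:

```python
import fcntl
import os

CONFIGFS_LOCK = "/var/run/target.lock"  # hypothetical agreed path

class ConfigfsLock:
    """Advisory exclusive lock around configfs changes."""
    def __init__(self, path=CONFIGFS_LOCK):
        self.path = path
        self.fd = None

    def __enter__(self):
        self.fd = os.open(self.path, os.O_CREAT | os.O_WRONLY, 0o600)
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # blocks until exclusive
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
```

Usage would be `with ConfigfsLock(): ...make configfs changes...` in rtslib and any other library that follows the convention.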

RPM, Deb scripts missing from all -fb repos.

I read one of the commit logs... and blaming debian for removing a functional, working feature is just low. Bring it back. Many of us use it when we're installing from source. You can easily omit it from the release tarballs if you want for debian's pleasure, but anybody working from the repo probably wants and needs those rpm/apt bits.

last commits made both qla2xxx and tcm_fc targets disappear

Commit that caused the error: c79ee86

git checkout 751ca29
gives me
targetcli ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 0]
o- loopback ......................................................................................................... [Targets: 0]
o- qla2xxx .......................................................................................................... [Targets: 0]
o- tcm_fc ........................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]

git checkout master
gives me
targetcli ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 0]
o- loopback ......................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]

Display a more meaningful error when the preferred DB root does not exist

@sithglan reported on the targetcli-fb mailing list that he could not run the upstream version of targetcli-fb due to the following error:

(debian) [~] targetcli
[Errno 22] Invalid argument

I looked at his issue and I found that the error happens when the /etc/target directory is missing. The root cause of the error is that targetcli fails to change the value in /sys/kernel/config/target/dbroot from /var/target (kernel default) to /etc/target/ (preferred by targetcli) because the directory does not exist. When this happens, the following error is displayed in dmesg:

# dmesg
db_root: cannot open: /etc/target

I will provide a patch to display a more meaningful error when this error occurs.
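A hedged sketch of what such a check could look like: verify the preferred dbroot directory exists before writing it to the configfs attribute, so the user sees the missing path instead of a bare EINVAL (RuntimeError stands in for rtslib's own exception type):

```python
import os

# Fail early with a message naming the missing directory, rather than
# letting the configfs write return EINVAL.
def set_dbroot(preferred="/etc/target",
               dbroot_attr="/sys/kernel/config/target/dbroot"):
    if not os.path.isdir(preferred):
        raise RuntimeError(
            "cannot set dbroot to %s: directory does not exist" % preferred)
    with open(dbroot_attr, "w") as f:
        f.write(preferred)
```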

Update for Fedora 17?

This is probably not the best place to ask this, but I saw that you maintain the python-rtslib package for Fedora. Are you intending to release another update for the package on Fedora 17, or will it be left at 2.1.fb14? I ask because I'm trying out ZFS on Linux, and one of the updates to rtslib fixes an iSCSI block device issue with ZFS devices (openzfs/zfs#515). I see updates to the f17 branch here http://pkgs.fedoraproject.org/cgit/python-rtslib.git/log/?h=f17 but I don't know enough about the Fedora packaging process to know whether that means an updated package will actually be released.

Please remove the Debian folder from the repository

Hi,

I'm the package maintainer for python-rtslib-fb in Debian. I maintain my package using git.debian.org, where all of the packaging is stored. Unfortunately, upstream keeping a debian folder is a very bad idea, as it prevents me from merging your latest tags (i.e. it creates merge conflicts in my Git when I do so).

Could you please remove the debian folder, or at least rename it to something like "debian-upstream", or alternatively push it to a separate branch (which is perfect for using git-buildpackage, especially if you provide a debian/gbp.conf file)?

Also, I would very much enjoy having contribution within the OpenStack packaging team, to improve the package. The packaging improvements will go to both Debian and Ubuntu (as they sync the package from Debian).

Best would be if you could release a new tag without that debian folder asap, and then I'll be able to package and upload that to Debian Sid.

Thanks in advance.
