=================================================================

                Linux* Open-iSCSI

=================================================================

                                                   Jun 6, 2022
Contents
========

- 1. In This Release
- 2. Introduction
- 3. Installation
- 4. Open-iSCSI daemon
- 5. Open-iSCSI Configuration and Administration Utility
- 6. Configuration
- 7. Getting Started
- 8. Advanced Configuration
- 9. iSCSI System Info


1. In This Release
==================

This file describes the Linux* Open-iSCSI Initiator. The software was
tested on AMD Opteron (TM) and Intel Xeon (TM).

The latest development release is available at:

	https://github.com/open-iscsi/open-iscsi

For questions, comments, or contributions, raise an issue on the GitHub
page, or send e-mail to:

	[email protected]


1.1. Features
=============

- highly optimized and very small-footprint data path
- persistent configuration database
- SendTargets discovery
- CHAP
- PDU header Digest
- multiple sessions


1.2. Licensing
==============

The daemon and other top-level commands are licensed as GPLv3, while the
libopeniscsiusr library used by some of those commands is licensed as LGPLv3.


2. Introduction
===============

The Open-iSCSI project is a high-performance, transport-independent,
multi-platform implementation of RFC 3720 iSCSI.

Open-iSCSI is partitioned into user and kernel parts.

The kernel portion of Open-iSCSI was originally part of this project
repository, but now is built into the Linux kernel itself. It
includes the loadable modules scsi_transport_iscsi.ko, libiscsi.ko and
scsi_tcp.ko. The kernel code handles the "fast" path, i.e. data flow.

User space contains the entire control plane: configuration
manager, iSCSI Discovery, Login and Logout processing,
connection-level error processing, Nop-In and Nop-Out handling,
and (perhaps in the future:) Text processing, iSNS, SLP, Radius, etc.

The user space Open-iSCSI consists of a daemon process called
iscsid and a management utility iscsiadm. There are also helper
programs, including iscsiuio, which is needed for certain offload-capable
iSCSI adapters.


3. Installation
===============

NOTE:	You will need to be root to install the Open-iSCSI code, and
	you will also need to be root to run it.

As of today, the Open-iSCSI Initiator requires a host running the
Linux operating system with a kernel that includes the iSCSI modules
described above.

The userspace components iscsid, iscsiadm and iscsistart require the
open-isns library, unless open-isns use is disabled when building (see
below).

If this package is not available for your distribution, you can download
and install it yourself.  To install the open-isns headers and library
required for Open-iSCSI, download the current release from:

	https://github.com/open-iscsi/open-isns

Then, from the top-level directory, run:

	./configure [<OPTIONS>]
	make
	make install

For the open-iscsi project and iscsiuio, the original build
system used make and autoconf to build the project. These
build systems are being deprecated in favor of meson (and ninja).
See below for how to build using make and autoconf, but
migrating to meson as soon as possible is recommended.

Building open-iscsi/iscsiuio using meson
----------------------------------------
For Open-iSCSI and iscsiuio, the system is built using meson and ninja
(see https://github.com/mesonbuild/meson). If these packages aren't
available to you on your Linux distribution, you can download
the latest release from https://github.com/mesonbuild/meson/releases.
The README.md file there describes in detail how to build it yourself,
including how to get ninja.

To build the open-iscsi project, including iscsiuio, first run meson
to configure the build, from the top-level open-iscsi directory, e.g.:

	rm -rf builddir
	mkdir builddir
	meson [<MESON-OPTIONS>] builddir

Then, to build the code:

	ninja -C builddir

If you change any code and want to rebuild, you simply run ninja again.

When you are ready to install:

	[DESTDIR=<SOME-DIR>] ninja -C builddir install

This will install the iSCSI tools, configuration files, interfaces, and
documentation. If you do not set DESTDIR, it defaults to "/".


MESON-OPTIONS:
--------------
One can override several default values when building with meson:


Option			Description
=====================	=====================================================

--libdir=<LIBDIR>	Where library files go [/lib64]
--sbindir=<DIR>		Meson 0.63 or newer: Where binaries go [/usr/sbin]
-Dc_flags="<C-FLAGS>"	Pass additional flags to the C compiler
-Dno_systemd=<BOOL>	Disable systemd usage [false]
			(set to "true" to disable systemd)
-Dsystemddir=<DIR>	Set systemd unit directory [/usr/lib/systemd]
-Dhomedir=<DIR>		Set config file directory [/etc/iscsi]
-Ddbroot=<DIR>		Set database directory [/etc/iscsi]
-Dlockdir=<DIR>		Set lock directory [/run/lock/iscsi]
-Drulesdir=<DIR>	Set udev rules directory [/usr/lib/udev/rules.d]
-Discsi_sbindir=<DIR>	Where binaries go [/usr/sbin]
			(for use when sbindir can't be set, in older versions
			 of meson)
-Disns_supported=<BOOL>	Enable/disable iSNS support [true]
			(set to "false" to disable use of open-isns)


Building open-iscsi/iscsiuio using make/autoconf (Deprecated)
-------------------------------------------------------------
If you wish to build using the older deprecated system, you can
simply run:

	make [<MAKE-OPTIONS>]
	make [DESTDIR=<SOME-DIR>] install

Where MAKE-OPTIONS are from:
	* SBINDIR=<some-dir>  [/usr/bin]   for executables
	* DBROOT=<some-dir>   [/etc/iscsi] for iscsi database files
	* HOMEDIR=<some-dir>  [/etc/iscsi] for iscsi config files


4. Open-iSCSI daemon
====================

The iscsid daemon implements the control path of the iSCSI protocol,
plus some management facilities. For example, the daemon can be
configured to automatically re-start discovery at startup, based on the
contents of the persistent iSCSI database (see next section).

For help, run:

	iscsid --help

The output will be similar to the following (assuming a default install):

Usage: iscsid [OPTION]

  -c, --config=[path]     Execute in the config file (/etc/iscsi/iscsid.conf).
  -i, --initiatorname=[path]     read initiatorname from file (/etc/iscsi/initiatorname.iscsi).
  -f, --foreground        run iscsid in the foreground
  -d, --debug debuglevel  print debugging information
  -u, --uid=uid           run as uid, default is current user
  -g, --gid=gid           run as gid, default is current user group
  -n, --no-pid-file       do not use a pid file
  -p, --pid=pidfile       use pid file (default /run/iscsid.pid).
  -h, --help              display this help and exit
  -v, --version           display version and exit


5. Open-iSCSI Configuration and Administration Utility
======================================================

Open-iSCSI persistent configuration is stored in a number of
directories under a configuration root directory, using a flat-file
format. This configuration root directory is /etc/iscsi by default,
but may also commonly be in /var/lib/iscsi (see "dbroot" in the meson
options discussed earlier).

Configuration is contained in directories for:

	- nodes
	- isns
	- static
	- fw
	- send_targets
	- ifaces

The iscsiadm utility is a command-line tool to manage (update, delete,
insert, query) the persistent database, as well as to manage discovery,
session establishment (login), and ending sessions (logout).

This utility presents a set of operations that a user can perform
on iSCSI node, session, connection, and discovery records.

Open-iSCSI does not use the term node as defined by the iSCSI RFC,
where a node is a single iSCSI initiator or target. Open-iSCSI uses the
term node to refer to a portal on a target, so tools like iscsiadm
require that the '--targetname' and '--portal' arguments be used when
in node mode.

For session mode, a session id (sid) is used. The sid of a session can be
found by running:

	iscsiadm -m session -P 1

The session id is not currently persistent and is partially determined
by when the session is set up.
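
The sid is the number in square brackets in the session listing. A
minimal sketch of extracting it from one line of output; the sample line
format here is an assumption, modeled on typical "iscsiadm -m session"
output, not captured from a real system:

```shell
# Sample line in the style of 'iscsiadm -m session' output (assumed format):
#   <transport>: [<sid>] <ip>:<port>,<tpgt> <targetname>
line='tcp: [2] 192.168.1.1:3260,1 iqn.2005-06.com.example:disk1'

# Pull the session id out of the square brackets
sid=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9][0-9]*\)\].*/\1/p')
echo "$sid"
```

The same sid can then be passed to session mode with -r/--sid.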

Note that some of the iSCSI Node and iSCSI Discovery operations
do not require the iSCSI daemon (iscsid) to be running.

For help on the command, run:

	iscsiadm --help

The output will be similar to the following.

iscsiadm -m discoverydb [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-Dl]] | [[-p ip:port -t type] [-o operation] [-n name] [-v value] [-lD]]
iscsiadm -m discovery [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-l]] | [[-p ip:port] [-l | -D]] [-W]
iscsiadm -m node [-hV] [-d debug_level] [-P printlevel] [-L all,manual,automatic,onboot] [-W] [-U all,manual,automatic,onboot] [-S] [[-T targetname -p ip:port -I ifaceN] [-l | -u | -R | -s]] [[-o operation ] [-n name] [-v value]]
iscsiadm -m session [-hV] [-d debug_level] [-P printlevel] [-r sessionid | sysfsdir [-R | -u | -s] [-o operation] [-n name] [-v value]]
iscsiadm -m iface [-hV] [-d debug_level] [-P printlevel] [-I ifacename | -H hostno|MAC] [[-o operation ] [-n name] [-v value]] [-C ping [-a ip] [-b packetsize] [-c count] [-i interval]]
iscsiadm -m fw [-d debug_level] [-l] [-W] [[-n name] [-v value]]
iscsiadm -m host [-P printlevel] [-H hostno|MAC] [[-C chap [-x chap_tbl_idx]] | [-C flashnode [-A portal_type] [-x flashnode_idx]] | [-C stats]] [[-o operation] [-n name] [-v value]]
iscsiadm -k priority


The first parameter specifies the mode to operate in:

  -m, --mode <op>	specify operational mode op =
			<discoverydb|discovery|node|session|iface|fw|host>

Mode "discoverydb"
------------------

  -m discoverydb --type=[type] --interface=[iface…] --portal=[ip:port] \
			--print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover

			  This command will use the discovery record settings
			  matching the record with type=type and
			  portal=ip:port. If a record does not exist, it will
			  create a record using the iscsid.conf discovery
			  settings.

			  By default, it will then remove records for
			  portals no longer returned. And
			  if a portal is returned by the target, the
			  discovery command will create a new record or modify
			  an existing one with values from iscsid.conf and the
			  command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the node DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB. This is
			  only useful with the --login command.

			  See the example section for more info.

			  See below for how to set up iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

			  For the above commands, "print" is optional. If
			  used, N can be 0 or 1.
			  0 = The old flat style of output is used.
			  1 = The tree style with the interface info is used.

			  If print is not used, the old flat style is used.

  -m discoverydb --interface=[iface...] --type=[type] --portal=[ip:port] \
			--print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover --login

			  This works like the previous discoverydb command,
			  except that with the --login argument passed in it
			  will also log into the portals that are found.

  -m discoverydb --portal=[ip:port] --type=[type] \
			--op=[op] [--name=[name] --value=[value]]

			  Perform the specified DB operation [op] for the
			  discovery portal. It can be one of:
			  [new], [delete], [update] or [show]. In the case of
			  [update], you have to provide the [name] and [value]
			  you wish to update.

			  Setting op=NEW will create a new discovery record
			  using the iscsid.conf discovery settings. If it
			  already exists, it will be overwritten using
			  iscsid.conf discovery settings.

			  Setting op=DELETE will delete the discovery record
			  and records for the targets found through
			  that discovery source.

			  Setting op=SHOW will display the discovery record
			  values. The --show argument can be used to
			  force the CHAP passwords to be displayed.

Mode "discovery"
----------------

  -m discovery --type=[type] --interface=iscsi_ifacename \
			--portal=[ip:port] --login --print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT]

			  Perform [type] discovery for target portal with
			  ip-address [ip] and port [port].

			  This command will not use the discovery record
			  settings. It will use the iscsid.conf discovery
			  settings, and it will overwrite the discovery
			  record with iscsid.conf discovery settings if it
			  exists. By default, it will then remove records for
			  portals no longer returned. And
			  if a portal is returned by the target, the
			  discovery command will create a new record or modify
			  an existing one with values from iscsid.conf and the
			  command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB.

			  See the example section for more info.

			  See below for how to set up iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

  -m discovery --print=[N]

			  Display all discovery records from internal
			  persistent discovery database.

Mode "node"
-----------

  -m node		  display all discovered nodes from internal
			  persistent discovery database

  -m node --targetname=[name] --portal=[ip:port] \
			--interface=[iscsi_ifacename] \
			[--login|--logout|--rescan|--stats] [-W]

  -m node --targetname=[name] --portal=[ip:port] \
			--interface=[driver,HWaddress] \
			--op=[op] [--name=[name] --value=[value]]

  -m node --targetname=[name] --portal=[ip:port] \
			--interface=[iscsi_ifacename] \
			--print=[level]

			  Perform the specified DB operation [op] for the given
			  interface on the host that will connect to the portal
			  on the target. targetname, portal and interface are
			  optional. See below for how to set up iSCSI ifaces
			  for software iSCSI or override the system defaults.

			  The op could be one of [new], [delete], [update] or
			  [show]. In case of [update], you have to provide
			  [name] and [value] you wish to update.
			  For [delete], note that if a session is using the
			  node record, the session will be logged out then
			  the record will be deleted.

			  Using --rescan will perform a SCSI layer scan of the
			  session to find new LUNs.

			  Using --stats prints the iSCSI stats for the session.

			  Using --login sends a login request to the
			  specified target and normally waits for the results.
			  If -W/--no_wait is supplied, return success if we are
			  able to send the login request, and do not wait
			  for the response. The user will have to poll for
			  success.

			  Print level can be 0 to 1.

  -m node --logoutall=[all|manual|automatic]
			  Logout "all" the running sessions or just the ones
			  with a node startup value manual or automatic.
			  Nodes marked as ONBOOT are skipped.

  -m node --loginall=[all|manual|automatic] [-W]
			  Login "all" the running sessions or just the ones
			  with a node startup value manual or automatic.
			  Nodes marked as ONBOOT are skipped.

			  If -W is supplied then do not wait for the login
			  response for the target, returning success if we
			  are able to just send the request. The client
			  will have to poll for success.

Mode "session"
--------------

  -m session		  display all active sessions and connections

  -m session --sid=[sid] [ --print=level | --rescan | --logout ]
			--op=[op] [--name=[name] --value=[value]]

			  Perform operation for specific session with
			  session id sid. If no sid is given, the operation
			  will be performed on all running sessions if possible.
			  --logout and --op work like they do in node mode,
			  but in session mode targetname and portal info
			  is not passed in.

			  Print level can be 0 to 3.
			  0 = Print the running sessions.
			  1 = Print basic session info like node we are
			  connected to and whether we are connected.
			  2 = Print iSCSI params used.
			  3 = Print SCSI info like LUNs, device state.

			  If no sid and no operation is given, the
			  running sessions are printed.

Mode "iface"
------------

  -m iface --interface=iscsi_ifacename --op=[op] [--name=[name] --value=[value]]
			--print=level

			  Perform operation on given interface with name
			  iscsi_ifacename.

			  See below for examples.

  -m iface --interface=iscsi_ifacename -C ping --ip=[ipaddr] --packetsize=[size]
			--count=[count] --interval=[interval]

Mode "host"
-----------

  -m host [--host=hostno|MAC] --print=level -C chap --op=[SHOW]

			  Display information for a specific host. The host
			  can be passed in by host number or by MAC address.
			  If a host is not passed in, then info
			  for all hosts is printed.

			  Print level can be 0 to 4.
			  1 = Print info for the host, like its state, MAC, and
			      netinfo if possible.
			  2 = Print basic session info for nodes the host
			      is connected to.
			  3 = Print iSCSI params used.
			  4 = Print SCSI info like LUNs, device state.

  -m host --host=hostno|MAC -C chap --op=[DELETE] --index=[chap_tbl_idx]

			  Delete the chap entry at the given index from the
			  chap table.

  -m host --host=hostno|MAC -C chap --op=[NEW | UPDATE] --index=[chap_tbl_idx] \
			--name=[name] --value=[value]

			  Add a new or update an existing chap entry at the
			  given index with the given username and password pair.
			  If index is not passed, then the entry is added at
			  the first free index in the chap table.

  -m host --host=hostno|MAC -C flashnode

			  Display the list of all targets in the adapter's
			  flash (flash nodes) for the specified host,
			  with ip, port, tpgt and iqn.

  -m host --host=hostno|MAC -C flashnode --op=[NEW] --portal_type=[ipv4|ipv6]

			  Create new flash node entry for the given host of the
			  specified portal_type. This returns the index of the
			  newly created entry on success.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[UPDATE] --name=[name] --value=[value]

			  Update the params of the specified flash node.
			  The [name] and [value] pairs must be provided for the
			  params that need to be updated. Multiple params can
			  be updated using a single command.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[SHOW | DELETE | LOGIN | LOGOUT]

			  Setting op=DELETE|LOGIN|LOGOUT will perform the
			  deletion/login/logout operation on the specified
			  flash node.

			  Setting op=SHOW will list all params with the values
			  for the specified flash node. This is the default
			  operation.

			  See the iscsiadm example section below for more info.

Other arguments
---------------

  -d, --debug debuglevel  print debugging information

  -V, --version		  display version and exit

  -h, --help		  display this help and exit


5.1 iSCSI iface setup
=====================

The next sections describe how to set up iSCSI ifaces so you can bind
a session to a NIC port when using software iSCSI (section 5.1.1), and
how to set up ifaces for use with offload cards from Chelsio
and Broadcom (section 5.1.2).


5.1.1 How to setup iSCSI interfaces (iface) for binding
=======================================================

If you wish to allow the network subsystem to figure out
the best path/NIC to use, then you can skip this section. For example,
if you have set up your portals and NICs on different subnets, then
the following is not needed for software iSCSI.

Warning!!!!!!
This feature is experimental. The interface may change. When reporting
bugs, if you cannot do a "ping -I ethX target_portal", then check your
network settings first. Make sure the rp_filter setting is set to 0 or 2
(see Prep section below for more info). If you cannot ping the portal,
then you will not be able to bind a session to a NIC.

What is a scsi_host and iface for software, hardware and partial
offload iSCSI?

Software iSCSI, like iscsi_tcp and iser, allocates a scsi_host per session
and does a single connection per session. As a result
/sys/class_scsi_host and /proc/scsi will report a scsi_host for
each connection/session you have logged into. Offload iSCSI, like
Chelsio cxgb3i, allocates a scsi_host for each PCI device (each
port on an HBA will show up as a different PCI device, so you get
a scsi_host per HBA port).

To manage both types of initiator stacks, iscsiadm uses the interface
(iface) structure. For each HBA port, or for software iSCSI for each
network device (ethX) or NIC that you wish to bind sessions to, you must
create an iface config in /etc/iscsi/ifaces.

Prep
----

The iface binding feature requires the sysctl setting
net.ipv4.conf.default.rp_filter to be set to 0 or 2.
This can be set in /etc/sysctl.conf by having the line:
	net.ipv4.conf.default.rp_filter = N

where N is 0 or 2. Note that when setting this you may have to reboot
for the value to take effect.
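
Before editing anything, you can check the current value with a
read-only sketch like the following (the /proc path is the standard
Linux sysctl location for this setting):

```shell
# Read the current reverse-path-filter setting for the "default"
# interface configuration; the iface binding feature needs 0 or 2.
rp=$(cat /proc/sys/net/ipv4/conf/default/rp_filter)
echo "rp_filter is $rp"
case "$rp" in
	0|2) echo "OK for iface binding" ;;
	*)   echo "change to 0 or 2 in /etc/sysctl.conf and reboot" ;;
esac
```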


rp_filter information from Documentation/networking/ip-sysctl.txt:

rp_filter - INTEGER
	0 - No source validation.
	1 - Strict mode as defined in RFC3704 Strict Reverse Path
	    Each incoming packet is tested against the FIB and if the interface
	    is not the best reverse path the packet check will fail.
	    By default failed packets are discarded.
	2 - Loose mode as defined in RFC3704 Loose Reverse Path
	    Each incoming packet's source address is also tested against the FIB
	    and if the source address is not reachable via any interface
	    the packet check will fail.

Running
-------

The command:

	iscsiadm -m iface

will report iface configurations that are setup in /etc/iscsi/ifaces:

	iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax
	iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname
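
The record is one space-separated iface name followed by comma-separated
fields. A small sketch splitting the first example line above with
standard tools (the sample line is copied from the output shown above):

```shell
# Split an iface record:
#   iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname
line='iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax'

out=$(printf '%s\n' "$line" | awk '{
	name = $1
	split($2, f, ",")
	printf "iface=%s transport=%s mac=%s ip=%s netdev=%s", name, f[1], f[2], f[3], f[4]
}')
echo "$out"
```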

For software iSCSI, you can create the iface configs by hand, but it is
recommended that you use iscsiadm's iface mode. There is an iface.example in
/etc/iscsi/ifaces which can be used as a template for the daring.

For each network object you wish to bind a session to, you must create
a separate iface config in /etc/iscsi/ifaces and each iface config file
must have a unique name which is less than or equal to 64 characters.

Example
-------

If you have NIC1 with MAC address 00:0F:1F:92:6B:BF and NIC2 with
MAC address 00:C0:DD:08:63:E7, and you wanted to do software iSCSI over
TCP/IP, then in /etc/iscsi/ifaces/iface0 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:0F:1F:92:6B:BF

and in /etc/iscsi/ifaces/iface1 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:C0:DD:08:63:E7
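
As a sketch, the two files above could be written by hand like this,
using a /tmp stand-in for /etc/iscsi/ifaces so no root access is needed
(in practice, prefer iscsiadm's iface mode, described below):

```shell
# Stand-in for /etc/iscsi/ifaces (the real path requires root)
dir=/tmp/iscsi-ifaces-example
mkdir -p "$dir"

# One uniquely named config file per NIC you want to bind sessions to
printf 'iface.transport_name = tcp\niface.hwaddress = %s\n' 00:0F:1F:92:6B:BF > "$dir/iface0"
printf 'iface.transport_name = tcp\niface.hwaddress = %s\n' 00:C0:DD:08:63:E7 > "$dir/iface1"

cat "$dir/iface0"
```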

Warning: Do not name an iface config file "default" or "iser".
These are special values/files that are used by the iSCSI tools for
backward compatibility. If you name an iface default or iser, then
the behavior is not defined.

To use iscsiadm to create an iface0 similar to the above example, run:

	iscsiadm -m iface -I iface0 --op=new

(This will create a new empty iface config. If there was already an iface
with the name "iface0", this command will overwrite it.)

Next, set the hwaddress:

	iscsiadm -m iface -I iface0 --op=update \
		-n iface.hwaddress -v 00:0F:1F:92:6B:BF

If you have sessions logged in, iscsiadm will not update or overwrite
an iface. You must log out first. If you have an iface bound to a node/portal
but you have not logged in, then iscsiadm will update the config and
all existing bindings.

You should now skip to 5.1.3 to see how to log in using the iface, and for
some helpful management commands.


5.1.2 Setting up an iface for an iSCSI offload card
===================================================

This section describes how to set up ifaces for use with Chelsio, Broadcom and
QLogic cards.

By default, iscsiadm will create an iface for each Broadcom, QLogic and Chelsio
port. The iface name will be of the form:

	$transport/driver_name.$MAC_ADDRESS
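
A one-line sketch of how such a default name is composed, using the
cxgb3i values from the example output below:

```shell
# Default offload iface name: <transport/driver name> "." <port MAC address>
driver=cxgb3i
mac=00:07:43:05:97:07
iface_name="${driver}.${mac}"
echo "$iface_name"
```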

Running the following command:

	iscsiadm -m iface

will report iface configurations that are setup in /etc/iscsi/ifaces:

	default tcp,<empty>,<empty>,<empty>,<empty>
	iser iser,<empty>,<empty>,<empty>,<empty>
	cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>
	qla4xxx.00:0e:1e:04:8b:2e qla4xxx,00:0e:1e:04:8b:2e,<empty>,<empty>,<empty>

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

where:	iface_name:		name of iface
	transport_name:		name of driver
	hwaddress:		MAC address
	ipaddress:		IP address to use for this port
	net_iface_name:		will be <empty> because it can change between
				reboots. It is used for software iSCSI's vlan
				or alias binding.
	initiatorname:		Initiatorname to be used if you want to override the
				default one in /etc/iscsi/initiatorname.iscsi.

To display these values in a more friendly way, run:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07

Example output:

	# BEGIN RECORD 2.0-871
	iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
	iface.net_ifacename = <empty>
	iface.ipaddress = <empty>
	iface.hwaddress = 00:07:43:05:97:07
	iface.transport_name = cxgb3i
	iface.initiatorname = <empty>
	# END RECORD

Before you can use the iface, you must set the IP address for the port.
We determine the corresponding variable name that we want to update from
the output above, which is "iface.ipaddress".
Then we fill this empty variable with the value we desire, with this command:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update \
		-n iface.ipaddress -v 20.15.0.66

Note for QLogic ports: After updating the iface record, you must apply or
applyall the settings for the changes to take effect:

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2e -o apply
	iscsiadm -m iface -H 00:0e:1e:04:8b:2e -o applyall

With "apply", the network settings for the specified iface will take effect.
With "applyall", the network settings for all ifaces on a specific host will
take effect. The host can be specified using the -H/--host argument by either
the MAC address of the host or the host number.

Here is an example of setting multiple IPv6 addresses on a single iSCSI
interface port.
First interface (no need to set iface_num, it is 0 by default):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9392

Create the second interface if it does not exist (iface_num is mandatory here):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 --op=new
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.iface_num -v 1
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9393
	iscsiadm -m iface -H 00:0e:1e:04:8b:2a --op=applyall

Note: If there are common settings for multiple interfaces, then the
settings from the 0th iface are considered valid.

Now we can use this iface to log in to targets, which is described in the
next section.


5.1.3 Discovering iSCSI targets/portals
========================================

Be aware that iscsiadm will use the default route to do discovery. It will
not use the iface specified. So if you are using an offload card, you will
need a separate network connection to the target for discovery purposes.

*This should be fixed in some future version of Open-iSCSI.*

For compatibility reasons, when you run iscsiadm to do discovery, it
will check for interfaces in /etc/iscsi/ifaces that are using
tcp for the iface.transport, and it will bind the portals that are discovered
so that they will be logged in through those ifaces. This behavior can also
be overridden by passing in the interfaces you want to use. For the case
of offload, like with cxgb3i and bnx2i, this is required because the transport
will not be tcp.

For example, if you had defined two interfaces but only wanted to use one,
you can use the --interface/-I argument:

	iscsiadm -m discoverydb -t st -p ip:port -I iface1 --discover -P 1

If you had defined interfaces but wanted the old behavior, where we do not
bind a session to an iface, then you can use the special iface "default":

	iscsiadm -m discoverydb -t st -p ip:port -I default --discover -P 1

And if you did not define any interfaces in /etc/iscsi/ifaces and do
not pass anything into iscsiadm, running iscsiadm will do the default
behavior, allowing the network subsystem to decide which device to use.

If you later want to remove the bindings for a specific target and
iface, then you can run:

	iscsiadm -m node -T my_target -I iface0 --op=delete

To do this for a specific portal on a target, run:

	iscsiadm -m node -T my_target -p ip:port -I iface0 --op=delete

If you wanted to delete all bindings for iface0, then you can run:

	iscsiadm -m node -I iface0 --op=delete

And for EqualLogic targets, it is sometimes useful to remove just by portal:

	iscsiadm -m node -p ip:port -I iface0 --op=delete


Now logging into targets is the same as with software iSCSI. See section 7
for how to get started.


5.2 iscsiadm examples
=====================

Usage examples using the one-letter options (see iscsiadm man page
for long options):

Discovery mode
--------------

- SendTargets iSCSI Discovery using the default driver and interface and
		using the discovery settings for the discovery record with the
		ID [192.168.1.1:3260]:

	iscsiadm -m discoverydb -t st -p 192.168.1.1:3260 --discover

  This will search /etc/iscsi/send_targets for a record with the
  ID [portal = 192.168.1.1:3260, type = sendtargets]. If found, it
  will perform discovery using the settings stored in the record.
  If a record does not exist, it will be created using the iscsid.conf
  discovery settings.

  The argument to -p may also be a hostname instead of an address:

		iscsiadm -m discoverydb -t st -p somehost --discover

  For the ifaces, iscsiadm will first search /etc/iscsi/ifaces for
  interfaces using software iSCSI. If any are found, then nodes found
  during discovery will be set up so that they can be logged in through
  those interfaces. To specify a specific iface, pass the
  -I argument for each iface.

- SendTargets iSCSI Discovery updating existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update --discover

  If there is a record for targetX, and portalY exists in the DB and
  is returned during discovery, it will be updated with the info from
  iscsid.conf. No new portals will be added and stale portals
  will not be removed.

- SendTargets iSCSI Discovery deleting existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o delete --discover

  If there is a record for targetX, and portalY exists in the DB but
  is not returned during discovery, it will be removed from the DB.
  No new portals will be added and existing portal records will not
  be changed.

  Note: If a session is logged into a portal whose record is about to
  be deleted, the session will be logged out first, and then the record
  will be deleted.

- SendTargets iSCSI Discovery adding new records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new --discover

  If targetX and portalY are returned during discovery and do not yet
  have a record, one will be added. Existing records are not modified.

- SendTargets iSCSI Discovery using multiple ops:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new -o delete --discover

  This command will add new portals and delete records for portals
  no longer returned. It will not change the record information for
  existing portals.

- SendTargets iSCSI Discovery in nonpersistent mode:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o nonpersistent --discover

  This command will perform discovery, but not manipulate the node DB.

- SendTargets iSCSI Discovery with a specific interface.  If you wish
  to only use a subset of the interfaces in
  /etc/iscsi/ifaces, then you can pass them in during discovery:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		--interface=iface0 --interface=iface1 --discover

  Note that for software iSCSI, we let the network layer select
  which NIC to use for discovery, but for later logins iscsiadm
  will use the NIC defined in the iface configuration.

  qla4xxx support is very basic and experimental. It does not store
  the record info in the card's FLASH or the node DB, so you must
  rerun discovery every time the driver is reloaded.

- Manipulate SendTargets DB: Create new SendTargets discovery record or
  overwrite an existing discovery record with iscsid.conf
  discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o new

- Manipulate SendTargets DB: Display discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o show

- Manipulate SendTargets DB: Display hidden discovery settings like
		 CHAP passwords:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o show --show

- Manipulate SendTargets DB: Set a discovery setting:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update -n name -v value

- Manipulate SendTargets DB: Delete discovery record. This will also delete
  the records for the targets found through the discovery source.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o delete

- Show all records in discovery database:

	iscsiadm -m discovery

- Show all records in discovery database and show the targets that were
  discovered from each record:

	iscsiadm -m discovery -P 1

Node mode
---------

In node mode you can specify which records you want to log
into by specifying the targetname, ip address, port or interface
(if specifying the interface, it must already be set up in the node db).
iscsiadm will search the node db for records which match the values
you pass in, so if you pass in the targetname and interface, iscsiadm
will search for records with those values and operate on only them.
Passing in none of them will result in all node records being operated on.
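
As a sketch of how these selectors combine, a small wrapper can assemble
the node-mode arguments and print the command it would run (a dry run;
the target and iface names below are just illustrative values):

```shell
#!/bin/sh
# Build an iscsiadm node-mode command line from optional selectors.
# Any selector left empty is omitted, so iscsiadm would operate on
# every record matching the remaining ones.
build_node_cmd() {
    target="$1" portal="$2" iface="$3" op="$4"
    cmd="iscsiadm -m node"
    [ -n "$target" ] && cmd="$cmd -T $target"
    [ -n "$portal" ] && cmd="$cmd -p $portal"
    [ -n "$iface" ] && cmd="$cmd -I $iface"
    echo "$cmd $op"
}

# Print (not run) the command for: log into one target through iface0.
build_node_cmd "iqn.2005-03.com.max" "" "iface0" "-l"
```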

- iSCSI login to all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -l

- iSCSI login to all portals on a node/target through each interface set
  in the db, but do not wait for the login response:

	iscsiadm -m node -T iqn.2005-03.com.max -l -W

- iSCSI login to a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -l

  To specify an IPv6 address, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p 2001:c90::211:9ff:feb8:a9e9 -l

  The above command would use the default port, 3260. To specify a
  port, use the following:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p [2001:c90::211:9ff:feb8:a9e9]:3260 -l

  To specify a hostname, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max -p somehost -l

- iSCSI login to a specific portal through the NIC set up as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0  -l

- iSCSI logout of all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -u

  Warning: this does not check startup values like the logout/login all
  option. Do not use this if you are running iSCSI on your root disk.

- iSCSI logout of all portals on a node/target through each interface set
  in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -u

- iSCSI logout of a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -u

- iSCSI logout of a specific portal through the NIC set up as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0 -u

- Changing iSCSI parameter:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].iscsi.MaxRecvDataSegmentLength -v 65536

  You can also change parameters for multiple records at once, by
  specifying different combinations of target, portal and interface
  like above.

- Adding custom iSCSI portal:

	iscsiadm -m node -o new -T iqn.2005-03.com.max \
		-p 192.168.0.1:3260,2 -I iface4

  The -I/--interface is optional. If not passed in, "default" is used.
  For tcp or iser, this would allow the network layer to decide what is
  best.

  Note that for this command, the Target Portal Group Tag (TPGT) should
  be passed in. If it is not passed in on the initial creation command,
  then the user must run iscsiadm again to set the value. Also,
  if the TPGT is not initially passed in, the old behavior of not
  tracking whether the record was statically or dynamically created
  is used.

- Adding custom NIC config to multiple targets:

	iscsiadm -m node -o new -I iface4

  This command will add an interface config using the iSCSI and SCSI
  settings from iscsid.conf to every target that is in the node db.

- Removing iSCSI portal:

	iscsiadm -m node -o delete -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also delete multiple records at once, by specifying different
  combinations of target, portal and interface like above.

- Display iSCSI portal configuration:

	iscsiadm -m node [-o show] -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also display multiple records at once, by specifying different
  combinations of target, portal and interface like above.

  Note: running "iscsiadm -m node" will only display the records. It
  will not display the configuration info. For the latter, run:

	iscsiadm -m node -o show

- Show all node records:

	iscsiadm -m node

  This will print the nodes using the old flat format where the
  interface and driver are not displayed. To display that info
  use the -P option with the argument "1":

	iscsiadm -m node -P 1

Session mode
------------

- Display session statistics:

	iscsiadm -m session -r 1 --stats

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

- Perform a SCSI scan on a session

	iscsiadm -m session -r 1 --rescan

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

  Note: Rescanning does not delete old LUNs. It will only pick up new
  ones.

- Display running sessions:

	iscsiadm -m session -P 1

Host mode with flashnode submode
--------------------------------

- Display list of flash nodes for a host

	iscsiadm -m host -H 6 -C flashnode

  This will print a list of all the flash node entries for the given
  host, along with their ip, port, tpgt and iqn values.

- Display all parameters of a flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -x 0

  This will list all the parameter name,value pairs for the
  flash node entry at index 0 of host 6.

- Add a new flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -o new -A [ipv4|ipv6]

  This will add a new flash node entry for the given host 6 with a
  portal type of either ipv4 or ipv6. The new operation returns the
  index of the newly created flash node entry.

- Update a flashnode entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o update \
		-n flashnode.conn[0].ipaddress -v 192.168.1.12 \
		-n flashnode.session.targetname \
		-v iqn.2002-03.com.compellent:5000d310004b0716

  This will update the values of ipaddress and targetname params of
  the flash node entry at index 1 of host 6.

- Login to a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o login

- Logout from a flash node entry
	Logout can be performed either using the flash node index:

	iscsiadm -m host -H 6 -C flashnode -x 1 -o logout

  or by using the corresponding session index:

	iscsiadm -m session -r $sid -u

- Delete a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o delete

Host mode with chap submode
---------------------------

- Display list of chap entries for a host

	iscsiadm -m host -H 6 -C chap -o show

- Delete a chap entry for a host

	iscsiadm -m host -H 6 -C chap -o delete -x 5

  This will delete any chap entry present at index 5.

- Add/Update a local chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 4 -n username \
			-v value -n password -v value

  This will update the local chap entry present at index 4. If index 4
  is free, then a new entry of type local chap will be created at that
  index with given username and password values.

- Add/Update a bidi chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 5 -n username_in \
		-v value -n password_in -v value

  This will update the bidi chap entry present at index 5. If index 5
  is free then entry of type bidi chap will be created at that index
  with given username_in and password_in values.

Host mode with stats submode
----------------------------

- Display host statistics:

	iscsiadm -m host -H 6 -C stats

  This will print the aggregate statistics on the host adapter port.
  This includes MAC, TCP/IP, ECC & iSCSI statistics.


6. Configuration
================

The default configuration file is /etc/iscsi/iscsid.conf, but the
directory is configurable with the top-level make option "homedir".
The remainder of this document will assume the /etc/iscsi directory.
This file contains only configuration that could be overwritten by iSCSI
discovery, or manually updated via the iscsiadm utility. It's OK if this
file does not exist, in which case the compiled-in default configuration
will be used for newly discovered target nodes.

See the man page and the example file for the current syntax.
The manual pages for iscsid and iscsiadm are in the doc subdirectory.
They are not installed automatically, so they need to be manually
copied into the appropriate man page directory, e.g. /usr/local/share/man8.


7. Getting Started
==================

There are three steps needed to set up a system to use iSCSI storage:

7.1. iSCSI startup using the systemd units or manual startup.
7.2. Discover targets.
7.3. Automate target logins for future system reboots.

The systemd startup units will start the iSCSI daemon and log into any
portals that are set up for automatic login (discussed in 7.3)
or discovered through the discovery daemon iscsid.conf params
(discussed in 7.4).

If your distro does not have systemd units for iSCSI, then you will have
to start the daemon and log into the targets manually.


7.1.1 iSCSI startup using systemd
=================================

Red Hat or Fedora:
-----------------
To start Open-iSCSI in Red Hat/Fedora you can do:

	systemctl start open-iscsi

To have Open-iSCSI start automatically at boot time, you may have to
run:
	systemctl enable open-iscsi

And, to automatically mount a file system during startup
you must have the partition entry in /etc/fstab marked with the "_netdev"
option. For example this would mount an iSCSI disk sdb:

	/dev/sdb /mnt/iscsi ext3 _netdev 0 0

SUSE or Debian:
---------------
The Open-iSCSI service is socket activated, so there is no need to
enable the Open-iSCSI service. Likewise, the iscsi.service login
service is enabled automatically, so setting "node.startup" to
"automatic" will enable automatic login to Open-iSCSI targets.


7.1.2 Manual Startup
====================

7.1.2.1 Starting up the iSCSI daemon (iscsid) and loading modules
=================================================================

If there are no systemd units or init scripts, you must start the tools
by hand. First load the iSCSI modules:

	modprobe -q iscsi_tcp

After that, start iSCSI as a daemon process:

	iscsid

or alternatively, start it with debug enabled, in a separate window,
which will force it into "foreground" mode:

	iscsid -d 8


7.1.2.2 Logging into Targets
============================

Use the configuration utility, iscsiadm, to add/remove/update discovery
records and iSCSI node records, or to monitor active iSCSI sessions (see
above or the iscsiadm man page, and see section 7.2 below for how to
discover targets):

	iscsiadm  -m node

This will print out the nodes that have been discovered as:

	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311

The format is:

	ip:port,target_portal_group_tag targetname
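
Because this flat format is line-oriented, it is easy to post-process in
scripts. A minimal sketch, assuming IPv4-style portals as in the example
above (the listing is simulated here rather than taken from a live node db):

```shell
#!/bin/sh
# Split "ip:port,tpgt targetname" lines into their fields.
# Reads the flat "iscsiadm -m node" format on stdin.
parse_nodes() {
    while read -r endpoint target; do
        portal=${endpoint%,*}   # strip the ",tpgt" suffix
        tpgt=${endpoint##*,}
        echo "target=$target portal=$portal tpgt=$tpgt"
    done
}

# Simulated "iscsiadm -m node" output:
parse_nodes <<'EOF'
10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
EOF
```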

If you are using the iface argument or want to see the driver
info, use the following:

	iscsiadm -m node -P 1

Example output:

	Target: iqn.1992-08.com.netapp:sn.33615311
	        Portal: 10.15.84.19:3260,2
	                Iface Name: iface2
	        Portal: 10.15.85.19:3260,3
	                Iface Name: iface2

The format is:

	Target: targetname
		Portal: ip_address:port,tpgt
			Iface Name: ifacename

Here, targetname is the name of the target and ip_address:port
is the address and port of the portal. tpgt is the Target Portal Group
Tag of the portal, and is not used in iscsiadm commands except for static
record creation. ifacename is the name of the iSCSI interface
defined in /etc/iscsi/ifaces. If no interface was defined in
/etc/iscsi/ifaces or passed in, the default behavior is used:
iscsi_tcp/tcp over whichever NIC the network layer decides is best.

To login, take the ip, port and targetname from above and run:

	iscsiadm -m node -T targetname -p ip:port -l

In this example we would run:

	iscsiadm -m node -T iqn.1992-08.com.netapp:sn.33615311 \
		-p 10.15.84.19:3260 -l

Note: drop the portal group tag from the "iscsiadm -m node" output.

If you wish, for example, to log into all targets represented in the node
database without waiting for the login responses:

	iscsiadm -m node -l -W

After this, you can use "session" mode to detect when the logins complete:

	iscsiadm -m session


7.2. Discover Targets
=====================

Once the iSCSI service is running, you can perform discovery using
SendTargets with:

	iscsiadm -m discoverydb -t sendtargets -p ip:port --discover

Here, "ip" is the address of the portal and "port" is the port.

To use iSNS you can run the discovery command with the type as "isns"
and pass in the ip:port:

	iscsiadm -m discoverydb -t isns -p ip:port --discover

Both commands will print out the list of all discovered targets and their
portals, e.g.:

	iscsiadm -m discoverydb -t st -p 10.15.85.19:3260 --discover

This might produce:

	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

The format for the output is:

	ip:port,tpgt targetname

In this example, for the first entry the IP address is 10.15.84.19 and
the port is 3260. The target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311.

If you would also like to see the iSCSI interface which will be used
for each session then use the --print=[N]/-P [N] option:

	iscsiadm -m discoverydb -t sendtargets -p ip:port -P 1 --discover

This might print:

    Target: iqn.1992-08.com.netapp:sn.33615311
        Portal: 10.15.84.19:3260,2
           Iface Name: iface2
        Portal: 10.15.85.19:3260,3
           Iface Name: iface2

In this example, the IP address of the first portal is 10.15.84.19, and
the port is 3260. The target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311. The iface being used is iface2.

While discovery targets are kept in the discovery db, they are
useful only for re-discovery. The discovered targets (a.k.a. nodes)
are stored as records in the node db.

The discovered targets are not logged into yet. Rather than logging
into the discovered nodes (making LUs from those nodes available as
storage), it is better to automate the login to the nodes we need.

If you wish to log into a target manually now, see section
"7.1.2.2 Logging into Targets" above.


7.3. Automate Target Logins for Future System Startups
======================================================

Note: this may only work for distros with systemd iSCSI login scripts.

To automate login to a node, use the following with the record ID
(record ID is the targetname and portal) of the node discovered in the
discovery above:

	iscsiadm -m node -T targetname -p ip:port --op update -n node.startup -v automatic

To set the automatic setting to all portals on a target through every
interface set up for each portal, the following can be run:

	iscsiadm -m node -T targetname --op update -n node.startup -v automatic

Or, to set the "node.startup" attribute to "automatic" as the default for
all sessions, add the following to /etc/iscsi/iscsid.conf:

	node.startup = automatic

Setting this in iscsid.conf will not affect existing nodes. It will only
affect nodes that are discovered after setting the value.

To login to all automated nodes, simply restart the iSCSI login service, e.g. with:

	systemctl restart iscsi.service

On your next startup the nodes will be logged into automatically.
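
The per-target update above can also be scripted over the flat node
listing. This sketch only prints the commands it would run (a dry run;
pipe its output to sh to actually execute them), and the listing fed in
below is simulated:

```shell
#!/bin/sh
# For each "ip:port,tpgt targetname" line on stdin, emit the
# iscsiadm command that marks that node for automatic login.
emit_automatic() {
    while read -r endpoint target; do
        portal=${endpoint%,*}   # drop the ",tpgt" suffix
        echo "iscsiadm -m node -T $target -p $portal" \
             "--op update -n node.startup -v automatic"
    done
}

# Simulated "iscsiadm -m node" output:
emit_automatic <<'EOF'
10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
EOF
```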


7.4 Automatic Discovery and Login
=================================

Instead of running the iscsiadm discovery command and editing the
startup setting, iscsid can be configured so that every X seconds
it performs discovery and logs in and out of the portals returned or
no longer returned. In this mode, when iscsid starts it will check the
discovery db for iSNS records with:

	discovery.isns.use_discoveryd = Yes

and for SendTargets discovery records with:

	discovery.sendtargets.use_discoveryd = Yes

If set, iscsid will perform discovery to the address every
discovery.isns.discoveryd_poll_inval or
discovery.sendtargets.discoveryd_poll_inval seconds,
and it will log into any portals found from the discovery source using
the ifaces in /etc/iscsi/ifaces.

Note that for iSNS the discoveryd_poll_inval setting does not have to be
set. If it is not set, iscsid will only perform rediscovery when it gets
an SCN from the server.

iSNS Note:
For servers like Microsoft's that allow SCN registrations but do not
send SCN events, discovery.isns.discoveryd_poll_inval should be set to a
non-zero value to auto-discover new targets. This is also useful for
servers like linux-isns (SLES's iSNS server), which sometimes does not
send SCN events in the proper format, so they may not get handled.

Examples
--------

SendTargets
-----------

- Create a SendTargets record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260 -o new

  On success, this will output something like:

  New discovery record for [20.15.0.7,3260] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.use_discoveryd -v Yes

- Set the polling interval:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iSCSI services.

NOTE:	When iscsiadm is run with the -o new argument, it will use the
	discovery.sendtargets.use_discoveryd and
	discovery.sendtargets.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.

iSNS
----

- Create an iSNS record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o new

  Response on success:

	New discovery record for [20.15.0.7,3205] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205  -o update \
		-n discovery.isns.use_discoveryd -v Yes

- [OPTIONAL: see iSNS note above] Set the polling interval if needed:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o update \
		-n discovery.isns.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iSCSI services.

Note:	When iscsiadm is run with the -o new argument, it will use the
	discovery.isns.use_discoveryd and discovery.isns.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.


8. Advanced Configuration
=========================

8.1 iSCSI settings for dm-multipath
===================================

When using dm-multipath, the iSCSI timers should be set so that commands
are quickly failed to the dm-multipath layer. For dm-multipath you should
then set values like queue_if_no_path, so that IO errors are retried and
queued if all paths are failed in the multipath layer.


8.1.1 iSCSI ping/Nop-Out settings
=================================
To quickly detect problems in the network, the iSCSI layer will send iSCSI
pings (iSCSI NOP-Out requests) to the target. If a NOP-Out times out, the
iSCSI layer will respond by failing the connection and starting the
replacement_timeout. It will then tell the SCSI layer to stop the device queues
so no new IO will be sent to the iSCSI layer and to requeue and retry the
commands that were running if possible (see the next section on retrying
commands and the replacement_timeout).

To control how often a NOP-Out is sent, the following value can be set:

	node.conn[0].timeo.noop_out_interval = X

Where X is in seconds and the default is 10 seconds. To control the
timeout for the NOP-Out the noop_out_timeout value can be used:

	node.conn[0].timeo.noop_out_timeout = X

Again X is in seconds and the default is 15 seconds.

Normally for these values you can use:

	node.conn[0].timeo.noop_out_interval = 5
	node.conn[0].timeo.noop_out_timeout = 10

If there are a lot of IO error messages like

	detected conn error (22)

in the kernel log then the above values may be too aggressive. You may need to
increase the values for your network conditions and workload, or you may need
to check your network for possible problems.
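
In the worst case, a path failure is noticed roughly one noop_out_interval
plus one noop_out_timeout after it happens: the previous ping had just been
answered when the path died, so the failure is only seen when the next ping
times out. A quick sketch of that arithmetic for the suggested values above:

```shell
#!/bin/sh
# Rough worst-case failure-detection latency for iSCSI NOP-Out pings:
# wait up to one full interval for the next ping to be sent, then one
# full timeout for that ping to expire.
noop_out_interval=5    # node.conn[0].timeo.noop_out_interval
noop_out_timeout=10    # node.conn[0].timeo.noop_out_timeout
worst_case=$((noop_out_interval + noop_out_timeout))
echo "worst-case detection: ${worst_case}s"
```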


8.1.2 SCSI command retries
==========================

SCSI disk commands get 5 retries by default. In newer kernels this can be
controlled via the sysfs file:

	/sys/block/$sdX/device/scsi_disk/$host:$bus:$target:LUN/max_retries

by writing an integer lower than 5 to reduce retries, or -1 for
infinite retries.

The number of actual retries a command gets may be less than 5 or what is
requested in max_retries if the replacement timeout expires. When that timer
expires it tells the SCSI layer to fail all new and queued commands.


8.1.3 replacement_timeout
=========================

The iSCSI layer timer:

	node.session.timeo.replacement_timeout = X

controls how long to wait for session re-establishment before failing all SCSI
commands:

	1. commands that have been requeued and awaiting a retry
	2. commands that are being operated on by the SCSI layer's error handler
	3. all new commands that are queued to the device

up to a higher level like multipath, filesystem layer, or to the application.

The setting is in seconds. Zero means fail immediately. -1 means an
infinite timeout, which will wait until iscsid does a relogin, the user
runs the iscsiadm logout command, or the node.session.reopen_max limit
is hit.

When this timer is started, the iSCSI layer will stop new IO from executing
and requeue running commands to the Block/SCSI layer. The new and requeued
commands will then sit in the Block/SCSI layer queue until the timeout has
expired, there is userspace intervention like an iscsiadm logout command, or
there is a successful relogin. If the command has run out of retries, the
command will be failed instead of being requeued.

After this timer has expired iscsid can continue to try to relogin. By default
iscsid will continue to try to relogin until there is a successful relogin or
until the user runs the iscsiadm logout command. The number of relogin retries
is controlled by the Open-iSCSI setting node.session.reopen_max. If that is set
too low, iscsid may give up and forcefully logout the session (equivalent to
running the iscsiadm logout command on a failed session) before replacement
timeout seconds. This will result in all commands being failed at that time.
The user would then have to manually relogin.

This timer starts when you see the connection error message:

	detected conn error (%d)

in the kernel log. The %d will be an integer with the following mappings
and meanings:

Int     Kernel define           Description
value
------------------------------------------------------------------------------
1	ISCSI_ERR_DATASN	Low level iSCSI protocol error where a data
				sequence value did not match the expected value.
2	ISCSI_ERR_DATA_OFFSET	There was an error where we were asked to
				read/write past a buffer's length.
3	ISCSI_ERR_MAX_CMDSN	Low level iSCSI protocol error where we got an
				invalid MaxCmdSN value.
4	ISCSI_ERR_EXP_CMDSN	Low level iSCSI protocol error where the
				ExpCmdSN from the target didn't match the
				expected value.
5	ISCSI_ERR_BAD_OPCODE	The iSCSI Target has sent an invalid or unknown
				opcode.
6	ISCSI_ERR_DATALEN	The iSCSI target has sent a PDU with a data
				length that is invalid.
7	ISCSI_ERR_AHSLEN	The iSCSI target has sent a PDU with an invalid
				Additional Header Length.
8	ISCSI_ERR_PROTO		The iSCSI target has performed an operation that
				violated the iSCSI RFC.
9	ISCSI_ERR_LUN		The iSCSI target has requested an invalid LUN.
10	ISCSI_ERR_BAD_ITT       The iSCSI target has sent an invalid Initiator
				Task Tag.
11	ISCSI_ERR_CONN_FAILED   Generic error that can indicate the transmission
				of a PDU, like a SCSI cmd or task management
				function, has timed out. Or, we are not able to
				transmit a PDU because the network layer has
				returned an error, or we have detected a
				network error like a link down. It can
				sometimes be an error that does not fit the
				other error codes, such as when a kernel
				function has returned a failure and there is
				no other way to recover except to kill the
				existing session and relogin.
12	ISCSI_ERR_R2TSN		Low level iSCSI protocol error where the R2T
				sequence numbers do not match.
13	ISCSI_ERR_SESSION_FAILED
				Unused.
14	ISCSI_ERR_HDR_DGST	iSCSI Header Digest error.
15	ISCSI_ERR_DATA_DGST	iSCSI Data Digest error.
16	ISCSI_ERR_PARAM_NOT_FOUND
				Userspace has passed the kernel an unknown
				setting.
17	ISCSI_ERR_NO_SCSI_CMD	The iSCSI target has sent a ITT for an unknown
				task.
18	ISCSI_ERR_INVALID_HOST	The iSCSI Host is no longer present or being
				removed.
19	ISCSI_ERR_XMIT_FAILED	The software iSCSI initiator or cxgb was not
				able to transmit a PDU because of a network
				layer error.
20	ISCSI_ERR_TCP_CONN_CLOSE
				The iSCSI target has closed the connection.
21	ISCSI_ERR_SCSI_EH_SESSION_RST
				The SCSI layer's Error Handler has timed out
				the SCSI cmd, tried to abort it and possibly
				tried to send a LUN RESET, and it's now
				going to drop the session.
22	ISCSI_ERR_NOP_TIMEDOUT	An iSCSI Nop as a ping has timed out.


8.1.4 Running Commands, the SCSI Error Handler, and replacement_timeout
=======================================================================

Each SCSI command has a timer controlled by:

	/sys/block/sdX/device/timeout

The value is in seconds and the default ranges from 30 - 60 seconds
depending on the distro's udev scripts.

When a command is sent to the iSCSI layer the timer is started, and when it's
returned to the SCSI layer the timer is stopped. This could be due to
successful completion, or to a retry/requeue caused by a conn error as
described previously. If a command is retried, the timer is reset.

When the command timer fires, the SCSI layer will ask the iSCSI layer to abort
the command by sending an ABORT_TASK task management request. If the abort
is successful the SCSI layer retries the command if it has enough retries left.
If the abort times out, the iSCSI layer will report failure to the SCSI layer
and will fire an ISCSI_ERR_SCSI_EH_SESSION_RST error. In the logs you will see:

	detected conn error (21)

The ISCSI_ERR_SCSI_EH_SESSION_RST will cause the connection/session to be
dropped and the iSCSI layer will start the replacement_timeout operations
described in that section.

The SCSI layer will then eventually call the iSCSI layer's target/session reset
callout which will wait for the replacement timeout to expire, a successful
relogin to occur, or for userspace to logout the session.

- If the replacement timeout fires, then commands will be failed upwards as
described in the replacement timeout section. The SCSI devices will be put
into an offline state until iscsid performs a relogin.

- If a relogin occurs before the timer fires, commands will be retried if
possible.

To check if the SCSI error handler is running, iscsiadm can be run as:

	iscsiadm -m session -P 3

and you will see:

	Host Number: X State: Recovery
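
If you want to check for this state from a script, grepping the -P 3
output is enough. A minimal sketch, run here against simulated output
since it needs a live session to test against the real command:

```shell
#!/bin/sh
# Return success if any session's host is in SCSI EH recovery.
# Expects "iscsiadm -m session -P 3" output on stdin.
in_recovery() {
    grep -q 'State: Recovery'
}

# Simulated output line from "iscsiadm -m session -P 3":
if in_recovery <<'EOF'
Host Number: 4 State: Recovery
EOF
then
    echo "SCSI error handler running"
fi
```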

To modify the timer that starts the SCSI EH, you can either write
directly to the device's sysfs file:

	echo X > /sys/block/sdX/device/timeout

where X is in seconds.
Alternatively, on most distros you can modify the udev rule.

To modify the udev rule open /etc/udev/rules.d/50-udev.rules, and find the
following lines:

	ACTION=="add", SUBSYSTEM=="scsi" , SYSFS{type}=="0|7|14", \
		RUN+="/bin/sh -c 'echo 60 > /sys$$DEVPATH/timeout'"

And change the "echo 60" part of the line to the value that you want.

The default timeout for normal File System commands is 30 seconds when udev
is not being used. If udev is used the default is the above value which
is normally 60 seconds.


8.1.5 Optimal replacement_timeout Value
=======================================

The default value for replacement_timeout is 120 seconds, but because
multipath's queue_if_no_path and no_path_retry setting can prevent IO errors
from being propagated to the application, replacement_timeout can be set to a
shorter value like 5 to 15 seconds. By setting it lower, pending IO is quickly
sent to a new path and executed while the iSCSI layer attempts
re-establishment of the session. If all paths end up being failed, then the
multipath and device mapper layer will internally queue IO based on the
multipath.conf settings, instead of the iSCSI layer.


8.2 iSCSI settings for iSCSI root
=================================

When accessing the root partition directly through an iSCSI disk, the
iSCSI timers should be set so that the iSCSI layer has several chances to
try to re-establish a session, and so that commands are not quickly
requeued to the SCSI layer. Basically you want the opposite of when using
dm-multipath.

For this setup, you can turn off iSCSI pings (NOPs) by setting:

	node.conn[0].timeo.noop_out_interval = 0
	node.conn[0].timeo.noop_out_timeout = 0

And you can set replacement_timeout to a very long value:

	node.session.timeo.replacement_timeout = 86400


8.3 iSCSI settings for iSCSI tape
=================================

It is possible to use open-iscsi to connect to a remote tape drive,
making it available locally. In such a case, you need to disable NOP-Outs,
since tape drives don't handle them well at all. See above (section 8.2)
for how to disable these NOPs.


9. iSCSI System Info
====================

To get information about the running sessions, including the session and
device state, the session ids (sid) for session mode, and some of the
negotiated parameters, run:

	iscsiadm -m session -P 2

If you are looking for something shorter, like just the sid to node mapping,
run:

	iscsiadm -m session [-P 0]

This will print the list of running sessions with the format:

	driver [sid] ip:port,target_portal_group_tag targetname

Example output of "iscsiadm -m session":

	tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

To print the HW address info, use the -P option with "1":

	iscsiadm -m session -P 1

This will print the sessions with the following format:

	Target: targetname
		Current Portal: portal currently logged into
		Persistent Portal: portal we would fall back to if we had been
				   redirected during login
			Iface Transport: driver/transport_name
			Iface IPaddress: IP address of iface being used
			Iface HWaddress: HW address used to bind session
			Iface Netdev: netdev value used to bind session
			SID: iscsi sysfs session id
			iSCSI Connection State: iscsi state

Note: if an older kernel is being used or if the session is not bound,
then the keyword "default" is printed to indicate that the default
network behavior is being used.

Example output of "iscsiadm -m session -P 1":

	Target: iqn.1992-08.com.netapp:sn.33615311
		Current Portal: 10.15.85.19:3260,3
		Persistent Portal: 10.15.85.19:3260,3
			Iface Transport: tcp
			Iface IPaddress: 10.11.14.37
			Iface HWaddress: default
			Iface Netdev: default
			SID: 7
			iSCSI Connection State: LOGGED IN
			Internal iscsid Session State: NO CHANGE

The connection state is currently not available for qla4xxx.

To get an HBA/host view of the session, there is the host mode:

	iscsiadm -m host

This prints the list of iSCSI hosts in the system with the format:

	driver [hostno] ipaddress,[hwaddress],net_ifacename,initiatorname

Example output:

	cxgb3i: [7] 10.10.15.51,[00:07:43:05:97:07],eth3 <empty>

To print this info in a more user-friendly way, the -P argument can be used:

	iscsiadm -m host -P 1

Example output:

	Host Number: 7
		State: running
		Transport: cxgb3i
		Initiatorname: <empty>
		IPaddress: 10.10.15.51
		HWaddress: 00:07:43:05:97:07
		Netdev: eth3

Here, you can also see the state of the host.

You can also pass in any value from 1 to 4 to print more info, such as the
sessions running through the host, which ifaces are being used, and which
devices are accessed through it.

To print the info for a specific host, you can pass in the -H argument
with the host number:

	iscsiadm -m host -P 1 -H 7


targetcli-fb's Issues

ZFS block backstore support

Hello,

I have a fresh install of Fedora 20, like so:
[root@philippe ~]# yum list | grep rtslib
python-rtslib.noarch 2.1.fb46-1.fc20 @updates
python-rtslib-doc.noarch 2.1.fb46-1.fc20 updates
python3-rtslib.noarch 2.1.fb46-1.fc20 updates
[root@philippe ~]# uname -r
3.11.10-301.fc20.x86_64
I had some problems until I put "options qla2xxx qlini_mode=disabled" into /usr/lib/modprobe.d/qla2xxx.conf
and rebuilt the initramfs:

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
dracut /boot/initramfs-$(uname -r).img $(uname -r)

With this I can create:
/backstores/block> /qla2xxx create naa.2100001b3215dd57
Created target naa.2100001b3215dd57.
I was happy.

But at the reboot
[root@philippe target]# more saveconfig.json
{
    "fabric_modules": [],
    "storage_objects": [],
    "targets": []
}

Following this comment:

---->
Bruno Goncalves 2013-04-26 11:28:10 EDT

Correct, configuring the process to start on boot solves the problem.

chkconfig targetcli on

systemctl list-unit-files | grep targetcli
targetcli.service enabled
-->
But after the reboot, same issue.
Today:
[root@philippe ~]# systemctl list-unit-files | grep targetcli
targetcli.service enabled
[root@philippe ~]#

But if I save the config into saveconfig.json.1 before the reboot,
and
[root@philippe target]# cp saveconfig.json.1 saveconfig.json
cp: overwrite ‘saveconfig.json’? y
[root@philippe target]# systemctl restart targetcli
wait ....
[root@philippe target]#
[root@philippe target]# targecli
bash: targecli: command not found...
[root@philippe target]# targetcli
targetcli shell version 2.1.fb30
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ............................................................................... [...]
o- backstores .................................................................... [...]
| o- block ........................................................ [Storage Objects: 1]
| | o- backend_x2200 .......................... [/dev/zd0 (1.0GiB) write-thru activated]
| o- fileio ....................................................... [Storage Objects: 0]
| o- pscsi ........................................................ [Storage Objects: 0]
| o- ramdisk ...................................................... [Storage Objects: 0]
o- iscsi .................................................................. [Targets: 0]
o- loopback ............................................................... [Targets: 0]
o- qla2xxx ................................................................ [Targets: 1]
| o- naa.2100001b3215dd57 ................................................... [gen-acls]
| o- acls .................................................................. [ACLs: 0]
| o- luns .................................................................. [LUNs: 1]
| o- lun0 ......................................... [block/backend_x2200 (/dev/zd0)]
o- vhost .................................................................. [Targets: 0]
/> exit

GOOD !!!
Pascal.

add luns directly to acl

see https://bugzilla.redhat.com/show_bug.cgi?id=828697

when auto_add_mapped_luns is false:

instead of

  1. create backstore
  2. add to target
  3. add mapping of target->lun to acl->mapped_lun

it should be

  1. create backstore object (aka storage object)
  2. add storage object to ACL's mapped lun

i.e. the target map step should happen automatically. I need to look into what purpose the separate step even really serves.

saveconfig doesn't save discovery auth parameters

targetcli "/iscsi set discovery_auth enable=1"
targetcli "/iscsi set discovery_auth mutual_userid=target"
targetcli "/iscsi set discovery_auth mutual_password=itsreallyme"
targetcli "/iscsi set discovery_auth userid=initiator"
targetcli "/iscsi set discovery_auth password=letmein"
targetcli "/ saveconfig"

less /etc/target/saveconfig.json

[snip]

"targets": [
    {
        "fabric": "iscsi",
        "tpgs": [
            {
                "attributes": {
                    "authentication": 1,
                    "cache_dynamic_acls": 0,
                    "default_cmdsn_depth": 16,
                    "demo_mode_write_protect": 1,
                    "generate_node_acls": 0,
                    "login_timeout": 15,
                    "netif_timeout": 2,
                    "prod_mode_write_protect": 0
                },

[snip]

Support Python 3

We've been talking about it over on #23, but that bug really is about taking advantage of Python 3 once we can run on both. This bug is to separately track that first step.

I just pushed a change to make python-ethtool optional -- it seems a shame to hold up Python 3 work due to one trivial usage of that library. @cvubrugier or @JonnyJD do you see any other barriers?

Can you add more examples of how to interact with the user backend?

I've given user:file_async a try:

/backstores/user:file_async> create name=test size=1G cfgstring=/tmp/1
'size', 'level', and 'config' must be set when creating a new UserBackedStorageObject

and I can't find a way to specify 'level'.
Running tcmu-runner under strace shows me:

access("/tmp/1", W_OK)                  = 0
Mar 15 18:29:38 ceph-osd-01 strace[26439]: write(8, "\1\0\0\0\0\0\0\0", 8)         = 8

Maybe I misunderstand something? Please add some examples.

Thanks

ib_srpt not working

Trying to create an IB SRP target (I have IB in my server) :

/ib_srpt> create eui.0002c903000e3e77

The shell returns this :

The underlying rtslib object for /ib_srpt does not exist.

issue with translate

I'm not sure exactly which project to put this bug on (rtslib-fb or this), so I've decided to stick it here.

I've attempted to get targetcli-fb built on a Debian wheezy machine, and when I try and run it, I get the following error:

translate() takes exactly one argument (2 given)

I'm running the stock version of python3 supplied with debian (Python 3.2.3).

I did a little digging and the help function in python supplies the following definition:

translate(...)
    S.translate(table) -> str

    Return a copy of the string S, where all characters have been mapped
    through the given translation table, which must be a mapping of
    Unicode ordinals to Unicode ordinals, strings, or None.
    Unmapped characters are left untouched. Characters mapped to None
    are deleted.

Digging through the sources of rtslib-fb reveals the following function:

def from_fabric_wwn(self, wwn):
        if wwn.startswith("0x"):
            wwn = wwn[2:]
        return "naa." + wwn.translate(None, ":")

Based on the definition of the function, it appears something is wrong with the usage; perhaps it should be replaced with something like string.replace()
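For reference, a minimal sketch of the difference (the WWN value below is
made up): Python 2's str.translate(None, ":") deleted characters, Python 3
expects a translation table built with str.maketrans, and plain str.replace
sidesteps the incompatibility entirely:

```shell
# Python 2 form (fails on Python 3): wwn.translate(None, ":")
# Python 3 form: build a deletion table with str.maketrans.
python3 -c '
wwn = "0x50:01:43:80:16:7c"                 # made-up example WWN
print("naa." + wwn[2:].translate(str.maketrans("", "", ":")))
print("naa." + wwn[2:].replace(":", ""))    # simpler, works on 2 and 3
'
```

Both lines print naa.50014380167c, which is why replace() is the easy fix.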

Provide user backend registration

The TCM-Userspace (TCMU) handler types are currently only queried from tcmu-runner's DBus service. While the latter has recently introduced a proxy protocol for other handler processes to register themselves so they can be used in targetcli (see tcmulib_register() in libtcmu library), the drawback of this indirection layer is that it makes tcmu-runner service an extra dependency of other handlers.

If we can add a daemon process in targetd that publishes a handler registration service by itself (via DBus or other measures), the custom handler process can then work independently from tcmu-runner daemon.

@agrover @pkalever @vbellur Let's use this issue to explore this idea and if we can reach a consensus, I'll look into sending a PR.

show fileio buffered status in UI

rtslib exposes the fileio buffered status, but we do not show it anywhere in the UI. We should show it somewhere, since it was an (optional) parameter when creating the storageobject.

Use fallocate instead of ftruncate in UIFileIOBackstore._create_file()

In order to create a file backstore, targetcli invokes os.ftruncate(). The ftruncate syscall sets the file size to the requested size but does not reserve the blocks in the file system. As a consequence, ftruncate succeeds even if there is not enough free space. This is a kind of thin provisioning, but I find it dangerous because you may run out of space later when writing data.

I think that targetcli should use fallocate instead of ftruncate. Here is an example that highlights the difference between ftruncate and fallocate:

Mount a 100 MB tmpfs file system

$ sudo mount -t tmpfs -o size=100m tmpfs /mnt

With truncate, the disk usage does not change:

$ truncate -s 12345678 /mnt/a
$ df -h /mnt/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           100M     0  100M   0% /mnt
$ stat -c %s /mnt/a
12345678
$ du -s /mnt/a
0       /mnt/a

With fallocate, the disk usage is correctly updated:

$ fallocate -l 12345678 /mnt/a
$ df -h /mnt/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           100M   12M   89M  12% /mnt
$ stat -c %s /mnt/a
12345678
$ du -s /mnt/a
12060   /mnt/a

Unfortunately, fallocate is not available in python before version 3.3

As a consequence, if we want to use fallocate, we need to invoke the fallocate program shipped with util-linux or write our own wrapper.

I can work on a patch to fix this issue.

RFC: remove packaging from upstream

With the Debian packaging initiative making progress, I think it is time to maybe reconsider the future of our in-repo packaging.

Debian packagers have reported in the past that having a "debian" directory makes their work harder when merging the upstream branch into their packaging branch. Moreover, our in-repo Debian packaging is less complete than the "soon to be ready" Debian packaging. The same remark may apply to our in-repo RPM packaging.

Maybe it is time to drop our in-repo packaging and invite people to install the targetcli-fb package from their distro in our README.md. We could replace the "in-repo packaging" section with a list of distributions that ship targetcli-fb (RHEL, Fedora, Suse, Archlinux, and soon Debian).

What do you think?

targetcli / rtslib won't create sbp targets

I had to make the attached patch to rtslib-fb-2.1.fb40/rtslib/fabric.py in order for targetcli to successfully create a sbp2 target on kernel-3.12.0-0.rc2.git0.1.fc21.x86_64

--- fabric.py   2013-09-23 16:44:55.000000000 -0400
+++ fabric.py.new   2013-09-25 13:36:36.000000000 -0400
@@ -336,10 +336,10 @@ class SBPFabricModule(_BaseFabricModule)
         self.kernel_module = "sbp_target"

     def to_fabric_wwn(self, wwn):
-        return "0x" + wwn[4:]
+        return wwn[4:]

     def from_fabric_wwn(self, wwn):
-        return "eui." + wwn[2:]
+        return "eui." + wwn

     # return 1st local guid (is unique) so our share is named uniquely
     @property
@@ -347,7 +347,10 @@ class SBPFabricModule(_BaseFabricModule)
         for fname in glob("/sys/bus/firewire/devices/fw*/is_local"):
             if bool(int(fread(fname))):
                 guid_path = os.path.dirname(fname) + "/guid"
-                yield self.from_fabric_wwn(fread(guid_path))
+                tmp = fread(guid_path)
+                if tmp[0:2] == '0x':
+                    tmp = tmp[2:]
+                yield self.from_fabric_wwn(tmp)
                 break

Error when creating iSCSI ACLs .

Whenever I try to create an ACL under iSCSI TPG , the shell throws this :

Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/configshell/shell.py", line 990, in run_interactive
self._cli_loop()
File "/usr/lib/python2.7/site-packages/configshell/shell.py", line 813, in _cli_loop
self.run_cmdline(cmdline)
File "/usr/lib/python2.7/site-packages/configshell/shell.py", line 934, in run_cmdline
self._execute_command(path, command, pparams, kparams)
File "/usr/lib/python2.7/site-packages/configshell/shell.py", line 909, in _execute_command
result = target.execute_command(command, pparams, kparams)
File "/usr/lib/python2.7/site-packages/targetcli/ui_node.py", line 87, in execute_command
pparams, kparams)
File "/usr/lib/python2.7/site-packages/configshell/node.py", line 1405, in execute_command
result = method(_pparams, *_kparams)
File "/usr/lib/python2.7/site-packages/targetcli/ui_target.py", line 489, in ui_command_create
ui_node_acl = UINodeACL(node_acl, self)
File "/usr/lib/python2.7/site-packages/targetcli/ui_target.py", line 661, in init
super(UINodeACL, self).init(name, self.rtsnodes[0], parent)
IndexError: list index out of range

This doesn't happen with the RTS branch , it only happens in the FP .
Can someone shed some light on this issue ?
Thank you in advance .

Which versions of kernel/python packages behave best with targetcli-fb and on which platform?

Hi there,

I have recently installed targetcli-fb on Manjaro (Arch).
The kernel was possibly 3.18 and it was OK.
Then I did an update to kernel 4.1 and it seemed to be OK.
Another week ago, I did an update to kernel 4.2/4.3 and targetcli wasn't working anymore.
Rolling back is tricky, it involves individual package management which quickly becomes time consuming.

Oddly enough, the same behaviour occurred on Fedora 23 beta using targetcli packages from the Fedora repos. Again, rolling back is time-consuming on Fedora 23.

Even more odd is last week Debian Sid packages listed targetcli as part of their repo. I installed that and then this past monday when I updated, some python packages along with the kernel were affected and targetcli wasn't working anymore. Again rolling back is time-consuming for Debian.

From what I experienced, both targetcli/targetcli-fb seem to be fragile and vulnerable to python/kernel updates.

So here is my question: which Linux kernel version and python packages versions produce expected well-behaving targetcli?

How can I prevent targetcli from breaking from:
apt-get update
pacman -Syu
dnf update
?

What's the preferred Linux distro for targetcli-fb? It seems arch(manjaro) has it in their repos, but Debian Sid and Fedora 23 beta provide targetcli(non-fb), but not currently well-behaving for me.

Thank you for listening

New feature: document target parameters

The get attribute and get parameter commands in targetcli do not provide much information because the description is always the same: The foobar attribute/parameter. I would like to improve this situation and I have started some work in this direction. For instance, targetcli get parameter in the TPG context would display:

InitialR2T=Yes
--------------
If set to No, the default use of R2T (Ready To Transfer) is disabled.

MaxBurstLength=262144
---------------------
Maximum SCSI data payload in bytes in a Data-In or a solicited Data-Out iSCSI sequence.

I have created a branch named document-parameters that shows a proof of concept implementation. Basically, subclasses of UIRTSLibNode can declare attributes and parameters class members to document their configfs parameters. As an illustration, I have added description for the TPG parameters. My implementation is a bit hackish though.

@agrover and others, what do you think about this feature? Is it worthwhile and how can I improve it?

P.S. : my targetcli "document-parameters" branch depends on some changes published in my configshell-fb "document-parameters" branch.

properly use stderr

All messages are going to stdout. Error messages should instead go to stderr.

targetcli won't restore config properly during boot

It restores the backstores and LUNs, but no ACLs or portals.

I added systemd.log_level=debug systemd.log_target=kmsg to /etc/default/grub:GRUB_CMDLINE_LINUX

[root@station10 ~]# dmesg | grep targetcli
[ 11.814301] targetcli[863]: Loaded tcm_loop kernel module.
[ 11.820515] targetcli[863]: Created '/sys/kernel/config/target/loopback'.
[ 11.820520] targetcli[863]: Done loading loopback fabric module.
[ 11.829018] targetcli[863]: Loaded tcm_fc kernel module.
[ 11.829127] targetcli[863]: Created '/sys/kernel/config/target/fc'.
[ 11.829263] targetcli[863]: Done loading tcm_fc fabric module.
[ 11.847275] targetcli[863]: Loaded iscsi_target_mod kernel module.
[ 11.848030] targetcli[863]: Created '/sys/kernel/config/target/iscsi'.
[ 11.848077] targetcli[863]: Done loading iscsi fabric module.
[ 11.939993] abrt[863]: detected unhandled Python exception in '/usr/bin/targetcli'
[ 11.954804] targetcli[863]: Traceback (most recent call last):
[ 11.954807] targetcli[863]: File "/usr/bin/targetcli", line 84, in
[ 11.954810] targetcli[863]: main()
[ 11.954812] targetcli[863]: File "/usr/bin/targetcli", line 71, in main
[ 11.954815] targetcli[863]: shell.run_cmdline(" ".join(sys.argv[1:]))
[ 11.954817] targetcli[863]: File "/usr/lib/python2.7/site-packages/configshell/shell.py", line 934, in run_cmdline
[ 11.954820] targetcli[863]: self._execute_command(path, command, pparams, kparams)
[ 11.954824] targetcli[863]: File "/usr/lib/python2.7/site-packages/configshell/shell.py", line 909, in _execute_command
[ 11.954826] targetcli[863]: result = target.execute_command(command, pparams, kparams)
[ 11.954829] targetcli[863]: File "/usr/lib/python2.7/site-packages/targetcli/ui_node.py", line 83, in execute_command
[ 11.954832] targetcli[863]: self.shell.log.error(msg)
[ 11.954858] targetcli[863]: File "/usr/lib/python2.7/site-packages/configshell/log.py", line 159, in error
[ 11.954862] targetcli[863]: self._log('error', msg)
[ 11.954865] targetcli[863]: File "/usr/lib/python2.7/site-packages/configshell/log.py", line 112, in _log
[ 11.954867] targetcli[863]: self.con.display(msg)
[ 11.954870] targetcli[863]: File "/usr/lib/python2.7/site-packages/configshell/console.py", line 152, in display
[ 11.954888] targetcli[863]: self.raw_write(text)
[ 11.954891] targetcli[863]: File "/usr/lib/python2.7/site-packages/configshell/console.py", line 140, in raw_write
[ 11.954911] targetcli[863]: self._stdout.write(text)
[ 11.954913] targetcli[863]: TypeError: expected a character buffer object
[ 12.020244] systemd[1]: targetcli.service: main process exited, code=exited, status=1
[ 12.032296] systemd[1]: Unit targetcli.service entered failed state.
[root@station10 ~]#

TargetCLI disables APTPL by default

TargetCLI seems to create LUNs with APTPL set to disabled rather than enabled?

There is seemingly no way to enable this manually after TargetCLI has created them, so you can never persist the SCSI ID metadata between hosts:

# cat /sys/kernel/config/target/core/iblock_0/iscsi_lun_r0/pr/res_*
APTPL Bit Status: Disabled
Ready to process PR APTPL metadata..
No SPC-3 Reservation holder
No SPC-3 Reservation holder
0x00000000
No SPC-3 Reservation holder
SPC-3 PR Registrations:
None
No SPC-3 Reservation holder
SPC3_PERSISTENT_RESERVATIONS

This prevents reliable failover between targets with Xen / XenServer initiators.

update manpage

update manpage for discovery auth, other new stuff (?), and update set authentication bit to wherever that actually is

Can't map rbd by symlink

Hi,
I've tried to use LIO to export RBD via iSCSI.
/dev/rbd? is not persistent, but the symlink
/dev/rbd/<pool>/<name> is persistent.
If I try to add a fileio backend via the symlink, targetcli just falls back to the /dev/rbd? path.

Can I fix it?

It's a problem if I want to export several images from one iSCSI target.

resizing the lun exposed by tcmu-runner

I was emailed a request (below) for resizable tcmu backstores, but I'm not sure to what degree we support resizing kernel-based backstores. We need to understand the capabilities better and implement resize support for both kinds. Here's the email:

It will be super cool, if we could introduce a resize option for the
LUN exposed from gluster volume as a backing store.

Currently I have done this manually with below steps,

Assuming the LUN was created with size 8G  and already in use by an initiator
1. umount the device and logout from the initiator
       # umount /mnt
       # multipath -F
       # iscsiadm -m node --logout

2. expand the target file size (on nodes belong to gluster volume)
       # mount -t glusterfs 10.70.42.151:/iscsi-store /mnt
       # truncate -s +2G /mnt/app-store.img
       # umount /mnt
       # Edit /etc/target/saveconfig to have new size 10G in
storage_objects -> "size": 10737418240'
       # systemctl restart target

3. login to iscsi target from initiator node and
       # iscsiadm -m discovery -t st -p 10.70.42.151 -l
       # parted -l /dev/sda (notice the target size changed)
       # mount /dev/sda /mnt -o sync
       # df -Th   (notice fs size is still 8G)
       # xfs_growfs  /dev/sda
       # df -Th (Boom!! see the grown xfs layout size to 10G)

discovery_auth cannot be disabled

Have enabled discovery_auth by issuing:
set discovery_auth enable=1 userid=username password=mypassword

Now I want to disable it and it doesn't obey the command:
/iscsi> set discovery_auth enable=0
Parameter enable is now '1'.

How come '1'? Shouldn't it confirm, saying it is now '0'?

Using:
Ubuntu 14.04 - Kernel 3.13.0-24
targetcli 2.1-1

sessions command doesn't work with ib_srpt

reported by @celesteking in #12.

Our current sessions cmd works by parsing the nodeacl info file in configfs. see https://github.com/open-iscsi/rtslib-fb/blob/master/rtslib/target.py#L922. One way to solve this would be change ib_srpt in the kernel to report sessions there, so targetcli/rtslib would pick it up without change. Or, we could do something in rtslib to get this info for srpt another way, but this may be difficult because we don't currently have an abstraction that would let us override NodeACL behavior for a particular fabric type.

Is some config migration from targetcli to targetcli-fb available/possible?

I am basically forwarding this request from today (2014-03-11) from Arch Linux:
https://aur.archlinux.org/packages/targetcli-fb/

Is there a migration/upgrade strategy to move the targetcli/lio-utils *.py "configuration" to the targetcli-fb/rtslib-fb JSON format?
(apart from reading the *.py configuration and manually creating the json files)

From what I grasp, at least part could be read into configfs with the old configuration and could be extracted and saved to json from the new python code.

This possibly would work by having lio-utils installed to read/start the old configuration and having targetcli-fb/rtslib-fb installed (or available) without interfering with the running setup.

Don't collect random data on first start

Hi,
I use targetcli-fb on a virtual machine to proxy a Ceph RBD disk.

(I've tried using haveged, but it didn't help me.)

As I see it, targetctl takes 1m+ to initialize, and as strace shows, it's because targetctl tries to get random data during initialization.

Maybe this process can be offloaded from the first start?

/*
As I understand it, the random data is used for generating UUIDs and other random stuff when creating a new target/LUN.
Maybe it can be done in targetcli?
*/

Feel free to kick me if I missed something.

Backups only written if backup directory exists.

The following bug has recently been reported in the Debian bugtracker:
https://bugs.debian.org/858459

The problem is that targetcli claims on exit (and when manually saving) that a backup file was written, but in reality any error when writing the file is simply ignored.

I don't think it's necessarily wrong to not fail with an error here, but at least a warning message should be printed if the backup file could not be written.

Furthermore it might be prudent to try to create the backup directory if it doesn't already exist.

I'll happily provide a patch (+ pull request) for adding the warning message, but before I do so I'd like to know if you'd be in favor of auto-creating the backup directory if it doesn't already exist.

robust restoreconfig

restoring should not barf on any one item, but should emit a diagnostic and continue loading the rest of the stuff.

typo in man?

The man has examples with:

iqn.2006-04.example.com:test-target

but according to everywhere else i read, those iqn wwn addresses should be:

iqn.2006-04.com.example:test-target

Backtrace when attaching tape drive to Backstores/pscsi

Attaching a tape device to /backstores/pscsi using the SCSI tape hardware path, the system issues a backtrace and the tape fails to be attached:

Command:

/> /backstores/pscsi create name=Ultrium-tape dev=9:0:0:0

Backtrace:
Traceback (most recent call last):
File "/usr/bin/targetcli", line 5, in
pkg_resources.run_script('targetcli-fb==2.1.fb43', 'targetcli')
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_scr ipt
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1462, in run_sc ript
exec_(script_code, namespace, namespace)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 41, in exec_
exec("""exec code in globs, locs""")
File "", line 1, in
File "/usr/lib/python2.7/site-packages/targetcli_fb-2.1.fb43-py2.7.egg/EGG-INF O/scripts/targetcli", line 121, in

File "/usr/lib/python2.7/site-packages/targetcli_fb-2.1.fb43-py2.7.egg/EGG-INF O/scripts/targetcli", line 111, in main

File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 894, in run_interactive
self._cli_loop()
File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 723, in _cli_loop
self.run_cmdline(cmdline)
File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 837, in run_cmdline
self._execute_command(path, command, pparams, kparams)
File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 812, in _execute_command
result = target.execute_command(command, pparams, kparams)
File "/usr/lib/python2.7/site-packages/configshell_fb/node.py", line 1406, in execute_command
return method(_pparams, *_kparams)
File "build/bdist.linux-x86_64/egg/targetcli/ui_backstore.py", line 229, in ui _command_create
File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 306, in _init _
self._configure(dev)
File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 339, in _config ure
lunid)
File "/usr/lib/python2.7/site-packages/rtslib_fb/utils.py", line 293, in conve rt_scsi_hctl_to_path
scsi_device = pyudev.Device.from_name(_CONTEXT, 'scsi', ':'.join(hctl))
TypeError: sequence item 0: expected string, int found

No such file or directory: '/var/lib/target/fabric'

I got this:

[root@dev ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
Traceback (most recent call last):
File "/usr/bin/targetcli", line 87, in
main()
File "/usr/bin/targetcli", line 65, in main
root_node.refresh()
File "/usr/lib/python2.6/site-packages/targetcli/ui_root.py", line 50, in refresh
for fm in RTSRoot().fabric_modules:
File "/usr/lib/python2.6/site-packages/rtslib/root.py", line 115, in _list_fabric_modules
for mod in FabricModule.all():
File "/usr/lib/python2.6/site-packages/rtslib/target.py", line 55, in all
mod_names = [mod_name[:-5] for mod_name in os.listdir(spec_dir)
OSError: [Errno 2] No such file or directory: '/var/lib/target/fabric'

Works fine after mkdir'ing it.

Masao (RPM clueless)

Does -fb support RBD?

I've seen patches telling me it does and should, but when I rip out distro provided (and horribly broken) targetcli and try to load targetcli-fb, it chokes on rbd devices being loaded.. so uh.. have the RBD patches been merged yet, and if not, why not?

Check for invalid integer when creating a TPG

The command for creating a TPG requires a single argument which must be a positive integer. If an argument is given that isn't an integer (like "tpg2" instead of just "2") the targetcli dies with "ValueError: invalid literal for int() with base 10".

cannot create pscsi changer using mhvtl changer device

I'm trying to expose an mhvtl virtual media changer device via iscsi.
I'm also trying to associate a real optical drive as one of the devices the media changer can move media to. I did read about tcm_node doing something to that effect, but it's in lio-utils and deprecated. Can tcm_node coexist with targetcli? I also heard about a tcmu-runner plugin handler being a possible place to put something like this. Any recommendations? Thank you.

tried /dev/sch0
tried /dev/sg16
tried 8:0:0:0
None of these work.

targetcli

targetcli shell version 2.1.fb43
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> cd /backstores/pscsi
/backstores/pscsi> ls
o- pscsi ...................................................................................................... [Storage Objects: 0]
/backstores/pscsi> create name=pscsi_sch0 dev=/dev/sch0
Cannot find SCSI device by path, and dev parameter not in H:C:T:L format: /dev/sch0
/backstores/pscsi> create name=pscsi_sch0 dev=/dev/sg16
Cannot find SCSI device by path, and dev parameter not in H:C:T:L format: /dev/sg16
/backstores/pscsi> create name=pscsi_sch0 dev=8:0:0:0
Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/rtslib_fb/tcm.py", line 319, in _configure
    convert_scsi_path_to_hctl(dev)
TypeError: 'NoneType' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/rtslib_fb/utils.py", line 304, in convert_scsi_hctl_to_path
    scsi_device = pyudev.Device.from_name(_CONTEXT, 'scsi', ':'.join(hctl))
TypeError: sequence item 0: expected str instance, int found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/targetcli", line 121, in <module>
    main()
  File "/usr/bin/targetcli", line 111, in main
    shell.run_interactive()
  File "/usr/lib/python3.5/site-packages/configshell_fb/shell.py", line 894, in run_interactive
    self._cli_loop()
  File "/usr/lib/python3.5/site-packages/configshell_fb/shell.py", line 723, in _cli_loop
    self.run_cmdline(cmdline)
  File "/usr/lib/python3.5/site-packages/configshell_fb/shell.py", line 837, in run_cmdline
    self._execute_command(path, command, pparams, kparams)
  File "/usr/lib/python3.5/site-packages/configshell_fb/shell.py", line 812, in _execute_command
    result = target.execute_command(command, pparams, kparams)
  File "/usr/lib/python3.5/site-packages/configshell_fb/node.py", line 1406, in execute_command
    return method(*pparams, **kparams)
  File "/usr/lib/python3.5/site-packages/targetcli/ui_backstore.py", line 229, in ui_command_create
    so = PSCSIStorageObject(name, dev)
  File "/usr/lib/python3.5/site-packages/rtslib_fb/tcm.py", line 306, in __init__
    self._configure(dev)
  File "/usr/lib/python3.5/site-packages/rtslib_fb/tcm.py", line 336, in _configure
    lunid)
  File "/usr/lib/python3.5/site-packages/rtslib_fb/utils.py", line 305, in convert_scsi_hctl_to_path
    except DeviceNotFoundError:
NameError: name 'DeviceNotFoundError' is not defined

lsscsi -g
[6:0:0:0]  cd/dvd   MATSHITA BD-MLT UJ260AF   1.00  /dev/sr0   /dev/sg1
[8:0:0:0]  mediumx  STK      L700             0105  /dev/sch0  /dev/sg16
[8:0:1:0]  tape     IBM      ULT3580-TD5      0105  /dev/st0   /dev/sg8
[8:0:8:0]  mediumx  STK      L80              0105  /dev/sch1  /dev/sg17
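
The second traceback above ends in ':'.join(hctl) receiving integers. A hedged sketch of the likely fix (the helper name here is assumed, not the actual rtslib_fb code):

```python
def hctl_to_udev_name(hctl):
    # pyudev's Device.from_name wants the sys name as a string like
    # '8:0:0:0', so coerce each H:C:T:L component explicitly before
    # joining, whether the caller passed ints or strings.
    return ':'.join(str(part) for part in hctl)
```

That still leaves the NameError at the end: the except clause in utils.py presumably wants pyudev's DeviceNotFoundError imported.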

Document whether targetcli works as an iSCSI server to a XEN Cluster.

There was not much to configure with IET to allow multiple XEN host servers to simultaneously "connect" to iSCSI shares, but targetcli on CentOS 7 does not behave the same: I can get any one of three hosts to connect, but not the others.

Should this work out of the box? Does it require a newer kernel than 3.10.0-229.4.2.el7.x86_64?

Do I need to configure "Persistent Reservations" or something else to use multi-initiator access?
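
For reference, multi-initiator access with LIO normally just needs one ACL per initiator IQN under the target's TPG, rather than Persistent Reservations (those only matter if the cluster software itself issues them). A hedged illustration with placeholder target and initiator IQNs:

```
/> cd /iscsi/iqn.2003-01.org.example:shared/tpg1/acls
/iscsi/iqn.20...ed/tpg1/acls> create iqn.1994-05.com.redhat:xen-host-1
/iscsi/iqn.20...ed/tpg1/acls> create iqn.1994-05.com.redhat:xen-host-2
/iscsi/iqn.20...ed/tpg1/acls> create iqn.1994-05.com.redhat:xen-host-3
```

Each XEN host then logs in with its own IQN; simultaneous sessions from different initiators should be allowed once each has its own ACL.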

"Could not create Target in configFS." - targetcli firewire sbp-target

.

Hi Guys,

Much respect, much gratitude for your expertise and volunteered time. I'm sorry I'm finally forced to trouble you with this forum post. Please forgive any newbie errors. I've spent weeks hammering StartPage; I've learned much and read even more, but remain stumped. Trying to "RTFM" has been an exercise in frustration: the wikis are seriously out of date, or sparse, or both.

In brief, I'm trying to configure my Linux desktop-machine as a Firewire-enclosure. Targetcli seems to make a block backstore but refuses to create a target. Fuller details below (pruned for readability).

Considering the discussions at #22 and https://github.com/agrover/rtslib-fb/issues/37 and elsewhere, I suspect my issue may be a bug rather than the usual "BKAC" (between-keyboard-and-chair), and you seemed the best place to report my experience. Hopefully it's helpful.

Comments sought. Guidance appreciated. Fixes welcome.

Thanks.

M.

. . . . .

"Could not create Target in configFS."

    targetcli version 2.1.fb30
    Copyright 2011-2013 by Datera, Inc and others.

    /sbp> info
    Fabric module name: sbp
    ConfigFS path: /sys/kernel/config/target/sbp
    Allowed WWN types: eui
    Allowed WWNs list: eui.xxxxxxGUIDxxxxxx
    Fabric module features: 
    Corresponding kernel module: sbp_target

    /sbp> create eui.xxxxxxGUIDxxxxxx
    Could not create Target in configFS.

Targetcli refuses to create the target in /sbp.
The failure is the same when the command and arguments are run from a root Bash shell:

    # targetcli /sbp create eui.xxxxxxGUIDxxxxxx
    Could not create Target in configFS.

And yes, for this forum-post, xxxxxxGUIDxxxxxx is a substitute for the real number.
For the record, I also tried mkdir directly within a root Bash shell:

    # whoami 
    root
    # mkdir -pvm777  /sys/kernel/config/target/sbp/eui.xxxxxxGUIDxxxxxx
    mkdir: cannot create directory '/sys/kernel/config/target/sbp/eui.xxxxxxGUIDxxxxxx': Invalid argument

Which seems odd, given the direct (2012-vintage, Linux 3.4, sans-targetcli) strategy described here:
http://www.studioteabag.com/science/sbp-target/
Mind you, mkdir will not create anything for me higher up the /sys/kernel/config tree either.

So I wonder whether the set-up of the configfs is somehow not quite right.
Is targetcli (installed using yum) involved at all in that set-up?
As you yourselves mused in the links above, maybe Fedora's CONFIGFS_FS=y is somehow culpable?
Compounded by dracut/systemd voodoo? Hyper-threading SMP complications? PCI-card quirks?
Something exposing some other deeper kernel/driver issues?
I wish I could offer more than just ignorant questions.

Sadly, although I can cobble Bash scripts together, I'm neither a coder nor CS graduate.
I grasp certain ideas, but know nothing of the nut-and-bolts which you guys deal with.
Least-of-all kernels or drivers.
So I'm waaayyy out of my depth here.
I'll only be able to help troubleshoot if instructions are explicit and comprehensive.
But I'm willing to do my best, as far as my circumstances and abilities allow.

Worth asking, has anyone actually achieved a JBOD set-up with current software?
The dearth of detail around targetcli + Firewire IEEE1394 is worrying.
Am I trail-blazing here, gentlemen?

.
.

Background Details :

It's an HP Pentium-4 minitower PC with a VIA-chip Firewire+USB PCI-card fitted.
Fedora-19-LXDE with plenty of yum-installed extra tools, all updated.
Local custom-configured rpmbuild from 3.11.6 kernel srpm as per Fedora's instructions.
Static ("Y," not module "M") support for XFS, NFS, HFSPLUS, and 1394/FIREWIRE.
Configfs seems to be mounted, if boot.log messages are to be believed.

.

uname -a

    Linux localhost.localdomain 3.11.6-201.xfsnfshfsplus.fc19.i686 #1 SMP Tue Nov 5 14:49:12 GMT 2013 i686 i686 i386 GNU/Linux

cat /etc/issue

    Fedora release 19 (Schrödinger’s Cat)
    Kernel \r on an \m (\l)

targetcli version

    targetcli version 2.1.fb30

tree -a -L 4 /sys/kernel/config/target/

/sys/kernel/config/target/
├── core
│   ├── alua
│   │   └── lu_gps
│   │       └── default_lu_gp
│   └── iblock_0
│       ├── hba_info
│       ├── hba_mode
│       └── testblockdisk01
│           ├── alias
│           ├── alua
│           ├── alua_lu_gp
│           ├── attrib
│           ├── control
│           ├── enable
│           ├── info
│           ├── pr
│           ├── statistics
│           ├── udev_path
│           └── wwn
├── iscsi
│   ├── discovery_auth
│   │   ├── authenticate_target
│   │   ├── enforce_discovery_auth
│   │   ├── password
│   │   ├── password_mutual
│   │   ├── userid
│   │   └── userid_mutual
│   └── lio_version
├── sbp
│   ├── discovery_auth
│   └── version
└── version

15 directories, 17 files

.

lspci -vvv 2>/dev/null | sed -ne '/^05:09.3/,/^$/ p'

05:09.3 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev 46) (prog-if 10 [OHCI])
    Subsystem: VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 64 (8000ns max), Cache Line Size: 64 bytes
    Interrupt: pin A routed to IRQ 18
    Region 0: Memory at fc511000 (32-bit, non-prefetchable) [size=2K]
    Region 1: I/O ports at 1000 [size=128]
    Capabilities: [50] Power Management version 2
        Flags: PMEClk- DSI- D1- D2+ AuxCurrent=0mA PME(D0-,D1-,D2+,D3hot+,D3cold+)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Kernel driver in use: firewire_ohci

.

Locally-Built Kernel-Configuration :

cat /boot/config-3.11.6-201.xfsnfshfsplus.fc19.i686 \
| grep -v -iE '(CONFIG-_)' \
| grep -B1 -A2 -iE '(firewire|1394|sbp|configfs|Configuration File System|target[-]core|targetd|tcm|tgt|iblock|iscsi|pscsi|core[-]hba|T10|VPD|targetcli)'

CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y
--
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
--
CONFIG_SCSI_SRP_ATTRS=m
CONFIG_SCSI_SRP_TGT_ATTRS=y
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
CONFIG_SCSI_CXGB3_ISCSI=m
CONFIG_SCSI_CXGB4_ISCSI=m
CONFIG_SCSI_BNX2_ISCSI=m
CONFIG_SCSI_BNX2X_FCOE=m
CONFIG_BE2ISCSI=m
CONFIG_BLK_DEV_3W_XXXX_RAID=m
CONFIG_SCSI_HPSA=m
--
CONFIG_SCSI_QLA_FC=m
CONFIG_TCM_QLA2XXX=m
CONFIG_SCSI_QLA_ISCSI=m
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
--
CONFIG_DM_SWITCH=m
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
CONFIG_SBP_TARGET=m
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
--
#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=y
CONFIG_FIREWIRE_SBP2=y
CONFIG_FIREWIRE_NET=y
CONFIG_FIREWIRE_NOSY=m
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
--
CONFIG_STE10XP=m
CONFIG_LSI_ET1011C_PHY=m
CONFIG_MICREL_PHY=m
CONFIG_FIXED_PHY=y
--
#
# Supported FireWire (IEEE 1394) Adapters
#
CONFIG_DVB_FIREDTV=m
--
CONFIG_MEDIA_TUNER_MT2131=m
CONFIG_MEDIA_TUNER_QT1010=m
CONFIG_MEDIA_TUNER_XC2028=m
CONFIG_MEDIA_TUNER_XC5000=m
--
CONFIG_SND_USB_HIFACE=m
CONFIG_SND_FIREWIRE=y
CONFIG_SND_FIREWIRE_LIB=m
CONFIG_SND_FIREWIRE_SPEAKERS=m
CONFIG_SND_ISIGHT=m
CONFIG_SND_SCS1X=m
--
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
--
# CONFIG_DGRP is not set
CONFIG_FIREWIRE_SERIAL=m
# CONFIG_ZCACHE is not set
CONFIG_X86_PLATFORM_DEVICES=y
--
CONFIG_DMI_SYSFS=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

--
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
--
CONFIG_TEST_KSTRTOX=y
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_FIREWIRE_OHCI_REMOTE_DMA is not set
CONFIG_BUILD_DOCSRC=y
# CONFIG_DMA_API_DEBUG is not set
--
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y

.

dmesg \
| grep -B1 -A3 -iE '(firewire|1394|sbp|configfs|Configuration File System|target[-]core|targetd|tcm|tgt|iblock|iscsi|pscsi|core[-]hba|T10|VPD|targetcli)'

[    1.845440] ata1.00: 150136560 sectors, multi 16: LBA 
[    1.876049] firewire_ohci 0000:05:09.3: added OHCI v1.0 device as card 0, 4 IR + 8 IT contexts, quirks 0x41
[    1.876437] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    1.876516] ehci-pci: EHCI PCI platform driver
[    1.876755] ehci-pci 0000:00:1d.7: setting latency timer to 64
--
[    2.319555] input: GenPS/2 Genius Mouse as /devices/platform/i8042/serio1/input/input2
[    2.377284] firewire_core 0000:05:09.3: created device fw0: GUID xxxxxxGUIDxxxxxx, S400
[    2.584099] usb 4-2: new low-speed USB device number 2 using uhci_hcd
[    2.610108] tsc: Refined TSC clocksource calibration: 2793.181 MHz
[    2.634454] systemd-udevd[158]: starting version 204
--
[    3.610157] Switched to clocksource tsc
[    3.888652] net firewire0: IP over IEEE 1394 on card 0000:05:09.3
[    3.888846] firewire_core 0000:05:09.3: refreshed device fw0
[    4.638949] XFS (sda5): Mounting Filesystem
[    4.863059] XFS (sda5): Ending clean mount
[    5.537666] systemd-journald[63]: Received SIGTERM
--
[   18.932960] systemd-modules-load[369]: Module 'libcrc32c' is builtin
[   19.034297] systemd-modules-load[369]: Module 'firewire_ohci' is builtin
[   19.036447] systemd-modules-load[369]: Module 'firewire_core' is builtin
[   19.038369] systemd-modules-load[369]: Module 'crc_itu_t' is builtin
[   19.040288] systemd-modules-load[369]: Module 'sunrpc' is builtin
[   19.043710] systemd[1]: Unit systemd-modules-load.service entered failed state.
--
[344855.029985] err 3 len 1 pos 1
[418429.080264] target_core_get_fabric() failed for test_mkdir
[418459.595757] target_core_get_fabric() failed for test_mkdir
[431473.992019] tg3 0000:05:02.0: vpd r/w failed.  This is likely a firmware bug on this device.  Contact the card vendor for a firmware update.
[492702.861062] err 3 len 1 pos 1
[492814.953862] err 3 len 0 pos 0

.

cat /var/log/boot.log \
| grep -B1 -A2 -iE '(firewire|1394|sbp|configfs|Configuration File System|target[-]core|targetd|tcm|tgt|iblock|iscsi|pscsi|core[-]hba|T10|VPD|targetcli)'

         Mounting FUSE Control File System...
         Mounting Configuration File System...
[  OK  ] Mounted Configuration File System.
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Reached target System Initialization.
--
[  OK  ] Listening on CUPS Printing Service Sockets.
[  OK  ] Listening on Open-iSCSI iscsid Socket.
[  OK  ] Listening on Open-iSCSI iscsiuio Socket.
[  OK  ] Listening on PC/SC Smart Card Daemon Activation Socket.
[  OK  ] Listening on RPCbind Server Activation Socket.

.

lsmod \
| grep -B1 -A2 -iE '(firewire|1394|sbp|configfs|Configuration File System|target[-]core|targetd|tcm|tgt|iblock|iscsi|pscsi|core[-]hba|T10|VPD|targetcli)'

Module                  Size  Used by
target_core_pscsi      18275  0 
target_core_file       17702  0 
target_core_iblock     17690  1 
sbp_target             32926  1 
nf_nat_h323            17419  0 
nf_conntrack_h323      62344  1 nf_nat_h323
--
rfcomm                 53537  4 
iscsi_target_mod      244295  1 
target_core_mod       256075  11 target_core_iblock,sbp_target,target_core_pscsi,iscsi_target_mod,target_core_file
bnep                   18959  2 
bluetooth             317776  10 bnep,rfcomm

.

systemctl --full status \
| grep -B1 -A3 -iE '(firewire|1394|sbp|configfs|Configuration File System|target[-]core|targetd|tcm|tgt|iblock|iscsi|pscsi|core[-]hba|T10|VPD|targetcli)'

sys-devices-pci0000:00-0000:00:1e.0-0000:05:09.3-net-firewire0.device -> '/org/freedesktop/systemd1/unit/sys_2ddevices_2dpci0000_3a00_2d0000_3a00_3a1e_2e0_2d0000_3a05_3a09_2e3_2dnet_2dfirewire0_2edevice'

sys-devices-pci0000:00-0000:00:1e.0-0000:05:09.3-net-firewire0.device - VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller
   Loaded: loaded
   Active: active (plugged) since Mon 2013-11-25 00:06:55 GMT; 6 days ago
   Device: /sys/devices/pci0000:00/0000:00:1e.0/0000:05:09.3/net/firewire0

sys-devices-pci0000:00-0000:00:1e.0-0000:05:0a.0-net-enp5s10.device -> '/org/freedesktop/systemd1/unit/sys_2ddevices_2dpci0000_3a00_2d0000_3a00_3a1e_2e0_2d0000_3a05_3a0a_2e0_2dnet_2denp5s10_2edevice'

--

sys-module-configfs.device -> '/org/freedesktop/systemd1/unit/sys_2dmodule_2dconfigfs_2edevice'

sys-module-configfs.device - /sys/module/configfs
   Loaded: loaded
   Active: active (plugged) since Mon 2013-11-25 00:06:54 GMT; 6 days ago
   Device: /sys/module/configfs

sys-module-fuse.device -> '/org/freedesktop/systemd1/unit/sys_2dmodule_2dfuse_2edevice'

--

sys-subsystem-net-devices-firewire0.device -> '/org/freedesktop/systemd1/unit/sys_2dsubsystem_2dnet_2ddevices_2dfirewire0_2edevice'

sys-subsystem-net-devices-firewire0.device - VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller
   Loaded: loaded
   Active: active (plugged) since Mon 2013-11-25 00:06:55 GMT; 6 days ago
   Device: /sys/devices/pci0000:00/0000:00:1e.0/0000:05:09.3/net/firewire0

sys-subsystem-net-devices-wwp0s29f7u2u2i1.device -> '/org/freedesktop/systemd1/unit/sys_2dsubsystem_2dnet_2ddevices_2dwwp0s29f7u2u2i1_2edevice'

--

sys-kernel-config.mount - Configuration File System
   Loaded: loaded (/usr/lib/systemd/system/sys-kernel-config.mount; static)
   Active: active (mounted) since Mon 2013-11-25 00:06:59 GMT; 6 days ago
    Where: /sys/kernel/config
     What: configfs
     Docs: https://www.kernel.org/doc/Documentation/filesystems/configfs/configfs.txt
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
  Process: 520 ExecMount=/bin/mount configfs /sys/kernel/config -t configfs (code=exited, status=0/SUCCESS)

sys-kernel-debug.mount -> '/org/freedesktop/systemd1/unit/sys_2dkernel_2ddebug_2emount'

.

cat /sys/devices/pci0000:00/0000:00:1e.0/0000:05:09.3/net/firewire0/address

xx:xx:xx:GU:ID:xx:xx:xx:0a:02:00:01:00:00:00:00

.

Oh, and for completeness, checking permissions, I even tried chmod tactics.

    # chmod -v 777  /sys/kernel/config/target/sbp
    mode of '/sys/kernel/config/target/sbp' changed from 0755 (rwxr-xr-x) to 0777 (rwxrwxrwx)
    # ls -ial  /sys/kernel/config/target/sbp
    2314497 drwxrwxrwx 3 root root    0 Nov 25 14:02 .

The mode change went ahead, but it did not resolve the error: targetcli fails as before.

.

"Fingers crossed.."

.

saveconfig/restoreconfig should use rtslib

Since we have saveconfig support in rtslib, we should probably use rtslib for targetcli's saveconfig.
That way we ensure both really do the same thing and don't duplicate code (even if it isn't much).
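
A minimal sketch of the delegation, with the root object injected so the snippet is self-contained; in targetcli the root would be rtslib's RTSRoot, and the default path here is an assumption:

```python
DEFAULT_SAVE_FILE = '/etc/target/saveconfig.json'

def ui_command_saveconfig(root, savefile=DEFAULT_SAVE_FILE):
    # rtslib owns the serialization; targetcli only chooses the
    # destination, so both tools share one code path.
    root.save_to_file(save_file=savefile)
```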

make auto portal creation configurable, and don't error when create multiple targets

Chris Moore says:

I've started running into this too - I think it happened when I switched to testing with RHEL 7.1.
The auto create of a portal at 0.0.0.0 is really killing me.

I create Target1 and it auto creates the 0.0.0.0:3260 portal. I want my portal at 192.168.1.1:3260
but I can't create it because the auto portal is already using that port.

I can delete the portal at 0.0.0.0:3260, then create mine at 192.168.1.1:3260. But then when
I try to create Target2 it fails because it's trying to create a portal at 0.0.0.0:3260 for that one
and it's getting EADDRINUSE.

Is there a way to turn off this automatic creation of portals in targetcli?

We should add a targetcli global parameter 'auto_add_default_portal' and catch exceptions from attempted creation.
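
A hedged sketch of that proposal: the portal factory is passed in as a callable so the snippet stands alone, and both the preference name and the exact error handling are assumptions:

```python
def maybe_add_default_portal(create_portal, prefs):
    # Skip the automatic 0.0.0.0:3260 portal when the user has
    # turned the proposed preference off.
    if not prefs.get('auto_add_default_portal', True):
        return None
    try:
        # e.g. create_portal could wrap the rtslib portal creation
        return create_portal()
    except OSError:
        # e.g. EADDRINUSE because another target already owns the port
        return None
```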

How can I use ib_srp on redhat 7 with targetcli_2.1.fb34?

Hi,
I have install the targetcli_2.1.fb34 on redhat 7.But I haven't see the ib_srp option when I use ls.
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 0]
  o- loopback ......................................................................................................... [Targets: 0]
/>

I have already loaded the drivers target_core_iblock and target_core_mod.
[root@DATA]# lsmod | grep ib
ib_ipoib              91629  0
ib_ucm                22546  0
ib_uverbs             46783  2  ib_ucm,rdma_ucm
ib_umad               18027  0
ib_srpt               52289  0
ib_cm                 42689  4  rdma_cm,ib_ucm,ib_srpt,ib_ipoib
ib_sa                 33950  4  rdma_cm,ib_cm,rdma_ucm,ib_ipoib
ib_mad                43055  4  ib_cm,ib_sa,ib_srpt,ib_umad
ib_core               87335  11 rdma_cm,ib_cm,ib_sa,iw_cm,ib_mad,ib_ucm,ib_srpt,ib_umad,ib_uverbs,rdma_ucm,ib_ipoib
ib_addr               18923  3  rdma_cm,ib_core,rdma_ucm
target_core_iblock    18177  5
target_core_mod      299412  40 target_core_iblock,target_core_pscsi,iscsi_target_mod,ib_srpt,target_core_file

The kernel version is 3.10.0-123.el7.x86_64 .

Please help me. Thanks.
