NVMe Linux driver series 1: host side [fabrics.c]

nvmf_connect_data_prep


static struct nvmf_connect_data *nvmf_connect_data_prep(struct nvme_ctrl *ctrl,
		u16 cntlid)
{
	struct nvmf_connect_data *data;

	data = kzalloc(sizeof(*data), GFP_KERNEL);
	if (!data)
		return NULL;

	uuid_copy(&data->hostid, &ctrl->opts->host->id);
	data->cntlid = cpu_to_le16(cntlid);
	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);

	return data;
}

This code defines a function called nvmf_connect_data_prep to prepare the connection data structure struct nvmf_connect_data. This function allocates memory for the connection data structure, fills the structure with relevant information, and then returns a pointer to the connection data structure.

The parameters of the function are as follows:

  • ctrl: A pointer to an NVMe controller instance representing a specific device to connect to.
  • cntlid: The controller ID to place in the connect data. For the Admin queue the caller passes 0xffff (a "dynamic" controller ID, asking the target to allocate a controller); for I/O queues it passes the controller ID previously returned by the target.

The main steps of the function are as follows:

  1. Use the kzalloc function to allocate memory space for the connection data structure data, and specify the memory allocation method through the GFP_KERNEL flag.
  2. Returns NULL if memory allocation fails.
  3. Copy the host ID from the controller options into the hostid field of the connection data structure.
  4. Convert the incoming cntlid to little-endian byte order, and fill it into the cntlid field of the connection data structure.
  5. Copy the subsystem NQN from the controller options into the subsysnqn field of the connection data structure.
  6. Copy the host NQN from the controller options into the hostnqn field of the connection data structure.
  7. Returns a pointer to the populated connection data structure.

The purpose of this function is to create the connect data structure used when establishing a connection. It carries the key identification exchanged at connect time: host ID, controller ID, subsystem NQN, and host NQN.

nvmf_connect_admin_queue

/**
 * nvmf_connect_admin_queue() - NVMe Fabrics Admin Queue "Connect"
 * API function.
 * @ctrl: Host nvme controller instance used to request
 * a new NVMe controller allocation on the target
 * system and establish an NVMe Admin connection to
 * that controller.
 *
 * This function enables an NVMe host device to request a new allocation of
 * an NVMe controller resource on a target system as well establish a
 * fabrics-protocol connection of the NVMe Admin queue between the
 * host system device and the allocated NVMe controller on the
 * target system via a NVMe Fabrics "Connect" command.
 *
 * Return:
 * 0: success
 * > 0: NVMe error status code
 * < 0: Linux errno error code
 *
 */
int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
{
	struct nvme_command cmd = { };
	union nvme_result res;
	struct nvmf_connect_data *data;
	int ret;
	u32 result;

	nvmf_connect_cmd_prep(ctrl, 0, &cmd);

	data = nvmf_connect_data_prep(ctrl, 0xffff);
	if (!data)
		return -ENOMEM;

	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res,
			data, sizeof(*data), NVME_QID_ANY, 1,
			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
	if (ret) {
		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
				       &cmd, data);
		goto out_free_data;
	}

	result = le32_to_cpu(res.u32);
	ctrl->cntlid = result & 0xFFFF;
	if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)) {
		/* Secure concatenation is not implemented */
		if (result & NVME_CONNECT_AUTHREQ_ASCR) {
			dev_warn(ctrl->device,
				 "qid 0: secure concatenation is not supported\n");
			ret = NVME_SC_AUTH_REQUIRED;
			goto out_free_data;
		}
		/* Authentication required */
		ret = nvme_auth_negotiate(ctrl, 0);
		if (ret) {
			dev_warn(ctrl->device,
				 "qid 0: authentication setup failed\n");
			ret = NVME_SC_AUTH_REQUIRED;
			goto out_free_data;
		}
		ret = nvme_auth_wait(ctrl, 0);
		if (ret)
			dev_warn(ctrl->device,
				 "qid 0: authentication failed\n");
		else
			dev_info(ctrl->device,
				 "qid 0: authenticated\n");
	}
out_free_data:
	kfree(data);
	return ret;
}
EXPORT_SYMBOL_GPL(nvmf_connect_admin_queue);

This code defines a function called nvmf_connect_admin_queue to connect the Admin queue in NVMe Fabrics. It lets an NVMe host request allocation of a new NVMe controller resource on the target system and establish a fabrics-protocol connection for the Admin queue between the host and the allocated controller, via the NVMe Fabrics "Connect" command.

The parameters of the function are as follows:

  • ctrl: A pointer to an NVMe controller instance, representing a specific device to establish a connection to.

The main steps of the function are as follows:

  1. Call the nvmf_connect_cmd_prep function to prepare the connect command; qid is set to 0, which designates the Admin queue.
  2. Call the nvmf_connect_data_prep function to prepare the connect data, passing cntlid = 0xffff ("dynamic controller") to ask the target to allocate a new controller.
  3. Use the __nvme_submit_sync_cmd function to submit the connect command on the fabrics queue and wait for the result. If the return value is non-zero, the command failed; nvmf_log_connect_error is called to log the error and the function jumps to the cleanup path.
  4. Parse the completion result: the low 16 bits carry the controller ID assigned by the target (stored in ctrl->cntlid), and the AUTHREQ bits indicate whether in-band authentication is required, in which case it is performed.
  5. Free the connect data structure.
  6. Return the result (0 on success, an NVMe status code or a negative Linux errno on failure).

The purpose of this function is to connect the Admin queue in NVMe Fabrics by sending the Connect command, establishing the link between the host and the target controller. Authentication may be required during the process, and the outcome is logged to the device log as appropriate.

nvmf_connect_io_queue

/**
 * nvmf_connect_io_queue() - NVMe Fabrics I/O Queue "Connect"
 * API function.
 * @ctrl: Host nvme controller instance used to establish an
 * NVMe I/O queue connection to the already allocated NVMe
 * controller on the target system.
 * @qid: NVMe I/O queue number for the new I/O connection between
 * host and target (note qid == 0 is illegal as this is
 * the Admin queue, per NVMe standard).
 *
 * This function issues a fabrics-protocol connection
 * of a NVMe I/O queue (via NVMe Fabrics "Connect" command)
 * between the host system device and the allocated NVMe controller
 * on the target system.
 *
 * Return:
 * 0: success
 * > 0: NVMe error status code
 * < 0: Linux errno error code
 */
int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
{
	struct nvme_command cmd = { };
	struct nvmf_connect_data *data;
	union nvme_result res;
	int ret;
	u32 result;

	nvmf_connect_cmd_prep(ctrl, qid, &cmd);

	data = nvmf_connect_data_prep(ctrl, ctrl->cntlid);
	if (!data)
		return -ENOMEM;

	ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
			data, sizeof(*data), qid, 1,
			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
	if (ret) {
		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
				       &cmd, data);
		goto out_free_data;
	}
	result = le32_to_cpu(res.u32);
	if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)) {
		/* Secure concatenation is not implemented */
		if (result & NVME_CONNECT_AUTHREQ_ASCR) {
			dev_warn(ctrl->device,
				 "qid 0: secure concatenation is not supported\n");
			ret = NVME_SC_AUTH_REQUIRED;
			goto out_free_data;
		}
		/* Authentication required */
		ret = nvme_auth_negotiate(ctrl, qid);
		if (ret) {
			dev_warn(ctrl->device,
				 "qid %d: authentication setup failed\n", qid);
			ret = NVME_SC_AUTH_REQUIRED;
		} else {
			ret = nvme_auth_wait(ctrl, qid);
			if (ret)
				dev_warn(ctrl->device,
					 "qid %u: authentication failed\n", qid);
		}
	}
out_free_data:
	kfree(data);
	return ret;
}
EXPORT_SYMBOL_GPL(nvmf_connect_io_queue);

This code defines a function named nvmf_connect_io_queue for connecting to I/O queues in NVMe Fabrics. This function allows the NVMe host device to establish an NVMe I/O queue connection between the host system device and the assigned NVMe controller on the target system by issuing the “Connect” command of the Fabrics protocol.

The parameters of the function are as follows:

  • ctrl: Pointer to the NVMe controller instance, indicating the specific device to which the connection is to be established.
  • qid: The NVMe I/O queue number identifying the I/O queue to connect. Note that qid == 0 is illegal here because, per the NVMe standard, queue 0 is the Admin queue.

The main steps of the function are similar to the nvmf_connect_admin_queue function introduced before:

  1. Call the nvmf_connect_cmd_prep function to prepare the connect command for the I/O queue, with qid giving the I/O queue number to connect.
  2. Call the nvmf_connect_data_prep function to prepare the connect data, passing ctrl->cntlid, the controller ID obtained when the Admin queue was connected.
  3. Use the __nvme_submit_sync_cmd function to submit the connect command on the connect queue and wait for the result. If the return value is non-zero, the command failed; nvmf_log_connect_error is called to log the error and the function jumps to the cleanup path.
  4. Parse the completion result and perform in-band authentication for this queue if the AUTHREQ bits are set.
  5. Free the connect data structure.
  6. Return the result (0 on success, an NVMe status code or a negative Linux errno on failure).

The purpose of this function is to connect an I/O queue in NVMe Fabrics by sending the Connect command, establishing the I/O queue between the host and the target controller. Authentication may be required during the process, and the outcome is logged to the device log as appropriate.

nvmf_should_reconnect

bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
{
	if (ctrl->opts->max_reconnects == -1 ||
	    ctrl->nr_reconnects < ctrl->opts->max_reconnects)
		return true;

	return false;
}
EXPORT_SYMBOL_GPL(nvmf_should_reconnect);
EXPORT_SYMBOL_GPL(nvmf_should_reconnect);

This code defines a function called nvmf_should_reconnect that determines whether an attempt should be made to reconnect the NVMe Fabrics controller.

The function’s argument is a pointer to the NVMe controller instance ctrl.

The logic of the function is as follows:

  1. First, the function checks ctrl->opts->max_reconnects, the configured maximum number of reconnect attempts. A value of -1 means there is no limit, so the function returns true: reconnection should always be attempted.

  2. Otherwise, it compares the current reconnect count ctrl->nr_reconnects against max_reconnects. If the count is still below the limit, it returns true, indicating that another attempt should be made.

  3. If nr_reconnects has reached or exceeded max_reconnects, it returns false, indicating that reconnection should stop.

Therefore, the purpose of this function is to decide, based on the maximum reconnect count set in the connection options, whether the driver should keep trying to reconnect the NVMe Fabrics controller. It returns true if max_reconnects is -1 or the current count has not reached the limit, and false otherwise, which lets the transport decide whether to schedule another reconnect after a connection failure.

nvmf_register_transport

/**
 * nvmf_register_transport() - NVMe Fabrics Library registration function.
 * @ops: Transport ops instance to be registered to the
 * common fabrics library.
 *
 * API function that registers the type of specific transport fabric
 * being implemented to the common NVMe fabrics library. Part of
 * the overall init sequence of starting up a fabrics driver.
 */
int nvmf_register_transport(struct nvmf_transport_ops *ops)
{
	if (!ops->create_ctrl)
		return -EINVAL;

	down_write(&nvmf_transports_rwsem);
	list_add_tail(&ops->entry, &nvmf_transports);
	up_write(&nvmf_transports_rwsem);

	return 0;
}
EXPORT_SYMBOL_GPL(nvmf_register_transport);
EXPORT_SYMBOL_GPL(nvmf_register_transport);

This code defines a function named nvmf_register_transport that is used to register an implementation of the NVMe Fabrics transport protocol. It adds the transport protocol’s operation function structure ops to the shared transport protocol list.

The parameter of the function is a pointer ops pointing to the struct nvmf_transport_ops type, which contains the operation functions related to the transport protocol.

The logic of the function is as follows:

  1. First, the function checks whether the ops parameter passed in contains the create_ctrl operation function, that is, to determine whether the function for creating a controller is provided. If not provided, the function will return error code -EINVAL, indicating that the parameters passed in are invalid.

  2. If the create_ctrl action function is provided, the function will acquire a write lock on the list of transports to ensure no races when adding transports.

  3. Next, the function appends the entry field of the ops structure to the tail of the global transport list nvmf_transports, adding the current transport's operations structure to the list of registered transports.

  4. Finally, the function releases the write lock on the transport protocol list, completes the registration process of the transport protocol, and returns a successful status code 0.

The purpose of this function is to add a transport implementation's operations structure to the shared transport list, so that different fabric transports can register themselves with the common NVMe Fabrics library for later use.

nvmf_unregister_transport

/**
 * nvmf_unregister_transport() - NVMe Fabrics Library unregistration function.
 * @ops: Transport ops instance to be unregistered from the
 * common fabrics library.
 *
 * Fabrics API function that unregisters the type of specific transport
 * fabric being implemented from the common NVMe fabrics library.
 * Part of the overall exit sequence of unloading the implemented driver.
 */
void nvmf_unregister_transport(struct nvmf_transport_ops *ops)
{
	down_write(&nvmf_transports_rwsem);
	list_del(&ops->entry);
	up_write(&nvmf_transports_rwsem);
}
EXPORT_SYMBOL_GPL(nvmf_unregister_transport);

This code defines a function named nvmf_unregister_transport to unregister a specific transport implementation from the NVMe Fabrics shared library. It is called when the implementing driver is unloaded.

The parameter of the function is a pointer ops pointing to the struct nvmf_transport_ops type, which contains the operation functions related to the transport protocol.

The logic of the function is as follows:

  1. First, the function acquires a write lock on the transport list to ensure that no contention occurs when transports are removed.

  2. Next, the function removes the entry field of the ops structure from the transport list nvmf_transports, deleting the current transport's operations structure from the list of registered transports.

  3. Finally, the function releases the write lock on the transport protocol list and completes the unregistration process of the transport protocol.

The purpose of this function is to remove a transport's operations structure from the shared transport list, undoing the earlier registration. When the implementing driver is unloaded, calling this function ensures that no stale transport registration is left behind in the fabrics library.

nvmf_lookup_transport

static struct nvmf_transport_ops *nvmf_lookup_transport(
		struct nvmf_ctrl_options *opts)
{
	struct nvmf_transport_ops *ops;

	lockdep_assert_held(&nvmf_transports_rwsem);

	list_for_each_entry(ops, &nvmf_transports, entry) {
		if (strcmp(ops->name, opts->transport) == 0)
			return ops;
	}

	return NULL;
}

This code defines a function called nvmf_lookup_transport that is used to look up the transport protocol operation function structure associated with a given NVMe controller option.

The parameter of the function is a pointer opts pointing to struct nvmf_ctrl_options type, which contains the options of the controller, including the name of the transport protocol.

The logic of the function is as follows:

  1. First, the function uses the lockdep_assert_held function to assert that the current thread already holds the nvmf_transports_rwsem lock. This is to ensure that the function can only run while holding the lock when called, to prevent race conditions.

  2. Next, the function uses a loop to iterate over each transport protocol operation function structure ops in the transport protocol list nvmf_transports.

  3. Inside the loop, the function compares ops->name with opts->transport using strcmp to determine whether the current entry matches the requested transport.

  4. If a matching transport protocol is found, the function immediately returns the corresponding transport protocol operation function structure ops.

  5. If the entire list of transport protocols is traversed and no matching transport protocol is found, NULL is returned.

The purpose of this function is to find the corresponding transport protocol operation function structure from the transport protocol list according to the transport protocol name in the given NVMe controller option. In this way, the operation function of a specific transport protocol can be obtained for subsequent operations, such as creating a controller, establishing a connection, etc.

opt_tokens

static const match_table_t opt_tokens = {
	{ NVMF_OPT_TRANSPORT, "transport=%s" },
	{ NVMF_OPT_TRADDR, "traddr=%s" },
	{ NVMF_OPT_TRSVCID, "trsvcid=%s" },
	{ NVMF_OPT_NQN, "nqn=%s" },
	{ NVMF_OPT_QUEUE_SIZE, "queue_size=%d" },
	{ NVMF_OPT_NR_IO_QUEUES, "nr_io_queues=%d" },
	{ NVMF_OPT_RECONNECT_DELAY, "reconnect_delay=%d" },
	{ NVMF_OPT_CTRL_LOSS_TMO, "ctrl_loss_tmo=%d" },
	{ NVMF_OPT_KATO, "keep_alive_tmo=%d" },
	{ NVMF_OPT_HOSTNQN, "hostnqn=%s" },
	{ NVMF_OPT_HOST_TRADDR, "host_traddr=%s" },
	{ NVMF_OPT_HOST_IFACE, "host_iface=%s" },
	{ NVMF_OPT_HOST_ID, "hostid=%s" },
	{ NVMF_OPT_DUP_CONNECT, "duplicate_connect" },
	{ NVMF_OPT_DISABLE_SQFLOW, "disable_sqflow" },
	{ NVMF_OPT_HDR_DIGEST, "hdr_digest" },
	{ NVMF_OPT_DATA_DIGEST, "data_digest" },
	{ NVMF_OPT_NR_WRITE_QUEUES, "nr_write_queues=%d" },
	{ NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" },
	{ NVMF_OPT_TOS, "tos=%d" },
	{ NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" },
	{ NVMF_OPT_DISCOVERY, "discovery" },
	{ NVMF_OPT_DHCHAP_SECRET, "dhchap_secret=%s" },
	{ NVMF_OPT_DHCHAP_CTRL_SECRET, "dhchap_ctrl_secret=%s" },
	{ NVMF_OPT_ERR, NULL }
};

This code defines a match table named opt_tokens that maps NVMe Fabrics option identifiers to parse patterns. Tables of type match_table_t are used with the kernel's match_token() parser to recognize individual options in a connect-options string and extract their values.

The table is composed of a series of {identifier, pattern} pairs, where the identifier is a constant naming a controller option and the pattern tells the parser how to recognize the token and what type of value, if any, it carries.

For example, for the option NVMF_OPT_TRANSPORT the pattern is "transport=%s", where %s marks a string-valued argument. Integer-valued options use %d (e.g. "queue_size=%d"), while flag options such as "duplicate_connect" carry no value at all.

When an option string is parsed, each comma-separated token is matched against these patterns; match_token() returns the identifier of the matching entry, and the captured value is then retrieved with helpers such as match_strdup() or match_int().

Finally, the table ends with an { NVMF_OPT_ERR, NULL } entry that terminates the list.