DPDK application test pipeline fails in app_ports_check_link



Test setup: Linux-Server-1 Port-A <==> Port 1 DPDK-Server-2 Port 2 <==> Port B Linux-Server-3.

Steps followed:

  1. Physical links are connected
  2. Devices bound to DPDK:
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=uio_pci_generic unused=ixgbe,vfio-pci
0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=uio_pci_generic unused=ixgbe,vfio-pci
Network devices using kernel driver
===================================
0000:05:00.0 'I210 Gigabit Network Connection 1533' if=enp5s0 drv=igb unused=vfio-pci,uio_pci_generic *Active*
0000:06:00.0 'I210 Gigabit Network Connection 1533' if=enp6s0 drv=igb unused=vfio-pci,uio_pci_generic

Problem: Port 2 of the DPDK server comes back as DOWN in app_ports_check_link.

[EDIT] Running the DPDK examples, I am able to send packets to DPDK Port 1 and Port 2.

Log of events:

EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
USER1: Creating the mbuf pool ...
USER1: Initializing NIC port 0 ...
USER1: Initializing NIC port 1 ...
USER1: Port 0 (10 Gbps) UP
USER1: Port 1 (0 Gbps) DOWN
PANIC in app_ports_check_link():
Some NIC ports are DOWN
8: [./build/pipeline(_start+0x2a) [0x558dc37c1d8a]]
7: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f318e9f5b97]]
6: [./build/pipeline(main+0x7a) [0x558dc37c1fa4]]
5: [./build/pipeline(_Z8app_initv+0x18) [0x558dc37c2940]]
4: [./build/pipeline(+0x8c909) [0x558dc37c2909]]
3: [./build/pipeline(+0x8c677) [0x558dc37c2677]]
2: [./build/pipeline(__rte_panic+0xc5) [0x558dc37b4a90]]
1: [./build/pipeline(rte_dump_stack+0x2e) [0x558dc385954e]]
fish: “sudo ./build/pipeline” terminated by signal SIGABRT (Abort)

Code:

static void
app_ports_check_link(void)
{
    uint32_t all_ports_up, i;

    all_ports_up = 1;
    for (i = 0; i < app.n_ports; i++) {
        struct rte_eth_link link;
        uint16_t port;

        port = app.ports[i];
        memset(&link, 0, sizeof(link));
        rte_eth_link_get_nowait(port, &link);
        RTE_LOG(INFO, USER1, "Port %u (%u Gbps) %s\n",
            port,
            link.link_speed / 1000,
            link.link_status ? "UP" : "DOWN");
        if (link.link_status == ETH_LINK_DOWN)
            all_ports_up = 0;
    }

    if (all_ports_up == 0)
        rte_panic("Some NIC ports are DOWN\n");
}

static void
app_init_ports(void)
{
    uint32_t i;
    struct rte_eth_conf port_conf = app_port_conf_init();
    struct rte_eth_rxconf rx_conf = app_rx_conf_init();
    struct rte_eth_txconf tx_conf = app_tx_conf_init();

    (void)tx_conf;

    /* Init NIC ports, then start the ports */
    for (i = 0; i < app.n_ports; i++) {
        uint16_t port;
        int ret;

        port = app.ports[i];
        RTE_LOG(INFO, USER1, "Initializing NIC port %u ...\n", port);

        /* Init port */
        ret = rte_eth_dev_configure(
            port,
            1,
            1,
            &port_conf);
        if (ret < 0)
            rte_panic("Cannot init NIC port %u (%s)\n",
                port, rte_strerror(ret));
        rte_eth_promiscuous_enable(port);

        /* Init RX queues */
        ret = rte_eth_rx_queue_setup(
            port,
            0,
            app.port_rx_ring_size,
            rte_eth_dev_socket_id(port),
            &rx_conf,
            app.pool);
        if (ret < 0)
            rte_panic("Cannot init RX for port %u (%d)\n",
                (uint32_t) port, ret);

        /* Init TX queues */
        ret = rte_eth_tx_queue_setup(
            port,
            0,
            app.port_tx_ring_size,
            rte_eth_dev_socket_id(port),
            NULL);
        if (ret < 0)
            rte_panic("Cannot init TX for port %u (%d)\n",
                (uint32_t) port, ret);

        /* Start port */
        ret = rte_eth_dev_start(port);
        if (ret < 0)
            rte_panic("Cannot start port %u (%d)\n", port, ret);
    }

    app_ports_check_link();
}
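
app_port_conf_init(), app_rx_conf_init() and app_tx_conf_init() are not shown here. For context only, a minimal sketch of what app_port_conf_init() might return (the field values are assumptions, not the actual settings used); the link_speeds field is what decides whether the port advertises auto-negotiation or a fixed speed:

static struct rte_eth_conf
app_port_conf_init(void)
{
    /* Hypothetical sketch; the real configuration is not shown above.
     * link_speeds = ETH_LINK_SPEED_AUTONEG (0) lets the PMD negotiate;
     * a fixed 10G link could instead be requested with
     * ETH_LINK_SPEED_10G | ETH_LINK_SPEED_FIXED. */
    struct rte_eth_conf port_conf = {
        .link_speeds = ETH_LINK_SPEED_AUTONEG,
        .rxmode = { .mq_mode = ETH_MQ_RX_NONE },
        .txmode = { .mq_mode = ETH_MQ_TX_NONE },
    };

    return port_conf;
}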

[EDIT] Update 2020/7/1

Running $RTE_SDK/examples/skeleton/build/basicfwd -l 1, I get the following:

EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
Port 0 MAC: 9c 69 b4 60 90 26
Port 1 MAC: 9c 69 b4 60 90 27
Core 1 forwarding packets. [Ctrl+C to quit]
recv pkts num: 1, port: 0
================= Ether header ===============
srcmac: 9C:69:B4:60:90:17
dstmac: 33:33:00:00:00:16
ethertype: 34525
This packet is IPv6
================= Ether header ===============
srcmac: 9C:69:B4:60:90:17
dstmac: 33:33:00:00:00:16
ethertype: 34525
This packet is IPv6
send 1 pkts, port: 1
recv pkts num: 1, port: 1
================= Ether header ===============
srcmac: 9C:69:B4:60:90:1C
dstmac: 33:33:00:00:00:16
ethertype: 34525
This packet is IPv6
================= Ether header ===============
srcmac: 9C:69:B4:60:90:1C
dstmac: 33:33:00:00:00:16
ethertype: 34525
This packet is IPv6
send 1 pkts, port: 0
recv pkts num: 1, port: 1
================= Ether header ===============
srcmac: 9C:69:B4:60:90:1C
dstmac: 33:33:00:00:00:16
ethertype: 34525
This packet is IPv6
================= Ether header ===============
srcmac: 9C:69:B4:60:90:1C
dstmac: 33:33:00:00:00:16
ethertype: 34525
This packet is IPv6
send 1 pkts, port: 0
...

It seems there is nothing wrong with either port. Strange!

[EDIT] Update 2020/7/2

After replacing rte_eth_link_get_nowait with rte_eth_link_get, the program works correctly.
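
For reference, the only change is inside the loop of app_ports_check_link shown above; a minimal sketch of the modified call (same variables as in the original code):

        /* rte_eth_link_get() waits for the driver to resolve the link state
         * (it may poll for a few seconds), while the _nowait variant only
         * returns the current snapshot. */
        memset(&link, 0, sizeof(link));
        rte_eth_link_get(port, &link);  /* was: rte_eth_link_get_nowait(port, &link) */
        RTE_LOG(INFO, USER1, "Port %u (%u Gbps) %s\n",
            port,
            link.link_speed / 1000,
            link.link_status ? "UP" : "DOWN");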

Following @Vipin Varghese's suggestion, I checked the port settings with ethtool DEVNAME and ethtool -a DEVNAME:

  1. DPDK server Port 1:
Settings for ens1f1:
Supported ports: [ FIBRE ]
Supported link modes:   10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes:  10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
  2. The other port:
Settings for ens1f0:
Supported ports: [ FIBRE ]
Supported link modes:   1000baseT/Full
10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes:  1000baseT/Full
10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
  3. All ports have the same pause parameters:
Autonegotiate:  off
RX:             on
TX:             on

But I am really confused:

  1. What is the difference between the current settings and the pause parameters? As you can see, auto-negotiation is on in the current settings but off in the pause parameters.
  2. What is the difference between rte_eth_link_get_nowait and rte_eth_link_get? The documentation does not make it clear why autoneg can make them behave differently.

Explanation:

  1. Running ethtool checks while the application is down is not a reliable method; depending on the DPDK version, rte_eth_dev_close or rte_cleanup will not leave the NIC in the correct state.
  2. But with the application running, if DPDK port 1 is down, the possible causes are:

a. The Server-3 port may be auto-negotiating with DPDK port 1, causing rte_eth_link_get_nowait to report it as DOWN. (The correct API to call is rte_eth_link_get.)
b. The Server-3 port may be manually configured to a non-full-duplex or non-10G mode.

The right way to debug this is to:

  • Put the DPDK ports back to the kernel, as suggested in the comments.
  • Cross-check the auto-negotiation and speed settings.
  • Configure the Server-1 and Server-3 ports with no auto-neg, 10G, full-duplex.
  • Bind Server-2 Port 0 and Port 1 to DPDK.
  • If possible, run the DPDK test pipeline with the NIC PCI addresses whitelisted.
  • Run ethtool -t for Port B on Server-3 to cross-check the results.

Note: this will help you determine whether the Server-3 port driver/firmware behaves differently with auto-neg, since the ports do send and receive packets successfully when examples/skeleton is run with $RTE_SDK/examples/skeleton/build/basicfwd -l 1.

[EDIT-1] Based on the update in the comments, it looks like rte_eth_link_get_nowait is the fast method and the right approach is to use rte_eth_link_get. Requested a live debug session with the author.

[EDIT-2] Based on the comments, rte_eth_link_get does the required job. As far as I remember, rte_eth_link_get waits for the actual readout of the physical device's registers, while rte_eth_link_get_nowait returns without waiting. Hence rte_eth_link_get gets populated with the correct values.
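
If the no-wait variant has to be kept, the usual alternative (modelled on the check_all_ports_link_status() helper used by the DPDK sample applications; the constants here are assumptions) is to poll it in a retry loop instead of failing on the first DOWN snapshot:

/* Sketch only: poll the no-wait link status until the link comes up or a
 * timeout expires. Requires <string.h>, <rte_ethdev.h> and <rte_cycles.h>. */
#define CHECK_INTERVAL_MS 100  /* assumed polling interval */
#define MAX_CHECK_COUNT    90  /* assumed: 90 * 100 ms = 9 s total */

static int
app_port_wait_link_up(uint16_t port)
{
    struct rte_eth_link link;
    uint32_t count;

    for (count = 0; count < MAX_CHECK_COUNT; count++) {
        memset(&link, 0, sizeof(link));
        rte_eth_link_get_nowait(port, &link);
        if (link.link_status == ETH_LINK_UP)
            return 0;                     /* link is up */
        rte_delay_ms(CHECK_INTERVAL_MS);
    }

    return -1;                            /* still down after the timeout */
}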
