Hi, all

I am hitting an issue when I set the dpdk rx queue number to larger than 1 (which also enables RSS) in startup.conf, for example:

dev default {
# Number of receive queues, enables RSS
# Default is 1
num-rx-queues 2
}

When VPP starts, it hits a segmentation fault; the error log is:

……
dpdk [debug ]: [0] interface dpdk_eth0 created
interface/rx-queue [debug ]: set_input_node: node dpdk-input for interface dpdk_eth0
interface/rx-queue [debug ]: register: interface dpdk_eth0 queue-id 0 thread 1
interface/rx-queue [debug ]: register: interface dpdk_eth0 queue-id 1 thread 2
dpdk [debug ]: [0] configuring device name: 0000:d8:00.0, numa: 1, driver: net_ice, bus: pci
dpdk [debug ]: [0] Supported RX offloads: vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip outer-ipv4-cksum vlan-filter vlan-extend scatter timestamp keep-crc rss-hash
dpdk [debug ]: [0] Configured RX offloads: ipv4-cksum scatter
dpdk [debug ]: [0] Supported TX offloads: vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum tcp-tso outer-ipv4-cksum qinq-insert multi-segs mbuf-fast-free outer-udp-cksum
dpdk [debug ]: [0] Configured TX offloads: ipv4-cksum udp-cksum tcp-cksum multi-segs
Segmentation fault (core dumped)

I think I have found the bad commit: ce4083ce48958d9d3956e8317445a5552780af1a (“dpdk: offloads cleanup”).

Has anyone else hit this issue? Is there any solution? Thanks!

Best Regards,

Xu Ting

Hi, Damjan

Thanks for your help; the backtrace from gdb is below (a file with the same content is attached for better formatting). I used commit ce4083ce48958d9d3956e8317445a5552780af1a (“dpdk: offloads cleanup”) to get this info. The previous commit, 3b7ef512f190a506f62af53536b586b4800f66c1 ("misc: fix the uninitialization error"), does not cause the error.

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x00007fff7211f958 in ice_sq_send_cmd_nolock (hw=0x0, cq=0x0, desc=0x0, buf=0x0, buf_size=0, cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_controlq.c:889
889 {
(gdb) bt
#0 0x00007fff7211f958 in ice_sq_send_cmd_nolock (hw=0x0, cq=0x0, desc=0x0, buf=0x0, buf_size=0, cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_controlq.c:889
#1 0x00007fff721434f9 in ice_sq_send_cmd (hw=0x7fd2bf7f9b00, cq=0x7fd2bf7fb5a0, desc=0x7fff6d361f40, buf=0x7fe2c025d000, buf_size=6, cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_controlq.c:1076
#2 0x00007fff721724bc in ice_sq_send_cmd_retry (hw=0x7fd2bf7f9b00, cq=0x7fd2bf7fb5a0, desc=0x7fff6d361f40, buf=0x7fe2c025d000, buf_size=6, cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_common.c:1415
#3 0x00007fff72180687 in ice_aq_send_cmd (hw=0x7fd2bf7f9b00, desc=0x7fff6d361f40, buf=0x7fe2c025d000, buf_size=6, cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_common.c:1474
#4 0x00007fff72181130 in ice_aq_alloc_free_res (hw=0x7fd2bf7f9b00, num_entries=1, buf=0x7fe2c025d000, buf_size=6, opc=ice_aqc_opc_alloc_res, cd=0x0) at ../src-dpdk/drivers/net/ice/base/ice_common.c:1810
#5 0x00007fff72181255 in ice_alloc_hw_res (hw=0x7fd2bf7f9b00, type=96, num=1, btm=false, res=0x7fff6d364452) at ../src-dpdk/drivers/net/ice/base/ice_common.c:1840
#6 0x00007fff72327d2c in ice_alloc_prof_id (hw=0x7fd2bf7f9b00, blk=ICE_BLK_RSS, prof_id=0x7fff6d3644ba "5r") at ../src-dpdk/drivers/net/ice/base/ice_flex_pipe.c:3305
#7 0x00007fff72348519 in ice_add_prof (hw=0x7fd2bf7f9b00, blk=ICE_BLK_RSS, id=17179875328, ptypes=0x7fe2c025ddbc "", attr=0x0, attr_cnt=0, es=0x7fe2c025dc90, masks=0x7fe2c025dd5a) at ../src-dpdk/drivers/net/ice/base/ice_flex_pipe.c:4980
#8 0x00007fff72364b71 in ice_flow_add_prof_sync (hw=0x7fd2bf7f9b00, blk=ICE_BLK_RSS, dir=ICE_FLOW_RX, prof_id=17179875328, segs=0x7fe2c025dec0, segs_cnt=1 '\001', acts=0x0, acts_cnt=0 '\000', prof=0x7fff6d368fb8) at ../src-dpdk/drivers/net/ice/base/ice_flow.c:2054
#9 0x00007fff7236574a in ice_flow_add_prof (hw=0x7fd2bf7f9b00, blk=ICE_BLK_RSS, dir=ICE_FLOW_RX, prof_id=17179875328, segs=0x7fe2c025dec0, segs_cnt=1 '\001', acts=0x0, acts_cnt=0 '\000', prof=0x7fff6d368fb8) at ../src-dpdk/drivers/net/ice/base/ice_flow.c:2371
#10 0x00007fff7238bd74 in ice_add_rss_cfg_sync (hw=0x7fd2bf7f9b00, vsi_handle=0, cfg=0x7fff6d369010) at ../src-dpdk/drivers/net/ice/base/ice_flow.c:3884
#11 0x00007fff7238beef in ice_add_rss_cfg (hw=0x7fd2bf7f9b00, vsi_handle=0, cfg=0x7fff6d3690b0) at ../src-dpdk/drivers/net/ice/base/ice_flow.c:3937
#12 0x00007fff724e6301 in ice_add_rss_cfg_wrap (pf=0x7fd2bf7fc7d0, vsi_id=0, cfg=0x7fff6d3690b0) at ../src-dpdk/drivers/net/ice/ice_ethdev.c:2792
#13 0x00007fff724e6457 in ice_rss_hash_set (pf=0x7fd2bf7fc7d0, rss_hf=12220) at ../src-dpdk/drivers/net/ice/ice_ethdev.c:2834
#14 0x00007fff724fc253 in ice_init_rss (pf=0x7fd2bf7fc7d0) at ../src-dpdk/drivers/net/ice/ice_ethdev.c:3102
#15 0x00007fff724fc369 in ice_dev_configure (dev=0x7fff746a0100 <rte_eth_devices>) at ../src-dpdk/drivers/net/ice/ice_ethdev.c:3131
#16 0x00007fff70d9c3e4 in rte_eth_dev_configure (port_id=0, nb_rx_q=8, nb_tx_q=5, dev_conf=0x7fff6d36ecc0) at ../src-dpdk/lib/ethdev/rte_ethdev.c:1578
#17 0x00007fff73e10178 in dpdk_device_setup (xd=0x7fff7c8f4f00) at /root/networking.dataplane.fdio.vpp/src/plugins/dpdk/device/common.c:156
#18 0x00007fff73e47b84 in dpdk_lib_init (dm=0x7fff74691f58 <dpdk_main>) at /root/networking.dataplane.fdio.vpp/src/plugins/dpdk/device/init.c:582
#19 0x00007fff73e459f4 in dpdk_process (vm=0x7fff76800680, rt=0x7fff76e191c0, f=0x0) at /root/networking.dataplane.fdio.vpp/src/plugins/dpdk/device/init.c:1499
#20 0x00007ffff6e7033d in vlib_process_bootstrap (_a=140735062407352) at /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1235
#21 0x00007ffff6d0ebf8 in clib_calljmp () at /root/networking.dataplane.fdio.vpp/src/vppinfra/longjmp.S:123
#22 0x00007fff6f66f8b0 in ?? ()
#23 0x00007ffff6e6fd5f in vlib_process_startup (vm=0x7fff76800680, p=0x7fff76e191c0, f=0x0) at /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1260
#24 0x00007ffff6e6b4fa in dispatch_process (vm=0x7fff76800680, p=0x7fff76e191c0, f=0x0, last_time_stamp=2826656650676320) at /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1316
#25 0x00007ffff6e6bdf5 in vlib_main_or_worker_loop (vm=0x7fff76800680, is_main=1) at /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1515
#26 0x00007ffff6e6e45a in vlib_main_loop (vm=0x7fff76800680) at /root/networking.dataplane.fdio.vpp/src/vlib/main.c:1728
#27 0x00007ffff6e6e242 in vlib_main (vm=0x7fff76800680, input=0x7fff6f66ffa8) at /root/networking.dataplane.fdio.vpp/src/vlib/main.c:2017
#28 0x00007ffff6ed02ce in thread0 (arg=140735181489792) at /root/networking.dataplane.fdio.vpp/src/vlib/unix/main.c:671
#29 0x00007ffff6d0ebf8 in clib_calljmp () at /root/networking.dataplane.fdio.vpp/src/vppinfra/longjmp.S:123
#30 0x00007fffffffc9f0 in ?? ()
#31 0x00007ffff6ecfdfe in vlib_unix_main (argc=59, argv=0x446500) at /root/networking.dataplane.fdio.vpp/src/vlib/unix/main.c:751
#32 0x0000000000406b23 in main (argc=59, argv=0x446500) at /root/networking.dataplane.fdio.vpp/src/vpp/vnet/main.c:342

Please tell me if any more info is needed.

Best Regards,
Xu Ting
-----Original Message-----
From: Damjan Marion <dmarion@...>
Sent: Friday, March 4, 2022 9:13 PM
To: Xu, Ting <ting.xu@...>
Cc: vpp-dev@...
Subject: Re: [vpp-dev] Segmentation fault when dpdk number-rx-queues > 1 in startup.conf


Dear Xu Ting,

The data you provided is not sufficient for us to help you; e.g., a backtrace may help us understand where the problem is.


Damjan



Hi, Damjan

I looked into the code. The bad commit is ce4083ce48958d9d3956e8317445a5552780af1a (“dpdk: offloads cleanup”) and the commit just before it is good, so I compared the two. Since they use the same DPDK version, I checked the inputs to the rte_* APIs.

I found that the direct trigger is configuring default RSS in DPDK. It is invoked from dpdk_device_setup() in the dpdk plugin, and the API function is rte_eth_dev_configure(). However, the bad commit and the good commit pass almost the same input to rte_eth_dev_configure(); the only difference is a Tx offload flag (TX_IPV4_CSUM), and I do not think that is the root cause because fixing it does not help. Since both commits pass the same input to the DPDK API, I do not think this is a DPDK issue.
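For reference, the call that crashes boils down to something like this (a minimal sketch only; the queue counts come from frame #16 of the backtrace and the RSS hash mask is illustrative, not taken from the actual VPP code):

#include <rte_ethdev.h>

/* Sketch of the path in the backtrace: dpdk_device_setup() ->
 * rte_eth_dev_configure() -> ice_dev_configure() -> ice_init_rss().
 * More than one RX queue selects RSS mode, which is what drives the
 * ice PMD into the RSS profile setup that faults. */
static int
configure_port_with_rss (uint16_t port_id)
{
  struct rte_eth_conf conf = { 0 };

  conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;           /* >1 RX queue => RSS */
  conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP; /* illustrative mask */

  /* frame #16: rte_eth_dev_configure (port_id=0, nb_rx_q=8, nb_tx_q=5, ...) */
  return rte_eth_dev_configure (port_id, 8, 5, &conf);
}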

There are a lot of flag and offload configuration changes in the (“dpdk: offloads cleanup”) commit, so is it possible that some flags are not correct? I have looked at the code in dpdk_lib_init(), but have not found the cause yet.
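One sanity check that comes to mind (again only a sketch, not the actual VPP code) is to mask every requested offload and RSS hash function by what the device actually reports, so that no unadvertised flag ever reaches the PMD:

#include <rte_ethdev.h>

/* Call before rte_eth_dev_configure(): drop anything the PMD did not
 * advertise in its device info. */
static void
mask_unsupported_offloads (uint16_t port_id, struct rte_eth_conf *conf)
{
  struct rte_eth_dev_info di;

  if (rte_eth_dev_info_get (port_id, &di) != 0)
    return;

  conf->rxmode.offloads &= di.rx_offload_capa;
  conf->txmode.offloads &= di.tx_offload_capa;
  conf->rx_adv_conf.rss_conf.rss_hf &= di.flow_type_rss_offloads;
}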

Do you have any suggestions? Thanks!

Best Regards,
Xu Ting

On 06.05.2022., at 11:33, Xu, Ting <ting.xu@...> wrote:

Do you have any suggestions? Thanks!
No. DPDK should not crash even if we are doing something wrong; it should return an error value.
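In other words, the expected contract is roughly the following (a sketch; try_configure is an illustrative wrapper, not VPP or DPDK code):

#include <rte_ethdev.h>

/* A bad configuration must come back as a negative errno
 * (-EINVAL, -ENOTSUP, ...), never as a SIGSEGV inside the PMD. */
static int
try_configure (uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
               const struct rte_eth_conf *conf)
{
  int rv = rte_eth_dev_configure (port_id, nb_rxq, nb_txq, conf);
  if (rv < 0)
    return rv; /* propagate the error instead of crashing */
  return 0;
}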


Damjan