DPDK: 5-tuple-swap multiprocess testpmd on a single port#4241
Conversation
Pull request overview
This PR adds a new DPDK test that validates multiprocess testpmd functionality using a 5-tuple-swap forwarding mode with a single port setup. The test involves two VMs: one VM runs both sender and receiver processes using DPDK multiprocessing, while the other VM acts as a forwarder that swaps MAC and IP addresses to return traffic back to the sender VM.
Changes:
- Introduces multiprocess DPDK support with new enums for process roles and forwarding modes
- Adds a generate_5tswap_run_info function to configure the 3-process test topology
- Updates command handling throughout to support multiple commands per node (List[str] instead of str)
- Adds a new test case verify_dpdk_testpmd_5tswap_gb_hugepages_netvsc to the test suite
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| lisa/microsoft/testsuites/dpdk/common.py | Adds DpdkMpRole and TestpmdForwardMode enums to support multiprocess contexts and different forwarding modes |
| lisa/microsoft/testsuites/dpdk/dpdktestpmd.py | Adds _generate_mp_arguments method and extends generate_testpmd_command to support multiprocess role configuration and custom core lists |
| lisa/microsoft/testsuites/dpdk/dpdkutil.py | Adds generate_5tswap_run_info function for 5-tuple-swap test setup, updates return types to support multiple commands per node, and modifies verify_dpdk_send_receive to handle multiple processes |
| lisa/microsoft/testsuites/dpdk/dpdksuite.py | Adds new test case for 5-tuple-swap multiprocess validation with 1GB hugepages |
Force-pushed from 8ff8754 to 49b60be
Force-pushed from 7b4ad83 to 2951c20
Force-pushed from 3cb0b8e to 088eca9
Force-pushed from ab260bc to d3d36ed
@mcgov it failed in canonical 0001-com-ubuntu-server-jammy 22_04-lts-gen2 latest. Please check whether it is a test case issue or an image issue.
Force-pushed from d3d36ed to f00a6ce
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
lisa/microsoft/testsuites/dpdk/dpdkutil.py:662
start_testpmd_concurrent now accepts multiple commands per DpdkTestResources, but _collect_dict_result stores results in output keyed only by the kit, so later command outputs will overwrite earlier ones for the same kit. If multi-process output is needed, change the output structure (e.g., Dict[DpdkTestResources, List[str]], or key by (kit, proc_index)), or explicitly document/rename the function to indicate only the last output per kit is retained.
def _collect_dict_result(result: Tuple[DpdkTestResources, str]) -> None:
    output[result[0]] = result[1]
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (1)
lisa/microsoft/testsuites/dpdk/dpdkutil.py:662
start_testpmd_concurrent now accepts a list of commands per test kit, but _collect_dict_result stores a single string per kit (output[result[0]] = result[1]). If more than one command is provided for the same kit, later-finishing tasks will overwrite earlier outputs non-deterministically, losing logs/results. Consider changing the output type to store a list per kit (or key by (kit, index)), and update the collector accordingly.
command_pairs_as_tuples: List[Tuple[DpdkTestResources, str]] = []
kits_and_commands = deque(node_cmd_pairs.items())
for kit_and_commands in kits_and_commands:
    kit, commands = kit_and_commands
    for command in commands:
        command_pairs_as_tuples += [(kit, command)]

def _collect_dict_result(result: Tuple[DpdkTestResources, str]) -> None:
    output[result[0]] = result[1]
adds handling for testpmd multiple processes. will be used for secondary process receiver for 5tswap test.
for kit_and_commands in kits_and_commands:
    kit, commands = kit_and_commands
    for command in commands:
        command_pairs_as_tuples.append((kit, command))
With multiple commands per DpdkTestResources, _collect_dict_result overwrites prior output for the same kit, losing logs/results from earlier processes. Consider changing output to Dict[DpdkTestResources, List[str]] (preferred) or appending/concatenating outputs under the same key so multi-process/multi-command runs preserve all outputs.
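A minimal sketch of the preferred fix, assuming a stand-in DpdkTestResources class (the real LISA class carries more state) and a hypothetical collect_results helper:

```python
from collections import defaultdict
from typing import Dict, List, Tuple


class DpdkTestResources:
    """Hypothetical stand-in for the real LISA test kit class."""

    def __init__(self, name: str) -> None:
        self.name = name


def collect_results(
    results: List[Tuple[DpdkTestResources, str]]
) -> Dict[DpdkTestResources, List[str]]:
    # Append each command's output under its kit instead of assigning,
    # so multi-process runs keep every process's log.
    output: Dict[DpdkTestResources, List[str]] = defaultdict(list)
    for kit, text in results:
        output[kit].append(text)
    return dict(output)
```

Keying by the kit but storing a list preserves arrival order of outputs while still letting callers look results up per node.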
@@ -589,7 +667,7 @@ def _run_command_with_testkit(
    return (testkit, testkit.testpmd.run_for_n_seconds(cmd, seconds))
With multiple commands per DpdkTestResources, _collect_dict_result overwrites prior output for the same kit, losing logs/results from earlier processes. Consider changing output to Dict[DpdkTestResources, List[str]] (preferred) or appending/concatenating outputs under the same key so multi-process/multi-command runs preserve all outputs.
existing_output = output.get(result[0])
if existing_output is None:
    output[result[0]] = result[1]
else:
    # concatenate outputs for the same kit to avoid losing earlier results
    output[result[0]] = f"{existing_output}\n{result[1]}"
 if multiple_queues:
-    if self.is_mana and mode == "txonly":
+    if self.is_mana and mode in ["rxonly", "5tswap"]:
This changes MANA queue selection behavior: previously txonly on MANA used 8 queues, but now txonly falls back to 4 queues. If existing perf/throughput baselines depend on 8-queue txonly, this is a regression. If the intent is to use 8 queues for all MANA modes (or specifically txonly), adjust the condition accordingly or document why txonly is excluded.
Suggested change:
-    if self.is_mana and mode in ["rxonly", "5tswap"]:
+    if self.is_mana and mode in ["rxonly", "5tswap", "txonly"]:
this was a bug, this change fixes it
mp_role=DpdkMpRole.PRIMARY_PROCESS,
num_procs=2,
proc_id=0,
core_list=[3, 7, 11, 17, 21],
The new 5tswap path hard-codes CPU IDs (including up to CPU 31). This makes the test fragile across VM sizes/NUMA layouts/CPU pinning and can fail even when min_core_count is met but CPU indexing/availability differs. Prefer deriving a core list from lscpu/available CPUs at runtime (and ideally keeping primary/secondary core allocations non-overlapping and NUMA-aware).
mp_role=DpdkMpRole.SECONDARY_PROCESS,
num_procs=2,
proc_id=1,
core_list=[1, 5, 9, 13, 19, 25, 29, 31],
The new 5tswap path hard-codes CPU IDs (including up to CPU 31). This makes the test fragile across VM sizes/NUMA layouts/CPU pinning and can fail even when min_core_count is met but CPU indexing/availability differs. Prefer deriving a core list from lscpu/available CPUs at runtime (and ideally keeping primary/secondary core allocations non-overlapping and NUMA-aware).
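One way to derive non-overlapping core lists at runtime instead of hard-coding CPU IDs; split_cores and its parameters are hypothetical, and os.sched_getaffinity(0) reflects the CPUs actually available to the process (honoring cgroup/affinity limits):

```python
import os
from typing import Iterable, List, Optional, Tuple


def split_cores(
    num_primary: int,
    num_secondary: int,
    available_cpus: Optional[Iterable[int]] = None,
) -> Tuple[List[int], List[int]]:
    """Return disjoint core lists for primary and secondary testpmd."""
    if available_cpus is None:
        # CPUs this process may actually run on.
        available_cpus = os.sched_getaffinity(0)
    # Reserve CPU 0 for the OS and housekeeping tasks.
    cores = sorted(set(available_cpus) - {0})
    needed = num_primary + num_secondary
    if len(cores) < needed:
        raise ValueError(f"need {needed} usable cores, found {len(cores)}")
    return cores[:num_primary], cores[num_primary:needed]
```

A NUMA-aware version would additionally filter the candidate CPUs to the NIC's NUMA node (e.g., from lscpu or /sys) before slicing.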
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
Comments suppressed due to low confidence (1)
lisa/microsoft/testsuites/dpdk/dpdkutil.py:662
start_testpmd_concurrent now accepts Dict[DpdkTestResources, List[str]] and flattens it, but _collect_dict_result stores results in output[result[0]], which will silently overwrite earlier outputs when a kit has multiple commands. Either restrict this helper to one command per kit (validate and fail fast), or change output to collect a list of outputs per kit (or key by (kit, index)) so multi-command runs don't lose data.
command_pairs_as_tuples: List[Tuple[DpdkTestResources, str]] = []
kits_and_commands = deque(node_cmd_pairs.items())
for kit_and_commands in kits_and_commands:
    kit, commands = kit_and_commands
    for command in commands:
        command_pairs_as_tuples.append((kit, command))

def _collect_dict_result(result: Tuple[DpdkTestResources, str]) -> None:
    output[result[0]] = result[1]
assert_that(sender_secondary_output).described_as(
    "Sender secondary process output was empty"
).is_not_empty()
snd_rx_pps = receiver.testpmd.get_mean_rx_pps()
In the 5tswap validation, snd_rx_pps is taken from receiver.testpmd.get_mean_rx_pps() even though this value is supposed to represent the sender’s secondary (rxonly) process. Also, the secondary process output is never parsed with process_testpmd_output, so any get_mean_rx_pps() call will be based on whichever output was parsed last (currently the sender primary txonly run), not the secondary rxonly run. Parse sender_secondary_output and compute RX PPS from that output (without overwriting the sender primary TX metrics), and use the sender’s testpmd instance rather than the receiver’s.
Suggested change:
def _get_sender_secondary_rx_pps() -> Decimal:
    rx_pps_values: List[Decimal] = []
    for line in sender_secondary_output.splitlines():
        if "rx-pps" in line.lower():
            match = re.search(r"([0-9]+(?:\.[0-9]+)?)", line)
            if match:
                rx_pps_values.append(Decimal(match.group(1)))
    assert_that(rx_pps_values).described_as(
        "Unable to parse RX PPS from sender secondary testpmd output"
    ).is_not_empty()
    return sum(rx_pps_values) / Decimal(len(rx_pps_values))

assert_that(sender_secondary_output).described_as(
    "Sender secondary process output was empty"
).is_not_empty()
snd_rx_pps = _get_sender_secondary_rx_pps()
I have good news, they have the same output.
def _collect_dict_result(result: Tuple[DpdkTestResources, str]) -> None:
    output[result[0]] = result[1]
Now that start_testpmd_concurrent expands to multiple (kit, command) pairs, this collector overwrites output[kit] for each additional command, losing earlier outputs. If multi-command concurrency is intended to be supported, consider changing output to store a list per kit (e.g., Dict[DpdkTestResources, List[str]]) or key by (kit, index/proc_id) so all outputs are retained.
    signal=constants.SIGINT,
    kill_timeout=receiver_timeout,
)
proc.wait_output("start packet forwarding")
receiver_processes.append(proc)
proc.wait_output("start packet forwarding") uses the default 300s timeout, which can cause the test to hang for up to 5 minutes even if the receiver process has already exited or been SIGINT’d by the surrounding timeout. Consider lowering the wait_output timeout (e.g., a small constant or derived from receiver_timeout) so failures surface quickly.
    signal=constants.SIGINT,
    kill_timeout=kill_timeout,
)
sender_processes.append(proc)
proc.wait_output("start packet forwarding")
proc.wait_output("start packet forwarding") for sender processes also uses the default 300s timeout, which can extend a failing run far beyond test_duration if testpmd never prints the marker. Use a tighter timeout aligned with test_duration/kill_timeout, or make the wait conditional on the process still running.
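A sketch of deriving a tighter marker-wait timeout from the surrounding limits instead of the 300 s default; the helper name, floor value, and the assumption that wait_output accepts a timeout parameter are all hypothetical:

```python
def marker_wait_timeout(
    test_duration: int, kill_timeout: int, default_timeout: int = 300
) -> int:
    # Never wait longer than the run itself is allowed to take, but keep
    # a small floor so slow startup doesn't race the marker check.
    return max(5, min(default_timeout, test_duration, kill_timeout))
```

Usage would look like proc.wait_output("start packet forwarding", timeout=marker_wait_timeout(test_duration, kill_timeout)), so a testpmd process that dies before printing the marker fails within the test budget rather than after five minutes.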
lisa/tools/meson.py
Outdated
    f"Could not configure {str(cwd)} with meson using args {args}"
),
)
# if result.exit_code != 0 and result.
There is an incomplete leftover comment (# if result.exit_code != 0 and result.) which looks like debugging scaffolding. Please remove it or replace it with complete logic, as-is it will likely fail style/lint checks and is confusing for maintenance.
# if result.exit_code != 0 and result.
force_run=True,
shell=True,
cwd=cwd,
expected_exit_code=0,
setup() enforces expected_exit_code=0 but no longer provides an expected_exit_code_failure_message. Restoring a message that includes the cwd and args would make meson configuration failures much easier to diagnose in logs.
Suggested change:
expected_exit_code=0,
expected_exit_code_failure_message=(
    f"meson setup failed in cwd '{cwd}' with args '{args}' "
    f"and build_dir '{build_dir}'"
),
lisa/tools/meson.py
Outdated
_result = self.run(
    f"{args} {build_dir}",
    force_run=True,
    shell=True,
    cwd=cwd,
setup() assigns the return of self.run(...) to _result but never uses it. This is dead code and will typically fail unused-variable linting; remove the assignment or use _result in subsequent logic.
Force-pushed from 31dac31 to 2ea1579
Adds a simpler forwarding test using 2 VMs and 1 port. This test uses multiprocess DPDK to create a sender and receiver process on one VM, and a forwarder process on the other. The forwarder sends all traffic back to the first VM by swapping the MAC and IP addresses.