It's good that you keep some spare parts on hand. Over the last few months I have seen several log snippets with a lot of noise around this message:
Code
May 31 11:08:04.570436 RPi5a kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
May 31 11:08:04.570750 RPi5a kernel: nvme nvme0: Does your device have a faulty power saving mode enabled?
May 31 11:08:04.570864 RPi5a kernel: nvme nvme0: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off" and report a bug
May 31 11:08:04.657093 RPi5a kernel: nvme 0000:01:00.0: enabling device (0000 -> 0002)
May 31 11:08:04.657277 RPi5a kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 31 11:08:04.660439 RPi5a kernel: nvme nvme0: 4/0/0 default/read/poll queues
May 31 11:08:04.670425 RPi5a kernel: nvme nvme0: Ignoring bogus Namespace Identifiers
Yes, on a normal desktop computer I wouldn't expect such messages regularly. But NVMe firmware isn't free of issues either. On the RPi5 I know of cases that could be fixed by following the recommendation and adding this to the kernel line in cmdline.txt:
pcie_aspm=off
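If you want to do that from a terminal, here is a minimal sketch. It assumes Raspberry Pi OS Bookworm, where the file lives at /boot/firmware/cmdline.txt (older releases used /boot/cmdline.txt). cmdline.txt must stay a single line, so the parameter is appended to the end of line 1:

```shell
# Sketch: append pcie_aspm=off to the kernel command line.
# Path assumes Raspberry Pi OS Bookworm; adjust for older images.
CMDLINE=/boot/firmware/cmdline.txt

# Keep a backup so you can revert if booting misbehaves
sudo cp "$CMDLINE" "$CMDLINE.bak"

# cmdline.txt must remain one line; this appends to line 1 only
sudo sed -i '1 s/$/ pcie_aspm=off/' "$CMDLINE"

sudo reboot
```

After the reboot you can verify with `cat /proc/cmdline` that the parameter is active.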
Maybe you don't need another cable in your current configuration. But you should give it a shot and check whether the message disappears after exchanging the cabling.