Ok, I'm quite sure now that this is some sort of timing issue.
I decided to play around with different options in the mount script, since I noticed luguber's mount | grep nfs output had some differences to mine:
/storage$ mount | grep nfs
192.168.1.200:/mnt/HD_a2 on /storage/NAS_1 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.200,mountvers=3,mountproto=tcp,local_lock=all,addr=192.168.1.200)
192.168.1.200:/mnt/HD_b2 on /storage/NAS_2 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.200,mountvers=3,mountproto=tcp,local_lock=all,addr=192.168.1.200)
So I amended the Options line in the NAS_1 script (reasoning that if NAS_1 then mounted and NAS_2 didn't, that would point to the missing options):
Options=vers=3,nolock,noacl,retry=2,timeo=900,noatime,proto=tcp,mountproto=udp
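For reference, here's roughly what the whole NAS_1 unit looks like with that line in place. This is only a sketch: the file location, the [Unit] dependencies and the [Install] section are my assumptions about a typical LE NFS mount unit; the Options= line is the only part I actually changed, and the What=/Where= values come from the mount output above.

# /storage/.config/system.d/storage-NAS_1.mount (location and dependencies assumed)
[Unit]
Description=NFS mount for NAS_1
Wants=network-online.target
After=network-online.target

[Mount]
What=192.168.1.200:/mnt/HD_a2
Where=/storage/NAS_1
Type=nfs
Options=vers=3,nolock,noacl,retry=2,timeo=900,noatime,proto=tcp,mountproto=udp

[Install]
WantedBy=multi-user.target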
Then I ran systemctl daemon-reload and rebooted. I also removed my 4 TB HDD attached to USB 3 on the RPi5 so it wouldn't keep starting and stopping during the testing.
It worked with both mount scripts, so I had access to both drives in the NAS from LE.
I then did a shutdown to re-attach my HDD and restarted - both scripts failed this time.
Tried another reboot and this time the NAS_2 script mounted but NAS_1 didn't! But when I went into an SSH session and ran "systemctl start storage-NAS_1.mount", I could then access NAS_1.
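Next time it fails at boot I'll check the unit's log to see what the NFS client actually reports - these are standard systemd commands, nothing LE-specific:

systemctl status storage-NAS_1.mount
journalctl -b -u storage-NAS_1.mount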
So, if I want to keep up with LE development, I'm going to have to do multiple reboots to get both NAS drives mounted unless I find a way around this timing issue.
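One workaround I may try is a matching automount unit, so the share only gets mounted on first access instead of during boot. That's standard systemd behaviour, but I haven't tested it on LE, so treat this as an untested sketch (file name/location assumed, and the .automount gets enabled instead of the .mount):

# /storage/.config/system.d/storage-NAS_1.automount (untested sketch)
[Unit]
Description=Automount for NAS_1

[Automount]
Where=/storage/NAS_1
# 0 = never unmount on idle once it has been triggered
TimeoutIdleSec=0

[Install]
WantedBy=multi-user.target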
P.S. my rpcinfo output:
/storage$ rpcinfo -p 192.168.1.200 |egrep "service|nfs"
program vers proto port service
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs