You are completely right.
However, in my mind (I might be wrong here), if I use another node, I wouldn't be using the RAID array fully.
While setting it up, I thought it was either:
In either case, the availability of my data would be pretty much the same, right?
(Then there are options to back up my PV to S3 with Longhorn and all that, which I would have to set up again though.)
Thanks for your answer!
Hello @theit8514
You are actually spot on ^^
I did look in my exports file, which looked like this:
/mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)
I added a localhost line just in case:
/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)
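Side note in case someone else reads this: after editing /etc/exports, the export table also needs a reload or the change isn't picked up. On a standard nfs-kernel-server setup that's roughly:

sudo exportfs -ra   # re-sync the active exports with /etc/exports
sudo exportfs -v    # list what is actually being exported, with options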
It didn't solve the problem, so I investigated with the mount command:
Will mount on 192.168.0.65:
mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
Will NOT mount on 192.168.0.55 (NAS):
mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
Will mount on 192.168.0.55 (NAS):
mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test
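If you want to see which source address a given mount is actually using (assuming a TCP NFS mount on the default port 2049), something like this shows it:

ss -tn | grep :2049   # established TCP connections involving the NFS port, with local and peer addresses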
The mount -t nfs 192.168.0.55 variant is the one the cluster actually runs, so I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that might resolve better.

EDIT:
It was actually WAY simpler.
I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
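For anyone landing here later: the exports list is matched against the connection's source address, and a mount of 192.168.0.55 issued on the NAS itself comes from 192.168.0.55, not 127.0.0.1, hence the missing entry. The working /etc/exports ends up looking roughly like this (your client list will differ; 192.168.0.55 is the NAS's own LAN address), followed by an exportfs -ra to reload:

/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.55(rw) 192.168.0.65(rw)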
Thanks a lot for your help, @theit8514@lemmy.world!