Hello!

I currently have a problem with my Kubernetes cluster.

I have 3 nodes:

  • 192.168.0.16
  • 192.168.0.65
  • 192.168.0.55

I use an NFS storage class (sigs/nfs-subdir-external-provisioner) to provision volumes from an NFS server.

The NFS server is actually set up on 192.168.0.55, which is also a worker node.
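For context, the provisioner is deployed the usual way from its Helm chart (the commands below follow the project README; the export path is an assumption based on my setup):

    helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
        --set nfs.server=192.168.0.55 \
        --set nfs.path=/mnt/DiskArray   # assumption: the export root, see below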

I noticed that I have problems mounting volumes when a pod is created on the 192.168.0.55 node. If it's on one of the other two, it mounts fine. (The error is actually a permission denied on the 192.168.0.55 node.)
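The permission denied shows up in the pod's events, with something like:

    kubectl describe pod <pod-name>   # check the Events section at the bottom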

I would guess that something goes wrong when kube tries to mount the NFS share since it's on the same machine?

Any idea how I can fix this? Cheers!

  • Burn1ngBull3t@lemmy.world (OP) · 5 months ago

    Hello @theit8514

    You are actually spot on ^^

    I did look in my exports file, which was like so:

        /mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)

    I added a localhost entry just in case:

        /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)
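    (Note that exports changes only take effect once they are reloaded, e.g.:)

        sudo exportfs -ra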

    It didn’t solve the problem. I went to investigate with the mount command:

    • Will mount on 192.168.0.65: mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test

    • Will NOT mount on 192.168.0.55 (NAS): mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test

    • Will mount on 192.168.0.55 (NAS): mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test
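    A quick way to compare what the server actually allows is showmount, which lists the exports and their permitted clients:

        showmount -e 192.168.0.55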

    The mount -t nfs 192.168.0.55 form is the one the cluster actually runs. So I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that resolves differently there (see the sketch below).
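    For the hostname idea, one untested option would be a name that resolves differently on each node via /etc/hosts (nas-nfs here is a made-up name), and then pointing the provisioner's nfs.server at that name:

        # on the NAS node (192.168.0.55): the name resolves to localhost
        echo "127.0.0.1 nas-nfs" | sudo tee -a /etc/hosts
        # on the two other nodes: the name resolves to the NAS
        echo "192.168.0.55 nas-nfs" | sudo tee -a /etc/hosts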

    EDIT:

    It was actually WAY simpler.

    I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
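    So the full line now looks roughly like this (followed by another sudo exportfs -ra):

        /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw) 192.168.0.55(rw)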

    Thanks a lot for your help, @theit8514@lemmy.world!