• 0 Posts
  • 11 Comments
Joined 1Y ago
Cake day: Aug 02, 2023


In days past some drive vendors used different sector layouts for their drives, which could cause issues with RAID. Pretty sure most drives nowadays use the same layout and you won’t run into any issues. I still look to get the same drive model anyway, just to be perfectly sure there are no issues.

Even then you may run into weird issues. For example, one of my 1.2 TB enterprise SSDs was reporting 1.12 TiB rather than the 1.09 TiB the other 7 drives reported. TrueNAS refused to build a vdev with that drive and I had to return it for a new one.
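If you want to compare what the drives actually report before building a vdev, something like this works (the device names are placeholders for your disks):

# Exact reported size in bytes for each whole disk
lsblk -b -d -o NAME,SIZE,MODEL /dev/sd?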


Typically a fiber ISP will run fiber optics only to your demarc (demarcation point). This is usually where your main cable line (before any splits) or DSL line used to come in (in the US they’ve been using orange conduit to mark this, and it will usually run to a panel in some closet or laundry room). At the demarc they’ll install one of two things: a basic fiber-to-ethernet converter, which gives you a single ethernet port and a pure tap to the internet, or a gateway device that converts the fiber to multiple ethernet ports with NAT (usually providing other capabilities like TV, phone, etc.).

If you have the latter, you may not get much say in what you can do with your connection, and you would be limited to a DMZ mode configured on the gateway. What you put behind the converter or gateway is up to you.


I was thinking this too: if you have an open-ended x4 slot it can fit an x16 card, but the card will only run at x4.

TBH if you’re running 10Gb you may want to look for a board with on-board 10Gb rather than a PCIe card, which will save you the slot. My HP server has a swappable daughter board for the NIC, so you can choose 4x1Gb or 4x10Gb.
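If you do end up with an add-in card, you can check what link width it actually negotiated (needs root for the full output; the grep just filters the link status lines):

# LnkCap is what the card supports, LnkSta is what it negotiated (e.g. x4 in an x4 slot)
lspci -vv | grep -E "LnkCap|LnkSta"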


For the disks, you may have a small issue with having multiple types of disks in a single RAID10, as those disks might have slightly different physical attributes. ZFS is an option here: you can create two vdevs, one per drive type, and add them to the same zpool, which effectively creates the RAID10 you’re looking for. You would typically not use LVM on top of ZFS, but if you go with a traditional RAID10 instead, LVM would let you create logical partitions that can be expanded easily at a later time.

Another ZFS option is to use RAIDZ1 with the 4 disks in a single vdev. The vdev uses the equivalent of one disk’s worth of space, spread across all disks, to maintain parity, so you would have 12TB of usable storage out of your 16TB raw. This allows you to lose one drive with no data loss.
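As a rough sketch, assuming the four disks show up as /dev/sda through /dev/sdd and using “tank” as a placeholder pool name, the two layouts look like this:

# Two mirrored vdevs striped together (the RAID10-style layout), pairing like disks
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Or a single RAIDZ1 vdev across all four disks (one disk's worth of parity)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd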


Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and jumbo frames turned on (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well.
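If you do go the jumbo frames route, here’s a minimal sketch on a Linux host (the interface name eth0 and the target IP are placeholders for your storage NIC and iSCSI target):

# Bump the MTU on the storage-facing interface
ip link set dev eth0 mtu 9000
# Verify the whole path passes 9000-byte frames: 8972 payload + 28 bytes of IP/ICMP headers
ping -M do -s 8972 192.168.1.100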

For Windows, I find the iSCSI support to be very lacking. Every time I have used it I have had sporadic loss of connectivity, failures to mount on boot, and other issues. I would avoid it.

For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same as with actual FC LUNs or NFS mounts, and I’ve had no issues with reliability. There’s also RDM (raw device mapping), which can mount the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.


I believe ZFS works best when it has direct access to the disks, so putting it on top of an md device is not best practice. I’m not sure how well ZFS handles external disks either, but that is something to consider. As for the drive sizes and redundancy, each size should have its own vdev, so you should be looking at a mirror vdev of the 2x6TB and a mirror vdev of the 2x12TB for maximum redundancy against drive failure, totaling 18TB usable in your pool. Later on, if you need to add more space, you can create new vdevs and add them to the pool.
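To illustrate the expansion path, assuming a pool named “tank” and two new disks showing up as /dev/sde and /dev/sdf (all placeholders):

# Grow the pool later by adding another mirror vdev; data stripes across vdevs
zpool add tank mirror /dev/sde /dev/sdf
# Confirm the layout and the new capacity
zpool status tank
zpool list tank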

If you’re not worried about redundancy, then you could bypass ZFS and just set up a RAID-0 through mdadm, or add the disks to an LVM VG to use all the capacity, but remember that you might lose the whole volume if a disk dies. Keep in mind that this includes accidentally unplugging an external disk.
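Rough sketches of both options, with placeholder device names for the four disks:

# mdadm RAID-0 across all four disks (no redundancy at all)
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0

# Or pool them with LVM instead
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate data /dev/sda /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n storage data
mkfs.ext4 /dev/data/storage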


It could be trying to mount it over loopback instead of by IP. What does your exports file look like? Can you do the mount manually from 192.168.0.55?
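For reference, this is roughly what I’d expect to see working; the export path and the server’s IP are placeholders for whatever yours actually are:

# /etc/exports on the NFS server: allow the client by IP rather than localhost only
/export/share  192.168.0.55(rw,sync,no_subtree_check)
# Reload exports after editing
exportfs -ra

# Then from 192.168.0.55, try mounting by the server's IP
mount -t nfs <server-ip>:/export/share /mnt/test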



I would start by testing whether you can resolve acme-v02.api.letsencrypt.org from the PiHole, and if not, see what you need to unblock to allow it.
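A quick way to check, run on the Pi-hole host itself (dig is assumed to be installed; nslookup works too):

# Ask the Pi-hole resolver directly
dig @127.0.0.1 acme-v02.api.letsencrypt.org
# Compare against a public resolver to confirm whether Pi-hole is the one blocking it
dig @1.1.1.1 acme-v02.api.letsencrypt.org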


Based on your update, you may need to bring the containers down and back up to fix the database.

Sometimes when opening LinguaCafe for the first time there is an error message about the users database table. If this happens, just stop and start your containers again; it should fix the problem.

docker compose down
docker compose up -d

Since you’ve probably been using the SMB protocol to access the NAS, you need to understand a few things about NFS, which functions differently. An NFS mount acts like a mapping of the entire filesystem rather than a session for a specific user, so if there are differences between the two systems you may get access errors. For example, the default user in Synology has a UID of 1024, but most client systems default to 1000. This means your user may not have access to the share or its files even if you have it mounted on the client.

One thing to check is what your shared folder’s NFS squash setting is. This is found in Control Panel > Shared Folder, under the NFS permissions tab. If it’s set to “no mapping” then UIDs must match. The easiest setup is “map all users to admin”, but you may run into issues with that later if you switch back to SMB, since new files will be owned by admin.
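To see whether the UIDs actually line up, a couple of quick checks on the client (the mount point /mnt/nas is a placeholder for wherever you mounted the share):

# Your UID on the client (Synology's default first user is typically 1024)
id
# Numeric owners of files on the share; if they don't match your UID, that's the mismatch
ls -ln /mnt/nas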