• ready-to-use, comfortable ZFS storage appliance for iSCSI/FC, NFS and SMB
  • Active Directory support with snapshots as Windows Previous Versions
  • user-friendly web GUI that includes all functions of a sophisticated NAS or SAN appliance
  • commercial use allowed
  • no capacity limit
  • free download for end users

  • individual support and consulting
  • increased GUI performance / background agents
  • bugfixes, updates and access to bugfix releases
  • extensions like comfortable ACL handling, disk and realtime monitoring, or remote replication
  • appliance disk map, security and tuning (Pro complete)
  • redistribution/bundling/setup on customer demand (optional)
Please request a quotation.
Details: Featuresheet.pdf

Ready-to-use ESXi VM (.ova template)

1. Download the .ova template
This is a ready-to-use napp-it ZFS server VM.

2. Current edition: ESXi ZFS appliance with OmniOS 151028 for napp-in-one on ESXi 6.7U1

Napp-in-One offers the most options and flexibility. Download ESXi 6.5 free without RAM limit from  (ESXi is an ultra-lightweight hypervisor: think of it as a super-deluxe BIOS extension where you can run supported operating systems like BSD, OSX, Linux, Solaris or Windows side by side, each with best performance). You should own a modern mainboard with passthrough support (vt-d) to virtualize your NAS/SAN with performance similar to a barebone setup, or you can try a physical RDM (Raw Device Mapping) setup.

Install the free ESXi edition, then enable pass-through support in your mainboard BIOS and in ESXi. Download the zipped, preconfigured VM, unzip it and upload the VM folder via the ESXi file browser to a local ESXi datastore. Import the VM in ESXi 6.5 via the menu Virtual Machines > Create/Register, or in older ESXi via its file browser and a right-click on the .vmx file within the folder. Add your SAS controller as a PCI device via pass-through to your OmniOS SAN VM and boot it up. It is ready to run with 5.1/5.5 VMware Tools, two NICs (e1000 and vmxnet3 in DHCP mode) and the root password unset. Run ifconfig at the console to get the current IP and manage the appliance via browser at http://ip:81.
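The console step above boils down to reading the DHCP-assigned address from the ifconfig output. A minimal sketch of what to look for, using canned example output (the interface name and address shown are hypothetical; on the real appliance you would simply run `ifconfig -a` at the OmniOS console):

```shell
# Hedged sketch: parse example ifconfig output to find the appliance IP.
# On a real napp-in-one VM, run `ifconfig -a` directly at the console instead.
sample_output='vmxnet3s0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500
        inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255'

# The IPv4 address is the token after "inet":
ip=$(printf '%s\n' "$sample_output" | awk '/inet /{print $2; exit}')
echo "Manage the appliance at: http://$ip:81"
```

With the example output above, this prints `Manage the appliance at: http://192.168.1.100:81`; port 81 is the napp-it web GUI mentioned in the text.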

In a larger ESXi environment, you can use several Napp-In-Ones, where each ESXi host uses its own local high-speed ZFS SAN. No single point of failure and no SAN bottleneck. You can use Napp-In-One datastores locally or remotely, and you can use combined vMotion and Storage vMotion to move VMs to another machine.

Download a ready-to-use napp-it ZFS server VM based on the current OmniOS stable.
For end users this is free, even commercially. Resellers need a bundling license.
Please discuss problems at

A lot of ESXi users keep their VMs on a dedicated shared SAN storage server. For high performance, low latency and reliability, a redundant FC or IP network is used to connect them, together with a second SAN storage server for redundancy with clustering/high availability. But quite often you do not need such a complex and expensive high-end solution. You only want shared storage with high-end SAN features, high performance and low latency between ESXi and storage. Your ESXi servers are not working at capacity. You have enough CPU and especially RAM left for another VM.

This is where Napp-In-One can be the solution. Napp-In-One means integrating the two functions, ESXi server and ZFS NAS/SAN storage server, into one box and connecting them in software. You can reach nearly the same performance as with two dedicated servers (same RAM on the storage server) connected via high-end 10 GbE networks. Usually you keep all VMs on their 'local' SAN storage. Each Napp-In-One works completely independently of the others (the SAN is no longer a single point of failure), while you have all options like flexible storage allocation, booting a VM from another SAN, and moving, cloning and backing up between your SANs.
If you have enough free disk bays in your Napp-In-One (highly recommended), you can even move a pool physically to another Napp-In-One in case of problems or planned updates, import the pool there, and start the VMs there or from there. To move VMs you need combined vMotion + Storage vMotion.

You need:
- ESXi server not loaded to capacity limits
- Mainboard with hardware virtualisation, vt-d (Intel) or IOMMU (AMD), for high-performance storage access
- SATA plus one or more storage controllers (example: onboard SATA + LSI HBA SAS controller)

Napp-In-One manual

There are problems with Intel Optane NVMe pass-through.
A workaround is to add the Optane 900P device ID to

Enable SSH on ESXi and log in via WinSCP as user root

- edit /etc/vmware/
- add the following lines at the end of the file:

# Intel Optane 900P
8086 2700 d3d0 false

- restart the hypervisor
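The edit above can also be done non-interactively from the ESXi SSH session. A minimal sketch, assuming the target file (whose path is shown truncated in the text above) is supplied via the placeholder variable PASSTHRU_FILE, and assuming the four fields are vendor ID, device ID, reset method and shareable flag, as in the line given in the text:

```shell
# Hedged sketch: append the Optane 900P pass-through entry.
# PASSTHRU_FILE is a placeholder for the demo; substitute the real file
# under /etc/vmware/ referenced (truncated) in the text above.
PASSTHRU_FILE="${PASSTHRU_FILE:-/tmp/passthru.demo}"

# Fields as given in the workaround: vendor-id device-id reset-method flag
printf '%s\n' '# Intel Optane 900P' '8086 2700 d3d0 false' >> "$PASSTHRU_FILE"

# Show what was added; on a real host you would then restart the hypervisor.
tail -n 2 "$PASSTHRU_FILE"
```

Using >> keeps any existing entries intact, matching the instruction to add the lines at the end of the file.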

napp-it 27.02.2024