A tower a little over twice as tall as it is wide and just over half as deep as it is tall (e.g., roughly 8” wide × 17” tall × 9” deep), for small to medium businesses.
A 2U rack server for medium to large businesses.
For datacenters, prefer converged infrastructure, disaggregated HCI, and/or composable infrastructure, with blades for compute and storage nodes in 4U rack enclosures; ideally hyperscale. EDR, FDR, or HDR InfiniBand (or the successor fabric Gen-Z) for the fabric interconnect, plus 10 Gbit/s Fibre Channel over Ethernet and PCI Express expansion. Half-height blades ~8” high, full-height ones ~15” high, both single-width at about 2” wide. A multi-layer enterprise managed switching system of stackable 1U switches, and 2U or 4U rack servers for management.
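For reference, the commonly quoted aggregate data rates of 4x links for the InfiniBand generations named above; a minimal Python sketch (the figures are the nominal rates, not measured throughput):

```python
# Nominal 4x-link data rates for the InfiniBand generations named above.
# These are the commonly quoted aggregate figures, not benchmarks.
IB_4X_GBPS = {"FDR": 56, "EDR": 100, "HDR": 200}

for gen, rate in IB_4X_GBPS.items():
    print(f"{gen}: {rate} Gbit/s per 4x link")
```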
A converged infrastructure of compute, networking, and storage would occupy up to 42U (a full standard rack).
For rack servers and each full-height blade: x86 or RISC (ARM, Power Architecture, MIPS, RISC-V, etc.) processors, with four sockets, each holding eight to 24 dual-threaded cores, for a total of 32-96 cores and 64-192 threads. Quad-channel memory. Symmetric multiprocessing with SIMD.
For half-height blades: the same x86 or RISC processors, with two sockets and the same number of threads per core and cores per socket, for a total of 16-48 cores and 32-96 threads; see the arithmetic sketch below.
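As a quick check of the core and thread totals above, a minimal Python sketch (socket counts and the 8-24 cores-per-socket range taken straight from the two specs; nothing else assumed):

```python
# Core/thread totals for the blade specs above.
# Cores are dual-threaded per the spec, so threads_per_core = 2.

def totals(sockets, cores_per_socket_range, threads_per_core=2):
    lo, hi = cores_per_socket_range
    cores = (sockets * lo, sockets * hi)
    threads = (cores[0] * threads_per_core, cores[1] * threads_per_core)
    return cores, threads

# Rack server / full-height blade: four sockets, 8-24 cores each.
print(totals(4, (8, 24)))   # ((32, 96), (64, 192)) -> cores, threads
# Half-height blade: two sockets, same per-socket counts.
print(totals(2, (8, 24)))   # ((16, 48), (32, 96))
```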
Dual in-line memory modules (DIMMs) for main memory.
Hybrid flash array for storage. Local filesystems: for Windows, ReFS root and NTFS for /Users; for Linux, Reiser4 or ext4 for /boot, Btrfs or OCFS2 for root, and XFS or ext4 for /home; for FreeBSD, ZFS root and UFS2, ext4, or XFS for /home. Volume management: Veritas Volume Manager for Windows, Unix, and Linux, or LVM2 for Linux and Storage Spaces for Windows. Shared-disk file system: Dell EMC MPFSi (preferred), or Quantum StorNext, CXFS, or VxCFS (less preferred). Distributed file systems, from most to least preferred: Scality RING, Infinit.sh, LizardFS, TerraGrid, OpenIO, or Quobyte. Parallel filesystem: either GPFS or OrangeFS. Hadoop, MapReduce, and S3 support desired. CTERA for the federated file system.
Preferred cluster software: Univa Grid Engine, Moab Cluster Suite, Platform LSF, or PBS Professional.
Cloud infrastructure, from most to least preferred: OpenNebula, Apache CloudStack, OpenStack, or Eucalyptus.
Operating system: Linux (Red Hat Enterprise Linux, SUSE Linux, Oracle Linux, or Novell Open Enterprise Server) or Windows Server (2000 Advanced Server; 2003 R2 Enterprise for Itanium; 2008 R2 Standard or Enterprise; or 2016 or later, Standard or Datacenter). FreeBSD or a derivative for network-attached storage.
Hypervisor, from most to least preferred: VMware vSphere, Xen, or QEMU-KVM.
Enterprise servers, including the management servers for the converged infrastructure, disaggregated HCI, and/or composable infrastructure mentioned earlier in this post, should be in 4U or 8U rack form, with ccNUMA or object-page distributed shared memory (OPDSM) across four or eight hot-plug compute trays, each holding a pair of CPUs sharing DIMMs. Each CPU pair has shorter access time to its own node's memory and longer access time to other nodes' memory. Enterprise servers should also have hybrid-flash storage with NVMe SSDs and RAID 6 and/or RAID 10 arrays of SAS HDDs.
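A minimal sketch of the local-versus-remote asymmetry just described, modeled in Python along the lines of an ACPI SLIT-style distance table; the latency figures are illustrative assumptions, not vendor numbers:

```python
# Toy model of ccNUMA access cost: each node reaches its own memory
# faster than remote memory, as described above. The two latency
# values are illustrative assumptions, not measured figures.

LOCAL_NS, REMOTE_NS = 90, 150

def access_latency(cpu_node: int, mem_node: int) -> int:
    """Return the modeled memory access latency in nanoseconds."""
    return LOCAL_NS if cpu_node == mem_node else REMOTE_NS

# Eight hot-plug trays -> eight NUMA nodes; node 0 touching each node:
for mem in range(8):
    print(f"node 0 -> node {mem}: {access_latency(0, mem)} ns")
```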
A large high-end server may be designed like the Superdome 2-16s, but with twice the number of CPUs in half the height of the blade system, and with facilities for InfiniBand as well as Fibre Channel over Ethernet. It would of course have inter-blade ccNUMA or OPDSM, with the CPUs of each node sharing memory, plus several NVIDIA Quadro GPUs in an Optimus arrangement. It may also have direct-attached hard disk drives and solid-state storage, relying only partly on external disk enclosures.
MOESIF and directory-based coherency are preferred for NUMA. Prefer a segmented directory or, better yet, a hybrid of limited-pointer and linked-list schemes; alternatively, replicated partial directory, shared-documents (RPD-SD).
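A minimal sketch of the hybrid limited-pointer/linked-list directory scheme named above, assuming a made-up inline-slot count K; real directories live in hardware, so this only illustrates the overflow behavior:

```python
# Hybrid directory sketch: each entry tracks up to K sharers with
# inline pointers (limited-pointer scheme) and, once those fill,
# chains further sharers into a linked list. K is an assumption.

K = 4  # inline pointer slots per directory entry (illustrative)

class DirectoryEntry:
    def __init__(self):
        self.pointers = []      # up to K sharer node IDs held inline
        self.overflow = None    # linked-list head once pointers fill

    def add_sharer(self, node_id: int) -> None:
        if node_id in self.sharers():
            return
        if len(self.pointers) < K:
            self.pointers.append(node_id)
        else:                   # limited pointers exhausted: chain it
            self.overflow = (node_id, self.overflow)

    def sharers(self):
        ids = list(self.pointers)
        link = self.overflow
        while link is not None:
            node_id, link = link
            ids.append(node_id)
        return ids

    def invalidate_all(self):
        """Nodes to send invalidations to on a write (then reset)."""
        targets = self.sharers()
        self.pointers, self.overflow = [], None
        return targets

entry = DirectoryEntry()
for node in (0, 3, 5, 7, 9, 12):    # six sharers overflow K=4 slots
    entry.add_sharer(node)
print(entry.sharers())               # [0, 3, 5, 7, 12, 9]
```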
For NUMA, have a dual-plane system of an extended generalized fat tree plus, from most to least preferred, a generalized exchanged hypercube, folded crossed cube, or locally twisted cube. For massively parallel processing, have a 3D or 6D folded and/or twisted torus.
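To make the wiring concrete, a minimal Python sketch of neighbor computation in a plain binary hypercube and a k-ary torus; the folded, crossed, twisted, and exchanged variants named above add or redirect links on top of this base pattern:

```python
# In an n-dimensional binary hypercube, node i links to i XOR (1 << d)
# for each dimension d; in a k-ary torus, each coordinate wraps mod k.

def hypercube_neighbors(node: int, n: int):
    """Neighbors of `node` in an n-dimensional binary hypercube."""
    return [node ^ (1 << d) for d in range(n)]

def torus_neighbors(coord, k: int):
    """Neighbors of `coord` (a tuple) in a k-ary torus of any dimension."""
    out = []
    for d in range(len(coord)):
        for step in (-1, 1):
            nbr = list(coord)
            nbr[d] = (nbr[d] + step) % k   # wraparound link
            out.append(tuple(nbr))
    return out

print(hypercube_neighbors(0b0101, 4))   # [4, 7, 1, 13]
print(torus_neighbors((0, 0, 0), 4))    # 6 neighbors in a 3D torus
```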
I/O: 10 Gbit/s or faster Ethernet, eSATAp, SuperSpeed USB, and other I/O.
DRAM could be replaced with T-RAM, SRAM with Z-RAM, solid-state drives with compound-semiconductor memory (e.g., UltraRAM), and HDDs with racetrack memory. Firmware storage should then be charge-trap NOR flash.