To support this point, here is a quote from the NetApp documentation:
When a system doesn’t have enough disk drives to accept the write workload, writes may have to start running synchronously with disk operations. This means that instead of RAM speeds, the system has to write at disk speeds.
This is not a limitation of NVRAM, and a larger NVRAM would not fix it. More NVRAM and cached data would only lengthen the time before the flood of write requests overflows the buffer. The real limitation in these scenarios is the bandwidth, in IOPS or in bytes/sec, that the HDDs can provide: if the disks cannot ingest the data quickly enough, this situation occurs. Sizing the number of disks appropriately for the workload gives the system enough bandwidth, which can mean provisioning more disks than the capacity requirement alone would call for. That is, the number of disks is sized for performance, so that the spindles provide enough I/O capacity to handle the work.
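A minimal back-of-the-envelope sketch in Python illustrates both claims: that more NVRAM only postpones the overflow, and that disk count should be sized for performance rather than capacity. All figures here (NVRAM size, ingest and drain rates, per-disk IOPS and capacity) are illustrative assumptions, not NetApp specifications.

```python
from math import ceil, inf

def nvram_overflow_seconds(nvram_gib, ingest_mib_s, drain_mib_s):
    """Seconds until a sustained write burst overflows NVRAM.

    If the disks can drain at least as fast as writes arrive,
    the buffer never fills and the answer is infinite.
    """
    surplus = ingest_mib_s - drain_mib_s  # MiB/s accumulating in NVRAM
    if surplus <= 0:
        return inf
    return nvram_gib * 1024 / surplus

def disks_required(workload_iops, capacity_tib, disk_iops, disk_tib):
    """Disk count sized for both performance and capacity.

    Whichever requirement needs more spindles wins; with a heavy
    write workload that is usually the performance term.
    """
    for_performance = ceil(workload_iops / disk_iops)
    for_capacity = ceil(capacity_tib / disk_tib)
    return max(for_performance, for_capacity)

# Hypothetical numbers: doubling NVRAM from 8 GiB to 16 GiB only
# doubles the time before writes go synchronous with the disks.
print(nvram_overflow_seconds(8, 900, 500))   # ~20.5 s
print(nvram_overflow_seconds(16, 900, 500))  # ~41.0 s

# A 50k-IOPS workload on ~200-IOPS HDDs needs 250 spindles, even
# though 40 TiB of data would fit on just five 8 TiB drives.
print(disks_required(50_000, 40, 200, 8))    # 250
```

The second function makes the article's sizing rule concrete: the performance term (250 disks) dwarfs the capacity term (5 disks), which is exactly the case where a system ends up with more disks than its raw capacity needs would suggest.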