TheinsanegamerN Well then, please explain to us why, if the drives in the NAS are dramatically faster than 125 MB/s, aggregate connections wouldn't work better, since the 1 Gbit Ethernet is the bottleneck?

As I said, when you have a server accessed by multiple clients, you gain a benefit on the server side, since it can push out more data to those clients in aggregate. A single client, however, reaps no benefit, even if it's connected to the switch at 10 Gbps, as in your case: you still won't see more than 1 Gbps per client. If you instead use, say, four clients, then in theory each client gets 1 Gbps of dedicated bandwidth to the server in your scenario, rather than all four clients sharing a single 1 Gbps link. The reason is that packets belonging to a single flow aren't split across multiple network cards, at least not in the case of LACP, which is the most common type of link aggregation. This might provide some more detailed information for you: serverfault.com/questions/569060/link-aggregation-lacp-802-3ad-max-throughput/

As a side note, I tested this myself some 5-6 years ago, using a Synology NAS, a switch that supported LACP, and a pair of Intel NICs that supported bonding. I saw no perceivable performance improvement for a single machine, but when I connected a second PC to the switch, the first PC's performance didn't drop while both machines were accessing the NAS.
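To illustrate the "packets aren't split across links" point, here's a toy Python sketch of the flow-hashing idea behind LACP. The MAC addresses and the hash function are made up for illustration; real switches use vendor-specific hashes over MAC/IP/port fields, but the principle is the same: one flow always maps to one member link.

```python
def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Deterministically map a flow (src, dst) to one member link,
    as an LACP-style hash policy would. Illustrative only."""
    return hash((src_mac, dst_mac)) % num_links

NAS = "00:11:32:aa:bb:cc"  # hypothetical NAS MAC address
clients = [f"00:11:32:00:00:{i:02x}" for i in range(4)]

# A single client's flow always hashes to the SAME link, so it is
# capped at that one link's 1 Gbps no matter how many links are bonded:
links_used_by_one_client = {pick_link(clients[0], NAS, 2) for _ in range(1000)}
print(len(links_used_by_one_client))  # always 1

# Multiple clients, being distinct flows, can land on different links
# and so use the bond's aggregate capacity:
links_used_by_all = {pick_link(c, NAS, 2) for c in clients}
print(links_used_by_all)
```

That's why the second PC in my test didn't slow down the first one: each client/NAS pair is its own flow, hashed (hopefully) onto a different member link.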