@RUMC - Thanks for the additional feedback. The layout below (click for full size) makes maximum use of what you already have, adds as little cost and complexity as possible, and positions you for future upgrades. Items in black stay the same; items in green are new or optional.
Physically Unchanged: AT&T modem, GS1900 switches, PoE+ switches, IP cams, and host connections (server, NASes, PC, etc.)
Physical Changes (~$300 of gear, <$100 of Cat6 + connectors):
1) Router Swap -- The AC86U (R1) is replaced with a VLAN-capable wired router, specifically one offering SQM to eliminate bufferbloat on the 10/1 DSL link. I'd recommend the ~$60 Ubiquiti ER-X, running either its native Smart Queue QoS (fq_codel + HTB) or a back-port of CAKE, tuned to the link.
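As a rough sketch of the Smart Queue setup, assuming the ER-X's WAN is on eth0 (adjust the interface and rates to your actual sync speeds; shaping a bit below line rate keeps the queue in the router where fq_codel can manage it):

```
configure
# Shape slightly below the 10/1 sync rate so bufferbloat forms here, not in the modem
set traffic-control smart-queue WAN wan-interface eth0
set traffic-control smart-queue WAN download rate 8500kbit
set traffic-control smart-queue WAN upload rate 850kbit
commit ; save
```

After enabling it, a dslreports/waveform bufferbloat test before and after will tell you if the rates need nudging up or down.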
2) Topology Changes -- R1 should be wired to the GS1900 core switch (S1), which is then backboned to the other GS1900 (S2). Both then downlink to their respective PoE+ access switches. Given your current cabling constraints, this gives you the best traffic flow and lowest broadcast overhead.
3) Wifi APs -- Same-brand, VLAN-capable APs ("AP1" through "AP4") replace each of the consumer routers (86U, R7000, 68U, 66U). For each AP I would run a new Cat6 home-run to the closest GS1900 (S1 or S2) for dedicated backhaul, with power from PoE injectors. That avoids buying managed PoE+ switches to replace the current ones, which are just serving IP cams anyway, so all that traffic can stay "unmanaged" and be tagged into one VLAN on ingress at the GS1900s. I recommend TP-Link Omada EAP225 v3s: a mere $60 each with PoE injectors included (~$240 total), and you can run the Omada controller (needed for central admin, seamless roaming, and the guest portal) for free on the Windows server.
4) Backbone Upgrade (optional) -- Since you have core network services (DHCP, DNS, NAT, etc.) sitting on either side of S1 and S2, you might consider an additional run of Cat6 between the switches to form a 2-port LAG. This adds redundancy and roughly doubles aggregate throughput (note that LAG hashes per flow, so any single transfer still tops out at 1Gb). Alternatively, you could run fiber between the two switches (the GS1900s have 2 SFP ports), which adds no bandwidth now, but gives you a backbone ready for 10Gb+ optics when the time comes for switch upgrades.
5) Server/NAS Relocation (optional) -- In addition to, or instead of, a backbone upgrade, you might also consider moving the server and/or NAS 2 to Building B and consolidating your "data center" there; it's usually best to hang as many core network services as possible directly off the core switch. Alternatively, you could cable R1 to S2, making that your core switch, and bring NAS 1 to Building A. Or, if each NAS mostly serves hosts in its own building, you could leave things as-is.
Config Changes:
R1 - Will need interfaces, VLANs, and DNS/DHCP forwarding (to the Win server) set up for each subnet (VLAN) you want, plus firewall rules to drop/permit traffic between VLANs (mostly drop, based on your requirements).
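On an ER-X that looks roughly like the following. Treat it as a sketch: the trunk port (eth1), VLAN ID 20 for cameras, and the Win server at 192.168.10.5 are all placeholder assumptions, not your actual assignments.

```
configure
# VLAN 20 sub-interface on the trunk port toward S1 (eth1 is an assumption)
set interfaces ethernet eth1 vif 20 address 192.168.20.1/24
set interfaces ethernet eth1 vif 20 description "Cameras"
# Relay DHCP requests on that VLAN to the Windows server
set service dhcp-relay interface eth1.20
set service dhcp-relay server 192.168.10.5
# Default-drop inter-VLAN policy, then punch holes as needed
set firewall name CAM_IN default-action drop
set firewall name CAM_IN rule 10 action accept
set firewall name CAM_IN rule 10 destination address 192.168.10.5
set firewall name CAM_IN rule 10 description "Cams to server only"
set interfaces ethernet eth1 vif 20 firewall in name CAM_IN
commit ; save
```

You'd repeat the vif/firewall pattern per VLAN, with mostly-drop rulesets per your requirements.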
S1 and S2 - Will need identical VLANs defined, the correct untagged/tagged VLAN assignments per port, and a management IP (address, netmask, gateway, DNS) reachable from your admin VLAN.
APs - Will need SSIDs and SSID-to-VLAN mappings -- doable centrally from the controller.
Windows Server - Presuming you do DHCP on the server, you'll need a DHCP scope for each VLAN. You might also consider running DNS on the server as well, depending on how many name-based resources you plan on hosting in your Windows environment.
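To keep the scopes straight across router, switches, and DHCP server, it helps to write the plan down once. A quick sketch (the VLAN IDs, names, and the 192.168.&lt;id&gt;.0/24 scheme are illustrative examples, not your actual assignments):

```python
import ipaddress

# Hypothetical VLAN plan: VLAN id -> name; subnet derived as 192.168.<id>.0/24
vlans = {10: "Servers", 20: "Cameras", 30: "Trusted", 40: "Guest"}

for vid, name in sorted(vlans.items()):
    net = ipaddress.ip_network(f"192.168.{vid}.0/24")
    hosts = list(net.hosts())            # .1 through .254
    gateway = hosts[0]                   # .1 lives on R1
    pool_start, pool_end = hosts[99], hosts[199]  # .100-.200 left for DHCP
    print(f"VLAN {vid:>2} ({name}): net={net} gw={gateway} "
          f"dhcp={pool_start}-{pool_end}")
```

Keeping the VLAN ID in the third octet makes it trivial to tell at a glance which scope, gateway, and firewall ruleset a host belongs to.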
Looking forward:
Layer-3 Core Switch - As long as you don't foresee very heavy local or inter-VLAN routing, doing layer 3 on the gateway is fine for the near term. An L3 switch would make some local behavior faster, but at your average throughput it's likely peanuts compared to lower-hanging fruit (layer 2, wireless access, etc.). You can always upgrade to a layer-3 core switch later without undoing anything major.
Fewer, Bigger Managed PoE Switches - This comes down to cost savings and cabling constraints. For now, injectors off the GS1900's will work. If/when you see any bottlenecks in your access uplinks, you can then build a case for higher-density managed PoE switches, plus more Cat6 home-runs. Either way, not a show-stopper for now.
-------------------
Hope that helps give good guidance for now. Feel free to ask questions as needed.