Jitter actually doesn't have much impact on NTP...
Root dispersion is a better indication of the reliability and trustworthiness of both the client and the server...
If a server gets its time from an external reference clock, its root dispersion is the estimated maximum error of that clock. If it gets its time from another NTP server, its root dispersion is that server's root dispersion plus the dispersion added by the network link between them.
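As a rough sketch of that accumulation (the numbers here are made up for illustration, and real ntpd also grows dispersion with the time elapsed since the last update):

# Hypothetical example of how root dispersion adds up hop by hop.
def root_dispersion(upstream_rootdisp_ms, link_dispersion_ms):
    """Client's root dispersion = upstream server's root dispersion
    plus the dispersion contributed by the network path to it."""
    return upstream_rootdisp_ms + link_dispersion_ms

stratum1_rootdisp = 1.0   # ms, e.g. a GPS-disciplined stratum 1 server
link_dispersion = 5.0     # ms, added by the path between server and client

print(root_dispersion(stratum1_rootdisp, link_dispersion))  # -> 6.0 ms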
Here's a snapshot of a BSD-based server running ntpd - this one is not disciplined via GPS...
ntpq -p -c rv
remote refid st t when poll reach delay offset jitter
==============================================================================
+time1.google.co .GOOG. 1 u 320 512 377 35.987 3.781 0.747
+time2.google.co .GOOG. 1 u 278 512 377 74.228 -9.910 0.769
*time3.google.co .GOOG. 1 u 514 512 377 35.833 3.862 0.377
-time4.google.co .GOOG. 1 u 341 512 377 75.027 -10.065 0.490
associd=0 status=0615 leap_none, sync_ntp, 1 event, clock_sync,
version="ntpd 4.2.8p13@1.3847-o Fri May 10 20:05:13 UTC 2019 (1)",
processor="amd64", system="FreeBSD/11.2-RELEASE-p10", leap=00, stratum=2,
precision=-22, rootdelay=35.833, rootdisp=38.925, refid=216.239.35.8,
reftime=e0ba5d37.e73e976b Sun, Jun 23 2019 13:19:03.903,
clock=e0ba6140.02dd47ae Sun, Jun 23 2019 13:36:16.011, peer=13523, tc=9,
mintc=3, offset=0.491552, frequency=0.188, sys_jitter=12.048831,
clk_jitter=0.417, clk_wander=0.013
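To put the rootdelay and rootdisp figures above in context, a common simplification of NTP's "root distance" (the quality metric ntpd uses when weighing a source) is half the root delay plus the root dispersion. A quick check with the values reported above:

# Sketch only - uses the simplified root-distance formula, not ntpd's full calculation.
rootdelay = 35.833   # ms, from the rv output above
rootdisp = 38.925    # ms, from the rv output above

root_distance = rootdelay / 2 + rootdisp
print(f"root distance ~ {root_distance:.3f} ms")  # roughly 56.8 ms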
Same device over time (two days in this example)... and this device would be perfectly fine as a time source for a small business network.