Let's say I reduce it by 100,000 entries using iprange:
Code:
326293 printed CIDRs, break down by prefix:
- prefix /3 counts 1 entries
- prefix /8 counts 3 entries
- prefix /9 counts 1 entries
- prefix /10 counts 16 entries
- prefix /11 counts 31 entries
- prefix /12 counts 76 entries
- prefix /13 counts 122 entries
- prefix /14 counts 196 entries
- prefix /15 counts 322 entries
- prefix /16 counts 929 entries
- prefix /17 counts 629 entries
- prefix /18 counts 949 entries
- prefix /19 counts 1508 entries
- prefix /20 counts 2028 entries
- prefix /21 counts 2481 entries
- prefix /22 counts 5771 entries
- prefix /23 counts 5314 entries
- prefix /24 counts 13500 entries
- prefix /25 counts 424 entries
- prefix /26 counts 560 entries
- prefix /27 counts 583 entries
- prefix /28 counts 798 entries
- prefix /29 counts 1285 entries
- prefix /30 counts 2631 entries
- prefix /31 counts 9815 entries
- prefix /32 counts 276320 entries
totals: 326293 lines read, 302935 distinct IP ranges found, 26 CIDR prefixes, 326293 CIDRs printed, 1100260370 unique IPs
completed in 12.08756 seconds (read 0.22889 + think 0.21960 + speak 11.63906)
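For reference, that first summary comes from an invocation along these lines (assuming FireHOL's iprange; the reduce flag is from memory and the file names are placeholders, so double-check against iprange --help):
Code:
# let iprange add up to ~100,000 extra entries if that lets it
# collapse the list into fewer distinct prefix lengths
iprange -v --ipset-reduce-entries 100000 blocklist.txt > reduced-100k.txt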
And here is the result if I reduce it by 1 million:
Code:
742955 printed CIDRs, break down by prefix:
- prefix /16 counts 15653 entries
- prefix /24 counts 289012 entries
- prefix /32 counts 438290 entries
totals: 326293 lines read, 302935 distinct IP ranges found, 3 CIDR prefixes, 742955 CIDRs printed, 1100260370 unique IPs
completed in 28.41651 seconds (read 0.22530 + think 0.23088 + speak 27.96033)
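The second summary would be the same kind of command with a larger allowance, roughly (again a sketch with placeholder file names):
Code:
# allow up to ~1,000,000 extra entries, which lets iprange collapse
# everything down to just /16, /24 and /32 prefixes
iprange -v --ipset-reduce-entries 1000000 blocklist.txt > reduced-1m.txt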
The last list would probably be the most "optimized" for an ipset of type hash:ip with regard to memory (RAM) consumption, because I have reduced the number of distinct prefix lengths to the lowest possible value.
For hash:net, the first list would be the most optimized.
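To make the trade-off concrete, this is roughly how the two variants would be loaded (set names and maxelem values here are just examples, not what I actually use):
Code:
# hash:net stores CIDRs of mixed sizes; memory grows with the number of
# entries, and lookups probe each distinct prefix length present in the set
ipset create blocknet hash:net family inet maxelem 1048576
ipset add blocknet 203.0.113.0/24

# hash:ip stores every address as its own element, so a /24 added here
# ends up occupying 256 entries
ipset create blockip hash:ip family inet maxelem 2097152
ipset add blockip 203.0.113.7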
As a side note, the figure of 1,100,260,370 unique IPs implies that my list blocks slightly more than 1/4 (about 25.62%) of the world's IPv4 addresses (on all open incoming ports and all outbound connections).
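A quick sanity check of that percentage (this assumes the full 2^32 IPv4 space and does not exclude reserved or private ranges):
Code:
# 1,100,260,370 unique IPs out of 2^32 possible IPv4 addresses
awk 'BEGIN { printf "%.2f%%\n", 1100260370 / 4294967296 * 100 }'
# prints: 25.62%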
@Tech9.