QUAD9 moves to Switzerland

Quad9 is bad when it comes to reacting to new things, for example the SAD DNS vulnerability. I wrote to them 1.5 months ago asking why saddns.net shows their DNS as vulnerable, and they said they are investigating it and that for security reasons they can't share more information or say when fixes will be applied. From what I've heard, this site checks the Linux version of the server, and so far only NextDNS reacted really fast and patched everything within weeks. It's been months since 8-11 November 2020 and all DNS servers are still not patched according to the saddns.net site. Quad9 scores high on URL-filtering tests, but still below NextDNS, and is months late compared to NextDNS. The downside of NextDNS is that it's not free but paid.
 
From what I've heard, this site checks the Linux version of the server

So not what you KNOW, just internet prattle.

The SADDNS test page warns you that the test is not necessarily accurate. There are also warnings from other researchers that the results of DNS server checks are cached for long periods, which gives false results.

Quad9 and many other providers use DNSSEC, which goes a long way towards preventing SAD DNS issues, and all the information released is a proof of concept, so you need to stop panicking.
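
If you want to check that for yourself rather than rely on the SADDNS page, a rough way to see whether a resolver enforces DNSSEC validation is to query a deliberately mis-signed test zone (dnssec-failed.org is a long-standing one) and see whether the resolver refuses to answer. A minimal sketch with dnspython, assuming you only want a yes/no indication:

Code:
# Rough check: does a resolver enforce DNSSEC validation?
# A validating resolver should refuse (SERVFAIL) a deliberately broken zone.
import dns.exception
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9"]  # resolver under test
resolver.lifetime = 5

try:
    resolver.resolve("dnssec-failed.org", "A")
    print("Got an answer: this resolver does NOT appear to validate DNSSEC")
except dns.resolver.NoNameservers:
    print("SERVFAIL: this resolver appears to validate DNSSEC")
except dns.exception.Timeout:
    print("Timed out: no conclusion")

A validating resolver such as 9.9.9.9 should refuse that name; a non-validating one will hand back an address.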

Just because you don't like or understand the reply you received does not make Quad9 "bad".
 
Where do you see panic? I said that Quad9 is bad at reacting to new things, not that it's bad overall, so you are lying here (I pointed out the good and bad sides). The only DNS service that reacted quickly and scored higher than Quad9 is NextDNS; that's also true. Even Quad9 themselves replied that they are investigating the problem, not that it's already "fixed". You should stop spreading lies and misinformation and focus on facts instead.
 
Can you give a specific example of Quad9 being slow to react (or "bad at reacting") to new things? The SAD thing isn't in fact an example of that, since it's not something we were vulnerable to. We were the first recursive resolver to implement DNSSEC. We were the first recursive resolver to be GDPR compliant. We were the first recursive resolver to implement standards-based encryption. We were the first recursive resolver to integrate multi-source threat intelligence for malware blocking. We're the first recursive resolver to relocate to a jurisdiction which doesn't have national security gag orders. We're the first recursive resolver to not be subject to intelligence or law enforcement data collection requirements. We're the first recursive resolver to have a human rights policy. And this week we just committed to publish our negative trust anchor list, called for others to do the same, and began the work to establish an open process for a unified NTA list to be publicly managed.

If there are ways we can improve, we're all ears. It's how we continue making things better.
 
I exchanged emails with Quad9 support, it's been two months since the last response, and it still looks like Quad9 fails the saddns.net test. My comment about Quad9 being slow to react is based exclusively on the SAD DNS vulnerability, because I simply hadn't read about the others before; this is the first one I read about. I understand the points about the test not being accurate, but since 8-11 November 2020 there was enough time, and NextDNS did it in weeks without saying "we are investigating" like Quad9 replied. People who say Quad9 is not vulnerable probably say so because of DNSSEC, and here is the response from the research website: "Does DNSSEC mitigate attack? Yes and no, the server must implement strict DNSSEC check (i.e., refuse the responses that break the trust chain) to prevent the off-path attacks. However, since DNSSEC is still under development and servers need to accept such responses (i.e., only DNSSEC aware but not DNSSEC validate) when visiting a misconfigured domain."

So scoring high on tests and being free is, in my opinion, the biggest advantage of Quad9, but reacting to this proof-of-concept vulnerability is taking way too long.
 
I'm not clear on what you're looking for, exactly. Do you want us to work with the developer of the test to try to help them to improve their test until it's accurate? That doesn't scale, and doesn't provide users with any benefit, so that's really not where our effort is directed.

"Scoring high on tests" isn't a goal for us. We work to protect users.

You say you want us to "react quickly" but you don't specify what that reaction would be.

Can you detail the chain of events that you're imagining would happen, in a perfect world?
 
The chain of events would be: I test on the saddns.net site and see that the resolver is "not vulnerable". People without knowledge of cyber security, like me, usually rely on websites like that one and on the opinions of experts to know which DNS service protects them even against proof-of-concept attacks.

Also, usually the first step in finding out whether someone is vulnerable is to ask them directly, that is, Quad9 support. They replied "we are investigating, there is no ETA for a fix", so the answer was not "it's fixed".
 
The way I see it, this is a bug with the SADDNS site, not with Quad9. It's up to that test site to fix their test, Quad9 cannot fix it for them.

Just like Cloudflare's DoT+DNSSEC test site has been broken for nearly a year, and no amount of poking at them got them to fix it. The best we managed to do (as the community) was to get them to acknowledge that it was broken (as they generate temporary URLs that break DNSSEC validation).

You are barking up the wrong tree here...
 
You should stop spreading lies and misinformation and focus on facts instead


You need to take a break and think about the way you address people. You are wrong and, by your own admission, don't know what you are talking about.

The only person in this discussion "spreading lies" is you; you ask for help/advice but won't accept the facts and answers given to you.
 
How do they react to something Quad9 isn't vulnerable to?
The way I see it, this is a bug with the SADDNS site, not with Quad9. It's up to that test site to fix their test, Quad9 cannot fix it for them.

If I'm wrong then Quad9 support is wrong too, because they responded to me that "they are investigating and there is no ETA for a fix". I'm just using the words of Quad9 support, not my own words.
Anyway, I won't argue more since I don't have the knowledge to do so. If you state that they've already fixed it, then maybe I'll switch back to it from the paid DNS, NextDNS.
 
I am using the Quad9 DoT Secure with ECS Support service and get the following from the test at https://www.saddns.net/
Code:
Your DNS server IP is 74.63.28.246
Since it blocks outgoing ICMP packets, your DNS server is not vulnerable.
The test currently only takes the side channel port scanning vulnerability into consideration. A successful attack may also require other features in the server (e.g., supporting cache).
The test is conducted on 2021-03-04 14:30:05.122875679 UTC
Disclaimer: This test is not 100% accurate and is for test purposes only.
 
Quad9 was never vulnerable to SAD DNS; there was nothing to fix.

OK, thank you for explaining the situation to me. Maybe I'll consider switching to Quad9 when saddns fix their website. Quad9 support is really consumer friendly even though they are doing it for free, so respect for that.
@EmeraldDeer Sadly, I just tested Quad9 and saddns still reports it as vulnerable.
 
Sadly, I just tested Quad9 and saddns still reports it as vulnerable
Why bother looking? Quad9 staff have told you that Q9 was never vulnerable to this "attack".

Several people have already told you that the test site is BROKEN; it will probably never change.

The SADDNS test is pointless: 1) it is broken, and 2) as they clearly state, the test is not accurate and only looks at one part of the "vulnerability"; other features are required to complete an attack and the test does NOT check for them.


The test currently only takes the side channel port scanning vulnerability into consideration. A successful attack may also require other features in the server (e.g., supporting cache).
 
I am sympathetic to anyone who pays for a service when there are free, comparable options available, because they cannot, for whatever reason, trust those options. Sometimes the information you get from one representative of a company is not correct. Nevertheless, I tend to trust the common consensus of talented and knowledgeable people.

I hope that you, @podkaracz, come to realize with high confidence that researchers who find and responsibly disclose these vulnerabilities (first, privately, to affected entities) are counterbalanced by those affected entities, who receive these disclosures and mitigate/correct the problem(s) in their area of operation before the research is published. Most everyone wants to do their job well, whatever it is.

On that note, while @Bill Woodcock may still be around/available: are you aware of this CNAME-related vulnerability? It seems that NextDNS is able to partly filter/mitigate it. The article mostly mentions browser-based filtering, but would this be something within the realm of Quad9 filtering?

 
We've been following the issue closely... In general, we're stretched very thin, so we have to be very careful about taking on a larger scope of issues than we can see our way to successfully solving. We view our mandate as a balance of security, privacy, and performance. And we work as much by trying to establish new minimum-acceptable-levels-of-service as by doing everything ourselves. We don't have the resources to protect the whole world directly, and that would be an inefficient way to go about things anyway. Plus it would centralize control, which is an anti-goal. So we try to very pointedly do limited things better, such that the monetizers have to also raise their standards somewhat in order to remain in the running.

Our blocking has been limited to fairly narrowly-construed security threats, thus far. Malware, phishing, C&C, homonyms and typo-squatting. By focusing on that, we've been able to achieve a 98% protection rate, in an industry which previously considered 10% to be good. Consequently, others have picked up their game as well, and we're now starting to see a few others that are consistently blocking the majority of threats, as well. So, people are more secure now.
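
If you want to observe that blocking from the outside, a quick sanity check is to compare the filtered service (9.9.9.9) against the unfiltered one (9.9.9.10): a domain on the blocklist typically resolves on the latter and comes back NXDOMAIN on the former. A rough dnspython sketch, with a placeholder name standing in for a real indicator:

Code:
# Compare Quad9's filtered (9.9.9.9) and unfiltered (9.9.9.10) services for
# one name. "suspected-bad.example" is a placeholder, not a real indicator.
import dns.resolver  # pip install dnspython

def lookup(server, name):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    try:
        return ", ".join(a.to_text() for a in r.resolve(name, "A"))
    except dns.resolver.NXDOMAIN:
        return "NXDOMAIN (blocked or nonexistent)"
    except dns.resolver.NoNameservers:
        return "SERVFAIL"

for server in ("9.9.9.9", "9.9.9.10"):
    print(server, lookup(server, "suspected-bad.example"))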

With regard to privacy, our focus thus far has been entirely on not-doing-harm. Not collecting user data was the very first principle we started with, even before we added the security mandate, so that's always shaped how we've looked at privacy. And there's still vast room for improvement in the industry, as regards operators not collecting and abusing people's personal information. The entire move to Switzerland was about improving users' privacy expectations, and we hope that other recursive resolver operators will also step out from behind the skirts of the Northern California court that shields them from responsibility.

But the CNAME chaining threat is another issue. It's one I brought up explicitly two years ago as one of the harms that was likely to befall non-profits if a private equity firm was allowed to monetize the .ORG domain, so it's not a new threat, but as people are noticing, it's getting worse and worse. Mitigating it would be a bit of a game of cat-and-mouse, but not more difficult or complicated than the malware threats that analysts currently identify.

Quad9 is, essentially, a network-and-server operations shop, rather than a software development shop or a malware analysis shop. Although we're very active in the standardization process, we depend upon programmers to build the software tools that we glue together and deploy, and we depend upon analysts to do the difficult detective work of figuring out what domains exist to do harm. There are many analysts, and none of them are right all of the time, so a part of our work, and a big part of the value-add that gets us from 10% success to 98% success, is in very careful integration of the work those threat analysts do. So our expertise, such as it is, isn't in identifying malware, but in identifying misidentified malware domains. In other words, our expertise isn't in malware, but in false-positives.

In order to achieve the same success with a more proactive interpretation of defending people's privacy, blocking connections to machines which exist to deanonymize users and monetize their personal information, we'd need to channel the zeitgeist and come to a view of what people thought was acceptable versus what they thought was unacceptable. This is a much more nuanced situation, much more of a gray area, than malware.

EDNS Client Subnet (ECS) is an excellent example of the problem that we'd be facing constantly, if we take a broader view of how to protect people's privacy: monetizers introduced ECS in order to get recursive resolver operators to un-mask users, and pass their identities through in the form of IP addresses. The story they tell is that this allows them to somehow give users better performance, but that's false... anycast already does that, without requiring users to divulge any information. So the monetizers commit a network neutrality violation, and sporadically punish users whose IP addresses aren't passed through, leading users, who mostly don't have any visibility into this struggle, to simply switch recursive resolver operators until they find one who gives up their IP address, so the punishment stops. This leads the users to blame the recursive resolver operators who tried to defend their privacy, while giving unwarranted praise to those who sold them out. Any attempt to protect users' privacy will be countered by monetizers in this way, and until users understand the conflict and challenge the net neutrality violations, the dynamic is unlikely to change.
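
To make concrete what ECS actually discloses, here's a rough dnspython sketch that attaches an ECS option to a query; 192.0.2.0/24 is a documentation prefix standing in for a real client subnet, and 9.9.9.11 is the ECS-enabled Quad9 address:

Code:
# Sketch of an ECS-tagged query: the option carries a client prefix that a
# resolver can forward toward authoritative servers.
import dns.edns
import dns.message
import dns.query  # pip install dnspython

ecs = dns.edns.ECSOption("192.0.2.0", srclen=24)   # placeholder client prefix
query = dns.message.make_query("example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "9.9.9.11", timeout=5)  # ECS-enabled Quad9 service
print(response.answer)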

We're actually facing an exactly analogous situation with DNSSEC Negative Trust Anchors right now. We were the first recursive resolver operator to perform DNSSEC validation... the other two, at the time, said it would never work, because too many sites had broken DNSSEC signatures, and users would never put up with being unable to reach those sites, and would simply defect to other recursive resolvers that didn't try to provide users with DNSSEC security. We went ahead and did it, and it worked (mostly), and now the others do DNSSEC validation as well, and the world is a safer place. Many fewer DNS hijackings are possible now, and the ones that remain happen in domains people don't care enough about to sign.

The problem is that Google and OpenDNS weren't wrong. There are some very high-profile domains that just chronically shoot themselves in the foot and break their DNSSEC, in ways that are indistinguishable from malicious hijackings. So we all have to use "Negative Trust Anchors", which are, essentially, rules that say, "for this specific domain, don't actually DNSSEC validate, because we already know it's likely to be broken and history tells us it's due to incompetence rather than malice." We've tried for several years to get the other recursive resolver operators to coordinate and all work from a single public NTA list, but they haven't, thus far. So users who try to go to, for instance, schwab.com, to take one perennially broken one, can't reach it, try a different recursive resolver (which has an NTA for Schwab) and succeed, and wrongly conclude that it's the resolver that's broken, rather than Schwab. So they switch away from the resolver that protects them, and to the resolver that doesn't protect them.

For a long time, getting people to deploy DNSSEC was such an uphill battle that there was a sort of gentleman's agreement not to shame people who were trying but failing. But we're past that point now. Lots and lots of domains are signed, successfully, so there isn't really an excuse to be doing it and failing at this point; at least not persistently failing. That means you're not actually trying, because other people, with fewer resources, are succeeding all around you. So we're working with DNS-OARC to finally get that public NTA list set up, with public discussion of who goes on it and when they can be taken off. And this week we committed to publishing our own NTA list in the meantime, and pushed the other resolver operators to do the same. So, maybe this is an area we can make better.
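
For concreteness, a negative trust anchor is usually just a one-line, per-domain "don't validate" rule in the resolver's configuration; on Unbound or BIND, for instance, it looks roughly like this (the domain name below is a placeholder for a persistently mis-signed zone):

Code:
# Unbound (unbound.conf): ignore the DNSSEC chain of trust for one zone
server:
    domain-insecure: "broken-signer.example"

# BIND, at runtime and with an expiry:
#   rndc nta -lifetime 7d broken-signer.example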

So, to return to your question: the simplest mechanism I can think of for dealing with the CNAME chaining thing is to take a few ad-blocking and tracker-blocking feeds, resolve them all back to IP addresses, and expand those IP addresses out to /24s and /48s; then watch the IP addresses that come back in answers (A and AAAA records, whether at the end of CNAME chains or not), block anything that comes back from those subnets, and wait for false-positive reports. Narrow the size of the address blocks in response to false-positives. Look at what QNAMEs produce IP addresses in those ranges, and spider outwards from there.
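
Sketched very roughly in code, and with the big caveat that the feeds, prefix widths, and thresholds would all need tuning against false positives, that approach looks something like this (dnspython; the feed entries and query name are placeholders):

Code:
# Rough sketch of the approach described above: resolve tracker-feed names,
# widen their addresses to /24 (IPv4) or /48 (IPv6) prefixes, then flag any
# answer whose address lands in one of those prefixes, whether or not it came
# via a CNAME chain. AAAA records would be handled the same way as A records.
import ipaddress
import dns.resolver  # pip install dnspython

TRACKER_FEED = ["tracker.example", "collector.example"]  # placeholder feed

resolver = dns.resolver.Resolver()

def widen(addr):
    """Expand a single address to its covering /24 (IPv4) or /48 (IPv6)."""
    prefix = 24 if ipaddress.ip_address(addr).version == 4 else 48
    return ipaddress.ip_network(f"{addr}/{prefix}", strict=False)

blocked_nets = set()
for name in TRACKER_FEED:
    try:
        for rr in resolver.resolve(name, "A"):
            blocked_nets.add(widen(rr.to_text()))
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        continue

def looks_like_tracker(qname):
    """True if any A answer for qname (CNAMEs followed) lands in a widened prefix."""
    try:
        answers = resolver.resolve(qname, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(ipaddress.ip_address(rr.to_text()) in net
               for rr in answers for net in blocked_nets)

print(looks_like_tracker("metrics.publisher.example"))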

The problem is that that's all work that, ideally, is being done by a bunch of different teams, each using their own methodology and their own input data sets. We can help them, but we don't have the resources to do it all ourselves; and again, that would be centralization. We don't provide good results by doing it ourselves, we provide good results by aggregating a lot of different methods from different sources, and developing expertise at sorting the wheat from the chaff.

And right now, there are not a lot of people working in this space. I'm sure they'll get sued and sabotaged, to an even greater degree than people doing cybersecurity analysis already do, and it'll be by folks with more resources, who don't simultaneously have to hide from the law. At least, they don't have to hide in the US. In Europe, they're criminals. In the US, they get a pass.

So, already stretched thin, it's not clear to me that this is an area that's ready for us to expend resources yet. Though I'd be really, really happy to be proven wrong, and I look forward to there being more people working on it, producing deanonymization threat feeds, in the same way that there are malware threat feeds for us to integrate right now.
 
I actually read and was quite informed by your post. A lot to digest. I may have to read it a few times ;) I was looking for the definitive term to use in my post about "the realm of Quad9 filtering", and MANDATE is what I was looking for. And your term "CNAME chaining threat" is more in line with what I would understand than the two other writers' terms, "CNAME collusion" or "CNAME cloaking", though as I read more and more about this from various sources, each writer is helping me to get it more and more.

At least one writer did bring up the issue that this type of CNAME chaining threat could violate European GDPR and other regional privacy laws. Until the matter is adjudicated in court(s), this threat will undoubtedly continue unabated and grow in severity.

One security threat that this poses, according to one writer: third-party impersonation of you for as long as you have not explicitly ended the authorized session with the second-party website, since the third party would have access to all cookies, including authentication cookies.

I think this is a fine example of the unintended consequences of tweaking something or adding new features. As one popular technophile I listen to often says about poorly-designed new features, "...What could POSSIBLY go wrong!?" :rolleyes: right before digging into and tearing apart how it could go/went VERY WRONG.
 
I remember that when I was using both Cloudflare DNS servers (primary and backup) I got the "vulnerable" message, but when changing over to only 1.1.1.1 it says "not vulnerable". It's weird that adding a backup DNS server makes it vulnerable. It seems like only 1.1.1.1 is patched and the rest of Cloudflare's servers are reported as vulnerable; even when they are only added as backups, they make the not-vulnerable one show up as vulnerable on the website. Quad9 and Google still report as vulnerable no matter what, and NextDNS reports as not vulnerable no matter what.

What is interesting is that people in this thread keep stating that Quad9 was never vulnerable, but the official white paper states something else. (To be precise, the page that shows it as vulnerable is titled "Port Inference: Measurement", so maybe it was vulnerable to this and not necessarily to the whole SAD DNS attack.) So who is lying here, the university that published the white paper or the people here on the forum?

Here is the official white paper: https://www.saddns.net/slides.pdf (page 11). 12/14 vulnerable, with the exception of 2 Chinese DNS servers (if I remember correctly Tencent, the ones that own Riot Games). I would use them because they were bulletproof at the start, but since they are from China I will never use them. Interesting topic.
 