Description of problem: I work remotely and use a VPN (openconnect) to connect to my organization's network. After updating systemd to version 246~rc1-1, host names stopped resolving:

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ nslookup git.sbis.ru
Server: 10.76.4.153
Address: 10.76.4.153#53

Non-authoritative answer:
Name: git.sbis.ru
Address: 10.76.168.67

But after stopping `systemd-resolved.service`, everything works as intended.
This bug appears to have been reported against 'rawhide' during the Fedora 33 development cycle. Changing version to 33.
Proposed as a Blocker and Freeze Exception for 33-beta by Fedora user mikhail using the blocker tracking app because:

It must be possible to connect to hosts behind a VPN (openconnect) out of the box. The latest systemd now ships systemd-resolved.service for resolving host names, and this service has an issue: resolving host names via a DNS server behind a VPN (openconnect) does not work. The only workaround is manually disabling "systemd-resolved.service", but systemd-resolved is one of the new F33 features.
Marking this bug as blocking the systemd-resolved Changes Tracking bug.
Discussed during the 2020-08-17 blocker review meeting: [0] The decision to delay the classification of this as a blocker bug was made as this bug brings up an inadequacy of the criteria: we don't have explicit network or VPN criteria, and it'd be too much of a stretch to crowbar this into any existing criterion. We are punting on the decision to propose and discuss explicit network/VPN criteria. [0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-08-17/f33-blocker-review.2020-08-17-16.11.txt
Discussed during the 2020-08-17 blocker review meeting: [0] The decision to classify this bug as an "AcceptedFreezeException" was made as it is a noticeable issue that cannot be fixed with an update. [0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-08-17/f33-blocker-review.2020-08-17-16.11.txt
My guess would be that openconnect is missing the integration to push information about the new DNS servers to resolved. I have never used openconnect, nor do I have a VPN I could use for testing, so it'd be good if the openconnect maintainers looked into this. Without this integration there isn't much we can do. See https://codesearch.debian.net/search?q=SetLinkDNS&literal=1&page=1&perpkg=1 for examples of how this is done in various packages: either by calling org.freedesktop.resolve1.SetLinkDNS[Ex] or with busctl call/dbus call.
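For reference, here is roughly what such a call looks like with busctl (the ifindex 7 and server 10.76.4.153 are illustrative values taken from outputs later in this bug, not something openconnect actually runs today):

$ busctl call org.freedesktop.resolve1 /org/freedesktop/resolve1 \
      org.freedesktop.resolve1.Manager SetLinkDNS 'ia(iay)' \
      7 1 2 4 10 76 4 153    # ifindex 7; 1 server; AF_INET (2); 4 bytes: 10.76.4.153
$ busctl call org.freedesktop.resolve1 /org/freedesktop/resolve1 \
      org.freedesktop.resolve1.Manager SetLinkDomains 'ia(sb)' \
      7 1 '.' true           # "." with route-only=true, i.e. "~.": route all lookups to this link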
BTW I have extensively tested openvpn, but not openconnect. I had heard some complaints that openconnect doesn't work well with resolved, so this isn't too surprising.
(In reply to Fedora Blocker Bugs Application from comment #2)
> Proposed as a Blocker and Freeze Exception for 33-beta by Fedora user
> mikhail using the blocker tracking app because:

I totally missed that this was proposed as a beta blocker.

(In reply to Geoffrey Marr from comment #4)
> Discussed during the 2020-08-17 blocker review meeting: [0]
>
> The decision to delay the classification of this as a blocker bug was made
> as this bug brings up an inadequacy of the criteria: we don't have explicit
> network or VPN criteria, and it'd be too much of a stretch to crowbar this
> into any existing criterion. We are punting on the decision to propose and
> discuss explicit network/VPN criteria.

Thing is, not all VPN plugins are created equal. We have to define which VPN plugins to block on. Surely we don't want to block on pptp (and in fact, why is that still offered at all? We have no indication or warning that it is horribly insecure!). If we're going to block on any VPN, then openvpn would be for sure, and blocking on wireguard would make sense in the future once the desktop supports it. I'm not sure about openconnect and vpnc -- in fact, I'm not even sure what the difference between them is -- but they seem to be designed for compatibility with proprietary VPNs, which is a significant distinction from openvpn and wireguard.

Anyway, although I would be a little irritated if we block on openconnect, I guess it is good that we are having this conversation, because we're steadily increasing quality requirements for Fedora relative to where we were in the past, and that is a good direction to be going in.
(In reply to Fedora Blocker Bugs Application from comment #2)
> It must be possible to connect to hosts placed behind VPN (openconnect) out
> of the box.

I agree with Zbigniew, resolved is likely working as designed. The VPN plugin is responsible for telling resolved which DNS server to use for which interface, and which hosts should be resolved via that interface. For example, when I connect to two different openvpns and run 'resolvectl status', I see:

Link 8 (tun1)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Current DNS Server: <server 1>
DNS Servers: <server 1>
             <server 2>
DNS Domain: redhat.com

Link 5 (tun0)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Current DNS Server: 10.8.0.1
DNS Servers: 10.8.0.1
DNS Domain: ~.

which says all requests for redhat.com go to one of two internal DNS servers on tun1, and all other requests go to a different DNS server on tun0. This all works perfectly for openvpn. From Zbigniew's codesearch, I guess it probably works for vpnc. I suspect pptp will have the same problem as openconnect.
Michael: I did post the proposed criterion to desktop@ last Friday. This is the proposed text relating to VPNs:

"Using the default network configuration tools for the console and for release-blocking desktops, it must be possible to establish a working connection to common OpenVPN, openconnect-supported and vpnc-supported VPN servers with typical configurations."

Footnote title "Supported servers and configurations": "As there are many different VPN server applications and configurations, blocker reviewers must use their best judgment in determining whether violations of this criterion are likely to be encountered commonly enough to block a release, and if so, at which milestone. As a general principle, the more people are likely to use affected servers and the less complicated the configuration required to hit the bug, the more likely it is to be a blocker."

My rationale is that there are many large companies etc. that use VPN systems supported by the openconnect and vpnc plugins. It's important that Fedora users be able to connect to those - "I can't connect to my corporate VPN" may well be a showstopper for using Fedora, for someone. And we can hardly ask them to change the VPN their giant company uses. It'd be great if everyone used openvpn, but, well, we live in the world we live in. :)

Please do contribute your opinion on the proposed criteria, though; it's a big area and this is only the first draft.
Yeah that looks fine, I noticed that after leaving the comments above.
Discussed at 2020-08-24 blocker review meeting: https://meetbot-raw.fedoraproject.org/fedora-blocker-review/2020-08-24/f33-blocker-review.2020-08-24-16.07.html . We agreed to delay the decision on this for a week while we review the proposed criteria, but as things stand we're working on the assumption we'll approve a VPN criterion that covers this and it'll be accepted.
Discussed during the 2020-08-31 blocker review meeting: [0] The decision to classify this bug as an "AcceptedBlocker" was made as a violation of the currently-under-discussion new networking criteria, as there has been no opposition to the idea that typical configs of common VPNs, including OpenConnect-supported VPNs, should work, and this is such a case. [0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-08-31/f33-blocker-review.2020-08-31-16.00.txt
OpenConnect and vpnc both just spawn vpnc-script with the IP and DNS information. The upstream vpnc-script added support for systemd-resolved in 2016 with this commit: http://git.infradead.org/users/dwmw2/vpnc-scripts.git/commitdiff/6f87b0fe7b20d802a0747cc310217920047d58d3

That does seem to be included in our vpnc-script package; isn't it working?

Note that if you're using NetworkManager, things are completely different. When invoked from NetworkManager, OpenConnect tells NetworkManager about the DNS and then it's up to NetworkManager to feed the information into whatever is actually being used for DNS resolution. That would be an entirely different bug.
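For context, the resolved support in vpnc-script boils down to something like the following sketch (TUNDEV, INTERNAL_IP4_DNS and CISCO_DEF_DOMAIN are the environment variables openconnect/vpnc export to the script; the 2016 commit used busctl directly, while newer systemd also ships resolvectl, used here for brevity):

# sketch only; the real vpnc-script handles many more cases
if [ "$reason" = "connect" ] && systemctl -q is-active systemd-resolved; then
    resolvectl dns "$TUNDEV" $INTERNAL_IP4_DNS             # per-link DNS server(s)
    resolvectl domain "$TUNDEV" "${CISCO_DEF_DOMAIN:-~.}"  # search/routing domain
fi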
(In reply to Michael Catanzaro from comment #8)
> Anyway, although I would be a little irritated if we block on openconnect, I
> guess it is good that we are having this conversation, because we're
> steadily increasing quality requirements for Fedora relative to where we
> were in the past, and that is a good direction to be going in.

I'd be irritated if we *didn't* block on it. My employer currently forces us to use Ubuntu for machines connected to the corporate network, and I *hate* it, precisely because Ubuntu lacks the focus on "enterprise" use cases, and is just plain broken for so many normal day-to-day things (like joining a corporate VPN and getting DNS right when I do so, joining a Windows domain, having Kerberos authentication working, installing corporate SSL CAs, etc.). The fact that Fedora *does* care about this stuff, and *does* expect it to work, is something I really miss. And I'd be very sad if Fedora lowered its standards.

We should expect corporate users to be able to use Fedora for their common use cases, and continue to expect it to Just Work. Proprietary VPNs are a very large part of that (and there's an open source server implementation in ocserv too, btw).
(In reply to David Woodhouse from comment #14)
> Note that if you're using NetworkManager, things are completely different.
> When invoked from NetworkManager, OpenConnect tells NetworkManager about the
> DNS and then it's up to NetworkManager to feed the information into whatever
> is actually being used for DNS resolution. That would be an entirely
> different bug.

Well, it's not entirely clear, but I assume this bug is for the NetworkManager case. Mikhail, is that correct? Since NetworkManager is handling openvpn correctly, I'll guess the responsible component is NetworkManager-openconnect. I doubt anything is going wrong in NetworkManager itself, again since it works well with openvpn.

(In reply to David Woodhouse from comment #15)
> I'd be irritated if we *didn't* block on it.

We have consensus to add this blocker criterion:

"Using the default network configuration tools for the console and for release-blocking desktops, it must be possible to establish a working connection to common OpenVPN, openconnect-supported and vpnc-supported VPN servers with typical configurations."

So the NetworkManager case will be a blocker. If the problem is command line only, it's not a blocker. This is an awkward bug for the systemd-resolved change proposal owners, since we don't use openconnect and are just hoping that somebody else fixes the issue. Other serious systemd-resolved bugs are under control right now.
(In reply to Michael Catanzaro from comment #16)
> If the problem is command line only, not a blocker.

Er, I misread the criterion. It seems it's a blocker either way. OK.
Assuming NetworkManager... can I see the output of 'nmcli con show XXXX' for the VPN connection please? Along with whatever NetworkManager logs while the connection is established. The way that OpenConnect feeds DNS information to NetworkManager hasn't changed for a long time, and is one step removed from how NetworkManager feeds it to resolved/dnsmasq/etc. — it seems odd that this would be openconnect-specific. But let's take a look...
Could it be that bug 1553634 isn't actually fixed correctly, and we still require some "special" configuration to make the default VPN case work? Does the DNS start working if you add sbis.ru to the search domains for the VPN? Or set ipv4.dns-priority=-1 or whatever else was needed to make it work?
Also: does IP routing actually work? Is it *only* DNS that isn't working? Is it just that DNS goes to the wrong place, and does it work if you put the correct VPN nameserver directly into /etc/resolv.conf? What is the output of 'resolvectl query' when connected to the VPN?
(In reply to David Woodhouse from comment #20)
> if you put the correct VPN
> nameserver directly into /etc/resolv.conf does it work?

Remember that since F33, glibc will only look at /etc/resolv.conf if systemd-resolved is not running at all. So it only matters for applications that read the file manually, bypassing glibc, which is not recommended as that doesn't allow for split DNS.
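Concretely, this is the nss-resolve hookup in /etc/nsswitch.conf; on F33 the hosts line looks roughly like the following (exact module order may vary per install, shown here as an illustration rather than pulled from the reporter's system):

hosts: files mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] myhostname dns

The `resolve [!UNAVAIL=return]` action means glibc stops there whenever systemd-resolved is reachable, so the trailing `dns` module (which reads /etc/resolv.conf) only runs when resolved is not running at all.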
$ nmcli c show
NAME                UUID                                  TYPE      DEVICE
Tensor              a1748f5f-af61-4d15-9842-9b007d30f828  vpn       enp5s0
Wired connection 1  0033f0da-1584-3cc5-92dd-f026510a0d95  ethernet  enp5s0
vpn0                4f248d82-abfe-46ec-855a-01d0cf476ade  tun       vpn0
virbr0              48d4182c-6c9b-42f3-ad55-7f146c36774f  bridge    virbr0
vnet0               0126d091-b1e0-47f1-b6be-d11310193174  tun       vnet0

$ ping git.sbis.ru
PING git.sbis.ru (10.76.168.67) 56(84) bytes of data.
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=1 ttl=58 time=27.8 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=2 ttl=58 time=27.7 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=3 ttl=58 time=27.6 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=4 ttl=58 time=27.7 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=5 ttl=58 time=27.7 ms
^C64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=6 ttl=58 time=27.7 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=7 ttl=58 time=27.7 ms
^C
--- git.sbis.ru ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6008ms
rtt min/avg/max/mdev = 27.585/27.717/27.829/0.069 ms

$ sudo systemctl enable systemd-resolved.service
Created symlink /etc/systemd/system/dbus-org.freedesktop.resolve1.service → /usr/lib/systemd/system/systemd-resolved.service.
Created symlink /etc/systemd/system/multi-user.target.wants/systemd-resolved.service → /usr/lib/systemd/system/systemd-resolved.service.
$ sudo systemctl start systemd-resolved.service

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ nmcli c modify a1748f5f-af61-4d15-9842-9b007d30f828 ipv4.dns-priority -1

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ resolvectl query git.sbis.ru
git.sbis.ru: resolve call failed: 'git.sbis.ru' not found

$ nmcli c down Tensor --ask
Connection 'Tensor' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

$ nmcli c up Tensor --ask
POST https://vpn.tensor.ru:501/
Connected to 91.213.144.15:501
SSL negotiation with vpn.tensor.ru
Connected to HTTPS on vpn.tensor.ru with ciphersuite (TLS1.2)-(ECDHE-SECP256R1)-(RSA-SHA512)-(AES-256-GCM)
XML POST enabled
Please enter your username and password.
GROUP: [Corp|Main|Office|Region|TechSupport|TechSupport-Region]:Region
POST https://vpn.tensor.ru:501/
XML POST enabled
Please enter your username and password.
Username:mv.gavrilov
Password:
POST https://vpn.tensor.ru:501/
Please enter your one-time password. You will receive a password via mobile application or via SBIS online.
Response:
POST https://vpn.tensor.ru:501/
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)

$ resolvectl query git.sbis.ru
git.sbis.ru: resolve call failed: 'git.sbis.ru' not found

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ sudo systemctl stop systemd-resolved.service

$ ping git.sbis.ru
PING git.sbis.ru (10.76.168.67) 56(84) bytes of data.
64 bytes from git-internal.sbis.ru (10.76.168.67): icmp_seq=1 ttl=58 time=27.6 ms
64 bytes from git-internal.sbis.ru (10.76.168.67): icmp_seq=2 ttl=58 time=27.7 ms
64 bytes from git-internal.sbis.ru (10.76.168.67): icmp_seq=3 ttl=58 time=27.8 ms
64 bytes from git-internal.sbis.ru (10.76.168.67): icmp_seq=4 ttl=58 time=27.8 ms
^C
--- git.sbis.ru ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 27.625/27.711/27.786/0.073 ms
> Also: does IP routing actually work? Is it *only* DNS that isn't working?

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ ping 10.76.168.67
PING 10.76.168.67 (10.76.168.67) 56(84) bytes of data.
64 bytes from 10.76.168.67: icmp_seq=1 ttl=58 time=27.8 ms
64 bytes from 10.76.168.67: icmp_seq=2 ttl=58 time=27.7 ms
64 bytes from 10.76.168.67: icmp_seq=3 ttl=58 time=27.7 ms
^C
--- 10.76.168.67 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 27.654/27.719/27.756/0.046 ms

$ sudo systemctl stop systemd-resolved.service

$ ping git.sbis.ru
PING git.sbis.ru (10.76.168.67) 56(84) bytes of data.
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=1 ttl=58 time=27.5 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=2 ttl=58 time=27.9 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=3 ttl=58 time=27.7 ms
^C
--- git.sbis.ru ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 27.499/27.715/27.940/0.180 ms

As you can see, *only* DNS isn't working.
Hm, OK... so that proves your point, only DNS is broken. Now, when the VPN connection is up, let's try:

$ nmcli connection show Tensor | grep DNS

That will print what NetworkManager thinks the DNS server(s) is.

$ resolvectl dns

That will print what systemd thinks the DNS server(s) is. It should match. Does it?

$ nmcli connection show Tensor | grep DOMAIN

That should print what NetworkManager thinks the search domain(s) is (if any; there might not be one if you want everything to go through the VPN connection).

$ resolvectl domain

That will print what systemd thinks the search domain(s) is (if any). Again, it should match. Does it?
Also, I notice you're not reconfiguring resolv.conf when enabling/disabling resolved, so let's see how you have configured /etc/resolv.conf:

$ ls -l /etc | grep resolv.conf

When resolved is enabled, it should look like this:

$ ls -l /etc | grep resolv.conf
lrwxrwxrwx. 1 root root 39 Sep  1 09:32 resolv.conf -> ../run/systemd/resolve/stub-resolv.conf

That's not the only supported configuration, but it's our default configuration and therefore the only case that matters for blocker bugs. Just running 'systemctl enable systemd-resolved' is not going to touch resolv.conf; that has to be modified manually when enabling/disabling resolved.

(In reply to Michael Catanzaro from comment #24)
> $ resolvectl dns
>
> That will print what systemd thinks the DNS server(s) is. It should match.
> Does it?

It should also match the first server listed in /etc/resolv.conf when resolved is disabled. When resolved is disabled, the first server listed in /etc/resolv.conf is going to be used, and we know that must be working for you, so the information from 'resolvectl dns' probably doesn't match it. The really interesting question is whether NetworkManager has it right or not...
$ nmcli connection show Tensor | grep DNS
IP4.DNS[1]: 10.76.4.153

$ resolvectl dns
Global: 10.76.4.153
Link 2 (enp5s0): 192.168.1.1
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 6 (vnet0):
Link 7 (vpn0):

$ nmcli connection show Tensor | grep DOMAIN

$ resolvectl domain
Global:
Link 2 (enp5s0): ~.
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 6 (vnet0):
Link 7 (vpn0):

$ ls -l /etc | grep resolv.conf
-rw-r--r--. 1 root root 53 Sep  6 18:15 resolv.conf
-rw-r--r--. 1 root root 76 Aug 11 06:15 resolv.conf.orig-with-nm

$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.76.4.153

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known
OK, so NetworkManager has correctly propagated the right DNS settings to systemd, but:

(In reply to Mikhail from comment #26)
> $ resolvectl domain
> Global:
> Link 2 (enp5s0): ~.
> Link 3 (wlp4s0):
> Link 4 (virbr0):
> Link 5 (virbr0-nic):
> Link 6 (vnet0):
> Link 7 (vpn0):

OK, that's no good. There's the problem. systemd has been told to use the DNS server for enp5s0, 192.168.1.1 (your router), for all traffic, and to use your VPN's DNS server, 10.76.4.153, for no traffic. NetworkManager has told systemd to do the wrong thing, and systemd just does what it's told. But the DNS itself is otherwise configured correctly. I don't know why the domain is misconfigured, but since NetworkManager is responsible for pushing this configuration to systemd, and since it does so properly for openvpn connections, I continue to suspect NetworkManager-openconnect.

The expected behavior would be either this:

$ resolvectl domain
Global:
Link 2 (enp5s0):
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 6 (vnet0):
Link 7 (vpn0): ~.

which tells systemd to use your VPN's DNS for everything; or, alternatively:

$ resolvectl domain
Global:
Link 2 (enp5s0): ~.
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 6 (vnet0):
Link 7 (vpn0): sbis.ru

which would tell systemd to use your VPN's DNS for sbis.ru and your router's DNS for everything else. (That was not possible with the old nss-dns name resolution.)

Lastly, I wonder what would happen if you check the box "Use this connection only for resources on its network" in the desktop configuration settings for your VPN. (Or with nmcli; I'm not sure how to do that with nmcli.) It shouldn't be necessary, but it might serve as a workaround. That works fine for openvpn, and *should* work around this bug by establishing the search domain for sbis.ru on vpn0.

> $ cat /etc/resolv.conf
> # Generated by NetworkManager
> nameserver 10.76.4.153

Here's a second problem, but probably not one responsible for this bug. This is supposed to be a non-default configuration; we should default to /etc/resolv.conf managed by systemd, not by NetworkManager. But this happened by accident due to bug #1873856. Since that bug has a freeze exception, nobody should wind up in this configuration by accident anymore, assuming the fixes in that bug work properly. Manual intervention is required to fix this: delete /etc/resolv.conf, replace it with a symlink to ../run/systemd/resolve/stub-resolv.conf, then reboot (or restart NetworkManager and resolved).
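For clarity, that manual fix amounts to the following commands (assuming the default stub-resolv.conf setup is what's wanted):

$ sudo rm /etc/resolv.conf
$ sudo ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
$ sudo systemctl restart systemd-resolved NetworkManager   # or reboot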
> The expected behavior would be either this:
>
> $ resolvectl domain
> Global:
> Link 2 (enp5s0):
> Link 3 (wlp4s0):
> Link 4 (virbr0):
> Link 5 (virbr0-nic):
> Link 6 (vnet0):
> Link 7 (vpn0): ~.
>
> which tells systemd to use your VPN's DNS for everything;

Yes, that is the expected behavior, because the organization's DNS server also resolves names for several domains. For example:

$ ping test-reg.tensor.ru
PING test-reg.tensor.ru (10.76.242.74) 56(84) bytes of data.
64 bytes from office-test-reg.tensor.ru (10.76.242.74): icmp_seq=1 ttl=58 time=27.5 ms
64 bytes from office-test-reg.tensor.ru (10.76.242.74): icmp_seq=2 ttl=58 time=27.7 ms
^C
--- test-reg.tensor.ru ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 27.475/27.609/27.744/0.134 ms

> Lastly, I wonder what would happen if you check the box "Use this connection
> only for resources on its network" in the desktop configuration settings for
> your VPN. (Or with nmcli; I'm not sure how to do that with nmcli.) It
> shouldn't be necessary, but it might serve as a workaround.

I tried this option, but without success.
Try setting ipv4.dns-priority=-1 and ipv4.dns-search=~. on the VPN connection. If that works around it, we need to revisit the "fix" for bug 1553634 because you shouldn't have to do that workaround; this should work out of the box. Don't just set the search domain to sbis.ru because then you'll be suffering CVE-2018-1000135.
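Spelled out with nmcli, the suggested workaround would be (connection name taken from this report):

$ nmcli connection modify Tensor ipv4.dns-priority -1 ipv4.dns-search '~.'
$ nmcli connection down Tensor && nmcli connection up Tensor   # re-activate to apply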
Can you confirm this is a full tunnel VPN, and your default IP route is through it when you're connected?
Thanks, with ipv4.dns-priority=-1 and ipv4.dns-search=~. on the VPN connection, resolving names works as intended.

$ nmcli c show Tensor | grep ipv4.dns-priority
ipv4.dns-priority: -1

$ nmcli c show Tensor | grep ipv4.dns-search
ipv4.dns-search: --

$ nmcli c modify Tensor ipv4.dns-search ~.

$ resolvectl domain
Failed to get global data: Could not activate remote peer.

$ sudo systemctl enable systemd-resolved.service
Created symlink /etc/systemd/system/dbus-org.freedesktop.resolve1.service → /usr/lib/systemd/system/systemd-resolved.service.
Created symlink /etc/systemd/system/multi-user.target.wants/systemd-resolved.service → /usr/lib/systemd/system/systemd-resolved.service.
$ sudo systemctl start systemd-resolved.service

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ resolvectl domain
Global:
Link 2 (enp5s0): ~.
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 6 (vpn0):

$ nmcli c show Tensor | grep ipv4.dns-search
ipv4.dns-search: ~.

$ nmcli c down Tensor
Connection 'Tensor' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

$ nmcli c up Tensor
A password is required to connect to 'Tensor'.
Warning: password for 'vpn.secrets.gateway' not given in 'passwd-file' and nmcli cannot ask without '--ask' option.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)

[mikhail@localhost ~]$ resolvectl domain
Global:
Link 2 (enp5s0):
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 6 (vpn0): ~.

$ ping git.sbis.ru
PING git.sbis.ru (10.76.168.67) 56(84) bytes of data.
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=1 ttl=58 time=33.3 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=2 ttl=58 time=32.8 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=3 ttl=58 time=32.6 ms
^C
--- git.sbis.ru ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 32.632/32.913/33.284/0.273 ms

$ ping test-reg.tensor.ru
PING test-reg.tensor.ru (10.76.242.74) 56(84) bytes of data.
64 bytes from test-help-reg.tensor.ru (10.76.242.74): icmp_seq=1 ttl=58 time=32.5 ms
64 bytes from test-help-reg.tensor.ru (10.76.242.74): icmp_seq=2 ttl=58 time=33.0 ms
64 bytes from test-help-reg.tensor.ru (10.76.242.74): icmp_seq=3 ttl=58 time=32.8 ms
64 bytes from test-help-reg.tensor.ru (10.76.242.74): icmp_seq=4 ttl=58 time=32.7 ms
^C
--- test-reg.tensor.ru ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 32.538/32.762/33.017/0.180 ms
> Can you confirm this is a full tunnel VPN, and your default IP route is through it when you're connected?

$ ip r
default via 192.168.1.1 dev enp5s0 proto dhcp metric 100
10.0.0.0/8 dev vpn0 proto static scope link metric 50
10.176.132.0/22 dev vpn0 proto kernel scope link src 10.176.135.130 metric 50
91.213.144.15 via 192.168.1.1 dev enp5s0 proto static metric 100
192.168.1.0/24 dev enp5s0 proto kernel scope link src 192.168.1.68 metric 100
192.168.1.1 dev enp5s0 proto static scope link metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
(In reply to Mikhail from comment #26)
> $ nmcli connection show Tensor | grep DOMAIN
>
> $ resolvectl domain

(In reply to Mikhail from comment #32)
> $ ip r
> default via 192.168.1.1 dev enp5s0 proto dhcp metric 100
> ...

So, this isn't a full tunnel VPN, and the VPN server doesn't push a domain. As a consequence, NM defaults to using the local nameserver for all queries. I think the problem is that the VPN server doesn't push the domain (or that NM/NM-openconnect don't use it).
I just deleted and recreated the openconnect connection from scratch, and confirmed that `ipv4.dns-priority` does not need to be changed to work around the problem; setting `ipv4.dns-search` alone is enough.

$ ping git.sbis.ru
ping: git.sbis.ru: Name or service not known

$ ping test-reg.tensor.ru
ping: test-reg.tensor.ru: Name or service not known

$ nmcli c show Tensor | grep ipv4.dns-search
ipv4.dns-search: --

$ nmcli c show Tensor | grep ipv4.dns-priority
ipv4.dns-priority: 0

$ nmcli c modify Tensor ipv4.dns-search ~.

$ nmcli c down Tensor --ask
Connection 'Tensor' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)

$ nmcli c up Tensor --ask
POST https://vpn.tensor.ru:501/
Connected to 91.213.144.15:501
SSL negotiation with vpn.tensor.ru
Connected to HTTPS on vpn.tensor.ru with ciphersuite (TLS1.2)-(ECDHE-SECP256R1)-(RSA-SHA512)-(AES-256-GCM)
XML POST enabled
Please enter your username and password.
GROUP: [Corp|Main|Office|Region|TechSupport|TechSupport-Region]:Region
POST https://vpn.tensor.ru:501/
XML POST enabled
Please enter your username and password.
Username:mv.gavrilov
Password:
POST https://vpn.tensor.ru:501/
Please enter your one-time password. You will receive a password via mobile application or via SBIS online.
Response:
POST https://vpn.tensor.ru:501/
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/10)

$ ping test-reg.tensor.ru
PING test-reg.tensor.ru (10.76.242.74) 56(84) bytes of data.
64 bytes from office-test-reg.tensor.ru (10.76.242.74): icmp_seq=1 ttl=58 time=27.4 ms
64 bytes from office-test-reg.tensor.ru (10.76.242.74): icmp_seq=2 ttl=58 time=27.3 ms
64 bytes from office-test-reg.tensor.ru (10.76.242.74): icmp_seq=3 ttl=58 time=27.4 ms
^C
--- test-reg.tensor.ru ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 27.327/27.368/27.426/0.042 ms

$ ping git.sbis.ru
PING git.sbis.ru (10.76.168.67) 56(84) bytes of data.
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=1 ttl=58 time=28.0 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=2 ttl=58 time=27.5 ms
64 bytes from git.sbis.ru (10.76.168.67): icmp_seq=3 ttl=58 time=27.6 ms
^C
--- git.sbis.ru ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 27.481/27.678/27.954/0.201 ms

$ ip r
default via 192.168.1.1 dev enp5s0 proto dhcp metric 100
10.0.0.0/8 dev vpn0 proto static scope link metric 50
10.176.132.0/22 dev vpn0 proto kernel scope link src 10.176.135.163 metric 50
91.213.144.15 via 192.168.1.1 dev enp5s0 proto static metric 100
192.168.1.0/24 dev enp5s0 proto kernel scope link src 192.168.1.68 metric 100
192.168.1.1 dev enp5s0 proto static scope link metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

> I think the problem is that the VPN server doesn't push the domain (or that NM/NM-openconnect don't use it).

Before the "systemd-resolved.service" change, name resolution worked without it.
So setting ipv4.dns-search to ~. fixes it even with systemd-resolved? Seems like that should perhaps be the default for a VPN which doesn't specify any lookup domains.
> So setting ipv4.dns-search to ~. fixes it even with systemd-resolved?

Yes.
(In reply to Mikhail from comment #34)
> Before the "systemd-resolved.service" change, name resolution worked
> without it.

That's because we were using nss-dns, which doesn't know how to do split DNS and just stupidly uses the first server listed in /etc/resolv.conf for *all* connections. That's what you want, but it's not what other VPN users necessarily want. E.g. I only want connections to redhat.com going through my VPN; using redhat.com DNS for *all* my traffic would be a major privacy violation. Similarly, if I were to connect to my personal VPN after my redhat.com VPN, then internal redhat.com requests would be leaked to the personal VPN server. Then we have a security breach in one direction, since my personal VPN shouldn't know about internal redhat.com hostnames, and a usability breach in the other, since you can't connect to internal resources when the DNS requests are going to the wrong VPN. (Whichever VPN you connect to second was listed first and therefore received all DNS.) That's how I wound up getting involved in all this; it took me several days to figure out why Red Hat's VPN sometimes worked and sometimes didn't...

Anyway, suffice to say nss-dns is a hot mess, and with systemd-resolved we have proper search domains that you can configure to split DNS to where it really belongs.

(In reply to David Woodhouse from comment #35)
> So setting ipv4.dns-search to ~. fixes it even with systemd-resolved?
>
> Seems like that should perhaps be the default for a VPN which doesn't specify any lookup domains.

Probably? Seems like a good idea. At least if it has functional DNS? I can't think of any reason a VPN wouldn't want you to use its DNS...
"That's because we were using nss-dns, which doesn't know how to do split DNS and just stupidly uses the first server listed in /etc/resolv.conf for *all* connections. That's what you want, but it's not what other VPN users necessarily want. E.g. I only want connections to redhat.com going through my VPN; using redhat.com DNS for *all* my traffic would be a major privacy violation." But...this wasn't actually a problem before anyway, because the first server listed in /etc/resolv.conf was the local caching server which could do all this split DNS stuff if you wanted it to. That's how I've been doing it for years.
(In reply to Adam Williamson from comment #38)
> But...this wasn't actually a problem before anyway, because the first server
> listed in /etc/resolv.conf was the local caching server which could do all
> this split DNS stuff if you wanted it to. That's how I've been doing it for
> years.

That was true in some other distros, but not in Fedora. There was a change proposal to run a local resolver years ago, but it was never successfully implemented. If you had that setup, you must have configured it manually. (FWIW, another local caching server, like dnsmasq, would indeed work just as well as systemd-resolved. I think Ubuntu did something like that several years ago, before switching to systemd.)
yeah, I did configure it manually. it's like one line in the config file or something. I'm not sure all the problems we seem to be running into with systemd-resolved are worth it if the payoff is to get that by default. Especially since we could've just switched NetworkManager to use dnsmasq by default instead...
I think it's basically what Michael says in comment 27 and comment 29, except I disagree with:

- "NetworkManager has told systemd to do the wrong thing" (rather, NetworkManager also did what it was told)
- "Here's a second problem, but probably not responsible for this bug" (rather, this seems to be the issue).

What really matters is how DNS is currently configured. If we look at comment 26, I think we can see what's wrong.

1) NetworkManager never configures a global DNS server in systemd-resolved. So

> $ resolvectl dns
> Global: 10.76.4.153
> Link 2 (enp5s0): 192.168.1.1
>
> $ ls -l /etc | grep resolv.conf
> -rw-r--r--. 1 root root 53 Sep 6 18:15 resolv.conf

means systemd-resolved picked this up from /etc/resolv.conf (as also documented in `man systemd-resolved`: "Alternatively, /etc/resolv.conf may be managed by other packages, in which case systemd-resolved will read it for DNS configuration data").

2) NetworkManager wrote /etc/resolv.conf as a plain file. With systemd-resolved you usually don't want that. NetworkManager has 3 relevant options in `man NetworkManager.conf`: "main.dns", "main.rc-manager", "main.systemd-resolved". By default you run with no explicit configuration for NetworkManager, so the default gets detected based on whether /etc/resolv.conf is a symlink. Since it's not a symlink, you effectively have behavior like:

[main]
dns=default
rc-manager=symlink
systemd-resolved=yes

which means: maintain a regular /etc/resolv.conf, but also push the configuration to systemd-resolved via D-Bus. Also, check the logs, which have a line like

> dns-mgr[0x56186552f2c0]: init: dns=??? rc-manager=???, plugin=???

(In the future, please always include useful logs when reporting a problem with NetworkManager; see https://cgit.freedesktop.org/NetworkManager/NetworkManager/tree/contrib/fedora/rpm/NetworkManager.conf#n28 for hints about logging.)

Btw, NetworkManager always defaults to `main.systemd-resolved=yes`, because it really wants to push the DNS configuration to systemd-resolved if it is running, since that is the only way it can do per-interface name resolution -- which is important for connectivity checking to work properly.

I think the simplest way to fix this is making /etc/resolv.conf a symlink (as Michael said and as explained in `man systemd-resolved`) and running `systemctl reload NetworkManager`.

3) As Michael says in comment 25, the Change to use systemd-resolved is (also) about having /etc/resolv.conf as a symlink. That did not correctly happen on this system, so it's not clear where that configuration comes from (??). It's probably as Michael says: a duplicate of bug 1873856. Although, that bug is about a fresh install, while this bug is about an upgrade (??).

(In reply to Adam Williamson from comment #40)
> yeah, I did configure it manually. it's like one line in the config file or
> something. I'm not sure all the problems we seem to be running into with
> systemd-resolved are worth it if the payoff is to get that by default.
> Especially since we could've just switched NetworkManager to use dnsmasq by
> default instead...

I don't agree. I am talking about NetworkManager's dns=dnsmasq plugin, not running dnsmasq as a standalone service (with which NetworkManager cannot integrate). The dns=dnsmasq setting is a nice, simple thing, where NetworkManager can run a local caching DNS server for simple use-cases.
But since Fedora has no /usr/sbin/resolvconf (aside from the one that systemd-resolved provides), that's really not a proper, general-purpose solution, because only NetworkManager can configure the dnsmasq instance that it runs (there is no resolvconf or similar). OTOH, systemd-resolved is the only solution that provides a central service for DNS so that different network "managers" can integrate. It is thereby also the only way on Fedora to get /usr/sbin/resolvconf working.
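As an aside, regarding the logging request above: something along these lines should surface the dns-mgr init line (a sketch; TRACE is very verbose, so reset the level afterwards):

$ sudo nmcli general logging level TRACE domains ALL
$ journalctl -b -u NetworkManager | grep dns-mgr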
My understanding is that per-interface name resolution (well, per-domain resolution based on interface configuration) has worked in Fedora with dns=dnsmasq for years. Doing it with systemd-resolved is different, but not new.

Regarding:
> I can't think of any reason a VPN wouldn't want you to use its DNS....

Indeed so. Most corporate VPN providers will *insist* upon it. Letting *any* DNS lookups leak to the local wifi hotspot is the problem discussed in CVE-2018-1000135. It should not be the default behaviour of a VPN.

Even if my VPN adds 'example.com' as a default search domain so that I can be lazy and just type 'intranet' into my web browser and have it auto-completed to 'intranet.example.com', that is an autocomplete or "search domain" configuration. It has nothing to do with which DNS server we should use for lookups. We should use the VPN DNS server for *all* lookups unless explicitly configured otherwise.
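In systemd-resolved terms that distinction is visible in the domain syntax: a plain domain acts as a search suffix (and routing domain), while a "~"-prefixed one is routing-only. A quick sketch, assuming a hypothetical tun0 link:

$ resolvectl domain tun0 'example.com'    # search suffix: completes "intranet", also routes the domain
$ resolvectl domain tun0 '~example.com'   # routing-only: no name completion
$ resolvectl domain tun0 '~.'             # route *all* lookups to tun0's DNS server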
Please note https://bugzilla.redhat.com/show_bug.cgi?id=1558238#c7 which says:

> On RHEL 7 and Fedora, `dns=default` is used by default, which is not vulnerable to [CVE-2018-1000135], since it does not use "split DNS".

It looks like we are making that no longer true. Please let's make sure we don't introduce that vulnerability.
OK, I'll close this as a duplicate of bug #1873856. I'll test that later today and propose it as a blocker using this VPN functionality criterion if it's not already fixed.

(In reply to David Woodhouse from comment #42)
> Indeed so. Most corporate VPN providers will *insist* upon it. Letting *any*
> DNS lookups leak to the local wifi hotspot is the problem discussed in
> CVE-2018-1000135. It should not be the default behaviour of a VPN.

Well, it depends on whether you check "Use this connection only for resources on its network," which is off by default. We don't have separate GUI settings for DNS and routing, so they both get lumped together under this one setting. That seems fine to me. I assume most users probably expect DNS to go where their traffic goes, and if an employer wants to see all your DNS, it should either have enough VPN capacity to handle the corresponding traffic as well, or provide manual configuration instructions to avoid that.

*** This bug has been marked as a duplicate of bug 1873856 ***
> OK, I'll close this as a duplicate of bug #1873856. I'll test that later
> today and propose it as a blocker using this VPN functionality criterion if
> it's not already fixed.

I think that's wrong, and we are introducing a security vulnerability by enabling split DNS in Fedora without fixing the default behaviour. We should use the VPN DNS for all lookups unless the "Use this connection only for resources on its network" option is set, which you rightly note is not the default.
(In reply to David Woodhouse from comment #45)
> We should use the VPN DNS for all lookups unless the "Use this connection
> only for resources on its network" option is set, which you rightly note is
> not the default.

Well, I could just as easily say that not splitting DNS is a security vulnerability, but I agree with this part. If that's not happening, of course it's a bug.

Now, if we've analyzed this problem correctly, it's caused by bug #1873856, which I've just verified is fixed for both fresh installs and upgrades from F32, so this should hopefully only have affected users who installed F33 pre-beta. I've posted instructions to manually fix affected systems at https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/QN5VHS5SRUG2ZXADMNPYN25Q5QYACBR4/.

Mikhail, if you could perform the steps I recommend in that mail, then create a new openconnect connection and confirm that the newly-created connection works properly, that would be great, so that we can be confident this is really fixed.
Mikhail, please confirm that unless you select 'Use this connection only for resources on its network', DNS lookups for e.g. google.com are going to the VPN nameserver and *not* being leaked to the local public network. If they are leaking, then we have introduced CVE-2018-1000135 into Fedora, after previously saying Fedora was OK because it didn't have split DNS.
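One way to check for such a leak, as a sketch (enp5s0 is the uplink interface from the outputs above): watch the uplink for port-53 traffic while resolving a public name; nothing should appear there if the VPN's DNS is being used.

$ sudo tcpdump -ni enp5s0 port 53 &
$ resolvectl query google.com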
> then we have introduced CVE-2018-1000135 into Fedora

I assume you are talking about comment 43, where there is the claim that if you use dns=default, you don't do split DNS and don't get DNS leaks. I don't think that is correct.

You get DNS leaks if (and only if) multiple DNS servers are configured. NetworkManager configures the DNS servers as the user asks it to, and you control that with the `ipv4.dns-priority` setting. And if split DNS is enabled (with dns=dnsmasq or dns=systemd-resolved), then `ipv4.dns-priority` interacts with the search domains as described in `man nm-settings`.

If you run NetworkManager with dns=default, NetworkManager can still configure multiple DNS servers in /etc/resolv.conf, and potentially there is a DNS leak. Sure, glibc's resolver implementation by default asks the name servers in order of appearance (seemingly avoiding the DNS leak). But if the VPN's name server times out, it will proceed to the next server and thus "leak".

You avoid CVE-2018-1000135 by configuring your DNS in NetworkManager correctly. The only complaint here is that several tools default to a configuration where split DNS is done instead of avoiding DNS leaks. But it is up to those tools (or the user) to create the configuration that the user wants. E.g. `nmcli connection import type wireguard file $CONF` sets ipv4.dns-priority and avoids DNS leaks ([1]). In any case, this is fully configurable, and when you talk about "defaults", you really need to talk about the tool that you use to create your configuration (keyfiles on disk, nm-connection-editor, nmcli, D-Bus API, ...).

[1] https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/blob/a0179362231e2c1c4ebba7d5616da2a4677b1c4b/clients/common/nm-vpn-helpers.c#L764
Looks like NetworkManager does add ~. for full tunnel VPNs automatically; it's only split tunnel for which it doesn't, so we don't suffer the leak in the case that matters. Although perhaps NM should add ~. even for split tunnel VPNs if the VPN offers *no* routing domains?
> > then we have introduced CVE-2018-1000135 into Fedora
>
> I assume you are talking about comment 43, where there is the claim that if
> you use dns=default, you don't do split DNS and don't get DNS leaks. I don't
> think that is correct.
>
> You get DNS leaks if (and only if) multiple DNS servers are configured.
> NetworkManager configures the DNS servers as the user asks it to, and you
> control that with the `ipv4.dns-priority` setting. And if split DNS is
> enabled (with dns=dnsmasq or dns=systemd-resolved), then `ipv4.dns-priority`
> interacts with the search domains as described in `man nm-settings`.
>
> If you run NetworkManager with dns=default, NetworkManager can still
> configure multiple DNS servers in /etc/resolv.conf, and potentially there is
> a DNS leak. Sure, glibc's resolver implementation by default asks the name
> servers in order of appearance (seemingly avoiding the DNS leak). But if the
> VPN's name server times out, it will proceed to the next server and thus
> "leak".

Agreed. That was a suboptimal response from the security response team (originally in https://bugzilla.redhat.com/show_bug.cgi?id=1558238#c7).

> You avoid CVE-2018-1000135 by configuring your DNS in NetworkManager correctly.

Yep. And you're right that it's about *defaults*. To expand on my previous comment, which crossed in the air with yours: I just spun up a new F33 VM, added a VPN using the GUI tool without explicitly changing any DNS settings, and because it was a full-tunnel VPN, NetworkManager did the right thing and added ~. for me — so it does look like the leak I was concerned about doesn't happen. I did need to manually fix the /etc/resolv.conf symlink, but perhaps I just installed from a slightly out-of-date mirror. Thanks.
> VPN NetworkManager did the right thing and added ~. for me — so it does
> look like the leak I was concerned about doesn't happen.

That seems to work as expected.

The canonical way to avoid DNS leaks is to set ipv4.dns-priority to the smallest negative value (of all active profiles). Then only DNS servers from that profile/interface are considered. That works the same with and without split DNS. Split DNS gives you more flexibility, and another way to avoid DNS leaks without setting the DNS priority to a negative value. With split DNS, the search domains are like route destinations for DNS queries, and the dns-priority is like a route metric.

Note that if you leave ipv4.dns-priority at 0 (which most tools that configure profiles choose by default), then it translates to dns-priority=50 for VPNs and dns-priority=100 otherwise. So, if you have a VPN profile and a Wi-Fi profile, both with search domain "~." and dns-priority 50 and 100, respectively, then the VPN shadows the search domains from the Wi-Fi profile. Thus, in this example you also avoided DNS leaks despite not setting the DNS priority to a negative value. Of course, if the Wi-Fi profile also had another search domain like ~example.com, then such queries would still go via Wi-Fi as requested, which may be considered an undesired leak.
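As a quick illustration of that shadowing rule (profile names from this report; the effective values in the comments follow the 0 -> 50/100 mapping just described):

$ nmcli -g ipv4.dns-priority connection show Tensor                # 0 -> effective 50 (VPN)
$ nmcli -g ipv4.dns-priority connection show "Wired connection 1"  # 0 -> effective 100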
(In reply to David Woodhouse from comment #47)
> Mikhail, please confirm that unless you select 'Use this connection only for
> resources on its network', DNS lookups for e.g. google.com are going to the
> VPN nameserver and *not* being leaked to the local public network.
> If they are leaking, then we have introduced CVE-2018-1000135 into Fedora,
> after previously saying Fedora was OK because it didn't have split DNS.

After applying the changes from the mailing list message [1], I now see that the local DNS server is always used for resolving names.

[1] https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/QN5VHS5SRUG2ZXADMNPYN25Q5QYACBR4/
$ nmcli c show Tensor | grep ipv4.dns-priority
ipv4.dns-priority: 0

$ nmcli c show Tensor | grep ipv4.dns-search
ipv4.dns-search: ~.

$ nslookup google.com
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: google.com
Address: 64.233.165.139
Name: google.com
Address: 64.233.165.100
Name: google.com
Address: 64.233.165.101
Name: google.com
Address: 64.233.165.102
Name: google.com
Address: 64.233.165.113
Name: google.com
Address: 64.233.165.138
Reopening.
> After applying the changes from the mailing list message [1], I now see
> that the local DNS server is always used for resolving names.

We don't really know which nameserver is used from the output you pasted; nslookup only shows that it talks to resolved running on localhost.
(In reply to Zbigniew Jędrzejewski-Szmek from comment #55)
> We don't really know which nameserver is used from the output you pasted.
> nslookup only shows that it talks to resolved running on localhost.

I understand that, but I don't have any idea how to debug this further after applying the changes [1].
Please just re-run the commands from comment #c26.
Adam, Kamil, can we push this out from beta to final blocker? We have only tested two VPN configurations (Mikhail's and David's) and so far one is affected and the other is not, so we don't know how prevalent the issue actually is.
(In reply to Michael Catanzaro from comment #58)
> We have only tested two VPN configurations

(I mean openconnect configurations. Other types of VPN don't seem to be affected.)
(In reply to Michael Catanzaro from comment #58)
> ... so we don't know how prevalent the issue actually is.

Hi. I don't understand what the issue is (much less its prevalence). Can you explain what you think the remaining issue is? Is this in response to comment 52, I suppose? Or what do you think is special about openconnect?
The remaining issue seems to be that Mikhail's VPN is still using local DNS rather than the VPN's DNS, even though /etc/resolv.conf is configured properly now. We should probably never create VPN connections that exclusively use local DNS unless the user has intentionally configured it this way.
> Please just re-run the commands from comment #c26.

$ nmcli connection show Tensor | grep DNS
IP4.DNS[1]: 10.76.4.153

$ resolvectl dns
Global:
Link 2 (enp5s0):
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 7 (vpn0): 10.76.4.153
Link 8 (vnet0):

$ resolvectl domain
Global:
Link 2 (enp5s0):
Link 3 (wlp4s0):
Link 4 (virbr0):
Link 5 (virbr0-nic):
Link 7 (vpn0): ~.
Link 8 (vnet0):

$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad

$ nmcli c show Tensor | grep ipv4.dns-search
ipv4.dns-search: ~.

> Adam, Kamil, can we push this out from beta to final blocker? We have only
> tested two VPN configurations (Mikhail's and David's) and so far one is
> affected and the other is not, so we don't know how prevalent the issue
> actually is.

For me, the VPN still does not work out of the box for newly created connections, even after applying the changes from [1]. Only the workaround of manually setting `ipv4.dns-search` to `~.` on my VPN connection helps.

[1] https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/QN5VHS5SRUG2ZXADMNPYN25Q5QYACBR4/
I don't really have a good handle on what we think is still broken here. It's hard to judge whether it still needs to be a blocker and for which release. This whole area is getting very confusing...
The bug is as originally reported, except it doesn't affect all openconnect VPNs as we originally assumed, it affects only some unknown subset of openconnect VPNs, and we don't know which percentage of users are affected. Since we only know of one affected user, I don't think it makes sense to delay beta release over this. (We still need to fix it, of course. Looks like any change would have to be in NetworkManager.)
As said before, I think the problem is the following: with dns=default, NM used to put the VPN nameserver at the top of resolv.conf. Now we have more complex logic; if the VPN is full-tunnel (has the default route) then we use its DNS server for all queries. If it's not full-tunnel, then we use it only for the domains that the VPN pushes. The idea is that if you are connected, for example, to a corporate intranet, you don't want your query for "funnycatpics.net" to go over the VPN.

This also has the effect that when the VPN doesn't push any domains, the VPN DNS server is never used. Which is exactly what happens here: the VPN pushes a route for 10.0.0.0/8 but doesn't advertise which domains should go over it.

I think the ideal solution to this would be that the VPN gets reconfigured to push the "sbis.ru" domain (or any other domain it wants); since it clearly doesn't want to get all traffic, it shouldn't automatically get all queries as well.

Maybe we should do what David suggests in comment 49 - adding ~. automatically if the VPN offers no domains. But this doesn't seem desirable either: thinking about the corporate VPN case above, one day the VPN could stop pushing the domain, and it would silently start getting all the user's queries unrelated to the intranet.
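For what it's worth, whether NetworkManager considers a VPN full-tunnel, and which domains it received, can be checked on the profile; a quick sketch using the connection from this report:

$ nmcli -g ipv4.never-default connection show Tensor   # "yes" means split tunnel
$ nmcli connection show Tensor | grep -i DOMAIN        # domains pushed by the VPN, if any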
I agree with Beniamino in comment 65 (which is also what I tried to say in https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/OS3FU3KPU7QSOSXGPUI2B2FICOFN7HGU/ and the follow-up email). But is *that* really the remaining issue here?

Comment 61 says the issue is:
> The remaining issue seems to be that Mikhail's VPN is still using local DNS
> rather than the VPN's DNS, even though /etc/resolv.conf is configured
> properly now. We should probably never create VPN connections that
> exclusively use local DNS unless the user has intentionally configured it
> this way.

As said in comment 52:
> After applying the changes from the mailing list message [1], I now see that
> the local DNS server is always used for resolving names.

But what exactly does "local DNS" mean here? That was followed by comment 53:
> $ nslookup google.com
> Server: 127.0.0.53
> Address: 127.0.0.53#53

nslookup queries a DNS server directly, which is not what your libc resolver does (it uses NSS). So nslookup reads /etc/resolv.conf and finds 127.0.0.53 (thus asking systemd-resolved). I think it is expected that nslookup asks the local DNS server 127.0.0.53 (i.e. systemd-resolved). Is that what you mean by "using the local DNS server"? If not, what is the actual problem here? And comment 62 also doesn't show that a "local DNS server" was used.
We're using two different definitions of "local DNS server." In my comment, I meant "Mikhail's ISP's DNS server," not "the systemd-resolved DNS server running on 127.0.0.53." Of course it's expected that nslookup has to use 127.0.0.53.

> I think the ideal solution to this would be that the VPN gets
> reconfigured to push the "sbis.ru" domain (or any other domain it
> wants); since it clearly doesn't want to get all traffic, it shouldn't
> automatically get all queries as well.

OK, so the VPN has somehow indicated that it does *not* want all traffic? (How does that happen, is it part of the openconnect protocol?) I didn't know that was possible; I had (improperly?) assumed that was only configured client-side, when the user decides whether or not to set the "Use this connection only for resources on its network" setting.

I suppose that, if this VPN is misconfigured, then perhaps it would be reasonable to close this and say NetworkManager won't change its behavior.
(In reply to Beniamino Galvani from comment #65)
> Now we have more complex logic; if the VPN is full-tunnel (has the
> default route) then we use its DNS server for all queries. If it's not
> full-tunnel, then we use it only for the domains that the VPN
> pushes. The idea is that if you are connected, for example, to a
> corporate intranet, you don't want your query for "funnycatpics.net"
> to go over the VPN.
>
> This also has the effect that when the VPN doesn't push any domains,
> the VPN DNS server is never used. Which is exactly what happens here:
> the VPN pushes a route for 10.0.0.0/8 but doesn't advertise which
> domains should go over it.

Hm, I don't think that's what's happening here though. Mikhail has already proved (back in comment #23) that his IP traffic is actually using the VPN's route. Only his DNS is leaking to his ISP's DNS instead of using his VPN's DNS, so this problem is really specific to DNS.
(In reply to Michael Catanzaro from comment #68)
> > This also has the effect that when the VPN doesn't push any domains,
> > the VPN DNS server is never used. Which is exactly what happens here:
> > the VPN pushes a route for 10.0.0.0/8 but doesn't advertise which
> > domains should go over it.
>
> Hm, I don't think that's what's happening here though. Mikhail has already
> proved (back in comment #23) that his IP traffic is actually using the VPN's
> route. Only his DNS is leaking to his ISP's DNS instead of using his VPN's
> DNS, so this problem is really specific to DNS.

His routing table from comment 32 is:

$ ip r
default via 192.168.1.1 dev enp5s0 proto dhcp metric 100
10.0.0.0/8 dev vpn0 proto static scope link metric 50
10.176.132.0/22 dev vpn0 proto kernel scope link src 10.176.135.130 metric 50
91.213.144.15 via 192.168.1.1 dev enp5s0 proto static metric 100
192.168.1.0/24 dev enp5s0 proto kernel scope link src 192.168.1.68 metric 100
192.168.1.1 dev enp5s0 proto static scope link metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

In comment 23 he pings an address in the VPN subnet (10.0.0.0/8) and git.sbis.ru, which resolves to the same address. Since he has the "10.0.0.0/8 dev vpn0" route, those pings go through the VPN. The rest of the traffic goes through the default route on enp5s0.
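As a quick way to confirm which link a given destination would use, `ip route get` asks the kernel to resolve the route directly. A sketch with addresses from this report (8.8.8.8 stands in for any public address):

$ ip route get 10.76.168.67   # expected to go out dev vpn0 (matches 10.0.0.0/8)
$ ip route get 8.8.8.8        # expected via 192.168.1.1 dev enp5s0 (default route)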
Since there's clearly a question about whether this should still be an accepted blocker, I'm dropping the tag so it reverts to proposed blocker, and will re-open the discussion in the ticket if I can figure out how. Please discuss/re-vote on blocker status there - https://pagure.io/fedora-qa/blocker-review/issue/24
(In reply to Beniamino Galvani from comment #69)
> His routing table from comment 32 is:
>
> $ ip r
> default via 192.168.1.1 dev enp5s0 proto dhcp metric 100
> 10.0.0.0/8 dev vpn0 proto static scope link metric 50
> 10.176.132.0/22 dev vpn0 proto kernel scope link src 10.176.135.130 metric 50
> 91.213.144.15 via 192.168.1.1 dev enp5s0 proto static metric 100
> 192.168.1.0/24 dev enp5s0 proto kernel scope link src 192.168.1.68 metric 100
> 192.168.1.1 dev enp5s0 proto static scope link metric 100
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
>
> In comment 23 he pings an address in the VPN subnet (10.0.0.0/8) and
> git.sbis.ru, which resolves to the same address. Since he has the
> "10.0.0.0/8 dev vpn0" route, those pings go through the VPN. The rest
> of the traffic goes through the default route on enp5s0.

Thanks, I see you're right. So the VPN is not actually used for most traffic. I see the argument that it makes sense to not use its DNS either....
Do we know for sure how much of this routing and domain information is 'pushed' by the VPN versus what's in the local configuration?
(In reply to Michael Catanzaro from comment #67)
> We're using two different definitions of "local DNS server." In my comment,
> I meant "Mikhail's ISP's DNS server," not "the systemd-resolved DNS server
> running on 127.0.0.53." Of course it's expected that nslookup has to use
> 127.0.0.53.
>
> > I think the ideal solution to this would be that the VPN gets
> > reconfigured to push the "sbis.ru" domain (or any other domain it
> > wants); since it clearly doesn't want to get all traffic, it shouldn't
> > automatically get all queries as well.
>
> OK, so the VPN has somehow indicated that it does *not* want all traffic?

No, the decision comes statically from the NetworkManager profile. I don't know where Mikhail's VPN profile comes from: whether it was provided by his organization, imported, or configured manually. Since he doesn't have the default route on the VPN, ipv4.never-default must be set to 'yes' (equivalent to checking "Use this connection only for resources on its network"). Either the organization set this flag when shipping the VPN profile, or it was set when the VPN profile was created in NetworkManager.

Mikhail, can you please paste the output of "nmcli -o connection show Tensor" (redacting sensitive information)? Where does this profile come from? Did you create it manually?
(In reply to Beniamino Galvani from comment #73)
> Mikhail, can you please paste the output of "nmcli -o connection show
> Tensor" (redacting sensitive information)? Where does this profile come
> from? Did you create it manually?

$ nmcli -o connection show Tensor
connection.id: Tensor
connection.uuid: 2bbce62f-13cb-4965-b5c7-e91c932b4fe6
connection.type: vpn
connection.autoconnect: no
connection.timestamp: 1600329256
connection.permissions: user:mikhail
ipv4.method: auto
ipv6.method: auto
vpn.service-type: org.freedesktop.NetworkManager.openconnect
vpn.data: authtype = password, autoconnect-flags = 0, certsigs-flags = 0, cookie-flags = 2, enable_csd_trojan = no, gateway = vpn.tensor.ru:501, gateway-flags = 2, gwcert-flags = 2, lasthost-flags = 0, pem_passphrase_fsid = no, prevent_invalid_cert = no, protocol = anyconnect, stoken_source = disabled, xmlconfig-flags = 0
GENERAL.NAME: Tensor
GENERAL.UUID: 2bbce62f-13cb-4965-b5c7-e91c932b4fe6
GENERAL.DEVICES: enp5s0
GENERAL.IP-IFACE: enp5s0
GENERAL.STATE: activated
GENERAL.DEFAULT: no
GENERAL.DEFAULT6: no
GENERAL.SPEC-OBJECT: /org/freedesktop/NetworkManager/ActiveConnection/1
GENERAL.VPN: yes
GENERAL.DBUS-PATH: /org/freedesktop/NetworkManager/ActiveConnection/6
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/Settings/6
GENERAL.ZONE: --
GENERAL.MASTER-PATH: /org/freedesktop/NetworkManager/Devices/2
IP4.ADDRESS[1]: 10.176.132.85/22
IP4.ROUTE[1]: dst = 10.0.0.0/8, nh = 0.0.0.0, mt = 50
IP4.ROUTE[2]: dst = 10.176.132.0/22, nh = 0.0.0.0, mt = 50
IP4.DNS[1]: 10.76.4.153
VPN.TYPE: openconnect
VPN.USERNAME: --
VPN.GATEWAY: vpn.tensor.ru:501
VPN.BANNER: --
VPN.VPN-STATE: 5 - VPN connected
VPN.CFG[1]: authtype = password
VPN.CFG[2]: autoconnect-flags = 0
VPN.CFG[3]: certsigs-flags = 0
VPN.CFG[4]: cookie-flags = 2
VPN.CFG[5]: enable_csd_trojan = no
VPN.CFG[6]: gateway = vpn.tensor.ru:501
VPN.CFG[7]: gateway-flags = 2
VPN.CFG[8]: gwcert-flags = 2
VPN.CFG[9]: lasthost-flags = 0
VPN.CFG[10]: pem_passphrase_fsid = no
VPN.CFG[11]: prevent_invalid_cert = no
VPN.CFG[12]: protocol = anyconnect
VPN.CFG[13]: stoken_source = disabled
VPN.CFG[14]: xmlconfig-flags = 0
> Where does this profile come from? Did you create it manually?

My organization only provides settings for Cisco AnyConnect, and those contain only the gateway.
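For reference, the setting Beniamino mentions can be inspected and toggled from the command line. A sketch using the profile name from this report (the GUI "Use this connection only for resources on its network" checkbox maps to the ipv4.never-default property):

$ # show the static setting for this profile
$ nmcli -g ipv4.never-default connection show Tensor
$ # flip it explicitly, then reconnect the VPN for it to take effect
$ nmcli connection modify Tensor ipv4.never-default yes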
(In reply to Michael Catanzaro from comment #67) > OK, so the VPN has somehow indicated that it does *not* want all traffic? > (How does that happen, is it part of the openconnect protocol?) I didn't > know that was possible; I had (improperly?) assumed that was only configured > client-side, when the user decides whether or not to set the "Use this > connection only for resources on its network" setting. Many VPNs, including OpenVPN, OpenConnect, and IPSec-based VPNs supported by vpnc, accept DNS and routing information from the server at connection time. NetworkManager provides a way for them to pass the received information to a tool which hands it back to NetworkManager over D-Bus. So at runtime you can find that it takes the default route, or not. (In reply to Michael Catanzaro from comment #71) > Thanks, I see you're right. So the VPN is not actually used for most > traffic. I see the argument that it makes sense to not use its DNS either.... Nope, that would be CVE-2018-1000135 again. We must not leak *any* DNS lookups to the local coffee shop wifi when we're on the VPN, unless the VPN is explicitly configured to 'use this connection only for resources on its network'.
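Because the server-pushed values only exist at runtime, they show up on the active connection rather than in the stored profile. A sketch of how one might see what actually arrived, assuming the active VPN profile and interface names used in this report:

$ # runtime values (pushed routes, DNS, default-route status) on the active connection
$ nmcli -f GENERAL,IP4 connection show Tensor
$ # per-link DNS state as systemd-resolved sees it
$ resolvectl status vpn0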
(In reply to David Woodhouse from comment #76)
> Nope, that would be CVE-2018-1000135 again. We must not leak *any* DNS
> lookups to the local coffee shop wifi when we're on the VPN, unless the VPN
> is explicitly configured to 'use this connection only for resources on its
> network'.

But that is exactly what has happened here, right? "Since he doesn't have the default route on the VPN, ipv4.never-default must be set to 'yes' (equivalent to checking "Use this connection only for resources on its network")." The VPN is not being used for routing any public resources, only for 10.0.0.0/8 and 10.176.132.0/22, so split DNS is configured. And there are no search domains configured, so there is no way to know which domains should go to the VPN's DNS server; hence it gets nothing.

Beniamino's argument makes sense to me: it's not a bug, because the VPN is intentionally not configured as the default route, and therefore its DNS should be used only for configured search domains, of which there are none. Of course this is a change from how Fedora worked in the past, but that doesn't make it a bug. If the VPN had the default route, its DNS server would be used by default and there would be no problem. That is how my personal VPN works: it gets everything by default, except traffic that goes to my work VPN. My work VPN doesn't have the default route, and its DNS only gets used for configured search domains.

So Beniamino, I think we're at the point where it's time to close this issue, yes? We should provide Mikhail with instructions for how to configure appropriate search domains for his connection, though.
1) Anyone using Cisco AnyConnect on Windows who migrates to Linux (openconnect) will experience the same problems and return to Windows. Why make life on Linux more difficult?

2) I know that many organizations use the same domain (inside and outside) to access the accounting system. If we access it from the outside, we land in the DMZ, where a stripped-down version is located, which is essentially a gateway to the main system. If we refer to the same domain from the inside, we work with the real database.

3) And finally, the most important argument: if you are connected to a VPN for work, that means your workplace must comply with the company network policy.
If NetworkManager had originally worked the way you want, I would never have been able to work from home. That is, I would have tried the settings my employer gave me, nothing would have worked, and, as a person with no experience administering NetworkManager, I would not have tried anything else; I would have concluded that all this works only with AnyConnect.

I think NM should add ~. even for split-tunnel VPNs if the VPN offers *no* routing domains.
(In reply to Michael Catanzaro from comment #77)
> (In reply to David Woodhouse from comment #76)
> > Nope, that would be CVE-2018-1000135 again. We must not leak *any* DNS
> > lookups to the local coffee shop wifi when we're on the VPN, unless the VPN
> > is explicitly configured to 'use this connection only for resources on its
> > network'.
>
> But that is exactly what has happened here, right? "Since he doesn't have
> the default route on the VPN, ipv4.never-default must be set to 'yes'
> (equivalent to checking "Use this connection only for resources on its
> network")."

No, those aren't equivalent. If I explicitly check the 'Use this connection only for resources on its network' box, then it is permissible to let DNS lookups "leak" to the local hostile wireless network, because I *asked* for that.

If the VPN merely happens to return a set of routing information that only includes certain networks and lets IP traffic to most destinations go through the local Internet connection, that is *different*. NetworkManager should still route DNS lookups through the VPN unless explicitly told otherwise.

(In reply to Mikhail from comment #79)
> I think NM should add ~. even for split tunnel VPNs if the VPN offers *no*
> routing domains.

Careful. One of you spoke about 'search domains', the other about 'routing domains'. They are very different things. Most configurations have only search domains: if I type, for example, just 'intranet' into my web browser, it tries searching for 'intranet.example.com' instead of just telling me that no such top-level domain exists. In Mikhail's case, his search domain might reasonably include 'sbis.ru' if his admins want to encourage him to be lazy. It probably *wouldn't* include 'tensor.ru', or other domains that have split-horizon DNS and present a different view from the inside.

Most VPNs don't have 'routing domains'. And Mikhail was closer to being correct, IMO, when he said that NM should be adding ~. even for split-tunnel VPNs if the VPN offers no *routing* domains.
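In systemd-resolved terms, the distinction is visible in the domain syntax itself: a plain domain acts as a search domain (and also routes queries), while a domain prefixed with ~ is a routing-only domain. A sketch, using hypothetical domains on an assumed vpn0 link:

$ # 'example.com' is a search domain: it completes bare names like 'intranet'
$ # and also routes *.example.com queries to this link's DNS server.
$ # '~internal.example.com' is routing-only: no name completion.
$ # '~.' would route all otherwise-unmatched queries to this link.
$ # note: each resolvectl call replaces the link's domain list, so set them together
$ sudo resolvectl domain vpn0 'example.com' '~internal.example.com'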
Sorry, I incorrectly assumed that Mikhail had explicitly set "ipv4.never-default=yes" because I saw no default route on the VPN. Instead, he has ipv4.never-default=no (the default, which allows a default route), but NM doesn't add the default route because the NM openconnect plugin does this:

  /* Routes */
  val = get_ip4_routes ();
  if (val) {
          g_variant_builder_add (&ip4builder, "{sv}",
                                 NM_VPN_PLUGIN_IP4_CONFIG_ROUTES,
                                 val);
          /* If routes-to-include were provided, that means no default route */
          g_variant_builder_add (&ip4builder, "{sv}",
                                 NM_VPN_PLUGIN_IP4_CONFIG_NEVER_DEFAULT,
                                 g_variant_new_boolean (TRUE));
  }

Basically, if the server pushes any routes, the plugin tells NM not to add the default one. So whether this counts as a full-tunnel VPN is determined based on both static and dynamic information.
OK... so Beniamino, I assume you agree with David's assessment in comment #80? Do you think NetworkManager-openconnect should not be setting NM_VPN_PLUGIN_IP4_CONFIG_NEVER_DEFAULT to TRUE there?
(In reply to David Woodhouse from comment #80) > If I explicitly check the 'Use this connection only for resources in its > network' box, then it is permissible to let DNS lookups "leak" to the local > hostile wireless network. Because I *asked* for that. > > If the VPN merely happens to return a set of routing information which only > includes certain networks and lets IP traffic to most destinations go > through the local Internet connection, that is *different*. NetworkManager > should still route DNS lookups through the VPN unless explicitly told > otherwise. Currently we add the default domain only to full-tunnel VPNs, where the full-tunnel status is determined based on the presence of the default route, which depends on both runtime (pushed routes) and static ("Use this connection..." checkbox) information. If I understand correctly, your proposal is to consider only the static "Use this connection..." property. This sounds good to me, as it gives the user a switch to select where queries should go by default; and the default configuration will be what most users need.
(In reply to Michael Catanzaro from comment #82) > OK... so Beniamino, I assume you agree with David's assessment in comment > #80? Yes. > Do you think NetworkManager-openconnect should not be setting > NM_VPN_PLUGIN_IP4_CONFIG_NEVER_DEFAULT to TRUE there? I think it's right as it is now.
Discussed at 2020-09-21 blocker review meeting: https://meetbot-raw.fedoraproject.org/fedora-blocker-review/2020-09-21/f33-blocker-review.2020-09-21-16.00.html . Rejected as a blocker - as best as we understand things, this has boiled down to a pretty detailed question about what behaviour we should choose in an ambiguous situation, and there isn't even a clear-cut bug exactly, let alone one bad enough to qualify as a blocker. If someone thinks there really is something seriously broken in the current state of affairs, please do re-propose, but with a very clear explanation. Thanks!
To clarify Beniamino's last comment: he thinks NetworkManager-openconnect behavior is good, and that NetworkManager itself should change behavior, not that everything is all right the way it is now.
> If someone thinks there really is something seriously broken in the current state of affairs, please do re-propose, but with a very clear explanation. Thanks!

CVE-2018-1000135 was closed as "does not affect Fedora because Fedora doesn't do split DNS out of the box". Fedora 33 is changing that, and reintroducing the issue described by CVE-2018-1000135.

We should not "leak" DNS requests to the local airport/hotel wifi for *any* domains while on a VPN, unless that VPN explicitly sets a smaller set of routing domains (not search domains) or the "use this connection only for resources on its network" checkbox is explicitly set in the configuration.

As I understand it, Beniamino is proposing that we fix that in NetworkManager.
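A crude but effective way to check for exactly this kind of leak is to watch port 53 on the underlying (non-VPN) interface while resolving names. A sketch, assuming the interface and host names used earlier in this report; any cleartext DNS captured here while the VPN is up would be a leak:

$ # terminal 1: watch for DNS leaving via the local network instead of the VPN
$ sudo tcpdump -ni enp5s0 udp port 53
$ # terminal 2: resolve a name while the VPN is connected
$ getent hosts git.sbis.ru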
(In reply to David Woodhouse from comment #87) > As I understand it, Beniamino is proposing that we fix that in > NetworkManager. Yes, this merge request should fix the behavior: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/631
Thanks Beniamino! I considered proposing a freeze exception for this, but I think it's OK for it to be fixed via a post-beta update. Please just make sure to prepare an update soon so that we don't wind up chasing this for F33 final release.
> https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/631 Eep. That MR mentions search domains. But search domains should not have any bearing here. Maybe routing domains might, but definitely not search domains.
(In reply to Beniamino Galvani from comment #88) > Yes, this merge request should fix the behavior: > > https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/631 See discussion in that MR. But for clarity, I don't believe that does fix things. The design followed in that MR still requires individual VPN plugins to be audited to ensure they have the expected default behaviour and don't introduce the DNS leakage when split DNS becomes default. Which is OK as a design, I suppose, but it does mean we need changes to more than just NetworkManager itself.
I wanted to check the patch, but couldn't apply it on top of NM 1.26.2:

+ cd NetworkManager-1.26.2
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ /usr/bin/cat /builddir/build/SOURCES/0001-nm-fix-generated-xml-docs-syntax.patch
+ /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch
+ /usr/bin/cat /builddir/build/SOURCES/NM-631.patch
+ /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch
1 out of 3 hunks FAILED -- saving rejects to file src/nm-ip6-config.c.rej
error: Bad exit status from /var/tmp/rpm-tmp.D1gpPu (%prep)
    line 169: It's not recommended to have unversioned Obsoletes: Obsoletes: dhcdbd
    line 293: It's not recommended to have unversioned Obsoletes: Obsoletes: NetworkManager-atm
    line 313: It's not recommended to have unversioned Obsoletes: Obsoletes: NetworkManager-bt
Bad exit status from /var/tmp/rpm-tmp.D1gpPu (%prep)

RPM build errors:
Finish: rpmbuild NetworkManager-1.26.2-3.fc34.src.rpm
Finish: build phase for NetworkManager-1.26.2-3.fc34.src.rpm
ERROR: Exception(/home/mikhail/packaging-work/NetworkManager/NetworkManager-1.26.2-3.fc34.src.rpm) Config(fedora-rawhide-x86_64) 0 minutes 58 seconds
So I understand this should be fixed in NetworkManager 1.28 by https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/commit/bba1ab0f21b4114a6ae3d92c536e0803bcf9e4cd. NM 1.28 is a little late for F33, but are there any plans for F33?
I can backport commit "dns: add wildcard domain to VPNs with never-default=no and no domains" to F33. This should be pretty safe. I wouldn't backport the VPN priority change because that can have a bigger impact and needs more testing.
Sounds good. I understand that should be enough to fix this bug.
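Once the backported fix lands, the new behavior should be observable directly: with a VPN that pushes no domains and has never-default=no, resolved should then show the wildcard routing domain on the VPN link. A hedged verification sketch, assuming the vpn0 interface and host name from this report:

$ # after updating NetworkManager and reconnecting the VPN:
$ resolvectl domain vpn0        # should now list '~.'
$ resolvectl query git.sbis.ru  # should be answered via the VPN's DNS server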
FEDORA-2020-1212a896dc has been submitted as an update to Fedora 33. https://bodhi.fedoraproject.org/updates/FEDORA-2020-1212a896dc
OK, Mikhail, hopefully this is the final time we have to ask you to test something... but please test that NetworkManager update! Hopefully that resolves this issue.
FEDORA-2020-1212a896dc has been pushed to the Fedora 33 testing repository. In short time you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-1212a896dc` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-1212a896dc See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
(In reply to Michael Catanzaro from comment #97)
> OK, Mikhail, hopefully this is the final time we have to ask you to test
> something... but please test that NetworkManager update! Hopefully that
> resolves this issue.

With NetworkManager 1.26.4-1.fc33 the problem is gone. I'm also waiting for the fix in Rawhide.
Yay, thanks everyone! This was quite a learning experience, to say the least....
FEDORA-2020-1212a896dc has been pushed to the Fedora 33 stable repository. If the problem still persists, please make note of it in this bug report.
The issue still persists with an openconnect VPN over Juniper. For example, running:

sudo openconnect --user=jr --authgroup=JR-USER-RSA-TOKEN --juniper https://vpn.eng.jr.net/jr --no-proxy --no-http-keepalive --no-dtls

and my network looks like this:

nmcli -o connection show hakunamatata
connection.id: hakunamatata
connection.uuid: 5cb6617a-9352-4dfa-98e4-bc624fac778a
connection.type: 802-11-wireless
connection.interface-name: wlp3s0
connection.timestamp: 1604507590
802-11-wireless.ssid: hakunamatata
802-11-wireless.mode: infrastructure
802-11-wireless.seen-bssids: 50:0F:F5:D3:66:31,50:0F:F5:D3:66:39
802-11-wireless-security.key-mgmt: wpa-psk
802-11-wireless-security.auth-alg: open
802-11-wireless-security.wep-key-flags: 0 (none)
802-11-wireless-security.psk-flags: 0 (none)
802-11-wireless-security.leap-password-flags: 0 (none)
ipv4.method: auto
ipv6.method: auto
GENERAL.NAME: hakunamatata
GENERAL.UUID: 5cb6617a-9352-4dfa-98e4-bc624fac778a
GENERAL.DEVICES: wlp3s0
GENERAL.IP-IFACE: wlp3s0
GENERAL.STATE: activated
GENERAL.DEFAULT: yes
GENERAL.DEFAULT6: no
GENERAL.SPEC-OBJECT: /org/freedesktop/NetworkManager/AccessPoint/10
GENERAL.VPN: no
GENERAL.DBUS-PATH: /org/freedesktop/NetworkManager/ActiveConnection/10
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/Settings/3
GENERAL.ZONE: --
GENERAL.MASTER-PATH: --
IP4.ADDRESS[1]: 192.168.6.171/24
IP4.GATEWAY: 192.168.6.6
IP4.ROUTE[1]: dst = 0.0.0.0/0, nh = 192.168.6.6, mt = 600
IP4.ROUTE[2]: dst = 192.168.6.0/24, nh = 0.0.0.0, mt = 600
IP4.ROUTE[3]: dst = 193.162.145.68/32, nh = 192.168.6.6, mt = 0
IP4.DNS[1]: 192.168.6.6
IP4.DOMAIN[1]: tendawifi.com
DHCP4.OPTION[1]: dhcp_lease_time = 86400
DHCP4.OPTION[2]: domain_name = tendawifi.com
DHCP4.OPTION[3]: domain_name_servers = 192.168.6.6
DHCP4.OPTION[4]: expiry = 1604571523
DHCP4.OPTION[5]: host_name = mk02
DHCP4.OPTION[6]: ip_address = 192.168.6.171
DHCP4.OPTION[7]: next_server = 192.168.6.6
DHCP4.OPTION[8]: requested_broadcast_address = 1
DHCP4.OPTION[9]: requested_domain_name = 1
DHCP4.OPTION[10]: requested_domain_name_servers = 1
DHCP4.OPTION[11]: requested_domain_search = 1
DHCP4.OPTION[12]: requested_host_name = 1
DHCP4.OPTION[13]: requested_interface_mtu = 1
DHCP4.OPTION[14]: requested_ms_classless_static_routes = 1
DHCP4.OPTION[15]: requested_nis_domain = 1
DHCP4.OPTION[16]: requested_nis_servers = 1
DHCP4.OPTION[17]: requested_ntp_servers = 1
DHCP4.OPTION[18]: requested_rfc3442_classless_static_routes = 1
DHCP4.OPTION[19]: requested_root_path = 1
DHCP4.OPTION[20]: requested_routers = 1
DHCP4.OPTION[21]: requested_static_routes = 1
DHCP4.OPTION[22]: requested_subnet_mask = 1
DHCP4.OPTION[23]: requested_time_offset = 1
DHCP4.OPTION[24]: requested_wpad = 1
DHCP4.OPTION[25]: routers = 192.168.6.6
DHCP4.OPTION[26]: subnet_mask = 255.255.255.0
IP6.ADDRESS[1]: fe80::b5b0:544c:c48e:d017/64
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 600
IP6.ROUTE[2]: dst = ff00::/8, nh = ::, mt = 256, table=255

My resolver does get updated:

➜ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search dk.eng.jr.net tendawifi.com

but no connection is possible when trying to reach internal networks; resolution fails with:

ssh: Could not resolve hostname p-login.interal-server.idk: Name or service not known
Hi, your issue cannot be related to this bug, because you are running openconnect manually instead of via NetworkManager. This bug was a NetworkManager bug. Feel free to report a bug against openconnect itself, so we can investigate your issue separately.
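For completeness: when openconnect is run by hand as above, applying the server-provided DNS settings is the job of the vpnc-script it invokes, not NetworkManager, and a script that doesn't integrate with systemd-resolved will leave the tunnel's DNS unconfigured. As a rough workaround sketch (assuming the tunnel interface is tun0 and using the search domain shown above; replace 10.0.0.1 with whatever DNS server the VPN actually pushed), the pushed DNS can be wired into resolved manually:

$ # make resolved send queries for the internal domain to the VPN's DNS server
$ sudo resolvectl dns tun0 10.0.0.1
$ sudo resolvectl domain tun0 '~dk.eng.jr.net'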