The semantics of NewDualStack() (sometimes it returns an error that is
really just a warning) are too confusing, and it turns out that we only
need it in one place (platformCheckSupported()); after that, we have
already figured out which IP families are supported, so we can just use
utiliptables.NewBestEffort() instead, knowing we want exactly what it
returns.
So we can expand the semantics of the old NewDualStack() inline in the
one place we care about, without hiding any of it behind a
too-complicated return value.
For kube-proxy, node addition and node update are semantically the
same event: the handler logic for the two events was identical,
resulting in duplicate code and unit tests.
This merges the `NodeHandler` interface methods OnNodeAdd and
OnNodeUpdate into OnNodeChange, along with the implementations of the
interface.
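The merged interface looks roughly like this (a sketch; the real
definition lives in pkg/proxy/config and its exact signatures may
differ):

```go
package config // sketch of the merged interface shape

import v1 "k8s.io/api/core/v1"

// NodeHandler after the merge: OnNodeChange replaces the former
// OnNodeAdd/OnNodeUpdate pair, since both were handled identically.
type NodeHandler interface {
	// OnNodeChange is called whenever the node is created or any of
	// its fields change.
	OnNodeChange(node *v1.Node)
	// OnNodeDelete is called whenever the node is deleted.
	OnNodeDelete(node *v1.Node)
	// OnNodeSynced is called once all initial events have been handled.
	OnNodeSynced()
}
```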
Signed-off-by: Daman Arora <aroradaman@gmail.com>
ProxyHealthServer now consumes NodeManager to get the latest node
object when determining node eligibility.
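A minimal sketch of the new dependency direction, with type and method
names assumed for illustration:

```go
package healthcheck // illustrative sketch, not the actual kube-proxy code

import v1 "k8s.io/api/core/v1"

// nodeGetter is the small slice of NodeManager the health server needs.
type nodeGetter interface {
	Node() *v1.Node // most recently observed Node object
}

type ProxyHealthServer struct {
	nodeManager nodeGetter
	// other fields elided
}

// NodeEligible asks the NodeManager for the latest Node instead of
// relying on a copy cached by the health server itself. The
// eligibility condition shown here is simplified.
func (hs *ProxyHealthServer) NodeEligible() bool {
	node := hs.nodeManager.Node()
	return node != nil && node.DeletionTimestamp == nil
}
```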
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Dan Winship <danwinship@redhat.com>
NodeManager, if configured to watch PodCIDRs, watches for changes to
the node's PodCIDRs and crashes kube-proxy if a change is detected.
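Sketched with assumed field and method names, the guard amounts to
comparing the PodCIDRs recorded at startup with the ones seen on each
node update and exiting on any difference:

```go
package node // sketch only; names are assumptions

import (
	"reflect"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

type NodeManager struct {
	watchPodCIDRs bool
	podCIDRs      []string // PodCIDRs recorded when kube-proxy started
}

// checkPodCIDRs crashes kube-proxy if the node's PodCIDRs no longer
// match the ones it started with, so it restarts with the new config.
func (n *NodeManager) checkPodCIDRs(node *v1.Node) {
	if !n.watchPodCIDRs {
		return
	}
	if !reflect.DeepEqual(n.podCIDRs, node.Spec.PodCIDRs) {
		klog.Fatalf("Node %q PodCIDRs changed from %v to %v; exiting",
			node.Name, n.podCIDRs, node.Spec.PodCIDRs)
	}
}
```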
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Dan Winship <danwinship@redhat.com>
NodeManager initialises the node informer, waits for cache sync, and
polls for the node object to retrieve NodeIPs; it handles node events
and crashes kube-proxy when a change in NodeIPs is detected.
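A sketch of the startup sequence (helper names, interval, and timeout
are assumptions): wait for the informer cache to sync, then poll until
this node's object is available so the initial NodeIPs can be read
before the proxier starts:

```go
package node // sketch only

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/util/wait"
	v1listers "k8s.io/client-go/listers/core/v1"
	"k8s.io/client-go/tools/cache"
)

type NodeManager struct {
	nodeName     string
	nodeInformer cache.SharedIndexInformer
	nodeLister   v1listers.NodeLister
}

// waitForNode waits for the informer cache to sync, then polls until
// the Node object for this host exists.
func (n *NodeManager) waitForNode(ctx context.Context) (*v1.Node, error) {
	if !cache.WaitForCacheSync(ctx.Done(), n.nodeInformer.HasSynced) {
		return nil, fmt.Errorf("node informer cache failed to sync")
	}
	var node *v1.Node
	err := wait.PollUntilContextTimeout(ctx, time.Second, 30*time.Second, true,
		func(_ context.Context) (bool, error) {
			var getErr error
			node, getErr = n.nodeLister.Get(n.nodeName)
			if apierrors.IsNotFound(getErr) {
				return false, nil // keep polling until the Node exists
			}
			return getErr == nil, getErr
		})
	return node, err
}
```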
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Dan Winship <danwinship@redhat.com>
This simplifies how the proxier receives updates for changes in node
labels. Instead of passing the complete Node object, we pass only the
proxy-relevant topology labels extracted from the full label set, and
the downstream event handlers are notified only when the topology
labels change.
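For illustration (the label set and helper names are assumptions), the
extraction and change check are roughly:

```go
package node // sketch only

import (
	"maps"

	v1 "k8s.io/api/core/v1"
)

// topologyLabels extracts only the labels kube-proxy's topology logic
// cares about; the exact set shown here is illustrative.
func topologyLabels(node *v1.Node) map[string]string {
	out := map[string]string{}
	for _, key := range []string{
		v1.LabelHostname,
		v1.LabelTopologyZone,
		v1.LabelTopologyRegion,
	} {
		if value, ok := node.Labels[key]; ok {
			out[key] = value
		}
	}
	return out
}

// Downstream handlers are only notified when the extracted labels change.
func topologyLabelsChanged(old, cur map[string]string) bool {
	return !maps.Equal(old, cur)
}
```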
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Rather than having a RetryAfter function, retry (at a fixed interval)
if the work function returns an error.
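The behaviour, roughly (a sketch of the idea rather than the actual
runner code):

```go
package runner // sketch only

import "time"

// runWithRetry retries fn after a fixed interval on error, rather than
// letting fn request its own delay via a RetryAfter-style return value.
func runWithRetry(stopCh <-chan struct{}, retryInterval time.Duration, fn func() error) {
	for {
		if err := fn(); err == nil {
			return
		}
		select {
		case <-stopCh:
			return
		case <-time.After(retryInterval):
			// fall through and retry fn
		}
	}
}
```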
Co-authored-by: Antonio Ojea <aojea@google.com>
Burst syncs are theoretically useful for dealing with a single change
that results in multiple Run() calls (e.g., a Service and an
EndpointSlice both changing), but a burst of 2 isn't enough to cover
all cases, and a better way of dealing with this problem is simply to
use a smaller minSyncPeriod.
Co-authored-by: Antonio Ojea <aojea@google.com>
- Use structured logging.
- Use t.Helper() in unit tests.
- Improve some comments.
- Remove an unnecessary check/panic.
Co-authored-by: Antonio Ojea <aojea@google.com>
With the filter-output chain already operating at post-DNAT priority,
we can merge the two chains together.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
With this commit the filter-input, filter-forward, and filter-output
base chains are hooked with priority 0. For filtering before DNAT, the
filter-prerouting-pre-dnat and filter-output-pre-dnat chains should be
used, which hook at a priority lower than DNAT (-110).
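Sketched with knftables-style chain declarations (chain names and
priority expressions here are assumptions; dstnat is priority -100, so
"dstnat-10" corresponds to -110):

```go
package nftables // sketch only; chain names are illustrative

import "sigs.k8s.io/knftables"

// setupFilterChains sketches the base chain layout described above.
func setupFilterChains(tx *knftables.Transaction) {
	// filter-input / filter-forward / filter-output hook at the
	// standard filter priority (0), i.e. after DNAT.
	tx.Add(&knftables.Chain{
		Name:     "filter-input",
		Type:     knftables.PtrTo(knftables.FilterType),
		Hook:     knftables.PtrTo(knftables.InputHook),
		Priority: knftables.PtrTo(knftables.FilterPriority),
	})
	// Pre-DNAT filtering uses dedicated chains hooked before DNAT (-110).
	tx.Add(&knftables.Chain{
		Name:     "filter-prerouting-pre-dnat",
		Type:     knftables.PtrTo(knftables.FilterType),
		Hook:     knftables.PtrTo(knftables.PreroutingHook),
		Priority: knftables.PtrTo(knftables.DNATPriority + "-10"),
	})
	tx.Add(&knftables.Chain{
		Name:     "filter-output-pre-dnat",
		Type:     knftables.PtrTo(knftables.FilterType),
		Hook:     knftables.PtrTo(knftables.OutputHook),
		Priority: knftables.PtrTo(knftables.DNATPriority + "-10"),
	})
}
```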
Signed-off-by: Daman Arora <aroradaman@gmail.com>
With this commit, the conntrack reconciler clears stale entries when
endpoints change port without changing IP.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
A packet can traverse the service-xxxx chains by matching on either
the service-ips or the service-nodeports verdict map. We masquerade
off-cluster traffic to ClusterIPs (when masqueradeAll = false) by
adding a rule in service-xxxx which checks that the destination IP is
the ClusterIP, that the port and protocol match the service spec, and
that the source IP does not belong to the PodCIDR, masquerading on a
match.
If the packet reaches the service chain by matching on the service-ips
map, then the ClusterIP, port, and protocol already match the service
spec. If it comes via the external-xxxx chain, then the destination IP
will never be the ClusterIP. Therefore, we can simplify the check for
masquerading off-cluster traffic to ClusterIPs by matching only on
destination IP and source IP.
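A sketch of the simplified per-service rule (chain, set, and helper
names are assumptions): only destination IP and source IP need to be
checked once the packet has been dispatched via the service-ips map:

```go
package nftables // sketch only; identifiers are illustrative

import "sigs.k8s.io/knftables"

// addClusterIPMasqRule adds the simplified off-cluster masquerade rule
// to a per-service chain: if the destination is the ClusterIP and the
// source is not in the PodCIDR, mark the packet for masquerading. Port
// and protocol need not be re-checked here, because the packet only
// reaches this chain via the service-ips verdict map, which already
// matched them.
func addClusterIPMasqRule(tx *knftables.Transaction, serviceChain, clusterIP, podCIDR string) {
	tx.Add(&knftables.Rule{
		Chain: serviceChain,
		Rule: knftables.Concat(
			"ip daddr", clusterIP,
			"ip saddr !=", podCIDR,
			"jump", "mark-for-masquerade",
		),
	})
}
```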
Signed-off-by: Daman Arora <aroradaman@gmail.com>