Configure and Access Services
OAuth2-Proxy supports multiple OAuth Providers. This section describes how to configure the extension to use KubeSphere as the OAuth Provider, so that users can access the proxied services directly after authenticating with KubeSphere.
Configure OAuth2-Proxy
The OAuth2-Proxy extension provides two methods, NodePort and Ingress, to offer unified KubeSphere user-based authentication for applications. The OAuth2-Proxy configuration differs between the two methods.
NodePort Method
This method exposes the extension's OpenResty service externally via NodePort, providing a unified access entry point for the proxied applications.
In the extension configuration, modify `global.host`, confirm `openresty.service.nodePort`, and complete the deployment of the extension.
```yaml
global:
  # OAuth2-Proxy service external access address
  # For example, using NodePort, the address is http://172.31.19.4:32080;
  # using Ingress, the host is http://172.31.19.4.nip.io:80
  host: "http://<oauth2-proxy-service-external-access-address>"
  # KubeSphere portal address. For example, http://172.31.19.4:30880
  # No need to set this explicitly, KubeSphere's portal address will be auto-injected.
  portal.url: "http://<kubesphere-console-address>"
openresty:
  enabled: true
  service:
    type: NodePort
    portNumber: 80
    nodePort: 32080
    annotations: {}
oauth2-proxy:
  extraArgs:
    provider: oidc
    provider-display-name: "kubesphere"
    # Issuer address
    # The KubeSphere portal URL is filled in by default; if you use another OAuth Provider, change it
    oidc-issuer-url: "{{ .Values.global.portal.url }}"
```
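After the extension is deployed, you can quickly confirm that the OpenResty service is exposed on the expected NodePort. The command below is only a sketch; it assumes the extension is installed in the extension-oauth2-proxy namespace, which is the namespace used in the examples later on this page.

```bash
# List the extension's services and check that the NodePort (32080 in this example) is exposed
kubectl -n extension-oauth2-proxy get svc -o wide
```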
Ingress Method
OAuth2-Proxy supports configuring unified authentication for applications via Ingress. In this scenario, Ingress replaces OpenResty to provide a unified service entry point and reverse proxy functionality.
- In the extension configuration, change `openresty.enabled` to false, `ingress.enabled` to true, modify `global.host`, and then complete the deployment of the extension.

  ```yaml
  global:
    # OAuth2-Proxy service external access address
    # For example, using NodePort, the address is http://172.31.19.4:32080;
    # using Ingress, the host is http://172.31.19.4.nip.io:80
    host: "http://<oauth2-proxy-service-external-access-address>"
    # KubeSphere portal address. For example, http://172.31.19.4:30880
    # No need to set this explicitly, KubeSphere's portal address will be auto-injected.
    portal.url: "http://<kubesphere-console-address>"
  openresty:
    enabled: false
  oauth2-proxy:
    extraArgs:
      provider: oidc
      provider-display-name: "kubesphere"
      # Issuer address
      # The KubeSphere portal URL is filled in by default; if you use another OAuth Provider, change it
      oidc-issuer-url: "{{ .Values.global.portal.url }}"
  ingress:
    enabled: true
    className: nginx
  ```
- Add relevant annotations to the Ingress of the application. Please refer to the ingress-nginx External OAUTH Authentication example.

  ```yaml
  ...
  metadata:
    name: application
    annotations:
      nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
      nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
  ...
  ```
Notes
If you are using KubeSphere 4.x as the OAuth Provider, ensure that the external access address of the KubeSphere Console matches `issuer.url` in the `kubesphere-config` ConfigMap. If they do not match, update it according to the following steps.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubesphere-config
  namespace: kubesphere-system
data:
  kubesphere.yaml: |
    authentication:
      issuer:
        url: "http://172.31.19.4:30880" # Confirm the issuer address
```
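You can read the value currently in use directly from the ConfigMap before changing anything; the command below is only an illustrative check.

```bash
# Print the issuer block from the kubesphere-config ConfigMap
kubectl -n kubesphere-system get configmap kubesphere-config -o yaml | grep -A 2 "issuer:"
```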
- Copy the ks-core `values.yaml` file and create a new file named `custom-kscore-values.yaml`.

  ```bash
  cp ks-core/values.yaml custom-kscore-values.yaml
  ```
- Modify `portal.hostname` to configure the actual address.

  ```yaml
  portal:
    ## The IP address or hostname to access ks-console service.
    ## DO NOT use IP address if ingress is enabled.
    hostname: "172.31.19.4"
    http:
      port: 30880
  ```
- Update ks-core.

  ```bash
  helm upgrade --install -n kubesphere-system --create-namespace ks-core ${kscore_chart_path} -f ./custom-kscore-values.yaml --debug --wait
  ```
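After the upgrade completes, you can confirm that the KubeSphere components in kubesphere-system are running again before continuing; this is just a simple sanity check.

```bash
# Confirm the KubeSphere components have restarted successfully after the upgrade
kubectl -n kubesphere-system get pods
```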
Example 1: Access Alertmanager Service via NodePort
- In the extension configuration, modify `global.host` and confirm `openresty.service.nodePort`.
- Then modify the `openresty.configs` configuration as follows.

  ```yaml
  openresty:
    configs:
      - name: alertmanager
        description: KubeSphere Monitoring Stack Internal Alertmanager Endpoint
        subPath: /alertmanager/
        endpoint: http://whizard-notification-alertmanager.kubesphere-monitoring-system.svc:9093/
  ```
- After the configuration is complete, access the external address of OAuth2-Proxy, such as http://172.31.19.4:32080. After authenticating and logging in via KubeSphere, you will see the entry for the Alertmanager service on the homepage; click it to access the service.
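If you want to verify the setup from the command line first, an unauthenticated request to the proxied path should typically be redirected to the sign-in flow rather than served directly. The address below is just the example NodePort endpoint used on this page.

```bash
# Expect a redirect (for example HTTP 302) instead of the Alertmanager UI when not logged in
curl -I http://172.31.19.4:32080/alertmanager/
```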
Example 2: Access Alertmanager Service via Ingress
- In the extension configuration, change `openresty.enabled` to false, `ingress.enabled` to true, and modify `global.host`.
- Install the ingress-nginx controller.

  ```bash
  helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace
  ```
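  Before moving on, you can check that the controller pod is running; this is just a quick sanity check.

  ```bash
  # The ingress-nginx controller pod should reach the Running state
  kubectl -n ingress-nginx get pods
  ```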
- Modify the deployment named ingress-nginx-controller to set the external access method for the ingress controller; in this example it is exposed via the host network.

  ```yaml
  spec:
    nodeName: <node-name> # Replace with the actual node name
    hostNetwork: true
  ```
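  One way to apply this change is to patch the pod template directly; the command below is a sketch, and you can equally run `kubectl -n ingress-nginx edit deployment ingress-nginx-controller` and edit the fields by hand.

  ```bash
  # Pin the controller to a node and switch it to the host network
  kubectl -n ingress-nginx patch deployment ingress-nginx-controller --type merge \
    -p '{"spec":{"template":{"spec":{"nodeName":"<node-name>","hostNetwork":true}}}}'
  ```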
- Create the Alertmanager custom resource, service, and ingress.

  ```bash
  vim alertmanager.yaml
  ```

  ```yaml
  apiVersion: monitoring.coreos.com/v1
  kind: Alertmanager
  metadata:
    name: main
    namespace: extension-oauth2-proxy
  spec:
    externalUrl: http://172.31.19.4.nip.io/alertmanager # Replace with the actual address
    portName: web
    replicas: 1
    resources:
      requests:
        memory: 400Mi
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: alertmanager-main
    namespace: extension-oauth2-proxy
  spec:
    type: ClusterIP
    ports:
      - name: web
        port: 9093
        protocol: TCP
        targetPort: web
    selector:
      alertmanager: main
  ---
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
      nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
      nginx.ingress.kubernetes.io/rewrite-target: /$2
    name: alertmanager-ingress
    namespace: extension-oauth2-proxy
  spec:
    ingressClassName: nginx
    rules:
      - host: 172.31.19.4.nip.io # Replace with the actual address
        http:
          paths:
            - backend: # Application configuration part
                service:
                  name: alertmanager-main
                  port:
                    number: 9093
              path: /alertmanager(/|$)(.*)
              pathType: ImplementationSpecific
  ```
- Deploy the Alertmanager service.

  ```bash
  kubectl apply -f alertmanager.yaml
  ```
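  You can then confirm that the resources were created and that the Alertmanager pod is running; listing the Alertmanager custom resource assumes the Prometheus Operator CRDs are present, which they must be for the manifest above to apply.

  ```bash
  # Check the custom resource, service, ingress, and pod in the extension namespace
  kubectl -n extension-oauth2-proxy get alertmanager,service,ingress,pods
  ```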
- In a browser, open `<node-ip>.nip.io/alertmanager`, such as 172.31.19.4.nip.io/alertmanager, to access the Alertmanager user interface.