OAuth2-Proxy supports multiple OAuth Providers. This section describes how to configure this extension to use KubeSphere as the OAuth Provider, allowing direct access to various services after authenticating and logging in via KubeSphere.

Configure OAuth2-Proxy

The OAuth2-Proxy extension provides two methods, NodePort and Ingress, for adding unified KubeSphere user-based authentication to applications. The configuration of OAuth2-Proxy differs between the two methods.

NodePort Method

This method exposes the extension's OpenResty service externally through a NodePort, providing a unified access entry point for the proxied applications.

In the extension configuration, modify global.host, confirm openresty.service.nodePort, and complete the deployment of the extension.

global:
  # OAuth2-Proxy service external access address
  # For example, using NodePort, the address is http://172.31.19.4:32080,
  # using Ingress, the host is http://172.31.19.4.nip.io:80
  host: "http://<oauth2-proxy-service-external-access-address>"

  # KubeSphere portal address. For example, http://172.31.19.4:30880
  # There is no need to set this explicitly; the KubeSphere portal address is injected automatically.
  portal.url: "http://<kubesphere-console-address>"

openresty:
  enabled: true

  service:
    type: NodePort
    portNumber: 80
    nodePort: 32080
    annotations: {}

oauth2-proxy:
  extraArgs:
    provider: oidc
    provider-display-name: "kubesphere"
    # Issuer address
    # The KubeSphere portal URL is used by default; change it if you use another OAuth Provider
    oidc-issuer-url: "{{ .Values.global.portal.url }}"

Ingress Method

OAuth2-Proxy supports configuring unified authentication for applications via Ingress. In this scenario, Ingress replaces OpenResty to provide a unified service entry point and reverse proxy functionality.

  1. In the extension configuration, change openresty.enabled to false, ingress.enabled to true, modify global.host, and then complete the deployment of the extension.

    global:
      # OAuth2-Proxy service external access address
      # For example, using NodePort, the address is http://172.31.19.4:32080,
      # using Ingress, the host is http://172.31.19.4.nip.io:80
      host: "http://<oauth2-proxy-service-external-access-address>"
    
      # KubeSphere portal address. For example, http://172.31.19.4:30880
      # There is no need to set this explicitly; the KubeSphere portal address is injected automatically.
      portal.url: "http://<kubesphere-console-address>"
    
    openresty:
      enabled: false
    
    oauth2-proxy:
      extraArgs:
        provider: oidc
        provider-display-name: "kubesphere"
        # Issuer address
        # The KubeSphere portal URL is used by default; change it if you use another OAuth Provider
        oidc-issuer-url: "{{ .Values.global.portal.url }}"
    
      ingress:
        enabled: true
        className: nginx
  2. Add the relevant annotations to the Ingress of the application. Refer to the ingress-nginx External OAUTH Authentication example.

    ...
    metadata:
      name: application
      annotations:
        nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
        nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    ...
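For context, the annotation fragment above can be placed into a complete Ingress for an application. The sketch below is hypothetical: the application name, Service name, port, namespace, and host are placeholders to be replaced with your own values, while the two auth annotations are the ones shown above.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application              # placeholder application name
  namespace: default             # placeholder namespace
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  ingressClassName: nginx
  rules:
  - host: 172.31.19.4.nip.io     # Replace with the actual address
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: application-svc   # placeholder Service name
            port:
              number: 80            # placeholder Service port
```

With these annotations, ingress-nginx asks OAuth2-Proxy to validate each request before forwarding it, and redirects unauthenticated users to the sign-in endpoint.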

Notes

If you are using KubeSphere 4.x as the OAuth Provider, ensure that the external access address of the KubeSphere Console matches issuer.url in the ConfigMap kubesphere-config. If they do not match, update it using the following steps.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubesphere-config
  namespace: kubesphere-system
data:
  kubesphere.yaml: |
    authentication:
      issuer:
        url: "http://172.31.19.4:30880"     # Confirm the issuer address
  1. Copy the ks-core values.yaml file and create a new file named custom-kscore-values.yaml.

    cp ks-core/values.yaml custom-kscore-values.yaml
  2. Modify portal.hostname to configure the actual address.

    portal:
      ## The IP address or hostname to access ks-console service.
      ## DO NOT use IP address if ingress is enabled.
      hostname: "172.31.19.4"
      http:
        port: 30880
  3. Update ks-core.

    helm upgrade --install -n kubesphere-system --create-namespace ks-core ${kscore_chart_path} -f ./custom-kscore-values.yaml --debug --wait

Example 1: Access AlertManager Service via NodePort

  1. In the extension configuration, modify global.host and confirm openresty.service.nodePort.

  2. Then modify the openresty.configs configuration as follows.

    openresty:
      configs:
        - name: alertmanager
          description: KubeSphere Monitoring Stack Internal Alertmanager Endpoint
          subPath: /alertmanager/
          endpoint: http://whizard-notification-alertmanager.kubesphere-monitoring-system.svc:9093/
  3. After the configuration is complete, open the external address of OAuth2-Proxy, such as http://172.31.19.4:32080. After authenticating and logging in via KubeSphere, the homepage shows an entry for the Alertmanager service; click it to access the service.
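Additional services can be proxied through the same entry point by appending entries to openresty.configs. The sketch below adds an assumed Prometheus endpoint alongside the Alertmanager one; the second entry's Service name and port are assumptions and should be verified in your cluster.

```yaml
openresty:
  configs:
    - name: alertmanager
      description: KubeSphere Monitoring Stack Internal Alertmanager Endpoint
      subPath: /alertmanager/
      endpoint: http://whizard-notification-alertmanager.kubesphere-monitoring-system.svc:9093/
    - name: prometheus     # hypothetical second entry
      description: Prometheus Endpoint
      subPath: /prometheus/
      endpoint: http://prometheus-k8s.kubesphere-monitoring-system.svc:9090/  # verify the Service name and port
```

Each entry appears on the OAuth2-Proxy homepage after login, served under its subPath.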

Example 2: Access AlertManager Service via Ingress

  1. In the extension configuration, change openresty.enabled to false, ingress.enabled to true, and modify global.host.

  2. Install the ingress-nginx controller.

    helm upgrade --install ingress-nginx ingress-nginx \
      --repo https://kubernetes.github.io/ingress-nginx \
      --namespace ingress-nginx --create-namespace
  3. Modify the Deployment named ingress-nginx-controller to set how the ingress controller is exposed externally; here it is exposed via the host network. Note that nodeName and hostNetwork are fields of the Pod template spec.

    spec:
      template:
        spec:
          nodeName: <node-name>  # Replace with the actual node name
          hostNetwork: true
  4. Create the alertmanager custom resource, service, and ingress.

    vim alertmanager.yaml
    apiVersion: monitoring.coreos.com/v1
    kind: Alertmanager
    metadata:
      name: main
      namespace: extension-oauth2-proxy
    spec:
      externalUrl: http://172.31.19.4.nip.io/alertmanager # Replace with the actual address
      portName: web
      replicas: 1
      resources:
        requests:
          memory: 400Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: alertmanager-main
      namespace: extension-oauth2-proxy
    spec:
      type: ClusterIP
      ports:
      - name: web
        port: 9093
        protocol: TCP
        targetPort: web
      selector:
        alertmanager: main
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
        nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
        nginx.ingress.kubernetes.io/rewrite-target: /$2
      name: alertmanager-ingress
      namespace: extension-oauth2-proxy
    spec:
      ingressClassName: nginx
      rules:
      - host: 172.31.19.4.nip.io  # Replace with the actual address
        http:
          paths:
          - backend:      # Application configuration part
              service:
                name: alertmanager-main
                port:
                  number: 9093
            path: /alertmanager(/|$)(.*)
            pathType: ImplementationSpecific
  5. Deploy the Alertmanager service.

    kubectl apply -f alertmanager.yaml
  6. In a browser, open <node-ip>.nip.io/alertmanager, for example 172.31.19.4.nip.io/alertmanager, to reach the Alertmanager user interface.