Secure Container Deployment with Harbor

André Hoffmann
17 min read

This time, I would like to delve into a discussion about Harbor, which is a particular container registry within the Cloud Native Computing Foundation (CNCF) landscape. Harbor is an open-source tool that plays a crucial role in managing and securing container images. It provides a centralized location to store, manage, and serve container images, making it an essential tool for developers and organizations that rely on containerized applications. Harbor offers a range of features, including vulnerability scanning, role-based access control, and image replication, which enhance security and efficiency in handling container images. By integrating seamlessly with existing CI/CD pipelines, Harbor helps streamline the development and deployment processes, ensuring that container images are both secure and readily available for use across various environments.

Before I dive into Harbor, it's important to first explain the concept of container images, as they are fundamental to understanding how Harbor operates. A container image is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, such as the code, runtime, libraries, environment variables, and configuration files. These images are used to create containers, which are isolated environments where applications can run consistently across different computing environments. By using container images, developers can ensure that their applications behave the same way, regardless of where they are deployed, whether it's on a developer's laptop, in a test environment, or in a production data center. This consistency is crucial for maintaining reliability and efficiency in software development and deployment processes. Understanding container images is essential for appreciating the role of Harbor in managing these images effectively.

For example, when a developer is preparing to deploy an application into a Kubernetes cluster, they must first create a Dockerfile. This Dockerfile serves as a blueprint that outlines all the necessary steps to build a Docker image. It specifies the base image to use, includes the application code, and defines any dependencies, libraries, and environment variables required for the application to run. Once the Dockerfile is ready, the developer uses it to build a Docker image, which is a packaged version of the application. This image can then be pushed to a container registry, such as Harbor, where it is stored and managed. From there, the image can be pulled and deployed into the Kubernetes cluster, ensuring that the application runs smoothly and consistently in the desired environment. This process is integral to modern DevOps practices, allowing for efficient and reliable application deployment.

Deploy Image Registry Harbor

As in my other articles, I used helm to deploy Harbor. The Helm chart for Harbor offers a wide array of configurations, allowing you to tailor the setup according to your specific infrastructure needs. These configurations can vary greatly depending on factors such as your cloud provider settings, network policies, and storage preferences.

First, I added the Helm repository and exported the default configuration into a values file.

helm repo add harbor https://helm.goharbor.io
helm repo update
helm show values harbor/harbor > harbor-values.yaml

In the harbor-values.yaml file, I configured various components. Since I intended to use ingress, I modified the expose type to ingress and set the ingress class to Traefik instead of nginx. I also added my hosts to the ingress configuration. Additionally, to ensure data encryption during image push or pull operations, I enabled TLS and configured it to be auto-generated during deployment. It is essential that the externalURL aligns with the ingress configuration; otherwise, connections will default to https://core.harbor.domain. Furthermore, to facilitate trace analysis with Jaeger, I provided all necessary information about my Jaeger collector to the chart.

expose:
  type: ingress
  tls:
    enabled: true
    certSource: auto
    auto:
      commonName: ""
    secret:
      secretName: ""
  ingress:
    hosts:
      core: hometown
    controller: default
    kubeVersionOverride: ""
    className: "traefik"
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
    labels: {}

externalURL: https://hometown

trace:
  enabled: true
  provider: jaeger
  sample_rate: 1
  namespace: jaeger
  attributes:
    application: harbor
  jaeger:
    endpoint: http://hometown:14268/api/traces
  otel:
    endpoint: hostname:4318
    url_path: /v1/traces
    compression: false
    insecure: true
    timeout: 10

After all these configurations, I deployed Harbor:

helm install harbor harbor/harbor -n harbor --create-namespace -f harbor-values.yaml

Now I could reach Harbor under https://hometown. With the default credentials I could log in for the first time:

User: admin

Password: Harbor12345
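These are the well-known chart defaults, so the password should not stay in use. The initial admin password can be set in harbor-values.yaml before deploying; this is the chart's harborAdminPassword value, which only takes effect on the first deployment (afterwards the password is changed in the UI):

```yaml
# harbor-values.yaml (excerpt) - set a non-default initial admin password
harborAdminPassword: "ChangeMe-NotHarbor12345"
```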

First image for Harbor

As soon as Harbor was available, I logged in and created a new project by clicking 'New Project' in the Projects menu. I named the new project 'secondtry' and gave it no public access, so users have to authenticate to pull or push images. Additionally, I did not want any quota limit, so I set it to -1 (unlimited space).

Next, I wanted to create a user and add them as a member of the project 'secondtry'. In the Administration menu I created a user named 'andre' and set a password. After that I added 'andre' to the project as a developer.

Then I could create an image that I wanted to upload to my Harbor registry. For that I created a Dockerfile with a simple command.

FROM alpine:latest
CMD echo "Hello, World!"

With the following command I built a Docker image from the Dockerfile:

docker build -t helloworld .

Now I had to tag my image with the host of my registry. Harbor helps here by showing the exact commands in the upper right corner of the project menu. I tagged my image with the following pattern:

docker tag SOURCE_IMAGE[:TAG] hometown/secondtry/REPOSITORY[:TAG]

💡
Important: Because my Harbor URL is https://hometown, I cannot tag my image exactly as Harbor recommends. If I use the command docker tag helloworld:latest hometown/secondtry/helloworld:0.1, my image gets tagged as docker.io/hometown/secondtry/helloworld:0.1, because Docker does not recognize hometown as my registry: it treats it as a namespace and automatically prepends docker.io to the tag, so the image would be pushed to Docker Hub instead of my Harbor!

I had to add the port number (443, because of TLS) of my Harbor instance to the image tag so that Docker would not prepend docker.io.

docker tag helloworld:latest hometown:443/secondtry/helloworld:0.1
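Docker's behavior here follows a simple rule: the first path component of an image reference is treated as a registry host only if it contains a dot or a colon (or is exactly localhost); otherwise it is read as a Docker Hub namespace. A small shell function of my own (an illustration, not Docker's actual code) sketches this:

```shell
# Sketch of how Docker decides whether the first path component
# of an image reference names a registry host.
registry_of() {
  local first="${1%%/*}"
  case "$first" in
    *.*|*:*|localhost) echo "$first" ;;    # looks like a host -> use it
    *)                 echo "docker.io" ;; # plain name -> Docker Hub namespace
  esac
}

registry_of "hometown/secondtry/helloworld:0.1"     # prints: docker.io
registry_of "hometown:443/secondtry/helloworld:0.1" # prints: hometown:443
```

This is why adding the port turns hometown:443 into a valid registry reference.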

Because TLS encryption is used, I needed to install the CA certificate on my client PC. For this, the ca.crt certificate is needed. It can be downloaded in the project menu next to the push commands ('Registry Certificate'), but it can also be retrieved via kubectl.

kubectl get secret harbor-ingress -n harbor -o jsonpath='{.data}'

The value of ca.crt is base64-encoded, so I had to decode it with the following command.

echo 'YW5kcmUtY2EtY2VydGlmaWNhdGUK' | base64 -d
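Both steps can also be combined into a one-liner (note the escaped dot in the jsonpath key; the cluster command is shown as a comment since it needs cluster access). To be precise, Secret values are base64-encoded, not encrypted, so base64 -d fully recovers them:

```shell
# Combined extraction (run against the cluster):
#   kubectl get secret harbor-ingress -n harbor \
#     -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

# Demonstration of the encoding round-trip with a dummy value:
encoded=$(printf 'not-a-real-ca-cert' | base64)
printf '%s' "$encoded" | base64 -d   # prints: not-a-real-ca-cert
```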

The decoded CA certificate could now be added to my certificate store under /etc/docker/certs.d/hometown/ca.crt. Then, I restarted Docker.

systemctl restart docker

I also added the ca.crt to the local CA certificates with the following commands:

sudo vi /usr/local/share/ca-certificates/hometown.crt
sudo update-ca-certificates

To test that the correct CA certificate is being used, I used openssl:

openssl s_client -connect hometown:443 -showcerts

Now I could use the docker login command without any TLS certificate issue.

💡
Important: When I want to push my tagged images (hometown:443/), I also have to include the port number in the docker login command:
docker login https://hometown:443

Then I logged in with the credentials of my user 'andre'.

The last step was to push the tagged image to the Harbor registry. Again, Harbor provides the command pattern for pushing images:

docker push hometown/secondtry/REPOSITORY[:TAG]

With the port number included, I used the following command to push the image to the registry, into the project secondtry:

docker push hometown:443/secondtry/helloworld:0.1

After refreshing the project menu in Harbor, the new image helloworld is shown.

Image Scanning and SBOM via Trivy

Harbor comes with Trivy integrated by default, which is a significant advantage for maintaining security. Trivy is a powerful tool developed by Aqua Security, specifically designed for scanning container images for vulnerabilities. It performs CVE (Common Vulnerabilities and Exposures) scanning, which helps identify potential security risks in the images stored in your registry. By using Trivy, users can verify that container images contain no known vulnerabilities before they are deployed. This integration allows for automated scanning, providing an extra layer of security by continuously monitoring for any new vulnerabilities that might affect your images. With Trivy, a high level of security compliance can be maintained, making it an essential component of a container management strategy.

I wanted to scan my helloworld image for vulnerabilities. To do this, I pressed 'Scan vulnerability' in the image menu. The result: 'No recognizable vulnerability detected'.

In addition to scanning for vulnerabilities, it is also possible to create a Software Bill of Materials (SBOM). An SBOM is a detailed list that outlines all the components and dependencies included in a software application or container image. This document is crucial for understanding the makeup of your software, as it provides transparency into the various libraries and packages used. By generating an SBOM, one can gain insights into potential security risks associated with specific components, track the use of open-source software, and ensure compliance with licensing requirements. Creating an SBOM can also aid in the quick identification and resolution of vulnerabilities by allowing users to pinpoint affected components easily. This process enhances the overall security posture of your software development lifecycle, making it an invaluable practice for maintaining robust security standards.

Since I wanted to create an SBOM, I selected my artifact and pressed 'Generate SBOM'.

The SBOM can be viewed directly in Harbor, but it can also be downloaded as JSON.

It is also possible to configure the project so that whenever someone pushes an image to a repository, the image is automatically scanned and an SBOM is generated.
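The same project settings can also be changed through Harbor's REST API instead of the UI. This is a hedged sketch, assuming the v2.0 projects endpoint and the metadata keys auto_scan and (on newer Harbor versions) auto_sbom_generation; verify both against your Harbor release:

```shell
# Project metadata payload (field names assumed from the Harbor v2.0 API):
payload='{"metadata":{"auto_scan":"true","auto_sbom_generation":"true"}}'

# The actual call would look like this (commented out, needs a live Harbor):
#   curl -u admin:Harbor12345 -X PUT -H 'Content-Type: application/json' \
#     -d "$payload" https://hometown:443/api/v2.0/projects/secondtry

echo "$payload"
```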

Harbor Image pull from K8s cluster

Since the image was stored in the registry and security-checked through the CVE image scan, I could now pull my self-created image and use it on my Kubernetes cluster. But Kubernetes has to authenticate itself when it pulls images from Harbor. I wanted to follow best practices and not use my developer user 'andre' to pull images. Instead, I created a technical user that is allowed to pull images from Harbor. Additionally, following the principle of least privilege, the technical user only has this permission for the secondtry project, so my Kubernetes cluster cannot pull images from any other private project. In Harbor, technical users are called robot accounts. I created the robot account 'captainhook', which is only allowed to pull images from secondtry.
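Robot accounts can likewise be created via the API instead of the UI. A sketch under the assumption that the v2.0 robots endpoint and permission schema look like this (double-check against your Harbor version; the response contains the generated robot secret):

```shell
# Robot account payload: project-level, pull-only on secondtry
payload='{"name":"captainhook","duration":-1,"level":"project","permissions":[{"kind":"project","namespace":"secondtry","access":[{"resource":"repository","action":"pull"}]}]}'

# Commented out, needs a live Harbor instance:
#   curl -u admin:Harbor12345 -X POST -H 'Content-Type: application/json' \
#     -d "$payload" https://hometown:443/api/v2.0/robots

echo "$payload"
```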

Now everything was prepared to pull my image from Harbor. First, I created a pod manifest with the following command:

kubectl run harbor-pull --image=hometown:443/secondtry/helloworld:0.1 -o yaml --dry-run=client > harbor-pull.yaml

This pod is called harbor-pull and uses the image from Harbor, but first I needed to configure the authentication; otherwise I would get an ImagePullBackOff error. Hence, I created a docker-registry secret:

kubectl create secret docker-registry harbor-pull \
  --docker-server=hometown:443 \
  --docker-username='robot$secondtry+captainhook' \
  --docker-password='xQjRUfeRvXk5YwLff01Ab2x62CienecN' \
  --docker-email=you@example.com
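For reference, the secret generated by this command has the following shape (a sketch; the .dockerconfigjson value is the base64-encoded Docker auth config, shortened here). Appending --dry-run=client -o yaml to the command above prints it without creating anything:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-pull
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6eyJob21ldG93bjo0NDMiOnsi...
```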

After that I could edit the pod manifest and set the image pull secret (vi harbor-pull.yaml, since I had already created the manifest):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: harbor-pull
  name: harbor-pull
spec:
  containers:
  - image: hometown:443/secondtry/helloworld:0.1
    name: harbor-pull
    command: ["sleep", "500"]
    resources: {}
  imagePullSecrets:
  - name: harbor-pull
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Now I only had to apply the manifest to create the pod:

kubectl apply -f harbor-pull.yaml

The audit logs also show that the robot account pulled the image helloworld.

💡
Note: I also had to install the CA certificate on hometown (same process as before, via update-ca-certificates) to pull the image from my K3s cluster.

Managing stored images

I would like to mention that Harbor has a wide range of functionality to maintain and manage images. Harbor provides capabilities for image replication, which enables the synchronization of images across multiple Harbor instances, ensuring consistency and availability. Additionally, it supports image retention policies that help in automatically cleaning up old or unused images, thereby optimizing storage usage. Harbor also offers detailed access control mechanisms, allowing administrators to define who can view, pull, or push images, enhancing security.

In addition to the various features offered by Harbor for managing images, I also have the ability to label my images. This functionality allows me to categorize and organize images more effectively, making it easier to search and filter them based on specific criteria. By applying labels, I can quickly identify images that belong to a particular project, environment, or version. This not only improves the management of images but also enhances collaboration among team members by providing a clear and structured way to handle image repositories. Labels can be customized to fit the needs of the organization, ensuring that the image management process is both flexible and efficient.

Alerts via webhooks

In my Grafana article I added an alert function: every time more than 23 pods were running in my cluster, I was informed via Discord. For Harbor I wanted to add a different trigger: every time an image gets pushed or pulled, I should be notified on Discord as well.

First I created a new text channel and a new webhook named spideybot.

Right now, Harbor webhooks only support the payload formats 'Default' and 'CloudEvents', which are not compatible with Discord. So some kind of 'listener' is needed that receives Harbor's webhook calls and pushes a message to my channel #harbor in Discord.

To do that, I first created the listener with Node.js.

const express = require('express');
const axios = require('axios');
const config = require('./config');

const app = express();

app.use(express.json());

// Helper function to send Discord message
async function sendDiscordMessage(content) {
  try {
    const payload = { 
      content,
      username: config.discord.username
    };

    // Add avatar if configured
    if (config.discord.avatarUrl) {
      payload.avatar_url = config.discord.avatarUrl;
    }

    // If channel ID is specified, add it to the payload
    if (config.discord.channelId) {
      payload.channel_id = config.discord.channelId;
    }

    const response = await axios.post(config.discord.webhookUrl, payload, {
      headers: {
        'Content-Type': 'application/json'
      },
      timeout: 10000 // 10 second timeout
    });

    console.log('Discord message sent successfully');
    return true;
  } catch (error) {
    console.error('Failed to send Discord message:', error.message);
    if (error.response) {
      console.error('Discord API response:', error.response.status, error.response.data);
    }
    return false;
  }
}

// Helper function to check if event should be processed
function shouldProcessEvent(event) {
  // Check if event type is in the allowed list
  if (config.harbor.events.length > 0 && !config.harbor.events.includes(event.type)) {
    console.log(`Skipping event type: ${event.type} (not in allowed list)`);
    return false;
  }

  // Check if repository is in the allowed list
  if (config.harbor.repositories.length > 0 && event.event_data && event.event_data.repository) {
    const repoName = event.event_data.repository.repo_full_name;
    if (!config.harbor.repositories.some(allowedRepo => 
      repoName.includes(allowedRepo) || allowedRepo.includes(repoName)
    )) {
      console.log(`Skipping repository: ${repoName} (not in allowed list)`);
      return false;
    }
  }

  return true;
}

// Helper function to format event messages
function formatEventMessage(event) {
  const eventType = event.type;
  const operator = event.operator || 'Unknown';
  const timestamp = new Date().toISOString();

  let message = '';
  let emoji = '📦';

  switch (eventType) {
    case 'PUSH_ARTIFACT':
      const repoInfo = event.event_data.repository;
      const resource = event.event_data.resources[0] || {};
      const tag = resource.tag || 'latest';
      message = `${emoji} **PUSH_ARTIFACT**: \`${repoInfo.repo_full_name}:${tag}\` by \`${operator}\``;
      break;

    case 'DELETE_ARTIFACT':
      const deleteRepoInfo = event.event_data.repository;
      const deleteResource = event.event_data.resources[0] || {};
      const deleteTag = deleteResource.tag || 'latest';
      emoji = '🗑️';
      message = `${emoji} **DELETE_ARTIFACT**: \`${deleteRepoInfo.repo_full_name}:${deleteTag}\` by \`${operator}\``;
      break;

    case 'PULL_ARTIFACT':
      const pullRepoInfo = event.event_data.repository;
      const pullResource = event.event_data.resources[0] || {};
      const pullTag = pullResource.tag || 'latest';
      emoji = '⬇️';
      message = `${emoji} **PULL_ARTIFACT**: \`${pullRepoInfo.repo_full_name}:${pullTag}\` by \`${operator}\``;
      break;

    case 'CREATE_TAG':
      const createTagRepoInfo = event.event_data.repository;
      const createTagResource = event.event_data.resources[0] || {};
      const createTagTag = createTagResource.tag || 'latest';
      emoji = '🏷️';
      message = `${emoji} **CREATE_TAG**: \`${createTagRepoInfo.repo_full_name}:${createTagTag}\` by \`${operator}\``;
      break;

    case 'DELETE_TAG':
      const deleteTagRepoInfo = event.event_data.repository;
      const deleteTagResource = event.event_data.resources[0] || {};
      const deleteTagTag = deleteTagResource.tag || 'latest';
      emoji = '🏷️🗑️';
      message = `${emoji} **DELETE_TAG**: \`${deleteTagRepoInfo.repo_full_name}:${deleteTagTag}\` by \`${operator}\``;
      break;

    case 'CREATE_REPOSITORY':
      const createRepoInfo = event.event_data.repository;
      emoji = '📁';
      message = `${emoji} **CREATE_REPOSITORY**: \`${createRepoInfo.repo_full_name}\` by \`${operator}\``;
      break;

    case 'DELETE_REPOSITORY':
      const deleteRepoInfo2 = event.event_data.repository;
      emoji = '📁🗑️';
      message = `${emoji} **DELETE_REPOSITORY**: \`${deleteRepoInfo2.repo_full_name}\` by \`${operator}\``;
      break;

    default:
      emoji = 'ℹ️';
      message = `${emoji} **${eventType}**: Event received from \`${operator}\``;
      if (event.event_data && event.event_data.repository) {
        message += ` for repository \`${event.event_data.repository.repo_full_name}\``;
      }
  }

  // Add timestamp
  message += `\n⏰ ${timestamp}`;

  return message;
}

// Webhook endpoint
app.post('/webhook', async (req, res) => {
  try {
    const event = req.body;

    if (config.logging.enableRequestLogging) {
      console.log('Received Harbor webhook event:', {
        type: event.type,
        operator: event.operator,
        timestamp: event.occur_at,
        repository: event.event_data?.repository?.repo_full_name
      });
    }

    // Validate event structure
    if (!event.type) {
      console.warn('Received invalid event without type');
      return res.status(400).send('Invalid event format');
    }

    // Check if event should be processed
    if (!shouldProcessEvent(event)) {
      return res.status(200).send('Event filtered out');
    }

    // Format the message
    const message = formatEventMessage(event);

    // Send to Discord
    const success = await sendDiscordMessage(message);

    if (success) {
      res.status(200).send('OK');
    } else {
      res.status(500).send('Failed to send Discord message');
    }

  } catch (error) {
    console.error('Error processing webhook:', error);
    res.status(500).send('Internal server error');
  }
});

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    discord_webhook_configured: !!config.discord.webhookUrl,
    discord_channel_configured: !!config.discord.channelId,
    allowed_events: config.harbor.events,
    allowed_repositories: config.harbor.repositories
  });
});

// Configuration endpoint
app.get('/config', (req, res) => {
  res.status(200).json({
    discord: {
      webhook_configured: !!config.discord.webhookUrl,
      channel_configured: !!config.discord.channelId,
      username: config.discord.username
    },
    harbor: {
      allowed_events: config.harbor.events,
      allowed_repositories: config.harbor.repositories
    },
    server: {
      port: config.server.port,
      host: config.server.host
    }
  });
});

// Start server
app.listen(config.server.port, config.server.host, () => {
  console.log(`🚀 Harbor Discord Listener started on ${config.server.host}:${config.server.port}`);
  console.log(`📡 Discord webhook URL: ${config.discord.webhookUrl ? 'Configured' : 'Not configured'}`);
  console.log(`📢 Discord channel ID: ${config.discord.channelId || 'Not specified'}`);
  console.log(`🔗 Webhook endpoint: http://${config.server.host}:${config.server.port}/webhook`);
  console.log(`❤️  Health check: http://${config.server.host}:${config.server.port}/health`);
  console.log(`⚙️  Config endpoint: http://${config.server.host}:${config.server.port}/config`);
  console.log(`📋 Allowed events: ${config.harbor.events.join(', ')}`);
  console.log(`📁 Allowed repositories: ${config.harbor.repositories.length > 0 ? config.harbor.repositories.join(', ') : 'All'}`);
});
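The listener does require('./config'), but config.js itself is not shown above. Here is a minimal sketch of what it could look like, reading the same environment variables that the ConfigMap and Secret in the deployment manifests provide; the exact structure is my assumption, derived from how listener.js accesses its config:

```javascript
// config.js - maps environment variables to the structure listener.js expects
const config = {
  discord: {
    webhookUrl: process.env.DISCORD_WEBHOOK_URL || '',
    channelId: process.env.DISCORD_CHANNEL_ID || '',
    username: process.env.DISCORD_USERNAME || 'Harbor Bot',
    avatarUrl: process.env.DISCORD_AVATAR_URL || ''
  },
  harbor: {
    // comma-separated lists; empty string -> empty array (= allow all)
    events: (process.env.HARBOR_EVENTS || '').split(',').filter(Boolean),
    repositories: (process.env.HARBOR_REPOSITORIES || '').split(',').filter(Boolean)
  },
  server: {
    port: parseInt(process.env.PORT || '8080', 10),
    host: process.env.HOST || '0.0.0.0'
  },
  logging: {
    level: process.env.LOG_LEVEL || 'info',
    enableRequestLogging: process.env.ENABLE_REQUEST_LOGGING === 'true'
  }
};

module.exports = config;
```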

Then I wanted to containerize the listener. So I created a Dockerfile and built the image via docker build -t listener .

FROM node:18-alpine

# Install curl for health checks
RUN apk add --no-cache curl

WORKDIR /app

# Copy package files and install dependencies
COPY package*.json ./
RUN npm install --only=production

# Copy application files
COPY listener.js config.js ./

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001

# Change ownership of the app directory
RUN chown -R nodejs:nodejs /app
USER nodejs

# Expose port
EXPOSE 8080

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1

# Start the application
CMD ["node", "listener.js"]

I tagged my listener image and pushed it to Harbor:

docker tag listener:latest hometown:443/secondtry/listener:0.1
docker push hometown:443/secondtry/listener:0.1

Then I could create a deployment, a service, and an ingress to deploy my listener in my Kubernetes cluster.

I used a ConfigMap and a Secret to hold the Discord URL and the Harbor repository to listen to. Then I applied my YAML via kubectl apply -f.
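One detail that is easy to trip over: the values under data: in the Secret below must be base64-encoded (ConfigMap values stay plain text). The DISCORD_USERNAME value, for example, is just the encoded bot name:

```shell
printf 'Harbor Bot' | base64   # prints: SGFyYm9yIEJvdA==
```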

apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-discord-listener-config
data:
  # Harbor Configuration
  HARBOR_EVENTS: "PUSH_ARTIFACT,DELETE_ARTIFACT,PULL_ARTIFACT,CREATE_TAG,DELETE_TAG,CREATE_REPOSITORY,DELETE_REPOSITORY"
  HARBOR_REPOSITORIES: ""
  # Server Configuration
  PORT: "8080"
  HOST: "0.0.0.0"
  # Logging Configuration
  LOG_LEVEL: "info"
  ENABLE_REQUEST_LOGGING: "true"
---
apiVersion: v1
kind: Secret
metadata:
  name: harbor-discord-listener-secret
type: Opaque
data:
  DISCORD_WEBHOOK_URL: "xxx"
  DISCORD_CHANNEL_ID: "MTQwMDE2NzMxMjcxMTIyNTQ2Ngo="
  DISCORD_USERNAME: "SGFyYm9yIEJvdA=="
  DISCORD_AVATAR_URL: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-discord-listener
  labels:
    app: harbor-discord-listener
spec:
  replicas: 1
  selector:
    matchLabels:
      app: harbor-discord-listener
  template:
    metadata:
      labels:
        app: harbor-discord-listener
    spec:
      imagePullSecrets:
        - name: harbor-pull
      containers:
        - name: harbor-discord-listener
          image: hometown:443/secondtry/listener:0.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http
          envFrom:
            - configMapRef:
                name: harbor-discord-listener-config
            - secretRef:
                name: harbor-discord-listener-secret
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
---
apiVersion: v1
kind: Service
metadata:
  name: harbor-discord-listener-service
  labels:
    app: harbor-discord-listener
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: harbor-discord-listener
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: harbor-discord-listener-ingress
spec:
  rules:
    - host: hometown
      http:
        paths:
          - path: /webhook
            pathType: Prefix
            backend:
              service:
                name: harbor-discord-listener-service
                port:
                  number: 8080

Now I could create a webhook in my project 'secondtry' that reacts to the event types artifact pulled and artifact pushed. It uses the ingress of the listener as its endpoint: http://hometown/webhook.

Since everything was up and running, I could test my push alert. Again I used my helloworld image, tagged it, and pushed it to Harbor:

docker tag helloworld:latest hometown:443/secondtry/helloworld:0.5
docker push hometown:443/secondtry/helloworld:0.5

I was directly informed on Discord that an artifact had been pushed by user andre. Then it was pulled a few times by Trivy via a robot account, which automatically performed a CVE scan. Last but not least, the scanned artifact was pushed again by Trivy.

In Harbor, all webhook executions are listed in the project menu. There I could see the whole payload and when the webhook was triggered.
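For illustration, a PUSH_ARTIFACT payload has roughly the following shape. This is a sketch assembled from the fields the listener reads (type, operator, event_data.repository, event_data.resources); the values are examples and the artifact digest is omitted:

```json
{
  "type": "PUSH_ARTIFACT",
  "occur_at": 1700000000,
  "operator": "andre",
  "event_data": {
    "resources": [
      {
        "tag": "0.5",
        "resource_url": "hometown:443/secondtry/helloworld:0.5"
      }
    ],
    "repository": {
      "namespace": "secondtry",
      "name": "helloworld",
      "repo_full_name": "secondtry/helloworld",
      "repo_type": "private"
    }
  }
}
```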

Finally, I would like to show one last CVE image scan, of my listener image that was pushed to Harbor earlier. It is a good example of how Harbor lists the vulnerabilities found in an image.

Conclusion

In conclusion, Harbor plays a pivotal role in the secure deployment and management of container images, offering a comprehensive suite of features that enhance both security and efficiency. By integrating seamlessly with CI/CD pipelines, Harbor ensures that container images are consistently available and secure across various environments. Its capabilities, such as vulnerability scanning with Trivy, image replication, and role-based access control, provide robust security measures and streamline the development process. Additionally, the ability to generate Software Bill of Materials (SBOM) and manage images through labeling and retention policies further enhances its utility. With the added functionality of webhooks for alerts, Harbor not only secures but also optimizes the container image lifecycle, making it an indispensable tool for modern DevOps practices.

Personally, I find the Harbor Registry to be an excellent open-source alternative that can confidently compete with popular cloud solutions like the Azure Registry in terms of usability and user interface. Its features, such as an integrated CVE scanner, SBOM, and various image management functions, make the tool well-rounded. While the alert system primarily works through webhooks, as I demonstrated in the article, this is entirely feasible and adds to its versatility.


Written by

André Hoffmann

I am a Solution Architect with a focus on IT infrastructure. I like to work in the cloud but also on-prem. With Azure Cloud I build solutions for different customers. I am always interested in new technology :) Certified Kubernetes Administrator, Certified Azure Administrator Associate