KubeAcademy by VMware
Building Images with Buildpacks: pack, Spring Boot, kpack, and Paketo Buildpacks

In this lesson we explore the features and functionality of Cloud Native Buildpacks using three tools (platforms) for building images: the pack CLI, Spring Boot build plugin, and kpack hosted service. We also explore the modularized buildpacks provided by Paketo Buildpacks for building images in a variety of languages.

Cora Iberkleid

Part Developer Advocate, part Advisory Solutions Engineer at VMware

Cora Iberkleid is part Developer Advocate, part Advisory Solutions Engineer at VMware, helping developers and enterprises navigate and adopt modern practices and technologies including Spring, Cloud Foundry, Kubernetes, VMware Tanzu, and modern CI/CD.

In this lesson, we'll go hands-on with Cloud Native Buildpacks and Paketo. We'll use the pack CLI to go through examples of building, inspecting, and rebasing images, as well as using custom buildpacks. Then we'll see how Spring Boot and kpack offer a different user experience. And we'll wrap up with a recap of the benefits of the system. Let's start with the same hello-world Go application that we worked with in our Dockerfile examples. We have our source code in the current directory.

Now we'll be using the base builder from Paketo, which is publicly available on Google Container Registry. So let's go ahead and set it as the default. For efficiency, I already built this image once; that caused the builder and run images to be downloaded to my local Docker daemon and the local cache to be populated. Let's rebuild the image. The command is very simple: just pack build and the name of the image. For these examples, we'll be publishing images to the local Docker daemon; however, you can simply add the --publish flag and include your registry and repository name in the image name, and pack will publish to the registry instead.
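
For reference, the commands used here look roughly like the following. The image names are illustrative, and older versions of pack use set-default-builder rather than config default-builder:

# Set the Paketo base builder as the default builder
pack config default-builder gcr.io/paketo-buildpacks/builder:base

# Build an image from the source in the current directory into the local Docker daemon
pack build hello-go

# Or publish directly to a registry by qualifying the image name and adding --publish
pack build registry.example.com/myteam/hello-go --publish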

Let's review the output. You can see the lifecycle phases that we learned about in the previous lesson: detection of buildpacks, followed by loading of metadata and data for optimization of the build, followed by execution of buildpacks. The output within the build phase comes from the Paketo buildpacks. Notice that they employ the best practices that we had to configure manually using a Dockerfile, such as separating modules from source. Finally, the export publishes any changed layers and updates the cache as necessary.

At this point, you can use docker run to run the application as we did in the lesson on Dockerfiles, or deploy it to Kubernetes. So we see that this is very simple to use for applications in a number of languages. And anyone across an enterprise, or across the world in fact, who has access to the same builder and the same source code can reliably build the same image. We can use the pack inspect-image command to get metadata provided by the lifecycle, including the buildpacks that were used, the start command, and more. You might be curious about the user setup. The default is a non-root user in a corresponding group. This is configured by Paketo.
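
As a quick sketch, running and inspecting the image might look like this (the image name and port are illustrative):

# Run the built image locally
docker run --rm -p 8080:8080 hello-go

# Show lifecycle-provided metadata: buildpacks used, start command, run image, and more
pack inspect-image hello-go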

Okay, now let's inspect the builder. You can see, for example, additional frameworks that this builder supports, the buildpacks it contains with links to their homepages, and more. You can visit the homepages for information about configuring the buildpacks, either through a buildpack.yml file or through environment variables. Now imagine that a vulnerability is discovered in the operating system and Paketo Buildpacks makes a new run image with a patch available. We can rebase the image using the new run image.
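
A minimal sketch of these two steps, assuming the same builder and image names as above:

# List the buildpacks, supported stacks, and lifecycle version the builder provides
pack inspect-builder gcr.io/paketo-buildpacks/builder:base

# Rebase the application image onto the patched run image, without rebuilding
pack rebase hello-go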

A new image was created with a different image ID. In fact, we can use docker inspect to compare the layers corresponding to the root file system. You can see that two of the layers have changed. Now, what if you want to include custom files or behavior? You could do it using a custom buildpack. Here we see the structure of a simple example buildpack. It includes a configuration file and two scripts corresponding to the Buildpacks API: detect and build. To use a custom buildpack, simply include it in your pack build command, using a path or URL. You can choose to run it before, after, or instead of the buildpacks from the builder. This sample buildpack simply prints environment variables, but you can see that it was detected and invoked. To share custom buildpacks more easily, you can also bundle them into a builder.
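
A sketch of that flow, with a hypothetical sample-buildpack directory standing in for the custom buildpack shown in the lesson:

# Compare the root-filesystem layers before and after the rebase
docker inspect hello-go --format '{{json .RootFS.Layers}}'

# Layout of a minimal custom buildpack
# sample-buildpack/
#   buildpack.toml   (buildpack id, version, and supported stacks)
#   bin/detect       (exits 0 to opt in, non-zero to opt out)
#   bin/build        (contributes layers; this sample just prints environment variables)

# Include the custom buildpack by path or URL; the flag can be repeated to control ordering
pack build hello-go --buildpack ./sample-buildpack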

Let's switch to a Java example and use Spring Boot instead of pack. Spring Boot 2.3.0 added support for Cloud Native Buildpacks through its Maven and Gradle plugins. Let's kick off a build. Spring Boot uses Paketo by default and uses the artifact ID and version to name the image. Both of these settings are configurable. You can see the same familiar lifecycle reflected in the output of the Spring Boot build. In the case of this app, five applicable buildpacks are detected. Within the build phase, the behavior and the output that we're seeing come from the Paketo buildpacks for Java, not Spring Boot. Notice the environment variables in the output, which give us hints about how we can configure the Java buildpacks.
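
The build itself is a single plugin goal or task; for example (the image-name override is optional and uses an illustrative name):

# Maven
./mvnw spring-boot:build-image

# Gradle
./gradlew bootBuildImage

# Override the default artifactId:version image name if desired
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=myteam/demo-app:1.0.0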

For example, you can set the version of the JDK and JRE by using the BP_JVM_VERSION environment variable. You can also see that the Paketo buildpacks include the JVM memory calculator that they inherited from the Cloud Foundry buildpacks. The launch command includes support for detecting a JAVA_OPTS environment variable at runtime. Paketo will also automatically take advantage of layering of application files. This is similar to the layering that we saw with Jib and is also now supported in Spring Boot.
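
As a rough sketch of those two configuration points (the image name is illustrative, and the exact version string accepted by BP_JVM_VERSION can vary by buildpack release):

# Pin the JVM version at build time via a buildpack environment variable, here using pack
pack build demo-app --env BP_JVM_VERSION=11

# Tune the JVM at runtime; the Paketo launch process honors JAVA_OPTS
docker run --rm -p 8080:8080 -e JAVA_OPTS="-Xmx512m" demo-app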

We saw the inspect-image command earlier, which gave us information provided by the lifecycle. You can add a bill-of-materials (--bom) flag to see additional information provided by the buildpacks. Here are the categories of information provided by the Paketo Java buildpacks. If we dig into dependencies, for example, we can see that the full list of Java dependencies and their versions is included.
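
For example, assuming the illustrative image name from the Spring Boot build above:

# Show the bill of materials contributed by the buildpacks, including the Java dependency list
pack inspect-image demo-app --bom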

The last platform that we'll explore in this lesson is kpack. kpack operates as a service in Kubernetes that can be configured declaratively. We can see, for example, the resources that it installs. In this example, we're going to configure a builder and an image. The builder resource points to the Paketo builder we want to use. The image resource includes the builder resource name, the location of the source code, the destination of the image, a service account with authorization to publish to that destination registry, and a flag that enables caching. kpack uses a persistent volume claim for the cache. By default, kpack polls for changes. It polls the git repo, the builder, and the run image every five minutes, and it will rebuild or rebase automatically as appropriate if it detects any changes. Let's trigger a build using a git commit.
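
A hedged sketch of what those two resources could look like, assuming the kpack.io/v1alpha1 API that was current around this lesson; the field names depend on the kpack release, and the resource names, service account, source repo, and destination registry are all illustrative:

kubectl apply -f - <<'EOF'
apiVersion: kpack.io/v1alpha1
kind: Builder
metadata:
  name: paketo-builder
spec:
  image: gcr.io/paketo-buildpacks/builder:base    # the Paketo builder to use
---
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: hello-go
spec:
  tag: docker.io/myteam/hello-go        # destination registry/repository
  serviceAccount: registry-sa           # service account authorized to push to that registry
  cacheSize: "1.5Gi"                    # enables caching backed by a persistent volume claim
  builder:
    name: paketo-builder
    kind: Builder
  source:
    git:
      url: https://github.com/myteam/hello-go.git
      revision: main
EOF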

We see a new image published to Docker Hub. And back in kpack, we can check the image resource details. We can see how many builds it has done, the name of the last build, and more. Each build creates a build resource and a pod. You can use kubectl describe to get more information on the associated build and pod resources as well.
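
The kubectl side of this looks roughly like the following (resource names are illustrative; the fully qualified kpack resource names avoid clashes with other CRDs):

# Check the image resource and its build history
kubectl get images.kpack.io
kubectl describe images.kpack.io hello-go

# Each build creates a Build resource and a pod
kubectl get builds.kpack.io
kubectl describe builds.kpack.io
kubectl get pods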

kpack also includes metadata in the build resource about the reason for the build. You can see this build was triggered by a commit. Other reasons might be stack, buildpack, or config changes, or a manual trigger. Finally, kpack provides a CLI called logs that enables us to easily see the output of a build. Again, we see the familiar orchestration of the lifecycle. It's easy to envision kpack forming part of an automated build toolchain.
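
Tailing a build might look something like this; the flag names are per the standalone kpack logs utility and may differ between releases:

# Stream the build output for the image's latest build in the given namespace
logs -image hello-go -namespace default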

Let's review some of the benefits that we've seen over the last two lessons. Cloud Native Buildpacks makes it easy to build images in a consistent and reliable way. It has a well-designed model and reference implementations that provide build-time optimizations, as well as a foundation for growing ecosystems of platforms and buildpacks. It embraces modern container standards, such as OCI, and offers advanced features, such as base image rebasing. It also enables standardizing builds across an organization in a way that's easy to govern and manage over time.

Paketo Buildpacks improves on the previous generation of Cloud Foundry buildpacks, making them accessible to the modern container and runtime ecosystem. They apply best practices for a variety of programming languages and provide configuration hooks through buildpack.yml files or environment variables. The codebase is actively updated, providing the latest features and patches in language runtimes and in the base operating system to keep our images current and secure. That concludes this lesson on Cloud Native Buildpacks and Paketo.

In this lesson, we learned how to use three different platforms optimized for three different user experiences: pack for issuing imperative commands at the command line, kpack for automated builds based on declarative configuration, and Spring Boot to integrate with Maven or Gradle workflows. We also explored features such as publishing to a registry, rebasing the operating system, and using custom buildpacks. And throughout these examples, we took advantage of Paketo Buildpacks, inspecting the builders, images, and modularized buildpacks it provides, and exploring a standardized approach to configuration.
