© 2016 The original authors.
1. Introduction
Fabric8 Maven Plugin is deprecated and is no longer supported. Please consider migrating to Eclipse JKube plugins: Kubernetes Maven Plugin or OpenShift Maven Plugin . You can read the Migration Guide for more details. |
The fabric8-maven-plugin (f8-m-p) brings your Java applications on to Kubernetes and OpenShift.
It provides a tight integration into Maven and benefits from the build configuration already provided.
This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
It can be configured very flexibly and supports multiple configuration models: a Zero-Config setup allows for a quick ramp-up with some opinionated defaults.
For more advanced requirements, an XML configuration provides additional configuration options which can be added to the pom.xml.
For the full power, in order to tune all facets of the creation, external resource fragments and Dockerfiles can be used.
1.1. Building Images
The fabric8:build goal is for creating Docker images containing the actual application. These can then be deployed later on Kubernetes or OpenShift. It is easy to include build artifacts and their dependencies into these images. This plugin uses the assembly descriptor format from the maven-assembly-plugin to specify the content which will be added to the image. These images can then be pushed to public or private Docker registries with fabric8:push.
Depending on the operational mode, for building the actual image either a Docker daemon is used directly or an OpenShift Docker Build is performed.
A special fabric8:watch goal allows for reacting to code changes to automatically recreate images or copy new artifacts into running containers.
These image related features are inherited from the fabric8io/docker-maven-plugin which is part of this plugin.
1.2. Kubernetes and OpenShift Resources
Kubernetes and OpenShift resource descriptors can be created or generated from fabric8:resource. These files are packaged within the Maven artifacts and can be deployed to a running orchestration platform with fabric8:apply.
Typically you only specify a small part of the real resource descriptors which will be enriched by this plugin with various extra information taken from the pom.xml.
This drastically reduces boilerplate code for common scenarios.
1.3. Configuration
As mentioned already there are three levels of configuration:
-
Zero-Config mode makes some very opinionated decisions based on what is present in the pom.xml like what base image to use or which ports to expose. This is great for starting up things and for keeping quickstart applications small and tidy.
-
XML plugin configuration mode is similar to what docker-maven-plugin provides. This allows for type-safe configuration with IDE support, but only a subset of possible resource descriptor features is provided.
-
Kubernetes & OpenShift resource fragments are user provided YAML files that can be enriched by the plugin. This allows expert users to use a plain configuration file with all their capabilities, but also to add project specific build information and avoid boilerplate code.
The following table gives an overview of the different models:
Model | Docker Images | Resource Descriptors |
---|---|---|
Zero-Config |
Generators are used to create Docker image configurations. Generators can detect certain aspects of the build (e.g. whether Spring Boot is used) and then choose some default like the base image, which ports to expose and the startup command. They can be configured, but offer only a few options. |
Default Enrichers will create a default Service and Deployment (DeploymentConfig for OpenShift) when no other resource objects are provided. Depending on the image they can detect which port to expose in the service. As with Generators, Enrichers support a limited set of configuration options. |
XML configuration |
f8-m-p inherits the XML based configuration for building images from the docker-maven-plugin and provides the same functionality. It supports an assembly descriptor for specifying the content of the Docker image. |
A subset of possible resource objects can be configured with a dedicated XML syntax. With a decent IDE you get autocompletion on most objects and inline documentation for the available configuration elements. The provided configuration can still be enhanced by Enrichers, which is useful for adding e.g. labels and annotations containing build or other information. |
Resource Fragments and Dockerfiles |
Similarly to the docker-maven-plugin, f8-m-p supports external Dockerfiles too, which are referenced from the plugin configuration. |
Resource descriptors can be provided as external YAML files which specify a skeleton. This skeleton is then filled by Enrichers which add labels and more. Maven properties within these files are resolved to their values. With this model you can use every Kubernetes / OpenShift resource object with all their flexibility, but still get the benefit of adding build information. |
1.4. Examples
Let’s have a look at some code. The following examples will demonstrate all three configuration variants:
1.4.1. Zero-Config
This minimal but fully working example pom.xml shows how a simple Spring Boot application can be dockerized and prepared for Kubernetes and OpenShift. The full example can be found in the directory samples/zero-config.
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-sample-zero-config</artifactId>
<version>4.4.2</version>
<packaging>jar</packaging>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId> (1)
<version>1.5.5.RELEASE</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId> (2)
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId> (3)
</plugin>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId> (4)
<version>4.4.2</version>
</plugin>
</plugins>
</build>
</project>
1 | This minimalistic spring boot application uses the spring-boot parent POM for setting up dependencies and plugins |
2 | The Spring Boot web starter dependency enables a simple embedded Tomcat for serving Spring MVC apps |
3 | The spring-boot-maven-plugin is responsible for repackaging the application into a fat jar, including all dependencies and the embedded Tomcat |
4 | The fabric8-maven-plugin enables the automatic generation of a Docker image and Kubernetes / OpenShift descriptors including this Spring application. |
This setup makes some opinionated decisions for you:
-
As base image fabric8/java-jboss-openjdk8-jdk is chosen which enables Jolokia and jmx_exporter. It also comes with a sophisticated startup script.
-
It will create a Kubernetes Deployment and a Service as resource objects
-
It exports port 8080 as the application service port (and 8778 and 9779 for Jolokia and jmx_exporter access, respectively)
These choices can be influenced by configuration options as described in Spring Boot Generator.
To start the Docker image build, you simply run
mvn package fabric8:build
This will create the Docker image against a running Docker daemon (which must be accessible either via Unix socket or with the URL set in DOCKER_HOST). Alternatively, when connected to an OpenShift cluster (or using the openshift mode explicitly), a Docker build will be performed on OpenShift which at the end creates an ImageStream.
To deploy the resources to the cluster call
mvn fabric8:resource fabric8:deploy
By default a Service and a Deployment object pointing to the created Docker image are created. When running in OpenShift mode, a Service and a DeploymentConfig which refers to the ImageStream created with fabric8:build will be installed.
Of course you can bind all those fabric8 goals to execution phases as well, so that they are called along with standard lifecycle goals like install. For example, to bind the building of the Kubernetes resource files and the Docker images, add the following goals to the execution of the f8-m-p:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<!-- ... -->
<executions>
<execution>
<goals>
<goal>resource</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
</plugin>
If you’d also like to automatically deploy to Kubernetes each time you do a mvn install you can add the deploy goal:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<!-- ... -->
<executions>
<execution>
<goals>
<goal>resource</goal>
<goal>build</goal>
<goal>deploy</goal>
</goals>
</execution>
</executions>
</plugin>
1.4.2. XML Configuration
XML based configuration is only partially implemented and is not recommended for use right now. |
Although the Zero-Config mode and its generators can be tweaked with options up to a certain degree, many cases require more flexibility. For such instances, an XML-based plugin configuration can be used, in a way similar to the XML configuration used by docker-maven-plugin.
The plugin configuration can be roughly divided into the following sections:
-
Global configuration options are responsible for tuning the behaviour of plugin goals
-
<images>
defines which Docker images are used and configured. This section is similar to the image configuration of the docker-maven-plugin, except that <run> and <external> sub-elements are ignored.
-
<resource>
defines the resource descriptors for deploying on an OpenShift or Kubernetes cluster.
-
<generator>
configures generators which are responsible for creating images. Generators are used as an alternative to a dedicated <images> section.
-
<enricher>
configures various aspects of enrichers for creating or enhancing resource descriptors.
A working example can be found in the samples/xml-config directory. An extract of the plugin configuration is shown below:
<configuration>
<namespace>test-ns</namespace>
<images> (1)
<image>
<name>xml-config-demo:1.0.0</name>
<!-- "alias" is used to correlate to the containers in the pod spec -->
<alias>camel-app</alias>
<build>
<from>fabric8/java</from>
<assembly>
<basedir>/deployments</basedir>
<descriptorRef>artifact-with-dependencies</descriptorRef>
</assembly>
<env>
<JAVA_LIB_DIR>/deployments</JAVA_LIB_DIR>
<JAVA_MAIN_CLASS>org.apache.camel.cdi.Main</JAVA_MAIN_CLASS>
</env>
</build>
</image>
</images>
<resources> (2)
<labels> (3)
<all>
<group>quickstarts</group>
</all>
</labels>
<deployment> (4)
<name>${project.artifactId}</name>
<replicas>1</replicas>
<containers> (5)
<container>
<alias>camel-app</alias> (6)
<ports>
<port>8778</port>
</ports>
<mounts>
<scratch>/var/scratch</scratch>
</mounts>
</container>
</containers>
<volumes> (7)
<volume>
<name>scratch</name>
<type>emptyDir</type>
</volume>
</volumes>
</deployment>
<services> (8)
<service>
<name>camel-service</name>
<headless>true</headless>
</service>
</services>
<serviceAccounts>
<serviceAccount>
<name>build-robot</name>
</serviceAccount>
</serviceAccounts>
</resources>
</configuration>
1 | Standard docker-maven-plugin configuration for building one single Docker image |
2 | Kubernetes / OpenShift resources to create |
3 | Labels which should be applied globally to all resource objects |
4 | Definition of a Deployment to create |
5 | Containers to include in the deployment |
6 | An alias is used to correlate a container’s image with the image definition in the <images> section, where each image carries an alias. Can be omitted if only a single image is used |
7 | Volume definitions used in a Deployment’s ReplicaSet |
8 | One or more Service definitions. |
The XML resource configuration is based on plain Kubernetes resource objects. When targeting OpenShift, Kubernetes resource descriptors will be automatically converted to their OpenShift counterparts, e.g. a Kubernetes Deployment will be converted to an OpenShift DeploymentConfig.
1.4.3. Resource Fragments
The third configuration option is to use an external configuration in the form of YAML resource descriptors which are located in the src/main/fabric8 directory. Each resource gets its own file, which contains a skeleton of a resource descriptor. The plugin will pick up the resources, enrich them and then combine all of them into a single kubernetes.yml and openshift.yml file. Within these descriptor files you can freely use any Kubernetes feature.
Note: In order to support both OpenShift and Kubernetes simultaneously, there is currently no way to specify OpenShift-only features this way, though this might change in future releases.
Let’s have a look at an example from samples/external-resources. This is a plain Spring Boot application, whose images are auto-generated like in the Zero-Config case. The resource fragments are in src/main/fabric8.
spec:
replicas: 1
template:
spec:
volumes:
- name: config
gitRepo:
repository: 'https://github.com/jstrachan/sample-springboot-config.git'
revision: 667ee4db6bc842b127825351e5c9bae5a4fb2147
directory: .
containers:
- volumeMounts:
- name: config
mountPath: /app/config
env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
serviceAccount: ribbon
As you can see, there is no metadata section as would be expected for Kubernetes resources, because it will be automatically added by the fabric8-maven-plugin. The object’s Kind, if not given, is automatically derived from the filename. In this case, the fabric8-maven-plugin will create a Deployment because the file is called deployment.yml. Similar mappings between file names and resource types exist for each supported resource kind, the complete list of which (along with associated abbreviations) can be found in the Appendix.
Now that sidecar containers are supported in this plugin, be careful whenever you supply a container name in a resource fragment. If the container specified in the resource fragment doesn’t have a name, or its name is equal to the default application container name generated by the plugin, it will not be treated as a sidecar but will be merged into the main container. However, you can override the plugin’s default name for the main container via the fabric8.generator.alias property.
|
Additionally, if you name your fragment using a name prefix followed by a dash and the mapped file name, the plugin will automatically use that name for your resource. So, for example, if you name your deployment fragment myapp-deployment.yml, the plugin will name your resource myapp. In the absence of such a provided name, a name will be automatically derived from your project’s metadata (in particular, its artifactId as specified in your POM).
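For illustration, a hypothetical fragment file named myapp-deployment.yml could contain just a minimal skeleton; the plugin would then create a Deployment named myapp from it:
spec:
  replicas: 2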
Also, no image is referenced in this example because the plugin fills in the image details based on the configured image you are building (either from a generator or from a dedicated image plugin configuration, as seen before).
For building images there is also an alternative mode using external Dockerfiles, in addition to the XML based configuration. Refer to fabric8:build for details. |
Enrichment of resource fragments can be fine-tuned by using profile sub-directories. For more details see Profiles.
Now that we have seen some examples of the various ways this plugin can be used, the following sections describe the plugin goals and extension points in detail.
2. Compatibility with OpenShift and Kubernetes
2.1. OpenShift Compatibility
FMP | Openshift 4.2.0 | Openshift 4.1.0 | OpenShift 3.11.0 | OpenShift 3.10.0 | OpenShift 3.9.0 | OpenShift 3.7.0 | OpenShift 3.6.0 | |
---|---|---|---|---|---|---|---|---|
FMP 4.3.1 |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
|
FMP 4.3.0 |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
|
FMP 4.2.0 |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
|
FMP 4.1.0 |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
|
FMP 4.0.0-M2 |
○ |
○ |
○ |
○ |
✓ |
✓ |
✓ |
|
FMP 4.0.0-M1 |
○ |
○ |
○ |
○ |
✓ |
✓ |
✓ |
|
FMP 3.5.42 |
✗ |
✗ |
✗ |
✗ |
○ |
✓ |
✓ |
|
FMP 3.5.41 |
✗ |
✗ |
✗ |
✗ |
○ |
✓ |
✓ |
|
FMP 3.5.40 |
✗ |
✗ |
✗ |
✗ |
○ |
✓ |
✓ |
|
FMP 3.5.39 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.38 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.37 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.36 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.35 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.34 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.33 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
|
FMP 3.5.32 |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
2.2. Kubernetes Compatibility
FMP | Kubernetes 1.15.3 | Kubernetes 1.14.2 | Kubernetes 1.12.0 | Kubernetes 1.11.0 | Kubernetes 1.10.0 | Kubernetes 1.9.0 | Kubernetes 1.8.0 | Kubernetes 1.7.0 | Kubernetes 1.6.0 | Kubernetes 1.5.1 | Kubernetes 1.4.0 |
---|---|---|---|---|---|---|---|---|---|---|---|
FMP 4.3.1 |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 4.3.0 |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 4.2.0 |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 4.1.0 |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 4.0.0 |
○ |
○ |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 4.0.0-M2 |
○ |
○ |
○ |
○ |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 4.0.0-M1 |
○ |
○ |
○ |
○ |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.42 |
✗ |
✗ |
○ |
○ |
○ |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.41 |
✗ |
✗ |
✗ |
✗ |
✗ |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.40 |
✗ |
✗ |
✗ |
✗ |
✗ |
○ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.39 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.38 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.37 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.36 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.35 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.34 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.33 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
FMP 3.5.32 |
✗ |
✗ |
✗ |
✗ |
✗ |
✗ |
✓ |
✓ |
✓ |
✓ |
✓ |
3. Installation
This plugin is available from Maven central and can be connected to pre- and post-integration phase as seen below. The configuration and available goals are described below.
By default, Maven will only search for plugins in the org.apache.maven.plugins and org.codehaus.mojo packages. In order to resolve the provider for the Fabric8 plugin goals, you need to edit ~/.m2/settings.xml and add the io.fabric8 namespace to the <pluginGroups> configuration.
<settings>
...
<pluginGroups>
<pluginGroup>io.fabric8</pluginGroup>
</pluginGroups>
...
</settings>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<configuration>
....
<images>
<!-- A single's image configuration -->
<image>
...
<build>
....
</build>
</image>
....
</images>
</configuration>
<!-- Connect fabric8:resource, fabric8:build and fabric8:helm to lifecycle phases -->
<executions>
<execution>
<id>fabric8</id>
<goals>
<goal>resource</goal>
<goal>build</goal>
<goal>helm</goal>
</goals>
</execution>
</executions>
</plugin>
4. Goals Overview
This plugin supports a rich set of goals for providing a smooth Java developer experience. These goals can be categorized in multiple groups:
-
Build goals are all about creating and managing Kubernetes and OpenShift build artifacts like Docker images or S2I builds.
-
Development goals help not only with deploying resource descriptors to the development cluster but also with managing the lifecycle of the development cluster.
Goal | Description |
---|---|
Build images |
|
Push images to a registry |
|
Create Kubernetes or OpenShift resource descriptors |
|
Apply resources to a running cluster |
|
Run |
Goal | Description |
---|---|
Deploy resource descriptors to a cluster after creating them and building the app. Same as [fabric8:run] except that it runs in the background. |
|
Undeploy and remove resource descriptors from a cluster. |
|
Watch for file changes and perform rebuilds and redeployments |
|
Show the logs of the running application |
|
Enable remote debugging |
Depending on whether the OpenShift or Kubernetes operational mode is used, the workflow and the performed actions differ:
Use Case | Kubernetes | OpenShift |
---|---|---|
Build |
|
|
Deploy |
|
|
5. Build Goals
5.1. fabric8:resource
This chapter is incomplete, but there is work in progress. |
5.1.1. Labels and Annotations
Labels and annotations can be easily added to any resource object. This is best explained by an example.
<plugin>
...
<configuration>
...
<resources>
<labels> (1)
<all> (1)
<property> (2)
<name>organisation</name>
<value>unesco</value>
</property>
</all>
<service> (3)
<property>
<name>database</name>
<value>mysql</value>
</property>
<property>
<name>persistent</name>
<value>true</value>
</property>
</service>
<replicaSet> (4)
...
</replicaSet>
<pod> (5)
...
</pod>
<deployment> (6)
...
</deployment>
</labels>
<annotations> (7)
...
</annotations>
<remotes> (8)
<remote>https://gist.githubusercontent.com/lordofthejars/ac2823cec7831697d09444bbaa76cd50/raw/e4b43f1b6494766dfc635b5959af7730c1a58a93/deployment.yaml</remote>
</remotes>
</resources>
</configuration>
</plugin>
1 | The <labels> section within <resources> contains labels which should be applied to objects of various kinds |
2 | Within <all> labels which should be applied to every object can be specified |
3 | <service> labels are used to label services |
4 | <replicaSet> labels are for replica sets and replication controllers |
5 | <pod> holds labels for pod specifications in replication controller, replica sets and deployments |
6 | <deployment> is for labels on deployments (kubernetes) and deployment configs (openshift) |
7 | The subelements are also available for specifying annotations. |
8 | With <remotes> you can set the location of fragments as a URL. |
Labels and annotations can be specified in free form as a map. In this map the element name is the name of the label or annotation respectively, whereas the content is the value to set.
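As a sketch, mirroring the quickstarts label from the XML configuration example shown earlier, the free-form style looks like this; the element name (here group) becomes the label name:
<labels>
  <all>
    <group>quickstarts</group>
  </all>
</labels>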
The following subelements are possible for <labels> and <annotations>:
Element | Description |
---|---|
all |
All entries specified in the |
deployment |
Labels and annotations applied to |
pod |
Labels and annotations applied to the pod specification as used in |
replicaSet |
Labels and annotations applied to |
service |
Labels and annotations applied to |
5.1.2. Secrets
Once you’ve configured some docker registry credentials in ~/.m2/settings.xml, as explained in the Authentication section, you can create Kubernetes secrets from a server declaration.
XML configuration
You can create a secret using XML configuration in the pom.xml file. It should contain the following fields:
key | required | description |
---|---|---|
dockerServerId |
|
the server id which is configured in
|
name |
|
this will be used as name of the kubernetes secret resource |
namespace |
|
the secret resource will be applied to the specific namespace, if provided |
This is best explained by an example.
<properties>
<docker.registry>docker.io</docker.registry>
</properties>
...
<configuration>
<resources>
<secrets>
<secret>
<dockerServerId>${docker.registry}</dockerServerId>
<name>mydockerkey</name>
</secret>
</secrets>
</resources>
</configuration>
Yaml fragment with annotation
You can create a secret using a YAML fragment. You can reference the docker server id with the annotation maven.fabric8.io/dockerServerId. The YAML fragment file should be put under the src/main/fabric8/ folder.
apiVersion: v1
kind: Secret
metadata:
name: mydockerkey
namespace: default
annotations:
maven.fabric8.io/dockerServerId: ${docker.registry}
type: kubernetes.io/dockercfg
5.1.3. Resource Validation
The resource goal also validates the generated resource descriptors using the API specification of Kubernetes and OpenShift.
Configuration | Description | Default |
---|---|---|
fabric8.skipResourceValidation |
If value is set to |
|
fabric8.failOnValidationError |
If value is set to |
|
fabric8.build.switchToDeployment |
If value is set to |
|
fabric8.openshift.trimImageInContainerSpec |
If value is set to |
|
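For example, assuming the flags behave as their names suggest, validation could be skipped for a single run from the command line:
mvn fabric8:resource -Dfabric8.skipResourceValidation=true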
5.1.4. Route Generation
When the fabric8:resource goal is run, an OpenShift Route descriptor (route.yml) will also be generated along with the service if an OpenShift cluster is targeted.
If you do not want to generate a Route descriptor, you can set the fabric8.openshift.generateRoute property to false.
Configuration | Description | Default |
---|---|---|
fabric8.openshift.generateRoute |
If value is set to |
|
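For example, the property can be passed on the command line for a single run:
mvn fabric8:resource -Dfabric8.openshift.generateRoute=false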
If you do not want to generate a Route descriptor, you can also specify so in the plugin configuration in your POM as seen below.
pom.xml
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<configuration>
<generateRoute>false</generateRoute>
</configuration>
</plugin>
If you are using resource fragments, you can also configure this in your Service resource fragment (e.g. service.yml). You need to add an expose label to the metadata section of your service and set it to false.
metadata:
annotations:
api.service.kubernetes.io/path: /hello
labels:
expose: "false"
spec:
type: LoadBalancer
In case both the label and the property have been set with conflicting values, precedence is given to the property value: if you set the label to true but the property to false, no Route descriptor will be generated.
5.1.5. Other flags
Configuration | Description | Default |
---|---|---|
fabric8.openshift.enableAutomaticTrigger |
If the value is set to |
|
fabric8.skipHealthCheck |
If the value is set to |
|
fabric8.openshift.deployTimeoutSeconds |
The OpenShift deploy timeout in seconds. |
3600 |
fabric8.openshift.imageChangeTrigger |
Add ImageChange triggers to DeploymentConfigs when on openshift. |
|
5.2. fabric8:build
This goal is for building Docker images. Images can be built in two different ways depending on the mode configuration (controlled by the fabric8.mode property).
By default the mode is set to auto. In this case the plugin tries to detect which kind of build should be performed by contacting the API server. If this fails, or if no cluster access is configured (e.g. with oc login), then the mode is set to kubernetes, in which case a standard Docker build or a dockerless Java Image Builder (JIB) build is performed. It can also be forced to openshift to perform an OpenShift build.
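For example, to force an OpenShift build regardless of the detected cluster, the mode can be set via the fabric8.mode property on the command line:
mvn package fabric8:build -Dfabric8.mode=openshift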
5.2.1. Kubernetes Build
If the mode is set to kubernetes, a normal Docker build or a dockerless JIB build is performed. The connection configuration to access the Docker daemon is described in Access Configuration.
Build Options |
Description |
---|---|
|
In order to make the generated images available to the Kubernetes cluster, they need to be pushed to a registry with the goal fabric8:push when a standard Docker build is performed. This is not necessary for single node clusters, though, as there is no need to distribute images. |
|
A Dockerless JIB Build is performed when |
Type |
Description |
---|---|
|
A RegistryImage is built and pushed to the registry if the registry is correctly authenticated. The default registry used is Docker Hub. More about RegistryImage. |
|
A TarImage archive is built if the registry is not authenticated correctly. The tar archive is built and can be found in |
<plugins>
....
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.2</version>
<configuration>
<isJib>true</isJib>
</configuration>
</plugin>
</plugins>
<properties>
....
<fabric8.build.jib>true</fabric8.build.jib>
</properties>
For automatic push to the registry, the following convention for the target image configuration should be followed:
<name>${image.user}/${project.artifactId}:${project.version}</name>
where ${image.user} should be replaced with the registry username.
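A minimal sketch of this convention, assuming a hypothetical Docker Hub account name myuser defined as a Maven property:
<properties>
  <!-- hypothetical registry user name -->
  <image.user>myuser</image.user>
</properties>
...
<image>
  <name>${image.user}/${project.artifactId}:${project.version}</name>
</image>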
Currently, apart from Java 8, base images for Java 11 are supported for the Kubernetes build. To enable Java 11 base images, users need to specify release version 11 in their maven-compiler-plugin configuration in their project POM.
<plugins>
....
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<release>11</release>
</configuration>
</plugin>
</plugins>
Refer to the spring-boot-with-yaml sample for a demo.
5.2.2. OpenShift Build
For the openshift mode, OpenShift specific builds will be performed. These are so-called Binary Source builds ("binary builds" in short), where the data specified with the build configuration is sent directly to OpenShift as a binary archive.
There are two kinds of binary builds supported by this plugin, which can be selected with the buildStrategy configuration option (fabric8.build.strategy property).
buildStrategy |
Description |
---|---|
|
The Source-to-Image (S2I) build strategy uses so called builder images for creating new application images from binary build data. The builder image to use is taken from the base image configuration specified with from in the image build configuration. See below for a list of builder images which can be used with this plugin. |
|
A Docker Build is similar to a normal Docker build except that it is done by the OpenShift cluster and not by a Docker daemon. In addition this build pushes the generated image to the OpenShift internal registry so that it is accessible in the whole cluster. |
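For example, assuming docker is the value that selects the Docker build strategy, it can be chosen via the property on the command line:
mvn package fabric8:build -Dfabric8.build.strategy=docker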
Both build strategies update an Image Stream after the image creation.
The Build Config and Image Streams can be managed by this plugin. If they do not exist, they will be automatically created by fabric8:build. If they already exist, they are reused, except when the buildRecreate configuration option (property fabric8.build.recreate) is set to a value as described in Configuration. Also, if the provided build strategy is different from the one defined in the existing build configuration, the Build Config is edited to reflect the new type (which in turn removes all builds associated with the previous build configuration).
If you want to configure memory/CPU requests and limits related to the BuildConfig, you can provide them either in the plugin configuration or as a resource fragment in the src/main/fabric8 directory. For XML configuration it needs to be done like this:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>${plugin.version}</version>
<configuration>
<resources>
<openshiftBuildConfig>
<limits>
<cpu>100m</cpu>
<memory>256Mi</memory>
</limits>
</openshiftBuildConfig>
</resources>
</configuration>
</plugin>
The image stream created this way can then be directly referenced from Deployment Configuration objects created by fabric8:resource.
By default, image streams are created with a local lookup policy, so that they can also be used by other resources such as Deployments or StatefulSets. This behavior can be turned off by setting the fabric8.s2i.imageStreamLookupPolicyLocal property to false when building the project.
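For example:
mvn package fabric8:build -Dfabric8.s2i.imageStreamLookupPolicyLocal=false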
In order to be able to create these OpenShift resource objects, access to an OpenShift installation is required. The access parameters are described in Access Configuration.
Regardless of which build mode is used, the images are configured in the same way.
The configuration consists of two parts:
* a global section which defines the overall behaviour of this plugin
* and an <images> section which defines how the images should be built
Many of the options below are relevant for the Kubernetes Workflow or the OpenShift Workflow with Docker builds as they influence how the Docker image is built.
For an S2I binary build, on the other hand, the most relevant section is the Assembly one because the build depends on which builder/base image is used and how it interprets the content of the uploaded docker.tar.
5.2.3. Configuration
The following sections describe the usual configuration, which is similar to the build configuration used in the docker-maven-plugin.
In addition a more automatic way for creating predefined build configuration can be performed with so called Generators. Generators are very flexible and can be easily created. These are described in an extra section.
Global configuration parameters specify overall behavior common for all images to build. Some of the configuration options are shared with other goals.
Element | Description | Property |
---|---|---|
apiVersion |
Use this variable if you are using an older version of docker not compatible with the current default use to communicate with the server. |
|
authConfig |
Authentication information when pulling from or pushing to Docker registry. There is a dedicated section Authentication for how doing security. |
|
autoPull |
Decide how to pull missing base images or images to start:
|
|
buildRecreate |
If the effective mode is
The default is |
|
buildStrategy |
If the effective mode is
By default S2I is used. |
|
isJib |
If the effective mode is |
|
forcePull |
Applicable only for OpenShift, S2I build strategy. While creating a BuildConfig, By default, if the builder image specified in the build configuration is available locally on the node, that image will be used. Using forcePull will override the local image and refresh it from the registry the image stream points to. |
|
certPath |
Path to SSL certificate when SSL is used for communicating with the Docker daemon. These certificates are normally stored in |
|
dockerHost |
The URL of the Docker Daemon. If this configuration option is not given, then the optional
|
|
image |
In order to temporarily restrict the operation of plugin goals this configuration option can be used. Typically this will be set via the system property |
|
machine |
Docker machine configuration. See Docker Machine for possible values |
|
The build mode which can be
|
|
|
maxConnections |
Number of parallel connections allowed to be opened to the Docker host. For parsing log output, a connection needs to be kept open (as well as for the wait features), so don’t set this number too low. Default is 100, which should be suitable for most cases. |
|
access |
Group of configuration parameters to connect to Kubernetes/OpenShift cluster |
|
outputDirectory |
Default output directory to be used by this plugin. The default value is |
|
portPropertyFile |
Global property file into which the mapped properties should be written to. The format of this file and its purpose are also described in Port Mapping. |
|
profile |
Profile to which contains enricher and generators configuration. See Profiles for details. |
|
pullSecret |
The name to use for the pullSecret created to pull the base image in case of pulling from a private registry which requires authentication for OpenShift. The default value for the pull registry will be picked from "docker.pull.registry/docker.registry". |
|
registry |
Specify globally a registry to use for pulling and pushing images. See Registry handling for details. |
|
resourceDir |
Directory where fabric8 resources are stored. This is also the directory where a custom profile is looked up. Default is |
|
environment |
Environment name where resources are placed. For example, if you set this property to dev and resourceDir is the default one, Fabric8 will look at src/main/fabric8/dev. If not set then root |
|
skip |
With this parameter the execution of this plugin can be skipped completely. |
|
skipBuild |
If set, no images will be built (which implies also skip.tag) with |
|
skipBuildPom |
If set the build step will be skipped for modules of type |
|
skipTag |
If set to |
|
skipMachine |
Skip using docker machine in any case |
|
sourceDirectory |
Default directory that contains the assembly descriptor(s) used by the plugin. The default value is |
|
verbose |
Boolean attribute for switching on verbose output like the build steps when doing a Docker build. Default is |
|
logDeprecationWarning |
Whether to log Fabric8 Maven Plugin deprecation warning or not. Defaults to |
|
5.2.4. Access Configuration
You can configure parameters to define how Fabric8 is going to connect to the Kubernetes/OpenShift cluster instead of relying on default parameters.
<configuration>
<access>
<username></username>
<password></password>
<masterUrl></masterUrl>
<apiVersion></apiVersion>
</access>
</configuration>
Element | Description | Property (System property or Maven property) |
---|---|---|
username |
Username on which to operate |
|
password |
Password on which to operate |
|
namespace |
Namespace on which to operate |
|
masterUrl |
Master URL on which to operate |
|
apiVersion |
Api version on which to operate |
|
caCertFile |
CaCert File on which to operate |
|
caCertData |
CaCert Data on which to operate |
|
clientCertFile |
Client Cert File on which to operate |
|
clientCertData |
Client Cert Data on which to operate |
|
clientKeyFile |
Client Key File on which to operate |
|
clientKeyData |
Client Key Data on which to operate |
|
clientKeyAlgo |
Client Key Algorithm on which to operate |
|
clientKeyPassphrase |
Client Key Passphrase on which to operate |
|
trustStoreFile |
Trust Store File on which to operate |
|
trustStorePassphrase |
Trust Store Passphrase on which to operate |
|
keyStoreFile |
Key Store File on which to operate |
|
keyStorePassphrase |
Key Store Passphrase on which to operate |
|
5.2.5. Image Configuration
The configuration of how images should be created is defined in a dedicated <images> section. This is specified for each image within the <images> element of the configuration, with one <image> element per image to use.
The <image> element can contain the following sub elements:
Element | Description |
---|---|
name |
Each |
alias |
Shortcut name for an image which can be used for identifying the image within this configuration. This is used when linking images together or for specifying it with the global image configuration element. |
Registry to use for this image. If the |
|
Element which contains all the configuration aspects when doing a fabric8:build. This element can be omitted if the image is only pulled from a registry e.g. as support for integration tests like database images. |
|
removeNamePattern |
When this image is to be removed by [fabric8:remove], use this pattern list to find images to remove rather than just using the name. |
stopNamePattern |
When containers associated with this image will be stopped by [fabric8:stop], use this pattern list to find containers to remove rather than just using the associated container name. |
The <build>
section is mandatory and is explained in below.
<configuration>
....
<images>
<image> (1)
<name>%g/docker-demo:0.1</name> (2)
<alias>service</alias> (3)
<build>....</build> (4)
</image>
<image>
....
</image>
</images>
</configuration>
1 | One or more <image> definitions |
2 | The Docker image name used when creating the image. |
3 | An alias which can be used in other parts of the plugin to reference to this image. This alias must be unique. |
4 | A <build> section as described in Build Configuration |
5.2.6. Build Configuration
There are two different modes how images can be built:
With an inline plugin configuration all information required to build the image is contained in the plugin configuration. By default it’s the standard XML based configuration for the plugin, but it can be switched to a property based configuration syntax as described in the section External configuration. The XML configuration syntax is recommended because of its more structured and typed nature.
When using this mode, the Dockerfile is created on the fly with all instructions extracted from the configuration given.
Alternatively an external Dockerfile template or Docker archive can be used. This mode is switched on by using one of the following configuration options within <build>:
-
contextDir specifies docker build context if an external dockerfile is located outside of Docker build context. If not specified, Dockerfile’s parent directory is used as build context.
-
dockerFile specifies a specific Dockerfile path. The Docker build context directory is set to
contextDir
if given. If not the directory by default is the directory in which the Dockerfile is stored. -
dockerArchive specifies a previously saved image archive to load directly. Such a tar archive can be created with
docker save
or the [fabric8:save] goal. If adockerArchive
is provided, nodockerFile
ordockerFileDir
must be given. -
dockerFileDir (deprecated, use contextDir) specifies a directory containing a Dockerfile that will be used to create the image. The name of the Dockerfile is
Dockerfile
by default but can be also set with the optiondockerFile
(see below).
All paths can be either absolute or relative paths (except when both dockerFileDir and dockerFile are provided, in which case dockerFile must not be absolute). A relative path is looked up in ${project.basedir}/src/main/docker by default. You can easily make it an absolute path by using ${project.basedir} in your configuration.
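As a sketch, an external Dockerfile can be referenced like this (the paths are hypothetical; relative paths are resolved against ${project.basedir}/src/main/docker as described above):
<build>
  <!-- directory used as the Docker build context -->
  <contextDir>${project.basedir}/src/main/docker</contextDir>
  <!-- Dockerfile to use within that context -->
  <dockerFile>Dockerfile.dev</dockerFile>
</build>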
Any additional files located in the dockerFileDir directory will also be added to the build context.
You can also use an assembly if specified in an assembly configuration.
However, you need to add the files on your own in the Dockerfile with an ADD
or COPY
command.
The files of the assembly are stored in a build context relative directory maven/
but can be changed by changing the assembly name with the option <name>
in the assembly configuration.
E.g. the files can be added with
COPY maven/ /my/target/directory
so that the assembly files will end up in /my/target/directory
within the container.
If this directory contains a .maven-dockerignore (or alternatively, a .maven-dockerexclude) file, then it is used for excluding files from the build. Each line in this file is treated as a FileSet exclude pattern as used by the maven-assembly-plugin. It is similar to .dockerignore when using Docker but has a slightly different syntax (hence the different name). The example .maven-dockerexclude or .maven-dockerignore below excludes all compiled Java classes.
.maven-dockerexclude
or .maven-dockerignore
target/classes/** (1)
1 | Exclude all compiled classes |
If this directory contains a .maven-dockerinclude file, then it is used for including only those files in the build. Each line in this file is treated as a FileSet include pattern as used by the maven-assembly-plugin. The example .maven-dockerinclude below shows how to include only jar files that have been built into the Docker build context.
.maven-dockerinclude
target/*.jar (1)
1 | Only add jar files to your Docker build context. |
Except for the assembly configuration all other configuration options are ignored for now.
When only a single image should be built with a Dockerfile, no XML configuration is needed at all. All that needs to be done is to place a Dockerfile in the top-level module directory, alongside pom.xml.
You can still configure global aspects in the plugin configuration, but as soon as you add an <image> element in the XML configuration, you also need to configure the build explicitly.
The image name is by default set from the Maven coordinates (%g/%a:%l, see Image Name for an explanation of the parameters, which are essentially the Maven GAV). This name can be set with the property docker.name.
If you want to add some <run> configuration to this image for starting it with docker:run, then you can add an image configuration without a <build> section, in which case the Dockerfile will be picked up, too. This works only for a single image, though.
fabric8-maven-plugin filters the given Dockerfile with Maven properties, much like the maven-resources-plugin does. Filtering is enabled by default and can be switched off with the build config <filter>false</filter>. Properties which should be replaced are specified with the ${..} syntax.
Replacement includes Maven project properties such as ${project.artifactId}, properties set in the build, command-line properties, and system properties. Unresolved properties remain untouched.
This partial replacement means that you can easily mix it with Docker build arguments and environment variable references, but you need to be careful.
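A small hypothetical Dockerfile sketch showing this: with the default filter, ${project.version} is replaced by Maven at build time, while anything that is not a resolvable property is left untouched.
FROM openjdk:8-jre
# replaced by Maven property filtering
LABEL version="${project.version}"
# assembly files are copied from the maven/ build context directory
COPY maven/ /deployments/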
If you want to be more explicit about the property delimiter to clearly separate Docker properties and Maven properties, you can redefine the delimiter. In general, the filter option can be specified the same way as delimiters in the resources plugin. In particular, if this configuration contains a *, then the parts left and right of the asterisk are used as delimiters.
For example, the default <filter>${*}</filter> parses Maven properties in the format that we know. If you specify a single character for <filter>, then this delimiter is taken for both the start and the end. E.g. a <filter>@</filter> triggers on parameters in the format @…@, much like in the maven-invoker-plugin. Use something like this if you want to clearly separate from Docker build args.
This form of property replacement works for the Dockerfile only. For replacing other data in other files targeted for the Docker image, please use the maven-resources-plugin or an assembly configuration with filtering to make them available in the Docker build context.
The following example uses a Dockerfile in the directory src/main/docker/demo and replaces all properties in the format @property@ within the Dockerfile.
<plugin>
<configuration>
<images>
<image>
<name>user/demo</name>
<build>
<dockerFileDir>demo</dockerFileDir>
<filter>@</filter>
</build>
</image>
</images>
</configuration>
...
</plugin>
This plugin supports so-called dmp-plugins which are used during the build phase. dmp-plugins are enabled by just declaring a dependency in the plugin declaration:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<dependencies>
<dependency>
<groupId>io.fabric8</groupId>
<artifactId>run-java-sh</artifactId>
<version>1.2.2</version>
</dependency>
</dependencies>
</plugin>
These plugins contain a descriptor META-INF/maven/io.fabric8/dmp-plugin with class names, line by line:
io.fabric8.runsh.RunShLoader
During a build with docker:build, those classes are loaded and certain fixed methods are called.
The following methods are supported:
Method | Description |
---|---|
addExtraFiles |
A static method called by dmp with a single |
If a configured plugin does not provide a method with this name and signature, it will simply be ignored. Also, no interface needs to be implemented, to keep the coupling low.
The following official dmp-plugins are known and supported:
Name | G,A | Description |
---|---|---|
|
General purpose startup script for running Java applications. The dmp plugin creates a |
Check out samples/run-java
for a fully working example.
All build relevant configuration is contained in the <build>
section
of an image configuration. The following configuration options are supported:
Element | Description |
---|---|
specifies the assembly configuration as described in Build Assembly |
|
Map specifying the value of Docker build args
which should be used when building the image with an external Dockerfile which uses build arguments. The key-value syntax is the same as when defining Maven properties (or |
|
buildOptions |
Map specifying the build options to provide to the docker daemon when building the image. These options map to the ones listed as query parameters in the
Docker Remote API and are restricted to simple options
(e.g.: memory, shmsize). If you use the respective configuration options for build options natively supported by the build configuration (i.e. |
cleanup |
Cleanup dangling (untagged) images after each build (including any containers created from them). Default is |
Path to a directory used for the build’s context. You can specify the |
|
A command to execute by default (i.e. if no command is provided when a container for this image is started). See Startup Arguments for details. |
|
compression |
The compression mode how the build archive is transmitted to the docker daemon ( |
dockerFile |
Path to a |
dockerFileDir (deprecated in favor of contextDir) |
Path to a directory holding a |
dockerArchive |
Path to a saved image archive which is then imported. See Docker archive for details. |
An entrypoint allows you to configure a container that will run as an executable. See Startup Arguments for details. |
|
The environments as described in Setting Environment Variables and Labels. |
|
filter |
Enable and set the delimiters for property replacements. By default properties in the format |
The base image which should be used for this image. If not given this default to |
|
Extended definition for a base image. This field holds a map of defined in
A provided |
|
Definition of a health check as described in Healthcheck |
|
imagePullPolicy |
Specific pull policy for the base image. This overwrites any global pull policy. See the globale configuration option imagePullPolicy for the possible values and the default. |
Scan the archive specified in |
|
Labels as described in Setting Environment Variables and Labels. |
|
maintainer |
The author ( |
noCache |
Don’t use Docker’s build cache. This can be overwritten by setting a system property |
cacheFrom |
A list of |
optimise |
if set to true then it will compress all the |
ports |
The exposed ports which is a list of |
shell |
Shell to be used for the runCmds. It contains arg elements which are defining the executable and its params. |
runCmds |
Commands to be run during the build process. It contains run elements which are passed to the shell. Whitespace is trimmed from each element and empty elements are ignored. The run commands are inserted right after the assembly and after workdir into the Dockerfile. This tag is not to be confused with the |
skip |
if set to true disables building of the image. This config option is best used together with a maven property |
skipTag |
If set to |
tags |
List of additional |
user |
User to which the Dockerfile should switch to the end (corresponds to the |
volumes |
List of |
workdir |
Directory to change to when starting the container. |
From this configuration this Plugin creates an in-memory Dockerfile, copies over the assembled files and calls the Docker daemon via its remote API.
<build>
<from>java:8u40</from>
<maintainer>[email protected]</maintainer>
<tags>
<tag>latest</tag>
<tag>${project.version}</tag>
</tags>
<ports>
<port>8080</port>
</ports>
<volumes>
<volume>/path/to/expose</volume>
</volumes>
<buildOptions>
<shmsize>2147483648</shmsize>
</buildOptions>
<shell>
<exec>
<arg>/bin/sh</arg>
<arg>-c</arg>
</exec>
</shell>
<runCmds>
<run>groupadd -r appUser</run>
<run>useradd -r -g appUser appUser</run>
</runCmds>
<entryPoint>
<!-- exec form for ENTRYPOINT -->
<exec>
<arg>java</arg>
<arg>-jar</arg>
<arg>/opt/demo/server.jar</arg>
</exec>
</entryPoint>
<assembly>
<mode>dir</mode>
<targetDir>/opt/demo</targetDir>
<descriptor>assembly.xml</descriptor>
</assembly>
</build>
In order to see the individual build steps you can switch on verbose mode, either by setting the property docker.verbose or by using <verbose>true</verbose> in the Global configuration.
5.2.7. Assembly
The <assembly> element within <build> has an XML structure and defines how build artifacts and other files can enter the Docker image.
Element | Description |
---|---|
name |
Assembly name, which is |
targetDir |
Directory under which the files and artifacts contained in the assembly will be copied within the container. The default value for this is |
Inlined assembly descriptor as described in Assembly Descriptor below. |
|
Path to an assembly descriptor file, whose format is described Assembly Descriptor below. |
|
Alias to a predefined assembly descriptor. The available aliases are also described in Assembly Descriptor below. |
|
dockerFileDir |
Directory containing an external Dockerfile. This option is deprecated, please use <dockerFileDir> directly in the <build> section. |
exportTargetDir |
Specification whether the |
ignorePermissions |
Specification if existing file permissions should be ignored
when creating the assembly archive with a mode |
mode |
Mode how the assembled files should be collected:
The archive formats have the advantage that file permissions can be preserved better (since the copying is independent from the underlying file systems), but they might trigger internal bugs from the Maven assembler (as has been reported in #171) |
permissions |
Permission of the files to add:
|
tarLongFileMode |
Sets the TarArchiver behaviour on file paths with more than 100 characters length. Valid values are: "warn"(default), "fail", "truncate", "gnu", "posix", "posix_warn" or "omit" |
User and/or group under which the files should be added. The user must already exist in the base image. It has the general format If a third part is given, then the build changes to user For example, the image |
In the event you do not need to include any artifacts with the image, you may safely omit this element from the configuration.
Assembly Descriptor
Using the inline, descriptor or descriptorRef option it is possible to bring local files, artifacts and dependencies into the running Docker container. A descriptor points to a file describing the data to put into an image to build. It has the same format as for creating assemblies with the maven-assembly-plugin, with the following exceptions:
-
<formats> are ignored, the assembly will always use a directory when preparing the data container (i.e. the format is fixed to dir)
-
The <id> is ignored since only a single assembly descriptor is used (no need to distinguish multiple descriptors)
You can also inline the assembly description with an inline description directly in the pom file. Adding the proper namespace even allows for IDE autocompletion. As an example, refer to the profile inline in the data-jolokia-demo's pom.xml.
Alternatively descriptorRef
can be used with the name of a
predefined assembly descriptor. The following symbolic names can be
used for descriptorRef
:
Assembly Reference | Description |
---|---|
artifact-with-dependencies |
Attaches project’s artifact and all its dependencies. Also, when a |
artifact |
Attaches only the project’s artifact but no dependencies. |
project |
Attaches the whole Maven project but without the |
rootWar |
Copies the artifact as |
<images>
<image>
<build>
<assembly>
<descriptorRef>artifact-with-dependencies</descriptorRef>
.....
will add the created artifact with the name ${project.build.finalName}.${artifact.extension} and all jar dependencies in the targetDir (which is /maven by default).
All declared files end up in the configured targetDir (or /maven by default) in the created image.
If the assembly references the artifact to be built with this pom, it is required that the package phase is included in the run. Otherwise the artifact file can’t be found by docker:build. This is an old outstanding issue of the assembly plugin which probably can’t be fixed because of the way Maven works. We tried hard to work around this issue and in 90% of all cases you won’t experience any problem. However, the following warning might appear, which can lead to the error shown below:
[WARNING] Cannot include project artifact: io.fabric8:helloworld:jar:0.20.0; it doesn't have an associated file or directory.
[WARNING] The following patterns were never triggered in this artifact inclusion filter:
o 'io.fabric8:helloworld'
[ERROR] DOCKER> Failed to create assembly for docker image (with mode 'dir'): Error creating assembly archive docker: You must set at least one file.
then you have two options to fix this:
-
Call mvn package fabric8:build to explicitly run "package" and "fabric8:build" in a chain.
-
Bind build to an execution phase in the plugin’s definition. When set in an execution, fabric8:build binds to the install phase by default. Then you can use a plain mvn install for building the artifact and creating the image.
<executions>
<execution>
<id>docker-build</id>
<goals>
<goal>build</goal>
</goals>
</execution>
</executions>
In the following example a dependency from the pom.xml is included and
mapped to the name jolokia.war
. With this configuration you will end
up with an image, based on busybox
which has a directory /maven
containing a single file jolokia.war
. This volume is also exported
automatically.
<assembly>
<inline>
<dependencySets>
<dependencySet>
<includes>
<include>org.jolokia:jolokia-war</include>
</includes>
<outputDirectory>.</outputDirectory>
<outputFileNameMapping>jolokia.war</outputFileNameMapping>
</dependencySet>
</dependencySets>
</inline>
</assembly>
Another container can now connect to the volume and 'mount' the /maven directory. A container from consol/tomcat-7.0 will look into /maven and copy over everything to /opt/tomcat/webapps before starting Tomcat.
If you are using the artifact
or artifact-with-dependencies
descriptor, it is
possible to change the name of the final build artifact with the following:
<build>
<finalName>your-desired-final-name</finalName>
...
</build>
Please note that, based upon the documentation listed here, there is no guarantee that the plugin creating your artifact will honor finalName, in which case you will need to use a custom descriptor as above to achieve the desired naming.
Currently the jar and war plugins properly honor the usage of finalName.
5.2.8. Environment and Labels
When creating a container one or more environment variables can be set via configuration with the env
parameter
<env>
<JAVA_HOME>/opt/jdk8</JAVA_HOME>
<CATALINA_OPTS>-Djava.security.egd=file:/dev/./urandom</CATALINA_OPTS>
</env>
If you put this configuration into profiles you can easily create various test variants with a single image (e.g. by switching the JDK or whatever).
It is also possible to set the environment variables from the outside of the plugin’s configuration with the parameter envPropertyFile
. If given, this property file is used to set the environment variables, where each key and value specify an environment variable. Environment variables specified in this file override any environment variables specified in the configuration.
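As an illustration, a minimal sketch of this setup, assuming the envPropertyFile parameter is placed next to env within the image’s <build> section (the file path is an example value):
<images>
<image>
<build>
<env>
<JAVA_HOME>/opt/jdk8</JAVA_HOME>
</env>
<!-- example path; each line of the file has the form KEY=value -->
<envPropertyFile>src/main/docker/environment.properties</envPropertyFile>
</build>
</image>
</images>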
Labels can be set inline the same way as environment variables:
<labels>
<com.example.label-with-value>foo</com.example.label-with-value>
<version>${project.version}</version>
<artifactId>${project.artifactId}</artifactId>
</labels>
5.2.9. Startup Arguments
Using entryPoint
and cmd
it is possible to specify the entry point
or cmd for a container.
The difference is that an entrypoint
is the command that is always executed, with the cmd
as argument. If no entryPoint
is provided, it defaults to /bin/sh -c
so any cmd
given is executed with a shell. The arguments given to docker run
are always given as arguments to the
entrypoint
, overriding any given cmd
option. On the other hand if no extra arguments are given to docker run
the default cmd
is used as argument to entrypoint
.
An entry point or command can be specified in two alternative formats:
Mode | Description |
---|---|
shell |
Shell form in which the whole line is given to |
exec |
List of arguments (with inner |
Either the shell or the exec form should be specified.
<entryPoint>
<!-- shell form -->
<shell>java -jar $HOME/server.jar</shell>
</entryPoint>
or
<entryPoint>
<!-- exec form -->
<exec>
<arg>java</arg>
<arg>-jar</arg>
<arg>/opt/demo/server.jar</arg>
</exec>
</entryPoint>
This can also be written more densely:
<!-- shell form -->
<entryPoint>java -jar $HOME/server.jar</entryPoint>
or
<entryPoint>
<!-- exec form -->
<arg>java</arg>
<arg>-jar</arg>
<arg>/opt/demo/server.jar</arg>
</entryPoint>
- INFO
-
Startup arguments are not used in S2I builds
5.2.10. Build Args
As described in section Configuration for external Dockerfiles, Docker build args can be used. In addition to the configuration within the plugin configuration you can also use properties to specify them:
-
Set a system property when running Maven, e.g.:
-Ddocker.buildArg.http_proxy=http://proxy:8001
. This is especially useful when using predefined Docker arguments for setting proxies transparently. -
Set a project property within the
pom.xml
, e.g.:
<docker.buildArg.myBuildArg>myValue</docker.buildArg.myBuildArg>
Please note that the system property setting will always override the project property. Also note that for all properties which are not predefined Docker properties, the external Dockerfile must contain a corresponding ARG
instruction.
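As a sketch of the plugin-configuration variant, assuming the buildArgs map from the underlying docker-maven-plugin is used inside the image’s <build> section (the proxy value is an example):
<images>
<image>
<build>
<buildArgs>
<!-- each child element becomes a Docker build arg; the value is an example -->
<http_proxy>http://proxy:8001</http_proxy>
</buildArgs>
</build>
</image>
</images>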
5.3. fabric8:push
Section needs review and rearrangement. |
This goal uploads to the registry those images which have a <build>
configuration section. The images to push can be restricted with
the global option filter
(see Global Configuration for details). The registry to push to is by default docker.io
but can be specified as part of the image’s name in the Docker way. E.g. docker.test.org:5000/data:1.5
will push the image data
with tag 1.5
to the registry docker.test.org
at port 5000
. Security information (i.e. user and password) can be specified in multiple ways as described in section Authentication.
By default a progress meter is printed out on the console, which is omitted when using Maven in batch mode (option -B
). A very simplified progress meter is provided when using no color output (i.e. with -Ddocker.useColor=false
).
Element | Description | Property |
---|---|---|
skipPush |
If set to |
|
skipTag |
If set to |
|
pushRegistry |
The registry to use when pushing the image. See Registry Handling for more details. |
|
retries |
How often a push should be retried before giving up. This is useful for flaky registries which tend to return 500 error codes from time to time. The default is 0, which means no retry at all. |
|
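A minimal sketch of how these options could be set, assuming they go directly into the plugin’s <configuration> section (the values are examples):
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<configuration>
<!-- example values for the push related options listed above -->
<pushRegistry>docker.test.org:5000</pushRegistry>
<retries>3</retries>
<skipTag>true</skipTag>
</configuration>
</plugin>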
5.4. fabric8:apply
This goal applies the resources created with fabric8:resource to a connected Kubernetes or OpenShift cluster. It’s similar to fabric8:deploy but does not perform the full deployment cycle of creating the resources, building the application image and sending the resource descriptors to the cluster. This goal can be easily bound to <executions>
within the plugin’s configuration and binds by default to the install
lifecycle phase.
mvn fabric8:apply
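A minimal sketch of such a binding (the execution id is an example), so that a plain mvn install also applies the generated resources:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<executions>
<execution>
<id>fabric8-apply</id>
<goals>
<goal>resource</goal>
<goal>build</goal>
<goal>apply</goal>
</goals>
</execution>
</executions>
</plugin>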
5.5. fabric8:resource-apply
This goal will generate the kubernetes resources via the fabric8:resource goal and apply them into the current kubernetes cluster.
mvn fabric8:resource-apply
It’s usually simpler to just use the fabric8:deploy goal which performs a build, creates the docker image and runs fabric8:resource-apply
:
mvn fabric8:deploy
However if you have built your code and docker image but find some issue with the generated manifests; you can update the configuration of the fabric8:resource goal in your pom.xml
or modify the YAML files in src/main/fabric8
and then run:
mvn fabric8:resource-apply
This will skip running unit tests and the docker build via fabric8:build and will only regenerate the manifests and apply them. This can help speed up the round-trip time when fixing resource generation issues.
Note that to use this goal you must have the fabric8:resource goal bound to an execution in your pom.xml, e.g. like this:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<!-- Connect fabric8:resource to the lifecycle phases -->
<executions>
<execution>
<id>fabric8</id>
<goals>
<goal>resource</goal>
</goals>
</execution>
</executions>
</plugin>
5.6. fabric8:helm
This goal is deprecated and might vanish in version 4.0 of this plugin. It’s not yet decided whether we fix this goal for 4.0 or drop it. Please raise your voice if you want to keep it. Even better, if you want to support this goal, we are always looking for contributions ;-) |
This goal is for creating Helm charts for your Maven project so that you can install, update or delete your app in Kubernetes using Helm.
For creating a Helm chart you simply call fabric8:helm
goal on the command line:
mvn fabric8:resource fabric8:helm
The fabric8:resource
goal is required to create the resource descriptors which are included in the Helm chart. If you have already created the resources then you can omit this goal.
The configuration happens in a <helm>
section within the plugin’s configuration:
<plugin>
<configuration>
<helm>
<chart>Jenkins</chart>
<keywords>ci,cd,server</keywords>
</helm>
...
</configuration>
</plugin>
This configuration section knows the following subelements for configuring your Helm chart.
Element | Description | Property |
---|---|---|
chart |
The Chart name, which is |
|
type |
For which platform to generate the chart. By default this is |
|
sourceDir |
Where to find the resource descriptors generated with |
|
outputDir |
Where to create the Helm chart, which is |
|
keywords |
Comma separated list of keywords to add to the chart |
|
engine |
The template engine to use |
|
chartExtension |
The Helm chart file extension, default value is |
|
As a next step you can install this chart via the helm command line tool as follows:
helm install target/fabric8/helm/kubernetes
To add the helm
goal to your project so that it is automatically executed just add the helm
goal to the executions
section of the fabric8-maven-plugin
section of your pom.xml
.
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<!-- ... -->
<executions>
<execution>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
<goal>deploy</goal>
</goals>
</execution>
</executions>
</plugin>
In addition this goal will also create a tar-archive below ${basedir}/target
which contains the chart with its template. This tar is added as an artifact with classifier helm
to the build (helmshift
for the OpenShift mode).
6. Development Goals
6.1. fabric8:deploy
This is the main goal for building your docker image, generating the kubernetes resources and deploying them into the cluster (provided your pom.xml is set up correctly; keep reading :)).
mvn fabric8:deploy
This goal is designed to run fabric8:build and fabric8:resource before the deploy, if (and only if) you have the goals bound in your pom.xml:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<!-- Connect fabric8:resource, fabric8:build and fabric8:helm to lifecycle phases -->
<executions>
<execution>
<id>fabric8</id>
<goals>
<goal>resource</goal>
<goal>build</goal>
<goal>helm</goal>
</goals>
</execution>
</executions>
</plugin>
Effectively this builds your project then invokes these goals:
By default the resource goal generates a route.yml
for a service if you have not made any configuration changes. Sometimes you may want to generate route.yml but not create the route resource on the OpenShift cluster. This can be achieved with the following configuration:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<configuration>
<enricher>
<excludes>
<exclude>f8-expose</exclude>
</excludes>
</enricher>
</configuration>
</plugin>
6.2. fabric8:undeploy
This goal is for deleting the kubernetes resources that you deployed via the fabric8:run or fabric8:deploy goals.
It iterates through all the resources generated by the fabric8:resource goal and deletes them from your current kubernetes cluster.
mvn fabric8:undeploy
6.3. fabric8:log
This goal tails the log of the app that you deployed via the fabric8:deploy goal
mvn fabric8:log
You can then terminate the output by hitting Ctrl+C
If you wish to get the log of the app and then terminate immediately then try:
mvn fabric8:log -Dfabric8.log.follow=false
This lets you pipe the output into grep or some other tool
mvn fabric8:log -Dfabric8.log.follow=false | grep Exception
If your app is running in multiple pods you can configure the pod name to log via the fabric8.log.pod
property, otherwise it defaults to the latest pod:
mvn fabric8:log -Dfabric8.log.pod=foo
If your pod has multiple containers you can configure the container name to log via the fabric8.log.container
property, otherwise it defaults to the first container:
mvn fabric8:log -Dfabric8.log.container=foo
6.4. fabric8:debug
This goal enables debugging in your Java app and then port forwards from localhost to the latest running pod of your app so that you can easily debug your app from your Java IDE.
mvn fabric8:debug
Then follow the on screen instructions.
The default debug port is 5005
. If you wish to change the local port to use for debugging then pass in the fabric8.debug.port
parameter:
mvn fabric8:debug -Dfabric8.debug.port=8000
Then in your IDE you start a Remote debug execution using this remote port using localhost and you should be able to set breakpoints and step through your code.
This lets you debug your apps while they are running inside a Kubernetes cluster - for example if you wish to debug a REST endpoint while another pod is invoking it.
Debug is enabled via the JAVA_ENABLE_DEBUG
environment variable being set to true
. This environment variable is used by all the standard Java docker images used by Spring Boot, flat classpath and executable JAR projects and Wildfly Swarm. If you use your own custom docker base image you may wish to respect this environment variable as well to enable debugging.
6.4.1. Speeding up debugging
By default the fabric8:debug
goal has to edit your Deployment to enable debugging and then wait for a pod to start. During development you might frequently want to debug things and want to speed this up a bit.
If so you can enable debug mode for each build via the fabric8.debug.enabled
property.
e.g. you can pass this property on the command line:
mvn fabric8:deploy -Dfabric8.debug.enabled=true
Or you can add something like this to your ~/.m2/settings.xml
file so that you enable debug mode for all maven builds on your laptop by using a profile:
<?xml version="1.0"?>
<settings>
<profiles>
<profile>
<id>enable-debug</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<properties>
<fabric8.debug.enabled>true</fabric8.debug.enabled>
</properties>
</profile>
</profiles>
</settings>
Then whenever you type the fabric8:debug
goal there is no need for the maven goal to edit the Deployment
and wait for a pod to restart; we can immediately start debugging when you type:
mvn fabric8:debug
6.4.2. Debugging with suspension
The fabric8:debug
goal allows you to attach a remote debugger to a running container, but the application is free to execute when the debugger is not attached.
In some cases, you may want to have complete control on the execution, e.g. to investigate the application behavior at startup. This can be done using the fabric8.debug.suspend
flag:
mvn fabric8:debug -Dfabric8.debug.suspend
The suspend flag will set the JAVA_DEBUG_SUSPEND
environment variable to true
and JAVA_DEBUG_SESSION
to a random number in your deployment.
When the JAVA_DEBUG_SUSPEND
environment variable is set, standard docker images will use suspend=y
in the JVM startup options for debugging.
The JAVA_DEBUG_SESSION
environment variable is always set to a random number (each time you run the debug goal with the suspend flag) in order to tell Kubernetes to restart the pod.
The remote application will start only after a remote debugger is attached. You can use the remote debugging feature of your IDE to connect (on localhost
, port 5005
by default).
The fabric8.debug.suspend flag will disable readiness probes in the Kubernetes deployment in order to start port-forwarding during the early phases of application startup
|
6.5. fabric8:watch
This goal is used to monitor the project workspace for changes and automatically trigger a redeploy of the application running on Kubernetes.
In order to use fabric8:watch for spring-boot, you need to make sure that devtools
is included in the repackaged
archive, as shown in the following listing:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludeDevtools>false</excludeDevtools>
</configuration>
</plugin>
Then you need to set a spring.devtools.remote.secret
in application.properties, as shown in the following example:
spring.devtools.remote.secret=mysecret
Before entering the watch mode, this goal must generate the docker image and the Kubernetes resources (optionally including some development libraries/configuration), and deploy the app on Kubernetes. Lifecycle bindings should be configured as follows to allow the generation of such resources.
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<!-- ... -->
<executions>
<execution>
<goals>
<goal>resource</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
</plugin>
For any application having resource
and build
goals bound to the lifecycle, the following
command can be used to run the watch task.
mvn fabric8:watch
This plugin supports different watcher providers, enabled automatically if the project satisfies certain conditions.
Watcher providers can also be configured manually. The Generator example is a good blueprint, simply replace <generator>
with <watcher>
. The configuration is structurally identical.
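For illustration, a minimal sketch of a manual watcher configuration, assuming a watcher named spring-boot analogous to the generator of the same name:
<configuration>
<watcher>
<includes>
<include>spring-boot</include>
</includes>
<config>
<spring-boot>
<!-- watcher specific options would go here -->
</spring-boot>
</config>
</watcher>
</configuration>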
6.5.1. Spring Boot
This watcher is enabled by default for all Spring Boot projects. It performs the following actions:
-
deploys your application with Spring Boot DevTools enabled
-
tails the log of the latest running pod for your application
-
watches the local development build of your Spring Boot based application and then triggers a reload of the application when there are changes
You can try it on any spring boot application via:
mvn fabric8:watch
Once the goal starts up the Spring Boot RemoteSpringApplication, it will watch for local development changes.
e.g. if you edit the java code of your app and then build it via something like this:
mvn package
You should see your app reload on the fly in the shell running the fabric8:watch goal!
There is also support for LiveReload.
6.5.2. Docker Image
This is a generic watcher that can be used in Kubernetes mode only. Once activated, it listens for changes in the project workspace in order to trigger a redeploy of the application.
The watcher can be activated e.g. by running this command in another shell:
mvn package
The watcher will detect that the binary artifact has changed and will first rebuild the docker image, then start a redeploy of the Kubernetes pod.
It uses the watch feature of the docker-maven-plugin under the hood.
7. Generators
The usual way to define Docker images is with the plugin configuration as explained in fabric8:build. This can either be done completely within the pom.xml
or by referring to an external Dockerfile. Since fabric8-maven-plugin includes docker-maven-plugin, the way in which images are built is identical.
However, this plugin provides an additional route for defining image configurations. This is done by so-called Generators. A generator is a Java component providing an auto-detection mechanism for certain build types like a Spring Boot build or a plain Java build. As soon as a Generator detects that it is applicable it will be called with the list of images configured in the pom.xml
. Typically a generator dynamically creates a new image configuration only if this list is empty, but a generator is also free to add new images to an existing list or even change the current image list.
You can easily create your own generator as explained in Generator API. This section will focus on existing generators and how you can configure them.
The included Generators are enabled by default, but you can easily disable them or select only a certain set of generators. Each generator has a unique name.
The generator configuration is embedded in a <generator>
configuration section:
<plugin>
....
<configuration>
....
<generator> (1)
<includes> (2)
<include>spring-boot</include>
</includes>
<config> (3)
<spring-boot> (4)
<alias>ping</alias>
</spring-boot>
</config>
</generator>
</configuration>
</plugin>
1 | Start of generators' configuration. |
2 | Generators can be included and excluded. Includes have precedence, and the generators are called in the given order. |
3 | Configuration for individual generators. |
4 | The config is a map of supported config values. Each section is embedded in a tag named after the generator. |
The following sub-elements are supported:
Element | Description |
---|---|
|
Contains one or more |
|
Holds one or more |
|
Configuration for all generators. Each generator supports a specific set of configuration values as described in the documentation. The subelements of this section are generator names to configure. E.g. for generator |
Besides specifying the generator configuration in the plugin’s configuration, it can also be set directly with properties:
mvn -Dfabric8.generator.spring-boot.alias="myapp"
The general scheme is a prefix fabric8.generator.
followed by the unique generator name and then the generator specific key.
In addition to the provided default Generators described in the next section Default Generators, custom generators can be easily added. There are two ways to include generators:
You can declare the JARs holding the generators as dependencies of this plugin, as shown in this example:
<plugin>
<artifactId>fabric8-maven-plugin</artifactId>
....
<dependencies>
<dependency>
<groupId>io.acme</groupId>
<artifactId>mygenerator</artifactId>
<version>1.0</version>
</dependency>
</dependencies>
</plugin>
Alternatively, if your application code comes with a custom generator, you can set the global configuration option useProjectClasspath
(property: fabric8.useProjectClasspath
) to true. In this case the project artifact and its dependencies are also scanned for Generators. See Generator API for details on how to write your own generators.
7.1. Default Generators
All default generators examine the build information for certain aspects and generate a Docker build configuration on the fly. They can be configured to a certain degree, where the configuration is generator specific.
Generator | Name | Description |
---|---|---|
|
Generic generator for flat classpath and fat-jar Java applications |
|
|
Spring Boot specific generator |
|
|
Generator for Wildfly Swarm apps |
|
|
Generator for Thorntail v2 apps |
|
|
Generator for Vert.x applications |
|
|
Generator for Karaf based apps |
|
|
Generator for WAR based applications supporting Tomcat, Jetty and Wildfly base images |
|
|
Generator for Open Liberty applications |
There are some configuration options which are shared by all generators:
Element | Description | Property |
---|---|---|
add |
When this is set to |
|
alias |
An alias name for referencing this image in various other parts of the configuration. This is also used in the log output. The default alias name is the name of the generator. |
|
from |
This is the base image from where to start when creating the images. By default the generators make an opinionated decision for the base image which are described in the respective generator section. |
|
fromMode |
When using OpenShift S2I builds the base image can be either a plain docker image (mode: |
|
name |
The Docker image name used when doing Docker builds. For OpenShift S2I builds it’s the name of the image stream. This can be a pattern as described in Name Placeholders. The default is |
|
registry |
An optional Docker registry used when doing Docker builds. It has no effect for OpenShift S2I builds. |
|
When used as properties they can be directly referenced with the property names above.
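As an example, a minimal sketch of setting some of these common options for the spring-boot generator (the base image and name pattern are example values):
<configuration>
<generator>
<config>
<spring-boot>
<!-- example values for the common options listed above -->
<from>fabric8/java-centos-openjdk8-jdk</from>
<name>%g/%a:%l</name>
<alias>myapp</alias>
</spring-boot>
</config>
</generator>
</configuration>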
7.1.1. Java Applications
One of the most generic Generators is the java-exec
generator.
It is responsible for starting up arbitrary Java applications.
It knows how to deal with fat-jar applications where the application and all dependencies are included within a single jar and the MANIFEST.MF
within the jar references a main class.
But also flat classpath applications, where the dependencies are separate jar files and a main class is given.
If no main class is explicitly configured, the plugin first attempts to locate a fat jar.
If the Maven build creates a JAR file with a META-INF/MANIFEST.MF
containing a Main-Class
entry, then this is considered to be the fat jar to use.
If there is more than one such file, the largest one is used.
If a main class is configured (see below) then the image configuration will contain the application jar plus all dependency jars.
If no main class is configured as well as no fat jar being detected, then this Generator tries to detect a single main class by searching for public static void main(String args[])
among the application classes. If exactly one class is found this is considered to be the main class. If no or more than one is found the Generator finally does nothing.
It will use the following base images by default which, as explained above, can be changed with the from
configuration.
Docker Build | S2I Build | ImageStream | |
---|---|---|---|
Community |
|
|
|
Red Hat |
|
|
|
These images always refer to the latest tag. The Red Hat base images are selected when the plugin itself is a Red Hat supported version (which is detected by the plugin’s version number).
When a fromMode
of istag
is used to specify an ImageStreamTag
and when no from
is given, then as default the ImageStreamTag
fis-java-openshift
in the namespace openshift
is chosen. If you are using a Red Hat variation of this plugin (i.e. if the version ends with -redhat
), then a fromMode
of istag
is the default, otherwise it’s fromMode = "docker"
which uses a plain Docker image reference for the S2I builder image.
Beside the common configuration parameters described in the table common generator options the following additional configuration options are recognized:
Element | Description | Default |
---|---|---|
assemblyRef |
If a reference to an assembly is given, then this is used without trying to detect the artifacts to include. |
|
targetDir |
Directory within the generated image into which the detected artifacts are placed. Change this only if the base image is changed, too. |
|
jolokiaPort |
Port of the Jolokia agent exposed by the base image. Set this to 0 if you don’t want to expose the Jolokia port. |
8778 |
mainClass |
Main class to call. If not given first a check is performed to detect a fat-jar (see above). Next a class is looked up by scanning |
|
prometheusPort |
Port of the Prometheus jmx_exporter exposed by the base image. Set this to 0 if you don’t want to expose the Prometheus port. |
9779 |
webPort |
Port to expose as service, which is supposed to be the port of a web application. Set this to 0 if you don’t want to expose a port. |
8080 |
The exposed ports are typically used later on by Enrichers to create default Kubernetes or OpenShift services.
You can add additional files to the target image within baseDir
by placing files into src/main/fabric8-includes
. These will be added with mode 0644
, while everything in src/main/fabric8-includes/bin
will be added with 0755
.
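A minimal sketch of configuring this generator (the main class and port values are examples):
<configuration>
<generator>
<config>
<java-exec>
<!-- example values -->
<mainClass>org.example.Main</mainClass>
<webPort>8080</webPort>
<jolokiaPort>0</jolokiaPort>
</java-exec>
</config>
</generator>
</configuration>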
7.1.2. Spring Boot
This generator is called spring-boot
and gets activated when it finds a spring-boot-maven-plugin
in the pom.xml.
This generator is based on the Java Application Generator and inherits all of its configuration values. The generated container port is read from the server.port
property in application.properties
, defaulting to 8080
if it is not found. It also uses the same default images as the java-exec Generator.
Beside the common generator options and the java-exec options the following additional configuration is recognized:
Element | Description | Default |
---|---|---|
color |
If set, forces the use of color in the Spring Boot console output. |
The generator adds Kubernetes liveness and readiness probes pointing to either the management or server port as read from the application.properties
.
If the management.port
(for Spring Boot 1) or management.server.port
(for Spring Boot 2) together with management.ssl.key-store
(for Spring Boot 1) or management.server.ssl.key-store
(for Spring Boot 2) properties are set in application.properties
, or otherwise the server.ssl.key-store
property is set in application.properties
, then the probes are automatically set to use https
.
The generator works differently when called together with fabric8:watch
.
In that case it enables support for Spring Boot Developer Tools which allows for hot reloading of the Spring Boot app.
In particular, the following steps are performed:
-
If a secret token is not provided within the Spring Boot application configuration
application.properties
orapplication.yml
with the keyspring.devtools.remote.secret
then a custom secret token is created and added toapplication.properties
-
Add
spring-boot-devtools.jar
asBOOT-INF/lib/spring-devtools.jar
to the spring-boot fat jar.
Since during fabric8:watch
the application itself within the target/
directory is modified to allow easy reloading, you must ensure that you do a mvn clean
before building an artifact which should be put into production.
Since released versions are typically generated with a CI system which does a clean build anyway, this should only be a theoretical problem.
7.1.3. Wildfly Swarm
The WildFly Swarm generator detects a WildFly Swarm build and disables the Prometheus Java agent because of this issue.
Otherwise this generator is identical to the java-exec generator. It supports the common generator options and the java-exec
options.
7.1.4. Thorntail v2
The Thorntail v2 generator detects a Thorntail v2 build and disables the Prometheus Java agent because of this issue.
Otherwise this generator is identical to the java-exec generator. It supports the common generator options and the java-exec
options.
7.1.5. Vert.x
The Vert.x generator detects an application using Eclipse Vert.x. It generates the metadata to start the application as a fat jar.
Currently, this generator is enabled if:
-
you are using the Vert.x Maven Plugin (https://github.com/reactiverse/vertx-maven-plugin)
-
you are depending on
io.vertx:vertx-core
and use the Maven Shade Plugin
Otherwise this generator is identical to the java-exec generator. It supports the common generator options and the java-exec
options.
The generator automatically:
-
enables metrics and JMX publishing of the metrics when
io.vertx:vertx-dropwizard-metrics
is in the project’s classpath / dependencies. -
enables clustering when a Vert.x cluster manager is available in the project’s classpath / dependencies. This is done by appending
-cluster
to the command line. -
forces the IPv4 stack when
vertx-infinispan
is used. -
disables the async DNS resolver in order to fall back to the regular JVM DNS resolver.
You can pass application parameters by setting the JAVA_ARGS
env variable. You can pass system properties either using the same variable or using JAVA_OPTIONS
. For instance, create src/main/fabric8/deployment.yml
with the following content to configure JAVA_ARGS
:
spec:
template:
spec:
containers:
- env:
- name: JAVA_ARGS
value: "-Dfoo=bar -cluster -instances=2"
7.1.6. Karaf
This generator named karaf
kicks in when the build uses a karaf-maven-plugin
. By default the following base images are used:
Docker Build | S2I Build | |
---|---|---|
Community |
|
|
Red Hat |
|
|
When a fromMode
of istag
is used to specify an ImageStreamTag
and when no from
is given, then as default the ImageStreamTag
fis-karaf-openshift:2.0
in the namespace openshift
is chosen.
In addition to the common generator options this generator can be configured with the following options:
Element | Description | Default |
---|---|---|
baseDir |
Directory within the generated image into which the detected artifacts are placed. Change this only if the base image is changed, too. |
|
jolokiaPort |
Port of the Jolokia agent exposed by the base image. Set this to 0 if you don’t want to expose the Jolokia port. |
8778 |
mainClass |
Main class to call. If not given, first a check is performed to detect a fat-jar (see above). Next a class is looked up by scanning |
|
user |
User and/or group under which the files should be added. The syntax of this option is described in Assembly Configuration. |
|
webPort |
Port to expose as service, which is supposed to be the port of a web application. Set this to 0 if you don’t want to expose a port. |
8080 |
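A minimal sketch of configuring this generator (the port values are examples):
<configuration>
<generator>
<config>
<karaf>
<!-- example values -->
<webPort>8181</webPort>
<jolokiaPort>0</jolokiaPort>
</karaf>
</config>
</generator>
</configuration>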
7.1.7. Web Applications
The webapp
generator tries to detect WAR builds and selects a base servlet container image based on the configuration found in the pom.xml
:
-
A Tomcat base image is selected when a
tomcat6-maven-plugin
ortomcat7-maven-plugin
is present or when aMETA-INF/context.xml
could be found in the classes directory. -
A Jetty base image is selected when a
jetty-maven-plugin
is present or one of the filesWEB-INF/jetty-web.xml
orWEB-INF/jetty-logging.properties
is found. -
A Wildfly base image is chosen for a given
jboss-as-maven-plugin
orwildfly-maven-plugin
or when a Wildfly specific deployment descriptor likejboss-web.xml
is found.
The base images chosen are:
Docker Build | S2I Build | |
---|---|---|
Tomcat |
|
--- |
Jetty |
|
--- |
Wildfly |
|
--- |
S2I builds are currently not yet supported for the webapp generator. |
In addition to the common generator options this generator can be configured with the following options:
Element | Description | Default |
---|---|---|
server |
Fix server to use in the base image. Can be either tomcat, jetty or wildfly |
|
targetDir |
Where to put the war file in the target image. By default it’s selected by the base image chosen but can be overwritten with this option. |
|
user |
User and/or group under which the files should be added. The syntax of this option is described in Assembly Configuration. |
|
path |
Context path with which the application can be reached by default |
|
cmd |
Command to use to start the container. By default the base image’s startup command is used. |
|
ports |
Comma separated list of ports to expose in the image, which eventually are translated later to Kubernetes services. The ports depend on the base image and are selected automatically, but they can be overwritten here. |
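A minimal sketch of configuring this generator (the values are examples):
<configuration>
<generator>
<config>
<webapp>
<!-- example values -->
<server>jetty</server>
<path>/myapp</path>
<ports>8080</ports>
</webapp>
</config>
</generator>
</configuration>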
7.1.8. Open Liberty
The Open Liberty generator runs when the Open Liberty plugin is enabled in the maven build.
The generator is similar to the java-exec generator. It supports the common generator options and the java-exec
options.
For Open Liberty, the default value of webPort is 9080.
7.2. Generator API
The API is still a bit in flux and will be documented later. Please refer to the Generator Interface in the meantime. |
8. Enrichers
Enriching is the complementary concept to Generators. Whereas Generators are used to create and customize Docker images, Enrichers are used to create and customize Kubernetes and OpenShift resource objects.
There are a lot of similarities to Generators:
-
Each Enricher has a unique name.
-
Enrichers are looked up automatically from the plugin dependencies and there is a set of default enrichers delivered with this plugin.
-
Enrichers are configured the same ways as generators
The Generator example is a good blueprint, simply replace <generator>
with <enricher>
. The configuration is structurally identical:
Element | Description |
---|---|
|
Contains one or more |
|
Holds one or more |
|
Configuration for all enrichers. Each enricher supports a specific set of configuration values as described in its documentation. The subelements of this section are enricher names. E.g. for enricher |
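For illustration, a minimal sketch of such an enricher configuration, using the fmp-service enricher described later in this chapter (the service name is an example value):
<configuration>
<enricher>
<includes>
<include>fmp-service</include>
</includes>
<config>
<fmp-service>
<!-- example value -->
<name>my-service</name>
</fmp-service>
</config>
</enricher>
</configuration>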
This plugin comes with a set of default enrichers. In addition custom enrichers can be easily added by providing implementation of the Enricher API and adding these as a dependency to the build.
8.1. Default Enrichers
fabric8-maven-plugin comes with a set of enrichers which are enabled by default. There are two categories of default enrichers:
-
Standard Enrichers are used to add default resource object when they are missing or add common metadata extracted from the given build information
-
Fabric8 Enrichers are enrichers which are focused on a certain tech stack that they detect.
Default Enrichers Overview
Enricher | Description |
---|---|
Add Prometheus annotations. |
|
Add Maven SCM information as annotations to the kubernetes/openshift resources |
|
Create default controller (replication controller, replica set or deployment) if missing. |
|
Examine build dependencies for |
|
Check local |
|
Add the image name into a |
|
Add a default name to every object which misses a name. |
|
Copy over annotations from a |
|
Add a default portname for commonly known service. |
|
Add Maven coordinates as labels to all objects. |
|
Create a default service if missing and extract ports from the Docker image configuration. |
|
Add Maven Issue Management information as annotations to the kubernetes/openshift resources |
|
Add revision history limit (Kubernetes doc) as a deployment spec property to the Kubernetes/OpenShift resources. |
|
Add ImageStreamTag change triggers on Kubernetes resources such as StatefulSets, ReplicaSets and DaemonSets using the |
|
Add ConfigMap elements defined as XML or as annotation. |
|
Add Secret elements defined as annotation. |
|
Add a ServiceAccount defined as XML or mentioned in a resource fragment. |
8.1.1. Standard Enrichers
Default enrichers are used for adding missing resources or adding metadata to given resource objects. The following default enrichers are available out of the box:
fmp-controller
fmp-service
This enricher is used to ensure that a service is present. This can be either directly configured with fragments or with the XML configuration, but it can be also automatically inferred by looking at the ports exposed by an image configuration. An explicit configuration always takes precedence over auto detection. For enriching an existing service this enricher actually works only on a configured service which matches with the configured (or inferred) service name.
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Default |
---|---|---|
name |
Service name to enrich by default. If not given here or configured elsewhere, the artifactId is used |
|
headless |
whether a headless service without a port should be configured. A headless service has the |
|
expose |
If set to true, a label |
|
type |
Kubernetes / OpenShift service type to set like LoadBalancer, NodePort or ClusterIP. |
|
port |
The service port to use. By default the same port as the ports exposed in the image configuration is used, but can be changed with this parameter. See below for a detailed description of the format which can be put into this variable. |
|
multiPort |
Set this to |
|
protocol |
Default protocol to use for the services. Must be |
|
normalizePort |
Normalize the port numbering of the service to common and conventional port numbers. |
|
The following port mapping comes into effect when the normalizePort option is set to true:
Original Port | Normalized Port |
---|---|
8080 |
80 |
8081 |
80 |
8181 |
80 |
8180 |
80 |
8443 |
443 |
443 |
443 |
You specify the properties, as for any enricher, within the enricher configuration, like in:
<configuration>
..
<enricher>
<config>
<fmp-service>
<name>my-service</name>
<type>NodePort</type>
<multiPort>true</multiPort>
</fmp-service>
</config>
</enricher>
</configuration>
With the option port
you can influence how ports are mapped from the pod to the service.
By default, if this option is not given, the exposed ports are dictated by the ports exposed by the Docker images contained in the pods.
Remember, each configured image can be part of the pod.
However, you can also expose completely different ports than those the images’ metadata declare.
The property port
can contain a comma separated list of mappings of the following format:
<servicePort1>:<targetPort1>/<protocol>,<servicePort2>:<targetPort2>/<protocol>,....
where the targetPort
and <protocol>
specification is optional. These ports are overlaid over the ports exposed by the images, in the given order.
This is best explained by some examples.
For example if you have a pod which exposes a Microservice on port 8080 and you want to expose it as a service on port 80 (so that it can be accessed with http://myservice
) you can simply use the following enricher configuration:
<configuration>
<enricher>
<config>
<fmp-service>
<name>myservice</name>
<port>80:8080</port> (1)
</fmp-service>
</config>
</enricher>
</configuration>
80 is the service port, 8080 the port opened from the pod’s images |
If your pod exposes its ports (which e.g. all generators do), then you can even omit the 8080 here (i.e. <port>80</port>
).
In this case the first port exposed will be mapped to port 80, all other exposed ports will be omitted.
By default an automatically generated service only exposes the first port, even when more ports are exposed.
When you want to map multiple ports you need to set the config option <multiPort>true</multiPort>
.
In this case you can also provide multiple mappings as a comma separated list in the <port>
specification where each element of the list are the mapping for the first, second, … port.
A more complex (and a bit artificially constructed) specification could be <port>80,9779:9779/udp,443</port>
.
Assuming that the image exposes ports 8080
and 8778
(either directly or via generators) and we have switched on multiport mode, then the following service port mappings will be performed for the automatically generated service:
-
Pod port 8080 is mapped to service port 80.
-
Pod port 9779 is mapped to service port 9779 with protocol UDP. Note how this second entry overrides the pod exposed port 8778.
-
Pod port 443 is mapped to service port 443.
This example also shows the mapping rules:
-
Port specification in
port
always overrides the port metadata of the contained Docker images (i.e. the ports exposed)
You can always provide a complete mapping with
port
on your own -
The ports exposed by the images serve as default values which are used if not specified by this configuration option.
-
You can map ports which are not exposed by the images by specifying them as target ports.
Multiple ports are only mapped when multiPort mode is enabled (which is switched off by default). If multiPort mode is disabled, only the first port from the list of mapped ports as calculated above is taken.
fmp-image
fmp-name
fmp-portname
fmp-pod-annotation
fmp-project-label
Enricher that adds standard labels and selectors to generated resources (e.g. app
, group
, provider
, version
).
The fmp-project-label
enricher supports the following configuration options:
Option | Description | Default |
---|---|---|
|
Enable this flag to turn on the generation of the old |
|
The project labels which are already specified in the input fragments are not overridden by the enricher.
fmp-git
Enricher that adds info from .git directory as annotations.
The git branch & latest commit on the branch are annotated as fabric8/git-branch & fabric8/git-commit. fabric8/git-url is annotated as the url of your configured remote.
Option | Description | Default |
---|---|---|
|
Configures the git remote name, whose url you want to annotate as 'git-url'. |
|
fmp-dependency
fmp-volume-permission
Enricher which fixes the permission of persistent volume mount with the help of an init container.
fmp-openshift-autotls
Enricher which adds appropriate annotations and volumes to enable OpenShift’s automatic Service Serving Certificate Secrets. This enricher adds an init container to convert the service serving certificates from PEM (the format that OpenShift generates them in) to a JKS-format Java keystore ready for consumption in Java services.
This enricher is disabled by default. In order to use it, you must configure the Fabric8 Maven plugin to use this enricher:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<goals>
<goal>resource</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<includes>
<include>fmp-openshift-autotls</include>
</includes>
<config>
<fmp-openshift-autotls>
...
</fmp-openshift-autotls>
</config>
</enricher>
</configuration>
</plugin>
The auto-TLS enricher supports the following configuration options:
Option | Description | Default |
---|---|---|
|
The name of the secret to be used to store the generated service serving certs. |
|
|
Where the service serving secret should be mounted to in the pod. |
|
|
The name of the secret volume. |
|
|
Where the generated keystore volume should be mounted to in the pod. |
|
|
The name of the keystore volume. |
|
|
The name of the image used as an init container to convert PEM certificate/key to Java keystore. |
|
|
the name of the init container to convert PEM certificate/key to Java keystore. |
|
|
The name of the generated keystore file. |
|
|
The password to use for the generated keystore. |
|
|
The alias in the keystore used for the imported service serving certificate. |
|
8.1.2. Fabric8 Enrichers
Fabric8 enrichers are used for providing the connection to other components of the fabric8 Microservices platform. They are useful for adding icons to the application or links to documentation sites.
f8-healthcheck-karaf
This enricher adds kubernetes readiness and liveness probes with Apache Karaf. This requires that
fabric8-karaf-checks
has been enabled in the Karaf startup features.
The enricher will use the following settings by default:
-
port =
8181
-
scheme =
HTTP
-
failureThreshold =
3
-
successThreshold =
1
and uses the paths /readiness-check
for the readiness check and /health-check
for the liveness check.
These options cannot be configured.
f8-prometheus
This enricher adds Prometheus annotations like:
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: 9779
By default the enricher inspects the image’s build configuration and adds the annotations if the port 9779 is listed.
You can force the plugin to add these annotations by setting the enricher’s prometheusPort config option, as in the sketch below.
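A minimal sketch of such a configuration (the port value is an example):
<configuration>
<enricher>
<config>
<f8-prometheus>
<!-- example value -->
<prometheusPort>9779</prometheusPort>
</f8-prometheus>
</config>
</enricher>
</configuration>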
f8-healthcheck-webapp
This enricher adds kubernetes readiness and liveness probes with WebApp. This requires that you have maven-war-plugin
set.
The enricher will use the following settings by default:
-
port =
8080
-
scheme =
HTTP
-
path = ``
-
initialReadinessDelay = 10
-
initialLivenessDelay = 180
If the path
attribute is not set (the default value) then this enricher is disabled.
These values can be configured by the enricher in the fabric8-maven-plugin
configuration as shown below:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-webapp>
<path>/</path>
</f8-healthcheck-webapp>
</config>
</enricher>
</configuration>
...
</plugin>
f8-healthcheck-spring-boot
This enricher adds kubernetes readiness and liveness probes with Spring Boot. This requires the following dependency has been enabled in Spring Boot
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
The enricher will try to discover the settings from the application.properties
/ application.yaml
Spring Boot configuration file.
The port number is read from the management.port
option, and will use the default value of 8080.
The scheme will be HTTPS if the server.ssl.key-store
option is in use, falling back to HTTP
otherwise.
The enricher will use the following settings by default:
-
readinessProbeInitialDelaySeconds
:10
-
readinessProbePeriodSeconds
: <kubernetes-default> -
livenessProbeInitialDelaySeconds
:180
-
livenessProbePeriodSeconds
: <kubernetes-default> -
timeoutSeconds
: <kubernetes-default> -
failureThreshold
:3
-
successThreshold
:1
These values can be configured by the enricher in the fabric8-maven-plugin
configuration as shown below:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-spring-boot>
<timeoutSeconds>5</timeoutSeconds>
<readinessProbeInitialDelaySeconds>30</readinessProbeInitialDelaySeconds>
<failureThreshold>3</failureThreshold>
<successThreshold>1</successThreshold>
</f8-healthcheck-spring-boot>
</config>
</enricher>
</configuration>
</plugin>
f8-healthcheck-wildfly-swarm
This enricher adds kubernetes readiness and liveness probes with WildFly Swarm. This requires the following fraction has been enabled in WildFly Swarm
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile-health</artifactId>
</dependency>
The enricher will use the following settings by default:
-
port =
8080
-
scheme =
HTTP
-
path =
/health
-
failureThreshold =
3
-
successThreshold =
1
These values can be configured by the enricher in the fabric8-maven-plugin
configuration as shown below:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-wildfly-swarm>
<port>4444</port>
<scheme>HTTPS</scheme>
<path>health/myapp</path>
<failureThreshold>3</failureThreshold>
<successThreshold>1</successThreshold>
</f8-healthcheck-wildfly-swarm>
</config>
</enricher>
</configuration>
</plugin>
f8-healthcheck-thorntail-v2
This enricher adds kubernetes readiness and liveness probes with Thorntail v2. This requires the following fraction has been enabled in Thorntail
<dependency>
<groupId>io.thorntail</groupId>
<artifactId>microprofile-health</artifactId>
</dependency>
The enricher will use the following settings by default:
-
port =
8080
-
scheme =
HTTP
-
path =
/health
-
failureThreshold =
3
-
successThreshold =
1
These values can be configured by the enricher in the fabric8-maven-plugin
configuration as shown below:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-thorntail>
<port>4444</port>
<scheme>HTTPS</scheme>
<path>health/myapp</path>
<failureThreshold>3</failureThreshold>
<successThreshold>1</successThreshold>
</f8-healthcheck-thorntail>
</config>
</enricher>
</configuration>
</plugin>
f8-healthcheck-vertx
This enricher adds kubernetes readiness and liveness probes with Eclipse Vert.x applications. The readiness probe lets Kubernetes detect when the application is ready, while the liveness probe allows Kubernetes to verify that the application is still alive.
This enricher allows configuring the readiness and liveness probes. The following probe types are supported: http
(emit HTTP requests), tcp
(open a socket), exec
(execute a command).
By default, this enricher uses the same configuration for liveness and readiness probes. But specific configurations can be provided. The configurations can be overridden using project’s properties.
Using the f8-healthcheck-vertx enricher
The enricher is automatically executed if your project uses the vertx-maven-plugin
or depends on io.vertx:vertx-core
.
However, by default, no health check will be added to your deployment.
Minimal configuration
The minimal configuration to add health checks is the following:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-vertx>
<path>/health</path>
</f8-healthcheck-vertx>
</config>
</enricher>
</configuration>
</plugin>
It configures the readiness and liveness health checks using HTTP requests on the port 8080
(default port) and on the path /health
. The defaults are:
-
port =
8080
(for HTTP) -
scheme =
HTTP
-
path = none (disabled)
The previous configuration can also be given using project properties:
<properties>
<vertx.health.path>/health</vertx.health.path>
</properties>
Configuring the readiness and liveness health checks differently
You can provide two different configurations for the readiness and liveness checks:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-vertx>
<readiness>
<path>/ready</path>
</readiness>
<liveness>
<path>/health</path>
</liveness>
</f8-healthcheck-vertx>
</config>
</enricher>
</configuration>
</plugin>
You can also use the readiness
and liveness
chunks in user properties:
<properties>
<vertx.health.readiness.path>/ready</vertx.health.readiness.path>
<vertx.health.liveness.path>/health</vertx.health.liveness.path>
</properties>
Shared (generic) configuration can be set outside of the specific configuration. For instance, to use the port 8081:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>4.4.2</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>helm</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<enricher>
<config>
<f8-healthcheck-vertx>
<port>8081</port>
<readiness>
<path>/ready</path>
</readiness>
<liveness>
<path>/health</path>
</liveness>
</f8-healthcheck-vertx>
</config>
</enricher>
</configuration>
</plugin>
Or:
<properties>
<vertx.health.port>8081</vertx.health.port>
<vertx.health.readiness.path>/ready</vertx.health.readiness.path>
<vertx.health.liveness.path>/health</vertx.health.liveness.path>
</properties>
Configuration Structure
The configuration is structured as follows
<config>
<f8-healthcheck-vertx>
<!-- Generic configuration, applied to both liveness and readiness -->
<path>/both</path>
<liveness>
<!-- Specific configuration for the liveness probe -->
<port-name>ping</port-name>
</liveness>
<readiness>
<!-- Specific configuration for the readiness probe -->
<port-name>ready</port-name>
</readiness>
</f8-healthcheck-vertx>
</config>
The same structure is used in project properties:
<!-- Generic configuration given as vertx.health.$attribute -->
<vertx.health.path>/both</vertx.health.path>
<!-- Specific liveness configuration given as vertx.health.liveness.$attribute -->
<vertx.health.liveness.port-name>ping</vertx.health.liveness.port-name>
<!-- Specific readiness configuration given as vertx.health.readiness.$attribute -->
<vertx.health.readiness.port-name>ready</vertx.health.readiness.port-name>
Important: Project’s properties override the configuration provided in the plugin configuration. The overriding rules are: specific properties > generic properties > specific configuration > generic configuration.
Probe configuration
You can configure the different aspects of the probes. These attributes can be configured for both the readiness and liveness probes or be specific to one.
Name | Description |
---|---|
|
The probe type among |
|
Number of seconds after the container has started before probes are initiated. |
|
How often (in seconds) to perform the probe. |
|
Number of seconds after which the probe times out. |
|
Minimum consecutive successes for the probe to be considered successful after having failed. |
|
Minimum consecutive failures for the probe to be considered failed after having succeeded. |
More details about probes are available on https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/.
HTTP probe configuration
When using HTTP GET
requests to determine readiness or liveness, several aspects can be configured. HTTP probes are used by default. To be more specific set the type
attribute to http
.
Name | Description | Default |
---|---|---|
|
Scheme to use for connecting to the host. |
|
|
Path to access on the HTTP server. An empty path disables the check. |
|
|
Custom headers to set in the request. HTTP allows repeated headers. Cannot be configured using project’s properties. An example is available below. |
|
|
Number of the port to access on the container. A 0 or negative number disables the check. |
8080 |
|
Name of the port to access on the container. If neither the |
Here is an example of HTTP probe configuration:
<config>
<f8-healthcheck-vertx>
<initialDelay>3</initialDelay>
<period>3</period>
<liveness>
<port>8081</port>
<path>/ping</path>
<scheme>HTTPS</scheme>
<headers>
<X-Custom-Header>Awesome</X-Custom-Header>
</headers>
</liveness>
<readiness>
<!-- disable the readiness probe -->
<port>-1</port>
</readiness>
</f8-healthcheck-vertx>
</config>
TCP probe configuration
You can also configure the probes to just open a socket on a specific port. The type
attribute must be set to tcp
.
Name | Description |
---|---|
|
Number of the port to access on the container. A 0 or negative number disables the check. |
|
Name of the port to access on the container. If neither the |
For example:
<config>
<f8-healthcheck-vertx>
<initialDelay>3</initialDelay>
<period>3</period>
<liveness>
<type>tcp</type>
<port>8081</port>
</liveness>
<readiness>
<!-- use HTTP GET probe -->
<path>/ping</path>
<port>8080</port>
</readiness>
</f8-healthcheck-vertx>
</config>
Exec probe configuration
You can also configure the probes to execute a command. If the command succeeds, it returns 0, and Kubernetes considers the pod to be alive and healthy. If the command returns a non-zero value, Kubernetes kills the pod and restarts it. To use a command, you must set the type
attribute to exec
:
<config>
<f8-healthcheck-vertx>
<initialDelay>3</initialDelay>
<period>3</period>
<liveness>
<type>exec</type>
<command>
<cmd>cat</cmd>
<cmd>/tmp/healthy</cmd>
</command>
</liveness>
<readiness>
<!-- use HTTP GET probe -->
<path>/ping</path>
<port>8080</port>
</readiness>
</f8-healthcheck-vertx>
</config>
As shown in the snippet above, the command is passed using the command
attribute. This attribute cannot be configured using project’s properties. An empty command disables the check.
Disabling health checks
You can disable the checks by setting:
-
the port to 0 or to a negative number, for http and tcp probes
-
the command to an empty list, for exec probes
In the first case, you can use project’s properties to disable them:
<!-- Disables the tcp and http probes -->
<vertx.health.port>-1</vertx.health.port>
For http probes, an empty or unset path also disables the probe.
fmp-maven-scm-enricher
This enricher adds SCM-related metadata to all objects supporting annotations.
This metadata is added only if SCM information is present in the Maven pom.xml
of the project.
The following annotations will be added to the objects that support annotations:
Maven SCM Info | Annotation | Description |
---|---|---|
scm/connection |
fabric8.io/scm-con-url |
The SCM connection that will be used to connect to the project’s SCM |
scm/developerConnection |
fabric8.io/scm-devcon-url |
The SCM Developer Connection that will be used to connect to the project’s developer SCM |
scm/tag |
fabric8.io/scm-tag |
The SCM tag that will be used to check out the sources, e.g. HEAD or a branch name |
scm/url |
fabric8.io/scm-url |
The SCM web URL that can be used to browse the SCM in a web browser |
Let's say you have a Maven pom.xml with the following SCM information:
<scm>
<connection>scm:git:git://github.com/fabric8io/fabric8-maven-plugin.git</connection>
<developerConnection>scm:git:git://github.com/fabric8io/fabric8-maven-plugin.git</developerConnection>
<url>git://github.com/fabric8io/fabric8-maven-plugin.git</url>
</scm>
This information is added as annotations in the generated manifest, for example:
...
kind: Service
metadata:
annotations:
fabric8.io/scm-con-url: "scm:git:git://github.com/fabric8io/fabric8-maven-plugin.git"
fabric8.io/scm-devcon-url: "scm:git:git://github.com/fabric8io/fabric8-maven-plugin.git"
fabric8.io/scm-tag: "HEAD"
fabric8.io/scm-url: "git://github.com/fabric8io/fabric8-maven-plugin.git"
...
fmp-maven-issue-mgmt
This enricher adds issue management related metadata to all objects supporting annotations.
This metadata is added only if issue management information is available in the Maven pom.xml
of the project.
The following annotations will be added to the objects that support annotations:
Maven Issue Tracker Info | Annotation | Description |
---|---|---|
issueManagement/system |
fabric8.io/issue-system |
The issue management system, e.g. Bugzilla, JIRA or GitHub |
issueManagement/url |
fabric8.io/issue-tracker-url |
The issue management URL, e.g. the GitHub issues URL |
Let's say you have a Maven pom.xml with the following issue management information:
<issueManagement>
<system>GitHub</system>
<url>https://github.com/reactiverse/vertx-maven-plugin/issues/</url>
</issueManagement>
This information is added as annotations in the generated manifest, for example:
...
kind: Service
metadata:
annotations:
fabric8.io/issue-system: "GitHub"
fabric8.io/issue-tracker-url: "https://github.com/reactiverse/vertx-maven-plugin/issues/"
...
fmp-revision-history
This enricher adds spec.revisionHistoryLimit
property to the deployment spec of Kubernetes/OpenShift resources.
A Deployment's revision history is stored in its ReplicaSets; this property specifies the number of old ReplicaSets to retain in order to allow rollback.
For more information read Kubernetes documentation.
The following configuration parameters can be used to influence the behaviour of this enricher:
Element | Description | Default |
---|---|---|
limit |
Number of revision histories to retain |
2 |
As with any other enricher, you can specify the required properties within the enricher's configuration, as shown below:
...
<enricher>
<config>
<fmp-revision-history>
<limit>8</limit>
</fmp-revision-history>
</config>
</enricher>
...
This is added as a spec property in the generated manifest, for example:
...
kind: Deployment
spec:
revisionHistoryLimit: 8
...
fmp-triggers-annotation
This enricher adds ImageStreamTag change triggers on Kubernetes resources that support the image.openshift.io/triggers
annotation, such as StatefulSets, ReplicaSets and DaemonSets.
The trigger is added to all containers that apply, but can be restricted to a limited set of containers using the following configuration:
...
<enricher>
<config>
<fmp-triggers-annotation>
<containers>container-name-1,c2</containers>
</fmp-triggers-annotation>
</config>
</enricher>
...
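For illustration, the added annotation follows OpenShift's image.openshift.io/triggers format. A hedged sketch of what an enriched StatefulSet might carry (the image stream and container names are hypothetical):
kind: StatefulSet
metadata:
  annotations:
    image.openshift.io/triggers: |-
      [{
        "from": {"kind": "ImageStreamTag", "name": "my-app:latest"},
        "fieldPath": "spec.template.spec.containers[?(@.name==\"container-name-1\")].image"
      }]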
fmp-configmap-file
This enricher adds ConfigMaps defined in the resources
section of the plugin configuration and/or resolves file content from an annotation.
As XML you can define:
<configuration>
<resources>
<configMap>
<name>myconfigmap</name>
<entries>
<entry>
<name>A</name>
<value>B</value>
</entry>
</entries>
</configMap>
</resources>
</configuration>
This creates a ConfigMap with a data entry with key A
and value B
.
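The generated resource would then look roughly like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap
data:
  A: B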
You can also use the file
tag to refer to the content of a file.
<configuration>
<resources>
<configMap>
<name>configmap-test</name>
<entries>
<entry>
<file>src/test/resources/test-application.properties</file>
</entry>
</entries>
</configMap>
</resources>
</configuration>
This creates a ConfigMap with key test-application.properties
and value the content of the src/test/resources/test-application.properties
file.
If you set the name
tag, its value is used as the key instead of the filename.
If you are defining a custom ConfigMap
file, you can use an annotation to define a file name as key and its content as the value:
metadata:
name: ${project.artifactId}
annotations:
maven.fabric8.io/cm/application.properties: src/test/resources/test-application.properties
This creates a ConfigMap
data with key application.properties
(part defined after cm
) and value the content of src/test/resources/test-application.properties
file.
fmp-secret-file
This enricher adds a Secret whose content is resolved from a file referenced by an annotation.
If you are defining a custom Secret
file, you can use an annotation to define a file name as key and its content as the value:
metadata:
name: ${project.artifactId}
annotations:
maven.fabric8.io/secret/application.properties: src/test/resources/test-application.properties
This creates a Secret
data with the key application.properties
(part defined after secret
) and value the content of the src/test/resources/test-application.properties
file (base64 encoded).
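A hedged sketch of the generated Secret (values under data are base64 encoded; the name is taken from the fragment's metadata):
apiVersion: v1
kind: Secret
metadata:
  name: ${project.artifactId}
data:
  application.properties: <base64-encoded content of src/test/resources/test-application.properties>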
8.2. Enricher API
How to write your own enrichers and install them.
9. Profiles
Profiles can be used to combine a set of enrichers and generators and to give this combination a referable name.
Profiles are defined in YAML. The following example shows a simple profile which uses only the Spring Boot generator and some enrichers for adding default resources:
- name: my-spring-boot-apps (1)
generator: (2)
includes:
- spring-boot
enricher: (3)
includes: (4)
# Default Deployment object
- fmp-controller
# Add a default service
- fmp-service
excludes: (5)
- f8-icon
config: (6)
fmp-service:
# Expose service as NodePort
type: NodePort
order: 10 (7)
- name: another-profile
....
1 | Profile’s name |
2 | Generators to use |
3 | Enrichers to use |
4 | List of enrichers to include, in the given order |
5 | List of enrichers to exclude |
6 | Configuration for generators and enrichers |
7 | An order which influences how profiles with the same name are merged |
Each profiles.yml
has a list of profiles which are defined with these elements:
Element | Description |
---|---|
name |
Profile name. This plugin comes with a set of predefined profiles. Those profiles can be extended by defining a custom profile with the same name as the profile to extend. |
generator |
List of generator definitions. See below for the format of these definitions. |
enricher |
List of enricher definitions. See below for the format of these definitions. |
order |
The order of the profile which is used when profiles of the same name are merged. |
9.1. Generator and Enricher definitions
The definition of generators and enrichers in the profile follows the same format:
Element | Description |
---|---|
includes |
List of generators or enrichers to include. The order in the list determines the order in which the processors are applied. |
excludes |
List of generators or enrichers to exclude. These take precedence over includes and will exclude a processor even when it is referenced in an includes section |
config |
Configuration for generators or enrichers. This is a map where the keys are the names of the processors to configure and the values are again maps with configuration keys and values specific to the processor. See the documentation of the respective generator or enricher for the available configuration keys. |
9.2. Lookup order
Profiles can be defined externally either directly as a build resource in src/main/fabric8/profiles.yml
or provided as part of a plugin’s dependency where it is supposed to be included as META-INF/fabric8/profiles.yml
. Multiple profiles can be included in these profiles.yml
descriptors as a list.
If a profile is used then it is looked up from various places in the following order:
-
From the compile and plugin classpath from
META-INF/fabric8/profiles-default.yml
. These files are reserved for profiles defined by this plugin -
From the compile and plugin classpath from
META-INF/fabric8/profiles.yml
. Use this location for defining your custom profiles which you want to include via dependencies. -
From the project in
src/main/fabric8/profiles.yml
. The directory can be tuned with the plugin optionresourceDir
(property:fabric8.resourceDir
)
When multiple profiles with the same name are found, these profiles are merged. If profiles have an order number, then the higher order takes precedence when merging.
For includes of the same processor, the processor is moved to the earliest position. For example, consider the following two profiles with the name my-profile:
name: my-profile
enricher:
includes: [ e1, e2 ]
name: my-profile
enricher:
includes: [ e3, e1 ]
order: 10
then merging results in the following profile (when no order is given, it defaults to 0):
name: my-profile
enricher:
includes: [ e1, e2, e3 ]
order: 10
Profiles with the same order number are merged according to the lookup order described above, where the later profile is considered to have the higher order.
The configuration for enrichers and generators is merged, too; higher-order profiles override configuration values with the same key from lower-order profiles.
9.3. Using Profiles
Profiles can be selected by defining them in the plugin configuration, by giving a system property or by using special directories in the directory holding the resource fragments.
Here is an example of how a profile can be selected in the plugin configuration:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<configuration>
<profile>my-spring-boot-apps</profile> (1)
.....
</configuration>
</plugin>
1 | Name which selects the profile from the profiles.yml |
Alternatively, a profile can also be specified on the command line when calling Maven:
mvn -Dfabric8.profile=my-spring-boot-apps fabric8:build fabric8:deploy
If a configuration for enrichers and generators is provided as part of the plugin's <configuration>
then this takes precedence over any profile specified.
Profiles are also very useful when used together with resource fragments in src/main/fabric8
. By default the resource objects defined here are enriched with the configured profile (if any). A different profile can be selected easily by using a sub directory within src/main/fabric8
. The name of each sub directory is interpreted as a profile name and all resource definition files found in this sub directory are enriched with the enrichers defined in this profile.
For example, consider the following directory layout:
src/main/fabric8:
app-rc.yml
app-svc.yml
raw/
couchbase-rc.yml
couchbase-svc.yml
Here, the resource descriptors app-rc.yml
and app-svc.yml
are enriched with the enrichers defined in the main configuration. The two files couchbase-rc.yml
and couchbase-svc.yml
in the sub directory raw/
instead are enriched with the profile raw. This is a predefined profile which includes no enrichers at all, so the couchbase resource objects are not enriched and are taken over literally. This is an easy way to fine-tune enrichment for different sets of objects.
9.4. Predefined Profiles
This plugin comes with a list of the following predefined profiles:
Profile | Description |
---|---|
default |
The default profile which is active if no profile is specified. It consists of a curated set of generators and enrichers. See below for the current definition. |
minimal |
This profile contains no generators and only enrichers for adding default objects (controller and services). No other enrichment is included. |
explicit |
Like default but without adding default objects like controllers and services. |
aggregate |
Includes no generators and only the fmp-dependency enricher for picking up and combining resources from the compile time dependencies. |
internal-microservice |
Does not expose a port for the generated service. Otherwise the same as the default profile. |
osio |
Includes everything in the default profile, plus additional enrichers and generators relevant only to OpenShift.io. |
9.5. Extending Profiles
A profile can also extend another profile to avoid repetition, e.g. of generators, when the profile only adjusts which enrichers are included. For example, for a profile like:
- name: minimal
extends: default
enricher:
includes:
- fmp-name
- fmp-controller
- fmp-service
- fmp-image
- fmp-project-label
- fmp-debug
- fmp-namespace
- fmp-metadata
- fmp-controller-from-configuration
- fmp-openshift-deploymentconfig
- fmp-openshift-project
- fmp-openshift-service-expose
- fmp-openshift-route
- fmp-ingress
one then would not need to repeat all generators as they are inherited from the default
profile. For reference, the default profile is defined as follows:
# Default profile which is always activated
- name: default
enricher:
# The order given in "includes" is the order in which enrichers are called
includes:
- fmp-metadata
- fmp-name
- fmp-controller
- fmp-controller-from-configuration
- fmp-service
- fmp-image
- fmp-portname
- fmp-project-label
- fmp-dependency
- fmp-pod-annotations
- fmp-git
- fmp-maven-scm
- fmp-serviceaccount
- fmp-maven-issue-mgmt
# TODO: Documents and verify enrichers below
- fmp-debug
- fmp-remove-build-annotations
- fmp-volume-permission
- fmp-configmap-file
- fmp-secret-file
- fmp-openshift-service-expose
- fmp-ingress
- fmp-openshift-route
- fmp-openshift-deploymentconfig
- fmp-openshift-project
# -----------------------------------------
# TODO: Document and verify enrichers below
# Health checks
- f8-healthcheck-quarkus
- f8-healthcheck-spring-boot
- f8-healthcheck-wildfly-swarm
- f8-healthcheck-thorntail-v2
- f8-healthcheck-karaf
- f8-healthcheck-vertx
- f8-healthcheck-docker
- f8-healthcheck-webapp
- f8-prometheus
# Dependencies shouldn't be enriched anymore, therefore it's last in the list
- fmp-dependency
- fmp-revision-history
- fmp-docker-registry-secret
- fmp-triggers-annotation
- fmp-openshift-imageChangeTrigger
- fmp-namespace
generator:
# The order given in "includes" is the order in which generators are called
includes:
- quarkus
- spring-boot
- wildfly-swarm
- thorntail-v2
- openliberty
- karaf
- vertx
- java-exec
- webapp
watcher:
includes:
- spring-boot
- docker-image
10. Access configuration
10.1. Docker Access
This section is work-in-progress and not yet finished |
For Kubernetes builds the fabric8-maven-plugin uses the Docker remote API so the URL of your Docker Daemon must be specified. The URL can be specified by the dockerHost or machine configuration, or by the DOCKER_HOST
environment variable. If not given, the Unix socket /var/run/docker.sock is used by default.
The Docker remote API supports communication via SSL and
authentication with certificates. The path to the certificates can
be specified by the certPath or machine configuration, or by the
DOCKER_CERT_PATH
environment variable.
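As a sketch, both settings can also be given in the plugin configuration (host and certificate path are placeholders for your environment):
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <configuration>
    <!-- Docker daemon reachable via TCP with TLS -->
    <dockerHost>tcp://127.0.0.1:2376</dockerHost>
    <!-- Directory containing the client certificates -->
    <certPath>${user.home}/.docker</certPath>
  </configuration>
</plugin>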
10.2. OpenShift and Kubernetes Access
If no DOCKER_HOST
is set and no unix socket could be accessed under /var/run/docker.sock
then f-m-p checks whether gofabric8 is in the path and uses gofabric8 docker-env
to get the connection parameter to the Docker host exposed by
11. Registry handling
Docker uses registries to store images. The registry is typically
specified as part of the name. I.e. if the first part (everything
before the first /
) contains a dot (.
) or colon (:
) this part is
interpreted as an address (with an optional port) of a remote
registry. This registry (or the default docker.io
if no
registry is given) is used during push and pull operations. This
plugin follows the same semantics, so if an image name is specified
with a registry part, this registry is contacted. Authentication is
explained in the next section.
There are some situations however where you want to have more
flexibility for specifying a remote registry. This might be because
you do not want to hard code a registry into pom.xml
but
provide it from the outside with an environment variable or a system
property.
This plugin supports various ways of specifying a registry:
-
If the image name contains a registry part, this registry is used unconditionally and cannot be overridden from the outside.
-
If an image name doesn’t contain a registry, then by default the default Docker registry
docker.io
is used for push and pull operations. But this can be overridden in several ways:-
If the
<image>
configuration contains a<registry>
subelement this registry is used. -
Otherwise, a global configuration element
<registry>
is evaluated which can be also provided as system property via-Ddocker.registry
. -
Finally an environment variable
DOCKER_REGISTRY
is looked up for detecting a registry.
-
This registry is used for pulling (i.e. for auto-pulling the base image
when doing a fabric8:build
) and pushing with fabric8:push
. However,
when these two goals are combined on the command line like in mvn
-Ddocker.registry=myregistry:5000 package fabric8:build fabric8:push
the same registry is used for both operations. For more fine-grained
control, separate registries for pull and push can be specified, as shown in the configuration and command-line examples below.
-
In the plugin’s configuration with the parameters
<pullRegistry>
and<pushRegistry>
, respectively. -
With the system properties
docker.pull.registry
anddocker.push.registry
, respectively.
<configuration>
<registry>docker.jolokia.org:443</registry>
<images>
<image>
<!-- Without an explicit registry ... -->
<name>jolokia/jolokia-java</name>
<!-- ... hence use this registry -->
<registry>docker.ro14nd.de</registry>
....
</image>
<image>
<name>postgresql</name>
<!-- No registry in the name, hence use the globally
configured docker.jolokia.org:443 as registry -->
....
</image>
<image>
<!-- Explicitly specified always wins -->
<name>docker.example.com:5000/another/server</name>
</image>
</images>
</configuration>
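Separate pull and push registries can also be given on the command line; for example (registry hosts are placeholders):
mvn -Ddocker.pull.registry=registry.example.com:5000 \
    -Ddocker.push.registry=myregistry:5000 \
    package fabric8:build fabric8:push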
There is some special behaviour when using an externally provided registry as described above:
-
When pulling, the pulled image is also tagged with the repository name without the registry. The reasoning behind this is that this image can then also be referenced by the configuration when the registry is no longer specified explicitly.
-
When pushing a local image, a tag including the registry is temporarily added and removed after the push. This is required because Docker can only push registry-named images.
12. Authentication
When pulling (via the autoPull
mode of fabric8:start
) or pushing images, it
might be necessary to authenticate against a Docker registry.
There are six different locations searched for credentials. In order, these are:
-
Providing system properties
docker.username
anddocker.password
from the outside. -
Providing system properties
registry.username
andregistry.password
from the outside. -
Using a
<authConfig>
section in the plugin configuration with<username>
and<password>
elements. -
Using OpenShift configuration in
~/.config/kube
-
Using a
<server>
configuration in~/.m2/settings.xml
-
Login into a registry with
docker login
(credentials in a credential helper or in~/.docker/config.json
)
Using the username and password directly in the pom.xml
is not
recommended since it is widely visible. It is the easiest and most
transparent way, though. Using an <authConfig>
is straightforward:
<plugin>
<configuration>
<image>consol/tomcat-7.0</image>
...
<authConfig>
<username>jolokia</username>
<password>s!cr!t</password>
</authConfig>
</configuration>
</plugin>
The system property provided credentials are a good compromise when using CI servers like Jenkins. You simply provide the credentials from the outside:
mvn -Ddocker.username=jolokia -Ddocker.password=s!cr!t fabric8:push
The most mavenish way is to add a server to the Maven settings file ~/.m2/settings.xml
:
<servers>
<server>
<id>docker.io</id>
<username>jolokia</username>
<password>s!cr!t</password>
</server>
....
</servers>
The server id must specify the registry to push to/pull from, which by
default is the central index docker.io
(or index.docker.io
/ registry.hub.docker.com
as fallbacks).
Here you should add your docker.io account for your repositories. If you have multiple accounts
for the same registry, the second user can be specified as part of the ID. In the example above, if you
have a second account 'fabric8io' then use an <id>docker.io/fabric8io</id>
for this second entry. I.e. append the
username with a slash to the id. The default id without a username is only used if no server entry with
a username-appended id matches.
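A sketch of such a settings.xml with two accounts for the same registry (the second account and its password are hypothetical):
<servers>
  <server>
    <id>docker.io</id>
    <username>jolokia</username>
    <password>s!cr!t</password>
  </server>
  <server>
    <!-- second account for the same registry: username appended to the id -->
    <id>docker.io/fabric8io</id>
    <username>fabric8io</username>
    <password>s!cr!t2</password>
  </server>
</servers>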
The most secure way is to rely on Docker's credential store or credential helper and read confidential information from an external credentials store, such as the native keychain of the operating system. Follow the instructions in the docker login documentation.
As a final fallback, this plugin consults $DOCKER_CONFIG/config.json
if DOCKER_CONFIG
is set, or ~/.docker/config.json
if not, and reads credentials stored directly within this
file. This unsafe storage happens when connecting to a registry with the command docker login
from the command line
with older versions of Docker (pre 1.13.0) or when Docker is not configured to use a
credential store.
12.1. Pull vs. Push Authentication
The credentials lookup described above is valid for both push and pull operations. In order to narrow things down, credentials can be provided for pull or push operations alone:
In an <authConfig>
section a sub-section <pull>
and/or <push>
can be added. In the example below the provided credentials are only
used for image push operations:
<plugin>
<configuration>
<image>consol/tomcat-7.0</image>
...
<authConfig>
<push>
<username>jolokia</username>
<password>s!cr!t</password>
</push>
</authConfig>
</configuration>
</plugin>
When the credentials are given on the command line as system
properties, then the properties docker.pull.username
/
docker.pull.password
and docker.push.username
/
docker.push.password
are used for pull and push operations,
respectively (when given). Either way, the standard lookup algorithm
as described in the previous section is used as fallback.
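For example, to supply credentials that are used only for pushing (values are placeholders):
mvn -Ddocker.push.username=jolokia -Ddocker.push.password=s!cr!t fabric8:push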
12.2. OpenShift Authentication
When working with the default registry in OpenShift, the credentials to authenticate are the OpenShift username and access token. So, a typical interaction with the OpenShift registry from the outside is:
oc login ...
mvn -Ddocker.registry=docker-registry.domain.com:80/default/myimage \
    -Ddocker.username=$(oc whoami) \
    -Ddocker.password=$(oc whoami -t)
(Note that the image's username part ("default" here) must correspond to an OpenShift project with the same name, to which your currently connected account has access.)
This can be simplified by using the system property
docker.useOpenShiftAuth
in which case the plugin does the
lookup. The equivalent of the example above is:
oc login ...
mvn -Ddocker.registry=docker-registry.domain.com:80/default/myimage \
    -Ddocker.useOpenShiftAuth
Alternatively the configuration option <useOpenShiftAuth>
can be
added to the <authConfig>
section.
For dedicated pull and push configuration the system properties
docker.pull.useOpenShiftAuth
and docker.push.useOpenShiftAuth
are
available as well as the configuration option <useOpenShiftAuth>
in
a <pull>
or <push>
section within the <authConfig>
configuration.
If useOpenShiftAuth
is enabled then the OpenShift configuration will be looked up in $KUBECONFIG
or, if this environment variable is not set, in ~/.kube/config
.
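A minimal sketch that enables OpenShift authentication only for pushes in the plugin configuration:
<configuration>
  ...
  <authConfig>
    <push>
      <useOpenShiftAuth>true</useOpenShiftAuth>
    </push>
  </authConfig>
</configuration>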
12.3. Password encryption
Regardless of which mode you choose, you can encrypt passwords as described
in the
Maven documentation. Assuming
that you have set up a master password in
~/.m2/settings-security.xml
you can easily encrypt
passwords:
$ mvn --encrypt-password
Password:
{QJ6wvuEfacMHklqsmrtrn1/ClOLqLm8hB7yUL23KOKo=}
This password can then be used in authConfig
, docker.password
and/or the <server>
setting configuration. However, putting an
encrypted password into authConfig
in the pom.xml
doesn’t make
much sense, since this password is encrypted with an individual master
password.
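Using it in the <server> entry of ~/.m2/settings.xml, for example, looks like:
<servers>
  <server>
    <id>docker.io</id>
    <username>jolokia</username>
    <!-- encrypted password created with mvn --encrypt-password -->
    <password>{QJ6wvuEfacMHklqsmrtrn1/ClOLqLm8hB7yUL23KOKo=}</password>
  </server>
</servers>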
12.4. Extended Authentication
Some docker registries require additional steps to authenticate.
Amazon ECR requires using an IAM access key to obtain temporary docker login credentials.
The docker:push
and docker:pull
goals automatically execute this exchange for any registry of the form <awsAccountId>.dkr.ecr.<awsRegion>.amazonaws.com, unless the skipExtendedAuth
configuration (docker.skip.extendedAuth
property) is set to true.
Note that for an ECR repository with URI 123456789012.dkr.ecr.eu-west-1.amazonaws.com/example/image
the d-m-p’s docker.registry
should be set to 123456789012.dkr.ecr.eu-west-1.amazonaws.com
and example/image
is the <name>
of the image.
You can use any IAM access key with the necessary permissions in any of the locations mentioned above except ~/.docker/config.json
.
Use the IAM Access key ID as the username and the Secret access key as the password.
In case you’re using temporary security credentials provided by the AWS Security Token Service (AWS STS), you have to provide the security token as well.
To do so, either specify the docker.authToken
system property or provide an <auth>
element alongside username & password in the authConfig
.
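A hedged sketch of an authConfig carrying temporary STS credentials (all values are placeholders):
<authConfig>
  <!-- IAM access key ID as username, secret access key as password -->
  <username>AKIAIOSFODNN7EXAMPLE</username>
  <password>wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY</password>
  <!-- session token obtained from AWS STS -->
  <auth>...temporary-session-token...</auth>
</authConfig>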
If you are running on an EC2 instance, on ECS with a Fargate deployment, or on ECS with EC2 (with ECS_AWSVPC_BLOCK_IMDS set to "true"), and an appropriate IAM role is assigned (e.g. a role that grants the AWS built-in policy AmazonEC2ContainerRegistryPowerUser), authentication information doesn't need to be provided at all. Instead, the instance metadata service (or, in the case of ECS, the task metadata endpoint) is queried for temporary access credentials supplied by the assigned role.
13. Volume Configuration
The fabric8-maven-plugin supports volume configuration in XML format in the pom.xml. The following volume types are supported:
Volume Type | Description |
---|---|
hostPath |
Mounts a file or directory from the host node’s filesystem into your pod |
emptyDir |
Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever. |
gitRepo |
It mounts an empty directory and clones a git repository into it for your Pod to use. |
secret |
It is used to pass sensitive information, such as passwords, to Pods. |
nfsPath |
Allows an existing NFS (Network File System) share to be mounted into your Pod. |
gcePdName |
Mounts a Google Compute Engine (GCE) persistent disk into your Pod. You must create the PD before it can be used. |
glusterFsPath |
Allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. |
persistentVolumeClaim |
Used to mount a PersistentVolume into a Pod. |
awsElasticBlockStore |
Mounts an Amazon Web Services (AWS) EBS volume into your Pod. |
azureDisk |
Mounts a Microsoft Azure Data Disk into a Pod |
azureFile |
Mounts a Microsoft Azure File Volume (SMB 2.1 and 3.0) into a Pod. |
cephfs |
Allows an existing CephFS volume to be mounted into your Pod. You must have your own Ceph server running with the share exported before you can use it. |
fc |
Allows existing fibre channel volume to be mounted in a Pod. You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them. |
flocker |
Flocker is an open source clustered Container data volume manager. A |
iscsi |
Allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. |
portworxVolume |
A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes. |
quobyte |
Allows existing |
rbd |
Allows a Rados Block Device volume to be mounted into your Pod. |
scaleIO |
ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable shared block networked storage. The scaleIO volume plugin allows deployed Pods to access existing ScaleIO volumes. |
storageOS |
A storageos volume allows an existing StorageOS volume to be mounted into your Pod. You must run the StorageOS container on each node that wants to access StorageOS volumes |
vsphereVolume |
Used to mount a vSphere VMDK volume into your Pod. |
downwardAPI |
A downwardAPI volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files. |
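Each of these types ultimately maps onto a standard Kubernetes volume definition in the generated descriptors. For orientation, an emptyDir volume mounted into a container looks like this in a plain Deployment fragment (names and mount path are placeholders; this is Kubernetes syntax, not the plugin's XML):
spec:
  template:
    spec:
      volumes:
      # emptyDir volume shared by the containers of the Pod
      - name: scratch
        emptyDir: {}
      containers:
      - volumeMounts:
        - name: scratch
          mountPath: /var/scratch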
14. Integrations
14.1. Dekorate
fabric8-maven-plugin provides a Zero Configuration approach to delegate deployment manifest generation to Dekorate.
Just by adding a dependency on the Dekorate library in the pom.xml
file, all manifest
generation will be delegated to Dekorate.
<dependencies>
<!-- ... -->
<dependency>
<groupId>io.dekorate</groupId>
<artifactId>option-annotations</artifactId>
<version>${dekorate.version}</version>
</dependency>
<dependency>
<groupId>io.dekorate</groupId>
<artifactId>openshift-annotations</artifactId>
<version>${dekorate.version}</version>
</dependency>
<dependency>
<groupId>io.dekorate</groupId>
<artifactId>kubernetes-annotations</artifactId>
<version>${dekorate.version}</version>
</dependency>
<dependency>
<groupId>io.dekorate</groupId>
<artifactId>dekorate-spring-boot</artifactId>
<version>${dekorate.version}</version>
</dependency>
</dependencies>
A full example of the integration can be found in the directory samples/spring-boot-dekorate.
An experimental feature is also provided to merge resources generated both by fabric8-maven-plugin
and Dekorate. You can activate this feature with the flag -Dfabric8.mergeWithDekorate
on the command line, or by setting it as a property (<fabric8.mergeWithDekorate>true</fabric8.mergeWithDekorate>
).
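For example (assuming the plugin's resource generation runs as part of the build):
mvn clean package -Dfabric8.mergeWithDekorate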
15. Migration from version 2
Version 3 of f8-m-p uses a completely new configuration syntax compared to version 2.
If you have a Maven project with a 2.x fabric8-maven-plugin, we recommend running the mvn fabric8:migrate goal directly on your project to do the migration:
# in a fabric8-maven-plugin 2.x project
mvn fabric8:migrate
# now the project is using 3.x or later
Once the project is migrated to 3.x or later of the fabric8-maven-plugin, you can run the fabric8:setup
goal at any time to update to the latest plugin and goals.
16. FAQ
16.1. General questions
16.1.1. How do I define an environment variable?
The easiest way is to add a src/main/fabric8/deployment.yml
file to your project containing something like:
spec:
template:
spec:
containers:
- env:
- name: FOO
value: bar
The above will generate an environment variable $FOO
of value bar
For a full list of the environment variables used in the Java base images, see this list.
16.1.2. How do I define a system property?
The simplest way is to add system properties to the JAVA_OPTIONS
environment variable.
For a full list of the environment variables used in the Java base images, see this list.
e.g. add a src/main/fabric8/deployment.yml
file to your project containing something like:
spec:
template:
spec:
containers:
- env:
- name: JAVA_OPTIONS
value: "-Dfoo=bar -Dxyz=abc"
The above will define the system properties foo=bar
and xyz=abc
16.1.3. How do I mount a config file from a ConfigMap?
First you need to create your ConfigMap
resource via a file src/main/fabric8/configmap.yml
data:
application.properties: |
# spring application properties file
welcome = Hello from Kubernetes ConfigMap!!!
dummy = some value
Then mount the entry in the ConfigMap
into your Deployment
via a file src/main/fabric8/deployment.yml
metadata:
annotations:
configmap.fabric8.io/update-on-change: ${project.artifactId}
spec:
replicas: 1
template:
spec:
volumes:
- name: config
configMap:
name: ${project.artifactId}
items:
- key: application.properties
path: application.properties
containers:
- volumeMounts:
- name: config
mountPath: /deployments/config
Here is an example quickstart doing this
Note that the annotation configmap.fabric8.io/update-on-change
is optional; it's used if your application is not capable of watching for changes in the /deployments/config/application.properties
file. In this case, if you are also running the configmapcontroller, then this will cause a rolling upgrade of your application to use the new ConfigMap
contents as you change it.
16.1.4. How do I use a Persistent Volume?
First you need to create your PersistentVolumeClaim
resource via a file src/main/fabric8/foo-pvc.yml
where foo
is the name of the PersistentVolumeClaim
. It might be that your app requires multiple persistent volumes, in which case you will need multiple PersistentVolumeClaim
resources.
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
Then to mount the PersistentVolumeClaim
into your Deployment
create a file src/main/fabric8/deployment.yml
spec:
template:
spec:
volumes:
- name: foo
persistentVolumeClaim:
claimName: foo
containers:
- volumeMounts:
- mountPath: /whatnot
name: foo
Where the above defines the PersistentVolumeClaim
called foo
which is then mounted into the container at /whatnot
Here is an example application
17. Appendix
17.1. Kind/Filename Type Mapping
Kind | Filename Type |
---|---|
BuildConfig |
|
ClusterRole |
|
ConfigMap |
|
ClusterRoleBinding |
|
CronJob |
|
CustomResourceDefinition |
|
DaemonSet |
|
Deployment |
|
DeploymentConfig |
|
ImageStream |
|
ImageStreamTag |
|
Ingress |
|
Job |
|
LimitRange |
|
Namespace |
|
OAuthClient |
|
PolicyBinding |
|
PersistentVolume |
|
PersistentVolumeClaim |
|
Project |
|
ProjectRequest |
|
ReplicaSet |
|
ReplicationController |
|
ResourceQuota |
|
Role |
|
RoleBinding |
|
RoleBindingRestriction |
|
Route |
|
Secret |
|
Service |
|
ServiceAccount |
|
StatefulSet |
|
Template |
|
Pod |
|
17.2. Custom Kind/Filename Mapping
You can add your custom Kind/Filename
mappings.
To do so, you have two approaches (a sketch of the properties file for the first approach follows the XML example below):
-
Setting an environment variable or system property called
fabric8.mapping
pointing out to a.properties
files with pairs<kind>⇒filename1>, <filename2>
By default if no environment variable nor system property is set, scan for a file located at classpath/META-INF/fabric8/kind-filename-type-mapping-default.properties
. -
By embedding in MOJO configuration the mapping:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<configuration>
<mappings>
<mapping>
<kind>Var</kind>
<filenameTypes>foo, bar</filenameTypes>
</mapping>
</mappings>
</configuration>
</plugin>
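For the first approach, a hedged sketch of such a .properties file, reusing the hypothetical Var mapping from the XML example above:
# Custom kind -> filename type mapping (hypothetical example)
Var=foo, bar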