In this article I'll describe several practices and considerations which can help you structure your version control and artifact repository. The main challenge is finding a workable balance between the amount and complexity of your deployment scripts on the one hand and developer productivity / focus on business value on the other. A lot of scripts (a large investment) can make life easy for developers in the short term, but those scripts can easily become a burden.
If you are just looking for some good practices to structure your version control and artifact repository, look at the list below. If, however, you want to know why I consider certain things good or bad practice, read on.
Development
Use a per technology structure
SCA composites can use customizable MDS directories (you can update the path in adf-config.xml and even use variables; see here for an example). In order to use shared objects in a Service Bus project, however, those objects should be part of the same application (to avoid compilation errors in JDeveloper). The application poms for Service Bus and SCA composites use the Maven module structure to refer to their projects, which means the application should be able to find the projects. When you create a new application with a new project for SCA composites or Service Bus, the application has the projects as sub-directories. SCA composites and Service Bus projects require separate applications. There are thus several reasons why you would want to group projects per technology. This makes development easier, avoids the dirty fixes needed for a custom directory structure and is more in line with the default structure provided when creating a new project.
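The customizable MDS path lives in adf-config.xml. As a rough sketch (the store-usage id, partition name and the variable name are hypothetical examples, not fixed names), the file-based MDS store can point to a variable instead of a hard-coded path:

```xml
<!-- Fragment of adf-config.xml: a file-based MDS store whose path comes
     from a variable instead of being hard-coded. All values are examples. -->
<metadata-store-usage id="mstore-usage_1">
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="${env.MDS_ROOT}"/>
    <property name="partition-name" value="apps"/>
  </metadata-store>
</metadata-store-usage>
```

Because the path is a variable, every developer and every build server can supply their own checkout location without touching the file.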
Version control structure
A version control system (VCS) should allow you to identify different versions. The versions of the software also live in the artifact repository. If you want to create a fix on a specific version of the software, it is usual to create a tag from that version, branch the tag and fix it there since the trunk might have evolved further and sometimes contains changes you do not want yet.
When structuring your VCS, it is important to think about how you are going to deploy, since this determines what you want to branch. If you have a main job in your deployment tooling which calls sub-jobs per technology, you can branch per technology and the result can look like a branched functional unit in version control. You can also use references / externals (with a specified revision), but I would not recommend it: this requires some extra scripting and might not work as expected in all cases. You want to use the same version number for the different artifacts in your functional unit (e.g. mvn versions:set -DnewVersion=x.x.x.x) to make it easy to see what belongs together. You can use a deploy job parameter for this or a separate file. This does mean that if an SCA composite changes while an unchanged Service Bus project is in the same functional unit, the Service Bus project still gets an increase in version number and gets deployed.
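Keeping the versions in sync can be a single Maven invocation; a sketch, assuming the application pom lists the Service Bus and SCA projects as Maven modules (the version number is an example):

```shell
# Run from the application directory; versions:set walks the Maven module
# structure and updates all project poms to the same version at once.
mvn versions:set -DnewVersion=1.2.0.0 -DgenerateBackupPoms=false
```

This is the command a deploy job could run with the version number supplied as a job parameter.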
The method described above has benefits and drawbacks.
Benefits:
- it is easy to identify which artifact versions (e.g. Service Bus project, SCA composite) are part of the functional unit, since they all share the same version number
- you do not require a separate definition of a functional unit: the version number tells you which artifacts belong together, and you can use the Maven GroupId to identify the functional unit
Drawbacks:
- a lot of versions contain the same code, since unchanged artifacts are also bumped and redeployed
- you need to automate keeping the versions of the parts of the functional unit in sync
The functional unit as a separate definition
We can also let go of the 'keep-the-versions-of-the-artifacts-of-the-functional-unit-in-sync' method. You then need a container artifact to determine which versions belong together and form a version of the functional unit or release. You can use a functional unit definition (a pom with dependencies) for that (and have the release consist of functional units), or directly mention the separate components in your release, since the GroupId in the artifact repository can already indicate the functional unit. In the latter case your functional unit has no separate definition / pom, since it is not an artifact itself, and you require less scripting: you only need to bridge the gap from artifact to release instead of from artifact to functional unit to release. This makes it all a bit simpler, and simplicity requires less code and results in better maintainability.
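A functional unit definition could look like the following sketch: a pom-packaged artifact that pins the artifact versions which together form one version of the functional unit. All coordinates and packaging types below are hypothetical examples.

```xml
<!-- Sketch of a functional unit definition; the GroupId doubles as the
     functional unit identifier. All names and versions are examples. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.acme.orders</groupId>
  <artifactId>orders-functional-unit</artifactId>
  <version>1.2.0.0</version>
  <packaging>pom</packaging>
  <dependencies>
    <!-- SCA composite belonging to this functional unit -->
    <dependency>
      <groupId>com.acme.orders</groupId>
      <artifactId>OrderComposite</artifactId>
      <version>1.2.0.0</version>
      <type>sar</type>
    </dependency>
    <!-- Service Bus project belonging to this functional unit -->
    <dependency>
      <groupId>com.acme.orders</groupId>
      <artifactId>OrderService</artifactId>
      <version>1.2.0.0</version>
      <type>sbar</type>
    </dependency>
  </dependencies>
</project>
```

Since this pom is itself versioned and stored in the artifact repository, a release can simply depend on functional unit definitions instead of on individual artifacts.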
Artifact repository
Snapshot releases
You can ask yourself whether, in a continuous delivery environment, you need snapshot releases or snapshot artifacts at all. Every release has the potential to go to the production environment and the release content is continuously updated with new artifact versions. My suggestion is not to use snapshots, but to keep track (automated) of which artifact version is in a release. After a release (usually every sprint in Scrum), you can clean out the artifacts which did not make it into the release. See for example here.
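Determining which artifact versions to clean out can be as simple as comparing two lists. A minimal sketch, assuming you can export a list of all deployed artifact versions and a release manifest as plain text (file names and coordinates below are hypothetical examples):

```shell
# Sketch: after a release, list artifact versions that did not make it into
# the release, so they can be cleaned out of the artifact repository.
# Sample input files; in practice these would be generated.
printf 'com.acme.orders:OrderComposite:1.0.0\ncom.acme.orders:OrderComposite:1.0.1\n' > all-versions.txt
printf 'com.acme.orders:OrderComposite:1.0.1\n' > release-manifest.txt

# Everything deployed to the repository but absent from the release manifest
# (-v invert, -x whole-line, -F fixed strings, -f patterns from file).
grep -vxFf release-manifest.txt all-versions.txt > cleanup-candidates.txt
cat cleanup-candidates.txt   # → com.acme.orders:OrderComposite:1.0.0
```

The cleanup itself (deleting the listed versions) depends on your artifact repository product and is left out here.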
GroupId
For ease of deployment I recommend an artifact structure which is in line with your deployment methodology. An artifact repository often uses so-called Maven coordinates to identify an artifact: GroupId, ArtifactId and Version, plus an optional classifier. The GroupId is ideal for identifying functional units and for telling artifacts, functional units and releases apart.
Classifier
The classifier can be used to add, for example, a configuration plan. Mind though that you should not add configuration plans or property files which are environment specific, since environments tend to change a lot. The configuration plans should contain placeholders, and the deployment software (e.g. XLDeploy, Bamboo, Jenkins, Hudson and the like) should replace them with the correct values. This makes it easier to secure those values and to let someone else manage them.
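Such a configuration plan could look like the following sketch (the placeholder tokens and the endpoint path are hypothetical; the deployment tooling substitutes the PLACEHOLDER_* tokens before deployment):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- SOA configuration plan with placeholders instead of environment-specific
     values; the deployment tooling replaces the PLACEHOLDER_* tokens. -->
<SOAConfigPlan xmlns="http://schemas.oracle.com/soa/configplan">
  <composite name="*">
    <reference name="*">
      <binding type="ws">
        <attribute name="location">
          <replace>http://PLACEHOLDER_HOST:PLACEHOLDER_PORT/services/OrderService</replace>
        </attribute>
      </binding>
    </reference>
  </composite>
</SOAConfigPlan>
```

Because the plan contains no real host names or credentials, the same artifact (with this plan attached as a classifier) can be promoted unchanged through all environments.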
Deploying and build pipeline
Suppose you have a functional service which consists of two components. What should you do when deploying them?
Split deployment per technology
First, I recommend splitting them per technology in your deployment tooling. Use a modular setup. Do not create one superscript which deploys your entire custom functional unit to all required environments! When you want to install a functional service, the tooling should kick off sub-jobs which deploy the individual technologies. This makes maintaining the jobs and scripts easier (not a single large black box but several smaller black boxes). It also allows you to provide jobs with only the information required to deploy a specific technology, which is of course more secure. For example, a job deploying an SCA composite does not need the WebLogic password of the Service Bus server. Also, when branching, you can do this per technology and you do not need to wrap it in a greater abstraction.
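The main job / sub-job split can be sketched as follows; the script names and parameters are hypothetical, and each sub-job would hold only the credentials for its own technology:

```shell
# Sketch of a main deployment job that only orchestrates; each technology
# has its own sub-job with its own, more limited, credentials.
VERSION=1.2.0.0
ENVIRONMENT=test

# The Service Bus sub-job gets only Service Bus credentials,
# the SCA sub-job only the credentials of the SOA server.
./deploy-servicebus.sh "$ENVIRONMENT" "$VERSION"
./deploy-sca.sh "$ENVIRONMENT" "$VERSION"
```

In tooling like Jenkins or Bamboo the same shape is achieved by having the main job trigger parameterized sub-jobs instead of calling scripts directly.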
In summary, splitting deployment per technology:
- requires a modular setup of deployment scripts (better maintainability)
- is more secure
- is more flexible; changes can be applied faster
A build pipeline consists of several steps. Important steps to have at the end are:
- store the artifact in your artifact repository
- make the relevant tags in version control
- update the release with the new version of the deployed artifact
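The steps above could be sketched as the tail of a pipeline script; the tag format, coordinates and manifest file name are hypothetical examples:

```shell
# Tail of a build pipeline script; all names are examples.
mvn deploy                                      # store the artifact in the artifact repository
git tag "OrderComposite-${VERSION}"             # make the relevant tag in version control
git push origin "OrderComposite-${VERSION}"
# update the release with the new version of the deployed artifact
echo "com.acme.orders:OrderComposite:${VERSION}" >> release-manifest.txt
```

The manifest update is the step that makes the release content automatic rather than a manually maintained list.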
Avoid manual steps to add software to a release (such as a Wiki page which is the base for what is in a release). Manual steps cause issues, since developers tend to forget them (they do not always consider the deployment of their own software their responsibility). When adding code to a release is automatic, the responsibility for collecting the release lies with the developer instead of with a build / deployment team: if it is not in the release, the developer has not released it.