Wednesday, 11 July 2012

ArchiMate - training and spreading the word

A colleague and I recently participated in the ArchiMate 2.0 Extensions training, a two-day event with the optional Open Group exam at the end of it. Of the 10 delegates, who had different levels of expertise (and different attitudes to using TOGAF), 7 chose to sit the exam. We decided not to, as we could not commit the time to the revision needed to ensure successful completion, and we have been thinking about qualifications that show how we have applied the knowledge in the real world, as opposed to what has been learnt in the classroom. We were also unsure of the value of the qualification without that experience; during the event it was clear that some places valued the qualification more than others.
There was a lot for the trainer to fit into the two days (in reality it turned out to be just over one and a half days, thanks to fitting the exam in at the end of day two). I found it useful in giving me confidence in what I have been doing, and it helped me understand the relationships involved in ArchiMate, which have been my biggest barrier (I have been relying heavily on the magic connector in Archi!). I also found that it helped my thinking, alongside Sam's blog posts on here! Feedback from the training showed that people wanted to work through more examples; although a number of examples were covered across the two days, it was clear that attendees wanted more.
My colleague and I had a lot of discussion about how we saw the official training and how it fitted with our own intention to deliver some training through the Benefits Realisation funding from JISC. We realised a couple of things for our (much shorter) events:
  • The value of building up a model needs to be clear at the start, along with an acknowledgement of the amount of work this can take!
  • Working on 'paper' for the first examples is a good idea before introducing a tool like Archi. But 'paper' on its own can cause problems; using 'signed' post-it notes that allow you to move elements around would be useful for discussion purposes.
  • There is a quick acceptance that paper will only take you so far before you need a tool like Archi to develop more complex models for examples.
  • Examples for us need to be relevant to HE, perhaps around Assessment, Course Information and Course Development (three popular areas within Staffordshire University).
  • Group work brings out the best discussions as long as the groups are made up of individuals with similar experience (having someone in the group with much more experience can cause the team to split).
  • Group work needs to split people from the same institution into different teams (unless they make up a whole team on their own), otherwise, again, the team can split.
It is worth noting that, whilst we were studying away on this Open Group course, a number of colleagues attended a one-day JISC-led Enterprise Architecture workshop (http://emergingpractices.jiscinvolve.org/wp/doing-ea-workshop-2/). That post mentions a hands-on ArchiMate tutorial (https://fsdsupport.pbworks.com/w/page/27206793/hands%20on%20Archimate) that might be useful.

Monday, 9 July 2012

Comparing ArchiMate Views With Process Maps


In a previous post on this blog about our use of ArchiMate I talked about the difference between ArchiMate views and business process maps. It can be a struggle to find the right level to model at when creating ArchiMate views. There is a natural tendency to include too much process detail, especially if modellers have process mapping expertise. ArchiMate views are intended to act as the focus for discussion of issues and trade-offs between domain experts, not to document process details. Consequently, the goal of the ArchiMate modeller is to capture enough information from domain experts to create the view whilst resisting the temptation to cram all the details into each view. A 'just enough detail' approach is needed. Because this can be a tricky balancing act, I thought an example of how they differ might be useful. This post shows how a simple ArchiMate view can be derived from a Business Process Modelling Notation (BPMN) process map and illustrates how the resulting ArchiMate view captures an overview with the BPMN process map providing the details.

The BPMN process map below, captured as part of the StaffsXCRI-CAP Course Data Project, shows the process for creating a postgraduate course sheet for a new award. The postgraduate course sheet is used to advertise the award. The process map shows the flow of actions carried out by staff involved in the process. The horizontal swim lanes denote who does what. Each of the boxes describes behaviour that occurs in sequence as part of the overall process. BPMN process maps such as this one describe the process in sufficient detail that someone given the process map would know what to do to participate in the process.
BPMN business process map shows process details

Throughout the process map there are references to actors, roles, business objects and other entities that we would want to include in the ArchiMate model. These are highlighted in the text of the process map below.
Text of the process map contains references to ArchiMate elements

The highlighted items refer to ArchiMate elements as shown in the process map below. The overall process becomes a Business Process element in the ArchiMate view. The swim lanes refer to ArchiMate Roles. The text describes several Business Objects, software applications and bits of infrastructure like shared network drives (represented as Technology Nodes).
BPMN process map overlaid with corresponding ArchiMate elements

These ArchiMate elements can be collected into an ArchiMate view, with relationships and inferred elements added, to arrive at the view shown below.


ArchiMate view of the postgraduate course creation process

You can see that the ArchiMate view doesn’t contain any of the process detail. It only shows that there is a course sheet creation process which has a range of roles involved in performing it, with business objects and applications used along the way. The value of this level of modelling might not be evident from this single view, but as more views are added to the model, a critical mass is reached whereupon the model becomes a useful tool for analysis. Individual actors, roles and business objects can be quickly dragged into a view and related elements added to find out, for example, what processes an actor or role is involved with or who needs access to particular business objects for what purpose.

In the JISC Enable project we used existing process maps, corporate process documentation and notes from a series of interviews with stakeholders as input for the creation of our ArchiMate models. The approach shown above works for written documentation in the same way as with process maps: pick out references to ArchiMate elements in the text and add them into ArchiMate views.

To make gathering of the right information easier, we used a simple template for making notes from interviews which is shown below. The columns acted as a prompt for us to ask about all aspects of the business architecture and enough about the other architecture layers to have a good go at creating a model on the first attempt. This reduced the need to revisit the same areas with stakeholders because of missing information.

Simple template to promote collection of the right information when talking to stakeholders

At Staffordshire University, everyone who has tried ArchiMate modelling has found that the process of modelling itself has raised questions about how everything fits together. Answering these questions has led to improved understanding of the problems we are trying to solve, improved understanding of the solutions and improved communication with stakeholders.

Thursday, 5 July 2012

Process Automation and Continuous Integration in the JISC Enable Project


Continuing the JISC Enable Project technology retrospective, this post describes the approach we have used to automate parts of the software development process to improve the effectiveness of the development team.

Agile Project Management

The development team works in partnership with the customer to identify required features. We take a Scrum-style agile approach to software development which has the core roles of:
  • Product owner - who identifies and prioritises features to be developed
  • Development team - who develop features according to self-organised plans
  • Scrum master - who shields the development team from distractions and removes impediments.

In the case of our external examiner application, a colleague from our central quality service department acted as product owner and regular meetings were held with the intended user base to keep other stakeholders 'in the loop' with development progress and to show new features of the application as they were developed.

We use a free online agile project management tool called Pivotal Tracker to help manage the list of features to be developed and track progress in delivering those features.

Pivotal Tracker supports iterative development processes like Scrum, where development is 'time-boxed' into short regular iterations (or sprints in Scrum terminology). In each iteration, typically lasting a week or two, software features are developed by the team. At the end of the iteration, the software is demonstrated to the product owner. Because the product owner gets to see the software regularly as it is evolving, they get a better idea of what they really want from the software. During each iteration, the product owner can change the list of features to be developed by adding, removing and reprioritising features. In this way, the product owner can steer development direction to ensure a higher quality, more appropriate product is developed.

The Process Of Delivering A Software Feature

As each feature is developed, the developers follow a process to build and test the feature before it is deployed into production. An ArchiMate model of our current feature delivery process is shown below. The following sections describe the steps in the process and indicate where we have introduced automation to speed up development.

ArchiMate 'as is' view of feature delivery

Develop Feature (Tests and Code)

Software features are developed on the Java Platform, Enterprise Edition 6 (Java EE 6), as described previously on this blog.

Create Code and Tests Using Integrated Development Environments (IDEs)

The developer creates code to implement a feature and also writes code that tests the feature. These tests can be run automatically later in the software build process.

The developers use Integrated Development Environments (IDEs) to maximise their productivity. IntelliJ IDEA is our primary IDE but we also use NetBeans if it is better suited to a particular task.

Our IDEs have a host of features to improve productivity.

Automated Builds With Maven

We use Apache Maven to build and manage our software projects. We have used Apache Ant in the past to manage software builds, but creation of the build files became too time-consuming. Maven takes a different approach by introducing a standard build lifecycle and a standard project folder structure which, as long as the developer abides by the conventions, allows Maven to build the software and run tests without the developer having to write any build files. For example, if a developer writes some unit tests and puts them in the src/test/java folder of the project, Maven will detect and run the tests with each build automatically.

Maven is useful for getting developers to standardise on a project layout which helps new developers to get up to speed more quickly. It is also very easy to work with if you just want to perform the usual type of build activity in your project. If you need to do something that Maven can't accomplish by default or via plugins then it becomes harder to work with.

Maven also helps to manage dependencies of your software on other software libraries. This feature works very well in most instances, but manual inclusion or exclusion is sometimes required when project libraries have transitive dependencies on different versions of the same library.

Automated Tests With JUnit

We use the JUnit unit testing framework to run unit tests. Unit testing is the practice of testing individual units of code in isolation from any other units. It is the lowest level of testing in the process. The aim is to ensure that the individual units are performing correctly before they are combined to produce software applications.

The unit testing approach involves setting the unit into a known state, calling an operation on the unit with some known input data and then checking the returned value to ensure it is as expected. Developers write code which performs this setup/test/teardown behaviour. JUnit provides a way to identify tests so that they can be run automatically (by Maven) during a build, supports setting up before each test and tidying up afterwards, and supports checking of results through the use of assertions.
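As a minimal sketch of what such a test looks like in practice (the InterestCalculator class and the figures are purely illustrative, not taken from the Enable codebase), a JUnit 4 test placed under src/test/java might be:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Placed under src/test/java so Maven detects and runs it on every build.
public class InterestCalculatorTest {

    // Hypothetical unit under test, included here so the example is self-contained.
    static class InterestCalculator {
        private final double rate;
        InterestCalculator(double rate) { this.rate = rate; }
        double addAnnualInterest(double balance) { return balance * (1 + rate); }
    }

    private InterestCalculator calculator;

    @Before
    public void setUp() {
        // Put the unit into a known state before each test
        calculator = new InterestCalculator(0.05);
    }

    @Test
    public void addsOneYearOfInterestToTheBalance() {
        // Call an operation with known input and check the result with an assertion
        assertEquals(105.00, calculator.addAnnualInterest(100.00), 0.001);
    }

    @After
    public void tearDown() {
        // Tidy up after each test (nothing to release in this trivial example)
        calculator = null;
    }
}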

Source code management

When multiple developers are working on the same project, it is important to use a source code management tool (a.k.a. version control tool). A source code management tool allows multiple developers to work on the same codebase without overwriting each other's changes. It keeps track of the history of changes to the code, allowing changes to be rolled back if required. It can also automatically merge changes from different developers into the latest revision.

We use Apache Subversion for source code management. We have a Subversion source code repository on one of our servers and all developers 'check out' code from the repository, make changes to it and then commit the changes back to the repository. Support for this process is built into the IDE.

IntelliJ IDEA version control menu

Test (Local Commit)

The unit tests are run as part of each project build to test the new features that have been developed. All the unit tests are run during every build so that any bug or regression that might have been introduced in the latest round of development is caught. Because all of the tests are automatically run as part of the build, the developer only has to trigger the build. The unit tests are run locally (on the developer's PC) to check that they all pass before the code is committed to the code repository. If the tests do not pass, the developer has a 'broken build'; if they commit it to the source code repository regardless, other developers who update their code to the latest revision will have the same broken build. For this reason, it is important to get the tests passing locally before committing the code.

Inspect/Manually Test

Developers perform some manual running and testing of the software by installing it locally and using it as the end user would. IDEs have support for building, deploying and running software which speeds up the development, deployment and testing cycle. Manual testing is most useful to test usability and look and feel issues.

Commit

When the developer is happy with the new feature and the tests are passing, the code is committed to the code repository. This is done from inside the IDE. The Subversion code repository configuration is stored in the IDE and the developer can simply select the files to be committed and choose the commit option. The developer is then shown a list of the files to be committed and is prompted for a commit message describing the changes that have been made to the codebase in this revision. The code is then committed to the repository and the commit message is stored in the change history. Commit messages are important because they allow other developers to identify what happened when, which is invaluable when trying to determine which revision to roll back to or update to.

Revisions and commit messages - always include a message (unlike some of the examples above)

Test (Commit Stage)

Once the developer has checked the code for the new feature into the code repository, the process moves on to an automated test stage where the same commit tests are run in a different environment (on the build server) to check that the new code is portable, i.e. it is not tied to the developer's local setup in any way.

Continuous Integration with Jenkins

We use Jenkins, an open source continuous integration server, to automate many of the operations in our build pipeline. Continuous Integration is the practice of integrating software, created by different developers, frequently, usually at least once a day. The aim is to avoid the situation, often encountered in traditional software development, where integration of software components happens late in a project and leads to significant problems getting the components to work together. With continuous integration, any problems are identified through the automated unit and integration testing which accompanies each build. The problems can be addressed immediately, thereby reducing the risk of the project because integration problems are tackled iteratively rather than in a more risky big bang manner late in the project.

Jenkins dashboard showing build jobs

Jenkins provides a platform for running build jobs. It has support for Maven builds, so we take advantage of this to reduce the amount of initial job configuration. To reduce the amount of work even further, we use NetBeans, which has excellent Jenkins support, to create the job in Jenkins from the local Maven project. Strictly speaking, NetBeans has support for Hudson rather than Jenkins. Jenkins was originally called Hudson and was developed by a Sun Microsystems employee, Kohsuke Kawaguchi. After Oracle Corporation bought Sun Microsystems and took a rather overbearing approach to engagement with the Hudson developer community, the community voted with their feet and created Jenkins as a fork of the Hudson codebase. Oracle continues to develop Hudson in the absence of the community, while Jenkins continues its healthy development with regular updates and a multitude of plugins. Setting up a Jenkins installation as a Hudson Builder allows the developer to create build jobs directly from within NetBeans.

Creating a build job from within NetBeans

Jobs can be manually invoked through the Jenkins Web-based interface or automatically triggered via various mechanisms. We set up our Jenkins jobs to be triggered by changes to the code. Each Jenkins job polls the Subversion code repository every 10 minutes to see whether the code has changed. When a developer commits new code, within 10 minutes Jenkins will detect that the codebase has changed and will trigger a new build of the associated Maven project. The Maven build will run as it did on the developers' machines, checking out the source code, running any automated tests and packaging the code into executable software - Web Application Archive (WAR) files in the case of our web applications.

Code Quality Analysis

We configure each build job in Jenkins to run quality tests on the codebase using a tool called SONAR. SONAR reports on code quality and stores the results for each build, allowing downward trends to be identified, analysed and addressed.

SONAR dashboard gives an overview of quality metrics for a project

SONAR Time Machine showing trends in quality metrics

Deploy Artifact

If the build job succeeds, the executable WAR file is stored in our artefact repository, Artifactory. Artifactory stores the WAR files from all successful builds along with details of the build. This enables us to reproduce any build when necessary. Deployment of the WAR file to Artifactory is done by the Jenkins Artifactory plugin, which adds options to the build job to deploy the artefact and build information.



Artifactory options in a Jenkins build job

Artifactory stores the WAR from every build

Deploy Application To Staging

The next step of the build pipeline is to deploy the application to the staging server for further tests. The aim is to test the application in an environment which is as close to the production environment as possible. Currently this is a manual step, performed by a developer. 

GlassFish

We develop Java enterprise applications and run them on the GlassFish application server. The developer downloads the WAR file from Artifactory and uses the GlassFish Admin Console to deploy and run it. This takes care of the code side of the application. The database also needs to be updated to work with the new code.

GlassFish administration console

MyBatis Migrations

We use MyBatis Migrations to manage changes to the database schema. The MyBatis Schema Migration System (MyBatis Migrations) provides a simple mechanism for versioning and migrating the schema of a database. When a new version of the application is created, if the database schema has changed, we create an SQL script to update the old schema to the new one and another to roll back from the new schema to the old. These scripts are rolled into a versioned migration script which is used by MyBatis Migrations to apply changes to the database. The developer checks the current version of the database using the Migrations tool from the command line on the staging server and updates the schema to the latest version. Once the database has been updated, the application is ready for testing.

Test (Acceptance)

The acceptance testing stage is a manually invoked one, but the acceptance tests themselves are automated, using Selenium WebDriver to perform the browser actions of the tests. Selenium WebDriver is a tool that allows browser operation to be automated. Using it, we can create automated tests which interact with our applications in the same way that a user would.

The tests are created using the Selenium IDE which records browser actions as the user interacts with the application.

Selenium IDE showing recorded interactions

Using the Selenium IDE, use cases or user stories can be enacted and recorded. These can be saved as code to be run as automated acceptance tests.

Saving interactions as a JUnit test
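As a rough sketch of what an exported test of this kind looks like when run through WebDriver directly (the URL, element ids and page text below are hypothetical, not taken from our external examiner application):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical acceptance test driving the application through a real browser.
public class LoginAcceptanceTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void userCanLogInFromTheHomePage() {
        // Drive the browser exactly as a user would
        driver.get("https://staging.example.ac.uk/examiners");
        driver.findElement(By.id("username")).sendKeys("testuser");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();
        // Check the expected page content was reached
        assertTrue(driver.getPageSource().contains("Welcome"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}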



With our current setup, the developer runs the automated acceptance tests from their PC. Because we are testing Web applications via the browser we can test from anywhere. If the acceptance tests pass, the application is ready for deployment to the production server.





Deploy Application To Production

To update the application on the production server to the latest version, the developer downloads the WAR file from Artifactory, uses the GlassFish admin console to deploy it and uses MyBatis Migrations to migrate the database to the latest schema. With the application upgraded, the cycle begins again.

Next Steps

We are working towards implementing continuous delivery and the steps outlined above have been put in place incrementally to move us towards this goal. Continuous delivery is a software development strategy that seeks to reduce the risk involved in releasing software by increasing the frequency of release and automating the release process. We have a number of improvements to make to our build process to automate the remaining manual steps, to add a Web-based interface to allow management of builds in the pipeline and to add smoke tests to check that all is well with the environment and that the deployed application has everything it needs to run in production. 

We plan to remove manual steps through automation with Gradle builds

We plan to use Gradle builds to automate the deployment and testing operations in the later stages of our build pipeline and to manage build pipelines using the Jenkins build pipeline plugin. If you can afford it, there are many commercial tools which will provide this functionality out of the box.








Wednesday, 4 July 2012

Archi Training


Ian Watts and Fleur Corfield attended the ArchiMate 2.0 Certification Training in London on the 3rd and 4th of July 2012.

This was an intensive course run by BiZZdesign, intended as a run-up to sitting the ArchiMate Certification exam (which neither Ian nor Fleur actually sat). It consisted of an introduction to Enterprise Architecture and the ArchiMate language, followed by a look into the extensions to the ArchiMate language.