Thursday, 20 December 2012

Webinar January 16 2013


Our next event will be a webinar for anyone interested in Enterprise Architecture. It's being delivered by Staffordshire University and University of Bolton on 16th January 2013 (11:45-13:00). The session will include two case studies followed by discussion:

- Staffordshire University (delivered by Ray Reid and Ian Watts): using EA to transform the management of external examiners
- University of Bolton (delivered by Stephen Powell): improving faculty business processes - a mixed methods approach

You can register for the webinar using the following link: http://applyingea.eventbrite.com/

Friday, 9 November 2012

Creative thinking events

Unfortunately, our event planned for the 6 November with the RSC West Midlands in Wolverhampton had to be re-scheduled for 28 Feb 2013 due to insufficient numbers of participants. We have been wondering whether it is just the wrong time of year, or whether it is the language we use to describe EA to new users.

I met with one of the participants from our summer event, and she described EA as a very exciting and useful tool. She has used Archi to create many maps to help with planning and future-proofing. She had had 'coaching' from Sam and Fleur to help her develop her skills and found this the most useful development activity. It's likely that she will present a short case study about her experience at one of our future events.

Thursday, 1 November 2012

Impact of workshop

I met recently with Sam Scott, who had attended the StaffFest event on Enterprise Architecture, to ask her if the session had been useful. She currently works in the marketing department with a wide variety of disparate teams on the course data project. She has used her knowledge of EA to create and use several maps to help think through the processes and as a way to share her understanding with the various teams. She hopes to continue using EA to support planning in the future, enabling her to simplify existing processes and make them more efficient.

Tuesday, 9 October 2012

Creative Thinking

The project has been progressing, albeit slowly, over the summer months.

We met with our project partner, Stephen Powell from Bolton, on 18th September to share our plans for the workshop on 6 November with the RSC, and had some useful feedback.

We have planned a second workshop for April 2013 (probably the 9th or 16th) in Newcastle with Andrew Stewart.

Ray and Ian have been working to develop the workshop materials, activities and PowerPoint slides. Ian and I have created some 'cards' based on the Archi icons to use in the workshop for the initial activity. We hope that engaging participants in a hands-on activity, creating a simple process map with the cards, will help them conceptualise the EA process and transfer easily to the Archi software.

We are planning to deliver a webinar with Stephen Powell sharing our case studies in January. This may be delivered as part of a face-to-face workshop.

I have arranged to meet with one of the participants from our first workshop in July to see how they have been implementing EA.

Thursday, 20 September 2012

JISC Meeting

I met with Sarah Davies from JISC today on the phone. We discussed the project activity so far and I agreed to send the dates of the events we had planned. Sarah offered to help with promoting the events to appropriate colleagues.

Tuesday, 18 September 2012

Partner meeting

Ray, Ian and I met Stephen Powell today at Staffordshire University. We discussed the workshop event and the webinar and it was good to feel that we are working together effectively on the project.

Wednesday, 11 July 2012

ArchiMate - training and spreading the word

A colleague and I recently participated in the ArchiMate 2.0 Extensions training, a two-day event with the optional Open Group exam at the end of it. Of the 10 delegates, who had different levels of expertise (and different attitudes to using TOGAF), 7 chose to sit the exam. We decided not to, as we could not commit the time to the revision needed to ensure successful completion, and we have been thinking about qualifications that show how we have applied the knowledge in the real world, as opposed to what has been learnt in the classroom. We were also unsure as to the value of the qualification without that experience; during the event it was clear that some places valued the qualification more than others.
There was a lot for the trainer to fit into the two days (in reality it turned out to be just over one and a half days thanks to fitting the exam in at the end of day two). I found it useful in giving me confidence in what I have been doing, and it helped me understand the relationships involved in ArchiMate, which has been my biggest barrier (I have been relying heavily on the magic connector in Archi!). I also found that it helped my thinking, alongside Sam's blogs on here! Feedback from the training showed that people wanted to do more examples; although a number of examples were worked through over the two days, it was clear that participants wanted more.
My colleague and I had a lot of discussion about how we saw the official training, and how it fitted with our own intention to deliver some training through the Benefits Realisation funding from JISC. We realised a number of things for our (much shorter) events:
  • The value of building up a model needs to be clear at the start, along with an acknowledgement of the amount of work this can take!
  • Working on 'paper' for the first examples is a good idea before introducing a tool like Archi. But 'paper' on its own can cause problems; using 'signed' post-it notes, which allow you to move elements around, would be useful for discussion purposes.
  • There is a quick acceptance that paper will only take you so far before you need a tool like Archi to develop more complex examples
  • Examples for us need to be relevant to HE, perhaps around Assessment, Course Information and Course Development (three popular areas within Staffordshire University)
  • Group work brings out the best discussions as long as they are with individuals with similar experience (having someone in the group with more experience can cause the team to split) 
  • Group work needs to split people from the same institution into different teams (unless they make up one team, otherwise again the team can split)
It is worth noting that whilst we were studying away on this Open Group course, a number of colleagues attended a one-day JISC-led Enterprise Architecture workshop (http://emergingpractices.jiscinvolve.org/wp/doing-ea-workshop-2/). That blog post mentions a hands-on tutorial (https://fsdsupport.pbworks.com/w/page/27206793/hands%20on%20Archimate) that might be useful.

Monday, 9 July 2012

Comparing Archimate Views With Process Maps


In a previous post on this blog about our use of Archimate I talked about the difference between Archimate views and business process maps. It can be a struggle to find the right level to model at when creating Archimate views. There is a natural tendency to include too much process detail, especially if modellers have process mapping expertise. Archimate views are intended to act as the focus for discussion of issues and trade-offs between domain experts, not to document process details. Consequently, the goal of the Archimate modeller is to capture enough information from domain experts to create the view whilst resisting the temptation to cram all the details into each view. A 'just enough detail' approach is needed. Because this can be a tricky balancing act, I thought an example of how they differ might be useful. This post shows how a simple Archimate view can be derived from a Business Process Modelling Notation (BPMN) process map and illustrates how the resulting Archimate view captures an overview with the BPMN process map providing the details.

The BPMN process map below, captured as part of the StaffsXCRI-CAP Course Data Project, shows the process for creating a postgraduate course sheet for a new award. The postgraduate course sheet is used to advertise the award. The process map shows the flow of actions carried out by staff involved in the process. The horizontal swim lanes denote who does what. Each of the boxes describes behaviour that occurs in sequence as part of the overall process. BPMN process maps such as this one describe the process in sufficient detail that someone given the process map would know what to do to participate in the process.
BPMN business process map shows process details

Throughout the process map there are references to actors, roles, business objects and other entities that we would want to include in the Archimate model. These are highlighted in the text of the process map below.
Text of the process map contains references to Archimate elements

The highlighted items refer to Archimate elements as shown in the process map below. The overall process becomes a Business Process element in the Archimate view. The swimlanes refer to Archimate Roles. The text describes several Business Objects, software applications and bits of infrastructure like shared network drives (represented as Technology Nodes).
BPMN process map overlaid with corresponding Archimate elements

These Archimate elements can be collected into an Archimate view and relationships and inferred elements added to arrive at the view shown below.


Archimate view of the postgraduate course creation process

You can see that the Archimate view doesn’t contain any of the process detail. It only shows that there is a course sheet creation process which has a range of roles involved in performing it, with business objects and applications used along the way. The value of this level of modelling might not be evident from this single view, but as more views are added to the model, a critical mass is reached whereupon the model becomes a useful tool for analysis. Individual actors, roles and business objects can be quickly dragged into a view and related elements added to find out, for example, what processes an actor or role is involved with or who needs access to particular business objects for what purpose.

In the JISC Enable project we used existing process maps, corporate process documentation and notes from a series of interviews with stakeholders as input for the creation of our Archimate models. The approach shown above works for written documentation in the same way as with process maps. Pick out references to Archimate elements in the text and add them into Archimate views.

To make gathering of the right information easier, we used a simple template for making notes from interviews which is shown below. The columns acted as a prompt for us to ask about all aspects of the business architecture and enough about the other architecture layers to have a good go at creating a model on the first attempt. This reduced the need to revisit the same areas with stakeholders because of missing information.

Simple template to promote collection of the right information when talking to stakeholders

At Staffordshire University, everyone who has tried Archimate modelling has found that the process of modelling itself has raised questions about how everything fits together. Answering these questions has led to improved understanding of the problems we are trying to solve, improved understanding of the solutions and improved communication with stakeholders.

Thursday, 5 July 2012

Process Automation and Continuous Integration in the JISC Enable Project


Continuing the JISC Enable Project technology retrospective, this post describes the approach we have used to automate parts of the software development process to improve the effectiveness of the development team.

Agile Project Management

The development team works in partnership with the customer to identify required features. We take a Scrum-style agile approach to software development which has the core roles of:
  • Product owner - who identifies and prioritises features to be developed 
  • Development team - who develop features according to self-organised plans
  • Scrum master - who shields the development team from distractions and removes impediments.

In the case of our external examiner application, a colleague from our central quality service department acted as product owner and regular meetings were held with the intended user base to keep other stakeholders 'in the loop' with development progress and to show new features of the application as they were developed.

We use a free online agile project management tool called Pivotal Tracker to help manage the list of features to be developed and track progress in delivering those features.

Pivotal Tracker supports iterative development processes like Scrum, where development is 'time-boxed' into short regular iterations (or sprints in Scrum terminology). In each iteration, typically lasting a week or two, software features are developed by the team. At the end of the iteration, the software is demonstrated to the product owner. Because the product owner gets to see the software regularly as it is evolving, they get a better idea of what they really want from the software. During each iteration, the product owner can change the list of features to be developed by adding, removing and reprioritising features. In this way, the product owner can steer development direction to ensure a higher quality, more appropriate product is developed.

The Process Of Delivering A Software Feature

As each feature is developed, the developers follow a process to build and test the feature before it is deployed into production. An Archimate model of our current feature delivery process is shown below. The following sections describe the steps in the process and indicate where we have introduced automation to speed up development.

Archimate 'as is' view of feature delivery

Develop Feature (Tests and Code)

Software features are developed on the Java Platform, Enterprise Edition version 6 (Java EE 6), as described previously on this blog.

Create Code and Tests Using Integrated Development Environments (IDEs)

The developer creates code to implement a feature and also writes code that tests the feature. These tests can be run automatically later in the software build process.

The developers use Integrated Development Environments (IDEs) to maximise their productivity. IntelliJ IDEA is our primary IDE but we also use NetBeans if it is better suited to a particular task.

Our IDEs have a host of features to improve productivity, including integrated support for building, testing, deploying and working with version control.

Automated Builds With Maven

We use Apache Maven to build and manage our software projects. We have used Apache Ant in the past to manage software builds but creation of the build files became too time-consuming. Maven takes a different approach by introducing a standard build lifecycle and a standard project folder structure which, as long as the developer abides by the convention, allows Maven to build the software and run tests without the developer having to write any build files. For example, if a developer writes some unit tests and puts them in the /src/test/java folder of the project, Maven will detect and run the tests with each build automatically.

Maven is useful for getting developers to standardise on a project layout which helps new developers to get up to speed more quickly. It is also very easy to work with if you just want to perform the usual type of build activity in your project. If you need to do something that Maven can't accomplish by default or via plugins then it becomes harder to work with.

Maven also helps to manage the dependencies of your software on other software libraries. This feature works very well in most instances but manual inclusion or exclusion is sometimes required when project libraries have transitive dependencies on different versions of the same library.

Automated Tests With JUnit

We use the JUnit unit testing framework to run unit tests. Unit testing is the practice of testing individual units of code in isolation from any other units. It is the lowest level of testing in the process. The aim is to ensure that the individual units are performing correctly before they are combined to produce software applications.

The unit testing approach involves setting the unit into a known state, calling an operation on the unit with some known input data and then checking the returned value to ensure it is as expected. Developers write code which performs this setup/test/teardown behaviour. JUnit provides a way to identify tests so that they can be run automatically (by Maven) during a build, along with support for setting up before each test and tidying up afterwards, and for checking results through the use of assertions.
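As a concrete illustration, a minimal JUnit 4 test might look like the sketch below. The Tenure class and its methods are hypothetical, used only to show the setup/test/assert shape described above; because a test like this lives under /src/test/java, Maven picks it up and runs it automatically on every build.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

// Minimal JUnit 4 unit test. Tenure and its methods are illustrative assumptions.
public class TenureTest {

    private Tenure tenure;

    @Before
    public void setUp() {
        // Put the unit into a known state before each test
        tenure = new Tenure();
        tenure.setLengthInYears(4);
    }

    @Test
    public void extendingATenureIncreasesItsLength() {
        tenure.extendBy(1);                         // call the operation under test
        assertEquals(5, tenure.getLengthInYears()); // check the result with an assertion
    }
}
```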

Source code management

When multiple developers are working on the same project, it is important to use a source code management tool (a.k.a. version control tool). A source code management tool allows multiple developers to work on the same codebase without overwriting each other's changes. It keeps track of the history of changes to the code, allowing changes to be rolled back if required. It can also automatically merge changes from different developers into the latest revision.

We use Apache Subversion for source code management. We have a subversion source code repository on one of our servers and all developers 'check out' code from the repository, make changes to it and then commit the changes back to the repository. Support for this process is built into the IDE.   

IntelliJ IDEA version control menu

Test (Local Commit)

The unit tests are run as part of each project build to test the new features that have been developed. All the unit tests are run during every build so that any bug or regression that might have been introduced in the latest round of development is caught. Because all of the tests are automatically run as part of the build, the developer only has to trigger the build. The unit tests are run locally (on the developer's PC) to check that they all pass before the code is committed to the code repository. If the tests do not pass, the developer has a 'broken build'; if they commit it to the source code repository regardless, other developers who update their code to the latest revision will have the same broken build. For this reason, it is important to get the tests passing locally before committing the code.

Inspect/Manually Test

Developers perform some manual running and testing of the software by installing it locally and using it as the end user would. IDEs have support for building, deploying and running software which speeds up the development, deployment and testing cycle. Manual testing is most useful to test usability and look and feel issues.

Commit

When the developer is happy with the new feature and the tests are passing, the code is committed to the code repository. This is done from inside the IDE. The subversion code repository configuration is stored in the IDE and the developer can simply select the files to be committed and select the commit option. The developer is then shown a list of the files to be committed and is prompted for a commit message. The developer supplies a commit message describing the changes that have been made to the codebase in this revision. The code is then committed to the repository and the commit message is stored in the change history. Commit messages are important because they allow other developers to identify what happened when, which is invaluable when trying to determine which revision to roll back to or update to.

Revisions and commit messages - always include a message (unlike some of the examples above)

Test (Commit Stage)

Once the developer has checked the code for the new feature into the code repository, the process moves on to an automated test stage where the same commit tests are run in a different environment (on the build server) to check that the new code is portable and can run in a different environment, i.e. it is not tied to the developer's local setup in any way.

Continuous Integration with Jenkins

We use Jenkins, an open source continuous integration server, to automate many of the operations in our build pipeline. Continuous Integration is the practice of integrating software, created by different developers, frequently, usually at least once a day. The aim is to avoid the situation, often encountered in traditional software development, where integration of software components happens late in a project and leads to significant problems getting the components to work together. With continuous integration, any problems are identified through the automated unit and integration testing which accompanies each build. The problems can be addressed immediately, thereby reducing the risk of the project because integration problems are tackled iteratively rather than in a more risky big bang manner late in the project.

Jenkins dashboard showing build jobs

Jenkins provides a platform for running build jobs. It has support for Maven builds so we take advantage of this to reduce the amount of initial job configuration. To reduce the amount of work even further, we use NetBeans, which has excellent Jenkins support, to create the job in Jenkins from the local Maven project. Strictly speaking, NetBeans has support for Hudson rather than Jenkins. Jenkins was originally called Hudson and was developed by a Sun Microsystems employee Kohsuke Kawaguchi. After Oracle Corporation bought Sun Microsystems and took a rather overbearing approach to engagement with the Hudson developer community, the community voted with their feet and created Jenkins as a fork of the Hudson codebase. Oracle continues to develop Hudson in the absence of the community and Jenkins continues its healthy development with regular updates and a multitude of plugins. Setting up a Jenkins installation as a Hudson Builder allows the developer to create build jobs directly from within NetBeans.

Creating a build job from within NetBeans

Jobs can be manually invoked through the Jenkins Web-based interface or automatically triggered via various mechanisms. We set up our Jenkins jobs to be triggered by changes to the code. Each Jenkins job polls the subversion code repository every 10 minutes to see whether the code has changed. When a developer commits new code, within 10 minutes Jenkins will detect that the codebase has changed and will trigger a new build of the associated Maven project. The Maven project will run as it did on developers machines, checking out the source code, running any automated tests and packaging the code into executable software - Java Web Archive (WAR) files in the case of our web applications.

Code Quality Analysis

We configure each build job in Jenkins to run quality tests on the codebase using a tool called SONAR. SONAR reports on code quality and stores the results for each build, allowing downward trends to be identified, analysed and addressed.

SONAR dashboard gives an overview of quality metrics for a project

SONAR Time Machine showing trends in quality metrics

Deploy Artifact

If the build job succeeds, the executable WAR file is stored in our artefact repository, Artifactory. Artifactory stores the WAR files from all successful builds along with details of the build. This enables us to reproduce any build when necessary. Deployment of the WAR file to Artifactory is done by the Jenkins Artifactory plugin. The Artifactory plugin adds options to the build job to deploy the artefact and build information.



Artifactory options in a Jenkins build job

Artifactory stores the WAR from every build

Deploy Application To Staging

The next step of the build pipeline is to deploy the application to the staging server for further tests. The aim is to test the application in an environment which is as close to the production environment as possible. Currently this is a manual step, performed by a developer. 

GlassFish

We develop Java enterprise applications and run them on the GlassFish application server. The developer downloads the WAR file from Artifactory and uses the GlassFish Admin Console to deploy it and run it. This takes care of the code side of the application. The database also needs to be updated to work with the new code.

GlassFish administration console

MyBatis Migrations

We use MyBatis Migrations to manage changes to the database schema. The MyBatis Schema Migration System (MyBatis Migrations) provides a simple mechanism for versioning and migrating the schema of a database. When a new version of the application is created and the database schema has changed, we create one SQL script to update the old schema to the new one and another to roll back from the new schema to the old. These scripts are rolled into a versioned migration script which is used by MyBatis Migrations to apply changes to the database. The developer checks the current version of the database using the Migrations tool from the command line on the staging server and updates the schema to the latest version. Once the database has been updated, the application is ready for testing.

Test (Acceptance)

The acceptance testing stage is a manually invoked one, but the acceptance tests themselves are automated, using Selenium WebDriver to perform the browser actions in the tests. Selenium WebDriver is a tool that allows browser operation to be automated. Using it, we can create automated tests which interact with our applications in the same way that a user would.

The tests are created using the Selenium IDE which records browser actions as the user interacts with the application.

Selenium IDE showing recorded interactions

Using the Selenium IDE, use cases or user stories can be enacted and recorded. These can be saved as code to be run as automated acceptance tests.

Saving interactions as a JUnit test
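The sketch below gives a flavour of what such an exported acceptance test looks like once adapted to Selenium WebDriver and JUnit. The URL, element ids and expected text are assumptions for illustration rather than values from our application.

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hedged example of an automated acceptance test driving the browser.
public class SearchExaminerAcceptanceTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void searchingBySurnameListsMatchingExaminers() {
        driver.get("https://staging.example.ac.uk/examiners/search.xhtml"); // assumed URL
        driver.findElement(By.id("searchForm:surname")).sendKeys("Smith");  // assumed element ids
        driver.findElement(By.id("searchForm:searchButton")).click();
        assertTrue(driver.getPageSource().contains("Smith"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```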



With our current setup, the developer runs the automated acceptance tests from their PC. Because we are testing Web applications via the browser we can test from anywhere. If the acceptance tests pass, the application is ready for deployment to the production server.





Deploy Application To Production

To update the application on the production server to the latest version, the developer downloads the WAR file from Artifactory and uses the GlassFish admin console to deploy it and uses MyBatis Migrations to migrate the database to the latest schema. With the application upgraded the cycle begins again.

Next Steps

We are working towards implementing continuous delivery and the steps outlined above have been put in place incrementally to move us towards this goal. Continuous delivery is a software development strategy that seeks to reduce the risk involved in releasing software by increasing the frequency of release and automating the release process. We have a number of improvements to make to our build process to automate the remaining manual steps, to add a Web-based interface to allow management of builds in the pipeline and to add smoke tests to check that all is well with the environment and that the deployed application has everything it needs to run in production. 

We plan to remove manual steps through automation with Gradle builds

We plan to use Gradle builds to automate the deployment and testing operations in the later stages of our build pipeline and to manage build pipelines using the Jenkins build pipeline plugin. If you can afford it, there are many commercial tools which will provide this functionality out of the box.








Wednesday, 4 July 2012

Archi Training


Ian Watts and Fleur Corfield attended the ArchiMate 2.0 Certification Training in London on the 3rd and 4th July 2012.

This was an intensive course run by BiZZdesign intended as a run-up to sitting the ArchiMate Certification exam (which neither Ian nor Fleur actually sat). It consisted of an introduction to Enterprise Architecture and the ArchiMate language, followed by a look at the extensions to the ArchiMate language.

Friday, 22 June 2012

Creative Thinking Events

The first workshops as part of our JISC Enterprise Benefits project have now been booked. These workshops will introduce participants to Enterprise Architecture and give an overview of the Archi software. The first is an internal overview during Staff Fest on the 2 July and the second is in conjunction with the RSC West Midlands in Wolverhampton on the 6 November 2012. This event will be a national session and we hope to include a range of participants. Full booking details will be available soon.

Friday, 8 June 2012

Birth, Death and Resurrection of Senior Management Engagement

At the start of the project a new Executive Pro Vice Chancellor (PVC) for Learning and Teaching had been appointed, who was the initial sponsor of the Enable project. The main role of the Executive PVC was to chair the Senior Management Working Group, consisting of a number of senior faculty staff (Deans and/or Faculty Directors for Learning and Teaching) and a number of Directors/Heads of Services and senior colleagues.

In addition to the start of a new PVC, the then Vice Chancellor had indicated that she would soon be retiring but had not yet fixed a date. This was subsequently confirmed as January 2011. As a consequence, the academic years 2008/9 and 2009/10 were characterised by a certain amount of “planning blight” and senior managers being (understandably) cautious in the face of impending change.

Not only did the executive start the project in some state of organisational churn, but so did the department the Enable team were working from. The Learning Development & Innovation (LDI) team had recently been moved (following an external review) from the University’s Information Service to the Academic Development Institute (led by the Director of Academic Development).

After the first four months of the project the Academic Development Institute was abolished and the LDI team (including all Enable project staff) became a standalone team reporting to the Executive PVC. During this period, senior management engagement with the project was good. There was also considerable engagement from staff involved in the various change initiatives across the University, and from award leaders, programme managers and Faculty business staff and quality administrators.

About 18 months into the project, the Executive PVC left the University (and was not replaced for about a year). Following a fairly lengthy hiatus during which it was unclear (even to the Head of LDI) who the LDI team reported to, it was agreed that the team and the Enable project should report to the Deputy Vice Chancellor through the Director of Academic Policy and Development. Senior management engagement had waned somewhat during the “hiatus” (though “spoke” engagement had remained good); however, the Deputy Vice Chancellor became very receptive to the ideas on managing change and sustaining innovation being promoted by Enable, and a good period of senior management engagement ensued. However, this period also coincided with the “last days” of the previous Vice Chancellor and the selection and arrival of the new Vice Chancellor, who took up leadership of the University in January 2011. As a result, although engagement with Enable’s ideals was good, translation of this engagement into action was very difficult. This period also saw the opportunity – seized by the Enable team – to initiate the “FLAG” work of Enable on the back of a number of Senior Leadership Team initiatives instigated by the new Vice Chancellor.

In June 2011, a new Executive PVC arrived at the University.  At this point, oversight of the LDI team moved into the new PVC’s purview although the Head of LDI continued to report to the Director of Academic Policy and Development who had similarly moved reporting lines.  This move created another “disjoint” in senior management engagement as the new PVC obviously had a great many things to take on board and to plan.  

By the end of the project, 7 of the 17 people who attended the first SMWG meeting had left the University, including the Executive PVC. Nevertheless, engagement was subsequently renewed and the Executive team has now picked up messages sent by Enable, including the concept of ‘joined up thinking’ and the development of a Change Management role. This renewed interest was due to the project team being able to present a clear message about Enable to the executive, thanks to previous experience of communicating with the executive team; alongside this, the project team were able to use senior management champions to pass the message of Enable on to the executive.

Despite the considerable “organizational churn” evidenced above, a constant and stable factor throughout has been a recognised institutional need to ensure curriculum development is responsive to demand.   This included ensuring that policies, processes, and supporting technologies for curriculum/product development were designed in a way that was responsive to the needs of both faculties and learners.   This required flexible management of the existing portfolio including the process for creating new product, along with guidelines and workflows to encourage a culture of innovation. 

A History of FLAG

Background

FLAG was first raised in Flying Forward (May 2011); that blog post highlighted the reasons why a tool was needed to support course developments focusing on Flexible Learning, and in consequence all course developments. FLAG (Flexible Learning Advice and Guidance) is a support tool designed to address a number of issues highlighted by Enable. To reiterate the issues here:
  • Difficulty in finding the right advice on course design at the right point
  • Knowing which source of information would be the best/ most up to date
  • Identification of champions to support stakeholders engaged in course design
  • Reduction in faculties having to produce their own advice and guidance
  • Removal of the burden on staff to hold expert knowledge of the whole process
The project blog from May 2011 discusses the concerns around doing the project, including adding to a process already perceived as arduous and ensuring the right level of stakeholder engagement.

Approach

As previously mentioned in the May 2011 post, the project team treated FLAG development as an internal project, which included a full project plan with clear roles and responsibilities and a list of relevant stakeholders. In September a new blog post, New Product Design, was published about the FLAG approach. It discusses the issues highlighted from engaging stakeholders across the University with a clear focus on the process of course development, using the baseline information from Enable. This focus with stakeholders helped the project team unpick issues not previously noted by Enable, or reinforced issues noted during the baselining process.
The project team spoke to course developers using examples of the ArchiMate models from the baseline that focused on the different stages of course development. For initial interviews with faculty staff the model was printed out and then drawn on to update it to reflect what was taking place internally. It is worth noting that the initial models focused on University-level processes; by discussing these with the faculties we were able to capture each faculty's unique processes.
Each updated model was then used to create a best practice workflow broken down into three stages of course development: Strategic Approval, Planning and Validation (similar to the stages in the Manchester Metropolitan University Accreditation! game; for a screenshot of the game check out the CETIS blog). These stages were used to help break down the workflow, yet even with those stages each workflow was one side of A3 paper! These workflows were then taken round for a second round of interviews, and updates, changes and other aspects of course design were added to the workflows. For example, how the faculties engaged with both Partnerships and Quality needed further modification. This round of interviews also helped capture the supporting documents used by staff at different points in the flow and where they needed links to useful documents.
After the second round of interviews with the workflows, the project team input the master workflow into the Pineapple system. This helped the team sharpen the workflow and the links to supporting documentation. Once the workflow had been completed, a draft handbook was written to support the use of the software, and both were given to staff within the Learning Development and Innovation team for testing purposes. Successful completion of the tests resulted in the project team promoting FLAG as ‘in pilot’ with faculties and partners, and volunteers from each were recruited.
At the start of the pilot the volunteers were asked to complete a short online questionnaire asking how they managed course developments and whether they felt that they focused on traditional course design. At this point the pilot was blogged again. However, since the launch of the pilot a number of changes have occurred in the University, causing engagement to decrease. The first was the restructure of faculties and schools from 6 to 4, the second was the change in credit structures for modules, and finally the process itself started to go through some change. The changes to the process had a limited impact on the pilot as they have yet to be approved by the committees; in the long term these changes will benefit the project because, by putting them straight into FLAG, we can ensure that course design follows the latest process with the most up-to-date support documentation.
Due to these changes, and the length of time it takes to go through course development, the project team have left the piloting teams to work on FLAG at their own pace, with emails every 3 weeks to ensure staff are still happy using the tool. Unfortunately, the project team have recently been informed that one course design team has stopped using the tool, and we are in the process of organising a meeting to find out what stopped their engagement. The project team are also organising interviews with other staff engaged in the pilot and developing an exit questionnaire for these staff to find out whether their approach has been improved by the use of the tool or whether it helped them think outside of the traditional course development box.
Information about FLAG, its models and workflows has been handed over to two new initiatives in the University: the first is the Student Records System, which would store information on courses post-validation, and the other is the JISC-funded XCRI-CAP project. The project team also intend to work with the Document Management initiative to discuss opportunities to further develop the tool within that environment.

Lessons Learnt

By using FLAG as a way of starting conversations about course design within faculties, it was clear that the ‘uniqueness’ of each faculty was more of a perception within that faculty than a reality. This is important to capture to ensure continuing stakeholder engagement – and can help faculties realise similarities in behaviour.
Start by interviewing senior staff engaged in curriculum design, before interviewing those at the ‘coal face’. This can then highlight the difference between perceived processes and what actually occurs.
It is useful to interview stakeholders in small groups, for example tutors from the same faculty, business and quality administrators from the same faculty, and service teams (partnerships and quality teams are important), before getting a mix of groups together to discuss the models and workflows.
As highlighted in the Flying the Flag blog post, be prepared for the pilot to take some time. Initial engagement with the pilot was high; however, as the course development continued some pilot teams became disengaged from using the software. This is to be expected, depending on when course teams feel they need the most support. Continued engagement with the course development teams is required at this stage.
Process ownership can be difficult, especially over a large process such as course design, and is often easy to ignore in a project. It is important to get buy-in from those involved in managing the process so that they can take ownership of updating the tool when the processes change. It is easy to think in terms of a single process owner, but consider a process ownership team for larger processes.
Make sure you are clear about what the purpose of the project is and what its scope is. Although this was a project within Enable, using a project plan really helped communicate the scope and purpose of the project, and how, if the project was a success, the tool would be handed over to the process owners for further development and embedding, not left with the Enable team.

Friday, 18 May 2012

Java Development in the JISC Enable Project

In a previous post I outlined the genesis of our external examiner application and how using the Archimate modelling language and Archi modelling tool helped us to secure approval for development. In this post, I describe the technical architecture of the application and try to highlight what was important to us and illustrate our line of thinking in choosing and using these technologies.

Readers who aren't interested in technical details may want to duck out at this point. It's also quite a long post!

The Business Problem

To recap the business problem, the University had identified significant duplication in quality-related business processes. We had gained approval to develop an application to address duplication in external examiner approval and reporting processes. The application would provide:
  • an interface for creating and updating information to track the approval and reporting processes
  • document management capability for sharing the documents and forms used in the processes
  • reporting capability to provide on-demand reports for sharing with stakeholders.
The application would reduce duplication by sharing data, documents and reports, removing the need to manage local data stores, manage local document libraries and produce local reports.

The Java EE 6 Platform

We chose to build the external examiner application on the Java Platform Enterprise Edition Version 6 (Java EE 6). Java EE 6 comprises a set of Application Programming Interfaces (APIs) to make development of multi-tiered and distributed enterprise applications easier, simpler and faster. We chose this platform for the following reasons:
We needed to do more with less
In recent years, we had been unable to replace staff who had left the team. Demands on the technical team had remained high with new opportunities for innovation needing to be grasped as they appeared. Consequently, it had been a case of needing to do more with less. We are always on the look-out for principles, practices and technologies to maximise efficiency and effectiveness of the team. Java EE 6 had the promise of achieving more with less (and cleaner) code.
We wanted the benefits of the Java EE 6 APIs
Prior to Enable, we had used Apache Tomcat. Tomcat implements the Java Servlet and JavaServer Pages (JSP) technologies. Applications we created to run on Tomcat were based on JSPs, servlets and portlets interacting with a database via the Java Database Connectivity (JDBC) API. This involved writing a significant amount of code to manage cross-cutting aspects of our applications like security, transactions and persistence.

Using an application server instead of Tomcat, we could use the Java EE 6 APIs and services provided by the application server to avoid a lot of the boilerplate code we would previously have written to manage the cross-cutting aspects. An application server implements the full Java EE Platform so it provides JSP and Servlet implementations and a host of other APIs including:
  • Enterprise JavaBeans (EJB)
  • Java Persistence API (JPA)
  • JavaServer Faces (JSF)
  • Java Message Service (JMS)
  • Contexts & Dependency Injection (CDI)
  • Java Transaction API (JTA)
  • JavaMail
An application server also allows configuration, tuning and monitoring to be managed centrally.
GlassFish Administration Console

We chose to use the GlassFish open source application server because it was the reference implementation for Java EE 6 and the only application server that supported Java EE 6 at the time.
Java EE 6 promised to be simpler than Spring
An alternative to Java EE would have been to use the Spring framework but we decided not to use Spring.
Spring was created as a simpler alternative to the overly-complex and invasive programming model of Java 2 Platform Enterprise Edition (J2EE), Java EE's predecessor. It emphasized simplicity of application design through use of dependency injection and aspect-oriented programming. Spring gained widespread adoption and became for many the obvious choice for enterprise Java development. We had some experience of Spring and liked the dependency injection and AOP elements but not the use of XML for declarative configuration. Also, the Spring Framework had grown so much over the years that reacquainting ourselves with its large feature set was going to be a non-trivial exercise.

Java EE 6 uses a simplified programming model through use of a convention over configuration approach. With dependency injection, separation of concerns and persistence baked into the platform, Java EE 6-based applications promised to be as lean and mean as equivalent Spring applications, if not more so. Aiming for reduction in complexity, we opted for Java EE 6 instead of Spring.
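To illustrate what 'lean and mean' means in practice, the sketch below shows the sort of service class Java EE 6 lets you write: a single annotated class gets pooling, transactions and an injected persistence context from the container, with no XML configuration. The class and entity names are illustrative assumptions rather than our actual code.

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Illustrative Java EE 6 session bean: no XML configuration, no JDBC plumbing.
@Stateless
public class ExaminerService {

    @PersistenceContext
    private EntityManager em; // injected by the container

    public Examiner save(Examiner examiner) {
        return em.merge(examiner); // runs inside a container-managed transaction
    }

    public Examiner find(Long id) {
        return em.find(Examiner.class, id);
    }
}
```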

Architecture of the external examiner system

An Archimate view of the applications used in the external examiners application is shown below.
Layered Archimate view of the external examiner application
GlassFish
Our application server was the GlassFish Server Open Source Edition which is free but follows the usual 'community' model of support, i.e. you solve your own problems with information gleaned from forums, blogs, bug tracking systems, etc.

Initially, we tried to run all three applications (External Examiners Web Application, Alfresco, JasperServer) on a single GlassFish instance. The memory requirements of the combined applications in production made it impossible to run all three together. Also, Alfresco is designed to run on Tomcat and, although our attempts to get it running on GlassFish were initially successful, each Alfresco release brought new configuration problems, so we decided to run a 'vanilla' (default configuration, bundled) Alfresco instance on Tomcat on a separate server to avoid unnecessary configuration work.
MySQL
We chose MySQL as our database software because:
  • we had experience of it from previous developments
  • it is mature, robust and fast
  • it has a large community of users and comprehensive documentation
  • it has good free tooling available.
We use phpMyAdmin for database management and TOAD for MySQL for efficiently creating queries, generating database migration scripts and performing the more advanced database manipulation.

Alfresco
We used Alfresco to provide document management services for the external examiners application. We used Alfresco as an interim solution to fill the document management capability gap until a University-wide document management solution is implemented. The University has a Document Management steering group which has identified the need for an institutional enterprise document/content/records management system and gathered requirements for it. Work is progressing to prepare the business case and procure and implement a system.

In the absence of a University system, we used Alfresco Community - a free version of the Alfresco open source Enterprise Content Management system. This is another community-supported offering intended for non-critical environments. Alfresco was chosen to:
  • provide shared document management functionality for the application
  • be similar enough to a University-selected solution to make re-implementation using the University solution easy
  • illustrate the value of document management to gain further grass roots support for the document management proposal
  • get some experience interacting with a document management solution to inform the University implementation.
Uploading a report to the external examiners document library
JasperReports Server
JasperReports Server was chosen to provide shared reporting functionality to replace generation of reports directly from the Microsoft Access database and circulation of them by email. JasperReports Server hosts reports created using the iReport designer tool. The server allows stakeholders to run and download reports on demand. We used the JasperReports Server Community Edition which is free and has the usual community supported approach.
A report run on the server
iReport
iReport is a free, open source report designer for JasperReports. We used it to create reports to replace locally generated reports from the Microsoft Access database. We used TOAD for MySQL to visually design SQL queries to return data from the external examiner database for each report and used JDBC datasources in the reports so that the reports hosted on JasperReports Server dynamically query the database each time the report is run.
Visual design of a SQL query using TOAD for MySQL

The report is designed using fields from the database.
Designing a report using iReport
The report can be previewed and the report source file (XML) can be directly edited or inspected.
Editing the XML source of the report
Once finished, the compiled report files are deployed to the JasperReports Server to make them available for use.
List of reports hosted on the server

External Examiner Application

The external examiner application has been developed by the Learning Development and Innovation (LDI) department technical team. It provides an interface for creating and managing information associated with the external examiner appointment and reporting processes and a data import tool to transfer legacy data to the new database.

The application is managed as three separate projects to simplify development:
  • domain model
  • legacy data import application
  • web application

Domain Model project

The domain model is a separate project to allow it to be used by the data import application and the web application.
'Persistence Plumbing'
The domain model project models the 'things' in the real world that we are interested in and that we want to store and share information about. These are objects like external examiners, tenures, courses, reports, etc. It also includes the object relational mapping (ORM) metadata - the 'persistence plumbing' which allow these entities to be loaded from and saved to the database. We use the Java Persistence API (JPA) to do this with the Hibernate Java persistence framework providing the API implementation.
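For readers unfamiliar with JPA, the fragment below sketches what loading and saving a domain object looks like through the API. It is written in a standalone (Java SE) style for brevity; in the deployed application the container injects the EntityManager instead, and the persistence unit name and entity fields here are assumptions for illustration.

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Hedged sketch of the JPA 'persistence plumbing' in use.
public class PersistenceSketch {

    public static void main(String[] args) {
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("externalExaminersPU"); // assumed unit name
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Examiner examiner = new Examiner();
        examiner.setSurname("Example");   // assumed field
        em.persist(examiner);             // Hibernate issues the INSERT for us
        em.getTransaction().commit();

        Examiner loaded = em.find(Examiner.class, examiner.getId()); // load by primary key
        System.out.println(loaded.getSurname());

        em.close();
        emf.close();
    }
}
```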
XCRI influence
Information about courses features heavily in the information recorded about external examiners and their tenures. We based the course information in our domain model on the XCRI CAP 1.1 information model. A class diagram of the domain model is shown below (open it in a new tab or window and click it to zoom into the detail).
External examiner application domain model classes

Because we were learning lots of new technologies concurrently, we wanted to keep each aspect as simple as possible. Inexperience with Hibernate made us conservative about how to implement the domain model mappings. We chose to avoid inheritance to keep the Hibernate mappings simple, which meant that the domain model was a bit more complicated. We replaced inheritance in the XCRI model with composition.
XCRI Course inherits GenericDType, our Course composes GenericDType

The downside of this was that any changes to the methods of objects being composed required corresponding changes to the methods in the objects doing the composing. Happily, changes to the XCRI objects in the domain model were relatively rare. If we started again today, with our Hibernate experience, we would just include the XCRI information model 'as is', with inheritance and all.
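A simplified sketch of the composition approach is shown below. The delegating methods are illustrative assumptions (the real classes contain far more), but it shows why every change to GenericDType's methods had to be mirrored in the composing class.

```java
// Illustrative only: composition used in place of "Course extends GenericDType".
public class Course {

    private final GenericDType generic = new GenericDType();

    // Delegating methods: each method on GenericDType that we want to expose
    // has to be repeated here, which is the downside described above.
    public String getTitle() {
        return generic.getTitle();
    }

    public void setTitle(String title) {
        generic.setTitle(title);
    }

    public String getDescription() {
        return generic.getDescription();
    }

    public void setDescription(String description) {
        generic.setDescription(description);
    }
}
```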
Mapping Metadata
The domain model contains metadata which maps object fields to tables and columns in the database. The mapping for the collection of Tenures associated with an Examiner is illustrated below. In this example, an annotation is added to the getTenures() method of the Examiner class to specify the table and columns that will be used to store the collection of tenures in the database. The Hibernate Java persistence framework can use this metadata to create the database structure when the application is first run. The Tenures collection is represented in the database as the examiner_tenures table, the structure of which is shown in the screenshot.
Persistence mapping of examiner tenures to a table in the database
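A hedged example of the kind of annotation involved is shown below. The exact mapping strategy and column names in our model differ, so treat these as assumptions, but the shape is the same: metadata on the getter tells JPA which table and columns store the collection.

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.OneToMany;

// Illustrative property-access mapping; column names are assumptions.
@Entity
public class Examiner {

    private Long id;
    private List<Tenure> tenures = new ArrayList<Tenure>();

    @Id
    @GeneratedValue
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    // Collection of tenures stored via the examiner_tenures join table
    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "examiner_tenures",
               joinColumns = @JoinColumn(name = "examiner_id"),
               inverseJoinColumns = @JoinColumn(name = "tenure_id"))
    public List<Tenure> getTenures() { return tenures; }
    public void setTenures(List<Tenure> tenures) { this.tenures = tenures; }
}
```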
Integration tests
We have created integration tests to check the persistence mappings. These tests use DbUnit which is an extension to the JUnit unit-testing framework. DbUnit is used to set the database to a known state before each test is run. The tests check that the database is in the expected state when a known object is saved and that the expected object is returned when loaded from a known database state. We use an in-memory HyperSQL database for these integration tests because the tests run faster and no clean up is required - after the tests have run, the in-memory database ceases to exist. The tests are run automatically on each build of the domain model project.
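The outline below gives a sense of how such a test is put together; the dataset file, table name and connection details are assumptions, and the JPA persistence code under test is elided.

```java
import static org.junit.Assert.assertEquals;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.junit.Before;
import org.junit.Test;

// Hedged sketch of a DbUnit-based persistence test against in-memory HyperSQL.
public class ExaminerMappingIT {

    private IDatabaseTester databaseTester;

    @Before
    public void resetDatabase() throws Exception {
        // Put the in-memory database into a known state before each test
        databaseTester = new JdbcDatabaseTester(
                "org.hsqldb.jdbcDriver", "jdbc:hsqldb:mem:examiners", "sa", "");
        IDataSet knownState = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/known-state.xml")); // assumed dataset
        databaseTester.setDataSet(knownState);
        databaseTester.onSetup(); // CLEAN_INSERT by default
    }

    @Test
    public void savingAnExaminerAddsARowToTheExaminerTable() throws Exception {
        // ... persist an Examiner via the JPA mappings under test (elided) ...

        ITable examinerTable = databaseTester.getConnection()
                .createDataSet().getTable("examiner"); // assumed table name
        assertEquals(1, examinerTable.getRowCount());
    }
}
```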

Data Import project

The data import application loads data from the legacy Microsoft Access database used by the central quality team and persists it to the shared MySQL database. This application is run once only to import the legacy data before the external examiner application is first used.
The main import method of the data import project
The data import application connects to the legacy Access database via JDBC and fires SQL queries at it to return information which is used to create domain model objects representing courses, presentations, examiners, etc. These objects are then persisted to the MySQL shared database via the JPA using the mappings previously mentioned. Some parsing of data and manipulation of objects in memory is required during data import because the external examiners domain model is more fine-grained than the Access database structure. Some columns represent more than one type of object in the domain model depending on the content of the record. For example, the award table in the access database has a continuing column which can contain
  • the name of the examiner who is taking over reporting duties for this award
  • an ending date for the award
  • the reason that the award is ending.
The text of such columns is parsed and the appropriate domain model object is created and populated.
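The sketch below illustrates the general shape of this parsing. The actual rules, date formats and setter names in our import code differ, so everything here should be read as an assumption used to show the approach.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative parsing of a multi-purpose legacy column; all rules and names are assumed.
public class ContinuingColumnParser {

    public void apply(Award award, String continuing) {
        if (continuing == null || continuing.trim().isEmpty()) {
            return;
        }
        String value = continuing.trim();
        try {
            // Case 1: an ending date for the award, e.g. "31/07/2013"
            Date endDate = new SimpleDateFormat("dd/MM/yyyy").parse(value);
            award.setEndDate(endDate);
        } catch (ParseException notADate) {
            if (value.toLowerCase().startsWith("award ending")) {
                // Case 2: a free-text reason that the award is ending
                award.setEndReason(value);
            } else {
                // Case 3: the name of the examiner taking over reporting duties
                award.setContinuingExaminerName(value);
            }
        }
    }
}
```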

Web Application project

The External Examiners Web Application provides a user interface for managing information to support the external examiner appointment and reporting processes. A new examiner record can be created, or an existing examiner record can be located via the Search screen.
Search results

Clicking on one of the search results takes the user to the Edit screen where information can be entered and updated. On this screen, examiner contact details and tenure information can be recorded. Appointment records can be uploaded to Alfresco via the upload button. Uploaded documents are automatically placed into the correct faculty area. When reports arrive, they can be uploaded to Alfresco in the same manner on the reports tab.
Edit examiner screen showing an examiner's tenures

Technologies Used in the Web Application

PrimeFaces
We used the PrimeFaces JSF component suite for on-screen components because it is easy to use and it complements JSF by providing more sophisticated components than the default JSF suite. This makes development faster by allowing us to focus on building a user interface from existing components rather than having to design and build custom components. For example, PrimeFaces has a file upload component that we use to upload documents to Alfresco.
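As a rough sketch, the backing-bean side of the upload looks something like the following; the bean name, scope and temporary directory handling are assumptions, and the hand-off to Alfresco (described under 'Integration with Alfresco' below) is elided.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;
import org.primefaces.event.FileUploadEvent;

// Hedged sketch of a PrimeFaces file upload listener.
@Named
@RequestScoped
public class AppointmentFormUploadBean {

    public void handleFileUpload(FileUploadEvent event) throws IOException {
        // Copy the uploaded file to a temporary directory before passing it on to Alfresco
        File target = new File(System.getProperty("java.io.tmpdir"),
                               event.getFile().getFileName());
        Files.copy(event.getFile().getInputstream(), target.toPath());
        // ... then upload the form to the examiner's Alfresco folder
    }
}
```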
Seam
As we created the external examiners web application and gained experience in Java EE 6 development, we came to realise that Java EE 6 does not quite live up to its promise. Some aspects, like declarative error handling via the deployment descriptor, simply do not work and other aspects, like dependency injection, always seem to stop short of providing enough flexibility to suit the circumstances of your application. To overcome these issues, we turned to the JBoss Seam Framework to fill in the missing pieces.

Seam complements Java EE 6 well because it is based on the Java EE platform and many of its innovations have been contributed back into the Java EE platform. CDI was a Seam idea and the reference implementation of it is included in the Java EE distribution. Seam can be thought of as anticipating the next Java EE and it provides a host of features that you wish had been included in the reference implementation.

The Seam features most important to us were:
  • injection of objects into JSF converter classes (via the Faces module).
  • easy creation of exception handlers to handle application errors and session expiry (via the Solder module). The orthodox Java EE approach, declaring handlers in the web application deployment descriptor, did not work because the application server wrapped all exceptions in an EJBException, making it impossible to handle individual error types. Solder unwraps the exception stack and handles the root cause by default, so methods for individual error types and conditions are easy to create (see the sketch after this list).
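A minimal sketch of what such a handler class can look like is shown below. It assumes the Solder 3.1 package names (earlier Seam Catch releases used org.jboss.seam.exception.control), and the class, method names and logging behaviour are ours for illustration, not the application's actual handlers.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.enterprise.context.NonexistentConversationException;

import org.jboss.solder.exception.control.CaughtException;
import org.jboss.solder.exception.control.Handles;
import org.jboss.solder.exception.control.HandlesExceptions;

@HandlesExceptions
public class ApplicationExceptionHandlers {

    private static final Logger LOG =
            Logger.getLogger(ApplicationExceptionHandlers.class.getName());

    // Solder has already unwrapped any EJBException wrappers, so the root
    // cause (here, an expired conversation) arrives at the handler directly.
    public void onExpiredConversation(
            @Handles CaughtException<NonexistentConversationException> event) {
        LOG.info("Conversation expired; treating as session time-out");
        event.handled(); // stop further handling of this exception
    }

    // Catch-all for anything without a more specific handler.
    public void onAnyError(@Handles CaughtException<Throwable> event) {
        LOG.log(Level.SEVERE, "Unhandled application error", event.getException());
        event.handled();
    }
}
```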
Integration with Alfresco
The external examiner web application and data import application integrate with Alfresco via two of Alfresco's RESTful APIs. For example, upload of an examiner appointment form by the external examiners web application is handled as follows (a rough sketch of the upload call follows the steps):
  1. When the user selects a file for upload and clicks the upload button, the PrimeFaces upload file component uploads the file to a temporary directory on the external examiner server.
  2. The Content Management Interoperability Services (CMIS) API 'Get Object' resource is used to return the node reference of the examiner's document folder.
  3. A multi-part POST to the Repository API 'upload' service is then used to upload the appointment form to the examiner's folder.
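Step 3, the multipart POST, can be sketched roughly as below using Apache HttpClient 4.x. The base URL, ticket-based authentication and node reference are placeholders, and the form field names (filedata, filename, destination) reflect our understanding of Alfresco's upload web script; the script descriptor for the Alfresco version in use is the authoritative reference.

```java
import java.io.File;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.MultipartEntity;
import org.apache.http.entity.mime.content.FileBody;
import org.apache.http.entity.mime.content.StringBody;
import org.apache.http.impl.client.DefaultHttpClient;

public class AlfrescoUploader {

    public void uploadToFolder(File appointmentForm, String folderNodeRef,
                               String alfrescoBaseUrl, String ticket) throws Exception {
        HttpClient client = new DefaultHttpClient();
        try {
            // Repository 'upload' web script, authenticated with an Alfresco ticket.
            HttpPost post = new HttpPost(
                    alfrescoBaseUrl + "/service/api/upload?alf_ticket=" + ticket);

            MultipartEntity form = new MultipartEntity();
            form.addPart("filedata", new FileBody(appointmentForm));    // the file itself
            form.addPart("filename", new StringBody(appointmentForm.getName()));
            form.addPart("destination", new StringBody(folderNodeRef)); // examiner's folder node ref

            post.setEntity(form);
            HttpResponse response = client.execute(post);
            if (response.getStatusLine().getStatusCode() != 200) {
                throw new IllegalStateException("Upload failed: " + response.getStatusLine());
            }
        } finally {
            client.getConnectionManager().shutdown();
        }
    }
}
```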

Design Patterns

The External Examiner Web Application implements two design patterns that help to simplify the application code. The design patterns are described in 'Real World Java EE Patterns - Rethinking Best Practices' by Adam Bien.
Persistent Domain Object (PDO) pattern
The domain model is a collection of Persistent Domain Objects. These are classes which model the real-world objects we want to store information about in the database, e.g. examiner, tenure, award, report. Together they form a rich model of those objects, including the business logic, in contrast to the anemic domain objects typically required for J2EE development. PDOs allow the developer to take an object-oriented approach to solving problems instead of working around the 'persistence plumbing' to interact with the domain model. Persistence metadata is added in the form of annotations specifying the mapping of objects to the database. The state of the PDOs is persisted to the database by the Entity Manager. As long as the PDOs remain in the attached state (i.e. managed by the entity manager) they can be modified through method calls and any changes will be flushed to the database when the objects are next saved.
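An illustrative PDO (not the project's actual classes) might look like the sketch below: plain JPA entities that carry both state and the business logic that belongs with it.

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class Examiner {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Tenure> tenures = new ArrayList<Tenure>();

    // Business logic lives on the object rather than in a separate service layer.
    public boolean hasActiveTenure() {
        for (Tenure tenure : tenures) {
            if (tenure.isActive()) {
                return true;
            }
        }
        return false;
    }

    public void addTenure(Tenure tenure) {
        tenures.add(tenure);
    }

    public List<Tenure> getTenures() {
        return tenures;
    }

    public String getName() {
        return name;
    }
}

// Tenure.java (separate file): another illustrative entity in the same model.
@Entity
public class Tenure {

    @Id
    @GeneratedValue
    private Long id;

    @Temporal(TemporalType.DATE)
    private Date endDate;

    public boolean isActive() {
        return endDate == null || endDate.after(new Date());
    }
}
```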
Gateway pattern
The Gateway pattern allows PDOs to be exposed to the user interface layer. In our case this means being able to refer to domain model objects directly from JSF pages and components. The snippet below, from the examinerView page, illustrates this: the value of the tenuresTable is a direct reference to the examiner PDO's collection of tenures.
[Image: Tenures dataTable uses domain model objects directly]
A Gateway object acts as a source of PDOs loaded from the database. The Gateway keeps the PDOs in the attached state by using an extended persistence context, which remains alive and does not detach objects at the end of each transaction. Gateway classes are annotated to avoid transactions by default; a save method is annotated so that calling it triggers a transaction, which flushes any changes in the PDO object graph to the database. The PDOs can be used in object-oriented fashion and the save method called as needed to flush changes. The Entity Manager does the heavy lifting of keeping track of all the changes to the attached PDOs and saving them to the database when a transaction is triggered.
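A minimal Gateway sketch, in the style described by Adam Bien and reusing the illustrative Examiner entity above, might look like this; the class and method names are assumptions for illustration, not the application's actual gateway.

```java
import javax.ejb.Stateful;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.enterprise.context.SessionScoped;
import javax.inject.Named;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;

@Stateful
@SessionScoped
@Named
@TransactionAttribute(TransactionAttributeType.NEVER) // no transactions by default
public class ExaminerGateway {

    // The extended persistence context keeps loaded PDOs attached between requests.
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    private Examiner current;

    public Examiner find(Long id) {
        current = em.find(Examiner.class, id);
        return current;
    }

    // Exposed to JSF pages (e.g. examinerGateway.current.tenures) via @Named.
    public Examiner getCurrent() {
        return current;
    }

    // Calling save() starts a transaction, which flushes every change made to
    // the attached PDO graph since the last save.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void save() {
        em.flush();
    }
}
```

Because the gateway is session-scoped, the same attached Examiner instance backs successive requests, so edits made from the JSF pages accumulate in memory until save() is called.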

The combination of PDOs and Gateway allows the developer to manipulate the domain model objects cleanly without having to worry about the objects' persistence state. This results in a cleaner, smaller codebase. High memory consumption is a potential problem if large object graphs are loaded from the database or there are a large number of concurrent users, but for our situation (approx. 20 users) profiling of the application indicated that this was not a problem.

Lessons learned:

  • Java EE 6 mostly lived up to its promise of simpler, cleaner, faster development. Significant effort was required to learn the technologies the first time around but subsequent developments on the same platform have been very rapid. Adam Bien's blog is well worth following for insight into 'just enough' Java EE application architecture.
  • To truly realise the faster, easier development promise of Java EE 6, you need to augment it with JBoss Seam to fill in some of the missing/broken pieces.
  • Basing the domain model on the XCRI CAP 1.1 information model was a wise choice. Although it was a more complex model than we might have created from scratch, we have reaped the benefit of that choice many times. Most recently, a QAA review has requested a change to the level of award detail stored with examiner records. Because of the flexibility of the XCRI-based domain model to represent most course structures, required changes to the domain model have been minimal. In addition, University Quality Improvement Service colleagues have seen the value of representing course (spec) and presentation (instance) separately and have decided to change their databases to fit the XCRI view of the world.
[Image: 'XCRI thinking' spreads from the domain model to other University databases]
  • We used composition instead of inheritance in the XCRI-inspired parts of our domain model because we thought representing inheritance in the persistence mappings would result in an overly-complex database structure. If we started again today, we would just implement it with inheritance.
  • Free open source ‘community’ editions of software tend to be fully featured, but bugs are more common and get fixed first in the corresponding Enterprise version; you can expect to get what you pay for. Testing your application against new versions of such third-party software is important. Community forums are generally very supportive, but identifying and fixing problems is time consuming and works against the efficiency and effectiveness (more with less) we are aiming for.
  • Much benefit is to be gained by participating fully in open source communities. We have blogged about our experiences, have answered questions in community forums and have asked our own questions. In each case, responses have given us a better understanding of the technologies we have used. Don't be afraid to ask questions or blog your experiences. Even if you get some information wrong, community members will correct you and improve your understanding further. The feedback is valuable.
  • With technologies like the Java EE stack, which have been evolving for several years, it is important to be able to identify the 'current truth'. A lot of correct information on the Web refers to older versions of the same technology and so is no longer relevant. This is a particular problem when first learning a new technology: searches can turn up solutions which work but which are out of date and hence not the most appropriate. We encountered this issue many times during the development of the External Examiners Web Application. At one point we followed good but old guidance when creating the user interface and built a nicely designed data transfer layer. Subsequently, using an up-to-date Java EE 6 approach, we made this layer redundant and were able to remove it entirely, replacing it with direct use of PDOs in JSF pages (as described above). Doing so left us with a smaller, cleaner codebase. The lesson is to find out how up to date any solution or guidance is before applying it.