In a previous post I outlined the genesis of our external examiner application and how using the ArchiMate modelling language and the Archi modelling tool helped us to secure approval for development. In this post, I describe the technical architecture of the application, highlighting what was important to us and illustrating our line of thinking in choosing and using these technologies.
Readers who aren't interested in technical details may want to duck out at
this point. It's also quite a long post!
The Business Problem
To recap the business problem, the University had identified significant
duplication in quality-related business processes. We had gained approval to
develop an application to address duplication in external examiner approval and
reporting processes. The application would provide:
- an interface for creating and updating information to track the approval and reporting processes
- document management capability for sharing the documents and forms used in the processes
- reporting capability to provide on-demand reports for sharing with stakeholders.
The application would reduce duplication by sharing data, documents and reports, removing the need to maintain local data stores and document libraries and to produce local reports.
The Java EE 6 Platform
We chose to build the external examiner application on the
Java Platform Enterprise Edition Version 6 (Java EE 6).
Java EE 6 comprises a set of Application Programming Interfaces (APIs) to make
development of multi-tiered and distributed enterprise applications easier,
simpler and faster. We chose this platform for the following reasons:
We needed to do more with less
In recent years, we had been unable to replace staff who had left the team.
Demands on the technical team had remained high with new opportunities for
innovation needing to be grasped as they appeared. Consequently, it had been a
case of needing to do more with less. We are always on the look-out for principles, practices
and technologies to maximise the efficiency and effectiveness of the team. Java EE 6 promised to achieve more with less (and cleaner) code.
We wanted the benefits of the Java EE 6 APIs
Prior to Enable, we had used Apache Tomcat. Tomcat implements the Java
Servlet and JavaServer Pages (JSP) technologies. Applications we created to run
on Tomcat were based on JSPs, servlets and portlets interacting with a database
via the Java Database Connectivity (JDBC) API. This involved writing a
significant amount of code to manage cross-cutting aspects of our applications like security, transactions and
persistence.
Using an application server instead of Tomcat, we could use the Java EE 6 APIs and services provided by the application server to avoid much of the boilerplate code we would previously have written to manage these cross-cutting aspects. An application server implements
the full Java EE Platform so it provides JSP and Servlet implementations and a
host of other APIs including:
- Enterprise JavaBeans (EJB)
- Java Persistence API (JPA)
- JavaServer Faces (JSF)
- Java Message Service (JMS)
- Contexts & Dependency Injection (CDI)
- Java Transaction API (JTA)
- JavaMail
An application server also allows configuration, tuning and monitoring to be
managed centrally.
[Image: GlassFish Administration Console]
We chose to use the GlassFish open source application server because it was
the reference implementation for Java EE 6 and the only application server that
supported Java EE 6 at the time.
Java EE 6 promised to be simpler than Spring
An alternative to Java EE would have been to use the Spring framework but we
decided not to use Spring.
Spring was created as a simpler alternative to the overly-complex and
invasive programming model of Java 2 Platform Enterprise Edition (J2EE), Java
EE's predecessor. It emphasized simplicity of application design through use of
dependency injection and aspect-oriented programming. Spring gained widespread
adoption and became for many the obvious choice for enterprise Java development.
We had some experience of Spring and liked the dependency injection and AOP
elements but not the use of XML for declarative configuration. Also, the Spring
Framework had grown so much over the years that reacquainting ourselves with its
large feature set was going to be a non-trivial exercise.
Java EE 6 uses a simplified programming model through use of a convention
over configuration approach. With dependency injection, separation of concerns
and persistence baked into the platform, Java EE 6-based applications promised
to be as lean and mean as equivalent Spring applications, if not more so. Aiming
for reduction in complexity, we opted for Java EE 6 instead of Spring.
Architecture of the external examiner system
An ArchiMate view of the applications that make up the external examiner system is shown below.
[Image: Layered ArchiMate view of the external examiner application]
GlassFish
Our application server was the
GlassFish Server Open Source
Edition which is free but follows the usual 'community' model of support,
i.e. you solve your own problems with information gleaned from forums, blogs,
bug tracking systems, etc.
Initially, we tried to run all three applications (External Examiners Web Application, Alfresco, JasperReports Server) on a single GlassFish instance, but the memory requirements of the combined applications in production made it impossible to run all three together. Also, Alfresco is designed to run on Tomcat and, although our attempts to get it running on GlassFish were initially successful, each Alfresco release brought new configuration problems, so we decided to run a 'vanilla' (default configuration, bundled) Alfresco instance on Tomcat on a separate server to avoid unnecessary configuration work.
MySQL
We chose
MySQL as our database software because:
- we had experience of it from previous developments
- it is mature, robust and fast
- it has a large community of users and comprehensive documentation
- it has good free tooling available.
We use
phpMyAdmin for
database management and
TOAD for
MySQL for efficiently creating queries, generating database migration
scripts and performing more advanced database manipulation.
Alfresco
We used Alfresco to provide document management services for the external
examiner application, as an interim solution to fill the document management
capability gap until a University-wide solution is
implemented. The University has a Document Management steering group which has
identified the need for an institutional enterprise document/content/records
management system and gathered requirements for it. Work is progressing to
prepare the business case and procure and implement a system.
In the absence of a University system, we used
Alfresco Community - a free
version of the Alfresco open source Enterprise Content Management system. This
is another community-supported offering intended for non-critical environments.
Alfresco was chosen to:
- provide shared document management functionality for the application
- be similar enough to a University-selected solution to make re-implementation using the University solution easy
- illustrate the value of document management to gain further grass roots
support for the document management proposal
- get some experience interacting with a document management solution to
inform the University implementation.
[Image: Uploading a report to the external examiners document library]
JasperReports Server
JasperReports
Server was chosen to provide shared reporting functionality to replace
generation of reports directly from the Microsoft Access database and
circulation of them by email. JasperReports Server hosts reports created using
the iReport designer tool. The server allows stakeholders to run and download
reports on demand. We used the JasperReports Server Community Edition which is
free and has the usual community-supported approach.
[Image: A report run on the server]
iReport
iReport
is a free, open source report designer for JasperReports. We used
it to create reports to replace locally generated reports from the Microsoft Access database. We used
TOAD for MySQL to visually design SQL queries to return data from the external
examiner database for each report, and used JDBC datasources so that the reports hosted on JasperReports Server query the database dynamically each time a report is run.
[Image: Visual design of a SQL query using TOAD for MySQL]
The report is designed using fields from the database.
[Image: Designing a report using iReport]
The report can be previewed and the report source file (XML) can be directly
edited or inspected.
[Image: Editing the XML source of the report]
Once finished, the compiled report files are deployed to the JasperReports
Server to make them available for use.
[Image: List of reports hosted on the server]
External Examiner Application
The external examiner application has been developed by the Learning Development and Innovation (LDI) department technical
team. It provides an interface for creating and managing information associated
with the external examiner appointment and reporting processes and a data import
tool to transfer legacy data to the new database.
The application is managed as three separate projects to simplify
development:
- domain model
- legacy data import application
- web application
Domain Model project
The domain model is a separate project to allow it to be used by the data import
application and the web application.
'Persistence Plumbing'
The domain model project models the 'things' in the real world that we are
interested in and that we want to store and share information about. These are
objects like external examiners, tenures, courses, reports, etc. It also includes
the object relational mapping (ORM) metadata - the 'persistence plumbing' which
allows these entities to be loaded from and saved to the database. We use the
Java Persistence API (JPA) to do this with the
Hibernate Java persistence framework
providing the API implementation.
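As a flavour of what this looks like in code, here is a minimal, hypothetical JPA entity; the class and field names are illustrative rather than taken from our actual domain model:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// A minimal persistent domain object: the JPA annotations are the ORM
// metadata, and Hibernate (the JPA provider) uses them to load and
// save instances to the database.
@Entity
public class Report {

    @Id
    @GeneratedValue
    private Long id;        // primary key generated by the database

    private String title;   // mapped to a column by convention

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}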
XCRI influence
Information about courses features heavily in the information recorded about
external examiners and their tenures. We based the course information in our
domain model on the
XCRI CAP 1.1 information model
. A class diagram of the domain model is shown below.
[Image: External examiner application domain model classes]
Because we were learning lots of new technologies concurrently, we wanted to
keep each aspect as simple as possible. Inexperience with Hibernate made us
conservative about how to implement the domain model mappings. We chose to avoid
inheritance to keep the Hibernate mappings simple, which meant that the domain
model was a bit more complicated. We replaced inheritance in the XCRI model with
composition.
[Image: XCRI Course inherits GenericDType, our Course composes GenericDType]
The downside of this was that any changes to the methods of objects being
composed required corresponding changes to the methods in the objects doing the
composing. Happily, changes to the XCRI objects in the domain model were
relatively rare. If we started again today, with our Hibernate experience, we would just
include the XCRI information model 'as is', with inheritance and all.
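To illustrate the difference, here is a simplified, hypothetical sketch of the two shapes; the class and method names follow the XCRI example above but are not our actual code:

// In XCRI CAP, Course inherits from GenericDType:
//
//   class Course extends GenericDType { ... }
//
// In our domain model, Course composes a GenericDType instead and
// delegates to it, which keeps the Hibernate mappings simple.
class GenericDType {
    private String title;
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

class Course {
    // composition: the XCRI base type is held as a field
    private final GenericDType dtype = new GenericDType();

    // delegating methods must be kept in step with GenericDType's API
    public String getTitle() { return dtype.getTitle(); }
    public void setTitle(String title) { dtype.setTitle(title); }
}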
Mapping Metadata
The domain model contains metadata which maps object fields to tables and
columns in the database. The mapping for the collection of Tenures associated
with an Examiner is illustrated below. In this example, an annotation is added to the
getTenures() method of the Examiner class to specify
the table and columns that will be used to store the collection of tenures in
the database. The Hibernate Java persistence framework can use this metadata to
create the database structure when the application is first run. The Tenures
collection is represented in the database as the
examiner_tenures table, the structure of which is
shown in the screenshot.
[Image: Persistence mapping of examiner tenures to a table in the database]
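A hedged sketch of what such a mapping might look like; the exact annotations and column names in our code may differ, and the Tenure stub is included only to make the example self-contained:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.OneToMany;

@Entity
class Tenure {
    @Id
    @GeneratedValue
    Long id;   // minimal stub for illustration
}

@Entity
public class Examiner {

    private Long id;
    private List<Tenure> tenures = new ArrayList<Tenure>();

    @Id
    @GeneratedValue
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    // The annotation on the getter names the join table and columns used
    // to store the examiner-to-tenure relationship; Hibernate can generate
    // the examiner_tenures table from this metadata on first run.
    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "examiner_tenures",
               joinColumns = @JoinColumn(name = "examiner_id"),
               inverseJoinColumns = @JoinColumn(name = "tenure_id"))
    public List<Tenure> getTenures() { return tenures; }
    public void setTenures(List<Tenure> tenures) { this.tenures = tenures; }
}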
Integration tests
We have created integration tests to check the persistence mappings. These
tests use
DbUnit which is an extension to
the
JUnit unit-testing framework. DbUnit is
used to set the database to a known state before each test is run. The tests
check that the database is in the expected state when a known object is saved
and that the expected object is returned when loaded from a known database
state. We use an in-memory
HyperSQL database
for these integration tests because the tests run faster and no clean up is
required - after the tests have run, the in-memory database ceases to exist. The
tests are run automatically on each build of the domain model project.
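A hedged sketch of the shape of these tests; the dataset file name and connection details are illustrative:

import java.io.InputStream;
import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.junit.Before;
import org.junit.Test;

// Illustrative integration test: DbUnit seeds an in-memory HyperSQL
// database to a known state before each test runs.
public class ExaminerPersistenceTest {

    private IDatabaseTester databaseTester;

    @Before
    public void setUp() throws Exception {
        // in-memory HyperSQL database: nothing to clean up afterwards
        databaseTester = new JdbcDatabaseTester("org.hsqldb.jdbcDriver",
                "jdbc:hsqldb:mem:testdb", "sa", "");
        InputStream seed = getClass().getResourceAsStream("/known-state.xml");
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(seed);
        databaseTester.setDataSet(dataSet);
        databaseTester.onSetup();   // puts the database into the known state
    }

    @Test
    public void savingAnExaminerProducesExpectedRows() throws Exception {
        // load/save an object via JPA here, then assert that the
        // database is in the expected state
    }
}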
Data Import project
The data import application loads data from the legacy Microsoft Access
database used by the central quality team and persists it to the shared MySQL
database. This application is run once only to import the legacy data before the
external examiner application is first used.
[Image: The main import method of the data import project]
The data import application connects to the legacy Access database via JDBC
and fires SQL queries at it to return information which is used to create domain
model objects representing courses, presentations, examiners, etc. These objects
are then persisted to the MySQL shared database via the JPA using the mappings
previously mentioned. Some parsing of data and manipulation of objects in memory
is required during data import because the external examiners domain model is
more fine-grained than the Access database structure. Some columns represent
more than one type of object in the domain model depending on the content of the
record. For example, the award table in the access database has a continuing
column which can contain
- the name of the examiner who is taking over reporting duties for this award
- an ending date for the award
- the reason that the award is ending.
The text of such columns is parsed and the appropriate domain model object is
created and populated, along the lines of the sketch below.
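A hypothetical sketch of the kind of parsing involved; Award, its setters and the patterns matched are illustrative, not our actual code:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative stub of the domain object being populated.
class Award {
    void setEndDate(Date d) { /* ... */ }
    void setEndReason(String r) { /* ... */ }
    void setSuccessorExaminerName(String n) { /* ... */ }
}

class ContinuingColumnParser {

    // The content of the legacy 'continuing' column determines which
    // domain model field gets populated.
    public void apply(String continuing, Award award) throws ParseException {
        if (continuing == null || continuing.trim().isEmpty()) {
            return;
        }
        String value = continuing.trim();
        if (value.matches("\\d{2}/\\d{2}/\\d{4}")) {
            // an ending date for the award
            award.setEndDate(new SimpleDateFormat("dd/MM/yyyy").parse(value));
        } else if (value.toLowerCase().startsWith("ending")) {
            // the reason that the award is ending
            award.setEndReason(value);
        } else {
            // assume the name of the examiner taking over reporting duties
            award.setSuccessorExaminerName(value);
        }
    }
}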
Web Application project
The External Examiners Web Application provides a user interface for managing
information to support the external examiner appointment and reporting
processes. A new examiner record can be created, or an existing examiner record
can be located via the Search screen.
[Image: Search results]
Clicking on one of the search results takes the user to the Edit screen where
information can be entered and updated. On this screen, examiner contact details
and tenure information can be recorded. Appointment records can be uploaded to
Alfresco via the upload button. Uploaded documents are automatically placed into
the correct faculty area. When reports arrive, they can be uploaded to Alfresco
in the same manner on the reports tab.
[Image: Edit examiner screen showing an examiner's tenures]
Technologies Used in the Web Application
PrimeFaces
We used the
PrimeFaces JSF component
suite for on-screen components because it is easy to use and it complements
JSF by providing more sophisticated components than the default JSF suite. This
makes development faster by allowing us to focus on building a user interface
from existing components rather than having to design and build custom components. For
example, PrimeFaces has a file upload component that we use to upload documents
to Alfresco.
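As an illustration, a bean method wired to the PrimeFaces fileUpload component might look like the following sketch; the staging directory and the follow-on Alfresco call are simplified assumptions:

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.primefaces.event.FileUploadEvent;
import org.primefaces.model.UploadedFile;

// Hypothetical upload handler: PrimeFaces delivers the uploaded file
// as an event; we stage it on disk before sending it on to Alfresco.
public class DocumentUploadBean {

    public void handleUpload(FileUploadEvent event) throws Exception {
        UploadedFile file = event.getFile();
        // stage the file in a temporary directory on the server
        File staged = new File(System.getProperty("java.io.tmpdir"),
                file.getFileName());
        OutputStream out = new FileOutputStream(staged);
        try {
            out.write(file.getContents());
        } finally {
            out.close();
        }
        // next step: POST the staged file to Alfresco (sketched later)
    }
}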
Seam
As we created the external examiners web application and gained experience in
Java EE 6 development, we came to realise that Java EE 6 does not quite live up
to its promise. Some aspects, like declarative error handling via the deployment
descriptor, simply do not work and other aspects, like dependency injection,
always seem to stop short of providing enough flexibility to suit the
circumstances of your application. To overcome these issues, we turned to the
JBoss Seam Framework to fill in the missing
pieces.
Seam complements Java EE 6 well because it is based on the Java EE platform
and many of its innovations have been contributed back into the Java EE
platform. CDI was a Seam idea and the reference implementation of it is included
in the Java EE distribution. Seam can be thought of as anticipating the next
Java EE and it provides a host of features that you wish had been included in
the reference implementation.
The Seam features most important to us were:
- injection of objects
into JSF converter classes (via the Faces module).
- easy creation of exception handlers to handle application errors and session
expiry (via the Solder module; see the sketch after this list). The orthodox
Java EE way to do this, via declarations in the web application deployment
descriptor, did not work because the application server wrapped all exceptions
in an EJBException, making handling of individual error types impossible.
Solder unwraps the exception stack and handles the root cause by default,
allowing easy creation of methods to handle individual error types and
conditions.
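A hedged sketch of such a handler, based on our recollection of the Seam 3/Solder exception control API (package names varied between Seam Catch and Solder releases):

import javax.enterprise.context.ApplicationScoped;
import javax.persistence.OptimisticLockException;
import org.jboss.solder.exception.control.CaughtException;
import org.jboss.solder.exception.control.Handles;
import org.jboss.solder.exception.control.HandlesExceptions;

// Illustrative Solder exception handler: Solder unwraps the
// EJBException stack and delivers the root cause to the most
// specific matching handler method.
@HandlesExceptions
@ApplicationScoped
public class ErrorHandlers {

    public void handleOptimisticLock(
            @Handles CaughtException<OptimisticLockException> evt) {
        // e.g. queue a 'record was modified by someone else' message
        evt.handled();
    }
}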
Integration with Alfresco
The external examiner web application and data import application integrate
with Alfresco via two of
Alfresco's RESTful APIs. For example, upload of an examiner appointment form by
the External Examiners Web Application is handled as follows (a sketch of the
final step appears after the list):
- When the user selects a file for upload and clicks the upload button, the
PrimeFaces upload file component uploads the file to a temporary directory on
the external examiner server.
- The Content Management Interoperability Services (CMIS) API 'Get Object'
resource is used to return the node reference of the examiner's document folder.
- A multi-part POST to the Repository API 'upload' service is then
used to upload the appointment form to the examiner's folder.
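A hedged sketch of step 3 using Apache HttpClient; the host name and the exact form field names expected by the upload web script are assumptions for illustration:

import java.io.File;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.MultipartEntity;
import org.apache.http.entity.mime.content.FileBody;
import org.apache.http.entity.mime.content.StringBody;
import org.apache.http.impl.client.DefaultHttpClient;

// Hypothetical multipart POST to Alfresco's repository 'upload' web
// script, placing the appointment form in the examiner's folder.
public class AlfrescoUploader {

    public void upload(File form, String folderNodeRef) throws Exception {
        HttpClient client = new DefaultHttpClient();
        HttpPost post = new HttpPost(
                "http://alfresco.example.ac.uk/alfresco/service/api/upload");

        MultipartEntity entity = new MultipartEntity();
        entity.addPart("filedata", new FileBody(form));              // the document
        entity.addPart("destination", new StringBody(folderNodeRef)); // target folder
        post.setEntity(entity);

        HttpResponse response = client.execute(post);
        // check response.getStatusLine() and parse the JSON result here
    }
}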
Design Patterns
The External Examiner Web Application implements two
design patterns that help to simplify the application code. The design patterns
are described in
'Real World Java EE Patterns - Rethinking Best Practices' by Adam Bien.
Persistent Domain Object (PDO) pattern
The domain model is a collection of Persistent Domain Objects. These are
classes which model the real world objects we want to store information about in the database, e.g. examiner, tenure, award, report. These
form a rich model of the real world objects including the business logic. This
is in contrast to the
anemic domain objects typically required for J2EE
development. PDOs allow the developer to take an object-oriented approach to
solving problems instead of having to work around the 'persistence plumbing' to
interact with the domain model. Persistence metadata is added in the form of
annotations to specify the mapping of objects to the database. The state of the
PDOs is persisted to the database by the Entity Manager. As long as the PDOs
remain in the attached state (i.e. managed by the entity manager), they can be
modified through method calls and any changes will be flushed to the database
when the objects are next saved.
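A small, illustrative example of business logic living on the PDO itself (the class and method are hypothetical, not our actual Tenure code):

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

// Illustrative rich PDO: persistent state and the business logic that
// operates on it live together, rather than in a separate service layer.
@Entity
public class Tenure {

    @Id
    @GeneratedValue
    private Long id;

    @Temporal(TemporalType.DATE)
    private Date startDate;

    @Temporal(TemporalType.DATE)
    private Date endDate;

    // business logic on the domain object itself
    public boolean isActiveOn(Date date) {
        return !date.before(startDate)
                && (endDate == null || !date.after(endDate));
    }
}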
Gateway pattern
The Gateway pattern allows PDOs to be exposed to the user interface layer. In
our case this means being able to refer to domain model objects directly from
JSF pages and components. The snippet below, from the examinerView page,
illustrates this, with the value of the tenuresTable being a direct reference to
the examiner PDO's collection of tenures.
[Image: Tenures dataTable uses domain model objects directly]
A Gateway object acts as a source for PDOs loaded from the database. The
Gateway keeps the PDOs in the attached state by using an extended persistence
context which remains alive and does not detach objects at the end of each
transaction. Gateway classes are annotated to avoid transactions by default. A
save method is created with an annotation which causes it to trigger a
transaction. The transaction causes any changes in the PDO graph of objects to
be flushed to the database. The PDOs can be used in object-oriented fashion and
the save method called as needed to flush changes to the database. The Entity
Manager does the heavy lifting of keeping track of all the changes to the
attached PDOs and saving them to the database when a transaction is
triggered.
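A hedged sketch of a Gateway in the style Bien describes; our actual class differs in detail, and Examiner here refers to the entity sketched earlier:

import javax.ejb.Stateful;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;

// Illustrative Gateway: the extended persistence context keeps PDOs
// attached across requests; no transaction runs by default, so changes
// accumulate in memory until save() triggers one and flushes them.
@Stateful
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class ExaminerGateway {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    private Examiner current;

    public Examiner find(Long id) {
        current = em.find(Examiner.class, id);
        return current;     // stays attached between calls
    }

    public Examiner getCurrent() {
        return current;     // referenced directly from JSF pages
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void save() {
        // joining a transaction flushes all pending PDO changes to the DB
    }
}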
The combination of PDOs and Gateway allows the developer to manipulate the
domain model objects cleanly without having to worry about the objects'
persistent state. This results in a cleaner, smaller codebase. High memory
consumption is a potential problem if large object graphs are being loaded from
the database or there are a high number of concurrent users but for our
situation (approx. 20 users) profiling of the application indicated that this
was not a problem.
Lessons learned:
- Java EE 6 mostly lived up to its promise of simpler, cleaner, faster
development. Significant effort was required to learn the technologies the first
time around but subsequent developments on the same platform have been very
rapid. Adam Bien's blog is well worth
following for insight into 'just enough' Java EE application architecture.
- To truly realise the faster, easier development promise of Java EE 6, you
need to augment it with JBoss Seam to fill in some of the missing/broken
pieces.
- Basing the domain model on the XCRI CAP 1.1 information model was a wise
choice. Although it was a more complex model than we might have created from
scratch, we have reaped the benefit of that choice many times. Most recently, a QAA review has requested a change to the level of award detail stored with
examiner records. Because of the flexibility of the XCRI-based domain model to
represent most course structures, required changes to the domain model have been
minimal. In addition, University Quality Improvement Service colleagues have
seen the value of representing course (spec) and presentation (instance)
separately and have decided to change their databases to fit the XCRI view of
the world.
[Image: 'XCRI thinking' spreads from the domain model to other University databases]
- We used composition instead of inheritance in the XCRI-inspired parts of our
domain model because we thought representing inheritance in the persistence
mappings would result in an overly-complex database structure. If we started
again today, we would just implement it with inheritance.
- Free open source 'community' editions of software tend to be fully featured,
but bugs are more common and get fixed first in the corresponding Enterprise
version. You can expect to get what you pay for. Testing your application
against new versions of such third-party software is important. Community
forums are generally very supportive, but identifying and fixing problems is
time-consuming and goes against the desire for efficiency and effectiveness
(more with less) that we are aiming for.
- Much benefit is to be gained by participating fully in open source communities.
We have blogged about our
experiences, have answered questions in community forums and have asked our own
questions. In each case, responses have given us a better
understanding of the technologies we have used. Don't be afraid to ask questions
or blog your experiences. Even if you get some information wrong, community
members will correct you and improve your understanding further.
The feedback is valuable.
- With technologies like the Java EE stack which have been evolving for several
years, it is important to be able to identify the 'current truth'. In other
words, a lot of correct information on the Web refers to older versions of the
same technology and so is no longer relevant. This becomes a problem
in particular when first learning about a new technology: in trying to solve
problems, searches can turn up solutions which work but which are out of date
and hence not the most appropriate. We encountered this issue many times during
the development of the External Examiners Web Application. At one point, we
followed good but outdated guidance in the creation of the user interface and
built a nicely designed data transfer layer. Subsequently, using an up-to-date
Java EE 6 approach, we made this layer redundant, removed it entirely and
replaced it with direct use of PDOs in JSF pages (as described above). Doing so
left us with a smaller, cleaner codebase. The lesson from this is to try to
find out how up to date any solution or guidance is before applying it.